OpenAI just dropped their new o3-mini model, and Sam Altman is calling it “smart and fast”!

This little powerhouse is pushing the limits of what small models can do. Altman recently introduced o3-mini, which is now live in ChatGPT and its API services. Just a couple of weeks ago, he hinted that this model was on the way while DeepSeek was getting all the attention. And guess what? For the first time, there's a reasoning model available to free users of ChatGPT, complete with rate limits.

In a post on X, Altman raved about how the new model can search the web, show its thought process, and is accessible to free-tier users via the 'Reason' button. Plus, ChatGPT Plus users can choose the 'o3-mini high' version, which thinks longer and gives even better answers.

The o3-mini was part of OpenAI’s “12 days of Ship-mas” last month and is designed to match the performance of the o1 model in math, coding, and science, but with quicker responses.

OpenAI shared on their website, “We’re rolling out the o3-mini, our latest and most cost-effective reasoning model, available in ChatGPT and the API today. This powerful and speedy model takes small models to the next level, especially in STEM areas like science, math, and coding, all while keeping costs low and response times quick.”

So, what’s the deal with the o3-mini? It’s a compact reasoning model built for real-world use, featuring function calling, developer messages, and structured outputs. It supports streaming and offers three reasoning levels—low, medium, and high—so you can choose how to tackle complex tasks or prioritize speed.
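As a rough sketch of how those reasoning levels might be selected in an API call (the request shape and the `reasoning_effort` field below are assumptions based on OpenAI's chat-style API; check the official API reference for the exact parameter names):

```python
import json

# Sketch of a request body for o3-mini with a chosen reasoning level.
# "reasoning_effort" and the "developer" role are assumptions here;
# consult OpenAI's API documentation for the current field names.
payload = {
    "model": "o3-mini",
    "reasoning_effort": "high",  # one of "low", "medium", "high"
    "stream": True,              # o3-mini supports streamed responses
    "messages": [
        {"role": "developer", "content": "Show your working step by step."},
        {"role": "user", "content": "Is 2027 a prime number?"},
    ],
}

print(json.dumps(payload, indent=2))
```

Picking "low" trades depth for speed, while "high" (the setting behind 'o3-mini high') spends more time reasoning before answering.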

The rollout includes ChatGPT Plus, Team, and Pro users, with a tripled message limit of 150 per day. While it doesn’t have vision like the o1 model, it does integrate internet search for up-to-date answers. Free users can finally get in on the action, while Pro users enjoy unlimited access to both o3-mini and o3-mini high.

When it comes to performance, the o3-mini is a big step up from the o1-mini, especially in STEM subjects. It scored an impressive 83.6% accuracy on the AIME 2024 math competition, beating out previous models. Plus, it's 24% faster, with an average response time of 7.7 seconds compared to 10.16 seconds for the o1-mini.
