{"id":8997,"date":"2026-03-19T07:10:43","date_gmt":"2026-03-19T07:10:43","guid":{"rendered":"https:\/\/wildgreenquest.com\/?p=8997"},"modified":"2026-03-19T07:10:43","modified_gmt":"2026-03-19T07:10:43","slug":"openais-new-frontier-models-mark-a-huge-change-in-how-ai-will-be-built","status":"publish","type":"post","link":"https:\/\/wildgreenquest.com\/?p=8997","title":{"rendered":"OpenAI\u2019s new frontier models mark a huge change in how AI will be built"},"content":{"rendered":"<p><br \/>\n<br \/><\/p>\n<p>In early March, OpenAI unleashed a one-two punch, dropping two major frontier models just days apart.<\/p>\n<p>First, we got the new GPT-5.3, an \u201cinstant\u201d model optimized for fast, accurate responses.<\/p>\n<p>Then, OpenAI released GPT-5.4 two days later. This is a \u201cthinking\u201d model optimized for deep analytical work.<\/p>\n<p>I was a beta tester for OpenAI in the early days, and today I spend hundreds of dollars per month using their models through the OpenAI API.&nbsp;<\/p>\n<p>I\u2019ve tested both GPT-5.3 and 5.4 extensively since their launch. The new models represent a totally different approach, and hint at a major change in how big AI companies build their tech.<\/p>\n<div style=\"position: relative;width: auto;padding: 0 0 33.59%;height: 0;top: 0;left: 0;bottom: 0;right: 0;margin: 0;border: 0 none\" id=\"experience-68a36f2a8e81d\" data-aspectRatio=\"2.97674419\" data-mobile-aspectRatio=\"1.01928375\"><\/div>\n<h2 class=\"wp-block-heading\" id=\"h-the-doer\">The doer<\/h2>\n<p>OpenAI\u2019s first new model, GPT-5.3, is built for speed. 
GPT-5.3 generally responds to queries within seconds.<\/p>\n<p>In its <a rel=\"nofollow\" href=\"https:\/\/openai.com\/index\/gpt-5-3-instant\/\">release notes for the new model<\/a>, OpenAI says that GPT-5.3 is built to be a snappy, clever writer and a fast communicator.<\/p>\n<p>\u201cGPT\u20115.3 Instant delivers more accurate answers, richer and better-contextualized results when searching the web, and reduces unnecessary dead ends, caveats, and overly declarative phrasing that can interrupt the flow of conversation,\u201d the company says.<\/p>\n<p>The model is different from the instant models OpenAI has released before. Previously, the company\u2019s instant models seemed to rely almost exclusively on their world knowledge to answer questions.<\/p>\n<p>In my experience, instead of crawling the internet for fresh data, those earlier instant models often fell back on what they\u2019d learned during their initial training.<\/p>\n<p>This approach did indeed result in lightning-fast responses. But it meant that OpenAI\u2019s previous instant models were, to put it frankly, kind of dumb.<\/p>\n<p>If you wanted to quickly know the capital of California (Sacramento) or determine whether the plant you just touched was poison oak (yes), you could send a photo or pose a query to earlier instant models and get a decent response.<\/p>\n<p>If you wanted to know about current events or news, though, the models struggled. Because they relied on pre-trained world knowledge, they were often stuck in the past, and failed to integrate new information.<\/p>\n<p>In the ultimate irony, OpenAI\u2019s early instant models seemed not to know about their own existence. I recall chatting with an instant version of GPT-5.1. The model swore up and down that it didn\u2019t exist, and that GPT-5 was the latest OpenAI model.<\/p>\n<p>Why? 
Because at the time the model was trained, it indeed did <em>not<\/em> yet exist. Stuck in that prior world, the model was unable to comprehend even this most basic snippet of new information.<\/p>\n<p>GPT-5.3 is different. It still relies heavily on its pre-trained world knowledge. But OpenAI says that it has been optimized to quickly browse and make sense of information it finds on the internet and via other sources.<\/p>\n<p>The model \u201c&#8230;more effectively balances what it finds online with its own knowledge and reasoning\u2014for example, using its existing understanding to contextualize recent news rather than simply summarizing search results,\u201d according to OpenAI\u2019s release notes.<\/p>\n<p>The new model is also notably less timid. Instant models have limited time to think deeply about a user\u2019s query and understand their intent. In the past, that meant they tended to give vague, equivocal answers to queries with even the remote possibility of causing harm.<\/p>\n<p>OpenAI gives the example of a person asking about the proper trajectory needed for an arrow to hit an archery target. That\u2019s the kind of simple physics problem somebody might pose if they were practicing for an AP exam\u2014or simply trying to learn archery.<\/p>\n<p>Before, instant models often started their responses by scolding the user. They\u2019d warn that firing arrows might be dangerous, for example, and either provide a wussy non-response or write several paragraphs of disclaimers before giving the answer.<\/p>\n<p>OpenAI says that GPT-5.3 does a much better job of correctly understanding the context of users\u2019 questions. That lets it quickly recognize that a user asking about trajectories isn\u2019t trying to murder someone with a bow and arrow. The model can thus answer the user\u2019s questions without lots of equivocating and hedging.<\/p>\n<p>In my testing so far, all these changes do appear to genuinely work well. 
GPT-5.3 is the first instant model I\u2019ve used that doesn\u2019t feel like a dumbed-down version of OpenAI\u2019s thinking models.<\/p>\n<p>Instead, it feels like a full frontier model that can do nearly everything previous thinking models were able to accomplish\u2014only much faster and with snappier, more engaging prose.<\/p>\n<h2 class=\"wp-block-heading\" id=\"h-the-thinker\">The thinker<\/h2>\n<p>GPT-5.3\u2019s speed and cleverness free up GPT-5.4 to be something entirely different.<\/p>\n<p>Where GPT-5.3 is the \u201cdoer\u201d\u2014quickly cranking out a decent version of a response to any query\u2014GPT-5.4 is very much the \u201cthinker.\u201d<\/p>\n<p>The model explores deeply before responding to queries. In my own testing, it sometimes took as long as five to ten minutes to get back to me on complex requests.<\/p>\n<p>Like many scientific or analytical people, the model is extremely detail-oriented and comprehensive in its responses. And like some of those people, it\u2019s also a little dull.<\/p>\n<p>Reading its responses feels a bit like perusing the instruction manual for your toaster or slogging through a fascinating but pedantic scientific paper. You learn a lot, but it\u2019s not exactly scintillating stuff.<\/p>\n<p>Again, that marks a new approach. 
Before, OpenAI\u2019s thinking models tried to do everything\u2014craft code, analyze scientific problems at a deep level, and write in a compelling and creative way.<\/p>\n<p>Like many human jacks-of-all-trades, the models did everything decently, but no one thing exceptionally well.<\/p>\n<p>Because GPT-5.4 seems to abandon the idea of writing creatively or responding in a snappy and pleasant way, it gains the space to excel at what <a rel=\"nofollow\" href=\"https:\/\/openai.com\/index\/introducing-gpt-5-4\/\">it was built to do<\/a>\u2014crunch numbers, build software, and analyze data.<\/p>\n<h2 class=\"wp-block-heading\" id=\"h-the-bichon-test\">The bichon test<\/h2>\n<p>To compare the models, I gave both a simple prompt: \u201cChoose a specific topic related to Bichon Frises and then write an article about it.\u201d<\/p>\n<p>GPT-5.3 responded instantly with an article titled \u201cWhy Bichon Frises Are One of the Best Dogs for Apartment Living.\u201d<\/p>\n<p>Structured as a listicle, the article had a well-crafted introduction that cleanly transitioned into the main topic.<\/p>\n<p>It included helpful, well-written notes about the breed\u2019s size (\u201cA Bichon can curl up beside you on the couch, nap in a small bed near your desk, and move around a one-bedroom apartment without constantly feeling underfoot.\u201d), temperament, and more.<\/p>\n<p>In contrast, GPT-5.4 chose to expound at length about the problem of Bichon Frise tear stains. Its article was filled with unbearably dry nuggets like this little doozy of a paragraph:<\/p>\n<p>\u201cTear stains are primarily caused by molecules called porphyrins. These iron-containing pigments are naturally present in tears and saliva. When tears sit on a dog\u2019s fur for extended periods, the porphyrins oxidize when exposed to air. 
That oxidation produces the rusty red or brown color you see beneath the eyes.\u201d<\/p>\n<p>GPT-5.4 feels a bit like the guy you\u2019d consult if you needed help doing your taxes or wanted to better understand particle physics.<\/p>\n<p>But you really wouldn\u2019t want to get stuck next to him at a party. The model is fantastic at complex analytical tasks, but appears deliberately built to eschew the creative, communicative side of work.<\/p>\n<h2 class=\"wp-block-heading\" id=\"h-a-better-approach\">A better approach?<\/h2>\n<p>At first, I found this bifurcated approach challenging.<\/p>\n<p>Before, I could simply default to using the most up-to-date thinking model available from OpenAI.<\/p>\n<p>These models were clearly the \u201cpremium\u201d tier of OpenAI\u2019s lineup. The instant models felt built for people who couldn\u2019t be bothered to shell out $20 for ChatGPT access.<\/p>\n<p>Under OpenAI\u2019s new approach, though, that divide isn\u2019t so clear.<\/p>\n<p>I find that when I need help researching something deeply or doing anything involving numbers and data, I turn to GPT-5.4.<\/p>\n<p>Breaking down the stats from my YouTube channel, comparing the relative merits of Starlink and Comcast Business\u2014those are the kinds of things I use 5.4 to do.<\/p>\n<p>When I want to converse with a chatbot for a quick (if somewhat cursory) answer, I find myself using the 5.3 model more and more.<\/p>\n<p>Recent personal queries I\u2019ve posed to GPT-5.3 include \u201cWhy do we yawn?\u201d (to cool the brain), \u201cWhat\u2019s this weird coin I found in my closet?\u201d (a 1936 British penny), and \u201cHow do I clean fabric webbing?\u201d (with vinegar).<\/p>\n<p>I\u2019ve also used the model at work for simple Python questions, background research, and easy but tedious tasks like calculating the square footage of a room based on a series of measurements.<\/p>\n<p>One thing I\u2019ve realized in using GPT-5.3 is that 
speed matters more than I thought.<\/p>\n<p>Previously, OpenAI\u2019s instant models were too underpowered to be of much use for anything but the simplest of queries. Power users like me would always turn to the thinking models, which took as long as five minutes to render a response.<\/p>\n<p>Now that GPT-5.3 is good enough to provide genuinely useful responses, I\u2019m seeing how nice it is to get data back instantly.<\/p>\n<p>A few minutes of waiting for responses from a chatbot, sprinkled throughout a workday, doesn\u2019t feel like much. But those minutes add up. I find I can work faster and better now that I can use GPT-5.3 for more things, and get answers right away.<\/p>\n<p>Based on what I\u2019ve seen so far, I expect OpenAI will continue down this new, split model-building path.<\/p>\n<p>GPT-5.3 is snappy, and in many ways works better than GPT-5.4. But it\u2019s also probably much cheaper to run.<\/p>\n<p>Because the model presumably relies more on its pre-trained world knowledge, it likely burns through far fewer tokens to perform its work than a thinking model.<\/p>\n<p>If more power users like me find they can genuinely rely on an instant model for good responses, that will reduce the number of people who turn to the more expensive thinking models for everyday queries.<\/p>\n<p>That should allow OpenAI to reach profitability faster by cutting its costs while still collecting the same $20 (or more) per month from users like me.<\/p>\n<p>Longer term, if this approach proves fruitful, it\u2019s possible that we\u2019ll see a shift away from the use of thinking models entirely.<\/p>\n<p>For a while, the extra work that these models did yielded a notably better response. 
With GPT-5.3, that no longer seems to be a given.<\/p>\n<p>If OpenAI can continue to improve its instant models, we may see a swing back toward quick-and-good-enough LLMs, and away from the slow, meticulous ones that are in vogue today.<\/p>\n<p>Those slower, more powerful models might become the purview of coders and data analysts, with everyone else relying on increasingly powerful instant ones. That would speed up the experience of interacting with LLMs, and help AI companies scale by dramatically reducing their costs.<\/p>\n<p>We\u2019re not there yet. But OpenAI\u2019s new pair of models is a big shift in the industry, and a tantalizing step in that new direction.<\/p>\n<p><a href=\"https:\/\/www.fastcompany.com\/91507032\/openais-new-frontier-models-mark-huge-change-how-ai-will-built\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In early March, OpenAI unleashed a one-two punch, dropping two major frontier models just days apart. First, we got the new GPT-5.3, an \u201cinstant\u201d model optimized for fast, accurate responses. Then, OpenAI released GPT-5.4 two days later. This is a \u201cthinking\u201d model optimized for deep analytical work. 
I was a beta tester for OpenAI in<\/p>\n","protected":false},"author":1,"featured_media":8998,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[37],"tags":[],"class_list":{"0":"post-8997","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-brand-spotlights"},"_links":{"self":[{"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=\/wp\/v2\/posts\/8997","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=8997"}],"version-history":[{"count":0,"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=\/wp\/v2\/posts\/8997\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=\/wp\/v2\/media\/8998"}],"wp:attachment":[{"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=8997"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=8997"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=8997"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}