OpenAI drops two open models ahead of GPT-5 launch: GPT-OSS-120B and 20B explained in 5 points
Just before the launch of GPT-5, OpenAI released two new open-weight models. GPT-OSS-120B and 20B are tuned for agent-style tasks and include support for advanced reasoning workflows. Here is the whole story in 5 points.

In short
- OpenAI launched GPT-OSS-120B and GPT-OSS-20B under the Apache 2.0 license
- The company claims its open models set new benchmarks in the open-weight category
- However, hallucinations remain a significant problem
OpenAI has launched two new open-weight AI models, marking its first open release in over five years, since GPT-2. The newly unveiled models, GPT-OSS-120B and GPT-OSS-20B, are now available for download through Hugging Face under the permissive Apache 2.0 license, making them freely accessible to developers and enterprises. The release comes just as OpenAI prepares to launch GPT-5. Let’s take a look at what is new with these two models.
OpenAI’s new open models, GPT-OSS-120B and 20B: What is new?
- Two models for different use cases: the larger GPT-OSS-120B is designed to run on a single Nvidia GPU, while the leaner GPT-OSS-20B can run with just 16GB of RAM on consumer-grade laptops. Both are purely text-based and lack multimodal abilities such as image or audio generation.
OpenAI says the models are tuned for agent-style tasks and include support for advanced reasoning workflows. Although these open models cannot process complex data like images directly, they can route queries to OpenAI’s more powerful closed models via a cloud API, effectively acting as intelligent middlemen.
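In practice, that "intelligent middleman" pattern boils down to a local-first router with a cloud fallback. The sketch below is purely illustrative; the capability check and every function name are invented for this example, since the article does not describe OpenAI's actual routing mechanism.

```python
# Hypothetical sketch of the "intelligent middleman" pattern: try the local
# open-weight model first, and escalate to a stronger cloud model only when
# the request needs capabilities the local model lacks. All names and the
# capability check are invented for illustration.

def needs_cloud(request):
    """Toy capability check: escalate anything that isn't plain text."""
    return request.get("kind") != "text"

def handle(request, local_model, cloud_model):
    """Route a request to the local model, or escalate it to the cloud."""
    if needs_cloud(request):
        return cloud_model(request)   # stand-in for a closed-model API call
    return local_model(request)

# Stubs standing in for a local GPT-OSS instance and a cloud API client.
local = lambda req: ("local", req["prompt"])
cloud = lambda req: ("cloud", req["prompt"])

print(handle({"kind": "text", "prompt": "hi"}, local, cloud))    # → ('local', 'hi')
print(handle({"kind": "image", "prompt": "cat"}, local, cloud))  # → ('cloud', 'cat')
```

The design choice here is that the cheap local model is the default path, and the paid cloud call is the exception, which is what makes an open-weight model attractive as a front line.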
The models rely on a mixture-of-experts (MoE) architecture, which allows them to activate only a small subset of parameters per token, around 5.1 billion for the 120B model, ensuring greater efficiency. A post-training process involving high-compute reinforcement learning further enhanced their reasoning abilities, aligning them closely with OpenAI’s o-series frontier models.
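The MoE idea above can be sketched in a few lines: a router scores all experts for each token but only the top-k actually run, so most of the model's weights sit idle on any given token. The expert count and sizes below are made-up toy values chosen only so the arithmetic roughly echoes the "~5 billion active out of ~120 billion" figure; they are not GPT-OSS's real configuration.

```python
import random

# Toy mixture-of-experts routing. Expert counts and parameter sizes are
# invented for illustration and are NOT the real GPT-OSS configuration.

def route_token(router_scores, k=4):
    """Return the indices of the top-k experts for one token."""
    ranked = sorted(range(len(router_scores)),
                    key=lambda i: router_scores[i], reverse=True)
    return ranked[:k]

def active_fraction(num_experts, k, params_per_expert, shared_params):
    """Share of total parameters that actually run per token."""
    total = shared_params + num_experts * params_per_expert
    active = shared_params + k * params_per_expert
    return active / total

# One token: score 128 hypothetical experts, activate only the top 4.
scores = [random.random() for _ in range(128)]
chosen = route_token(scores, k=4)
print(chosen)  # indices of the 4 highest-scoring experts

# With these assumed sizes, ~5.4B of ~117B parameters run per token,
# which is why an MoE model this large can still be served efficiently.
frac = active_fraction(num_experts=128, k=4,
                       params_per_expert=0.9e9, shared_params=1.8e9)
print(f"active fraction per token: {frac:.2%}")
```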
- OpenAI claims its open models set new benchmarks in the open-weight category. On Codeforces, a widely used programming benchmark, GPT-OSS-120B scored 2622 and the smaller 20B model scored 2516, both outperforming DeepSeek’s R1 but still behind OpenAI’s o3 and o4-mini.
- However, hallucinations remain a significant issue, as reported by TechCrunch. On OpenAI’s PersonQA benchmark, a test of factual accuracy about people, GPT-OSS-120B hallucinated 49 percent of the time, while the 20B version did so in 53 percent of cases. That is considerably worse than the 16 percent hallucination rate seen in OpenAI’s older o1 model, and even higher than the 36 percent rate for o4-mini.
OpenAI attributes this to the low parameter activation and the limited “world knowledge” of smaller models, an expected trade-off when moving away from larger frontier systems.
- In a white paper accompanying the launch, OpenAI addressed concerns over possible misuse. The company says both internal and third-party evaluations were conducted to assess the risk of the models being repurposed for cybercrime or biochemical threats. While GPT-OSS could marginally increase bad actors’ biological knowledge, OpenAI concluded that it does not meet the company’s “high capability” danger threshold, even after fine-tuning.
Unlike some fully open-source labs such as AI2, OpenAI has chosen not to release the training data used to build GPT-OSS, likely a response to the ongoing lawsuits alleging copyright misuse in AI training.
Nevertheless, the Apache 2.0 license gives developers broad freedom, including commercial use, without any obligation to pay OpenAI or seek further permission. This could significantly boost adoption among startups and enterprises aiming to integrate capable AI models without steep licensing costs.
- After years of a tightly guarded, proprietary approach, OpenAI has taken a decisive turn by re-entering the open-source arena. The move is widely seen as an attempt to reclaim leadership in a space where Chinese players such as DeepSeek, Moonshot AI, and Alibaba have recently made major strides in open-model development. While Meta’s Llama once led this segment, its influence has waned over the past year.
With the release of GPT-OSS, OpenAI is making a calculated bid to re-engage developers and policymakers, especially the Trump administration, which has recently urged American firms to open up more AI technology to advance democratic values in global AI adoption. CEO Sam Altman admitted earlier this year that OpenAI may have been “on the wrong side of history” when it came to transparency and openness.