“We don’t want to be remembered as the ones who hoarded intelligence,” says executive, as OpenAI walks a delicate line between openness and profit.
OpenAI has just released its first open-source AI model in years, marking a significant pivot for a company that had become almost entirely closed. The new release, known as OpenAI’s “open model”, is part of a broader strategy to re-engage with the open-source community and reposition the company amid increasing criticism from AI researchers, competitors, and even its own former employees.
But the move, while headline-grabbing, also reveals a strategic duality: OpenAI is opening up, but only so far.
“It was clear we were on the wrong side of history”
The decision to release an open model follows mounting pressure on OpenAI to return to its roots. Founded with a commitment to transparency and collaboration, the company gradually became more opaque as its models, particularly GPT-3, GPT-4, and the newly released GPT-4o, proved commercially valuable. That shift alienated many in the AI research community who believe in the foundational importance of open science.
One senior OpenAI researcher reportedly said in internal discussions: “It was clear we were on the wrong side of history. We had to do something, anything, to avoid being remembered for locking it all away.”
By releasing the new open model, believed to be a GPT-3 level architecture with significantly fewer safeguards, the company is offering the public a glimpse of its technology while avoiding the release of truly cutting-edge IP.
A calculated form of transparency
The model is open, but it’s far from a full reveal. While the code and weights are available for download, OpenAI has not shared the training data, techniques, or other proprietary components that power its latest frontier models like GPT-4o. Critics have called the move “performative open-sourcing”, a way to win back goodwill without sacrificing strategic advantage.
Sam Altman, OpenAI CEO, addressed this in a recent podcast interview: “We want to lead with values, but we also have a responsibility to our investors and to the safe deployment of AI. Not everything should be open, especially at this level of capability.”
Why now? Timing, optics, and competition
The release comes at a time when rivals like Meta, Mistral, and Hugging Face are championing open-source alternatives to GPT models. Meta’s Llama 3 and Mistral’s Mixtral models have gained traction not just for their capabilities but for their openness; they are widely used by developers, researchers, and startups looking to innovate without paying steep API fees.
With antitrust scrutiny increasing and tech watchdogs warning about monopoly control in AI, OpenAI’s move could also be seen as a regulatory defense mechanism. By demonstrating a commitment to openness, the company can present itself as a collaborative force rather than a closed commercial juggernaut.
Still guarding the crown jewels
Despite the release, OpenAI’s most valuable assets remain firmly locked away. The new open model is nowhere near GPT-4o, and it includes neither multimodal capabilities, nor reinforcement learning from human feedback, nor the advanced alignment techniques developed in-house.
This means developers and researchers can experiment, but only on OpenAI’s terms. In the words of one analyst: “They’re offering a sandbox, but not the keys to the kingdom.”
“Openness is not binary”
OpenAI’s stance underscores a growing reality in the AI world: transparency exists on a spectrum. While fully open-source AI may be philosophically desirable, it also carries risks, from misuse to national security concerns. OpenAI is attempting to carve a middle path, offering enough to appease critics while keeping its lead.
As the company continues to commercialize its flagship products, especially through partnerships with Microsoft and enterprise clients, it’s unlikely to revert fully to its early open-source ethos.
But this latest release is a nod to the idealism it once stood for. Whether it’s enough to repair its standing in the research community remains to be seen.