Last week OpenAI shared how they deployed their o3 reasoning model into one of the U.S. National Laboratories. Not through an API. Not via the cloud. Instead, as a custom, air-gapped, on-premises deployment onto the lab’s supercomputer.
Engineers literally walked the model weights into a facility where phones and personal electronics aren’t allowed, a transfer method affectionately known as “sneakernet.” They installed the model directly on the customer’s hardware, wired into the lab’s networking stack, inside one of the most restricted environments imaginable.
If you’ve followed the history of enterprise software, you’ll understand why I think this moment is so significant.
Most of the canonical SaaS companies of the last two decades, including Salesforce, Workday, and Slack, grew up with a single conviction: everything would be delivered as multi-tenant SaaS. They did not bend to customer requests for on-prem deployments. If you wanted their software, you had to adopt it on their terms.
AI is proving to be different. The demand from enterprises, governments, and research institutions is too strong. Sensitive data, national security workloads, and proprietary research cannot simply be sent to someone else’s multi-tenant AI app. The most AI-native company in the world, OpenAI, recognized this and shipped one of their most advanced models into an air-gapped environment.
That is not a corner case. It is a direct market signal.
AI is not just another flavor of SaaS. It is a new category of software, one where the most valuable inputs are often the most sensitive. Perhaps more critically, techniques are emerging that let that sensitive data power last-mile training and fine-tuning of the underlying models. For adoption to happen at scale, enterprises need deployment options that align with their security posture.
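To make “last-mile fine-tuning” concrete, here is a minimal sketch of one such emerging technique: parameter-efficient LoRA adapters, shown with Hugging Face’s peft library against locally hosted open weights. The model path and hyperparameters are illustrative, and this says nothing about how OpenAI’s deployment actually works; it is simply the shape of the technique, where the sensitive data and the resulting adapter weights never leave the trusted environment.

```python
# A minimal sketch of last-mile fine-tuning with LoRA adapters (via the
# Hugging Face peft library) on a model hosted entirely inside the trusted
# environment. Paths and hyperparameters are illustrative, not prescriptive.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Load weights from local disk: no network calls, no data leaves the building.
model = AutoModelForCausalLM.from_pretrained("/models/local-base-model")
tokenizer = AutoTokenizer.from_pretrained("/models/local-base-model")

# LoRA trains small low-rank adapter matrices instead of the full model,
# which keeps the compute cost of last-mile customization manageable.
lora_config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all params

# From here, a standard training loop over the organization's proprietary
# data produces an adapter checkpoint that stays on-prem, right beside the
# base weights.
```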
“Self-hosted AI” is becoming the natural answer. It allows organizations to run models inside their own trusted environments, with full control over data, networking, and compliance. The alternative — “just send everything to our API” — will not work for the majority of meaningful enterprise use cases.
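In practice, “run it in your own trusted environment” often looks like pointing a standard client at an internal, OpenAI-compatible endpoint. Here is a minimal sketch, assuming a self-hosted inference server (for example, one run with vLLM) at a hypothetical internal hostname:

```python
# A minimal sketch: the standard OpenAI Python client talking to a
# self-hosted, OpenAI-compatible endpoint inside the trusted network.
# The hostname and model name below are hypothetical.
from openai import OpenAI

client = OpenAI(
    base_url="http://inference.internal:8000/v1",  # hypothetical on-prem server
    api_key="unused-on-prem",  # many self-hosted servers accept any token
)

response = client.chat.completions.create(
    model="local-model",  # whatever model the internal server exposes
    messages=[{"role": "user", "content": "Summarize this internal report."}],
)
print(response.choices[0].message.content)
```

No traffic crosses the organization’s boundary, and the same application code could target a public API simply by changing the base URL. That single point of control is exactly what enterprise security teams are asking for.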
The ripple effects go beyond OpenAI. If the foundation model itself is available on-prem, customers will expect the surrounding ecosystem, from analytics to monitoring to MLOps tooling, to be self-hosted too.
It is no longer enough to say “we integrate with OpenAI’s cloud APIs.” Now that OpenAI has shipped a self-hosted deployment, customers will ask: do you integrate with that version too? Can you run alongside it in my environment?
This is how entire industries shift. What begins as a bespoke deployment for a National Lab quickly becomes a market expectation for the Fortune 500.
At Replicated, we have built our company on the belief that the future of software distribution is flexible deployment. Vendors need to meet enterprises where they are, whether that is in a public cloud, a private cloud, or an air-gapped data center.
OpenAI’s move validates that thesis. If they are delivering air-gapped deployments, then the rest of the industry has to take notice.
We do not need OpenAI as a customer for this to matter (though we would love to help). What matters is the signal it sends: self-hosted is not a niche; it is table stakes for serious adoption of AI and, increasingly, of the software around it. And it means the ecosystem around AI (startups, ISVs, enterprise vendors) will feel the same demand.
The future of self-hosted software is arriving faster than expected, pulled forward by AI. Vendors who adapt will win enterprise trust. Those who cling to a cloud-only worldview will confine themselves to less sensitive, lower-value workloads.
For the rest of us, this is both a challenge and an opportunity. At Replicated, our mission is to enable this future by helping software companies deliver their applications wherever customers need them, including the most secure, air-gapped environments in the world.
OpenAI just gave the entire industry a glimpse of what that future looks like. Now it is on all of us to build for it.