When we at Replicated begin working with teams building or modernizing on-prem software, we often see the same well-intentioned but costly mistakes. Vendors want to deliver something robust and future-proof—but in the process, they end up solving the wrong problems. Too often, that means engineering complexity for complexity’s sake, without pausing to consider what daily life will look like for the people actually running the software.
The result? An over-engineered deployment that looks impressive on paper but creates a heavy support burden and alienates the very users it's meant to serve. Whether it’s a data engineer trying to get a platform running or a field team stuck debugging a service mesh they’ve never touched, the disconnect between design and reality leads to friction, frustration, and failed implementations.
This blog is a guide to doing on-prem better. It’s about reducing that friction—by choosing simplicity, designing for real users (not idealized ones), building with flexibility from day one, and keeping engineering close to the support experience. If you're serious about winning in on-prem, these are the principles that will help you do it right.
When bringing your product on-prem, it’s tempting to over-deliver. More tools, more features, more architecture—it can feel like you’re offering more value. But in reality, simplicity wins. Every extra layer adds friction for your customers and becomes a hidden cost for your team.
There are two common ways complexity sneaks into on-prem solutions:
If your team is relatively new to Kubernetes, there’s a good chance you’re unintentionally making things harder than they need to be. When teams first start building Kubernetes-based applications, they often over-engineer—not out of arrogance, but due to lack of experience.
You want to follow best practices. You want it to feel “production-grade.” So you reach for sophisticated patterns and tools like GitOps workflows, service meshes, or extensive logging and monitoring stacks, assuming they're standard for any serious Kubernetes deployment.
The issue? Without the experience to distinguish what's essential from what's simply nice-to-have, you risk delivering complexity that your customers don’t actually need. Instead of making it easy for them to adopt your product, you're forcing them to understand your infrastructure choices first.
Your customers aren’t just deploying your app—they’re inheriting every early architectural decision your team made. The result is more support tickets, slower onboarding, and frustrated customers.
The other trap comes from experience—but applied in the wrong context. Some vendors try to lift their mature, production-grade SaaS architecture and drop it into customer environments. The result? A heavyweight stack filled with tools that work great at scale—but overwhelm smaller, self-hosted deployments.
You’ve built your SaaS platform with tools like Argo CD for GitOps, Istio for service mesh, and Vault for secrets. These are excellent solutions for your environment—helping your teams automate, secure, and manage at scale.
But here’s the problem: do you want your customers troubleshooting Argo CD sync issues? Managing a Vault cluster? Figuring out service mesh traffic policies just to get your app running?
What works for your SRE team doesn’t necessarily work for your end users. Instead of focusing on your product, they’re tangled up in infrastructure complexity they didn’t ask for—and don’t have the expertise to handle.
To help customers succeed, don’t force them to learn complicated infrastructure that slows them down from evaluating and deploying your software. Instead:
Let the advanced tooling be opt-in. Start with a foundation that’s easy to deploy, easy to troubleshoot, and focused on delivering value fast.
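One way to sketch "opt-in by default" is a layered configuration: the vendor ships lean defaults, and advanced components only activate when a customer explicitly asks for them. The keys below (`service_mesh`, `gitops`, and so on) are illustrative, not any real product's schema.

```python
# Hypothetical config schema: advanced components ship disabled by default,
# and customers opt in explicitly. Key names are illustrative only.

DEFAULTS = {
    "app": {"replicas": 1},
    "service_mesh": {"enabled": False},  # opt-in, never required to run
    "gitops": {"enabled": False},
    "monitoring": {"enabled": True, "retention_days": 3},  # lean default
}

def merge(defaults: dict, overrides: dict) -> dict:
    """Recursively overlay customer overrides onto vendor defaults."""
    result = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)
        else:
            result[key] = value
    return result

# A customer who never touches the config gets the simple stack...
basic = merge(DEFAULTS, {})
assert basic["service_mesh"]["enabled"] is False

# ...while a sophisticated platform team can opt in deliberately.
advanced = merge(DEFAULTS, {"service_mesh": {"enabled": True}})
assert advanced["service_mesh"]["enabled"] is True
```

The same shape maps directly onto a Helm chart's default values file with customer-supplied overrides; the point is that the zero-override path is the simple one.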
Engineers often assume they'll deploy their applications alongside infrastructure experts or seasoned Kubernetes administrators. In reality, this is rarely the case.
While it may seem logical that you'll interact primarily with central IT or dedicated platform teams, more often your users are practitioners closer to the core use case of your product. If you're delivering a platform for cloud-native observability or AI tooling, you might indeed be working directly with infrastructure professionals. However, if your product is a data analytics platform, your primary contact is likely a data engineer—someone skilled in data pipelines and analytics, but potentially unfamiliar with Kubernetes and infrastructure.
Even when you do work with a central platform team, the benefits aren't always straightforward. Their involvement can add its own layers of process and delay between you and the people evaluating your product.
To mitigate these challenges, your best strategy is to lead with simplicity from day one.
Designing for ideal customers is easy. Designing for real customers—the ones actually doing the install—is how you win.
Many vendors delay building a cloud-agnostic architecture until it's too late. A major enterprise deal or government contract suddenly forces them to pivot—and that pivot is often painful, costly, and sometimes impossible.
It’s tempting to leverage proprietary cloud services early on—like AWS Lambda, Google BigQuery, or Azure Cosmos DB—because they offer rapid development, seamless integration, and quick scalability. These services seem convenient initially, allowing teams to move fast and build impressive demos quickly. However, this convenience comes at the cost of future flexibility.
Why choosing cloud-agnostic, open-source solutions matters:
By prioritizing open-source equivalents—such as Helm for packaging, Kubernetes for orchestration, and PostgreSQL or MongoDB for data management—you create a truly portable infrastructure. This approach unlocks the full potential of your market, ensuring you're not limiting your customer base to a specific cloud provider's ecosystem. Building for portability from day one is not just prudent; it’s essential for long-term growth.
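In application code, portability usually comes down to coding against a small interface rather than a provider's API, so each deployment can pick a backend it is able to run. A minimal sketch, with hypothetical class and method names:

```python
# Illustrative sketch of keeping infrastructure swappable: the application
# depends on a small interface, and each deployment supplies a backend.
# BlobStore and its methods are hypothetical names, not a real library.

from abc import ABC, abstractmethod

class BlobStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    """Stand-in for a portable backend (e.g. MinIO or a local volume)."""
    def __init__(self):
        self._objects: dict[str, bytes] = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]

# Application code never mentions S3, GCS, or any provider API directly,
# so an on-prem install only needs a BlobStore implementation it can run.
def save_report(store: BlobStore, name: str, body: bytes) -> None:
    store.put(f"reports/{name}", body)

store = InMemoryStore()
save_report(store, "q1.csv", b"revenue,region\n")
assert store.get("reports/q1.csv") == b"revenue,region\n"
```

A cloud deployment would supply an object-storage-backed implementation of the same interface; the application code above doesn't change.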
Here’s the hard truth: supporting software in customer-controlled environments is inherently difficult. You don’t get Grafana dashboards or access to real-time Prometheus metrics. Even getting logs can be a slow, painful process—pulled manually, redacted for privacy, and sent over days of back-and-forth. In air-gapped installs, things move even slower. Every exchange—logs, screenshots, config files—can take hours or days, compounding frustration on both sides. And even if the root cause turns out to be a misconfiguration in their environment, not a bug in your product, the customer experience still suffers—and the blame often lands on you.
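Some of that back-and-forth can be shortened with tooling the customer runs themselves—for example, scrubbing obvious secrets from logs before they ever leave the environment. A minimal sketch; the patterns here are illustrative, and a real redactor would be driven by a configurable rule set:

```python
# Minimal sketch of redacting obvious secrets from log lines before
# they leave the customer's environment. Patterns are illustrative only.

import re

PATTERNS = [
    re.compile(r"(password|token|secret)=\S+", re.IGNORECASE),
    re.compile(r"Bearer\s+\S+"),
]

def redact(line: str) -> str:
    """Replace anything matching a known secret pattern."""
    for pattern in PATTERNS:
        line = pattern.sub("[REDACTED]", line)
    return line

log = "GET /login token=abc123 Authorization: Bearer eyJhbGci"
assert redact(log) == "GET /login [REDACTED] Authorization: [REDACTED]"
```

Automating redaction turns a days-long privacy review into something the customer can run in minutes, which matters doubly in air-gapped installs.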
This isn’t a space where traditional support models thrive. Engineering needs to be in the loop from day one.
The people who built the system are the ones best equipped to diagnose it when something goes wrong in a customer's environment.
Engineers may push back—they signed up to build, not support. But here’s the tradeoff: investing time into diagnosing real-world issues is the fastest way to improve the product. And it pays off. The pain becomes automation. The guesswork becomes visibility. The tickets go down.
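"The pain becomes automation" often takes the concrete form of preflight checks: every recurring support issue becomes a check that runs before install and fails fast with an actionable message. The specific checks and thresholds below are hypothetical examples, not any product's real requirements.

```python
# Sketch of turning recurring support pain into automated preflight
# checks. Checks and thresholds are hypothetical examples.

def check_disk_space(available_gb: float, required_gb: float = 50) -> tuple[bool, str]:
    ok = available_gb >= required_gb
    return ok, f"disk: {available_gb}GB available, {required_gb}GB required"

def check_kubernetes_version(version: tuple[int, int],
                             minimum: tuple[int, int] = (1, 26)) -> tuple[bool, str]:
    ok = version >= minimum
    return ok, f"kubernetes: running {version[0]}.{version[1]}, need >= {minimum[0]}.{minimum[1]}"

def run_preflight(environment: dict) -> list[str]:
    """Return a list of failure messages; empty means the install can proceed."""
    failures = []
    for ok, message in [
        check_disk_space(environment["disk_gb"]),
        check_kubernetes_version(environment["k8s_version"]),
    ]:
        if not ok:
            failures.append(message)
    return failures

# An environment that would otherwise surface as a vague support ticket
# fails fast, with messages the customer can act on themselves.
failures = run_preflight({"disk_gb": 20.0, "k8s_version": (1, 24)})
assert len(failures) == 2
```

Each check encodes a lesson from a past ticket, so the support burden compounds downward instead of upward.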
Succeeding in on-prem isn’t about proving how technically sophisticated your team can be—it’s about delivering something your customers can actually run, trust, and support. The difference between a successful deployment and one that stalls often comes down to how well you’ve considered the realities of your users’ day-to-day.
Don’t optimize for your own convenience or expectations. Optimize for the people in the trenches—those installing, troubleshooting, and relying on your software to deliver business value. Choose simplicity over flash. Design for the user you actually meet, not the one you hope to. Build flexibility in from the start so you’re not forced into a painful rewrite later. And bring engineering into the support loop early—because that’s where product maturity and customer empathy are forged.
On-prem can be a competitive advantage. But only if you earn that trust by making life easier, not harder, for the people running your software. That’s how you reduce friction. That’s how you win.
For help distributing your application into self-hosted environments, reach out to the team at Replicated.