What’s wrong with Kubernetes?

https://twitter.com/jameswrubel/status/1375099557529391110?s=20

Nothing is wrong. It’s a great tool that has arguably become the standard for container deployment. Now that I have your attention, I’d like to talk not about the tool itself but about the decision-making process and why organizations decide to spawn Kubernetes clusters.

An established Kubernetes cluster in a company dictates certain architectural decisions for development teams. Instead of evaluating specific SaaS or cloud-native solutions for a new feature, a team may decide to run it in Kubernetes because it’s much easier to start right now. Tasks and features get implemented right away, and maintenance becomes someone else’s responsibility.

What happens later? More and more services accumulate in the cluster: a password manager, CI, an asset-management tool, a VCS, a ticketing system, a chat tool, and so on. Every single instrument is easy to deploy and maintain on its own. But once you have dozens of services multiplied by X environments, you need a whole team just to manage them. Don’t forget about downtime, and about colleagues who end up working at night. This is not an abstract example but a real-world scenario.

I often hear cost arguments for self-hosted solutions or for deploying something to a Kubernetes cluster. Costs are very “easy” to calculate. You visit the pricing page of a “Some name” SaaS solution, take the price per user, multiply it by the number of users, and voilà — you have your total estimate, let’s say €1k a month. Then you compare it with the cost of running it in your Kubernetes cluster, which might be around zero, and the difference is clear. But can we compare them at all?

How often do those calculations account for maintenance costs and the learning curve? Hiring a small DevOps team for €10–20k a month to save €1–2k doesn’t make much sense. Having something self-managed doesn’t mean it’s free.
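The comparison above can be sketched as a quick calculation. All numbers are illustrative assumptions picked for this example, not real quotes:

```python
# Hypothetical total-cost-of-ownership comparison.
# Every figure here is an illustrative assumption, not a real quote.

def monthly_cost_saas(price_per_user: float, users: int) -> float:
    """The 'easy' calculation: price per user times user count."""
    return price_per_user * users

def monthly_cost_self_hosted(infra: float, devops_salaries: float,
                             oncall_and_downtime: float) -> float:
    """The costs usually left out of the comparison."""
    return infra + devops_salaries + oncall_and_downtime

saas = monthly_cost_saas(price_per_user=10.0, users=100)
self_hosted = monthly_cost_self_hosted(infra=200.0,
                                       devops_salaries=15_000.0,
                                       oncall_and_downtime=1_000.0)

print(f"SaaS:        €{saas:,.0f}/month")         # €1,000/month
print(f"Self-hosted: €{self_hosted:,.0f}/month")  # €16,200/month
```

The “infrastructure is almost free” line only holds if the salary and on-call terms are silently dropped from the equation.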

Given all the fantastic features of Kubernetes, it’s no secret that the learning curve for new developers is quite long. Before a development team becomes fluent in spawning new microservices, it can take a while for everyone to get used to the different tools, languages, and libraries. The opposite applies to those who maintain the clusters: easy to start, hard to keep running with an acceptable SLA. Any attempt to run K8s on your own servers will turn the IT department’s work into a nightmare, and I believe there is no reason to do so anymore, thanks to Google Kubernetes Engine, AWS EKS, Azure AKS, and others.

No solution fits them all, but for application development a dev team might consider something like AWS Elastic Beanstalk, GCP App Engine, or Azure App Service. They offer all the instruments needed to start right away: container deployment, logging, alerting, monitoring, queuing, document storage, certificate management, and more.

TL;DR

Whichever way an organization decides to go, it’s crucial to weigh its current needs and never pick a solution just “because I can”. There are usually alternatives worth considering, and your own time and effort are easy to underrate.

Software engineer from Hamburg (Germany) www.zeleniuk.com