White Paper

Choose DevOps Tools That Facilitate A Conversation


Containers are a great way to realize the promise of a microservices architecture, but in production, containerization is not going to work without developers’ involvement and buy-in.

Each side of this equation (Dev and Ops) has its own vendors, tools, and open source projects to help them with what it takes to move to a containerized world—but that’s not enough.

With APIs like Kubernetes exposed across the software life cycle, it is imperative for developers and operators to change the way they work and communicate with each other—and use tools that facilitate an operational conversation.

What you will learn

This paper is aimed at development leads and senior developers who wish to realize the promise of microservices and containers at scale, in production, and in an automated and operationally sound way. It covers:

  • How the ubiquity of managed Kubernetes services emphasizes the bigger operational challenge.

  • What system characteristics become critical as Dev and Ops are meshed together.

  • Why “CI/CD” discounts deployment complexity for Kubernetes pipelines (and why we should replace it with “CDP”).

  • Which strategic options you have in choosing tooling, and what each option means for your organization in the long term.

The promise of Microservices, the challenge of Containers

It is 2018, and microservices architecture needs no introduction. By breaking their applications down into small, interacting services, developers around the globe (as well as their users!) are enjoying increased application flexibility and resilience, greater agility, and improved performance, amongst other benefits.

To fulfil the promise of microservices, containers have been the go-to infrastructure primitive. Containers have been the future for a few years now, and while in the past few years there has been consolidation around areas such as orchestration (Kubernetes) and monitoring (Prometheus), much of the rest is still evolving. This industry has, some argue, only just crossed the chasm.

On the infrastructure layer, if you are already running on a major cloud provider, you are a couple of clicks away from having your own containerized cluster—managed, serviced, and billed by the minute. The benefits are clear to operators: single-configuration servers (no more “snowflakes”), built-in high availability and resilience, and improved resource utilization, to name just a few. Just like virtualization and cloud, this can be a beneficial technology evolution that helps operators move the organization forward, without challenging the way they work.

But the fact that it can be so, doesn’t mean it will be. The reason is that, unlike previous waves of innovation, Kubernetes forces developers and operators to share the same world view through the API, which is exposed to both parties. Developers can now have more influence over the running environment, improved control over libraries and dependencies, and a narrower gap between production and development environments—but can operators live with that?
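This shared world view is easiest to see in the objects themselves. The Deployment manifest below is purely illustrative (the names and values are assumptions, not a recommendation), but it shows how a single Kubernetes object carries fields that matter to developers right next to fields that matter to operators:

```yaml
# An illustrative Kubernetes Deployment: one object, two audiences.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
spec:
  replicas: 3                        # Ops: availability and capacity
  selector:
    matchLabels: {app: checkout}
  template:
    metadata:
      labels: {app: checkout}
    spec:
      containers:
      - name: checkout
        image: registry.example.com/checkout:v1.4.2   # Dev: code and dependencies
        env:
        - name: FEATURE_FLAGS        # Dev: runtime behaviour
          value: "new-basket"
        resources:
          limits: {cpu: "500m", memory: "256Mi"}      # Ops: resource governance
```

Neither side can change their half of this file without the other side seeing it — which is precisely the conversation the API forces.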

Kubernetes is a project built by system engineers for system engineers. In the end, it takes good practice, grounded in sound knowledge, to turn it into successful software delivery. As countless developers and operators have found out, it is highly advisable to get help on the journey.

Good fences don’t always make good neighbors

The meshing together of Devs and Ops in Kubernetes does not change the fundamental and disparate objectives that these functions have—it just mandates closer collaboration. As a developer, you want to create great user experiences by writing your best code, and shipping it and reiterating it fast, without being slowed down (and without getting into trouble!). Your operator colleagues want to put reliable, self-service mechanisms in place that they can trust, and to focus on system reliability, efficiency, security and performance. Freedom and controls, empowerment and governance, release pace and observability—nirvana is when they coexist within the same software delivery system.

Cloud 66’s stack has run on containers for years. In our experience, good container Ops is about a constant, open conversation. Luckily, corporate and community cultures are becoming more inclusive and open, and the relevant tooling is being built for this new reality. For everyone to succeed on this journey, the team culture needs to encourage failing fast, blameless post-mortems, openness and inclusion. On the technology side, we need tools that provide characteristics such as observability across the whole system, flexibility for developers, and oversight for operators and dev managers—to help bring out the best in the shared Kubernetes API.

Case in point: The container pipeline

When it comes to the “how” of making this a reality, the IT strategy is often similar across different categories of solutions. For this discussion, let’s take the pipeline as a specific case. A recent article in a well-known publication posited that the perfect CI/CD solution for Kubernetes doesn’t exist. We disagree. In our opinion, and as we’ve discussed at length in a previous white paper, the nomenclature of “CI/CD” ignores a few key aspects of Kubernetes and needs to be replaced by CDP: the Container Deployment Pipeline.

Most CI/CD tools carry with them practices superimposed from the non-container world—as an example, think about the emphasis on testing in an environment which (a) is extremely complex and intricate, and (b) can be recreated and redeployed quickly and easily. In contrast, a CDP tool should do the following:

  • Understand that microservices require a pipeline-wide view, from Git to Kubernetes, rather than focusing just on build & test.

  • Introduce key points of automation while providing advanced observability, configuration, security and policy management.

  • "Speak config", i.e. automate the creation, control and versioning of production-minded config files for any environment. (Easy environment creation should mean easy operations!)

  • Be easily scalable and deployable across substrates, teams, clusters and regions.
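As a minimal sketch of what “speaking config” can mean in practice, a CDP-minded tool renders versioned, environment-specific manifests from a single source of truth, rather than having humans hand-edit YAML per environment. The service names, environments, and template fields below are illustrative assumptions, not a prescribed schema:

```python
# A minimal sketch of "speaking config": rendering environment-specific
# Kubernetes manifests from one versioned template. Names and values
# here are illustrative assumptions.
from string import Template

DEPLOYMENT_TEMPLATE = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $service-$env
spec:
  replicas: $replicas
  template:
    spec:
      containers:
      - name: $service
        image: registry.example.com/$service:$tag
""")

# Per-environment parameters live in version control, not in people's heads.
ENVIRONMENTS = {
    "staging":    {"replicas": 1, "tag": "sha-abc123"},
    "production": {"replicas": 3, "tag": "v1.4.2"},
}

def render(service: str, env: str) -> str:
    """Produce the manifest for one service in one environment."""
    params = ENVIRONMENTS[env]
    return DEPLOYMENT_TEMPLATE.substitute(
        service=service, env=env,
        replicas=params["replicas"], tag=params["tag"],
    )

if __name__ == "__main__":
    print(render("checkout", "production"))
```

Because every environment is generated from the same template, spinning up a new one is a one-line change in a reviewed, versioned file — easy environment creation really does mean easy operations.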

The confusion around CDP, or the CI/CD gap if you will, is leading many users to spend precious engineering time on developing their own bespoke operations tools, while others are trying to push existing solutions well outside of what they are meant to do (e.g., using a package manager like Helm as a deployment tool) or to focus on the wrong areas (e.g. introducing more manual control or exclude-first security steps into their CI pipeline, which slows down release pace).

Delivery Options

Let’s look at the main options for companies looking to adopt a CDP tool—or many other tools—for their Kubernetes environments.

Doing it yourself

Thankfully, the free software and open source revolutions have made it possible for developers and operators to access a plethora of useful solutions, including ones that help build a software delivery system for Kubernetes. Many large organizations look at two important criteria when using pure open source projects (i.e., without support) in production: (a) relying on projects which are backed by strong organizations and communities, and (b) accounting for hidden operational costs. Since the first is somewhat obvious, let’s look at the second, using a popular example, Spinnaker.

Spinnaker is a powerful continuous delivery tool developed by teams at Netflix, which as a technology company is probably the gold standard in hyper-scale open source software development. Many companies follow Netflix’s stack religiously, not heeding the warnings of its former Chief Architect, Adrian Cockcroft, who once pointed out that “people try to copy Netflix, but they copy only what they can see. They copy the results, not the process.” Indeed, the first step is to evaluate whether what Netflix did applies well enough to your own unique challenges—a surprising number of software professionals skip this critical step!

Then, to make the most out of the tool in production over time, you need to seek out talent that can help you understand and adapt this complex project, created by the best brains in the industry. By a conservative estimate, customizing a pipeline means 1.5 engineer-years (three people over six months) at a fully-loaded cost of $200,000 each, with an additional, recurring full engineer-year for maintenance, per about 15 developers—a total of $900,000 over a three-year horizon to serve a team of 15 developers. Furthermore, in this quest to avoid vendor lock-in, you might have inadvertently locked yourself into your own team: they now need to keep up with complex external projects, and write and maintain complicated customizations—while you remain mindful of the acute implications of key people moving on in their career. At scale, cheap or free software can often mean expensive operations.
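Those figures are worth making explicit. All inputs below are the paper's own assumptions (fully-loaded cost per engineer-year, team size, three-year horizon), not independent data:

```python
# The back-of-envelope cost estimate from the text, made explicit.
# Inputs are the paper's stated assumptions, not independent data.
COST_PER_ENGINEER_YEAR = 200_000   # fully-loaded, USD

customization = 1.5 * COST_PER_ENGINEER_YEAR         # three people over six months
maintenance_per_year = 1.0 * COST_PER_ENGINEER_YEAR  # one recurring engineer-year
horizon_years = 3

total = customization + maintenance_per_year * horizon_years
print(f"${total:,.0f} over {horizon_years} years")   # prints: $900,000 over 3 years
```

Note that maintenance, not the initial build, dominates the three-year total — which is exactly the hidden operational cost the criteria above warn about.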

Outside help: consulting firms

Consulting is a good way to overcome talent shortage and jump the initial hurdle of building, or otherwise customizing and integrating, complex tooling into your environment. Most consulting firms will have ready access to talent, deep experience in delivery, and knowledge of the latest best practice.

However, assuming the pricing over the expected life of the system makes sense vis-à-vis the DIY option, your risk is twofold: you might be creating dependence on a custom tool in a rapidly-commoditizing world, and on a third party to update that tool. From a cultural point of view, by bringing in top-down experts and time-limited external advice, you might contribute to a feeling of disempowerment amongst your developers.

Outside help: solution vendors

There is a lot of talk about avoiding vendor lock-in, and it might be justified in consolidating areas of IT such as IaaS. However, in the DevOps space, sufficient competition exists to help users avoid or at least minimize this. In many cases, relying on an external solution with a low-medium switching cost makes much more economic and operational sense than being locked into a custom tool, a consultant, or a specific team’s skillset, all with higher switching costs (on top of the high running costs, as outlined above).

Another consideration is your cloud strategy: if you are committed to one cloud platform, of course, you may be able to find what you need within that provider’s portfolio (caveat: at the moment of writing, we do not think that any of the main cloud providers offer a viable CDP solution); if you have a multi-cloud strategy, you should seek out tools that support those clouds, and CDP is no exception.

This will also affect the deployment choice: whether to use a hosted solution, or one that is self-hosted in your public or private cloud/server infrastructure. Self-hosting normally comes at higher upfront costs, but delivers much more bandwidth, better support, tighter security, and easier compliance with data privacy regulations.

Lastly, your team is probably made up of people with diverse areas of expertise and differing levels of skill. In this real-world situation, it would be wise to discern between tools that are one-size-fits-all, and tools that provide an opinion which a savvy user can modify. The latter is better suited to the fast-paced evolution of projects like Kubernetes, and to the real-world market for engineering talent.


Unlike virtualization and cloud before them, container orchestration technologies like Kubernetes force developers and operators to share a world view of software delivery onto infrastructure. This requires a new mindset, new processes, and a new approach to tooling which has collaboration, observability, and balance embedded into it. When choosing a delivery method, it is critical to think about the full economic costs, cultural impact, and technology implications of using open source software, hiring consultants, or choosing a solution vendor—and how all these fit in with the unique organization in question.

Cloud 66’s approach to CDP

We have been running thousands of customer workloads on containers since 2014, and on Kubernetes since 2016. All of our products and open source projects were created by us as a way to solve issues we came across when moving to containers, and while managing 4,000 customer workloads on containerized infrastructure.

  • Skycap — a complete Git-to-any-Kubernetes Container Deployment Pipeline, designed to simplify configuration, security and operations for developers shipping code into Kubernetes.

  • Habitus — an open source build flow tool for Docker, Habitus can create a build chain to generate your final Docker image based on a workflow. Habitus also minimizes image vulnerability and IP leakage.

  • Copper — an open source tool to validate Kubernetes configuration files. You can use Copper to make sure your Kubernetes configuration files adhere to policies you set for your infrastructure.

  • Starter — an open source tool to get you started with containers, Starter generates the initial Dockerfiles and yaml files from raw code.

  • Maestro — A complete solution for deploying and managing your app on containerized infrastructure, Cloud 66 Maestro builds Kubernetes clusters on your cloud or servers of choice and gives you all the tools you need to scale, monitor and maintain your application, including non-containerized services.
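To make the policy idea concrete, here is a simplified illustration of the kind of rule a tool like Copper enforces — in this case, that container images must be pinned to a version rather than untagged or `:latest`. This is not Copper's actual rule syntax; the check, manifest, and names below are assumptions made for the sketch:

```python
import re

# A simplified, hypothetical policy check in the spirit of Copper
# (NOT Copper's actual rule syntax).
# Rule: container images must be pinned, never untagged or ':latest'.

IMAGE_LINE = re.compile(r"^\s*image:\s*(?P<image>\S+)", re.MULTILINE)

def violations(manifest: str) -> list:
    """Return every image reference that is untagged or uses ':latest'."""
    bad = []
    for match in IMAGE_LINE.finditer(manifest):
        image = match.group("image").strip("\"'")
        tag = image.rsplit(":", 1)[1] if ":" in image else None
        if tag is None or tag == "latest":
            bad.append(image)
    return bad

manifest = """\
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: web
        image: registry.example.com/web:v2.1.0
      - name: sidecar
        image: registry.example.com/sidecar:latest
"""
print(violations(manifest))  # ['registry.example.com/sidecar:latest']
```

Running a check like this in the pipeline, before anything reaches a cluster, is how operators get governance without slowing developers down with manual gates.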

Cloud 66 tools are available in the following delivery methods:

  • Hosted — as a hosted service at cloud66.com with no need to download, install or maintain anything.

  • Dedicated — a cloud installation hosted on your virtual servers, in any AWS or GCP region and availability zone, under your own account and control. Please contact us if you would like us to enable an additional cloud provider.

  • OnPrem — installed in your datacenter, giving you full control over storage and database resources, and on managing data compliance.

Need more information?

Organize a demo with one of our engineers

Request a Demo

Try it Free for 2 Weeks

Sign up with GitHub