By: Thu Nguyen


In a previous post, we explained the concept of configuration management and presented three of the most popular tools: Chef, Puppet, and Ansible. We also briefly explored the impact that containerization is having on configuration management, and how the two can be used in combination. This article takes a more in-depth look at this relationship by presenting different techniques for using Chef, Puppet, and Ansible to deploy and manage a Kubernetes cluster.

How Can Configuration Management Tools Support Kubernetes?

As we mentioned in our previous post, configuration management tools and container orchestration tools are not mutually exclusive. While both solve the problem of managing environments in a consistent, automated way, they approach it differently.
Configuration management tools operate on systems to bring them to a desired state. DevOps teams define the final state of their infrastructure and applications, and tools like Chef, Puppet, and Ansible enforce this state. Meanwhile, container orchestration tools like Docker Swarm and Kubernetes abstract away the infrastructure entirely, letting DevOps teams deploy applications without worrying about infrastructure state or configuration. Both approaches can be used to deploy applications, but configuration management tools emphasize infrastructure management.
With this in mind, configuration management tools and Kubernetes can—and often do—work together in the same environment. Deploying and maintaining a Kubernetes cluster is challenging, regardless of whether the cluster runs on-premises or in the cloud. Using a configuration management solution lets DevOps teams hand off this responsibility to an automated process and focus instead on deploying applications.


Chef

Chef integrates with Kubernetes mainly through Habitat, a framework for deploying and managing applications and their dependencies. While Chef automates infrastructure management, Habitat automates application management. Kubernetes the Hab Way is a project that uses Habitat to deploy Kubernetes by following the steps laid out in Kubernetes the Hard Way.
With Kubernetes the Hab Way, each Kubernetes component is bundled as a Habitat application. Deploying a complete cluster means deploying each component individually. The project’s repository contains a complete walkthrough of this process, as well as several scripts to help with common tasks such as updating the cluster and installing the Kubernetes command line tool (kubectl). The process involves a lot of manual setup initially, but results in a complete Kubernetes cluster.
To learn more about Kubernetes the Hab Way, visit the project’s GitHub repository or introductory blog post.
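To give a sense of the workflow, deploying a component with Habitat looks roughly like the following sketch. The package names here are placeholders, not the project's real identifiers—the actual origins and packages are listed in the Kubernetes the Hab Way repository:

```shell
# Start a Habitat supervisor on the node (shown backgrounded here;
# in practice you would run it under a process manager).
hab sup run &

# Load each Kubernetes component as a Habitat service.
# "example-origin" and the package names are illustrative only.
hab svc load example-origin/kubernetes-apiserver
hab svc load example-origin/kubernetes-controller-manager
hab svc load example-origin/kubernetes-scheduler
```

Each loaded service is then supervised and updated by Habitat, which is what lets the project treat Kubernetes components as ordinary applications.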


Puppet

Puppet provides an official module for creating a Kubernetes cluster. In addition to installing the core components and administration tools, the module also generates the necessary SSL certificates, security tokens, and networking rules. It can also install Helm, a package manager for Kubernetes.
At the heart of the module is Kubetool, a Docker-based tool that generates the required configurations for bootstrapping the cluster. The tool takes several parameters as input including the container runtime, Container Network Interface (CNI), and whether to install the Kubernetes dashboard. It then outputs a set of YAML files that Puppet uses to configure and connect each node in the cluster.
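For illustration, a Kubetool run looks roughly like this. The environment variable names below are drawn from the module's documentation and may differ between module versions; the bracketed values are placeholders you would fill in for your environment:

```shell
# Generate cluster configuration into the current directory.
docker run --rm -v $(pwd):/mnt \
  -e OS=debian \
  -e VERSION=1.10.2 \
  -e CONTAINER_RUNTIME=docker \
  -e CNI_PROVIDER=weave \
  -e INSTALL_DASHBOARD=true \
  -e ETCD_INITIAL_CLUSTER=[etcd cluster spec] \
  -e KUBE_API_ADVERTISE_ADDRESS=[controller IP] \
  puppet/kubetool
```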
Once these files have been added to Hiera, a key-value store for storing configuration data, you can specify nodes in your Puppet manifest as either a Kubernetes controller or Kubernetes worker. Controllers host the Kubernetes Control Plane and etcd, while workers run applications. To define a node as a controller, add this to the node declaration in your manifest:
class { 'kubernetes':
  controller => true,
}
Or, add the following to make it a worker instead:
class { 'kubernetes':
  worker => true,
}
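Putting these together, a minimal site manifest might look like the following sketch. The node names are hypothetical, and the certificate, token, and networking parameters generated by Kubetool are assumed to be supplied through Hiera rather than inline:

```puppet
# site.pp -- hypothetical node names; cluster parameters generated by
# Kubetool are looked up from Hiera by the kubernetes class.
node 'controller-01.example.com' {
  class { 'kubernetes':
    controller => true,
  }
}

node 'worker-01.example.com' {
  class { 'kubernetes':
    worker => true,
  }
}
```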

Deploying the Cluster

To deploy the cluster, install the module as you would a normal Puppet module and apply your manifest. You can test your configuration using Kream, a virtual environment created specifically for testing the Puppet Kubernetes module. This lets you validate and test your configuration before deploying it to a live environment. You can also validate your configuration using the Puppet Development Kit:
pdk validate puppet --puppet-version='[your puppet version]'
You can find a complete walkthrough of the module on Puppet Forge.


Ansible

Kubespray is a Kubernetes community project that uses Ansible to deploy a production-ready cluster. In addition to supporting bare-metal and cloud deployments, Kubespray also offers a SaaS service for deploying to AWS or DigitalOcean. Like Puppet's Kubetool, Kubespray lets you choose your container runtime and CNI plugin, and whether to deploy a high availability (HA) cluster.
To download Kubespray, clone the Kubespray GitHub repository and install the required dependencies using pip:
$ git clone https://github.com/kubernetes-sigs/kubespray.git
$ cd kubespray/
$ sudo pip install -r requirements.txt

Creating Inventory Files

Kubespray requires an inventory of nodes along with each node's role in the Kubernetes cluster (master, worker, or etcd node). The Kubespray project includes an example inventory as well as an inventory generator, which builds an inventory file from a list of node IP addresses. For example, with three nodes, we can create a basic inventory by running:
$ cp -r inventory/sample inventory/mycluster
$ declare -a IPS=([node1 IP] [node2 IP] [node3 IP])
$ CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
These commands create a new inventory directory (called “mycluster”) and generate a new inventory file at inventory/mycluster/hosts.ini. This results in the following inventory:
node1 ansible_host=[node1 IP] ip=[node1 IP]
node2 ansible_host=[node2 IP] ip=[node2 IP]
node3 ansible_host=[node3 IP] ip=[node3 IP]
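Beyond these host entries, the generated file also assigns each host to Kubespray's role groups. The group layout looks roughly like this (group names may vary between Kubespray versions):

```ini
[kube-master]
node1
node2

[etcd]
node1
node2
node3

[kube-node]
node2
node3

[k8s-cluster:children]
kube-master
kube-node
```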

Deploying the Cluster

Using the newly generated inventory file, we can create the cluster by running the cluster.yml playbook included with Kubespray. This command assumes you are using SSH keys for each of your target hosts, and that ~/.ssh/private_key is the location of your private key:
$ ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b -v --private-key=~/.ssh/private_key
If you want to add a node after deploying a cluster, modify your inventory file and rerun the command using the scale.yml playbook instead of the cluster.yml playbook. To remove a worker node, use the remove-node.yml playbook.
Each master node has local access to the Kubernetes API via the kubectl command. If you configured Kubespray with kubectl_localhost set to true, Kubespray will automatically install kubectl on your Ansible server. It also generates a configuration file in inventory/mycluster/artifacts/admin.conf, which you can use from your workstation by copying to ~/.kube/config.
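For example, assuming the inventory directory from the earlier steps, copying the credentials into place and checking the cluster from a workstation looks like this:

```shell
# Copy the generated admin kubeconfig into kubectl's default location.
mkdir -p ~/.kube
cp inventory/mycluster/artifacts/admin.conf ~/.kube/config

# Confirm that kubectl can reach the new cluster.
kubectl get nodes
```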
To learn more about Kubespray, visit the Kubespray website or GitHub repository.


Conclusion

Of the tools covered in this article, Kubespray presents the easiest and most effective option for deploying and managing a Kubernetes cluster. It supports bare-metal and cloud deployments, provides options for customizing the cluster, scales up or down on demand, and installs administration tools automatically. And since it uses Ansible, it can easily integrate into an existing Ansible installation.
Want to learn more about configuration management? Read our previous article on comparing configuration management tools.

About Thu Nguyen

Thu Nguyen is a technical writer who cares deeply about human relationships.

