
Using Chef, Puppet, and Ansible to Manage Kubernetes


In a previous post, we explained the concept of configuration management and presented three of the most popular tools: Chef, Puppet, and Ansible. We also briefly explored the impact that containerization is having on configuration management, and how the two can be used in combination. This article takes a more in-depth look at this relationship by presenting different techniques for using Chef, Puppet, and Ansible to deploy and manage a Kubernetes cluster.

How Can Configuration Management Tools Support Kubernetes?

As we mentioned in our previous post, configuration management tools and container orchestration tools are not mutually exclusive. Both solve the problem of managing environments in a consistent, automated way, but they approach it differently.

Configuration management tools operate on systems to bring them to a desired state. DevOps teams define the final state of their infrastructure and applications, and tools like Chef, Puppet, and Ansible enforce this state. Meanwhile, container orchestration tools like Docker Swarm and Kubernetes abstract away the infrastructure entirely, letting DevOps teams deploy applications without worrying about infrastructure state or configuration. Both approaches can be used to deploy applications, but configuration management tools emphasize infrastructure management.
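For example, in Puppet you declare what should be true and the tool converges the system to match. This minimal, illustrative snippet (not tied to any of the tool sections below) ensures the nginx package is installed and its service is running:

package { 'nginx':
  ensure => installed,
}

service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],
}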

With this in mind, configuration management tools and Kubernetes can, and often do, work together in the same environment. Deploying and maintaining a Kubernetes cluster is challenging, whether the cluster runs on premises or in the cloud. Using a configuration management solution lets DevOps teams hand off this responsibility to an automated process and focus instead on deploying applications.

Chef

Chef integrates with Kubernetes mainly through Habitat, a framework for deploying and managing applications and their dependencies. While Chef automates infrastructure management, Habitat automates application management. Kubernetes the Hab Way is a project that uses Habitat to deploy Kubernetes following the steps laid out in Kubernetes the Hard Way.

With Kubernetes the Hab Way, each Kubernetes component is bundled as a Habitat application. Deploying a complete cluster means deploying each component individually. The project’s repository contains a complete walkthrough of this process, as well as several scripts to help with common tasks such as updating the cluster and installing the Kubernetes command line tool (kubectl). The process involves a lot of manual setup initially, but results in a complete Kubernetes cluster.
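As a rough sketch of what this looks like on a node (the exact package names live in the project's repository and may differ), each component is loaded as a Habitat service under the Habitat Supervisor:

# Start the Habitat Supervisor (assumes Habitat is already installed)
$ sudo hab sup run &
# Load a Kubernetes component as a Habitat service; the package name here is illustrative
$ sudo hab svc load core/kubernetes-apiserver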

To learn more about Kubernetes the Hab Way, visit the project’s GitHub repository or introductory blog post.

Puppet

Puppet provides an official module for creating a Kubernetes cluster. In addition to installing the core components and administration tools, the module also generates the necessary SSL certificates, security tokens, and networking rules. It can also install Helm, a package manager for Kubernetes.

At the heart of the module is Kubetool, a Docker-based tool that generates the required configurations for bootstrapping the cluster. The tool takes several parameters as input including the container runtime, Container Network Interface (CNI), and whether to install the Kubernetes dashboard. It then outputs a set of YAML files that Puppet uses to configure and connect each node in the cluster.
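For illustration, Kubetool is invoked as a Docker container, with cluster options passed as environment variables. The variable names below reflect the module's documented options, but check the Puppet Forge page for the exact set supported by your version:

$ docker run --rm -v $(pwd):/mnt \
    -e OS=debian \
    -e VERSION=1.10.2 \
    -e CONTAINER_RUNTIME=docker \
    -e CNI_PROVIDER=weave \
    -e INSTALL_DASHBOARD=true \
    puppet/kubetool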

Once these files have been added to Hiera, Puppet's key-value store for configuration data, you can specify nodes in your Puppet manifest as either a Kubernetes controller or a Kubernetes worker. Controllers host the Kubernetes control plane and etcd, while workers run applications. To define a node as a controller, add this to the node declaration in your manifest:

class { 'kubernetes':
  controller => true,
}

Or, add the following to make it a worker instead:

class { 'kubernetes':
  worker => true,
}
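In a complete manifest, each class declaration is typically scoped to a node block; the hostnames below are placeholders:

node 'k8s-controller-01.example.com' {
  class { 'kubernetes':
    controller => true,
  }
}

node 'k8s-worker-01.example.com' {
  class { 'kubernetes':
    worker => true,
  }
}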

Deploying the Cluster

To deploy the cluster, install the module as you would a normal Puppet module and apply your manifest. You can test your configuration using Kream, a virtual environment created specifically for testing the Puppet Kubernetes module. This lets you validate and test your configuration before deploying it to a live environment. You can also validate your configuration using the Puppet Development Kit:

pdk validate puppet --puppet-version='[your puppet version]'
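The installation step itself follows the usual Forge workflow; as a minimal sketch, where site.pp stands in for your own manifest file:

$ puppet module install puppetlabs-kubernetes
$ puppet apply site.pp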

You can find a complete walkthrough of the module on Puppet Forge.

Ansible

Kubespray is a Kubernetes community project that uses Ansible to deploy a production-ready cluster. In addition to supporting bare metal and cloud deployments, Kubespray also offers a SaaS service for deploying to AWS or DigitalOcean. Like Puppet's Kubetool, Kubespray lets you choose your container runtime and CNI plugin, and whether to deploy a high availability (HA) cluster.

To download Kubespray, clone the Kubespray GitHub repository and install the required dependencies using pip:

$ git clone https://github.com/kubernetes-sigs/kubespray.git
$ cd kubespray/
$ sudo pip install -r requirements.txt

Creating Inventory Files

Kubespray requires an inventory of nodes along with their role in the Kubernetes cluster (master, worker, or etcd node). The Kubespray project includes an example inventory as well as an inventory generator, which generates an inventory file from a list of nodes. For example, if we have three nodes with IP addresses between 10.0.0.2 and 10.0.0.4, we can create a basic inventory by running:

$ cp -r inventory/sample inventory/mycluster
$ declare -a IPS=(10.0.0.2 10.0.0.3 10.0.0.4)
$ CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}

These commands create a new inventory directory (called “mycluster”) and generate a new inventory file at inventory/mycluster/hosts.ini. This results in the following inventory:

[all]
node1 ansible_host=10.0.0.2 ip=10.0.0.2
node2 ansible_host=10.0.0.4 ip=10.0.0.4
node3 ansible_host=10.0.0.3 ip=10.0.0.3

[kube-master]
node1
node2

[etcd]
node1
node2
node3

[kube-node]
node1
node2
node3

[k8s-cluster:children]
kube-master
kube-node

[calico-rr]

Deploying the Cluster

Using the newly generated inventory file, we can create the cluster by running the cluster.yml playbook included with Kubespray. This command assumes you are using SSH keys for each of your target hosts, and that ~/.ssh/private_key is the location of your private key:

$ ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b -v --private-key=~/.ssh/private_key

If you want to add a node after deploying a cluster, modify your inventory file and rerun the command using the scale.yml playbook instead of the cluster.yml playbook. To remove a worker node, use the remove-node.yml playbook.
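For example, adding a node after the initial deployment reuses the same inventory and flags, swapping in the scale.yml playbook:

$ ansible-playbook -i inventory/mycluster/hosts.ini scale.yml -b -v --private-key=~/.ssh/private_key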

Each master node has local access to the Kubernetes API via the kubectl command. If you set kubectl_localhost to true when configuring Kubespray, Kubespray will also install kubectl on your Ansible server. It also generates a configuration file at inventory/mycluster/artifacts/admin.conf, which you can use from your workstation by copying it to ~/.kube/config.
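For example, assuming kubectl is installed on your workstation:

$ cp inventory/mycluster/artifacts/admin.conf ~/.kube/config
$ kubectl get nodes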

To learn more about Kubespray, visit the Kubespray website or GitHub repository.

Conclusion

Of the three tools covered here, Kubespray presents the easiest and most effective option for deploying and managing a Kubernetes cluster. It supports bare metal and cloud deployments, provides options for customizing the cluster, scales up or down on demand, and installs administration tools automatically. Since it uses Ansible, it can easily integrate into an existing Ansible installation.

Want to learn more about configuration management? Read our previous article on comparing configuration management tools.
