Kubernetes Security Guide, Chapter 3. Securing Kubernetes components: kubelet, etcd and Docker registry

April 24, 2018

In this chapter of the Kubernetes security guide, we are going to cover best practices related to sensitive Kubernetes components and common external resources like the Docker registry. We will learn how to secure the Kubelet, the etcd cluster and pull from trusted Docker repositories.

Kubelet security

The kubelet is a fundamental piece of any Kubernetes deployment, implementing the interface between the nodes and the cluster logic. It is often described as the "Kubernetes agent" software.

The main task of a kubelet is to manage the local container engine (e.g. Docker), making sure that the pods described in the API are defined, created, run and kept healthy, and that they are killed and destroyed when required.

Thus, kubelets are an important Kubernetes security component as they need to read, create and modify multiple cluster resources.

There are two different communication interfaces to consider:

  • Access to the Kubelet REST API from users or software (typically just the Kubernetes API entity)
  • Kubelet binary accessing the local Kubernetes node and Docker engine

Kubelet API Security

These two interfaces are secured by default using kubelet configuration parameters, RBAC and the NodeRestriction admission controller.

We are going to describe them and provide a few examples to verify they are working as expected.

Kubelet security - access to the kubelet API

The kubelet security configuration parameters are usually passed as arguments to the kubelet binary. For newer Kubernetes versions (1.10+) you can also use a kubelet configuration file. Either way, the parameter syntax remains the same.

Let's use this example configuration as reference:

/home/kubernetes/bin/kubelet --v=2 \
  --kube-reserved=cpu=70m,memory=1736Mi \
  --allow-privileged=true \
  --cgroup-root=/ \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --experimental-mounter-path=/home/kubernetes/containerized_mounter/mounter \
  --experimental-check-node-capabilities-before-mount=true \
  --cert-dir=/var/lib/kubelet/pki/ \
  --enable-debugging-handlers=true \
  --bootstrap-kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --anonymous-auth=false \
  --authorization-mode=Webhook \
  --client-ca-file=/etc/srv/kubernetes/pki/ca-certificates.crt \
  --cni-bin-dir=/home/kubernetes/bin \
  --network-plugin=cni \
  --non-masquerade-cidr=0.0.0.0/0 \
  --feature-gates=ExperimentalCriticalPodAnnotation=true

Some Kubernetes security settings that you need to verify when configuring kubelet parameters are:

  • Make sure that you have --anonymous-auth set to false to disable anonymous access (the kubelet will then send 401 Unauthorized responses to unauthenticated requests).
  • Your kubelet should have a --client-ca-file flag, providing a CA bundle to verify client certificates with.
  • Make sure that --authorization-mode is not set to AlwaysAllow; the more secure Webhook mode delegates authorization decisions to the Kubernetes API server.
  • Optionally, you can also set --read-only-port to 0 to avoid unauthorized connections to the read-only endpoint.
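
The same security settings can also be expressed in the kubelet configuration file mentioned above (1.10+). Below is a minimal sketch; the file path and the CA bundle location are illustrative and should match your deployment:

# Passed to the kubelet with --config=/var/lib/kubelet/config.yaml (illustrative path)
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false                    # equivalent to --anonymous-auth=false
  webhook:
    enabled: true
  x509:
    clientCAFile: /etc/srv/kubernetes/pki/ca-certificates.crt   # --client-ca-file
authorization:
  mode: Webhook                       # --authorization-mode=Webhook
readOnlyPort: 0                       # disable the read-only endpoint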

We will also provide an example of how to secure run-time access to the kubelet or any other internal Kubernetes component in the next chapters of this guide.

Kubelet security - kubelet access to Kubernetes API

As we mentioned in the first chapter of this guide, for RBAC-enabled versions of Kubernetes (stable in 1.8+), the level of access granted to a kubelet is determined by the NodeRestriction Admission Controller.

Your kubelets are bound to the system:node Kubernetes clusterrole.
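
You can inspect the permissions attached to that clusterrole directly with kubectl:

$ kubectl describe clusterrole system:node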

If NodeRestriction is enabled in your API, your kubelets will only be allowed to modify their own Node API object, and only modify Pod API objects that are bound to their node. It's just a static restriction for now.

You can check whether this admission controller is enabled on the Kubernetes nodes running the apiserver binary:

$ ps aux | grep apiserver | grep admission-control
--admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
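
Note that on newer Kubernetes versions (1.10+) the admission plugins are configured with the --enable-admission-plugins flag instead, so you may need to grep for that parameter:

$ ps aux | grep apiserver | grep enable-admission-plugins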

RBAC example, accessing the kubelet API with curl

Typically, just the Kubernetes API server will need to use the kubelet REST API. As we mentioned before, this interface needs to be protected, as it can be used to run arbitrary pods and execute commands on the hosting node.

You can try to communicate directly with the kubelet API from the node shell:

# curl  -k https://localhost:10250/pods
Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy)

Kubelet uses RBAC for authorization and it's telling you that the default anonymous system account is not allowed to connect.

You need to present the API server's kubelet client credentials (certificate and key) to contact the secure port:

# curl --cacert /etc/kubernetes/pki/ca.crt --key /etc/kubernetes/pki/apiserver-kubelet-client.key --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt -k https://localhost:10250/pods | jq .

{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {},
  "items": [
    {
      "metadata": {
        "name": "kube-controller-manager-kubenode",
        "namespace": "kube-system",
...

Your port numbers may vary depending on your specific deployment method and initial configuration.

Securing Kubernetes etcd

etcd is a distributed key-value database that persists the Kubernetes cluster state. The etcd configuration and upgrading guide stresses the security relevance of this component:

"Access to etcd is equivalent to root permission in the cluster so ideally, only the API server should have access to it. Considering the sensitivity of the data, it is recommended to grant permission to only those nodes that require access to etcd clusters."

You can enforce these restrictions in three different (complementary) ways:

  • Regular Linux firewalling (iptables/netfilter, etc).
  • Run-time access protection.
  • PKI-based authentication + parameters to use the configured certs.

We are not going to cover regular Linux firewalling, as there is plenty of documentation on the matter and it's out of scope for this Kubernetes guide.

Run-time access protection

A quick example of run-time access protection could be making sure that the etcd binary only reads and writes from a set of configured directories or network sockets; any run-time access that is not explicitly whitelisted will raise an alarm.

Using Sysdig Falco, it will look similar to this:

- macro: etcd_write_allowed_directories
  condition: evt.arg[1] startswith /var/lib/etcd

- rule: Write to non write allowed dir (etcd)
  desc: attempt to write to directories that should be immutable
  condition: open_write and not etcd_write_allowed_directories
  output: "Writing to non write allowed dir (user=%user.name command=%proc.cmdline file=%fd.name)"
  priority: ERROR

See Run-time security behavior monitoring: Kubernetes security policies and audit with Sysdig Falco open-source and How to harden internal kube-system services to continue reading about Kubernetes run-time security.

PKI-based authentication for etcd

Ideally, you should create two sets of certificate and key pairs to be used exclusively for etcd. One set will verify member-to-member (peer) connections and the other one Kubernetes API to etcd connections.

Conveniently, the etcd project provides these scripts to help you generate the certificates.

Once you have all the security artifacts (certificates, keys and authorities), you can secure etcd communications using the following configuration flags:

etcd peer-to-peer TLS

This will configure authentication and encryption between etcd nodes. To configure etcd with secure peer to peer communication, use the flags:

  • --peer-key-file=<peer.key>
  • --peer-cert-file=<peer.cert>
  • --peer-client-cert-auth
  • --peer-trusted-ca-file=<etcd-ca.cert>
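
As a minimal sketch, an etcd member started with secure peer communication could look like the following; the member name, IP address and certificate paths are illustrative:

etcd --name infra0 \
  --peer-client-cert-auth \
  --peer-trusted-ca-file=/etc/etcd/pki/etcd-ca.crt \
  --peer-cert-file=/etc/etcd/pki/peer.crt \
  --peer-key-file=/etc/etcd/pki/peer.key \
  --listen-peer-urls=https://10.0.0.10:2380 \
  --initial-advertise-peer-urls=https://10.0.0.10:2380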

Kubernetes API to etcd cluster TLS

To allow the Kubernetes API server to communicate with etcd, you will need:

  • etcd server parameters:
    • --cert-file=<path>
    • --key-file=<path>
    • --client-cert-auth
    • --trusted-ca-file=<path> (can be the same CA you used for peer-to-peer)
  • Kubernetes API server parameters:
    • --etcd-certfile=k8sclient.cert
    • --etcd-keyfile=k8sclient.key

It may seem like a lot of parameters at first sight, but it's just a regular PKI design.
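
Putting both sides together, a minimal sketch could look like this; the paths and addresses are illustrative, and the additional --etcd-cafile flag lets the API server verify the etcd server certificate:

# etcd server side (client-facing TLS)
etcd --client-cert-auth \
  --trusted-ca-file=/etc/etcd/pki/etcd-ca.crt \
  --cert-file=/etc/etcd/pki/server.crt \
  --key-file=/etc/etcd/pki/server.key \
  --listen-client-urls=https://10.0.0.10:2379 \
  --advertise-client-urls=https://10.0.0.10:2379

# Kubernetes API server side
kube-apiserver --etcd-servers=https://10.0.0.10:2379 \
  --etcd-cafile=/etc/kubernetes/pki/etcd-ca.crt \
  --etcd-certfile=/etc/kubernetes/pki/k8sclient.cert \
  --etcd-keyfile=/etc/kubernetes/pki/k8sclient.key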

Using a trusted Docker registry

If you don't specify otherwise, Kubernetes will just pull Docker images from the public registry Docker Hub. This is fine for testing or learning environments, but not convenient for production, as you probably want to keep images and their content private within your organization.

By allowing your users to pull images from a public registry, you are basically granting execution access inside your Kubernetes cluster to any random software found on the Internet. That said, most of the popular Docker image publishers curate and secure their software, but you don't have any guarantee that your developers will pull from trusted authors only.

To solve this problem, you need to provide a trusted repository using cloud services (a Docker Hub subscription, Quay.io; Google/AWS/Azure also provide their own services) or by rolling your own locally (Docker Registry, Portus or Harbor, just to mention a few options).

You will pre-validate and update every image in your registry. Apart from any QA and testing pipeline you regularly apply to your software, this usually means scanning your Docker images for known vulnerabilities and bad security practices.

Assuming you already have a pre-populated trusted repository, you need to tell Kubernetes how to pull from it and ideally, forbid any other unregistered images.

Configure private Docker registry in Kubernetes

Kubernetes provides a convenient way to configure a private Docker registry and store access credentials, including server URL, as a secret:

kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>

This data will be base64 encoded and included inline as a field of the new secret:

{
    "apiVersion": "v1",
    "data": {
        ".dockercfg": "eyJyZWdpc3RyeS5sb2NhbCI6eyJ1c2VybmFtZSI6ImpvaG5kb3ciLCJwYXNzd29yZCI6InNlY3JldHBhc3N3b3JkIiwiZW1haWwiOiJqb2huQGRvZSIsImF1dGgiOiJhbTlvYm1SdmR6cHpaV055WlhSd1lYTnpkMjl5WkE9PSJ9fQ=="
    },
    "kind": "Secret",
    "metadata": {
        "creationTimestamp": "2018-04-08T19:13:52Z",
        "name": "regcred",
        "namespace": "default",
        "resourceVersion": "1752908",
        "selfLink": "/api/v1/namespaces/default/secrets/regcred",
        "uid": "f9d91963-3b60-11e8-96b4-42010a800095"
    },
    "type": "kubernetes.io/dockercfg"
}
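
You can verify what was stored by decoding that field back; a quick check assuming the regcred secret created above:

$ kubectl get secret regcred --output="jsonpath={.data.\.dockercfg}" | base64 --decode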

Then, you just need to reference this secret using the imagePullSecrets field in the pod definition (the pod name below is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred

You can also associate a serviceAccount with imagePullSecrets; the deployments/pods using that serviceAccount will have access to the secret containing the registry credentials.
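
For example, you could attach the secret to the default serviceAccount of a namespace, so that every pod using that serviceAccount can pull from your registry; a sketch reusing the regcred secret created above:

kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'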

Kubernetes trusted image collections: banning non-trusted registries

Once you have created your trusted image repository and Kubernetes pod deployments are pulling from it, the next security measure is to forbid pulling from any non-trusted source.

There are several complementary ways to achieve this. You can, for example, use ValidatingAdmissionWebhooks. This way, the Kubernetes control plane will delegate image validation to an external entity, as sketched below.
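
A minimal sketch of such a webhook registration follows; the webhook name, namespace and service are hypothetical, and the referenced service is what would actually implement the registry validation logic:

apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: image-registry-check               # hypothetical name
webhooks:
  - name: image-registry-check.example.com
    failurePolicy: Fail                     # reject pod creation if the webhook is unreachable
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        namespace: image-policy            # hypothetical namespace and service
        name: image-policy-webhook
      caBundle: <base64-encoded-CA-bundle>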

You have an example implementation here, using Grafeas to only allow container images signed by a specific key, configurable via a configmap.

Using Sysdig Secure, you can also create an image whitelist based on image sha256 hash codes. Any non-whitelisted image will fire an alarm and container execution will be immediately stopped.

(Screenshot: Docker image whitelisting in Sysdig Secure)

Want to dig deeper into Kubernetes and Docker security? Next chapters will offer plenty of practical examples and use case scenarios covering run-time threat detection. Ping us at @sysdig or on our open source Sysdig Slack group to share anything you feel should be included in a comprehensive Kubernetes security guide.




Eager to learn more? Check out our online session: Building an Open Source Container Security Stack

In this session, Sysdig and Anchore present how, using Falco and Anchore Engine, you can build a complete open source container security stack for Docker and Kubernetes.

This online session will live demo:

  • Using Falco, NATS and Kubeless to build a Kubernetes response engine and implement real-time attack remediation with security playbooks using FaaS.
  • How Anchore Engine can detect software vulnerabilities in your images, and how it can be integrated with Jenkins, Kubernetes and Falco.

