State-aware applications like databases or file repositories need access to the same file system no matter where the container they run in is scheduled. Kubernetes and OpenShift call this a persistent volume.
Previously we went through:
How to Deploy Ceph on AWS (part 1 of 3)
- Quick Introduction to Ceph and alternatives
- How to Deploy Ceph on AWS
- Other Ceph deployment strategies for Kubernetes and OpenShift
In this second part we will learn how to configure Kubernetes or OpenShift to use Ceph as a persistent volume.
Ceph Persistent Volume for Kubernetes or OpenShift (part 2 of 3)
And in the next piece, we will see:
How to Monitor Ceph: the top 5 metrics to watch (part 3 of 3)
- How to Monitor Ceph with built-in tools
- How to Monitor Ceph with Sysdig Monitor
- Top 5 metrics to monitor in your Ceph cluster
Ceph Persistent Volume for Kubernetes or OpenShift
We have our storage cluster ready, but how can we use it within our Kubernetes or OpenShift cluster for Docker container volumes?
We have two options: storing volumes as block storage images in Ceph, or mounting CephFS inside Kubernetes Pods. We will follow the first approach for its flexibility, performance and features like snapshots.
First, we will create a dedicated pool for our images. Make sure you read Ceph Cluster operations: Pools if you run this in production, to understand how to choose your number of placement groups. From any Ceph node we will run:
# ceph osd pool create test 128 128
pool 'test' created
Then we need to create a block device image inside our pool:
# rbd create myvol --size 1G --pool test
# rbd ls -l test
NAME   SIZE PARENT FMT PROT LOCK
myvol 1024M          2
Note: if your Kubernetes cluster nodes run Ubuntu, you will have to disable some features as a workaround for bug #1578484:
# rbd feature disable --pool test myvol exclusive-lock object-map fast-diff deep-flatten
Now let’s move to our Kubernetes cluster nodes and install the Ceph client packages on all of them:
$ sudo apt install ceph-fs-common ceph-common
or, if using Red Hat / Fedora / CentOS:
$ sudo dnf install ceph
Next, copy the keyring to each of the nodes. You can find it in your Ansible folder at fetch/{my-cluster-id}/etc/ceph/ceph.client.admin.keyring.
We are now ready to start deploying our Kubernetes or OpenShift entities. First, let’s prepare the secret hash from the keyring we have in the Ansible folder:
$ cat fetch/{my-cluster-id}/etc/ceph/ceph.client.admin.keyring | grep key | awk '{print $3}' | base64
QVFDS3pJaFlWdTBwTWhBQXJESmFWQXVOZTc5ZEZieTJ1bDBMSGc9PQo=
The secret entity looks like this:
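A minimal sketch of ceph-secret.yaml follows; it assumes the standard kubernetes.io/rbd secret type and uses the base64 key generated in the previous step:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFDS3pJaFlWdTBwTWhBQXJESmFWQXVOZTc5ZEZieTJ1bDBMSGc9PQo=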
And we will create it with kubectl:
# kubectl create -f ceph-secret.yaml
secret "ceph-secret" created
We will now create a persistent volume:
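Here is a sketch of what ceph-pv.yaml could look like; the monitor address is a placeholder you will need to replace with your own, while the pool, image and secret reference match what we created above:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  rbd:
    monitors:
      - 192.168.0.1:6789    # placeholder: use your Ceph monitor address(es)
    pool: test
    image: myvol
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Retain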
# kubectl create -f ceph-pv.yaml
persistentvolume "ceph-pv" created
And finally the persistent volume claim:
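The claim itself is simple; a sketch of ceph-pv-claim.yaml matching the capacity and access mode of the volume above:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi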
# kubectl create -f ceph-pv-claim.yaml
persistentvolumeclaim "ceph-claim" created
We can have a look at the persistent volumes and claims:
# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                REASON    AGE
ceph-pv   1Gi        RWX           Retain          Bound     default/ceph-claim             1m
# kubectl get pvc
NAME         STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
ceph-claim   Bound     ceph-pv   1Gi        RWX           55s
Let’s make some use of the persistent volume by creating a MySQL Pod that mounts it on the database path:
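A sketch of ceph-mysql-pvc-pod.yaml; the MySQL image tag and root password below are placeholders, while the claim name and mount path match what we have set up so far:

apiVersion: v1
kind: Pod
metadata:
  name: ceph-mysql
spec:
  containers:
    - name: mysql
      image: mysql:5.7                     # placeholder image tag
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: "changeme"                # example value only
      volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql        # MySQL data directory
  volumes:
    - name: mysql-persistent-storage
      persistentVolumeClaim:
        claimName: ceph-claim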
# kubectl create -f ceph-mysql-pvc-pod.yaml
pod "ceph-mysql" created
We can check inside the container and see how the Ceph block device is mounted:
# kubectl exec -ti ceph-mysql mount | grep rbd
/dev/rbd0 on /var/lib/mysql type ext4 (rw,relatime,stripe=1024,data=ordered)
Now this block device can be mounted wherever the Pod is scheduled across our Kubernetes or OpenShift cluster!
Still eager to learn more? We were at the last KubeCon EU and we loved the Kubernetes Storage 101 session, which you should definitely check out!
Moving into production
You have your Ceph cluster running: check. You can now schedule containers in your Kubernetes or OpenShift cluster using Ceph as the persistent storage backend: check. But before moving to production there is one step not to be forgotten: monitoring the health status and performance of Ceph. Let’s move on to part 3!