Ceph Persistent Storage for Kubernetes with Cephfs


In our last tutorial, we discussed how to provision Persistent Storage for Kubernetes with Ceph RBD. As promised, this article focuses on configuring Kubernetes to use an external Ceph File System (CephFS) to store persistent data for applications running in a Kubernetes container environment.

If you’re new to Ceph but have a running Ceph cluster: the Ceph File System (CephFS) is a POSIX-compliant file system built on top of Ceph’s distributed object store, RADOS. CephFS is designed to provide a highly available, multi-use, and performant file store for a variety of applications.

This tutorial won’t dive deep into Kubernetes and Ceph concepts. It serves as an easy step-by-step guide to configuring both Ceph and Kubernetes so that persistent volumes are provisioned automatically on the Ceph backend with CephFS. Follow the steps below to get started.


Before you begin this exercise, you should have a working external Ceph cluster. Most Kubernetes deployments using Ceph will involve Rook; this guide, however, assumes you have a Ceph storage cluster deployed with Ceph Ansible, ceph-deploy, or manually.

We’ll be updating this article with links to guides on installing Ceph on other Linux distributions.

Step 1: Deploy Cephfs Provisioner on Kubernetes

Log in to your Kubernetes cluster and create a manifest file for deploying the CephFS provisioner, which is an out-of-tree dynamic provisioner for Kubernetes 1.5+.

vim cephfs-provisioner.yml

Add the following contents to the file. Notice that our deployment uses RBAC, so we’ll create a cluster role and bindings before creating the service account and deploying the CephFS provisioner.

---
kind: Namespace
apiVersion: v1
metadata:
  name: cephfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
  namespace: cephfs
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: cephfs
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: cephfs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: cephfs
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
        - name: cephfs-provisioner
          image: "quay.io/external_storage/cephfs-provisioner:latest"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/cephfs
            - name: PROVISIONER_SECRET_NAMESPACE
              value: cephfs
          command:
            - "/usr/local/bin/cephfs-provisioner"
          args:
            - "-id=cephfs-provisioner-1"
      serviceAccount: cephfs-provisioner

Apply the manifest:

$ kubectl apply -f cephfs-provisioner.yml
namespace/cephfs created
clusterrole.rbac.authorization.k8s.io/cephfs-provisioner created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-provisioner created
role.rbac.authorization.k8s.io/cephfs-provisioner created
rolebinding.rbac.authorization.k8s.io/cephfs-provisioner created
serviceaccount/cephfs-provisioner created
deployment.apps/cephfs-provisioner created

Confirm that the CephFS volume provisioner pod is running.

$ kubectl get pods -l app=cephfs-provisioner -n cephfs
NAME                                  READY   STATUS    RESTARTS   AGE
cephfs-provisioner-7b77478cb8-7nnxs   1/1     Running   0          84s
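
If the pod is stuck or crashing, its logs are the quickest place to look. This optional check simply reuses the same label selector as the deployment above:

$ kubectl logs -l app=cephfs-provisioner -n cephfs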

Step 2: Get Ceph Admin Key and create Secret on Kubernetes

Log in to your Ceph cluster and get the admin key for use by the CephFS provisioner.

$ sudo ceph auth get-key client.admin

Save the value of the admin user key printed by the command above. We’ll add the key as a Secret in Kubernetes.

$ kubectl create secret generic ceph-admin-secret \
    --from-literal=key='<key>' \
    --namespace=cephfs

Where <key> is your Ceph admin key. You can confirm creation with the command below.

$ kubectl get secrets ceph-admin-secret -n cephfs
NAME                TYPE     DATA   AGE
ceph-admin-secret   Opaque   1      6s
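
If the machine where you run kubectl can also reach a Ceph admin host over SSH, you can avoid copying the key by hand and pass it in with command substitution. This is only a convenience sketch; ceph-admin-node is a hypothetical hostname, so substitute your own:

# ceph-admin-node is a placeholder for a host that has the Ceph admin keyring
kubectl create secret generic ceph-admin-secret \
    --from-literal=key="$(ssh ceph-admin-node 'sudo ceph auth get-key client.admin')" \
    --namespace=cephfs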

Step 3: Create Ceph pools and file system for Kubernetes

A Ceph file system requires at least two RADOS pools, one for each of:

  • Data
  • Metadata

Generally, the metadata pool will hold at most a few gigabytes of data, so a smaller PG count is usually recommended for it; 64 or 128 placement groups are commonly used in practice for large clusters.

Let’s create Ceph OSD pools for Kubernetes:

sudo ceph osd pool create cephfs_data 128 128

sudo ceph osd pool create cephfs_metadata 64 64

Create a Ceph file system on the pools:

sudo ceph fs new cephfs cephfs_metadata cephfs_data

Confirm creation of Ceph File System:

$ sudo ceph fs ls

name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

You can also confirm the creation of the file system on the Ceph dashboard UI.
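
From the command line, you can additionally check that a metadata server (MDS) is serving the new file system; the daemon names in the output will match your own cluster:

$ sudo ceph mds stat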

Step 4: Create Cephfs Storage Class on Kubernetes

A StorageClass provides a way to describe the “classes” of storage you offer in Kubernetes. We’ll create a StorageClass called cephfs.

vim cephfs-sc.yml

The contents to be added to the file:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
  namespace: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 10.10.10.11:6789,10.10.10.12:6789,10.10.10.13:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: cephfs
  claimRoot: /pvc-volumes

Where:

  • cephfs is the name of the StorageClass to be created.
  • 10.10.10.11, 10.10.10.12 & 10.10.10.13 are the IP addresses of the Ceph monitors. You can list them with the command:
$ sudo ceph -s
  cluster:
    id:     7795990b-7c8c-43f4-b648-d284ef2a0aba
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cephmon01,cephmon02,cephmon03 (age 32h)
    mgr: cephmon01(active, since 30h), standbys: cephmon02
    mds: cephfs:1 {0=cephmon01=up:active} 1 up:standby
    osd: 9 osds: 9 up (since 32h), 9 in (since 32h)
    rgw: 3 daemons active (cephmon01, cephmon02, cephmon03)

  data:
    pools:   8 pools, 618 pgs
    objects: 250 objects, 76 KiB
    usage:   9.6 GiB used, 2.6 TiB / 2.6 TiB avail
    pgs:     618 active+clean

After modifying the file with the correct values of your Ceph monitors, use kubectl to create the StorageClass.

$ kubectl apply -f cephfs-sc.yml 

storageclass.storage.k8s.io/cephfs created

List available StorageClasses:

$ kubectl get sc
NAME       PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-rbd   ceph.com/rbd      Delete          Immediate           false                  25h
cephfs     ceph.com/cephfs   Delete          Immediate           false                  2m23s

Step 5: Create a test Claim and Pod on Kubernetes

To confirm everything is working, let’s create a test persistent volume claim.

$ vim cephfs-claim.yml

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-claim1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cephfs
  resources:
    requests:
      storage: 1Gi

Apply the manifest file:

$ kubectl apply -f cephfs-claim.yml
persistentvolumeclaim/cephfs-claim1 created

If binding was successful, the claim should show a Bound status.

$ kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-rbd-claim1   Bound    pvc-c6f4399d-43cf-4fc1-ba14-cc22f5c85304   1Gi        RWO            ceph-rbd       25h
cephfs-claim1     Bound    pvc-1bfa81b6-2c0b-47fa-9656-92dc52f69c52   1Gi        RWO            cephfs         87s
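
Behind the scenes, the provisioner created a PersistentVolume and bound it to the claim. You can optionally list the volumes and match the names against the VOLUME column above:

$ kubectl get pv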

We can then deploy a test pod that uses the claim we created. First, create a manifest file for the pod:

vim cephfs-test-pod.yaml

Add contents below:

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: gcr.io/google_containers/busybox:latest
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: cephfs-claim1

Create the pod:

$ kubectl apply -f cephfs-test-pod.yaml

pod/test-pod created

Check the pod status. Since the test pod only touches a file and exits, it will show Completed once it has run successfully:

$ kubectl get pods test-pod
NAME       READY   STATUS      RESTARTS   AGE
test-pod   0/1     Completed   0          2m28s
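
Once you’re done testing, you can clean up the test resources. Because the cephfs StorageClass uses the Delete reclaim policy (see the kubectl get sc output above), deleting the claim also removes the dynamically provisioned volume:

# remove the test pod first, then the claim (this also deletes the provisioned PV)
kubectl delete pod test-pod
kubectl delete pvc cephfs-claim1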

Enjoy using CephFS for persistent volume provisioning on Kubernetes.

