In this article, I will take you step by step through creating and using a Kubernetes persistent volume. Volumes are required mostly by stateful applications, which generate data that needs to be stored somewhere. In this tutorial we will focus on PV and PVC creation and see how a Pod can use a PV through a PVC to store its data.
What is a Persistent Volume (PV)
A Persistent Volume is a piece of storage in the cluster that has been provisioned by an administrator, or dynamically provisioned using Storage Classes.
What is a Persistent Volume Claim (PVC)
A Persistent Volume Claim is a request for storage by a user. To use the storage, a persistent volume claim needs to be created.
How to Create and Use a Kubernetes Persistent Volume
Step 1: Prerequisites
a) You should have a Kubernetes Cluster running with a master and at least one worker node on a Linux environment. Here we are going to use the below lab setup:
cyberithub - Master Node
worker1 - Worker Node
b) You should have sudo or root access to run privileged commands.
c) You should have at least Kubernetes version 1.18.0.
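Before moving on, you can quickly sanity-check these prerequisites from the master node. The commands below are just illustrative; your node names and versions will of course differ:
root@cyberithub:~# kubectl version --short
root@cyberithub:~# kubectl get nodes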
Step 2: Create a Persistent Volume
We are going to define our configuration in a file called pv.yaml. In this configuration, we are going to define the persistent volume size as 2Gi with the access mode set to ReadWriteOnce. But before that you need to set the kind of object to PersistentVolume; the name can then be set to anything, and here we are using pv-storage with the label type set to local. Under spec, you can define the storage capacity as shown below. Then the volume needs to be mounted on a host path; here I am using /mydata as my host path.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-storage
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mydata"
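One caveat with hostPath volumes is that the path must exist on the node where the pod eventually gets scheduled. A minimal precaution, assuming here that the pod will land on worker1, is to create the directory up front; alternatively, you can add type: DirectoryOrCreate under hostPath to have it created automatically.
root@cyberithub:~# ssh worker1 mkdir -p /mydata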
To create the volume using the above configuration, run the kubectl create -f pv.yaml command.
root@cyberithub:~# kubectl create -f pv.yaml
persistentvolume/pv-storage created
Then check the status of the volume by using the kubectl get pv command as shown below.
root@cyberithub:~# kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-storage   2Gi        RWO            Retain           Available                                   7s
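If you want more detail than this one-line summary, you can also describe the volume, which prints its labels, capacity, access modes, reclaim policy and the hostPath source:
root@cyberithub:~# kubectl describe pv pv-storage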
Step 3: Create a Persistent Volume Claim
The next step is to create a persistent volume claim to request the storage we created earlier. For this we need to create another YAML configuration file called pvc.yaml where we need to define the kind of object as PersistentVolumeClaim. Then the name can be set to any useful name; here we are using pvc-storage.
Then the most important part is to set the requested storage value. Here you need to make sure the request is always valid. For example, earlier we created a volume of 2Gi which is currently free and available. So to utilize that volume, we need to request storage less than or equal to that volume's size. For the moment we are requesting 1Gi of storage.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
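As an aside, if you do not want to leave the choice of PV entirely to the cluster, the PVC spec also accepts a label selector. A small sketch, matching the type: local label we set on our PV, would be to add the following under spec in pvc.yaml:
  selector:
    matchLabels:
      type: local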
To create the persistent volume claim from the above configuration, you need to use the kubectl create -f pvc.yaml command as shown below.
root@cyberithub:~# kubectl create -f pvc.yaml
persistentvolumeclaim/pvc-storage created
You can list the created PVC by using the kubectl get pvc command. Here you can observe that it is bound to the PV that we created. This is because there is nothing else it can bind to, but in real-life scenarios that is not the case: it is up to the cluster to decide which PV a PVC should be bound to.
root@cyberithub:~# kubectl get pvc
NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-storage   Bound    pv-storage   2Gi        RWO                           5m
Check the persistent volume status by using the kubectl get pv command as shown below.
root@cyberithub:~# kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
pv-storage   2Gi        RWO            Retain           Bound    default/pvc-storage
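Notice the RECLAIM POLICY column above. Retain means that when the claim is deleted, the PV and its data are kept and have to be cleaned up manually. If you would rather have the volume removed automatically on release, you can patch the policy as sketched below, though keep in mind that actual deletion support depends on the volume plugin, so for hostPath treat this as illustrative:
root@cyberithub:~# kubectl patch pv pv-storage -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'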
Step 4: Configure Pod
The next step is to create a pod configuration file, pod-test.yaml, where the pod will use the PVC we created as a volume. You can notice from below that it points to pvc-storage under the persistentVolumeClaim parameter. This volume will be mounted on the path /usr/share/server/html, which is defined as mountPath under the volumeMounts parameter.
kind: Pod
apiVersion: v1
metadata:
  name: pod-test
spec:
  volumes:
    - name: pod-storage
      persistentVolumeClaim:
        claimName: pvc-storage
  containers:
    - name: pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/server/html"
          name: pod-storage
Create the pod by applying its configuration to the cluster using the below command.
root@cyberithub:~# kubectl create -f pod-test.yaml
pod/pod-test created
Then check the status of the pod using the kubectl get pods command as shown below.
root@cyberithub:~# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
pod-test   1/1     Running   0          43s
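Before testing, you can also confirm that the claim was actually attached by describing the pod; the Volumes section of the output should reference pvc-storage:
root@cyberithub:~# kubectl describe pod pod-test | grep -A 4 Volumes: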
We have successfully created a pod which uses the volume we created, mounted on a specific path. It is now time to test that it works. To do so, we will create a simple text file in the path /usr/share/server/html inside the pod, either by logging in to the pod or via a CLI command.
root@cyberithub:~# kubectl exec pod-test -- touch /usr/share/server/html/test-file.txt
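You can verify the file directly from the master without opening a shell, for example:
root@cyberithub:~# kubectl exec pod-test -- ls /usr/share/server/html
test-file.txt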
You can also log in to the pod and verify that the file got created.
root@cyberithub:~# kubectl exec -it pod-test -- bash
root@pod-test:/# cd /usr/share/server/html/
root@pod-test:/usr/share/server/html# ls
test-file.txt
As you can see, the file was successfully created. Please note that a file created inside the pod in the directory where the PVC is mounted should also be visible in the volume's backend, which in our case is /mydata on the node. Recall that we created a cluster of 2 nodes, as shown below.
root@cyberithub:~# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
cyberithub   Ready    master   8m17s   v1.18.0
worker1      Ready    <none>   7m45s   v1.18.0
We will SSH to the worker node and verify that the file was created there.
root@cyberithub:~# ssh worker1 ls /mydata
Warning: Permanently added 'worker1,100.73.55.214' (ECDSA) to the list of known hosts.
test-file.txt
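Once you are done experimenting, you can tear everything down in the reverse order of creation. A minimal cleanup sketch is shown below; note that with the Retain reclaim policy the released PV and the data under /mydata have to be removed by hand:
root@cyberithub:~# kubectl delete pod pod-test
root@cyberithub:~# kubectl delete pvc pvc-storage
root@cyberithub:~# kubectl delete pv pv-storage
root@cyberithub:~# ssh worker1 rm -rf /mydata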