Persistent Volumes

This guide describes the volumes required for QMigrator on Kubernetes.

QMigrator requires two types of storage:
1. Shared Storage: Shared across applications (ReadWriteMany).
2. Block Storage: Dedicated storage for the database and cache (ReadWriteOnce).

Info

  • Each shared-storage manifest is tailored to its cloud provider's shared file system (e.g., EFS for AWS, Azure Files for Azure, a GCS bucket via GCS Fuse for GCP).
  • Review and customize the parameters as needed for your cluster setup.
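
For orientation, the sketch below shows how a workload consumes the shared volume once the claim from this guide exists. It is a minimal illustration, not a QMigrator manifest: the deployment name, image, and mount path are placeholders; only the claim name qmig-shared comes from the manifests later in this guide.

example-app-deployment.yaml (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                        # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: nginx:alpine              # placeholder image
          volumeMounts:
            - name: shared
              mountPath: /shared           # placeholder mount path
      volumes:
        - name: shared
          persistentVolumeClaim:
            claimName: qmig-shared         # PVC created later in this guide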

Create StorageClass

Choose your cloud provider and create the StorageClass resources first:

Azure

azure-file-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: file-csi
provisioner: "file.csi.azure.com"
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  skuName: Standard_LRS

Apply:

kubectl apply -f azure-file-storageclass.yaml

azure-disk-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: disk-csi
provisioner: "disk.csi.azure.com"
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  skuName: StandardSSD_ZRS

Apply:

kubectl apply -f azure-disk-storageclass.yaml

AWS

aws-efs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: file-csi
provisioner: efs.csi.aws.com
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer

Apply:

kubectl apply -f aws-efs-storageclass.yaml

aws-ebs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: disk-csi
provisioner: "ebs.csi.aws.com"
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  fsType: ext4
  type: gp2

Apply:

kubectl apply -f aws-ebs-storageclass.yaml

Google Cloud

A shared-storage StorageClass is not required on GCP; the GCS Fuse volume in the next section binds statically. Create only the block storage StorageClass below.

gcp-pd-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: disk-csi
provisioner: "pd.csi.storage.gke.io"
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: pd-balanced
  availability-class: regional-hard-failover
  replication-type: regional-pd

Apply:

kubectl apply -f gcp-pd-storageclass.yaml

Minikube & Docker Desktop

Info

Minikube and Docker Desktop use the same StorageClass for both shared and block storage.

minikube-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-csi
provisioner: k8s.io/minikube-hostpath
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer

Apply:

kubectl apply -f minikube-storageclass.yaml

docker-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-csi
provisioner: docker.io/hostpath
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer

Apply:

kubectl apply -f docker-storageclass.yaml
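
Block storage for the database and cache is provisioned dynamically from the disk-csi StorageClass created above (local-csi on Minikube and Docker Desktop), so this guide defines static PVs only for the shared volume. The claim below is an illustrative sketch: the name and size are placeholders, and the actual claims may be created by the QMigrator deployment manifests. With volumeBindingMode: WaitForFirstConsumer, the disk is created when a pod using the claim is first scheduled.

block-storage-pvc.yaml (illustrative)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-db-data                    # placeholder name
  namespace: {{namespace}}                 # Replace with your namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: disk-csi               # use local-csi on Minikube/Docker Desktop
  resources:
    requests:
      storage: 10Gi                        # adjust to your sizing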


Create Shared Storage (Static Provisioning)

Azure

Prerequisites

Create a Kubernetes secret for your Azure file share credentials:

# Get storage account key
STORAGE_KEY=$(az storage account keys list --resource-group <resource-group> --account-name <storage-account> --query "[0].value" -o tsv)

# Create secret
kubectl create secret generic fileshare-secret -n <namespace> \
  --from-literal=azurestorageaccountname=<storage-account-name> \
  --from-literal=azurestorageaccountkey=$STORAGE_KEY
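
Before applying the PV below, you can confirm that the secret exists and contains both keys:

# Confirm the secret exists and lists both keys (values are not shown)
kubectl describe secret fileshare-secret -n <namespace>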

PV and PVC

azure-shared-storage.yaml
# PersistentVolume for Shared Storage
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qmig-shared
spec:
  capacity:
    storage: "5Gi"
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "file-csi"
  csi:
    driver: file.csi.azure.com
    volumeHandle: qmig-shared-pv
    volumeAttributes:
      resourceGroup: {{resource-group}}      # Replace with your resource group
      shareName: {{fileshare-name}}          # Replace with your file share name
    nodeStageSecretRef:
      name: fileshare-secret
      namespace: {{namespace}}               # Replace with your namespace
---
# PersistentVolumeClaim for Shared Storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: qmig-shared
  namespace: {{namespace}}                   # Replace with your namespace
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: file-csi
  resources:
    requests:
      storage: 5Gi
  volumeName: qmig-shared

Apply:

kubectl apply -f azure-shared-storage.yaml

AWS

Prerequisites

EFS Network Configuration

Ensure your EFS file system is created in the same VPC as your EKS cluster, with a security group that allows NFS traffic from the cluster nodes.

The EFS CSI driver requires IAM permissions to mount file systems. Create an IAM role with the required policy and associate it with the driver's Kubernetes service account (see the AWS EFS CSI driver documentation for details).
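
You can also confirm that the EFS CSI driver is installed in the cluster before creating the PV; the driver name matches the provisioner used in this guide:

# Confirm the EFS CSI driver is registered in the cluster
kubectl get csidriver efs.csi.aws.com

# List the driver pods (typically in kube-system; names vary by install method)
kubectl get pods -n kube-system | grep efs-csi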

PV and PVC

aws-shared-storage.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qmig-shared
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: file-csi
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: {{filesystem-id}}          # Replace with your EFS filesystem ID
---
# Source: qmig/templates/app/app-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/name: qmig
  name: qmig-shared
  namespace: {{namespace}}              # Replace with your namespace
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: file-csi
  volumeName: qmig-shared
  resources:
    requests:
      storage: "5Gi"

Apply:

kubectl apply -f aws-shared-storage.yaml

Google Cloud

Prerequisites

Ensure the GKE node service account or Workload Identity principal has the roles/storage.objectUser role on the bucket (see the GKE documentation for details):

# Grant permissions to GKE node service account
gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME \
    --member "principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/PROJECT_ID.svc.id.goog/subject/ns/NAMESPACE/sa/KSA_NAME" \
    --role "roles/storage.objectUser"

PV and PVC

gcp-shared-storage.yaml
# Source: qmig/templates/app/app-pv.yaml
# REF: https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/cloud-storage-fuse-csi-driver#provision-static
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qmig-shared
spec:
  capacity:
    storage: "5Gi"
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "file-csi"
  mountOptions:
    - implicit-dirs
    - dir-mode=777
    - file-mode=777
  csi:
    driver: gcsfuse.csi.storage.gke.io
    volumeHandle: {{bucket-name}}            # Replace with your GCS bucket name
    readOnly: false
---
# Source: qmig/templates/app/app-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/name: qmig
  name: qmig-shared
  namespace: {{namespace}}                   # Replace with your namespace
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: file-csi
  volumeName: qmig-shared
  resources:
    requests:
      storage: "5Gi"

Apply:

kubectl apply -f gcp-shared-storage.yaml

Pod Annotations Required

Add this annotation to the pod template of all QMigrator deployments to enable GCS Fuse:

annotations:
  gke-gcsfuse/volumes: "true"
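
In a Deployment, this is a pod annotation, so it goes under spec.template.metadata rather than the Deployment's own metadata, for example:

spec:
  template:
    metadata:
      annotations:
        gke-gcsfuse/volumes: "true"   # injects the GCS Fuse sidecar into the pod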

Minikube & Docker Desktop

local-shared-storage.yaml
# Source: qmig/templates/local/local-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qmig-shared
spec:
  capacity:
    storage: "5Gi"
  accessModes:
    - ReadWriteMany
  storageClassName: local-csi
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/qmig-shared                # Update path as needed
---
# Source: qmig/templates/local/local-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/name: qmig
  name: qmig-shared
  namespace: {{namespace}}                 # Replace with your namespace
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: local-csi
  volumeName: qmig-shared
  resources:
    requests:
      storage: "5Gi"

Apply:

kubectl apply -f local-shared-storage.yaml


Verify Storage Resources

After applying the manifests, verify all resources are created:

# Check StorageClasses
kubectl get storageclass

# Check PersistentVolumes
kubectl get pv

# Check PersistentVolumeClaims
kubectl get pvc -n <namespace>

# View detailed PVC status
kubectl describe pvc -n <namespace>
