Persistent Volumes
This guide describes the volumes required for QMigrator on Kubernetes.
QMigrator requires two types of storage:
1. Shared Storage: Shared across applications (ReadWriteMany).
2. Block Storage: Dedicated storage for the database and cache (ReadWriteOnce).
Info
- Each manifest is tailored for its respective cloud provider's shared file system (e.g., EFS for AWS, Azure Files for Azure, GCS Bucket for GCP).
- Review and customize the parameters as needed for your cluster setup.
Create StorageClass
Choose your cloud provider and create the StorageClass resources first:
Azure
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: file-csi
provisioner: "file.csi.azure.com"
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  skuName: Standard_LRS
```
Apply the manifest: `kubectl apply -f <storageclass-file>.yaml`
AWS
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: file-csi
provisioner: efs.csi.aws.com
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
```
Apply the manifest: `kubectl apply -f <storageclass-file>.yaml`
Google Cloud
No StorageClass is needed for the GCS Fuse shared volume (it is statically provisioned); create only the block-storage class below:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: disk-csi
provisioner: "pd.csi.storage.gke.io"
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: pd-balanced
  availability-class: regional-hard-failover
  replication-type: regional-pd
```
Apply the manifest: `kubectl apply -f <storageclass-file>.yaml`
Minikube & Docker Desktop
Info
Minikube and Docker Desktop use the same StorageClass for both shared and block storage, so no separate StorageClass needs to be created.
Create Shared Storage (Static Provisioning)
Azure
Prerequisites
Create a Kubernetes secret for your Azure file share credentials:
```shell
# Get storage account key
STORAGE_KEY=$(az storage account keys list --resource-group <resource-group> --account-name <storage-account> --query "[0].value" -o tsv)

# Create secret
kubectl create secret generic fileshare-secret -n <namespace> \
  --from-literal=azurestorageaccountname=<storage-account-name> \
  --from-literal=azurestorageaccountkey=$STORAGE_KEY
```
PV and PVC
```yaml
# PersistentVolume for Shared Storage
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qmig-shared
spec:
  capacity:
    storage: "5Gi"
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "file-csi"
  csi:
    driver: file.csi.azure.com
    volumeHandle: qmig-shared-pv
    volumeAttributes:
      resourceGroup: {{resource-group}} # Replace with your resource group
      shareName: {{fileshare-name}} # Replace with your file share name
    nodeStageSecretRef:
      name: fileshare-secret
      namespace: {{namespace}} # Replace with your namespace
---
# PersistentVolumeClaim for Shared Storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: qmig-shared
  namespace: {{namespace}} # Replace with your namespace
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: file-csi
  resources:
    requests:
      storage: 5Gi
  volumeName: qmig-shared
```
Apply the manifests: `kubectl apply -f <pv-pvc-file>.yaml`
AWS
Prerequisites
- **EFS Network Configuration** — Ensure your EFS file system is created in the same VPC as your EKS cluster, with a designated security group that allows NFS traffic from the cluster nodes (see more).
- **Access Point** — Create an EFS Access Point for QMigrator with the POSIX identity set to UID 5000 and GID 4000, and the root directory permissions set to 0777.
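As a sketch, the access point above could be created with the AWS CLI; the file system ID and the `/qmig` root path are placeholders you should replace with your own values:

```shell
# Create an EFS Access Point with the POSIX identity QMigrator expects
# (UID 5000, GID 4000, root directory permissions 0777).
aws efs create-access-point \
  --file-system-id <filesystem-id> \
  --posix-user Uid=5000,Gid=4000 \
  --root-directory 'Path=/qmig,CreationInfo={OwnerUid=5000,OwnerGid=4000,Permissions=0777}'
```

The `AccessPointId` in the command's output is the `{{fsap-id}}` value used in the PersistentVolume's `volumeHandle` below.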
PV and PVC
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qmig-shared
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: file-csi
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: {{filesystem-id}}::{{fsap-id}} # Replace with your EFS filesystem ID and FSAP ID
---
# Source: qmig/templates/app/app-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/name: qmig
  name: qmig-shared
  namespace: {{namespace}} # Replace with your namespace
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: file-csi
  volumeName: qmig-shared
  resources:
    requests:
      storage: "5Gi"
```
Apply the manifests: `kubectl apply -f <pv-pvc-file>.yaml`
Google Cloud
Prerequisites
Ensure the GKE node service account or Workload Identity principal has the following role on the bucket (see more):

```shell
# Grant permissions to GKE node service account
gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME \
  --member "principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/PROJECT_ID.svc.id.goog/subject/ns/NAMESPACE/sa/KSA_NAME" \
  --role "roles/storage.objectUser"
```
PV and PVC
```yaml
# Source: qmig/templates/app/app-pv.yaml
# REF: https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/cloud-storage-fuse-csi-driver#provision-static
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qmig-shared
spec:
  capacity:
    storage: "5Gi"
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "file-csi"
  mountOptions:
    - implicit-dirs
    - dir-mode=777
    - file-mode=777
  csi:
    driver: gcsfuse.csi.storage.gke.io
    volumeHandle: {{bucket-name}} # Replace with your GCS bucket name
    readOnly: false
---
# Source: qmig/templates/app/app-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/name: qmig
  name: qmig-shared
  namespace: {{namespace}} # Replace with your namespace
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: file-csi
  volumeName: qmig-shared
  resources:
    requests:
      storage: "5Gi"
```
Apply the manifests: `kubectl apply -f <pv-pvc-file>.yaml`
Pod Annotations Required
Add this annotation to all QMigrator deployments to enable GCSFuse:
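The GCS Fuse CSI driver only injects its sidecar into pods annotated with `gke-gcsfuse/volumes: "true"`. For a Deployment, the annotation goes on the pod template, for example:

```yaml
spec:
  template:
    metadata:
      annotations:
        gke-gcsfuse/volumes: "true"
```

Without this annotation, pods mounting the GCS Fuse volume will fail to start.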
Minikube & Docker Desktop
Paths from Minikube Mount
Ensure `/path` exists on the host and is mounted at `/mnt/qmigrator` when starting Minikube (for example, use `--mount-string="/path:/mnt/qmigrator"`).
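For example, the mount can be set up at cluster start; `/path` here is a placeholder for the host directory you want to share:

```shell
# Start Minikube with the host directory mounted into the node VM
minikube start --mount --mount-string="/path:/mnt/qmigrator"
```

Docker Desktop shares common host paths by default, so no equivalent flag is needed there.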
```yaml
# Source: qmig/templates/local/local-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qmig-shared
spec:
  capacity:
    storage: "5Gi"
  accessModes:
    - ReadWriteMany
  storageClassName: local-csi
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/qmigrator/shared # Update path as needed
---
# Source: qmig/templates/local/local-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/name: qmig
  name: qmig-shared
  namespace: {{namespace}} # Replace with your namespace
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: local-csi
  volumeName: qmig-shared
  resources:
    requests:
      storage: "5Gi"
```
Apply the manifests: `kubectl apply -f <pv-pvc-file>.yaml`
Verify Storage Resources
After applying the manifests, verify all resources are created:
```shell
# Check StorageClasses
kubectl get storageclass

# Check PersistentVolumes
kubectl get pv

# Check PersistentVolumeClaims
kubectl get pvc -n <namespace>

# View detailed PVC status
kubectl describe pvc -n <namespace>
```
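A healthy claim reports `Bound`. Note that with `volumeBindingMode: WaitForFirstConsumer`, a PVC may legitimately stay `Pending` until a pod first uses it. The phase can be checked directly, for example:

```shell
# Print only the binding phase of the shared-storage claim
kubectl get pvc qmig-shared -n <namespace> -o jsonpath='{.status.phase}'
```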