Using Helm

This guide provides step-by-step instructions to deploy QMigrator on a Kubernetes cluster using Helm.

Note

This guide assumes your cluster (AKS, EKS, GKE, or Minikube) is already provisioned and accessible via kubectl.


Pre-Deployment Checklist

Before deploying QMigrator, verify:

  • Kubernetes cluster is running and accessible
  • kubectl is configured and authenticated
  • Helm 3.x is installed
  • StorageClass exists for shared storage (RWX)
  • StorageClass exists for block storage (RWO)
  • Network connectivity to source/target databases
  • Sufficient cluster resources (CPU, memory, disk)
  • LoadBalancer or Ingress controller installed
  • (Optional) cert-manager installed for TLS
  • Docker registry credentials
  • Project information
  • Latest QMigrator image list
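
Most of the checklist above can be spot-checked from a shell; a quick sketch, assuming kubectl and helm are already on your PATH:

```shell
# Quick prerequisite check (run against the cluster you intend to deploy to)
kubectl cluster-info          # cluster is reachable and kubectl is authenticated
helm version --short          # expect a v3.x version string
kubectl get storageclass      # confirm classes suitable for RWX and RWO exist
kubectl get nodes -o wide     # rough view of available node capacity
```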

Step 1: Prepare the Values File

  • Customize the values.yaml file with the required properties.
  • A minimal configuration is available in values.example.yaml, which can be further modified as needed.

Helm Value

For all available overrides, see the full values reference in Helm Value.

Credentials

Project Credentials:
1. Retrieve your PROJECT_ID and PROJECT_NAME, then set them in values:

secret.data.PROJECT_ID: "your-project-id"
secret.data.PROJECT_NAME: "your-project-name"

Database & Cache Passwords:
2. Create strong passwords for the Metadata DB and Redis cache:

secret.data.POSTGRES_PASSWORD: "your-secure-db-password"
secret.data.REDIS_PASS: "your-secure-redis-password"

Image Registry Credentials:
3. Retrieve the Docker registry credentials from your project information page.
4. Configure the image pull credentials:

imageCredentials.data.username: "your-registry-username"
imageCredentials.data.password: "your-registry-password"

Airflow Credentials:
5. Create a strong password for the Airflow login (if enabled):

secret.data.airflow_password: "your-secure-airflow-password"
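
The dotted paths used above correspond to nested keys in values.yaml. A minimal sketch of the resulting structure (all values are placeholders):

```yaml
# Illustrative values.yaml fragment -- all values are placeholders
secret:
  data:
    PROJECT_ID: "your-project-id"
    PROJECT_NAME: "your-project-name"
    POSTGRES_PASSWORD: "your-secure-db-password"
    REDIS_PASS: "your-secure-redis-password"
    airflow_password: "your-secure-airflow-password"  # only if Airflow is enabled
imageCredentials:
  data:
    username: "your-registry-username"
    password: "your-registry-password"
```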

Volume Reference

Option A: Static Shared + Dynamic Disk

Note

This approach uses a pre-created static PVC for shared storage and dynamic provisioning for block storage.

  1. Create the shared storage PV and PVC before the Helm deployment, following the Storage Guide

  2. Retrieve the shared PVC name:

    kubectl get pvc -n qmigrator
    

  3. Reference the shared storage PVC in values:

    shared.persistentVolume.existingClaim: "qmig-shared"
    

  4. Identify available block storage classes:

    kubectl get storageclass
    

  5. Configure dynamic block storage class in values:

    db.persistentVolume.storageClass: "disk-csi"  # or your block storage class
    db.persistentVolume.size: "5Gi"
    msg.persistentVolume.storageClass: "disk-csi"
    msg.persistentVolume.size: "5Gi"
    

Option B: All Static Provisioning

Note

This approach uses pre-created PVCs for all storage types. Useful for environments with strict storage policies.

  1. Create all PV and PVC resources before the Helm deployment, following the Storage Guide and Block Storage guide

  2. Retrieve all PVC names:

    kubectl get pvc -n qmigrator
    

  3. Reference all existing claims in values:

    shared.persistentVolume.existingClaim: "qmig-shared"
    db.persistentVolume.existingClaim: "qmig-db-pvc"
    msg.persistentVolume.existingClaim: "qmig-cache-pvc"
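
Before installing with all-static provisioning, it can help to confirm the pre-created claims are Bound; a sketch using the example claim names above (substitute your own):

```shell
# Verify the pre-created claims exist and report STATUS "Bound"
kubectl get pvc qmig-shared qmig-db-pvc qmig-cache-pvc -n qmigrator
```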
    

Option C: All Dynamic Provisioning

Note

This approach lets Kubernetes create all PVCs automatically. It requires storage classes that support the required access modes: ReadWriteMany for shared storage and ReadWriteOnce for block storage.

Warning

Ensure your cluster has a storage class that supports ReadWriteMany for shared storage (e.g., Azure Files, AWS EFS, GCP Filestore).

  1. Identify available storage classes:

    kubectl get storageclass
    

  2. Configure all storage with storage classes:

    shared.persistentVolume.storageClass: "azurefile-csi"  # Must support ReadWriteMany
    shared.persistentVolume.size: "5Gi"
    db.persistentVolume.storageClass: "managed-csi"        # Must support ReadWriteOnce
    db.persistentVolume.size: "5Gi"
    msg.persistentVolume.storageClass: "managed-csi"       # Must support ReadWriteOnce
    msg.persistentVolume.size: "5Gi"
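
Before committing to a storage class, you may want to inspect its provisioner and binding behavior; a sketch using the example class names above (substitute the classes in your cluster):

```shell
# Inspect candidate storage classes (names here are the examples above)
kubectl get storageclass azurefile-csi managed-csi
kubectl describe storageclass azurefile-csi | grep -iE 'provisioner|volumebindingmode|reclaimpolicy'
```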
    

Gateway & HTTPRoute

Note

You can skip the Gateway & HTTPRoute configuration during the initial deployment by leaving gateway.enabled=false and httpRoutes.enabled=false in your values file. The application will still deploy successfully without external access. Follow the post-installation steps later to expose QMigrator externally.

Prerequisites:

  • Ensure a Gateway API controller is installed (e.g., NGINX Gateway Fabric, Contour, or Istio); see the Gateway Guide
  • Verify the gateway class name available in your cluster:
    kubectl get gatewayclasses
    
  • (Optional) An HTTPS-based Gateway requires cert-manager; see TLS Setup

Enable Gateway:
1. Set the gateway configuration:

gateway.enabled: true
gateway.gatewayClassName: "nginx"  # or your gateway class name

2. Configure listeners (HTTP and/or HTTPS):

TLS Annotation

The annotation tells cert-manager to issue a certificate for the gateway listeners.

gateway.annotations:
  cert-manager.io/cluster-issuer: "letsencrypt"  # or use cert-manager.io/issuer for namespace-scoped Issuer
# Gateway configuration with HTTP listener only
gateway.listeners:
- name: http
  protocol: HTTP
  port: 80
  allowedRoutes:
    namespaces:
      from: Same

# Gateway configuration with HTTP and HTTPS listeners
gateway.listeners:
- name: http
  protocol: HTTP
  port: 80
  allowedRoutes:
    namespaces:
      from: Same
- name: https
  protocol: HTTPS
  port: 443
  tls:
    mode: Terminate
    certificateRefs:
    - name: qmig-tls-cert  # Certificate name managed by cert-manager
  allowedRoutes:
    namespaces:
      from: Same

Enable HTTPRoute:
1. Configure route hostnames and listeners:

# HTTPRoute attached to the HTTP listener only
httpRoutes.enabled: true
httpRoutes.hostnames: ["your-domain.com"]
httpRoutes.parentRefs:
- name: qmig-gateway # or pre-created gateway
  namespace: qmigrator
  sectionName: http

# HTTPRoute attached to both HTTP and HTTPS listeners
httpRoutes.enabled: true
httpRoutes.hostnames: ["your-domain.com"]
httpRoutes.parentRefs:
- name: qmig-gateway # or pre-created gateway
  namespace: qmigrator
  sectionName: http
- name: qmig-gateway # or pre-created gateway
  namespace: qmigrator
  sectionName: https
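
After enabling the Gateway and HTTPRoute, you can confirm they were accepted by the controller; a sketch, assuming the chart creates a Gateway named qmig-gateway:

```shell
# List Gateway API resources and check their status conditions
kubectl get gateway -n qmigrator
kubectl get httproute -n qmigrator
# Look for Accepted/Programmed conditions in the output below
kubectl describe gateway qmig-gateway -n qmigrator
```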

Step 2: Set Kubernetes Context

Make sure you’re connected to the correct Kubernetes cluster:

kubectl config get-contexts
kubectl config use-context <cluster-context>

Step 3: Create a Namespace

kubectl create namespace qmigrator
kubectl config set-context --current --namespace=qmigrator

Step 4: Helm Login and Installation

helm registry login qmigrator.azurecr.io \
    --username <registry-user> \
    --password <registry-password>
helm install qmigrator oci://qmigrator.azurecr.io/helm/qmig -n qmigrator --create-namespace -f values.example.yaml

To upgrade later:

helm upgrade qmigrator oci://qmigrator.azurecr.io/helm/qmig -n qmigrator -f values.example.yaml
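
After an install or upgrade, you can confirm the release state and the values it was rendered with; a quick sketch:

```shell
# Confirm the release deployed and inspect its effective overrides
helm list -n qmigrator
helm status qmigrator -n qmigrator
helm get values qmigrator -n qmigrator   # overrides supplied via values.example.yaml
```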

Step 5: Verify Deployment

Check if the pods are running:

kubectl get pods -n qmigrator

Post-Installation: expose QMigrator

  • To expose QMigrator externally, follow the post-installation steps for Gateway configuration.
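
If the Gateway is not yet configured, a port-forward gives temporary local access; a sketch in which the service name and port are assumptions — check the actual names with kubectl get svc first:

```shell
# Discover the application's Service name and port
kubectl get svc -n qmigrator
# Temporary local access without a Gateway (service name/port below are assumptions)
kubectl port-forward svc/qmig-app 8080:80 -n qmigrator
# then browse http://localhost:8080 while the port-forward is running
```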