Fabric Migration Guide
This guide walks you through migrating databases to Microsoft Fabric with QMigrator, covering Azure App registration, connection setup, and end-to-end data migration.
Prerequisites
Before starting the migration process, ensure you have:
- Access to Azure Portal with appropriate permissions
- Microsoft Fabric workspace access
- QMigrator application credentials
- Source and target database connection details
Part 1: Azure App Registration (Service Principal)
Service Principal authentication is required for secure access to Microsoft Fabric resources.
Step 1: Create App Registration
- Sign in to the Azure Portal
- Search for App registrations in the top search bar
- Click New registration

Step 2: Configure Application Details
Fill in the required information:
- Name: Enter a descriptive name for your application (e.g., "QMigrator-Fabric-Connector")
- Supported account types: Select the accounts you want your app to support
- Redirect URI (Optional): Select Web from the "Select a platform" dropdown
Click Register to create the application.

Step 3: Capture Application Credentials
- Select the newly created App Registration
Warning
Note down the following values (you'll need them later):
- Application (client) ID
- Directory (tenant) ID

Step 4: Generate Client Secret
- Under Manage, select Certificates & secrets
- Click New client secret
- Provide a description and expiration period
Warning
Copy and save the secret Value immediately; it is displayed only once.

Step 5: Grant Fabric Workspace Access
- Open your Microsoft Fabric Workspace
- Click Manage Access
- Click Add people or groups
- Search for and select the app registration created in the previous steps
- Assign the Contributor role
- Click Add to confirm
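With the app registration in place, the service principal can obtain Fabric API tokens through the standard OAuth2 client-credentials flow. QMigrator performs this exchange for you once the credentials are configured; the sketch below only illustrates what the token request looks like (the credential values are placeholders).

```python
# Sketch: the OAuth2 client-credentials token request enabled by the service
# principal created above. Credential values are placeholders.

def build_token_request(tenant_id: str, client_id: str, client_secret: str):
    """Return the Microsoft Entra ID token endpoint URL and form payload
    for the client-credentials grant."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    payload = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # Default scope for the Microsoft Fabric REST API
        "scope": "https://api.fabric.microsoft.com/.default",
    }
    return url, payload

url, payload = build_token_request("<tenant-id>", "<client-id>", "<client-secret>")
# POSTing this payload (e.g. requests.post(url, data=payload)) returns a JSON
# response containing an "access_token" for the Fabric API.
```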

Part 2: QMigrator Configuration
Step 1: Login to QMigrator
- Navigate to the QMigrator application URL
- Click Login to access your account

Step 2: Configure Source Connection
Set up the connection to your source database.
- Click on the Setup menu and navigate to Connections
- Select Database as the migration source
- In the Connection Type field, choose Source

- Click Add Source to add a new source
- Provide the following connection details:

- Connection Name: Reference name for the source
- Hostname/IP: Source database host
- Port: Database port number
- Database Name: Source database name
- Username: Database authentication username
- Password: Database authentication password
- …
- Click Test Connection to verify connectivity
- Click Save to store the connection

Tip
Use the Edit button to modify existing connections, or the Delete button to remove connections that are no longer needed.
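Before clicking Test Connection, it can help to sanity-check that every required field is filled in. The sketch below illustrates such a check; the field names are illustrative placeholders, not QMigrator's internal schema.

```python
# Sketch: a pre-save sanity check for the source connection fields listed above.
# Field names are illustrative; QMigrator's internal schema may differ.

REQUIRED_FIELDS = ("connection_name", "hostname", "port", "database_name",
                   "username", "password")

def validate_source_connection(conn: dict) -> list:
    """Return a list of problems; an empty list means the details look usable."""
    problems = [f"missing: {f}" for f in REQUIRED_FIELDS if not conn.get(f)]
    port = conn.get("port")
    if port is not None and not (isinstance(port, int) and 0 < port < 65536):
        problems.append(f"invalid port: {port!r}")
    return problems

conn = {"connection_name": "oracle_src", "hostname": "db.example.com",
        "port": 1521, "database_name": "ORCL",
        "username": "qmig", "password": "secret"}
assert validate_source_connection(conn) == []
```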
Step 3: Configure Target Connection
Set up the connection to your Microsoft Fabric target.
- Click on the Setup menu and navigate to Connections
- Select Database as the migration source
- In the Connection Type field, choose Target
- Click the Add Target button
- Provide the following Fabric connection details:
- Connection Name: Reference name (e.g., "Fabric_Mirror")
- Workspace ID: Your Fabric workspace identifier
- Tenant ID: Directory (tenant) ID from App registration
- Client ID: Application (client) ID from App registration
- Client Secret: Secret value from App registration
- Target Type: Select Fabric service (Warehouse, Lakehouse, etc.)

- Click Test Connection to verify connectivity
- Click Save to store the connection
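The target connection simply reuses the credentials captured in Part 1. A minimal sketch of how those values map together (the keys are illustrative placeholders, not QMigrator's exact field names):

```python
# Sketch: the Fabric target connection assembled from the Part 1 credentials.
# Keys and placeholder values are illustrative, not QMigrator's exact schema.

fabric_target = {
    "connection_name": "Fabric_Mirror",
    "workspace_id": "<workspace-guid>",      # from your Fabric workspace
    "tenant_id": "<directory-tenant-id>",    # App registration, Step 3
    "client_id": "<application-client-id>",  # App registration, Step 3
    "client_secret": "<secret-value>",       # App registration, Step 4
    "target_type": "Warehouse",              # or Lakehouse, etc.
}

# Every field must be filled in before Test Connection can succeed.
missing = [k for k, v in fabric_target.items() if not v]
assert missing == []
```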
Part 3: End-to-End Migration
Step 1: Configure Migration Settings
Navigate to the E2E Migration screen and configure the following parameters:

Basic Configuration
| Parameter | Description | Example |
|---|---|---|
| Source Connection | Source database connection | Fabric_oracle |
| Data Load Type | Initial load method (Database or File) | Database |
| Source Schema | Schema to migrate | fabric |
| DR Connection | Deployment database connection | Fabric_mirror |
| Target Connection | Destination database connection | Fabric_mirror |
Operation Settings
- Operation: Select the migration operation type:
  - Initial Data Load: Basic initial load without auto-deploying indexes
  - E2E Data Load: Complete migration including data transformation and CDC
Table Selection
- Tables: Multi-select tables to migrate (supports segment loading)
Advanced Configuration
| Parameter | Description | Example |
|---|---|---|
| Config File Name | Name for configuration file | Testing4_Scheduler_20250225 |
| Request CPU | Minimum CPU per task (1 core = 1000m) | 2000m |
| Limit CPU | Maximum CPU per task (1 core = 1000m) | 4000m |
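The CPU values use Kubernetes millicore notation, where 1 core = 1000m. A small sketch of the conversion:

```python
# Sketch: converting Kubernetes CPU quantities, where 1 core = 1000m (millicores).

def to_millicores(cpu: str) -> int:
    """Parse a CPU quantity like '2000m' or '0.5' into millicores."""
    return int(cpu[:-1]) if cpu.endswith("m") else int(float(cpu) * 1000)

assert to_millicores("2000m") == 2000  # Request CPU example above = 2 cores
assert to_millicores("4000m") == 4000  # Limit CPU example above = 4 cores
assert to_millicores("0.5") == 500     # half a core
```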
Step 2: Execute Migration
- After entering all details, click Execute
- The system will initiate:
  - Schema extraction
  - Code conversion
  - Data migration
Step 3: Monitor Migration Status
Track your migration progress in the E2E Migration Status screen.
Extraction and Conversion Status
- Select the operation from the dropdown menu

- Monitor the process status:
  - Initialize: Process started and initialized
  - Completed: Process finished successfully
Note
Duplicate execution is prevented while one migration is already running.
Data Migration Status
- Select Data Migration from the operation dropdown
- Select both source and target connections

- View generated configuration files ready for data migration
- Review the config Excel file, which contains:
  - DAG (Directed Acyclic Graph) details
  - Table information
  - Chunk details
  - Category classification (large, small, LOB)
Tip
Configuration Management: Deleting a config file also removes related DAGs.
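The large/small/LOB classification above drives how tables are chunked and scheduled. A sketch of how such bucketing might work; the threshold and field names are assumptions for illustration, not QMigrator's actual rules:

```python
# Sketch: bucketing tables into the large/small/LOB categories recorded in the
# config file. Threshold and field names are illustrative assumptions.

def categorize_table(row_count: int, has_lob_columns: bool,
                     large_threshold: int = 1_000_000) -> str:
    if has_lob_columns:
        return "LOB"  # LOB tables get special handling regardless of size
    return "large" if row_count >= large_threshold else "small"

assert categorize_table(5_000_000, False) == "large"
assert categorize_table(10_000, False) == "small"
assert categorize_table(10_000, True) == "LOB"
```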
Part 4: Job Agent Automation
The Job Agent automates DAG execution for continuous migration operations.
Configure Job Agent
- Navigate to the Job Agent screen
- Select the Source Connection
- Select the Target Connection
- Click Start Agent

The Job Agent will automatically trigger all available DAGs from the audit config table, eliminating the need for manual execution.
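Conceptually, the agent's loop walks the audit config table and triggers every DAG that has not yet run. The sketch below is a hypothetical stand-in for that behavior; the row schema and trigger interface are assumptions, not QMigrator internals.

```python
# Sketch: the Job Agent's core loop, triggering every pending DAG recorded in
# the audit config table. Row schema and trigger interface are hypothetical.

def run_job_agent(audit_rows: list, trigger) -> list:
    """Trigger each DAG not yet started; return the DAG ids that were triggered."""
    triggered = []
    for row in audit_rows:
        if row["status"] == "pending":   # skip DAGs already running or finished
            trigger(row["dag_id"])
            row["status"] = "triggered"
            triggered.append(row["dag_id"])
    return triggered

rows = [{"dag_id": "dag_orders_chunk1", "status": "pending"},
        {"dag_id": "dag_orders_chunk2", "status": "completed"}]
assert run_job_agent(rows, trigger=lambda dag_id: None) == ["dag_orders_chunk1"]
```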
Monitor Job Execution
View running DAGs and their status in real-time through the Job Agent interface.

Part 5: Scheduler and Task Management
Access Scheduler
- Click on the Data Migration menu
- Select the Scheduler screen
- Login with your username and password

- Click Browse and select DAG Runs to view active DAGs
View Task Details
- Click on any running DAG to view its tasks

- Each DAG consists of the following tasks:
  - Pre-validation: Initial checks before migration
  - DAG Execution: Main data migration process
  - Complete Validation: Post-migration verification
Understanding Task Architecture
- Each DAG task runs as an individual Pod in Kubernetes
- Concurrency settings determine the number of parallel pods
- Task statuses include:
  - Scheduled: Queued for execution
  - Queued: Waiting for resources
  - Running: Currently executing
  - Success: Completed successfully
  - Failed: Encountered errors
Note
Pods may show "Pending" status due to image pull, resource unavailability, or node constraints.
Task Management and Recovery
If a task shows Failed, Queued, or Stuck status:
- Select the task in the scheduler UI
- Click Clear to reset the task state
- Select the Downstream and Recursive options if needed
- Verify the task selection
- Confirm Clear to re-run the task
View Task Logs
- Click on a specific task
- Select Logs tab
- Review runtime progress logs for troubleshooting

Part 6: Data Verification
After migration completes, verify data integrity:
- Access your Microsoft Fabric workspace
- Query the target tables
- Compare row counts between source and target
- Validate data types and constraints
- Check for any data transformation issues

A successful migration should show matching record counts and data integrity across all migrated tables.
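The row-count comparison in step 3 can be scripted once counts have been collected from both sides. A minimal sketch, where the dictionaries stand in for the results of `SELECT COUNT(*)` queries against each database:

```python
# Sketch: comparing source and target row counts after migration. The count
# dictionaries stand in for results of SELECT COUNT(*) queries on each side.

def compare_counts(source: dict, target: dict) -> list:
    """Return human-readable mismatches; an empty list means counts match."""
    issues = []
    for table, src_rows in source.items():
        tgt_rows = target.get(table)
        if tgt_rows is None:
            issues.append(f"{table}: missing in target")
        elif tgt_rows != src_rows:
            issues.append(f"{table}: source={src_rows}, target={tgt_rows}")
    return issues

src = {"orders": 120_000, "customers": 4_500}
tgt = {"orders": 120_000, "customers": 4_499}
assert compare_counts(src, tgt) == ["customers: source=4500, target=4499"]
```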
Best Practices
- Monitor Resource Usage: Adjust CPU limits based on migration performance
- Incremental Migration: Migrate tables in batches for large datasets
- Test Connections: Always test connections before starting migration
- Review Logs: Regularly check scheduler logs for early issue detection