Database Migrations

Understanding how nself handles database migrations with automatic generation, version control, and safe team workflows.

Overview

nself automatically generates database migrations from your DBML schema changes. This ensures your database structure stays in sync across development, staging, and production environments.

Migration Workflow

1. Schema Changes

Edit your schema.dbml file to add tables, modify columns, or update relationships.
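For example, adding a posts table (matching the migration generated in the steps below) might look like this in DBML. This is an illustrative sketch; adapt the column and index names to your schema:

```dbml
// schema.dbml — a new posts table (illustrative)
Table posts {
  id uuid [pk, default: `gen_random_uuid()`]
  user_id uuid [not null, ref: > users.id]
  title varchar(255) [not null]
  content text
  created_at timestamp [not null, default: `now()`]
  updated_at timestamp [not null, default: `now()`]

  indexes {
    user_id [name: 'idx_posts_user_id']
  }
}
```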

2. Generate Migration

# Analyze schema and generate migrations
nself db run

# Example output:
# ✓ Analyzing schema: schema.dbml
# ✓ Backing up current database state to bin/dbsyncs/2025-08-06_14-30-00
# ✓ Schema changes detected!
# ✓ Generating migration: 20250806143000_add_posts_table
# ✓ Migration created: hasura/migrations/default/20250806143000_add_posts_table

# Review the migration files before applying:
#   - hasura/migrations/default/20250806143000_add_posts_table/up.sql
#   - hasura/migrations/default/20250806143000_add_posts_table/down.sql

3. Review Generated SQL

Always review the generated migration files:

-- Forward migration (up.sql)
CREATE TABLE posts (
  id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  user_id UUID NOT NULL REFERENCES users(id),
  title VARCHAR(255) NOT NULL,
  content TEXT,
  created_at TIMESTAMP DEFAULT now() NOT NULL,
  updated_at TIMESTAMP DEFAULT now() NOT NULL
);

CREATE INDEX idx_posts_user_id ON posts(user_id);

-- Reverse migration (down.sql)
DROP TABLE IF EXISTS posts;

4. Apply Migration

# Safe team workflow (recommended)
nself db update

# Direct apply (for lead developers)
nself db migrate:up

Migration Commands

nself db migrate:create

Create custom migrations:

# Create empty migration
nself db migrate:create add_user_indexes

# Creates:
# hasura/migrations/default/[timestamp]_add_user_indexes/
#   ├── up.sql
#   └── down.sql
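The generated up.sql and down.sql files start empty; you fill in both directions by hand. For the index migration above, the pair might look like this (the index and column names are illustrative):

```sql
-- up.sql
CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);

-- down.sql
DROP INDEX IF EXISTS idx_users_email;
```

Writing the down.sql at the same time as the up.sql keeps every custom migration reversible.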

nself db migrate:up

Apply pending migrations:

# Apply all pending migrations
nself db migrate:up

# Apply specific number of migrations
nself db migrate:up 2

# Apply specific migration
nself db migrate:up --version 20250806143000

nself db migrate:down

Rollback migrations:

# Rollback last migration
nself db migrate:down

# Rollback specific number of migrations
nself db migrate:down 3

# Rollback to specific version
nself db migrate:down --to-version 20250806140000

Migration Structure

Migrations are stored in the Hasura directory structure:

hasura/
├── migrations/
│   └── default/
│       ├── 20250806140000_init/
│       │   ├── up.sql
│       │   └── down.sql
│       ├── 20250806143000_add_posts_table/
│       │   ├── up.sql
│       │   └── down.sql
│       └── 20250806145000_add_user_indexes/
│           ├── up.sql
│           └── down.sql
└── metadata/
    ├── databases/
    └── version.yaml

Migration Types

Schema Migrations

Generated from DBML changes:

-- Adding a new table
CREATE TABLE categories (
  id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  name VARCHAR(255) NOT NULL UNIQUE,
  slug VARCHAR(255) NOT NULL UNIQUE,
  description TEXT,
  created_at TIMESTAMP DEFAULT now() NOT NULL
);

-- Adding indexes
CREATE INDEX idx_categories_slug ON categories(slug);
CREATE INDEX idx_categories_created_at ON categories(created_at);

Data Migrations

Custom migrations for data changes:

-- Populate default categories
INSERT INTO categories (name, slug, description) VALUES
  ('Technology', 'technology', 'Technology related posts'),
  ('Business', 'business', 'Business and entrepreneurship'),
  ('Lifestyle', 'lifestyle', 'Lifestyle and personal development');

-- Update existing posts with default category
UPDATE posts 
SET category_id = (SELECT id FROM categories WHERE slug = 'technology')
WHERE category_id IS NULL;

Function/Trigger Migrations

-- Create updated_at trigger function
CREATE OR REPLACE FUNCTION trigger_set_timestamp()
RETURNS TRIGGER AS $$
BEGIN
  NEW.updated_at = NOW();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Apply trigger to tables
CREATE TRIGGER set_timestamp
  BEFORE UPDATE ON users
  FOR EACH ROW
  EXECUTE FUNCTION trigger_set_timestamp();

CREATE TRIGGER set_timestamp
  BEFORE UPDATE ON posts
  FOR EACH ROW
  EXECUTE FUNCTION trigger_set_timestamp();

Team Workflows

Development Team Process

  1. Lead Developer
    • Makes schema changes in DBML
    • Runs nself db run to generate migration
    • Reviews and commits migration files
    • Applies with nself db migrate:up
  2. Other Developers
    • Pull latest code changes
    • Run nself db update to safely apply migrations
    • This creates automatic backups before applying

Safe Team Migration

# For all team members (recommended)
nself db update

# This command:
# 1. Creates automatic backup
# 2. Applies pending migrations
# 3. Verifies database integrity
# 4. Provides rollback option if issues occur

Production Migrations

Production Safety

  • Manual confirmation - Production migrations require confirmation
  • Automatic backups - Created before any changes
  • Rollback capability - Every migration can be reversed
  • Zero-downtime - Most migrations are non-blocking
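As an illustration of the expand/contract pattern behind zero-downtime changes, a column rename is typically split across two migrations rather than done in a single ALTER (the column names here are hypothetical):

```sql
-- Migration 1 (expand): add the new column and backfill it
ALTER TABLE users ADD COLUMN full_name VARCHAR(255);
UPDATE users SET full_name = name WHERE full_name IS NULL;

-- Deploy application code that writes both columns and reads full_name.
-- Then, in a later migration:

-- Migration 2 (contract): remove the old column once nothing uses it
ALTER TABLE users DROP COLUMN name;
```

Each step is non-blocking on its own, and old and new application code can run side by side between the two migrations.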

Production Deployment

# Production migration workflow
# 1. Deploy application code
# 2. Run migration with confirmation
nself db migrate:up --confirm

# Production migration includes:
# - Pre-migration backup
# - Schema validation
# - Dependency checking
# - Rollback preparation
# - Health verification

Migration Status

Check Migration Status

# View current migration status
nself db status

# Detailed migration information
hasura migrate status --project-dir hasura/

# Example output:
# VERSION         NAME               STATUS
# 20250806140000  init               Applied
# 20250806143000  add_posts_table    Applied
# 20250806145000  add_user_indexes   Pending

Advanced Migration Features

Conditional Migrations

-- Check if column exists before adding
DO $$ 
BEGIN
  IF NOT EXISTS (
    SELECT 1 FROM information_schema.columns 
    WHERE table_name = 'users' AND column_name = 'phone'
  ) THEN
    ALTER TABLE users ADD COLUMN phone VARCHAR(20);
  END IF;
END $$;

Large Data Migrations

-- Migrate data in batches to avoid long-held locks.
-- Assumes old_table.id is a monotonically increasing integer key.
DO $$
DECLARE
  batch_size INTEGER := 1000;
  last_id BIGINT := 0;
  rows_copied INTEGER;
BEGIN
  LOOP
    WITH batch AS (
      INSERT INTO new_table (id, data, created_at)
      SELECT id, data, created_at
      FROM old_table
      WHERE id > last_id
      ORDER BY id
      LIMIT batch_size
      RETURNING id
    )
    SELECT COUNT(*), COALESCE(MAX(id), last_id)
      INTO rows_copied, last_id
    FROM batch;

    EXIT WHEN rows_copied = 0;
    RAISE NOTICE 'Copied % rows, up to id %', rows_copied, last_id;

    -- Yield briefly so other queries can run
    PERFORM pg_sleep(0.1);
  END LOOP;
END $$;

Migration Dependencies

-- up.sql
-- depends_on: 20250806140000_create_users_table

CREATE TABLE user_profiles (
  id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  user_id UUID NOT NULL REFERENCES users(id),
  bio TEXT,
  created_at TIMESTAMP DEFAULT now() NOT NULL
);

Rollback Strategies

Immediate Rollback

# Rollback last migration
nself db migrate:down

# Rollback multiple migrations
nself db migrate:down 3

# Emergency rollback from backup
nself db revert

Rollback Considerations

  • Data loss - Dropping columns loses data permanently
  • Dependencies - Other systems may depend on new schema
  • Application compatibility - App code must handle old schema
  • Production coordination - Coordinate with deployments
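One way to soften the data-loss risk is a down.sql that archives column values before dropping them. A sketch (the backup table name is illustrative):

```sql
-- down.sql for a migration that added users.phone
-- Archive the values first so the rollback is not destructive
CREATE TABLE IF NOT EXISTS users_phone_backup AS
  SELECT id, phone FROM users WHERE phone IS NOT NULL;

ALTER TABLE users DROP COLUMN IF EXISTS phone;
```

If the migration is later re-applied, the archived values can be restored from the backup table before it is cleaned up.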

Troubleshooting Migrations

Migration Fails

# Check what went wrong
nself logs postgres

# Check migration status
nself db status

# Manual intervention if needed
nself exec postgres psql -U postgres

# Mark migration as applied without re-running it (if manually fixed)
hasura migrate apply --skip-execution --version [version] --project-dir hasura/

Schema Drift

# Detect schema drift
nself db status --check-drift

# Export current database schema
nself exec postgres pg_dump -U postgres --schema-only myproject > current.sql

# Compare with expected schema
diff expected.sql current.sql

Migration Conflicts

# When team members have conflicting migrations:
# 1. Coordinate with team
# 2. Merge schema.dbml files
# 3. Regenerate migrations
nself db run --force

# 4. Test thoroughly before applying
nself db migrate:up --dry-run

Best Practices

  • Small, focused migrations - One logical change per migration
  • Always review SQL - Check generated migrations before applying
  • Test in staging first - Never apply untested migrations to production
  • Backup before changes - Automatic but verify backups exist
  • Plan rollback strategy - Know how to revert changes
  • Document complex migrations - Add comments to explain logic
  • Coordinate with team - Communicate schema changes
  • Monitor after apply - Check application health after migrations
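For the "document complex migrations" point, a short header comment stating the why and the re-run safety goes a long way (the contents here are illustrative):

```sql
-- 20250806150000_backfill_category_ids/up.sql
-- Why: posts created before categories existed have NULL category_id.
-- Safe to re-run: the WHERE clause skips rows already backfilled.
UPDATE posts
SET category_id = (SELECT id FROM categories WHERE slug = 'technology')
WHERE category_id IS NULL;
```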

Next Steps