PostgreSQL Database


PostgreSQL serves as the primary database in your nself stack. This guide covers everything you need to know about PostgreSQL configuration, optimization, and best practices within the nself ecosystem.

Overview

nself uses PostgreSQL 15+ with carefully tuned defaults for performance and reliability:

  • JSONB Support: Native JSON processing capabilities
  • Full-Text Search: Built-in search functionality
  • Extensions: UUID generation, crypto functions, and more
  • Connection Pooling: Efficient connection management
  • Replication Support: Primary-replica streaming replication

Configuration

Basic Database Settings

# Database connection
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB=myapp
POSTGRES_USER=postgres
POSTGRES_PASSWORD=secure_password_here

# Connection pooling
POSTGRES_MAX_CONNECTIONS=100
POSTGRES_SHARED_BUFFERS=256MB
POSTGRES_EFFECTIVE_CACHE_SIZE=1GB

Performance Tuning

# Memory settings
POSTGRES_SHARED_BUFFERS=512MB       # Typically ~25% of system RAM
POSTGRES_EFFECTIVE_CACHE_SIZE=2GB   # Typically 50-75% of system RAM
POSTGRES_WORK_MEM=64MB              # Per sort/hash operation within a query
POSTGRES_MAINTENANCE_WORK_MEM=256MB # For VACUUM, CREATE INDEX, etc.

# Checkpoint settings
POSTGRES_CHECKPOINT_COMPLETION_TARGET=0.9
POSTGRES_WAL_BUFFERS=16MB
POSTGRES_MAX_WAL_SIZE=2GB

# Query optimization
POSTGRES_RANDOM_PAGE_COST=1.1      # For SSD storage
POSTGRES_EFFECTIVE_IO_CONCURRENCY=200
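
Assuming nself passes these variables through to postgresql.conf, you can confirm the values the server is actually running with from any SQL session:

-- Inspect the effective settings inside PostgreSQL
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'effective_cache_size',
               'work_mem', 'random_page_cost');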

Extensions and Features

Enabled Extensions

nself automatically enables essential PostgreSQL extensions:

-- UUID generation
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- Cryptographic functions
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Trigram matching (fuzzy search, faster LIKE/ILIKE)
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- B-tree operators inside GIN indexes (useful alongside JSONB indexing)
CREATE EXTENSION IF NOT EXISTS btree_gin;

-- PostGIS for geospatial data (if needed)
CREATE EXTENSION IF NOT EXISTS postgis;
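
To confirm what is actually installed in a given database, query the pg_extension catalog:

-- List installed extensions and their versions
SELECT extname, extversion
FROM pg_extension
ORDER BY extname;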

JSONB Usage

Leverage PostgreSQL's powerful JSONB capabilities:

-- Store flexible metadata
CREATE TABLE products (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL,
    metadata JSONB,
    created_at TIMESTAMP DEFAULT now()
);

-- Index JSONB fields for performance
CREATE INDEX idx_products_metadata_gin ON products USING GIN (metadata);

-- Query JSONB data (the @> containment operator can use the GIN index)
SELECT * FROM products
WHERE metadata @> '{"category": "electronics"}';

-- Key extraction with ->> also works, but is not served by the GIN index
SELECT * FROM products
WHERE metadata->>'category' = 'electronics';

-- Update JSONB fields
UPDATE products 
SET metadata = metadata || '{"featured": true}'
WHERE id = 'some-uuid';
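
If you mostly filter on one specific key with ->> (an assumption about your access pattern), a B-tree expression index on that key is usually a better fit than the general GIN index:

-- B-tree expression index for equality lookups on a single JSONB key
CREATE INDEX idx_products_metadata_category
    ON products ((metadata->>'category'));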

Database Management

Schema Management

nself provides comprehensive database management through DBML:

# Generate migrations from schema
nself db run

# Apply migrations
nself db migrate:up

# Check database status
nself db status

# Validate schema
nself db validate

Connection Management

# Test database connection
nself db ping

# View connection info
nself db info

# Monitor connections
docker exec nself-postgres psql -U postgres -c "SELECT * FROM pg_stat_activity;"

Backup and Recovery

Automated Backups

# Configure automatic backups
POSTGRES_BACKUP_ENABLED=true
POSTGRES_BACKUP_SCHEDULE="0 2 * * *"  # Daily at 2 AM
POSTGRES_BACKUP_RETENTION=7
POSTGRES_BACKUP_COMPRESSION=true

# Manual backup
nself db backup --compress

# List backups
nself db backup:list

# Restore from backup
nself db restore --name backup-2025-08-06-02-00-00

Point-in-Time Recovery

# Enable WAL archiving for PITR
POSTGRES_ARCHIVE_MODE=on
POSTGRES_ARCHIVE_COMMAND='cp %p /var/lib/postgresql/wal_archive/%f'
POSTGRES_MAX_WAL_SENDERS=3
POSTGRES_WAL_LEVEL=replica

# Restore to specific time
nself db restore --point-in-time "2025-08-06 14:30:00"
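
Before relying on PITR, verify that WAL archiving is actually succeeding; the pg_stat_archiver view reports archive activity and failures:

-- Check that WAL segments are being archived without errors
SELECT archived_count,
       last_archived_wal,
       failed_count,
       last_failed_wal
FROM pg_stat_archiver;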

Monitoring and Maintenance

Performance Monitoring

-- Monitor slow queries (requires the pg_stat_statements extension,
-- loaded via shared_preload_libraries)
SELECT query, mean_exec_time, calls, total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- Check table sizes
SELECT 
    schemaname,
    tablename,
    pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;

-- Monitor database connections
SELECT 
    state,
    count(*) as connections
FROM pg_stat_activity
GROUP BY state;

Index Management

-- Find unused indexes
SELECT 
    schemaname,
    tablename,
    indexname,
    idx_scan as index_scans,
    pg_size_pretty(pg_relation_size(indexrelid)) as index_size
FROM pg_stat_user_indexes
WHERE idx_scan < 50  -- Adjust threshold as needed
ORDER BY pg_relation_size(indexrelid) DESC;

-- Find tables with heavy sequential scans (candidates for new indexes)
SELECT 
    schemaname,
    tablename,
    seq_scan,
    seq_tup_read,
    seq_tup_read / seq_scan as avg_read
FROM pg_stat_user_tables
WHERE seq_scan > 1000  -- High sequential scans
ORDER BY seq_tup_read DESC;
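
When you do add an index to a busy table, CREATE INDEX CONCURRENTLY builds it without blocking writes (note that it cannot run inside a transaction block). The table and column below are illustrative:

-- Build an index without taking a write lock on the table
CREATE INDEX CONCURRENTLY idx_orders_customer_id
    ON orders (customer_id);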

Security Configuration

Authentication and Authorization

# Connection security
POSTGRES_SSL_MODE=require
POSTGRES_SSL_CERT=/etc/ssl/certs/server.crt
POSTGRES_SSL_KEY=/etc/ssl/private/server.key

# User management
POSTGRES_READONLY_USER=readonly
POSTGRES_READONLY_PASSWORD=readonly_password

# Row-level security
POSTGRES_ROW_SECURITY=on

Database Users and Roles

-- Create application user
CREATE USER app_user WITH PASSWORD 'secure_password';

-- Create read-only user
CREATE USER readonly_user WITH PASSWORD 'readonly_password';

-- Grant appropriate permissions
GRANT CONNECT ON DATABASE myapp TO app_user;
GRANT USAGE ON SCHEMA public TO app_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO app_user;

-- Read-only permissions
GRANT CONNECT ON DATABASE myapp TO readonly_user;
GRANT USAGE ON SCHEMA public TO readonly_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_user;
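
Row-level security is enforced per table: each protected table needs ROW LEVEL SECURITY enabled and at least one policy. The sketch below assumes a documents table, an owner_id column, and an app.current_user_id session variable, all of which are illustrative:

-- Restrict rows to their owner via row-level security
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

CREATE POLICY documents_owner_policy ON documents
    USING (owner_id = current_setting('app.current_user_id')::uuid);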

Advanced Features

Full-Text Search

-- Create full-text search index
ALTER TABLE posts ADD COLUMN search_vector tsvector;

-- Update search vector
UPDATE posts SET search_vector = 
    to_tsvector('english', coalesce(title, '') || ' ' || coalesce(content, ''));

-- Create index for fast searching
CREATE INDEX idx_posts_search ON posts USING GIN (search_vector);

-- Trigger function to keep search_vector up to date
CREATE OR REPLACE FUNCTION update_search_vector()
RETURNS TRIGGER AS $$
BEGIN
    NEW.search_vector := to_tsvector('english', coalesce(NEW.title, '') || ' ' || coalesce(NEW.content, ''));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Trigger to auto-update search vector
CREATE TRIGGER update_posts_search_vector
    BEFORE INSERT OR UPDATE ON posts
    FOR EACH ROW EXECUTE FUNCTION update_search_vector();
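
With the index and trigger in place, searches use the @@ operator; ts_rank is optional but useful for ordering results by relevance:

-- Search posts and rank matches by relevance
SELECT title,
       ts_rank(search_vector, query) AS rank
FROM posts,
     plainto_tsquery('english', 'postgres performance') AS query
WHERE search_vector @@ query
ORDER BY rank DESC
LIMIT 20;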

Partitioning

-- Create partitioned table for time-series data
CREATE TABLE logs (
    id UUID DEFAULT gen_random_uuid(),
    message TEXT NOT NULL,
    level VARCHAR(20) NOT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT now()
) PARTITION BY RANGE (created_at);

-- Create monthly partitions
CREATE TABLE logs_2025_08 PARTITION OF logs
    FOR VALUES FROM ('2025-08-01') TO ('2025-09-01');

CREATE TABLE logs_2025_09 PARTITION OF logs
    FOR VALUES FROM ('2025-09-01') TO ('2025-10-01');

-- Index each partition
CREATE INDEX idx_logs_2025_08_created_at ON logs_2025_08 (created_at);
CREATE INDEX idx_logs_2025_09_created_at ON logs_2025_09 (created_at);
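
Queries that filter on created_at only scan the matching partitions; EXPLAIN makes the pruning visible (only logs_2025_08 should appear in this plan):

-- Verify partition pruning for a date-bounded query
EXPLAIN
SELECT count(*)
FROM logs
WHERE created_at >= '2025-08-01'
  AND created_at < '2025-09-01';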

Replication and High Availability

Streaming Replication

# Primary server configuration
POSTGRES_WAL_LEVEL=replica
POSTGRES_MAX_WAL_SENDERS=3
POSTGRES_MAX_REPLICATION_SLOTS=3
POSTGRES_ARCHIVE_MODE=on
POSTGRES_ARCHIVE_COMMAND='cp %p /var/lib/postgresql/wal_archive/%f'

# Replica server configuration
POSTGRES_PRIMARY_CONNINFO='host=primary-server port=5432 user=replicator'
POSTGRES_PROMOTE_TRIGGER_FILE='/tmp/promote_trigger'
POSTGRES_HOT_STANDBY=on
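
On the primary, the pg_stat_replication view shows each connected replica and how far it lags:

-- Replication status as seen from the primary
SELECT client_addr,
       state,
       sent_lsn,
       replay_lsn,
       sync_state
FROM pg_stat_replication;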

Connection Pooling with PgBouncer

# pgbouncer.ini
[databases]
myapp = host=postgres dbname=myapp

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
admin_users = admin
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
reserve_pool_size = 5
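
Connecting with psql to PgBouncer's special pgbouncer admin database (port 6432, as one of the admin_users above) gives access to its SQL-like admin commands for inspecting pool health:

-- Run against the 'pgbouncer' admin database
SHOW POOLS;    -- per-database/per-user pool usage
SHOW STATS;    -- request and traffic statistics
SHOW CLIENTS;  -- connected client sessions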

Troubleshooting

Common Issues

Connection Issues

# Check PostgreSQL status
docker exec nself-postgres pg_isready -U postgres

# View connection details
docker logs nself-postgres

# Test connection from application
docker exec nself-api psql -h postgres -U postgres -d myapp -c "SELECT 1;"

Performance Issues

-- Find slow queries
SELECT query, mean_exec_time, calls
FROM pg_stat_statements
WHERE mean_exec_time > 100  -- milliseconds
ORDER BY mean_exec_time DESC;

-- Check locks: which sessions are blocked, and by whom
SELECT
    blocked.pid AS blocked_pid,
    blocked.usename AS blocked_user,
    blocking.pid AS blocking_pid,
    blocking.usename AS blocking_user,
    blocked.query AS blocked_statement,
    blocking.query AS blocking_statement
FROM pg_stat_activity AS blocked
JOIN pg_stat_activity AS blocking
    ON blocking.pid = ANY (pg_blocking_pids(blocked.pid));

Best Practices

Schema Design

  • Use UUIDs: For primary keys, so IDs generated by different services never collide
  • Add Indexes: On foreign keys and frequently queried columns
  • Use Constraints: Enforce data integrity at the database level
  • Normalize Appropriately: Balance normalization with performance

Query Optimization

  • Use EXPLAIN: Analyze query execution plans
  • Limit Results: Always use LIMIT for large datasets
  • Batch Operations: Use bulk inserts/updates when possible
  • Prepared Statements: For frequently executed queries
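
A quick way to apply the first and last points, reusing the products table from earlier:

-- Inspect the plan (and real timings) for a candidate query
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM products
WHERE metadata @> '{"category": "electronics"}'
LIMIT 50;

-- Reuse a parsed plan for a frequently executed query
PREPARE product_by_id (uuid) AS
    SELECT * FROM products WHERE id = $1;

EXECUTE product_by_id('00000000-0000-0000-0000-000000000000');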

CLI Commands

Database Management

# Database operations
nself db status              # Check database health
nself db ping               # Test connection
nself db info               # Connection details
nself db logs               # View database logs

# Schema operations
nself db validate           # Validate DBML schema
nself db run                # Generate migrations
nself db migrate:up         # Apply migrations
nself db migrate:down       # Rollback migrations

# Data operations
nself db seed               # Apply seed data
nself db backup             # Create backup
nself db restore            # Restore from backup
nself db reset              # Reset database

Integration with nself Services

Hasura Integration

PostgreSQL works seamlessly with Hasura GraphQL Engine:

  • Auto-generated APIs: GraphQL APIs created from schema
  • Real-time Subscriptions: Live data updates
  • Permissions: Row and column-level security
  • Custom Functions: SQL functions exposed as GraphQL
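
For example, Hasura can track a SQL function as a GraphQL query when it is STABLE and returns SETOF a tracked table; the sketch below reuses the products table from earlier and is illustrative:

-- A STABLE set-returning function that Hasura can expose as a query
CREATE OR REPLACE FUNCTION search_products(search_term text)
RETURNS SETOF products AS $$
    SELECT *
    FROM products
    WHERE name ILIKE '%' || search_term || '%';
$$ LANGUAGE sql STABLE;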

Microservice Integration

Each microservice can connect to PostgreSQL:

  • Connection Pooling: Efficient resource usage
  • Environment Variables: Consistent configuration
  • Health Checks: Monitor database connectivity
  • Migrations: Coordinated schema updates

Next Steps

PostgreSQL provides a robust, scalable foundation for your nself applications, with rich features and excellent performance. From here, the schema management, backup, and Hasura topics covered above are good starting points for deeper configuration.