Performance Optimization


This comprehensive guide covers performance optimization strategies for your nself deployment, from database tuning to application-level optimizations.

Performance Monitoring

Built-in Performance Tools

# Check overall system performance
nself status --resources

# Monitor resource usage in real-time
watch -n 2 'nself resources'

# Check service-specific metrics
nself metrics postgres
nself metrics hasura
nself metrics redis

# Generate performance report
nself performance report

Docker Performance Monitoring

# Monitor container resource usage
docker stats

# Take a one-shot snapshot of container usage
docker stats --no-stream

# Monitor specific services
docker stats postgres hasura redis

Database Performance

PostgreSQL Optimization

Memory Configuration

# PostgreSQL memory settings in .env.local
POSTGRES_SHARED_BUFFERS=256MB     # 25% of available RAM
POSTGRES_EFFECTIVE_CACHE_SIZE=1GB  # 75% of available RAM
POSTGRES_WORK_MEM=4MB             # RAM / max_connections
POSTGRES_MAINTENANCE_WORK_MEM=64MB # For maintenance operations

# Apply configuration
nself build && nself restart postgres
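The rules of thumb in the comments above (25% of RAM for shared_buffers, 75% for effective_cache_size, RAM divided by max_connections for work_mem) can be sketched as a small helper. The function name and rounding are illustrative, not part of nself:

```typescript
// Illustrative helper: derive PostgreSQL memory settings from total RAM,
// following the rules of thumb above. Values are in megabytes.
function pgMemorySettings(totalRamMb: number, maxConnections: number) {
  return {
    sharedBuffersMb: Math.floor(totalRamMb * 0.25),      // 25% of RAM
    effectiveCacheSizeMb: Math.floor(totalRamMb * 0.75), // 75% of RAM
    workMemMb: Math.max(1, Math.floor(totalRamMb / maxConnections)) // RAM / max_connections
  };
}

// Example: a 4GB host with 200 connections
console.log(pgMemorySettings(4096, 200));
// → { sharedBuffersMb: 1024, effectiveCacheSizeMb: 3072, workMemMb: 20 }
```

Treat these as starting points; workloads with few, heavy queries often benefit from a larger work_mem than the simple division suggests.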

Connection Pool Tuning

# Connection settings
POSTGRES_MAX_CONNECTIONS=200
POSTGRES_SHARED_BUFFERS=256MB

# Enable connection pooling with PgBouncer
PGBOUNCER_ENABLED=true
PGBOUNCER_POOL_SIZE=25
PGBOUNCER_MAX_CLIENT_CONN=100

Query Optimization

-- Analyze slow queries (requires the pg_stat_statements extension;
-- on PostgreSQL 12 and earlier the columns are total_time/mean_time)
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements 
ORDER BY mean_exec_time DESC 
LIMIT 10;

-- Create indexes for common queries
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);
CREATE INDEX CONCURRENTLY idx_posts_author_created ON posts(author_id, created_at DESC);

-- Analyze query execution plans
EXPLAIN ANALYZE SELECT * FROM posts WHERE author_id = 'user-id';

-- Update table statistics
ANALYZE;

-- Vacuum regularly
VACUUM ANALYZE;

Database Performance Monitoring

# Check database performance
nself exec postgres psql -U postgres -c "
  SELECT 
    datname,
    numbackends,
    xact_commit,
    xact_rollback,
    blks_read,
    blks_hit,
    temp_files,
    temp_bytes
  FROM pg_stat_database;
"

# Monitor active queries
nself exec postgres psql -U postgres -c "
  SELECT 
    pid,
    now() - pg_stat_activity.query_start AS duration,
    query 
  FROM pg_stat_activity 
  WHERE (now() - pg_stat_activity.query_start) > interval '5 minutes';
"

# Check index usage
nself exec postgres psql -U postgres -c "
  SELECT 
    schemaname,
    tablename,
    indexname,
    idx_scan,
    idx_tup_read,
    idx_tup_fetch
  FROM pg_stat_user_indexes 
  ORDER BY idx_scan DESC;
"

Hasura GraphQL Performance

Query Optimization

Query Complexity

Limit query depth and complexity to prevent expensive operations. Use pagination and selective field querying.
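To get a feel for what a depth limit rejects, the nesting depth of a query document can be roughly estimated by counting braces. A real server (including Hasura's depth limit) works on the parsed AST, so this naive sketch is for illustration only:

```typescript
// Naive sketch: estimate GraphQL selection depth by brace nesting.
// Ignores strings and comments; a production depth limit parses the AST.
function estimateQueryDepth(query: string): number {
  let depth = 0;
  let max = 0;
  for (const ch of query) {
    if (ch === '{') max = Math.max(max, ++depth);
    else if (ch === '}') depth--;
  }
  return max;
}

console.log(estimateQueryDepth('query { users { posts { id } } }')); // → 3
```

With HASURA_GRAPHQL_QUERY_DEPTH_LIMIT=10, a query nested deeper than 10 selection sets would be rejected before execution.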

# Hasura performance settings
HASURA_GRAPHQL_QUERY_DEPTH_LIMIT=10
HASURA_GRAPHQL_NODE_LIMIT=100
HASURA_GRAPHQL_BATCH_SIZE=100

# Enable query caching
HASURA_GRAPHQL_ENABLE_QUERY_CACHE=true
HASURA_GRAPHQL_QUERY_CACHE_SIZE=100MB

# Connection pool settings
HASURA_GRAPHQL_PG_CONNECTIONS=50
HASURA_GRAPHQL_PG_TIMEOUT=180
HASURA_GRAPHQL_USE_PREPARED_STATEMENTS=true

Subscription Performance

# WebSocket settings for subscriptions
HASURA_GRAPHQL_LIVE_QUERIES_MULTIPLEXED_REFETCH_INTERVAL=1000
HASURA_GRAPHQL_LIVE_QUERIES_MULTIPLEXED_BATCH_SIZE=100

# Connection limits
HASURA_GRAPHQL_WEBSOCKET_KEEPALIVE=5
HASURA_GRAPHQL_WEBSOCKET_CONNECTION_INIT_TIMEOUT=3

Optimizing GraphQL Queries

# Use pagination for large datasets
query PostsPaginated($limit: Int!, $offset: Int!) {
  posts(limit: $limit, offset: $offset, order_by: {created_at: desc}) {
    id
    title
    created_at
  }
}

# Select only needed fields
query UserSummary {
  users {
    id
    name
    # Don't fetch email, created_at, etc. if not needed
  }
}

# Use aggregations instead of counting all rows
query PostStats {
  posts_aggregate {
    aggregate {
      count
    }
  }
}

# Use variables to enable query caching
query UserPosts($userId: uuid!) {
  posts(where: {author_id: {_eq: $userId}}) {
    id
    title
  }
}
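A small helper can turn a page number into the $limit/$offset variables used by the PostsPaginated query above; the helper name and default page size are ours, not part of nself or Hasura:

```typescript
// Illustrative helper: build limit/offset variables for a paginated
// query (such as PostsPaginated above) from a 1-based page number.
function pageVariables(page: number, pageSize: number = 20) {
  if (page < 1) throw new RangeError('page is 1-based');
  return { limit: pageSize, offset: (page - 1) * pageSize };
}

console.log(pageVariables(3, 20)); // → { limit: 20, offset: 40 }
```

Keeping the query text fixed and varying only these variables also lets prepared statements and any query cache do their job.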

Redis Performance

Memory Optimization

# Redis memory settings
REDIS_MAXMEMORY=512MB
REDIS_MAXMEMORY_POLICY=allkeys-lru

# Memory optimization
REDIS_SAVE=900 1 300 10 60 10000  # Reduce save frequency
REDIS_TCP_KEEPALIVE=300
REDIS_TIMEOUT=0

# Enable compression for large values
REDIS_COMPRESSION=true

Connection Pool Configuration

// Redis client configuration (node-redis v4)
import { createClient } from 'redis';

const redisClient = createClient({
  url: process.env.REDIS_URL,
  socket: {
    connectTimeout: 10000,
    keepAlive: 30000,
    // Exponential backoff, capped at 500ms between attempts
    reconnectStrategy: (retries) => Math.min(retries * 50, 500)
  },
  database: 0
});

// Note: options such as maxRetriesPerRequest and enableAutoPipelining
// belong to the ioredis client, not node-redis; pick one client and
// configure it with its own options.
await redisClient.connect();

Redis Performance Monitoring

# Check Redis memory usage
nself exec redis redis-cli info memory

# Monitor Redis performance
nself exec redis redis-cli info stats

# Check slow queries
nself exec redis redis-cli slowlog get 10

# Monitor hit ratio
nself exec redis redis-cli info stats | grep keyspace
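The hit ratio follows from the keyspace_hits and keyspace_misses counters in the INFO stats output. A small parser sketch (function name is ours):

```typescript
// Illustrative sketch: compute the cache hit ratio from the
// keyspace_hits/keyspace_misses counters in `redis-cli info stats` output.
function redisHitRatio(infoStats: string): number {
  const read = (key: string): number => {
    const m = infoStats.match(new RegExp(`^${key}:(\\d+)`, 'm'));
    return m ? parseInt(m[1], 10) : 0;
  };
  const hits = read('keyspace_hits');
  const misses = read('keyspace_misses');
  const total = hits + misses;
  return total === 0 ? 0 : hits / total;
}

const sample = 'keyspace_hits:900\nkeyspace_misses:100\n';
console.log(redisHitRatio(sample)); // → 0.9
```

A ratio that stays well below ~0.8 usually means keys are evicted too aggressively (maxmemory too small) or the access pattern is not cacheable.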

MinIO Storage Performance

Performance Configuration

# MinIO performance settings
MINIO_CACHE_DRIVES=/tmp/cache1,/tmp/cache2
MINIO_CACHE_EXCLUDE="*.jpg,*.jpeg,*.png"
MINIO_CACHE_QUOTA=80
MINIO_CACHE_AFTER=3
MINIO_CACHE_WATERMARK_LOW=70
MINIO_CACHE_WATERMARK_HIGH=90

# Compression settings
MINIO_COMPRESS=true
MINIO_COMPRESS_EXTENSIONS=".txt,.log,.csv,.json,.xml"
MINIO_COMPRESS_MIME_TYPES="text/*,application/json,application/xml"

Upload Optimization

// Multipart upload for large files (minio JS client)
import * as Minio from 'minio';
import { Readable } from 'stream';

const minioClient = new Minio.Client({
  endPoint: 'localhost',
  port: 9000,
  useSSL: false,
  accessKey: process.env.MINIO_ACCESS_KEY,
  secretKey: process.env.MINIO_SECRET_KEY,
  // The client switches to multipart automatically for large objects;
  // partSize controls the size of each part (here 10MB).
  partSize: 10 * 1024 * 1024
});

const uploadLargeFile = async (
  name: string,
  stream: Readable,
  size: number,
  contentType: string
) => {
  // putObject streams the data and uses multipart upload under the hood
  return minioClient.putObject('uploads', name, stream, size, {
    'Content-Type': contentType
  });
};
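With a 10MB part size, the part count for an upload follows directly, and S3-compatible stores such as MinIO cap a multipart upload at 10,000 parts, so the part size bounds the maximum object size. A quick sketch (names are ours):

```typescript
// Illustrative sketch: how many parts a multipart upload needs, and
// whether it fits the S3-compatible 10,000-part limit.
const MAX_PARTS = 10_000;

function multipartPlan(objectSize: number, partSize: number) {
  const parts = Math.ceil(objectSize / partSize);
  return { parts, fits: parts <= MAX_PARTS };
}

// A 1GB object with 10MB parts
console.log(multipartPlan(1024 ** 3, 10 * 1024 * 1024)); // → { parts: 103, fits: true }
```

At 10MB parts the ceiling is roughly 100GB per object; raise partSize if you expect larger uploads.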

Application-Level Optimization

Caching Strategies

Application-Level Caching

// Cache frequently accessed data
@Injectable()
export class UserService {
  constructor(
    private redis: RedisService,
    private userRepository: UserRepository
  ) {}

  async getUser(id: string): Promise<User> {
    // Try cache first
    const cached = await this.redis.get(`user:${id}`);
    if (cached) {
      return JSON.parse(cached);
    }

    // Fetch from database
    const user = await this.userRepository.findById(id);
    
    // Cache for 1 hour
    await this.redis.set(`user:${id}`, JSON.stringify(user), 3600);
    
    return user;
  }

  // Cache invalidation on update
  async updateUser(id: string, updates: Partial<User>): Promise<User> {
    const user = await this.userRepository.update(id, updates);
    
    // Remove from cache
    await this.redis.del(`user:${id}`);
    
    return user;
  }
}

GraphQL Response Caching

// Apollo Server caching (Apollo Server 2/3 style)
import { InMemoryLRUCache } from 'apollo-server-caching';

const server = new ApolloServer({
  typeDefs,
  resolvers,
  cache: new InMemoryLRUCache({
    maxSize: Math.pow(2, 20) * 100, // ~100MB
  }),
  cacheControl: {
    defaultMaxAge: 300, // cache responses for 5 minutes by default
  },
});

// Add cache hints to resolvers
const resolvers = {
  Query: {
    posts: (parent, args, context, info) => {
      info.cacheControl.setCacheHint({ maxAge: 60 }); // 1 minute
      return context.dataSources.posts.getPosts(args);
    },
  },
};

Connection Pooling

// Database connection pooling
import { Pool } from 'pg';

const pool = new Pool({
  host: process.env.POSTGRES_HOST,
  port: parseInt(process.env.POSTGRES_PORT ?? '5432', 10),
  database: process.env.POSTGRES_DB,
  user: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  // Connection pool settings
  max: 20, // Maximum number of connections
  min: 5,  // Minimum number of connections
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 2000,
  maxUses: 7500 // Close connection after this many queries
});

// HTTP connection pooling
import { Agent } from 'https';
import axios from 'axios';

const httpsAgent = new Agent({
  maxSockets: 25,
  maxFreeSockets: 10,
  keepAlive: true,
  keepAliveMsecs: 1000,
});

axios.defaults.httpsAgent = httpsAgent;

System-Level Optimization

Docker Performance

# Optimize Docker settings
# In .env.local
DOCKER_BUILDKIT=1
COMPOSE_DOCKER_CLI_BUILD=1

# Resource limits
POSTGRES_MEMORY_LIMIT=2048m
POSTGRES_CPU_LIMIT=2.0
HASURA_MEMORY_LIMIT=1024m
HASURA_CPU_LIMIT=1.0
REDIS_MEMORY_LIMIT=512m
REDIS_CPU_LIMIT=0.5

# Use tmpfs for temporary data
TMPFS_ENABLED=true
TMPFS_SIZE=100M

Nginx Optimization

# Nginx performance configuration
worker_processes auto;

events {
  worker_connections 4096;
}

# Enable gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types
  text/plain
  text/css
  text/xml
  text/javascript
  application/json
  application/javascript
  application/xml+rss
  application/atom+xml
  image/svg+xml;

# Enable caching
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
  expires 1y;
  add_header Cache-Control "public, immutable";
}

# Connection keep-alive
keepalive_timeout 65;
keepalive_requests 100;

# Buffer sizes
client_body_buffer_size 128k;
client_max_body_size 100m;
client_header_buffer_size 1k;
large_client_header_buffers 4 4k;
output_buffers 1 32k;
postpone_output 1460;

Performance Testing

Load Testing

# Install load testing tools
npm install -g artillery
npm install -g k6

# Test database performance
nself exec postgres pgbench -i -s 50 test_db
nself exec postgres pgbench -c 10 -j 2 -t 1000 test_db

# Test GraphQL API
# artillery_test.yml
config:
  target: 'http://localhost:8080'
  phases:
    - duration: 60
      arrivalRate: 10
scenarios:
  - name: "GraphQL Query"
    flow:
      - post:
          url: "/v1/graphql"
          headers:
            Content-Type: "application/json"
          json:
            query: "query { users(limit: 10) { id name } }"

# Run load test
artillery run artillery_test.yml

Benchmark Specific Services

# Redis benchmark
nself exec redis redis-benchmark -h redis -p 6379 -c 50 -n 10000

# MinIO benchmark
nself exec minio mc admin speedtest local --duration 60s

# PostgreSQL benchmark
nself exec postgres pgbench -c 10 -j 2 -t 1000 -h postgres -U postgres test_db

Monitoring and Alerting

Performance Metrics

# Enable comprehensive monitoring
MONITORING_ENABLED=true
PROMETHEUS_ENABLED=true
GRAFANA_ENABLED=true

# Key metrics to monitor:
# - Database query time
# - GraphQL response time
# - Memory usage
# - CPU usage
# - Disk I/O
# - Network throughput
# - Cache hit ratio
# - Connection pool usage

Alerting Rules

# prometheus_alerts.yml
groups:
- name: nself.rules
  rules:
  - alert: HighMemoryUsage
    expr: (node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes) / node_memory_MemTotal_bytes > 0.85
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: High memory usage detected
      
  - alert: DatabaseSlowQueries
    expr: pg_stat_activity_max_tx_duration{datname!="postgres"} > 30
    for: 2m
    labels:
      severity: critical
    annotations:
      summary: Slow database queries detected
      
  - alert: RedisHighMemory
    expr: redis_memory_used_bytes / redis_config_maxmemory > 0.9
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: Redis memory usage is high

Performance Best Practices

Database Best Practices

  • Index Strategy: Create indexes for all foreign keys and frequently queried columns
  • Query Optimization: Use EXPLAIN ANALYZE to understand query performance
  • Connection Pooling: Always use connection pooling in production
  • Partitioning: Consider table partitioning for large datasets
  • Regular Maintenance: Schedule regular VACUUM and ANALYZE operations

Application Best Practices

  • Caching Strategy: Implement multi-level caching (application, database, CDN)
  • Lazy Loading: Only load data when needed
  • Batch Operations: Group multiple operations together
  • Async Processing: Use background jobs for heavy operations
  • Resource Limits: Set appropriate memory and CPU limits
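The "Batch Operations" point above can be sketched as chunking work into fixed-size groups and running each group concurrently; the chunk size and function names are illustrative:

```typescript
// Illustrative sketch of batch processing: split items into fixed-size
// chunks, then process each chunk's items concurrently.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

async function processInBatches<T, R>(
  items: T[],
  batchSize: number,
  worker: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = [];
  for (const batch of chunk(items, batchSize)) {
    // Items within a batch run concurrently; batches run
    // sequentially so concurrency stays bounded by batchSize.
    results.push(...(await Promise.all(batch.map(worker))));
  }
  return results;
}

// Example: double 5 numbers in batches of 2
processInBatches([1, 2, 3, 4, 5], 2, async (n) => n * 2).then(console.log); // → [2, 4, 6, 8, 10]
```

The same shape works for bulk inserts, cache warm-ups, or fan-out HTTP calls; pick a batch size that keeps the downstream service below its connection limits.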

Monitoring Best Practices

  • Key Metrics: Focus on metrics that matter to your users
  • Baselines: Establish performance baselines for comparison
  • Alerting: Set up alerts for critical performance thresholds
  • Regular Reviews: Review performance regularly and optimize
  • Load Testing: Test performance under realistic load

Performance Troubleshooting

Common Performance Issues

Slow Database Queries

Symptoms: High response times, timeouts

Solutions: Add indexes, optimize queries, increase connection pool

High Memory Usage

Symptoms: Out of memory errors, swapping

Solutions: Increase memory limits, optimize caching, fix memory leaks

Connection Pool Exhaustion

Symptoms: Connection timeouts, queued requests

Solutions: Increase pool size, optimize connection usage, add connection pooling

Next Steps

Performance optimization is an ongoing process. Monitor your system regularly, identify bottlenecks, and continuously improve your nself deployment for optimal performance.