Monitor and manage your nself backend infrastructure with comprehensive dashboards, metrics, and real-time insights.
nself provides built-in dashboard capabilities through multiple interfaces, giving you complete visibility into your backend services, database performance, and system health.
Access the powerful Hasura GraphQL console for managing your API:
# Access at http://localhost:8080
# Features:
# - GraphQL query explorer
# - Database browser
# - API permissions
# - Event triggers
# - Actions and remote schemas
# - Metadata management
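If you prefer to hit the API directly rather than through the console, the same endpoint serves GraphQL over HTTP. A minimal sketch, assuming the defaults above and that HASURA_GRAPHQL_ADMIN_SECRET is set in your .env.local:
# Health check, then a trivial query against the GraphQL endpoint
curl http://localhost:8080/healthz
curl -s http://localhost:8080/v1/graphql \
  -H "Content-Type: application/json" \
  -H "x-hasura-admin-secret: $HASURA_GRAPHQL_ADMIN_SECRET" \
  -d '{"query": "query { __typename }"}'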
Manage your object storage with the MinIO web console:
# Access at http://localhost:9001
# Default credentials from .env.local:
# Username: admin
# Password: [generated password]
# Features:
# - Bucket management
# - File upload/download
# - Access policies
# - User management
# - Monitoring and metrics
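The same storage can be scripted with the MinIO client (mc). A brief sketch, assuming the S3 API is exposed on port 9000 (the console above sits on 9001) and that the root password lives in MINIO_ROOT_PASSWORD, which may be named differently in your setup:
# Register the local server, then list and create buckets
mc alias set nself-local http://localhost:9000 admin "$MINIO_ROOT_PASSWORD"
mc ls nself-local
mc mb nself-local/backups
mc cp ./backup.sql nself-local/backups/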
While nself doesn't include a built-in database dashboard, you can easily add popular tools:
# Add to your .env.local:
PGADMIN_ENABLED=true
PGADMIN_EMAIL=admin@example.com
PGADMIN_PASSWORD=your-secure-password
# Rebuild and start
nself build && nself up
# Access at http://localhost:5050
# Lightweight alternative - add to .env.local:
ADMINER_ENABLED=true
# Access at http://localhost:8081
# Connection details:
# System: PostgreSQL
# Server: postgres
# Username: postgres
# Password: [from .env.local]
# Database: [your project name]
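To confirm those connection details outside of a browser, you can connect with psql directly. A quick sketch, assuming PostgreSQL is published to the host on the default port 5432 (substitute your project's database name and the password from .env.local):
# Connect from the host using the same credentials Adminer uses
psql -h localhost -p 5432 -U postgres -d your_project_name -c '\conninfo'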
Monitor resource usage and system health:
# View current resource usage
nself resources
# Continuous monitoring
nself resources --watch
# JSON output for external tools
nself resources --format json
# Check all service status
nself status --verbose
# Health check with details
nself doctor
# Monitor specific service
nself logs -f postgres
nself logs -f hasura
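The JSON output is handy for feeding external tooling. A small sketch, assuming jq is installed (the exact field layout depends on your nself version, so inspect it before scripting against it):
# Pretty-print the snapshot and archive it for trend comparison
nself resources --format json | jq .
nself resources --format json > "resources-$(date +%Y%m%d-%H%M%S).json"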
For advanced monitoring, you can add Grafana and Prometheus:
# Add to .env.local:
GRAFANA_ENABLED=true
PROMETHEUS_ENABLED=true
# Configure Grafana
GRAFANA_ADMIN_PASSWORD=secure-password
# Rebuild with monitoring services
nself build && nself up
# Grafana:    http://localhost:3000 (admin / [your password])
# Prometheus: http://localhost:9090
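Once both services are up, you can confirm Prometheus is actually scraping with its HTTP API. A quick sketch against the default port listed above (jq is optional):
# List scrape target health and run an instant query
curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets[] | {job: .labels.job, health: .health}'
curl -s 'http://localhost:9090/api/v1/query?query=up'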
nself includes pre-configured Grafana dashboards out of the box.
Integrate your application metrics with the monitoring stack:
// Example: Express.js metrics endpoint using prom-client
const express = require('express');
const promClient = require('prom-client');
const app = express();
promClient.collectDefaultMetrics();
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', promClient.register.contentType);
  res.send(await promClient.register.metrics());
});
# Configure Prometheus to scrape your app
# In prometheus.yml:
scrape_configs:
  - job_name: 'my-app'
    static_configs:
      - targets: ['my-app:3000']
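Before pointing Prometheus at the app, it's worth confirming the endpoint actually serves metrics in exposition format. A quick check, assuming the example app above listens on port 3000:
# Peek at the exposition-format output your app exposes
curl -s http://localhost:3000/metrics | head -n 20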
# Custom Grafana dashboard for business metrics
# Example queries:
# - User signups per day
# - API request volume
# - Error rates by endpoint
# - Database query performance
# - Storage usage trends
# Example Grafana alerts
# High CPU usage
avg(cpu_usage) > 80
# High memory usage
avg(memory_usage) > 85
# Database connection issues
postgres_up == 0
# High error rate
rate(http_errors[5m]) > 0.1
# Enable log aggregation
LOG_AGGREGATION_ENABLED=true
# Access logs dashboard
nself logs --dashboard
# Search logs
nself logs --search "error" --since "1h"
nself logs --grep "database" postgres
Optional ELK stack (Elasticsearch, Logstash, Kibana) integration:
# Add to .env.local:
ELK_ENABLED=true
ELASTICSEARCH_PASSWORD=secure-password
# Access Kibana at http://localhost:5601
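Once the stack is running, you can confirm Elasticsearch is healthy before opening Kibana. A quick sketch, assuming Elasticsearch listens on its default port 9200 and that the password above belongs to the built-in elastic user (adjust if nself configures a different account):
# Check cluster health with the configured credentials
curl -s -u elastic:"$ELASTICSEARCH_PASSWORD" http://localhost:9200/_cluster/health | jq .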
-- Monitor slow queries (requires the pg_stat_statements extension;
-- on PostgreSQL 13+ the column is mean_exec_time instead of mean_time)
SELECT query, mean_time, calls
FROM pg_stat_statements
ORDER BY mean_time DESC
LIMIT 10;

-- Check connection pools
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state;

-- Table sizes
SELECT schemaname, tablename,
       pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;
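Note that pg_stat_statements is an extension: it must be listed in shared_preload_libraries and created in the database before the first query above returns rows. A sketch for enabling it and running the slow-query report non-interactively, assuming a compose service named postgres as in the Adminer settings earlier:
# Create the extension once (shared_preload_libraries must already include it),
# then run the slow-query report
docker compose exec postgres psql -U postgres -d your_project_name \
  -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"
docker compose exec postgres psql -U postgres -d your_project_name \
  -c "SELECT query, mean_time, calls FROM pg_stat_statements ORDER BY mean_time DESC LIMIT 10;"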
# Hasura query analytics
query GetQueryAnalytics {
query_collection {
name
average_execution_time
request_count
}
}
# Monitor GraphQL operations
nself logs hasura | grep "query-log"
# Enable audit logging
AUDIT_LOGGING_ENABLED=true
# Track:
# - Database schema changes
# - User permission modifications
# - Configuration updates
# - Service restarts
Key metrics are also accessible from mobile devices, since all of the dashboards above are web-based.
# Create role-based dashboards
# - Developer dashboard: logs, debugging, local metrics
# - DevOps dashboard: infrastructure, deployments, alerts
# - Business dashboard: user metrics, performance KPIs
# - Executive dashboard: high-level trends, costs
# Integrate with external tools
# Datadog API (metric submission endpoint)
POST /api/v1/series
{
"series": [
{
"metric": "nself.users.active",
"points": [[timestamp, value]]
}
]
}
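As a concrete sketch of that submission with curl (the v1 series endpoint; DD_API_KEY is assumed to hold your Datadog API key, and the metric value is illustrative):
# Submit a single metric point to Datadog
curl -X POST "https://api.datadoghq.com/api/v1/series" \
  -H "Content-Type: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -d "{\"series\": [{\"metric\": \"nself.users.active\", \"points\": [[$(date +%s), 42]]}]}"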
# Custom webhook notifications
curl -X POST webhook-url \
-H "Content-Type: application/json" \
-d '{"alert": "High CPU", "value": 95}'
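The same pattern works for chat tools; for example, a Slack incoming webhook accepts a simple JSON payload (the URL below is a placeholder for one you create in Slack):
# Post the alert to a Slack incoming webhook
curl -X POST "https://hooks.slack.com/services/T000/B000/XXXX" \
  -H "Content-Type: application/json" \
  -d '{"text": "nself alert: CPU usage at 95%"}'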