Production deployment guide

Deploy your Screenshothis instance to production with confidence. This comprehensive guide covers deployment strategies for Docker, Kubernetes, and major cloud platforms, along with security, monitoring, and scaling considerations.
Stay updated: Always check the Screenshothis repository for the latest deployment instructions and platform-specific requirements.

Pre-deployment checklist

Ensure you have everything ready before deploying to production:
1. Infrastructure prerequisites

  • Database: Set up a managed PostgreSQL instance or prepare to self-host
  • Cache: Configure Redis for session management and caching
  • Storage: Choose and configure an S3-compatible storage service
  • Domain: Secure your domain name and SSL certificates
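Before provisioning anything, a quick preflight script can catch missing tools and unset variables early. This is a minimal sketch; the command and variable names below are assumptions you should adapt to your own setup:

```shell
#!/bin/sh
# Preflight sketch: verify required CLIs and environment variables exist.
# The specific commands and variable names are placeholders — adjust them.

# Return non-zero if any of the given commands is missing from PATH
preflight() {
    missing=0
    for cmd in "$@"; do
        command -v "$cmd" >/dev/null 2>&1 || { echo "missing command: $cmd"; missing=1; }
    done
    return "$missing"
}

# Return non-zero if any of the given environment variables is unset/empty
require_env() {
    missing=0
    for var in "$@"; do
        [ -n "$(printenv "$var")" ] || { echo "unset variable: $var"; missing=1; }
    done
    return "$missing"
}

if preflight docker curl openssl && require_env DATABASE_URL REDIS_URL; then
    echo "preflight OK"
fi
```

Run it on the deployment host before starting; a clean "preflight OK" means the basics are in place.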
2. Security requirements

  • Environment variables: Configure all required settings for production
  • Secrets: Generate strong, unique secrets for authentication and encryption
  • Access control: Set up proper IAM roles and security groups
3. Monitoring setup

  • Logging: Prepare a centralized logging solution
  • Metrics: Set up application and infrastructure monitoring
  • Alerts: Configure alerts for critical system events
  • Health checks: Plan your health check and uptime monitoring strategy

Docker deployment

Choose the Docker deployment strategy that best fits your infrastructure needs:

Single container deployment

Deploy quickly with a single Docker container for small to medium workloads:
1. Build your application

# Build the Screenshothis image
docker build -t screenshothis:latest .

# Verify the build
docker images | grep screenshothis
2. Run in production

# Deploy with production configuration
docker run -d \
  --name screenshothis \
  --env-file .env.production \
  -p 3000:3000 \
  --restart unless-stopped \
  --memory="2g" \
  --cpus="1.0" \
  screenshothis:latest
Your application should be accessible at http://your-server:3000
3. Verify deployment

# Check container status
docker ps | grep screenshothis

# Test health endpoint
curl -f http://localhost:3000/health

Multi-container deployment with Docker Compose

Deploy a complete production stack with database, cache, and reverse proxy:
1. Create production compose file

Save this as docker-compose.prod.yml:
version: '3.8'

services:
  screenshothis:
    build: .
    env_file:
      - .env.production
    ports:
      - "3000:3000"
    depends_on:
      - postgres
      - redis
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 2G
          cpus: '1.0'
        reservations:
          memory: 1G
          cpus: '0.5'
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: screenshothis
      POSTGRES_USER: screenshothis
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_INITDB_ARGS: "--encoding=UTF8 --lc-collate=C --lc-ctype=C"
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./postgresql.conf:/etc/postgresql/postgresql.conf:ro
    restart: unless-stopped
    shm_size: 256mb

  redis:
    image: redis:7-alpine
    command: redis-server --requirepass ${REDIS_PASSWORD} --maxmemory 512mb --maxmemory-policy allkeys-lru
    volumes:
      - redis_data:/data
    restart: unless-stopped

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./ssl:/etc/ssl/certs:ro
    depends_on:
      - screenshothis
    restart: unless-stopped

volumes:
  postgres_data:
    driver: local
  redis_data:
    driver: local
2. Deploy the stack

# Start all services
docker-compose -f docker-compose.prod.yml up -d

# Check service status
docker-compose -f docker-compose.prod.yml ps

# Follow logs for all services
docker-compose -f docker-compose.prod.yml logs -f
3. Verify deployment

# Test application health
curl -f http://localhost:3000/health

# Check database connection
docker-compose -f docker-compose.prod.yml exec postgres psql -U screenshothis -d screenshothis -c "SELECT version();"

# Test Redis connection
docker-compose -f docker-compose.prod.yml exec redis redis-cli ping
All services should report healthy status and respond to connection tests

NGINX reverse proxy configuration

Configure NGINX as a reverse proxy with SSL termination and optimizations:
1. Create NGINX configuration

Save this as nginx.conf:
events {
    worker_connections 1024;
    multi_accept on;
    use epoll;
}

http {
    # Security headers
    add_header X-Frame-Options DENY always;
    add_header X-Content-Type-Options nosniff always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # Performance settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    gzip on;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml text/javascript;

    # Rate limiting
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

    upstream screenshothis {
        server screenshothis:3000 max_fails=3 fail_timeout=30s;
        keepalive 32;
    }

    # HTTP to HTTPS redirect
    server {
        listen 80;
        server_name api.yourdomain.com;
        return 301 https://$server_name$request_uri;
    }

    # HTTPS server
    server {
        listen 443 ssl http2;
        server_name api.yourdomain.com;

        # SSL configuration
        ssl_certificate /etc/ssl/certs/cert.pem;
        ssl_certificate_key /etc/ssl/certs/key.pem;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_prefer_server_ciphers off;
        ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;

        # General settings
        client_max_body_size 10M;
        proxy_read_timeout 60s;
        proxy_connect_timeout 5s;

        # Main application
        location / {
            limit_req zone=api burst=20 nodelay;

            proxy_pass http://screenshothis;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Connection "";
            proxy_http_version 1.1;
        }

        # Health checks (no rate limiting)
        location /health {
            proxy_pass http://screenshothis/health;
            access_log off;
            proxy_set_header Host $host;
        }

        # Security - block access to sensitive files
        location ~ /\. {
            deny all;
            access_log off;
            log_not_found off;
        }
    }
}
2. Obtain SSL certificates

Set up SSL certificates using Let’s Encrypt:
# Install Certbot
sudo apt update && sudo apt install certbot python3-certbot-nginx

# Get certificates
sudo certbot --nginx -d api.yourdomain.com

# Test automatic renewal
sudo certbot renew --dry-run

# Set up auto-renewal (append to the existing crontab instead of replacing it)
(sudo crontab -l 2>/dev/null; echo "0 12 * * * /usr/bin/certbot renew --quiet") | sudo crontab -
3. Test configuration

# Test NGINX configuration
docker-compose -f docker-compose.prod.yml exec nginx nginx -t

# Reload NGINX if needed
docker-compose -f docker-compose.prod.yml exec nginx nginx -s reload

# Test SSL and proxy
curl -I https://api.yourdomain.com/health
You should see SSL certificate information and successful proxy responses

Kubernetes deployment

Deploy Screenshothis on Kubernetes for enterprise-grade scalability and reliability:
1. Create namespace and configuration

Set up the Kubernetes namespace and configuration:
# k8s/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: screenshothis
  labels:
    name: screenshothis
    environment: production

---
# k8s/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: screenshothis-config
  namespace: screenshothis
  labels:
    app: screenshothis
data:
  NODE_ENV: "production"
  PORT: "3000"
  LOG_LEVEL: "info"
  MAX_SCREENSHOT_WIDTH: "3840"
  MAX_SCREENSHOT_HEIGHT: "2160"
  MAX_CONCURRENT_SCREENSHOTS: "20"
  RATE_LIMIT_MAX_REQUESTS: "1000"
  RATE_LIMIT_WINDOW_MS: "60000"
  DEFAULT_API_KEY_PREFIX: "ss_live_"
2. Configure secrets

Create secrets for sensitive configuration:
# k8s/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: screenshothis-secrets
  namespace: screenshothis
  labels:
    app: screenshothis
type: Opaque
stringData:
  # Database configuration
  DATABASE_URL: "postgresql://screenshothis:your-password@postgres.screenshothis:5432/screenshothis"

  # Redis configuration
  REDIS_URL: "redis://:your-password@redis.screenshothis:6379"

  # Authentication
  BETTER_AUTH_SECRET: "your-super-secure-random-secret-minimum-32-chars"

  # S3 Storage configuration
  AWS_ACCESS_KEY_ID: "your-s3-access-key"
  AWS_SECRET_ACCESS_KEY: "your-s3-secret-key"
  AWS_REGION: "us-east-1"
  AWS_BUCKET: "your-production-bucket"
  AWS_URL: "https://s3.us-east-1.amazonaws.com"

  # Application URLs
  APP_URL: "https://api.yourdomain.com"
  WEB_URL: "https://yourdomain.com"
Replace all placeholder values with your actual production credentials. Never commit secrets to version control.
3. Apply base configuration

# Create namespace and base configuration
kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/configmap.yaml
kubectl apply -f k8s/secrets.yaml

# Verify resources
kubectl get all -n screenshothis

Application deployment and services

Create the main application deployment with service and ingress:
1. Create deployment manifest

Save this as k8s/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: screenshothis
  namespace: screenshothis
  labels:
    app: screenshothis
    version: v1
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: screenshothis
  template:
    metadata:
      labels:
        app: screenshothis
        version: v1
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9090"
        prometheus.io/path: "/metrics"
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
        fsGroup: 1001
      containers:
      - name: screenshothis
        image: screenshothis:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
          name: http
          protocol: TCP
        - containerPort: 9090
          name: metrics
          protocol: TCP
        envFrom:
        - configMapRef:
            name: screenshothis-config
        - secretRef:
            name: screenshothis-secrets
        livenessProbe:
          httpGet:
            path: /health/live
            port: 3000
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /health/ready
            port: 3000
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 5
          timeoutSeconds: 3
          failureThreshold: 3
        startupProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 3
          failureThreshold: 30
        resources:
          requests:
            memory: "1Gi"
            cpu: "500m"
            ephemeral-storage: "1Gi"
          limits:
            memory: "2Gi"
            cpu: "2000m"
            ephemeral-storage: "5Gi"
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: false
          runAsNonRoot: true
          runAsUser: 1001
          capabilities:
            drop:
            - ALL
            add:
            - SYS_ADMIN  # Required for Chromium

---
apiVersion: v1
kind: Service
metadata:
  name: screenshothis-service
  namespace: screenshothis
  labels:
    app: screenshothis
spec:
  selector:
    app: screenshothis
  ports:
  - name: http
    port: 80
    targetPort: 3000
    protocol: TCP
  - name: metrics
    port: 9090
    targetPort: 9090
    protocol: TCP
  type: ClusterIP

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: screenshothis-ingress
  namespace: screenshothis
  labels:
    app: screenshothis
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "10"
    nginx.ingress.kubernetes.io/limit-rps: "10"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - api.yourdomain.com
    secretName: screenshothis-tls
  rules:
  - host: api.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: screenshothis-service
            port:
              number: 80
2. Deploy to cluster

# Apply the deployment
kubectl apply -f k8s/deployment.yaml

# Check rollout status
kubectl rollout status deployment/screenshothis -n screenshothis

# Verify pods are running
kubectl get pods -n screenshothis -l app=screenshothis
3. Verify deployment

# Check all resources
kubectl get all -n screenshothis

# Test application health (give port-forward a moment to establish)
kubectl port-forward svc/screenshothis-service 8080:80 -n screenshothis &
sleep 2 && curl -f http://localhost:8080/health

# Check ingress
kubectl get ingress -n screenshothis
All pods should be in "Running" status and health checks should pass.

Cloud platform deployments

Deploy to popular cloud platforms with platform-specific optimizations:
Deploy on AWS using EC2, RDS, and ElastiCache for a fully managed solution:
1. Set up infrastructure

# Create VPC and security groups first (recommended)
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-security-group --group-name screenshothis-sg --description "Screenshothis security group"

# Launch EC2 instance
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type t3.medium \
  --key-name your-key-pair \
  --security-group-ids sg-your-app-sg \
  --subnet-id subnet-your-subnet \
  --user-data file://install-docker.sh
2. Create managed services

# Generate the database password first so it can be stored securely
DB_PASSWORD=$(openssl rand -base64 24)
echo "Save this password in your secrets manager: $DB_PASSWORD"

# Create RDS PostgreSQL instance
aws rds create-db-instance \
  --db-instance-identifier screenshothis-db \
  --db-instance-class db.t3.micro \
  --engine postgres \
  --engine-version 15.3 \
  --master-username screenshothis \
  --master-user-password "$DB_PASSWORD" \
  --allocated-storage 20 \
  --storage-type gp2 \
  --vpc-security-group-ids sg-your-db-sg

# Create ElastiCache Redis cluster
aws elasticache create-cache-cluster \
  --cache-cluster-id screenshothis-redis \
  --cache-node-type cache.t3.micro \
  --engine redis \
  --engine-version 7.0 \
  --num-cache-nodes 1 \
  --security-group-ids sg-your-redis-sg
3. Deploy application

# SSH to EC2 instance and deploy
ssh -i your-key.pem ubuntu@your-ec2-ip

# Clone and configure
git clone https://github.com/screenshothis/screenshothis.git
cd screenshothis
cp .env.example .env.production

# Configure with AWS endpoints
echo "DATABASE_URL=postgresql://screenshothis:password@your-rds-endpoint:5432/screenshothis" >> .env.production
echo "REDIS_URL=redis://your-elasticache-endpoint:6379" >> .env.production

# Deploy
docker-compose -f docker-compose.prod.yml up -d

Production configuration

Configure your environment variables for optimal production performance:

Essential environment variables
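
The variable names below mirror the Kubernetes ConfigMap and Secret shown earlier in this guide; treat the file as a sketch and verify the exact names against your Screenshothis version:

```bash
# .env.production (sketch — values are placeholders, never commit this file)
NODE_ENV=production
PORT=3000
LOG_LEVEL=info

DATABASE_URL=postgresql://screenshothis:your-password@your-db-host:5432/screenshothis
REDIS_URL=redis://:your-password@your-redis-host:6379

BETTER_AUTH_SECRET=your-super-secure-random-secret-minimum-32-chars

AWS_ACCESS_KEY_ID=your-s3-access-key
AWS_SECRET_ACCESS_KEY=your-s3-secret-key
AWS_REGION=us-east-1
AWS_BUCKET=your-production-bucket
AWS_URL=https://s3.us-east-1.amazonaws.com

APP_URL=https://api.yourdomain.com
WEB_URL=https://yourdomain.com
```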

SSL/TLS configuration

Secure your deployment with proper SSL/TLS certificates:
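For production, use the Let's Encrypt flow shown in the NGINX section. For local or staging tests, a short-lived self-signed certificate is enough; the file names below match the `./ssl` mount used by the NGINX compose service:

```shell
#!/bin/sh
# Self-signed certificate for local/staging tests ONLY — browsers and clients
# will (correctly) warn about it. Production should use Let's Encrypt.
mkdir -p ssl
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout ssl/key.pem -out ssl/cert.pem \
    -days 30 -subj "/CN=api.yourdomain.com"

# Inspect the certificate's subject and expiry date
openssl x509 -in ssl/cert.pem -noout -subject -enddate
```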

Monitoring and logging

Health checks

The application provides several health check endpoints:
  • /health - Basic health check
  • /health/live - Liveness probe
  • /health/ready - Readiness probe
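A small wrapper can probe all three endpoints in one pass. The helper functions below are a sketch, not part of Screenshothis itself:

```shell
#!/bin/sh
# Hypothetical helpers (not part of Screenshothis): build and probe the
# three health URLs for a given base URL.
health_urls() {
    base="$1"
    for ep in /health /health/live /health/ready; do
        printf '%s%s\n' "$base" "$ep"
    done
}

check_health() {
    health_urls "$1" | while read -r url; do
        # -f makes curl fail on HTTP errors; --max-time bounds slow endpoints
        if curl -fsS --max-time 5 "$url" >/dev/null 2>&1; then
            echo "OK   $url"
        else
            echo "FAIL $url"
        fi
    done
}
```

Usage: `check_health http://localhost:3000` after deployment; any FAIL line warrants a look at the container logs.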

Prometheus metrics

# docker-compose.monitoring.yml
version: '3.8'
services:
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml

  grafana:
    image: grafana/grafana
    ports:
      - "3001:3000"
    environment:
      # Set a strong password via GRAFANA_ADMIN_PASSWORD before exposing Grafana
      GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_ADMIN_PASSWORD:-admin}
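
The compose file above mounts a `prometheus.yml` that still needs to be written. A minimal scrape config might look like the sketch below; the target assumes the application's metrics port (9090, as in the Kubernetes annotations earlier) is reachable from the Prometheus container, which you should verify for your network setup:

```yaml
# prometheus.yml — minimal scrape config (sketch)
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: screenshothis
    metrics_path: /metrics
    static_configs:
      # Adjust host:port to wherever the app's metrics endpoint is reachable
      - targets: ['screenshothis:9090']
```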

Log management

# Add to docker-compose.prod.yml
services:
  screenshothis:
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

Scaling considerations

Horizontal scaling

# k8s/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: screenshothis-hpa
  namespace: screenshothis
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: screenshothis
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70

Database scaling

  • Use read replicas for read-heavy workloads
  • Consider connection pooling (PgBouncer)
  • Monitor query performance
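
For pooling, a minimal PgBouncer configuration in transaction mode is sketched below; hostnames, pool sizes, and the auth file path are placeholders to adapt. Point the application's DATABASE_URL at port 6432 instead of 5432 once it is in place:

```ini
; pgbouncer.ini — minimal transaction-pooling sketch
[databases]
screenshothis = host=your-db-host port=5432 dbname=screenshothis

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 200
default_pool_size = 20
```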

Redis scaling

  • Use Redis Cluster for high availability
  • Consider Redis Sentinel for failover
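
A minimal Sentinel setup runs three sentinel processes (one per host) that monitor the master and promote a replica on failure. A sketch of `sentinel.conf`, with the master address and password as placeholders:

```conf
# sentinel.conf — minimal sketch; run one sentinel per host, quorum of 2
port 26379
sentinel monitor mymaster 10.0.0.10 6379 2
sentinel auth-pass mymaster your-redis-password
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
```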

Backup and recovery

Database backups

#!/bin/bash
# Automated PostgreSQL backup
set -euo pipefail
DATE=$(date +%Y%m%d_%H%M%S)
pg_dump "$DATABASE_URL" > "backup_${DATE}.sql"
aws s3 cp "backup_${DATE}.sql" s3://your-backup-bucket/
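Backups are only useful if restores are tested. The sketch below shows the restore command and a simple local retention sweep; the retention window and file naming are assumptions to adapt:

```shell
#!/bin/sh
# Restore (run against a scratch database first to verify the dump works):
#   psql "$DATABASE_URL" < backup_20240101_000000.sql

# Delete local backup files older than a retention window (default 14 days);
# S3-side retention is better handled with a bucket lifecycle rule.
prune_backups() {
    dir="$1"
    days="${2:-14}"
    find "$dir" -name 'backup_*.sql' -type f -mtime +"$days" -print -delete
}
```

Usage: `prune_backups /var/backups/screenshothis 14` from the same cron job that uploads to S3.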

Application data backup

# Backup uploaded files (if using local storage)
tar -czf screenshots_backup_$(date +%Y%m%d).tar.gz /path/to/screenshots
aws s3 cp screenshots_backup_*.tar.gz s3://your-backup-bucket/

Security hardening

Firewall configuration

# UFW firewall rules
sudo ufw allow ssh
sudo ufw allow 80
sudo ufw allow 443
sudo ufw enable

Docker security

# Use non-root user
FROM node:18-alpine
RUN addgroup -g 1001 -S nodejs
RUN adduser -S screenshothis -u 1001
USER screenshothis

Troubleshooting deployment issues

Common deployment problems and solutions:

Port binding issues

# Check what's using the port (ss replaces netstat on newer systems)
sudo ss -tulpn | grep :3000
sudo lsof -i :3000

Database connection issues

# Test database connection
psql $DATABASE_URL -c "SELECT version();"

Storage issues

# Test S3 connection
aws s3 ls s3://your-bucket --region your-region

Next steps

Complete your deployment by reviewing the related guides in this documentation.

Deployment checklist

Ensure your deployment is production-ready:
  • Security: SSL certificates configured and auto-renewal set up
  • Monitoring: Health checks and logging configured
  • Backup: Database backup strategy implemented
  • Scaling: Resource limits and auto-scaling configured
  • Testing: All endpoints tested and responding correctly
  • Documentation: Internal deployment docs updated

Getting help