Services Deployment

This document describes the deployment strategies and configurations for the various services in the tracker REST API system.

Overview

The tracker system consists of multiple microservices that can be deployed independently or together:

  • API Service: Main REST API application
  • Notification Service: Handles alerts and notifications
  • Geocoding Service: Address resolution and reverse geocoding
  • Status Processing Service: Tracker status analysis and monitoring
  • TaskiQ Workers: Background task processing
  • Shared Infrastructure: Database, cache, and message broker

Deployment Strategies

Development Deployment

For local development, all services run together using Docker Compose with the dev profile:

# Start all development services
docker compose --profile dev up -d

# Services included:
# - dev (API with hot reload)
# - frontend-dev (React frontend)
# - admin-dev (Admin panel)
# - db (PostgreSQL with TimescaleDB)
# - dragonfly (Cache and message broker)

Development Features:

  • Hot code reloading
  • Debug ports exposed
  • Volume mounts for live editing
  • Simplified configuration

Production Deployment

Production deployment uses the prod profile with optimized containers:

# Start production services
docker compose --profile prod up -d

# Services included:
# - api (Production API server)
# - frontend (Static frontend)
# - admin (Static admin panel)
# - docs (Documentation site)
# - db (PostgreSQL with TimescaleDB)
# - dragonfly (Cache and message broker)

Production Features:

  • Optimized container images
  • Health checks enabled
  • Resource limits configured
  • Security hardening applied

Microservices Deployment

For larger deployments, services can be deployed independently:

# Deploy individual services
docker compose -f services/docker/docker-compose.taskiq.yml up -d
docker compose -f services/docker/docker-compose.notifications.yml up -d
docker compose -f services/docker/docker-compose.geocoding.yml up -d

Service Configuration

Environment Variables

Each service uses environment variables for configuration:

# Core API Service
POSTGRES_SERVER=db
POSTGRES_USER=tracker
POSTGRES_PASSWORD=secure_password
SECRET_KEY=jwt_signing_key
REDIS_HOST=dragonfly
REDIS_PASSWORD=redis_password

# Notification Service
NOTIFICATION_EMAIL_ENABLED=true
NOTIFICATION_SMS_ENABLED=false
NOTIFICATION_WEBHOOK_ENABLED=true

# Geocoding Service
GEOCODING_PRIMARY_PROVIDER=nominatim
GOOGLE_MAPS_API_KEY=optional_api_key
GEOCODING_CACHE_TTL=86400

# Status Processing
STATUS_OFFLINE_THRESHOLD=2.0
STATUS_LOW_BATTERY=15
STATUS_ENABLE_ALERTS=true
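These variables are typically kept in an `.env` file in the project root, which Docker Compose reads automatically. To publish them to other tools in the same shell session, the file can be sourced with auto-export enabled. A minimal sketch (the file contents here are illustrative):

```shell
# Create an illustrative .env file; in a real deployment this file is
# maintained by hand and kept out of version control.
cat > .env <<'EOF'
POSTGRES_SERVER=db
POSTGRES_USER=tracker
REDIS_HOST=dragonfly
EOF

# `set -a` auto-exports every variable assigned while it is active,
# so sourcing the file publishes the values to child processes.
set -a
. ./.env
set +a

echo "POSTGRES_SERVER=$POSTGRES_SERVER"
```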

Service Discovery

Services communicate using Docker's internal networking:

  • Database: db:5432
  • Cache: dragonfly:6379
  • API: api:8000 or dev:8000
  • Frontend: frontend:80
  • Admin: admin:80
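Inside the Compose network these hostnames resolve directly, so connection URLs can be assembled from the service names and the environment variables above. A sketch (the `tracker_db` database name matches the backup commands later in this guide; the Redis database index is an assumption):

```shell
# Assemble connection URLs from the internal service names.
POSTGRES_SERVER=db
POSTGRES_USER=tracker
POSTGRES_PASSWORD=secure_password
POSTGRES_DB=tracker_db

DATABASE_URL="postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_SERVER}:5432/${POSTGRES_DB}"
REDIS_URL="redis://dragonfly:6379/0"   # database index 0 is an assumption

echo "$DATABASE_URL"
```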

Load Balancing

For high availability, use a reverse proxy:

upstream api_backend {
    server api_1:8000;
    server api_2:8000;
    server api_3:8000;
}

server {
    listen 80;
    location /api/ {
        proxy_pass http://api_backend;
    }
}

Scaling Strategies

Horizontal Scaling

Scale individual services based on load:

# Scale API service
docker compose up -d --scale api=3

# Scale TaskiQ workers
docker compose up -d --scale taskiq-worker=5

# Scale specific worker queues
docker compose run taskiq-worker taskiq worker -Q notifications,geocoding

Resource Allocation

Configure resource limits in Docker Compose:

services:
  api:
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 1G
        reservations:
          cpus: "0.5"
          memory: 512M

Database Scaling

For database scaling:

  1. Read Replicas: Configure PostgreSQL read replicas
  2. Connection Pooling: Use PgBouncer for connection management
  3. Partitioning: Leverage TimescaleDB's automatic partitioning
  4. Caching: Use Dragonfly for frequently accessed data
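For point 2, a minimal PgBouncer configuration sketch; the pool sizes and auth file path are illustrative, not tuned values:

```ini
[databases]
tracker_db = host=db port=5432 dbname=tracker_db

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20
```

With this in place, services point their `POSTGRES_SERVER` at the PgBouncer host and port instead of connecting to `db:5432` directly.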

Monitoring and Health Checks

Health Endpoints

Each service exposes health check endpoints:

  • API: GET /health
  • TaskiQ: Worker status via monitoring endpoints
  • Database: Connection checks
  • Cache: Ping responses
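The API's `/health` endpoint can back a container-level health check in Compose. A sketch (interval and retry values are illustrative):

```yaml
services:
  api:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 15s
```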

Monitoring Stack

Recommended monitoring tools:

  1. Prometheus: Metrics collection
  2. Grafana: Visualization dashboards
  3. AlertManager: Alert routing
  4. Jaeger: Distributed tracing

Log Management

Centralized logging configuration:

services:
  api:
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
        labels: "service=api"

Security Considerations

Network Security

  1. Internal Networks: Use Docker networks for service isolation
  2. Firewall Rules: Restrict external access to necessary ports only
  3. TLS Encryption: Enable TLS for all external communications
  4. Secret Management: Use Docker secrets or external secret stores
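For point 4, a Compose-level secrets sketch, assuming the application supports the `_FILE` convention for reading credentials from a mounted file (the secret file path is an assumption and should be kept out of version control):

```yaml
services:
  api:
    secrets:
      - postgres_password
    environment:
      # The application reads the password from the mounted secret file
      # instead of a plain environment variable.
      POSTGRES_PASSWORD_FILE: /run/secrets/postgres_password

secrets:
  postgres_password:
    file: ./secrets/postgres_password.txt  # assumed path
```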

Access Control

  1. Authentication: JWT tokens for API access
  2. Authorization: Role-based access control
  3. API Keys: Secure external service integrations
  4. Database Security: Encrypted connections and restricted access

Data Protection

  1. Encryption at Rest: Database and file encryption
  2. Encryption in Transit: TLS for all communications
  3. Backup Security: Encrypted backups with secure storage
  4. Data Retention: Automated cleanup of old data

Backup and Recovery

Database Backups

Automated backup strategy:

# Daily database backup (-T disables TTY allocation so the dump stream is not corrupted)
docker compose exec -T db pg_dump -U tracker tracker_db > backup_$(date +%Y%m%d).sql

# Backup with compression
docker compose exec -T db pg_dump -U tracker tracker_db | gzip > backup_$(date +%Y%m%d).sql.gz
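The commands above can be wrapped in a small script that date-stamps the output and prunes old files. A sketch: the backup directory and 30-day retention are assumptions, and the dump step is guarded so the script degrades gracefully where Docker is unavailable.

```shell
BACKUP_DIR="${BACKUP_DIR:-./backups}"
BACKUP_FILE="${BACKUP_DIR}/backup_$(date +%Y%m%d).sql.gz"

mkdir -p "$BACKUP_DIR"

# Only attempt the dump when Docker is actually available.
if command -v docker >/dev/null 2>&1; then
  docker compose exec -T db pg_dump -U tracker tracker_db | gzip > "$BACKUP_FILE"
fi

# Drop backups older than 30 days (assumed retention policy).
find "$BACKUP_DIR" -name 'backup_*.sql.gz' -mtime +30 -delete

echo "$BACKUP_FILE"
```

Run it from cron (or a scheduler container) for the daily cadence described above.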

Application Data

  1. Static Files: Regular backup of uploaded files
  2. Configuration: Version control for configuration files
  3. Secrets: Secure backup of encryption keys and certificates

Disaster Recovery

  1. Recovery Testing: Regular testing of backup restoration
  2. Documentation: Clear recovery procedures
  3. Monitoring: Alerts for backup failures
  4. Geographic Distribution: Off-site backup storage

Performance Optimization

Caching Strategy

  1. Application Cache: Dragonfly for session and API response caching
  2. Database Cache: Query result caching
  3. CDN: Static asset delivery via CDN
  4. Browser Cache: Appropriate cache headers
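For points 3 and 4, the reverse proxy introduced earlier can attach cache headers at the edge. An Nginx sketch (paths and TTLs are illustrative; `api_backend` is the upstream defined in the load-balancing example):

```nginx
location /static/ {
    # Long-lived caching for fingerprinted static assets.
    expires 7d;
    add_header Cache-Control "public, immutable";
}

location /api/ {
    # API responses should not be cached by intermediaries by default.
    add_header Cache-Control "no-store";
    proxy_pass http://api_backend;
}
```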

Database Optimization

  1. Indexing: Proper database indexes for query performance
  2. Query Optimization: Efficient SQL queries and ORM usage
  3. Connection Pooling: Optimal database connection management
  4. Partitioning: TimescaleDB partitioning for time-series data

Application Performance

  1. Async Processing: Background tasks via TaskiQ
  2. API Optimization: Efficient endpoint design and pagination
  3. Resource Management: Proper memory and CPU usage
  4. Code Optimization: Performance profiling and optimization

Troubleshooting

Common Issues

  1. Service Startup: Check logs and dependencies
  2. Database Connections: Verify credentials and network connectivity
  3. Memory Issues: Monitor resource usage and adjust limits
  4. Performance: Profile slow queries and optimize

Debugging Tools

  1. Container Logs: docker compose logs service_name
  2. Resource Usage: docker stats
  3. Network Issues: docker network ls and connectivity tests
  4. Database Queries: PostgreSQL query logs and analysis

Recovery Procedures

  1. Service Restart: Graceful service restart procedures
  2. Data Recovery: Database and file restoration steps
  3. Rollback: Application version rollback procedures
  4. Emergency Contacts: On-call procedures and escalation

This deployment guide provides a comprehensive approach to deploying and managing the tracker REST API system across different environments and scales.