Services Deployment
This document describes the deployment strategies and configurations for the various services in the tracker REST API system.
Overview
The tracker system consists of multiple microservices that can be deployed independently or together:
- API Service: Main REST API application
- Notification Service: Handles alerts and notifications
- Geocoding Service: Address resolution and reverse geocoding
- Status Processing Service: Tracker status analysis and monitoring
- TaskiQ Workers: Background task processing
- Shared Infrastructure: Database, cache, and message broker
Deployment Strategies
Development Deployment
For local development, all services run together using Docker Compose with the dev profile:
```bash
# Start all development services
docker compose --profile dev up -d

# Services included:
# - dev (API with hot reload)
# - frontend-dev (React frontend)
# - admin-dev (Admin panel)
# - db (PostgreSQL with TimescaleDB)
# - dragonfly (Cache and message broker)
```
Development Features:
- Hot code reloading
- Debug ports exposed
- Volume mounts for live editing
- Simplified configuration
Production Deployment
Production deployment uses the prod profile with optimized containers:
```bash
# Start production services
docker compose --profile prod up -d

# Services included:
# - api (Production API server)
# - frontend (Static frontend)
# - admin (Static admin panel)
# - docs (Documentation site)
# - db (PostgreSQL with TimescaleDB)
# - dragonfly (Cache and message broker)
```
Production Features:
- Optimized container images
- Health checks enabled
- Resource limits configured
- Security hardening applied
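The health checks mentioned above can be declared directly in the Compose file. A minimal sketch for the API container; the `/health` path, port, and timing values are assumptions to adjust for your image:

```yaml
services:
  api:
    healthcheck:
      # Probe the API's health endpoint from inside the container
      test: ["CMD", "curl", "-fsS", "http://localhost:8000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 15s
```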
Microservices Deployment
For larger deployments, services can be deployed independently:
```bash
# Deploy individual services
docker compose -f services/docker/docker-compose.taskiq.yml up -d
docker compose -f services/docker/docker-compose.notifications.yml up -d
docker compose -f services/docker/docker-compose.geocoding.yml up -d
```
Service Configuration
Environment Variables
Each service uses environment variables for configuration:
```bash
# Core API Service
POSTGRES_SERVER=db
POSTGRES_USER=tracker
POSTGRES_PASSWORD=secure_password
SECRET_KEY=jwt_signing_key
REDIS_HOST=dragonfly
REDIS_PASSWORD=redis_password

# Notification Service
NOTIFICATION_EMAIL_ENABLED=true
NOTIFICATION_SMS_ENABLED=false
NOTIFICATION_WEBHOOK_ENABLED=true

# Geocoding Service
GEOCODING_PRIMARY_PROVIDER=nominatim
GOOGLE_MAPS_API_KEY=optional_api_key
GEOCODING_CACHE_TTL=86400

# Status Processing
STATUS_OFFLINE_THRESHOLD=2.0
STATUS_LOW_BATTERY=15
STATUS_ENABLE_ALERTS=true
```
Service Discovery
Services communicate using Docker's internal networking:
- Database: `db:5432`
- Cache: `dragonfly:6379`
- API: `api:8000` or `dev:8000`
- Frontend: `frontend:80`
- Admin: `admin:80`
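Inside the Compose network these service names resolve directly, so connection URLs can be assembled from the environment variables above. A sketch; the database name `tracker_db` and the default ports are assumptions:

```shell
# Build connection URLs from the service-discovery names above.
# Defaults mirror the Compose service names; override via environment.
DATABASE_URL="postgresql://${POSTGRES_USER:-tracker}@${POSTGRES_SERVER:-db}:5432/tracker_db"
REDIS_URL="redis://${REDIS_HOST:-dragonfly}:6379/0"

echo "$DATABASE_URL"
echo "$REDIS_URL"
```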
Load Balancing
For high availability, run multiple API instances behind a reverse proxy such as Nginx:
```nginx
upstream api_backend {
    server api_1:8000;
    server api_2:8000;
    server api_3:8000;
}

server {
    listen 80;

    location /api/ {
        proxy_pass http://api_backend;
    }
}
```
Scaling Strategies
Horizontal Scaling
Scale individual services based on load:
```bash
# Scale API service
docker compose up -d --scale api=3

# Scale TaskiQ workers
docker compose up -d --scale taskiq-worker=5

# Scale specific worker queues
docker compose run taskiq-worker taskiq worker -Q notifications,geocoding
```
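Instead of ad-hoc `run` commands, queue-specific workers can be declared as dedicated Compose services so they scale independently. A sketch; the service name, image tag, and replica count are assumptions:

```yaml
services:
  taskiq-worker-notifications:
    image: tracker-api:latest
    # Dedicated worker consuming only the notifications queue
    command: taskiq worker -Q notifications
    deploy:
      replicas: 3
```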
Resource Allocation
Configure resource limits in Docker Compose:
```yaml
services:
  api:
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 1G
        reservations:
          cpus: "0.5"
          memory: 512M
```
Database Scaling
For database scaling:
- Read Replicas: Configure PostgreSQL read replicas
- Connection Pooling: Use PgBouncer for connection management
- Partitioning: Leverage TimescaleDB's automatic partitioning
- Caching: Use Dragonfly for frequently accessed data
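For the connection-pooling option, a minimal PgBouncer configuration sketch; the pool sizes are illustrative starting points, not tuned values:

```ini
[databases]
; Route pooled connections to the db service
tracker_db = host=db port=5432 dbname=tracker_db

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
; transaction pooling gives the best connection reuse for short queries
pool_mode = transaction
max_client_conn = 200
default_pool_size = 20
```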
Monitoring and Health Checks
Health Endpoints
Each service exposes health check endpoints:
- API: `GET /health`
- TaskiQ: worker status via monitoring endpoints
- Database: connection checks
- Cache: ping responses
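During deployments, a small retry helper can gate rollout steps on these endpoints. A sketch; the example URL is an assumption:

```shell
# Retry a health probe until it succeeds or attempts run out.
wait_healthy() {
  attempts=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$attempts" ]; then
      return 1
    fi
    sleep 1
  done
}

# Example: block until the API reports healthy (at most 30 tries)
# wait_healthy 30 curl -fsS http://localhost:8000/health
```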
Monitoring Stack
Recommended monitoring tools:
- Prometheus: Metrics collection
- Grafana: Visualization dashboards
- AlertManager: Alert routing
- Jaeger: Distributed tracing
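A minimal Prometheus scrape sketch for the API service. This assumes the API exposes a `/metrics` endpoint, which this stack does not guarantee out of the box:

```yaml
scrape_configs:
  - job_name: tracker-api
    metrics_path: /metrics
    static_configs:
      # Docker service name resolves inside the Compose network
      - targets: ["api:8000"]
```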
Log Management
Centralized logging configuration:
```yaml
services:
  api:
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
        labels: "service=api"
```
Security Considerations
Network Security
- Internal Networks: Use Docker networks for service isolation
- Firewall Rules: Restrict external access to necessary ports only
- TLS Encryption: Enable TLS for all external communications
- Secret Management: Use Docker secrets or external secret stores
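Docker secrets can replace plain environment variables for sensitive values. A sketch; the file path is an example, and reading the `_FILE` variant is an application-side convention your services would need to support:

```yaml
services:
  api:
    secrets:
      - postgres_password
    environment:
      # App reads the password from the mounted secret file
      POSTGRES_PASSWORD_FILE: /run/secrets/postgres_password

secrets:
  postgres_password:
    file: ./secrets/postgres_password.txt
```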
Access Control
- Authentication: JWT tokens for API access
- Authorization: Role-based access control
- API Keys: Secure external service integrations
- Database Security: Encrypted connections and restricted access
Data Protection
- Encryption at Rest: Database and file encryption
- Encryption in Transit: TLS for all communications
- Backup Security: Encrypted backups with secure storage
- Data Retention: Automated cleanup of old data
Backup and Recovery
Database Backups
Automated backup strategy:
```bash
# Daily database backup (-T disables TTY allocation so the redirect stays clean)
docker compose exec -T db pg_dump -U tracker tracker_db > backup_$(date +%Y%m%d).sql

# Backup with compression
docker compose exec -T db pg_dump -U tracker tracker_db | gzip > backup_$(date +%Y%m%d).sql.gz
```
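Before trusting a backup, verify the archive is readable. A sketch using a stand-in file; in practice, point `gzip -t` at the compressed `pg_dump` output:

```shell
# Create a stand-in compressed backup (replace with real pg_dump output).
printf 'SELECT 1;\n' | gzip > backup_demo.sql.gz

# gzip -t checks archive integrity without extracting anything.
gzip -t backup_demo.sql.gz && echo "backup readable"
```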
Application Data
- Static Files: Regular backup of uploaded files
- Configuration: Version control for configuration files
- Secrets: Secure backup of encryption keys and certificates
Disaster Recovery
- Recovery Testing: Regular testing of backup restoration
- Documentation: Clear recovery procedures
- Monitoring: Alerts for backup failures
- Geographic Distribution: Off-site backup storage
Performance Optimization
Caching Strategy
- Application Cache: Dragonfly for session and API response caching
- Database Cache: Query result caching
- CDN: Static asset delivery via CDN
- Browser Cache: Appropriate cache headers
Database Optimization
- Indexing: Proper database indexes for query performance
- Query Optimization: Efficient SQL queries and ORM usage
- Connection Pooling: Optimal database connection management
- Partitioning: TimescaleDB partitioning for time-series data
Application Performance
- Async Processing: Background tasks via TaskiQ
- API Optimization: Efficient endpoint design and pagination
- Resource Management: Proper memory and CPU usage
- Code Optimization: Performance profiling and optimization
Troubleshooting
Common Issues
- Service Startup: Check logs and dependencies
- Database Connections: Verify credentials and network connectivity
- Memory Issues: Monitor resource usage and adjust limits
- Performance: Profile slow queries and optimize
Debugging Tools
- Container Logs: `docker compose logs service_name`
- Resource Usage: `docker stats`
- Network Issues: `docker network ls` and connectivity tests
- Database Queries: PostgreSQL query logs and analysis
Recovery Procedures
- Service Restart: Graceful service restart procedures
- Data Recovery: Database and file restoration steps
- Rollback: Application version rollback procedures
- Emergency Contacts: On-call procedures and escalation
This deployment guide provides a comprehensive approach to deploying and managing the tracker REST API system across different environments and scales.