Microservices Architecture

This section documents the modernized microservices that support the tracker REST API. These services have been redesigned to align with the new FastAPI-based architecture while maintaining compatibility with the existing database schema.

Overview

The microservices provide distributed processing capabilities for tracker report fetching, location data processing, and other background tasks. They are built using the same patterns and technologies as the main API for consistency and maintainability.

Architecture

Shared Infrastructure

All services share common infrastructure components:

  • Configuration: Pydantic-based settings extending the main API configuration
  • Database: SQLAlchemy integration with connection pooling and retry logic (see the sketch after this list)
  • Redis Client: Centralized Redis management with clustering support
  • Logging: Structured logging with service-specific context
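
As one concrete illustration of the shared database layer, a pooled SQLAlchemy engine with pre-ping and a small retry helper might look like the sketch below. The connection URL, pool sizes, and retry policy are placeholders, not the services' actual values.

import time

from sqlalchemy import create_engine, text
from sqlalchemy.exc import OperationalError

# Pooled engine shared within a service; pool_pre_ping discards stale connections.
engine = create_engine(
    "postgresql+psycopg2://app:app@localhost:5432/tracker",  # placeholder URL
    pool_size=5,
    max_overflow=10,
    pool_pre_ping=True,
)

def execute_with_retry(statement, attempts: int = 3, backoff_s: float = 1.0):
    """Retry transient connection failures with simple linear backoff."""
    for attempt in range(1, attempts + 1):
        try:
            with engine.begin() as conn:
                return conn.execute(statement)
        except OperationalError:
            if attempt == attempts:
                raise
            time.sleep(backoff_s * attempt)

if __name__ == "__main__":
    # Cheap connectivity check of the kind a health endpoint might run.
    execute_with_retry(text("SELECT 1"))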

Service Design Patterns

  • FastAPI Integration: Services can run as both background workers and API servers (a minimal sketch follows this list)
  • Health Checks: Built-in health monitoring and metrics endpoints
  • Graceful Shutdown: Proper cleanup and resource management
  • Error Handling: Comprehensive error handling with structured logging
  • Type Safety: Full type hints and Pydantic schema validation
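
To make the dual-mode pattern concrete, the sketch below shows one way a service object could be driven either as a background worker loop with graceful shutdown or exposed through a small FastAPI app with a health endpoint. The class and function names are illustrative assumptions, not the actual service code.

import asyncio
import contextlib

from fastapi import FastAPI

class ExampleService:
    """Illustrative skeleton only; the real services are more involved."""

    def __init__(self) -> None:
        self._stopping = asyncio.Event()

    def is_stopping(self) -> bool:
        return self._stopping.is_set()

    def stop(self) -> None:
        self._stopping.set()

    async def run_worker(self) -> None:
        # Background loop that exits promptly once stop() is called.
        while not self.is_stopping():
            # ... process one unit of work here ...
            with contextlib.suppress(asyncio.TimeoutError):
                await asyncio.wait_for(self._stopping.wait(), timeout=5.0)

def create_app(service: ExampleService) -> FastAPI:
    app = FastAPI()

    @app.get("/health")
    async def health() -> dict:
        return {"status": "ok", "stopping": service.is_stopping()}

    return app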

Available Services

Unified Geocoding and Geofencing Service

The unified geocoding and geofencing service provides comprehensive location processing, including address resolution, geofence event detection, and alert management.

Key Features:

  • Immediate Processing: Real-time geocoding and geofencing of newly fetched locations (24/7)
  • Backfill Processing: Throttled processing of missed locations during off-peak hours (19:00-07:00 UTC)
  • Database Protection: Single-concurrency worker with resource limits to prevent CPU overload
  • Caching: Efficient geocoding cache with >80% hit rates
  • Event Detection: Accuracy-aware geofence entry/exit event generation (see the sketch below)
  • Alert Management: Configurable alerts based on geofence rules

Documentation: Available in the services directory
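
As a simplified illustration of accuracy-aware event detection, the sketch below only emits an entry or exit event when a fix's accuracy radius places it conclusively inside or outside a circular geofence; the exact rules used by the service may differ.

import math

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    # Great-circle distance in metres.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geofence_event(prev_inside: bool, lat: float, lon: float, accuracy_m: float,
                   fence_lat: float, fence_lon: float, fence_radius_m: float):
    """Return 'entry', 'exit', or None; ambiguous fixes produce no event."""
    d = haversine_m(lat, lon, fence_lat, fence_lon)
    if not prev_inside and d + accuracy_m <= fence_radius_m:
        return "entry"   # whole accuracy circle lies inside the fence
    if prev_inside and d - accuracy_m >= fence_radius_m:
        return "exit"    # whole accuracy circle lies outside the fence
    return None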

Tracker Fetcher Service

The tracker fetcher service handles the automated retrieval of Apple FindMy tracker reports.

Key Features:

  • Redis-based distributed queue management (sketched below)
  • Apple account authentication and report fetching
  • Location data storage with PostGIS integration
  • Real-time notifications via Redis pub/sub
  • Multiple run modes: worker, API server, or single batch
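
The queue and notification flow could look roughly like the sketch below, which uses a Redis list as the shared work queue and a pub/sub channel for update notifications. The key and channel names (tracker:queue, tracker:updates) and the two placeholder functions are invented for illustration.

import json

import redis  # redis-py

r = redis.Redis.from_url("redis://localhost:6379/0")

def fetch_reports(tracker_id: str) -> list[dict]:
    # Placeholder: the real worker authenticates and pulls FindMy reports.
    return []

def store_locations(tracker_id: str, locations: list[dict]) -> None:
    # Placeholder: the real worker writes rows with PostGIS geometry.
    pass

def enqueue_tracker(tracker_id: str) -> None:
    r.lpush("tracker:queue", tracker_id)

def process_next(timeout_s: int = 5) -> None:
    # Blocking pop lets several workers share one queue without double-processing.
    item = r.brpop("tracker:queue", timeout=timeout_s)
    if item is None:
        return
    _, raw_id = item
    tracker_id = raw_id.decode()
    store_locations(tracker_id, fetch_reports(tracker_id))
    # Tell subscribers (e.g. the API) that fresh locations are available.
    r.publish("tracker:updates", json.dumps({"tracker_id": tracker_id}))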

Endpoints:

  • GET /health - Service health status
  • GET /status/queue - Queue status information
  • GET /status/worker - Worker status information
  • POST /fetch/batch - Trigger batch processing
  • POST /fetch/tracker/{tracker_id} - Fetch specific tracker
  • POST /queue/initialize - Initialize the tracker queue
  • GET /metrics - Service metrics
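
For example, the monitoring endpoints above can be exercised with any HTTP client; the snippet below uses httpx and assumes the API server is running locally on port 8001, as in the development commands further down. Responses are assumed to be JSON, and the tracker ID is a placeholder.

import httpx

with httpx.Client(base_url="http://localhost:8001", timeout=10.0) as client:
    print(client.get("/health").json())
    print(client.get("/status/queue").json())

    # Trigger processing for a single tracker (placeholder ID).
    resp = client.post("/fetch/tracker/example-tracker-id")
    resp.raise_for_status()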

Deployment

Development

# Run as background worker
python -m services.tracker_fetcher.main worker

# Run as API server for monitoring
python -m services.tracker_fetcher.main api --port 8001

# Run single batch
python -m services.tracker_fetcher.main once

Docker Deployment

Development (with live code reloading):

# Start the main dev infrastructure first
docker-compose --profile dev up -d

# Then start the development services (with volume mounts)
docker-compose -f services/docker/docker-compose.services.yml --profile dev up -d

Production (using built images):

# Production deployment (no volume mounts, uses built image)
docker-compose -f services/docker/docker-compose.services.yml --profile services up -d

Key Differences:

  • Dev profile: Mounts the source code as volumes for live reloading and connects to the local db and dragonfly containers
  • Services profile: Uses the built image only and connects to the external Patroni and AWS ValKey clusters

Testing

# Test infrastructure components
python services/tracker_fetcher/test_service.py

# Check service health
curl http://localhost:8001/health

Migration Strategy

The new services can run alongside existing fetcher services, allowing for:

  1. Parallel Testing: Run both systems simultaneously
  2. Gradual Migration: Switch traffic incrementally
  3. Easy Rollback: Keep legacy services as backup
  4. Performance Comparison: Compare metrics between old and new

Configuration

Services use the same configuration system as the main API, with additional service-specific settings:

# Service-specific settings
FETCH_INTERVAL = 3600  # Fetch interval in seconds
MAX_KEYS_PER_BATCH = 50  # Maximum trackers per batch
PROCESSING_TIMEOUT = 300  # Processing timeout in seconds
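
Because the services reuse the main API's Pydantic-based configuration system, these values would typically surface as typed fields that can be overridden from the environment. A hypothetical sketch follows (the class name and defaults are assumptions, and the real class would extend the main API settings):

from pydantic_settings import BaseSettings

class FetcherSettings(BaseSettings):
    """Hypothetical typed view of the service-specific settings above."""

    fetch_interval: int = 3600        # FETCH_INTERVAL: seconds between batches
    max_keys_per_batch: int = 50      # MAX_KEYS_PER_BATCH: trackers per batch
    processing_timeout: int = 300     # PROCESSING_TIMEOUT: per-batch timeout in seconds

# Field names map to the environment variables above (case-insensitive).
settings = FetcherSettings()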

Monitoring

Services provide comprehensive monitoring through:

  • Health Checks: /health endpoint with detailed status
  • Metrics: Prometheus-compatible metrics endpoint
  • Structured Logging: JSON-formatted logs with context (see the sketch after this list)
  • Redis Monitoring: Queue status and worker health
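
As an illustration of the structured-logging piece, the sketch below emits one JSON object per log line with a service field for context, using only the standard library; the field names are illustrative rather than the services' actual log schema.

import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record as a single JSON line with service context."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "service": getattr(record, "service", "tracker-fetcher"),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("services.tracker_fetcher")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("batch complete", extra={"service": "tracker-fetcher"})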

Next Steps