System Architecture

This document provides an overview of the Tracker System architecture, covering its components and containers and how they connect.

Architecture Overview

[Diagram: Tracker System Architecture]

The Tracker System is a distributed application built with a microservices architecture, consisting of multiple frontend applications, a REST API, background services, and supporting infrastructure components.

System Components

Frontend Applications

Admin Panel

  • Technology: React with TypeScript and Tailwind CSS
  • Ports: 3000 (development), 8080 (production)
  • Purpose: Administrative interface for managing clients, brands, production runs, trackers, and locations
  • Features:
      • User management with role-based access control
      • Interactive maps for location management
      • Dashboard with system statistics
      • Dark mode support

User Frontend

  • Technology: React with TypeScript and Tailwind CSS
  • Ports: 3100 (development), 8200 (production)
  • Purpose: End-user interface for viewing tracker information
  • Features:
      • Tracker status and location viewing
      • Image gallery
      • Responsive design with dark mode

API Layer

FastAPI (Main API)

  • Technology: FastAPI with Python
  • Ports: 8100 (development), 8000 (production)
  • Purpose: Core REST API providing all backend functionality
  • Features:
      • JWT-based authentication with role-based access control
      • OpenAPI/Swagger documentation
      • Client filtering and multi-tenancy
      • Performance monitoring and caching
      • CORS handling
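
To make these features concrete, here is a minimal sketch of how JWT validation, role-based access control, and CORS could be wired together in FastAPI. The secret key, role names, frontend origins, and the /trackers endpoint are illustrative assumptions, not the actual implementation.

```python
from fastapi import Depends, FastAPI, HTTPException, status
from fastapi.middleware.cors import CORSMiddleware
from fastapi.security import OAuth2PasswordBearer
import jwt  # PyJWT

app = FastAPI(title="Tracker API", docs_url="/docs")  # OpenAPI/Swagger UI at /docs

app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000", "http://localhost:3100"],  # admin + user frontends
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

oauth2_scheme = OAuth2PasswordBearer(tokenUrl="auth/login")
SECRET_KEY = "change-me"  # placeholder; the real key comes from environment configuration

def current_user(token: str = Depends(oauth2_scheme)) -> dict:
    """Decode and validate the JWT carried in the Authorization header."""
    try:
        return jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
    except jwt.PyJWTError:
        raise HTTPException(status.HTTP_401_UNAUTHORIZED, "Invalid or expired token")

def require_role(role: str):
    """Role-based access control as a reusable dependency."""
    def checker(user: dict = Depends(current_user)) -> dict:
        if role not in user.get("roles", []):
            raise HTTPException(status.HTTP_403_FORBIDDEN, "Insufficient permissions")
        return user
    return checker

@app.get("/trackers")
def list_trackers(user: dict = Depends(require_role("admin"))):
    # Placeholder response; the real endpoint applies client filtering and caching.
    return {"client_id": user.get("client_id"), "trackers": []}
```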

Documentation Service

  • Technology: MkDocs with Material theme
  • Port: 8001
  • Purpose: Serves comprehensive system documentation
  • Features:
      • API reference documentation
      • User guides and tutorials
      • Architecture documentation

Background Services

Tracker Services (Internal)

These services run as TaskiQ tasks within the main application:

  • Tracker Fetcher Service: Fetches tracker data from external APIs
  • Tracker Status Service: Processes tracker status updates and geofencing
  • Location Aggregator: Aggregates location data using TimescaleDB continuous aggregates
  • Geocoding Service: Handles address geocoding and reverse geocoding
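
As an illustration, an internal service of this kind can be expressed as a TaskiQ task bound to the Redis/Dragonfly broker. The function name, payload, and broker URL below are assumptions; the sketch only shows the general shape of defining and enqueuing a task.

```python
from taskiq_redis import ListQueueBroker

# Redis/Dragonfly acts as the queue backend.
broker = ListQueueBroker(url="redis://localhost:6379")

@broker.task
async def fetch_tracker_data(tracker_id: str) -> dict:
    """Tracker Fetcher Service: pull and store the latest report for one tracker."""
    # ... call the external API, normalise the payload, write it to PostgreSQL ...
    return {"tracker_id": tracker_id, "status": "fetched"}

async def enqueue_all(tracker_ids: list[str]) -> None:
    # Tasks are queued here and executed by TaskiQ worker processes.
    for tracker_id in tracker_ids:
        await fetch_tracker_data.kiq(tracker_id)
```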

External Fetcher Services

These services run as separate containers in the fetcher deployment:

  • Location History Refresher: Refreshes historical location data
  • Tracker Report Fetcher: Fetches tracker reports from Apple FindMy API
  • Batch Geocoding Service: Processes geocoding requests in batches
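
A standalone fetcher container typically boils down to a long-running loop deployed and scaled independently of the API process. The sketch below illustrates that shape; the polling interval and function name are assumptions.

```python
import asyncio

POLL_INTERVAL_SECONDS = 300  # assumed refresh cadence

async def refresh_location_history() -> None:
    """Location History Refresher: re-fetch and upsert historical location data."""
    # ... query stale trackers, call the external API, write results to PostgreSQL ...

async def main() -> None:
    # Runs forever inside its own container, independently of the API process.
    while True:
        try:
            await refresh_location_history()
        except Exception as exc:  # keep the container alive on transient failures
            print(f"refresh failed: {exc}")
        await asyncio.sleep(POLL_INTERVAL_SECONDS)

if __name__ == "__main__":
    asyncio.run(main())
```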

Infrastructure Components

Database Layer

PostgreSQL with Extensions

  • Port: 5432
  • Extensions: PostGIS, TimescaleDB
  • Purpose: Primary data storage
  • Features:
      • Geospatial data support with PostGIS
      • Time-series data optimization with TimescaleDB
      • Continuous aggregates for performance
      • Full-text search capabilities

Geocoding Cache

  • Purpose: Caches geocoding results to reduce API calls
  • Implementation: Database table with efficient indexing
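
To show how the two extensions fit together, the sketch below creates a location table with a PostGIS geometry column, turns it into a TimescaleDB hypertable, and adds a simple geocoding cache table. Table and column names and the connection string are assumptions rather than the actual schema.

```python
import asyncio
import asyncpg

DDL_STATEMENTS = (
    "CREATE EXTENSION IF NOT EXISTS postgis",
    "CREATE EXTENSION IF NOT EXISTS timescaledb",
    # Raw location fixes: one row per report, with a PostGIS point geometry.
    """
    CREATE TABLE IF NOT EXISTS tracker_locations (
        tracker_id  TEXT        NOT NULL,
        recorded_at TIMESTAMPTZ NOT NULL,
        geom        GEOMETRY(Point, 4326) NOT NULL
    )
    """,
    # Time-series optimisation: turn the table into a TimescaleDB hypertable.
    "SELECT create_hypertable('tracker_locations', 'recorded_at', if_not_exists => TRUE)",
    # Spatial index for location queries.
    "CREATE INDEX IF NOT EXISTS idx_tracker_locations_geom ON tracker_locations USING GIST (geom)",
    # Geocoding cache: a plain table keyed by a hash of the lookup string.
    """
    CREATE TABLE IF NOT EXISTS geocoding_cache (
        query_hash TEXT PRIMARY KEY,
        address    TEXT,
        geom       GEOMETRY(Point, 4326),
        created_at TIMESTAMPTZ DEFAULT now()
    )
    """,
)

async def init_schema(dsn: str = "postgresql://tracker:tracker@localhost:5432/tracker") -> None:
    conn = await asyncpg.connect(dsn)
    try:
        for statement in DDL_STATEMENTS:
            await conn.execute(statement)
    finally:
        await conn.close()

if __name__ == "__main__":
    asyncio.run(init_schema())
```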

Cache Layer

Redis/Dragonfly

  • Port: 6379
  • Purpose: Caching and session storage
  • Features:
      • API response caching
      • Session management
      • Task queue backend for TaskiQ
      • Performance optimization
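
A minimal sketch of Redis-backed response caching is shown below, assuming the async redis-py client; the key scheme and TTL are illustrative, and the real service may use a dedicated caching library instead.

```python
import json
from typing import Any, Awaitable, Callable

from redis.asyncio import Redis

redis = Redis(host="localhost", port=6379, decode_responses=True)

async def cached_json(
    key: str,
    loader: Callable[[], Awaitable[Any]],
    ttl_seconds: int = 60,
) -> Any:
    """Return a cached JSON payload, or compute it, cache it, and return it."""
    hit = await redis.get(key)
    if hit is not None:
        return json.loads(hit)          # cache hit
    value = await loader()              # cache miss: compute the response
    await redis.set(key, json.dumps(value), ex=ttl_seconds)
    return value

# Usage inside an endpoint handler (load_trackers is an assumed async helper):
#   trackers = await cached_json(f"trackers:{client_id}", lambda: load_trackers(client_id))
```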

Message Queue

TaskiQ

  • Purpose: Asynchronous task processing
  • Backend: Redis/Dragonfly
  • Features:
      • Background task execution
      • Scheduled tasks
      • Task monitoring and retry logic
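
Scheduled tasks can be expressed with TaskiQ's label-based scheduler, as in the sketch below; the cron expression, task name, and module paths are assumptions.

```python
from taskiq import TaskiqScheduler
from taskiq.schedule_sources import LabelScheduleSource
from taskiq_redis import ListQueueBroker

broker = ListQueueBroker(url="redis://localhost:6379")
scheduler = TaskiqScheduler(broker=broker, sources=[LabelScheduleSource(broker)])

@broker.task(schedule=[{"cron": "*/5 * * * *"}])  # every five minutes
async def aggregate_locations() -> None:
    """Periodic job: refresh location aggregates."""

# Workers and the scheduler run as separate processes, e.g.:
#   taskiq worker app.tasks:broker
#   taskiq scheduler app.tasks:scheduler
```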

External Services

Apple FindMy API

  • Purpose: Source of tracker location data
  • Authentication: Via Anisette server

Anisette Server

  • Deployment: AWS ECS container
  • Purpose: Provides authentication tokens for Apple services
  • Access: Internal service endpoint

Geocoding API

  • Purpose: Converts addresses to coordinates and vice versa
  • Usage: Batch processing and real-time geocoding

Data Flow

User Interactions

  1. Admin Users access the Admin Panel through the load balancer (AWS ELB in production, Nginx in development)
  2. End Users access the User Frontend through the same load balancer
  3. Both frontends communicate with the FastAPI backend via REST API calls

API Processing

  1. Authentication: JWT tokens validate user access
  2. Client Filtering: Multi-tenant data isolation
  3. Caching: Redis provides fast response times
  4. Database Operations: PostgreSQL handles data persistence
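
Putting the four steps together, a single read request might look like the sketch below: the JWT has already identified the caller, the query is scoped to the caller's client, Redis answers cache hits, and PostgreSQL answers misses. Helper names, the table layout, and the cache TTL are assumptions.

```python
import json

import asyncpg
from redis.asyncio import Redis

redis = Redis(host="localhost", port=6379, decode_responses=True)

async def get_trackers_for_request(db: asyncpg.Pool, user: dict) -> list[dict]:
    """Steps 1-2 are assumed done: `user` comes from a validated JWT."""
    client_id = user["client_id"]                 # multi-tenant scoping key
    cache_key = f"trackers:{client_id}"

    cached = await redis.get(cache_key)           # step 3: cache lookup
    if cached is not None:
        return json.loads(cached)

    rows = await db.fetch(                        # step 4: tenant-scoped query
        "SELECT id, name, status FROM trackers WHERE client_id = $1",
        client_id,
    )
    result = [dict(row) for row in rows]
    await redis.set(cache_key, json.dumps(result), ex=60)
    return result
```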

Background Processing

  1. Task Queuing: TaskiQ manages asynchronous tasks
  2. Data Fetching: Services retrieve data from external APIs
  3. Data Processing: Location aggregation and status updates
  4. Geocoding: Address resolution and caching
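
The geocoding step in particular benefits from the cache described earlier: look up a stored result first and only call the external API on a miss. The sketch below assumes the hypothetical geocoding_cache table from the database sketch and uses a placeholder for the external call.

```python
import hashlib

import asyncpg

async def call_geocoding_api(address: str) -> tuple[float, float]:
    """Placeholder for the external geocoding request; returns (lon, lat)."""
    raise NotImplementedError

async def geocode(db: asyncpg.Pool, address: str) -> tuple[float, float]:
    key = hashlib.sha256(address.strip().lower().encode()).hexdigest()

    row = await db.fetchrow(
        "SELECT ST_X(geom) AS lon, ST_Y(geom) AS lat"
        " FROM geocoding_cache WHERE query_hash = $1",
        key,
    )
    if row is not None:
        return row["lon"], row["lat"]             # cache hit: no external call

    lon, lat = await call_geocoding_api(address)  # cache miss: call the API
    await db.execute(
        "INSERT INTO geocoding_cache (query_hash, address, geom)"
        " VALUES ($1, $2, ST_SetSRID(ST_MakePoint($3, $4), 4326))"
        " ON CONFLICT (query_hash) DO NOTHING",
        key, address, lon, lat,
    )
    return lon, lat
```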

Deployment Architecture

Development Environment

  • Profiles: dev profile in Docker Compose
  • Hot Reload: Volume mounts enable live code updates
  • Debug Support: Debug ports exposed for development tools

Production Environment

  • Profiles: prod profile in Docker Compose
  • Optimization: Built containers with production optimizations
  • Scaling: Services can be scaled independently

Container Orchestration

  • Docker Compose: Manages all services and dependencies
  • Networks: Isolated network for service communication
  • Volumes: Persistent storage for database and cache data

Security Architecture

Authentication & Authorization

  • JWT Tokens: Secure API access
  • Role-Based Access Control: Different permission levels
  • Session Management: Redis-backed sessions

Network Security

  • Load Balancer: External traffic is handled by AWS ELB in production and by Nginx in development
  • Internal Networks: Services communicate on isolated networks
  • Port Isolation: Only necessary ports exposed externally

Data Security

  • Database Encryption: Sensitive data protection
  • API Key Management: Secure external service authentication
  • CORS Configuration: Controlled cross-origin access

Performance Optimization

Caching Strategy

  • Multi-Level Caching: Redis for API responses and sessions
  • Database Optimization: TimescaleDB for time-series data
  • Geocoding Cache: Reduces external API calls

Database Performance

  • Continuous Aggregates: Pre-computed time-series summaries
  • Spatial Indexing: PostGIS indexes for location queries
  • Connection Pooling: Efficient database connections
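
As an illustration of a continuous aggregate, the sketch below pre-computes hourly summaries over the hypothetical tracker_locations hypertable from the earlier schema sketch; the view name, bucket size, and refresh policy are assumptions. Note that TimescaleDB requires continuous aggregates to be created outside an explicit transaction block, so each statement is executed on its own.

```python
import asyncio
import asyncpg

HOURLY_AGGREGATE_DDL = """
CREATE MATERIALIZED VIEW tracker_locations_hourly
WITH (timescaledb.continuous) AS
SELECT
    tracker_id,
    time_bucket('1 hour', recorded_at) AS bucket,
    count(*)        AS fixes,
    avg(ST_X(geom)) AS avg_lon,
    avg(ST_Y(geom)) AS avg_lat
FROM tracker_locations
GROUP BY tracker_id, bucket
WITH NO DATA
"""

REFRESH_POLICY_DDL = """
SELECT add_continuous_aggregate_policy(
    'tracker_locations_hourly',
    start_offset      => INTERVAL '1 day',
    end_offset        => INTERVAL '1 hour',
    schedule_interval => INTERVAL '1 hour'
)
"""

async def create_aggregate(dsn: str = "postgresql://tracker:tracker@localhost:5432/tracker") -> None:
    conn = await asyncpg.connect(dsn)
    try:
        # Each statement runs on its own, outside an explicit transaction block.
        await conn.execute(HOURLY_AGGREGATE_DDL)
        await conn.execute(REFRESH_POLICY_DDL)
    finally:
        await conn.close()

if __name__ == "__main__":
    asyncio.run(create_aggregate())
```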

Monitoring & Observability

  • Performance Metrics: Built-in performance monitoring
  • Logging: Comprehensive application logging
  • Health Checks: Service availability monitoring
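
A health check endpoint is usually a thin wrapper over a few dependency probes; the sketch below shows one possible shape, where the specific checks and the /health path are assumptions.

```python
from fastapi import FastAPI, Response, status

app = FastAPI()

async def check_database() -> bool:
    return True  # e.g. run "SELECT 1" against PostgreSQL

async def check_cache() -> bool:
    return True  # e.g. PING Redis/Dragonfly

@app.get("/health")
async def health(response: Response) -> dict:
    checks = {
        "database": await check_database(),
        "cache": await check_cache(),
    }
    healthy = all(checks.values())
    if not healthy:
        response.status_code = status.HTTP_503_SERVICE_UNAVAILABLE
    return {"status": "ok" if healthy else "degraded", "checks": checks}
```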

Scalability Considerations

Horizontal Scaling

  • Stateless Services: API and background services can be replicated
  • Load Balancing: The load balancer (Nginx in development, AWS ELB in production) can distribute traffic across instances
  • Database Scaling: Read replicas for query performance

Vertical Scaling

  • Resource Allocation: Container resource limits can be adjusted
  • Database Tuning: PostgreSQL configuration optimization
  • Cache Sizing: Redis memory allocation based on usage

Technology Stack Summary

| Component     | Technology                         | Purpose                    |
|---------------|------------------------------------|----------------------------|
| Frontend      | React + TypeScript                 | User interfaces            |
| API           | FastAPI + Python                   | REST API backend           |
| Database      | PostgreSQL + PostGIS + TimescaleDB | Data storage               |
| Cache         | Redis/Dragonfly                    | Caching and sessions       |
| Queue         | TaskiQ                             | Background tasks           |
| Proxy         | Nginx                              | Load balancing and routing |
| Containers    | Docker + Docker Compose            | Deployment                 |
| Documentation | MkDocs                             | System documentation       |

Development Workflow

Local Development

  1. Clone repository
  2. Set up environment variables
  3. Run docker compose --profile dev up -d
  4. Access services at configured ports

Testing

  • Unit tests with pytest
  • Integration tests for API endpoints
  • Coverage reporting with detailed metrics
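
An integration test against an endpoint typically runs through FastAPI's TestClient with the auth dependency overridden; the module paths, endpoint, and response shape below are assumptions.

```python
from fastapi.testclient import TestClient

from app.main import app            # assumed application module
from app.auth import current_user   # assumed auth dependency

# Bypass real JWT validation for the test by overriding the dependency.
app.dependency_overrides[current_user] = lambda: {"client_id": "test", "roles": ["admin"]}
client = TestClient(app)

def test_list_trackers_returns_a_list():
    response = client.get("/trackers")
    assert response.status_code == 200
    assert isinstance(response.json()["trackers"], list)
```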

Deployment

  • Production deployment via Docker Compose
  • Environment-specific configuration
  • Health checks and monitoring

This architecture provides a robust, scalable, and maintainable system for tracker management with clear separation of concerns and modern development practices.