AWS ECS Networking and Service Discovery Map

Purpose

This document maps the current Docker Compose communication paths to the ECS networking model we should use in staging and production.

It focuses on:

  • which services need to talk to each other
  • what can stay on localhost
  • what must move to DNS-based service discovery
  • what should remain behind the ALB

Communication Rules

In ECS

  • Use localhost only for containers in the same ECS task.
  • Use ECS Service Connect or Cloud Map for private service-to-service traffic.
  • Use an Application Load Balancer for public HTTP traffic.
  • Do not rely on Docker Compose service names surviving unchanged.

In Compose Terms

The current Compose setup uses service names such as:

  • dev
  • api
  • admin
  • frontend
  • db
  • dragonfly
  • tracker-fetcher-2
  • unified-geofence
  • notification-service
  • materialized-view-service

These names are useful as migration references, but ECS should expose explicit service discovery names instead.

Use short, stable names that match the application roles:

  • api
  • frontend
  • admin
  • anisette-v3
  • tracker-fetcher
  • unified-geofence
  • notification-service
  • materialized-view-service
  • db
  • redis

If Service Connect is used, those names can become the internal DNS names.
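
As a sketch of what those internal names look like to consumers, the helper below joins a service name to a namespace. The namespace `internal.local` is an assumption; substitute whatever Cloud Map / Service Connect namespace the cluster actually configures.

```python
def service_connect_dns(service: str, namespace: str = "internal.local") -> str:
    """Build the internal DNS name a consumer would use for a service.

    The default namespace is a placeholder, not the real cluster value.
    """
    return f"{service}.{namespace}"

# Example: internal endpoints under the assumed namespace.
print(service_connect_dns("api"))    # api.internal.local
print(service_connect_dns("redis"))  # redis.internal.local
```

Keeping this mapping in one place makes it easy to swap the namespace per environment without touching call sites.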

Current Communication Map

Root Application Stack

Source | Target | Current Compose Method | ECS Method | Notes
frontend-dev | dev | service name | private DNS or same-task localhost | Development only
admin-dev | dev | service name | private DNS or same-task localhost | Development only
dev | db | service name | private DNS | PostgreSQL
dev | dragonfly | service name | private DNS | Redis/cache
frontend | api | service name | ALB or private DNS | Production web traffic should usually go through the API or ALB
admin | api | service name | private DNS or ALB | Prefer internal API access if the admin panel is private
api | database | env var host | private DNS | PostgreSQL / Patroni endpoint in production
api | redis | env var host | private DNS | AWS-managed Redis equivalent
tracker-fetcher-* | anisette-v3 | env var host | private DNS | Private service-discovery endpoint on port 6969

Worker Stack

Source | Target | Current Compose Method | ECS Method | Notes
tracker-fetcher-2-dev | db | service name | private DNS | Development worker path
tracker-fetcher-2-dev | dragonfly | service name | private DNS | Development worker path
tracker-fetcher-2-worker-dev | db | service name | private DNS | Development worker path
tracker-fetcher-2-worker-dev | dragonfly | service name | private DNS | Development worker path
unified-geofence-dev | db | service name | private DNS | Development API path
unified-geofence-dev | dragonfly | service name | private DNS | Development API path
unified-geofence-worker-dev | db | service name | private DNS | Development worker path
unified-geofence-worker-dev | dragonfly | service name | private DNS | Development worker path
notification-service-dev | db | service name | private DNS | DB notifications
notification-service-dev | dragonfly | service name | private DNS | Queue/cache helpers
materialized-view-service-dev | db | service name | private DNS | Scheduled maintenance
materialized-view-service-dev | dragonfly | service name | private DNS | Shared runtime config

Public Access Pattern

API

The API should be public through the ALB.

Preferred path:

  • client -> ALB -> API ECS service

Frontend

The frontend may be either:

  • an ECS service served publicly through the ALB
  • or static hosting outside ECS

If it remains in ECS, it should not need direct database access.

Admin Panel

The admin panel should remain public only if required.

If it is public, use either:

  • a separate ALB host rule
  • or restricted access controls

If it is internal, keep it private and let it reach the API over internal DNS.

Private Service Access

Database

The application services should reach the database through a private endpoint.

Production:

  • Patroni cluster endpoint

Staging:

  • single-instance PostgreSQL host

Redis

All services that need Redis should use the AWS-managed Redis equivalent through a private endpoint.

Anisette

Anisette should be treated as a private internal service, not a public ALB target.

Recommended staging and production pattern:

  • private ECS service
  • Cloud Map DNS name: anisette-v3.anisette-v3.local
  • TCP port: 6969
  • persistent storage: EFS-backed /data volume
  • EFS is the live filesystem for stateful runtime files, not S3-backed object storage
  • consumer env var: ANISETTE_SERVER=http://anisette-v3.anisette-v3.local:6969

Worker-to-Worker Calls

Try to avoid direct worker-to-worker HTTP calls unless there is a strong reason.

Prefer:

  • database-driven coordination
  • task queue semantics
  • shared event records
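
The database-driven pattern can be sketched as workers claiming pending jobs with a guarded UPDATE instead of calling each other over HTTP. SQLite stands in for PostgreSQL here, and the `jobs` table and status values are illustrative; on PostgreSQL the claim would typically use SELECT ... FOR UPDATE SKIP LOCKED.

```python
import sqlite3

# In-memory stand-in for the shared application database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO jobs (status) VALUES (?)", [("pending",), ("pending",)])

def claim_next_job(conn: sqlite3.Connection, worker_id: str):
    """Claim the oldest pending job; return its id, or None if none remain."""
    row = conn.execute(
        "SELECT id FROM jobs WHERE status = 'pending' ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    # The status guard keeps the claim safe if another worker got there first.
    claimed = conn.execute(
        "UPDATE jobs SET status = ? WHERE id = ? AND status = 'pending'",
        (f"claimed:{worker_id}", row[0]),
    )
    conn.commit()
    return row[0] if claimed.rowcount == 1 else None
```

Because each claim is a single guarded write, workers coordinate through the table without needing to know each other's network addresses at all.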

Same-Task vs Separate-Service Guidance

Use the Same ECS Task When

  • two containers need localhost
  • they always scale together
  • they share a release lifecycle

Use Separate ECS Services When

  • the service has its own scaling needs
  • the service should fail independently
  • the service needs different CPU or memory sizing
  • the service should be rolled out separately

Staging and Production Considerations

Staging

In staging, keep the networking model as close to production as practical:

  • same service names
  • same internal DNS patterns
  • same ALB access paths
  • same security group style

Production

In production, use the same names and endpoints that staging validated.

That keeps the deploy and rollback behavior predictable.

Implementation Notes

  • Remove Docker-only host aliases during the ECS move.
  • Avoid hard-coding service names in code.
  • Keep service endpoints in env vars or parameter store.
  • Prefer internal DNS over static IP addresses.