API Gateway and Service Mesh Tools for Developers
When you have more than one service, you need something in front of them to route requests, handle authentication, enforce rate limits, and provide observability. API gateways handle north-south traffic (clients to services). Service meshes handle east-west traffic (service to service). This guide covers the tools that matter, when you need each one, and how to set them up.
When You Need an API Gateway
You need an API gateway when:
- Multiple services, one domain: Your frontend calls /api/users, /api/orders, and /api/payments, and each is a separate service.
- Cross-cutting concerns: Authentication, rate limiting, CORS, request logging -- you don't want every service implementing these independently.
- API versioning: Route /v1/users to the old service and /v2/users to the new one.
- Canary deployments: Send 5% of traffic to the new version of a service.
You probably don't need one when you have a monolith or two to three services behind a simple reverse proxy.
Kong
Kong is the most widely deployed open-source API gateway. It's built on NGINX and OpenResty (Lua), with a plugin system that covers authentication, rate limiting, logging, and more.
Setup with Docker
```yaml
# docker-compose.yml
services:
  kong-database:
    image: postgres:16
    environment:
      POSTGRES_USER: kong
      POSTGRES_DB: kong
      POSTGRES_PASSWORD: kongpass

  kong-migration:
    image: kong:3.6
    command: kong migrations bootstrap
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PG_PASSWORD: kongpass
    depends_on:
      - kong-database

  kong:
    image: kong:3.6
    ports:
      - "8000:8000"   # proxy
      - "8001:8001"   # admin API
      - "8443:8443"   # proxy SSL
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PG_PASSWORD: kongpass
      KONG_ADMIN_LISTEN: 0.0.0.0:8001   # admin API binds to 127.0.0.1 by default
      KONG_PROXY_ACCESS_LOG: /dev/stdout
      KONG_ADMIN_ACCESS_LOG: /dev/stdout
      KONG_PROXY_ERROR_LOG: /dev/stderr
      KONG_ADMIN_ERROR_LOG: /dev/stderr
    depends_on:
      - kong-migration
```
Configuring Routes
```shell
# Create a service
curl -i -X POST http://localhost:8001/services \
  --data name=users-service \
  --data url=http://users-api:3000

# Create a route for that service
curl -i -X POST http://localhost:8001/services/users-service/routes \
  --data paths[]=/api/users \
  --data strip_path=true

# Add rate limiting plugin
curl -i -X POST http://localhost:8001/services/users-service/plugins \
  --data name=rate-limiting \
  --data config.minute=100 \
  --data config.policy=local

# Add key authentication
curl -i -X POST http://localhost:8001/services/users-service/plugins \
  --data name=key-auth
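With key-auth enabled, requests without a key get a 401. A quick smoke test against the stack above -- the consumer name and key here are made-up placeholders:

```shell
# Create a consumer and attach an API key to it
curl -i -X POST http://localhost:8001/consumers \
  --data username=demo-app
curl -i -X POST http://localhost:8001/consumers/demo-app/key-auth \
  --data key=demo-secret-key

# Without a key the proxy rejects the request; with it,
# the request is forwarded to users-api
curl -i http://localhost:8000/api/users
curl -i http://localhost:8000/api/users -H "apikey: demo-secret-key"
```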
Declarative Configuration
For production, use declarative configuration instead of the admin API:
```yaml
# kong.yml
_format_version: "3.0"

services:
  - name: users-service
    url: http://users-api:3000
    routes:
      - name: users-route
        paths:
          - /api/users
        strip_path: true
    plugins:
      - name: rate-limiting
        config:
          minute: 100
          policy: local
      - name: jwt
        config:
          claims_to_verify:
            - exp

  - name: orders-service
    url: http://orders-api:3000
    routes:
      - name: orders-route
        paths:
          - /api/orders
        strip_path: true
```

```shell
# Apply declarative config
kong config db_import kong.yml

# Or run Kong in DB-less mode (no Postgres needed)
# Set KONG_DATABASE=off and KONG_DECLARATIVE_CONFIG=/path/to/kong.yml
```
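DB-less mode can run as a single container. A sketch, assuming kong.yml sits in the current directory (the mount path inside the container is arbitrary as long as KONG_DECLARATIVE_CONFIG points at it):

```shell
# Single-container Kong: no Postgres, no migrations
docker run -d --name kong-dbless \
  -v "$(pwd)/kong.yml:/kong/declarative/kong.yml:ro" \
  -e KONG_DATABASE=off \
  -e KONG_DECLARATIVE_CONFIG=/kong/declarative/kong.yml \
  -e KONG_PROXY_ACCESS_LOG=/dev/stdout \
  -e KONG_PROXY_ERROR_LOG=/dev/stderr \
  -p 8000:8000 -p 8443:8443 \
  kong:3.6
```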
Strengths: Huge plugin ecosystem (100+ plugins), battle-tested at scale, DB-less mode for simple deployments, good documentation, active community.
Weaknesses: Lua-based plugin development is niche, the Postgres dependency adds complexity for small deployments (mitigated by DB-less mode), and the split between open-source features and the commercial offering (Kong Konnect) can be hard to navigate.
Best for: API management across many services -- authentication, rate limiting, transformations -- where the plugin ecosystem earns its keep.
Traefik
Traefik is a modern reverse proxy and API gateway that integrates natively with Docker, Kubernetes, and other orchestrators. It automatically discovers services and configures routing -- no manual registration needed.
Docker Integration
```yaml
# docker-compose.yml
services:
  traefik:
    image: traefik:v3.0
    command:
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.letsencrypt.acme.email=admin@example.com"
      - "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
      - "--certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web"
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"   # dashboard
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - letsencrypt:/letsencrypt

  users-api:
    image: users-api:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.users.rule=Host(`api.example.com`) && PathPrefix(`/users`)"
      - "traefik.http.routers.users.entrypoints=websecure"
      - "traefik.http.routers.users.tls.certresolver=letsencrypt"
      - "traefik.http.services.users.loadbalancer.server.port=3000"

  orders-api:
    image: orders-api:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.orders.rule=Host(`api.example.com`) && PathPrefix(`/orders`)"
      - "traefik.http.routers.orders.entrypoints=websecure"
      - "traefik.http.routers.orders.tls.certresolver=letsencrypt"
      - "traefik.http.services.orders.loadbalancer.server.port=3000"

volumes:
  letsencrypt:
```
Middleware
Traefik's middleware chain handles cross-cutting concerns:
```yaml
# As Docker labels
labels:
  # Rate limiting
  - "traefik.http.middlewares.ratelimit.ratelimit.average=100"
  - "traefik.http.middlewares.ratelimit.ratelimit.burst=50"
  # Authentication (forward auth to an external service)
  - "traefik.http.middlewares.auth.forwardauth.address=http://auth-service:3000/verify"
  - "traefik.http.middlewares.auth.forwardauth.authResponseHeaders=X-User-Id,X-User-Role"
  # Apply middleware to a router
  - "traefik.http.routers.users.middlewares=ratelimit,auth"
```
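To sanity-check the rate limiter once the stack is up, hammer the router and watch the status codes shift -- a rough sketch, using the hostname from the example above, not a real load test:

```shell
# Fire 200 rapid requests; once the burst allowance is exhausted,
# the ratelimit middleware should start answering 429 instead of 200.
for i in $(seq 1 200); do
  curl -s -o /dev/null -w "%{http_code}\n" https://api.example.com/users
done | sort | uniq -c
```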
Strengths: Automatic service discovery (Docker, Kubernetes, Consul), automatic Let's Encrypt certificates, excellent dashboard, configuration via labels (no separate config files), hot reload without restarts.
Weaknesses: Docker label configuration gets verbose for complex setups, less plugin variety than Kong, debugging routing issues can be frustrating.
Best for: Docker-based deployments where automatic service discovery and Let's Encrypt integration matter.
NGINX
NGINX is the workhorse of the internet. It's not specifically an API gateway, but it handles the job for many teams with straightforward configuration.
```nginx
# /etc/nginx/conf.d/api-gateway.conf
# limit_req_zone is only valid at http level; files in conf.d are
# included inside the http block, so define it outside the server block.
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

upstream users_api {
    server users-api:3000;
}

upstream orders_api {
    server orders-api:3000;
}

server {
    listen 80;
    server_name api.example.com;

    location /api/users/ {
        limit_req zone=api burst=20 nodelay;
        proxy_pass http://users_api/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /api/orders/ {
        limit_req zone=api burst=20 nodelay;
        proxy_pass http://orders_api/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Health check
    location /health {
        return 200 "OK";
    }
}
```
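Before reloading, validate the config -- nginx -t catches context errors like a misplaced limit_req_zone. Paths and hostname below assume a standard install and the example config above:

```shell
# Check syntax and context rules without touching the running server
nginx -t

# Apply without downtime, then spot-check routing
nginx -s reload
curl -i http://api.example.com/health
curl -i http://api.example.com/api/users/
```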
Strengths: Proven at massive scale, everyone knows it, excellent performance, simple configuration for simple use cases.
Weaknesses: No service discovery, no automatic certificate management (need certbot), no API-specific features (no request/response transformation, no built-in auth plugins), config changes require reload.
Best for: Simple routing and load balancing where you don't need API gateway features. Also works as the base layer under more specialized tools.
Envoy Proxy
Envoy is a high-performance proxy designed for service mesh architectures. It's the data plane for Istio, and it's increasingly used as a standalone API gateway.
```yaml
# envoy.yaml
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 8080
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: backend
                      domains: ["*"]
                      routes:
                        - match:
                            prefix: "/api/users"
                          route:
                            cluster: users_service
                        - match:
                            prefix: "/api/orders"
                          route:
                            cluster: orders_service
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
    - name: users_service
      connect_timeout: 5s
      type: STRICT_DNS
      load_assignment:
        cluster_name: users_service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: users-api
                      port_value: 3000
    - name: orders_service
      connect_timeout: 5s
      type: STRICT_DNS
      load_assignment:
        cluster_name: orders_service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: orders-api
                      port_value: 3000
```
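Envoy can check a config without serving traffic, which is worth doing given how easy it is to mis-indent this YAML. A sketch, assuming the file above and the official Docker image (pin whatever version you actually run):

```shell
# Validate the config and exit; nonzero status on errors
envoy --mode validate -c envoy.yaml

# Run it via the official image, mounting the config read-only
docker run --rm \
  -v "$(pwd)/envoy.yaml:/etc/envoy/envoy.yaml:ro" \
  -p 8080:8080 \
  envoyproxy/envoy:v1.29-latest
```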
Strengths: Exceptional performance, advanced load balancing (circuit breaking, outlier detection, retries), rich observability (metrics, distributed tracing, access logs), hot restartable, xDS API for dynamic configuration.
Weaknesses: Configuration is verbose and complex (YAML/protobuf), steep learning curve, overkill for simple deployments.
Best for: High-performance requirements, service mesh data plane, teams that need advanced traffic management.
Service Meshes (When You Need Them)
A service mesh handles communication between services: mTLS, load balancing, retries, circuit breaking, and observability -- all without changing application code.
Istio
Istio is the most feature-rich service mesh. It uses Envoy as its data plane.
Strengths: Comprehensive feature set, strong security (automatic mTLS), traffic management (canary deployments, fault injection), deep observability.
Weaknesses: Complex to operate, significant resource overhead (Envoy sidecar in every pod), steep learning curve, can be slow to start.
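Istio's canary traffic splitting is declarative. A sketch of a 95/5 split, assuming the mesh is installed and a DestinationRule already defines the v1 and v2 subsets for the (illustrative) users-api service:

```shell
# Route 95% of in-mesh traffic to v1, 5% to the v2 canary
kubectl apply -f - <<'EOF'
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: users-canary
spec:
  hosts:
    - users-api
  http:
    - route:
        - destination:
            host: users-api
            subset: v1
          weight: 95
        - destination:
            host: users-api
            subset: v2
          weight: 5
EOF
```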
Linkerd
Linkerd is a simpler, lighter alternative to Istio. It has its own Rust-based proxy instead of Envoy.
Strengths: Simpler than Istio, lower resource overhead, easier to operate, excellent documentation, CNCF graduated project.
Weaknesses: Fewer features than Istio, smaller ecosystem.
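A rough sketch of the Linkerd workflow on an existing cluster -- the deployment name is a placeholder, and the exact install steps may differ by version, so check the current docs:

```shell
# Install the CLI, then the control plane
curl -fsL https://run.linkerd.io/install | sh
linkerd install --crds | kubectl apply -f -
linkerd install | kubectl apply -f -
linkerd check

# Mesh an existing workload by injecting the sidecar proxy
kubectl get deploy users-api -o yaml | linkerd inject - | kubectl apply -f -
```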
When you need a service mesh: When you have 10+ services and need automatic mTLS, observability across services, and traffic management that's too complex for an API gateway alone. If you have fewer than 10 services, a service mesh is overhead you don't need.
Comparison
| Feature | Kong | Traefik | NGINX | Envoy |
|---|---|---|---|---|
| Primary Use | API Gateway | Reverse Proxy/Gateway | Reverse Proxy | Service Proxy |
| Service Discovery | Plugin | Native (Docker/K8s) | No | xDS API |
| Auto TLS | Plugin | Built-in (Let's Encrypt) | Certbot | SDS |
| Rate Limiting | Plugin | Middleware | Module | Filter |
| Auth Plugins | Extensive | Forward Auth | Basic Auth | External |
| Dashboard | Kong Manager | Built-in | Third-party | Admin API |
| Configuration | Admin API/YAML | Labels/YAML | Config files | YAML/xDS |
| Best For | API management | Docker/K8s routing | Simple proxying | High-perf mesh |
Recommendations
- Simple Docker deployment: Use Traefik. Automatic service discovery, automatic Let's Encrypt, and label-based configuration make it the lowest-effort option.
- API management: Use Kong. When you need authentication, rate limiting, request transformation, and analytics, Kong's plugin ecosystem covers everything.
- Simple reverse proxy: Use NGINX. If you just need to route traffic to a few services, NGINX is the simplest and most proven option.
- High-performance / service mesh: Use Envoy directly or via Linkerd. If you need a service mesh, start with Linkerd (simpler) before considering Istio (more features, more complexity).
- General principle: Start with the simplest tool that solves your problem. A monolith behind NGINX doesn't need Kong. Three services behind Traefik don't need Istio. Add complexity when you have the problems that justify it, not before.