LocalStack: Local AWS Development Without Cloud Bills
Developing against real AWS services creates a frustrating cycle: every test run costs money, round-trips to the cloud stretch feedback loops, and misconfigured resources surface as production incidents that are hard to reproduce locally. LocalStack addresses this by running an AWS emulation layer on your machine.
What LocalStack Does
LocalStack spins up a local endpoint — by default http://localhost:4566 — that mimics the AWS API surface. Your existing AWS SDKs, CLI tools, and Terraform configs talk to this endpoint instead of real AWS. From the code's perspective, nothing changes. From the developer's perspective, services respond in milliseconds and nothing costs a cent.
The community edition (free, open source) covers most common services: S3, SQS, SNS, Lambda, DynamoDB, API Gateway, IAM, CloudFormation, and dozens more. The Pro tier adds services like RDS, ElastiCache, ECS, and MSK for teams with broader AWS footprints.
Getting Started
The fastest way to run LocalStack is via Docker:
docker run --rm -it \
-p 4566:4566 \
-p 4510-4559:4510-4559 \
localstack/localstack
For persistent development, use Docker Compose:
services:
  localstack:
    image: localstack/localstack
    ports:
      - "127.0.0.1:4566:4566"
      - "127.0.0.1:4510-4559:4510-4559"
    environment:
      - DEBUG=1
      - PERSISTENCE=1
    volumes:
      - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
The PERSISTENCE=1 flag means your buckets and queues survive container restarts — useful when you want a stable local environment across work sessions. Note that snapshot persistence is a Pro-tier feature; the community image starts from a clean slate on every restart.
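The container takes a few seconds to become ready after startup, so scripts that provision resources often poll the health endpoint first. A minimal sketch using only the standard library (the /_localstack/health path is LocalStack's built-in status endpoint; the timeout default is an arbitrary choice):

```python
import json
import time
import urllib.error
import urllib.request

def wait_for_localstack(url="http://localhost:4566/_localstack/health",
                        timeout=30.0):
    """Poll LocalStack's health endpoint until the gateway responds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                json.load(resp)  # a JSON body means the gateway is serving
                return True
        except (urllib.error.URLError, OSError, ValueError):
            time.sleep(0.5)  # not ready yet; retry shortly
    return False
```

Calling wait_for_localstack() at the top of a setup script avoids racing the container on a fresh docker compose up.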
Using the AWS CLI Against LocalStack
Install awslocal, a thin wrapper that automatically points the CLI at LocalStack:
pip install awscli-local
Then use it exactly like aws:
# Create an S3 bucket
awslocal s3 mb s3://my-dev-bucket
# Put and get objects
awslocal s3 cp ./data.json s3://my-dev-bucket/data.json
awslocal s3 ls s3://my-dev-bucket
# Create a DynamoDB table
awslocal dynamodb create-table \
--table-name Users \
--attribute-definitions AttributeName=id,AttributeType=S \
--key-schema AttributeName=id,KeyType=HASH \
--billing-mode PAY_PER_REQUEST
Configuring AWS SDKs
Point any AWS SDK to LocalStack by overriding the endpoint URL and using dummy credentials (LocalStack doesn't validate them):
Python (boto3):
import boto3
s3 = boto3.client(
"s3",
endpoint_url="http://localhost:4566",
aws_access_key_id="test",
aws_secret_access_key="test",
region_name="us-east-1",
)
Node.js / TypeScript:
import { S3Client } from "@aws-sdk/client-s3";
const s3 = new S3Client({
endpoint: "http://localhost:4566",
region: "us-east-1",
credentials: {
accessKeyId: "test",
secretAccessKey: "test",
},
forcePathStyle: true, // required for LocalStack S3
});
In CI, set AWS_ENDPOINT_URL=http://localhost:4566 as an environment variable; recent versions of most AWS SDKs pick it up automatically — no code changes needed.
Testing Lambda Functions Locally
LocalStack can execute Lambda functions in real Docker containers, giving you near-identical behavior to the actual service:
# Package your function
zip function.zip handler.py
# Deploy to LocalStack
awslocal lambda create-function \
--function-name my-function \
--runtime python3.12 \
--handler handler.lambda_handler \
--zip-file fileb://function.zip \
--role arn:aws:iam::000000000000:role/lambda-role
# Invoke it
awslocal lambda invoke \
--function-name my-function \
--payload '{"key": "value"}' \
output.json
cat output.json
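The handler.py packaged above isn't shown; a minimal function matching the handler.lambda_handler setting could be as simple as:

```python
import json

def lambda_handler(event, context):
    """Echo the invocation payload back — enough to verify the deploy/invoke loop."""
    return {
        "statusCode": 200,
        "body": json.dumps({"received": event}),
    }
```

Invoking with --payload '{"key": "value"}' writes this return value into output.json.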
Infrastructure as Code Workflow
LocalStack integrates with Terraform via the tflocal wrapper:
pip install terraform-local
Then use tflocal instead of terraform:
tflocal init
tflocal plan
tflocal apply
Your existing Terraform modules work without modification. This makes it practical to test full infrastructure stacks — VPCs, subnets, security groups, RDS instances (Pro), load balancers — before touching real AWS accounts.
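As an illustrative sketch (resource names here are hypothetical), even a minimal configuration applies cleanly — tflocal injects the LocalStack endpoints into the AWS provider, so nothing in the file is LocalStack-specific:

```hcl
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "uploads" {
  bucket = "app-uploads"
}

resource "aws_sqs_queue" "jobs" {
  name = "job-queue"
}
```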
Seeding Initial State
A common pattern is a seed.sh script that sets up the local environment from scratch:
#!/usr/bin/env bash
set -e
echo "Creating S3 buckets..."
awslocal s3 mb s3://app-uploads
awslocal s3 mb s3://app-backups
echo "Creating SQS queues..."
awslocal sqs create-queue --queue-name job-queue
awslocal sqs create-queue --queue-name dead-letter-queue
echo "Creating DynamoDB tables..."
awslocal dynamodb create-table \
--table-name Sessions \
--attribute-definitions AttributeName=sessionId,AttributeType=S \
--key-schema AttributeName=sessionId,KeyType=HASH \
--billing-mode PAY_PER_REQUEST
echo "Seeding complete."
Run this after LocalStack starts, and every developer gets the same environment without manual setup.
Where LocalStack Fits in Your Workflow
LocalStack is most valuable in three places:
Unit and integration tests — Replace mocked AWS clients with a real endpoint. Tests become more realistic without slowing down or adding cost. Jest, pytest, and Go test suites all work well.
Local feature development — Spin up the full application stack locally with docker compose up. Developers iterate against real S3, SQS, and Lambda behavior instead of guessing how the real service will behave.
CI pipelines — Add LocalStack as a service container in GitHub Actions. Tests run against real AWS APIs without needing AWS credentials or incurring costs:
services:
  localstack:
    image: localstack/localstack
    ports:
      - 4566:4566
    env:
      AWS_DEFAULT_REGION: us-east-1
Limitations to Know
LocalStack is not a perfect replica. Edge cases in IAM evaluation, VPC networking, and some service-specific behaviors differ from real AWS. For final validation before production deployments, test against a real AWS staging environment. But for the 80% of development work that doesn't require production parity, LocalStack eliminates a significant source of friction and cost.
LocalStack's health endpoint (http://localhost:4566/_localstack/health when the container is running) reports which services are up, and the hosted web app at https://app.localstack.cloud adds a browser for active resources — useful for debugging when something isn't behaving as expected.
Getting Started Today
Install Docker, pull the LocalStack image, and point your AWS SDK at localhost:4566. Your first bucket creation should work within five minutes. Once it's in your docker-compose.yml, the rest of your team gets it automatically — no AWS account setup, no IAM user management, no surprise bills at the end of the month.