# Message Queue Tools: RabbitMQ, Redis Streams, NATS, and Kafka Compared
Message queues decouple producers from consumers, smooth out traffic spikes, and let you build systems where components can fail independently. But "just add a queue" is deceptively simple advice. The difference between RabbitMQ, Kafka, NATS, and Redis Streams matters, and picking the wrong one costs you either performance, operational complexity, or both.
This guide covers the four most popular options, with honest tradeoffs, local dev setup, and working code examples.
## When You Actually Need a Message Queue
Before reaching for a queue, make sure you actually need one. A direct function call, a database-backed job table, or even a simple webhook might be simpler. You need a message queue when:
- Work needs to happen asynchronously and the producer shouldn't wait for the result
- Multiple consumers need to process the same event independently (fan-out)
- Traffic is bursty and you need to absorb spikes without overwhelming downstream services
- Services need to be independently deployable and shouldn't share a database
- Ordering matters and you need guaranteed sequence within a partition or channel
If your use case is "run this job later," consider a simpler background job tool first. Queues shine when you're building event-driven architectures or need to connect multiple services.
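For the "run this job later" case, the entire pattern can be a table of pending jobs that a worker claims and completes (in a database, typically with row locking such as Postgres's `SELECT ... FOR UPDATE SKIP LOCKED`). A minimal in-memory sketch of that claim/complete shape — illustrative only, not a production implementation:

```js
// Minimal job-table pattern: enqueue, claim (atomically mark in-progress), complete.
// A real version stores jobs in a database table so state survives restarts.
class JobTable {
  constructor() {
    this.jobs = []; // { id, payload, status: "pending" | "running" | "done" }
    this.nextId = 1;
  }
  enqueue(payload) {
    const job = { id: this.nextId++, payload, status: "pending" };
    this.jobs.push(job);
    return job.id;
  }
  claim() {
    // In SQL this is an UPDATE ... WHERE status = 'pending' with row locking,
    // so two workers never grab the same job
    const job = this.jobs.find((j) => j.status === "pending");
    if (job) job.status = "running";
    return job ?? null;
  }
  complete(id) {
    const job = this.jobs.find((j) => j.id === id);
    if (job) job.status = "done";
  }
}
```

If this shape covers your needs, a cron-driven worker polling such a table is often simpler than operating a broker.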
## Quick Comparison
| Feature | RabbitMQ | Redis Streams | NATS | Kafka |
|---|---|---|---|---|
| Protocol | AMQP 0-9-1 | Redis protocol | Custom (NATS) | Custom (Kafka) |
| Message retention | Until consumed | Configurable | Until consumed (JetStream: configurable) | Configurable (log-based) |
| Ordering | Per-queue | Per-stream | Per-subject (JetStream) | Per-partition |
| Consumer groups | Yes | Yes | Yes (JetStream) | Yes |
| Throughput (rough, workload-dependent) | ~50K msg/s | ~100K msg/s | ~1M msg/s | ~1M msg/s |
| Latency | Low (~1ms) | Very low (<1ms) | Very low (<1ms) | Low (~5ms) |
| Operational complexity | Medium | Low (if you already run Redis) | Low | High |
| Persistence | Yes | Yes | Optional (JetStream) | Yes |
| Best for | Task routing, complex routing patterns | Simple streaming when you already have Redis | Microservices, request-reply | High-throughput event streaming, log aggregation |
## RabbitMQ
RabbitMQ is the workhorse of traditional message queuing. It implements AMQP and excels at routing messages through exchanges to queues with flexible binding patterns. It's the right choice when you need complex routing logic, dead-letter queues, or priority queues out of the box.
### Local Dev Setup
```yaml
# docker-compose.yml
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"   # AMQP
      - "15672:15672" # Management UI
    environment:
      RABBITMQ_DEFAULT_USER: dev
      RABBITMQ_DEFAULT_PASS: dev
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq

volumes:
  rabbitmq_data:
```
The management UI at http://localhost:15672 is genuinely useful for debugging. You can inspect queues, see message rates, and manually publish test messages.
### Code Example (Node.js with amqplib)
```js
import amqplib from "amqplib";

// Publisher
async function publish() {
  const conn = await amqplib.connect("amqp://dev:dev@localhost");
  const channel = await conn.createChannel();

  const exchange = "orders";
  const routingKey = "order.created";

  await channel.assertExchange(exchange, "topic", { durable: true });

  channel.publish(
    exchange,
    routingKey,
    Buffer.from(JSON.stringify({ orderId: "abc-123", total: 49.99 })),
    { persistent: true }
  );

  console.log("Published order.created event");
  await channel.close();
  await conn.close();
}

// Consumer
async function consume() {
  const conn = await amqplib.connect("amqp://dev:dev@localhost");
  const channel = await conn.createChannel();

  const exchange = "orders";
  const queue = "email-notifications";

  await channel.assertExchange(exchange, "topic", { durable: true });
  await channel.assertQueue(queue, { durable: true });
  await channel.bindQueue(queue, exchange, "order.*");
  await channel.prefetch(10);

  channel.consume(queue, (msg) => {
    if (!msg) return;
    const order = JSON.parse(msg.content.toString());
    console.log(`Processing order: ${order.orderId}`);
    // Do the work, then acknowledge
    channel.ack(msg);
  });
}
```
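The `order.*` binding above relies on topic-pattern matching: `*` matches exactly one dot-separated word, `#` matches zero or more. An illustrative matcher (not the broker's actual implementation) makes the semantics concrete:

```js
// AMQP topic matching: '*' = exactly one word, '#' = zero or more words.
// Illustrative sketch only; RabbitMQ does this matching server-side.
function topicMatches(pattern, routingKey) {
  const p = pattern.split(".");
  const k = routingKey.split(".");
  function go(i, j) {
    if (i === p.length) return j === k.length;
    if (p[i] === "#") {
      // '#' may absorb zero or more words; try every split point
      for (let skip = j; skip <= k.length; skip++) {
        if (go(i + 1, skip)) return true;
      }
      return false;
    }
    if (j === k.length) return false;
    return (p[i] === "*" || p[i] === k[j]) && go(i + 1, j + 1);
  }
  return go(0, 0);
}
```

So a queue bound with `order.*` sees `order.created` but not `order.created.eu`, while `order.#` sees both.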
### When to Pick RabbitMQ
- You need complex routing (topic exchanges, headers exchanges, dead-letter queues)
- You want message-level acknowledgment with redelivery on failure
- Your throughput needs are moderate (tens of thousands of messages per second)
- You need priority queues or TTL on individual messages
### When to Avoid RabbitMQ
- You need to replay historical messages (classic RabbitMQ queues delete messages once acknowledged; RabbitMQ Streams add replay, but are newer and less proven than Kafka for this)
- You need millions of messages per second sustained throughput
- You want the simplest possible operational setup
## Redis Streams
If you already run Redis, Streams give you a solid message queue without adding another piece of infrastructure. Redis Streams are an append-only log data structure added in Redis 5.0 with consumer groups, acknowledgment, and configurable retention.
### Local Dev Setup
```yaml
services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data

volumes:
  redis_data:
```
### Code Example (Node.js with ioredis)
```js
import Redis from "ioredis";

const redis = new Redis();

// Producer: add events to a stream
async function produce() {
  await redis.xadd(
    "events:orders",
    "*", // auto-generate ID
    "action", "created",
    "orderId", "abc-123",
    "total", "49.99"
  );
  console.log("Event added to stream");
}

// Consumer: read with a consumer group
async function consume() {
  const stream = "events:orders";
  const group = "email-service";
  const consumer = "worker-1";

  // Create consumer group (ignore error if it already exists)
  try {
    await redis.xgroup("CREATE", stream, group, "0", "MKSTREAM");
  } catch (e) {
    // Group already exists
  }

  while (true) {
    const results = await redis.xreadgroup(
      "GROUP", group, consumer,
      "COUNT", 10,
      "BLOCK", 5000, // block for 5s if no messages
      "STREAMS", stream, ">"
    );
    if (!results) continue;

    for (const [, messages] of results) {
      for (const [id, fields] of messages) {
        // Fields arrive as a flat [key, value, key, value, ...] array
        const data = {};
        for (let i = 0; i < fields.length; i += 2) {
          data[fields[i]] = fields[i + 1];
        }
        console.log(`Processing: ${data.orderId}`);
        await redis.xack(stream, group, id);
      }
    }
  }
}
```
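The `*` in `XADD` auto-generates an entry ID of the form `<milliseconds>-<sequence>`, and the stream is totally ordered by those IDs. Redis handles this ordering for you, but if you ever need to sort or deduplicate IDs client-side, a comparator sketch (illustrative; assumes IDs fit in JavaScript numbers, which holds for millisecond timestamps):

```js
// Stream entry IDs are "<ms>-<seq>"; compare numerically, field by field.
function compareStreamIds(a, b) {
  const [aMs, aSeq] = a.split("-").map(Number);
  const [bMs, bSeq] = b.split("-").map(Number);
  return aMs !== bMs ? aMs - bMs : aSeq - bSeq;
}
```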
### When to Pick Redis Streams
- You already run Redis and don't want another service to manage
- Your throughput needs are moderate and you want low latency
- You want consumer groups but don't need complex routing
- You need a simple, reliable queue with minimal operational overhead
### When to Avoid Redis Streams
- Your messages are large (Redis holds everything in memory)
- You need cross-datacenter replication with strong consistency
- You need complex routing patterns (use RabbitMQ instead)
## NATS
NATS is a lightweight, high-performance messaging system designed for cloud-native applications. Core NATS is fire-and-forget pub/sub with no persistence. NATS JetStream adds persistence, consumer groups, and exactly-once semantics.
### Local Dev Setup
```yaml
services:
  nats:
    image: nats:2-alpine
    ports:
      - "4222:4222" # Client
      - "8222:8222" # Monitoring
    command: "--jetstream --store_dir /data"
    volumes:
      - nats_data:/data

volumes:
  nats_data:
```
### Code Example (Node.js with nats.js)
```js
import { connect, StringCodec, AckPolicy } from "nats";

const sc = StringCodec();

// Publisher
async function publish() {
  const nc = await connect({ servers: "localhost:4222" });
  const js = nc.jetstream();
  const jsm = await nc.jetstreamManager();

  // Create a stream that captures order events
  await jsm.streams.add({
    name: "ORDERS",
    subjects: ["orders.>"],
    retention: "limits",
    max_msgs: 100000,
  });

  await js.publish(
    "orders.created",
    sc.encode(JSON.stringify({ orderId: "abc-123", total: 49.99 }))
  );
  console.log("Published to orders.created");
  await nc.close();
}

// Consumer
async function consume() {
  const nc = await connect({ servers: "localhost:4222" });
  const js = nc.jetstream();
  const jsm = await nc.jetstreamManager();

  // Create a durable consumer
  await jsm.consumers.add("ORDERS", {
    durable_name: "email-service",
    ack_policy: AckPolicy.Explicit,
    filter_subject: "orders.created",
  });

  const consumer = await js.consumers.get("ORDERS", "email-service");
  const messages = await consumer.consume();
  for await (const msg of messages) {
    const order = JSON.parse(sc.decode(msg.data));
    console.log(`Processing order: ${order.orderId}`);
    msg.ack();
  }
}
```
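The stream above captures `orders.>` because NATS routes by hierarchical subjects: `*` matches exactly one dot-separated token, and `>` matches one or more trailing tokens. An illustrative matcher (a sketch of the semantics, not the server's implementation):

```js
// NATS subject matching: '*' = one token, '>' = one or more trailing tokens.
// Illustrative only; the NATS server performs this matching internally.
function subjectMatches(pattern, subject) {
  const p = pattern.split(".");
  const s = subject.split(".");
  for (let i = 0; i < p.length; i++) {
    if (p[i] === ">") return s.length > i; // '>' must match at least one token
    if (i >= s.length) return false;
    if (p[i] !== "*" && p[i] !== s[i]) return false;
  }
  return p.length === s.length;
}
```

Note that `orders.>` matches `orders.created` and `orders.created.eu`, but not the bare subject `orders`.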
### When to Pick NATS
- You're building microservices and need lightweight service-to-service communication
- You want request-reply patterns built into the messaging layer
- You need very low latency and high throughput
- You want a single binary with minimal configuration
### When to Avoid NATS
- You need the mature ecosystem and tooling that RabbitMQ or Kafka provide
- Your team is already deeply invested in Kafka's stream processing ecosystem
- You need very long message retention (weeks or months of log data)
## Apache Kafka
Kafka is the standard for high-throughput event streaming. It stores messages as an immutable, partitioned log that consumers read at their own pace. This makes replay, reprocessing, and building derived views straightforward.
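The key mental model: a partition is an append-only array, and each consumer group just tracks an integer offset into it. A toy sketch of that model (illustrative only, not Kafka's implementation):

```js
// Toy model of a Kafka partition: an append-only log plus one committed
// offset per consumer group. Illustrative sketch, not real Kafka.
class Partition {
  constructor() {
    this.log = [];            // the immutable, ordered message log
    this.offsets = new Map(); // groupId -> next offset to read
  }
  append(value) {
    this.log.push(value);
    return this.log.length - 1; // the message's offset
  }
  poll(groupId, max = 10) {
    const from = this.offsets.get(groupId) ?? 0;
    const batch = this.log.slice(from, from + max);
    this.offsets.set(groupId, from + batch.length); // "commit" the offset
    return batch;
  }
  seek(groupId, offset) {
    this.offsets.set(groupId, offset); // replay = just reset the offset
  }
}
```

Replay is nothing more than resetting an offset, which is why reprocessing and building derived views are cheap in this model.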
### Local Dev Setup
Kafka's dependency on ZooKeeper made local dev painful for years. KRaft mode (production-ready since Kafka 3.3) eliminates ZooKeeper entirely.
```yaml
services:
  kafka:
    image: apache/kafka:3.7.0
    ports:
      - "9092:9092"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@localhost:9093
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      # Single broker: internal topics can't replicate 3x
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: /var/lib/kafka/data
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
    volumes:
      - kafka_data:/var/lib/kafka/data

volumes:
  kafka_data:
```
### Code Example (Node.js with kafkajs)
```js
import { Kafka } from "kafkajs";

const kafka = new Kafka({
  clientId: "my-app",
  brokers: ["localhost:9092"],
});

// Producer
async function produce() {
  const producer = kafka.producer();
  await producer.connect();

  await producer.send({
    topic: "order-events",
    messages: [
      {
        key: "abc-123",
        value: JSON.stringify({ action: "created", orderId: "abc-123", total: 49.99 }),
        headers: { source: "checkout-service" },
      },
    ],
  });

  console.log("Produced order event");
  await producer.disconnect();
}

// Consumer
async function consume() {
  const consumer = kafka.consumer({ groupId: "email-service" });
  await consumer.connect();
  await consumer.subscribe({ topic: "order-events", fromBeginning: true });

  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      const event = JSON.parse(message.value.toString());
      console.log(`[${partition}] Processing: ${event.orderId}`);
    },
  });
}
```
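The `key` on each message matters: messages with the same key always land on the same partition, which is what preserves per-key ordering. Kafka's default producer partitioner hashes keys with murmur2; a simplified stand-in (a different, toy hash) showing the invariant:

```js
// Simplified key -> partition mapping. Kafka really uses murmur2, not this
// hash, but the invariant is the same: equal keys map to equal partitions.
function partitionFor(key, numPartitions) {
  let h = 0;
  for (const ch of key) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h % numPartitions;
}
```

Every `order-events` message keyed by the same `orderId` therefore lands on one partition, so one consumer sees that order's events in sequence.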
### When to Pick Kafka
- You need very high throughput (millions of events per second)
- You need to replay events or reprocess historical data
- You're building an event sourcing or CQRS architecture
- Multiple teams or services need to independently consume the same event stream
- You need stream processing (Kafka Streams, ksqlDB, Flink)
### When to Avoid Kafka
- Your message volume is low (Kafka's operational overhead isn't worth it under ~10K msg/s)
- You need request-reply or complex routing patterns
- You don't have a team comfortable operating distributed systems (or use a managed service)
## Making the Decision
Start with Redis Streams if you already run Redis and your needs are straightforward. The operational cost of adding a new service is real, and Redis Streams handle most workloads well.
Pick RabbitMQ when you need smart routing, per-message TTL, priority queues, or dead-letter handling. It has the most mature tooling and the management UI makes debugging easy.
Pick NATS when you're building a microservices system and want lightweight, fast, and simple messaging with optional persistence via JetStream.
Pick Kafka when throughput, event replay, and long-term retention are requirements. Kafka is the right answer for event-driven architectures at scale, but it's the wrong answer for a small application that just needs to send emails asynchronously.
The biggest mistake teams make is picking Kafka for every use case because it's the most well-known. A 5-person startup processing 1,000 orders per day doesn't need Kafka. Redis Streams or a simple job queue will serve you better with a fraction of the operational burden.