
Message Queue Tools: RabbitMQ, Redis Streams, NATS, and Kafka Compared

Infrastructure · 2026-02-09 · 7 min read · messaging · rabbitmq · kafka · nats · redis

Message queues decouple producers from consumers, smooth out traffic spikes, and let you build systems where components can fail independently. But "just add a queue" is deceptively simple advice. The differences between RabbitMQ, Kafka, NATS, and Redis Streams matter, and picking the wrong one costs you performance, operational complexity, or both.

This guide covers the four most popular options, with honest tradeoffs, local dev setup, and working code examples.

When You Actually Need a Message Queue

Before reaching for a queue, make sure you actually need one. A direct function call, a database-backed job table, or even a simple webhook might be simpler. You need a message queue when:

- Producers and consumers must be decoupled so they can deploy, scale, and fail independently
- Traffic is bursty and you need a buffer to smooth out spikes
- One event has to fan out to multiple downstream services
- Work must survive a process crash and be retried

If your use case is "run this job later," consider a simpler background job tool first. Queues shine when you're building event-driven architectures or need to connect multiple services.
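
For scale: the simpler alternative can be as small as one table. Below is a minimal sketch of a database-backed job table using better-sqlite3; the schema and helper names are illustrative rather than a prescribed design, and it assumes a single worker process.

import Database from "better-sqlite3";

const db = new Database("jobs.db");
db.exec(`CREATE TABLE IF NOT EXISTS jobs (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  type TEXT NOT NULL,
  payload TEXT NOT NULL,
  status TEXT NOT NULL DEFAULT 'pending',
  run_at INTEGER NOT NULL
)`);

// Enqueue: a single INSERT, no broker involved
function enqueue(type, payload, delayMs = 0) {
  db.prepare("INSERT INTO jobs (type, payload, run_at) VALUES (?, ?, ?)")
    .run(type, JSON.stringify(payload), Date.now() + delayMs);
}

// Worker: poll for one due job, run it, mark it done
function workOnce() {
  const job = db
    .prepare("SELECT * FROM jobs WHERE status = 'pending' AND run_at <= ? LIMIT 1")
    .get(Date.now());
  if (!job) return;
  console.log(`Running ${job.type}:`, JSON.parse(job.payload));
  db.prepare("UPDATE jobs SET status = 'done' WHERE id = ?").run(job.id);
}

enqueue("send-email", { to: "user@example.com" }, 5000);
setInterval(workOnce, 1000);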

Quick Comparison

| Feature | RabbitMQ | Redis Streams | NATS | Kafka |
| --- | --- | --- | --- | --- |
| Protocol | AMQP 0-9-1 | Redis protocol | Custom (NATS) | Custom (Kafka) |
| Message retention | Until consumed | Configurable | None without JetStream (JetStream: configurable) | Configurable (log-based) |
| Ordering | Per-queue | Per-stream | Per-subject (JetStream) | Per-partition |
| Consumer groups | Yes | Yes | Yes (JetStream) | Yes |
| Max throughput | ~50K msg/s | ~100K msg/s | ~1M msg/s | ~1M msg/s |
| Latency | Low (~1ms) | Very low (<1ms) | Very low (<1ms) | Low (~5ms) |
| Operational complexity | Medium | Low (if you already run Redis) | Low | High |
| Persistence | Yes | Yes | Optional (JetStream) | Yes |
| Best for | Task routing, complex routing patterns | Simple streaming when you already have Redis | Microservices, request-reply | High-throughput event streaming, log aggregation |

RabbitMQ

RabbitMQ is the workhorse of traditional message queuing. It implements AMQP and excels at routing messages through exchanges to queues with flexible binding patterns. It's the right choice when you need complex routing logic, dead-letter queues, or priority queues out of the box.

Local Dev Setup

# docker-compose.yml
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"    # AMQP
      - "15672:15672"  # Management UI
    environment:
      RABBITMQ_DEFAULT_USER: dev
      RABBITMQ_DEFAULT_PASS: dev
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq

volumes:
  rabbitmq_data:

The management UI at http://localhost:15672 is genuinely useful for debugging. You can inspect queues, see message rates, and manually publish test messages.

Code Example (Node.js with amqplib)

import amqplib from "amqplib";

// Publisher
async function publish() {
  const conn = await amqplib.connect("amqp://dev:dev@localhost");
  const channel = await conn.createChannel();

  const exchange = "orders";
  const routingKey = "order.created";

  await channel.assertExchange(exchange, "topic", { durable: true });
  channel.publish(
    exchange,
    routingKey,
    Buffer.from(JSON.stringify({ orderId: "abc-123", total: 49.99 })),
    { persistent: true }
  );

  console.log("Published order.created event");
  await channel.close();
  await conn.close();
}

// Consumer
async function consume() {
  const conn = await amqplib.connect("amqp://dev:dev@localhost");
  const channel = await conn.createChannel();

  const exchange = "orders";
  const queue = "email-notifications";

  await channel.assertExchange(exchange, "topic", { durable: true });
  await channel.assertQueue(queue, { durable: true });
  await channel.bindQueue(queue, exchange, "order.*");
  await channel.prefetch(10); // cap unacked messages per consumer at 10

  channel.consume(queue, (msg) => {
    if (!msg) return;
    const order = JSON.parse(msg.content.toString());
    console.log(`Processing order: ${order.orderId}`);
    // Do the work, then acknowledge
    channel.ack(msg);
  });
}
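
The dead-letter queues mentioned above are two queue options away. Here's a minimal sketch with hypothetical exchange and queue names; amqplib translates the deadLetterExchange and messageTtl options into the x-dead-letter-exchange and x-message-ttl queue arguments.

// Hypothetical names throughout; slots into the same channel setup as above
async function setupDeadLettering(channel) {
  // Exchange and queue that collect rejected or expired messages
  await channel.assertExchange("orders.dlx", "fanout", { durable: true });
  await channel.assertQueue("orders.dead-letters", { durable: true });
  await channel.bindQueue("orders.dead-letters", "orders.dlx", "");

  // The work queue: nacked or expired messages get rerouted to the DLX
  await channel.assertQueue("order-processing", {
    durable: true,
    deadLetterExchange: "orders.dlx",
    messageTtl: 60000, // unconsumed messages expire after 60s
  });
}

// In a consumer, dead-letter a message by rejecting without requeue:
// channel.nack(msg, false, false);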

When to Pick RabbitMQ

- You need complex routing: topic exchanges, fanout, header-based bindings
- You want dead-letter queues, priority queues, or per-message TTL out of the box
- You're distributing tasks to workers and need per-message acknowledgment
- You value mature tooling and the built-in management UI

When to Avoid RabbitMQ

- You need to replay history: messages are gone once consumed
- Sustained throughput climbs past roughly 50K msg/s
- Your real workload is log-style event streaming, which is Kafka's territory

Redis Streams

If you already run Redis, Streams give you a solid message queue without adding another piece of infrastructure. A stream is an append-only log data structure, added in Redis 5.0, with consumer groups, acknowledgments, and configurable retention.

Local Dev Setup

services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data

volumes:
  redis_data:

Code Example (Node.js with ioredis)

import Redis from "ioredis";

const redis = new Redis();

// Producer: add events to a stream
async function produce() {
  await redis.xadd(
    "events:orders",
    "*", // auto-generate ID
    "action", "created",
    "orderId", "abc-123",
    "total", "49.99"
  );
  console.log("Event added to stream");
}

// Consumer: read with a consumer group
async function consume() {
  const stream = "events:orders";
  const group = "email-service";
  const consumer = "worker-1";

  // Create the consumer group; ignore "already exists", surface anything else
  try {
    await redis.xgroup("CREATE", stream, group, "0", "MKSTREAM");
  } catch (e) {
    if (!String(e.message).includes("BUSYGROUP")) throw e;
  }

  while (true) {
    const results = await redis.xreadgroup(
      "GROUP", group, consumer,
      "COUNT", 10,
      "BLOCK", 5000, // block for 5s if no messages
      "STREAMS", stream, ">"
    );

    if (results) {
      for (const [, messages] of results) {
        for (const [id, fields] of messages) {
          // Flatten the [key, value, key, value, ...] reply into an object
          const data = {};
          for (let i = 0; i < fields.length; i += 2) {
            data[fields[i]] = fields[i + 1];
          }
          console.log(`Processing: ${data.orderId}`);
          await redis.xack(stream, group, id);
        }
      }
    }
  }
}
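
Two things the loop above glosses over: retention (a stream grows until you trim it) and crashed consumers (messages they read but never acked sit in the pending list forever). Here's a sketch of both, assuming Redis 6.2+ for XAUTOCLAIM; the 10,000-entry cap and 30-second idle threshold are arbitrary choices.

// Cap the stream at ~10,000 entries while producing; the "~" flag lets
// Redis trim lazily, which is much cheaper than an exact MAXLEN
async function produceTrimmed() {
  await redis.xadd(
    "events:orders",
    "MAXLEN", "~", 10000,
    "*",
    "action", "created",
    "orderId", "def-456"
  );
}

// Take over messages a crashed consumer read but never acked: anything
// idle in the pending list for more than 30s gets claimed by this worker
async function reclaimStuck() {
  const [, entries] = await redis.xautoclaim(
    "events:orders", "email-service", "worker-1",
    30000, // min idle time in ms
    "0"    // scan the pending list from the start
  );
  for (const [id] of entries) {
    console.log(`Reclaimed ${id}`);
    // Reprocess here, then acknowledge as usual
    await redis.xack("events:orders", "email-service", id);
  }
}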

When to Pick Redis Streams

- You already run Redis and don't want another piece of infrastructure
- Your throughput fits comfortably in a single Redis instance (~100K msg/s)
- You need very low latency and consumer groups without much operational overhead

When to Avoid Redis Streams

- Your backlog can outgrow memory: streams live in RAM like any other Redis data structure
- You need complex routing patterns; there are no exchanges or bindings
- You need long-term retention and replay at Kafka scale

NATS

NATS is a lightweight, high-performance messaging system designed for cloud-native applications. Core NATS is fire-and-forget pub/sub with no persistence. NATS JetStream adds persistence, consumer groups, and exactly-once semantics.

Local Dev Setup

services:
  nats:
    image: nats:2-alpine
    ports:
      - "4222:4222"  # Client
      - "8222:8222"  # Monitoring
    command: "--jetstream --store_dir /data"
    volumes:
      - nats_data:/data

volumes:
  nats_data:

Code Example (Node.js with nats.js)

import { connect, StringCodec, AckPolicy } from "nats";

const sc = StringCodec();

// Publisher
async function publish() {
  const nc = await connect({ servers: "localhost:4222" });
  const js = nc.jetstream();
  const jsm = await nc.jetstreamManager();

  // Create a stream that captures order events
  await jsm.streams.add({
    name: "ORDERS",
    subjects: ["orders.>"],
    retention: "limits", // discard oldest messages once the limits below are hit
    max_msgs: 100000,
  });

  await js.publish(
    "orders.created",
    sc.encode(JSON.stringify({ orderId: "abc-123", total: 49.99 }))
  );

  console.log("Published to orders.created");
  await nc.close();
}

// Consumer
async function consume() {
  const nc = await connect({ servers: "localhost:4222" });
  const js = nc.jetstream();
  const jsm = await nc.jetstreamManager();

  // Create a durable consumer
  await jsm.consumers.add("ORDERS", {
    durable_name: "email-service",
    ack_policy: AckPolicy.Explicit,
    filter_subject: "orders.created",
  });

  const consumer = await js.consumers.get("ORDERS", "email-service");
  const messages = await consumer.consume();

  for await (const msg of messages) {
    const order = JSON.parse(sc.decode(msg.data));
    console.log(`Processing order: ${order.orderId}`);
    msg.ack();
  }
}
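
Request-reply, which the comparison table lists as a NATS strength, needs no JetStream at all: core NATS routes the reply back through an ephemeral inbox. A minimal sketch reusing the connect and sc setup above; the orders.validate subject is illustrative.

// Responder: plain core NATS, no stream or consumer required
async function respond() {
  const nc = await connect({ servers: "localhost:4222" });
  const sub = nc.subscribe("orders.validate");
  for await (const msg of sub) {
    const order = JSON.parse(sc.decode(msg.data));
    // The reply goes straight back to the requester's inbox
    msg.respond(sc.encode(JSON.stringify({ orderId: order.orderId, valid: true })));
  }
}

// Requester: resolves with the first reply, rejects on timeout
async function request() {
  const nc = await connect({ servers: "localhost:4222" });
  const reply = await nc.request(
    "orders.validate",
    sc.encode(JSON.stringify({ orderId: "abc-123" })),
    { timeout: 2000 }
  );
  console.log("Validation result:", sc.decode(reply.data));
  await nc.close();
}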

When to Pick NATS

- You're connecting microservices and want pub/sub plus request-reply in one lightweight system
- Latency and throughput matter more than heavyweight broker features
- You want persistence to be opt-in per stream via JetStream

When to Avoid NATS

- You need delivery guarantees but don't want to run JetStream; core NATS is fire-and-forget
- You need long-term event retention and replay as a primary feature
- You depend on AMQP tooling or an existing broker ecosystem

Apache Kafka

Kafka is the standard for high-throughput event streaming. It stores messages as an immutable, partitioned log that consumers read at their own pace. This makes replay, reprocessing, and building derived views straightforward.

Local Dev Setup

Kafka's dependency on ZooKeeper made local dev painful for years. KRaft mode (production-ready since Kafka 3.3) eliminates ZooKeeper entirely.

services:
  kafka:
    image: apache/kafka:3.7.0
    ports:
      - "9092:9092"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@localhost:9093
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LOG_DIRS: /var/lib/kafka/data
      # Single broker: internal topics can't replicate three ways
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
    volumes:
      - kafka_data:/var/lib/kafka/data

volumes:
  kafka_data:

Code Example (Node.js with kafkajs)

import { Kafka } from "kafkajs";

const kafka = new Kafka({
  clientId: "my-app",
  brokers: ["localhost:9092"],
});

// Producer
async function produce() {
  const producer = kafka.producer();
  await producer.connect();

  await producer.send({
    topic: "order-events",
    messages: [
      {
        key: "abc-123",
        value: JSON.stringify({ action: "created", orderId: "abc-123", total: 49.99 }),
        headers: { source: "checkout-service" },
      },
    ],
  });

  console.log("Produced order event");
  await producer.disconnect();
}

// Consumer
async function consume() {
  const consumer = kafka.consumer({ groupId: "email-service" });
  await consumer.connect();
  await consumer.subscribe({ topic: "order-events", fromBeginning: true });

  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      const event = JSON.parse(message.value.toString());
      console.log(`[${partition}] Processing: ${event.orderId}`);
    },
  });
}
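
Replay is where the log model pays off: offsets belong to the consumer group, not the broker, so rereading history is just a new group id or an explicit seek. A sketch reusing the kafka client above; the replay group id is illustrative, and kafkajs requires seek() to be called after run().

// Reprocess the full topic under a fresh group id, then rewind
// partition 0 explicitly to demonstrate seek() as well
async function replay() {
  const consumer = kafka.consumer({ groupId: "email-service-replay" });
  await consumer.connect();
  await consumer.subscribe({ topic: "order-events", fromBeginning: true });

  await consumer.run({
    eachMessage: async ({ partition, message }) => {
      console.log(`[replay ${partition}@${message.offset}] ${message.value.toString()}`);
    },
  });

  // Rewind a single partition to the start (offsets are strings)
  consumer.seek({ topic: "order-events", partition: 0, offset: "0" });
}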

When to Pick Kafka

- Throughput requirements reach hundreds of thousands of messages per second
- You need replay and reprocessing: consumers read an immutable log at their own pace
- You're doing log aggregation, event sourcing, or building derived views
- Long-term retention of events is a requirement, not a nice-to-have

When to Avoid Kafka

- Your team lacks the capacity to operate it; the operational complexity is real
- Your volume is modest and a job queue or Redis Streams would do
- You need per-message routing logic rather than a partitioned log

Making the Decision

Start with Redis Streams if you already run Redis and your needs are straightforward. The operational cost of adding a new service is real, and Redis Streams handle most workloads well.

Pick RabbitMQ when you need smart routing, per-message TTL, priority queues, or dead-letter handling. It has the most mature tooling and the management UI makes debugging easy.

Pick NATS when you're building a microservices system and want lightweight, fast, and simple messaging with optional persistence via JetStream.

Pick Kafka when throughput, event replay, and long-term retention are requirements. Kafka is the right answer for event-driven architectures at scale, but it's the wrong answer for a small application that just needs to send emails asynchronously.

The biggest mistake teams make is picking Kafka for every use case because it's the most well-known. A 5-person startup processing 1,000 orders per day doesn't need Kafka. Redis Streams or a simple job queue will serve you better with a fraction of the operational burden.