Error Tracking and Logging: Sentry, Axiom, and Structured Logging
There are two types of developers: those who have been woken up by a production error at 3am with no context, and those who haven't been in production long enough. Good error tracking and logging is the difference between "there's an error somewhere" and "user X hit this bug at 2:47am because of a null pointer in payment processing."
Error Tracking with Sentry
Sentry captures runtime errors with full context — stack traces, user info, browser/OS details, request data, and breadcrumbs showing what happened before the error.
Setup (JavaScript/TypeScript)
// sentry.ts
import * as Sentry from '@sentry/node';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV,
  release: process.env.GIT_SHA,
  // Sample 10% of transactions for performance monitoring
  tracesSampleRate: 0.1,
  // Capture 100% of errors
  sampleRate: 1.0,
  // Filter out noisy errors
  ignoreErrors: [
    'ResizeObserver loop limit exceeded',
    'Network request failed',
  ],
  beforeSend(event) {
    // Strip sensitive data before it leaves the process
    if (event.request?.headers) {
      delete event.request.headers['authorization'];
      delete event.request.headers['cookie'];
    }
    return event;
  },
});
Express Integration
import express from 'express';
import * as Sentry from '@sentry/node';

const app = express();

app.get('/api/users/:id', async (req, res) => {
  // Add context that appears in Sentry when errors occur
  Sentry.setUser({ id: req.params.id });
  Sentry.setTag('endpoint', 'get-user');

  // Breadcrumbs show what happened before an error
  Sentry.addBreadcrumb({
    category: 'db',
    message: `Querying user ${req.params.id}`,
    level: 'info',
  });

  const user = await db.users.findById(req.params.id);
  if (!user) {
    // Capture an explicit error with context
    Sentry.captureException(new Error(`User not found: ${req.params.id}`), {
      extra: { requestId: req.headers['x-request-id'] },
    });
    return res.status(404).json({ error: 'User not found' });
  }
  res.json(user);
});

// The Sentry error handler must be registered after all routes
// but before any other error-handling middleware
Sentry.setupExpressErrorHandler(app);
What Sentry Does Well
- Automatic grouping: Similar errors are grouped into issues, so you see "this error happened 2,347 times" instead of 2,347 individual reports
- Release tracking: See which deploy introduced a regression
- Source maps: JavaScript stack traces point to original source code, not minified bundles
- Performance monitoring: Track transaction durations and identify slow endpoints
- Alerts: Notify Slack/email when error rates spike
Sentry Pricing
- Developer (free): 5K errors/month, 1 user
- Team: $26/month for 50K errors, unlimited users
- Business: $80/month for 50K errors with advanced features
- Self-hosted: Free, deployable via Docker Compose (resource-heavy: needs 8GB+ RAM)
Alternatives to Sentry
BugSnag: Similar feature set, slightly simpler interface. Good for mobile apps.
Highlight.io: Open-source, combines error tracking with session replay and logging in one tool. Free self-hosted.
GlitchTip: Sentry-compatible open-source alternative. Lightweight — runs on 512MB RAM vs Sentry's 8GB+. Uses the same SDKs.
Structured Logging
Unstructured logs look like this:
[2026-02-09 15:23:45] ERROR: Failed to process payment for user 12345 - timeout after 30s
Structured logs look like this:
{"timestamp":"2026-02-09T15:23:45.123Z","level":"error","message":"Payment processing failed","userId":"12345","error":"timeout","duration_ms":30000,"service":"payment","traceId":"abc123"}
Structured logs are searchable, filterable, and aggregatable. You can answer questions like "how many payment timeouts happened in the last hour?" without regex.
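Because every line is JSON, that question reduces to parse-and-filter instead of regex. A minimal, dependency-free sketch (the field names mirror the example line above):

```typescript
// Count payment timeouts in the last hour from structured log lines.
// Field names (level, error, service, timestamp) follow the example above.
function countRecentTimeouts(lines: string[], nowMs: number): number {
  const oneHourAgo = nowMs - 60 * 60 * 1000;
  return lines.filter((line) => {
    const entry = JSON.parse(line);
    return (
      entry.level === 'error' &&
      entry.error === 'timeout' &&
      entry.service === 'payment' &&
      Date.parse(entry.timestamp) >= oneHourAgo
    );
  }).length;
}
```

In practice a log platform runs this kind of filter for you at query time; the point is that the structure makes it possible at all.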
pino (Node.js)
pino is the fastest Node.js logger. It outputs JSON by default.
import pino from 'pino';

// Create logger
const logger = pino({
  level: process.env.LOG_LEVEL || 'info',
  // Human-readable output in development
  ...(process.env.NODE_ENV !== 'production' && {
    transport: { target: 'pino-pretty' },
  }),
});

// Basic logging
logger.info('Server started');
logger.error(
  { err: new Error('Connection failed'), host: 'db.example.com' },
  'Database connection failed'
);

// Child loggers add context to all subsequent logs
const requestLogger = logger.child({
  requestId: req.headers['x-request-id'],
  userId: req.user?.id,
});
requestLogger.info({ method: req.method, path: req.path }, 'Request received');
// Output: {"level":30,"time":1707484423000,"requestId":"abc123","userId":"456","method":"GET","path":"/api/users","msg":"Request received"}
Why pino over winston: pino is several times faster in benchmarks because it does minimal work in the hot path — log objects are serialized to JSON with fast, specialized code, and expensive work like pretty-printing or shipping logs can be offloaded to worker-thread transports. In high-throughput services, this matters.
structlog (Python)
import structlog

# Configure JSON output (the default console renderer is human-readable)
structlog.configure(processors=[
    structlog.processors.TimeStamper(fmt="iso"),
    structlog.processors.JSONRenderer(),
])

logger = structlog.get_logger()

# Add context that follows the logger
log = logger.bind(request_id="abc123", user_id="456")
log.info("processing_payment", amount=99.99, currency="USD")
# Output: {"event": "processing_payment", "request_id": "abc123", "user_id": "456", "amount": 99.99, "currency": "USD", "timestamp": "2026-02-09T15:23:45Z"}
slog (Go)
Go 1.21+ includes slog in the standard library:
import (
    "log/slog"
    "os"
    "time"
)

logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
    Level: slog.LevelInfo,
}))

logger.Info("payment processed",
    slog.String("userId", "456"),
    slog.Float64("amount", 99.99),
    slog.Duration("latency", 230*time.Millisecond),
)
Log Aggregation
Axiom
Axiom is a log aggregation platform with generous pricing. Its free tier is surprisingly capable.
# Install CLI
brew install axiom/tap/axiom
# Send logs from a file
cat app.log | axiom ingest my-dataset
# Query logs
axiom query "['my-dataset'] | where level == 'error' | sort by _time desc"
For application integration, send logs via HTTP:
// Send structured logs to Axiom via their SDK
import { Axiom } from '@axiomhq/js';

const axiom = new Axiom({ token: process.env.AXIOM_TOKEN });

axiom.ingest('my-app', [{
  _time: new Date().toISOString(),
  level: 'error',
  message: 'Payment failed',
  userId: '12345',
  error: 'timeout',
}]);

// The SDK batches events in memory; flush before the process exits
await axiom.flush();
Pricing: Free tier includes 500GB ingestion/month and 30-day retention. Paid starts at $25/month.
Grafana Loki
Loki is an open-source log aggregation system designed to be cost-effective. It indexes only labels (not full text), which makes it much cheaper to run than Elasticsearch.
# docker-compose.yml for local Loki + Grafana
services:
  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
Send logs to Loki from your application using a Loki client or by configuring your logging library to ship to the Loki push API.
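As a sketch of what shipping directly to the push API looks like — the endpoint and body shape follow Loki's documented /loki/api/v1/push contract, while the labels and URL here are illustrative:

```typescript
// Build a Loki push payload: labels identify the stream, values are
// [nanosecond-timestamp, log line] pairs; Loki expects timestamps as strings.
function buildLokiPayload(
  labels: Record<string, string>,
  line: string,
  timestampMs: number
) {
  return {
    streams: [
      {
        stream: labels,
        // Convert milliseconds to a nanosecond string
        values: [[String(timestampMs) + '000000', line]],
      },
    ],
  };
}

// Ship it with any HTTP client, e.g.:
// await fetch('http://localhost:3100/loki/api/v1/push', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(
//     buildLokiPayload({ app: 'my-app' }, '{"level":"error"}', Date.now())
//   ),
// });
```

Keep label cardinality low (app, environment, not userId) — labels are what Loki indexes, and high-cardinality labels destroy its cost advantage.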
When to use Loki: When you're already running Grafana for metrics and want logs in the same dashboard. Self-hosted Loki handles millions of log lines per day on modest hardware.
When to Use What
| Need | Tool |
|---|---|
| Runtime errors with stack traces | Sentry (or GlitchTip if self-hosting) |
| Application logs (search, filter, aggregate) | Axiom (hosted) or Loki (self-hosted) |
| Quick debugging during development | pino-pretty / console.log (be honest, we all do it) |
| Compliance/audit logs | Dedicated log store with immutable retention |
Practical Logging Guidelines
Log at boundaries: HTTP requests in/out, database queries, external API calls, queue messages consumed/produced.
Include correlation IDs: Every request should have a unique ID that flows through all log entries. This lets you reconstruct the full lifecycle of a request across services.
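A dependency-free sketch of the idea — generate an ID per request and stamp it on every line (in real code this is what pino's child loggers do; the names here are illustrative):

```typescript
import { randomUUID } from 'crypto';

// A request-scoped logger: every line it emits carries the same requestId,
// so one request's logs can be reassembled across services and modules.
function createRequestLogger(existingId?: string) {
  // Reuse an incoming x-request-id header if present so the ID
  // survives hops between services; otherwise mint a fresh one.
  const requestId = existingId ?? randomUUID();
  return {
    requestId,
    log(level: string, msg: string, fields: Record<string, unknown> = {}) {
      return JSON.stringify({
        timestamp: new Date().toISOString(),
        level,
        requestId,
        msg,
        ...fields,
      });
    },
  };
}
```

The important design choice is that the caller never passes the ID explicitly after setup — it rides along on the logger itself.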
Log the right level: ERROR for things that need human attention. WARN for things that might become errors. INFO for request lifecycle events. DEBUG for detailed troubleshooting (off in production by default).
Don't log sensitive data: No passwords, tokens, credit card numbers, or PII in logs. Use redaction middleware.
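pino has a built-in redact option for this; for other setups, the core of a redaction pass is a recursive key filter. A minimal, dependency-free sketch (the key list is illustrative, not exhaustive):

```typescript
// Keys whose values should never reach a log sink (illustrative list).
const SENSITIVE_KEYS = new Set([
  'password', 'token', 'authorization', 'cookie', 'cardnumber',
]);

// Recursively replace sensitive values before a log entry is serialized.
function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact);
  if (value !== null && typeof value === 'object') {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
      out[k] = SENSITIVE_KEYS.has(k.toLowerCase()) ? '[REDACTED]' : redact(v);
    }
    return out;
  }
  return value;
}
```

Run this as the last step before serialization so nothing added later can sneak past it.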
Set up alerts on error rate, not individual errors: "Error rate exceeded 1% in the last 5 minutes" is actionable. "An error occurred" is noise.