CLI Tools for Working with JSON and YAML

Terminal · 2026-02-09 · 10 min read · json · yaml · jq · yq · cli · terminal · data-processing · devtools

APIs return JSON. Kubernetes uses YAML. Configuration files come in both flavors. If you work with either format regularly -- and you do -- having the right CLI tools turns a frustrating data extraction task into a one-liner.

This guide covers the tools that matter, with real recipes you'll actually use.

jq: The Essential JSON Processor

jq is the sed of JSON. It's a command-line JSON processor with its own expression language for filtering, transforming, and formatting structured data. If you install one tool from this guide, make it jq.

Installation

brew install jq           # macOS
sudo apt install jq       # Debian/Ubuntu
sudo dnf install jq       # Fedora

The Basics

# Pretty-print JSON
curl -s https://api.github.com/users/octocat | jq '.'

# Extract a field
curl -s https://api.github.com/users/octocat | jq '.name'
# "The Octocat"

# Extract without quotes
curl -s https://api.github.com/users/octocat | jq -r '.name'
# The Octocat

# Extract nested fields
echo '{"user": {"name": "Alice", "address": {"city": "Portland"}}}' | jq '.user.address.city'
# "Portland"

Working with Arrays

# Get all items in an array
echo '[{"name": "Alice"}, {"name": "Bob"}]' | jq '.[].name'
# "Alice"
# "Bob"

# Get specific index
echo '[1, 2, 3, 4, 5]' | jq '.[2]'
# 3

# Slice
echo '[1, 2, 3, 4, 5]' | jq '.[1:3]'
# [2, 3]

# Array length
echo '[1, 2, 3]' | jq 'length'
# 3

Filtering

# Select objects matching a condition
echo '[{"name": "Alice", "age": 30}, {"name": "Bob", "age": 25}]' | \
  jq '.[] | select(.age > 27)'
# {"name": "Alice", "age": 30}

# Multiple conditions
echo '[{"name": "Alice", "role": "admin"}, {"name": "Bob", "role": "user"}]' | \
  jq '.[] | select(.role == "admin" and .name == "Alice")'

# Filter and reshape
echo '[{"id": 1, "name": "Alice", "email": "[email protected]"}]' | \
  jq '.[] | {name, email}'
# {"name": "Alice", "email": "[email protected]"}

Real-World Recipes

Extract all container image names from a Kubernetes deployment:

kubectl get deployment myapp -o json | \
  jq -r '.spec.template.spec.containers[].image'
# nginx:1.27
# redis:7.4

Get the 5 most-starred repos for a GitHub user:

# --paginate can emit one JSON array per page, so slurp (-s) and add them into one array first
gh api users/octocat/repos --paginate | \
  jq -rs 'add | sort_by(.stargazers_count) | reverse | .[:5] | .[] | "\(.stargazers_count)\t\(.name)"'

Transform a list of objects into a CSV:

echo '[{"name": "Alice", "age": 30}, {"name": "Bob", "age": 25}]' | \
  jq -r '["name","age"], (.[] | [.name, .age]) | @csv'
# "name","age"
# "Alice",30
# "Bob",25

Merge two JSON files:

jq -s '.[0] * .[1]' defaults.json overrides.json
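
The * operator merges objects recursively: nested objects are combined key by key, and values from the right-hand side win on conflicts. A quick illustration with inline data:

jq -n '{"db": {"host": "localhost", "port": 5432}} * {"db": {"host": "prod.example.com"}}'
# {"db": {"host": "prod.example.com", "port": 5432}}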

Group objects by a field:

echo '[{"dept": "eng", "name": "Alice"}, {"dept": "eng", "name": "Bob"}, {"dept": "sales", "name": "Carol"}]' | \
  jq 'group_by(.dept) | map({dept: .[0].dept, members: map(.name)})'
# [{"dept": "eng", "members": ["Alice", "Bob"]}, {"dept": "sales", "members": ["Carol"]}]

Flatten nested JSON:

echo '{"a": {"b": {"c": 1}}, "d": 2}' | jq '[paths(scalars) as $p | {key: ($p | join(".")), value: getpath($p)}] | from_entries'
# {"a.b.c": 1, "d": 2}

jq Gotchas
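
A few behaviors that trip people up:

  - Missing keys return null silently rather than erroring, so a typo in a path can go unnoticed. Use the // operator to supply defaults: .timeout // 30.
  - jq aborts on invalid JSON input. For log files that mix JSON with plain text, read lines as raw strings and parse defensively with fromjson?.
  - Always single-quote jq programs, and pass shell variables in with --arg instead of interpolating them into the filter.

A minimal sketch of the last two patterns (file names are illustrative):

# Skip non-JSON lines instead of dying on them
cat mixed.log | jq -R 'fromjson? | select(.level == "error")'

# Pass a shell variable safely
jq --arg env "$DEPLOY_ENV" '.environments[] | select(.name == $env)' config.json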

yq: YAML's jq

yq is jq for YAML (and JSON, and XML, and TOML). There are actually two tools called yq -- the Go version by Mike Farah (the one you want) and the Python version that wraps jq. This section covers the Go version.
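
Not sure which one you have? The Go version identifies itself by its GitHub URL (exact output varies by release; version number below is illustrative):

yq --version
# yq (https://github.com/mikefarah/yq/) version v4.44.1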

Installation

brew install yq                    # macOS
sudo snap install yq               # Ubuntu
go install github.com/mikefarah/yq/v4@latest  # From source

Basic Usage

# Read a YAML field
yq '.metadata.name' deployment.yaml

# Read nested field
yq '.spec.template.spec.containers[0].image' deployment.yaml

# Pretty-print
yq '.' config.yaml

# Convert YAML to JSON
yq -o json '.' config.yaml

# Convert JSON to YAML
yq -P '.' data.json

Editing YAML In-Place

This is where yq shines -- modifying YAML files without mangling comments or formatting.

# Update a field
yq -i '.spec.replicas = 3' deployment.yaml

# Add a new field
yq -i '.metadata.labels.env = "production"' deployment.yaml

# Delete a field
yq -i 'del(.spec.template.spec.containers[0].resources)' deployment.yaml

# Add an item to an array
yq -i '.spec.template.spec.containers[0].env += [{"name": "DEBUG", "value": "true"}]' deployment.yaml
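
To see the comment handling in action, consider a replicas field with an inline comment (hypothetical file contents shown as before/after):

# before: replicas: 1  # keep low in staging
yq -i '.spec.replicas = 3' deployment.yaml
# after:  replicas: 3  # keep low in staging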

Real-World Recipes

Update all container images in a Kubernetes YAML:

yq -i '(.spec.template.spec.containers[].image) = "myapp:v2.0"' deployment.yaml

Merge two YAML files (overrides win):

yq eval-all 'select(fileIndex == 0) * select(fileIndex == 1)' base.yaml overrides.yaml

Extract the top-level keys:

yq 'keys' config.yaml

Attach a line comment to a field:

yq -i '.database.host line_comment="temporarily disabled"' config.yaml

Validate YAML syntax (exit code check):

yq '.' config.yaml > /dev/null 2>&1 && echo "Valid" || echo "Invalid"

yq vs jq for JSON

yq can process JSON too, and its expression syntax is very similar to jq's (though not identical). For pure JSON work, jq is faster and has a richer expression language. For YAML, or for workflows that mix JSON and YAML, yq is the right tool.
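
Since JSON is valid YAML, yq reads JSON from stdin with no extra flags:

echo '{"user": {"name": "Alice"}}' | yq '.user.name'
# Alice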

fx: Interactive JSON Viewer

fx is an interactive JSON viewer for the terminal. Where jq is for scripting and piping, fx is for exploration -- when you have a big JSON blob and want to browse it interactively.

Installation

brew install fx
npm install -g fx        # Or via npm
go install github.com/antonmedv/fx@latest

Usage

# Interactive mode -- browse with arrow keys
curl -s https://api.github.com/users/octocat | fx

# Apply a transformation
echo '{"users": [{"name": "Alice"}, {"name": "Bob"}]}' | fx '.users.map(u => u.name)'

In interactive mode, fx gives you collapsible tree navigation, incremental search, and the ability to drill into deeply nested payloads without writing a query first.

When to Use fx

fx is the tool you reach for when you're exploring an unfamiliar API response or debugging a complex JSON payload. You don't know what you're looking for yet, so you need to browse. Once you know the path you need, switch to jq for the actual extraction.

gron: Make JSON grep-able

gron transforms JSON into discrete assignments that are trivially greppable with standard Unix tools. It's a brilliantly simple idea.

Installation

brew install gron
go install github.com/tomnomnom/gron@latest

Usage

echo '{"name": "Alice", "address": {"city": "Portland", "state": "OR"}}' | gron
# json = {};
# json.address = {};
# json.address.city = "Portland";
# json.address.state = "OR";
# json.name = "Alice";

Now you can grep it:

echo '{"name": "Alice", "address": {"city": "Portland", "state": "OR"}}' | gron | grep "city"
# json.address.city = "Portland";

And convert back to JSON with gron --ungron:

echo '{"name": "Alice", "address": {"city": "Portland"}}' | gron | grep "address" | gron --ungron
# {"address": {"city": "Portland"}}

Real-World Use Case

gron is perfect for answering "where is this value in this JSON?" without knowing the structure:

# Find where "production" appears anywhere in a config
cat config.json | gron | grep "production"
# json.environments[2].name = "production";
# json.deploy.target = "production";

You now know the exact paths. Use jq to extract them properly.
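
For example, having found json.deploy.target above:

jq -r '.deploy.target' config.json
# production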

gron vs jq

gron is not a replacement for jq. It's a complementary tool for discovery. Use gron to find paths, then use jq to script extractions. They pair perfectly.

jless: Interactive JSON Viewer (Rust)

jless is a Rust-based terminal JSON viewer with vim-like keybindings. Think of it as a dedicated pager for JSON data.

Installation

brew install jless
# Or download from GitHub releases

Usage

# View a JSON file
jless data.json

# Pipe from stdin
curl -s https://api.github.com/users/octocat | jless

# YAML support
jless config.yaml

Keybindings

The essentials, all vim-flavored:

j / k      move down / up
Space      toggle collapsing the focused node
/          search (n / N jump between matches)
q          quit

jless vs fx

Both are interactive JSON viewers. jless is faster (Rust vs Go/Node), has vim keybindings, and works better as a pager for large files. fx has JavaScript expression support for interactive filtering. Pick based on your preference -- if you're a vim user, jless will feel natural.

dasel: Universal Data Selector

dasel (Data Selector) works with JSON, YAML, TOML, XML, and CSV using a single unified query syntax. It's the Swiss Army knife approach.

Installation

brew install dasel
go install github.com/tomwright/dasel/v2/cmd/dasel@latest

Usage

# Query JSON
echo '{"user": {"name": "Alice"}}' | dasel -r json '.user.name'
# Alice

# Query YAML
dasel -r yaml -f config.yaml '.database.host'

# Query TOML
dasel -r toml -f Cargo.toml '.package.version'

# Convert between formats
dasel -r yaml -w json -f config.yaml
dasel -r json -w yaml -f data.json
dasel -r json -w csv -f users.json

Editing Files

dasel can modify files in-place across all supported formats:

# Update a YAML value
dasel put -r yaml -f config.yaml -t string -v 'new-host.example.com' '.database.host'

# Update a JSON value
dasel put -r json -f package.json -t string -v '2.0.0' '.version'

# Update a TOML value
dasel put -r toml -f config.toml -t int -v '8080' '.server.port'

When to Use dasel

dasel is useful when you work across multiple config formats and want one tool with one syntax. If you only work with JSON, jq is more powerful. If you only work with YAML, yq is better. But if you're constantly switching between JSON, YAML, TOML, and XML, dasel's unified interface saves context-switching.

jsonnet: Data Templating Language

jsonnet isn't a query tool -- it's a templating language for generating JSON. It's particularly useful for Kubernetes configurations, monitoring dashboards (Grafana), and any scenario where you're generating large amounts of structured data with repetitive patterns.

Installation

brew install jsonnet
go install github.com/google/go-jsonnet/cmd/jsonnet@latest

Basic Example

// config.jsonnet
local env = std.extVar('ENV');

{
  database: {
    host: if env == 'production' then 'db.prod.internal' else 'localhost',
    port: 5432,
    name: 'myapp_' + env,
  },
  cache: {
    host: if env == 'production' then 'redis.prod.internal' else 'localhost',
    port: 6379,
  },
}

jsonnet --ext-str ENV=production config.jsonnet

Output:

{
  "cache": {
    "host": "redis.prod.internal",
    "port": 6379
  },
  "database": {
    "host": "db.prod.internal",
    "name": "myapp_production",
    "port": 5432
  }
}
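
Note the field order: jsonnet emits object fields sorted alphabetically, which is why cache appears before database even though the source defines database first.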

Templating Kubernetes Manifests

// deployment.jsonnet
local app(name, image, replicas=1) = {
  apiVersion: 'apps/v1',
  kind: 'Deployment',
  metadata: { name: name },
  spec: {
    replicas: replicas,
    selector: { matchLabels: { app: name } },
    template: {
      metadata: { labels: { app: name } },
      spec: {
        containers: [{
          name: name,
          image: image,
          ports: [{ containerPort: 8080 }],
        }],
      },
    },
  },
};

// Generate multiple deployments
{
  'api-deployment.json': app('api', 'myapp/api:v2.0', 3),
  'worker-deployment.json': app('worker', 'myapp/worker:v2.0', 2),
}

jsonnet -m output/ deployment.jsonnet
# Creates output/api-deployment.json and output/worker-deployment.json

When to Use jsonnet

jsonnet is the right tool when you're generating configuration, not querying it. Common use cases:

  - Kubernetes manifests that repeat the same structure across services and environments
  - Grafana dashboards and other monitoring config full of near-identical panels
  - Multi-environment setups where dev, staging, and production differ in a handful of parameters

If you're just querying and transforming existing data, jq and yq are the right tools. jsonnet is for when the data doesn't exist yet and you need to generate it from templates.

Piping Patterns

The real power of these tools comes from combining them with each other and standard Unix tools.

API exploration pipeline

# Fetch, explore structure, then extract
curl -s https://api.example.com/data | gron | grep "email"
# Found: json.users[0].email, json.users[1].email, etc.

curl -s https://api.example.com/data | jq -r '.users[].email'

Kubernetes config pipeline

# Get all unique container images across all deployments
kubectl get deployments -A -o json | \
  jq -r '.items[].spec.template.spec.containers[].image' | \
  sort -u

Multi-format conversion

# YAML config -> JSON -> extract -> transform -> back to YAML
yq -o json '.' config.yaml | \
  jq '.database' | \
  yq -P '.'

Batch processing

# Process all JSON files in a directory
for f in data/*.json; do
  echo "$f: $(jq '.items | length' "$f")"
done

# Or with fd and jq
fd -e json . data/ --exec sh -c 'echo "{}: $(jq ".items | length" "{}")"'

Log analysis

# Parse JSON log lines and count errors by type
cat app.log | \
  jq -r 'select(.level == "error") | .error_type' | \
  sort | uniq -c | sort -rn

# Extract slow requests from JSON logs
cat access.log | \
  jq 'select(.duration_ms > 1000) | {path: .path, duration: .duration_ms, timestamp: .timestamp}'

Docker and container workflows

# Get all environment variables from a running container
docker inspect mycontainer | jq -r '.[0].Config.Env[]'

# List all images with their sizes (Size is a human-readable string, so this sort is lexical)
docker images --format '{{json .}}' | \
  jq -rs 'sort_by(.Size) | reverse | .[] | "\(.Repository):\(.Tag) \(.Size)"'

Tool Comparison

Tool      Best For                             Format Support                Interactive   Speed
jq        JSON querying and transformation     JSON only                     No            Fast
yq        YAML querying, in-place editing      YAML, JSON, XML, TOML         No            Fast
fx        Interactive JSON exploration         JSON, YAML                    Yes           Medium
gron      Finding paths in unknown JSON        JSON only                     No            Fast
jless     Viewing large JSON/YAML files        JSON, YAML                    Yes           Fast
dasel     Multi-format querying and editing    JSON, YAML, TOML, XML, CSV    No            Fast
jsonnet   Generating config from templates     JSON (output)                 No            Fast

What I'd Pick

Install these three immediately:

  1. jq -- Non-negotiable. It's the foundation for JSON processing on the command line. Learn the basics (field access, array iteration, select, map) and you'll use it daily.

  2. yq (Go version by Mike Farah) -- If you touch YAML at all -- Kubernetes, Docker Compose, CI configs, Ansible -- yq is essential. In-place editing of YAML files without destroying comments is worth the install alone.

  3. gron -- The fastest way to answer "where is this value in this giant JSON blob?" Pairs perfectly with jq.

Install when needed:

  - fx or jless -- when you need to browse a large or unfamiliar JSON payload interactively.
  - dasel -- when you regularly touch TOML, XML, or CSV alongside JSON and YAML.
  - jsonnet -- when you're generating configuration from templates rather than querying it.

The jq + yq + gron combination covers 95% of structured data tasks on the command line. Learn jq first -- it has the steepest learning curve but the biggest payoff. The rest are quick to pick up once you understand the patterns.