Web Performance Tools: Measuring and Fixing What Matters
Performance is one of those things everyone agrees matters but few teams measure systematically. The tooling has gotten remarkably good -- good enough that there's no excuse for shipping a 4MB JavaScript bundle or a page that takes 8 seconds to become interactive. This guide covers the tools that actually help, and how to wire them into your workflow so regressions don't slip through.
Core Web Vitals: What They Mean Practically
Google's Core Web Vitals are three metrics that capture real user experience. They matter for SEO, but more importantly, they correlate with whether users stick around or bounce.
| Metric | What It Measures | Good | Needs Work | Poor |
|---|---|---|---|---|
| LCP (Largest Contentful Paint) | When the main content is visible | < 2.5s | 2.5-4.0s | > 4.0s |
| INP (Interaction to Next Paint) | Responsiveness to user input | < 200ms | 200-500ms | > 500ms |
| CLS (Cumulative Layout Shift) | Visual stability (things jumping around) | < 0.1 | 0.1-0.25 | > 0.25 |
LCP is usually the one you fix first. The largest contentful paint is typically a hero image, a heading, or a large block of text. If your LCP is slow, the culprit is almost always one of three things: a slow server response, an oversized critical resource, or render-blocking resources delaying the paint.
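A quick way to see which element the browser counted as the LCP on a given page is to observe largest-contentful-paint entries in the console. A minimal sketch in TypeScript (the element field isn't in the default DOM typings, hence the cast):
// Logs each LCP candidate as the browser reports it; the last entry logged
// before the user interacts is the element that determines your LCP.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log("LCP candidate:", (entry as any).element, "at", entry.startTime, "ms");
  }
}).observe({ type: "largest-contentful-paint", buffered: true });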
INP replaced FID in 2024 and is harder to fix because it measures the worst interaction during the entire page lifecycle. Heavy JavaScript on the main thread is the usual cause -- long tasks (>50ms) block the browser from responding to clicks and keystrokes.
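The usual remedy is to split the work into chunks and yield back to the main thread between them, so pending clicks and keystrokes get handled. A minimal sketch, where items and processItem stand in for whatever work you're doing:
// Processes a large array in small chunks, yielding to the event loop between
// chunks so input handlers can run instead of waiting behind one long task.
async function processInChunks<T>(items: T[], processItem: (item: T) => void): Promise<void> {
  const CHUNK_SIZE = 100; // tune so each chunk stays well under the 50ms long-task threshold
  for (let i = 0; i < items.length; i += CHUNK_SIZE) {
    for (const item of items.slice(i, i + CHUNK_SIZE)) {
      processItem(item);
    }
    // Yield: queued input events are dispatched before the next chunk starts.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}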
CLS is the one users feel most directly -- content jumping around as they try to read or tap. Images without dimensions, dynamically injected content, and web fonts that cause text reflow are the main offenders.
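To find your own offenders, you can observe layout-shift entries and log which nodes moved. A minimal sketch (these entries are Chromium-specific and not in the default TypeScript DOM typings, hence the casts):
// Logs each element that shifted and the score of the shift it contributed to.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries() as any[]) {
    if (entry.hadRecentInput) continue; // shifts right after user input don't count toward CLS
    for (const source of entry.sources ?? []) {
      console.log("shifted:", source.node, "shift score:", entry.value);
    }
  }
}).observe({ type: "layout-shift", buffered: true });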
Measure Core Web Vitals in your application with the web-vitals library:
import { onLCP, onINP, onCLS } from "web-vitals";

function sendToAnalytics(metric: { name: string; value: number }) {
  // sendBeacon hands the request to the browser, so it still goes out if the page unloads
  navigator.sendBeacon("/api/vitals", JSON.stringify(metric));
}

// Each callback fires once its metric is final (INP and CLS report when the page is hidden)
onLCP(sendToAnalytics);
onINP(sendToAnalytics);
onCLS(sendToAnalytics);
This gives you field data -- what real users actually experience. Lab data (Lighthouse, WebPageTest) is useful for debugging but doesn't tell the full story.
Lighthouse: Your Performance Baseline
Lighthouse runs a simulated page load and scores you on performance, accessibility, best practices, and SEO. The CLI is more consistent than the DevTools panel because it uses a clean browser profile every time.
npm install -g lighthouse
lighthouse https://example.com --output html --output-path report.html
# Performance only, desktop preset
lighthouse https://example.com --only-categories=performance \
--preset=desktop --output json --output-path results.json
Lighthouse in CI
This is where Lighthouse becomes genuinely useful -- catching regressions before they ship. Configure @lhci/cli with lighthouserc.js:
module.exports = {
  ci: {
    collect: {
      url: ["http://localhost:3000/", "http://localhost:3000/about"],
      startServerCommand: "npm run start",
      numberOfRuns: 3,
    },
    assert: {
      assertions: {
        "categories:performance": ["error", { minScore: 0.9 }],
        "largest-contentful-paint": ["warn", { maxNumericValue: 2500 }],
        "cumulative-layout-shift": ["error", { maxNumericValue: 0.1 }],
        "total-byte-weight": ["warn", { maxNumericValue: 500000 }],
      },
    },
    upload: {
      target: "temporary-public-storage",
    },
  },
};
The assert section is the key part. Set hard limits and fail the build when they're exceeded. Without assertions, Lighthouse CI is just a report nobody reads.
# GitHub Actions
- name: Run Lighthouse CI
  run: |
    npm install -g @lhci/cli
    lhci autorun
  env:
    LHCI_GITHUB_APP_TOKEN: ${{ secrets.LHCI_GITHUB_APP_TOKEN }}
Chrome DevTools Performance Tab
The Performance tab records everything happening during a page load or interaction: JavaScript execution, layout, paint, compositing, network requests. It has a steep learning curve but is irreplaceable for diagnosing why something is slow.
What to Look For
- Long tasks: Yellow blocks longer than 50ms on the main thread. Click them to see the call stack.
- Layout thrashing: Repeated purple "Layout" blocks -- your code is reading then writing to the DOM in a loop (see the sketch after this list).
- Forced reflows: Flagged with a red triangle. Happens when JavaScript reads a layout property (like offsetHeight) after modifying the DOM.
- Network waterfall: Sequential requests that could be parallelized, or large resources blocking the critical path.
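The thrashing pattern and its fix look roughly like this -- rows is a hypothetical array of elements; the fix is to batch all reads before all writes so the browser recalculates layout once instead of once per iteration:
// Anti-pattern: each iteration reads offsetHeight right after the previous
// iteration's style write, forcing a synchronous layout every time.
function thrash(rows: HTMLElement[]): void {
  for (const row of rows) {
    row.style.height = `${row.offsetHeight + 10}px`;
  }
}

// Better: do all reads first, then all writes -- one layout pass total.
function batched(rows: HTMLElement[]): void {
  const heights = rows.map((row) => row.offsetHeight); // reads
  rows.forEach((row, i) => {
    row.style.height = `${heights[i] + 10}px`; // writes
  });
}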
Recording Tips
- Use incognito mode to disable extensions that pollute the trace
- Check "Screenshots" and "Web Vitals" in the recording options
- Throttle CPU (4x slowdown) to simulate slower devices
- Record interactions, not just page loads -- INP problems only show up when you click things
Lighthouse tells you something is slow. The Performance tab tells you why.
WebPageTest
WebPageTest tests your site from real browsers in real locations on real network conditions. This matters because your fiber connection is not representative of your users.
Use WebPageTest over Lighthouse when you need to test from specific geographic locations or on real mobile hardware, compare before/after runs with the filmstrip view, or analyze third-party script impact.
webpagetest test https://example.com \
--location "Dulles:Chrome" --connectivity "3G" --runs 3 --key YOUR_API_KEY
The free public instance is fine for occasional testing. For CI, you need an API key or a self-hosted instance.
Bundle Analyzers
Your JavaScript bundle is probably bigger than you think.
| Tool | Best For | Input | Visual |
|---|---|---|---|
| webpack-bundle-analyzer | Webpack projects | Stats file | Treemap |
| source-map-explorer | Any bundler | Source maps | Treemap |
| bundlephobia | Pre-install size check | Package name | Size report |
| esbuild-visualizer | Esbuild/Vite projects | Metafile | Treemap |
source-map-explorer is the best general-purpose option -- it works with any bundler that produces source maps:
npx source-map-explorer dist/main.js
npx source-map-explorer dist/main.js --gzip
webpack-bundle-analyzer integrates directly into your build:
const { BundleAnalyzerPlugin } = require("webpack-bundle-analyzer");
module.exports = {
plugins: [
new BundleAnalyzerPlugin({
analyzerMode: "static",
openAnalyzer: false,
}),
],
};
Bundlephobia -- check the cost of a dependency before you install it:
npx bundle-phobia-cli lodash
# => lodash: 72.5kB minified, 25.3kB gzipped
npx bundle-phobia-cli lodash-es
# => lodash-es: 86.3kB minified, but tree-shakeable
Common offenders: moment (replace with date-fns or dayjs), lodash (use lodash-es or individual imports), and UI component libraries that don't support tree shaking.
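For lodash in particular, the fix is mostly about how you import it. A sketch of the tree-shakeable form with lodash-es:
// Named imports from lodash-es are plain ES modules, so a tree-shaking bundler
// keeps only debounce rather than the entire library.
import { debounce } from "lodash-es";

const onResize = debounce(() => {
  console.log("window resized");
}, 200);

window.addEventListener("resize", onResize);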
Image Optimization
Images are typically the largest assets on a page.
| Format | Best For | Browser Support |
|---|---|---|
| AVIF | Photos, complex images | Modern browsers |
| WebP | Photos, fallback for AVIF | All modern browsers |
| SVG | Icons, logos, illustrations | Universal |
| PNG | Screenshots, transparency | Universal |
AVIF produces files 30-50% smaller than WebP for photographic content. Use it as your primary format with WebP fallback.
import sharp from "sharp";
// Batch convert: AVIF primary, WebP fallback
await sharp("src/images/hero.jpg")
.resize(1200, null, { withoutEnlargement: true })
.avif({ quality: 60 })
.toFile("dist/images/hero.avif");
await sharp("src/images/hero.jpg")
.resize(1200, null, { withoutEnlargement: true })
.webp({ quality: 75 })
.toFile("dist/images/hero.webp");
Serve them with the <picture> element:
<picture>
<source srcset="hero.avif" type="image/avif" />
<source srcset="hero.webp" type="image/webp" />
<img src="hero.jpg" alt="Hero image" width="1200" height="600"
loading="lazy" decoding="async" />
</picture>
Always include width and height attributes -- they prevent CLS by reserving space before the image loads. Use loading="lazy" for below-the-fold images but never for the LCP image.
Font Loading Strategies
Web fonts are a common source of both LCP delays and CLS.
<link rel="preload" href="/fonts/inter-var.woff2"
as="font" type="font/woff2" crossorigin />
<style>
@font-face {
font-family: "Inter";
src: url("/fonts/inter-var.woff2") format("woff2");
font-weight: 100 900;
font-display: swap;
}
</style>
font-display: swap shows text in a fallback font immediately and swaps in the web font when it loads. This prevents invisible text (FOIT) but can cause a flash of unstyled text (FOUT). For body text, this trade-off is correct.
For a smoother experience, use font-display: optional -- it only uses the web font if it loads within ~100ms, otherwise sticks with the fallback. No layout shift, no flash. First-time visitors on slow connections won't see your custom font, but that's an acceptable trade-off.
Key optimizations:
- Variable fonts: One file covers all weights instead of separate files per weight
- Subset your fonts: Ship only the character ranges you use
npx glyphhanger --whitelist="U+0000-00FF" --subset=Inter.woff2
Performance Budgets in CI
Performance budgets turn "we should be fast" into "the build fails if we're not." This is the single most effective way to prevent regressions.
{
  "bundlewatch": {
    "files": [
      { "path": "dist/main.*.js", "maxSize": "150kB" },
      { "path": "dist/vendor.*.js", "maxSize": "250kB" },
      { "path": "dist/**/*.css", "maxSize": "30kB" }
    ],
    "defaultCompression": "gzip"
  }
}
- name: Check bundle size
  run: npx bundlewatch
  env:
    BUNDLEWATCH_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
bundlewatch posts size comparisons as PR comments, making regressions visible in code review. Start with generous budgets based on your current sizes, then ratchet them down as you optimize.
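If you'd rather not add a dependency, a short Node script can enforce the same idea. A minimal sketch, assuming a flat dist/ output directory; the file patterns and limits are examples -- adjust them to your build:
// Fails (exit code 1) if any matched file exceeds its gzipped budget.
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { gzipSync } from "node:zlib";

const budgets: Array<{ match: RegExp; maxGzipBytes: number }> = [
  { match: /^main\..*\.js$/, maxGzipBytes: 150_000 },
  { match: /\.css$/, maxGzipBytes: 30_000 },
];

let failed = false;
for (const file of readdirSync("dist")) {
  const budget = budgets.find((b) => b.match.test(file));
  if (!budget) continue;
  const gzipped = gzipSync(readFileSync(join("dist", file))).length;
  console.log(`${file}: ${(gzipped / 1000).toFixed(1)} kB gzipped`);
  if (gzipped > budget.maxGzipBytes) {
    console.error(`  over budget (${budget.maxGzipBytes / 1000} kB)`);
    failed = true;
  }
}
process.exit(failed ? 1 : 0);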
What to Measure First
If you're starting from zero, here's the order that gives you the most impact:
- Bundle size: Run source-map-explorer on your production build. You'll almost certainly find something to remove or lazy-load.
- LCP: Identify your largest contentful paint element. Preload it, optimize it, and don't block it with render-blocking resources.
- CLS: Add width and height to all images. Use font-display: swap or optional. Don't inject content above the fold after load.
- Lighthouse CI: Set up assertions so you stop regressing. Start with lenient budgets and tighten over time.
- Field data: Add the web-vitals library. Lab data tells you what could be slow; field data tells you what is slow.
- INP: Profile interactions with the Performance tab. Find and break up long tasks. This is the hardest to fix because it requires architectural changes.
Recommendations
- Measure field data with the web-vitals library -- lab tools alone are insufficient
- Set up Lighthouse CI with assertions, not just reports. Fail the build on regressions
- Run source-map-explorer on every production build to catch bundle size surprises
- Use AVIF with WebP fallback for photographic images. Always set width and height
- Preload your primary font file, use font-display: swap, and subset to Latin if possible
- Set bundle size budgets in CI with bundlewatch or a custom script
- Profile slow interactions in the Chrome Performance tab -- look for long tasks on the main thread
- Start with bundle size and LCP. They're the easiest to fix and have the biggest impact
- Test from real network conditions with WebPageTest, not just your local machine