
Core Web Vitals for Developers: What Actually Moves the Needle

Core Web Vitals scores have been a Google ranking factor for three years. In that time, I've run the fixes on more sites than I can remember. Some of them improved dramatically in a week. Others barely moved despite months of optimisation. The difference almost always comes down to three things people get wrong: they fix symptoms instead of causes, they measure in the lab instead of the field, and they treat all three metrics as equally urgent instead of finding which one is actually hurting their score. Here's the version of this topic that skips to what actually works.

Measure in the field, not just the lab

Lighthouse runs in a controlled lab environment on a simulated device with a simulated connection. It's useful for catching obvious issues and comparing changes. It is not what Google uses to rank your site.

Google uses Chrome User Experience Report (CrUX) data — real measurements from real Chrome users visiting your real pages. This is what appears in PageSpeed Insights under "Field data", what GSC's Core Web Vitals report shows, and what determines whether your page gets the ranking signal.

The practical implication: a Lighthouse score of 95 doesn't mean your Core Web Vitals are passing. I've seen sites with excellent Lighthouse scores failing all three metrics in CrUX because their users are predominantly on slow mobile connections — and Lighthouse's simulated mobile environment doesn't match the reality of a budget Android phone on a 4G connection in a rural area.

Before you fix anything, check your field data in GSC's Core Web Vitals report. See which metric is failing, on which device type (mobile almost always fails before desktop), and on which page groups. Then fix the actual problem, in order of severity, for the device that matters most.

LCP — Largest Contentful Paint

Good: <2.5s · Needs work: 2.5–4s · Poor: >4s

LCP measures how long until the largest visible element on the page is rendered. Usually a hero image, an H1, or a large text block above the fold.

Of the three vitals, LCP has the highest impact on ranking and is the most clearly fixable. In my experience, about 80% of poor LCP scores trace back to a handful of recurring causes — the two biggest are covered below — and fixing any one of them often drops LCP by 30–50%.

The most common LCP culprit: an unoptimised hero image

The hero image is almost always the LCP element on a landing page. It's large, it's above the fold, and it's usually the last thing the browser decides to load because nothing in the HTML signals that it's important. The browser discovers it when it parses the CSS or the img tag, by which point it's already juggling render-blocking scripts and stylesheets.

The fix is explicit priority signalling. Add a fetchpriority="high" attribute to the LCP image element. Add a <link rel="preload"> tag in the <head> pointing to the image. Remove loading="lazy" from the LCP image — lazy loading is correct for images below the fold, actively harmful for the image above it.

html — LCP image implementation
<!-- In <head> — preload the LCP image -->
<link rel="preload" as="image"
  href="/images/hero.webp"
  imagesrcset="/images/hero-800.webp 800w, /images/hero-1200.webp 1200w"
  imagesizes="100vw">

<!-- In <body> — the LCP image element itself -->
<img
  src="/images/hero-1200.webp"
  srcset="/images/hero-800.webp 800w, /images/hero-1200.webp 1200w"
  sizes="100vw"
  fetchpriority="high"
  decoding="sync"
  width="1200"
  height="600"
  alt="Hero image description">

Notice loading="lazy" is completely absent. decoding="async" is also gone — decoding="sync" asks the browser to decode the LCP image immediately rather than deferring it (a small effect, but the right default for the LCP element). The width and height attributes prevent layout shift while it loads.

Before: LCP 4.8s — lazy-loaded hero image, no preload, 620KB JPEG
After: LCP 1.9s — fetchpriority="high", preloaded, 68KB WebP

Server response time (TTFB)

If your server takes 800ms to respond, your LCP cannot possibly be good regardless of what else you do — the clock starts ticking from when the browser makes the request. A TTFB above 600ms is nearly always a caching problem: you're generating the page dynamically on every request when you should be serving a cached version. Add full-page caching at the server or CDN level. For Next.js apps, check your ISR configuration. For WordPress, a page cache plugin is non-negotiable.
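The principle, stripped of any framework, fits in a few lines: render once, serve the cached HTML until a TTL expires. This is an illustrative sketch only — the cachedRender name, the TTL value, and the in-memory Map are assumptions for demonstration; in production this logic lives in your CDN or cache plugin, not in application code.

```javascript
// Minimal full-page caching sketch (illustrative — real setups use a
// CDN or a cache layer, not app-level code like this).
const pageCache = new Map();

function cachedRender(path, render, ttlMs = 60_000) {
  const hit = pageCache.get(path);
  if (hit && Date.now() - hit.at < ttlMs) {
    return hit.html; // cache hit: no dynamic render, TTFB stays low
  }
  const html = render(path); // the expensive dynamic render
  pageCache.set(path, { html, at: Date.now() });
  return html;
}
```

Whether it's this, a CDN s-maxage rule, Next.js ISR, or a WordPress page cache plugin, the effect is the same: the expensive render happens once per TTL window instead of once per request.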

INP — Interaction to Next Paint

Good: <200ms · Needs work: 200–500ms · Poor: >500ms

INP replaced FID in March 2024. It measures how quickly the page responds to user interactions — clicks, taps, and keyboard inputs — and reports (roughly) the slowest interaction observed over the page's lifetime.

INP is the trickiest of the three to debug because it's not a page load metric — it's a runtime metric, measured while the user interacts. A page can load perfectly and still fail INP if clicking a button triggers a long JavaScript task.

Finding long tasks

Open Chrome DevTools and go to the Performance panel. Record a session where you interact with the page — click buttons, expand accordions, submit forms, whatever users actually do. Look for tasks in the main thread timeline that are longer than 50ms. These are "long tasks" — they block the main thread and delay the visual response to user input.

The most common causes in modern web apps: large event handlers that do too much work synchronously, third-party scripts (analytics, chat widgets, ad scripts) that execute on every interaction, and framework-heavy React components that re-render entire trees when a small state change occurs.

  • Break long tasks with scheduler.yield(). If you have a long computation that must happen on interaction, break it into chunks using the Scheduler API (where supported) or setTimeout(fn, 0) to yield back to the browser between chunks.
  • Defer third-party scripts. Move analytics, chat, and tag manager scripts so they load after the page is interactive. Most of them don't need to run before users start interacting — they just do, because nobody moved the script tags.
  • Audit your event handlers. An onClick that triggers a synchronous API call, updates state, and re-renders a complex component is an INP problem waiting to happen. Debounce, defer non-critical updates, and make sure the visual feedback happens immediately even if the data operation takes time.
  • Use React's useTransition for non-urgent updates. If you're using React 18+, wrap non-urgent state updates (filtering, sorting, secondary UI changes) in startTransition. This tells React those updates can be interrupted by more urgent ones — like the click response itself.
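The first of those fixes can be sketched concretely. This is a hedged example, not a drop-in utility: processInChunks and the chunk size are made up for illustration, and scheduler.yield() is feature-detected because it isn't available in every browser yet.

```javascript
// Break a long computation into chunks, yielding to the browser between
// chunks so pending input events can be handled (keeps INP low).
async function processInChunks(items, workFn, chunkSize = 100) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(workFn(item));
    }
    if (i + chunkSize < items.length) {
      // Prefer scheduler.yield() where supported; fall back to setTimeout(0)
      if (globalThis.scheduler && typeof globalThis.scheduler.yield === 'function') {
        await globalThis.scheduler.yield();
      } else {
        await new Promise((resolve) => setTimeout(resolve, 0));
      }
    }
  }
  return results;
}
```

Each await hands control back to the event loop, so a click that arrives mid-computation gets its visual response before the next chunk runs.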

CLS — Cumulative Layout Shift

Good: <0.1 · Needs work: 0.1–0.25 · Poor: >0.25

CLS measures visual instability — how much page elements move unexpectedly after first render. A score of 0.1 means a moderate amount of unexpected movement. The lower the better.

CLS is the easiest to understand and, once you know the causes, the easiest to fix. Elements shift because the browser doesn't know how big they're going to be when it first renders the page. The fix is almost always to tell the browser the size before the element loads.

Images without dimensions

Every img element should have explicit width and height attributes matching the image's intrinsic size. The browser uses these to reserve space before the image loads, preventing the layout from jumping when it arrives. This is a two-minute fix that eliminates CLS from images entirely.

html — before and after
<!-- Causes CLS — browser doesn't know the height -->
<img src="/photo.jpg" alt="...">

<!-- No CLS — browser reserves exactly this space -->
<img src="/photo.jpg" width="800" height="600" alt="...">

Web fonts causing FOUT

Flash of Unstyled Text — when the page renders in a fallback font and then jumps to the web font once it loads — is a common CLS source. The fix is font-display: swap combined with preloading the font file in the <head>:

html + css — font CLS fix
<!-- Preload in <head> -->
<link rel="preload" href="/fonts/inter.woff2" as="font" type="font/woff2" crossorigin>

/* In CSS — allow fallback to render immediately */
@font-face {
  font-family: 'Inter';
  src: url('/fonts/inter.woff2') format('woff2');
  font-display: swap; /* Show fallback, swap when loaded */
}
For even better results, use size-adjust, ascent-override, and descent-override in your @font-face declarations to make the fallback font match the web font's metrics as closely as possible. This minimises the visual jump when the font swaps. Google Fonts now includes these automatically when you use the CSS API.
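A sketch of what that looks like — the override percentages below are illustrative placeholders, not Inter's real metrics; generate accurate values with a fallback-font tool or compute them from the font files.

```css
/* Metrics-adjusted fallback — the values here are illustrative, not
   Inter's real metrics. The fallback renders at roughly the web font's
   proportions, so the swap barely moves the layout. */
@font-face {
  font-family: 'Inter-fallback';
  src: local('Arial');
  size-adjust: 107%;     /* scale fallback glyphs toward the web font's widths */
  ascent-override: 90%;
  descent-override: 22%;
  line-gap-override: 0%;
}

body {
  font-family: 'Inter', 'Inter-fallback', sans-serif;
}
```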

Tracking improvements properly

CrUX data updates on a 28-day rolling window. This means you will not see your improvements reflected in Google Search Console for four weeks after deploying them. This is not a bug — it's how field data aggregation works. Do not panic. Do not revert your changes because "it's not working."

Track your improvements in the lab (Lighthouse, WebPageTest) immediately after deploying. This confirms the fix worked at the code level. Then wait four weeks for CrUX to catch up. Run Lighthouse CI in your deployment pipeline so regressions get caught before they ship, not after they've affected four weeks of field data.
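If you use the @lhci/cli tooling, the pipeline check can be a small config file. The URL and budget values below are placeholders — set budgets at, or slightly tighter than, the thresholds you need to stay under.

```json
{
  "ci": {
    "collect": {
      "url": ["https://example.com/"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }],
        "total-blocking-time": ["warn", { "maxNumericValue": 300 }]
      }
    }
  }
}
```

Note that lab runs can't measure INP directly (it needs real interactions), so total-blocking-time serves as the usual lab proxy for interactivity budgets.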

Where to go from here

Core Web Vitals are not a one-time fix. New features get added, third-party scripts accumulate, image sizes creep up, and what passes today might not pass after a site redesign. The best teams treat CWV as a continuous monitoring task, not a one-time project.

If you're looking at your Core Web Vitals report in GSC and don't know where to start, or if you've made fixes that haven't improved your field data, this is exactly the kind of problem my technical SEO consulting service exists to solve. I'll look at your CrUX data, your lab measurements, and your code — and tell you exactly what to fix in what order.

You might also find my technical SEO audit checklist useful — Core Web Vitals is one section of a broader audit framework that covers everything from crawlability to schema markup.

Got a Core Web Vitals problem you can't solve?

I debug CWV issues at the code level, not just the Lighthouse level. Field data, CrUX analysis, and a fix plan ranked by impact — not a report that tells you to "reduce JavaScript."

Talk about your site →

FAQs

What kind of SEO topics do you cover?

Technical SEO: crawling, rendering, indexation, structured data, Core Web Vitals, migrations, and programmatic quality. The emphasis is implementation and measurement, not vague “best practices” lists.

Are these guides up to date with Google documentation?

They are written to align with public Search documentation and observed behavior, but search systems change. Always verify critical details in official docs and your own Search Console data.

Do you offer audits based on these articles?

Yes. Audits are scoped engagements with prioritized tickets and reproduction steps—not a PDF that sits unread.

What tools pair with the SEO posts?

The free tools hub includes meta inspection, robots checks, readability scoring, and JSON-LD validation—useful companions to many of the guides here.

How do I suggest a correction?

Email a link to the post, the passage, and a primary source if applicable. I appreciate concrete reports and update posts when guidance changes.