While it is common to treat click-through rate merely as a metric to pad out a monthly report, I treat it as a definitive verdict on a page's survival.
That shift in perspective might sound like a minor semantic difference, but it fundamentally changes how you need to think about rankings, measurement, and what genuinely constitutes "good performance". It introduces uncomfortable truths that make SEO significantly harder, but I prefer to deal in reality from day one.
This article serves as the written companion to my talk on NavBoost, allowing me to delve much deeper into the underlying mechanics to help you understand exactly how this system works.
The feedback loop problem
The traditional mental model of rankings is tidy, comfortably linear, and entirely disconnected from reality. The common understanding is that Google evaluates your page, assigns it a static score, and places it accordingly—meaning that if you improve the page, the score improves, and the rankings naturally follow.
NavBoost completely dismantles that outdated model. It doesn’t simply replace Google's traditional quality signals; rather, it wraps those signals in a relentless behavioural feedback loop that most of the industry still severely underestimates.
In practice, the loop functions like this:
- Your current position directly influences how many people click your link.
- What those users do after clicking feeds directly back into Google’s ranking systems.
- That behaviour then dictates your future position.
- Which inevitably dictates your future clicks.
The critical takeaway here is that this loop carries massive inertia and certainly doesn’t reset overnight. Position shapes behaviour, and behaviour dictates position. This is precisely why reporting on CTR as an isolated metric is a complete waste of time. You might look at a 4.2% CTR and log it in a spreadsheet, but NavBoost analyses that exact same 4.2% and calculates whether your page actually deserves to retain its real estate on page one.
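The loop above can be sketched in a few lines of code. This is a toy simulation, not Google's actual maths: the CTR curve, the good-click threshold, and the one-slot-per-cycle movement are all illustrative assumptions, but they show why the system carries inertia — position shapes click volume, and click quality only nudges position slowly.

```python
# Toy simulation of the loop: position determines click volume,
# click quality determines the next position. Every number here (the
# CTR curve, thresholds, one-slot-per-cycle movement) is an assumed
# illustration, not a leaked Google value.

BASELINE_CTR = {1: 0.28, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
                6: 0.04, 7: 0.03, 8: 0.025, 9: 0.02, 10: 0.018}

def monthly_clicks(position: int, impressions: int = 10_000) -> int:
    """Position shapes behaviour: lower slots simply see fewer clicks."""
    return int(impressions * BASELINE_CTR.get(position, 0.01))

def next_position(position: int, good_click_share: float) -> int:
    """Behaviour dictates position: at most one slot per cycle,
    which is where the loop's inertia comes from."""
    if good_click_share > 0.6:
        return max(1, position - 1)
    if good_click_share < 0.4:
        return min(10, position + 1)
    return position

pos, history = 7, [7]
for _ in range(5):
    pos = next_position(pos, good_click_share=0.7)
    history.append(pos)

print(history)                               # [7, 6, 5, 4, 3, 2]
print(monthly_clicks(7), monthly_clicks(2))  # 300 1500
```

Even a page satisfying 70% of its clickers takes five cycles to climb from position seven to two — and while it waits, position seven only hands it 300 clicks a month to prove itself with, versus the 1,500 the page above it enjoys.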
What NavBoost actually tracks
I don't deal in SEO mythology or guesswork; I deal in what is technically documented. Thanks to the monumental Google Search API leak in May 2024, along with sworn testimony from the US Department of Justice (DOJ) antitrust trial against Google, I have concrete data on the internal field names and structures actually used by the search engine.
Within the leaked QualityNavboostNavboostData module, three specific structures matter more than anything else:
| Signal | The Technical Reality |
| --- | --- |
| goodClicks | The user clicks, stays on your page, and does not return to the SERP. The search intent is considered successfully resolved. |
| badClicks | The user clicks, returns quickly to the search results, and often tries a competitor's link. You have failed to satisfy their query. |
| lastLongestClicks | Extra weighting is applied to recent, long-dwell clicks, indicating that current performance matters significantly more than historical success. |
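The field names in that table are real, taken from the leak; how Google actually computes them is not public. As a mental model, here is a minimal sketch of a classifier over those three buckets — the dwell-time and recency thresholds are entirely my assumptions:

```python
from dataclasses import dataclass

# Sketch of the click taxonomy from the leaked QualityNavboostNavboostData
# module. The field names (goodClicks, badClicks, lastLongestClicks) are
# real; the thresholds and logic below are assumed for illustration.

@dataclass
class Click:
    dwell_seconds: float    # time on page before any return to the SERP
    returned_to_serp: bool  # did the user come back to the results page?
    days_ago: int           # recency of the click

def classify(click: Click) -> str:
    if click.returned_to_serp and click.dwell_seconds < 30:
        return "badClick"          # quick pogo-stick back to the SERP
    if not click.returned_to_serp or click.dwell_seconds >= 120:
        # recent long-dwell clicks get the extra lastLongestClicks weight
        if click.dwell_seconds >= 120 and click.days_ago <= 30:
            return "lastLongestClick"
        return "goodClick"
    return "neutral"

print(classify(Click(4, True, 2)))     # badClick
print(classify(Click(300, False, 7)))  # lastLongestClick
```

The point of the sketch is the shape, not the numbers: the system cares whether intent was resolved (did the user come back?), and it cares when that happened.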
The 13-month rolling window
During the 2023 DOJ antitrust trial, Google’s VP of Search, Pandu Nayak, testified under oath that NavBoost relies on a rolling 13-month window of user interaction data to train its algorithms.
This specific timeframe isn't an arbitrary choice by their engineers. It is designed to capture seasonal behaviour without blending one year's December traffic into the next. However, this introduces an incredibly uncomfortable truth for site owners: bad periods of performance will stick around and haunt you.
If your website has endured a lengthy stretch of poor user engagement, you are still carrying the weight of that data. Recovering from a traffic drop isn't just about generating better signals today; it requires you to systematically dilute the failures of the past because the 13-month window never resets—it only slides forward.
For brand-new pages, the problem is exactly the opposite. Because you have absolutely no history for NavBoost to work with, your initial rankings lean entirely on traditional quality signals while you are forced to compete against established pages that have banked a full year of positive momentum. It isn’t an even playing field, and it was never designed to be.
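The arithmetic of that sliding window is worth seeing directly. The 13-month length comes from Nayak's testimony; the ratios below are invented, but the mechanic is just a rolling average — bad months only leave the aggregate when they age out, which is why recovery is dilution, not deletion:

```python
from collections import deque

# Sketch of why recovery is slow under a sliding 13-month window.
# The window length is from sworn testimony; all ratios are illustrative.

WINDOW_MONTHS = 13

def rolling_good_click_ratio(monthly_ratios):
    """Yield the windowed average as each new month of data arrives."""
    window = deque(maxlen=WINDOW_MONTHS)  # old months slide out automatically
    for ratio in monthly_ratios:
        window.append(ratio)
        yield sum(window) / len(window)

# Ten bad months (30% good clicks) followed by eight strong months (90%).
months = [0.3] * 10 + [0.9] * 8
for i, avg in enumerate(rolling_good_click_ratio(months), start=1):
    if i in (10, 13, 18):
        print(f"month {i}: windowed ratio = {avg:.2f}")
```

Three months of excellent engagement only lifts the windowed figure from 0.30 to around 0.44, and even after eight strong months the ten bad ones still drag the average down to roughly 0.67. The window never resets; it only slides.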
NavBoostTwiddler in the ranking pipeline
Google’s ranking system isn’t defined by a single, monolithic score; it is a complex pipeline of distinct systems.
- Mustang: Acts as the primary indexing and retrieval system responsible for evaluating your content, site structure, topical authority, and relevance.
- NavBoost (The Twiddler): Operates as a "Twiddler"—Google's internal term for a specialised re-ranking function that evaluates user behaviour to adjust the final score after the initial retrieval pass.
Crucially, the Twiddler can override almost everything else. A page that looks weaker on traditional signals can still outrank a far stronger competitor if users consistently prefer it once they click. This isn't a minor tiebreaker; it is a fundamental re-ranker.
Internal Google engineering emails revealed during the DOJ trial explicitly showed engineers stating that NavBoost is often more powerful than all other ranking signals combined for certain queries. Therefore, if you are attempting to evaluate pages solely on content and backlinks, you will fail to predict their performance. Mustang scores the page, but NavBoost decides whether it stays.
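The two-stage shape of that pipeline is easy to sketch. "Mustang" and the Twiddler are real system names from the leak and trial exhibits, but the scoring maths below is purely an assumed illustration of retrieve-then-re-rank:

```python
# Minimal sketch of a retrieve-then-re-rank pipeline. The system names
# are from the leak/trial; the scoring formulas are illustrative only.

def mustang_score(page: dict) -> float:
    """Initial retrieval score from traditional quality signals."""
    return page["content_quality"] + page["link_authority"]

def navboost_twiddle(page: dict, base_score: float) -> float:
    """Post-retrieval adjustment: the share of good clicks scales the
    base score up or down (assumed multiplier shape)."""
    total = max(1, page["good_clicks"] + page["bad_clicks"])
    good_ratio = page["good_clicks"] / total
    return base_score * (0.5 + good_ratio)

pages = [
    {"url": "/strong-on-paper", "content_quality": 8, "link_authority": 9,
     "good_clicks": 40, "bad_clicks": 160},   # users bounce
    {"url": "/weaker-but-loved", "content_quality": 6, "link_authority": 5,
     "good_clicks": 180, "bad_clicks": 20},   # users stay
]

ranked = sorted(pages, key=lambda p: navboost_twiddle(p, mustang_score(p)),
                reverse=True)
print([p["url"] for p in ranked])
```

In this toy example the page with the stronger content and link profile (base score 17) ends up below the "weaker" page (base score 11), because its 20% good-click ratio collapses its final score while the competitor's 90% ratio amplifies theirs. That is the Twiddler's whole job.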
Three things that change when you take NavBoost seriously
Measurement
Traffic and ranking positions mean absolutely nothing in isolation, which is why I exclusively look at engagement quality. A user who reads your content and leaves is fine, but one who clicks, bounces, and immediately chooses a competitor is an active threat to your rankings. Because Google Search Console will never show you this nuance, I connect query-level data directly with analytics to see exactly where user engagement is working against you. Most standard reporting stops exactly where the real analytical work should begin.
The Reality in Practice: You might look at a monthly report celebrating 500 clicks for a high-volume term like "commercial property solicitor". GSC says you are winning. But when I pull that specific query into GA4, I see an average engagement time of four seconds. Users are landing on the page, seeing a massive, unbroken wall of legal jargon instead of the straightforward pricing they wanted, and immediately hitting the back button to click the competitor directly below you. That isn't a traffic win to celebrate; that is a badClicks factory that will inevitably drag your ranking down to page two.
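The join I describe above — query-level GSC data against GA4 engagement — can be prototyped in a few lines. The data structures and thresholds here are my assumptions; the point is that neither source flags the problem on its own, only the join does:

```python
# Sketch of the query-level join: Search Console shows clicks, GA4 shows
# engagement, and only combining them exposes queries that win clicks
# but lose users. Figures and thresholds are illustrative assumptions.

gsc = {  # query -> (clicks, avg_position), as exported from GSC
    "commercial property solicitor": (500, 3.2),
    "lease assignment process": (120, 5.1),
}
ga4 = {  # landing query -> average engagement time in seconds
    "commercial property solicitor": 4.0,
    "lease assignment process": 95.0,
}

def flag_bad_click_risks(gsc, ga4, min_clicks=100, max_engagement=10.0):
    """Queries with plenty of clicks but near-zero engagement are the
    ones most likely feeding negative behavioural signals."""
    return [query for query, (clicks, _) in gsc.items()
            if clicks >= min_clicks and ga4.get(query, 0.0) <= max_engagement]

print(flag_bad_click_risks(gsc, ga4))  # ['commercial property solicitor']
```

The 500-click "winner" is the only query flagged, while the modest 120-click query with 95 seconds of engagement is exactly the kind of page quietly banking goodClicks.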
Page architecture
Because engagement signals compound over a 13-month period, your URL-level decisions carry immense weight. Consolidating underperforming pages isn't just an exercise in preserving "link equity"; it is fundamentally about concentrating your engagement history. Splitting a historically strong page into multiple weaker variants immediately resets that hard-earned history. I don't split or migrate pages simply for the sake of a site revamp; I concentrate those crucial signals exactly where they count.
The Reality in Practice: Imagine a site redesign where a consultant suggests taking your highest-ranking, comprehensive "Definitive Guide to B2B Software" and chopping it into five separate, bite-sized URLs because it looks "cleaner". I will block that decision. Chopping up that URL doesn't just change the site structure; it completely fractures 13 months of compounded goodClicks and lastLongestClicks. You are actively resetting your NavBoost history to zero and sending an open invitation for your competitors to overtake you while you rebuild trust from scratch.
UX priorities
If users are consistently bouncing back to the search results, that is no longer a design issue—it is a direct ranking signal. Page speed, the clarity of your opening paragraphs, and your ability to provide immediate answers are no longer "nice to have" design elements; they are strict performance requirements. I honestly don't care if a page looks visually stunning if it fails to stop the user from clicking back to Google to find a better answer.
The Reality in Practice: Consider a B2B landing page targeting "payroll software integration tools". If a user clicks your result and is forced to scroll past a massive, slow-loading hero image and 500 words of corporate waffle about "synergy" just to find out if your software actually integrates with Xero, they simply won't bother. They will bounce back to the SERP and click a competitor who gives them a clear bulleted list above the fold. Google registers that bounce as a failure. You didn't lose your number one ranking because your backlink profile was weak; you lost it because your UX was actively hostile to the user's intent.
What you cannot fix with content alone
NavBoost creates a category of SEO problems that content can't solve.
If your page is ranking in position seven but earning fewer clicks than that position typically generates — because your title and description are poorly written, because your brand isn't trusted, or because a featured snippet is eating your clicks — you're generating fewer engagement signals than you need to compete. You can improve the content as much as you like; if users aren't clicking in the first place, NavBoost has nothing to work with.
CTR optimisation — titles, meta descriptions, structured data for rich results — becomes a NavBoost strategy as much as it's a traditional click-through strategy.
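One practical way to operationalise this is to compare each query's actual CTR against a rough expectation for its position and surface the shortfalls. The baseline curve below is an assumed industry-style average, not Google data, and the tolerance is arbitrary — treat this as a triage sketch:

```python
# Sketch of a CTR-vs-expectation check: flag queries earning meaningfully
# fewer clicks than their ranking position should deliver. The baseline
# CTR curve and the 0.8 tolerance are illustrative assumptions.

EXPECTED_CTR = {1: 0.28, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
                6: 0.04, 7: 0.03, 8: 0.025, 9: 0.02, 10: 0.018}

def ctr_shortfalls(rows, tolerance=0.8):
    """rows: (query, position, impressions, clicks). Returns queries
    earning under `tolerance` x the expected clicks for their slot."""
    flagged = []
    for query, position, impressions, clicks in rows:
        expected = EXPECTED_CTR.get(round(position), 0.01) * impressions
        if clicks < tolerance * expected:
            flagged.append((query, clicks, round(expected)))
    return flagged

rows = [("payroll integration", 7, 20_000, 310),
        ("b2b payroll software", 3, 8_000, 820)]
print(ctr_shortfalls(rows))  # [('payroll integration', 310, 600)]
```

The query in position seven "should" be earning around 600 clicks from 20,000 impressions but is only getting 310 — that gap is where title, description, and rich-result work pays off twice: once in traffic, and again in the signals NavBoost needs.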
Similarly, if your page ranks well for broad queries but the users landing on it have different intent than your content addresses, you'll generate badClicks regardless of content quality. The page isn't wrong — it's just in front of the wrong audience. NavBoost will eventually move it down until it finds the audience where intent and content align. This is the quiet argument for intent-first keyword research over volume-first keyword research.
The Uncomfortable Conclusion
NavBoost introduces a massive constraint that the industry often ignores: where you are positioned right now heavily affects what happens to you next. A page stuck on page two isn't just underperforming; it is caught in a self-fulfilling loop that mathematically reinforces its invisibility. Simply writing better content might help eventually, but it rarely breaks the cycle fast enough on its own. You either need a massive push from external quality signals, or you must address the click problem directly by obsessing over titles, meta descriptions, and rich results to feed the system the data it demands.
Recoveries are notoriously slow because the system remembers your failures for a full 13 months, which is an uncomfortable truth that I ensure my clients understand immediately.
References & Citations
- The 13-Month Window: Testimony of Pandu Nayak (Google VP of Search), United States v. Google LLC (2023). Nayak testified under oath that Google relies on 13 months of historical user interaction data to train its ranking models.
- The Internal Field Names: Google Content Warehouse API Documentation Leak (May 2024). Specifically, the QualityNavboostNavboostData module, which exposed the exact metrics Google tracks, including goodClicks, badClicks, and lastLongestClicks.
- NavBoost as a Re-Ranker (Twiddler): DOJ Trial Exhibit UPX0680, United States v. Google LLC (2023). Internal Google engineering emails revealed that NavBoost is applied as a post-processing "Twiddler" and is heavily relied upon, often outweighing all other ranking signals combined.
Disclaimer: AI was used to help generate the text, but the content has been proofed, optimised, and thoroughly researched as part of my speaker engagement at Manchester DM