Analysis

AI visibility diverges from rankings as Google AI Overviews shift citations

Top rankings no longer guarantee AI citations. Ahrefs says only 38% of AI Overview links come from top 10 results, so brands need a four-signal audit.

Sam Ortega

Source: searchengineland.com

Ranking is no longer the whole game

The uncomfortable shift is simple: a brand can still rank well in Google Search and barely register inside Google’s AI Overviews. Ahrefs’ newer analysis of 863,000 search results pages found that only 38% of AI Overview citations also came from pages in the traditional top 10, down from 76% in its earlier study. That is a huge break from the old assumption that a strong organic position automatically translates into visible placement in AI-generated answers.


The better way to think about it is this: AI visibility is now an information problem as much as an SEO problem. Ahrefs also found that 86% of AI Overview citations came from pages somewhere in Google’s top 100, which means the model is still leaning on search-indexed material, just not rewarding the same narrow slice of results it once did. It also says AI assistants tend to cite fresher content than traditional organic results, which helps explain why older ranking assumptions keep failing.

The first check is mention order

If the model mentions your brand first, you are already in a better spot than most. Search Engine Land’s framing matters here because users tend to accept the first option they see, and the first mention often becomes the default recommendation in an AI-generated response. Brand recognition can override that order when people already know the name, but for most discovery queries, the opening position shapes the rest of the answer.

This is the easiest signal to audit because you can test it directly across assistants. Run the same prompt through ChatGPT, Claude, Gemini, Perplexity, and Google AI Overviews, then note whether your brand appears first, second, or not at all. If you are consistently buried behind a competitor, you are not dealing with a visibility problem in the old sense; you are dealing with an ordering problem in the new one.

How to audit mention order

• Search for the exact category terms buyers use, not just your brand name.
• Compare the first brand named across assistants and prompt variations.
• Watch for location, price, or use-case modifiers, because those often change the first pick.
• Treat repeated second-place placement as a warning sign, not a near miss.
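The comparison step is easy to script once you have the raw answers. The sketch below assumes you have pasted each assistant's response into a dictionary by hand (or pulled it via each vendor's own API); the brand names and answer texts are hypothetical, and the logic is just first-occurrence ordering:

```python
import re

def mention_order(response_text: str, brands: list[str]) -> list[str]:
    """Return brands in the order they first appear in an assistant's answer.
    Brands never mentioned are omitted entirely."""
    positions = []
    for brand in brands:
        match = re.search(re.escape(brand), response_text, re.IGNORECASE)
        if match:
            positions.append((match.start(), brand))
    return [brand for _, brand in sorted(positions)]

# Hypothetical answers collected for the same prompt from two assistants.
answers = {
    "assistant_a": "For small teams, Acme CRM is the usual pick; Birch CRM also works.",
    "assistant_b": "Birch CRM leads here, though Acme CRM is a reasonable fallback.",
}
for name, text in answers.items():
    print(name, mention_order(text, ["Acme CRM", "Birch CRM"]))
```

Run the same prompt with a few modifier variations (location, price, use case) and log the ordering each time; a brand that is consistently second across variations is the "ordering problem" the article describes.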

The second check is depth of explanation

Being mentioned is not the same as being explained. Some brands get a single sentence and disappear; others get a fuller treatment that spells out why they matter, which use case they fit, and how they compare with alternatives. Search Engine Land’s point is that AI systems do not just surface names, they synthesize a narrative, and the amount of citation-worthy material they can find determines how much of that narrative your brand gets.

That distinction matters because shallow mentions are fragile. If the model can only find a thin trail of references, it may name your brand and move on. If it finds product pages, comparisons, reviews, structured facts, and consistent third-party coverage, it has enough material to explain your category position in a way that is more useful to the user and more reusable in future answers.

How to audit explanation depth

• Ask the assistant why your brand fits the query, then compare the answer to a competitor’s.
• Look for whether the model can describe use cases, benefits, and tradeoffs without inventing filler.
• Check whether it cites concrete features or just repeats generic marketing language.
• Identify the pages that should teach the model what your brand is for, then make sure they are actually indexable and clear.
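A rough heuristic can make the shallow-versus-explained distinction auditable at scale. This sketch is an assumption, not a standard metric: it counts sentences that mention the brand and checks them for substance markers (the word list is illustrative and should be tuned to your category):

```python
def depth_score(answer: str, brand: str) -> dict:
    """Heuristic depth signals for one brand in one AI answer.
    A crude sketch: sentence splitting on '.' and a hand-picked word list."""
    sentences = [s.strip() for s in answer.split(".") if brand.lower() in s.lower()]
    substance = {"because", "compared", "use case", "pricing", "integrates", "tradeoff"}
    substantive = sum(1 for s in sentences if any(w in s.lower() for w in substance))
    return {
        "sentences_about_brand": len(sentences),
        "substantive_sentences": substantive,
        # One passing mention with no reasoning is the fragile case described above.
        "shallow": len(sentences) <= 1 and substantive == 0,
    }

print(depth_score("Acme CRM is fine. Acme CRM wins because it integrates widely.", "Acme CRM"))
```

Tracking these counts per assistant over time shows whether new product pages and third-party coverage are actually feeding the model more to say about you.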

The third check is authority

Authority in AI search is broader than classic domain strength. The article’s framework treats it as the trusted evidence base around the brand, including off-site references and the consistency of entity signals. In practical terms, that means the model is looking for proof that your brand exists in a stable, recognizable form across the web, not just on your own site.

This is where a lot of brands still get sloppy. If your naming is inconsistent, your product descriptions drift, or independent sources barely mention you, the model has less confidence in using you as a citation source. Search Engine Land’s broader coverage has been pushing the same idea across 2025 and 2026: entity authority and structured-data signals are becoming central to AI search visibility, not optional polish.

How to audit authority

• Check whether your brand name, product names, and categories are consistent everywhere they appear.
• Review the quality of off-site references, not just the quantity.
• Make sure structured data supports the same entity details that users see on the page.
• Look for gaps between what your site claims and what third-party sources repeat.
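The structured-data consistency check in particular is scriptable. A minimal sketch, assuming you fetch page HTML yourself and keep one canonical record of your entity facts (the canonical values and the sample page here are hypothetical; real pages may order script attributes differently, so a production version should use a proper HTML parser):

```python
import json
import re

# Hypothetical canonical entity record the site should repeat everywhere.
CANONICAL = {"name": "Acme CRM", "url": "https://example.com"}

def jsonld_blocks(html: str) -> list[dict]:
    """Extract JSON-LD objects from <script type="application/ld+json"> tags.
    Naive regex extraction; fine for a quick audit of your own pages."""
    pattern = r'<script type="application/ld\+json">(.*?)</script>'
    blocks = []
    for raw in re.findall(pattern, html, re.DOTALL):
        try:
            blocks.append(json.loads(raw))
        except json.JSONDecodeError:
            pass  # malformed JSON-LD is itself a finding worth logging
    return blocks

def entity_mismatches(html: str) -> list[str]:
    """Report JSON-LD fields that disagree with the canonical entity record."""
    issues = []
    for block in jsonld_blocks(html):
        for key, expected in CANONICAL.items():
            if key in block and block[key] != expected:
                issues.append(f"{key}: page says {block[key]!r}, expected {expected!r}")
    return issues

page = '<script type="application/ld+json">{"name": "ACME crm"}</script>'
print(entity_mismatches(page))
```

The same canonical record can be checked against page titles, footer text, and directory listings so the entity reads identically everywhere the model might look.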

The fourth check is comparative positioning

The most revealing part of an AI answer is often not whether your brand appears, but how it is framed relative to the rest of the market. Search Engine Land’s model says the AI can position a brand as a leader, a mid-pack option, or a fallback alternative. That framing changes the whole buying conversation before the user ever clicks anything.

This is the signal many teams miss because they stop at presence. If the assistant describes you as the safe backup while a competitor gets framed as the obvious choice, your visibility is technically intact but strategically weak. The real test is whether the AI understands your value well enough to recommend you first for the right use case, not just mention you somewhere in the answer.

How to audit comparative positioning

• Prompt the model with explicit comparison questions, not just category definitions.
• Watch whether your brand is framed as best-in-class, acceptable, or merely available.
• Compare the language used for you and for competitors, especially around quality, fit, and trust.
• Track whether the same positioning shows up across assistants or changes wildly from one system to another.
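Framing can also be logged mechanically across assistants. This is a crude keyword heuristic, not a real classifier; the leader/fallback word lists are assumptions to be expanded from the actual language you see in answers:

```python
# Illustrative phrase lists; extend these from the wording real answers use.
LEADER = {"best", "leading", "top choice", "obvious choice", "first pick"}
FALLBACK = {"alternative", "fallback", "backup", "also works"}

def framing(answer: str, brand: str) -> str:
    """Classify how an answer frames a brand from the sentences that mention it.
    Returns one of: leader, fallback, mentioned, absent."""
    sentences = [s.lower() for s in answer.split(".") if brand.lower() in s.lower()]
    text = " ".join(sentences)
    if any(phrase in text for phrase in LEADER):
        return "leader"
    if any(phrase in text for phrase in FALLBACK):
        return "fallback"
    return "mentioned" if sentences else "absent"

print(framing("Birch CRM is the top choice. Acme CRM is a solid backup.", "Acme CRM"))
```

Logging this label per assistant per week turns "the AI calls us the safe backup" from an anecdote into a tracked metric you can try to move.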

Why this changes the SEO job

This is why the old rank report is no longer enough. Search Engine Land’s larger argument is that brands now need to track presence in ChatGPT, Claude, Gemini, Perplexity, and Google AI Overviews, because visibility is fragmenting across systems that do not all reward the same signals. A page can rank, be indexed, and still fail the new test if the model cannot confidently mention it, explain it, or compare it.

The practical takeaway is brutal but useful: stop asking only where you rank and start asking whether the model can use you. If the answer is no, the fix is not just more keywords. It is better entity signaling, stronger evidence, fresher supporting content, and clearer comparative context so the AI has something worth citing and repeating.

That is the real break from classic SEO. Ranking still matters, but only as one input into a broader visibility system where the brands that win are the ones AI can trust, explain, and place first.
