AI visibility failures need diagnosis, not more content
A brand can be reachable, readable, and still absent from AI answers. The fix is to diagnose the broken layer before you write another post.

A brand can be crawled, indexed, and still stay invisible in ChatGPT search or Perplexity. That is the trap Duane Forrester is warning teams about: AI visibility is not one vague problem, and it is not automatically solved by publishing more content.
Why this is a diagnosis problem, not a content problem
Forrester’s core point is simple but uncomfortable. Traditional SEO gave you rankings, traffic, and conversions. AI search often gives you silence unless you have a way to measure whether your brand appears in answers at all. That is a very different game, and it is why a healthy-looking dashboard can hide a deeply unhealthy answer experience.
The shift matters because the main answer engines are no longer hypothetical. OpenAI launched ChatGPT search in October 2024 with fast, timely answers and links to web sources inside ChatGPT. Google rolled out AI Overviews broadly in the U.S. in May 2024, and says the feature uses a customized Gemini model working alongside Search quality systems and the Knowledge Graph. These products do not just retrieve pages, they synthesize answers, which means visibility now has multiple failure points.
Layer 1: crawling and access
The first layer is basic reachability. If AI systems cannot crawl, fetch, or reliably access your pages, nothing downstream matters. This is the easiest failure to misread because teams often assume a content problem when the real issue is structural: blocked pages, fragile rendering, weak canonicalization, or pages that are simply hard to retrieve at all.
This is where more content is the wrong reflex. You can publish ten more articles and still leave the original money pages invisible if the system cannot access them cleanly. When Google says AI Overviews appear when its systems judge generative AI to be especially helpful, and that the feature includes source links, the practical takeaway is obvious: if your content is not easily fetchable, it is not eligible to help answer anything.
A quick access check should ask: can the page be reached without friction, can the important text be read consistently, and does the version AI systems see actually contain the facts you want surfaced? If not, fix the plumbing first.
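That three-part check can be sketched as a small helper. This is a minimal sketch, not anything from Forrester's framework: the function name, the 500-character threshold, and the sample brand facts are all invented for illustration. The key idea is to test the raw, pre-JavaScript HTML, since that is closest to what a simple fetcher sees.

```python
def access_check(status_code: int, html: str, required_facts: list[str]) -> dict:
    """Rough plumbing check: is the page reachable, does the raw
    (pre-JavaScript) HTML contain readable text, and are the facts
    you want surfaced actually present in that version of the page?"""
    text = html.lower()
    return {
        "reachable": status_code == 200,
        # Near-empty HTML often means the real content only appears after
        # client-side rendering, which fragile fetchers may never execute.
        "has_body_text": len(text) > 500,
        "facts_present": all(fact.lower() in text for fact in required_facts),
    }

# Hypothetical page and facts, purely for demonstration.
page = "<html><body>" + "Acme Widgets ships in 48 hours. " * 30 + "</body></html>"
print(access_check(200, page, ["Acme Widgets", "48 hours"]))
```

Any check that fails here points to plumbing work, not publishing work.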
Layer 2: retrieval and understanding
The second layer is whether the AI system can correctly interpret and trust what it finds. This is where a brand can be visible to search but still fail inside an answer engine. The content may exist, but the model may not understand the entity, the product, the relationship, or the claim well enough to use it.
Forrester’s framing is useful here because it separates the symptom from the cure. A weak retrieval problem is not solved by another blog post with the same ideas written a different way. It is solved by clearer entity signals, stronger internal consistency, better structured information, and content that answers the exact question the system is trying to resolve. If a brand keeps getting summarized badly, or competitors are repeatedly selected instead, the issue is usually not volume. It is clarity.
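One concrete form of "clearer entity signals" is structured data. The sketch below builds schema.org Organization markup in Python; the brand name and URLs are invented placeholders, and this is one illustrative way to disambiguate an entity, not a prescription from the article.

```python
import json

# Hypothetical brand details. Organization markup with corroborating
# "sameAs" profiles gives systems an unambiguous signal about which
# entity a page is describing.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Widgets",
    "url": "https://example.com",
    "sameAs": [
        "https://en.wikipedia.org/wiki/Acme_Widgets",
        "https://www.linkedin.com/company/acme-widgets",
    ],
}

# Emit the JSON-LD payload that would be embedded in a <script> tag.
print(json.dumps(org, indent=2))
```

The point is consistency: the same name, URL, and relationships everywhere the brand appears, so the model has no competing interpretations to choose from.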
This is also where Google’s AI Overviews and ChatGPT search change the stakes. Both are built to synthesize, not just index. Google’s system is tied to Gemini and Search infrastructure. OpenAI’s search product is built to bring timely answers with linked sources. In both cases, the system has to understand what the source is saying before it can decide whether to use it.
Layer 3: answer generation and attribution
The third layer is the hardest one to spot because everything can look fine until the answer actually appears. Your page may be crawlable. The model may understand it. Yet your brand still may not be named, linked, or credited in the final response.
That is the layer Forrester is really pushing teams to diagnose. In his AI Visibility Journal, he argues that AI search often gives silence unless teams can measure appearance in answers. He also said on Bluesky that each layer has different failure modes, different fixes, and different organizational owners. That is the operational reality most marketing teams are missing.

This layer is where answer systems decide whether to cite you, paraphrase you, or skip you. If you already appear in a source list but not in the visible answer, the problem is not more publishing. It is attribution, trust, competitive selection, or how clearly your brand maps to the query. The right fix may be a better source profile, more authoritative corroboration, or cleaner language that gives the model a reason to name you.
How to tell which layer is broken
Use a simple diagnostic flow:
- If the page is not being reached, indexed, or surfaced as a source at all, you have a crawling or access problem.
- If the content is reachable but the system keeps misunderstanding the brand, product, or claim, you have a retrieval or understanding problem.
- If the content is clearly in play but the brand still does not get cited or named in answers, you have an answer-generation or attribution problem.
That distinction matters because each layer belongs to a different kind of work. Engineering may own access. Content and information architecture may own understanding. Brand, communications, and SEO may own attribution signals. If you collapse all three into “we need more content,” you guarantee wasted effort.
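The flow above can be sketched as a small decision function, with the ownership mapping taken from the paragraph above. The function and argument names are my own; treat this as a thinking aid, not a formal triage tool.

```python
def diagnose_layer(surfaced_as_source: bool,
                   understood_correctly: bool,
                   cited_in_answer: bool) -> tuple[str, str]:
    """Map the three checklist observations to the broken layer
    and the team that typically owns the fix."""
    if not surfaced_as_source:
        return ("crawling/access", "engineering")
    if not understood_correctly:
        return ("retrieval/understanding", "content and information architecture")
    if not cited_in_answer:
        return ("answer generation/attribution", "brand, communications, and SEO")
    return ("none", "no fix needed")

# Example: the page is fetched and understood, but never named in answers.
print(diagnose_layer(True, True, False))
```

Running the three checks in order matters: an attribution fix is wasted effort if the page was never retrieved in the first place.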

Why this is happening now
The timing is not accidental. Google’s move from Search Generative Experience experiments in 2023 to AI Overviews in 2024 shows how quickly answer-first search moved from test mode to the main interface. What changed was not just the model, but the reporting culture around it. Conventional SEO dashboards were built for blue links, while the new interface can hide the brand even when traffic patterns look stable.
Adobe’s 2026 AI traffic report makes the business case impossible to ignore. The company said AI-driven traffic to U.S. retail sites rose 393% year over year in Q1 2026, and TechCrunch reported that the analysis was based on more than 1 trillion visits to U.S. retail sites plus a survey of more than 5,000 U.S. respondents. Adobe also said AI-referred retail traffic converted better than non-AI traffic. That is not a vanity metric. That is revenue.
Forrester’s bigger point
Forrester is not arguing against content. He is arguing against using content as a universal solvent. He knows the search stack well, having helped launch Bing Webmaster Tools and Schema.org, and that background shows in how he frames the problem. The old SEO habit was to publish harder whenever visibility sagged. The new habit has to be diagnostic: identify the layer, verify the evidence, and fix the specific failure point.
That is the real shift in AI visibility. It is no longer a single checklist item, and it is no longer enough to ask whether you ranked. The sharper question is whether your brand can be found, understood, and actually named when an answer engine decides what to show.