Claude Code skill open-sourced to boost citations in AI search results

Open-source GEO-SEO is turning AI citation chasing into a DIY workflow, but automated rewrites still need sharp editorial judgment.

Nina Kowalski · 6 min read

The new search game is not just about ranking, it is about being cited

A Claude Code skill called GEO-SEO is pushing AI search optimization out of agency-only territory and into the hands of smaller teams. The pitch is straightforward: audit a site, rewrite it so it is more likely to be cited, and do it without expensive retainers, whether the target answer engine is ChatGPT, Perplexity, or Claude.

AI-generated illustration

That matters because the center of gravity is shifting. Search is no longer only about blue links and organic position one; it is about whether a page becomes part of an answer at all. In that world, visibility can depend on whether an AI system decides your content is clear, quotable, structured, and worth surfacing.

What GEO means, and why the term stuck

The vocabulary behind this shift came from a Princeton-led paper that introduced Generative Engine Optimization, or GEO, first submitted on November 16, 2023 and revised on June 28, 2024. The paper frames GEO as a black-box optimization framework for improving a source's visibility in generative engine responses, and it also introduces GEO-bench, a benchmark for evaluating performance across queries and source documents.

That framing is important because it changes the job description. Traditional SEO tries to influence crawlers, rankings, and click-through behavior on a results page. GEO assumes the engine is opaque, then asks a more practical question: what kinds of content patterns increase the odds of being cited inside the generated answer?

OpenReview’s summary of the work adds a crucial nuance: citations and quotations can significantly improve visibility, but the effects are domain dependent. That means the same rewrite strategy may help one site and barely move another, which is exactly why a one-size-fits-all playbook has never really fit generative search.

Why the urgency feels different now

The push for AI search optimization is happening because AI-mediated search is competing directly with traditional search for attention. Ahrefs reported that when Google AI Overviews appeared, the top organic result saw a 34.5% lower average clickthrough rate in a comparison of 300,000 searches between March 2024 and March 2025. That is not a small wobble; it is a signal that the first click is being rerouted away from the familiar blue-link economy.

Pew Research Center has pointed in the same direction. Its findings showed that Google users were less likely to click links when an AI summary appeared, and that they rarely clicked the sources cited in those summaries. Taken together, those results explain why citation visibility has become such a prized target: if the summary gets the eye, the source needs to earn its place inside the answer.

Inside the Claude Code skill approach

Anthropic’s Claude Code documentation gives this wave a practical foundation. Skills extend Claude’s capabilities through a SKILL.md file, and they can be invoked directly with a slash command. That makes skills feel less like a vague prompt trick and more like a packaged workflow that a team can load when it needs a specific outcome.
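Per Anthropic's documentation, a skill is defined by a SKILL.md file whose YAML frontmatter (a name and a description) tells Claude when to load it. A minimal hypothetical skill in that format, sketched here for illustration and not taken from the actual GEO-SEO project, might look like:

```markdown
---
name: citation-audit
description: Audit a page for AI-search citability and suggest rewrites
---

# Citation audit

When invoked on a page, review it and report:

1. Whether headings map to likely user questions
2. Which claims lack a nearby source or quotation
3. Where sentences are too long to survive extraction

Propose rewrites, but flag any change that touches a factual claim.
```

The frontmatter is what makes the workflow packageable: the same file can be shared, forked, and invoked on demand rather than re-prompted from scratch.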

Anthropic also describes Claude Code as an agentic coding system that can read a codebase, make changes across files, run tests, and deliver committed code. That is exactly why GEO-SEO fits naturally inside it. If an agent can inspect a site, understand its structure, and then rewrite pages across templates and documents, it can also chase the signals that generative engines seem to reward.

The appeal is democratizing in a very specific way. What used to require specialized consultants, expensive tools, or retainers can now look like a skill file plus a process. Smaller teams can test their own pages, compare versions, and iterate on copy, markup, and structure without waiting for a large agency workflow to spin up.

The open-source ecosystem is broadening fast

GEO-SEO is not appearing in isolation. Anthropic has published an official public repository for Skills and an official course for creating and sharing agent skills, which gives this whole category a real distribution channel. Once skills became easy to package and share, the ecosystem started to behave like an open toolkit rather than a bespoke service.

Multiple GitHub projects now market themselves as GEO or AI-search visibility tools for Claude Code and other agents. Some focus on citability scoring; others on live AI crawler reachability tests, llms.txt generation, and schema markup support. That spread suggests the market is working through the problem in layers: first, can the crawler reach the page; then, can the model parse it; finally, will the answer engine trust and cite it?
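Tools in this space typically reduce such checks to simple heuristics. A minimal sketch of a citability check, with invented signals and thresholds rather than the scoring of any particular project:

```python
import re

def citability_signals(html: str) -> dict:
    """Rough heuristics of the kind these tools check; thresholds are illustrative."""
    text = re.sub(r"<[^>]+>", " ", html)  # crude tag strip for sentence analysis
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    avg_len = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
    return {
        "has_headings": bool(re.search(r"<h[1-3][^>]*>", html, re.I)),
        "has_schema": '"@context"' in html,  # JSON-LD structured data present
        "avg_sentence_words": round(avg_len, 1),
        "extractable": avg_len <= 25,  # short sentences survive extraction better
    }

page = """<h2>What is GEO?</h2>
<p>GEO optimizes content for citation in generated answers.
It treats the engine as a black box.</p>
<script type="application/ld+json">{"@context": "https://schema.org"}</script>"""
print(citability_signals(page))
```

A real tool would add crawler reachability (can GPTBot or ClaudeBot fetch the page at all?) on top of these on-page signals, which is why the projects tend to stack in layers.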

For smaller teams, that is the most consequential part of the story. Open-source GEO-SEO potentially gives them capabilities that were previously gated by agencies or expensive platforms. Instead of buying a black box, they can inspect the box, fork it, and tune it for their own site.

Where automation helps, and where it falls short

The temptation is to treat AI search optimization as a rewrite machine, but that is where the work gets interesting. Citability is not just a formatting problem. It is also about editorial judgment: what deserves a quotation, which claim needs a source nearby, whether a sentence is concise enough to survive extraction, and how much structure the page needs before an AI can reliably parse it.

The Princeton paper and OpenReview findings point to tactics that are real, but not universal. Citations and quotations can help, yet the same optimization does not work equally across every domain. A health publisher, a software documentation site, and a local business page all invite different kinds of trust signals, and the rewrite that helps one can flatten another.

That is why the best use of a Claude Code skill like GEO-SEO is not to hand the entire editorial process to automation. It is to accelerate the parts that can be systematized: audits, structural rewrites, markup improvements, and tests against likely citation patterns. The human editor still has to decide what is true, what is useful, and what should sound like the brand rather than like machine-generated compliance.
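That division of labor can itself be automated at the boundary: the tool surfaces candidates, the editor decides. A hypothetical sketch, with made-up trigger words and sample text, of flagging unsourced claims for human review rather than rewriting them:

```python
import re

def flag_for_editor(paragraphs: list[str]) -> list[str]:
    """Return paragraphs that likely need a citation; an editor decides, not the tool."""
    claim_words = re.compile(r"\d+%|\b(study|research|according|survey)\b", re.I)
    has_source = re.compile(r"https?://|\[\d+\]")  # a nearby link or reference marker
    return [p for p in paragraphs
            if claim_words.search(p) and not has_source.search(p)]

docs = [
    "A 2024 survey found 34% of clicks moved to AI answers.",  # illustrative text
    "Our product page explains pricing tiers (https://example.com/pricing).",
]
print(flag_for_editor(docs))
```

The point of the design is the return value: a worklist for a person, not an automatic rewrite, which keeps the judgment call where the article argues it belongs.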

The real democratization test

The most interesting question is not whether GEO-SEO exists. It is whether open-source tools actually lower the barrier to earning AI citations in a durable way. Early signals say they can lower the barrier to experimentation, which is valuable on its own. A smaller team can now behave like a lab instead of a guesser.

But experimentation is not the same as authority. If generative engines prefer clear quotations, stronger sourcing, and structured pages, then the winning teams will still be the ones that combine automation with editorial rigor. Open source may make the tools cheaper and the workflow faster, but judgment remains the scarce resource.
