Solo Dev Builds AI Tool That Automates Entire SEO Pipeline, From Research to Published Posts
Craig Hewitt's open-source SEO Machine uses Claude Code to run the full content pipeline autonomously, collecting nearly 2,900 GitHub stars and threatening the $5K agency retainer model.

Craig Hewitt, founder and CEO of Castos, a podcast hosting platform serving more than 40,000 brands, has released SEO Machine as an open-source Claude Code workspace that executes the entire SEO content pipeline without a human team. The repository has accumulated nearly 2,900 stars and close to 500 forks on GitHub, signaling rapid adoption across the developer and marketing communities.
SEO Machine is a specialized Claude Code workspace for producing long-form, SEO-optimized blog content for any business: it helps users research, write, analyze, and optimize posts that rank well and serve their target audiences. Hewitt originally built the tool internally for Castos before open-sourcing it, and the repository ships with a complete real-world example directory showing exactly how the podcast SaaS company runs its own content operation.
The tool offers workflow commands including /research, /write, /rewrite, /analyze-existing, and /optimize, backed by seven expert agents handling SEO optimization, meta element creation, internal linking, keyword mapping, editing, performance analysis, and headline generation. That agent roster has since grown to ten, with a CRO (conversion rate optimization) analyst and a landing page optimizer among the additions, alongside 26 discrete marketing skills and a full suite of landing page commands covering research, competitor analysis, writing, auditing, and publishing.
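SEO Machine's own command files aren't reproduced here, but Claude Code defines custom slash commands as Markdown files under .claude/commands/, where the filename becomes the command name. A /research command would therefore live at .claude/commands/research.md, and might look roughly like the following (the frontmatter and prompt body below are an invented sketch, not the repo's actual file):

```markdown
---
description: Research a target keyword and produce a content brief
argument-hint: [target keyword]
---

Research "$ARGUMENTS" for the business described in the workspace context files.

1. Determine search intent and summarize the top-ranking competing pages.
2. Cluster related keywords worth targeting alongside the primary term.
3. Write a brief to the research directory: suggested H2s, internal link
   targets, and candidate meta titles and descriptions.
```

Typing /research private podcasting in a Claude Code session would then run that prompt with the argument substituted in, which is the entire trick: the "pipeline" is a set of carefully written prompts with a filesystem convention around them.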
For agencies trying to assess what this means for their retainer model, the architecture is instructive. The system provides search intent detection, keyword clustering, readability scoring, and 0-100 SEO quality ratings with actionable recommendations, pulling real-time data from Google Analytics 4, Google Search Console, and DataForSEO. That is the mechanical production layer of the pipeline: the segment that has always been the easiest to invoice at a premium and the easiest to automate cheaply. Keyword clustering, SERP gap analysis, meta generation, internal link mapping: SEO Machine handles all of it through slash commands.
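How mechanical that layer is becomes obvious once you sketch it. SEO Machine's actual scoring logic isn't published in this form, but a toy version of a 0-100 quality rating can be nothing more than a weighted rubric over measurable signals; every weight, threshold, and signal name below is invented for illustration:

```python
# Toy 0-100 content score: a weighted rubric over measurable signals.
# Weights and signals are illustrative, not SEO Machine's actual formula.

def seo_score(word_count: int, keyword_coverage: float,
              readability: float, internal_links: int) -> int:
    """Combine normalized sub-scores into a single 0-100 rating."""
    length = min(word_count / 2000, 1.0)             # long-form target ~2,000 words
    coverage = min(max(keyword_coverage, 0.0), 1.0)  # share of mapped keywords used
    read = min(max(readability / 100, 0.0), 1.0)     # e.g. a Flesch-style 0-100 input
    links = min(internal_links / 8, 1.0)             # diminishing credit past 8 links

    weighted = 0.25 * length + 0.35 * coverage + 0.25 * read + 0.15 * links
    return round(weighted * 100)

print(seo_score(word_count=1850, keyword_coverage=0.7,
                readability=62, internal_links=5))  # -> 72
```

Anything that reduces to arithmetic over crawlable and API-fetchable inputs is, by definition, not a defensible line item on an invoice.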
Where the tool predictably strains is exactly where it tries hardest to compensate. The system strips AI watermarks and writing tells from content: em-dashes, filler phrases, and other robotic patterns. The /scrub command targets 24 specific AI writing signatures catalogued against Wikipedia's AI cleanup guidelines. But removing robotic patterns by running more AI at them is a known patch, not a solution. Brand voice drift, factual hallucination, E-E-A-T credibility signals, and SERP volatility response are the places where any automated pipeline without human editorial accountability will quietly underperform over time.
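The repo's actual 24-signature catalogue isn't reproduced here, but the class of transformation /scrub performs is easy to picture: pattern matching plus rewriting. A minimal sketch, with three invented example rules standing in for the real list:

```python
import re

# A minimal sketch of the pattern-scrub idea. The real /scrub command
# catalogues 24 signatures; the three rules below are invented examples.
SCRUB_RULES = [
    (re.compile(r"\s*\u2014\s*"), ", "),                           # em-dash -> comma
    (re.compile(r"^In today's fast-paced world,?\s*", re.I), ""),  # filler opener
    (re.compile(r"\bdelve into\b", re.I), "examine"),              # stock AI verb
]

def scrub(text: str) -> str:
    """Apply each rewrite rule in order; purely mechanical, no judgment."""
    for pattern, replacement in SCRUB_RULES:
        text = pattern.sub(replacement, text)
    return text

print(scrub("In today's fast-paced world, we delve into podcasting \u2014 at scale."))
# -> "we examine podcasting, at scale."
# Note the orphaned lowercase "we": the scrub removed the tell but left a
# broken sentence, which is exactly why this is a patch rather than editing.
```

The example's collateral damage is the point: rule-based scrubbing can erase signatures, but it cannot notice what the erasure did to the sentence.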
The workspace maintains brand voice, style guides, and SEO guidelines across all content through customizable context files, with organized directories for topics, research, drafts, rewrites, and published content. Those context files are where agencies have a clear insertion point. Feeding the system precise brand voice documentation, internal linking maps, and conversion-focused editorial standards is not a commodity task: it requires the kind of strategic judgment a competent content director brings, and it determines whether the output from 19 automated workflow commands is useful or garbage.
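The repo's example directory shows how Castos fills in those files; a stripped-down sketch of the kind of brand voice context an agency would author might look like the following, where every path, persona, and rule is invented for illustration:

```markdown
<!-- hypothetical context/brand-voice.md; SEO Machine's real example
     files are far more detailed -->

# Brand Voice

- Audience: independent podcasters evaluating paid hosting; assume
  technical curiosity, not technical expertise.
- Tone: direct, practical, first person plural ("we"); no hype adjectives.
- Claims: every statistic needs a linked primary source; no invented numbers.

# Internal Linking

- Pillar pages: /podcast-hosting, /private-podcasting, /podcast-analytics
- Link each post to exactly one pillar page and 2-3 related posts.
```

A file like this is short, but getting its contents right is the strategic work: it encodes positioning decisions the tool cannot make on its own.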
The more interesting commercial read is that SEO Machine makes a strong argument for agencies to adopt it internally and stop billing for the pipeline itself. Hewitt has made the system available as an open-source tool for any business to streamline long-form SEO content creation, which means the commodity layer is now free; the differentiator worth selling in 2026 is not keyword research or first-draft production. The agencies that survive will be the ones that get paid for what the tool cannot do: editorial judgment, conversion testing, accuracy QA, and the human accountability that a 2,900-star GitHub repo will never be able to invoice for when rankings drop.