Analysis

Marketing Agency Open-Sources Claude Code Workflows for Growth, SEO, and Outbound

Single Brain has open-sourced Claude Code workflows battle-tested on pipelines generating millions in revenue, covering growth experiments, content QA, and outbound automation, all under the MIT license.

Sam Ortega

The repository landed on GitHub at `ericosiu/ai-marketing-skills` with a straightforward claim: these are not prompts, they are complete workflows. Single Brain, the AI revenue agents company behind the release, described them as "battle-tested on real pipelines generating millions in revenue." The MIT license means any agency can fork, modify, and ship extensions without legal friction. If you run a growth, SEO, or outbound function and you have not looked at this yet, here is exactly how to pilot three of the workflows in two weeks without creating compliance or quality debt.

Start with the Growth Engine in week one. The Growth Engine uses bootstrap confidence intervals and Mann-Whitney U tests, real statistics rather than gut instinct, which means you can hand experiment results to a client and defend the methodology. The two-week governance play: version-control your experiment prompts in a dedicated `/prompts/growth/` directory, tag every run with a commit hash, and require human sign-off before any winning variant gets promoted to production. Your week-one success metric is straightforward: run one A/B test end-to-end, generate the statistical output, and confirm a team member can reproduce the result from the commit log alone. If they cannot, your prompt versioning is broken.
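To make that methodology concrete, here is a minimal sketch of the kind of statistical output the Growth Engine's approach describes, using SciPy's `mannwhitneyu` and a NumPy bootstrap. The data is simulated for illustration; the variable names and conversion rates are assumptions, not code from the repo:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)

# Simulated per-visitor conversions for two variants (hypothetical rates).
control = rng.binomial(1, 0.05, size=2000).astype(float)
variant = rng.binomial(1, 0.065, size=2000).astype(float)

# Mann-Whitney U: non-parametric, so no normality assumption is needed.
stat, p_value = mannwhitneyu(variant, control, alternative="two-sided")

# Bootstrap 95% confidence interval on the difference in conversion rates.
diffs = [
    rng.choice(variant, variant.size, replace=True).mean()
    - rng.choice(control, control.size, replace=True).mean()
    for _ in range(5000)
]
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"p={p_value:.4f}, 95% CI for lift: [{lo:.4f}, {hi:.4f}]")
```

A defensible result is one where both numbers survive a client's scrutiny: a p-value from a test whose assumptions hold, and an interval that tells them how big the lift plausibly is, regenerable from the tagged commit.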

Content ops QA is the second workflow to activate, and it is the one most likely to change how an agency handles client deliverables. The content-ops pipeline routes every piece through an Expert Panel that recursively scores with domain-specific expert personas until quality hits 90 out of 100. Nothing ships below that threshold. The pipeline itself runs Content Source → Content Transform → Quality Scorer → Quality Gate → Publish, with the Expert Panel feeding a revision loop capped at three rounds. For client-safe data handling, the practical move is to strip any personally identifiable client information before it hits the scoring scripts, then reintroduce client context in the final human review step. Your week-one metric: track what percentage of first drafts clear the gate without triggering a revision loop. Anything above 60 percent on day one means your briefing templates are solid. Below 40 percent means fix the brief, not the AI.
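The gate-plus-capped-revision-loop structure is simple to reason about in code. This is a sketch of the control flow only, under stated assumptions: `score_with_panel` and `revise` are hypothetical placeholders standing in for the repo's Expert Panel and transform steps, which this code does not reproduce:

```python
QUALITY_THRESHOLD = 90   # nothing ships below 90/100
MAX_REVISIONS = 3        # revision loop capped at three rounds

def score_with_panel(draft: str) -> int:
    # Placeholder for the Expert Panel scoring step; a real implementation
    # would aggregate scores from domain-specific expert personas.
    return 70 + 10 * draft.count("[revised]")

def revise(draft: str) -> str:
    # Placeholder for the revision/transform step.
    return draft + " [revised]"

def quality_gate(draft: str) -> tuple[str, int, bool]:
    """Score, revise up to MAX_REVISIONS times, and report pass/fail."""
    for _ in range(MAX_REVISIONS + 1):
        score = score_with_panel(draft)
        if score >= QUALITY_THRESHOLD:
            return draft, score, True
        draft = revise(draft)
    # Cap reached without clearing the gate: escalate to a human, do not publish.
    return draft, score, False

final, score, passed = quality_gate("First draft of the client post.")
print(passed, score)
```

The design point worth copying is the explicit failure branch: when the revision cap is hit, the piece falls out of the pipeline to a human rather than shipping at a sub-threshold score.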

The third workflow is outbound research, and the architecture here is the most sophisticated of the three. The Deal Resurrector runs three intelligence layers, including a "follow the champion" layer that tracks departed contacts to their new companies. The ICP Learner rewrites your ideal customer profile automatically based on actual win/loss data. The RB2B Router handles intent scoring, seniority-based company deduplication, and agency classification before routing leads into outbound sequences. The governance issue to solve here before you run a single sequence: define which data sources are permissible under your client contracts, document that list in a `data-policy.md` file committed alongside the skill, and build a human review gate between the Router's classification output and any sequence trigger. Your week-two metric: measure the accuracy of the Router's agency classification against your own manual review of 50 records. If it misclassifies more than 15 percent, you need to audit the intent-scoring inputs.
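That week-two audit is a few lines of code once you have the two label sets side by side. A minimal sketch, assuming you export the Router's agency classifications and your manual review of the same 50 records into parallel lists (the labels and counts here are invented for illustration):

```python
def audit_classifications(router_labels, manual_labels, max_error_rate=0.15):
    """Compare Router output to manual labels; flag if error rate exceeds 15%."""
    assert len(router_labels) == len(manual_labels), "audit sets must align"
    errors = sum(r != m for r, m in zip(router_labels, manual_labels))
    error_rate = errors / len(router_labels)
    return error_rate, error_rate <= max_error_rate

# Hypothetical 50-record audit: the Router and the human reviewer
# disagree on 3 records.
router = ["agency"] * 44 + ["in-house"] * 6
manual = ["agency"] * 47 + ["in-house"] * 3
error_rate, within_tolerance = audit_classifications(router, manual)
print(f"error rate: {error_rate:.0%}, within tolerance: {within_tolerance}")
```

Commit the audit script and its 50-record sample next to the `data-policy.md` file so the tolerance check is reproducible, not a one-off spreadsheet exercise.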

Eric Siu is the founder of Single Grain, the digital marketing agency with clients including Amazon, Uber, Airbnb, and Salesforce, and Single Brain is the AI-focused arm that built and released these workflows. The skills are compatible with Claude Code, OpenAI Codex, Cursor, and Windsurf, as well as any agent that supports the Agent Skills spec, which means the toolchain decision does not lock you into a single vendor.

The governance layer is where most agency pilots fail. Prompt versioning in Git, mandatory human sign-off on anything client-facing, and a documented data policy are not optional add-ons. They are the difference between a two-week pilot that earns trust and one that creates a compliance fire drill. The repo gives you the engine. The version control and sign-off discipline is on you to build before you touch a single client account.
