30+ posts/month per brand, zero manual queueing — Cowork + Claude + MCPs orchestrating the whole pipeline.
The context
At HAZE I was producing content for a portfolio of client brands, and the consistent bottleneck was not strategy or creative direction but the operational overhead of moving an idea from brief to scheduled post. The manual portion of that workflow represented several hours of work per brand each week.
When Cowork, Claude, and the Model Context Protocol ecosystem matured to the point where AI agents could reliably hold context across a multi-step workflow, I rebuilt the pipeline as software. The same architecture now runs in production at Wealth Enhancement Group, with Hugging Face MCP integrated for on-demand asset generation.
The pipeline
Brand brief (Drive doc)
        │
        ▼
Cowork / Claude ──► generates post copy variants + image/video prompts
        │           + selects format (carousel, reel, single)
        ▼
Hugging Face MCP ──► on-demand asset generation when a prompt
        │            wants imagery the library doesn't have
        ▼
Google Drive ──► assets land in brand folder with metadata
        │
        ▼
Make.com / Zapier ──► validates against brand guidelines,
        │             pushes to scheduler with caption + hashtag set
        ▼
Buffer ──► queues posts on the brand's content calendar
        │
        ▼
IG / TikTok ──► publishes; engagement metrics flow back into reporting
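The flow above can be sketched as a chain of stage functions, each taking the previous stage's output. This is a minimal illustration, not the production code: the `Post` shape and the stage functions (`generate_copy`, `validate`, `schedule`) are hypothetical stand-ins for the real Claude, Make.com, and Buffer integrations.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    """One post moving through the pipeline (illustrative shape)."""
    brand: str
    copy: str = ""
    flagged: bool = False
    notes: list = field(default_factory=list)

# Hypothetical stage functions standing in for the real integrations.
def generate_copy(post):
    post.copy = post.copy or f"Draft copy for {post.brand}"
    return post

def validate(post):
    # Toy stand-in for the real no-go check.
    if "banned" in post.copy.lower():
        post.flagged = True
        post.notes.append("flagged for human review")
    return post

def schedule(post):
    post.notes.append("queued in Buffer")
    return post

def run_pipeline(post, steps):
    """Run a post through each stage in order; a flagged post
    short-circuits to human review instead of being scheduled."""
    for step in steps:
        post = step(post)
        if post.flagged:
            break
    return post
```

A clean post runs all the way to scheduling; a post that trips the validator stops before the scheduling stage, which mirrors the flag-for-review behavior described below.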
What's in each step
- Brand brief: structured doc per client — voice, audience, pillars, no-go list, current campaigns. The prompt template reads it on every run.
- Cowork / Claude generation: variants generated under a strict format spec. Multiple concept options per post, ranked by adherence to the brand's pillars.
- Hugging Face MCP: lets the agent call image generation directly when it needs a specific asset, rather than waiting for a human to source it. Critical unlock — moved the pipeline from "human + AI" to "agent end-to-end."
- Drive landing: filenames include brand + pillar + date. Make.com watches the folder.
- Validation: regex + LLM check against the brand's no-go list (banned phrases, competitor names, off-brand voice). Failed items get flagged for human review instead of silently shipping.
- Buffer scheduling: cadence rules per brand, so the content calendar fills itself without manual queueing.
- Reporting loop: engagement data flows back to the brand brief so the next generation cycle weights formats that worked.
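The regex half of the validation step can be sketched as a whole-word scan of the caption against the brand's no-go list (the LLM voice check is omitted; the function name and the example terms are illustrative, not the production rules):

```python
import re

def check_no_go(caption: str, no_go_terms: list[str]) -> list[str]:
    """Return the no-go terms found in the caption (case-insensitive,
    whole-word match). An empty list means the caption passes; a
    non-empty list means the item gets flagged for human review."""
    hits = []
    for term in no_go_terms:
        pattern = r"\b" + re.escape(term) + r"\b"
        if re.search(pattern, caption, flags=re.IGNORECASE):
            hits.append(term)
    return hits
```

For a financial-services brand the list skews toward compliance language: `check_no_go("Try our guaranteed returns today", ["guaranteed returns", "CompetitorCo"])` returns `["guaranteed returns"]`, so the item is routed to review rather than silently shipping.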
At WEG
Same architecture, different inputs. The brand brief becomes the firm's voice plus its compliance constraints; outputs feed lifecycle email, webinar promotion, and organic social. The pipeline doesn't care whether it's serving a DTC brand or a financial-services firm — the brief is the swap point.
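The "brief is the swap point" idea can be illustrated as one brief structure that a DTC brand and a financial-services firm fill in differently. The field names and example values here are illustrative, not the production schema:

```python
from dataclasses import dataclass, field

@dataclass
class BrandBrief:
    """Illustrative brief shape: the only per-brand input the pipeline needs."""
    name: str
    voice: str
    audience: str
    pillars: list[str]
    no_go: list[str] = field(default_factory=list)  # banned phrases, competitors
    cadence_per_week: int = 5                       # drives scheduling rules

# Hypothetical DTC brand from the agency era.
dtc = BrandBrief(
    name="haze-client",
    voice="playful, irreverent",
    audience="Gen Z streetwear buyers",
    pillars=["drops", "community", "behind-the-scenes"],
    no_go=["CompetitorCo"],
)

# Hypothetical financial-services brief: same shape, compliance-heavy no-go list.
weg = BrandBrief(
    name="weg",
    voice="measured, advisory",
    audience="pre-retirees planning a transition",
    pillars=["planning", "webinars", "market commentary"],
    no_go=["guaranteed returns", "risk-free"],
    cadence_per_week=3,
)
```

Everything downstream of the brief (generation, validation, scheduling) reads these fields, which is why onboarding a new brand reduces to writing the brief and connecting Buffer.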
Outcome
- 30+ posts/month per brand with zero manual queueing
- ~20 hours/month saved per brand — redirected to strategy + paid
- Pipeline served 12+ brands during HAZE's active years; the architecture now runs in production at WEG with new brand onboarding ≈ 90 min (just the brief doc + Buffer auth)
- Multiple seven-figure-view TikTok campaigns from the same framework
Why this engagement matters
In-house marketing teams frequently outsource execution to agencies because their internal operations are not built for the throughput modern campaigns require. Building that operational infrastructure in-house changes the equation, and the same pattern maps directly onto a modern B2B SaaS marketing function. Unlike "we use AI for marketing" claims that amount to a single ChatGPT subscription, this system has shipped in production across two organizations and is supported by open-source infrastructure I authored.