Finally, AI Agents That Work. The Secret? Sub-Agents. (Part 2)

I built a LinkedIn content agent that made content creation 5x faster

Community, I’m super excited to bring you an update on using Claude Code Sub-Agents.

As people at the cutting edge of AI GTM, you’re here to understand the latest tech, its potential and how you can use it.

Last month, I built a LinkedIn Content Agent with Claude Code’s Sub-Agents, and I’ve now been using it for a month. I’m super happy with it, so I wanted to share it in detail today.

This topic is a bit technical, but it shows the power of what’s already possible with AI. Remember, this is just ONE use case. I’d definitely recommend reflecting on the power of sub-agents, trying to set them up, and thinking about how they might be powerful for your use cases.

Here's what we're covering:

→ The scaling problems Claude Code Sub-Agents solve
→ My LinkedIn sub-agent architecture
→ Each agent breakdown (with exact MCPs)
→ My thoughts after 30 days
→ Full setup guide

Let's dive in.

Btw, you may find it useful to check out Part 1 here.

Why Claude Code Sub-Agents Are Different

Most AI agents hit a wall when you try to scale them.

You set up an agent. It works great for simple tasks. Then you add complexity and watch it fail to execute. Context gets messy and runs short. Tools conflict. The whole system becomes unreliable.

Here's what happens with traditional single-agent approaches:

  • Single context window gets overwhelmed with information

  • Trying to handle everything = being mediocre at everything

  • Token limits destroy complex workflows

  • Can't run parallel processes without conflicts

  • Tools interfere with each other constantly

Claude Code Sub-Agents solve this with specialised architecture.

Think of Sub-Agents like hiring specialist team members instead of one overwhelmed generalist.

Each sub-agent:
→ Has its own 200k token context (clean, unpolluted)
→ Only gets the tools it actually needs
→ Maintains specialised expertise for ONE thing
→ Works in parallel with other agents

The result? 5x faster processing. 80% less token waste. Zero context pollution between tasks.
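Under the hood, a Claude Code sub-agent is just a markdown file in your project’s `.claude/agents/` folder: YAML frontmatter declaring its name, description, and allowed tools, followed by its system prompt. Here’s a minimal sketch (the agent name and prompt are illustrative, not one of my actual files):

```markdown
---
name: my-research-agent
description: Use this agent for deep research tasks, keeping research context out of the main conversation.
tools: WebSearch, Read, Grep
---

You are a research specialist. Given a topic, gather sources,
extract the key findings, and return a concise structured summary.
Do not attempt tasks outside research.
```

Because the `tools` list is explicit, this agent physically can’t touch anything it doesn’t need — that’s where the “only gets the tools it actually needs” guarantee comes from.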

I’ve talked about this in detail before; you can read more on sub-agents and why they’re powerful here (Part 1).

AI Agents for LinkedIn Content

To put sub-agents to the test, I built a LinkedIn Sub-Agent system.

You feed it raw ideas - could be a keyword, a half-baked thought, something you learned, a trend you spotted. The system then does all the heavy lifting: researching what's actually working on LinkedIn right now, understanding the patterns behind viral posts, transforming your idea into content that matches your exact voice and style, and even creating custom visuals.

The whole point is you go from "I should post about X" to having a complete, publish-ready LinkedIn post with zero manual work. No switching between tools, no copying and pasting, no prompt engineering. Just describe what you want to talk about and the system handles everything else.

It's trained on your best content, so it writes like you. It researches in real-time, so it's always current. And it runs autonomously - you're not babysitting it through each step.

Now let’s run through each of the sub-agents in detail:

The Content Agent Architecture: 3 Agents, One Mission

The hierarchy:

📋 linkedin-viral-content (Orchestrator)
├─→ 📊 linkedin-analytics-researcher
├─→ ✍️ linkedin-content-creator  
└─→ 🎨 linkedin-image-generator

The training database:

post-type-training-data/
├── thought-leadership/
├── demos/
└── lead-magnets/

Critical rule: The orchestrator NEVER creates content. It ONLY delegates. This is what makes the system bulletproof.
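To make that rule concrete, here’s a sketch of how the orchestrator’s definition can encode it (the exact wording here is illustrative):

```markdown
---
name: linkedin-viral-content
description: Orchestrates the LinkedIn content pipeline. MUST delegate all work to the specialist sub-agents.
---

You are an orchestrator. You NEVER research, write, or generate
images yourself. For every request:
1. Delegate research to linkedin-analytics-researcher.
2. Pass the research report to linkedin-content-creator.
3. Pass the finished post to linkedin-image-generator.
Return the assembled post and image path to the user.
```

Keeping the orchestrator tool-light and instruction-heavy is what stops it from “helpfully” doing the work itself and polluting its context.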

This is the end-to-end demo where I run through this system. I’d suggest even listening to it as you read through the rest of the content.

Agent #1: The Research Intelligence Engine

linkedin-analytics-researcher

What it does: You have an idea. Maybe "AI automation for marketing teams" or "Best MCPs for sales". This agent turns that spark into comprehensive market intelligence.

MCP Tools:

  • Apify LinkedIn Scraper API - Finds posts matching your keywords

  • Perplexity MCP - Deep research on the topic itself

  • Standard Claude tools - Consolidates everything into insights

The workflow:

  1. You prompt with your topic/keyword

  2. Searches LinkedIn for posts containing those keywords

  3. Pulls engagement metrics, identifies top performers

  4. Simultaneously runs Perplexity search for topic depth

  5. Consolidates LinkedIn patterns + Perplexity insights

  6. Delivers structured research report

Example input: "Claude Code sub-agents"

LinkedIn findings:

Top posts mentioning "sub-agents":
- "How I built 10 agents..." (2.1k likes)
- "Sub-agents saved me 20 hours..." (1.8k likes)
Pattern: Step-by-step tutorials outperform theory

Perplexity insights:

Technical context: Sub-agents reduce token usage 80%
Market timing: 500% search increase last 30 days
Gap identified: Nobody showing actual architecture

Consolidated report: Ready for content creation.
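A sketch of what this researcher’s definition could look like. The MCP tool names follow Claude Code’s `mcp__server__tool` convention, but they’re assumptions here — yours will depend on how your Apify and Perplexity MCP servers are configured:

```markdown
---
name: linkedin-analytics-researcher
description: Turns a keyword into a research report using LinkedIn scraping and Perplexity.
tools: mcp__apify__run-actor, mcp__perplexity__search, Read, Write
---

Given a topic or keyword:
1. Scrape recent LinkedIn posts matching the keyword via the Apify actor.
2. Rank them by engagement and note the patterns behind the top performers.
3. In parallel, run a Perplexity search for technical and market context.
4. Consolidate both into a structured research report and save it
   for the content creator.
```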

Agent #2: The Content Creation Machine

linkedin-content-creator

What it does: Takes the research package and creates content matching YOUR exact style. Not generic. Your voice.

MCP Tools:

  • Training Database Access - Your categorized post examples

  • Perplexity MCP - Real-time validation

  • Standard Claude tools - Content generation

The three-template system:

  1. Thought Leadership - Industry predictions, hot takes

  2. Demos - Step-by-step implementations

  3. Lead Magnets - Free resources, guides, templates

The workflow:

  1. You specify post type ("make this a demo post")

  2. Agent identifies which template category

  3. Pulls relevant examples from that specific training set

  4. Takes research package from Agent #1

  5. Rewrites using your exact patterns from that post type

  6. Maintains your formatting, hooks, CTAs
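The “Training Database Access” here is nothing exotic — it’s file reads over the categorised folders shown earlier. A sketch of the definition (tool names as before are assumptions based on my setup):

```markdown
---
name: linkedin-content-creator
description: Rewrites a research report into a post matching my voice, using the categorised training examples.
tools: Read, Glob, Write, mcp__perplexity__search
---

Given a research report and a post type (thought-leadership,
demos, or lead-magnets):
1. Read the example posts under post-type-training-data/<post-type>/.
2. Extract the recurring hooks, formatting, and CTA patterns.
3. Rewrite the research findings into a post that follows those
   exact patterns.
4. Validate any factual claims with a quick Perplexity search
   before finalising.
```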

Agent #3: The Visual Hook Generator

linkedin-image-generator

What it does: Uses FAL API to create custom images based on the post content. No templates. Original every time.

MCP Tools:

  • FAL API (flux-dev model) - Image generation

  • MCP FAL Server - Direct interface

  • Keys File Access - Credential management

  • Standard Claude tools - Prompt optimization

The workflow:

  1. Reads the generated post content

  2. Extracts core theme/concept

  3. Generates scene description

  4. Calls FAL API with optimized parameters

  5. Creates 1200x628px LinkedIn-ready image
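And the final specialist, sketched the same way (the FAL MCP tool name is an assumption — it depends on the MCP FAL server you install):

```markdown
---
name: linkedin-image-generator
description: Generates a custom header image for a finished post via the FAL flux-dev model.
tools: Read, mcp__fal__generate-image
---

Given a finished post:
1. Read the post and extract its core theme.
2. Write a single-scene image prompt (no text in the image).
3. Call the FAL flux-dev model, requesting a 1200x628 image
   for LinkedIn.
4. Save the image next to the post and return its path.
```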

My Thoughts After 30 Days

I’ve been running this system for about a month now alongside all my other automations.

Honestly? It's been very useful for turning random insights into actual LinkedIn posts. Like when I have a conversation about something interesting, or I spot a trend, or I just have a half-formed thought - I can throw it at this system and get back something that's actually worth posting.

It's not replacing my other content systems - I still have my n8n automations running. But this fills a different need. It's for those moments when you have something to say but don't want to spend an hour crafting it into a post.

5x faster than writing manually. And I actually use what it creates, which says everything.

Setting This Up Today

If you’d like to try the system, you’ll need:

  1. Claude Code CLI (with API access)

  2. Apify API token ($5 free credits)

  3. Perplexity API key

  4. Fal AI API key

  5. Your training posts categorised

  6. 30 minutes to configure
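If you’re on a Unix shell, the initial scaffolding is a few commands. The environment variable names here are illustrative — use whatever your MCP server configs actually expect:

```shell
# Create the project-level sub-agent and training-data folders
mkdir -p .claude/agents
mkdir -p post-type-training-data/thought-leadership \
         post-type-training-data/demos \
         post-type-training-data/lead-magnets

# Export the API keys the MCP servers will read
# (variable names are illustrative, not a fixed requirement)
export APIFY_TOKEN="your-apify-token"
export PERPLEXITY_API_KEY="your-perplexity-key"
export FAL_KEY="your-fal-key"
```

From there, each agent is one markdown file dropped into `.claude/agents/`, and Claude Code picks them up automatically.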

Here is all the detail to guide you through the setup:
Detailed set up guide
Full video overview

I’d love to hear how you get on setting this up, how you iterate on it and what the results are.

I read every reply!

Happy building,

Elliot

Learn how AI & Automation can grow your business.
Book a strategy call with our team below 👇

Click here :)