Claude Code 2.0 - Advanced Workflows with Sub-Agents and MCP
What This Guide Covers
Claude Code 2.0 introduces powerful new capabilities that go far beyond simple code editing: sub-agents for delegating complex tasks, MCP (Model Context Protocol) servers for extending functionality, and advanced tool composition. This guide shows how to leverage these features to create sophisticated AI-powered workflows for documentation, research, and knowledge base management.
Why This Matters for Your Workflow
Your current setup:
- Manually managing OpenCode documentation
- Researching pricing comparisons across providers
- Maintaining knowledge base articles in midimaze
- Tracking context across multiple AI tools
With Claude Code 2.0, you can:
- Delegate research tasks to specialized sub-agents (search, summarize, compare)
- Extend Claude with custom MCP servers (database access, API integrations, custom search)
- Compose complex workflows (research → analyze → document → validate)
- Maintain context automatically across long documentation sessions
Understanding the New Capabilities
1. Sub-Agents (Task Delegation)
What they are: Claude Code 2.0 can spawn sub-agents to handle specialized tasks autonomously. These agents work in parallel or sequentially, then report back results.
Available agent types:
- general-purpose: Complex multi-step tasks, research, code search
- statusline-setup: Configure Claude Code UI settings
- output-style-setup: Customize output formatting
How they work:
```
Main Agent (you're talking to)
├── Sub-Agent 1: Research latest pricing data
│   └── Uses WebSearch, WebFetch tools
├── Sub-Agent 2: Analyze competitor features
│   └── Uses Read, Grep tools
└── Main Agent: Synthesizes results into article
```
Real-world example for your workflow:
```
You: "Research the latest Claude model pricing and compare with OpenAI"

Main Agent:
├── Spawns research agent → Fetches Anthropic pricing
├── Spawns research agent → Fetches OpenAI pricing
├── Main agent receives both results
└── Generates comparison table in your article format
```
2. MCP Servers (Extended Tools)
What they are: MCP (Model Context Protocol) servers are external programs that provide additional tools to Claude. Think of them as plugins that give Claude new abilities.
Your current MCP servers: Looking at your available MCP tools, you already have:
- Docker MCP Gateway: Access to 50+ tools
- DNS management (Hostinger)
- VPS operations
- Billing/subscriptions
- Brave Search (web, news, images, videos)
- Browser automation (Playwright)
- Academic paper search (arXiv, PubMed, Semantic Scholar, etc.)
- Wikipedia integration
- GitHub (via `gh` CLI)
- Google Maps
- Obsidian vault operations (your midimaze!)
- And many more...
How they extend your workflow:
```
Standard Claude: Can read/write files, run bash commands
Claude + MCP:    Can search academic papers, fetch live web data,
                 interact with your Obsidian vault semantically,
                 automate browser tasks, manage VPS instances
```
3. Advanced Tool Composition
What it means: Claude Code 2.0 can intelligently chain multiple tools together to accomplish complex goals.
Example workflow:
Goal: Update your OpenCode pricing article with latest data
Tool chain:
```
1. brave_web_search("Claude pricing 2025")
2. WebFetch(anthropic.com/pricing) → Extract structured data
3. brave_web_search("OpenCode Zen pricing")
4. WebFetch(opencode.ai/docs/zen) → Extract pricing
5. Read(your current article) → Understand format
6. Edit(article) → Update specific sections
7. Git commit with descriptive message
```
Without this capability: you'd manually research, copy/paste, format, and update.
With this capability: "Update my OpenCode pricing article" → done in one shot.
Practical Workflows for Your Documentation
Workflow 1: Automated Research & Documentation Update
Use case: Keep your OpenCode pricing article current
Traditional approach:
- Visit Anthropic pricing page
- Visit OpenCode Zen pricing page
- Manual comparison
- Copy data into article
- Update frontmatter timestamp
- Commit to git
Claude Code 2.0 approach:
Prompt:
```
Update my OpenCode pricing article with the latest data from
both Anthropic and OpenCode Zen. Use sub-agents to fetch
pricing concurrently, then update the comparison tables.
```
Execution:
```
├── Sub-Agent 1: WebFetch Anthropic pricing → Extract model prices
├── Sub-Agent 2: WebFetch OpenCode Zen pricing → Extract model prices
├── Main Agent: Read current article structure
├── Main Agent: Generate updated comparison tables
├── Main Agent: Edit article with new data
└── Main Agent: Update frontmatter timestamp
```
Result: Article updated with verified current data in ~30 seconds
Workflow 2: Cross-Reference Validation
Use case: Ensure your articles reference each other correctly
Claude Code 2.0 approach:
Prompt:
```
Check all articles in Computer Tech/AI/OpenCode/ for broken
wiki links and ensure cross-references are accurate.
```
Execution:
```
├── Sub-Agent: obsidian_list_files_in_dir("OpenCode/")
├── Sub-Agent: For each file:
│   ├── obsidian_get_file_contents(file)
│   ├── Extract wiki links
│   └── Validate target files exist
├── Main Agent: Report broken links
└── Main Agent: Suggest cross-references to add
```
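The per-file link check the sub-agent performs can be sketched locally. This is a minimal illustration, not Claude's actual implementation; the regex and the sample note set are assumptions:

```python
import re

def find_broken_wikilinks(content: str, existing_notes: set) -> list:
    """Return wiki-link targets in `content` that have no matching note.

    Handles [[Target]], [[Target|alias]], and [[Target#Section]] forms.
    """
    targets = re.findall(r"\[\[([^\]|#]+)", content)
    return [t.strip() for t in targets if t.strip() not in existing_notes]

# Hypothetical vault contents for demonstration
notes = {"AGENTS", "OpenCode Zen vs Anthropic Direct - Pricing Comparison"}
body = "See [[AGENTS]] and [[Missing Article|this one]]."
print(find_broken_wikilinks(body, notes))  # ['Missing Article']
```

Running this over every file returned by `obsidian_list_files_in_dir()` gives the broken-link report the main agent summarizes.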
Workflow 3: Multi-Source Research Compilation
Use case: Create a new comprehensive comparison article
Claude Code 2.0 approach:
Prompt:
```
Research and create an article comparing Claude Code, Cursor,
and GitHub Copilot. Include pricing, features, and ideal use cases.
```
Execution:
```
├── Sub-Agent 1: Research Claude Code
│   ├── WebSearch("Claude Code features 2025")
│   ├── WebFetch(docs.claude.com)
│   └── Summarize findings
├── Sub-Agent 2: Research Cursor
│   ├── WebSearch("Cursor IDE pricing features")
│   ├── WebFetch(cursor.sh)
│   └── Summarize findings
├── Sub-Agent 3: Research GitHub Copilot
│   ├── WebSearch("GitHub Copilot vs Claude Code")
│   ├── WebFetch(github.com/features/copilot)
│   └── Summarize findings
├── Main Agent: Read your article patterns (from AGENTS.md)
├── Main Agent: Synthesize research into comparison tables
├── Main Agent: Generate article with proper frontmatter
└── Main Agent: Suggest location in vault structure
```
Output: Publication-ready article with citations
Workflow 4: Academic Research Integration
Use case: Research AI coding tools from academic perspective
Claude Code 2.0 approach:
Prompt:
```
Find recent academic papers on AI-assisted coding tools,
summarize key findings, and create a literature review section.
```
Execution:
```
├── Sub-Agent 1: search_semantic("AI coding assistants evaluation")
│   └── Returns: Recent papers from Semantic Scholar
├── Sub-Agent 2: search_arxiv("large language models code generation")
│   └── Returns: Recent arXiv preprints
├── For each relevant paper:
│   ├── read_semantic_paper(paper_id) or read_arxiv_paper(paper_id)
│   ├── Extract key findings
│   └── Note methodologies
├── Main Agent: Organize by theme
├── Main Agent: Generate literature review
└── Main Agent: Include proper citations
```
Workflow 5: Obsidian Vault Maintenance
Use case: Maintain your midimaze knowledge base automatically
Claude Code 2.0 approach:
Prompt:
```
Check all articles in Computer Tech/AI/ for outdated 'updated'
timestamps (>6 months old), identify topics that may need
refreshing, and create a priority list.
```
Execution:
```
├── Sub-Agent: obsidian_complex_search({
│     "and": [
│       {"glob": ["Computer Tech/AI/**/*.md", {"var": "path"}]},
│       {"<": [{"var": "updated"}, "2025-05-10"]}
│     ]
│   })
├── For each outdated article:
│   ├── obsidian_get_file_contents(article)
│   ├── WebSearch(article topic + "2025")
│   ├── Compare article content with current info
│   └── Flag if significantly outdated
├── Main Agent: Rank by staleness + importance
└── Main Agent: Create prioritized update list
```
Output: "5 articles need updating. Priority: [list with reasons]"
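The staleness test driving this workflow reduces to a date comparison. A hedged sketch, assuming ISO-formatted `updated` values in frontmatter; the 180-day default mirrors the 6-month threshold above:

```python
from datetime import date, timedelta

def needs_refresh(updated_iso: str, today: date, max_age_days: int = 180) -> bool:
    """True when the frontmatter `updated` value is older than max_age_days."""
    updated = date.fromisoformat(updated_iso[:10])  # keep just the date part
    return (today - updated) > timedelta(days=max_age_days)

print(needs_refresh("2025-01-02T10:00:00-08:00", date(2025, 11, 10)))  # True
print(needs_refresh("2025-09-01", date(2025, 11, 10)))                 # False
```

Ranking by how far past the threshold each article is gives the "staleness" half of the priority score; importance still needs human (or agent) judgment.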
Refactoring Ideas for Your Existing Articles
Idea 1: Make Articles Self-Updating
Add automation sections to your articles:
```markdown
## Automated Updates (For AI Agents)

**Update frequency:** Monthly
**Update command:** /update with web research on [Anthropic pricing, OpenCode Zen pricing]

**What to update:**
- Model pricing tables (Section: "Claude Model Pricing")
- Feature comparison (Section: "What Zen Adds Beyond Claude")
- Cost examples (Section: "Real-World Cost Examples")

**Validation steps:**
1. Verify prices match official sources
2. Check that the math in the examples is correct
3. Update "Last verified" date at bottom
```
Benefit: Future AI sessions know exactly how to update the article
Idea 2: Create a "Research Agent Prompt Library"
New file: Computer Tech/AI/OpenCode/Research Agent Prompts.md
```markdown
# Research Agent Prompts for OpenCode Documentation

## Pricing Update Prompt

Update OpenCode pricing article:
- Use sub-agent to fetch latest Anthropic pricing
- Use sub-agent to fetch latest OpenCode Zen pricing
- Compare with current article data
- Update only changed sections
- Update frontmatter timestamp
- Commit with message "docs: update OpenCode pricing [date]"

## Competitive Analysis Prompt

Research [competitor] and compare with OpenCode:
- Sub-agent: Research competitor features
- Sub-agent: Research competitor pricing
- Sub-agent: Search user reviews/discussions
- Synthesize into comparison section
- Add to relevant article or create new article
```
Benefit: Reusable prompts for common documentation tasks
Idea 3: Add MCP Tool References to Articles
In your existing articles, add:
```markdown
## For AI Agents: Tools Available for This Topic

**Web research:**
- `brave_web_search()` - Find latest pricing info
- `WebFetch()` - Extract structured data from official pages

**Academic research:**
- `search_semantic()` - Find academic papers on AI coding
- `search_arxiv()` - Preprints on LLM code generation

**Documentation maintenance:**
- `obsidian_get_file_contents()` - Read this article
- `obsidian_patch_content()` - Update specific sections
- `obsidian_complex_search()` - Find related articles

**Update workflow:**
1. Run web research tools to get latest data
2. Read current article structure
3. Patch only changed sections
4. Update frontmatter
```
Benefit: Future AI agents understand which tools to use for updates
Idea 4: Create "Living Documents" with Validation
Add to article frontmatter:
```yaml
---
created: 2025-11-08T18:41:09-08:00
updated: 2025-11-10T20:51:41-08:00
last_validated: 2025-11-10
validation_frequency: monthly
validation_sources:
  - https://claude.com/pricing
  - https://opencode.ai/docs/zen/#pricing
auto_update_enabled: true
---
```
Then create a validation script:
```bash
#!/bin/bash
# Check which articles need validation
VAULT="$HOME/Code/github.com/theslyprofessor/midimaze"

# Find articles with auto_update_enabled: true
# and last_validated older than validation_frequency
grep -rl --include='*.md' 'auto_update_enabled: true' "$VAULT"

# Use Claude Code to run validation for each article
```
Advanced: Creating Custom MCP Servers
Use Case: Custom Search for Your Vault
Problem: You want semantic search across your midimaze articles, not just keyword matching
Solution: Create a custom MCP server that provides vector search
Conceptual architecture:
```
Custom MCP Server: "midimaze-search"
├── On startup: Vectorize all articles in midimaze
├── Provide tool: semantic_search_vault(query, filters)
├── Return: Ranked articles by semantic similarity
└── Claude uses results to generate answers
```
Example usage:
```
You: "Find articles related to AI coding workflows"

Claude calls: semantic_search_vault("AI coding workflows",
                                    filters={"path": "Computer Tech/AI/*"})

Returns:
- Context Strategies for AI Coding Agents (0.89 similarity)
- Claude Code 2.0 - Advanced Workflows (0.87 similarity)
- AI Agents comparison (0.82 similarity)

Claude: "Here are 3 relevant articles I found..."
```
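A production midimaze-search server would embed articles with a vector model, but the ranking logic can be sketched with plain bag-of-words cosine similarity. Everything here (function names, sample vault) is illustrative, not the real tool:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def semantic_search_vault(query: str, articles: dict, top_k: int = 3) -> list:
    """Rank vault articles by similarity to the query (keyword stand-in for embeddings)."""
    qv = Counter(query.lower().split())
    scored = sorted(
        ((cosine(qv, Counter(text.lower().split())), title)
         for title, text in articles.items()),
        reverse=True,
    )
    return [(title, round(score, 2)) for score, title in scored[:top_k] if score > 0]

# Toy stand-in for vectorized article contents
vault = {
    "Context Strategies for AI Coding Agents": "context strategies for ai coding agents",
    "Claude Code 2.0 - Advanced Workflows": "claude code workflows sub agents mcp",
}
print(semantic_search_vault("ai coding workflows", vault))
```

Swapping `Counter`-based vectors for real embeddings (and wrapping the function in an MCP tool handler) turns this sketch into the server described above.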
Use Case: Automated Citation Management
Problem: You reference external sources but don't have structured citation tracking
Solution: Custom MCP server for citation management
Features:
```
Tool: add_citation(article, url, note)
Tool: get_citations(article)
Tool: validate_citations(article) → Check if URLs still work
Tool: format_bibliography(article, style="APA")
```
Integration with articles:
```markdown
## Sources

<!-- CITATIONS_START -->
[1] Anthropic API Pricing. Retrieved 2025-11-10. https://claude.com/pricing
[2] OpenCode Zen Documentation. Retrieved 2025-11-10. https://opencode.ai/docs/zen/
<!-- CITATIONS_END -->
```
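Assuming the marker-delimited format just shown, a `get_citations()` implementation could be as small as this sketch; the regex and record shape are assumptions, not a published API:

```python
import re

CITATION_RE = re.compile(r"\[(\d+)\]\s+(.+?)\.\s+Retrieved\s+(\S+?)\.\s+(https?://\S+)")

def get_citations(markdown: str) -> list:
    """Parse the CITATIONS block into structured records."""
    block = re.search(r"<!-- CITATIONS_START -->(.*?)<!-- CITATIONS_END -->",
                      markdown, re.S)
    if not block:
        return []
    return [{"id": int(n), "title": t, "retrieved": d, "url": u}
            for n, t, d, u in CITATION_RE.findall(block.group(1))]

section = """## Sources
<!-- CITATIONS_START -->
[1] Anthropic API Pricing. Retrieved 2025-11-10. https://claude.com/pricing
<!-- CITATIONS_END -->"""
print(get_citations(section))
```

`validate_citations()` would then just issue an HTTP request per parsed `url` and flag non-200 responses.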
Auto-validation workflow:
```
Monthly cron job:
→ Run validate_citations() on all articles
→ Report broken links
→ Claude updates articles with working alternatives
```
Implementing These Ideas: Step-by-Step
Phase 1: Enhance Existing Articles (Low Effort, High Value)
Week 1: Add Agent Instructions
- Add "For AI Agents" sections to your 2 OpenCode articles
- Document update workflows in articles
- Add tool references (which MCP tools to use)
Week 2: Create Research Prompt Library
- Create `Research Agent Prompts.md`
- Document 5-10 reusable research workflows
- Test prompts with Claude Code 2.0
Week 3: Add Validation Metadata
- Enhance frontmatter with validation fields
- Document validation sources
- Set update frequencies
Phase 2: Build Automation Workflows (Medium Effort)
Week 1: Create Update Scripts
- Script: Check for outdated articles
- Script: Run validation on specific articles
- Script: Backup before automated updates
Week 2: Implement Sub-Agent Workflows
- Test pricing update workflow with sub-agents
- Test cross-reference validation workflow
- Test academic research workflow
Week 3: Document Workflows
- Create workflow guide article
- Add examples of successful automations
- Share lessons learned
Phase 3: Custom MCP Servers (Advanced, Optional)
Month 1: Semantic Search MCP
- Design vector search architecture
- Implement basic MCP server
- Test with subset of articles
- Deploy to Docker MCP Gateway
Month 2: Citation Management MCP
- Design citation data structure
- Implement citation tools
- Integrate with existing articles
- Set up auto-validation
Practical Examples for Immediate Use
Example 1: Update Pricing Article Right Now
Prompt to use:
```
I need you to update "OpenCode Zen vs Anthropic Direct - Pricing Comparison.md"
with the latest pricing data. Please:

1. Use sub-agents to fetch current pricing from:
   - Anthropic's pricing page (claude.com/pricing)
   - OpenCode Zen docs (opencode.ai/docs/zen/#pricing)
2. Compare the fetched data with what's in the article
3. Update ONLY the sections that have changed
4. Update the frontmatter timestamp
5. Show me a summary of what changed before committing
```
Example 2: Research New Topic with Multiple Sources
Prompt to use:
```
Research "Claude extended thinking mode" and create a new article. Use:

1. Sub-agent: Web search for official documentation
2. Sub-agent: Search academic papers on reasoning chains
3. Sub-agent: Find user discussions/reviews

Synthesize findings into a new article following my article patterns
(check AGENTS.md for structure). Place in: Computer Tech/AI/AI Models/

Include:
- Feature explanation
- How it differs from standard mode
- Pricing implications
- When to use it
- Comparison with competitors (OpenAI o1, etc.)
```
Example 3: Maintain Vault Health
Prompt to use:
```
Audit all articles in Computer Tech/AI/ and create a maintenance report:

1. Find articles with 'updated' >6 months old
2. For each outdated article:
   - Check if topic has evolved (quick web search)
   - Rate urgency of update (1-5 scale)
   - Note specific sections likely outdated
3. Check for broken wiki links
4. Identify articles that should reference each other but don't

Generate a prioritized maintenance TODO list.
```
Common Patterns & Best Practices
Pattern 1: "Research β Synthesize β Document"
When to use: Creating new articles from scattered information
Structure:
Step 1: Parallel sub-agents research different sources
Step 2: Main agent synthesizes findings
Step 3: Main agent structures per your article patterns
Step 4: Main agent generates final document
Step 5: You review and approve
Example domains:
- Competitive analysis articles
- Technology comparisons
- Literature reviews
- Tool evaluations
Pattern 2: "Validate β Update β Commit"
When to use: Keeping existing articles current
Structure:
Step 1: Check last_validated date
Step 2: Sub-agents fetch current data
Step 3: Main agent compares with article
Step 4: Main agent updates only changed sections
Step 5: Update frontmatter
Step 6: Commit with descriptive message
Frequency:
- Pricing articles: Monthly
- Tool feature articles: Quarterly
- Conceptual articles: Yearly
Pattern 3: "Query β Fetch β Analyze β Report"
When to use: Ad-hoc research questions
Structure:
Step 1: You ask a research question
Step 2: Claude identifies relevant sources
Step 3: Sub-agents fetch from multiple sources
Step 4: Main agent analyzes findings
Step 5: Main agent reports back (not creating article)
Example:
```
You: "What's the current state of AI code editors?"

Claude:
├── Sub-agent: Search recent articles
├── Sub-agent: Check your existing articles
├── Sub-agent: Fetch competitor websites
└── Main: Synthesize into verbal report (not article)
```
Measuring Success
Metrics to Track
Efficiency gains:
- Time to update an article: Before vs After
- Number of sources researched per article
- Accuracy of automated updates
Quality improvements:
- Freshness of articles (avg days since last update)
- Completeness (# of broken links, missing cross-refs)
- Comprehensiveness (sources cited per article)
Automation adoption:
- % of updates done via automation
- % of research done via sub-agents
- Custom MCP tools created and used
Success Looks Like
Before (manual workflow):
- Update pricing article: 30-60 minutes
- Research new tool: 2-3 hours
- Validate all links: Manual, rarely done
- Cross-reference checking: Manual, time-consuming
After (automated workflow):
- Update pricing article: 5 minutes (mostly review)
- Research new tool: 30 minutes (mostly synthesis)
- Validate all links: Automated monthly
- Cross-reference checking: Automated on-demand
Troubleshooting Common Issues
Issue 1: Sub-Agent Gets Stuck
Symptoms: Sub-agent doesn't return results, session hangs
Solutions:
- Set explicit timeouts in prompts
- Simplify sub-agent tasks (break into smaller steps)
- Monitor sub-agent progress (Claude should report status)
Prevention:
```
# Instead of:
"Research everything about Claude Code"

# Do:
"Research Claude Code pricing only, limit to official sources"
```
Issue 2: Conflicting Information from Sources
Symptoms: Sub-agents return different data for same topic
Solutions:
- Prioritize official sources (anthropic.com > blog posts)
- Use timestamps (newer info preferred)
- Manual review for critical updates
Mitigation in prompts:
```
If sub-agents return conflicting data, prioritize:
1. Official documentation
2. Recent dates (< 30 days)
3. Flag conflicts for my review
```
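When you post-process sub-agent results yourself, the same prioritization rule can be encoded directly. A sketch, with an assumed domain list and record shape:

```python
OFFICIAL_DOMAINS = ("anthropic.com", "claude.com", "opencode.ai")  # illustrative list

def pick_source(candidates: list) -> dict:
    """Resolve conflicts: prefer official domains, then the most recent date."""
    def rank(c):
        official = any(d in c["url"] for d in OFFICIAL_DOMAINS)
        return (official, c["date"])  # ISO dates sort correctly as strings
    return max(candidates, key=rank)

# Two conflicting pricing results (hypothetical data)
results = [
    {"url": "https://someblog.dev/pricing", "date": "2025-11-09", "price": "$4/MTok"},
    {"url": "https://claude.com/pricing",   "date": "2025-10-01", "price": "$3/MTok"},
]
print(pick_source(results)["url"])  # https://claude.com/pricing
```

The official-domain result wins even though the blog post is newer, matching the priority order in the prompt.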
Issue 3: MCP Tool Failures
Symptoms: Tool calls fail, return errors
Solutions:
- Check MCP server status: `docker ps` (for Docker MCP Gateway)
- Verify tool parameters match schema
- Fallback to alternative tools (WebFetch instead of brave_web_search)
Debugging:
```bash
# Check Docker MCP Gateway logs
docker logs mcp-docker

# Test specific tool
opencode  # then try the tool manually
```
Next Steps
Immediate Actions (Today)
1. Test sub-agent workflow:
   - Try Example 1 (update pricing article)
   - Observe how sub-agents work
   - Note time savings
2. Add agent instructions to one article:
   - Pick your most-updated article
   - Add "For AI Agents" section
   - Document update workflow
3. Bookmark this guide:
   - Reference when planning documentation work
   - Update with your own learnings
This Week
1. Enhance both OpenCode articles:
   - Add agent instructions
   - Add tool references
   - Add validation metadata
2. Create Research Prompt Library:
   - Document your common research patterns
   - Test 2-3 prompts
   - Refine based on results
3. Set up article validation:
   - Add validation fields to frontmatter
   - Document validation sources
   - Run first validation pass
This Month
1. Build automation habits:
   - Use sub-agents for all research tasks
   - Let Claude handle multi-source synthesis
   - Review and improve prompts
2. Expand to other article topics:
   - Apply patterns to other AI articles
   - Apply to other Computer Tech topics
   - Share patterns in AGENTS.md
3. Explore custom MCP servers:
   - Research MCP protocol
   - Design first custom tool
   - Implement prototype
Related Resources
Internal (Your Vault)
- Context Strategies for AI Coding Agents - How to give Claude effective context
- AI Agents comparison - Understanding different AI coding tools
- OpenCode - Managing and Searching Conversations - Session management
- OpenCode Zen vs Anthropic Direct - Pricing Comparison - Pricing details
External Documentation
- Claude Code 2.0 Documentation
- MCP (Model Context Protocol) Spec
- Docker MCP Gateway
- OpenCode CLI Reference
Conclusion
Claude Code 2.0's sub-agents and MCP capabilities transform documentation work from manual labor to orchestrated automation. Start small (add agent instructions to articles), build up (create research workflows), and eventually create custom tools that make your knowledge base maintenance nearly autonomous.
The key insight: Your articles can become self-documenting and self-updating. With the right structure and agent instructions, future AI sessions can maintain your knowledge base with minimal manual intervention.
Your next prompt: Try Example 1 above and experience sub-agents in action!