Use this skill when creating, improving, or troubleshooting Claude Code subagents. It provides expert guidance on agent design, system prompt engineering, tool access and configuration, model selection, and delegation patterns for building specialized AI assistants.
## When to Use This Skill

Activate this skill when:

- User asks to create a new subagent
- User wants to improve an existing agent
- User needs help with agent configuration or tool access
- User is troubleshooting agent invocation issues
- User wants to understand when to use agents vs skills vs commands
- User asks about agent chaining or delegation patterns
## Quick Reference

### Agent File Structure

```markdown
---
name: agent-name
description: When and why to use this agent
tools: Read, Write, Bash(git *)
model: sonnet
---

Your detailed system prompt defining:

- Agent role and expertise
- Problem-solving approach
- Output format expectations
- Specific constraints or requirements
```

### File Locations

- Project agents (shared with team, highest priority): `.claude/agents/my-agent.md`
- Personal agents (individual use, lower priority): `~/.claude/agents/my-agent.md`
- Plugin agents (from installed plugins): `<plugin-dir>/agents/agent-name.md`
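Since agents are plain markdown files, you can sanity-check the layout from the shell. The snippet below uses a throwaway directory standing in for a project root (a demo-only assumption, with a hypothetical `demo-agent`):

```shell
# Create a throwaway "project" root (demo only) and add one agent file.
demo=$(mktemp -d)
mkdir -p "$demo/.claude/agents"
cat > "$demo/.claude/agents/demo-agent.md" <<'EOF'
---
name: demo-agent
description: Minimal placeholder agent illustrating the file layout
---
You are a placeholder agent.
EOF

# Agents are plain markdown files, so a directory listing shows what exists.
agents=$(ls "$demo/.claude/agents")
echo "$agents"
rm -rf "$demo"
```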
## Creating Effective Subagents

### Step 1: Identify the Use Case

Good candidates for subagents:

- Complex, multi-step workflows
- Specialized expertise (debugging, security review, data analysis)
- Tasks requiring focused context
- Repeatable processes with specific quality bars
- Code review and analysis workflows

NOT good for subagents (use Skills or Commands instead):

- Simple one-off prompts (use Slash Commands)
- Context-aware automatic activation (use Skills)
- Quick transformations or formatting
### Step 2: Design Agent Scope

Best practices:

- **Single responsibility** - Each agent does ONE thing exceptionally well
- **Clear boundaries** - Define what's in/out of scope
- **Specific expertise** - Don't create "general helper" agents
- **Measurable outcomes** - Agent should produce concrete deliverables

Examples:

- ✅ `code-reviewer` - Reviews code changes for quality, security, and best practices
- ✅ `debugger` - Root cause analysis and minimal fixes for errors
- ✅ `data-scientist` - SQL query optimization and data analysis
- ❌ `helper` - Too vague, no clear scope
- ❌ `everything` - Defeats the purpose of specialization
### Step 3: Write the System Prompt

The system prompt is the most critical part of your agent. It defines the agent's personality, capabilities, and approach.

Structure for effective prompts:
```markdown
---
name: code-reviewer
description: Analyzes code changes for quality, security, and maintainability
tools: Read, Grep, Bash(git *)
model: sonnet
---

# Code Reviewer Agent

You are an expert code reviewer specializing in [language/framework].

## Your Role

Review code changes thoroughly for:

1. Code quality and readability
2. Security vulnerabilities
3. Performance issues
4. Best practices adherence
5. Test coverage

## Review Process

1. **Read the changes**
   - Get the recent git diff or specified files
   - Understand the context and purpose

2. **Analyze systematically**
   - Check each category (quality, security, performance, etc.)
   - Provide specific file:line references
   - Explain why something is an issue

3. **Provide actionable feedback**

   Format:

   ### 🔴 Critical Issues
   - [Issue] (file.ts:42) - [Explanation] - [Fix]

   ### 🟡 Suggestions
   - [Improvement] (file.ts:67) - [Rationale] - [Recommendation]

   ### ✅ Good Practices
   - [What was done well]

4. **Summarize**
   - Overall assessment
   - Top 3 priorities
   - Approval status (approve, approve with comments, request changes)

## Quality Standards

**Code must:**

- [ ] Follow language/framework conventions
- [ ] Have proper error handling
- [ ] Include necessary tests
- [ ] Not expose secrets or sensitive data
- [ ] Use appropriate abstractions (not over-engineered)

**Flag immediately:**

- SQL injection risks
- XSS vulnerabilities
- Hardcoded credentials
- Memory leaks
- O(n²) or worse algorithms in hot paths

## Output Format

Always provide:

1. Summary (1-2 sentences)
2. Categorized findings with file:line refs
3. Approval decision
4. Top 3 action items

Be thorough but concise. Focus on what matters.
```
### Step 4: Configure Tool Access

Available tools:

- **Read** - Read files
- **Write** - Create new files
- **Edit** - Modify existing files
- **Bash** - Execute shell commands
- **Grep** - Search file contents
- **Glob** - Find files by pattern
- **WebFetch** - Fetch web content
- **WebSearch** - Search the web
- Plus any connected MCP tools

Tool configuration patterns:

Inherit all tools (omit the tools field):

```yaml
---
name: full-access-agent
description: Agent needs access to everything
# No tools field = inherits all
---
```

Specific tools only:

```yaml
---
name: read-only-reviewer
description: Reviews code without making changes
tools: Read, Grep, Bash(git *)
---
```
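Between those two extremes, scope write access to what the agent actually produces. As a sketch, a hypothetical `docs-writer` agent needs to read code and write markdown, but never needs Edit, Bash, or web access:

```markdown
---
name: docs-writer
description: Generates API documentation and README files from source code
tools: Read, Grep, Write
model: sonnet
---
```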
### Step 5: Write the Description

The description field determines when Claude invokes your agent automatically.

Best practices:

- Start with "Use when...", "Analyzes...", or "Helps with..."
- Be specific about the agent's domain
- Mention key capabilities
- Include when NOT to use (if helpful)

Examples:

✅ Good descriptions:

```yaml
description: Analyzes code changes for quality, security, and maintainability issues
description: Use when debugging errors - performs root cause analysis and suggests minimal fixes
description: Helps with SQL query optimization and data analysis tasks
```

❌ Poor descriptions:

```yaml
description: A helpful agent  # Too vague
description: Does code stuff  # Not specific enough
description: Reviews, debugs, refactors, tests, documents, and deploys code  # Too broad
```
## Agent Patterns

### Pattern 1: Code Reviewer

Purpose: Systematic code review with quality gates (the full agent file is shown in Step 3 above).

### Pattern 2: Debugger

Purpose: Root cause analysis and minimal fixes for bugs and errors

```markdown
---
name: debugger
description: Specializes in root cause analysis and minimal fixes for bugs and errors
tools: Read, Edit, Bash, Grep
model: sonnet
---

# Debugger Agent

Expert at finding and fixing bugs through systematic analysis.

## Debugging Process

### 1. Capture Context

- What error/unexpected behavior occurred?
- Error messages and stack traces
- Steps to reproduce
- Expected vs actual behavior

### 2. Isolate the Problem

- Read relevant files
- Trace the execution path
- Identify the failure point
- Determine the root cause (not just symptoms)

### 3. Minimal Fix

- Fix the root cause, not symptoms
- Make the smallest change that works
- Don't refactor unrelated code
- Preserve existing behavior

### 4. Verify

- How to test the fix
- Edge cases to check
- Potential side effects

## Anti-Patterns to Avoid

- ❌ Fixing symptoms instead of the root cause
- ❌ Large refactoring during debugging
- ❌ Adding features while fixing bugs
- ❌ Changing working code unnecessarily

## Output Format

**Root Cause:** [Clear explanation]
**Location:** file.ts:line
**Fix:** [Minimal code change]
**Verification:** [How to test]
**Side Effects:** [Potential impacts]
```
### Pattern 3: Data Scientist

Purpose: SQL optimization and data analysis

````markdown
---
name: data-scientist
description: Optimizes SQL queries and performs data analysis with cost-awareness
tools: Read, Write, Bash, WebSearch
model: sonnet
---

# Data Scientist Agent

Expert in SQL optimization and data analysis.

## SQL Query Guidelines

### Performance

- Always include WHERE clauses with indexed columns
- Use appropriate JOINs (avoid cartesian products)
- Limit result sets with LIMIT
- Use EXPLAIN to verify query plans

### Cost Awareness

- Estimate query cost before running
- Prefer indexed lookups over full table scans
- Use materialized views for expensive aggregations
- Sample large datasets when appropriate

### Best Practices

- Use CTEs for readability
- Parameterize queries (prevent SQL injection)
- Document complex queries
- Format for readability

## Analysis Process

1. **Understand the question**
   - What insights are needed?
   - What's the business context?

2. **Design the query**
   - Choose appropriate tables
   - Apply necessary filters
   - Optimize for performance

3. **Run and validate**
   - Check that results make sense
   - Verify data quality
   - Note any anomalies

4. **Present findings**
   - Summary (key insights)
   - Visualizations (if helpful)
   - Recommendations
   - Query for reproducibility

## Output Template

**Question:** [What we're analyzing]

**Query:**

```sql
-- [Comment explaining approach]
SELECT ...
FROM ...
WHERE ...
```

**Results:** [Summary]

**Insights:**

- [Key finding 1]
- [Key finding 2]
- [Key finding 3]

**Recommendations:** [Data-driven suggestions]

**Cost Estimate:** [Expected query cost]
````
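The parameterization and indexed-lookup guidelines above can be sketched with Python's built-in `sqlite3` module; the `orders` table and `idx_orders_user` index are hypothetical stand-ins for a real schema:

```python
import sqlite3

# In-memory database with a hypothetical orders table standing in for a real schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL)")
conn.execute("CREATE INDEX idx_orders_user ON orders (user_id)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 10, 9.99), (2, 10, 25.00), (3, 11, 5.00)])

# Parameterized query: the placeholder prevents SQL injection, the WHERE clause
# hits the index on user_id, and LIMIT bounds the result set.
user_id = 10
rows = conn.execute(
    "SELECT id, total FROM orders WHERE user_id = ? LIMIT 100", (user_id,)
).fetchall()

# EXPLAIN QUERY PLAN verifies an indexed lookup rather than a full table scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id, total FROM orders WHERE user_id = ?", (user_id,)
).fetchall()
print(rows)        # [(1, 9.99), (2, 25.0)]
print(plan[0][3])  # the plan detail names idx_orders_user
```

The same habit transfers to any warehouse: check the plan before running the query at scale.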
### Pattern 4: Test Generator

Purpose: Generate comprehensive test suites

````markdown
---
name: test-generator
description: Generates comprehensive test cases covering happy path, edge cases, and errors
tools: Read, Write
model: sonnet
---

# Test Generator Agent

Generates thorough test suites for code.

## Test Coverage Strategy

### 1. Happy Path (40%)

- Normal inputs
- Expected outputs
- Standard workflows
- Common use cases

### 2. Edge Cases (30%)

- Empty inputs
- Null/undefined
- Boundary values
- Maximum values
- Minimum values
- Unicode/special characters

### 3. Error Cases (20%)

- Invalid inputs
- Type mismatches
- Missing required fields
- Network failures
- Permission errors

### 4. Integration (10%)

- Component interaction
- API contracts
- Database operations
- External dependencies

## Test Structure

```typescript
describe('[Component/Function]', () => {
  describe('Happy Path', () => {
    it('should [expected behavior]', () => {
      // Arrange
      // Act
      // Assert
    })
  })

  describe('Edge Cases', () => {
    it('should handle empty input', () => {})
    it('should handle null', () => {})
    it('should handle boundary values', () => {})
  })

  describe('Error Cases', () => {
    it('should throw on invalid input', () => {})
    it('should handle network failure', () => {})
  })
})
```

## Test Quality Checklist

- [ ] Descriptive test names ("should..." format)
- [ ] Clear arrange-act-assert structure
- [ ] One assertion per test (generally)
- [ ] No test interdependencies
- [ ] Fast execution (<100ms per test ideally)
- [ ] Easy-to-understand failures

## Output

Generate a complete test file with:

- Imports and setup
- Test suites organized by category
- All test cases with assertions
- Cleanup/teardown if needed
````
## Using Agents

### Automatic Delegation

Claude will automatically invoke agents when:

- The task matches the agent description
- The agent is appropriate for the context
- Delegation is more efficient than the main conversation

Example:

> User: "Can you review my recent code changes?"
> → Claude invokes the code-reviewer agent

### Explicit Invocation

Request specific agents:

- "Use the debugger subagent to find why this test is failing"
- "Have the data-scientist subagent analyze user retention"
- "Ask the code-reviewer to check this PR"

### Agent Chaining

Sequence multiple agents for complex workflows:

> "First use code-analyzer to find performance bottlenecks,
> then use optimizer to fix them,
> finally use test-generator to verify the changes"
## Agents vs Skills vs Commands

### Use Subagents When:

- ✅ Complex multi-step workflows
- ✅ Specialized expertise needed
- ✅ Delegation improves main-context efficiency
- ✅ Repeatable process with quality standards
- ✅ Need a focused context window

### Use Skills When:

- ✅ Context-aware automatic activation
- ✅ Reference documentation and patterns
- ✅ Multiple supporting files needed
- ✅ Team standardization required

### Use Slash Commands When:

- ✅ Simple, focused tasks
- ✅ Frequent manual invocation
- ✅ Prompt fits in one file
- ✅ Personal productivity shortcuts

Decision tree:

```
Need specialized AI behavior?
├─ Yes → Complex workflow?
│   ├─ Yes → Use Subagent
│   └─ No → Simple prompt?
│       ├─ Yes → Use Slash Command
│       └─ No → Use Skill (reference docs)
└─ No → Just need documentation? → Use Skill
```
## Managing Agents

### View Agents

Use the /agents command to:

- List all available agents
- See agent descriptions
- Check tool permissions
- View model configurations

### Create an Agent with Claude

Recommended approach:

"Create a subagent for [purpose] that [capabilities]"

Claude will generate:

- An appropriate name
- A clear description
- A system prompt
- Tool configuration
- Model selection

Then review and customize as needed.

### Edit Agents

1. Open the agent file (.claude/agents/agent-name.md)
2. Modify the frontmatter or system prompt
3. Save the file
4. Changes apply immediately (no restart needed)

### Test Agents

Verify the agent works as expected:

"Use the [agent-name] subagent to [test task]"

Check that the agent:

- Activates correctly
- Has the necessary tool access
- Produces the expected output format
- Handles edge cases
## Best Practices

### 1. Single Responsibility

Each agent should do ONE thing exceptionally well.

❌ Anti-pattern:

```yaml
name: code-helper
description: Reviews, debugs, tests, refactors, and documents code
```

✅ Better:

```yaml
name: code-reviewer
description: Reviews code for quality, security, and best practices
```

```yaml
name: debugger
description: Root cause analysis and minimal fixes for bugs
```

### 2. Detailed System Prompts

Include:

- Role definition
- Step-by-step process
- Output format
- Quality standards
- Examples
- Anti-patterns to avoid

### 3. Minimum Tool Access

Grant only the necessary tools:

❌ Anti-pattern:

```yaml
tools: Read, Write, Edit, Bash, Grep, Glob, WebSearch, WebFetch
# Agent only needs Read and Grep
```

✅ Better - match tools to the agent's purpose:

```
Purpose: API documentation and README generation
Tools: Read, Write, Bash(git *)
Model: sonnet
```
## Related Documentation

- EXAMPLES.md - Complete agent implementations
- PATTERNS.md - Reusable agent patterns
- TOOLS.md - Tool configuration reference
## Checklist for New Agents

Before finalizing a subagent:

- [ ] Name is clear, unique, and lowercase with hyphens
- [ ] Description specifically explains when to use the agent
- [ ] System prompt is detailed, with a step-by-step process
- [ ] Output format is explicitly defined
- [ ] Tool access is minimal and specific
- [ ] Model is appropriate for the task complexity
- [ ] Agent has been tested with real tasks
- [ ] Edge cases are considered in the prompt
- [ ] File is in the correct directory (.claude/agents/)

Remember: Great subagents are specialized experts, not generalists. Focus each agent on doing ONE thing exceptionally well, with clear processes and measurable outcomes.
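Several of the checklist items are mechanical enough to script. The sketch below is a hypothetical validator (not part of Claude Code; the function name and error messages are this document's own) that checks the name format, description, and prompt body of an agent file:

```python
import re

def validate_agent(text: str) -> list[str]:
    """Return a list of checklist violations for an agent .md file (hypothetical helper)."""
    problems = []
    m = re.match(r"^---\n(.*?)\n---\n(.*)$", text, re.DOTALL)
    if not m:
        return ["missing YAML frontmatter delimited by ---"]
    front, body = m.groups()
    # Naive key: value parsing of the frontmatter block.
    fields = {k.strip(): v.strip() for k, v in
              (line.split(":", 1) for line in front.splitlines() if ":" in line)}
    # Checklist: name is lowercase with hyphens.
    if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", fields.get("name", "")):
        problems.append("name must be lowercase with hyphens")
    # Checklist: description specifically explains when to use the agent.
    if len(fields.get("description", "")) < 20:
        problems.append("description too short to guide delegation")
    # Checklist: system prompt is detailed (at minimum, non-empty).
    if not body.strip():
        problems.append("system prompt body is empty")
    return problems

sample = """---
name: code-reviewer
description: Analyzes code changes for quality, security, and maintainability
tools: Read, Grep
---
# Code Reviewer Agent
You are an expert code reviewer.
"""
print(validate_agent(sample))  # []
```

A check like this could run in CI on `.claude/agents/` so a malformed agent file never reaches the team.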