/develop
Comprehensive 7-phase workflow for developing new features with quality enforcement
Command Usage
Invoke this command in Claude Code:
/develop
Feature Development Workflow
A comprehensive, structured 7-phase workflow for developing new features with quality enforcement and Bushido principles.
Overview
This command guides you through a systematic feature development process:
1. Discover - Understand requirements and context
2. Explore - Analyze existing codebase patterns
3. Clarify - Resolve ambiguities with user input
4. Design - Create architecture with specialized agents
5. Implement - Build with TDD and quality practices
6. Review - Multi-agent quality review with confidence scoring
7. Validate - Run all verification hooks and summarize
Phase 1: Discover
Understand requirements and gather context
Objective: Establish clear understanding of what needs to be built and why.
- Review the feature request:
- What is the user-facing goal?
- What problem does this solve?
- What are the acceptance criteria?
- Identify impacted areas:
- Which parts of the codebase will change?
- What existing features might be affected?
- Are there related issues or PRs?
- Check for similar features:
```bash
# Search for similar implementations
grep -r "similar_feature_name" .
```
- Review project documentation:
- Check CLAUDE.md, CONTRIBUTING.md for standards
- Review architecture docs if available
- Identify any constraints or requirements
Output: Clear problem statement and high-level approach.
Phase 2: Explore (Parallel Agent Execution)
Analyze codebase with specialized agents
Objective: Understand existing patterns and identify integration points.
Launch multiple Explore agents in PARALLEL (single message with multiple Task calls):
- Code Explorer: Map existing features
- Find entry points and call chains
- Identify data flow and transformations
- Document current architecture
- Pattern Analyzer: Identify conventions
- How are similar features implemented?
- What testing patterns are used?
- What naming conventions exist?
- Dependency Mapper: Understand relationships
- What modules will be affected?
- What are the integration points?
- Are there circular dependencies to avoid?
Consolidation: Synthesize findings from all agents into a cohesive understanding.
Output: Comprehensive map of existing codebase patterns and integration points.
Phase 3: Clarify (Human Decision Point)
Resolve ambiguities before implementation
Objective: Get user input on unclear requirements and design choices.
Use AskUserQuestion tool to resolve:
- Architecture decisions:
- Which approach should we take? (if multiple valid options)
- What are the trade-offs? (performance vs. simplicity)
- Scope clarifications:
- Should this include X feature?
- What's the priority if time is limited?
- Integration choices:
- Should we extend existing module or create new one?
- How should this integrate with system Y?
IMPORTANT: Do not proceed with assumptions. Get explicit user answers.
Output: Clear, unambiguous requirements with user-approved approach.
Phase 4: Design (Parallel Agent Execution)
Create architecture with specialized DŌ agents
Objective: Design the implementation before coding.
Select appropriate DŌ agent(s) based on feature type:
- Frontend feature? → do-frontend-development:presentation-engineer
- Backend API? → do-backend-development:api-designer
- Database changes? → do-database-engineering:database-designer
- Complex system? → do-architecture:solution-architect
Launch agents in PARALLEL for multi-disciplinary features:
- Frontend + Backend agents simultaneously
- Include do-security-engineering:security-engineer for sensitive features
- Include do-performance-engineering:performance-engineer for high-traffic features
Agent responsibilities:
- Define module structure and file organization
- Specify interfaces and contracts
- Identify testing strategy
- Document key decisions and trade-offs
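One concrete design-phase artifact is an interface contract agreed on before any implementation exists. The sketch below is illustrative only — the feature and every name in it (`Notifier`, `Notification`, `NotificationResult`) are hypothetical, not the output of any particular agent:

```typescript
// Hypothetical design-phase artifact: the contract is the reviewable
// output of Phase 4. All names here are illustrative.
interface Notification {
  recipient: string;
  subject: string;
  body: string;
}

interface NotificationResult {
  delivered: boolean;
  error?: string;
}

// Implementations (SMTP-backed, queue-backed, in-memory for tests)
// come later, in Phase 5; the design phase only fixes the interface.
interface Notifier {
  send(notification: Notification): Promise<NotificationResult>;
}
```

Because the contract is pure types, it can be reviewed and revised cheaply before a single line of behavior is written.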
Consolidation: Review all design proposals, resolve conflicts, select final approach.
Output: Detailed implementation plan with module structure and interfaces.
Phase 5: Implement (TDD with Quality Enforcement)
Build the feature using test-driven development
Objective: Implement the designed solution with quality practices.
Apply TDD cycle (use bushido:test-driven-development skill):
For each component:
1. Write failing test (Red)
2. Implement minimum code to pass (Green)
3. Refactor for quality (Refactor)
4. Repeat
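A minimal sketch of one Red-Green-Refactor pass, using a hypothetical `slugify` helper (not part of this workflow's output); the Red-phase test is shown as a plain assertion after the Green-phase implementation:

```typescript
// Red: the assertion at the bottom is written first and fails.
// Green: this minimal implementation makes it pass.
// Refactor: tidy names and regexes once the test is green.
function slugify(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumeric runs into one dash
    .replace(/^-+|-+$/g, "");    // strip leading/trailing dashes
}

// The test from the Red step, now passing:
if (slugify("  Hello, World!  ") !== "hello-world") {
  throw new Error("slugify failed the Red-phase test");
}
```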
Implementation guidelines:
- ✅ Start with tests, not implementation
- ✅ Follow existing codebase patterns (from Phase 2)
- ✅ Apply SOLID principles (bushido:solid-principles skill)
- ✅ Keep it simple (KISS, YAGNI)
- ✅ Apply Boy Scout Rule - leave code better than found
- ❌ Don't over-engineer
- ❌ Don't skip tests
- ❌ Don't ignore linter/type errors
Integration:
- Integrate incrementally (don't build everything then integrate)
- Test integration points early
- Validate against acceptance criteria continuously
Output: Working implementation with comprehensive tests.
Phase 6: Review (Parallel Multi-Agent Review)
Quality review with confidence-based filtering
Objective: Identify high-confidence issues before final validation.
Launch review agents in PARALLEL (single message with multiple Task calls):
- Code Reviewer (bushido:code-reviewer skill):
- General quality assessment
- Confidence scoring ≥80%
- False positive filtering
- Security Engineer (do-security-engineering:security-engineer):
- Security vulnerability scan
- Auth/authz pattern verification
- Input validation review
- Discipline-Specific Agent:
  - Frontend: do-frontend-development:presentation-engineer (accessibility, UX)
  - Backend: do-backend-development:backend-architect (API design, scalability)
  - etc.
Review consolidation:
- Merge findings from all agents
- De-duplicate issues
- Filter for confidence ≥80%
- Organize by: Critical (≥90%) → Important (≥80%)
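The consolidation steps above can be sketched as a small merge-filter-sort pass. The `Finding` shape and the de-duplication key are illustrative assumptions, not a real agent API:

```typescript
interface Finding {
  id: string;         // stable key used for de-duplication (assumed)
  confidence: number; // 0-100, assigned by the reviewing agent
  message: string;
}

// Merge findings from all agents, drop duplicates by id (keeping the
// highest-confidence copy), filter to confidence >= 80, and order
// so critical findings (>= 90) come first.
function consolidate(...agentFindings: Finding[][]): Finding[] {
  const seen = new Map<string, Finding>();
  for (const finding of agentFindings.flat()) {
    const existing = seen.get(finding.id);
    if (!existing || finding.confidence > existing.confidence) {
      seen.set(finding.id, finding);
    }
  }
  return [...seen.values()]
    .filter((f) => f.confidence >= 80)
    .sort((a, b) => b.confidence - a.confidence);
}
```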
Present findings to user with options:
Found 3 critical and 5 important issues.
Options:
1. Fix all issues now (recommended)
2. Fix critical only, defer important
3. Review findings and decide per-issue
Output: Consolidated review with high-confidence issues only.
Phase 7: Validate & Summarize
Final verification and change summary
Objective: Ensure all quality gates pass and document the change.
Run all validation hooks:
```bash
# All Buki plugins automatically run on Stop
# Verify: tests, linting, type checking, etc.
```
Validation checklist:
- All tests pass
- Linting passes
- Type checking passes
- No security vulnerabilities introduced
- Documentation updated
- No breaking changes (or properly coordinated)
Generate change summary:
- What changed: Files modified and why
- How to test: Steps to verify functionality
- Breaking changes: None, or list with migration guide
- Follow-up tasks: Any deferred work or tech debt
Create TODO list (using TodoWrite tool):
- Document any follow-up tasks
- Track deferred improvements
- Note any tech debt introduced
Output: Ready-to-commit feature with comprehensive documentation.
Usage
Basic usage
/develop
Then describe the feature you want to build.
With feature description
/develop Add user authentication with JWT tokens
Integration with Bushido Virtues
This workflow embodies the seven Bushido virtues:
- 誠 Integrity: TDD ensures code does what it claims
- 礼 Respect: Boy Scout Rule honors existing codebase
- 勇 Courage: Confidence scoring enables honest feedback
- 仁 Compassion: Clear reviews help developers improve
- 忠義 Loyalty: Quality enforcement maintains standards
- 自制 Discipline: Structured phases prevent rushing
- 正義 Justice: Fair reviews based on objective criteria
Best Practices
DO
- ✅ Follow all 7 phases in order
- ✅ Launch agents in parallel when independent
- ✅ Use AskUserQuestion to resolve ambiguities
- ✅ Apply confidence scoring to all reviews
- ✅ Run TDD cycle for all new code
- ✅ Pause for user input at decision points
DON'T
- ❌ Skip phases (especially Explore and Review)
- ❌ Start coding before design (Phases 1-4)
- ❌ Implement without tests
- ❌ Report low-confidence review findings
- ❌ Make architectural decisions without user input
- ❌ Commit without running validation hooks
Example Workflow
User: /develop Add pagination to user list API
Phase 1: Discover
- Feature: Add pagination to GET /api/users
- Acceptance: Support page/limit query params, return total count
- Impact: Backend API, database queries
Phase 2: Explore (parallel agents)
- Found existing pagination in products API
- Pattern: Uses offset/limit with total count in response
- Testing: Integration tests verify pagination logic
Phase 3: Clarify
Q: Should we use cursor-based or offset-based pagination?
A: [User selects offset-based for consistency]
Phase 4: Design
- Agent: do-backend-development:api-designer
- Design: Extend existing UserService with pagination
- Interface: getUsersPaginated(page, limit) -> { users, total }
Phase 5: Implement
- Write test for pagination
- Implement pagination logic
- Test passes ✅
Phase 6: Review (parallel agents)
- Code reviewer: No issues (confidence N/A)
- Security engineer: No issues (confidence N/A)
- Backend architect: No issues (confidence N/A)
Phase 7: Validate
- Tests: ✅ Pass
- Linting: ✅ Pass
- Types: ✅ Pass
- Ready to commit
Summary: Added pagination to user list API
Files: services/user.service.ts, tests/user.service.test.ts
Testing: Run GET /api/users?page=1&limit=10
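The `getUsersPaginated(page, limit) -> { users, total }` interface from the example could be sketched as below. This is an in-memory stand-in under assumed shapes; a real service would push the offset and limit down into the database query:

```typescript
interface User {
  id: number;
  name: string;
}

// In-memory stand-in for the example's UserService method.
// Offset-based pagination, as chosen in the Clarify phase.
function getUsersPaginated(
  allUsers: User[],
  page: number,
  limit: number,
): { users: User[]; total: number } {
  const offset = (page - 1) * limit; // page is 1-based
  return {
    users: allUsers.slice(offset, offset + limit),
    total: allUsers.length, // total count, per the acceptance criteria
  };
}
```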
See Also
- /review - Run multi-agent review only
- /commit - Smart commit workflow
- bushido:test-driven-development - TDD skill
- bushido:code-reviewer - Review skill