🔨 Phase 3 Playbook — Build & Iterate

Duration: 2-12 weeks (varies by scope) | Agents: 15-30+ | Gate Keeper: Agents Orchestrator


Objective

Implement all features through continuous Dev↔QA loops. Every task is validated before the next begins. This is where the bulk of the work happens — and where NEXUS's orchestration delivers the most value.

Pre-Conditions

  • Phase 2 Quality Gate passed (foundation verified)
  • Sprint Prioritizer backlog available with RICE scores
  • CI/CD pipeline operational
  • Design system and component library ready
  • API scaffold with auth system ready

The Dev↔QA Loop — Core Mechanic

The Agents Orchestrator manages every task through this cycle:

FOR EACH task IN sprint_backlog (ordered by RICE score):

  1. ASSIGN task to appropriate Developer Agent (see assignment matrix)
  2. Developer IMPLEMENTS task
  3. Evidence Collector TESTS task
     - Visual screenshots (desktop, tablet, mobile)
     - Functional verification against acceptance criteria
     - Brand consistency check
  4. IF verdict == PASS:
       Mark task complete
       Move to next task
     ELIF verdict == FAIL AND attempts < 3:
       Send QA feedback to Developer
       Developer FIXES specific issues
       Return to step 3
     ELIF attempts >= 3:
       ESCALATE to Agents Orchestrator
       Orchestrator decides: reassign, decompose, defer, or accept
  5. UPDATE pipeline status report
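
The cycle above can be sketched in code. The following is an illustrative Python sketch, not NEXUS's actual implementation: it assumes a simple `Task` record carrying a RICE score, and pluggable `implement`/`test` callables standing in for the Developer agent and the Evidence Collector. The three-attempt threshold comes from the escalation rule above.

```python
from dataclasses import dataclass, field

MAX_ATTEMPTS = 3  # escalation threshold from the playbook

@dataclass
class Task:
    name: str
    rice_score: float
    attempts: int = 0
    status: str = "pending"
    qa_feedback: list = field(default_factory=list)

def dev_qa_loop(backlog, implement, test):
    """Run each task through the Dev<->QA cycle, highest RICE score first."""
    escalated = []
    for task in sorted(backlog, key=lambda t: t.rice_score, reverse=True):
        while True:
            implement(task)                 # Developer builds or fixes
            verdict, feedback = test(task)  # Evidence Collector validates
            task.attempts += 1
            if verdict == "PASS":
                task.status = "complete"
                break
            task.qa_feedback.append(feedback)  # QA feedback back to Developer
            if task.attempts >= MAX_ATTEMPTS:
                task.status = "escalated"      # Orchestrator decides next step
                escalated.append(task)
                break
    return escalated
```

Note that a task never blocks its successors indefinitely: it either passes QA or escalates out of the loop after three attempts.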

Agent Assignment Matrix

Primary Developer Assignment

| Task Category | Primary Agent | Backup Agent | QA Agent |
|---|---|---|---|
| React/Vue/Angular UI | Frontend Developer | Rapid Prototyper | Evidence Collector |
| REST/GraphQL API | Backend Architect | Senior Developer | API Tester |
| Database operations | Backend Architect | — | API Tester |
| Mobile (iOS/Android) | Mobile App Builder | — | Evidence Collector |
| ML model/pipeline | AI Engineer | — | Test Results Analyzer |
| CI/CD/Infrastructure | DevOps Automator | Infrastructure Maintainer | Performance Benchmarker |
| Premium/complex feature | Senior Developer | Backend Architect | Evidence Collector |
| Quick prototype/POC | Rapid Prototyper | Frontend Developer | Evidence Collector |
| WebXR/immersive | XR Immersive Developer | — | Evidence Collector |
| visionOS | visionOS Spatial Engineer | macOS Spatial/Metal Engineer | Evidence Collector |
| Cockpit controls | XR Cockpit Interaction Specialist | XR Interface Architect | Evidence Collector |
| CLI/terminal tools | Terminal Integration Specialist | — | API Tester |
| Code intelligence | LSP/Index Engineer | — | Test Results Analyzer |
| Performance optimization | Performance Benchmarker | Infrastructure Maintainer | Performance Benchmarker |

Specialist Support (activated as needed)

| Specialist | When to Activate | Trigger |
|---|---|---|
| UI Designer | Component needs visual refinement | Developer requests design guidance |
| Whimsy Injector | Feature needs delight/personality | UX review identifies opportunity |
| Visual Storyteller | Visual narrative content needed | Content requires visual assets |
| Brand Guardian | Brand consistency concern | QA finds brand deviation |
| XR Interface Architect | Spatial interaction design needed | XR feature requires UX guidance |
| Analytics Reporter | Deep data analysis needed | Feature requires analytics integration |

Parallel Build Tracks

For NEXUS-Full deployments, four tracks run simultaneously:

Track A: Core Product Development

Managed by: Agents Orchestrator (Dev↔QA loop)
Agents: Frontend Developer, Backend Architect, AI Engineer,
        Mobile App Builder, Senior Developer
QA: Evidence Collector, API Tester, Test Results Analyzer

Sprint cadence: 2-week sprints
Daily: Task implementation + QA validation
End of sprint: Sprint review + retrospective

Track B: Growth & Marketing Preparation

Managed by: Project Shepherd
Agents: Growth Hacker, Content Creator, Social Media Strategist,
        App Store Optimizer

Sprint cadence: Aligned with Track A milestones
Activities:
- Growth Hacker → Design viral loops and referral mechanics
- Content Creator → Build launch content pipeline
- Social Media Strategist → Plan cross-platform campaign
- App Store Optimizer → Prepare store listing (if mobile)

Track C: Quality & Operations

Managed by: Agents Orchestrator
Agents: Evidence Collector, API Tester, Performance Benchmarker,
        Workflow Optimizer, Experiment Tracker

Continuous activities:
- Evidence Collector → Screenshot QA for every task
- API Tester → Endpoint validation for every API task
- Performance Benchmarker → Periodic load testing
- Workflow Optimizer → Process improvement identification
- Experiment Tracker → A/B test setup for validated features

Track D: Brand & Experience Polish

Managed by: Brand Guardian
Agents: UI Designer, Brand Guardian, Visual Storyteller,
        Whimsy Injector

Triggered activities:
- UI Designer → Component refinement when QA identifies visual issues
- Brand Guardian → Periodic brand consistency audit
- Visual Storyteller → Visual narrative assets as features complete
- Whimsy Injector → Micro-interactions and delight moments

Sprint Execution Template

Sprint Planning (Day 1)

Sprint Prioritizer activates:
1. Review backlog with updated RICE scores
2. Select tasks for sprint based on team velocity
3. Assign tasks to developer agents
4. Identify dependencies and ordering
5. Set sprint goal and success criteria

Output: Sprint Plan with task assignments

Daily Execution (Day 2 to Day N-1)

Agents Orchestrator manages:
1. Current task status check
2. Dev↔QA loop execution
3. Blocker identification and resolution
4. Progress tracking and reporting

Status report format:
- Tasks completed today: [list]
- Tasks in QA: [list]
- Tasks in development: [list]
- Blocked tasks: [list with reason]
- QA pass rate: [X/Y]
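
Assembling that report is mechanical once tasks carry a status field. A minimal sketch, assuming each task is a dict with `name`, `status`, and an optional `reason`; the status names (`complete`, `in_qa`, `in_dev`, `blocked`, `failed_qa`) are placeholders, not NEXUS's actual schema.

```python
from collections import defaultdict

def daily_status_report(tasks):
    """Bucket tasks by status and compute the day's QA pass rate."""
    buckets = defaultdict(list)
    for t in tasks:
        buckets[t["status"]].append(t["name"])
    # Pass rate counts only tasks that actually went through QA today
    attempted = len(buckets["complete"]) + len(buckets["failed_qa"])
    return {
        "completed_today": buckets["complete"],
        "in_qa": buckets["in_qa"],
        "in_development": buckets["in_dev"],
        "blocked": [(t["name"], t.get("reason", "unknown"))
                    for t in tasks if t["status"] == "blocked"],
        "qa_pass_rate": f"{len(buckets['complete'])}/{attempted}",
    }
```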

Sprint Review (Day N)

Project Shepherd facilitates:
1. Demo completed features
2. Review QA evidence for each task
3. Collect stakeholder feedback
4. Update backlog based on learnings

Participants: All active agents + stakeholders
Output: Sprint Review Summary

Sprint Retrospective

Workflow Optimizer facilitates:
1. What went well?
2. What could improve?
3. What will we change next sprint?
4. Process efficiency metrics

Output: Retrospective Action Items

Orchestrator Decision Logic

Task Failure Handling

WHEN task fails QA:
  IF attempt == 1:
    → Send specific QA feedback to developer
    → Developer fixes ONLY the identified issues
    → Re-submit for QA
    
  IF attempt == 2:
    → Send accumulated QA feedback
    → Consider: Is the developer agent the right fit?
    → Developer fixes with additional context
    → Re-submit for QA
    
  IF attempt == 3:
    → ESCALATE
    → Options:
      a) Reassign to different developer agent
      b) Decompose task into smaller sub-tasks
      c) Revise approach/architecture
      d) Accept with known limitations (document)
      e) Defer to future sprint
    → Document decision and rationale
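
The attempt-based routing above condenses into a single dispatch function. This is an illustrative sketch, assuming a task dict that carries `attempts` and accumulated `qa_feedback`; the escalation options mirror the list above.

```python
ESCALATION_OPTIONS = ["reassign", "decompose", "revise", "accept", "defer"]

def handle_qa_failure(task):
    """Route a failed QA verdict based on how many attempts the task has used."""
    if task["attempts"] == 1:
        # First failure: developer fixes ONLY the issues just identified
        return ("fix", task["qa_feedback"][-1:])
    if task["attempts"] == 2:
        # Second failure: send the full accumulated feedback as added context
        return ("fix", task["qa_feedback"])
    # Third failure: escalate; the Orchestrator picks one option and documents it
    return ("escalate", ESCALATION_OPTIONS)
```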

Parallel Task Management

WHEN multiple tasks have no dependencies:
  → Assign to different developer agents simultaneously
  → Each runs independent Dev↔QA loop
  → Orchestrator tracks all loops concurrently
  → Merge completed tasks in dependency order

WHEN task has dependencies:
  → Wait for dependency to pass QA
  → Then assign dependent task
  → Include dependency context in handoff
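
Dependency-aware parallel assignment amounts to scheduling tasks in topological "waves". A minimal sketch, assuming `deps` maps each task name to the set of task names that must pass QA first (names here are hypothetical):

```python
def build_waves(tasks, deps):
    """Group tasks into waves: every task in a wave has all of its
    dependencies completed in earlier waves, so the whole wave can run
    in parallel, each task in its own Dev<->QA loop."""
    remaining, done, waves = set(tasks), set(), []
    while remaining:
        ready = {t for t in remaining if deps.get(t, set()) <= done}
        if not ready:
            raise ValueError(f"dependency cycle among: {sorted(remaining)}")
        waves.append(sorted(ready))
        done |= ready
        remaining -= ready
    return waves
```

The Orchestrator would dispatch each wave to different developer agents at once, then merge results in wave order.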

Quality Gate Checklist

| # | Criterion | Evidence Source | Status |
|---|---|---|---|
| 1 | All sprint tasks pass QA (100% completion) | Evidence Collector screenshots per task | |
| 2 | All API endpoints validated | API Tester regression report | |
| 3 | Performance baselines met (P95 < 200ms) | Performance Benchmarker report | |
| 4 | Brand consistency verified (95%+ adherence) | Brand Guardian audit | |
| 5 | No critical bugs (zero P0/P1 open) | Test Results Analyzer summary | |
| 6 | All acceptance criteria met | Task-by-task verification | |
| 7 | Code review completed for all PRs | Git history evidence | |

Gate Decision

Gate Keeper: Agents Orchestrator

  • PASS: Feature-complete application → Phase 4 activation
  • CONTINUE: More sprints needed → Continue Phase 3
  • ESCALATE: Systemic issues → Studio Producer intervention
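
The three-way decision can be expressed as a small function. A sketch under assumed inputs: a dict mapping checklist criteria to booleans, plus a flag the Orchestrator would set when failures look systemic rather than task-level.

```python
def gate_decision(checklist, systemic_issues=False):
    """Map quality-gate state to PASS / CONTINUE / ESCALATE."""
    if systemic_issues:
        return "ESCALATE"   # Studio Producer intervention
    if all(checklist.values()):
        return "PASS"       # feature-complete -> activate Phase 4
    return "CONTINUE"       # more Phase 3 sprints needed
```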

Handoff to Phase 4

## Phase 3 → Phase 4 Handoff Package

### For Reality Checker:
- Complete application (all features implemented)
- All QA evidence from Dev↔QA loops
- API Tester regression results
- Performance Benchmarker baseline data
- Brand Guardian consistency audit
- Known issues list (if any accepted limitations)

### For Legal Compliance Checker:
- Data handling implementation details
- Privacy policy implementation
- Consent management implementation
- Security measures implemented

### For Performance Benchmarker:
- Application URLs for load testing
- Expected traffic patterns
- Performance budgets from architecture

### For Infrastructure Maintainer:
- Production environment requirements
- Scaling configuration needs
- Monitoring alert thresholds

Phase 3 is complete when all sprint tasks pass QA, all API endpoints are validated, performance baselines are met, and no critical bugs remain open.