claude-code · v0.6.6
hoyeon
@team-attention · ⭐ 126 stars · last commit 1 mo ago · 1 open issue
Multi-agent Spec-Driven Development (SDD) workflow toolkit for Claude Code. Orchestrates a complete specify → open → execute → publish → compound pipeline with parallel research agents, hook-based automation, PR state management, and ultrawork one-command full pipeline.
// RATINGS
🟢 ProSkills Score: 7.6/10 (AI Verified · Mar 9, 2026)
Not yet listed on ClawHub or SkillsMP
// README
# hoyeon
English | [한국어](README.ko.md) | [中文](README.zh.md) | [日本語](README.ja.md)
**All you need is requirements.**
A Claude Code plugin that derives requirements from your intent, verifies every derivation, and delivers traced code — without you writing a plan.
[npm: @team-attention/hoyeon-cli](https://www.npmjs.com/package/@team-attention/hoyeon-cli) · [License](LICENSE)
[Quick Start](#quick-start) · [Philosophy](#requirements-are-not-written) · [The Chain](#the-derivation-chain) · [Commands](#commands) · [Agents](#twenty-one-minds)
---
> *AI can build anything. The hard part is knowing what to build — precisely.*
Most AI coding fails at the **input**, not the output. The bottleneck isn't AI capability. It's human clarity. You say "add dark mode" and there are a hundred decisions hiding behind those three words.
Most tools either force you to enumerate them upfront, or ignore them entirely. Hoyeon does neither — it **derives** them. Layer by layer. Gate by gate. From intent to verified code.
---
## Requirements Are Not Written
> *You don't know what you want until you're asked the right questions.*
Requirements aren't artifacts you produce before coding. They're **discoveries** — surfaced through structured interrogation of your intent. Every "add a feature" conceals unstated assumptions. Every "fix the bug" hides a root cause you haven't named yet.
Hoyeon's job is to find what you haven't said.
```
You say: "add dark mode toggle"
│
Hoyeon asks: "System preference or manual?" ← assumption exposed
"Which components need variants?" ← scope clarified
"Persist where? How?" ← decision forced
│
Result: 3 requirements, 7 scenarios, 4 tasks — all with verify commands
```
This is not just process. It's built on four beliefs about how AI coding should work.
### 1. Requirements over tasks
> *Get the requirements right, and the code writes itself. Get them wrong, and no amount of code fixes it.*
Most AI tools jump straight to tasks — "create file X, edit function Y." But tasks are derivatives. They change when requirements change. If you start from tasks, you're building on sand.
Hoyeon starts from **goals** and derives downward through a layer chain:
```
Goal → Decisions → Requirements → Scenarios → Tasks
```
Requirements are refined from multiple angles before a single line of code is written. Interviewers probe assumptions. Gap analyzers find what's missing. UX reviewers check user impact. Tradeoff analyzers weigh alternatives. Each perspective sharpens the requirements until they're precise enough to generate verifiable scenarios.
The chain is directional: **requirements produce tasks, never the reverse.** If requirements change, scenarios and tasks are re-derived. This is why Hoyeon can recover from mid-execution blockers — the requirements are still valid, only the tasks need adjustment.
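That directional chain can be sketched as linked records, each layer holding a reference to the layer above it. The names and shapes below are illustrative, not Hoyeon's actual `spec.json` schema; the point is that when a requirement changes, only its derived scenarios and tasks go stale and get rebuilt, never the other way around:

```python
from dataclasses import dataclass


@dataclass
class Requirement:
    id: str
    text: str


@dataclass
class Scenario:
    id: str
    requirement_id: str  # each scenario traces to exactly one requirement
    text: str


@dataclass
class Task:
    id: str
    scenario_ids: list  # each task traces to one or more scenarios
    text: str


def rederive(requirement, scenarios, tasks):
    """When a requirement changes, find everything derived from it.

    The requirement itself stays valid; only its scenarios and the
    tasks that trace to them are re-derived."""
    stale_scenarios = [s for s in scenarios if s.requirement_id == requirement.id]
    stale_ids = {s.id for s in stale_scenarios}
    stale_tasks = [t for t in tasks if stale_ids & set(t.scenario_ids)]
    return stale_scenarios, stale_tasks
```

Changing R1 marks only R1's scenarios and tasks for re-derivation; everything derived from R2 is untouched, which is what makes mid-execution recovery cheap.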
### 2. Determinism by design
> *LLMs are non-deterministic. The system around them doesn't have to be.*
An LLM given the same prompt twice may produce different code. This is the fundamental challenge of AI-assisted development. Hoyeon's answer: **constrain the LLM with programmatic control** so that non-determinism doesn't propagate.
Three mechanisms enforce this:
- **`spec.json` as single source of truth** — Every agent reads from and writes to the same structured spec. No agent invents its own context. No information lives only in a conversation. The spec is the shared memory that survives context windows, compaction, and agent handoffs.
- **CLI-enforced structure** — `hoyeon-cli` validates every merge to `spec.json`. Field names, types, required relationships — all checked programmatically before the LLM ever sees the data. The CLI doesn't suggest structure; it **rejects** invalid structure.
- **Derivation chain as contract** — Goal → Decisions → Requirements → Scenarios → Tasks are linked. Each layer references the one above it. A scenario traces to a requirement. A task traces to scenarios. If the chain breaks, the gate blocks. This means: **if you have valid requirements, the system will produce a result** — deterministically routed, even if the LLM's individual outputs vary.
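A minimal sketch of that contract check, assuming a spec dict shaped roughly like the `spec.json` excerpts in this README (field names here are illustrative, not `hoyeon-cli`'s actual validator): every scenario must trace to an existing requirement and every task to existing scenarios, or the merge is rejected.

```python
def validate_chain(spec: dict) -> list:
    """Return a list of chain violations; an empty list means the gate passes."""
    req_ids = {r["id"] for r in spec.get("requirements", [])}
    scen_ids = {s["id"] for s in spec.get("scenarios", [])}
    errors = []
    # Every scenario must reference a requirement that actually exists.
    for s in spec.get("scenarios", []):
        if s.get("requirement") not in req_ids:
            errors.append(f"scenario {s['id']} does not trace to a requirement")
    # Every task must reference only scenarios that actually exist.
    for t in spec.get("tasks", []):
        missing = set(t.get("scenarios", [])) - scen_ids
        if missing:
            errors.append(f"task {t['id']} traces to unknown scenarios {sorted(missing)}")
    return errors
```

Because the check is plain set arithmetic over IDs, it gives the same verdict every run regardless of what the LLM generated, which is the "non-determinism doesn't propagate" property in miniature.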
The LLM does the creative work. The system ensures it stays on rails.
### 3. Machine-verifiable by default
> *If a human has to check it, the system failed to automate it.*
Every scenario in `spec.json` carries a `verified_by` classification:
```json
{
"given": "user clicks dark mode toggle",
"when": "toggle is activated",
"then": "theme switches to dark",
"verified_by": "machine",
"verify": { "type": "command", "run": "npm test -- --grep 'dark mode'" }
}
```
The system pushes everything toward `machine` verification. AC Quality Gate reviews each scenario and suggests converting `human` items to `machine` where possible. Multi-model code review (Codex + Gemini + Claude) runs independently and synthesizes a consensus verdict. Independent verifiers check Definition of Done in isolated contexts to eliminate self-verification bias.
Human review is reserved for what machines genuinely can't judge — UX feel, business logic correctness, naming decisions. Everything else runs automatically, every time, without asking.
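Under that split, a runner can execute every `machine` scenario's verify command and queue the rest for a person. This is a sketch assuming scenarios shaped like the JSON above; the runner itself is illustrative, not Hoyeon's implementation:

```python
import subprocess


def verify_scenarios(scenarios):
    """Run each machine-verifiable scenario's command; queue the rest for humans."""
    results, human_queue = [], []
    for s in scenarios:
        if s.get("verified_by") == "machine":
            # Exit code 0 means the scenario's "then" clause held.
            proc = subprocess.run(s["verify"]["run"], shell=True, capture_output=True)
            results.append((s["then"], proc.returncode == 0))
        else:
            # UX feel, business logic, naming: machines can't judge these.
            human_queue.append(s["then"])
    return results, human_queue
```

The asymmetry matters: machine scenarios run automatically every time, while the human queue is the short list a reviewer actually has to look at.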
### 4. Knowledge compounds
> *Most AI tools start from zero every session. Hoyeon remembers.*
Every execution generates structured learnings — not logs, not chat history, but **typed knowledge**: what went wrong, why, and the rule to prevent it next time.
```
/execute runs → Worker hits edge case
│
Worker records:
{ problem: "localStorage quota exceeded at 5MB",
cause: "No size check before write",
rule: "Always check remaining quota before localStorage.setItem" }
│
Next /specify → searches past learnings via BM25
│
Result: "Found: localStorage quota issue in todo-app spec.
→ Adding R5: quota guard requirement automatically"
```
This is **cross-spec compounding**. A lesson learned in one project surfaces as a requirement in the next. The system doesn't just avoid repeating mistakes — it actively strengthens future specs with evidence from past executions.
Three mechanisms make this work:
- **`spec learning`** — Workers record structured learnings during execution, auto-mapped to the requirements and tasks that produced them
- **`spec search`** — BM25 search across all specs: requirements, scenarios, constraints, and learnings. What you learned in project A informs what you ask in project B
- **Compounding loop** — Each /specify session starts by searching past learnings. More projects → richer search results → more complete requirements → fewer surprises during execution → better learnings → the cycle continues
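The search step above can be sketched with a small BM25 scorer over learning records. This is the standard Okapi BM25 formula, not `hoyeon-cli`'s actual index:

```python
import math
from collections import Counter


def bm25_search(query, docs, k1=1.5, b=0.75):
    """Rank documents (learning records) against a query with BM25."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n_docs = len(docs)
    # Document frequency: in how many docs does each term appear?
    df = Counter()
    for toks in tokenized:
        for term in set(toks):
            df[term] += 1
    scored = []
    for i, toks in enumerate(tokenized):
        tf = Counter(toks)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log(1 + (n_docs - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] + k1 * (1 - b + b * len(toks) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scored.append((score, docs[i]))
    return [d for s, d in sorted(scored, reverse=True) if s > 0]
```

Given the learnings from the example above, a `/specify` query like "localStorage quota" would surface the quota-exceeded record first, which is exactly the hook that lets a past lesson become a new requirement.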
The result: **the tenth project you run through Hoyeon is meaningfully better than the first** — not because the LLM improved, but because the knowledge base did.
---
These aren't aspirations. They're enforced by the architecture — the CLI rejects invalid specs, gates block unverified layers, hooks guard writes, agents verify in isolation, and learnings compound across projects. The system is designed so that **doing the right thing is the path of least resistance.**
---
## See It In Action
```
You: /specify "add dark mode toggle to settings page"
Hoyeon interviews you (scenario-based):
├─ "User opens the app at night — should it auto-detect OS dark mode or require a manual toggle?"
├─ "User switches to dark mode mid-session — should charts/images also invert?"
└─ derives implications: CSS variables needed, localStorage for persistence, prefers-color-scheme media query
Agents research your codebase in parallel:
├─ code-explorer scans component structure
├─ docs-researcher checks design system conventions
└─ ux-reviewer flags potential regressions
```
// HOW IT'S BUILT
KEY FILES: PLUGIN-README.md · README.ja.md · README.ko.md · README.md · README.zh.md
// PROSKILLS SCORE BREAKDOWN (7.6/10 · Good)
Code Quality 7/10 · Documentation 8/10 · Functionality 8/10 · Maintenance 8/10 · Security 7/10 · Uniqueness 7/10 · Usefulness 8/10