Run this command to begin your first Lean Loop cycle:

```shell
opencode "Implement a greet(name) function that returns 'Hello, {name}!': Start with PLAN phase"
```
A disciplined heartbeat cycle for AI-assisted and human-driven TDD development.
```
  PLAN ──────► APPLY ──────► UNIFY
    ▲                          │
    └──────────────────────────┘
```
AI coding assistants are powerful but chaotic. They write code fast, skip tests, lose context, and rarely reconcile what was planned vs. what was built. Lean Loop fixes that with three simple rules:
- Plan before you code. Define acceptance criteria before touching any file.
- Test every behavior. Strict Red-Green-Refactor, one test at a time.
- Reconcile every cycle. Compare plan vs. actual, log decisions, update state.
The result: predictable, auditable, test-covered development — whether you're working solo, with a team, or alongside an AI.
Let's walk through your first complete Lean Loop cycle together. We'll build a simple greet(name) function that returns a greeting message. This tutorial takes about 5-10 minutes and requires no prior setup beyond having Node.js installed.
First, let's create a new project folder and set up Lean Loop. The tutorial's tests use Jest, so we install it and wire up the `test` script as well:

```shell
mkdir my-first-loop && cd my-first-loop
npm init -y                          # Creates a basic package.json
npm install --save-dev jest          # Test runner used in this tutorial
npm pkg set scripts.test="jest"      # Makes `npm test` run Jest
npx skills add oxyplay/lean-loop -g  # Installs the Lean Loop skill
```

This automatically creates a `.system/` folder with all the tracking files we need.
Open `.system/PLAN.md` in your editor and replace the content with:

```markdown
# Objective
Add a `greet(name)` function that returns `Hello, {name}!`

## Acceptance Criteria
Given name="World"
When greet() is called
Then it returns "Hello, World!"

## Tasks
- [ ] Create greet.js with the function
- [ ] Write a test that fails (RED)
- [ ] Make the test pass (GREEN)
- [ ] Improve the code if needed (REFACTOR)
- [ ] Update STATE.md with what we learned
```

Now let's follow the TDD cycle:
Create a file called `test/greet.test.js` with this content:

```javascript
const { greet } = require('../greet');

test('greets World correctly', () => {
  expect(greet('World')).toBe('Hello, World!');
});
```

Run the test to see it fail (this is the RED phase):

```shell
npm test
# You should see a failure because greet.js doesn't exist yet
```

Create `greet.js` with the minimal code to make the test pass:
```javascript
function greet(name) {
  return `Hello, ${name}!`;
}

module.exports = { greet };
```

Run the test again:

```shell
npm test
# Now you should see the test pass! This is the GREEN phase.
```

Look at your code. Is it clear and simple? In this case, it's already good, so we can move on. (If we saw duplication or complexity, we'd improve it now while keeping the test green.)
Now we complete the loop by updating our tracking files:
- Check that all tests pass: we already did this with `npm test`
- Verify acceptance criteria: our test confirms that given "World", we get "Hello, World!"
- Update `.system/STATE.md`:
```markdown
# Current Phase
PLAN (we just completed a full loop)

# Next Action
Consider adding more test cases or move on to another feature

# What We Did
- Created greet.js function that returns a greeting
- Wrote and passed a test for the basic case
- Followed Red-Green-Refactor cycle

# Decisions & Debt
- No technical debt incurred
- Decision to keep the function simple and focused
```

- Log the cycle in `.system/LOG.md`:
```markdown
## Loop Completed: [timestamp]
- Planned: Add greet(name) function returning Hello, {name}!
- Actually done: Created greet.js with one test covering the basic case
- AC satisfied: Yes - given "World", returns "Hello, World!"
- Deferred / Debt: None
- Next Action: Consider adding edge case tests (empty string, special characters)
```

Now that you've completed one full PLAN → APPLY → UNIFY cycle, you can start another! Try adding a second test case:
- Go back to `.system/PLAN.md` and add a new acceptance criterion
- Write the failing test (RED)
- Make it pass (GREEN)
- Refactor if needed
- Update STATE.md and LOG.md
If you just want to get started quickly:

```shell
npx skills add oxyplay/lean-loop -g
```

The skill auto-creates `.system/` tracking files in your project on first use.
- Open `.system/PLAN.md` and write:
  - Objective: "Add a `greet(name)` function that returns `Hello, {name}!`"
  - AC: Given name="World", When greet() called, Then returns "Hello, World!"
- Open `.system/TDD_RULES.md` — follow Red-Green-Refactor
- Write one failing test → make it green → refactor
- Open `.system/STATE.md` — update phase, log what you did, set next action
- Repeat
Operational state lives in the .system/ folder.
```mermaid
graph LR
    PLAN["PLAN<br/><i>Define objective,<br/>write ACs, break tasks</i>"]
    APPLY["APPLY<br/><i>Red → Green → Refactor<br/>one behavior at a time</i>"]
    UNIFY["UNIFY<br/><i>Reconcile plan vs actual,<br/>update state, log decisions</i>"]
    PLAN --> APPLY --> UNIFY --> PLAN
```
Fill in .system/PLAN.md with objective, Given/When/Then acceptance criteria, boundaries, and task breakdown.
Gate: If ACs are unclear, contradictory, or untestable — stop and refine. Do not proceed to APPLY.
Execute Red-Green-Refactor cycles strictly:
- RED — Write ONE failing test. Confirm with actual console output.
- GREEN — Minimal code to pass. No future-proofing.
- REFACTOR — Improve design only when green.
Rule: If work expands beyond original ACs — stop and return to PLAN.
Mandatory reconciliation after every APPLY session:
- All tests green? Run the full suite.
- All ACs satisfied? Check each one explicitly.
- Update `.system/STATE.md` with current phase and next action.
- Log decisions and debt in `.system/LOG.md`.
- Compare planned vs. actually done.
```markdown
- **Planned:** [what you set out to do]
- **Actually done:** [what changed]
- **AC satisfied:** [each AC and whether met]
- **Deferred / Debt:** [if any]
- **Next Action:** [exactly one task]
```
```
your-project/
├── .system/
│   ├── PLAN.md        # Active plan with ACs and tasks
│   ├── STATE.md       # Current phase, next action, backlog
│   ├── LOG.md         # Decisions, debt, and failure log
│   └── TDD_RULES.md   # Red-Green-Refactor execution rules
└── ...
```
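If you are not using the opencode skill, the same layout can be scaffolded by hand (a minimal sketch; the files start empty and you fill them in from the templates above):

```shell
# Manual scaffold of the .system/ tracking files
mkdir -p .system
touch .system/PLAN.md .system/STATE.md .system/LOG.md .system/TDD_RULES.md
```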
- In-Session Context — All work happens in the main session. No subagent handoffs during implementation.
- Vertical Slicing — Build one behavior end-to-end (Tracer Bullets), not horizontal layers.
- Acceptance-Driven — Define "Done" via clear criteria before writing code.
- Behavior-First TDD — Test public interfaces, not internal implementation.
- Deep Modules — Small public interfaces that hide complex internals.
Rare exceptions, all must be logged in .system/LOG.md:
- Hotfixes — production bugs requiring immediate recovery
- Legacy Code — too tightly coupled for test-first approach
- Spikes — exploratory code, will be thrown away
Each example includes real .system/ files showing the exact state after PLAN, APPLY, and UNIFY phases.
| Example | Stack | What it builds |
|---|---|---|
| greet(name) | Node.js | Simple function — your first loop |
| POST /api/todos | Express + Jest | REST endpoint with validation |
| Toggle component | React + RTL | Accessible UI component |
Browse all: examples/
Lean Loop works with any AI coding tool. Drop-in configs for your stack:
| Tool | Setup | File |
|---|---|---|
| opencode | `npx skills add oxyplay/lean-loop -g` | SKILL.md |
| Claude Code | Copy CLAUDE.md to project root | integrations/claude-code/ |
| Cursor | Copy .cursorrules to project root | integrations/cursor/ |
See integrations/ for setup instructions.
Q: Do I need opencode to use this?
A: No. The .system/ templates work with any editor. The opencode skill just auto-initializes them.
Q: Can I use this with my team?
A: Yes. Commit .system/ to your repo. Everyone shares the same plan, state, and decision log.
Q: What if my project already has tests?
A: Lean Loop works alongside existing test suites. Use it for new features and incremental improvements.

Q: Is this only for AI-assisted development?
A: No. The PLAN → APPLY → UNIFY cycle works for human-only teams too. AI just makes the discipline easier to maintain.
- The Pragmatic Programmer — Tracer Bullets concept
- A Philosophy of Software Design — Deep Modules
- Test-Driven Development: By Example — Kent Beck's TDD approach
See CONTRIBUTING.md for guidelines.
MIT © 2026 Max