Chapter 7: Speed Mode
Speed mode answers one question: does it work at all? You implement the feature assuming everything goes right. No error handling, no validation, no edge cases. Just the core functionality, proven to work when conditions are ideal.

This feels wrong at first. Every instinct tells you to handle errors, validate inputs, and plan for what could go wrong. Resist that instinct. Speed mode exists precisely because trying to do everything at once is how features become fragile. By separating "make it work" from "make it robust," you avoid the tangled mess that results when success paths and failure paths are implemented together.
The Worktree: Your Safe Space
Before you write any code, you need a safe place to work. In the Jetty Method, that's a worktree.
Think of a worktree as a parallel universe for your code. Your main codebase stays untouched. You experiment in the worktree. If the experiment works, you merge it back. If it fails, you delete the worktree and nothing was harmed.
This isolation is critical. It means you can try things without fear. It means a half-finished feature can't corrupt your working application. It means you can walk away from a broken implementation and start fresh.
When you start a chore, you create a worktree for it. All your changes happen there. When the chore is complete and verified, you merge it back to main. Then you clean up the worktree and start the next one.
The pattern is always the same: create, implement, verify, merge, cleanup. Each chore follows this cycle. Each cycle leaves your main codebase better than before.
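The cycle above can be sketched with git worktrees. This is an illustrative script, not a prescribed workflow: it assumes git 2.28 or later (for `git init -b`), and the branch name, file name, and stand-in commit are all hypothetical.

```shell
# One chore cycle with a git worktree (illustrative names throughout)
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "initial"

# 1. Create: a parallel checkout on its own branch
git worktree add -q "$repo.wt" -b chore/login-form

# 2-3. Implement and verify inside the worktree (a stand-in commit here)
cd "$repo.wt"
echo "login form" > feature.txt
git add feature.txt
git -c user.email=you@example.com -c user.name=you \
    commit -q -m "add login form"

# 4. Merge the verified chore back to main
cd "$repo"
git merge -q chore/login-form

# 5. Cleanup: remove the worktree and delete its branch
git worktree remove "$repo.wt"
git branch -q -d chore/login-form
```

If the experiment had failed instead, you would replace steps 4 and 5 with `git worktree remove --force` and delete the branch; main never sees the broken work.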
The RED-GREEN Loop
Speed mode follows a simple cycle called RED-GREEN.
RED: Run your BDD scenarios. They fail. This is expected. You haven't implemented anything yet. The failing tests tell you exactly what needs to be built.
GREEN: Implement just enough code to make the failing scenario pass. Don't get ahead of yourself. Don't implement error handling just because you know you'll need it eventually. Just make the current test pass.
Repeat: Run the scenarios again. Something that was red is now green. Something else might still be red. Implement the next piece. Run again.
When all your success scenarios pass, speed mode is complete.
This loop keeps you focused. Instead of thinking about the entire feature, you're thinking about the next failing test. Instead of wondering if you're done, you have a clear answer: are all the scenarios green?
The loop also catches mistakes early. If you break something while implementing a new piece, you'll know immediately. The previously-passing scenario will fail. Fix it before moving on.
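One GREEN step of the loop might look like this. The cart example is hypothetical, chosen only to show the shape of "just enough code": a success scenario expressed as a plain test, and the minimal implementation that satisfies it.

```python
# GREEN: just enough code to satisfy the success scenario below.
# No validation, no error handling -- those are deferred to stable mode.
def add_to_cart(cart, item, quantity):
    """Happy path only: assumes the item exists and quantity is positive."""
    cart[item] = cart.get(item, 0) + quantity
    return cart

# The success scenario, expressed as a plain test:
# Given an empty cart, when the user adds 2 widgets,
# then the cart holds 2 widgets.
def test_user_can_add_items_to_cart():
    cart = add_to_cart({}, "widget", 2)
    assert cart == {"widget": 2}

test_user_can_add_items_to_cart()
```

Notice what is missing on purpose: no check for negative quantities, no check that the item is a real product. Those become RED tests later, in stable mode.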
Integration First
The integration chore matters more than it might seem.
A feature that works perfectly in isolation but can't be reached by users is useless. The integration chore wires your feature into the existing application. It makes the integration scenario pass: users can navigate to the feature, click the button, access the page.
Always complete the integration chore first. This catches a common failure mode early. You don't want to build an entire feature only to discover it doesn't fit into your application's architecture.
After integration passes, you have proof that the feature is reachable. Everything else builds on that foundation.
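An integration scenario can be as small as a reachability check. This sketch assumes a routing table is how your app exposes features; the routes and view names are invented for illustration.

```python
# Hypothetical routing table after the integration chore wires in the feature.
ROUTES = {
    "/": "home_view",
    "/cart": "cart_view",  # the new feature, now reachable
}

# The integration scenario: users can navigate to the feature.
def test_cart_feature_is_reachable():
    assert "/cart" in ROUTES

test_cart_feature_is_reachable()
```

The point is not the mechanism (routes, menu entries, buttons all work) but that reachability is proven by a passing test before any feature logic exists.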
What You Implement
Speed mode implements all the functionality described in your success scenarios:
The integration (wiring the feature into your app)
Required functionality (everything the feature must do)
Optional functionality (nice-to-have features you planned)
Your scenarios already describe all of this. Your job is to make each scenario pass, one at a time.
What You Skip
Speed mode deliberately skips:
Error handling (what happens when the database is down)
Input validation (what happens when the email format is wrong)
Edge cases (what happens with empty inputs or boundary values)
Security concerns (what happens if someone tries to exploit this)
This feels incomplete because it is incomplete. You're building the happy path. The path where users do what you expect, inputs are valid, and systems work correctly.
Why start here? Because it separates two different problems. "Does this feature work?" is different from "Does this feature handle failures gracefully?" Mixing them together creates confusion. You don't know if a test is failing because your core logic is wrong or because your error handling is wrong.
Speed mode isolates the first question. Get the feature working. Prove it with passing tests. Then move on to making it robust.
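A speed-mode implementation makes the skipped work visible. This signup example is hypothetical; the comments mark exactly what has been deferred, so stable mode has a checklist waiting.

```python
# Speed-mode version of a signup step: happy path only.
def register_user(users, email):
    # Deliberately skipped until stable mode:
    #   - email format validation
    #   - duplicate-account checks
    #   - database-failure handling
    users.append(email)
    return users

# The success scenario passes under ideal conditions...
assert register_user([], "ada@example.com") == ["ada@example.com"]
# ...and nothing stops a malformed input yet. That is stable mode's job.
assert register_user([], "not-an-email") == ["not-an-email"]
```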
Working with Your AI Assistant
Your AI assistant writes the code. Your job is to guide the process and verify the results.
At the start of each chore, share the relevant scenarios. The AI analyzes them and proposes an implementation approach. Usually you can accept this and let it proceed. Sometimes you'll want to adjust the approach before implementation begins.
During implementation, the AI works through the RED-GREEN loop. It runs tests, implements code, runs tests again. You'll see progress as scenarios move from failing to passing.
If something goes wrong, the AI will surface it. A test that won't pass. An unexpected error. A conflict with existing code. At that point you collaborate to resolve it.
When Speed Mode Ends
Speed mode ends when all success scenarios pass. Not when the feature feels done. Not when you're tired of working on it. When the tests are green.
This is the discipline that prevents drift. Your BDD scenarios are the specification. If the scenarios pass, the specification is satisfied. If you think the feature needs more, that's a sign your scenarios were incomplete. Add the missing scenarios, then implement them.
When speed mode completes, you have working software. Users can reach the feature. The functionality works when conditions are ideal. You're ready to make it robust.
Transitioning to Stable Mode
After the last speed mode chore is complete and merged, you transition to stable mode. This isn't automatic. It's a deliberate step.
The transition involves generating stable mode scenarios. These are the error handling and edge case scenarios you deferred during planning. Your AI assistant proposes them based on the feature's functionality:
What validation errors can occur?
What system failures need handling?
What edge cases need coverage?
What happens with unusual inputs?
Review these proposed scenarios. Add cases the AI missed. Remove cases that don't apply. Then create stable mode chores to address them.
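For the hypothetical signup feature used earlier, the AI's proposals might read like the list below, one or more per question in the checklist above. The wording is illustrative; each scenario that survives your review becomes a RED test for a stable-mode chore.

```python
# Hypothetical stable-mode scenarios proposed for a signup feature.
proposed_scenarios = [
    # validation errors
    "Given a malformed email, when the user signs up, then a validation error is shown",
    # system failures
    "Given the database is down, when the user signs up, then a retry message is shown",
    # edge cases
    "Given an empty form, when the user submits, then required-field errors are shown",
    # unusual inputs
    "Given a duplicate email, when the user signs up, then the existing account is flagged",
]

for scenario in proposed_scenarios:
    assert scenario.startswith("Given")  # each follows Given/When/Then shape
```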
With stable mode scenarios written and chores created, you're ready for the next chapter.