I’ve been asked a few times what my AI coding workflow looks like, so here it is. A typical session, start to finish.
The tools
Claude Code is the daily driver. It runs in the terminal, has filesystem access, and spawns sub-agents to work on things in parallel.
Cursor is still my IDE. I use it for editing and sometimes testing non-Claude models on the same task to compare results.
OpenCode is the backup. When Claude Code gets stuck on a bug, going in circles or not making progress, I’ll switch to OpenCode with the latest Codex model and let it take a crack at it.
Different model, different approach, often a different result.
I haven’t given the Codex app a proper try, since it seems redundant alongside OpenCode.
On top of Claude Code, I run superpowers, a plugin system with skills for planning and execution.
Most of what I describe below comes from these skills.
I also use two MCP servers: Context7 for pulling library docs into the agent’s context, and Figma MCP for reading design files directly.
I don’t keep Figma MCP on all the time though. It eats a lot of the context window just being loaded, and most sessions don’t need it.
I run /context in Claude Code before starting to see what’s taking up space and disable what I won’t need.
Lately, one thing has been bugging me about AI-generated interfaces: they all look the same.
Same border radius, same gray palette, same Tailwind defaults.
You can spot them immediately.
I use a frontend design skill that forces the agent to make real decisions about visual hierarchy and spacing instead of falling back to defaults.
When there’s no designer handing me a Figma file, this is what keeps things from looking like a template.
Planning before the agent codes
Every session starts with a plan and its own branch.
The superpowers planning skill generates a markdown file in docs/plans/ with the scope, the approach, and the files that’ll get touched.
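As a rough sketch of the shape, here's what a session kickoff and plan file can look like. Everything below is invented for illustration: the branch name, the filename, and the plan contents. In practice the planning skill generates the file; this is the hand-written equivalent.

```shell
# Create a branch for the work (skipped cleanly if we're not in a repo).
git rev-parse --is-inside-work-tree >/dev/null 2>&1 && git switch -c feat/retry-budget || true

# A minimal plan skeleton: scope, approach, files touched.
mkdir -p docs/plans
cat > docs/plans/retry-budget.md <<'EOF'
# Plan: retry budget for the API client

## Scope
- Add a per-request retry budget to the HTTP client
- Out of scope: changes to the public client API

## Approach
- Wrap the existing transport and count attempts per request

## Files touched
- internal/httpclient/transport.go
- internal/httpclient/transport_test.go
EOF
```

The point isn't the template itself; it's that scope and files-touched are written down before any code exists, so there's something concrete to push back on.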
I’ve also been using the Relentless Plan Interview to drill down on what I’m actually trying to solve.
It asks a lot of clarifying questions, which I like. Think of it as a sparring partner that won’t let you hand-wave.
If there are 3rd party libraries involved, I’ll tell the agent to pull docs via Context7 during planning.
Same with the frontend design skill if there’s UI work.
Once the plan is done, I review it.
Sometimes I push back or scope things down. Once it looks right, I approve it, but I don’t execute yet.
Skipping this step is how you end up with agents refactoring things you didn’t ask about or picking approaches that conflict with your architecture.
I learned that the hard way a few times before making it mandatory.
A good plan also means longer unattended runs. I’ve had Claude Code and OpenCode go 2+ hours without me touching anything, just following the plan.
Letting the agent execute
I clear the context window and start a fresh session before executing.
Clean context means the agent isn’t carrying noise from the planning phase.
I point Claude Code at the plan in docs/plans/ in plan mode and tell it to execute the plan.
I give it a few standing instructions every time:
- Explore the codebase and review existing patterns before writing code
- Use Context7 for any 3rd party library work
- Use the Figma MCP if there are designs to implement
- Invoke the frontend design skill for UI work
- Spawn sub-agents for independent tasks or use Agent Teams where it makes sense
- Commit along the way using the git commit prompt
- Reference the plan in docs/plans/ throughout
Once Claude lays out its execution plan, I exit plan mode.
From there, the agent works through the plan. I watch but don’t touch anything unless something goes sideways.
Context7 is where the MCP stuff pays off. Instead of the agent guessing at API signatures or using methods from two versions ago, it pulls the actual docs.
Fewer hallucinations, less time fixing things after.
When there’s a Figma file, the MCP server lets the agent read the design directly, pulling colors, spacing, and layout from the source file.
Not perfect, but way better than me trying to describe a mockup in a prompt.
It builds a decent base for me to refine manually.
QA on AI-generated code
Most people skip this. I think it’s the most important part.
After the agent finishes coding, I run four review skills against the diff. Every time.
First, the AI slop review. This catches the junk agents leave behind: comments explaining what const x = 5 does, try-catch blocks around code that can’t throw, abstractions wrapping a single function call. It compiles, it passes lint, and it still makes your codebase worse.
Then the dead code audit. Agents are bad at cleaning up after themselves. They’ll refactor a function but leave the old one sitting there unused.
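A crude approximation of what the dead code audit looks for. The skill does this properly; the grep version below is only to illustrate the pattern, and the file and function names are made up.

```shell
# Hypothetical leftover: the agent wrote formatUserName but never
# deleted the formatUser it replaced.
mkdir -p src
cat > src/user.ts <<'EOF'
export function formatUserName(u: { first: string; last: string }) {
  return `${u.first} ${u.last}`;
}

// Old version left behind by the refactor:
export function formatUser(u: { first: string; last: string }) {
  return u.first + " " + u.last;
}
EOF

# Any call sites for the old name, excluding its own definition?
grep -rn "formatUser(" src | grep -v "function formatUser" \
  || echo "formatUser: no callers, candidate for deletion"
```

No callers means the old function is dead weight and should go with the same commit that introduced its replacement.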
For frontend work, React Best Practices checks for performance issues and hook misuse.
For Go, go vet.
For JavaScript, the verification suite.
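For a sense of what go vet catches, here's a contrived module with a classic mistake. The module name and layout are invented for the demo.

```shell
# A Printf verb that doesn't match its argument type: this compiles
# fine, but go vet flags it.
mkdir -p vetdemo
printf 'module vetdemo\n\ngo 1.21\n' > vetdemo/go.mod
cat > vetdemo/main.go <<'EOF'
package main

import "fmt"

func main() {
	count := 3
	fmt.Printf("retries left: %s\n", count) // %s with an int
}
EOF
(cd vetdemo && go vet ./...) || echo "go vet flagged the bad format verb"
```

Agents produce exactly this kind of compiles-but-wrong code, which is why a static pass per language earns its place in the checklist.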
Four checks sounds tedious. Takes a couple minutes. I can’t remember the last time it didn’t catch something.
Opening the pull request
When the branch is ready, I do one more pass. I’ll use the full review prompt or the concise review depending on how big it is.
For larger changes, I’ll run the K-LLM review in OpenCode for a second opinion.
Once the branch is clean, I use my create pull request prompt to open the PR.
It reads the full branch history, not just the last commit, and drafts a summary with a test plan.
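Mechanically, "full branch history" boils down to something like the commands below. The throwaway repo and the branch and commit names are purely for illustration so it runs standalone.

```shell
# Build a disposable repo with a main branch and a feature branch.
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
git checkout -q -B main
git commit -q --allow-empty -m "initial"
git checkout -q -b feat/retry-budget
git commit -q --allow-empty -m "add retry budget"
git commit -q --allow-empty -m "wire budget into transport"

# Everything on the branch since it diverged from main,
# not just the tip commit:
git log --oneline main..HEAD
```

Summarizing from that range, rather than from HEAD alone, is what keeps the PR description honest about mid-branch refactors and reversals.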
PRs go up as drafts by default.
I review it one more time manually before submitting for code review.
Once the PR is out of draft, GitHub Actions fires off Claude Code for another automated review.
We also have BugBot by Cursor enabled, which catches things the other passes miss.
Wrapping up
Plan, execute, QA, PR. That's the loop that's been working for me.
The tools will change and the models will get better, but some things won’t.
Just remember to plan before you code, review before you ship.