I’ve been using Claude Code as my daily driver for AI-assisted coding (along with Cursor).
Recently, I installed the Claude Code Review GitHub Action for my PRs, but the default prompt wasn’t for me.
Too verbose. Too nit-picky. Every review felt like reading a laundry list of style suggestions I didn’t ask for.
So I wrote a simpler one. It gives a short summary, flags actual bugs, and signs off with an emoji so I can tell at a glance if something needs attention.
It’s cut down the noise by a lot - I catch real issues before a human even looks at the PR.
Prompt
Please analyze the changes in this PR and focus on identifying critical issues related to:
- Potential bugs or issues
- Performance
- Security
- Correctness
If critical issues are found, list them in a few short bullet points. If no critical issues are found, provide a simple approval.
Sign off with a checkbox emoji: ✅ (approved) or ❌ (issues found).
Keep your response concise. Only highlight critical issues that must be addressed before merging. Skip detailed style or minor suggestions unless they impact performance, security, or correctness.
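The emoji sign-off also makes the verdict machine-readable, so a later script or workflow step could gate on it. A minimal sketch (the `review_status` helper is hypothetical, and it assumes the sign-off emoji are ✅ and ❌):

```python
def review_status(comment: str) -> str:
    """Classify a review comment by its sign-off emoji.

    Assumes the prompt above, where the model signs off with
    ✅ (approved) or ❌ (issues found).
    """
    # Check for the failure marker first, so a mixed comment errs
    # on the side of "issues found".
    if "❌" in comment:
        return "issues found"
    if "✅" in comment:
        return "approved"
    return "unknown"
```

Nothing in the action requires this; it’s just a convenient side effect of forcing a single-emoji verdict.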
Full claude-code-review.yml
I ran into a couple of gotchas setting this up: I had to update the permissions block in claude-code-review.yml and pass my github_token explicitly.
claude-code-review.yml:
name: Claude Code Review

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  claude-review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      issues: write
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - name: Run Claude Code Review
        id: claude-review
        uses: anthropics/claude-code-action@beta
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          github_token: ${{ secrets.GITHUB_TOKEN }}
          direct_prompt: |
            Please analyze the changes in this PR and focus on identifying critical issues related to:
            - Potential bugs or issues
            - Performance
            - Security
            - Correctness
            If critical issues are found, list them in a few short bullet points. If no critical issues are found, provide a simple approval.
            Sign off with a checkbox emoji: ✅ (approved) or ❌ (issues found).
            Keep your response concise. Only highlight critical issues that must be addressed before merging. Skip detailed style or minor suggestions unless they impact performance, security, or correctness.
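Since a missing permission was one of my gotchas, a quick local check that the workflow file still grants everything the action needs can save a failed run. This is a stdlib-only sketch; the `check_workflow` helper is mine, and the path just assumes the standard workflows directory:

```python
from pathlib import Path

# Permission lines the action needs (taken from the workflow above).
REQUIRED_PERMISSIONS = (
    "contents: read",
    "pull-requests: write",
    "issues: write",
    "id-token: write",
)

def check_workflow(path: str = ".github/workflows/claude-code-review.yml") -> list[str]:
    """Return the required permission lines missing from the workflow file."""
    text = Path(path).read_text(encoding="utf-8")
    return [perm for perm in REQUIRED_PERMISSIONS if perm not in text]
```

An empty list means all four permissions are present; anything returned is a line to add back before pushing.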
Conclusion
If you’ve got ideas on how to improve the prompt, I’m all ears.
For what it’s worth - I still think human reviewers should review your code.
This just speeds up the feedback loop so the human isn’t wasting time on things a machine can catch.
Update: I’ve since built k-review - a multi-model review tool that runs six AI models in parallel and uses majority voting to find real issues.
Worth a look if single-model review feels limiting.
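For the curious, the majority-voting idea reduces to something like the sketch below. To be clear, this is a toy illustration of the general technique, not k-review’s actual code; real findings would need fuzzy or semantic matching rather than exact strings:

```python
from collections import Counter

def majority_issues(model_findings: list[list[str]]) -> list[str]:
    """Keep only issues flagged by more than half of the models.

    model_findings holds one list of issue descriptions per model.
    set() deduplicates within a model so no single model can vote twice.
    """
    counts = Counter(
        issue for findings in model_findings for issue in set(findings)
    )
    threshold = len(model_findings) / 2
    return [issue for issue, votes in counts.items() if votes > threshold]
```

The intuition: a hallucinated issue tends to come from one model, while a real bug tends to be spotted independently by several.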
I also wrote up my full AI coding workflow, which shows where this automated review fits into the bigger picture.