
Simple Claude Code Review Prompt

Last updated on Mar 15, 2026

I’ve been using Claude Code as my daily driver for AI-assisted coding (along with Cursor).

Recently, I installed the Claude Code Review GitHub Action for my PRs, but the default prompt wasn’t for me.

Too verbose. Too nit-picky. Every review felt like reading a laundry list of style suggestions I didn’t ask for.

So I wrote a simpler one. It gives a short summary, flags actual bugs, and signs off with an emoji so I can tell at a glance if something needs attention.

It’s cut down the noise considerably: I catch real issues before a human even looks at the PR.

Prompt

Please analyze the changes in this PR and focus on identifying critical issues related to:

- Potential bugs or issues
- Performance
- Security
- Correctness

If critical issues are found, list them in a few short bullet points. If no critical issues are found, provide a simple approval.
Sign off with a checkbox emoji: ✅ (approved) or ❌ (issues found).

Keep your response concise. Only highlight critical issues that must be addressed before merging. Skip detailed style or minor suggestions unless they impact performance, security, or correctness.

Full claude-code-review.yml

I ran into a couple of gotchas setting this up.

I had to update the claude-code-review.yml permissions and include my github_token.
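Condensed down to just those two changes, the additions look like this (the full file with everything in context follows below):

```yaml
jobs:
  claude-review:
    # Without these grants, the action can't post its review as a PR comment
    permissions:
      contents: read
      pull-requests: write
      issues: write
      id-token: write
    steps:
      - uses: anthropics/claude-code-action@beta
        with:
          # The default workflow token also has to be passed in explicitly
          github_token: ${{ secrets.GITHUB_TOKEN }}
```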

claude-code-review.yml:

name: Claude Code Review

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  claude-review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      issues: write
      id-token: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - name: Run Claude Code Review
        id: claude-review
        uses: anthropics/claude-code-action@beta
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          github_token: ${{ secrets.GITHUB_TOKEN }}

          direct_prompt: |
            Please analyze the changes in this PR and focus on identifying critical issues related to:
            - Potential bugs or issues
            - Performance
            - Security
            - Correctness

            If critical issues are found, list them in a few short bullet points. If no critical issues are found, provide a simple approval.
            Sign off with a checkbox emoji: ✅ (approved) or ❌ (issues found).

            Keep your response concise. Only highlight critical issues that must be addressed before merging. Skip detailed style or minor suggestions unless they impact performance, security, or correctness.

Conclusion

If you’ve got ideas on how to improve the prompt, I’m all ears.

For what it’s worth, I still think human reviewers should review your code.

This just speeds up the feedback loop so the human isn’t wasting time on things a machine can catch.

Update: I’ve since built k-review, a multi-model review tool that runs six AI models in parallel and uses majority voting to find real issues.

Worth a look if single-model review feels limiting.

I also wrote up my full AI coding workflow, which shows where this automated review fits into the bigger picture.

Frequently Asked Questions

What is Claude Code?
Claude Code is Anthropic's CLI tool for coding with AI. It can be used as a daily coding assistant and integrates with GitHub Actions to perform automated code reviews on pull requests. The GitHub Action uses Anthropic's API to analyze PR changes and provide feedback.
How do you set up Claude Code for automated GitHub PR reviews?
Create a claude-code-review.yml workflow file in your .github/workflows directory. Configure it to trigger on pull_request events (opened and synchronize), set permissions for contents read, pull-requests write, issues write, and id-token write. Use the anthropics/claude-code-action@beta action with your Anthropic API key and GitHub token, and include a direct_prompt with your review instructions.
What makes a good AI code review prompt?
A good AI code review prompt is concise and focused on critical issues only — potential bugs, performance problems, security vulnerabilities, and correctness. Avoid overly verbose prompts that generate nit-picky feedback on style or minor issues, which creates review overload. Include a clear sign-off format (like approved/issues-found emoji) for quick scanning.
