
@prpm-converter/cursorrules-receiving-code-review

Cursor rules version of the receiving-code-review skill.

prpm install @prpm-converter/cursorrules-receiving-code-review

📄 Full Prompt Content

# Receiving Code Review - Cursor Rules

---

## Overview

This cursor rule is based on the Claude Code "Receiving Code Review" skill, adapted for use in Cursor IDE.

## Core Methodology

When receiving code review feedback, verify before implementing, ask before assuming, and put technical correctness over social comfort.

## Principles

- Treat external feedback as suggestions to evaluate, not orders to follow
- Verify every suggestion against the actual codebase before acting on it
- Ask for clarification on unclear items before implementing anything
- Acknowledge correct feedback with the fix itself, not performative agreement

## Implementation Guidelines

- Follow the response pattern detailed in the skill content below: read, understand, verify, evaluate, respond, implement
- Implement one item at a time and test each fix individually


## Examples

The skill content below works through these in detail:

- **Performative Agreement (Bad)**
- **Technical Verification (Good)**
- **YAGNI (Good)**
- **Unclear Item (Good)**
- **External feedback = suggestions to evaluate, not orders to follow.**

## Integration with Other Rules

This rule works best when combined with:
- Code quality and style guidelines
- Testing best practices
- Project-specific conventions

You can reference other .cursorrules files by organizing them in your project:
```
.cursorrules/
  ├── base/
  │   ├── receiving-code-review.cursorrules (this file)
  │   └── code-quality.cursorrules
  └── project-specific.cursorrules
```

## Original Skill Content

The following is the complete content from the Claude Code skill for reference:

---

---
name: receiving-code-review
description: Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable - requires technical rigor and verification, not performative agreement or blind implementation
---

# Code Review Reception

## Overview

Code review requires technical evaluation, not emotional performance.

**Core principle:** Verify before implementing. Ask before assuming. Technical correctness over social comfort.

## The Response Pattern

```
WHEN receiving code review feedback:

1. READ: Complete feedback without reacting
2. UNDERSTAND: Restate requirement in own words (or ask)
3. VERIFY: Check against codebase reality
4. EVALUATE: Technically sound for THIS codebase?
5. RESPOND: Technical acknowledgment or reasoned pushback
6. IMPLEMENT: One item at a time, test each
```
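
A minimal walkthrough of the pattern on a hypothetical suggestion (the endpoint and details are illustrative, not from the skill):

```
Reviewer: "Cache the /users response"

1. READ: Full comment, no reaction
2. UNDERSTAND: "Add response caching to the /users endpoint"
3. VERIFY: /users returns auth-scoped, per-user data
4. EVALUATE: A shared cache could serve one user's data to another
5. RESPOND: "/users is per-user. Cache per-user with a short TTL, or skip caching here?"
6. IMPLEMENT: Only after scope is agreed - then test
```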

## Forbidden Responses

**NEVER:**
- "You're absolutely right!" (explicit CLAUDE.md violation)
- "Great point!" / "Excellent feedback!" (performative)
- "Let me implement that now" (before verification)

**INSTEAD:**
- Restate the technical requirement
- Ask clarifying questions
- Push back with technical reasoning if wrong
- Just start working (actions > words)

## Handling Unclear Feedback

```
IF any item is unclear:
  STOP - do not implement anything yet
  ASK for clarification on unclear items

WHY: Items may be related. Partial understanding = wrong implementation.
```

**Example:**
```
your human partner: "Fix 1-6"
You understand 1,2,3,6. Unclear on 4,5.

āŒ WRONG: Implement 1,2,3,6 now, ask about 4,5 later
āœ… RIGHT: "I understand items 1,2,3,6. Need clarification on 4 and 5 before proceeding."
```

## Source-Specific Handling

### From your human partner
- **Trusted** - implement after understanding
- **Still ask** if scope unclear
- **No performative agreement**
- **Skip to action** or technical acknowledgment

### From External Reviewers
```
BEFORE implementing:
  1. Check: Technically correct for THIS codebase?
  2. Check: Breaks existing functionality?
  3. Check: Reason for current implementation?
  4. Check: Works on all platforms/versions?
  5. Check: Does reviewer understand full context?

IF suggestion seems wrong:
  Push back with technical reasoning

IF can't easily verify:
  Say so: "I can't verify this without [X]. Should I [investigate/ask/proceed]?"

IF conflicts with your human partner's prior decisions:
  Stop and discuss with your human partner first
```
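
Version control often answers the "reason for current implementation" check directly. A sketch, using a hypothetical identifier and file path:

```
# Find the commit that introduced the code in question
git log -S "legacyAuthFallback" --oneline

# See who last changed each line, and in which commit
git blame src/auth.ts
```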

**your human partner's rule:** "External feedback - be skeptical, but check carefully"

## YAGNI Check for "Professional" Features

```
IF reviewer suggests "implementing properly":
  grep codebase for actual usage

  IF unused: "This endpoint isn't called. Remove it (YAGNI)?"
  IF used: Then implement properly
```
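
A concrete way to run the usage check, assuming a hypothetical endpoint path and a typical src/tests layout:

```
# Any callers in application code or tests?
grep -rn "/api/metrics" src/ tests/

# Or search by handler name if the path is built dynamically
grep -rn "exportMetrics" src/
```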

**your human partner's rule:** "You and reviewer both report to me. If we don't need this feature, don't add it."

## Implementation Order

```
FOR multi-item feedback:
  1. Clarify anything unclear FIRST
  2. Then implement in this order:
     - Blocking issues (breaks, security)
     - Simple fixes (typos, imports)
     - Complex fixes (refactoring, logic)
  3. Test each fix individually
  4. Verify no regressions
```
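
One way to keep fixes isolated, sketched for a git project with an npm test script (substitute your own test runner):

```
npm test                          # confirm a green baseline first
# ...fix item 1 (simple: missing import)...
npm test && git commit -am "Add missing import flagged in review"
# ...fix item 2 (complex: extract validation)...
npm test && git commit -am "Extract shared validation per review"
```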

## When To Push Back

Push back when:
- Suggestion breaks existing functionality
- Reviewer lacks full context
- Violates YAGNI (unused feature)
- Technically incorrect for this stack
- Legacy/compatibility reasons exist
- Conflicts with your human partner's architectural decisions

**How to push back:**
- Use technical reasoning, not defensiveness
- Ask specific questions
- Reference working tests/code
- Involve your human partner if architectural

**Signal if uncomfortable pushing back out loud:** "Strange things are afoot at the Circle K"

## Acknowledging Correct Feedback

When feedback IS correct:
```
āœ… "Fixed. [Brief description of what changed]"
āœ… "Good catch - [specific issue]. Fixed in [location]."
āœ… [Just fix it and show in the code]

āŒ "You're absolutely right!"
āŒ "Great point!"
āŒ "Thanks for catching that!"
āŒ "Thanks for [anything]"
āŒ ANY gratitude expression
```

**Why no thanks:** Actions speak. Just fix it. The code itself shows you heard the feedback.

**If you catch yourself about to write "Thanks":** DELETE IT. State the fix instead.

## Gracefully Correcting Your Pushback

If you pushed back and were wrong:
```
āœ… "You were right - I checked [X] and it does [Y]. Implementing now."
āœ… "Verified this and you're correct. My initial understanding was wrong because [reason]. Fixing."

āŒ Long apology
āŒ Defending why you pushed back
āŒ Over-explaining
```

State the correction factually and move on.

## Common Mistakes

| Mistake | Fix |
|---------|-----|
| Performative agreement | State requirement or just act |
| Blind implementation | Verify against codebase first |
| Batch without testing | One at a time, test each |
| Assuming reviewer is right | Check if breaks things |
| Avoiding pushback | Technical correctness > comfort |
| Partial implementation | Clarify all items first |
| Can't verify, proceed anyway | State limitation, ask for direction |

## Real Examples

**Performative Agreement (Bad):**
```
Reviewer: "Remove legacy code"
āŒ "You're absolutely right! Let me remove that..."
```

**Technical Verification (Good):**
```
Reviewer: "Remove legacy code"
āœ… "Checking... build target is 10.15+, this API needs 13+. Need legacy for backward compat. Current impl has wrong bundle ID - fix it or drop pre-13 support?"
```

**YAGNI (Good):**
```
Reviewer: "Implement proper metrics tracking with database, date filters, CSV export"
āœ… "Grepped codebase - nothing calls this endpoint. Remove it (YAGNI)? Or is there usage I'm missing?"
```

**Unclear Item (Good):**
```
your human partner: "Fix items 1-6"
You understand 1,2,3,6. Unclear on 4,5.
āœ… "Understand 1,2,3,6. Need clarification on 4 and 5 before implementing."
```

## The Bottom Line

**External feedback = suggestions to evaluate, not orders to follow.**

Verify. Question. Then implement.

No performative agreement. Technical rigor always.


---

## Usage Notes

- Apply these principles consistently throughout development
- Adapt the methodology to fit your specific project context
- Combine with project-specific rules for best results
- Use this as a reference for the receiving code review approach

---
*Converted from Claude Code Skill: receiving-code-review*
*Source: receiving code review skill*


📦 Package Info

Format: cursor
Type: rule
Category: general
