Conversation

@ymanor2404 (Contributor)

Summary

  • Adds /rfe.evaluate command to score RFEs against 5 quality criteria
  • Modifies /rfe.breakdown to automatically evaluate each RFE after creation
  • Evaluation scores are appended as a footer to each RFE file

Evaluation Criteria (1-5 points each, 25 total)

  1. Clarity of Purpose and Stakeholder Alignment
  2. Structural Completeness and Organization
  3. Actionability and Testability
  4. Language Quality and Communicative Tone
  5. Role Consistency and Perspective
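
For illustration only, a minimal Python sketch of this scoring model; the names (CRITERIA, Evaluation) are hypothetical, and the actual rubric lives in the /rfe.evaluate command, not in code:

```python
from dataclasses import dataclass

# Hypothetical names, for illustration only.
CRITERIA = [
    "Clarity",        # Clarity of Purpose and Stakeholder Alignment
    "Structure",      # Structural Completeness and Organization
    "Actionability",  # Actionability and Testability
    "Language",       # Language Quality and Communicative Tone
    "Role",           # Role Consistency and Perspective
]

@dataclass
class Evaluation:
    scores: dict[str, int]  # each criterion scored 1-5

    @property
    def total(self) -> int:
        # Out of 25 (5 criteria x 5 points each).
        return sum(self.scores.values())
```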

Output Format

Each RFE gets a footer appended:

---
## Evaluation
**Score**: X/25 | Clarity: X | Structure: X | Actionability: X | Language: X | Role: X

[One sentence explaining the key factors that influenced the scores.]
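
A minimal sketch of how this footer could be rendered and appended, assuming a hypothetical helper (append_evaluation_footer) and a made-up RFE path; the PR itself produces the footer through the /rfe.evaluate command, so this Python only illustrates the format:

```python
from pathlib import Path

def append_evaluation_footer(rfe_path: str, scores: dict[str, int], note: str) -> None:
    """Append an evaluation footer in the format above to an RFE file.

    Hypothetical helper: `scores` maps the five criteria to 1-5 values,
    `note` is the one-sentence explanation.
    """
    total = sum(scores.values())
    breakdown = " | ".join(f"{name}: {value}" for name, value in scores.items())
    footer = (
        "\n---\n"
        "## Evaluation\n"
        f"**Score**: {total}/25 | {breakdown}\n\n"
        f"{note}\n"
    )
    with Path(rfe_path).open("a", encoding="utf-8") as f:
        f.write(footer)

# Example call with made-up path, scores, and note:
append_evaluation_footer(
    "rfes/rfe-001.md",
    {"Clarity": 4, "Structure": 5, "Actionability": 3, "Language": 4, "Role": 4},
    "Well structured, but acceptance criteria are not yet measurable.",
)
```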

Test Plan

  • Run /rfe.breakdown on a PRD and verify evaluations are appended to each RFE
  • Run /rfe.evaluate standalone on existing RFEs
  • Verify RFEs scoring below 15/25 are highlighted for revision
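
For the last check, a rough sketch of how the below-15/25 flagging could be verified, with a hypothetical helper name and directory layout:

```python
import re
from pathlib import Path

THRESHOLD = 15  # RFEs scoring below 15/25 should be highlighted for revision

def rfes_needing_revision(rfe_dir: str) -> list[str]:
    """Return RFE files whose evaluation footer scores below the threshold."""
    flagged = []
    for path in sorted(Path(rfe_dir).glob("*.md")):
        match = re.search(r"\*\*Score\*\*:\s*(\d+)/25", path.read_text(encoding="utf-8"))
        if match and int(match.group(1)) < THRESHOLD:
            flagged.append(str(path))
    return flagged
```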

🤖 Generated with Claude Code

- Add /rfe.evaluate command to score RFEs against 5 quality criteria
- Modify /rfe.breakdown to automatically evaluate each RFE after creation
- Evaluation scores appended to RFE footer (X/25 with per-criterion breakdown)

Criteria: Clarity, Structure, Actionability, Language Quality, Role Consistency

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>