What it is
RICE is a prioritization score for product and project decisions: (Reach × Impact × Confidence) / Effort. Multiplying the first three and dividing by Effort yields a single comparable number per idea.
- Reach: how many people the work affects in a defined period (e.g., users per quarter).
- Impact: how much it moves the metric per person, usually on a fixed scale like 3 / 2 / 1 / 0.5 / 0.25 for massive / high / medium / low / minimal.
- Confidence: how sure you are, expressed as a percentage (100% / 80% / 50%).
- Effort: total person-months (or person-weeks) across everyone involved.
RICE was created at Intercom by Sean McBride to make backlog triage less of an opinion contest. The point isn't precision — it's forcing the same fields onto every idea so you can compare them at all.
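The formula can be sketched as a small helper function (the name and signature are illustrative, not from any library):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach * Impact * Confidence) / Effort.

    reach: people affected per period (e.g., users per quarter)
    impact: coarse scale, 3 / 2 / 1 / 0.5 / 0.25
    confidence: 1.0, 0.8, or 0.5
    effort: total person-months (or person-weeks) across everyone involved
    """
    if effort <= 0:
        raise ValueError("effort must be positive")
    return reach * impact * confidence / effort
```

For example, a feature reaching 8,000 users per quarter with medium impact (2), strong-analog confidence (0.8), and 4 person-months of effort scores 3,200.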
When to use it
RICE shines when you have more candidate work than capacity and the team keeps relitigating priority. It works best for medium-sized initiatives — bigger than a bug, smaller than a strategic pivot. Reach for it when:
- Ranking 20+ feature requests for the next quarter's roadmap
- Comparing growth experiments competing for the same engineering slot
- Settling a recurring debate between two product squads
- Cutting scope on a planned release that ran over
How to run it
- Define the metric you're prioritizing against — "weekly active users," "revenue," "support tickets reduced." RICE only works against one objective at a time.
- List every candidate item at roughly comparable granularity. Don't mix three-day fixes with six-month epics.
- Estimate Reach in real units (e.g., "8,000 users per quarter"), not vibes.
- Assign Impact from the fixed scale. The scale is deliberately coarse to discourage false precision.
- Set Confidence using only 100% / 80% / 50%. If you'd score lower, the idea isn't ready to prioritize.
- Estimate Effort as total person-time, not calendar time.
- Compute the score, sort the list, and look at the top quartile. Discuss whether the ranking matches intuition — if it doesn't, find the input you mistrust.
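Putting the steps together, a minimal run-through might look like this (the candidate items and every input number are hypothetical guesses, made explicit only for illustration):

```python
# Hypothetical candidates: (name, reach/quarter, impact, confidence, effort in person-months)
candidates = [
    ("In-app onboarding checklist", 8000, 1, 0.8, 2),
    ("Billing page redesign",       2000, 2, 0.5, 4),
    ("Bulk export API",              500, 3, 1.0, 3),
]

# Score each item and sort descending for the roadmap discussion.
scored = sorted(
    ((r * i * c / e, name) for name, r, i, c, e in candidates),
    reverse=True,
)
for score, name in scored:
    print(f"{score:8.0f}  {name}")
```

Note that the bottom two items land on the same score despite very different inputs; that is exactly the kind of result worth discussing rather than blindly executing.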
Common pitfalls
The biggest trap is false precision. The score looks objective because it's a number, but every input was a guess. Treat RICE as a structured argument, not an oracle — if two items score 47 and 52, they're tied.
The second is Confidence-score inflation. Nobody wants to admit their idea is uncertain, so confidences cluster around 80–100%. Fix it by writing the evidence next to each confidence number: 100% means you have data, 80% means a strong analog, 50% means a hypothesis. Anything weaker isn't a RICE candidate yet — it's a discovery task.
The third is comparing items at wildly different scopes. A two-week experiment and a six-month platform rebuild shouldn't be on the same list.
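One way to operationalize "scores within noise are tied" is to band them: treat any score within some tolerance of a tier's top score as part of that tier. The 20% threshold below is arbitrary, a sketch rather than a standard:

```python
def tie_bands(scores: list[float], tolerance: float = 0.2) -> list[list[float]]:
    """Group RICE scores (any order) into descending tiers.

    A score joins the current tier if it is within `tolerance`
    of that tier's top score; otherwise it starts a new tier.
    """
    bands: list[list[float]] = []
    for s in sorted(scores, reverse=True):
        if bands and s >= bands[-1][0] * (1 - tolerance):
            bands[-1].append(s)
        else:
            bands.append([s])
    return bands
```

With this rule, 47 and 52 land in the same tier, while 20 starts a new one, so the 47-vs-52 debate is correctly flagged as a coin flip.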
Variations
ICE (Impact × Confidence × Ease) drops Reach. It's the right call when Reach is either hard to estimate or roughly equal across candidates — early-stage growth experiments, or internal tooling where everyone on the team is affected. WSJF (Weighted Shortest Job First), used in SAFe, is structurally similar but separates business value from time-criticality and risk reduction; reach for it in larger enterprise contexts where regulatory or dependency risk dominates. For most product teams, start with RICE; switch to ICE only when Reach has stopped adding signal.
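For comparison, an ICE sketch. The key structural difference: Ease is scored high-is-good (commonly on a 1–10 scale, though scales vary), so it multiplies rather than divides the way RICE's Effort does:

```python
def ice_score(impact: float, confidence: float, ease: float) -> float:
    """ICE = Impact * Confidence * Ease.

    Unlike RICE's Effort (a cost you divide by), Ease is a benefit:
    higher means cheaper, so it multiplies.
    """
    return impact * confidence * ease
```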