Prioritisation · Beginner

RICE Scoring

RICE Scoring is a quantitative prioritisation framework developed by Sean McBride at Intercom and published in 2016. It was created to solve a specific and common problem in product management: decisions about which features, projects, or initiatives to pursue are often made on the basis of the loudest voice, seniority, or gut feel rather than structured reasoning. RICE provides a simple formula to calculate a priority score for each candidate item by estimating four factors:

  • Reach: how many people this will affect in a given time period, typically expressed as number of users per quarter.
  • Impact: how much this will move the needle for each affected user, scored on a scale such as 3 = massive, 2 = high, 1 = medium, 0.5 = low, 0.25 = minimal.
  • Confidence: how confident you are in your estimates, expressed as a percentage (100% for high evidence, 80% for medium, 50% for low).
  • Effort: how many person-months of work this requires across all team members.

The formula is: RICE Score = (Reach × Impact × Confidence) ÷ Effort. The result is a comparable score that surfaces high-reach, high-impact, high-confidence, low-effort items to the top, while penalising speculative big bets that lack evidence. In workshops, RICE creates transparent, defensible prioritisation conversations and reduces political lobbying by anchoring debate to shared estimates.
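As a concrete illustration of the formula, here is a single hypothetical item scored in a few lines of Python (all numbers are made-up examples, not benchmarks):

```python
# Hypothetical RICE calculation for one candidate item.
reach = 2000        # users affected in the next quarter
impact = 2          # "high" on the 3 / 2 / 1 / 0.5 / 0.25 scale
confidence = 0.80   # medium evidence (80%)
effort = 4          # person-months across all roles

# RICE Score = (Reach x Impact x Confidence) / Effort
rice_score = (reach * impact * confidence) / effort
print(rice_score)   # → 800.0
```

Note that Confidence is applied as a multiplier, so halving your confidence halves the score: this is how the model discounts speculative items.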

Duration
1h–2h
Group size
3–15 people
Materials
RICE scoring spreadsheet or template, Backlog of initiatives/features to prioritise, Sticky notes…

How to run it

  1. Prepare the candidate list: before the session, compile the backlog of features, initiatives, or projects to be scored. Aim for 10–30 items — RICE doesn't scale well to 100+ items without pre-filtering.

  2. Brief the team on RICE (10 min): explain each factor and the formula. Be explicit about the time frame for Reach (e.g., 'users in the next quarter') and the Impact scale your team will use. Agree these definitions before scoring begins.

  3. Score Reach for each item (10–15 min): work item by item or in parallel. Ask 'How many people will this meaningfully affect in the defined time period?' Use data where available — analytics, market size estimates, customer counts. Flag items where no data exists.

  4. Score Impact (10–15 min): for each item, ask 'For each person reached, how much does this improve their experience or move our key metric?' Use your agreed scale. Debate is healthy here — push for honest assessments rather than inflated advocacy.

  5. Score Confidence (10 min): ask 'How well evidenced are our Reach and Impact estimates?' High confidence (100%) means validated by data or strong customer research. Low confidence (50%) means it's mostly a hypothesis. Confidence is the most important reality check in the model.

  6. Score Effort (10–15 min): ask engineering or delivery leads to estimate total person-months across all roles. This is often the most debated field — use rough buckets (1 week = 0.25, 1 month = 1, 1 quarter = 3) if exact estimates aren't available.

  7. Calculate scores and sort: apply the formula (Reach × Impact × Confidence ÷ Effort) for each item. Sort by score descending.

  8. Review and sense-check the ranked list (10 min): the formula is a starting point, not a final verdict. Discuss outliers — why is this item ranked lower than expected? Is the Confidence score honest? Are there strategic dependencies that the score doesn't capture?

  9. Agree the next tier: decide together how many items from the top of the list will enter the next planning cycle. Mark these as 'committed' and the rest as 'backlog'.

Tips

  • The Confidence factor is the most powerful and most underused part of RICE. Forcing the team to declare low confidence on speculative items naturally drops them in the ranking without political friction.

  • Reach must be bounded to a time period. Unbounded reach estimates always inflate — everything eventually affects everyone.

  • Don't let Effort scoring become a full-blown estimation session. RICE works with rough estimates. Precision in effort scoring that takes 30 minutes per item defeats the purpose.

  • Watch for 'confidence washing' — teams that assign 80% confidence to everything to keep their pet projects competitive. Challenge the evidence behind high confidence scores.

  • RICE is excellent for comparing items within the same type (all features, or all growth experiments) but should not be used to compare fundamentally different strategic bets against each other.

Variations

For early-stage product teams with little data, replace Reach with 'Strategic Alignment' (1–5 scale against current strategy) and lower the confidence baseline to 30–50% across the board. For marketing campaign prioritisation, Reach can represent audience size and Impact can represent conversion lift. ICE Scoring is a faster, lighter variant of RICE for teams that want to move quickly without effort estimation.
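For comparison, the ICE variant mentioned above drops the Reach and Effort estimates; it is commonly computed as Impact × Confidence × Ease, with each factor rated 1–10. A minimal sketch with hypothetical scores:

```python
# ICE scoring: Impact x Confidence x Ease, each commonly rated 1-10.
# The scores below are hypothetical examples for a single item.
impact, confidence, ease = 7, 5, 8
ice_score = impact * confidence * ease
print(ice_score)  # → 280
```

Because ICE has no Effort denominator, it trades accuracy for speed: good for rapid triage of growth experiments, less suitable when delivery cost varies widely between items.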

Where it fits

  • Product roadmap prioritisation across competing feature requests
  • Growth experiment backlog management
  • Comparing engineering investments with different risk profiles
  • Quarterly planning sessions with cross-functional teams
  • Reducing HiPPO (Highest Paid Person's Opinion) bias in decisions

Frequently asked questions

When should I use RICE Scoring?

Use RICE Scoring for: product roadmap prioritisation across competing feature requests; growth experiment backlog management; comparing engineering investments with different risk profiles; quarterly planning sessions with cross-functional teams; and reducing HiPPO (Highest Paid Person's Opinion) bias in decisions.

How long does RICE Scoring take?

RICE Scoring typically takes 60–120 minutes.

How many participants does RICE Scoring work for?

RICE Scoring works best for groups of 3–15 participants.

What materials do I need for RICE Scoring?

To run RICE Scoring you will need: RICE scoring spreadsheet or template, Backlog of initiatives/features to prioritise, Sticky notes, Markers, Scoring reference card, Whiteboard or shared digital workspace.

How difficult is RICE Scoring to facilitate?

RICE Scoring is rated beginner — straightforward to facilitate even without prior experience.


Method descriptions on Workshop Weaver are original content written by our team, based on established facilitation practices. This method was inspired by work from Sean McBride — Intercom.
