RICE Scoring
RICE Scoring is a quantitative prioritisation framework developed by Sean McBride at Intercom and published in 2016. It was created to solve a specific and common problem in product management: decisions about which features, projects, or initiatives to pursue are often made on the basis of the loudest voice, seniority, or gut feel rather than structured reasoning. RICE provides a simple formula that calculates a priority score for each candidate item from four estimated factors:
- Reach: how many people the item will affect in a given time period, typically expressed as number of users per quarter.
- Impact: how much it will move the needle for each affected user, scored on a scale such as 3 = massive, 2 = high, 1 = medium, 0.5 = low, 0.25 = minimal.
- Confidence: how confident you are in your Reach and Impact estimates, expressed as a percentage — 100% for high evidence, 80% for medium, 50% for low.
- Effort: how many person-months of work the item requires across all team members.
The formula is: RICE Score = (Reach × Impact × Confidence) ÷ Effort. The result is a comparable score that surfaces high-reach, high-impact, high-confidence, low-effort items to the top while penalising speculative big bets that lack evidence. In workshops, RICE creates transparent, defensible prioritisation conversations and reduces political lobbying by anchoring debate to shared estimates.
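The formula can be sketched as a small helper; this is a minimal illustration, assuming Confidence is passed as a fraction (0.8 for 80%) rather than a percentage:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE Score = (Reach × Impact × Confidence) / Effort.

    reach: people affected in the agreed time period (e.g. users per quarter)
    impact: scale value (3 massive, 2 high, 1 medium, 0.5 low, 0.25 minimal)
    confidence: fraction between 0 and 1 (1.0 high, 0.8 medium, 0.5 low)
    effort: person-months across all team members
    """
    if effort <= 0:
        raise ValueError("effort must be a positive number of person-months")
    return reach * impact * confidence / effort

# 2,000 users/quarter, high impact, medium confidence, 2 person-months
print(rice_score(2000, 2, 0.8, 2))  # → 1600.0
```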
How to run it
1. Prepare the candidate list: before the session, compile the backlog of features, initiatives, or projects to be scored. Aim for 10–30 items — RICE doesn't scale well to 100+ items without pre-filtering.
2. Brief the team on RICE (10 min): explain each factor and the formula. Be explicit about the time frame for Reach (e.g., 'users in the next quarter') and the Impact scale your team will use. Agree these definitions before scoring begins.
3. Score Reach for each item (10–15 min): work item by item or in parallel. Ask 'How many people will this meaningfully affect in the defined time period?' Use data where available — analytics, market size estimates, customer counts. Flag items where no data exists.
4. Score Impact (10–15 min): for each item, ask 'For each person reached, how much does this improve their experience or move our key metric?' Use your agreed scale. Debate is healthy here — push for honest assessments rather than inflated advocacy.
5. Score Confidence (10 min): ask 'How well evidenced are our Reach and Impact estimates?' High confidence (100%) means validated by data or strong customer research. Low confidence (50%) means it's mostly a hypothesis. Confidence is the most important reality check in the model.
6. Score Effort (10–15 min): ask engineering or delivery leads to estimate total person-months across all roles. This is often the most debated field — use rough buckets (1 week = 0.25, 1 month = 1, 1 quarter = 3) if exact estimates aren't available.
7. Calculate scores and sort: apply the formula (Reach × Impact × Confidence ÷ Effort) to each item, then sort by score descending.
8. Review and sense-check the ranked list (10 min): the formula is a starting point, not a final verdict. Discuss outliers — why is this item ranked lower than expected? Is the Confidence score honest? Are there strategic dependencies that the score doesn't capture?
9. Agree the next tier: decide together how many items from the top of the list will enter the next planning cycle. Mark these as 'committed' and the rest as 'backlog'.
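The scoring, calculation, and sorting steps above can be sketched end to end; the backlog items and estimates below are hypothetical, purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    reach: float       # users per quarter
    impact: float      # 3 / 2 / 1 / 0.5 / 0.25
    confidence: float  # 1.0 high / 0.8 medium / 0.5 low
    effort: float      # person-months

    @property
    def score(self) -> float:
        return self.reach * self.impact * self.confidence / self.effort

# Hypothetical backlog — note how the speculative big bet is dragged
# down by its 50% confidence and high effort.
backlog = [
    Item("Onboarding revamp", reach=5000, impact=1, confidence=0.8, effort=4),   # 1000
    Item("Dark mode", reach=1000, impact=0.5, confidence=1.0, effort=1),         # 500
    Item("AI assistant", reach=6000, impact=3, confidence=0.5, effort=12),       # 750
]

ranked = sorted(backlog, key=lambda item: item.score, reverse=True)
for item in ranked:
    print(f"{item.name}: {item.score:.0f}")
```

Running this ranks "Onboarding revamp" first despite the "AI assistant" bet's far larger Reach and Impact, which is exactly the evidence-over-optimism behaviour the Confidence factor is designed to produce.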
Tips
The Confidence factor is the most powerful and most underused part of RICE. Forcing the team to declare low confidence on speculative items naturally drops them in the ranking without political friction.
Reach must be bounded to a time period. Unbounded reach estimates always inflate — everything eventually affects everyone.
Don't let Effort scoring become a full-blown estimation session. RICE works with rough estimates. Precision in effort scoring that takes 30 minutes per item defeats the purpose.
Watch for 'confidence washing' — teams that assign 80% confidence to everything to keep their pet projects competitive. Challenge the evidence behind high confidence scores.
RICE is excellent for comparing items within the same type (all features, or all growth experiments) but should not be used to compare fundamentally different strategic bets against each other.
Variations
For early-stage product teams with little data, replace Reach with 'Strategic Alignment' (1–5 scale against current strategy) and lower the confidence baseline to 30–50% across the board. For marketing campaign prioritisation, Reach can represent audience size and Impact can represent conversion lift. ICE Scoring is a faster, lighter variant of RICE for teams that want to move quickly without effort estimation.
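A minimal sketch of the ICE variant mentioned above; the convention of multiplying Impact, Confidence, and Ease ratings on 1–10 scales is a common one but an assumption here, as the source doesn't define ICE's exact scales:

```python
def ice_score(impact: float, confidence: float, ease: float) -> float:
    """ICE multiplies three quick ratings (commonly 1-10 each).

    ease replaces RICE's Effort estimate: a higher rating means
    the item is cheaper to build, so no division is needed.
    """
    return impact * confidence * ease

# High-impact, moderately evidenced, fairly easy item
print(ice_score(7, 5, 8))  # → 280
```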
Frequently asked questions
When should I use RICE Scoring?
Use RICE Scoring when you want to: prioritise a product roadmap across competing feature requests; manage a growth experiment backlog; compare engineering investments with different risk profiles; run quarterly planning sessions with cross-functional teams; reduce HiPPO (Highest Paid Person's Opinion) bias in decisions.
How long does RICE Scoring take?
RICE Scoring typically takes 60–120 minutes.
How many participants does RICE Scoring suit?
RICE Scoring works best with groups of 3–15 participants.
What materials do I need for RICE Scoring?
To run RICE Scoring you will need: a RICE scoring spreadsheet or template, a backlog of initiatives/features to prioritise, sticky notes, markers, a scoring reference card, and a whiteboard or shared digital workspace.
How difficult is RICE Scoring to facilitate?
RICE Scoring is rated beginner-level: easy to facilitate even without prior experience.
Plan your next workshop with AI
Workshop Weaver helps you combine methods like RICE Scoring into a complete, timed agenda in minutes.
Try it free
Method descriptions on Workshop Weaver are original content written by our team, based on established facilitation practices. This method was inspired by work from Sean McBride — Intercom.