ICE Scoring
ICE Scoring is a lightweight prioritisation framework popularised by growth practitioner Sean Ellis, originator of the 'growth hacking' concept, as a fast way to score and rank experiments and initiatives. ICE stands for Impact (how much will this move the target metric if it works?), Confidence (how sure are you that it will actually work?), and Ease (how easy is this to implement, i.e. the inverse of effort?). Each factor is scored on a 1–10 scale, and the ICE score is the simple average: (Impact + Confidence + Ease) ÷ 3. The formula deliberately weights all three factors equally and uses an arithmetic average rather than multiplication, which makes it more resistant to extreme outliers than RICE. ICE was designed for high-velocity growth teams running many experiments in parallel, contexts where the cost of over-analysing a decision exceeds the cost of occasionally betting on the wrong experiment. Its primary virtue is speed: an experienced team can score 20–30 items in 30 minutes, dramatically faster than RICE or weighted scoring models. In a workshop, ICE is used for rapid triage of experiment backlogs, sprint planning, and filtering a long list of options down to a manageable shortlist before applying more rigorous analysis.
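The formula above can be sketched in a few lines of Python; the function name and example scores are illustrative, not part of the method itself.

```python
def ice_score(impact: float, confidence: float, ease: float) -> float:
    """ICE score: the simple arithmetic average of three 1-10 factor scores."""
    for name, value in (("impact", impact), ("confidence", confidence), ("ease", ease)):
        if not 1 <= value <= 10:
            raise ValueError(f"{name} must be between 1 and 10, got {value}")
    return (impact + confidence + ease) / 3

# Example: a high-impact idea that is hard to build still averages out mid-pack.
print(ice_score(9, 6, 3))  # 6.0
```

Because the average caps the damage any single extreme score can do, one inflated Impact estimate cannot dominate the ranking the way it can in a multiplicative formula.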
How to run it
1. Compile the candidate list: assemble 10–50 experiments, features, or initiatives. ICE is suited to high-volume lists; for fewer than 10 items, a simple discussion may suffice.
2. Agree on the metric context (5 min): what is the primary metric these items are intended to move? ICE scores are only comparable within the context of a shared goal. If items serve different objectives, score them in separate sessions.
3. Score Impact (5–10 min): for each item, ask 'If this works exactly as expected, how much will it move our target metric?' Score 1–10 (1 = negligible, 10 = massive). Go item by item quickly; first instincts are often right here.
4. Score Confidence (5–10 min): ask 'How confident are you that this will work as predicted?' Score 1–10 (1 = pure speculation, 10 = proven by data or prior results). Flag items where confidence is below 4; these may be worth running as cheap experiments before committing to.
5. Score Ease (5–10 min): ask 'How easy is this to implement given our current resources and constraints?' Score 1–10 (1 = very hard, many months of work; 10 = can be done today with no dependencies). This is the inverse of effort.
6. Calculate ICE scores and rank: compute (Impact + Confidence + Ease) Ă· 3 for each item. Sort descending. Items with scores above 7 are strong candidates for the next cycle.
7. Review the ranked list and calibrate (10 min): ICE often surfaces quick wins (high Ease) over strategic bets (high Impact, low Ease). Discuss whether the ranked list reflects the team's strategic intent or just optimises for speed. Adjust if needed.
8. Assign ownership: for the top 5–10 items selected for the next sprint or quarter, assign an owner responsible for scoping and executing the experiment.
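Steps 3–7 amount to a small scoring pipeline. A minimal sketch, with invented backlog items and scores standing in for what a real session would produce:

```python
# (item, impact, confidence, ease) -- hypothetical scores agreed in the workshop
backlog = [
    ("Simplify signup form",    7, 8, 9),
    ("Rebuild onboarding flow", 9, 3, 2),
    ("Add referral programme",  8, 6, 5),
]

# Compute ICE = (I + C + E) / 3 for each item and sort descending (step 6).
scored = sorted(
    ((item, (i + c + e) / 3, c) for item, i, c, e in backlog),
    key=lambda row: row[1],
    reverse=True,
)

for item, ice, confidence in scored:
    flags = []
    if ice > 7:
        flags.append("strong candidate")       # step 6 threshold
    if confidence < 4:
        flags.append("validate cheaply first")  # step 4 flag
    suffix = f" ({', '.join(flags)})" if flags else ""
    print(f"{item}: ICE {ice:.1f}{suffix}")
```

Note how the high-impact, low-confidence 'Rebuild onboarding flow' sinks to the bottom and gets flagged for validation, exactly the calibration discussion step 7 is meant to prompt.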
Tips
ICE scoring is fast because it is approximate. Don't let the group spend five minutes debating whether a score is a 6 or a 7; that level of precision is false anyway.
Confidence is the most important filter. A 10/10 Impact score on a 1/10 Confidence item is a research project, not a priority. Sequence low-confidence high-impact items as validation experiments, not full launches.
Ease scoring often gets gamed by technical team members who give high scores to items that suit their current sprint. Calibrate against actual delivery timelines from the last quarter.
Run ICE in silence first — everyone scores independently before any group discussion — to prevent anchoring and social conformity bias.
ICE is a shortlisting tool, not a final decision framework. Always sanity-check the top-ranked items against strategic fit before committing to them.
Variants
For growth experiment programmes, add a fourth factor — 'Time to Results' — to create an ICET model that also accounts for how quickly you'll know if an experiment worked. For strategy-heavy teams, weight Impact more heavily (e.g., 2×) rather than using a simple average. Use ICE for weekly experiment triage and RICE for quarterly roadmap prioritisation.
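The weighted-Impact variant is a one-line change to the formula. A sketch, with the 2× weight and example scores purely illustrative:

```python
def weighted_ice(impact: float, confidence: float, ease: float,
                 impact_weight: float = 2.0) -> float:
    """Weighted ICE: Impact counts `impact_weight` times; Confidence and Ease keep weight 1."""
    return (impact_weight * impact + confidence + ease) / (impact_weight + 2)

# A strategic bet (high Impact, low Ease) scores higher than under the plain average.
print(weighted_ice(9, 5, 3))  # 6.5, versus (9 + 5 + 3) / 3 = 5.67 unweighted
```

Dividing by the sum of the weights keeps the result on the same 1–10 scale, so weighted and unweighted scores from different sessions remain roughly comparable.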
Frequently asked questions
When should you use ICE Scoring?
Use ICE Scoring for: weekly growth experiment backlog triage; rapid shortlisting in product sprint planning; marketing channel experiment prioritisation; first-pass filtering of an innovation pipeline; CRO (Conversion Rate Optimisation) test queue management.
How long does ICE Scoring take?
ICE Scoring typically takes 30–60 minutes.
How many participants does ICE Scoring suit?
ICE Scoring works best with groups of 2–12 participants.
What materials do I need for ICE Scoring?
To run ICE Scoring, you will need: an ICE scoring spreadsheet or simple table, a backlog of experiments or initiatives, sticky notes, markers, and a shared scoring document (digital or physical).
How difficult is ICE Scoring?
ICE Scoring is rated beginner: easy to facilitate even without prior experience.
Plan your next workshop with AI
Workshop Weaver helps you combine methods like ICE Scoring into a complete, timed agenda in minutes.
Try it for free.
Method descriptions on Workshop Weaver are original content written by our team, based on established facilitation practices. This method was inspired by the work of Sean Ellis.