Prioritization · Beginner

ICE Scoring

ICE Scoring is a lightweight prioritisation framework popularised by growth practitioner Sean Ellis, creator of the 'growth hacking' concept, as a fast tool for scoring and ranking experiments and initiatives. ICE stands for Impact (how much will this move the target metric if it works?), Confidence (how sure are you that this will actually work?), and Ease (how easy is this to implement — the inverse of effort?). Each factor is scored on a 1–10 scale, and the ICE score is the simple average: (Impact + Confidence + Ease) ÷ 3. The formula deliberately keeps all three factors equally weighted and uses an arithmetic average rather than multiplication, making it more resistant to extreme outliers than RICE. ICE was designed specifically for high-velocity growth teams running many experiments in parallel — contexts where the cost of over-analysing a decision exceeds the cost of occasionally betting on the wrong experiment. Its primary virtue is speed: an experienced team can score 20–30 items in 30 minutes, dramatically faster than RICE or weighted scoring models. In a workshop context, ICE is used for rapid triage of experiment backlogs, sprint planning, and filtering a large list of options to a manageable shortlist before applying more rigorous analysis.
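The formula described above can be sketched in a few lines of Python (a minimal illustration; the function name and example scores are hypothetical, not part of any standard library):

```python
def ice_score(impact: float, confidence: float, ease: float) -> float:
    """ICE = arithmetic mean of the three factors, each scored 1-10."""
    for factor in (impact, confidence, ease):
        if not 1 <= factor <= 10:
            raise ValueError("each ICE factor is scored on a 1-10 scale")
    return (impact + confidence + ease) / 3

# A high-impact but uncertain, moderately easy item:
print(round(ice_score(impact=8, confidence=5, ease=6), 2))  # 6.33
```

Because the factors are averaged rather than multiplied, a single extreme score shifts the result by at most a third of its distance from the mean, which is what keeps ICE more outlier-resistant than a multiplicative model.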

Duration
30m–1h
Group size
2–12 people
Materials
ICE scoring spreadsheet or simple table, Backlog of experiments or initiatives, Sticky notes, Markers, Shared scoring document (digital or physical)

How to run it

  1. Compile the candidate list: assemble 10–50 experiments, features, or initiatives. ICE is suited to high-volume lists — for fewer than 10 items, a simple discussion may suffice.

  2. Agree on the metric context (5 min): what is the primary metric these items are intended to move? ICE scores are only comparable within the context of a shared goal. If items serve different objectives, score them in separate sessions.

  3. Score Impact (5–10 min): for each item, ask 'If this works exactly as expected, how much will it move our target metric?' Score 1–10 (1 = negligible, 10 = massive). Go item by item quickly — first instincts are often right here.

  4. Score Confidence (5–10 min): ask 'How confident are you that this will work as predicted?' Score 1–10 (1 = pure speculation, 10 = proven by data or prior results). Flag items where confidence is below 4 — these may be worth running as cheap experiments before committing.

  5. Score Ease (5–10 min): ask 'How easy is this to implement given our current resources and constraints?' Score 1–10 (1 = very hard, many months of work; 10 = can be done today with no dependencies). This is the inverse of effort.

  6. Calculate ICE scores and rank: compute (Impact + Confidence + Ease) ÷ 3 for each item. Sort descending. Items with scores above 7 are strong candidates for the next cycle.

  7. Review the ranked list and calibrate (10 min): ICE often surfaces quick wins (high Ease) over strategic bets (high Impact, low Ease). Discuss whether the ranked list reflects the team's strategic intent or just optimises for speed. Adjust if needed.

  8. Assign ownership: for the top 5–10 items selected for the next sprint or quarter, assign an owner responsible for scoping and executing the experiment.
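The scoring, ranking, and flagging steps above can be sketched as a small script. The backlog items and their scores are hypothetical, as are the class and variable names; the thresholds (shortlist above 7, validate below confidence 4) come from steps 4 and 6:

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    impact: int      # 1-10: metric movement if it works
    confidence: int  # 1-10: how sure we are it will work
    ease: int        # 1-10: inverse of effort

    @property
    def ice(self) -> float:
        return (self.impact + self.confidence + self.ease) / 3

# Hypothetical experiment backlog
backlog = [
    Item("Onboarding checklist", impact=6, confidence=7, ease=9),
    Item("Pricing page redesign", impact=9, confidence=3, ease=4),
    Item("Referral programme",    impact=7, confidence=8, ease=7),
]

ranked = sorted(backlog, key=lambda i: i.ice, reverse=True)

# Strong candidates for the next cycle (ICE above 7)
shortlist = [i.name for i in ranked if i.ice > 7]

# Low-confidence items to run as cheap validation experiments first
validate_first = [i.name for i in ranked if i.confidence < 4]
```

Here the high-impact pricing redesign ranks below two quick wins and lands in the validation list instead, which is exactly the pattern the calibration step (step 7) asks the team to sanity-check.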

Tips

  • ICE scoring is fast because it is approximate. Don't let the group spend five minutes debating whether a score is a 6 or a 7 — that level of precision is false anyway.

  • Confidence is the most important filter. A 10/10 Impact score on a 1/10 Confidence item is a research project, not a priority. Sequence low-confidence high-impact items as validation experiments, not full launches.

  • Ease scoring often gets gamed by technical team members who score items that favour their current sprint high. Calibrate against actual delivery timelines in the last quarter.

  • Run ICE in silence first — everyone scores independently before any group discussion — to prevent anchoring and social conformity bias.

  • ICE is a shortlisting tool, not a final decision framework. Always sanity-check the top-ranked items against strategic fit before committing to them.

Variations

For growth experiment programmes, add a fourth factor — 'Time to Results' — to create an ICET model that also accounts for how quickly you'll know if an experiment worked. For strategy-heavy teams, weight Impact more heavily (e.g., 2×) rather than using a simple average. Use ICE for weekly experiment triage and RICE for quarterly roadmap prioritisation.
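The Impact-weighted variant mentioned above can be sketched as follows (the function name, default weight, and example scores are illustrative assumptions; the divisor grows with the weight so results stay on the 1–10 scale):

```python
def weighted_ice(impact: float, confidence: float, ease: float,
                 impact_weight: float = 2.0) -> float:
    """Weighted mean: Impact counts impact_weight times,
    Confidence and Ease keep a weight of 1 each."""
    return (impact * impact_weight + confidence + ease) / (impact_weight + 2)

# A strategic bet (high Impact, low Ease) vs a quick win:
bet = weighted_ice(9, 6, 3)        # 6.75 (plain average would be 6.0)
quick_win = weighted_ice(5, 7, 9)  # 6.5  (plain average would be 7.0)
```

With Impact doubled, the strategic bet overtakes the quick win — the re-ordering a strategy-heavy team is after.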

Use cases

  • Weekly growth experiment backlog triage

  • Rapid shortlisting in product sprint planning

  • Marketing channel experiment prioritisation

  • Innovation pipeline first-pass filtering

  • CRO (Conversion Rate Optimisation) test queue management

Frequently asked questions

When should you use ICE Scoring?

Use ICE Scoring when you want: Weekly growth experiment backlog triage; Rapid shortlisting in product sprint planning; Marketing channel experiment prioritisation; Innovation pipeline first-pass filtering; CRO (Conversion Rate Optimisation) test queue management.

How long does ICE Scoring take?

ICE Scoring typically takes 30 to 60 minutes.

How many participants is ICE Scoring suitable for?

ICE Scoring works best for groups of 2–12 participants.

What materials do I need for ICE Scoring?

To run ICE Scoring you will need: an ICE scoring spreadsheet or simple table, a backlog of experiments or initiatives, sticky notes, markers, and a shared scoring document (digital or physical).

How difficult is ICE Scoring to facilitate?

ICE Scoring is rated beginner — easy to facilitate even without prior experience.


Method descriptions on Workshop Weaver are original content written by our team, based on established facilitation practices. This method was inspired by work from Sean Ellis.
