Prioritisation · Beginner

ICE Scoring

ICE Scoring is a lightweight prioritisation framework popularised by growth practitioner Sean Ellis, creator of the 'growth hacking' concept, as a fast tool for scoring and ranking experiments and initiatives. ICE stands for Impact (how much will this move the target metric if it works?), Confidence (how sure are you that this will actually work?), and Ease (how easy is this to implement, i.e. the inverse of effort?). Each factor is scored on a 1–10 scale, and the ICE score is the simple average: (Impact + Confidence + Ease) ÷ 3. The formula deliberately keeps all three factors equally weighted and uses an arithmetic average rather than multiplication, making it more resistant to extreme outliers than RICE.

ICE was designed specifically for high-velocity growth teams running many experiments in parallel: contexts where the cost of over-analysing a decision exceeds the cost of occasionally betting on the wrong experiment. Its primary virtue is speed: an experienced team can score 20–30 items in 30 minutes, dramatically faster than RICE or weighted scoring models. In a workshop context, ICE is used for rapid triage of experiment backlogs, sprint planning, and filtering a large list of options to a manageable shortlist before applying more rigorous analysis.
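
As a quick illustration, here is a minimal Python sketch of the formula above; the function name and the range check are our own additions, not part of any standard ICE tooling.

```python
def ice_score(impact: int, confidence: int, ease: int) -> float:
    """Arithmetic average of the three ICE factors, each scored 1-10."""
    for factor in (impact, confidence, ease):
        if not 1 <= factor <= 10:
            raise ValueError("each ICE factor must be between 1 and 10")
    return (impact + confidence + ease) / 3

# Example: a high-impact but speculative, moderately easy item.
print(ice_score(impact=8, confidence=3, ease=6))  # ~5.7
```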

Duration: 30m–1h
Group size: 2–12 people
Materials: ICE scoring spreadsheet or simple table; backlog of experiments or initiatives; sticky notes; markers; shared scoring document (digital or physical)

How to run it

  1. Compile the candidate list: assemble 10–50 experiments, features, or initiatives. ICE is suited to high-volume lists; for fewer than 10 items, a simple discussion may suffice.

  2. Agree on the metric context (5 min): what is the primary metric these items are intended to move? ICE scores are only comparable within the context of a shared goal. If items serve different objectives, score them in separate sessions.

  3. Score Impact (5–10 min): for each item, ask 'If this works exactly as expected, how much will it move our target metric?' Score 1–10 (1 = negligible, 10 = massive). Go item by item quickly; first instincts are often right here.

  4. Score Confidence (5–10 min): ask 'How confident are you that this will work as predicted?' Score 1–10 (1 = pure speculation, 10 = proven by data or prior results). Flag items where confidence is below 4; these may be worth running as cheap experiments before committing to them.

  5. Score Ease (5–10 min): ask 'How easy is this to implement given our current resources and constraints?' Score 1–10 (1 = very hard, many months of work; 10 = can be done today with no dependencies). This is the inverse of effort.

  6. Calculate ICE scores and rank: compute (Impact + Confidence + Ease) ÷ 3 for each item and sort descending. Items with scores above 7 are strong candidates for the next cycle. A small scoring sketch follows these steps.

  7. Review the ranked list and calibrate (10 min): ICE often surfaces quick wins (high Ease) over strategic bets (high Impact, low Ease). Discuss whether the ranked list reflects the team's strategic intent or just optimises for speed. Adjust if needed.

  8. Assign ownership: for the top 5–10 items selected for the next sprint or quarter, assign an owner responsible for scoping and executing the experiment.
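
To make steps 4 and 6 concrete, the following Python sketch ranks a scored backlog, flags low-confidence items for validation, and marks strong candidates. The item names and data layout are illustrative; only the formula and the two cut-offs (confidence below 4, score above 7) come from the steps above.

```python
# Hypothetical backlog: (name, impact, confidence, ease), each factor 1-10.
items = [
    ("Redesign onboarding email", 8, 3, 6),
    ("Fix broken signup CTA", 5, 9, 9),
    ("Launch referral programme", 9, 4, 3),
]

# Compute the ICE score for each item and sort highest first (step 6).
scored = sorted(
    ((name, i, c, e, (i + c + e) / 3) for name, i, c, e in items),
    key=lambda row: row[-1],
    reverse=True,
)

for name, impact, confidence, ease, score in scored:
    flags = []
    if confidence < 4:
        flags.append("validate first")    # step 4: run as a cheap experiment
    if score > 7:
        flags.append("strong candidate")  # step 6: next-cycle shortlist
    suffix = f"  [{', '.join(flags)}]" if flags else ""
    print(f"{score:.1f}  {name}{suffix}")
```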

Tips

  • ICE scoring is fast because it is approximate. Don't let groups spend five minutes debating whether a score should be a 6 or a 7; that level of precision is false anyway.

  • Confidence is the most important filter. A 10/10 Impact score on a 1/10 Confidence item is a research project, not a priority. Sequence low-confidence high-impact items as validation experiments, not full launches.

  • Ease scoring often gets gamed by technical team members who give high scores to items that happen to fit their current sprint. Calibrate against actual delivery timelines from the last quarter.

  • Run ICE in silence first — everyone scores independently before any group discussion — to prevent anchoring and social conformity bias.

  • ICE is a shortlisting tool, not a final decision framework. Always sanity-check the top-ranked items against strategic fit before committing to them.

Variations

For growth experiment programmes, add a fourth factor — 'Time to Results' — to create an ICET model that also accounts for how quickly you'll know if an experiment worked. For strategy-heavy teams, weight Impact more heavily (e.g., 2×) rather than using a simple average. Use ICE for weekly experiment triage and RICE for quarterly roadmap prioritisation.
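
Here is a minimal sketch of these two variants, assuming a 2× weight on Impact and a 1–10 'Time to Results' scale; both the weight and the scale are illustrative choices, not a standard formula.

```python
def weighted_ice(impact, confidence, ease, impact_weight=2):
    """Weighted average with Impact counted impact_weight times."""
    return (impact * impact_weight + confidence + ease) / (impact_weight + 2)

def icet_score(impact, confidence, ease, time_to_results):
    """ICET variant: plain average over four factors, each scored 1-10.
    time_to_results: 10 = results within days, 1 = months to learn anything."""
    return (impact + confidence + ease + time_to_results) / 4

print(weighted_ice(9, 5, 4))   # 6.75 - plain ICE would give 6.0,
                               # so the strategic bet rises in the ranking
print(icet_score(9, 5, 4, 8))  # 6.5
```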

Where it fits

  • Weekly growth experiment backlog triage
  • Rapid shortlisting in product sprint planning
  • Marketing channel experiment prioritisation
  • Innovation pipeline first-pass filtering
  • CRO (Conversion Rate Optimisation) test queue management

Frequently asked questions

When should I use ICE Scoring?

Use ICE Scoring for weekly growth experiment backlog triage, rapid shortlisting in product sprint planning, marketing channel experiment prioritisation, innovation pipeline first-pass filtering, and CRO (Conversion Rate Optimisation) test queue management.

How long does ICE Scoring take?

ICE Scoring typically takes 30–60 minutes.

How many participants does ICE Scoring work for?

ICE Scoring works best for groups of 2–12 participants.

What materials do I need for ICE Scoring?

To run ICE Scoring you will need: ICE scoring spreadsheet or simple table, Backlog of experiments or initiatives, Sticky notes, Markers, Shared scoring document (digital or physical).

How difficult is ICE Scoring to facilitate?

ICE Scoring is rated beginner — straightforward to facilitate even without prior experience.

Method descriptions on Workshop Weaver are original content written by our team, based on established facilitation practices. This method was inspired by work from Sean Ellis.
