Probability vs Impact: How to Score Risks Without Guessing
Probability is the likelihood that a risk will materialise within a defined timeframe. Impact is the severity of the consequences if it does. A risk score is what you get when you combine them — typically by multiplying the two values on a scale of 1 to 5. The result tells you where to focus your attention and how to allocate your resources.

That definition is straightforward. The problem is what happens in practice: two people assess the same risk and arrive at completely different scores. One calls it a 3×4. The other calls it a 2×2. Neither is wrong — but neither is useful. When risk scoring becomes a matter of opinion, the matrix stops being a management tool and becomes a compliance exercise.
This article shows you how to make probability and impact scoring consistent, defensible, and actually useful.
Why Risk Scores Become Guessing Games
Risk probability assessment is inherently difficult, and any assessment made under uncertainty is exposed to many sources of bias. This is not a team failure — it is a design failure. When organisations define their likelihood and impact levels with labels but without criteria, every assessor fills the gap with their own judgment.
The fix is not more training. It is better definitions. Effective risk scoring requires consistent methodologies across the organisation — everyone must speak the same language when discussing risks. That language is built into how you define each level of your scale, not into how experienced your team is.
According to Pirani's Risk Management Study 2026: Africa Chapter, 62% of organisations cite risk management culture as their primary challenge — and inconsistent scoring is one of the clearest symptoms of a culture where risk management is treated as subjective rather than systematic.
How to Define Probability Scales That Work
Probability (also called likelihood) answers one question: how often could this risk materialise?
Several approaches exist for measuring probability, each with specific advantages depending on context. The most common is a 1–5 numerical scale. The mistake is stopping there.
Here is what a well-defined probability scale looks like:
| Score | Label | Definition |
|---|---|---|
| 1 | Rare | Less than once every 5 years. No historical precedent in your organisation. |
| 2 | Unlikely | Once every 2–5 years. Has occurred in the sector but not internally. |
| 3 | Possible | Once per year. Has occurred internally in recent years. |
| 4 | Likely | Several times per year. Recurring pattern in audit findings or incident logs. |
| 5 | Almost certain | Monthly or more frequent. Ongoing exposure with no effective control. |
The key detail: each level has a time boundary and a reference point (historical data, audit findings, sector precedent). A guideline for probability might include frequency of audit findings — a finding from the past year may indicate a risk is highly likely, while one from five years ago with no repeat findings may indicate an unlikely or remote risk.
Without time boundaries, "possible" means something different to every assessor. With them, it becomes a factual determination.
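The time boundaries in the table above can be expressed as a simple lookup, turning an observed frequency into a score mechanically rather than by debate. This is an illustrative sketch: the function name and the events-per-year thresholds are assumptions chosen to match the example scale, not a standard.

```python
def probability_score(events_per_year: float) -> int:
    """Map an observed annual frequency to a 1-5 probability score,
    following the example scale's time boundaries."""
    if events_per_year >= 12:    # monthly or more frequent -> Almost certain
        return 5
    if events_per_year > 1:      # several times per year -> Likely
        return 4
    if events_per_year >= 1:     # about once per year -> Possible
        return 3
    if events_per_year >= 0.2:   # once every 2-5 years -> Unlikely
        return 2
    return 1                     # less than once every 5 years -> Rare
```

With the boundaries encoded, two assessors looking at the same incident log arrive at the same score by construction.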
How to Define Impact Scales That Work
Impact answers the question: how severe would the consequences be if this risk materialised?
The most common failure here is defining impact in generic terms — "minor," "moderate," "severe" — without connecting those labels to anything measurable in your organisation's specific context.
Here is what a well-defined impact scale looks like for a financial institution:
| Score | Label | Financial | Regulatory | Operational |
|---|---|---|---|---|
| 1 | Negligible | Loss < 0.1% of revenue | No regulatory consequence | Disruption < 4 hours |
| 2 | Minor | Loss 0.1–0.5% of revenue | Internal finding only | Disruption 4–24 hours |
| 3 | Moderate | Loss 0.5–2% of revenue | Regulatory notification required | Disruption 1–3 days |
| 4 | Significant | Loss 2–5% of revenue | Formal regulatory action | Disruption 3–7 days |
| 5 | Critical | Loss > 5% of revenue | Licence risk or major fine | Disruption > 7 days |
Three impact dimensions — financial, regulatory, and operational — cover the full risk exposure of most financial institutions in West Africa and South Africa. A risk scores at the highest level it reaches across any of the three dimensions.
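The "highest level across any dimension" rule can be sketched in one line. The function and its parameter names are illustrative assumptions; the dimension scores are assumed to have already been assessed against the table above.

```python
def impact_score(financial: int, regulatory: int, operational: int) -> int:
    """A risk scores at the highest level it reaches on any dimension."""
    return max(financial, regulatory, operational)

# A risk with negligible financial loss (1) but formal regulatory
# action (4) and a one-day disruption (2) still scores 4 overall.
```

Taking the maximum rather than an average prevents a severe regulatory consequence from being diluted by low scores on the other dimensions.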
How to Calculate Your Risk Score
Once probability and impact are defined and scored, the calculation is simple:
Risk Score = Probability × Impact
The most common method is to multiply the two factors, though some organisations use addition or weighted formulas. Multiplication is preferred because it spreads scores across a wider range (1 to 25, versus 2 to 10 with addition), which separates priorities more sharply. One caveat: multiplication treats symmetric combinations identically. A 5×1 (very likely, negligible impact) and a 1×5 (rare, critical impact) both score 5, even though they call for different responses, so keep the underlying probability and impact values visible alongside the combined score.
The score ranges map to priority levels:
| Score range | Priority | Action |
|---|---|---|
| 1–4 | Low | Monitor. Review quarterly. |
| 5–9 | Medium | Active controls required. Owner assigned. |
| 10–16 | High | Immediate management attention. Escalate to CRO. |
| 17–25 | Critical | Board-level visibility. Treatment plan within 30 days. |
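The calculation and the priority mapping above can be combined into a short sketch. The function names are assumptions for illustration; the thresholds come directly from the table.

```python
def risk_score(probability: int, impact: int) -> int:
    """Combine 1-5 probability and impact scores by multiplication."""
    return probability * impact

def priority(score: int) -> str:
    """Map a 1-25 risk score to the priority bands from the table."""
    if score >= 17:
        return "Critical"   # board-level visibility
    if score >= 10:
        return "High"       # immediate management attention
    if score >= 5:
        return "Medium"     # active controls, owner assigned
    return "Low"            # monitor, review quarterly
```

For example, a probability of 3 and an impact of 4 yields a score of 12, which lands in the High band.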
The Most Common Scoring Mistakes — and How to Avoid Them
Scoring by committee without defined criteria. When a group debates a score without agreed definitions, the loudest voice wins. Fix: define your scales before any scoring session, and make the criteria visible during the session.
Rating everything as high to be "safe." Labelling too many risks as high dilutes focus, creates alarm, and undermines prioritisation. A well-calibrated matrix has most risks in the medium range. If your matrix is mostly red, your definitions need recalibrating — not your risks.
Ignoring residual risk. Inherent risk is your exposure before controls — the level of risk the institution would face with nothing in place to mitigate it. Residual risk is your exposure after controls are applied. Score both: the gap between them is what tells you whether your controls are actually working.
Scoring once and never revisiting. Risk scores go stale. A risk rated "unlikely" two years ago may be "likely" today following a regulatory change or an industry incident. Build quarterly reviews into your process.
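The inherent-versus-residual gap described above can be turned into a quick check. This is a minimal illustration; the function name is an assumption, and how much reduction counts as "effective" is a matter of your risk appetite, not a standard.

```python
def control_effectiveness(inherent: int, residual: int) -> float:
    """Fraction of inherent risk removed by controls.

    0.0 means the controls achieve nothing; 1.0 means the risk
    is fully mitigated. Both inputs are 1-25 risk scores.
    """
    return (inherent - residual) / inherent

# An inherent score of 20 reduced only to 16 is a 20% reduction --
# a signal that the controls need to be reviewed.
```

Tracking this ratio per risk over time makes weak controls visible long before the residual score crosses a priority threshold.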
Put It Into Practice
The definitions above are a starting point. Your organisation needs to adapt them to your specific context — your industry, your regulatory jurisdiction, and your risk appetite.
Download Pirani's free Risk Matrix template in Excel — it includes pre-built probability and impact scales you can customise for your institution. Or build your matrix directly in Pirani for free and score risks with owners, controls, and board-ready reporting from day one.
Want to see probability and impact scoring in action with real examples from West African financial institutions? Join the next session of the Pirani Risk Management School — this month's topic is risk matrices in practice. Free, every third Wednesday.
Or if you want to see how Pirani's operational risk management module handles scoring, controls, and residual risk in a live environment: let's talk.
FAQ
**What is the difference between probability and impact?**

Probability is the likelihood that a specific risk will occur within a defined timeframe — scored from 1 (rare) to 5 (almost certain). Impact is the severity of the consequences if the risk materialises — scored from 1 (negligible) to 5 (critical). Together they produce a risk score that determines how urgently a risk needs to be addressed.

**How do you calculate a risk score?**

The most common method is to multiply the probability score by the impact score. A risk with a probability of 3 and an impact of 4 has a risk score of 12, placing it in the high priority range. The resulting score is then mapped to a priority level — low, medium, high, or critical — each with a defined response requirement.

**What is the difference between inherent and residual risk?**

Inherent risk is your exposure before any controls are applied — the raw risk if nothing was done to mitigate it. Residual risk is your exposure after controls are in place. The gap between the two scores tells you how effective your controls are. If your inherent score is 20 and your residual score is 16, your controls are not reducing risk meaningfully and need to be reviewed.

**How do you reduce subjectivity in risk scoring?**

Subjectivity in risk scoring comes from undefined scales. The solution is to attach concrete, measurable criteria to every level of your probability and impact scales — time boundaries for probability (e.g. "once per year"), and quantified thresholds for impact (e.g. "loss between 0.5% and 2% of revenue"). When assessors have specific criteria to reference, scores become factual determinations rather than judgment calls.

**Which impact dimensions should a financial institution use?**

Most financial institutions use three impact dimensions: financial (revenue or capital loss), regulatory (consequence of non-compliance), and operational (disruption to processes or services). A risk scores at the highest level it reaches across any of the three dimensions. This prevents underscoring risks that have low financial impact but significant regulatory consequences — a common issue for banks under CBN, BoG, or Prudential Authority supervision.

**How often should risk scores be reviewed?**

At minimum, quarterly. Risk scores go stale as the environment changes — a risk rated "unlikely" can become "likely" following a regulatory directive, an industry incident, or an internal control failure. The risk scoring methodology itself should also be regularly reviewed to ensure it remains relevant and effective.
Related resources:
- Free Risk Matrix Template in Excel — Pirani Academy
- Create a free Pirani account — Pirani
- Operational Risk Management Software — Pirani
- ISO 31000 Solution Page — Pirani
- Risk Management School — Pirani
- How to Build a Risk Matrix From Scratch — Pirani Blog