Probability is the likelihood that a risk will materialise within a defined timeframe. Impact is the severity of the consequences if it does. A risk score is what you get when you combine them — typically by multiplying the two values on a scale of 1 to 5. The result tells you where to focus your attention and how to allocate your resources.

That definition is straightforward. The problem is what happens in practice: two people assess the same risk and arrive at completely different scores. One calls it a 3×4. The other calls it a 2×2. Neither is wrong — but neither is useful. When risk scoring becomes a matter of opinion, the matrix stops being a management tool and becomes a compliance exercise.
This article shows you how to make probability and impact scoring consistent, defensible, and actually useful.
Risk probability assessment is inherently difficult, and assessments made under uncertainty are subject to many sources of bias. This is not a team failure — it is a design failure. When organisations define their likelihood and impact levels with labels but without criteria, every assessor fills the gap with their own judgment.
The fix is not more training. It is better definitions. Effective risk scoring requires consistent methodologies across the organisation — everyone must speak the same language when discussing risks. That language is built into how you define each level of your scale, not into how experienced your team is.
According to Pirani's Risk Management Study 2026: Africa Chapter, 62% of organisations cite risk management culture as their primary challenge — and inconsistent scoring is one of the clearest symptoms of a culture where risk management is treated as subjective rather than systematic.
Probability (also called likelihood) answers one question: how often could this risk materialise?
Several approaches exist for measuring probability, each with specific advantages depending on context. The most common is a 1–5 numerical scale. The mistake is stopping there.
Here is what a well-defined probability scale looks like:
| Score | Label | Definition |
|---|---|---|
| 1 | Rare | Less than once every 5 years. No historical precedent in your organisation. |
| 2 | Unlikely | Once every 2–5 years. Has occurred in the sector but not internally. |
| 3 | Possible | Once per year. Has occurred internally in recent years. |
| 4 | Likely | Several times per year. Recurring pattern in audit findings or incident logs. |
| 5 | Almost certain | Monthly or more frequent. Ongoing exposure with no effective control. |
The key detail: each level has a time boundary and a reference point (historical data, audit findings, sector precedent). A guideline for probability might include frequency of audit findings — a finding from the past year may indicate a risk is highly likely, while one from five years ago with no repeat findings may indicate an unlikely or remote risk.
Without time boundaries, "possible" means something different to every assessor. With them, it becomes a factual determination.
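Because each level is bounded by frequency, the scale can be applied mechanically. As a minimal sketch (the function name and thresholds are illustrative, taken from the table above, not from any Pirani API):

```python
def probability_score(events_per_year: float) -> int:
    """Map an observed annual frequency to the 1-5 probability scale."""
    if events_per_year >= 12:    # monthly or more frequent
        return 5
    if events_per_year > 1:      # several times per year
        return 4
    if events_per_year >= 1:     # about once per year
        return 3
    if events_per_year >= 0.2:   # once every 2-5 years
        return 2
    return 1                     # less than once every 5 years

# Three audit findings last year -> "Likely"
print(probability_score(3))   # 4
```

The point is not the code itself but what it demonstrates: with time boundaries in place, two assessors given the same incident history must arrive at the same score.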
Impact answers the question: how severe would the consequences be if this risk materialised?
The most common failure here is defining impact in generic terms — "minor," "moderate," "severe" — without connecting those labels to anything measurable in your organisation's specific context.
Here is what a well-defined impact scale looks like for a financial institution:
| Score | Label | Financial | Regulatory | Operational |
|---|---|---|---|---|
| 1 | Negligible | Loss < 0.1% of revenue | No regulatory consequence | Disruption < 4 hours |
| 2 | Minor | Loss 0.1–0.5% of revenue | Internal finding only | Disruption 4–24 hours |
| 3 | Moderate | Loss 0.5–2% of revenue | Regulatory notification required | Disruption 1–3 days |
| 4 | Significant | Loss 2–5% of revenue | Formal regulatory action | Disruption 3–7 days |
| 5 | Critical | Loss > 5% of revenue | Licence risk or major fine | Disruption > 7 days |
Three impact dimensions — financial, regulatory, and operational — cover the full risk exposure of most financial institutions in West Africa and South Africa. A risk scores at the highest level it reaches across any of the three dimensions.
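The "highest level across any dimension" rule is a simple maximum. A sketch, using the example institution's three dimensions (the scores passed in are illustrative):

```python
def impact_score(financial: int, regulatory: int, operational: int) -> int:
    """Overall impact is the highest level reached on any dimension."""
    return max(financial, regulatory, operational)

# Loss of 1% of revenue (3), internal finding only (2), 6-hour disruption (2)
print(impact_score(3, 2, 2))   # 3 -> Moderate, driven by the financial dimension
```

Taking the maximum rather than an average prevents a severe regulatory consequence from being diluted by mild financial and operational effects.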
Once probability and impact are defined and scored, the calculation is simple:
Risk Score = Probability × Impact
The most common method is to multiply the two factors, though some organisations use addition or weighted formulas. Multiplication is preferred because it spreads scores across a wider range — 1 to 25, versus 2 to 10 for addition — which separates the priority bands more cleanly and pushes genuinely severe combinations to the top. Be aware of one limitation: a 5×1 (very likely, negligible impact) and a 1×5 (rare, critical impact) both score 5, so always read the component scores as well as the product. Both risks deserve attention, but for different reasons and with different treatments.
The score ranges map to priority levels:
| Score range | Priority | Action |
|---|---|---|
| 1–4 | Low | Monitor. Review quarterly. |
| 5–9 | Medium | Active controls required. Owner assigned. |
| 10–16 | High | Immediate management attention. Escalate to CRO. |
| 17–25 | Critical | Board-level visibility. Treatment plan within 30 days. |
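Putting the formula and the band mapping together, a minimal sketch (function names are illustrative; the thresholds are the ones in the table above):

```python
def risk_score(probability: int, impact: int) -> int:
    """Risk score = probability x impact, each on a 1-5 scale."""
    return probability * impact

def priority(score: int) -> str:
    """Map a 1-25 risk score to its priority band."""
    if score >= 17:
        return "Critical"
    if score >= 10:
        return "High"
    if score >= 5:
        return "Medium"
    return "Low"

print(priority(risk_score(3, 4)))   # High: score 12, escalate to CRO
print(priority(risk_score(2, 2)))   # Low: score 4, monitor quarterly
```

Encoding the bands this way also makes the 3×4 versus 2×2 disagreement from the opening concrete: the two assessments land two priority levels apart, which is exactly why the underlying definitions need to be factual rather than intuitive.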
Scoring by committee without defined criteria. When a group debates a score without agreed definitions, the loudest voice wins. Fix: define your scales before any scoring session, and make the criteria visible during the session.
Rating everything as high to be "safe." Labelling too many risks as high dilutes focus, creates alarm, and undermines prioritisation. A well-calibrated matrix has most risks in the medium range. If your matrix is mostly red, your definitions need recalibrating — not your risks.
Ignoring residual risk. Inherent risk is your exposure before controls. Residual risk is your exposure after controls. Inherent risk scores represent the level of risk an institution would face if there were no controls to mitigate it. Both need to be scored — the gap between them is what tells you whether your controls are actually working.
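Scoring both states makes the control gap visible. A sketch with illustrative values (the variable names are not Pirani's; this only shows the arithmetic):

```python
def risk_score(probability: int, impact: int) -> int:
    return probability * impact

inherent = risk_score(4, 4)   # before controls: 16, High priority
residual = risk_score(2, 3)   # after controls: 6, Medium priority
gap = inherent - residual     # 10 points of reduction delivered by controls

print(inherent, residual, gap)
```

A gap near zero on a high inherent score is a warning sign: either the controls are ineffective or the residual assessment is optimistic.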
Scoring once and never revisiting. Risk scores go stale. A risk rated "unlikely" two years ago may be "likely" today following a regulatory change or an industry incident. Build quarterly reviews into your process.
The definitions above are a starting point. Your organisation needs to adapt them to your specific context — your industry, your regulatory jurisdiction, and your risk appetite.
Download Pirani's free Risk Matrix template in Excel — it includes pre-built probability and impact scales you can customise for your institution. Or build your matrix directly in Pirani for free and score risks with owners, controls, and board-ready reporting from day one.
Want to see probability and impact scoring in action with real examples from West African financial institutions? Join the next session of the Pirani Risk Management School — this month's topic is risk matrices in practice. Free, every third Wednesday.
Or if you want to see how Pirani's operational risk management module handles scoring, controls, and residual risk in a live environment: let's talk.