At some point during a risk analysis, companies want to estimate the expected performance of their barriers. This estimate usually feeds into the final risk assessment at the risk matrix level, to gauge the residual risk and whether it is ALARP. But how can you proactively perform a qualitative barrier performance assessment when there is little to no data (e.g. incidents, audits, inspections)?
For some situations, a quantitative method such as LOPA (Layer of Protection Analysis) can help place such a performance value on barriers. LOPA uses the Probability of Failure on Demand (PFD) as a metric for how likely it is that the barrier will fail to work when called upon. A PFD of 0.01 means the barrier is expected to fail on 1 out of every 100 demands, and to perform its function on the other 99. This works for a technical process, where a valve either opens and stops the incident sequence, or fails to open and does not stop the sequence. However, many risk analyses are not that black and white, and barrier performance has a wider range of possibilities than just ‘on’ or ‘off’.
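To make the PFD arithmetic concrete: in a LOPA calculation, the mitigated event frequency is the initiating event frequency multiplied by the PFD of each independent protection layer. The sketch below uses purely illustrative numbers (they are not from this article):

```python
# Illustrative LOPA arithmetic; all numbers are hypothetical.
initiating_frequency = 0.1   # initiating events per year
pfds = [0.01, 0.1]           # PFD of each independent protection layer

mitigated_frequency = initiating_frequency
for pfd in pfds:
    mitigated_frequency *= pfd  # each layer only fails on this fraction of demands

print(mitigated_frequency)  # ~1e-4 mitigated events per year
```

Note how the calculation treats each layer as binary: it either works on a demand or it does not, which is exactly the assumption that breaks down for mitigating barriers.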
When barrier performance requires a more qualitative value
Therefore, we need a way to place a more qualitative value on certain types of barriers, such as barriers designed to mitigate an event rather than stop it entirely. One metric that indicates expected barrier performance in a qualitative manner is ‘effectiveness’: in other words, how effective will this barrier be at performing its function in the context of a certain scenario? A simple effectiveness scale could look like this: very good, good, poor, very poor. Of course, other taxonomies and more extensive (or simpler) ordinal scales can be used.
Effectiveness seems like an intuitive and simple concept, but we find many bowtie practitioners struggle to come to an informed judgment of how effective their barriers are. Especially if there is little to no data (e.g. incidents, audits, inspections) available.
A good first step to make the process easier is to split effectiveness into the concepts of ‘adequacy’ and ‘reliability’. Each can be defined more easily on its own, and their combination points to an effectiveness rating.
The adequacy component indicates to what extent a properly functioning barrier will interrupt a certain scenario. It is always in relation to the scenario (threat or consequence line in bowtie terms) and takes into account the design envelope of the barrier and the threat size it has to cover.
Let’s take a handheld fire extinguisher as an example. When we take into account a small kitchen fire in the office, the fire extinguisher, when used properly, is likely to put out the fire. In that case, the barrier can be assessed as adequate. However, when we take into account a larger fire in the office area, that same fire extinguisher is probably not as adequate.
In other words, adequacy is a metric that can be 100%, 0%, or anything in between, without fixed intervals. Instead of an all-or-nothing barrier, you can now choose the level of granularity needed to make your effectiveness scale work.
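One way to picture such a graduated scale is to map the fraction of the scenario that a properly functioning barrier would interrupt onto ordinal ratings. The band boundaries below are hypothetical; an organisation would define its own cut-offs:

```python
def adequacy_rating(fraction_interrupted: float) -> str:
    """Map the fraction of a scenario a functioning barrier interrupts
    to an ordinal adequacy rating. Band boundaries are hypothetical."""
    if fraction_interrupted >= 0.9:
        return "Very Good"
    if fraction_interrupted >= 0.6:
        return "Good"
    if fraction_interrupted >= 0.3:
        return "Poor"
    return "Very Poor"

print(adequacy_rating(0.95))  # extinguisher vs. small kitchen fire: "Very Good"
print(adequacy_rating(0.2))   # same extinguisher vs. large office fire: "Very Poor"
```

The same barrier gets different adequacy ratings on different threat lines, which is exactly the scenario-dependence described above.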
The second component, reliability, answers the question: “Will the barrier do what it is supposed to do when it is needed?”. To assess it, you need to look at all factors that affect the barrier’s availability and its survivability in its environment. If data is available, you could look at incidents where the barrier was missing (availability), or where the barrier was present but did not kick in because maintenance was not performed (survivability).
Once both adequacy and reliability have been assessed, you can use a matrix to determine effectiveness, ranging from Very Good (VG) to Very Poor (VP). For example, a barrier that is both highly adequate and very reliable gets a ‘Very Good’ effectiveness rating. See the image on the left.
In conclusion, assessing barrier effectiveness in a quantitative way has its uses, but it is not applicable to every risk analysis. As a result, we need something extra. Using qualitative effectiveness ratings, split into adequacy and reliability, gives you the granularity you need to make an expert judgment on barrier performance. This allows you to perform this part of the bowtie risk analysis with more confidence.
Of course, using ordinal values that are not strictly defined, such as poor, good, and very good, also has its disadvantages. One of them is the subjectivity that comes with assessing the values: when is adequacy ‘good’ and when is it ‘very good’? In another blog on our website about risk matrices, Emily Harbottle discusses the pitfalls of using ordinal scales.