As humans, we’ve come a long way. We went from hunter-gatherers to settlers, expanding communities into villages and later cities. We innovated and evolved. Over the past few centuries, development has come in significant, technically driven leaps, better described as revolutions than as gradual growth. In the late 18th century the industrial revolution (Industry 1.0) took place, driven by steam power and mechanization. In the late 19th century, industry was transformed by the invention of electricity, which enabled mass production through more efficient assembly-line operations (Industry 2.0). The end of the 20th century was characterized by automation through the emergence of computers (Industry 3.0), a development that is still partially ongoing.
Our working environment, and the role humans play in it, has changed drastically over time. We moved from very unsafe and unhealthy working conditions in which many accidents took place (I1) towards a more controlled and sophisticated environment. The downside was that work became quite dull (I2), which allowed more mistakes to be made. To reduce the influence of human error, many operations were automated (I3).
Industry 4.0, an era that is marked by connectivity
We are currently bridging an interim period that places us at the base of the next revolution, defined as Industry 4.0. This upcoming era is marked by connectivity: cyber-physical systems (CPS) that operate over a network, the Internet of Things (IoT). For production we will rely on Smart Factories, and our communities will become Smart Cities. The idea of a ‘Smart’ entity is that it uses data and algorithms to apply clever policies and procedures that can be executed, evaluated and improved in an autonomous flow. Smart entities should therefore have the ability to learn and make decisions.
The role of future risk management – understanding risk in complex systems
As we face more complex systems in I4, an adaptable approach is needed for current and future risk management. The bowtie method and its software describe risk scenarios in a common language, which makes it easier to understand how accidents and incidents happen in complex systems. Barrier thinking moves risk management from the traditional control of negative events towards managing operational performance in real time.
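To make barrier thinking concrete, a bowtie scenario can be sketched as a small data structure: threats on the left, a top event in the centre, consequences on the right, and barriers on the paths between them. This is an illustrative sketch only; the class names and the sample scenario below are invented for this example and are not part of BowTieXP’s actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Barrier:
    """A control that prevents or mitigates an event path."""
    name: str
    effective: bool = True

@dataclass
class PathElement:
    """A threat (left side) or consequence (right side) with its barriers."""
    description: str
    barriers: list = field(default_factory=list)

    def is_controlled(self) -> bool:
        # A path counts as controlled if at least one barrier on it is effective.
        return any(b.effective for b in self.barriers)

@dataclass
class Bowtie:
    """A top event with its threats (causes) and consequences."""
    top_event: str
    threats: list = field(default_factory=list)
    consequences: list = field(default_factory=list)

    def uncontrolled_paths(self) -> list:
        # Paths with no effective barrier left: where the scenario can escalate.
        return [p.description for p in self.threats + self.consequences
                if not p.is_controlled()]

# Hypothetical smart-factory scenario: machine overheating.
bt = Bowtie(
    top_event="Loss of control of machine temperature",
    threats=[PathElement("Cooling failure",
                         [Barrier("Redundant cooling"), Barrier("Temperature alarm")])],
    consequences=[PathElement("Production loss",
                              [Barrier("Automatic shutdown", effective=False)])],
)
print(bt.uncontrolled_paths())  # → ['Production loss']
```

Framed this way, “managing operational performance in real time” amounts to continuously re-evaluating which paths are still controlled as barrier states change.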
Bridging that gap, if only marginally compared to the changes I4 will require, our present customers’ needs already move along a maturity spectrum from BowTieXP towards BowTieServer. Where BowTieXP focuses on defining a barrier-based risk management plan, BowTieServer is more involved in executing that plan and implementing the lessons learned from the process. One of the main reasons clients move towards BowTieServer is the convenience it offers: steps in the cyclical risk management process can be automated. This can be seen as a development typical of I3.
How can we move towards risk management (software) that would suit I4?
Within Smart Factories, things can still go wrong. Even if there are no injuries or fatalities because no humans are involved in the process, the machines that operate the facility, and production itself, can still suffer physical damage (losses). Having no humans on the work floor is a coin with two sides: no human errors can occur, but no human decisions can be made either. Perhaps the role of bowties will simply be to simplify and control this increased complexity through scenario-based thinking.
Having risks managed entirely by cyber-physical systems can only be achieved where cognitive computing is enabled. Systems need to be adaptive, interactive, iterative, stateful, and contextual. This goes a step beyond artificial intelligence (AI), and is therefore still hard for us to imagine today. Instead of merely automating processes, which is ultimately what AI is capable of, cognitive computing should give the system human-like capabilities.
Are we still in control of risk management when we no longer supervise and intervene?
For now, it remains unthinkable to leave humans out of the Deming cycle (plan-do-check-act) altogether. Even if cognitive computing were fully developed, the scenario of machines taking over entirely is a fearsome picture for most of us to paint. At best, CPS will probably identify humanity as an escalation factor on their bowties.
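The place of the human in the Deming cycle can be made explicit with a minimal sketch: an automated plan-do-check loop where the ‘act’ step still passes through a human gate. Every function name here is a hypothetical placeholder, not the API of any real system.

```python
def pdca_cycle(plan, do, check, human_approves):
    """One pass of plan-do-check-act with a human gate before 'act'.

    plan, do, and check are callables supplied by the automated system;
    human_approves is the decision point we argue cannot yet be removed.
    """
    action = plan()                 # Plan: propose an action
    result = do(action)             # Do: execute it
    deviation = check(result)       # Check: detect deviations, or None
    # Act: only intervene when a deviation exists AND a human signs off.
    if deviation and human_approves(deviation):
        return f"act: corrected {deviation}"
    return "act: no intervention"

# Illustrative run with stubbed steps.
outcome = pdca_cycle(
    plan=lambda: "reduce line speed",
    do=lambda action: {"applied": action, "temp": 92},
    check=lambda r: "overtemperature" if r["temp"] > 90 else None,
    human_approves=lambda deviation: True,  # the human decision point
)
print(outcome)  # → act: corrected overtemperature
```

Fully autonomous risk management would amount to replacing `human_approves` with yet another algorithm, which is exactly the step that remains hard to accept today.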