Swiss cheese model

The Swiss cheese model of accident causation is a model used in risk analysis and risk management, including aviation safety, engineering, healthcare, and emergency service organizations, and is the principle behind layered security, as used in computer security and defense in depth. It likens human systems to multiple slices of Swiss cheese stacked side by side, in which the risk of a threat becoming a reality is mitigated by the differing layers and types of defenses "layered" behind each other. In theory, therefore, lapses and weaknesses in one defense do not allow a risk to materialize, because other defenses also exist, preventing a single point of failure. The model was originally propounded by Dante Orlandella and James T. Reason of the University of Manchester,[1] and has since gained widespread acceptance. It is sometimes called the "cumulative act effect".

The Swiss cheese model of accident causation illustrates that, although many layers of defense lie between hazards and accidents, there are flaws in each layer that, if aligned, can allow the accident to occur.

Although the Swiss cheese model is respected and considered a useful method of relating concepts, it has been criticized as being applied too broadly and without enough support from other models or evidence.[2]

Failure domains

Reason hypothesized that most accidents can be traced to one or more of four failure domains: organizational influences, supervision, preconditions, and specific acts.[3][4] For example, in aviation, preconditions for unsafe acts include fatigued air crew or improper communications practices. Unsafe supervision encompasses, for example, pairing inexperienced pilots on a night flight into known adverse weather. Organizational influences encompass such things as reduction in expenditure on pilot training in times of financial austerity.[5][6]

Holes and slices

In the Swiss cheese model, an organisation's defenses against failure are modeled as a series of barriers, represented as slices of cheese. The holes in the slices represent weaknesses in individual parts of the system and are continually varying in size and position across the slices. The system produces failures when holes in all of the slices momentarily align, permitting (in Reason's words) "a trajectory of accident opportunity", so that a hazard passes through holes in all of the slices, leading to a failure.[7][8][9][6]
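The alignment mechanism can be sketched as a simple probabilistic simulation. The barrier count and failure probabilities below are illustrative assumptions, not values from Reason's work; the sketch also assumes the holes in different slices vary independently.

```python
import random

def accident_rate(barrier_failure_probs, trials=100_000, seed=42):
    """Estimate how often a hazard passes through every barrier.

    Each barrier is a 'slice of cheese'; its failure probability is
    the chance a 'hole' lies in the hazard's path at a given moment.
    An accident occurs only when holes in all slices align, i.e. when
    every barrier fails on the same trial.
    """
    rng = random.Random(seed)
    accidents = 0
    for _ in range(trials):
        if all(rng.random() < p for p in barrier_failure_probs):
            accidents += 1
    return accidents / trials

# Four independent barriers, each failing 10% of the time:
# the expected accident rate is 0.1**4 = 0.0001, far below the
# 0.1 rate that any single barrier would give on its own.
print(accident_rate([0.1, 0.1, 0.1, 0.1]))
```

With independent barriers, adding a slice multiplies the accident probability by that slice's failure probability, which is why layered defenses help even when every individual layer is imperfect.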

Frosch[10] described Reason's model in mathematical terms as a model in percolation theory, which he analyses as a Bethe lattice.
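In percolation terms, a connected path of holes through the system first appears at a critical occupation probability; on a Bethe lattice with coordination number z, that threshold is the standard result p_c = 1/(z − 1). A minimal helper illustrating the formula (the choice of z here is an assumption for illustration only):

```python
def bethe_percolation_threshold(z: int) -> float:
    """Critical occupation probability p_c = 1/(z - 1) for a Bethe lattice.

    Above p_c an infinite connected cluster exists, so a failure path
    can propagate through the system; below it, failures stay contained.
    """
    if z < 3:
        raise ValueError("a Bethe lattice needs coordination number z >= 3")
    return 1 / (z - 1)

# For z = 3 (each node connected to three neighbours), p_c = 0.5.
print(bethe_percolation_threshold(3))
```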

Active and latent failures

The model includes both active and latent failures. Active failures encompass the unsafe acts that can be directly linked to an accident, such as (in the case of aircraft accidents) a navigation error. Latent failures include contributory factors that may lie dormant for days, weeks, or months until they contribute to the accident. Latent failures span the first three domains of failure in Reason's model.[5]

In the early days of the Swiss cheese model, from the late 1980s to about 1992, attempts were made to combine two theories: James Reason's multi-layer defence model and Willem Albert Wagenaar's Tripod theory of accident causation. This resulted in a period in which the Swiss cheese diagram was drawn with the slices of cheese labelled as active failures, preconditions, and latent failures.

These attempts to combine the two theories still cause confusion today. A more correct version of the combined theories shows the active failures (now called immediate causes), preconditions, and latent failures (now called underlying causes) as the reasons each barrier (slice of cheese) has a hole in it, with the slices of cheese themselves as the barriers.

Applications

The same framework can be applied in some areas of healthcare. For example, a latent failure could be the similar packaging of two drugs that are then stored close to each other in a pharmacy. Such a failure would be a contributory factor in the administration of the wrong drug to a patient. Such research led to the realization that medical error can be the result of "system flaws, not character flaws", and that greed, ignorance, malice, or laziness are not the only causes of error.[11]

Lubnau, Lubnau, and Okray[12] apply the model to the engineering of firefighting systems, aiming to reduce human errors by "inserting additional layers of cheese into the system", namely the techniques of Crew Resource Management.

This is one of many models listed, with references, in Taylor et al. (2004).[13]

Kamoun and Nicho[14] found the Swiss cheese model to be a useful theoretical model to explain the multifaceted (human, organizational and technological) aspects of healthcare data breaches.

References

  1. Reason 1990.
  2. "Revisiting the Swiss cheese model of accidents" (PDF). Eurocontrol. October 2006.
  3. J.A. Doran & G.C. van der Graaf (Shell International Exploration and Production B.V.) (1996). "Tripod-BETA: Incident investigation and analysis". Proceedings of the SPE Health, Safety and Environment in Oil and Gas Exploration and Production Conference, 9–12 June 1996, New Orleans, Louisiana.
  4. A.D. Gower-Jones & G.C. van der Graaf (Shell International Exploration and Production) (1998). "Experience with Tripod BETA Incident Analysis". Proceedings of the SPE International Conference on Health, Safety, and Environment in Oil and Gas Exploration and Production, 7–10 June 1998, Caracas, Venezuela.
  5. Douglas A. Wiegmann & Scott A. Shappell (2003). A human error approach to aviation accident analysis: the human factors analysis and classification system. Ashgate Publishing. pp. 48–49. ISBN 0754618730.
  6. Stranks, J. (2007). Human Factors and Behavioural Safety. Butterworth-Heinemann. pp. 130–131. ISBN 9780750681551.
  7. Daryl Raymond Smith; David Frazier; L W Reithmaier & James C Miller (2001). Controlling Pilot Error. McGraw-Hill Professional. p. 10. ISBN 0071373187.
  8. Jo. H. Wilson; Andrew Symon; Josephine Williams & John Tingle (2002). Clinical Risk Management in Midwifery: the right to a perfect baby?. Elsevier Health Sciences. pp. 4–6. ISBN 0750628510.
  9. Tim Amos & Peter Snowden (2005). "Risk management". In Adrian J. B. James; Tim Kendall & Adrian Worrall (eds.). Clinical Governance in Mental Health and Learning Disability Services: A Practical Guide. Gaskell. p. 176. ISBN 1904671128.
  10. Robert A. Frosch (2006). "Notes toward a theory of the management of vulnerability". In Philip E Auerswald; Lewis M Branscomb; Todd M La Porte; Erwann Michel-Kerjan (eds.). Seeds of Disaster, Roots of Response: How Private Action Can Reduce Public Vulnerability. Cambridge University Press. p. 88. ISBN 0521857961.
  11. Patricia Hinton-Walker; Gaya Carlton; Lela Holden & Patricia W. Stone (2006-06-30). "The intersection of patient safety and nursing research". In Joyce J. Fitzpatrick & Patricia Hinton-Walker (eds.). Annual Review of Nursing Research Volume 24: Focus on Patient Safety. Springer Publishing. pp. 8–9. ISBN 0826141366.
  12. Thomas Lubnau II; Randy Okray & Thomas Lubnau (2004). Crew Resource Management for the Fire Service. PennWell Books. pp. 20–21. ISBN 1593700067.
  13. Taylor, G. A.; Easter, K. M.; Hegney, R. P. (2004). Enhancing Occupational Safety and Health. Elsevier. pp. 241–245, 140–141, 147–153. ISBN 0750661976.
  14. Faouzi Kamoun & Mathew Nicho (2014). "Human and Organizational Factors of Healthcare Data Breaches: The Swiss Cheese Model of Data Breach Causation and Prevention". International Journal of Healthcare Information Systems and Informatics, 9(1). IGI Global. pp. 42–60.

This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.