Threat and Error Management in the Operating Room

Threat and Error Management (TEM) is an overarching safety concept that originated in aviation in the 1990s. SCT’s Mario Pierobon reports on the application of TEM in medical operating rooms.

Managing patients in the operating room is an iterative, dynamic process that depends heavily on cooperation among team members and on sophisticated technology. In such an environment, operators are exposed to threats and errors on a constant basis. This second part of a two-part story on threat and error management in the operating room focuses on how the TEM concept can be used to reduce the risk of error and how it can be implemented in practice.

Error Cycles

According to research conducted by Hickey et al. [i], assessment of error confirms that error is common and that two thirds of errors are consequential. “One fifth of all patients experience error cycles, which are highly associated with residual lesions, end-organ injury, and death,” they say. “When error leads to an unintended deviation from the expected clinical course, extreme vigilance and optimal use of resources is essential to prevent or break error cycles.”

Human factors research in medicine has focused on analysing errors and implementing system-wide changes to prevent them from recurring, according to Ruskina et al. [ii]. “Addressing these problems should decrease the probability that the same event, or events with similar cause, will occur in the future,” they say.

Modelling Errors

To reduce errors, a model of the error process can be used. According to Helmreich & Musson [iii], the model should identify the types of errors committed; deficiencies in training and knowledge; error detection strategies that are ineffective, lacking or merely potential; effective error mitigation or management strategies; threat detection and management strategies; and systemic threats.

The main goals of the model, according to Helmreich & Musson, are capturing the context of patient treatment including expected and unexpected threats, classifying the types of threats and errors that occur in the medical setting, classifying the processes of managing threat and error and their outcomes, and leading to the identification of latent systemic threats in the medical setting. “The model is recursive; that is, each error either resolves itself, is successfully managed, or is unsuccessfully managed, and may precipitate further errors. These further errors may be analysed in a similar fashion,” they say. “As each error is analysed, it is possible to look for error detection safeguards (such as a procedure, vigilance, or possible monitoring equipment), knowledge or training deficiencies, and mitigation strategies.”
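
As a rough illustration only, the recursive structure they describe could be captured in a simple data record: each error carries its resolution, the safeguards, deficiencies and mitigation strategies identified during analysis, and any further errors it precipitated. The Python sketch below is an illustrative reading of that description rather than anything taken from Helmreich & Musson; all class and field names are assumptions.

from dataclasses import dataclass, field
from enum import Enum
from typing import List


class Resolution(Enum):
    # How an individual error was resolved, per the recursive model.
    SELF_RESOLVED = "resolved itself"
    MANAGED = "successfully managed"
    UNMANAGED = "unsuccessfully managed"  # may precipitate further errors


@dataclass
class ErrorEvent:
    # One error and the analysis attached to it. The structure is recursive:
    # an unmanaged error can precipitate further errors, each analysed in the
    # same fashion.
    description: str
    resolution: Resolution
    detection_safeguards: List[str] = field(default_factory=list)   # e.g. a procedure, vigilance, monitoring equipment
    knowledge_deficiencies: List[str] = field(default_factory=list)
    mitigation_strategies: List[str] = field(default_factory=list)
    precipitated_errors: List["ErrorEvent"] = field(default_factory=list)


def walk_error_chain(event: ErrorEvent, depth: int = 0) -> None:
    # Trace an error cycle, following each error into the errors it caused.
    print("  " * depth + f"{event.description} -> {event.resolution.value}")
    for follow_on in event.precipitated_errors:
        walk_error_chain(follow_on, depth + 1)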

A Process Model

Helmreich & Musson propose a process model of human behaviour. “The model flows from left to right, starting with the commission of an error, followed by the response to that error, the effect that the error has on the patient, the patient management in response to that effect, and the final outcome of the error on the patient. This model is recursive, with error at each stage feeding back into the model,” they say.

The structure of Helmreich & Musson’s model is based on latent threats (what exists in the organisation?), overt threats (what was present that day?), human error (what was done wrong?), error management (how was the mistake handled?) and outcomes (did a change in a patient’s wellbeing result from the error, and how was it managed?). “The analysis of many errors or incidents should lead to the identification of systemic threats and deficiencies within the organisation in question,” they say.
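
The five questions lend themselves to a simple incident record, and, as the authors note, aggregating many such records is what surfaces systemic threats. The following sketch is a hypothetical illustration of that idea; the record structure and the recurrence threshold are assumptions, not part of Helmreich & Musson's model.

from collections import Counter
from dataclasses import dataclass
from typing import List


@dataclass
class IncidentRecord:
    # One incident, described with the five questions of the model.
    latent_threats: List[str]   # what exists in the organisation?
    overt_threats: List[str]    # what was present that day?
    human_error: str            # what was done wrong?
    error_management: str       # how was the mistake handled?
    outcome: str                # did the patient's wellbeing change, and how was it managed?


def systemic_threats(incidents: List[IncidentRecord], min_count: int = 2) -> List[str]:
    # Latent threats that recur across many incidents are candidates for
    # systemic deficiencies within the organisation.
    counts = Counter(threat for record in incidents for threat in record.latent_threats)
    return [threat for threat, n in counts.items() if n >= min_count]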

Implementing TEM in Practice

Ruskina et al. hypothesize that TEM can be used as a multifaceted strategy allowing healthcare providers to recognize potential threats to patient safety and proactively manage hazards before an operator error causes an injury. “The first step toward predicting the points at which errors and violations can occur is the creation of a systematic description of anesthetic practice,” they say. “After reviewing the list of threats that had been developed for the Aviation Safety Information Analysis and Sharing System (ASIAS) and other anesthesia taxonomies, our group developed a task list for a typical anesthetic and surgical procedure that takes place in an operating room.”

The threats listed in the taxonomy can be considered situations that produce error and that must be managed to prevent a decrease in the margin of safety, according to Ruskina et al. “This taxonomy may improve analysis of critical events with subsequent development of specific interventions. It may also serve as a framework for training physicians in risk management,” they say. “After identification with TEM, specific threats and associated errors can be used to guide the content of educational programs or other quality improvement initiatives at individual institutions and throughout the profession.”
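
In practice, such a taxonomy amounts to a mapping from the phases of a case to the threats catalogued for each phase. The fragment below sketches that shape only; the phase names and threats are invented placeholders and should not be read as the actual task list developed by Ruskina et al.

# A hypothetical fragment of a task/threat taxonomy for an operating-room case.
# The phase names and threats are illustrative placeholders, not the actual
# list developed by Ruskina et al. from ASIAS and the anaesthesia taxonomies.
TASK_THREAT_TAXONOMY = {
    "preoperative assessment": ["incomplete patient history", "production pressure"],
    "induction of anaesthesia": ["difficult airway", "ambiguous drug labelling"],
    "maintenance": ["distractions and interruptions", "equipment malfunction"],
    "emergence and handover": ["incomplete handover", "competing tasks"],
}


def threats_for_phase(phase: str) -> list:
    # Look up the threats catalogued for a given phase of the procedure.
    return TASK_THREAT_TAXONOMY.get(phase, [])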

A TEM-Structured Review

Threats and errors can be documented by self-reports or records generated by patient care processes, observe Ruskina et al. “Training programs can use a TEM-structured review as a framework for evaluating the performance of and providing feedback to resident physicians. Error-producing conditions identified by TEM, and strategies to mitigate them, could ultimately be adopted as a core component of medical education,” they say.

Another source of information for TEM training can be narrative stories; these experiences can be used to supplement operational experience, according to Ruskina et al.

According to the study conducted by Grose, errors can be of three types: procedural, communication and handling. “Procedural errors are failure to follow defined protocol or policy. Communication errors may be failure to use standard communication techniques, such as read-back of information, or obvious misunderstandings during communication. Handling errors are largely technical in nature, but could also include judgment,” he says.
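
Grose's three categories can be sketched as a simple classification, as below; the keyword rules are illustrative assumptions only, and a real review would assign each observed error by hand against the definitions he gives.

from enum import Enum


class ErrorType(Enum):
    PROCEDURAL = "failure to follow defined protocol or policy"
    COMMUNICATION = "failure of standard communication, or misunderstanding"
    HANDLING = "largely technical, but could also include judgment"


# Illustrative keyword rules only; a real review would classify each
# observation by hand against the definitions above.
KEYWORDS = {
    ErrorType.PROCEDURAL: ("protocol", "policy", "checklist"),
    ErrorType.COMMUNICATION: ("read-back", "handover", "misunderstanding"),
}


def classify(observation: str) -> ErrorType:
    # Assign an observed error to one of the three categories, defaulting
    # to a handling error when no keyword matches.
    text = observation.lower()
    for error_type, keywords in KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return error_type
    return ErrorType.HANDLING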

Random Slips

The research performed by Grose shows that handling errors had no patterns whatsoever and appeared to be the result of random slips, lapses or mistakes, as expected. “Communication errors occurred almost entirely within the team. Procedural errors, like handling errors, were wide ranging, but unlike handling errors, they occurred with a frequency that did permit some patterns to emerge,” he says.

Procedural errors are related to the communication environment in the room through failure to maintain sterile communications, failure to call out information or give/receive a report, failure to cross-check leading to an incorrect configuration state, or failure to use checklists, according to Grose.

Preoccupation with Failure

In summary, high-stakes industries, including the medical industry, have a philosophy about human error that is akin to that in aviation. All have developed a preoccupation with failure and have engrained a culture of systemic vigilance. All have endorsed and promoted mechanisms for blame-free error assessment and analysis, and all accept that human error is both ubiquitous and inevitable, conclude Hickey et al.

References

[i] Hickey et al.

[ii] Ruskina et al.

[iii] Helmreich & Musson
