High Reliability in Healthcare

Creating the culture and mindset for patient safety

Robin Guenther

Robin Guenther designs and advocates at the intersection of healthcare architecture and sustainable policy. She is Senior Advisor to Health Care Without Harm and co-authored Sustainable Healthcare Architecture and Primary Protection: Enhancing Health Care Resilience for a Changing Climate, available from the US Department of Health and Human Services at toolkit.climate.gov.

AJ Hobbs

AJ Hobbs is an industrial engineer and Healthcare Planning + Strategies Consultant at Perkins+Will. AJ partners with healthcare clients to plan for their future through a systems engineering and quality improvement lens. AJ designs operations and facilities in tandem to improve health and the experience of care while reducing costs.

While patient care delivery is more nuanced than in mechanical industries, healthcare can improve its reliability by adopting characteristics of other industries that have achieved high reliability. Creating the culture for patient safety begins with a preoccupation with failure and a commitment to resilience. These characteristics must be adopted at all levels of the organisation.

As the world moved into 2018, the safety record of commercial aviation in 2017 was big news. 2017 was the safest year ever for commercial flight, with one fatal accident for every 16 million flights. While 2017 may have been a bit of an outlier, there is a clear trend of decreasing fatalities in commercial aviation; fatality risk dropped 83 per cent from 1998 to 2008. This safety record is all the more impressive given the global trend toward increasingly extreme weather, which can certainly impact aviation safety. The annual average number of US extreme weather events costing over USD 1 billion in the most recent five years (2012-16) was 11.6 events. In 2017, there were 16, which set a new record. From Hurricane Sandy to Typhoon Haiyan, healthcare infrastructure is vulnerable to damage and disruption at the very moment when community need for health services peaks.

In part due to these great strides, aviation is among the industries known for high reliability. High reliability industries do an exceptional job of reducing errors and protecting the humans who interface with their systems, even in the face of dangerous and complex tasks and external influences. While aviation is praised for its high reliability, healthcare lags behind. Healthcare produces errors and deaths that have made their own headlines; in the United States, some estimates have put healthcare errors as the third leading cause of death, behind only heart disease and cancer. Healthcare infrastructure continues to fail in extreme weather, leading to costly evacuations, care disruption, and in some instances, loss of life. In many instances, long-term disruption to healthcare delivery can hamper community economic recovery. How can this continue to happen? How can healthcare contribute to the very problem it exists to solve? Can healthcare reliability be improved?

Arguably, the ‘input’ and ‘output’ in healthcare, human beings, are much more complex than planes, and thus achieving high reliability is even more elusive in healthcare. Still, healthcare organisations can adopt characteristics of other industries that have achieved high reliability in order to improve patient safety and system performance. Healthcare has to continually challenge itself to work toward high reliability – that is, ‘doing no harm’. This begins with culture and mindset.

Researchers Karl Weick and Kathleen Sutcliffe, who study high reliability, have identified five characteristics of highly reliable organisations. Weick and Sutcliffe have noted that these characteristics are ‘responsible for the mindfulness’ that keeps the system functioning well, even in unanticipated conditions. These characteristics are: preoccupation with failure, reluctance to simplify, sensitivity to operations, deference to expertise, and commitment to resilience. Two characteristics in particular create the culture and mindset for patient safety and high reliability: preoccupation with failure and commitment to resilience.

Commitment to Resilience

While healthcare organisations should make failure increasingly visible, they must also respond appropriately to failure by committing to resilience. Resilience thinking, which can look different at the many levels of our complex healthcare delivery systems, helps organisations reach high reliability. It is a culture and mindset that must be supported by tangible system techniques to prevent and recover from patient safety failure in both daily clinical operations and in infrastructure. Commitment to resilience can come in the form of physical and operational design and implementation.

Architect Thomas Fisher, in Designing to Avoid Disaster, notes that “…centralised infrastructure, from power grids to hospitals, are larger, more complex, increasingly dependent upon massive amounts of ongoing maintenance, and often vulnerable to failure of a single element.” He goes on to suggest that, going forward, good design and planning will be based on the understanding that nothing will work as planned, or perhaps at all. A commitment to resilience means imagining, and accounting for, the worst. A host of tools and resources is emerging globally to assist healthcare organisations in developing a commitment to resilient infrastructure.

Preoccupation with Failure

A preoccupation with failure requires transparency and a shared responsibility for outcomes. Organisations and their people with high reliability mindsets are consistently seeking out where errors are occurring – errors that reach the end user and errors that get caught and corrected. In healthcare, we often call these errors that are caught before reaching the patient “near misses.” This preoccupation with failure spans both direct patient care and the physical infrastructure that supports it.

A highly reliable organisation (HRO) is not satisfied with correcting individual failures or with catching near misses; rather, it thinks in terms of the system. An HRO questions how its system is designed to allow a near miss to happen, and knows that one mistake can turn a near miss into a patient safety error or a critical infrastructure failure. In a high-reliability, patient-safety focused culture, it is everyone’s responsibility to constantly seek to expose and correct systemic vulnerabilities and failures, whether in patient care protocols or in physical infrastructure and equipment.

Searching out failure requires both experts in systems and those on the front line caring for patients, all supported by leadership. Experts trained in system design and human factors can utilise specific tools to proactively identify potential patient safety errors. One such tool, Failure Modes and Effects Analysis (FMEA), is a step-by-step approach to identifying possible failure in a system’s design. FMEA practitioners follow through by recommending actions to reduce the opportunity for failure. Systems engineers can also implement systems to make failures on the front line more visible to the healthcare improvement professionals who can dedicate time to investigating and fixing them. The concept of the ‘andon’ – a system used in manufacturing to notify management, maintenance, or other quality and safety staff of a quality or safety failure – could be adopted in healthcare to move towards high reliability.
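The ranking step at the heart of an FMEA can be sketched in a few lines of code. The sketch below uses hypothetical failure modes and the common 1–10 severity/occurrence/detection scales with the Risk Priority Number (RPN) convention; the example scenarios and scores are illustrative assumptions, not details drawn from this article.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    step: str        # process step being analysed
    mode: str        # how the step can fail
    severity: int    # 1 (minor harm) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (almost always caught) .. 10 (rarely caught)

    @property
    def rpn(self) -> int:
        # Risk Priority Number: the conventional product used to
        # rank which failure modes to redesign around first.
        return self.severity * self.occurrence * self.detection

# Hypothetical failure modes with illustrative scores.
modes = [
    FailureMode("medication dispensing", "look-alike drug selected", 9, 3, 4),
    FailureMode("patient hand-off", "allergy not communicated", 8, 4, 6),
    FailureMode("IV pump setup", "wrong infusion rate entered", 9, 2, 3),
]

# Highest RPN first: the vulnerabilities to address first.
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {fm.rpn:3d}  {fm.step}: {fm.mode}")
```

In practice the scores come from a multidisciplinary team's judgment, and the recommended actions, not the numbers themselves, are the point of the exercise.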

In a high reliability organisation, those on the front line delivering patient care share the responsibility of identifying and remediating failures. Front-line staff encounter patient safety failures more organically as they directly deliver care to patients. A basic step hospitals and ambulatory sites should take is to create a structure for reporting patient safety incidents (patient safety lapses, near misses, errors, etc.).

As this incident reporting avenue is made available, leadership must support the psychological safety of the front-line staff who do report failure. One approach to supporting the front line in reporting incidents is the ‘Just Culture’ system. Just Culture comprises cultural guidelines and incident response tools, developed on engineering principles, that enhance patient safety by respecting and protecting those who report incidents. Historically, safety issues in healthcare have too often been hidden because of fears that reporting a patient safety incident may result in individual punishment or even termination. This is unjust for a healthcare workforce which is, by and large, composed of caring, well-intentioned, and smart individuals. HROs, by contrast, support their front-line staff when they report failure. Their culture holds that failure is almost always a consequence of system error rather than individual negligence. Actively seeking out failure and responding appropriately in the face of system failures will move healthcare toward high reliability.

Operationally, one way to commit to resilience is to appropriately respond to every incident report brought forward by front line staff. It is not enough to collect information about system failure and make it more visible. A highly reliable organisation responds with resilience by redesigning systems in response to patient safety failures and properly communicates the outcomes of the improvements to the appropriate stakeholders.

When it comes to increasing incidents of extreme weather, it is critical to design systems that can respond to future weather conditions, from the gradual stresses of sea level rise to the immediate increases in maximum wind speeds or rainfall totals. Designing redundant systems, such as operable windows for natural ventilation and daylighting when mechanical infrastructure fails, is one key strategy for resilient design. The United Nations Office for Disaster Risk Reduction (UNISDR, 2012) notes: “Paying attention to protection and resilience will improve environmental, social and economic conditions, including combating the future variables of climate change, and leave the community more prosperous and secure than before.”

Perkins+Will, a design firm, collaborates with clients to implement tangible facility design tactics that can demonstrate and support a commitment to resilience. For example, designing clean and / or redundant energy systems allows patients and the healthcare professionals serving them to operate in a stable, safe environment. Spaulding Rehabilitation Hospital, on the Boston waterfront, is designed with all critical infrastructure on the roof, well above flood elevation. All critical services are located out of harm’s way. The building generates its own thermal energy and electricity, so it can operate indefinitely when municipal grid power is lost.

Additionally, a high reliability organisation fosters resilience in its individuals. At Perkins+Will, healthcare design teams design spaces in hospitals that aim to reduce fatigue and interruptions. For example, carpeting the interior of support cores on nursing floors reduces sound transmission, which can wear on the staff delivering healthcare and cause distraction and fatigue. Designing restorative spaces for staff also demonstrates a commitment to resilience. Giving staff quiet areas with access to natural light and views can improve their mood, make them more resilient in the face of failure, and ultimately improve patient safety.

Conclusion

A failure-obsessed culture and a systems thinking mindset are necessary for healthcare to move toward high reliability. The imperative is clear: healthcare must improve patient safety and performance amid extremely complex care and extreme externalities, like weather. As an industry, healthcare has much to learn from other high reliability industries, like aviation, about ensuring the safety of those it serves. The characteristics of high reliability organisations, as defined by Weick and Sutcliffe, must be adopted across the levels of healthcare organisations’ complex structures to improve patient safety. These characteristics include a preoccupation with failure and a commitment to resilience. To do no harm, healthcare must actively seek to understand its potential to cause harm, surface its system failures, and design systems that both prevent harm and recover from it quickly wherever possible.

Sources

Fisher, Thomas (2013). Designing to avoid disaster: The nature of fracture-critical design. New York and London: Routledge.

Makary, Martin A.; Daniel, Michael (2016). Medical error – the third leading cause of death in the US. BMJ, 353.

Weick, Karl E.; Kathleen M. Sutcliffe (2001). Managing the Unexpected - Assuring High Performance in an Age of Complexity. San Francisco, CA, USA: Jossey-Bass. pp. 10–17.

Boysen, Philip G. (2013). “Just Culture: A Foundation for Balanced Accountability and Patient Safety.” The Ochsner Journal. pp. 400–406.

United Nations Office for Disaster Risk Reduction [UNISDR]. (2012). How to make cities more resilient: a handbook for local government leaders. Retrieved from http://www.unisdr.org/campaign/resilientcities/toolkit/handbook

--Issue 40--