Review: Still Not Safe, by Robert L. Wears & Kathleen M. Sutcliffe

Russ Allbery eagle at eyrie.org
Sat Aug 13 20:40:39 PDT 2022


Still Not Safe
by Robert L. Wears & Kathleen M. Sutcliffe

Publisher: Oxford University Press
Copyright: November 2019
ISBN:      0-19-027128-0
Format:    Kindle
Pages:     232

Still Not Safe is an examination of the recent politics and history of
patient safety in medicine. Its conclusions are summarized by the
opening paragraph of the preface:

  The American moral and social philosopher Eric Hoffer reportedly
  said that every great cause begins as a movement, becomes a
  business, and eventually degenerates into a racket. The reform
  movement to make healthcare safer is clearly a great cause, but
  patient safety efforts are increasingly following Hoffer's path.

Robert Wears was Professor of Emergency Medicine at the University of
Florida specializing in patient safety. Kathleen Sutcliffe is Professor
of Medicine and Business at Johns Hopkins. This book is based on
research funded by a grant from the Robert Wood Johnson Foundation, for
which both Wears and Sutcliffe were principal investigators. (Wears died
in 2017, but the acknowledgments imply that at least early drafts of
the book existed by that point and it was indeed co-written.)

The anchor of the story of patient safety in Still Not Safe is the 1999
report from the Institute of Medicine entitled To Err is Human, to
which the authors attribute an explosion of public scrutiny of medical
safety. The headline conclusion of that report, which led nightly news
programs after its release, was that 44,000 to 98,000 people died each
year in the United States due to medical error. This report prompted
government legislation, funding for new safety initiatives, a flurry of
follow-on reports, and significant public awareness of medical harm.
What it did not produce, in the authors' view, was a significant
improvement in patient safety.

The central topic of this book is an analysis of why patient safety
efforts have had so little measurable effect. The authors attribute
this to three primary causes: an unwillingness to involve safety
experts from outside medicine or absorb safety lessons from other
disciplines, an obsession with human error that led to profound
misunderstandings of the nature of safety, and the misuse of safety
concerns as a means to centralize control of medical practice in the
hands of physician-administrators. (The term used by the authors is
"managerial, scientific-bureaucratic medicine," which is technically
accurate but rather awkward.)

Biggest complaint first: This book desperately needed examples, case
studies, or something to make these ideas concrete. There are
essentially none in 230 pages apart from passing mentions of famous
cases of medical error that added to public pressure, and a tantalizing
but maddeningly nonspecific discussion of the atypically successful
effort to radically improve the safety of anesthesia. Apparently
anesthesiologists involved safety experts from outside medicine,
avoided a focus on human error, turned safety into an engineering
problem, and made concrete improvements that dramatically reduced the
number of adverse events for patients. Sounds
fascinating! Alas, I'm just as much in the dark about what those
improvements were as I was when I started reading this book. Apart
from a vague mention of some unspecified improvements to anesthesia
machines, there are no concrete descriptions whatsoever.

I understand that the authors were probably leery of giving too many
specific examples of successful safety initiatives since one of their
core points is that safety is a mindset and philosophy rather than a
replicable set of actions, and copying the actions of another field
without understanding their underlying motivations or context within a
larger system is doomed to failure. But you have to give the reader
something, or the book starts feeling like a flurry of abstract
assertions. Much is made here of the drawbacks of a focus on human
error, and the superiority of the safety analysis done in other fields
that have moved beyond error-centric analysis (and in some cases have
largely discarded the word "error" as inherently unhelpful and
ambiguous). That argument cries out for a worked example: an analysis of
the same adverse incident first through an error lens and then through a
more nuanced safety lens, making the differences concrete for the reader.
It was maddening to me that the authors never did this.

This book was recommended to me as part of a discussion about safety
and reliability in tech and the need to learn from safety practices in
other fields. In that context, I didn't find it useful, although,
surprisingly, that's because the thinking in medicine (at least as
presented by these authors) seems to lag behind the current thinking in
distributed systems. The idea that human error is not a useful model
for approaching reliability is standard in large tech companies, nearly
all of which use blameless postmortems for exactly that reason. Tech,
like medicine, does have a tendency to be insular and not to look
outside the field for good ideas, but the approach to large-scale
reliability in tech seems to have avoided the other traps discussed
here. (Security is another matter, but security is also adversarial,
which creates a different set of problems that I suspect require
different tools.)

What I did find fascinating in this book, although not directly
applicable to my own work, is the way in which a focus on human error
becomes a justification for bureaucratic control and therefore a
concentration of power in a managerial layer. If the assumption is that
medical harm is primarily caused by humans making avoidable mistakes, and
that the solution is therefore to prevent those mistakes through better
training, discipline, or process, the organization ends up divided into
those who make the rules and those who follow them. The long-term result
is a practice of medicine in
which a small number of experts decide the correct treatment for a
given problem, and then all other practitioners are expected to
precisely follow that treatment plan to avoid "errors." (The best
distributed systems approaches may avoid this problem, but this failure
mode seems nearly universal in technical support organizations.)

I was startled by how accurate that portrayal of medicine felt. My
assumption prior to reading this book was that the modern experience of
medicine as an assembly line with patients as widgets was caused by the
pressure for higher "productivity" and thus shorter visit times,
combined with (in the US) the distorting effects of our broken medical
insurance system. After reading this book, I've added a misguided way
of thinking about medical error and risk avoidance to that analysis.

One of the authors' points (which, as usual, I wish they'd made more
concrete with a case study) is that the same thought process that lets
a doctor make a correct diagnosis and find a working treatment is the
thought process that may lead to an incorrect diagnosis or treatment.
There is not a separable state of "mental error" that can be
eliminated. Decision-making processes are more complicated and more
integrated than that. If you try to prevent "errors" by eliminating
flexibility, you also eliminate vital tools for successfully treating
patients.

The authors are careful to point out that the prior state of medicine
in which each doctor was a law unto themselves and there was no role
for patient safety as a discipline was also bad for safety. Reverting
to the state of medicine before the advent of the
scientific-bureaucratic error-avoiding culture is also not a solution.
But, rather at odds with other popular books about medicine, the
authors are highly critical of safety changes focused on human error
prevention, such as mandatory checklists. In their view, this is
exactly the sort of attempt to blindly copy the machinery of safety in
another field (in this case, air travel) without understanding the
underlying purpose and system of which it's a part. I am not qualified
to judge the sharp dispute over whether there is solid clinical
evidence that checklists are helpful (these authors claim there is not;
I know other books make different claims, and I suspect it may depend
heavily on how the checklist is used). But I found the authors'
argument that one has to design systems holistically for safety, not
try to patch in safety later by turning certain tasks into rote
processes and humans into machines, to be persuasive.

I'm not willing to recommend this book given how devoid it is of
concrete examples. I was able to fill in some of that because of prior
experience with the literature on site reliability engineering, but a
reader who wasn't previously familiar with discussions of safety or
reliability may find much of this book too abstract to be
comprehensible. But I'm not sorry I read it. I hadn't previously
thought about the power dynamics of a focus on error, and I think that
will be a valuable observation to keep in mind.

Rating: 6 out of 10

Reviewed: 2022-08-13

URL: https://www.eyrie.org/~eagle/reviews/books/0-19-027128-0.html

-- 
Russ Allbery (eagle at eyrie.org)             <https://www.eyrie.org/~eagle/>

