The following is an invited post from Thom Walsh, PhD, MSPT, OCS, Diplomate MDT (@thomwalsh on Twitter). Thom recently completed his doctorate in health policy at The Dartmouth Institute for Health Policy and Clinical Practice. He is currently on faculty as a curriculum specialist in The Dartmouth Center's Health Care Delivery Science program.
Nobel Prize-winning economist Daniel Kahneman has made it his life’s work to scrutinize the many ways in which human beings fail to behave rationally. His most recent book, “Thinking, Fast and Slow,” summarizes 40 years of research he conducted with his colleague Amos Tversky on the topic.
Kahneman categorizes human thinking into two systems: 1) fast and 2) slow. The fast system (System-1) represents rapid decision making based on assumptions, prior personal experience, and neurologic shortcuts that have proven advantageous to survival. System-1 “prepares us to perceive the world around us, recognize objects, orient attention, avoid losses, and fear spiders,” using subconscious actions and reactions that are unavoidable. We cannot unlearn the simple mathematical fact that 2+2=4 any more than we can stop ourselves from flinching at the sudden bark of an angry dog. To work at such instantaneous speed, System-1 thinking relies on biases and heuristics. Kahneman explains the “hindsight bias,” the “optimistic bias,” the “availability heuristic,” and the “fast and frugal heuristic,” along with many others. These biases and heuristics describe the seemingly infinite ways in which humans deviate from the utility-maximizing path laid out by econometrics and utility theory. (See here for a basic overview of decision theory in econometrics.)
By contrast, the slow system (System-2) is lazy and laborious. Activating System-2 thinking requires us to pay attention and allows us to follow complex directions, compare choices, reflect on the past and project into the future. Activities that rely on System-2 thinking require self-control and sustained effort. In short, engaging System-2 is oftentimes depleting and unpleasant.
Difficulties arise when we are presented with a situation that intuitively engages System-1 thinking, but actually requires System-2 thinking. For example, if I told you the price of my son’s new ball and bat was $1.10, and the bat cost $1.00 more than the ball, your brain would likely switch into System-1 thinking by immediately concluding that the ball cost 10¢. And, you would be wrong. If the ball were 10¢, the bat would be $1.10 and our total would be $1.20. By engaging System-2, we find the actual cost of the ball is 5¢.
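For anyone who wants the System-2 algebra spelled out, let $b$ be the ball’s price in dollars:

$$b + (b + 1.00) = 1.10 \quad\Longrightarrow\quad 2b = 0.10 \quad\Longrightarrow\quad b = 0.05$$

The ball costs 5¢, the bat costs $1.05, and together they total $1.10.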
Kahneman’s brilliant studies also demonstrate that humans are risk-averse when there is hope for a gain, yet risk-seeking when faced with a choice between a sure loss and a gamble that might avoid it. He goes on to explain that we tend to rely heavily on System-1 thinking when faced with uncertainty, trade-offs, or long-term horizons.
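Here is a sketch of how prospect theory captures this flip. The dollar amounts are my own, in the spirit of Kahneman and Tversky’s classic choice problems, and the value-function parameters are the estimates Tversky and Kahneman published in 1992. For simplicity, the sketch ignores prospect theory’s probability-weighting function; the curvature of the value function alone is enough to reverse the preference:

```python
# Why choices flip between gains and losses: the sure option and the
# gamble below have identical expected values, but prospect theory's
# S-shaped value function ranks them differently in each frame.
# Dollar amounts are illustrative; alpha = 0.88 and lambda = 2.25 are
# the Tversky & Kahneman (1992) estimates.

ALPHA, LAMBDA = 0.88, 2.25

def value(x):
    """Subjective value of a gain or loss x, measured from the status quo."""
    if x >= 0:
        return x ** ALPHA                # concave for gains -> risk aversion
    return -LAMBDA * ((-x) ** ALPHA)     # convex, ~2.25x steeper for losses -> risk seeking

# Gains frame: a sure $900 vs. a 90% chance of $1,000 (equal expected value).
print(value(900), 0.9 * value(1000))    # ~397.9 vs. ~392.9 -> take the sure gain

# Losses frame: a sure -$900 vs. a 90% chance of -$1,000 (equal expected value).
print(value(-900), 0.9 * value(-1000))  # ~-895.2 vs. ~-883.9 -> take the gamble
```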
Uncertainty, novel situations, trade-offs, hopes, certain losses, and long-term horizons are common features of many health care decisions.
The surprising truth is that medical providers seldom have solid evidence that one treatment is more effective than another. More often, the clinical context is a choice between treatments whose outcomes differ depending on the option chosen, without clear evidence to tell the patient or provider which choice is superior. Trade-offs exist among probabilities of good and bad outcomes over various time horizons.
For example, a patient suffering from 6 weeks of uncomplicated low back and leg pain from a herniated disc could choose surgical or non-surgical treatment. Surgical care holds the promise of relieving the leg pain more quickly than non-surgical care, but at the added risk of the surgical procedure itself. Non-surgical care holds the promise of relief over a longer period of time and without the surgical risks. Existing evidence suggests the long-term outcomes are equivalent. (See here, here, & here for supporting evidence.) For this reason, the patient’s preferences should determine the treatment pathway. More commonly, however, it is the provider’s opinion that determines the patient’s treatment destiny.
Following this thought process further leads to the uncomfortable realization of a seldom-discussed medical error: operating on the wrong patient. I do not mean operating on patient A instead of patient B. By “wrong patient,” I mean one who, had she been fully informed of the risks and benefits of all treatment options, would have chosen non-surgical care. Evidence of the overuse of surgical interventions in this manner is detailed in Dartmouth Professor Jack Wennberg’s latest book, “Tracking Medicine.”
Patient preference-sensitive clinical situations are best addressed by aligning the fully informed patient’s preferences with the care she receives. Doing so requires System-2 thinking by both the clinician and the patient. This is not easy, and each party has legitimate reasons to be unhappy about it.
The situation is difficult for clinicians because System-2 thinking is laborious but not reimbursed. There are no evaluations or procedures to order. Further, medical school does not provide the skills required to elicit a patient’s preferences, fears, and risk tolerance, let alone to gauge her numeracy. This final aspect is especially important because medical decisions frequently involve very small probabilities that are difficult to place in context.
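To make the small-numbers problem concrete, consider translating an absolute risk reduction into a number needed to treat (NNT), the standard 1/ARR conversion. The risk figures below are hypothetical, chosen only for illustration:

```python
# Translating a small absolute risk reduction (ARR) into a number
# needed to treat (NNT = 1 / ARR). The risks here are hypothetical.

baseline_risk = 0.020   # 2.0% chance of a bad outcome without treatment
treated_risk  = 0.015   # 1.5% chance of a bad outcome with treatment

arr = baseline_risk - treated_risk   # 0.005, i.e. half a percentage point
nnt = 1 / arr                        # ~200 patients treated to avoid one bad outcome

print(f"ARR = {arr:.3f}, NNT = {nnt:.0f}")   # ARR = 0.005, NNT = 200
```

A “0.5% lower risk” and “treat 200 patients to spare one” describe the same evidence, yet patients and clinicians often react to the two framings very differently.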
Patient preference-sensitive clinical situations are difficult for patients because they may not want to be so involved. Patients engage the health care system at their most vulnerable times. They reveal secrets to us that their spouses may not know. They disrobe in front of us, and we peer inside of them with our technology or surgical techniques. On top of all this, after the briefest of interactions, patients entrust us with their anxieties and hopes, in the hope of receiving information, wisdom, and care that can prevent or ease pain and dysfunction.
In return, they should receive our best information and judgment. Past attempts to involve the patient in sharing the medical decision have, in my opinion, come up short. They have relied on econometric utility-maximizing theory: weighting each of the probabilities in a decision tree and then mathematically folding the branches back to reveal the “rational” choice with the highest expected utility for the patient. This process has always struck me as unusual because, in over 15 years of clinical work, I have never met a truly rational patient. Nor, for that matter, have I encountered a fully rational and unbiased clinician.
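For readers unfamiliar with the fold-back procedure described above, here is a minimal sketch. Every probability and utility value is invented for illustration; the point is only the mechanics: chance nodes average their branches by probability, and decision nodes pick the option with the highest expected utility:

```python
# A minimal decision-tree "fold-back" (expected-utility) sketch.
# All probabilities and utilities are invented for illustration.

def fold_back(node):
    """Recursively compute expected utility, choosing the best branch at decision nodes."""
    if node["type"] == "outcome":
        return node["utility"]
    if node["type"] == "chance":
        # Weight each branch's folded-back value by its probability.
        return sum(p * fold_back(child) for p, child in node["branches"])
    # Decision node: the idealized rational patient picks the best option.
    return max(fold_back(option) for option in node["options"])

tree = {
    "type": "decision",
    "options": [
        {   # Surgery: likely fast relief, small chance of a complication.
            "type": "chance",
            "branches": [(0.90, {"type": "outcome", "utility": 0.85}),
                         (0.10, {"type": "outcome", "utility": 0.40})],
        },
        {   # Non-surgical care: slower relief, no surgical risk.
            "type": "chance",
            "branches": [(0.70, {"type": "outcome", "utility": 0.80}),
                         (0.30, {"type": "outcome", "utility": 0.60})],
        },
    ],
}

print(fold_back(tree))   # ~0.805: surgery "wins" -- for the rational agent the model assumes
```

The trouble, as just noted, is that no such perfectly rational agent sits in the exam room.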
The work done by Kahneman and Tversky debunks utility theory and replaces it with “prospect theory.” Kahneman describes this well in his book. It is incumbent on us to design the delivery of care in ways that incorporate the insights of prospect theory and employ methods that lead to fully informed patient choice. I think you’ll agree, and I look forward to your comments.