The Normalization Fallacy: why much of "critical care" may be neither
Like many starry-eyed medical students, I was drawn to critical care because of the high stakes, its physiological underpinnings, and the apparent fact that you could take control of that physiology and make it serve your goals for the patient. On my first MICU rotation in 1997, I was so swept away by critical care that I voluntarily stayed on through the Christmas holiday and signed up for another elective MICU rotation at the end of my 4th year. On the last night of that first rotation, wistful about leaving, I sauntered through the unit a few times thinking how I would miss the smell of the MICU and the distinctive noise of the Puritan Bennett 7200s delivering their [too high] tidal volumes. By then I could even tell you whether the patient’s peak pressures were high (they often were) by the sound the 7200 made after the exhalation valve released. I was hooked, irretrievably. I still love thinking about physiology, especially in the context of critical illness, but I find that I have grown circumspect about its manipulation as I have reflected on the developments in our field over the past 20 years. Most – if not all – of these “developments” show us that we were harming patients with a lot of the things we were doing. Underlying many now-abandoned therapies was a presumption that our understanding of physiology was sufficient that we could manipulate it to beneficial ends. This presumption hints at an underlying set of hypotheses that we have which guide our thinking in subtle but profound and pervasive ways. Several years ago we coined the term the “normalization heuristic” (we should have called it the “normalization fallacy”) to describe our tendency to view abnormal laboratory values and physiological parameters as targets for normalization. 
This approach is almost reflexive for many values and parameters, but on closer reflection it is based on a pivotal assumption: that the targets for normalization are causally related to bad outcomes rather than being mere associations or even adaptations. I understand how, as a medical student, one might be tempted to think that if a laboratory parameter is abnormal and the means to “correct” it are available, she ought to intervene to make the parameter normal. In fact, we are enculturated to intervene in this way, and rewarded for being good stewards of our patients’ labs and physiological parameters during our training. But isn’t this a bit naïve? Consider the immune system and the myriad cascades that are activated during an infection with, say, Streptococcus pneumoniae. Complement, humoral immunity, cytokines, kinins, cell signalling, cell-mediated immunity, and on and on ad nauseam. For every pathway that we understand, how many are there that we have not yet discovered? For every pathway that we think we understand, how many aspects of that pathway are unknown or misunderstood? The daunting and elegant complexity of the immune system was wrought by millions of years of mammalian evolution. Discover a molecule, call it “beta-anti-protease-19,” then study it and learn that it skyrockets during gram-negative bacteremia, and I will bet you that beta-anti-protease-19 is protective, if I’m basing my bets on the adequacy of evolutionary processes. But tell me that the potassium of the septic patient is low, and that’s not what I’m thinking. I’m thinking I understand potassium. I know the intracellular and extracellular concentrations, I know about the pump, I know about the adrenals and the kidneys and angiotensinogen and aldosterone. And I’ve had a cardiology rotation, so I know that patients with low potassium who are having a myocardial infarction have arrhythmias and die.
I have a handle on this potassium problem, and I’m going to intervene and normalize the potassium of the septic patient. If our ancestors 100,000 years ago were getting infected with some organism, and that infection was making their potassium low, and they were dying as a result, would you not expect some sort of mutation in the aldosterone receptor that is activated during sepsis, or some other compensatory mechanism? Did evolution fail? Must she lean on the diligent medical student with a K-rider to cover her shortcomings? Is that the most likely explanation, or is the most likely explanation that low potassium during sepsis is an epiphenomenon of all the compensatory cascades, resulting from millions of years of evolution, that protect us from the ravages of infection? Worse, could low potassium during sepsis be adaptive, such that we are harming patients when we “replace” it? (Note that “replacement” implies that something is missing – even the way we talk about these things betrays our enculturated biases.) The same logic could apply to almost every perturbation we see in laboratory values and physiological parameters in sepsis and critical illness: anemia, electrolyte derangements, body temperature changes, tachycardia, hypo- and hypertension, tachypnea, delirium, leukocytosis, elevated biomarkers such as troponin and BNP, lactate (watch the linked video; you will be glad you did), d-dimers, coagulopathies, cortisol levels, anorexia during infection or stress – you name it. The sicker a patient gets, the more values go out of whack and the greater the deviations from normalcy. There are three general hypotheses that could explain these phenomena:

1. The abnormal values (e.g., low ionized calcium) are causing deleterious effects (e.g., hypotension, organ failure, and death). If this is true, normalizing them may be expected to improve outcomes.
2. The abnormal values (e.g., fever) represent evolutionary compensatory mechanisms that are protective (e.g., fever impairs microbial survival). If this is true, normalizing them will worsen outcomes.
3. The abnormal values are epiphenomena of an underlying process and are not causal in pathways to outcomes of interest. If this is the case, manipulating them will not change outcomes of interest, regardless of whether they stem from a deleterious pathway or a compensatory one.

We cannot assign probabilities to each of these three classes of hypotheses because we are largely ignorant of the underlying causal pathways. Yet we have traditionally assumed, with little justification, that many of these derangements are targets for intervention – implying that we think #1 is the best bet. My own reasoning is that, for any given perturbation, it is more likely to be compensatory (#2) or an epiphenomenon (#3) than to be part of a causal pathway to patient-centered untoward outcomes. Opinions will vary, but I think we should be more careful about assuming, explicitly or through our actions, that #1 is the correct underlying hypothesis. It is interesting to note that the values we try to correct happen to be those that we have a means of correcting. We don’t try to lower BNP or d-dimers because we don’t have the means to do so. We do try to “correct” (or miscorrect, if the phenomenon is compensatory and beneficial) electrolytes, anemia, body temperature, and abnormalities in heart rate and blood pressure. Perhaps understanding the physiology well enough to measure it, and isolating a compound to manipulate it, increases our confidence just enough that we start to make leaps of faith. Perhaps we would try to correct elevated BNP levels if we had a tool to manipulate one of the pathways leading to elevated BNP. Perhaps we accept our impotence only when we are forced to.
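The three hypotheses can be made concrete with a toy simulation. Everything in it is invented for illustration – the model names, the probabilities, and the assumption that “normalizing” simply resets the marker – so it is a sketch of the causal logic, not clinical evidence:

```python
import random

def simulate(model, normalize, n=100_000, seed=0):
    """Toy model: a latent illness severity drives the risk of death, and an
    'abnormal value' (marker) tracks severity. All numbers are made up."""
    rng = random.Random(seed)
    deaths = 0
    for _ in range(n):
        severity = rng.random()              # latent illness severity, 0..1
        marker_abnormal = severity > 0.5     # the derangement tracks illness
        if normalize:
            marker_abnormal = False          # we "correct" the value
        if model == "causal":                # hypothesis 1: derangement itself harms
            p_death = 0.10 + (0.40 if marker_abnormal else 0.0)
        elif model == "adaptive":            # hypothesis 2: derangement protects
            p_death = 0.10 + 0.40 * severity - (0.20 if marker_abnormal else 0.0)
        else:                                # hypothesis 3: epiphenomenon of severity
            p_death = 0.10 + 0.40 * severity
        deaths += rng.random() < p_death
    return deaths / n

for model in ("causal", "adaptive", "epiphenomenon"):
    print(model, simulate(model, normalize=False), simulate(model, normalize=True))
```

Only under hypothesis 1 does normalization lower mortality; under hypothesis 2 it raises it, and under hypothesis 3 it changes nothing. The sting is that, before intervening, the observed association between the abnormal value and death looks identical in all three worlds.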
The researcher has even more incentive than the clinician to presume that #1 is the correct hypothesis in a given scenario. She wants to understand physiology, but she ultimately wants to cure disease. If the molecule she discovers is associated with outcomes of critical illness, she is advancing science to some degree. If the molecule she is investigating is causal, she is advancing science and may be on the cusp of a therapeutic breakthrough. She wants the parameter to be causal, especially if the means of manipulating it are available. Twenty years ago, I looked at the house officers, attendings, nurses, and ancillary staff in the ICU as though they were omnipotent on some level. They knew all that physiology I had taken pains to learn, but they had an added power – they knew how to manipulate it, to usurp control of nature, to trump disease, and to cure patients. Now I realize that while they were surely doing some of that, they were also transfusing FFP and platelets for DIC; they were “normalizing” PaO2 and PaCO2 with large tidal volumes; they were giving sublethal doses of methylprednisolone because “nobody dies in the MICU without a trial of steroids”; they were heavily sedating and paralyzing patients so they wouldn’t “fight the ventilator”. I do not now believe that we are impotent and must stand by slack-jawed as nature runs its course, but I am now much more circumspect and measured in my hypotheses and in the actions I pursue based upon them. I have come to accept that I am merely quasipotent – and one of my highest duties is to determine, as best I can, when my physiological manipulations are doing good and when they are more likely doing harm. I select hypothesis #1 only when there is very, very good evidence that it is correct and that my physiological manipulations are safe – knowing that what seems safe today may appear reckless 20 years hence. This post by Scott Aberegg, MD, MPH originally appeared on his Medical Evidence Blog.