December 2nd, 2008 The New York Times health blog “Well” today posted:
A national panel of medical experts proposed significant and costly changes for training new doctors in the nation’s hospitals, recommending mandatory sleep breaks and more structured shift changes to reduce the risk of fatigue-related errors.
The report was issued by the Institute of Medicine (IOM). As expected, there are hundreds of comments on the blog. Many established doctors defend current training practices (or lament the loss of even more grueling ones in the past) as the best way to get maximal experience during training. Some residents also defend current practice, while others recount mistakes made while sleep-deprived, and call the system senseless. Most self-identified laypeople condemn as obviously poor patient care a system where doctors work for over 24 hours without sleep.
Sleep deprivation during medical/surgical training has been an interest of mine since I was in training myself in the 1980s. I even wrote a paper on it as a medical student. Below is the commentary I posted to the “Well” blog. I invite your thoughts here.
I’m a psychiatrist who has been involved in medical education my whole career. It’s a relatively easy specialty in terms of training hours. But I was a med student and had a medical internship like other MDs, and was appalled by the hazing justified as a necessary educational experience. Like a fraternity initiation, each generation of doctors imposes it on the next to keep medicine special, to maintain a sharp in-group/out-group distinction. It is also perpetuated institutionally thanks to the unbeatable economics of paying a highly trained, intelligent workforce minimum wage.
There is no way to learn everything in training, whether residency is 3 years or 10. The conceit that the current system teaches residents “everything they need to know” leads to calls to add residency years to make up for reduced hours. But how did we determine we’re teaching residents the “right” amount now?
In an era of evidence-based medicine it is medical training itself that resists the application of empirical science. Plenty of studies show cognitive and interpersonal deficits with sleep deprivation. There are none I’m aware of that show these effects can be trained out of people, nor that long call hours “teach” residents to make hard decisions in the middle of the night. If we rely on personal anecdotes, my experience says that sleep deprivation teaches trainees that working half-awake is acceptable as long as you survive the ordeal, that procedures are more important than talking with patients — there goes prevention and lifestyle changes — and that anyone who criticizes this heroic undertaking is a wimp who “just doesn’t get it.”
Certain rote practices like CPR or running a code improve with mindless repetition, but sensitive interpersonal skills (e.g., discussing a patient’s cancer diagnosis) do not improve by “practicing” them over and over when you can’t think straight. The human qualities of great doctors — caring, sensitivity, interpersonal nuance — are profound gifts. It is a cruel and misguided system that devalues these gifts in order to maximize the repetition of protocols and procedures.
Where to go from here?
1) Obviously, the IOM’s changes will cost money. Other countries with excellent health care systems have found a way, and we can too.
2) Medicine is already too complicated to sign out (i.e., hand off) patients in the informal way we do now. Electronic medical records with built-in error checking are inevitable in the near future. That’s a good thing, particularly at this error-prone step.
3) The “ownership” of patients is a real issue, and may be made worse by a shift-work mentality. The solution is not to avoid shifts — they are inevitable in any business that is open 24/7 — but to (re-)instill a cultural norm that caring about *people* is expected, and frankly more important than memorizing the last 5 days of electrolyte values. I’d rather be treated by a well-rested doctor who cares about me but has to look up the labs.
http://www.stevenreidbordmd.com/blog.html
— Steven Reidbord MD
November 30th, 2008 I’ve been online quite a few years now. Actually, I first used the internet in college in the late 1970s. There were only a handful of non-governmental university sites back then, and I happened to be an undergraduate at one of them. A decade later, in the late 1980s, I was a member of one of the first online communities, called The Well. In the years that followed, I was active in online forums about psychiatry and other topics.
By the early 1990s, after my psychiatry residency and research fellowship, I had begun to think about the psychological dynamics of online communication. Now that it’s almost 2009, many dissertations and books have been written on the subject, online communities are commonplace (e.g., see here, here, and here), and most everyone is at least somewhat familiar with online communication. But nearly 20 years ago it still felt new and ill-defined, and an idea dawned on me that seemed to explain a lot about what I saw happening online.
In the 1980s, the main place for freewheeling discussion online was Usenet, a huge collection of topic-based forums (newsgroups) covering every imaginable subject. From highly technical discussions of computer technology to frivolous chatter about celebrities, this text-only, worldwide, generally anonymous medium served as a massive reservoir of information and “computer culture.” In addition, local “bulletin board systems” (BBSs) provided similar forums on a much smaller scale, and individual email discussion lists arose where subscribers could read and post commentary outside of Usenet. In all of these, emoticons (e.g., smiley faces), in-group jokes, and acronyms (like LOL for “laugh out loud”) were popularized, and have left their mark ever since.
What most caught my attention, though, were the extremes of emotion expressed. I was struck by the incredible hostility many users directed toward others. Snide putdowns, withering sarcasm, and utter contempt were fair game and surprisingly common. This was collectively called “flaming,” as in blasting someone with flames. Flaming was social behavior that would never be tolerated FTF, IRL (face-to-face, in real life).
My first thought about flaming was that it existed because flamers could get away with it. They were untraceable due to anonymity and unreachable due to lack of physical proximity. There were typically no consequences for flaming other than flames directed back in response. But this failed to account for the hostility in the first place. Are young men (the demographic of most internet users back then) naturally inclined to such levels of raw aggression anytime they can get away with it? Not entirely implausible, but a sad conclusion if true.
Then I noticed the opposite as well. Organized online dating would not catch on until the advent of the World Wide Web and commercial websites in the 1990s, but cyber-romance was already well-known in 1980s computer culture. People who scarcely knew each other — and only through text, no pictures — would flirt eagerly online. There was an idealization of the unseen other, a belief or hope that he/she was everything one dreamed of. Of course, stories were rampant of disappointing (or horrifying) results when a meeting finally took place FTF. But these romances struck me as the other side of the flaming coin. Hmm, idealization and devaluation — where had I heard of those before?
Transference is a concept Sigmund Freud introduced to describe what happens in the psychoanalytic situation. The analysand (the patient in analysis) experiences feelings toward the analyst that are “transferred” from parents or other important figures in the analysand’s life. These feelings are often erotic if positive, harshly rageful if negative, and often a mixture of both. Successfully analyzing the transference is a main task of psychoanalysis, and is thought to be central to its curative effects. For this reason, transference is promoted in psychoanalysis. By sitting outside the analysand’s field of view, refraining from personal disclosure, and maintaining “therapeutic neutrality,” the analyst provides space for the patient to ascribe facts and feelings to the analyst and to react to them. These facts and feelings are created in the patient’s imagination, and are not realities about the analyst. However, the patient’s emotional reactions are real, and shed light on long-buried feelings and emotional assumptions about others.
I realized that online anonymity, and online communication more generally, unwittingly encouraged transference. During the 1980s era of text-only discussions on the internet, users literally sat outside each other’s field of view and provided little personal disclosure. (This formed the basis of a now-famous 1993 cartoon in The New Yorker.) Facts and feelings were ascribed to the unseen others online. These presumed facts and feelings in turn prompted primitive erotic and aggressive feelings, and these in turn led to cyber-romance or flaming. The main difference from psychoanalysis was the lack of a professional to interpret and contain the transference. Thus, there was nothing healing or curative about it; it was bad-tasting medicine with no benefit.
Transference helps to explain important aspects of online dynamics in the 1980s. But much has changed since then. The graphical user interface of the World Wide Web began to erode user anonymity by the mid-1990s, as home pages with photographs and personal information were uploaded by the millions. This trend has accelerated with Web 2.0 and current social networking sites like MySpace, Facebook, Twitter, and many others. Anonymity now seems to be the last thing on the minds of internet users. Personal details and the minutiae of everyday life are routinely shared online. Job-seekers google potential employers and vice versa. Potential romantic partners google one another. In theory this discourages transference; my own unscientific impression is that flaming has become passé, cyber-romance more cynical than idealistic. I now wonder if we need not fear unwitting transference so much as its opposite: an undue familiarity that makes romantic love and admiration of heroes less possible.
November 18th, 2008 A psychiatric revolution began in the mid-1950s with the marketing of Thorazine, the first neuroleptic (antipsychotic) medication. Thorazine and similar drugs quelled psychotic agitation and quieted auditory hallucinations (voices). This allowed large numbers of state-hospital patients to be “de-institutionalized,” i.e., released to the community. While the ultimate promise of this revolution was broken in the U.S. by an inadequately funded community mental health system, the medications themselves were a huge leap forward. In contrast to other types of tranquilizers and sedatives, neuroleptics specifically targeted certain psychotic symptoms, apparently due to effects on the brain chemical dopamine.
Unfortunately, the neuroleptics brought many side-effects as well. Stiff, shuffling, drooling patients became a common sight in clinics and on the street. Thinking and reactivity were slowed. Nearly all developed some degree of a chemically-induced form of Parkinson’s Disease, and many suffered an irreversible movement disorder called tardive dyskinesia. A few died of a severe reaction called “neuroleptic malignant syndrome.” These side-effects were well-known, noxious to the patient, and feared by doctors. It took a condition as terrible as psychosis to justify using such harsh and potentially dangerous treatment.
In 1993 another psychiatric revolution was triggered by the marketing of Risperdal, the first widely prescribed “atypical” neuroleptic — atypical in that it acted on serotonin receptors as well as dopamine, and seemed to lack many of the movement side-effects of typical neuroleptics. Patients tolerated it more easily, and as a bonus it also seemed to help a wider range of psychotic symptoms (“negative” symptoms as well as the “positive” symptoms that the older drugs affected). Other atypical neuroleptics were released: Zyprexa, Seroquel, and later Geodon and Abilify. Psychiatrists soon switched most chronically psychotic, e.g., schizophrenic, patients to this new class of medication.
However, the atypicals brought problems of their own. The first was price: Atypical neuroleptics cost several times as much as the medications they replaced, although some studies show this cost is offset by decreased hospitalization. Several cause significant weight gain. In recent years a metabolic syndrome has been identified where patients develop high cholesterol, diabetes, and related abnormalities. There are risks of hormonal (prolactin) imbalance and heart arrhythmia. While atypical neuroleptics may be safer than their predecessors, they are not safe medications by a long shot.
Nonetheless, new and expanded uses of the atypicals keep appearing. Typical neuroleptics have long been used in the acute manic phase of bipolar disorder, not only for the psychosis that is sometimes present, but also as a sedative until the mood stabilizer (e.g., lithium) takes effect. But the advent of atypical neuroleptics led to much wider use in bipolar disorder, i.e., ongoing maintenance use. Eventually the FDA approved these medications as single-agent treatments for bipolar disorder. Atypicals are also used widely to treat autism, ADHD, and simple agitation in children as well as adults (particularly the demented elderly), as an adjunct to antidepressants (interesting discussion here), and even as simple sleep aids.
Considering the risks associated with atypical neuroleptics, they are being overused — misused — on a wide scale. Indeed, today’s New York Times reported that a panel of federal drug experts roundly criticized the cavalier use of these medications in children. Adults, too, are being over-prescribed these drugs. They are useful for severe disorders, but dangerous and unwarranted when prescribed with misguided complacency.
November 11th, 2008 Around the time I was finishing medical school I published a short essay on subjectivity in psychiatric assessment. The American Psychiatric Association had released the third edition of its Diagnostic and Statistical Manual just a few years before. When it came out in 1980, DSM-III was a revolutionary update: It provided specific criteria for diagnosing disorders, not the narrative descriptions of the previous editions. In my essay I pointed out that the new, precise-sounding criteria still included social judgments. For example, “inappropriate affect” was a criterion for schizophrenia, even though inappropriateness is assessed in relation to a given situation, depends on cultural norms, and is a judgment call. My point was not that we should avoid social judgments in psychiatric assessment, but that they are inevitable, whether expressed in narrative descriptions or in numbered lists of diagnostic criteria.
Fast forward 20+ years. The current (11/10/08) issue of The New Yorker features an article by John Seabrook called “Suffering Souls: The search for the roots of psychopathy.” It presents an overview of “the condition of moral emptiness that affects between fifteen to twenty-five per cent of the North American prison population….” Seabrook notes that psychopathy is not a diagnosis in the current DSM-IV; the more general antisocial personality disorder subsumes it. Much of the article revolves around brain imaging studies using fMRI to discern which parts of the brain are over- or under-utilized in psychopaths versus normals.
Functional imaging like fMRI has grown huge in psychiatric and brain research, almost to the point of becoming a fad in some areas. Everyone wants to know what parts of the brain “light up” in different disorders. Psychopathy is no exception, and such studies may uncover crucial findings about the condition. More interesting to me, though, is that psychopathy is defined almost wholly by social judgment. It causes no distress in the person who has it, and generates virtually no clinical signs outside the social sphere.
Early in the article Seabrook says the psychopath’s main defect is “a total lack of empathy and remorse.” That was the way I learned it, too. Such a definition categorically separates psychopaths from normals in a manner that is non-situational, relatively free of cultural bias, and free of nuanced judgment calls. However, that categorical definition is also the last we hear of it. The rest of the article takes a dimensional, matter-of-degree approach. First presented is the well-known Psychopathy Checklist, or PCL-R, developed by Canadian psychologist Robert Hare. The PCL-R interviewer scores the subject on 20 items, including irresponsibility, parasitic lifestyle, lack of empathy, and shallow emotions. Most researchers agree that psychopathy is present above a certain threshold score. While the PCL-R has good face validity, using it to assess psychopathy requires judgment calls that take the situation and culture into account. As mentioned above, this is inevitable in much of psychiatry; we just need to be careful about it.
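To make the dimensional-to-categorical step concrete, here is a toy sketch in Python. The item names echo the list above; the scores are invented, and the 0-2 rating scale and the cutoff of 30 are commonly cited PCL-R conventions rather than details taken from Seabrook’s article.

```python
# Toy sketch: converting a dimensional checklist total into a categorical call.
# Scores are invented for illustration; the 0-2 item scale and cutoff of 30
# are commonly cited PCL-R conventions, not details from the article.

CUTOFF = 30  # commonly cited research threshold (total range 0-40)

def total_score(item_scores):
    """Sum the items, each rated 0 (absent), 1 (partial), or 2 (clearly present)."""
    assert all(score in (0, 1, 2) for score in item_scores.values())
    return sum(item_scores.values())

ratings = {
    "irresponsibility": 2,
    "parasitic lifestyle": 1,
    "lack of empathy": 2,
    "shallow emotions": 1,
    # ...the real instrument has 16 more items, rated the same way
}

score = total_score(ratings)
label = "meets" if score >= CUTOFF else "does not meet"
print(f"Total {score}/40: {label} the categorical threshold")
```

The arithmetic is trivial; all of the judgment calls described above live in how each item earns its 0, 1, or 2, which is exactly the point.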
The danger surfaces later in the article. Seabrook is driving with Robert Hare, who sees another driver run a red light. Hare remarks: “Now, that man might be a psychopath. That was psychopathic behavior certainly — to put others in the intersection in danger in order to realize your own goals.” Seabrook observes that this kind of behavior is commonplace, and “can make it possible to see psychopaths everywhere or nowhere.”
The pejorative association of psychopathy with serial murders and other horrible crimes underscores the liability of seeing disorders “everywhere or nowhere.” In the last 25 years the DSM list of official psychiatric disorders has mushroomed. As disorders are codified — and importantly, as medications are marketed for them — more and more people receive diagnoses. What was once shyness has become social anxiety disorder, treatable with SSRI antidepressants. Poor concentration and an inability to sit still have become ADHD, treatable with stimulants. Social-context judgments (and financial incentives) grease the wheels of “diagnosis creep.”
Social judgments in psychiatric assessment are inevitable, but that does not mean we can be casual about them. On the contrary, their very subjectivity argues for closer scrutiny and care, as the pitfalls resulting from bias and intellectual laziness are grave. If every personality quirk is a disorder, then psychiatric diagnosis loses meaning. Worse, the parameters of normality narrow. Tolerance of difference retreats in step with “diagnosis creep.”
Psychopathy and schizophrenia are useful concepts. Let’s keep them that way, despite the shifting sands of the social milieu.
November 10th, 2008 Stanley Fish has an interesting opinion piece in today’s New York Times. In September the American Psychological Association (APA) reversed its position and now bans its members from participating in some military interrogations and all torture, a stance taken earlier by the American Medical Association and the American Psychiatric Association. (The psychiatrists’ group is also known as APA; in order to avoid confusion, for today APA means the psychologists.) Fish wonders why it took the psychologists so long.
Fish first suggests that psychologists reached this position more slowly because medicine and psychiatry are fundamentally healing arts, whereas psychology is an academic field that pursues knowledge, not just healing. As might be expected, several commenters quickly note that medicine and psychiatry also engage in academic research, courtroom testimony, and other non-healing pursuits. Nonetheless, Fish has a point. Medicine, and by extension psychiatry, have a long and relatively narrow history of focusing on the well-being of individual patients. In contrast, psychology originated in academia; clinical psychology is a relatively new addition to a scholarly and experimentalist field. Perhaps that history made it harder for the APA to separate itself from the fascinating if troubling human realities of military interrogation and torture.
However, this account seems incomplete, and it appears that Fish feels that way, too. His argument shifts to the alleged difference between pure and applied knowledge:
[T]he moment psychological knowledge of causes and effects is put into strategic action is the moment when psychology ceases to be a science and becomes an extension of someone’s agenda.
He argues that psychology is heir to the ancient discipline of rhetoric, the art of persuasion where the “emphasis is not on what is true, but on what works.” The susceptibility to “base appeals” has been “mapped and scientifically described by the modern art of psychology.” He concludes: “Applied psychology can never be clean.”
Several comments that follow Fish’s piece take issue with the last line. What does it mean to be clean? Can any real-world field be clean? Others (e.g., here and here) note the long history of military funding of psychology, and suggest a “follow the money” approach might address Fish’s original question. Still others argue that the involvement of psychologists can be merciful in settings of interrogation and torture, as the alternative is sheer physical agony at the hands of the untrained. One professor of rhetoric defends his maligned field. And so it goes.
In my view there is no sharp distinction between pure and applied knowledge. Every bench scientist and laboratory researcher hopes his or her work will someday prove useful. This hope fuels the endeavor. The problem is, one never knows in advance the uses any knowledge may serve. Atoms for peace, or atomic weapons? Microbiology for vaccines, or for bioterrorism? Since it is impossible to know in advance, ethical prohibitions should attach to behavior, not to knowledge itself. There is no such thing as unethical knowledge.
“Slippery slope” arguments applied to behavior are very pertinent here. If it is wrong as a physician to be present in the torture chamber, is it also wrong to be available nearby? How about acting as personal physician to the torturers, so they can have long, healthy careers torturing others? These are not easy lines to draw, but the distinctions are important. Likewise, if psychology in the service of brutally extracted confessions is wrong, how about psychology in the service of involuntarily altering a pedophile’s obsession? Or influencing an addict to turn away from recreational drugs?
These kinds of lines are best drawn by larger society, not professional organizations. Harsh interrogation and torture (which exist on a continuum) are unethical for everyone, not just mental health experts. The behavior itself is wrong. The specific means used and the professional status of its practitioners are immaterial. Position statements by professional organizations are largely symbolic in the context of a larger society, national and global, that still condones these practices.
November 5th, 2008 In watching the concession and acceptance speeches last night, I was struck by the apparently sincere willingness of both Mr. McCain and Mr. Obama to “reach across the aisle” and work together after their bitter campaign fight. To me, this feels much like a hard-fought sports competition, where in the heat of battle each side seems to want nothing less than the annihilation of the other. Yet there are congratulations all around immediately after it is over. I am also reminded of competitive (often courtship related) behaviors in other animals, which usually end well before death or serious injury.
The whole idea seems hopeful. It is a “regression in the service of the ego” (to use Ernst Kris’s psychoanalytic phrase) that we can become so primitive and impulse-based, but only temporarily, and for useful social and political purposes. Could a nasty, divisive political season be healthier for us as a society than a quiet, civilized one? I’m not ready to claim that, but neither am I ready to condemn it — as long as we’re good sports about it afterward.
November 2nd, 2008 In my last post, I wrote about the use of placebos in clinical practice — or more accurately, about giving medical treatments based on psychological comfort, not physiological effect. However, the area where placebos are most used and accepted is human research, not clinical practice. Psychiatric research in particular introduces some interesting conceptual issues regarding the use of placebos.
To decide whether any new medication actually helps patients, the “randomized, double-blind, placebo controlled study” is the gold standard. In this type of experiment, research subjects who all have the same disease (or are all equally healthy) are randomly assigned to take either the new medication or a placebo: an identical twin of the medication lacking any of its active ingredients. The subjects do not know whether they are in the medication arm or the placebo arm of the study — they are “blind” to this fact. Their doctor, or whoever rates their outcomes in the experiment, does not know either; this makes the experiment “double-blind.” If the only difference between the two arms of the study is the presence of the active ingredients, then any outcome differences, on average, between the groups can be attributed to those ingredients.
The reason such experiments are double-blind, and the reason for the placebo in the first place, is to separate the medication’s physiological effects from any rating bias or placebo effects. Outcome ratings can be distorted by raters’ expectations; people often see what they want to see. Placebo effects are psychological factors in the subjects that improve rated outcome. These include wanting to please the experimenter by getting better, hope that the medication will work, improvement due to feeling attended to and cared for, etc. A “controlled” experiment carefully dissects away rating biases and placebo effects, so that only physiological effects count.
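To make that logic concrete, here is a minimal simulation sketch in Python. Every effect size in it is invented purely for illustration, not drawn from any study; the point is only that the placebo effect and the rater’s bias show up equally (on average) in both randomized arms, so the between-arm difference estimates the physiological effect alone.

```python
import random

random.seed(0)  # reproducible illustration

def simulated_outcome(gets_drug: bool) -> float:
    """One subject's rated improvement (arbitrary units) under assumed effect sizes."""
    drug_effect = 4.0 if gets_drug else 0.0    # hypothetical physiological effect
    placebo_effect = random.gauss(3.0, 1.0)    # hope, attention, feeling cared for
    rater_bias = 1.0                           # blinded rater: same optimism in both arms
    noise = random.gauss(0.0, 2.0)             # everything else
    return drug_effect + placebo_effect + rater_bias + noise

n = 1000  # subjects per arm, randomly assigned
drug_arm = [simulated_outcome(True) for _ in range(n)]
placebo_arm = [simulated_outcome(False) for _ in range(n)]

def mean(values):
    return sum(values) / len(values)

print(f"Drug arm mean improvement:    {mean(drug_arm):.2f}")
print(f"Placebo arm mean improvement: {mean(placebo_arm):.2f}")
print(f"Difference (drug effect estimate): {mean(drug_arm) - mean(placebo_arm):.2f}")
```

Blinding is what justifies treating the rater’s bias as identical in both arms; if raters knew the assignments, that term could differ between groups and contaminate the comparison.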
There is great value in knowing that an antibiotic and a sugar pill lead to different medical outcomes. Patients can feel attended to and cared for — they can feel “better” — and still die of infection. Disease has a life of its own, apart from subjective experience. Western medicine prides itself on its scientific foundation, and rightly so. We don’t want merely to feel better, we want to be better.
However, the situation is less clear with many psychiatric disorders. Depression and anxiety, for example, do not appear to have lives of their own, apart from the patient’s subjective experience. If the patient feels better, he or she is better. Thus, the rationale for placebo controlled studies in this area bears further reflection. Why does it matter if mood or anxiety improvement from a medication is physiological, or “merely” a placebo effect?
One answer is that we hope to further our scientific/medical understanding of such disorders, and placebo controlled studies help us do that. Another is that drug makers and the FDA justify the value of pharmaceuticals by virtue of their active ingredients, not the psychological features surrounding their use. Yet another is that placebo effects are idiosyncratic and vary across the population, and we strive for more predictability. Nevertheless, as discussed in my previous post, for a given patient it ultimately doesn’t matter why a treatment works, as long as it really does. (For a more technical discussion, see this British Journal of Psychiatry editorial.)
Another fascinating conceptual problem is whether and how to include placebo controls in psychotherapy research. Psychotherapy, like depression and anxiety medication, treats distress that is inseparable from subjective experience. Thus the importance of differentiating “active ingredient” and “placebo” effects is unclear at best. In addition, the treatment itself consists of the very psychological influences a placebo controlled study aims to remove.
By analogy with medication studies, psychotherapy researchers have tried to isolate the “active ingredients” in therapy, and to fashion studies with a treatment arm and a placebo arm differing only by the presence of those ingredients. So far, no active ingredient has been identified as essential (although the overall efficacy of psychotherapy is not seriously in question, see here and here). What if there are none — what if psychotherapy is all “placebo effect?”
This sounds like psychotherapy is quackery, much like the headline “Half of Doctors Routinely Prescribe Placebos” sounds like quackery exposed (see my previous post). But if psychotherapy is compared not to medication but to other human relationships, the “active ingredient” idea seems out of place. The helpfulness and support of a parent, teacher, spouse, sports coach, or minister cannot be reduced to any specific ingredient. Growth or self-improvement gained from such relationships comes organically and emergently, not as a result of a discrete intervention. This does not make the value of such relationships, or the benefits gained, any less real.
Placebos are essential for careful medical research, yet carry overtones of fakery and worthlessness. When the placebo concept is applied in an overly broad fashion, its negative connotation can tarnish highly beneficial human interactions that are neither fake nor worthless.