What counts as a medical issue?

It has become a sign of legitimacy to call a personal problem “medical.”  This aims to distinguish the problem from those of morality or character.  It implies both that the problem is serious, and that it is unbidden and largely out of the sufferer’s control.  Unfortunately, it isn’t clear what exactly qualifies as “medical,” so this label serves more as a rhetorical device than a scientific finding.

Alcoholism is the paradigm and perhaps least controversial example.  Through the 19th century, alcoholism was variously declared a disease, or a matter of will and character.  The disease model gained prominence in the 1930s and 40s with the “powerlessness” identified in the 12 Steps of Alcoholics Anonymous, as well as researcher E.M. Jellinek’s descriptions of progressive stages and subtypes of alcoholism.  The American Medical Association declared alcoholism an illness in 1956 and has endorsed the disease model ever since, partly as a strategy to ensure insurance reimbursement for treatment.

The model expanded to include other abused substances with the formation of Narcotics Anonymous in the 1950s, and as a result of widespread recreational drug use in the late 1960s and early 1970s.  The specialty of addiction medicine was first established in 1973 in California.  The American Society of Addiction Medicine now states: “Addiction is a primary, chronic disease of brain reward, motivation, memory and related circuitry.”  Proponents of the disease model of addiction cite many documented brain changes and a plausible neuropathology, as well as the presence of genetic risk factors, cognitive and emotional changes, impaired executive functioning, and disability and premature death.  The model purportedly destigmatizes addicts — they are no longer “bad” or “weak” people — thereby making it more acceptable for them to seek treatment.

Nonetheless, the disease model of addiction remains controversial.  In addition to the existence of alternative models, the disease model itself has been criticized.  Some believe it removes personal choice and responsibility, and actually contributes to the problem of addiction.  Others cite surveys of American physicians who consider alcoholism more a social or psychological problem — even a “human weakness” — than a disease.  Critics note that about 75% of those who recover from alcohol dependence do so without seeking any kind of help, and that the most popular and recommended treatment, Alcoholics Anonymous, is a fellowship and spiritual path, not a medical treatment.

Behavioral addictions to gambling, sex, pornography, the internet, video games, and food are described in language that explicitly parallels addiction to alcohol and drugs.  The same brain pathways are implicated.  Accordingly, these problems are called medical as well.

Addiction is not the only domain that has been declared, often somewhat stridently, to be medical.  Depression has been deemed a medical issue for several decades now, using much the same rationale.  The push to frame all psychiatry as neurobiology is a larger matter.  But here, too, documented brain changes, genetics, and characteristic signs and symptoms underlie a rhetoric that may, or may not, decrease stigma and facilitate treatment.  Moreover, a number of other behaviors and traits, formerly considered bad habits or personality quirks, are now reified as discrete psychiatric disorders (not the same as diseases, but close): shyness is now social anxiety disorder, misbehaving kids have oppositional defiant disorder, and so forth.  What are the risks in subsuming more and more of human experience into nosological categories?

One risk is that medicalizing problems may hide political or other bias.  The most shocking historical examples include drapetomania in the U.S. and the misuse of psychiatry in the former Soviet Union.  Yet even well-meaning efforts to highlight a social problem, give it gravitas, and impart a clinical, impersonal air to one’s opinions can result in this sort of over-reach.  Examples include the “politics is part of pediatrics” antiwar stance of famed physician-author Benjamin Spock, and Physicians for Social Responsibility, a group that opposes nuclear arms from a medical perspective.  Most recently, some mental health professionals have published impassioned statements characterizing President Trump’s behavior in medical/psychiatric terms.  Such statements have no medical purpose: they neither clarify Mr. Trump’s behavior (which is well known to all), nor change it.  Their effect, if any, is solely on electoral politics.  Medical language can thus amount to little more than grandstanding.

A related risk of medicalization is that it may lurch toward absurdity.  Suicide, that profoundly personal matter studied by poets and philosophers as well as scientists, also may be deemed a disease.  This confuses disease with symptom — as if “headache disease,” for example, were touted as a new diagnostic entity.  No doubt there will soon be measurable brain findings that distinguish suicidal people from non-suicidal people; no doubt such findings, too, will soon distinguish the state of having a headache from the state of not having one.  In this nascent era of functional brain imaging, is it sufficient to see something “light up in the brain” to call it a medical problem?

Doing mental arithmetic is detectable by fMRI.  Is math a medical issue?

A plainly medical disease such as diabetes results from nature and nurture, genetics and environment.  Yet its causes are not what make it medical.  The effect of diabetes on the human body, the fact that it historically has been treated by physicians, and to a lesser degree the nature of its treatment make it medical.  Addiction also appears to result from genetics and environment, to have consistent effects on the human body, and for a few decades at least, has been treated by physicians.  Its treatment, though, is mostly non-medical in the usual sense of the term, i.e., not pharmacological or surgical.  There are strong behavioral and psychological aspects to addiction, and often sociocultural ones as well.  It is thus not surprising that its status as medical remains, to some, a matter of debate.  However, by the time we get to war, nuclear arms, a heretofore unimagined presidency, or suicide, we are talking about matters that have no consistent findings in the human body, are not historically treated by physicians, and respond almost exclusively to non-medical solutions.  The phrase “medical issue” can’t stretch to cover this territory, no matter how fervently physicians would like to weigh in.

In the future, more and more brain function will be open to scrutiny.  As our brains mediate all human behavior, advancements in functional imaging and similar technology may tempt us to declare any and all products of the human mind “medical issues.”  Problems such as prejudice, racism, violence — or, from other viewpoints, liberalism, collectivism, and the like — may be claimed as the physician’s to treat.  It will be hard to resist this temptation; doctors like to fix things.  But the cost of succumbing is to reduce medicine to threadbare rhetoric, weakening our moral status as healers of the human body.



“60 Minutes” ran a segment last Sunday on electroconvulsive therapy (ECT), better known as shock treatment.  Kitty Dukakis was interviewed as a long-time recipient and advocate of ECT for her severe depression.  The piece was almost entirely positive, save for brief mention of memory loss as an unfortunate side effect.  This was soon left behind by video of a treated, newly smiling patient declaring no such problem, and by mention of new technologies in development, e.g., magnetic seizure therapy or MST, that promise to mitigate this issue.  ECT was presented as an underappreciated miracle treatment — and miracle treatments, unfortunately, always make me worry.

In addition to reassuring the public — ECT isn’t painful and doesn’t feel punitive anymore, in contrast to its fictional depiction 43 years ago in “One Flew Over the Cuckoo’s Nest” — the segment also admonished psychiatrists who don’t use ECT often enough.  According to “60 Minutes,” severely depressed patients are languishing for years on ineffective antidepressants, imprisoned by their doctors’ outdated prejudices and unfounded fears.

Like so much in the news these days, this report oversimplified to make a rhetorical point.  The reality is rather different.  I can only recall two patients in my decades-long practice who possibly would have benefitted from ECT.  (I’m not trained to administer it, so they would have been referred to a colleague.)  All the other depressed folks I’ve seen, hundreds of them, improved on psychotherapy, standard antidepressant medicine, or both.  Or they had longstanding personality issues that made them depressed — a vexing problem to be sure, but not one ECT can fix.  The great majority were eventually helped by more benign and far less expensive treatment than ECT.

This is not surprising considering that most depression is of modest severity.  And the modest severity of most depression is itself not surprising.  In an effort to capture as many cases as possible, the DSM-5 diagnostic criteria for major depressive disorder encompass chronically unhappy people who are still able to work or attend school, people with no psychomotor slowing, and people who have never given suicide a serious thought.  While severe melancholic depression looks and acts very much like a disease worthy of a medical intervention under general anesthesia, i.e., ECT, most of what we call depression these days does not.  A great many people hobble along, not really enjoying life but not being severely impaired either.  Suggesting ECT for this group is irresponsible.

The perceptions and history of ECT remain roadblocks as well.  Even voluntary ECT is the epitome of paternalistic medicine: a powerful, technological treatment done to a passive patient.  And while most ECT is now voluntary and requested, historically it wasn’t.  Some ECT in the U.S. is still court-ordered today.  This again follows from its use in the most severely depressed patients, who may exhibit nihilistic delusions, or are so impaired they can’t participate in their own treatment.  Even if it is highly effective and without better alternatives, restraining someone in order to administer anesthesia and an electric shock that causes a grand mal seizure is a tough notion for the public to accept.

Feel-good pieces on television are no match for the discomfort most of the public feels about shock treatment.  I would not hesitate to recommend it for severely depressed, non-functional patients, especially those with classic melancholic depression who have failed full trials of standard antidepressant medication.  Also, ECT may be a good first choice when depression is accompanied by mood-congruent delusions.  But these are unusual conditions where the established efficacy of ECT outweighs its attendant memory loss, cost, and apprehension.  Realistically, we ought to think of ECT as we do life-saving surgery: an essential option when needed, but hardly something to be popularized or welcomed lightly.

Managing assaultive behavior

In Toronto on April 23, 2018, Alek Minassian intentionally drove a rented van into pedestrians, killing 10 and injuring at least 15. Later the same day, Constable Ken Lam of the Toronto Police Service arrested Minassian after a brief, tense standoff. As seen in a widely circulated video, Minassian dared the officer to shoot, and feigned drawing a gun, most likely to commit “suicide by cop.”

Constable Lam, however, did not shoot. Instead, he took specific steps to de-escalate the confrontation, and arrested Minassian without further bloodshed. Commenters praised his actions, contrasting them with many police confrontations in the U.S., where even unarmed suspects are killed in a hail of bullets. According to the U.S. Department of Justice, “law enforcement officers should use only the amount of force necessary to mitigate an incident, make an arrest, or protect themselves or others from harm.”

The use-of-force continuum begins with the mere presence of the officer.  It then progresses to verbal requests, commands, non-lethal physical tactics or weaponry, and ends with lethal force.  Suggested reasons for over-reliance on lethal force by U.S. law enforcement include racism, an assumption that suspects are armed and thus dangerous to the arresting officers, low rates of prosecution for alleged police brutality, an American culture of violence, a police culture of intimidation, and police training issues.

Regarding the last of these, only two days prior to Lam’s arrest of Minassian, Douglas Starr wrote an opinion piece for the New York Times arguing that police have a lot to learn — from hospitals.  Starr notes that hospital workers often deal with volatile people, yet are not permitted to attack, shoot, or otherwise harm them.  As a result, these institutions have developed techniques for de-escalating potentially violent situations.  Courses in “managing assaultive behavior” are widespread, and evidence suggests they are effective in decreasing violence in health care settings, for example by defusing it at a verbal, pre-physical stage.  Since 1993, California law (AB-508) has mandated that hospital staff working in behavioral health or emergency departments receive employee training in assault/violence prevention.

While police officers in some western nations, e.g., Great Britain, all receive de-escalation training rivaling that of California hospital workers, most U.S. police officers do not.  Such training is not required in 34 states; most police and sheriffs’ departments in those states offer little or no de-escalation training (but a great deal of firearms training).  For example, APM Reports compiled a table showing the amount of de-escalation training for police in the Twin Cities metropolitan area of Minnesota, with wide variation from one suburb to the next.  Until last year, most police and sheriffs’ departments in Georgia documented less than one hour of training per officer in the preceding five years.  Starting last year, however, all Georgia officers are required to take one hour of de-escalation training annually.

The Police Executive Research Forum, a membership organization of law enforcement leaders and academics, is developing a program called ICAT, to standardize de-escalation training nationally.  ICAT assists officers in dealing with several types of encounters that too often result in lethal force.  For example, those behaving erratically, and perhaps dangerously, due to mental illness or drug abuse often react more favorably to a slower, calming approach.  “In many instances, the goal is for the first responding officers to buy enough time so that additional, specialized resources can get to the scene….”  Non-firearms incidents, in which a subject is unarmed or armed with a weapon such as a knife or baseball bat “often present officers with time and opportunity to consider a range of responses.”  Perhaps most important, ICAT training

focuses on protecting officers from both physical threats and emotional harm…. The goal is to help officers avoid reaching the point where their lives or the lives of others become endangered and the officers have no choice but to use lethal force.

This last point is crucial, as fear and self-preservation typically provoke excessive responses in everyone, including law enforcement personnel.  Faced with a threat, the fight-or-flight response takes hold.  However, police officers cannot flee and may thus react with lethal force.  This instinctive response may lead to over-reaction and unnecessary violence; it takes dedicated training to unlearn it.

In the end, the police are much like the rest of us.  We all react as we have learned or trained.  We all act to assure our own physical and emotional safety.  And unfortunately, we all rush crucial decisions in the face of pressure and stress.  De-escalation training is not only an overdue necessity for law enforcement, it would be a highly desirable means to promote nonviolence in society generally.  Imagine how different life would be, if instead of reflexively meeting threat with threat, we learned from childhood to de-escalate and calm those who threaten us out of their own agitation or insecurity.  Imagine how different our current politics would be.

Yes, there will always be criminals and sadists who stop at nothing but lethal force.  However, a drug-addled screamer on the street corner is not such a person.  Nor, apparently, is a mass murderer such as Alek Minassian.  He was stopped with firm words and a cool head.  That should be a lesson to American police officers — and to us all.

Road rage is all in your head

Two cars arrive at a stop sign at the same time.  Both start into the intersection.  One driver speeds through, while the other jams on the brakes, avoiding a collision.  This driver feels insulted, offended, diminished.  Who the hell does that other driver think he is?  He nearly killed me!

This scenario, and countless others involving merge lanes, contested parking spaces, and aggressive rush hour traffic, are set-ups for road rage.  The aggrieved party feels a flash of anger and hostility, and may swear aloud within the confines of his vehicle.  He may “give the finger” in a way the other driver may or may not see.  He may grumble to passengers about the lousy drivers in his town.  Sometimes the response is louder and more direct: yelling at the other driver, or even giving chase.  At the extreme, enraged parties physically retaliate with weapons, or by using their cars as battering rams.

What’s going on?  In a practical sense, the initial harm is often trivial.  A moment’s delay at a stop sign would be ignored under other circumstances.  The real trigger is what the behavior says about the perpetrator’s attitude — or more precisely, how it was interpreted by the “victim.”  Did the aggressive driver proclaim his time was more valuable?  Did he disregard or disrespect the other driver?  Was it a power play, a demonstration that “I can do whatever I want, and you’re powerless to stop me?”  Was it contemptuous?  “I don’t have to wait for the likes of you, you’re beneath my consideration.”

Actually, the offended driver doesn’t know.  One reason road rage is so prevalent is that the outsides of motor vehicles are inscrutable.  We can’t read the nonverbal cues of other drivers.  A car with a mean, aggressive driver who couldn’t care less whether you live or die looks very much like a car with a driver who honestly thought it was his turn to enter the intersection, and who would be mortified to know you were offended or frightened as a result of his actions.  While you were cursing and giving the finger, he may have been wincing and muttering “Oops, I’m sorry!”  But that was inside his own car.  You didn’t know.

Road rage, therefore, is nearly always self-generated.  It’s all in your head.  Do you tend to think of others as mean-spirited opportunists, ready to take advantage of you, disdainful of your wants and needs?  Or do you give strangers the benefit of the doubt, assume they meant no harm and didn’t aim to insult or diminish you?

Either attitude is contagious.  I recently visited a country with polite drivers.  I never felt stressed even if it wasn’t clear whose turn it was at an intersection.  It didn’t matter; we were all content to defer to the others.  In contrast, when traffic is dog-eat-dog, and when our self-worth rises or falls with our ability to cut through it efficiently, then everyone else is a rival and an obstacle.

None of this is unique to road rage.  Yesterday I was in a supermarket express checkout line, “15 items or less.”  (Um, “fewer.”)  Ahead of me another shopper was packing up three bags of groceries.  I stood there steaming as she slowly ended her cellphone call and took her good old time to pay the $63 she owed.  I rehearsed angry comments in my head: “I guess even people who can’t count still need to eat.”  I didn’t actually say anything.

Later I wondered what exactly irritated me so much.  I could have been equally delayed, yet completely untroubled, by any number of things.  It wasn’t the wait itself, it was my perception of the perpetrator’s attitude.  Apparently the supermarket’s rules didn’t apply to her.  She was self-important and inconsiderate.  Looming even larger psychologically was her attitude toward me.  I imagined she didn’t care about me at all.  My inconvenience was not her concern.  I felt disrespected, not taken into account.

These situations happen all the time.  A patient of mine recently shared how angry he feels when his teenage kids fail to turn off lights after he’s reminded them repeatedly.  We agreed it’s not the trivial increase in his electricity bill that bugs him.  It’s his perception of their laziness, their disrespect towards him and his values, perhaps their willful defiance.

In all these settings, indeed throughout our lives, we react to interpersonal transactions taking place in our own heads.  Occasionally our perceptions of contempt and disdain are accurate.  Sometimes brats, narcissists, and sociopaths really do put themselves first, and either don’t care about us or actively seek to hurt us.  But more often we’ve concocted a story.  We’ve been insulted, pushed around, treated like dirt.  And in response we self-righteously strike back.

How can we escape this hall of mirrors?  Most simply, we can remind ourselves that our assumptions about others may be mistaken.  We may recognize that we tend to assume the worst in people, and take this bias into account.   There’s no need to assume evil intent when sheer stupidity — or momentary confusion or misunderstanding — can account for the behavior.

More psychoanalytically, we may reflect on our unconscious wish for care-taking and nurturance from others, and the anger that results when real life inevitably falls short of this yearning.  Such insight may spare us from projecting our own anger onto anonymous others.  And more philosophically, with years of meditation and discipline we could learn to detach our egos.  Slights from others have no effect upon the Self.  I believe this is one small aspect of Buddhist enlightenment, but don’t quote me.

Meanwhile, on that long road to enlightenment it doesn’t hurt to drive defensively.  And take a few deep breaths.

Does severe remorse require a specialist?

In her recent New Yorker article, “The Sorrow and the Shame of the Accidental Killer,” author Alice Gregory claims there are no self-help books for anyone who has accidentally killed another person.  Nor are there published research studies, therapeutic protocols, publicly listed support groups, or therapists who specialize in their treatment.  She profiles several such tormented souls who bear their burdens largely alone.

Yet dealing with guilt, shame, and regret is a mainstay of both self-help and professional therapy.  A simple online search reveals page after page of self-help websites, therapist and clinic practices, newspaper and magazine articles, all about forgiving oneself, learning to accept one’s failures, and letting go.  In that sense the piece misleads readers about the help available.  Indeed, although I don’t “specialize” in the treatment of those who accidentally kill another person — as best I recall, I’ve never worked with this specifically — I join many of my colleagues in welcoming any such person into my practice.

Gregory implies this particular remorse is unique: qualitatively different and far worse than regrets about bad marriages, abusive parenting, ruined businesses, accidental self-harm, and so on.  And so it is, in the same way that murder is usually considered the worst crime.  Taking a life, even unintentionally, is irrevocable and can’t be remedied.  Each life is one of a kind.

Does this render all the self-help moot?  the army of therapists clueless?  Does it take an elusive specialist to help in such severe cases?

Experience can’t hurt, of course.  Just as an experienced addiction therapist readily spots enabling and codependency; just as a therapist well versed in psychodynamics quickly senses subtle inner conflict; just as an expert cognitive therapist knows how to tailor a welcome intervention; so too a therapist who has worked with many guilt-ridden, self-punishing CADI (“Causing Accidental Death or Injury”) clients would know which interventions are usually helpful.

Lacking such an expert, should a sufferer reach out for the far more accessible, if less tailored, help out there?  By all means.  Although CADI is an extreme case, no one’s life story or emotional burden is exactly like another’s.  No one’s guilty remorse — or depression, anxiety, or self-sabotage — is quite the same as anyone else’s.  No therapist, no matter how experienced or specialized, can know beforehand exactly where a patient or client is coming from.  To one CADI client, the phrase “accidental killer” (in the title of the New Yorker piece) may feel just right, to another painfully harsh.  Even the value-neutral term “CADI” covers very different situations, e.g., a subway operator unable to stop the train before hitting a suicidal person on the tracks, versus a driver who falls asleep at the wheel and veers into unsuspecting traffic.

A widely-read New Yorker article highlighting this forgotten, suffering group is surely a gift to these folks and their loved ones.  Yet it would be sad if it left the false impression that only hard-to-find, specialized help is worth seeking.  In this situation especially, it’s important to remember our human connection with others, not just our differences.


Lumping and splitting

As a young psychotherapy researcher I learned that some of my colleagues were “lumpers” and others were “splitters.” The former look at research data and see commonalities. Instead of different kinds of psychotherapy, say, they see a spectrum of styles with a shared core. Lumpers search for universal truths, missing links, ways of combining categories. They apply this to people too.  Lumpers believe we are more alike than we are different, that our personalities differ in degree, not in fundamental type. We all bleed the same color.

Splitters, on the other hand, make distinctions. Different psychotherapies are as different as salt and pepper. The more categories we recognize, the better we understand the world, and each other. Science advances as we see distinctions we previously overlooked. The classification of human disease ever expands. Biologists name new sub-species. And as for people, our personalities fall into discrete types: narcissistic, sociopathic, neurotic — and normal.  Splitters call a spade a spade.

While the splitter in me just divided people into two kinds — lumper and splitter — the lumper in me now adds that we are all mixtures of both. Developmental psychology bears this out. At birth, we can’t even tell our mothers from ourselves — the ultimate lumping. But soon a sense of self appears, culminating in the “terrible twos” when toddlers delight in black-and-white thinking and contrary opinions — crude but heartfelt splitting.  With maturity comes a balanced appreciation of both commonality and difference. (To therapists, “splitting” is a technical term for polarized, binary thinking that pathologically persists into adulthood.)  The Swiss psychologist Jean Piaget described a similar cognitive adaptation as “assimilation” and “accommodation.”  In learning about the world, the child assimilates (lumps) various observations into a single schema — all furry pets are “dogs” — until that schema fits so poorly that the child must accommodate (split) it into “dogs” and “cats.”  Lumping and splitting are in dynamic tension as we develop.

Splitting rules American and international politics today.  Difference, not commonality, echoes across the political spectrum. The right is an old hand at this. Conservatives draw stark lines around good and evil, law-abiding and criminal, citizen and immigrant. A “good guy with a gun” is a different species than a similarly armed “bad guy,” never mind that even good guys may suffer a momentary lapse of judgment, or simply misinterpret a fast-unfolding situation.  Those who disagree with conservatism are dismissed as socialists or “snowflakes.” Politics today banks on race, religion, and nationalism. The brotherhood of man is for losers. The epitome of splitting, the alt-right, has been welcomed into the mainstream by the President himself.

However, the contemporary left also splits like crazy. Identity politics erects walls defining who is in and who is out. Those who disagree with progressivism are dismissed as racists or fascists. “Cultural appropriation” condemns the mixing of cultures and the blurring of boundaries, while “intersectionality” slices us into finer and finer categories. In 2014 Facebook introduced at least 58 gender labels for self-identification. We belong to smaller and smaller groups, perhaps ultimately to groups of one. By striving to make every unique voice heard, the left has fractured itself into politically powerless factions, the very opposite of collectivism.

Splitting is in our genes. It’s a survival mechanism we share with other animals. When startled, safety demands that we make a snap judgment of friend or foe. After all, ignoring danger can be fatal. Yet constantly expecting danger stifles the rewards of lumping, e.g., empathy, connection, seeing the big picture.  An individual who constantly splits to assure personal safety is mentally unwell: anxious, untrusting, exhausted.  Politically we now suffer the same illness.  From left to right we behave as though under attack, hunkered down, reduced to crude binary survival thinking and nothing better.

Children, psychotherapy researchers, and healthy societies must balance lumping and splitting. We split to assure our safety, autonomy, and comprehension. But we need to lump too. The toddler must learn to say yes occasionally. The researcher must concede that different schools of therapy look similar in practice. And despite our political differences, we must allow ourselves, and others, to feel safe enough to give up some of our grim and isolating splitting.

The high-risk psychiatric patient

A woman recently requested a medication evaluation at the suggestion of her psychotherapist.  The caller told me her diagnosis was borderline personality disorder. She hoped medication might ease her anxiety.  She also admitted that two other psychiatrists refused to see her because she was too “high risk.”  I asked if she was suicidal.  Yes, thoughts crossed her mind. However, she never acted on them, and was not suicidal currently.  I was curious whether my colleagues recoiled at the caller’s diagnosis, her suicide risk, her wish for anxiety-relieving medication, or something else.

By definition, “high risk” medical and surgical patients face an increased chance of poor outcome.  According to a British study, high-risk surgical patients are a 12% minority who suffer 80% of all perioperative deaths.  High-risk pregnancies threaten the health or life of the mother or fetus; they constitute 6 to 8 percent of all pregnancies.  Various charts and algorithms identify the high-risk cardiac patient.

Historically, physicians and surgeons accepted high-risk cases.  As one would expect, these patients had poorer outcomes and higher mortality.  Doctors did the best they could, humbled by their limitations and occasional failures, spurred to treat the next such patient more successfully.  However, recent social changes conspire to blunt this acceptance.  Fear of lawsuits, stemming both from an active medical malpractice bar and patients’ high expectations, means that doctors, too, are at high risk.  Increased reliance on outcome data and online reviews by patients may likewise lead some clinicians to cherry-pick cases that won’t mar their results.  Patients at high medical or surgical risk now have a harder time finding a doctor who will see them.

No single hazard defines the high-risk psychiatric patient.  There is a robust literature on young people at high (and “ultra-high”) risk for developing psychosis.  There are well-established risk factors for addiction.  Patients have also been deemed at high risk psychiatrically when they leave institutional care without permission; when they are young unemployed women following discharge from medical ICUs; and when they are youths with “serious emotional disturbance” who receive public services.  Having a psychiatric problem at all may be one factor among many that signals high risk in non-psychiatric medical settings.

However, “high risk” in psychiatry most often refers to suicide risk.  A large literature relates suicide to demographics, physical health, psychiatric diagnosis, behaviors such as substance use, and so on.  Unfortunately, a diagnosis of borderline personality disorder is associated with an 8–10% lifetime suicide rate.  This rate is significantly higher than that of the general population, and on par with the rates in schizophrenia and major mood disorders.  Did two psychiatrists refuse to see my caller due to her suicide risk?  If so, do they also refuse patients with schizophrenia, bipolar disorder, and major depression?

To the best of my knowledge, psychiatrists do not shun high-risk cases in order to avoid lawsuits or to improve their outcome statistics or online ratings.  Psychiatrists are rarely sued, and few of us even have such statistics or ratings.  However, a 1986 study by Hellman et al. found (unsurprisingly) that patients’ suicidal threats were stressful for their psychotherapists.  Perhaps the real question is: What kinds of stress should be expected in routine psychiatric practice, and what kinds may legitimately be avoided?

We must acknowledge that every decision about joining insurance panels, setting fees, or limiting one’s practice in any way is a form of cherry-picking, broadly construed.  The stresses of running a business and providing for one’s family are not unique to psychiatry.  Everyone wrestles with balancing self-interest and other-interest.  Yet these trade-offs are particularly glaring in health care, including mental health care.

The law allows doctors to refuse service to anyone, as long as that refusal isn’t based on membership in a legally protected class, e.g., race or religion.  This doesn’t resolve questions of ethics and professionalism, though.  I often turn down medication-only cases (although not the above caller) owing to my interest in psychotherapy.  I’ve also written about avoiding private insurance contracts, and my mixed feelings about accepting Medicare.  Of course, patient misbehavior may also lead a psychiatrist to turn down or refer out a case: inability to keep or pay for appointments, calling incessantly, making too many demands, etc.

I think avoiding suicidal patients is different.  To me, a psychiatrist who avoids suicidal patients is like a surgeon who can’t stand the sight of blood, or an obstetrician who doesn’t like to think about where babies come from.  Suicidal feelings are exactly why some patients seek our help.  Yes, they are at high risk for a bad outcome.  And I can vouch for the stress: in addition to being the target of numerous suicide threats and gestures, I have had one confirmed suicide in my practice, another that was equivocal (it may have been an accident), and likely others I don’t know about.  It’s no fun.  But in the end, the “high risk” belongs to the patient, not me.  I do the best I can.

Come to think of it, a closer analogy is my declining to conduct ADHD evaluations in order to avoid being a gatekeeper for stimulant-seekers.  I suppose here too the risk is theirs, despite my discomfort with gatekeeping and lie detection.  This confusion — whose risk is it? — is tricky.  Death, disability, hospitalization, and addiction are risks to the patient.  Lawsuits, adverse outcome data, regret at taking the case, and the stress of uncertainty and self-criticism are risks to us.  Some of the latter risks have always been par for the course, some are newer.  Some are self-imposed.  When we speak of the high-risk patient, let’s be honest about whose risk it is.

Graphic courtesy of FreeGreatPicture.com