Parenting medical disruptors

Popularized telemedicine — that is, teleconferencing with a physician over one’s smartphone — worries many critics because it assumes patients can be evaluated without a physical exam.  The critics are right that those with a financial interest in “disrupting” health care typically minimize the trade-offs.  Convenience and lower cost are trumpeted, while risks of misdiagnosis and mismanagement are waved off.  The concerns of practicing physicians are dismissed as self-serving and illegitimate.  Common sense supplants expertise; repudiation of experts, or perhaps a rebellion against them, lies just under the surface.  Startup culture celebrates and sometimes handsomely rewards brash Big Thinkers who don’t let a few practical matters, like the fact that diagnosis isn’t always a slam dunk, impede progress.  Steve Jobs wasn’t the only one with a reality distortion field.

The tension between professionalism and commercialism isn’t new or limited to medicine.  Misgivings by medical personnel about “Dr. Google” and smartphone telemedicine parallel misgivings by attorneys about do-it-yourself wills and divorces, and by CPAs about at-home tax return software.  In each domain professionals lament the erosion of quality, and their inability to provide it, while business disruptors revel in expanded markets.

It’s also well accepted that providing high-quality products or services and wide availability at the same time is an elusive challenge.  Usually it’s one or the other.  Although the marketplace accommodates fine dining and fast food, the fiduciary role of doctors, attorneys, accountants, and banks separates these fields from the restaurant business.  Banking is a prime example: no amount of convenience or access makes up for uncertainty about the safety of one’s money.  And while profit, or making a living, motivates professionals as much as it does the businesspeople who aim to unseat them, only the former maintain longstanding traditions and ethical codes that put their patients or clients before profit.  The stale charge that heel-dragging professionals are financially self-serving applies far more to the gung-ho disruptors themselves.  Medical care has always been about high quality and wide availability, which is why health care reform is genuinely hard.  Trading away quality for availability or expediency is simply cutting corners.  We could have done that all along.

Smartphone telemedicine doesn’t currently allow physical examination.  There is a range of scenarios (“use cases”) where this makes little difference, and many others where it matters a lot.  But technology is a moving target.  It’s a safe bet that remote examination technology will improve, gradually putting this concern to rest.  Criticism of telemedicine is not about what it someday may become — “Star Trek” style holodecks with virtual physicians? — but about today’s enthusiasts getting ahead of themselves.  That is, selling science fiction, not science.  This creates a peculiar dynamic: innovators speak in vague but urgent tones of our shiny future and the need for traditionalists to step aside for progress, while critics walk a tightrope between condoning exploration and improvement, and at the same time keeping everyone safe.  This resembles nothing so much as parental oversight of a teenager.  Like good parents, professionals must step aside to allow entrepreneurs to try new things, learn from their mistakes, and yes, ultimately make the world better than they found it.  But we can’t be negligent either.  Some cool new toys are risky, some daring adventures bring unanticipated danger.  It’s no coincidence that the language of “disruption” sounds adolescent, and that pushback from the disruptors sounds like a teenager complaining that his or her parents are old-fashioned, uncool, and self-interested.

There’s a direct parallel in my specialty.  For over 35 years, advocates of a neurobiological approach to psychiatry have oversold what we actually know.  From now-discredited “chemical imbalances” to current talk of circuitopathies, neurobiology enthusiasts dismiss humility (and occasionally honesty) as old-fashioned and uncool.  This began with an Oedipal victory over Papa Freud in the 1970s, was codified into DSM-III in 1980, celebrated as the Decade of the Brain in the 1990s, and has shaped the NIMH and psychiatric research ever since.  Neurobiology has become the dominant paradigm, a matter of faith.  But aside from a limited range of scenarios (“use cases”) involving addiction and bona fide brain injury, it’s vaporware so far.  We psychiatrists are told to think neurobiologically, and to educate our patients using the language of brain circuitry — even though it’s often an educated guess, and even though it doesn’t actually change our treatment.

Surely time is on the side of the innovators.  It’s a safe bet we’ll learn much more about the brain, gradually discovering the causes of at least some disorders we currently call psychiatric.  Thoughtful criticism of neurobiological psychiatry is not about what it someday may become.  It’s about today’s advocates getting ahead of themselves, selling wishes and half-truths as established science.  Neurobiology disruptors speak in vague but urgent tones of our imminent bright future and a need for the older generation to step aside for progress.  Meanwhile, critics play the parental role, walking a tightrope between encouraging exploration and improvement, while keeping everyone safe with care for the brain and the mind.

It’s not easy parenting adolescents.  Sophomoric self-righteousness, know-it-all smugness, and knee-jerk rebellion can be irritating as hell.  Suddenly, adults are idiots and “just don’t understand.”  The young resist all guidance and veer toward obvious trouble.  It’s nerve-wracking to hang back and watch this happen; to refrain, except in extreme circumstances, from wagging a parental finger and chiding, “you have a LOT to learn!”  And all these challenges grow in complexity when the “adolescents” are actually adults, sometimes even colleagues, and when professional expertise and decades of hands-on experience invite only suspicion, not authority or respect.  Even if our concerns are dismissed as the bloviation of myopic dinosaurs, we still hope our colleagues, business counterparts, and larger society grow up fast enough to see past the seduction of disruption and rebellion.  We need to weigh the real trade-offs we face.

NEJM and the pharmascolds

The New England Journal of Medicine (NEJM) called the question: Has criticism of the pharmaceutical industry, and of physician relationships with that industry, gone too far?  Are self-righteous “pharmascolds” blocking the kind of essential collaboration that brought streptomycin and other lifesaving treatments to market?  The editorial by Dr. Jeffrey Drazen and the lengthy three-part piece by Dr. Lisa Rosenbaum push back against a rising skepticism that obviously feels unfair to them, and presumably to many.

Drazen, editor in chief at NEJM, stands in sharp contrast to former editors Drs. Arnold Relman, Jerome Kassirer, and Marcia Angell, all of whom warned of corrosive commercial influence in medicine.  According to Drazen, an unfortunate divide between academic researchers and industry has arisen “largely because of a few widely publicized episodes” of industry wrongdoing.  He underscores the ongoing need for collaboration and guides readers to Rosenbaum’s exposition.

In her first of three articles, Rosenbaum correctly notes that skepticism about financial ties may obscure other biases of arguably greater influence.  For example, industry marketing and promotion, i.e., influence that is not directly financial, also affects physicians.  But what to do about it?  Rosenbaum claims “the answer still largely eludes us,” partly due to the “overwhelming complexity” of the variables:

I think we need to shift the conversation away from one driven by indignation toward one that better accounts for the diversity of interactions, the attendant trade-offs, and our dependence on industry in advancing patient care.

Rosenbaum cites the social psychologist Robert Zajonc, who researched how feelings influence thinking.  According to this account, critics hear “canonical conflict-of-interest stories and pharmaceutical marketing scandals” and this leads to emotional bias: “we worry about ‘corrupt industry’ interacting with ‘corruptible physicians’.”

Our feelings about greed and corruption drive our interpretations of physician–industry interactions…. reasoned approaches to managing financial conflicts are eclipsed by cries of corruption even when none exists.

Of course, indignation runs both ways.  Rosenbaum fails to note that Zajonc’s findings apply equally well to apologists who hear or experience positive relationships, and are thereby reassured that “friendly, helpful industry” interacts with “ethically impervious physicians.”  Perhaps reasoned approaches to managing conflicts of interest and marketing scandals are eclipsed by cries of innocence even when corruption exists.

Rosenbaum’s second installment takes a more adversarial and defensive tone, introducing the derisive “pharmascold” label to describe critics.  Her own criticism of Relman’s seminal 1980 editorial on “The Medical-Industrial Complex” seems misplaced:

Relman wanted to mitigate undue influence by curtailing physicians’ financial associations with companies, but his concern seemed as much about appearance as about reality. Noting the uncertainty about the magnitude of physicians’ financial stake in the medical marketplace, he wrote, “The actual degree of involvement is less important than the fact that it exists at all. As the visibility and importance of the private health care industry grows, public confidence in the medical profession will depend on the public’s perception of the doctor as an honest, disinterested trustee.”

Rosenbaum acknowledged in her first article that the influence of an industry gift or payment may be unrelated to its monetary value.  Relman agrees: the “degree of involvement is less important than the fact that it exists.”  And while public confidence in the medical profession is partly a matter of appearance, Relman was not talking about putting on an act.  He was urging doctors to remain honest, disinterested trustees — a theme to which we shall return.

In holding that we “lack an empirical basis to guide effective conflict management,” Rosenbaum says we don’t know whether commercial bias actually harms patients.  The evidence is only suggestive.  This is particularly weak rhetoric, as there is a great deal of suggestive evidence, some of which she cites herself, and very little, suggestive or otherwise, to oppose it.  Her stance is reminiscent of arguments that staying up all night is good for medical trainees and their patients — because it’s traditional, and because there is no empirical data from those specific groups showing harm.  Never mind that thousands of studies of sleep deprivation exist, and that it is almost uniformly deleterious.  One may likewise point to entire industries, e.g., advertising and public relations, founded on the very influence that is so curiously hard to pin down here.  Is there harm in having medical research and clinical decisions affected by those who stand to gain financially?  Not in every case, but surely the burden of proof lies with those who claim to be an exception.

Rosenbaum correctly notes that disclosure and transparency may not mitigate bias, nor its effect on listeners.  Most consumer advertising is very transparent in its intent; this doesn’t appear to sap its effectiveness in the least.  She ends her second installment by revisiting psychology and the “self-serving bias” which may fuel both pro- and anti-industry positions.  She aptly notes that stereotypes and ad hominem arguments may be unfair.  Why the pharmascold slur then?

The last installment is clearly the best of the three, and could have stood alone as a stronger statement.  Rosenbaum opens with how the culture of medical training has dramatically swung from an unthinking acceptance of industry influence to intense skepticism and peer pressure to avoid it.  She cites yet another psychologist, Philip Tetlock, who focuses on how certain “sacred values” like health prevent us from contemplating inevitable trade-offs.  She also cites psychologist Jonathan Haidt, who found that “people who were offended by social-norm violations worked hard to cling to a sense of wrongdoing, even when they couldn’t find evidence that anyone had been hurt.”  She applies these findings to unbending critics, and to those who either invent harm, or who claim wrongdoing without evidence that anyone has been hurt.  Rosenbaum points out that doctors may be more risk-averse and conflict-avoidant than some patients prefer.  More examples follow of allegedly unfair criticism of industry ties.  “The bad behavior of the few has facilitated impugning of the many.”  Medical progress stops if we scare people away.  We unwittingly replace expertise with conflict-free mediocrity.  And so forth.  She ends with this:

The answer is not a collective industry hug. The answer will have to be found by returning to this question: Are we here to fight one another — or to fight disease? I hope it’s the latter.

Some responses to the NEJM series were quick and biting.  My own reaction is mixed.  Rosenbaum raises several good points.  It isn’t right to stereotype.  Academic collaboration is necessary to move medical science forward.  Witch hunts serve no one.  The appearance of a conflict of interest (COI) isn’t the same as having one, and even that isn’t the same as being biased.  Many psychological blind spots attributed to defenders of industry collaboration may apply as well to its critics.  Perfectionism in avoiding COI may carry costly trade-offs.  Vague indignation is pointless.

However, Rosenbaum goes astray by misconstruing professional ethics and by overlooking its Kantian, deontological nature.  Relman wrote his editorial not for the sake of appearance, but to remind readers of the physician’s ethical duties.  As with other fiduciaries, our standards are higher than usual business ethics; Tetlock is free to call this a “sacred value” if he wishes.  Medical ethics doesn’t wait for “evidence that anyone has been hurt” — just as judges recuse themselves absent such evidence, and bribing public officials is prohibited without waiting for proof of harm.  Haidt’s social-norm violations, e.g., defacing an American flag, may be considered a dereliction of duty and therefore wrong, even if no one is hurt.

As medical fiduciaries, we have a positive duty to avoid COI when we reasonably can.  This is best framed as an attitude, not a pure or absolute set of behavioral rules.  It’s not a crime to talk to a drug rep or to attend an industry-sponsored talk.  Under certain circumstances these may be the best way to enhance patient care.  But usually they’re not: expedience is rarely worth the price of having to evaluate commercially biased material.  And make no mistake, commercial bias is the raison d’être of business.  While academic physicians should collaborate with industry when appropriate — and feel proud to do so — they should also recognize it may color their clinical thinking.

As will many other sources of bias.  Rosenbaum is right to point this out, even if it doesn’t exonerate the influence of money.  Her example of sleep deprivation is a good one.  Rather than declaring these influences too complex and myriad to do anything about, let’s try.  If clinical care is adversely affected by the on-call doctor’s need for sleep, maybe the on-call doctor should be well rested.  If clinical care is harmed by draconian regulations and paperwork, let’s work to improve that.  Money can be an obvious, concrete COI, but it’s certainly not the only COI out there.

Rather than focusing on do’s and don’ts, shills and pharmascolds fighting one another, medicine needs to regain its ethical footing.  In the 1940s, Dr. Waksman could collaborate with Merck to produce streptomycin, and later to write a review article on the drug, because his ethics, and probably Merck’s, were above reproach.  This was long before off-label drug promotion, ghostwritten articles, KOL targeting, and all the rest.  If medicine is again to be respected in this way, our best argument can’t be that harm hasn’t been proven yet.  We can’t minimize the mistrust that “a few widely publicized episodes” can bring.  We can’t defend the profession against critics by ridiculing and dismissing the radical fringe.

Will some extreme “pharmascolds” continue to decry all Pharma, without regard to reason or consequences?  Undoubtedly.  Yet we don’t declare pollution a sham because fringe groups of radical environmentalists exist. We don’t abandon our critical faculties when others are excessively critical.  We should accordingly still scrutinize physician COI resulting from commercial influence, and from other sources as well, and seek to minimize it in ourselves and in our profession.  If we can do it without overheated rhetoric and unfair stereotyping, all the better.


Medical ethics are healthier than business ethics

Compared to most others in society, physicians endorse, and are held to, higher ethical standards.  (To illustrate, here are ethical codes from the AMA and the World Medical Association.)  High standards apply to professionals in other fields as well, especially fiduciaries such as attorneys, accountants, schoolteachers, and judges.  But standards of medical ethics may be among the most stringent.  We put patient welfare first, and anything that interferes with this primary aim, particularly personal gain, is deemed a conflict of interest (COI).  For example, it is legitimate to make money as a physician, i.e., to earn a living, but not in any way that detracts from patient welfare.  These are not black and white distinctions, however, and line-drawing controversies abound.  Offering unneeded treatment solely to boost income is always unethical.  But what about limiting one’s practice in lucrative or otherwise pleasant ways: orthopedic surgeons practicing in ski towns, plastic surgeons who only do cosmetic surgery?  What about choosing a more lucrative specialty in the first place?  Accepting only certain types of insurance, or none at all?  Charging for missed or late-cancelled sessions?  Without attempting to resolve any of these examples here, it’s noteworthy how much concern is voiced, and ink spilled, over how physicians practice.  To completely escape controversy, we’d have to take a vow of poverty and offer our services for free.

In contrast, many other businesses that affect health do not share the physician’s ethics.  Precise line-drawing plainly doesn’t apply.  Beverage companies peddle diabetes along with refreshment, supplements come adorned with dubious health claims. Snack food can be unhealthy.  Manufacturers and retailers of exercise equipment need not refer customers to more suitable products from competitors.   One can even argue that new cars, not to mention video games, movies, and many other products, discourage people from exercising.  “Patient welfare” simply isn’t a priority for most firms — they aren’t dealing with patients.  There is no general code of business ethics that makes health its primary aim.  Thus, in extreme cases the government — we the people — step in, by limiting tobacco and alcohol ads for example, or by inspecting meat.  This is one reason we have government: to set priorities, including ethical priorities, that an ungoverned free market cannot or will not.

Some firms do explicitly deal with patients, yet still do not share the physician’s ethical standards.  Insurance companies run feel-good ads that obscure their cost-containment mandate.  Medical corporations attract customers or subscribers who are “covered lives” as opposed to individual patients.  Pharmaceutical companies entice the public with all the irrational tricks used to sell other products, then tack on “ask your doctor” to absolve themselves of any medical responsibility.  Pharmacy benefit managers (PBMs) can disallow a physician’s prescription wholly on the basis of cost, and without taking medical responsibility.  These are all huge “conflicts of interest” from a physician’s point of view. But COI doesn’t apply the same way to entities with less stringent professional ethics, where the primary aim is profit, not health.

This makes our burden harder. For the most part, it isn’t up to pharmaceutical companies to avoid biasing doctors with their promotional efforts. It’s up to us.  Moreover, it’s up to us to counter unhealthy biases instilled in the public, like the willingness to use an antipsychotic with significant side-effects to treat routine depression.  Likewise, as long as insurers and PBMs are corporations, no one will compel them through moral persuasion or ethical codes to sideline their economic interests. It’s not a conflict for a business to maximize return for its shareholders; it’s the main reason they exist. Indeed, too much concern for patient welfare might be criticized, e.g., at a shareholder meeting, as a COI that impedes this primary aim.

Doctors are held to standards that would be absurd in virtually any other business. Historically, these higher ethical standards gave us a special status in society, and earned our patients’ trust. The erosion of this special status, and of patient trust, is both a cause and an effect of a health care environment with lower, more businesslike, ethical standards.  The accelerating corporatization of American medicine replaces traditional medical ethics with the looser standard of business ethics.  MD decisions are now vetoed by MBAs.  As a result, patients may see us as replaceable technicians in a corporate infrastructure, and lose the benefits of a personal physician.  In parallel, physicians who are viewed by their patients and employers as mere cogs in the wheel of a large system are more apt to relax their own high ethical standards.  I fear for both our profession and the public as this vicious cycle continues.

While we doctors are busy maintaining our ethics and watching out for COI, other “stakeholders” in health care operate under fewer ethical constraints and enjoy greater profits, often directly at our expense.  It can be maddening, yet physicians have no unified voice to defend ourselves and our work.  Proposed solutions are inescapably political, and polarize us along deeply divided political lines, left versus right.  Ultimately, though, traditional medical ethics and public welfare are on the same side.  Doctors exist to help individual patients — and we will all be individual patients someday.  The looming challenge is whether we can put our internecine struggles aside long enough to save ourselves, our families, and our neighbors.


America's top selling drug is an antipsychotic

I learned recently that the antipsychotic Abilify is the biggest selling prescription drug in the U.S.  (I try to stay calm and collected here, but that’s a fact worth boldface.)  To be a top seller, a drug has to be expensive and also widely used.  Abilify is both.  It’s the 14th most prescribed brand-name medication, and it retails for about $30 a pill.  Annual sales are over $7 billion, nearly a billion more than the next runner-up.

Yes, you read that right: $30 a pill.  A little more for the higher dosages.  There’s no generic equivalent in the U.S. as yet; Canadian and other foreign pharmacies stock the active ingredient, generic aripiprazole, for a fraction of what we pay in the states.  However, Abilify’s U.S. patent protection expires next month, and aripiprazole may soon be available here at lower cost.

Abilify is an “atypical” antipsychotic.  This is a confusing term, as these are now the drugs typically prescribed for schizophrenia and other psychotic conditions.  The name comes from their atypical mechanism of action, as compared to the prior generation of antipsychotics.  “Atypicals” also play a useful role in the treatment of bipolar disorder, where traditional medications such as lithium require blood level monitoring, and often multiple doses per day.

Antipsychotics are powerful drugs with considerable risks and side-effects.  But psychosis and mania are powerful too.  As with cancer chemotherapy and narcotic painkillers, a risky and/or toxic treatment can be justified in dire circumstances.  It’s also true that one crisis visit to an emergency room, not to mention a psychiatric admission, may cost more than months of Abilify, and can itself be emotionally traumatic.  If Abilify keeps psychosis at bay and prevents hospitalization, the risks are worth it.  The cost is worth it too — if a less expensive generic atypical won’t do.  Several are now available.

As I wrote in 2009, the manufacturer Otsuka tapped a much larger market for Abilify as an add-on treatment for depression.  I objected to the consumer ad campaign that trumpeted this expensive, dangerous niche product for common depression.  While there’s a role for Abilify in unusually severe, unresponsive depression, advertising it widely as a benign “boost” for one’s antidepressant was, and is, irresponsible.  By analogy, the makers of the narcotics OxyContin and Percocet could run ads showing people with bad headaches, and urging fellow headache sufferers to ask their doctors “if Percocet is right for you.”

And these are merely the FDA-approved uses of Abilify.  Atypicals are also widely prescribed off-label for use as non-addictive tranquilizers and sleeping pills, and to treat other psychiatric conditions.  There’s no advertising for off-label use, so the onus falls squarely on prescribers who balance the risks and benefits of these drugs in a manner that research tends not to support.  In short, a costly, risk-laden medication created to ease the awful but relatively uncommon tragedy of schizophrenia is now the top selling prescription drug in America owing to its widespread use in garden variety depression, anxiety, and insomnia.

It’s been said that the top selling drug in any era is a comment on society at that point in time.  Valium held the lead during the 1960s and 70s, suggesting an age of uncertainty and anxiety.  The top spot was taken over by the heartburn and ulcer medication Tagamet in 1979.  Tagamet was the first “blockbuster” drug with more than $1 billion in annual sales.  Cholesterol-lowering Lipitor was the biggest seller for nearly a decade after it was released in 1997, the same year the FDA first allowed drug ads targeting consumers.  Pfizer spent tens of millions on such ads — and sold over $125 billion of Lipitor over the years.  The stomach medicine Nexium took over after that.  Without covering all the top sellers, it’s fair to say that Americans spend a great deal on prescriptions to deal with emotional distress and unhealthy lifestyles.  The blockbusters also show how mass-marketing brand name drugs has become a huge and highly profitable business.

What does it say about us that Abilify holds the top spot now?  What does it mean to live in the Age of Abilify?  First, that we’re still looking for happiness and peace in a bottle of pills, costs and risks be damned.  Second, that there’s nearly no end to the money the U.S. health care system will spend on problems that can be addressed more economically.  And third, it’s a stark reminder that commercial interests seek to expand sales and profits whenever possible.  They find (or create) new markets, promote products by showcasing benefits and concealing drawbacks, appeal to our emotions instead of our rationality.  This is simply how business works.  We should not be surprised, yet we ignore this reality at our peril, particularly when it comes to our health.

Behavioral science versus moral judgment

George S. Patton, Jr. commanded the Seventh United States Army, and later the Third Army, in the European Theater of World War II.  General Patton, a brilliant strategist as well as larger-than-life fount of harsh words and strong opinions, was also infamous for confronting two soldiers diagnosed with “combat fatigue” — now known as post-traumatic stress disorder, or PTSD — in Sicily in August of 1943.  (One such incident was depicted in the classic 1970 film “Patton” starring George C. Scott.)  Patton called the men cowards, slapped their faces, threatened to shoot one on the spot, and angrily ordered them back to the front lines.  He directed his officers to discipline any soldier making similar complaints.  Patton’s commanding officer, General Eisenhower, firmly condemned the incidents and insisted that Patton apologize.  Patton did so reluctantly, always maintaining that combat fatigue was a pretext for “cowardice in the face of the enemy.”

Seventy years have passed, yet as a society we still feel the tension between moral approval or disapproval on the one hand, and value-neutral scientific or psychological description on the other.  Cowardice is a character flaw, a moral lapse, a weakness.  PTSD, in contrast, is a syndrome that afflicts the virtuous and the vile alike.  We similarly declare violent criminals evil — unless they are judged insane, in which case our moral condemnation suddenly feels misplaced.  Likewise, a student who is lazy or careless needs to shape up to avoid our scorn; a student with ADHD, in contrast, is a victim, not a bad person.

Personality descriptors — brave, cowardly, rebellious, compliant, curious, lazy, perceptive, criminal, and many more — feel incompatible with knowledge of our minds and brains.  It seems the more we explain the roots of human behavior, the less we can pass moral judgment on it.  It doesn’t matter if the explanation is biological (e.g., brain tumor, febrile delirium, seizure) or psychological (e.g., PTSD, childhood abuse, “raised that way”).  However, perhaps because we feel we know our own minds best, it does seem to matter if we are accounting for ourselves versus others.  We usually explain our own behavior in terms of value-neutral external contingencies — I’m late because I had a lot to do today, not because I’m unreliable — and are more apt to tar others with a personality judgment such as “unreliable.”  This finding, the Fundamental Attribution Error, has been a staple of social psychology research for decades.

Will we eventually replace moral judgments of others with medical or psychological explanations that lack a blaming or praising tone?  It appears our inclination to judge others will not pass quietly.  Much of the rancor between the political Left and Right concerns the applicability of moral language.  Are felons bad people, or merely raised the wrong way?  Are the poor lazy and entitled, or trapped in poverty by circumstance?  Was General Patton disciplining cowards who were shirking their duty, or was he verbally and physically abusing soldiers who had already been victimized?

The Left and Right disagree over where to draw the line.  But no matter how far we progress in our brain and behavioral sciences, we will still want to voice judgments of others — and negative judgments seem the more compelling.  Humans are notoriously inventive in the use of language to denigrate.  Originally neutral clinical terms like “idiot” and “moron” (and “retarded” and “deluded” and many more) eventually became terms of derision.  Euphemisms like “juvenile delinquent” didn’t stay euphemistic for long.  While it may blunt the sharpness of our scorn in the short term, “politically correct” language won’t change this aspect of human nature in any lasting way.

Even logic doesn’t stop us.  For example, terrorists are routinely called cowards in public discourse, although it isn’t clear why.  Many terrorists voluntarily die in their efforts, an act considered heroic, or at least brave, in other contexts.  They often attack civilian rather than military targets.  But we did that in WWII, and we weren’t cowards.  They use guile, sneak onto planes, employ distraction and misdirection — like our “cowardly” Special Forces do.  The point is, we find terrorists despicable, but that isn’t a strong enough putdown.  If we didn’t call them cowards, we’d have to call them something else to humiliate them.  Mama’s boys?

Humans are a funny species.  Uniquely striving for intellectual understanding, yet not so far from the other beasts who purr or growl or screech their approval or protest.  Balancing the aims of morality and science is the stuff of constant, and perhaps endless, political debate.  Ultimately it’s irresolvable, yet we do our best to pay homage both to our hearts and our heads.

Defining the competent psychiatrist

What defines a competent psychiatrist?  To staunch critics of the field, perhaps nothing.  Some believe psychiatry has done far more harm than good, or has never helped anyone, rendering moot the question of competency.  What defines a competent buffoon?  A skillful brute?  An adroit half-wit?  Having just finished Robert Whitaker’s Anatomy of an Epidemic: Magic Bullets, Psychiatric Drugs, and the Astonishing Rise of Mental Illness in America (Crown, 2010), a reader might easily conclude that psychiatric competency is a fool’s errand.  From directing dank 19th Century asylums, to psychoanalyzing everyone for nearly anything during much of the 20th Century, to doling out truckloads of questionably effective, often hazardous drugs for the past 35 years, perhaps psychiatry is beyond redemption.

Of course, I don’t think so.  For one thing, critics often disagree about what is wrong with the field.  For every charge of over-diagnosis and over-medication, another holds that debilitating disorders are under-recognized and under-treated.  A charge that psychiatry has become too “cookbook” and commodified is answered by the complaint that it is too anecdotal and insufficiently “evidence-based.”  Claims that the field stumbles because it is subtle, complex, and understaffed by well-compensated specialists are met with counter-claims that checklists in primary care clinics can do most of the heavy lifting at less expense.  Contradictory criticisms are no proof that the field is faultless.  But the confusion does suggest that psychiatry’s limitations reside at a different level of analysis than that engaged by its critics.

For another thing, the undeniable shortcomings of psychiatry don’t make the patients disappear.  Whether the field teems with genius humanitarians or raving witchdoctors, there are still families watching their teenage daughters starving themselves to death; beloved aunts and uncles living unwashed and mumbling to themselves on the street; people ending their lives out of temporary tunnel-vision; tormented souls imprisoned in their homes by irrational fears.  And our society still harbors a nagging ethical sense that a crime is committed only when a person knows what he’s doing — and that when he doesn’t, he deserves help not punishment.

We can admit that psychiatrists are (at times meddlesome) do-gooders who take on misery and heartache and uncontrolled destructive behavior despite deep controversies over how best to help.  It’s the same role filled, in different times and places, by clergy, by family, by shamans, by the village as a whole.  Every society fills it somehow.  This is the modest starting point that bootstraps a meaningful definition of psychiatric competency.

Lists of “core competencies” are issued by the Accreditation Council for Graduate Medical Education (ACGME) for psychiatry residents, and by the American Board of Psychiatry and Neurology (ABPN) for board-certified psychiatrists.  Both organizations categorize psychiatric competency under the six headings established by the ACGME for all medical specialties: Patient Care, Medical Knowledge, Interpersonal and Communication Skills, Practice-Based Learning and Improvement, Professionalism, and Systems-Based Practice.  (These categories are also used by the Accreditation Council for Continuing Medical Education [ACCME], so that continuing education required to maintain one’s medical license addresses one or more of these competency areas.)  A review of either of these detailed lists reveals two important truths.  First, a committee can make any aspirational standard byzantine and lifeless.  And second, in the eyes of the ACGME and ABPN at least, it’s not so easy to be a competent psychiatrist.

However, these official competencies are unlikely to satisfy skeptics, nor do they get to the heart of the matter.  No such list can be exhaustive: the ABPN includes knowledge of transcranial magnetic stimulation, presumably a recent addition, but fails to require knowledge of specific pharmaceuticals.  Focus areas such as addiction, forensic, and geriatric psychiatry are mentioned, but not administrative or community psychiatry.  The linguistic philosopher Ludwig Wittgenstein argued that our inability to precisely define natural categories, even simple nouns like “chair,” is a feature of language itself, not a shortcoming of psychiatric competence specifically.  Accordingly, any catalog of psychiatric competencies, whether intended to be comprehensive or a “top ten” list, captures some, but not all, of what constitutes a competent psychiatrist.

As implied above, the starting point, although not the end point, for defining the competent psychiatrist is intent.  A psychiatrist aims to relieve suffering in an uncertain human domain.  He or she brings to bear skills, knowledge, and personality factors (“professionalism,” etc.) that advance this goal.  These cannot be listed exhaustively: virtually the whole of human knowledge and experience can inform one’s understanding of a patient’s emotional turmoil.  The best we can say, I believe, is that a competent psychiatrist is curious, has a wide fund of knowledge and life experience, and aims to keep an open mind.  Some of this knowledge certainly should be biomedical.  But knowledge of the psychology of aging, common stressors such as job loss and divorce, gender differences, and many other areas is hardly less important.  The practitioner’s proclivity to observe the human condition both scientifically and humanistically is ultimately a better gauge of competence than whether a specific treatment modality such as TMS has been added to a long list, or whether the practitioner can cough up a specific fact.

Given the controversy and uncertainty in the field, another essential of competent practice is humility.  In most cases we don’t know the etiology of what we’re treating.  Any treatment we offer helps some patients but not others, and nearly always carries risk.  Whitaker makes many good points along these lines.  A competent psychiatrist tempers his or her urge to intervene with the realization that the road to hell is often paved with good intentions.  We psychiatrists virtually always mean well, and (contrary to some critics) help our patients far more often than not.  Nonetheless, a competent psychiatrist is always ready to admit misjudgment or miscalculation.  Self-correction is a feature of competence in psychiatry, as in many, perhaps all, other domains of human expertise.

For another take on the competent psychiatrist, arriving at a similar endpoint using different reasoning, see this 2011 post by Dr. Raina.

I wrote above that psychiatry’s limitations may reside at a different level of analysis than that engaged by its critics.  Psychiatry is a hard job because the brain is the most complex organ, because normality is so hard to define, because human development is a subtle interplay of nature and nurture, and because we don’t understand the root causes of many forms of mental distress.  But even if we did know and understand these far better than we do now, the field would still be fraught with controversy and uncertainty.  Our attitudes regarding responsibility, free will, conformity versus deviance, and how we treat each other reflect our politics and deeply held values.  Psychiatry serves as a lightning rod for strong feelings around these matters.  By its very nature, it always will.  We psychiatrists must accept that many will view us skeptically, some with hatred — and others with undeserved adoration — and not let this dissuade us.  A competent psychiatrist hears criticism from individual patients and the public, neither dismissing it unthinkingly, nor allowing it to lead to demoralization and defeat.

Image courtesy of David Castillo Dominici at FreeDigitalPhotos.net.

Living between three and seven

Despite my mostly psychodynamic approach to psychotherapy, I sometimes include cognitive interventions as well.  I think of this as choosing from a variety of tools to suit the moment.  Generally speaking, cognitive techniques (and psychiatric medications) aim for symptom relief, while psychodynamic work aims for structural personality change, with symptom improvement as a byproduct.  There’s a time and place for each, their relative value varying from patient to patient.  The following is a cognitive framework I’ve introduced to a number of patients over the years.  Let me know if it’s useful to you.

Essentially it’s a simple one to ten scale that highlights polarized thinking — “splitting” in dynamic lingo — and encourages modifying it through conscious effort.

Many patients who evidence polarized, black-and-white thinking — who devalue the bad and idealize the good — quickly catch on when I propose that their abject hopelessness and seething rage represent a “one” on a one to ten scale, whereas their over-the-top exuberance rates a “ten.”  (Some take it further and claim their despair sinks to “negative 100” and positivity zooms up to “50” on that scale, but usually they’ll agree to keep it manageable.)  The key intervention is then to point out that life is mostly lived between three and seven.  Realistically speaking, bad experiences in life usually rate a “three” or “four,” good experiences a “six” or “seven.”  Anything more extreme is rare.  Feelings of “one” and “ten” are almost always exaggerations, polarized distortions that whipsaw the patient’s feelings and interpersonal relationships.

The concreteness of speaking in numbers comes easily to most of us.  Once introduced to this scale, some patients spontaneously and enthusiastically rate their own feelings: a troubling encounter “felt like a ‘one’ but I know it was really a ‘three.’”  More often they relate an experience in unrealistically glowing terms, and I gently challenge their idealization by asking if it was truly a “ten” or more accurately a solid “seven” (and likewise with a “one” that upon reflection could be re-rated a “three”).  Some patients formerly prone to one-or-ten thinking soon begin sessions by telling me their day feels like a satisfying “six” or a disappointing “four.”  Either way, I support this more nuanced assessment and discuss how they may nudge themselves up the scale.

Many patients, particularly those who take a degree of pleasure in the ups and downs of their emotional roller coaster, would never abide a monotonous life stuck at “five.”  Where’s the fun in that?  Fortunately, the point of the scale is not to aim for stagnation, nor to suggest that the midpoint is ideal.  The realities of life assure that some days will be better than others.  No cognitive trick will stop successes from feeling good and letdowns from feeling bad.  The question is how much.  Attaching numbers to feelings offers a little distance and perspective.  It’s a gentle reminder that such emotional exaggeration may be a form of self-torture — and that an apparent “ten” is risky (and literally “too good to be true”), often crashing precipitously into a “one.”  Most of the time it’s far more comfortable, safe, and sustainable to “live between three and seven.”

Of course, it wouldn’t be psychodynamic therapy if we stopped there.  The numerical scale offers a useful language to describe unrealistic emotional extremes, and perhaps to help the patient mitigate them through conscious effort.  However, it can’t account for the splitting itself, nor change the patient’s propensity in any structural way.  For that, we turn to unconscious dynamics, and to a trustworthy, consistent therapeutic relationship that permits emotional nuance to gain a foothold. Rather than being seen as mutually exclusive — itself an unhealthy polarization — cognitive and psychodynamic approaches can complement one another.

Graphic courtesy of Danilo Rizzuti at FreeDigitalPhotos.net