I learned recently that the antipsychotic Abilify is the biggest-selling prescription drug in the U.S. (I try to stay calm and collected here, but that’s a fact worth boldface.) To be a top seller, a drug has to be expensive and also widely used. Abilify is both. It’s the 14th most prescribed brand-name medication, and it retails for about $30 a pill. Annual sales are over $7 billion, nearly a billion dollars more than the runner-up.
Yes, you read that right: $30 a pill. A little more for the higher dosages. There’s no generic equivalent in the U.S. as yet; Canadian and other foreign pharmacies stock the active ingredient, generic aripiprazole, for a fraction of what we pay in the States. However, Abilify’s U.S. patent protection expires next month, and aripiprazole may soon be available here at lower cost.
Abilify is an “atypical” antipsychotic. This is a confusing term, as these are now the drugs typically prescribed for schizophrenia and other psychotic conditions. The name comes from their atypical mechanism of action, as compared to the prior generation of antipsychotics. “Atypicals” also play a useful role in the treatment of bipolar disorder, where traditional medications such as lithium require blood level monitoring, and often multiple doses per day.
Antipsychotics are powerful drugs with considerable risks and side-effects. But psychosis and mania are powerful too. As with cancer chemotherapy and narcotic painkillers, a risky and/or toxic treatment can be justified in dire circumstances. It’s also true that one crisis visit to an emergency room, not to mention a psychiatric admission, may cost more than months of Abilify, and can itself be emotionally traumatic. If Abilify keeps psychosis at bay and prevents hospitalization, the risks are worth it. The cost is worth it too — if a less expensive generic atypical won’t do. Several are now available.
As I wrote in 2009, the manufacturer Otsuka tapped a much larger market for Abilify as an add-on treatment for depression. I objected to the consumer ad campaign that trumpeted this expensive, dangerous niche product for common depression. While there’s a role for Abilify in unusually severe, unresponsive depression, advertising it widely as a benign “boost” for one’s antidepressant was, and is, irresponsible. By analogy, the makers of the narcotics OxyContin and Percocet could run ads showing people with bad headaches, and urging fellow headache sufferers to ask their doctors “if Percocet is right for you.”
And these are merely the FDA-approved uses of Abilify. Atypicals are also widely prescribed off-label as non-addictive tranquilizers and sleeping pills, and to treat other psychiatric conditions. There’s no advertising for off-label use, so the onus falls squarely on prescribers who balance the risks and benefits of these drugs in a manner that research tends not to support. In short, a costly, risk-laden medication created to ease the awful but relatively uncommon tragedy of schizophrenia is now the top-selling prescription drug in America owing to its widespread use in garden-variety depression, anxiety, and insomnia.
It’s been said that the top-selling drug in any era is a comment on society at that point in time. Valium held the lead during the 1960s and 70s, suggesting an age of uncertainty and anxiety. The top spot was taken over by the heartburn and ulcer medication Tagamet in 1979. Tagamet was the first “blockbuster” drug with more than $1 billion in annual sales. Cholesterol-lowering Lipitor was the biggest seller for nearly a decade after it was released in 1997, the same year the FDA first allowed drug ads targeting consumers. Pfizer spent tens of millions on such ads — and sold more than $125 billion of Lipitor over the years. The stomach medicine Nexium took over after that. Without covering all the top sellers, it’s fair to say that Americans spend a great deal on prescriptions to deal with emotional distress and unhealthy lifestyles. The blockbusters also show how mass-marketing brand-name drugs has become a huge and highly profitable business.
What does it say about us that Abilify holds the top spot now? What does it mean to live in the Age of Abilify? First, that we’re still looking for happiness and peace in a bottle of pills, costs and risks be damned. Second, that there’s nearly no end to the money the U.S. health care system will spend on problems that can be addressed more economically. And third, it’s a stark reminder that commercial interests seek to expand sales and profits whenever possible. They find (or create) new markets, promote products by showcasing benefits and concealing drawbacks, appeal to our emotions instead of our rationality. This is simply how business works. We should not be surprised, yet we ignore this reality at our peril, particularly when it comes to our health.
George S. Patton, Jr. commanded the Seventh United States Army, and later the Third Army, in the European Theater of World War II. General Patton, a brilliant strategist as well as a larger-than-life fount of harsh words and strong opinions, was also infamous for confronting two soldiers diagnosed with “combat fatigue” — now known as post-traumatic stress disorder, or PTSD — in Sicily in August of 1943. (One such incident was depicted in the classic 1970 film “Patton” starring George C. Scott.) Patton called the men cowards, slapped their faces, threatened to shoot one on the spot, and angrily ordered them back to the front lines. He directed his officers to discipline any soldier making similar complaints. Patton’s commanding officer, General Eisenhower, firmly condemned the incidents and insisted that Patton apologize. Patton did so reluctantly, always maintaining that combat fatigue was a pretext for “cowardice in the face of the enemy.”
Seventy years have passed, yet as a society we still feel the tension between moral approval or disapproval on the one hand, and value-neutral scientific or psychological description on the other. Cowardice is a character flaw, a moral lapse, a weakness. PTSD, in contrast, is a syndrome that afflicts the virtuous and the vile alike. We similarly declare violent criminals evil — unless they are judged insane, in which case our moral condemnation suddenly feels misplaced. Likewise, a student who is lazy or careless needs to shape up to avoid our scorn; a student with ADHD, in contrast, is a victim, not a bad person.
Personality descriptors — brave, cowardly, rebellious, compliant, curious, lazy, perceptive, criminal, and many more — feel incompatible with knowledge of our minds and brains. It seems the more we explain the roots of human behavior, the less we can pass moral judgment on it. It doesn’t matter if the explanation is biological (e.g., brain tumor, febrile delirium, seizure) or psychological (e.g., PTSD, childhood abuse, “raised that way”). However, perhaps because we feel we know our own minds best, it does seem to matter if we are accounting for ourselves versus others. We usually explain our own behavior in terms of value-neutral external contingencies — I’m late because I had a lot to do today, not because I’m unreliable — and are more apt to tar others with a personality judgment such as “unreliable.” This finding, the Fundamental Attribution Error, has been a staple of social psychology research for decades.
Will we eventually replace moral judgments of others with medical or psychological explanations that lack a blaming or praising tone? It appears our inclination to judge others will not pass quietly. Much of the rancor between the political Left and Right concerns the applicability of moral language. Are felons bad people, or merely raised the wrong way? Are the poor lazy and entitled, or trapped in poverty by circumstance? Was General Patton disciplining cowards who were shirking their duty, or was he verbally and physically abusing soldiers who had already been victimized?
The Left and Right disagree over where to draw the line. But no matter how far we progress in our brain and behavioral sciences, we will still want to voice judgments of others — and negative judgments seem the more compelling. Humans are notoriously inventive in the use of language to denigrate. Originally neutral clinical terms like “idiot” and “moron” (and “retarded” and “deluded” and many more) eventually became terms of derision. Euphemisms like “juvenile delinquent” didn’t stay euphemistic for long. While it may blunt the sharpness of our scorn in the short term, “politically correct” language won’t change this aspect of human nature in any lasting way.
Even logic doesn’t stop us. For example, terrorists are routinely called cowards in public discourse, although it isn’t clear why. Many terrorists voluntarily die in their efforts, an act considered heroic, or at least brave, in other contexts. They often attack civilian rather than military targets. But we did that in WWII, and we weren’t cowards. They use guile, sneak onto planes, employ distraction and misdirection — like our “cowardly” Special Forces do. The point is, we find terrorists despicable, but that isn’t a strong enough putdown. If we didn’t call them cowards, we’d have to call them something else to humiliate them. Mama’s boys?
Humans are a funny species. Uniquely striving for intellectual understanding, yet not so far from the other beasts who purr or growl or screech their approval or protest. Balancing the aims of morality and science is the stuff of constant, and perhaps endless, political debate. Ultimately it’s irresolvable, yet we do our best to pay homage both to our hearts and our heads.
What defines a competent psychiatrist? To staunch critics of the field, perhaps nothing. Some believe psychiatry has done far more harm than good, or has never helped anyone, rendering moot the question of competency. What defines a competent buffoon? A skillful brute? An adroit half-wit? Having just finished Robert Whitaker’s Anatomy of an Epidemic: Magic Bullets, Psychiatric Drugs, and the Astonishing Rise of Mental Illness in America (Crown, 2010), a reader might easily conclude that psychiatric competency is a fool’s errand. From directing dank 19th-century asylums, to psychoanalyzing everyone for nearly anything during much of the 20th century, to doling out truckloads of questionably effective, often hazardous drugs for the past 35 years, perhaps psychiatry is beyond redemption.
Of course, I don’t think so. For one thing, critics often disagree about what is wrong with the field. For every charge of over-diagnosis and overmedicating, another holds that debilitating disorders are under-recognized and under-treated. A charge that psychiatry has become too “cookbook” and commodified is answered by the complaint that it is too anecdotal and not sufficiently “evidence-based.” Claims that the field stumbles because it is subtle, complex, and understaffed by well-compensated specialists are met with counter-claims that checklists in primary care clinics can do most of the heavy lifting at less expense. Contradictory criticisms offer no evidence that the field is faultless. But the confusion does suggest that psychiatry’s limitations reside at a different level of analysis than that engaged by its critics.
For another thing, the undeniable shortcomings of psychiatry don’t make the patients disappear. Whether the field teems with genius humanitarians or raving witchdoctors, there are still families watching their teenage daughters starving themselves to death; beloved aunts and uncles living unwashed and mumbling to themselves on the street; people ending their lives out of temporary tunnel-vision; tormented souls imprisoned in their homes by irrational fears. And our society still harbors a nagging ethical sense that a crime is committed only when a person knows what he’s doing — and that when he doesn’t, he deserves help not punishment.
We can admit that psychiatrists are (at times meddlesome) do-gooders who take on misery and heartache and uncontrolled destructive behavior despite deep controversies over how best to help. It’s the same role filled, in different times and places, by clergy, by family, by shamans, by the village as a whole. Every society fills it by someone. This is the modest starting point that bootstraps a meaningful definition of psychiatric competency.
Lists of “core competencies” are issued by the Accreditation Council for Graduate Medical Education (ACGME) for psychiatry residents, and by the American Board of Psychiatry and Neurology (ABPN) for board-certified psychiatrists. Both organizations categorize psychiatric competency under the six headings established by the ACGME for all medical specialties: Patient Care, Medical Knowledge, Interpersonal and Communication Skills, Practice-Based Learning and Improvement, Professionalism, and Systems-Based Practice. (These categories are also used by the Accreditation Council for Continuing Medical Education [ACCME], so that continuing education required to maintain one’s medical license addresses one or more of these competency areas.) A review of either of these detailed lists reveals two important truths. First, a committee can make any aspirational standard byzantine and lifeless. And second, in the eyes of the ACGME and ABPN at least, it’s not so easy to be a competent psychiatrist.
However, these official competencies are unlikely to satisfy skeptics, nor do they get to the heart of the matter. No such list can be exhaustive: the ABPN includes knowledge of transcranial magnetic stimulation, presumably a recent addition, but fails to require knowledge of specific pharmaceuticals. Focus areas such as addiction, forensic, and geriatric psychiatry are mentioned, but not administrative or community psychiatry. The linguistic philosopher Ludwig Wittgenstein argued that our inability to precisely define natural categories, even simple nouns like “chair,” is a feature of language itself, not of psychiatric competence specifically. Accordingly, any catalog of psychiatric competencies, whether intended to be comprehensive or a “top ten” list, captures some, but not all, of what constitutes a competent psychiatrist.
As implied above, the starting point, although not the end point, for defining the competent psychiatrist is intent. A psychiatrist aims to relieve suffering in an uncertain human domain. Brought to bear are skills, knowledge, and personality factors (“professionalism,” etc.) that bring this goal closer. These cannot be listed exhaustively: virtually the whole of human knowledge and experience can inform one’s understanding of a patient’s emotional turmoil. The best we can say, I believe, is that a competent psychiatrist is curious, has a wide fund of knowledge and life experience, and aims to keep an open mind. Some of this knowledge certainly should be biomedical. But knowing about the psychology of aging, common stressors such as job loss and divorce, gender differences, and many other areas is hardly less important. The practitioner’s proclivity to observe the human condition both scientifically and humanistically is ultimately a better gauge of competence than whether a specific treatment modality such as TMS has been added to a long list, or whether the practitioner is able to cough up a specific fact.
Given the controversy and uncertainty in the field, another essential of competent practice is humility. In most cases we don’t know the etiology of what we’re treating. Any treatment we offer helps some patients but not others, and nearly always carries risk. Whitaker makes many good points along these lines. A competent psychiatrist tempers his or her urge to intervene with the realization that the road to hell is often paved with good intentions. Psychiatrists virtually always mean well, and (contrary to some critics) help our patients far more often than not. Nonetheless, a competent psychiatrist is always ready to admit misjudgment or miscalculation. Self-correction is a feature of competence in psychiatry as well as in many, perhaps all, other domains of human expertise.
For another take on the competent psychiatrist, arriving at a similar endpoint using different reasoning, see this 2011 post by Dr. Raina.
I wrote above that psychiatry’s limitations may reside at a different level of analysis than that engaged by its critics. Psychiatry is a hard job because the brain is the most complex organ, because normality is so hard to define, because human development is a subtle interplay of nature and nurture, and because we don’t understand the root causes of many forms of mental distress. But even if we did know and understand these far better than we do now, the field would still be fraught with controversy and uncertainty. Our attitudes regarding responsibility, free will, conformity versus deviance, and how we treat each other reflect our politics and deeply held values. Psychiatry serves as a lightning rod for strong feelings around these matters. By its very nature, it always will. Psychiatrists must accept that many will view us skeptically, some with hatred — and others with undeserved adoration — and not let this dissuade us. A competent psychiatrist hears criticism from individual patients and the public, neither dismissing it unthinkingly, nor allowing it to lead to demoralization and defeat.
Image courtesy of David Castillo Dominici at FreeDigitalPhotos.net.
Despite my mostly psychodynamic approach to psychotherapy, I sometimes include cognitive interventions as well. I think of this as choosing from a variety of tools to suit the moment. Generally speaking, cognitive techniques (and psychiatric medications) aim for symptom relief, while psychodynamic work aims for structural personality change, with symptom improvement as a byproduct. There’s a time and place for each, their relative value varying from patient to patient. The following is a cognitive framework I’ve introduced to a number of patients over the years. Let me know if it’s useful to you.
Essentially it’s a simple one to ten scale that highlights polarized thinking — “splitting” in dynamic lingo — and encourages modifying it through conscious effort.
Many patients who evidence polarized, black-and-white thinking — who devalue the bad and idealize the good — quickly catch on when I propose that their abject hopelessness and seething rage represent a “one” on a one-to-ten scale, whereas their over-the-top exuberance rates a “ten.” (Some take it further and claim their despair sinks to “negative 100” and positivity zooms up to “50” on that scale, but usually they’ll agree to keep it manageable.) The key intervention is then to point out that life is mostly lived between three and seven. Realistically speaking, bad experiences in life usually rate a “three” or “four,” good experiences a “six” or “seven.” Anything more extreme is rare. Feelings of “one” and “ten” are almost always exaggerations, polarized distortions that whipsaw the patient’s feelings and interpersonal relationships.
The concreteness of speaking in numbers comes easily to most of us. Once introduced to this scale, some patients spontaneously and enthusiastically rate their own feelings: a troubling encounter “felt like a ‘one’ but I know it was really a ‘three’.” More often they relate an experience in unrealistically glowing terms, and I gently challenge their idealization by asking if it was truly a “ten” or more accurately a solid “seven” (and likewise with a “one” that upon reflection could be re-rated a “three.”) Some patients formerly prone to one-or-ten thinking soon begin sessions by telling me their day feels like a satisfying “six” or a disappointing “four”. Either way, I support this more nuanced assessment and discuss how they may nudge themselves up the scale.
Many patients, particularly those who take a degree of pleasure in the ups and downs of their emotional roller coaster, would never abide a monotonous life stuck at “five.” Where’s the fun in that? Fortunately, the point of the scale is not to aim for stagnation, nor to suggest that the midpoint is ideal. The realities of life assure that some days will be better than others. No cognitive trick will stop successes from feeling good and letdowns from feeling bad. The question is how much. Attaching numbers to feelings offers a little distance and perspective. It’s a gentle reminder that such emotional exaggeration may be a form of self-torture — and that an apparent “ten” is risky (and literally “too good to be true”), often crashing precipitously into a “one.” Most of the time it’s far more comfortable, safe, and sustainable to “live between three and seven.”
Of course, it wouldn’t be psychodynamic therapy if we stopped there. The numerical scale offers a useful language to describe unrealistic emotional extremes, and perhaps to help the patient mitigate them through conscious effort. However, it can’t account for the splitting itself, nor change the patient’s propensity in any structural way. For that, we turn to unconscious dynamics, and to a trustworthy, consistent therapeutic relationship that permits emotional nuance to gain a foothold. Rather than being seen as mutually exclusive — itself an unhealthy polarization — cognitive and psychodynamic approaches can complement one another.
Graphic courtesy of Danilo Rizzuti at FreeDigitalPhotos.net
Even today there are patients who leave diagnosis and treatment entirely to their doctors. They make no effort to inform themselves about their illness or chart their own course; they do whatever their doctors advise. Once the norm, this passive, willfully naive attitude has withered in the face of a multigenerational attitude shift, coupled with the wealth of medical information at hand today. Direct-to-consumer drug ads on television, online peer support, medical websites and blogs of all stripes, “Dr. Google,” PubMed — it almost takes dedicated effort to avoid learning about one’s medical issue. The complementary role of doctors as kindly but authoritarian caretakers feels outdated by decades, and to many nowadays, offensive. “Paternalistic” has become the epithet of choice for doctors who fail to recognize, respect, and make room for patient autonomy and medical self-determination.
Most doctors practicing today, even those of us decades into our careers, began medical training at a time when patient empowerment had already gained ground in the U.S. Many of us supported it wholeheartedly. In college I studied medical ethics and patient autonomy. I volunteered at a community clinic called “Our Health Center” that aimed to empower patients. My stated goal when applying to medical school was to help patients take responsibility for their own health. Even today I tend to over-explain my reasoning to my patients, and to err — and sometimes it is an error — on the side of offering a smorgasbord of options along with their risks and benefits.
However, over the years the goalposts have moved. For a growing subset of patients it is no longer enough that we doctors talk to them as fellow adults. The one-time goal of shared decision-making has, in some circles, given way to a deep skepticism toward doctors and our expertise. Some regard us as irksome gatekeepers who add little to medical decision making and serve mainly as roadblocks to obtaining the medical tests or treatments they already know they need. In this jaundiced view, our role is reduced to rubber-stamping: ordering desired tests, signing requested prescriptions, drafting work excuses, and so forth. For example, I’ve received many calls from would-be patients seeking a prescription stimulant for self-diagnosed “adult ADHD.” The callers sound dismayed when I point out that my diagnosis may not agree with theirs. Similarly, patients seek me out to provide documentation and advocacy on behalf of a psychiatric disability they swear they have, but I haven’t yet evaluated. I find myself wishing that such callers could face the consequences of their own decisions without involving the unwanted, apparently superfluous impediment of a doctor.
These examples from my practice could be dismissed as drug-seeking or “gaming the system.” But skepticism toward physicians and our expertise goes much further. Patients insist on antibiotics for viral (or non-existent) infections. Parents refuse to vaccinate their kids. Online forums abound with horror stories of patients misdiagnosed and mistreated, who finally escape this nightmare only by taking matters into their own hands. “Ask your doctor” drug ads imply that doctors will fail to consider the advertised treatment if not for patient self-advocacy (and the generous assistance of a multimillion dollar marketing campaign). California has a voter initiative this fall that, among other provisions, would mandate random drug testing of physicians for the first time in the U.S.
There is a movement afoot to share medical records with primary-care patients, ostensibly for doctor-patient collaboration, but often justified on the basis of “transparency.” It is now deemed paternalistic for doctors to keep private notes of our own work, even though this is accepted in other professional and consultative fields. Institutions no longer trust us to do high-quality work without oversight by non-physicians who track quality and patient satisfaction measures. Some patients now balk when doctors ask personal questions, e.g., about religious practices or hobbies, that are not obviously related to a manifest disease process. Learning about our patients as people, their strengths as well as weaknesses, is apparently also paternalistic. Shouldn’t the patient decide what areas of information to divulge?
Reducing doctors to servile technicians renders us safely powerless. Never mind that we can no longer diagnose or treat illness as well, for example by drawing unanticipated connections between habits and disease. For many patients, and apparently for society at large, it is more important not to feel a power differential.
This is an odd sentiment indeed. Anyone offering a skilled service, professional or not, wields a degree of power — and at least a little paternalism — over clients or customers. The computer professionals and attorneys who come to my office expect their own clients to defer to their expertise. My mechanic knows more about cars than I do, my barber about hair, my grocer about what produce is in season. Somehow we don’t find it threatening to put our faith in these authorities, especially when they welcome dialog and involve us in the decisions and recommendations that affect us personally.
People sometimes wonder when they may question a doctor’s diagnosis or advice. I say always. I’ve spent a career encouraging patients to be curious, to ask questions, to understand their suffering and what may help. This is the legacy of patient empowerment: all of us taking responsibility for our own well-being, and medical professionals respecting the right of patients to make their own well-informed health care decisions.
However — and it is a big however — this is not the same as physicians rubber-stamping everything patients believe or want. Shared decision-making lies between “doctor’s orders” and “patient’s choice” and follows the ethical standard of acting in the patient’s best interest (illustration courtesy of Practice Matters).
Nor should fear of sounding paternalistic silence us when detractors claim that everyone’s opinion is equally valid. It is falsely modest and politically naive to deny our own expertise. When it comes to medical matters, we doctors, while admittedly fallible, are nonetheless right far more often than we are wrong, and far more often than even intelligent, well-read non-physicians are. Like the attorney, computer professional, mechanic, barber, and grocer, we know things most other people do not. There is no shame in that, nor is it a power trip to point it out. A paternalism that demeans others is bad; a servility that demeans ourselves may be worse.
Top image courtesy of Ambro at FreeDigitalPhotos.net
OpenNotes is “a national initiative working to give patients access to the visit notes written by their doctors, nurses, or other clinicians.” According to their website, three million patients now have such access, generally online. Participating institutions include the MD Anderson Cancer Center in Texas, Beth Israel Deaconess in Boston, Penn State Hershey Medical Group, Kaiser Permanente Northwest, and several others. Patients with a premium account in the My HealtheVet program at the VA have access to outpatient primary care and specialty visit notes, discharge summaries, and emergency department visit notes. The New York Times recently ran a mostly celebratory piece on OpenNotes as applied to mental health visits at BI Deaconess (“What the Therapist Thinks About You”), garnering over 350 public comments. Significantly, many of these comments expressed annoyance with any mental health professional who cited potential drawbacks — despite the fact that BI Deaconess doctors who actively participate in OpenNotes concede that such openness may be detrimental for those with “psychiatric or behavioral issues” (e.g., see this promotional video, starting at 2:15).
The notion of sharing clinical notes with patients enjoys populist appeal. On a self-report survey with no control or comparison condition, patients reported that OpenNotes helped them remember what was discussed during visits, feel more in control of their care, and adhere better to their medications. Advocates also say it improves communication with patients and can correct factual errors in the record. However, the strongest argument seems to be that patients like it. Defenders repeatedly invoke “transparency,” implying that the status quo is intentionally obscure and aims to hide something from patients. Some of the rhetoric has a defiant, even self-righteous tone: one promotional video (at 3:16) features a patient who pointedly declares that she’ll never be refused this access again. And there’s no clear endpoint: about 60% of the patients surveyed in the OpenNotes study believed they should be able to add comments to a doctor’s note, and about a third believed they should be able to approve the notes’ contents; the overwhelming majority of participating physicians disagreed with the latter. If OpenNotes is widely accepted, it will be increasingly difficult to draw clear lines regarding the authorship and authority of clinical notes.
Fifty-five percent of eligible primary care doctors declined to participate in the OpenNotes study cited above. Of those who did participate:
Several doctors struggled with the notion of a one-size-fits-all note, arguing that one document cannot address the needs of billing, other doctors, and patients. A few changed their own use of the note; for example, eliminating personal reminders about sensitive patient issues, excluding alternate diagnoses to consider for the next visit, restricting note content, or avoiding communication with colleagues through the note…. A substantial minority reported [changing documentation, in particular when addressing potentially sensitive issues], including their reported change in “candor.” For example, some doctors reported using “body mass index” in place of “obesity,” fearing that patients would find the latter pejorative.
§ § §
“Progress note,” not “visit note,” is the traditional term for a physician’s written entry into a patient’s medical record, documenting an outpatient or inpatient encounter. (OpenNotes advocates may find “progress note” too quaintly optimistic to be publicly acceptable.) Physicians write other notes for other purposes, including admission notes, procedure notes, transfer notes, discharge notes, and so forth. Additionally, many notes are written by nurses and a wide variety of other clinical personnel, particularly in inpatient settings.
The traditional format of a progress note documents (1) symptoms and (2) physical examination, including lab test results, obtained by the physician, (3) his or her differential diagnosis, and (4) the next steps, such as further exams, tests, or treatments, that follow therefrom. Medical students are taught to write SOAP notes, the acronym (Subjective, Objective, Assessment, Plan) standing for these four components. Such notes assist in performing and archiving medical work, much as a scientist’s laboratory notebook records the design, data, and results of experiments. Progress notes were not designed to be a legal defense against malpractice suits, justification for third-party payment, quality-assurance tools for health institutions, or educational handouts for patients. Yet these notes now serve many masters, resulting in excessively time-consuming documentation that squeezes out face-time with patients, and is increasingly cumbersome as a clinical tool. Some of the additional trade-offs in adding yet another stakeholder, the patient reviewer, are cited in the quotation above, and cannot be casually dismissed as balderdash by defenders of OpenNotes.
OpenNotes presumably works best in primary care, and with an electronic medical record that expands abbreviations (and/or provides templates), corrects spelling, and produces legible output that patients can access online. In contrast, notes with technical jargon by specialists such as ophthalmologists, anesthesiologists, radiation oncologists, and many others would be incomprehensible unless radically altered to be more patient-friendly. Less “connected” practices would similarly be left out. But even in the best-case scenario, progress notes are a poor tool for doctor-patient collaboration. By nature they are shorthand, telegraphing complex medical reasoning in a few words. Old-fashioned discussion is paradoxically superior for assuring that doctors and patients are “on the same page.” Written material designed specifically for patients is better suited for reminders about what was discussed and how to take medications as prescribed.
The real thrust of the OpenNotes initiative is less pragmatic. Many patients want to feel more in control of their care. In addition, doctors aren't trusted as profoundly as we used to be. If given the chance, many patients will gladly join the ranks of those who look over our shoulder. And of course, if the traditional use of progress notes is framed as paternalistic or elitist, reforming these notes into something "democratic" seems like the only sensible thing to do. The fervor to empower patients in this misdirected way (further) dulls a useful documentation tool, which is no more inherently elitist or paternalistic than the work notes of a car mechanic or the recipe notes of a chef. Everyone feels good about this newfound "transparency." And that, apparently, is what really counts.
These considerations apply doubly in the case of mental health notes. My colleague who writes the Psych Practice blog wrote a response to the New York Times piece on sharing therapy notes. I agree with her completely. I’d only underscore that psychotherapy based on psychoanalytic and psychodynamic principles depends crucially on gauged disclosure and the timing of verbal interventions. These treatments anticipate and rely on the reality that the perspectives of therapists and patients inevitably differ, and that this discrepancy is not a simple error or miscommunication, but instead is the engine that drives psychological change. Arguing for transparency in such treatment is tantamount to wishing that these therapies disappear (some critics will readily acknowledge this).
The relationship between doctors and patients should always be collaborative, but it is never equal. One party is ill and needs help; the other offers expertise and resources the first lacks. "Giving everyone a say" sounds democratic, but medicine isn't practiced democratically. Try asking a car mechanic or a chef at a fine restaurant (or your child's schoolteacher, or an architect, or a police officer…) if you can share in their work-flow and decision making. Most will initially appreciate your interest and offer you an overview. A kind one may let you look under the hood. However, very soon you will be told that you are in the way — that you can watch intently or enjoy a good result, but not both. There is nothing paternalistic about this; it's how skilled workers do their jobs. When reminded that this applies to physicians as well, and once the thrill of the "forbidden" behind-the-scenes look wanes, we will see that the remaining advantages of OpenNotes are better served by other means.
Sounding like something straight out of science fiction, DARPA recently announced grants to fund research and development of implantable brain-stimulation chips aimed at relieving, or even curing, mental disorders. The Defense Advanced Research Projects Agency thinks big, and it has the money, i.e., our tax dollars, to back it up. Decades ago, DARPA brought us the internet. In comparison, revolutionizing psychiatry ought to be a walk in the park — right?
Find a need and fill it: “Current approaches — surgery, medications, and psychotherapy — can often help to alleviate the worst effects of illnesses such as major depression and post-traumatic stress, but they are imprecise and not universally effective.” You can say that again. So DARPA created a program called SUBNETS (Systems-Based Neurotechnology for Emerging Therapies) “to generate the knowledge and technology required to deliver relief to patients with otherwise intractable neuropsychological illness.” SUBNETS aims to create an “implanted, closed-loop diagnostic and therapeutic system for treating, and possibly even curing, neuropsychological illness.” In other words, computer chips in the brain.
SUBNETS will pursue the capability to record and model how these systems function in both normal and abnormal conditions, among volunteers seeking treatment for unrelated neurologic disorders and impaired clinical research participants. SUBNETS will then use these models to determine safe and effective therapeutic stimulation methodologies. These models will be adapted onto next-generation, closed-loop neural stimulators that exceed currently developed capacities for simultaneous stimulation and recording, with the goal of providing investigators and clinicians an unprecedented ability to record, analyze, and stimulate multiple brain regions for therapeutic purposes.
SUBNETS is hedging its bets. With an overall budget of $70 million, it is funding both a diagnosis-based arm, in the manner of the DSM-5 of the American Psychiatric Association (APA), and a "trans-diagnostic" approach, in the manner of the Research Domain Criteria (RDoC) of the National Institute of Mental Health (NIMH). The ideological rift between the APA and NIMH last year was awkward and impolitic; fortunately, SUBNETS has the resources to avoid choosing sides. A research team at the University of California San Francisco (UCSF) will receive up to $26 million to study diagnostic groups, specifically post-traumatic stress, major depression, borderline personality, general anxiety, traumatic brain injury, substance abuse and addiction, and fibromyalgia/chronic pain. Another team at Massachusetts General Hospital (MGH) will receive up to $30 million to tackle trans-diagnostic traits, such as increased anxiety, impaired recall, or inappropriate reactions to stimuli. Both groups will include public and private partnerships, including with device manufacturers Medtronic, Draper Laboratory, and the start-up Cortera Neurotechnologies.
What to make of this? Well, it’s certainly ambitious. As I read it, the effort relies on several unproven premises. First, that psychiatric diagnoses, as currently construed, can be differentiated by monitoring activity in specific brain pathways. This has been tried before without success, and it isn’t clear that sensor technology was the reason. An alternative model would suggest that mental states are an emergent property of widely integrated brain states. If so, chips implanted in specific areas could no more capture this complexity than carefully listening to the trombone section could capture a symphony.
Another assumption is that carefully focused electrical stimulation can treat a variety of mental disorders. The efficacy of transcranial magnetic stimulation (TMS) to treat depression provides some support for this idea. In contrast, typical comparisons to deep brain stimulation to treat seizures and severe obsessive-compulsive symptoms only go so far. Analogous stimulators may quell a panic state or chronic pain. It is less clear how complex interpersonal patterns, such as those seen in borderline personality or substance abuse, could respond to this type of intervention. Of course, we shall see.
A central tenet of SUBNETS is that implanted technology can promote healthy (or curative) neural plasticity. Plasticity is a popular concept at the moment, highlighting the fact that brain wiring is not static, as was previously assumed. “Neurons that fire together wire together” — that is, synaptic connections change dynamically in response to input, i.e., life experience. This property underlies the hope that implanted stimulators may change the activity of neural pathways in a permanent way, “firing” the pathway together to make it “wire” together, and allowing the device eventually to be removed. Again, we shall see.
Of course, there are many stumbling blocks ahead. Implanting brain chips is no small matter, and this approach is unlikely to be used in the foreseeable future for anything short of the most severe, treatment-resistant disorders. Initial public commentary immediately homed in on the "military mind control" aspect of the project, with visions of soldier drones being controlled on the battlefield via implanted chips. The potential abuse of such technology is manifest and terrifying, and careful controls and standards are needed to assure freedom, not to mention safety.
At the most mundane level, the technology will only work if the science behind it is sound, and that remains to be seen. Nonetheless, if even a portion of the SUBNETS agenda comes to pass, it would represent a monumental leap for psychiatric treatment.