November 6th, 2011 The following post is an adaptation of an argument I presented on Sacramento Street Psychiatry, my blog on the Psychology Today website. As usual, I welcome your comments.
Western medicine’s great strides are largely due to understanding etiology (the biological basis of disease), defining a nosology (a system of categorizing diseases), and testing treatments aimed at these nosological entities, not at individual patients. Take 100 healthy volunteers, swab their throats with Streptococcus, and perhaps 88 will soon develop strep throat. Both our knowledge of bacterial infections (etiology) and repeated empirical observation of similar cases lead us to conclude that Streptococcus causes a recognizable condition called strep throat (nosology). Once patients are diagnosed with strep throat — once their conditions become exemplars of this disease category — experiments can be done to show which treatments relieve the condition. Western medicine is the accretion of such knowledge.
Hypotheses about disease categories, and about treatments aimed at these categories, can be tested using randomized controlled trials (RCTs), our most powerful statistical method to assess the effect of independent variables. As in the rest of medicine, evidence supporting the efficacy of psychopharmacology, as well as manualized psychotherapies such as CBT, depends on sorting patients into nosological categories such as “major depression,” applying different treatments to comparison groups, and finding statistically significant group mean differences. In psychology such a research approach is called nomothetic; the goal is to identify general laws of behavior.
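For readers who like to see the logic in concrete form, the nomothetic approach boils down to comparing group means. The following is a toy simulation, with entirely invented numbers (a 3-point "true" treatment effect on a hypothetical depression scale), not data from any actual trial:

```python
import random
import statistics

random.seed(0)

# Toy nomothetic trial (all numbers invented): 50 subjects per arm,
# outcome is a depression-scale score after treatment (lower = better).
# The "true" treatment effect here is a 3-point drop in the group mean.
control = [random.gauss(20, 5) for _ in range(50)]
treatment = [random.gauss(17, 5) for _ in range(50)]

mean_diff = statistics.mean(control) - statistics.mean(treatment)
pooled_sd = statistics.stdev(control + treatment)
print(f"group mean difference: {mean_diff:.1f} points")
print(f"standardized effect size (approx. Cohen's d): {mean_diff / pooled_sd:.2f}")
```

Note what the comparison does and does not tell us: it speaks to the category ("treated subjects improved more, on average") while saying nothing about any individual subject, which is precisely the idiographic complaint taken up below.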
However, another kind of knowledge is important too. Why didn’t the other 12 subjects get strep throat? Is it the same reason for all 12, or is the answer different for each of them? Looking at what makes people unique, as opposed to members of a category, is called idiographic research in psychology. This is the nature of psychodynamic theory and treatment, and why it resists the usual RCT approach to research. Patients who present for such treatment rarely fit neatly into a category such as “depressed.” They vaguely say their lives aren’t working well for them, or that their relationships are unsatisfying in a particular way. They lack meaning and purpose in life. They get a “funny feeling” when dealing with competition. Their boss triggers authority issues. They can’t trust their spouse’s fidelity. And on and on. Such complaints are not exemplars of a nosological category. We may not know what causes schizophrenia or bipolar disorder — we have no etiological understanding of any psychiatric disorder, one reason they are called “disorders” and not “diseases” — but at least these labels reflect a coherent nosology. Not so with the presenting complaints of most psychotherapy patients.
Psychodynamic therapists and psychoanalysts find little of value in the nomothetic approach. DSM-IV and similar nosologies shed no light on the particular patient in the office, with his unique history, dreams, fears, hopes, etc. The psychoanalytic/dynamic perspective is to understand the uniqueness of that specific patient, and to promote unique helpful changes that may have no relevance to any other patient seen in the practice.
This is not to discount the importance of the nomothetic approach where it applies. If a patient’s condition is exemplary of a nosological category, it should be treated that way. Doing so allows us to use powerful research tools to separate bias and wishful thinking from real treatment effects. If a patient presents with major depression, bipolar disorder, or schizophrenia, nomothetic research can and should guide treatment. In such cases, psychodynamic therapy must stand or fall on the same RCT basis as other treatments. The evidence base for manualized psychotherapies such as CBT, IPT, and a few others is stronger than for dynamic psychotherapy. If someone is seeking relief of major depression, pure and simple, I am happy to refer them to a CBT therapist, and have done so on a number of occasions. It would be nice to be able to claim strong evidence for the efficacy of prescription antidepressants as well, but unfortunately this is less clear.
CBT and other manualized therapies for specific conditions are much easier to study than dynamic therapy for ill-defined complaints. So it’s really no surprise there are more such studies. Idiographic research methods, e.g., pre- and post-treatment measures in single-case designs, have been used to study dynamic psychotherapy, both whether it works and how. But nomothetic researchers consider this “weak science”: there are no control groups — no groups at all, actually.
The bottom line is that dynamic psychotherapy has different goals than CBT or medication. It doesn’t aim to treat a nosological category such as major depression. Since it isn’t based on a nomothetic treatment model, RCTs are the wrong assessment tools to use. Idiographic research methods may be statistically weaker than their nomothetic counterparts, but they are the best that this domain of inquiry allows. (Seligman argues that naturalistic surveys have their place too.) Dynamic psychotherapy is based on a rich theoretical foundation that has been scrutinized and refined for the past century. But ultimately it comes down to the individual and the unique mix of discomforting feelings and troubling thoughts that led him or her to reach out for help.
September 25th, 2011 People sometimes wonder whether I “analyze” everyone I meet. This is usually asked with some fear that as a psychiatrist I can “see right through them” and instantly know things about their innermost thoughts they’d prefer to keep hidden. Although this is true (just kidding), I try to reassure them with the following analogy.
Imagine an architect whose business and personal life includes walking into and out of buildings all day. Does the architect “analyze” every building — home, coffee shop, office, gym — all day long? I doubt it. Perhaps if a particular construction is especially creative, or unusual, or singularly beautiful or ugly. But most of the time an architect relates to buildings the same way everyone else does: for the personal reasons he or she visited there. (If there are any architects out there, please confirm!)
In my experience the same is true of psychiatrists and other mental health practitioners. We deal with people all day, both professionally and personally. When working, our attention is directed in a certain way, toward understanding the person in front of us. After all, this person paid good money for us to focus our attention exactly this way. Other than this, though, we deal with loved ones as loved ones, colleagues as colleagues, store clerks as store clerks, and so forth. It is only when someone’s personality or behavior is noteworthy and unusual that we may find ourselves viewing them momentarily through our “psychiatrist glasses.”
I’ve heard it works similarly for doctors and medical diseases. Occasionally a case of acromegaly, cerebral palsy, rheumatoid arthritis, or psoriasis can be diagnosed in a stranger on the street, or in a crowded elevator. Most of the time, though, people are just people.
The question about analyzing everyone often seems to harbor some anxiety. It feels threatening to have possessors of mystical and limitless insight lurking among us, wantonly tearing holes through the public persona and self-image of each innocent bystander.
Fortunately, this is a fantasy. Being a psychiatrist doesn’t make me a mind-reader. It usually takes an hour of formal intake interviewing before I begin to have a sense of a person’s personality. Often it takes more than one session. While it’s true that people, not just psychiatrists, can pick up clues to personality early in a conversation, psychiatrists aim more for accuracy than speed. Instant on-the-fly psychiatric diagnosis or case formulation is fraught with uncertainty and error because it is based on insufficient data. As professionals, we are trained not to shoot from the hip, and for good reason: our opinion should mean something. If the considered views of psychiatrists are to matter more than the hunches of untrained persons, we must refrain from offering half-baked, “cocktail party” assessments. I cringe when I hear a colleague spouting off about a politician or celebrity known only through the media. A detailed study of someone not personally interviewed, e.g., a psychohistory, may be defensible; an off-the-cuff opinion cloaked in psychological jargon is not.
“Analyzing everyone we meet” is literally impossible, and as in the case of the architect, would be a huge distraction from everyday life. Moreover, even attempting it is unprofessional. We should reserve any such analysis for the clinical office, where the setting is conducive, and the data sufficient, to make a meaningful assessment.
September 18th, 2011 Tara Parker-Pope of the New York Times blog Well featured my prior post, on the feelings some patients have as they imagine whether their psychotherapists have been in therapy themselves. My post was about patients’ fantasies, not the reality of therapy for therapists. Nonetheless, many of the comments argued for the great value of such therapy, and one or two expressed amazement that such therapy is not universally required. I agree that psychotherapists have much to gain from personal therapy, and in this follow-up post I’ll offer some reasons why.
Is therapy required in order to become a therapist? In the U.S., generally not. According to Geller, Norcross, and Orlinsky [1]: “In most European countries, a requisite number of hours of personal therapy is obligatory in order to become accredited or licensed as a psychotherapist. In the United States, by contrast, only analytic training institutes and a few graduate programs require a course of personal therapy.”
A “training analysis” is required to become a psychoanalyst; that is, one must be analyzed oneself. However, in the U.S. personal therapy is not required to practice other schools of psychotherapy, nor to obtain licensure in mental health disciplines such as psychiatry, clinical psychology, etc. Specific training programs within a discipline may require it, and certainly a large number of programs recommend personal psychotherapy for their trainees. Indeed, many strongly encourage it by offering referrals to therapists, low-fee therapy, time off from training to attend therapy, and so forth. In a 1994 survey of psychologists by Kenneth Pope and Barbara Tabachnick, 84% reported having had psychotherapy themselves, although only 13% had attended a graduate program requiring personal therapy for therapists-in-training [2]. Whether by mandate, urging, or independent choice, many practicing psychotherapists can claim experience in “the other chair.”
At the most commonsense level, a therapist who knows what it is like to be a patient may be more empathic, and may anticipate unstated feelings more readily than a therapist without this first-hand knowledge. For example, vacation breaks can feel extraordinarily disruptive to patients, a fact that can be taught in lectures or textbooks (or blogs), but may not be fully appreciated until it is experienced oneself. Transference in general is better understood experientially than learned academically. Even non-analytic therapists can benefit by recognizing transference and other common “real-time” emotional reactions, conscious and unconscious, in their patients or clients; these can affect rapport, treatment adherence, and so forth. Psychodynamically informed practice is a hallmark of psychiatry, even when psychodynamic treatment is not offered. The same, I would argue, is true of other mental health disciplines. Psychologists conducting CBT and clinical social workers leading support groups should know about psychodynamics too. And the best way to learn dynamics is experientially, in one’s own psychotherapy.
The argument is even stronger for therapists who practice traditional psychodynamic therapy, where transference and countertransference are essential treatment tools. As I wrote last year, it takes self-knowledge to use countertransference therapeutically. Without this self-knowledge it would be impossible to sort out the patient’s issues from one’s own. In seminars for psychiatry residents, I point out that our field has no blood test or brain scan to directly measure thoughts and feelings in the interpersonal space. Our own feelings, countertransference broadly defined, are the sensitive instrument we bring into the consultation room. The therapist’s own psychotherapy “calibrates the instrument” so he or she can better trust its readings when applied to patients.
To me, this is the main reason to recommend therapy for therapists. In addition, others have argued that it normalizes and destigmatizes being in therapy (assuming the therapist discloses his or her personal therapy to the patient); that it improves one’s performance as a therapist non-specifically, by relieving stress and tension; and that it may give the therapist “a valuable perspective on what works and what doesn’t.” Several commenters on the NY Times blog believe the therapist’s own therapy encourages humility, and may decrease errors based on hubris and unexamined countertransference:
We are to be one of the self monitoring professions, responsible in a unique way as the stewards of our treatment with our clients…. Having our own issues worked with … goes a long way toward ensuring a unique quality of care.
I would be very wary of a therapist who had never sought therapy for him or herself. To me it would smack of an “I don’t need it — it’s for messed up folks like you” attitude.
I am also frequently shocked by the stories my patients will tell me about being in therapy with someone who clearly hasn’t worked on their issues. It can be very damaging to a patient…
A personal psychotherapy does not guarantee that a therapist will be caring, non-abusive, technically proficient, or effective. But there is little in psychotherapy, or in life, that is guaranteed. Psychotherapeutic work, particularly the psychoanalytic and psychodynamic varieties, seems closely tied to the therapist’s self-knowledge and willingness to self-reflect. If we are to use our own perceptions and reactions as sensitive instruments in the consultation room, we are well-advised to take good care of the equipment.
[1] Geller JD, Norcross JC, and Orlinsky DE, The Psychotherapist’s Own Psychotherapy: Patient and Clinician Perspectives, Oxford University Press, 2005.
[2] Pope KS and Tabachnick BG, “Therapists as Patients: A National Survey of Psychologists’ Experiences, Problems, and Beliefs,” Professional Psychology: Research and Practice, 25(3), 1994, pp. 247-258.
September 11th, 2011 Recently a patient asked whether I’d ever been in therapy myself. Without answering his question directly (see my post on psychotherapist disclosure and privacy), I replied that many of us have, and asked what it meant to him. He replied that it would be a bad sign: “How can you help if you need help too?” We went on to discuss his feeling that being in psychotherapy marked him as defective or deficient. He would naturally prefer a therapist who did not share similar defects and deficiencies.
Many patients take the opposite view. They believe a doctor who knows what it’s like to be a patient can better empathize with them. So this patient’s concern stood out in my mind — he truly feels his psychotherapy is a mark against him, a kind of declaration or admission that he is damaged. I later reminded myself that professionals — and others, everyone really — regularly use services offered by others in the same field. Lawyers have their own lawyers, doctors see their own doctors. Chefs eat meals made by other chefs, barbers get haircuts from other barbers. The only problematic examples that come to mind are when the condition being treated is shameful or morally repugnant, or when the condition could directly affect the service being offered. Examples of the former: police officers who require the “services” of other police officers after committing crimes, and clergy who need spiritual or moral counseling for their own transgressions. Examples of the latter: a neurologist with brain damage, and a business consultant who cannot maintain his or her own business and needs outside help. How does this apply to psychotherapists, and what light does it shed on patients’ feelings about seeing therapists themselves?
The need for psychotherapy feels to many people like a sign of defect/deficiency/damage. In speaking with patients I often highlight the “need” in that sentence, and contrast it with “want” or “could benefit by.” Some patients make themselves feel worse by telling themselves they “need” therapy, when it would be just as accurate to say they are apt to benefit by it, or even that they desire it. I don’t believe it devalues psychotherapy, or psychiatric medications for that matter, to note that they’re frequently optional. Most depression improves on its own eventually, and people may choose to muddle along in life dissatisfied, angry, or in a series of bad relationships. Remembering that psychotherapy is a choice may take some of the shame out of it.
That’s only part of it, though. No one worries or cares if one’s proctologist also needed to see a proctologist at some point, even though proctological conditions feel shameful to many people. In addition to shame, there is moral repugnance associated with mental illness, even, or perhaps especially, the apparently milder problems that lead people into psychotherapy. Often unstated is the notion that one chooses to be emotionally weak, distraught, hotheaded, or whatever, and that this choice is selfish, unfair to others, or otherwise immoral. Moreover, that seeking professional help to “snap out of it” or pull oneself together is self-indulgent and akin to laziness. While the idea isn’t totally groundless — there is some choice in how to act, and even how to feel sometimes — it assumes far too much conscious choice. Most troubled patients would give anything to be happier, at least consciously. Returning to my patient’s question, perhaps he would not trust a doctor who willingly made himself dependent on others to help steer his life back on course. It may feel as morally suspect as the corrupt police officer or clergyman: a character flaw in the traditional sense.
Alternatively, there may be concern that a psychotherapist who needed therapy (“needed” in scare-quotes as noted above) cannot perform well as a therapist. This would be analogous to the brain-damaged neurologist or the business consultant whose own business is failing. The logic may be pragmatic: A psychotherapist should have his or her own life in order before claiming to be able to help others. Or it may be fear that residual pathology lurking in the therapist may be harmful to the patient. Or it may be a transferential need for an idealized, faultless therapist. Each of these can be addressed as it arises. We each have our blind spots, and can help others without necessarily being able to help ourselves. It is better to have sought treatment for potentially hurtful pathology, than to have ignored or denied it. No therapist is perfect.
Any or all of these concerns about the therapist may also apply to the patient himself. Being in therapy may make a patient feel ashamed, or morally bad or wrong. It may highlight a fear of incompetence or harmfulness. It may clash with a need to be perfect. Asking the therapist “Have you seen a therapist yourself?” may be an easier way for the patient to broach sensitive feelings about his or her own participation in therapy. This seemingly simple question can carry a lot of meaning, and if explored in detail, can help a patient understand himself better.
July 12th, 2011 There is an active debate underway in the popular literature about whether antidepressant medications actually do anything chemically helpful for depressed patients. No one doubts that many patients report feeling better, and that most evidence less depression on standardized rating scales, following treatment. But much of that improvement appears to be due to psychological factors, i.e., the placebo effect. The debate is over how much improvement is not due to the placebo effect. What beneficial effects can be attributed to the active ingredients in the tablet or capsule?
It’s disconcerting to enter this debate decades after the popularization of antidepressants. These are among the most common prescriptions in America: In 2010, antidepressants were the second most commonly prescribed class of drugs in the U.S., according to IMS Health. They are so widely used that Consumer Reports publishes “best buy” recommendations about which ones to try first. Yet recent reanalyses of efficacy data have called into question whether antidepressants help more than inert pills. In a two-part piece in the New York Review of Books, Marcia Angell MD, the former editor-in-chief of the New England Journal of Medicine, favorably reviews these skeptical findings. (I won’t summarize the arguments here, but I do very much recommend her review.) In the other corner is Peter Kramer MD, author of Listening to Prozac and other books, who offers a spirited defense of antidepressants in his op-ed rebuttal in the New York Times. The 300 comments that follow the online version of the op-ed also make for fascinating reading: Many are first-person accounts of the lifesaving benefit of antidepressants.
What to make of all this? Those conversant in research methodology will pick apart the various arguments. Do the studies have enough statistical “power”? Does it matter that typical efficacy studies recruit subjects who differ from patients in clinical practice? How much difference does an “active” placebo make? Is it preferable to use subjective mood ratings, or ratings from trained observers? How many weeks or months should subjects be assessed? Should subjects with co-morbidities, i.e., additional diagnoses, be included or excluded? Are there advantages to including a third study arm (a known effective intervention) to the usual two (the drug being assessed, and placebo)?
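The statistical "power" question in that list can be made concrete. The sketch below is a crude simulation with entirely invented numbers (a small 2-point drug-placebo difference against a standard deviation of 8), meant only to show why small trials can miss a real but modest effect:

```python
import random
import statistics

random.seed(2)

# Crude power check (all numbers invented): with a small true drug-placebo
# difference of 2 points (SD 8), how often does a trial of n subjects per
# arm clear a conventional significance threshold?
def trial_detects_effect(n, effect=2.0, sd=8.0):
    drug = [random.gauss(effect, sd) for _ in range(n)]
    placebo = [random.gauss(0.0, sd) for _ in range(n)]
    diff = statistics.mean(drug) - statistics.mean(placebo)
    se = sd * (2 / n) ** 0.5      # standard error of the mean difference
    return diff / se > 1.96       # rough z-test criterion

powers = []
for n in (25, 100, 400):
    power = sum(trial_detects_effect(n) for _ in range(500)) / 500
    powers.append(power)
    print(f"n={n:>3} per arm: effect detected in ~{power:.0%} of simulated trials")
```

The same true effect that a large trial detects reliably is missed most of the time by a small one, which is one reason negative findings alone do not settle the debate.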
There are many such questions that need to be resolved, and professional researchers are probably in the best position to discuss them. Meanwhile, the rest of us are left with a seeming paradox. Thousands — millions? — of individuals claim relief from antidepressant treatment, and virtually any psychiatrist will swear that antidepressants really have helped many of his or her depressed patients. (This is my own experience, by the way — it’s nearly inconceivable to me that antidepressants are no more than placebos. I’ve seen too many patients improve before my very eyes.) Meanwhile, there are also many patients, equally depressed, who obtain little or no benefit from antidepressants, and a large number of carefully conducted studies that find little benefit in the active ingredients of these pills, once placebo effects are factored out.
While I can’t prove it, my sense is that the answer lies in the heterogeneity of depression. Some patients get dramatically better on antidepressants (in entirely believable ways, as opposed to reactive “flight into health” and the like), some only a little, and others appear not to change at all. Widely varying responses can easily “average out” in the usual randomized controlled trials used to assess efficacy, and could account for lackluster findings in group studies. Since I do have some research background and training myself, I’d want to see the scatterplots of individual subject ratings, to see if they cluster into responsive, partly responsive, and unresponsive groups.
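The averaging-out idea can be sketched as a toy simulation. All the numbers are invented, including the assumption of three equal, cleanly separated responder subtypes, which real depression almost certainly does not offer so neatly:

```python
import random
import statistics

random.seed(1)

# Toy heterogeneous sample (all numbers invented): equal thirds of strong
# responders, partial responders, and non-responders to the drug, with
# "improvement" measured in points on a hypothetical rating scale.
true_effects = {"responder": 10.0, "partial": 4.0, "nonresponder": 0.0}
subtypes = ["responder"] * 30 + ["partial"] * 30 + ["nonresponder"] * 30
improvements = [random.gauss(true_effects[s], 3.0) for s in subtypes]

# The overall group mean sits between the clusters and describes no one well.
print(f"overall mean improvement: {statistics.mean(improvements):.1f} points")
for subtype in ("responder", "partial", "nonresponder"):
    scores = [i for i, s in zip(improvements, subtypes) if s == subtype]
    print(f"{subtype:>12}: mean {statistics.mean(scores):.1f} points")
```

The overall mean lands at a modest value even though a third of the sample improved dramatically, which is exactly the pattern a scatterplot of individual ratings would reveal and a group mean conceals.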
Of course, it is not a new idea that some depression responds to medication and some doesn’t. When I started medical school, psychiatrists distinguished “endogenous” and “exogenous” depression — i.e., depression that originated within the patient chemically, and depression that originated from external stress or loss. (For a concise summary of the idea, see the first paragraph of this editorial.) Antidepressants were thought to help the former but not the latter.
Unfortunately, that wasn’t true. As it turns out, knowing whether an external event precedes a depression doesn’t predict whether an antidepressant will help. The search has gotten more sophisticated lately, and measurable genetic subtypes may one day tell us who will benefit by antidepressants and who won’t. But we’re not there yet. At this point, we cannot predict whether an individual patient will improve with antidepressant medication.
I’ll end this post by noting that the placebo effect, a vexing complication in clinical research, isn’t a bad thing in real life. If a patient feels better, I don’t worry too much about who or what gets the credit. Maybe it’s the citalopram or sertraline in the pill. Maybe it’s the patient’s belief in the pill and in the medical science behind it. Maybe it’s the fact that I gave the patient something that our culture imbues with symbolic healing powers. Maybe my words were healing and the prescription was a mere distraction. Or maybe I had no effect at all, and the patient healed himself or herself. Usually it’s impossible to know. In my view, being a psychiatrist in clinical practice requires this kind of agnosticism and humility.
June 20th, 2011 This post was inspired by an article in the May 30th issue of The New Yorker, “God Knows Where I Am” by Rachel Aviv. Full-text online is only available by subscription, but a free abstract is available here. In the process of telling a riveting and ultimately very sad story, the author discusses psychiatric insight.
Insight is a curious concept as used in psychiatry. In common parlance insight is unquantifiable, something like charm or wisdom. We feel we know it when we see it. But most of us hesitate to make finer distinctions. We may allow that someone strikes us as a little insightful or very wise. Beyond that, it seems ludicrous to attach a scale to it, or to refer to insight as though it could be measured precisely.
Nonetheless, in psychiatry an assessment of insight is part of the “mental status examination” (MSE), the psychiatrist’s version of the physical exam in general medicine. Along with assessments of mood, affect (expressed emotion), paranoia, suicidal feelings, and other issues, the psychiatrist also evaluates the patient’s insight.
Psychiatry has no standardized way to assess this. We may ask our patient: “What is your understanding of the problem that brought you here today?” It’s a great question — the problem is what to do with the answer. Critics note that if the patient’s response accords with the psychiatrist’s own belief, the patient is judged to have good insight. Thus, in an earlier era when psychoanalysis was predominant, a patient with schizophrenia exhibited good insight by agreeing that his “schizophrenogenic” mother caused the problem. Nowadays, this would be evidence of clear impairment; the insightful patient would instead agree with his psychiatrist that he has a “chemical imbalance.”
For better or worse, many such judgments in psychiatry — perhaps most of what we do — cannot be divorced from social context. Exuberance in one crowd may look like hypomania in another. “Inappropriate” affect raises the question: what is appropriate? And likewise, an understanding of one’s own mental health status (or psychiatric label) is meaningful only within one’s social group and culture.
Anosognosia is a term from neurology. As defined in Mosby’s Medical Dictionary, 8th edition:
[an′əsog·nō′zhə]
Etymology: Gk, a- (not) + nosos (disease) + gnosis (knowing)
a lack of awareness or a denial of a neurologic defect or illness in general, especially paralysis on one side of the body. It may be attributable to a lesion in the right parietal lobe.
Certain patients with brain disease or injury appear not to know they are paralyzed (or blind, etc). Presumably, parts of the brain involved with self-awareness are damaged. This lack of knowing then becomes one of the signs of the disease itself, and may help with diagnosis. For example, the cause of a paralysis may be localized to the parietal lobe if it is accompanied by anosognosia.
The term has lately appeared in psychiatry (and is discussed briefly in the New Yorker piece). This is a worrisome error in my opinion. Its use seems intended to make psychiatry sound better understood, and more biological/neurological, than it really is. A person who denies having a psychiatric disorder may delusionally attribute his or her difficulties to space aliens. This makes a good case for extending anosognosia into psychiatry. But a denial could equally be an honest difference of opinion, as when a patient discounts a diagnosis of Social Anxiety Disorder because shyness is a family trait. Here, denial of an anxiety disorder is certainly not a sign of having such a disorder. And of course social stigma leads many patients to deny having a psychiatric disorder; this denial likewise bears no relationship to having the disorder itself.
The reasons patients may deny having a psychiatric disorder are far too varied to reify such denial with a neurological term. It creates a suspicious “Catch-22,” where disagreeing with one’s doctor is itself a diagnosable condition with a fancy medical name, and the implication of brain-structure underpinnings. This is sophistry, and the mark of a profession whose false certainty belies insecurity.
Many years ago I wrote a short essay arguing that social judgments in psychiatry (e.g., inappropriate affect) are both inevitable and essential to our work. I was not a psychiatrist yet, but nothing I have seen since has changed my view. Despite great advances in biological psychiatry, we still cannot ascribe specific attitudes or viewpoints to neurological damage. Insight is still subjective. And if we ever do identify the seat of “psychiatric anosognosia,” our understanding will no longer be psychiatry, but neurology.
May 21st, 2011 Fundamentalist Christian minister Harold Camping of Oakland, California, has widely publicized that today is the day of the Rapture, when according to some interpretations of the New Testament true believers ascend to heaven to escape impending misery and turmoil on Earth. I am writing in the afternoon, and can’t guarantee just yet that Camping is mistaken. But let’s assume he is: He was wrong before, and he is just the latest in a long string of mistaken end-times prophets. I promise to post a prompt, heartfelt apologetic retraction if he turns out to be right — and if the internet and I survive the initial cataclysm.
I have a few reflections on end-time prophecies, starting with the admission that I’ve always found them oddly alluring. As a child, I knew I would be alive in the year 2000. In my young mind this futuristic date glittered with flying cars, modular glass homes, one-piece unisex jumpsuits that somehow didn’t look absurd, and one or more Moon colonies. But in addition, I had repeatedly heard predictions that Christ’s Second Coming would coincide with the new millennium. Although there is plenty of theological controversy on this point even within Christianity, and even though I was not raised to believe anything of the sort, it always struck me as exciting that such a grand moment might actually take place in my lifetime.
With the year 2000 come and gone, most end-time attention has since moved to 2012, when, among other things, the Mayan calendar supposedly runs out of dates. Even so, I wonder whether Mr. Camping, who is 89 years old, is consciously or unconsciously motivated by the possibility that this greatest of historical events might occur in his remaining natural lifetime. Perhaps it is human nature both to hope and to believe that we live in a unique time. A touch of narcissism perhaps?
Psychologists and others have wondered, and occasionally studied, how believers deal with mistaken prophecy. What will Camping and his followers do or say tomorrow? Leon Festinger’s classic 1956 study “When Prophecy Fails” suggests that rather than recanting his beliefs, Camping is apt to rationalize his failed prophecy. For example, he may claim his calculations were off, or declare a divine 11th-hour reprieve for the world. Of course, some followers, perhaps the majority, are apt to feel disillusioned and humiliated. The “Great Disappointment” of 1844 offers the historical precedent of a similar failed prophecy.
There is a non-religious definition of rapture: “n. the state of being carried away with joy, love, etc.; ecstasy.” In a larger sense, we all seek to connect with something bigger than ourselves. For many, it is religion and its connection with God. Others find connection and larger purpose in humanitarian or political work. Playing music or team sports with others can satisfy this need to some extent, as can being part of the crowd at a concert or other event. Even mobs and riots satisfy this need, albeit in destructive ways. The lure to belong, to share experiences with others, to have a larger purpose, to be “in a groove” seems innate. I once saw a greeting card that read, “People who never get carried away… should be.”
It is really no surprise that doomsayers capture headlines and our attention. Whether we expect to rise to heaven today with God’s Chosen, or join others in ridiculing the gullible — or blog to readers on the internet — we all can be part of a grand spectacle. It makes this sunny Saturday more special than it would otherwise be, and ourselves a bit more connected to feelings, purposes, and forces greater than ourselves.