The first principle is that you must not fool yourself, and you are the easiest person to fool. ― Richard Feynman
Almost everyone is familiar with the placebo effect (e.g., taking a sugar pill will make your symptoms diminish if you believe the drug is real). During a placebo experiment, the expectation of the patient, rather than the nature of the drug, is the mechanism that affects the outcome of treatment. The power of suggestion alone can have a strong influence on our experience regardless of the content of the proposed intervention. Conversely, but less familiar, is the nocebo effect (e.g., taking a sugar pill will make you feel worse if you believe the drug is real and has consequential side-effects). The word nocebo in Latin essentially means “I shall harm.” For example, if a discussion between a doctor and patient includes the topic of experiencing negative side-effects when ingesting a substance, the likelihood that those symptoms will manifest increases. There are valid public safety reasons for clinical trials, peer-reviewed data, and understanding the chemistry involved whenever medication is administered (e.g., pharmacodynamics, pharmacokinetics) to minimize undesirable side-effects. Likewise, comprehensive studies are conducted on the perceived efficacy of less invasive, non-pharmaceutical treatment modalities. However, outcomes based on objective treatment measurements can be subjectively or suggestively influenced. How much does personal opinion impact the safety of treatment, and how much does the power of suggestion dictate the experience of treatment?
As a psychotherapist who does not specialize in biochemistry, pharmacology, or internal medicine, I realize that many treatment dilemmas are beyond my domain of expertise (hence armchair deductions). Nonetheless, what remains interesting to me is the psychology of motivated reasoning, anecdotes as evidence, suggestive priming, subjectivity, correlation/causation errors, and logical fallacies. In fact, one of my pet peeves in the museum of logical fallacies is known as the fallacy of illicit transference, which contains two subsets of fallacious reasoning. The first is the fallacy of composition, which involves believing that what is true of the parts is also true of the whole (e.g., negative experiences for a few people will equal negative experiences for all). In the opposite direction, but on the same continuum, the fallacy of division refers to the belief that what is true of the whole is also true of the parts (e.g., the healthcare system is corrupt … therefore, this treatment facility is corrupt and that clinician is lying to me). I cannot tell you how frequently these fallacies are generated and how appealing they can be when the mind seeks resolution in the face of cognitive dissonance. Throw in a bit of anecdotal allure and you’ve got a two-for-one in the department of erroneous reasoning (e.g., my aunt received treatment last year for her arthritis and then complained of experiencing severe migraines for several months … so I’ll never trust a doctor again). Understandably, people want to feel empowered as consumers and value their health just as much as their time, but jumping to conclusions based on limited information is evidence of yet another fallacy: hasty generalization. Correlation may not equal causation any more than snoring would imply lucid dreaming.
Asking critical questions and being an advocate for your health is admirable, but asserting unfounded claims while irresponsibly influencing others is not. Of course, as we will see, most controversies are not born in a vacuum.
Sturgeon’s law, named for the science fiction author Theodore Sturgeon, is an amusing proposition that boldly states “ninety percent of everything is crap.” Despite the dubious mathematical accuracy of this summation, it offers a useful critique of the ways in which people seek information online (i.e., Google University Syndrome). With enough motivated reasoning, a person can uncover selective sources and tendentious articles via data mining that reinforce a prior conviction without doing the requisite fact-checking. To make matters worse, most individuals do not know what qualifies as empirically valid information and can easily be seduced by “mavericks” who emphasize their academic or occupational qualifications as experts only to mislead anyone who lacks a sufficient background in the field (often motivated by selling bogus products, exposing “the system,” selling books, and promulgating “natural” remedies). In other words, conspiracies are easy to manufacture, but long-term studies based on collective evidence and peer-reviewed data are challenging. With the world at our keyboard, the likelihood of being persuaded by specious claims can render us exponentially gullible. In addition, the local consensus of laypersons should never be as persuasive as a global consensus of highly trained professionals regarding any science-based inquiry. For example, the opinion of most citizens in a small town regarding alternative forms of cancer treatment does nothing to invalidate the facts about legitimate treatment compiled by the World Oncology Network. Research takes time, and “miracle cures” are probably less than miraculous once you take the time to do the research.
Similarly, a handful of op-ed articles written about vaccines for The Huffington Post are not going to obliterate peer-reviewed findings in Oxford’s International Immunology medical journal or dismantle CDC data collection systems for vaccination statistics.
Certainly, we shouldn’t be dismissive of physiological subjectivity, and it’s often necessary to compare evidential information with personal experience while being mindful of our bodies. After all, no one can tell you what it feels like to be in pain or when to trust your intuition when it’s symptomatically obvious that you should (there have been plenty of false negatives during diagnostic examinations). Nonetheless, there remains no dispassionate control group of the mind once we have committed ourselves uncritically to a controversial narrative. How can we remain vigilant about factual data in the face of suggestive influences, or learn to wait until all the evidence is available before assuming epistemic victory? Not every association is significant enough to warrant a causal claim, and correlations should only be a guide for ruling out other possible (often more likely) variables. Wanting to believe something is not the best compass for truth, and personal experience does not equal universality.
Publicly constructed fear-mongering that encourages unenlightened self-interest is a pernicious form of cultural propaganda, and it can have dire ethical consequences. Remember, there is a difference between informed speculation and unhinged credulity. We must also be willing to relinquish false propositions or behavioral trends when faced with a comprehensive analysis that indicates a different conclusion, despite our emotional investment in particular outcomes or our discomfort with the current paradigm. Most importantly, we must be willing to listen carefully to those who have dedicated their entire lives to research and education regarding science-based subject matter and make sure that their findings have been supported on an international level. Specialty alone is not enough to validate knowledge, but overwhelming agreement from many specialists will strengthen the reliability of any methodology when that process is also verifiable, falsifiable, and repeatable.
When a psychic tells you to beware of bad karma, your pattern-seeking mind goes on high alert. If anything then goes wrong, the psychic has been substantiated and you feel your money was well spent. Of course, this psychic knows there ain’t no business like the nocebo business.