For years, I’ve grumbled to myself about an irritating tendency in science punditry. I haven’t written about it before, because it’s subtle, even paradoxical, and I couldn’t think of a catchy phrase to describe it. One I’ve toyed with is “premature ethical fretting,” which is clunky and vague. I’m venting now because I’ve discovered a phrase that elegantly captures my peeve: wishful worries.
The problem arises when pundits concerned about possible social and ethical downsides of a technology exaggerate its technical feasibility. This happens in discussions of psychopharmacology, genetic engineering, brain implants, artificial intelligence, and other technologies that might, in principle (that wonderful, all-purpose fudge factor), boost our cognitive and physiological abilities. Warnings about what we should do often exaggerate what we can do.
Technology historian David Brock introduces “wishful worries,” which he defines as “problems that it would be nice to have,” in a 2019 essay for the Los Angeles Review of Books. He cites these examples: “As biotechnology affords dramatically longer human lifespans, how will we fight boredom? With neurotechnology-augmentation rendering some of us essentially superheroes, what ethical dilemmas will we face? How can we protect privacy in an age of tech-enabled telepathy?”
We expect boosters of techno-enhancement to minimize technical obstacles. Transhumanists can’t wait to become super-intelligent, immortal cyborgs, or disembodied digital souls dwelling in cyberspace. They say, bring it on! When these enthusiasts downplay practical as well as ethical objections, we take their hype with a grain of salt. But critics of techno-enhancement, who superficially might seem more credible, indulge in hype too, to alarm us. Their wishful worrying leaves the public with a grossly distorted picture of science’s potential. Let me give you a few historical examples:
Cosmetic Psychopharmacology
In the late 1980s, the pharmaceutical giant Eli Lilly introduced the antidepressant Prozac, a so-called selective serotonin reuptake inhibitor, or SSRI, which supposedly elevates mood by altering levels of the neurotransmitter serotonin. In his 1993 bestseller Listening to Prozac, psychiatrist Peter Kramer claimed that Prozac could do more than simply relieve depression; it could make us “better than well.”
Kramer proposed that Prozac might bring about an era of “cosmetic psychopharmacology,” in which drugs help the healthy and ill alike. Kramer feared that if we don’t suffer any more, if we’re always happy, we might not be fully human. Many readers, I’m guessing, couldn’t care less about Kramer’s windy philosophical ruminations. They thought, give me Prozac, I want to be better than well!
Prozac became a blockbuster for Lilly, one of the best-selling drugs of all time. But Kramer’s “better than well” scenario rests on a bogus premise. As I pointed out in Scientific American in 1996, Prozac is not more effective than older antidepressants, which overall are scarcely more effective than placebos. There is growing evidence that antidepressants and other psychiatric drugs, over the long run and in the aggregate, make us sicker. “Cosmetic psychopharmacology” now seems like a bad joke.
Designer Babies
Genetic engineering has spawned countless wishful worries. Beginning in the late 1980s, geneticists linked specific genes to a host of specific disorders and traits, from schizophrenia and aggression to high intelligence and homosexuality. Many pundits simply assumed that genetic engineering would soon enable us to shed bad traits and add good ones. The Human Genome Project, launched in 1990, would surely usher in the era of “designer babies,” whether or not we wanted it.
Each alleged advance in biotechnology revives such concerns. Last June The New York Times reported that the debate over genetic enhancement “has taken on new urgency in recent years” as a result of CRISPR, a novel gene-editing method. But like every other gene-manipulation method, CRISPR works better in principle than in practice. A recent study found that CRISPR caused “serious side effects in the cells of human embryos,” according to The New York Times.
As of last year, according to Scientific American, the FDA had approved nine gene therapies, which for the most part target rare physiological disorders, such as adenosine deaminase deficiency and lipoprotein lipase deficiency. Gene therapy for mental illnesses such as bipolar disorder and schizophrenia remains entirely hypothetical, as does genetic enhancement of intelligence and other cognitive traits. Please keep this in mind the next time you hear an “expert” warn that, given advances in CRISPR, “superintelligent humans are coming.”
Brain Chips
Then there are brain chips, implanted electronic devices that can receive signals from and transmit them to neural tissue. Brain chips could, in principle, give us enormous power over our brains, hence minds, hence behavior. In 2003 the President’s Council on Bioethics, a group convened by President George W. Bush, brooded over brain chips’ possible effects on education. “If computer chips in my brain were to ‘download’ a textbook of physics,” the authors wrote in their report Beyond Therapy, “would that make me a knower of physics?” Talk about a wishful worry!
Unfortunately, transferring this sort of complex information from a computer into a brain via implanted chips would require decoding the brain’s software, or neural code. The neural code is arguably science’s hardest problem; it is one of those mysteries that appears more intractable as more effort is expended on it. So don’t count on instantly mastering quantum mechanics—or helicopter flight or martial arts, like Neo in The Matrix—by means of a brain implant any time soon.
There are lots of other wishful worries. If we become immortal, overpopulation will spiral out of control! If we digitize our psyches and upload them into cyberspace, we’ll lose our sense of individuality, like the Borg in Star Trek! I’m not saying we always need to resolve could questions about a technology before we jump to should questions, because by then it might be too late to curb the technology. But let’s base should conversations on realistic assessments of current research. Science, which is already struggling with a replication crisis and other problems, cannot afford any further damage to its credibility.
John Horgan directs the Stevens Center for Science Writings. This column is adapted from one originally published on ScientificAmerican.com.