Corporations vs. the demarcation problem
Elon Musk's Twitter and the "crisis" of scientific misinformation
Let’s say you are a psychiatrist. A good one, renowned in your field, although you aren’t a public name. At some point, you begin to suspect that there is something rotten within your discipline—a rot that goes beyond misapplications or misdiagnoses. No, something is wrong with the foundations of psychiatry itself. Particularly around depression. You think it’s over-diagnosed, and not just by a little, but by a lot, and that the current treatments, at the level of their entire modalities, are wrong. Indeed, certain meta-analyses show psychiatric drugs have no long-term effect on depression at all. So you begin to criticize the field.
First, you keep things academic. You present at conferences, claiming that depression is not only over-diagnosed but that the data actually show physical exercise, participation in meaningful rituals, and other lifestyle changes work as well as, if not better than, drugs. You also point out the lack of replicability of many studies on depression, and claim that the constant framing of depression as a symptom of a “broken brain” actually makes people worse. You point out that hormonal imbalances, which are quite common, are getting misdiagnosed as depression. You advocate that people should, except in rare cases, work to wean themselves off medication and establish a routine of physical exercise, social interaction, and, most importantly, attendance at rituals like church, or, if unwilling to get religious, the creation of agnostic/atheistic rituals of their own (you even advocate for the use of psilocybin in these rituals). You compile research and arguments like an armamentarium, assured of your thesis.
No one seems to care. The whole field goes about its business.
So you go public. You write a book. You go on podcasts. You talk about it on social media. Everything goes well. Too well, really. Soon, your Twitter followers are approaching 200,000. Your book becomes an overnight bestseller. You get on the biggest podcasts to make your argument. It’s around this time that criticism begins to build. It starts as a few critiques by academics, followed by some media articles, before becoming an overwhelming wave as official organizations weigh in.
First, the academics criticize you for ignoring the good studies and cherry-picking the bad ones. You respond, but the matter is complicated, resting on meta-analyses and statistics—not only that, it rests on your observation of the field itself over many decades, something that ends up being pretty ineffable when you try to express it. Eventually, a news organization characterizes you as the guy who “wants people to run around in the woods eating psilocybin rather than taking their meds.” This is the tipping point. The issue quickly boils down to: you are convincing people not to take their meds, which could harm millions of people. A blogger calculates precisely how many people a year you’d be killing if everyone with depression listened to your message—it turns out you’d be responsible for hundreds of thousands of additional suicides a year, according to their calculations (and assuming, of course, that you’re wrong). Spokespeople for the CDC, the AMA, and the WHO, organizations which all have strict guidelines about psychiatric treatment and the need for it, come out with statements that explicitly say: you are going against scientific consensus. This is misinformation. Keep taking your drugs.
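(As an aside, to make the shape of that blogger’s back-of-envelope estimate concrete, here is a minimal sketch in code. Every number below is a hypothetical placeholder of my own invention, not the blogger’s figures and not real epidemiology; the logic is just: medicated population, times a baseline suicide rate, times an assumed increase in risk for going unmedicated.)

```python
# A sketch of the blogger's style of back-of-envelope estimate.
# All numbers are hypothetical placeholders, not real epidemiological data.

people_on_antidepressants = 20_000_000  # hypothetical: people medicated for depression
baseline_suicide_rate = 0.0005          # hypothetical: annual rate while medicated
relative_risk_unmedicated = 1.5         # hypothetical: assumes the critic is wrong

baseline_deaths = people_on_antidepressants * baseline_suicide_rate
excess_deaths = baseline_deaths * (relative_risk_unmedicated - 1.0)

print(f"Hypothetical excess suicides per year: {excess_deaths:,.0f}")
# -> Hypothetical excess suicides per year: 5,000
```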
There is a legitimate chance you might even get banned from social media, or at minimum shadow banned. After all, according to the US surgeon general, companies should
. . . make meaningful long-term investments to address misinformation, including product changes. Redesign recommendation algorithms to avoid amplifying misinformation, build in “frictions”—such as suggestions and warnings—to reduce the sharing of misinformation, and make it easier for users to report misinformation.
and in the UK, the Royal Society is now officially recommending expanding the notion of scientific misinformation beyond just personal harm to the more general (and amorphous) notion of “societal harm”:
As part of its online harms strategy, the UK Government must combat misinformation which risks societal harm as well as personalised harm, especially when it comes to a healthy environment for scientific communication.
For if your views are indeed causing excess death, causing harm, at either the personal or the societal level, isn’t that enough to justify whatever happens next?
This hypothetical person might not come across as realistic. They seem, perhaps, somewhat overly sympathetic—after all, they’re you. Maybe if we examined the hypothetical psychiatrist’s evidence it would neatly fall apart, and the AMA, the CDC, the WHO, the Royal Society, and a social media company like Twitter instituting a ban would all be correct.
So let’s firm up our example with some specifics. Let’s say you’re a real person: Alexey Guzey, the founder of the excellent New Science. He’s been diving into the science of sleep, and these are his conclusions:
I have no trust in sleep scientists. . . The reason is that I have approximately 0 trust in the integrity of the field of sleep science. . .
2 years ago I wrote a detailed criticism of the book Why We Sleep written by a Professor of Neuroscience and Psychology at UC Berkeley, the world’s leading sleep researcher and the most famous expert on sleep, and the founder and director of the Center for Human Sleep Science at UC Berkeley, Matthew Walker.
Here are just a few of the biggest issues (there were many more) with the book.
Walker wrote: “Routinely sleeping less than six or seven hours a night demolishes your immune system, more than doubling your risk of cancer”, despite there being no evidence that cancer in general and sleep are related. There are obviously no RCTs on this, and, in fact, there’s not even a correlation between general cancer risk and sleep duration.
Walker falsified a graph from an academic study in the book.
Walker outright fakes data to support his “sleep epidemic” argument. The data on sleep duration Walker presents on the graph below simply does not exist:
By the time my review was published, the book had sold hundreds of thousands if not millions of copies and was praised by the New York Times, The Guardian, and many other highly-respected papers. . .
Did any sleep scientists voice the concerns they had with the book or with Walker? No. They were too busy listening to his keynote at the Cognitive Neuroscience Society 2019 meeting.
Did any sleep scientists voice their concerns after I published my essay detailing its errors and fabrications? No. . .
Did Walker lose his status in the community, his NIH grants, or any of his appointments? No, no, and no.
I don’t believe that a community of scientists that refuses to police fraud and of which Walker is a foremost representative (recall that he is the director of the Center for Human Sleep Science at UC Berkeley) could be a community of scientists that would produce a trustworthy and dependable body of scientific work.
A pretty savage criticism. Now, personally, I think that sleep, and especially dreaming, likely has specific biological functions, as I outline in the Overfitted Brain Hypothesis, so I do think that getting the right amount of sleep is probably important to some degree. And I don’t think Matt Walker should lose his job based on Alexey’s criticisms. I think big popular nonfiction books inevitably will not have the same rigor as scientific papers. Language will be looser, and (being as charitable as possible to Matt Walker here) a lot of Alexey’s criticisms are actually about that looseness. For the literature is vast and contains multitudes: if we have two papers that contradict one another, and Matt Walker decides to emphasize one as being “correct,” and Alexey Guzey decides to emphasize the other, which is right?
Are Alexey’s views “scientific misinformation”? For Alexey is in an almost identical position to our hypothetical psychiatrist—that of rejecting scientific consensus. Indeed, Alexey is even actively experimenting on himself and calling for people to get less sleep in a way that goes directly against CDC guidelines. As far as I’ve seen, no one in the media has pilloried Alexey, other than some strongly-worded disagreements on the LessWrong forums (which I think understate his case), and it seems absurd that anyone at the CDC or WHO would feel the need to make an official statement. But under the now-prevalent cultural standards they could, couldn’t they? What, precisely, is the difference?
And why do I, personally, care about this issue? Well, for one, I’m writing a nonfiction book for Simon & Schuster. And in it I criticize neuroscience a great deal as being pre-paradigmatic, as not even wrong, and a lot of neuroscientists (not all, of course, but a significant number) as being somewhere between wasting money and defrauding the public. It’s obviously a far less extreme example than the hypothetical psychiatrist. And one might reasonably point out that there is no mass epidemic of sleeplessness due to Alexey’s views, nor will there likely be a mass turning away from funding neuroscience due to mine. But even if it’s true these critiques of scientific consensus haven’t caught on, there seems to be no in-principle difference between me, Alexey Guzey, and the hypothetical psychiatrist, other than the scale of the harm, its immediacy, and its politicization—in all cases, we’re “going against scientific consensus.” And this worries me, since the notion of scientific misinformation could, in retrospect, end up being one of those classically over-broad standards, like in the 2000s when music companies tried to figure out how to prosecute as felons the millions of Americans who downloaded music peer-to-peer.
What’s the solution? Now that Elon Musk has bought Twitter, there are already warnings from regulators (including some threats by EU officials) that loosening restrictions on misinformation will have consequences. How can Elon’s new Twitter reconcile the reasonable concerns about scientific misinformation with the also reasonable concern that scientific consensus should be criticizable?
Taking a step back: it appears to me that current debates around “scientific misinformation” look a lot like the debates around “pseudoscience” of the 1990s. I think this is because both are an expression of the demarcation problem. The demarcation problem asks “What is a science, and what is a pseudoscience?”, which turns out to be an incredibly tricky question to answer. Scientists sometimes operate on hunches rather than logic; they sometimes milk more from irrational obsession than from rational argumentation; they happily speculate about unfalsifiable events and theories; they regularly modify their theories to accommodate disagreements with empirical data rather than treating the original theory as falsified; indeed, sometimes they reject empirical results entirely; and they often object vigorously to the consensus of their colleagues. Kuhn, Popper, and Lakatos all had their shots at solving the demarcation problem, and yet there is no agreed-upon solution.
Of course, at a coarse-grained level, the demarcation problem is easy to solve. As Supreme Court Justice Potter Stewart once said about pornography: “I know it when I see it.” Astrology is a pseudoscience, and vaccines causing autism is scientific misinformation (n.b., “scientific misinformation” means essentially the same thing as “a pseudoscientific critique of scientific consensus”). Such examples are cases of coarse-grained demarcation: to seriously say vaccines cause autism is to not engage with the scientific literature (famously, only one now-retracted study ever showed a link), to ignore that no top experts in the field believe it, and so on.
But at a fine-grained level, the demarcation problem is essentially impossible to solve. And shutting down criticism of scientific consensus, especially by credentialed experts who are seriously engaged with the literature, implies having solved fine-grained demarcation. The implication structure looks like this:
1. Some criticism of scientific consensus can be dismissed based on the critic being an unreliable source, lacking expertise, or lacking engagement with the existing scientific literature (i.e., coarse-grained considerations). However, in some cases criticisms cannot be dismissed solely on these considerations.
2. These latter expert-level criticisms are often not resolvable via appeal to the current scientific consensus, since they are definitionally criticisms of that consensus.
3. At this level of disagreement, distinguishing legitimate from illegitimate criticism is equivalent to solving the demarcation problem, since it implies being able to specify what is a scientific critique vs. a pseudoscientific critique.
The problem is that you can’t just refer to the self-contained scientific literature as it stands, since that literature is precisely what so many critiques object to, and it places too much of an onus on a critic to insist that everything be resolved within it. Max Planck said that “science progresses one funeral at a time” for a reason. Furthermore, the confidence to censor someone carries an implicit assumption that demarcation has successfully occurred, that is, that the issue has been convincingly resolved.
This entire situation has been thrown into high relief by the coronavirus pandemic. I won’t belabor such an obvious application, especially as there are some just-as-obvious complexities involving the seriousness and immediacy of a worldwide pandemic. Those extenuating circumstances exist, and to deny them is to deny reality. Yet the idea of scientific misinformation has gone far beyond a debate over just vaccines—see, for example, this Vanity Fair investigative breakdown of how the lab leak hypothesis, despite being a viable scientific hypothesis of the virus’s origin (albeit an unproven one), was made verboten for over a year:
Daszak early on set about covertly organizing a letter in the Lancet medical journal that sought to present the lab-leak hypothesis as a groundless and destructive conspiracy theory. And Fauci and a small group of scientists, including Andersen and Garry, worked to enshrine the natural-origin theory during confidential discussions in early February 2020, even though several of them privately expressed that they felt a lab-related incident was likelier. Just days before those discussions began, Vanity Fair has learned, Dr. Robert Redfield, a virologist and the director of the Centers for Disease Control and Prevention (CDC), had urged Fauci privately to vigorously investigate both the lab and natural hypotheses. He was then excluded from the ensuing discussions—learning only later that they’d even occurred. “Their goal was to have a single narrative,” Redfield told Vanity Fair.
Why top scientists linked arms to tamp down public speculation about a lab leak—even when their emails, revealed via FOIA requests and congressional review, suggest they held similar concerns—remains unclear.
The result was that Facebook only removed its ban on mentioning the lab leak hypothesis in May of 2021.[1] But this hasn’t stopped the recent bill AB-2098 in California, which makes the punishment for doctors who spread COVID-19 misinformation the loss of their medical license (and keep in mind, until May of last year, any talk of the lab leak was labeled misinformation).
Even if one assumes (correctly or not) that during the pandemic there was good reason to put a far-reaching system in place with which to censor criticism of scientific consensus, my fear is that this system will remain in place after the pandemic is over and end up applying to a wider domain of science than anyone anticipated. For example, consider that just a few weeks ago in Science there was a glowing review of the work of Carl Bergstrom, who is calling for the study of scientific misinformation to become a “crisis discipline”:
“Bullshit” is Bergstrom’s umbrella term for the falsehoods that propagate online—both misinformation, which is spread inadvertently, and disinformation, designed to spread falsehoods deliberately. . . In a perspective published in PNAS last year, Bergstrom and 16 other scientists from various fields argued that the study of how the information ecosystem influences human collective behavior needed to become a “crisis discipline,” much like climate science, that could also suggest ways to tackle the [misinformation] issue.
It is clear from the PNAS paper that Bergstrom et al.’s notion of scientific misinformation is not limited to issues around the COVID-19 pandemic, but includes claims about artificial intelligence, climate change, economics, psychology, and so on.
In many cases there is indeed widespread scientific misinformation in those fields! And many of these are obvious coarse-grained demarcations of the “know it when we see it” variety. But in other cases, it’s extremely difficult to know. There are now-obvious cases like the lab leak hypothesis, but also subtler ones: e.g., the infectious-disease origin of diseases like multiple sclerosis was long considered a fringe view, but it is being borne out by recent research.
On Elon’s new Twitter, it seems likely that fine-grained demarcations by the company (such as acting as the final adjudicator of scientific debates between experts in their field) will become significantly rarer. And I’ll be honest: I think that, when it comes to science, that’s the right move. For this isn’t an engineering problem that can be brute-forced or designed away, nor a political one that can be compromised away; the demarcation problem is a beast that’s never been solved, even by the best philosophers of science. It’s simply too much to ask a corporation to give a good answer to it.
Criticism is the most fragile part of the scientific process. It’s unpleasant and more often than not the critics are themselves wrong. Yet, criticism of scientific consensus can draw attention to urgent issues, to holes in our understanding and knowledge. Which is why censorship and science have never made good bedfellows. And why they should continue to sleep, whenever possible, in separate bedrooms.
[1] Just to put my cards on the table: I personally think the lab leak hypothesis is less likely than the natural origin hypothesis, but not by much (I’d give the lab leak only a ~40% chance), based on statistical analyses suggesting the Wuhan wet market was the epicenter (though these are only suggestive, not confirmatory). Both the natural origin hypothesis and the lab leak hypothesis are viable, and both should have been on the table from the beginning.