THE CULT(URE) OF SCIENCE AND ITS POSSIBLE DEMISE
I am not anti-science. Frankly, I don’t know how anyone could be. As a method of understanding, science is a remarkable human intellectual feat. As a producer of major world-changing products, discoveries and inventions, it has no equal. Science has made our lives better in countless ways. It has, however, done so at a cost. There are the many specific ways science has been destructive and made our lives worse. Depending on who you are, your list might include such things as inventing the atomic bomb, using animals for research, extracting minerals from the earth, prolonging life and inventing ways to end it, inventing vaccines, polluting the sea with plastic. What I’ll be concentrating on in this chapter is not any specific scientific experiment or discovery or outcome, of either the making-our-lives-better or making-our-lives-worse variety. Instead, I will focus on the authoritarianism of science, by which I mean its self-righteous and arrogant insistence that it has a privileged relationship not only to reality but to human progress as well—and how that is a destructive force on our ability to continuously see in new ways, create and develop.
To expand. My beef with science isn’t that it’s a scam. The “hard” or natural sciences like chemistry, anatomy, biology and physics don’t, as a matter of course, purposely distort what they’re studying to make it studiable, the way psychology does. (This might happen in the natural sciences anyway, but not intentionally.) So while psychology often treats people as if we’re atoms or grains of sand, science doesn’t treat atoms or grains of sand as if they’re people. (Although some science teachers and popular science writers do—as if the only way we non-scientists can understand anything non-human is if we pretend it’s human.) Science, most times, tries to do justice to its objects of study.
Of course, it can’t—not in this world. And that includes the laboratory, for labs are part of our world. Science may not intentionally distort what it studies, but it does distort what it means to study something. It does so primarily in its claim to be objective—indeed, to be the only objective mode of study and understanding there is.
When it’s said that science is objective, what’s meant is that it is “true” to the facts, unbiased and free of influence from personal beliefs and values and any particular cultural or political perspectives or institutions. There’s a similarity here to when you or I say, in everyday conversation, “Of course, you’d say that. You’re my mother—you can’t be objective!” But in science it’s a grander claim with far more at stake. It’s the source of science’s authority and of the admiration it enjoys from the general public and popular culture. We (the public) believe science, as a human endeavor, is objective—even if sometimes we don’t agree with some of its findings. It could be you read an exposé of some basic research that’s funded by drug companies and you’re skeptical about the objectivity of the results. The same goes for corporations, special interest groups and political parties. Chances are, though, you’ll think that it’s the “outside influence” that destroys the scientific objectivity—that science in itself is an unbiased way of studying something.
The Subjectivity of Objectivity
There is no in itself. Not science, not scientists, not the things they study. They’re all intertwined. They exist in, are influenced by, and exert influence on the society, culture, politics and economics of their time. They function, succeeding at some tasks and failing at others, institutionally—by which I mean as an established custom, practice and organization of people and how they behave that has a reputation to uphold. They cannot be separated from each other nor from the subjectivity that swirls in, around and through them.
Here’s why. Science relies on and perpetuates the distinction (dualism) between subjective and objective—subjective being beliefs, attitudes, and other mental processes that are “inside,” and objective being objects and events “out there.” This dualism (introduced in Chapter 4, “The What and How of Knowing”) was the invention of philosophy. But it’s been of enormous value to science over the centuries. It provided science with a method of study and discovery that yielded many, many multitudes of significant results, and established its value as the ultimate source of knowledge.
And yet, as the saying goes, “Houston, we have a problem.” If we accept (for the moment) the objective-subjective distinction and apply it to itself, don’t we expose that science’s claim to objectivity is subjective? Isn’t it a statement, proposition, opinion, belief, etc., about science as a way of knowing? If so, how can it be objective? Within the dualistic framework of objective-subjective, this kind of aboutness belongs to the mind (the “inside”) and not to the material world (what’s “out there”). In this regard, it’s important to note that scientists don’t only claim that what they study, the things “out there” in the material world, like rocks, stars, human organs and systems, are objective. They also claim that how they study these things and come to understand them is objective. Yet, by their own classification, studying and understanding something belongs in the subjective category.
And that’s not all. Science doesn’t present itself as merely one way of knowing, but rather as the highest form of knowing, superior to all others. Why? Because it’s objective! This claim, too, is subjective—it’s a statement about science—and a value judgment, to boot.
Facts and Values
Which brings us to the dualistic divide between fact and value, another conceptual split (with family resemblances to the objective-subjective dichotomy) that science and its reputation have benefited from. Science, we are taught, is fact (what can be demonstrated to exist, objective). Everything else, we are taught, is value (what “ought to be,” what we deem ethical, a judgment, subjective). Scientists haven’t been particularly interested in exploring this divide. Presumably they’ve been happy that the fact-value distinction sets them apart as the supreme makers and keepers of factual knowledge. (But as you’ll see a bit later in this chapter, circumstances of their own making are forcing scientists to confront the issue.) It’s been philosophers who’ve been debating and often disputing both the “fact” and the “value” of this division. This makes sense; after all, the big philosophical questions—questions about what exists, how we can ever know what exists, and the relationship between the two—have everything to do with what’s considered a fact and what’s considered a value.
Some of the most well-known philosophers of language and of science speak volumes on this issue. They include my favorite philosopher Ludwig Wittgenstein, who I introduced in Chapter 3, “Ludwig Wittgenstein: The Tortured Smarty Pants,” and Hilary Putnam, the influential American philosopher who passed away this year at the age of 90. [See Note 1]
One of Wittgenstein’s brilliant observations was the way we non-scientists are preoccupied with the method of science. This isn’t something we’re aware of. It’s hidden from our view because the philosophical conceptions that are the foundations of science (such as reality, cause-effect, objective-subjective, fact-value, etc.) are embedded in our language. We believe that such things have an existence independent of our speaking of them in the way that we do. We think everything must be either one thing or another (for example, fact or value, objective or subjective). This way of thinking—and being blind to it—gets us into muddles, which are almost impossible to escape. As Wittgenstein put it: “A picture held us captive. And we could not get outside it, for it lay in our language and language seemed to repeat it to us inexorably.” [See Note 2]
Escape from captivity comes with new ways of thinking, with seeing the world, our actions and beliefs and language in new—non-dualistic, non-generalizing, non-reductionistic—ways. As cultural phenomena, as forms of life. Here is how Wittgenstein describes the process and its impact.
The difficulty has to be pulled out by the roots; and that involves our beginning to think about these things in a new way. The change is as decisive as, for example, that from the alchemical to the chemical way of thinking. The new way of thinking is what is so hard to establish. Once the new way of thinking has been established, the old problems vanish; indeed they become hard to recapture. For they go with our way of expressing ourselves and, if we clothe ourselves in a new form of expression, the old problems are discarded along with the old garment. [See Note 3]
During his long career pondering the nature of reality, science, language, mathematics, ethics, and more, Hilary Putnam had a lot to say about fact and value. Like those of most philosophers, Putnam’s writings are steeped in philosophical history (presenting, dissecting and most often eviscerating the arguments of those long gone as well as some contemporaries). This makes it difficult to summarize his positions or even find an illustrative quote easily understandable to non-scientists and non-philosophers. The following quote is from a short essay of Putnam’s, entitled “Beyond the Fact-Value Dichotomy.” What he’s saying about physics might not compute for everyone, but I hope his way of denying the separation of fact and value does come across.
I don’t doubt that the universe of physics is, in some respects, a “machine,” and that it is not “caring” (although describing it as “uncaring” is more than a little misleading). But—as Kant saw—what the universe of physics leaves out is the very thing that makes the universe possible for us, or that makes it possible for us to construct that universe from our “sensory stimulations”—the intentional, valuational, referential work of “synthesis.” I claim, in short, that without values we would not have a world. Instrumentalism, although it denies it, is itself a value system, albeit a sick one. [See Note 4]
A more general (and gentler) statement of the flawed fact-value distinction comes from Catherine Elgin, a philosopher based at Harvard University. She is someone I’m just now becoming familiar with. In her article, “The Fusion of Fact and Value,” she makes the point that it isn’t merely possible that values influence our knowledge-seeking, but that all of what we call factual knowledge is infused with values.
The stereotype of factual knowledge as consisting of value-free theories about the way the world is, which may subsequently be overlaid with subjective evaluations, is implausible. Since values infuse our lives, it is not surprising that they also infuse our understanding of our lives, our world and our place in the world. [See Note 5]
In other words, as people live their lives—some of them doing science—values are ever-present.
My last quote is from philosopher Fred Newman, my mentor and intellectual partner who was, in addition, a political activist, playwright and therapist (and who I introduced to you in Chapter 1, “What’s Knowing? What’s Growing?”). Here is an excerpt from his “A Performatory Manifesto” written for the Interamerican Society of Psychology Congress in 1999.
In the spirit of overstated generalizations it is not unfair to say that theatre tells us how humans should be while psychology purports to tell us how we are. Yet in life we are neither who we are nor who we should be; we are, as Vygotsky insists, who we are becoming. But not only has becoming been under-studied in western culture; it has been specifically negated in the name of objectivity. Hence, in theatre the process of creating the illusion is self-consciously hidden from view in the name of creating the illusion. In psychology, the moral dimension of life activity is ruled out of order in the name of good science. Fact and value are arbitrarily distinguished even though in the “becoming” of life “ought” and “is” are dialectically intertwined.
I read Fred here as pulling the difficulty out by its roots, as Wittgenstein advised above. The distinction between fact and value may be “good science” but it is, in human terms, arbitrary. In our lives-as-lived, human beings are neither who we are nor who we should be. We are who we are becoming. As this becomes a new way of thinking, the dualistic dilemma vanishes.
The Facticity of Facts
I’ve liked that word ‘facticity’ from the first time I heard it. (It’s fun to say.) Facticity has specialized meanings in specific philosophical schools of thought, but I use it here with its simple dictionary definition—“the quality or condition of being a fact” (American Heritage Dictionary of the English Language). It turns out that the quality of facts and the conditions necessary for being deemed a fact are currently being debated, not just by philosophers but by scientists themselves. As I said earlier, science has hardly been in the forefront of looking deeply into what facts and values are and whether they are as distinct as many would have us believe. Having built its reputation as the legitimate owner of the ‘fact’ part of the dichotomy, it had no need to. But there appear to be cracks in the foundation. And science is being forced to look in the mirror and see them.
The Changing Quality of Facts
We are living in a time when, through the Internet, there is no limit to the amount of information and number of facts about everything, including science, that can be generated and disseminated. This explosion in fact-generation has prompted concern, analysis and both utopian and dystopian predictions—and many books. One of these books hooked me with its title, Too Big to Know (for obvious reasons). The subtitle, though, clinched it for me: Rethinking Knowledge Now That the Facts Aren’t the Facts, Experts Are Everywhere, and the Smartest Person in the Room Is the Room not only spoke directly to me and the topic of The Overweight Brain, but it was even longer than my subtitle! I immediately hit my Amazon one-click and bought the book.
The author is David Weinberger, a Senior Researcher at Harvard University’s Berkman Center for Internet & Society, and consultant and professional speaker and commentator on the impact of the Internet on our lives. Weinberger believes that the nature of knowledge is being transformed in such a way that it’s no longer the property of individuals, institutions, laboratories, libraries or journals—but of networks. Thanks to the Internet and “the networking of knowledge,” we now live in a world in which, while there is too much to know, there’s no need for any one person to know it. I wish Too Big to Know were more philosophical (Weinberger has a Ph.D. in philosophy) and less glitzy and Pollyanna-ish, but the book did give me a sense of some of the ways people who don’t just use the Internet but think about it deeply view the world.
The most interesting part of Too Big to Know for me was where Weinberger talks about the history of facts. It’s something I never had thought about and probably you hadn’t either. So I’ll share some of what Weinberger takes to be the history of the changing conception of what facts are within the emergence and development of Western science.
Early on in the section, “A History of Facts,” Weinberger writes:
We believe the firm foundation of knowledge consists not of analogies but of facts. The ancients and we moderns disagree about how to lay a firm foundation, but we firmly believe in foundations themselves.
Facts are facts. It’s a fact that polio vaccine is effective and it’s a fact that Cancer the constellation has nothing to do with tumors. But the idea that the house of knowledge is built on a foundation of facts is not itself a fact. It’s an idea with a history that is now taking a sharp turn. (p.23)
That history, Weinberger tells us, includes the turn in the 16th century from knowledge being human theories about universals and essences, to facts about particulars gleaned from experimental observation becoming an important source of knowledge. According to Weinberger, “Having the particular ground the universal was a remarkable inversion of the traditional approach to knowledge. No longer derived by logical deduction from grand principles, theories would hereafter be constructed out of facts the way houses are built out of bricks” (p. 26).
Another turn occurred in the 18th century. Weinberger’s examples are drawn primarily from the social reform movement in England, when facts were offered, in contrast to the interests of those in power, to support changes in social policy. That facts could be gathered (for example, What did working as chimney sweeps do to little boys’ physical health?) meant that upper class arguments (for example, If the boys weren’t employed they’d be stealing, and the poor are poor because they deserve to be) could be countered. Gathering facts on crime, poverty, health, education and other social issues and, with the advent of statistics, summarizing them in reports provided evidence in arguments for social reforms. Supposedly, what was the case (fact) and what the rich wanted (value) were now separate and distinct, and social policy could be made on the basis of fact. Disagreements over what to do could be settled once ‘what was the case’ could be established.
Of course, everyone didn’t live happily ever after. Once knowledge became facts and facts described ‘what was the case’ (the truth of the matter), then self-interest could be hidden behind the façade of objectivity in the generating and citing of facts in which what is the case is what someone wants it to be. (Recall my showing the hidden subjectivity behind science’s claim to objectivity a few pages ago.)
The sharp turn in today’s “Age of the Net” that Weinberger is writing about is a collapse of the factual foundation of knowledge. Facts are no longer what they used to be because they are produced and disseminated differently. No longer by a relatively few people with specific expertise and institutional positions. No longer shared exclusively through scholarly journals and monographs, government publications and certain media. No longer protected, cloistered, elite. The Internet changed all that. Anyone can now produce and publicize information about anything and call it a fact. Any bit of information can be linked to any other piece of information. We have an uncountable number of networked facts, each linked to what seems like (and might well be becoming) infinite amounts of information giving the history of that fact, its back story, what songs and films it’s been referenced in, who believes it, who doesn’t, counter arguments, and counter-facts, to mention just a little of what a click or two can yield.
Within this environment, it’s become harder and harder to hold to traditional distinctions between information, facts, opinions and values. There is an incredible amount of knowing out there. Much of it is about things we don’t care about at all—or didn’t until it became known (like a personal favorite of mine: The US is the country with the most redheads.)
The changes spawned by technology put knowing and science in a new light. What does the reshaping of facts and their facticity have to do with how people do knowing? What will it take to maintain a scientific knowledge-producing activity and institutional structure? Is the cult(ure) of science eroding?
There are some signs. The institutions of research and scholarship (of science and everything else as well) look very different from a decade ago. Scientists and other researchers can now connect with each other instantly; they can collaborate, argue, share data, get feedback on their writings on their own time schedule instead of being limited by the old means of communication and by the rules of scholarly presses and academic departments. At least a dozen ‘open access’ sites make scholarly research available to anyone who signs up (academia.edu has 46 million members; ResearchGate has 11 million—me included in both). Several US and Canadian universities have collaborated to form the Public Knowledge Project to make publicly funded research freely available. These innovations make dissemination of and feedback on one’s work far quicker than print publications (in which the time from when an article is submitted to when it’s published can be as much as two years). They also make the research community much larger and more dynamic and collaborative. All of this threatens the academic publishing industry, the esteemed status of its journal editors, and its profits. (It also has created a new, rapidly expanding for-profit business for scholarly publications, where authors pay to get their research published.) It remains to be seen if the scientific establishment will be able to preserve its elite status as creators and purveyors of knowledge worthy of the name as science and technology become more open.
How Many Scientists Does It Take to Make a Fact?
Certainly, science has had to defend itself these past few years. There have been several charges of fraud and corruption that made the headlines in the news media and got millions of shares on Facebook and tweets on Twitter. Among the most notorious were psychology’s involvement in US government torture, drug companies’ involvement in mental illness diagnosis, and the sugar industry paying scientists to ‘discover’ that sugar doesn’t lead to heart disease. [See Note 6] Perhaps more significant, in the long run, are the ways that the scientific method itself has come under public scrutiny.
One has to do with what’s called replicability or reproducibility, a tenet of what it means to do science. An experiment yields results. These results generate evidence for a particular finding. But in order for the finding to be considered what’s referred to as a “true finding” (a fact), the experiment has to be replicated (repeated), preferably by both the original scientists and others, and the same results need to be found. It turns out this doesn’t happen as often as we’d like. Failure to replicate is common in all sciences, but shockingly high in the fields of medicine and psychology, where surveys and meta-analyses of published research show a failure-to-replicate rate as high as 70%. Most scientists live with this uncomfortable “fact.” Most of the rest of us remain unaware.
But in the summer of 2015, this issue—dubbed the “replication crisis”—became headline news in major media (with millions of shares on Facebook and tweets on Twitter). The media picked up a report published in the prestigious journal Science. The article, “Estimating the Reproducibility of Psychological Science,” was written by the Open Science Collaboration, who reported on the three-year Reproducibility Project they conducted to re-do 100 psychology studies and see how many of the original results they could replicate. The result? Only about 40% of them. Criticism of the methods of the Open Science Collaboration was swift, followed by defense of it, etc. Within the back and forth of who was right, however, some serious discussion emerged about the reasons for this crisis and what could be done about it. I see the attention to this issue by scientists as them confronting the facticity of the facts they generate. Thus far, what I see is a push to make the current scientific method more rigorous, transparent and subject to scrutiny by one’s peers—with the goal of further defining, systematizing and controlling what can be counted as a fact. The value of facts, however, is a topic they’re not touching. [See Note 7]
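The arithmetic behind a low replication rate is worth seeing. Here is a back-of-the-envelope sketch of my own (the parameter values are invented for illustration, not taken from the Science article): when most tested hypotheses turn out to be false and studies are underpowered, even honestly produced “significant” findings mostly fail to replicate.

```python
# A toy model of expected replication rates, in the spirit of analyses of the
# "replication crisis." All numbers below are illustrative assumptions, not
# figures from the Reproducibility Project.

def replication_rate(prior_true=0.1, power=0.5, alpha=0.05):
    """Expected fraction of published significant findings that replicate.

    prior_true: fraction of tested hypotheses that are actually true
    power:      probability a study detects a real effect (significance)
    alpha:      false-positive rate when there is no real effect
    """
    true_hits = prior_true * power            # real effects found significant
    false_hits = (1 - prior_true) * alpha     # null effects found significant
    ppv = true_hits / (true_hits + false_hits)  # share of findings that are real
    # A true effect replicates with probability `power`; a false positive
    # replicates only by chance, with probability `alpha`.
    return ppv * power + (1 - ppv) * alpha

# With these assumptions, fewer than a third of findings replicate:
print(round(replication_rate(prior_true=0.1, power=0.5, alpha=0.05), 2))  # 0.29
```

No fraud or sloppiness is needed to get a number in the ballpark the Reproducibility Project reported; modest statistical power and a field full of false hypotheses are enough on their own.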
Where the assertion of fact (regardless of replicability) really confuses us lay people the most is when it has to do with us—especially what makes us tick and what makes us healthy or sick. In Chapter 6, “Psychology Made Its Bed and We’re the Ones Who Have to Lie in It,” I called out psychology as a scam, so enough said about how we tick. But we can have a field day with what makes us healthy or sick. Especially when it comes to what we put in our bodies and how often we move them. Not a day goes by without a new finding from scientific research—the amount of physical exercise people need, the newest exercise regimen, another food that cures (or causes) cancer, what foods lower your blood pressure, and on and on. The number of different kinds of studies being done and information being publicized is staggering.
“Failure to replicate” wreaks havoc. Not only can’t we lay people discern the signal from the noise; none other than Dr. Kramer, the Director of the National Cancer Institute’s Division of Cancer Prevention, is quoted in The New York Times as saying, “We don’t know how to measure diet or exercise.” Scientists are confronted with a large body of research studies whose conclusions can’t be repeated. And us? As Dr. Kramer puts it, “One week drinking coffee is good for you, and the next week it is lethal.”
In the same article (“We’re So Confused: The Problems With Food and Exercise Studies,” August 11, 2016) we hear from Dr. John Ioannidis, a professor of medicine and of health research and policy at Stanford University’s medical school. “The situation is so bad that what gets published tends to be what the scientists believe ahead of time. There are so many nutrients and so many diets. So many outcomes—heart disease, cancer, stroke. What kind of data do you collect? A follow-up at two months, six months, two years, 10 years? You end up having millions of choices. I can get you any result you want in any observational data set.” [See Note 8]
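Dr. Ioannidis’s point—that with so many nutrients, outcomes and follow-up windows you can get any result you want—can be made concrete with a few lines of simulation. The sketch below is my own illustration (the scenario and numbers are invented): it runs many analyses on pure noise and counts how many come out “statistically significant” anyway.

```python
# Illustrating multiple comparisons: under the null hypothesis (no real
# effect), p-values are uniformly distributed on [0, 1], so each analysis
# has an `alpha` chance of producing a false positive.

import random

def false_positives(n_analyses, alpha=0.05):
    """Count 'significant' results among analyses of pure noise."""
    p_values = [random.random() for _ in range(n_analyses)]
    return sum(p < alpha for p in p_values)

random.seed(0)  # fixed seed so the run is repeatable

# Say a data set allows ten nutrients x five outcomes x four follow-up
# windows = 200 possible analyses, none of which involve a real effect.
hits = false_positives(10 * 5 * 4)
print(f"{hits} 'discoveries' from data containing no real effects")
```

At the conventional threshold of p < 0.05, about one analysis in twenty of random noise will look like a finding, so two hundred analyses all but guarantee a handful of publishable “discoveries”—which is exactly why “I can get you any result you want in any observational data set.”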
When the Facts Belie the Evidence
Science reporting used to annoy me to no end. Especially if I knew something about the topic being reported on, and then the superficiality and assumptions of the study would outrage me. Nowadays, I read popular science reporting in a different way and I feel much better. I read these news pieces to understand better how science works and what, if any, issues of method scientists might or should be grappling with. Just recently I came across an essay in The New York Times that was quite worthwhile in this regard. To me, it showed a chink in the armor of science’s arrogance. It seems a good way to bring this chapter—on the authoritarianism of science; the shaky grounds of its reputation as the supreme way of knowing; what is fact and what is value; how technology, which could not have been invented without science, is undermining its authority—to a close.
Here’s the story. It’s called “Flossing and the Art of Scientific Investigation” by James Holmes. [See Note 9] Holmes is commenting on a widely read Associated Press (AP) report that appeared in August 2016, which suggested that, contrary to the advice of dentists everywhere, flossing didn’t necessarily foster good oral health. The AP based its conclusion on documents from the Departments of Health and Human Services and Agriculture, obtained under the Freedom of Information Act. From that data, the AP did a meta-analysis (an analysis of the studies/analyses) of 25 studies that compared using a toothbrush and using a toothbrush and floss. The results showed very weak evidence that flossing was effective.
Holmes tells us that, “in response, the Department of Health and Human Services, the American Dental Association and the Academy of General Dentistry reaffirmed the importance of interdental cleaning. But confusion persists: A lot of people now mistakenly think that ‘science’ doesn’t support flossing.”
What Holmes goes on to show is that ‘science’ doesn’t support expertise. For, expertise is the type of evidence that the report was calling weak—the expertise dentists develop from their clinical experience and professional life as dentists. To the scientific authorities, expertise is “fatally subjective” and no match for “definitive randomized controlled trials, the so-called gold standard for scientific research,” which provide real knowledge.
Holmes goes on to warn against “the cult of randomized controlled trials,” the effect of which is to malign, neglect, ignore or not even see other routes, ways, ideas that might be rich avenues of hypothesis, research and discovery. He urges more respect for expertise and clinical experience, in the spirit that both kinds of evidence are needed.
I agree. Science needs to grow. Knowing took science and us this far. But just like everyone else, science would do well to grow beyond knowing if it’s to contribute to making the world a better place. [See Note 10]
Notes to Help You Go Deeper and Broader
Note 1. Among the most influential are Richard Rorty and W.V.O. Quine, as well as feminist philosophers who tackle science and epistemology, such as Donna Haraway, Sandra Harding and Merrill Hintikka.
Note 2. This quote from Wittgenstein is from Philosophical Investigations, Remark 115.
Note 3. This quote from Wittgenstein is from Culture and Value. It appears on page 48e of the 1980 University of Chicago’s paperback edition.
Note 4. This essay by Putnam originally appeared in the journal Critica, 14 (1982), pp. 3-12. It can be accessed at http://inters.org/Putnam-Fact-Value
Note 5. The quote from Elgin is from Iride, 20, 2007, 83-101 (in Italian). It can be accessed here http://elgin.harvard.edu/misc/ffv.pdf
Note 6. Here are the highlights of the sugar story. “Sugar Backers Paid to Shift Blame to Fat.” This was the headline of a front-page story in the September 13, 2016 The New York Times, which began, “The sugar industry paid scientists in the 1960s to play down the link between sugar and heart disease and promote saturated fat as the culprit instead, newly released historical documents show.” The Times was reporting on an article that appeared in the prestigious medical journal JAMA Internal Medicine, which traces and reveals the sordid history of the so-called science that shaped how doctors and the public understand coronary heart disease and, consequently, the way Americans eat. In the 1950s the prevalent view was that sugar was the dietary cause of the disease. But in 1965, a literature review published in the New England Journal of Medicine singled out fat and cholesterol as the dietary causes and downplayed evidence that sugar was also a risk factor. It turns out that the Sugar Research Foundation played the major role in the literature review—from deciding the objective of the published review to paying researchers to do the studies to support their conclusion. But this vital information—that a food industry was behind it—wasn’t disclosed until 50 years later. The sugar industry hoax is just one of dozens of recent media exposés of the myth of scientific objectivity. Not only did the discovered facts about sugar and fats have great value (mostly for the sugar industry). It turns out that scientists were paid to discover the facts! The online version of this article (which was titled “How the Sugar Industry Shifted Blame to Fat”) can be accessed at http://www.nytimes.com/2016/09/13/well/eat/how-the-sugar-industry-shifted-blame-to-fat.html
Note 7. Here is the link to the original Science article, plus to some essays of commentary and analysis: http://science.sciencemag.org/content/349/6251/aac4716.full
Note 8. This article appeared in The New York Times on August 11, 2016. It can be accessed here: http://www.nytimes.com/2016/08/11/upshot/were-so-confused-the-problems-with-food-and-exercise-studies.html
Note 9. This article appeared in The New York Times on November 25, 2016. It can be accessed here:
Note 10. For a deeper dive into the cult of science and its history, see Newman’s and my books The End of Knowing and Unscientific Psychology.