British author and retired psychiatrist Theodore Dalrymple recently flagged an article in the prestigious medical journal The Lancet, in which researchers announced that they had calculated that an average of 92 minutes per week of exercise reduced subjects' "all-cause rate of mortality" by 14 percent. They also claimed that "every additional 15 minutes of daily exercise beyond the minimum of 15 minutes per day further reduced all-cause mortality by 4 percent."
Later, The Lancet received a letter pointing out that, if the researchers' findings were correct, a man who exercised for six hours every day would reduce his mortality rate to zero, thus becoming immortal. Dalrymple comments, "In my opinion, life would not actually go on forever; it would merely seem as if it did, in the sense of being boring and pointless."1
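The letter's reductio is easy to check with a few lines of arithmetic. Here is a sketch of the naive linear extrapolation the letter-writer applied (the function name and the linear model are illustrative, not taken from the Lancet paper itself):

```python
def mortality_reduction(minutes_per_day):
    """Percent reduction in all-cause mortality under a naive linear
    reading of the figures: 14% at the 15-minute daily minimum, plus
    4% for each additional 15 minutes. Illustrative only."""
    extra_blocks = (minutes_per_day - 15) / 15
    return 14 + 4 * extra_blocks

print(mortality_reduction(15))    # 14.0, the study's baseline
print(mortality_reduction(360))   # 106.0, six hours a day passes 100%
```

Taken literally, the model reaches a 100 percent reduction at about five hours and forty minutes of daily exercise, which is the letter-writer's point: a relationship that holds over a modest range cannot be extrapolated to immortality.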
But how did this blooper get past peer review in the first place?
Peer review—colleagues' prior approval of journal papers—is, say science popularizers, the "gold standard" of science.2 Actually, today it is more like a troubled currency, of fluctuating and generally diminishing value.
Bias Toward the Positive
One well-recognized problem is a deforming bias toward publishing positive findings. As Daniel Sarewitz put it in Nature, when positive findings are published, "scientists are rewarded both intellectually and professionally, science administrators are empowered and the public desire for a better world is answered." He also notes that "the lack of incentives to report negative results, replicate experiments or recognize inconsistencies, ambiguities and uncertainties is widely appreciated—but the necessary cultural change is incredibly difficult to achieve."3
It is difficult to achieve because disproving a discipline's nostrums (through failed replication) is a high-risk activity. Take, for instance, a recent replication attempt that failed to support the classic 1948 Bateman study underpinning Darwinian sexual selection (males benefit from promiscuity, females from monogamy).4 Will Darwinists thank the authors for this news, when their theory is under fire elsewhere?
Conformism is more rewarding. Professor of medicine Fred Southwick recently complained in The Scientist that "many who succeed in advancing to leadership positions in academia have been cautious, making few enemies and stirring little controversy. But such a strategy fails to generate the insights that drive scientific fields of research forward."5
It's no surprise, then, that in PLoS Medicine, John P. A. Ioannidis was bold enough to explain, in 2005, "Why Most Published Research Findings Are False." He posited that, "for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias"6—what dissident biologist Lynn Margulis (1938–2011) described to science writer Suzan Mazur in 2008 as a "cycle of submission."7
Retractions Up, Repeatability Down
Meanwhile, plagiarism is said to have "skyrocketed" over the past decade.8 Retractions have boomed, too. As science writer Carl Zimmer tells us (2012), "the journal Nature reported that published retractions had increased tenfold over the past decade, while the number of published papers had increased by just 44 percent."9
Daniel Kennefick, a cosmologist at the University of Arkansas, reveals that "many authors are nowadays determined to achieve publication for publication's sake, in an effort to secure an academic position, and are not particularly swayed by the argument that it is in their own interests not to publish an incorrect article."10 One factor in this attitude may be that repeatability in medical studies falls over time; Jonah Lehrer told us in The New Yorker in 2010 that "it's as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable."11
C. Glenn Begley, a former researcher at the biotechnology firm Amgen, which funds cancer research, offers a sobering example: when Amgen redid the published experiments of 53 "landmark" cancer studies, it could not replicate the original results in 47 of them. As reported in a Reuters news article:
Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.
"We went through the paper line by line, figure by figure," said Begley. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."12
Not only that, but some study authors made the Amgen scientists sign a confidentiality agreement preventing them from disclosing data that contradicted the study authors' original findings. So we may never know which ones couldn't be replicated.
Peer review is not much use against problems at this level. These people are the peers.
Sexy Findings for the Media
The social sciences have rocked to some juicy scandals in recent years. Diederik Stapel, a recently suspended professor at Tilburg University in the Netherlands, was famous for his edgy findings, widely publicized in prestigious journals. For example, as James Barham explains at the blogsite TheBestSchools.org:
The first paper upended a gender stereotype (alpha-female politicos philander, too?!), while the second linked the physical world to the psychological one in a striking manner (a messy desk leads to racist thoughts!?). Both received extensive news coverage.13
Trouble was, Stapel had made up or manipulated dozens of papers over nearly a decade.14
Then there's Harvard researcher Marc D. Hauser, author of Moral Minds and other books, who made dramatic, unsupported claims about monkeys' mental abilities, and was forced to resign in 2011. Darwinian philosopher Michael Ruse told the Chronicle of Higher Education that the case "really makes me mad." Detractors, he worries, "seize on issues that supposedly discredit evolution and parade them publicly as the norm and the reason to reject modern science." Thus, "one man's mistakes rebound on every evolutionist."15
But is it really just "one man's mistakes"? In one survey, half the social scientists polled admitted reporting only desired results.16 A New York Times article admits that "self-serving statistical sloppiness" is common.17 Ed Yong noted in an article in Nature that "it has become common practice, for example, to tweak experimental designs in ways that practically guarantee positive results."18 And when statistician Theodore Sterling analyzed four major psychology journals in 1995, he found that 97 percent of the studies published in them reported positive results. Interestingly, that was the same percentage he found when he first analyzed the journals in 1959.19
Some sources attribute the unusually high rate of questionable findings in the social sciences to "physics envy." As philosopher Gary Gutting explains, "most social science research falls far short of the natural sciences' standard of controlled experiments."20 Political scientists Kevin A. Clarke and David M. Primo urge social scientists to just get over it and "embrace the fact that they [the social sciences] are mature disciplines with no need to emulate other sciences."21
But are they mature? Methodological expert Eric-Jan Wagenmakers told the Chronicle of Higher Education that psychology "has become addicted to surprising, counterintuitive findings that catch the news media's eye, and that trend is warping the field."22 Actually, Mr. Wagenmakers is mistaken on one key point: Stapel's and Hauser's misrepresentations were popular because they did confirm the worldview of the discipline, and of the mainstream media as well, who like to think that Top People all philander, flyover country is racist, and monkeys 'r' us.
Various fixes for the problem are on offer. In the Chronicle of Higher Education, Tom Bartlett reports on an effort called the Reproducibility Project, part of the Open Science Framework, which aims to expose the bunk by focusing on reproducibility of findings. "This," Bartlett writes, "is a polite way of saying, 'We want to see how much of what gets published turns out to be bunk.'"23 But what if the problems run deeper than exposing bunk?
For example, Dutch psychologist Jelte M. Wicherts notes that most social psychologists "simply fail to document their data in a way that allows others to quickly and easily check their work."24 He advocates sharing data, which sounds like a fine idea, but if this failure has gone unaddressed for decades, how do we know what data isn't bunk?
More usefully, astrophysics postdoc Andrew Pontzen asks us to look at what has changed. "Peer-review offered a quality-control filter in an age where each printed page cost a significant amount of money," he points out, but today, physicists download papers from arxiv.org irrespective of their peer-review status. And they form their own opinions, without the gatekeeper.
So Pontzen suggests an alternative, better suited to the internet age:
Imagine a future where the role of a journal editor is to curate a comment stream for each paper; where the content of the paper is the preserve of the authors, but is followed by short responses from named referees, opening a discussion which anyone can contribute to. Everyone could access one or more expert views on the new work; that's a luxury currently only available to those inside privileged institutions.25
At Physics World, Stefan Thurner adds a "scouts" model, where journal editors could trawl the preprint servers for suitable papers: "Papers that no-one wants to publish remain on the server and are open to everyone—but without the 'prestigious' quality stamp of a journal."26 Thus, the information in ignored papers would still be available, eliminating the problem of absolute censorship.
These suggested fixes recognize the reality that in today's science publishing, "getting published" is much less the issue than gaining attention and credibility in a trillion-word stream. But there may be a deeper, underlying issue as well.
Fudging Truth for the Cause
One wonders why the "skeptic" Michael Shermer isn't embarrassed by his astonishingly naive praise of peer review in Scientific American, in "The Believing Brain: Why Science Is the Only Way Out of Belief-Dependent Realism" (2011).27 As is so often the case when a troubled currency's value is diminishing, the underlying crisis is philosophical.
Cognitive psychologist Steven Pinker insists, "Our brains are shaped for fitness, not for truth; sometimes the truth is adaptive, sometimes it is not."28 And Darwinian philosophers like Michael Ruse insist that ethics is an illusion.29 When scientists accept these accounts of ethics, a dilemma arises: Materialistic atheism is supposed to accurately account for all events. All contrary results will eventually yield to that overriding fact. Then why not fudge a little in the short term, so that the triumph will not be delayed by present-day inconveniences such as stubborn disbelief? Well-known atheist science writer John Horgan explicitly endorses lying for science in such cases as the effort to fight global warming: "It's a war, and when people are waging war, they always lie for their cause."30
A Different Standard for Christians
Christians face this temptation, too. But for Christians, truth is a Person, a Person who never asked them to lie for him, but who, on the contrary, threatens to consign liars to the fire (Rev. 21:8), not to tenure.
Thus—to the extent that scientists accept materialistic atheism—we will likely see plenty of smoke, noise, and mirrors around reform, but no real reform. Real reform means deciding that ethics is not an illusion but a correct relationship with reality. And discovering truth is what a brain is fit for. Science used to be like that. •
© 2017 Salvo magazine. Published by The Fellowship of St. James. All rights reserved.