Rebuilding After the Replication Crisis—Asterisk

@tags:: #lit✍/📰️article/highlights
@links::
@ref:: Rebuilding After the Replication Crisis—Asterisk
@author:: Stuart Ritchie (asteriskmag.com)

2023-12-14 asteriskmag.com - Rebuilding After the Replication Crisis—Asterisk


Reference

Notes

Quote

Why check the validity of one another’s findings when, instead, we could be pushing on to make new and exciting discoveries?
- No location available
-
- [note::Honestly, I can't fault people for thinking this way. Validating others' findings is boring and unprestigious; a new and exciting discovery may get you an award.]

Quote

Research in all fields was affected by fraud, bias, negligence and hype, as I put it in the subtitle of my book Science Fictions. In that book, I argued that perverse incentives were the ultimate reason for all the bad science: Scientists are motivated by flashy new discoveries rather than “boring” replication studies — even though those replications might produce more solid knowledge. That’s because for scientists, so much hinges on getting their papers published — particularly getting published in prestigious journals, which are on the lookout for groundbreaking, boundary-pushing results. Unfortunately, standards are so low that many of the novel results in those papers are based on flimsy studies, poor statistics, sloppy mistakes or outright fraud.
- No location available
-
- [note::Makes sense - how do we realign incentives to counteract this?]

Quote

For instance, unless you were a clinical trialist, you likely wouldn’t recognize the term “preregistration.” This involves scientists planning out their study in detail before they collect the data, and posting the plan online for everyone to see (the idea is that this stops them “mucking about” with the data and finding spurious results). And unless you were a physicist or an economist, you might be surprised by the rise of “preprints” — working papers shared with the community for comment, discussion and even citation before formal publication. These ideas come under the rubric of “open science,” a term that in 2012 you might have heard of (it’s been around since the 1980s), but that in 2022 is discussed almost everywhere.
- No location available
-

Quote

There are also telling patterns in the tools they’re using. The Open Science Framework, a website where scientists can post their plans, share their data and generally make their whole research process more transparent, had somewhere near zero users in 2012, but by the end of 2021 had hit 400,000.
- No location available
-

Quote

Over 300 journals across a variety of fields now offer the ultimate form of preregistered research, the “Registered Report,” where it’s not just that a plan is posted and then the study goes ahead, it’s that peer reviewers review a study plan before the study happens. If the plan passes this quality control — and the reviewers might suggest all sorts of changes before they agree that it’s a good study design — the journal commits to publish it, regardless of whether the results are positive or negative. This is a brilliant way of making sure that decisions about publication are made on the basis of how solid the design of a study is — not on the perceived excitement levels of its results.
- No location available
-
- [note::Love this - pre-committing to publish regardless of the favorability of the results.]

Quote

Researchers in medical trials are forced to be transparent in a way that would be unrecognizable to scientists in other fields, whose research can effectively be entirely done in secret.
- No location available
-

Quote

Instead, we have to look at some proxies for quality. In psychology, one such proxy might be “adherence to open research”: How much of the new replication-crisis-inspired reforms do they follow? Sadly, for this, all we have so far is a starting point: Only 2% of psychology studies from 2014–2017 shared their data online, and just 3% had a preregistration.
- No location available
-

Quote

Using adherence to open research as a proxy for research quality is complicated by the fact that it’s possible to post a preregistration and then simply not follow it, or write it so vaguely that it doesn’t constrain your “mucking about” in the intended way.
- No location available
-

Quote

A study from earlier this year found that, in studies where the authors wrote that they’d be happy to share their data on request, only 6.8% actually did so when emailed. In other words, it’s possible to go through the motions of “open science” without it really affecting your research or the way you behave — a problem that’s increasingly been spotted as more researchers sign up to these “open science” techniques. If you want to really make your research open, you have to actually mean it.
- No location available
-
- [note::A form of "lip service" or "symbolic compliance"?]

Quote

In other fields, though, all we have are starting points but no data on long-term trends. Almost uniformly, the starting point is one of very low power. That’s true for psychology in general, clinical and sports and exercise psychology in particular, ecology, global change biology (the field that studies how ecosystems are impacted by climate change), economics, and political science. Other areas like geography have seen glimmers of a replication crisis but haven’t yet collected the relevant meta-scientific data on factors like statistical power to assess how bad things are. We’ll need a lot more meta-research in the future if we want to know whether things are getting better (or, whisper it, worse).
- No location available
-
- [note::Is the average statistical power of studies within a field correlated with the size/maturity of that field? In other words, emerging/unpopular fields typically have smaller average sample sizes than mature/popular fields do. (See the power sketch below.)]
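
A minimal simulation sketch (not from the article) to make the power/sample-size point above concrete. It assumes a simple two-group t-test design with a hypothetical true effect of d = 0.4; the sample sizes and effect size are illustrative assumptions, not figures from any of the fields discussed.

```python
# Illustrative power simulation (assumptions: two-group t-test, true effect d = 0.4).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def estimated_power(n_per_group, effect_size=0.4, alpha=0.05, n_sims=5_000):
    """Fraction of simulated two-group experiments that reach p < alpha."""
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(effect_size, 1.0, n_per_group)
        _, p = ttest_ind(control, treatment)
        hits += p < alpha
    return hits / n_sims

for n in (20, 50, 100, 250):
    print(f"n = {n:>3} per group -> power ~ {estimated_power(n):.2f}")
```

Under these assumptions, ~20 participants per group gives power in the low 20% range, while roughly 100 per group is needed to approach the conventional 80% target — one way to see why fields with small typical samples end up with "very low power."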

Quote

Even then, the mere knowledge that studies are, say, getting bigger shouldn’t reassure us unless those studies are also becoming more replicable — that is to say, a closer approximation to reality. And although areas like psychology and economics have attempted to replicate dozens of experiments, there hasn’t been time to make the same attempts to replicate newer studies or compare the replication rates over time. We likely won’t see meta-research like this for a long time — and for some fields, a very long time. Witness how long it took the Reproducibility Project: Cancer Biology, a heroic attempt to replicate a selection of findings in preclinical cancer research, to finish its research: It began in 2013, but only just reported its final mixed bag of results in December 2021.
- No location available
-
- [note::…and as the average sample size (and thus statistical power) of studies in a given field grows, replicating those studies gets harder in practice, since a credible replication needs a comparably large sample and more time and money.]