Episode 4 - Scientific Study Reports There Are Too Many Scientific Studies

Image credit: Atlassian Confluence, using ISI data from the journal "Scientometrics" and Science of Science and Innovation Policy (SciSIP) data from NSF Awards Search.

Meanwhile in the land of scientometrics, a new scientific report concludes that there are too many scientific reports for scientists to keep track of. As you can tell, this episode of the podcast gets just a tad meta.

Episode Transcript

A recent study posted to arXiv claims that there may be more newly published scientific studies these days than scientists can actually keep up with. (arXiv is an open-access electronic archive and distribution server for research articles, maintained and operated by the Cornell University Library.) The authors of the paper, Parolo et al., claim that the exponential growth in scientific papers makes it difficult for researchers to keep track of pertinent research in their fields of interest. The amount of attention a paper can maintain in its field, measured by its citation count, also suffers as a result of this rapid growth in publications.

To take a closer look at the actual lifespans of scientific literature, Parolo et al. performed a systematic analysis of the life cycles of scientific papers across historical periods and multiple scientific fields, including physics, medicine, chemistry, and biology. They looked at a comprehensive list of publications (both articles and reviews) written in English through the end of 2010 that were included in the Thomson Reuters (TR) Web of Science database.

It has long been known that the typical lifespan of a paper is short: its citation count increases for a few years after publication, reaches a peak, then decays rapidly. What these authors found is that this exponential decay appears to have become even faster in recent years, most likely, as they suggest, due to the sheer increase in the amount of published research available to the public. They have termed this phenomenon "attention decay".
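To make the shape of that life cycle concrete, here is a minimal sketch of the rise-then-decay curve described above. The peak year, peak citation count, and decay rate are made-up parameters for illustration, not values fitted by Parolo et al.

```python
import math

def yearly_citations(years_since_pub, peak_year=3.0, peak_count=40.0, decay_rate=0.5):
    """Toy citation curve: linear rise to a peak, then exponential decay after it.

    All parameters are hypothetical, chosen only to illustrate the shape
    of the life cycle described in the text.
    """
    if years_since_pub <= peak_year:
        # Rising phase: citations climb toward the peak.
        return peak_count * years_since_pub / peak_year
    # Decaying phase: attention falls off exponentially after the peak.
    return peak_count * math.exp(-decay_rate * (years_since_pub - peak_year))

for t in range(0, 13, 2):
    print(f"year {t:2d}: ~{yearly_citations(t):5.1f} citations/year")
```

Running this prints a curve that climbs for roughly three years, peaks, and then shrinks toward zero; "attention decay" is the claim that the falling tail of this curve has been getting steeper over time.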

Why is this such a problem?

Science, among other fields like entertainment, operates on an attention economy. Researchers who produce new knowledge are rewarded with attention from their peers in the form of citations - and in turn, their standing is measured by the number of citations they receive. The entertainment industry works the same way: celebrities become phenomena, or "stars", by receiving far more attention than is the norm in their field. So not only is attention the currency of science, but as you can imagine, attention is linked to things like productivity. The more attention something gets, the more productive its creator tends to be. Look at any content creator on the internet. The authors use YouTube videos as a prime example: the more attention a channel gets because of a certain video, the more content that YouTuber is likely to create. It's a positive feedback loop. Conversely, if a video receives very little attention, or the attention it does get starts to taper off, you most likely won't see much more content from that channel. In the sciences, attention motivates you to keep doing research, keep discovering new knowledge, and publish more papers. Attention probably has some influence on funding as well, which is a pretty strong motivator to begin with.

So why and how does this decay in attention in the sciences happen in the first place?

Presumably, the phenomenon is a tad similar to the nature of viral videos or memes. The novelty of something eventually dies off over time. They can be revived, and boy can they be revived, but a meme today isn't necessarily a meme next week. The authors of this paper also suggest that "the human capacity to pay attention to new content is limited" in and of itself. New papers with new research come along so often, even more so these days than before, and steal the spotlight from older papers. Just like memes. Man, this meme analogy really works well. Why is this only now getting attention? Well, it isn't. There's an entire field dedicated to measuring and analyzing the quantitative character of science and scientific research, called scientometrics...and this issue of rapid obsolescence of scientific material has been gaining a lot of...well, you know, attention.
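As a rough illustration of that crowding-out effect, here is a toy calculation assuming readers hand out a fixed total amount of attention each year while the number of competing new papers grows exponentially. The attention budget and growth rate below are hypothetical numbers, not figures from the study.

```python
TOTAL_ATTENTION = 100_000   # hypothetical: total citations readers hand out per year
PAPERS_YEAR_0 = 1_000       # hypothetical: competing new papers in year 0
GROWTH = 0.08               # hypothetical: 8% yearly growth in publications

for year in range(0, 21, 5):
    # Exponential growth in the number of papers competing for attention.
    competing = PAPERS_YEAR_0 * (1 + GROWTH) ** year
    # A fixed attention budget split among ever more papers.
    share = TOTAL_ATTENTION / competing
    print(f"year {year:2d}: {competing:8.0f} competing papers -> ~{share:5.1f} citations each")
```

Even with nothing else changing, each paper's share of a fixed attention pool shrinks year over year, which is the basic mechanism the authors point to: more papers, same eyeballs, faster decay.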

How serious are these findings?

As you can imagine, while these findings seem worrisome, the fields the authors pooled these papers from were very broad. Physics, Medicine, Biology, and Chemistry - I can think of at least 20 college courses for each subtopic in each field. The authors admit this as well, acknowledging that the fields are broad and contain many, many subtopics, but that it would be very difficult to isolate the relevant literature case by case. I imagine the results would still hold, however, and remain just as worrisome. Not only does attention decay exponentially over time, as you might expect, but the decay also seems to be accelerating as a direct result of the growth in publications, which in turn increases the turnover rate of each paper.

So how do we fix this?

You can imagine that if scientists can't keep track of all the new literature being published in scientific journals...someone out there might be doing research on a question that another lab has already answered, or that a systematic review of existing evidence could settle. And even then, that new article could have issues of its own, adding to the quantitative pool of scientific literature without contributing much qualitatively. A viewpoint article published by Iain Chalmers & Paul Glasziou in The Lancet in 2009 delves into this issue of waste in producing and reporting research evidence. They claim that:

"New research should not be done unless, at the time it is initiated, the questions it proposes to address cannot be answered satisfactorily with existing evidence. Many researchers do not do this—for example, Cooper and colleagues found that only 11 of 24 responding authors of trial reports that had been added to existing systematic reviews were even aware of the relevant reviews when they designed their new studies."

That's a problem. In this article, the authors took a closer look at the causes and magnitude of waste in scientific research and publications by splitting up the process into four stages:

  1. The choice of research questions
  2. The quality of research design and methods
  3. The adequacy of publication practices
  4. The quality of the reports of research

They argue that since research must pass through all four of these stages, the waste at each stage is cumulative. Under that assumption, they estimate that as much as 85% of research investment is wasted due to poor research design, unnecessary new research when reviews of prior publications would have sufficed, biased reporting, or unpublished results. How do we fix this? By being aware of the problems that lead to wasteful scientific reporting and working to avoid them. Certain rules and enforced practices can help as well. For instance, the National Institute for Health Research in the UK has a Health Technology Assessment Programme that does a number of things to minimize research waste. The programme "routinely requires or commissions systematic reviews before funding primary studies, publishes all research as web-accessible monographs, and, since 2006, has made all new protocols freely available".
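To see how losses at sequential stages can compound to a figure like 85%, here is a minimal worked example. The per-stage loss fractions are hypothetical, chosen only so the compounded total lands near Chalmers & Glasziou's estimate; they are not numbers from the article.

```python
# Hypothetical per-stage loss fractions (NOT from the article), chosen so the
# compounded waste lands near the ~85% figure Chalmers & Glasziou report.
stage_losses = {
    "choice of research questions": 0.30,
    "design and methods":           0.30,
    "publication practices":        0.50,
    "quality of reports":           0.40,
}

surviving = 1.0
for stage, loss in stage_losses.items():
    # Only research that survives every prior stage reaches this one,
    # so the losses multiply rather than add.
    surviving *= 1.0 - loss
    print(f"after {stage:<28}: {surviving:5.1%} of investment still usable")

print(f"cumulative waste: ~{1 - surviving:.0%}")
```

The point is that the losses are multiplicative: even moderate waste at each of four stages leaves only a small fraction of the original research investment usable at the end.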

Researchers can also keep track of false findings that have been published and subsequently retracted thanks to Retraction Watch, a website that tracks...you guessed it, articles retracted from their respective journals. The project was founded in 2010 by Ivan Oransky, a medical editor and physician, and journalist Adam Marcus, when they saw the need for a comprehensive database of article retractions, one that could give researchers an idea of which papers and results were erroneous or even fraudulent. They recently received a $400,000 grant from the MacArthur Foundation, which they plan to use to make their database fully comprehensive and to improve how users search it, among other upgrades to the website and organization.

While a bunch of meta-research (that's research being done on research, whoa so meta) is helping to expose ways in which scientific research can improve, not only in its execution but in its eventual publication, the onus really and truly falls on the scientists doing the research. As we've seen, though, it's not an easy task: with all of these new scientific publications flooding in, it's hard to keep track of what's what in your field, and even harder to remember it all as time goes on and newer papers get published. But there are resources out there to help science make better decisions, whether that's making sure we're asking the right research questions, keeping our studies free of bias, or tracking the errors in our fields...and it looks like more people are waking up to this dilemma, so hopefully these resources will continue to grow moving forward.

Sound effects used in this episode are from www.freesfx.co.uk. All music tracks are attributed to Kevin MacLeod and are licensed under Creative Commons: By Attribution 3.0 creativecommons.org/licenses/by/3.0/. All audio clips included in the podcast are used for nonprofit, educational purposes. 

The Synapse Science Podcast is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.