
Missing Mice

There are a number of problems with scientific publications.  Some are avoidable, others are not.  Some are well documented, others remain veiled.  Some have the potential to significantly damage a study’s validity, while others merely call it into question.  Sometimes a single flaw in the process can encompass several of these at once.

Last month Nature News posted an article on their website titled Missing Mice: Gaps in the data plague animal research.  The tagline, ‘reports of hundreds of biomedical experiments lack essential information’, sums up an increasingly evident failing of scientific publications: crucial study components go undocumented.  While not the first to raise awareness of such shortcomings, the Nature article focuses on two separate studies that reinforce this growing concern.

The first study, from the Charité Medical University in Berlin, reviewed 522 rodent-based experiments from 100 papers published between 2000 and 2013, and discovered that two thirds of them showed a drop in the number of animals used between the methods and results sections.  Dropping one or several test subjects, for valid and noted reasons, is sometimes part of the scientific process.  In the experiments investigated, however, only 14 explained why.  This suggests potentially misleading results, or poor experimental protocol.  It could also point to intentional or unintentional reporting bias, wherein individual results that would have confounded the desired overall outcome were cast aside.  Further analysis showed that removing selected data points could shift a study’s overall results by as much as 175%.
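To see why unexplained exclusions matter, consider a toy simulation.  This is my own illustration, not part of the Charité analysis, and every number in it is invented: even when a treatment has no real effect at all, quietly dropping the least favourable animals manufactures an apparent one.

```python
# A toy simulation (not the Charité analysis itself): both groups are
# drawn from the SAME distribution, so the true treatment effect is zero.
import random
import statistics

random.seed(42)

# 20 hypothetical animals per group, outcome ~ Normal(0, 1) for both.
control = [random.gauss(0, 1) for _ in range(20)]
treated = [random.gauss(0, 1) for _ in range(20)]

honest_effect = statistics.mean(treated) - statistics.mean(control)

# Silently "lose" the three treated animals that responded least,
# the way an unreported exclusion between methods and results might.
trimmed = sorted(treated)[3:]
biased_effect = statistics.mean(trimmed) - statistics.mean(control)

print(f"Effect with all animals reported:  {honest_effect:+.2f}")
print(f"Effect after silently dropping 3:  {biased_effect:+.2f}")
```

The trimmed comparison will reliably show a larger apparent effect, despite the two groups being statistically identical; with no note in the paper explaining the exclusions, a reader has no way to tell the two analyses apart.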

How many publications, especially heavily cited ones, have discrepancies between the initial n value proposed and the number actually used in the final conclusions?  It would seem straightforward to justify the use of a few dozen or a few hundred mice for a study, break that number up into treatment groups over several runs, then remove data points here and there whenever they disagreed with the sought-after result.  How many casual, or even invested, readers would go back and double-check that all the numbers add up to the initial amount, as sketched below?
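As a hypothetical example of such a check, the sketch below compares the total n stated in a methods section with the sum of the group sizes reported in the results.  The figures are invented purely for illustration.

```python
# Hypothetical bookkeeping check: do the group sizes reported in the
# results add up to the total n stated in the methods?
methods_n = 48  # total animals declared in the methods section (invented)

# animals per group as reported in the results section (invented)
results_groups = {"control": 12, "low dose": 11, "high dose": 10, "vehicle": 12}

results_n = sum(results_groups.values())
missing = methods_n - results_n

if missing:
    print(f"{missing} of {methods_n} animals are unaccounted for "
          "between the methods and results sections.")
else:
    print("All animals accounted for.")
```

Trivial as it is, this is exactly the arithmetic the Charité team found wanting in two thirds of the experiments they reviewed.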

The second study examined whether 268 randomly selected biomedical papers provided full data, as well as sufficient detail to replicate the work.  The Stanford-led study discovered that none supplied full data and only one provided enough information to reproduce the experiment.  It also reported that 33% of papers in 2014 included conflict-of-interest statements, compared to 10% in 2000.

These findings are considerably discouraging.  Both studies reflect relatively small sample sizes, yet they provide important insights into areas of scientific reporting that many take for granted.  The investigations are part of a larger meta-analysis looking to identify and address the most common and problematic flaws in scientific publications.  Uncovering all the past inaccuracies is an immense task, but an incredibly valuable one if future inaccuracies are to be avoided.  The risk that practices like those uncovered by these studies pose to proper scientific output should not be underestimated or disregarded.  The immediate hope is that widespread awareness, across all areas of research, of the complications and biases such practices can produce will allow a fresh start of sorts.  Naturally, not all reporting and methodological inaccuracies are intentional or malicious; yet a broader understanding and appreciation of their implications should lead to their reduction, and to more reliable and improved research.