Thursday, April 19, 2012

Framework of research skewed to nurture untruths

Last October, a 2010 paper of his published in Science and a 2011 paper in Nature Immunology were retracted by the journals for the same reason.
The immunologist, who has about 70 papers under his belt, left NUS in 2009. He began working at the University of Liverpool in August 2010 but remained an NUS visiting professor until May last year. He has been suspended by the British institution, which is investigating the case. All his work is also still under investigation at NUS. Dr Melendez has declined to speak to the media.
How common is such academic fraud? In January, the British Medical Journal e-mailed over 9,000 Britain-based scientists and doctors, of whom 2,700 replied; 10 per cent of them said they had observed colleagues intentionally manipulating or fabricating research data. Six per cent said the misconduct was not dealt with.
Reprehensible though such outright fraud may be, there are even more pervasive structural problems in biomedical research that may cause an unknown proportion of published works to be untrue.
One cause for concern is called the 'file drawer' problem because journals privilege positive over negative results. That is, papers are more likely to be published for reporting positive results - X causes Y or A cures B, say, rather than X does not cause Y or A does not cure B.
A University of Edinburgh study last year looked at 4,656 papers published between 1990 and 2007 to find that it had become progressively more likely for published papers to report positive results - especially in Japanese and United States journals.
As a result, researchers tend not to submit papers about studies that fail to prove an idea. These unpublished studies are, in effect, filed away unreported in the drawers of imaginary file cabinets.
This file drawer effect has vexed the US Food and Drug Administration and top journals for over a decade now. Unpublished negative results may cause others to replicate research that has already failed, thus wasting effort and resources.
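The distortion this causes can be made concrete with a small simulation (the numbers below are illustrative, not drawn from any actual study): suppose 500 trials test a treatment that truly has no effect, and journals publish only the 'significant' ones.

```python
import random

# Hypothetical simulation of the file drawer effect.  Each of 500
# studies of a treatment with NO true effect produces an effect
# estimate (a z-score drawn from a standard normal distribution).
# Journals publish only 'significant' results, i.e. |z| > 1.96.
random.seed(0)

estimates = [random.gauss(0, 1) for _ in range(500)]
published = [z for z in estimates if abs(z) > 1.96]

# Across ALL studies the average effect is near zero, as it should be;
# but among the published ones the apparent effect size is large.
mean_all = sum(estimates) / len(estimates)
mean_published = sum(abs(z) for z in published) / len(published)

print(f"studies run: {len(estimates)}, studies published: {len(published)}")
print(f"mean effect across all studies: {mean_all:+.2f}")
print(f"mean |effect| among published:  {mean_published:+.2f}")
```

Roughly 5 per cent of the null studies clear the significance bar by chance, and those alone enter the literature, so a reader of the journals sees a strong 'effect' where none exists.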
One solution may be to have journals that publish negative results. The oldest one for this purpose is the Journal of Negative Observations in Genetic Oncology, begun in 1997. Then there is the Journal of Negative Results in Biomedicine, begun in 2002, while the Journal of Cerebral Blood Flow and Metabolism introduced a negative results section in 2010.
However, very few papers have been published in these journals. This suggests most researchers are reluctant to share negative data which could help other scientists but not their own reputation.
A second problem is that because top journals in most fields have rejection rates of 90 per cent or more, studies that have novel results are more likely to get in. Under such pressure, scientists may resort to making extravagant claims. For example, a large, randomised, controlled trial 'proved' that 'secret prayer' by others saves the lives of heart surgery patients.
That study grabbed headlines, but studies making similarly immoderate claims are likely to be wrong, for the following reason: Evidence from randomised controlled trials, in which neither researchers nor subjects know who was given the placebo and who got the experimental treatment, is considered nearly incontrovertible. Yet even this gold standard can be faulty, because what questions are posed, how they are worded, how trial subjects are recruited, what is measured, what is left out and how the data are analysed all matter. All these parameters can be tweaked to get the desired findings, especially extravagant ones.
Complex software is used to mine and analyse data to get desired results. There are statistical packages whose selling point is that they can dredge the data to yield statistically significant results - the 'positive' results that are more likely to be published.
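Why dredging works is simple arithmetic, which a minimal sketch can show (the scenario is invented for illustration): if a researcher tests enough unrelated hypotheses on the same data set, some will look 'significant' purely by chance, because under a true null hypothesis a p-value is uniformly distributed between 0 and 1.

```python
import random

# Illustrative sketch of data dredging: simulate the p-values of 1,000
# hypothesis tests where the null hypothesis is TRUE in every case
# (null p-values are uniform on [0, 1]).  At the conventional 0.05
# threshold, about 5% will look 'significant' by chance alone.
random.seed(42)

num_tests = 1000
p_values = [random.random() for _ in range(num_tests)]
false_positives = sum(p < 0.05 for p in p_values)

print(f"{false_positives} of {num_tests} null tests look 'significant'")
```

A researcher who runs a thousand comparisons and reports only the few dozen that cross the threshold can fill a paper with findings that are nothing but noise.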
A third problem is that the much-vaunted peer review process is not neutral or objective. In fact, it can be biased: papers that include a famous name as a 'co-author' are more likely to be published. It can even be wrong, as shown by fraudulent studies that get published.
In 2006, the prestigious Nature editorialised: 'Scientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth.'
One game changer would be for funding agencies and top journals to require researchers, before they start a study, to pre-register in a public database what their hypotheses are and how they propose to prove or disprove them. They would then post their raw data and statistical analyses on that public registry for all to see.
In 2004, the International Committee of Medical Journal Editors of top journals such as the New England Journal of Medicine and The Lancet began requiring comprehensive pretrial registration of studies as a condition for publication.
Such pre-registration would prevent post-hoc manipulation of data - and of research questions to fit the data. In the US, the site www.clinicaltrials.gov already does this. If other nations emulate that effort, the research enterprise could become a better one.