
Thread: Negative results: Null and void

  1. #1

    Negative results: Null and void

    Source

    Nature 422, 554 - 555 (2003); doi:10.1038/422554a


    Many scientific studies produce negative results that never see the light of day. Is progress in some disciplines being hampered by researchers' tendencies to consign these data to the bin? Jonathan Knight investigates.

    [Photo: M. J. FACKLER] Helen Colhoun and Scott Kern believe researchers would benefit from greater access to negative results.
    Although scientists clamour to publish the results of successful experiments, they are less excited about trumpeting those that simply confirm the null hypothesis - that a particular genetic marker isn't associated with an inherited disease, for instance, or that there is no difference between mice given a candidate drug and those in the control group.

    Whether as a result of an eagerness to move on - or perhaps, in some instances, a desire not to reveal to competitors the avenues they have been fruitlessly exploring - most researchers don't bother to write up negative results. Even when they do, journals might be unreceptive. Unless a paper convincingly overthrows a widely held belief, negative findings tend to be of less interest than positive ones.

    But what is the cost to science of all these data languishing in the bin? How many postdoc years and scarce grant funds are wasted on projects that have failed previously in other labs? And is our scientific understanding in some cases biased by a literature that might be inherently more likely to publish a single erroneous positive finding than dozens of failed attempts to achieve the same result?

    Answering these questions is extremely difficult. In some fields, awareness of negative results tends to spread rapidly by word of mouth, even if the data are never published. If a cell line fails to behave as described in a high-profile paper, for instance, the news tends to spread among the biologists who need to know, whether or not anyone actually publishes a paper refuting the original discovery.

    Nevertheless, many researchers interviewed for this article suspect that their disciplines would benefit if negative results were to get a public airing. At present, the obstacles to disseminating negative findings make it difficult even to assess the extent of the problem. "It is hard to see what the bottom of the iceberg is like when you are sitting on top of the water," observes Helen Colhoun, a genetic epidemiologist at University College London.

    Awareness of the problem is growing. Over the past few years, a handful of journals and online repositories dedicated to negative results have been launched - with varying degrees of success. In certain fields, some scientists are even arguing that a requirement to reveal negative results should be made a condition of publishing a positive finding.

    Gold standard
    In Colhoun's discipline, the problem is particularly acute. The postgenomic era has seen an explosion of 'gene association' studies, in which researchers screen large numbers of people for thousands of genetic markers. Their aim is to see whether some of the markers seem to be inherited alongside a disease, which is taken as evidence that a gene conferring susceptibility to the condition lies nearby. But just as gold prospectors keep the nuggets and throw the pebbles back into the stream, those engaged in the new genetic gold-rush tend to report only positive associations, leaving the rest to be panned through by others again.

    Worse, it turns out that many of the positive results that have been published may be errors. Last year, a team led by geneticist Joel Hirschhorn of the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts, reviewed the literature on 166 common genetic variants that had been linked to diseases such as heart disease or acne at least once, and which had been subject to association analysis at least three times. He found that there were consistent results for only six of the variants1. This suggests that false positives and false negatives are all too easy to come by - and because there tends to be a bias towards publishing positive associations, it stands to reason that many genetic links to disease described in the literature are wrong.
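    The arithmetic behind this concern can be illustrated with a small simulation. The sketch below is not from the article or from Hirschhorn's analysis; it is a hypothetical model in which a screen tests many markers, only a few of which are truly associated with disease, and only 'significant' results get written up. The marker counts, power and significance threshold are illustrative assumptions.

    ```python
    import random

    random.seed(42)

    def simulate_association_screen(n_markers=1000, n_true=10, power=0.5, alpha=0.05):
        """Toy model of a gene-association screen with publication bias.
        Truly associated markers are detected with probability `power`;
        null markers yield a false positive with probability `alpha`.
        Only significant results are 'published'."""
        published_true = 0
        published_false = 0
        for i in range(n_markers):
            truly_associated = i < n_true
            p_hit = power if truly_associated else alpha
            if random.random() < p_hit:  # only significant results get written up
                if truly_associated:
                    published_true += 1
                else:
                    published_false += 1
        return published_true, published_false

    true_pos, false_pos = simulate_association_screen()
    # With only 10 real associations among 1000 markers, chance alone
    # produces roughly 0.05 * 990, or about 50, spurious hits - several
    # times the number of genuine detections in the published record.
    print(true_pos, false_pos)
    ```

    Under these assumptions most published associations are false, even though every individual study met a conventional significance threshold - which is exactly why a record of the negative replications matters.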

    The practice of shelving negative results also leads to problems in other fields. Two years ago, Britain's Animal Procedures Committee, which advises the UK government on its policies on research involving animals, raised the concern that scientists may be duplicating experiments that had already failed in other labs. The committee recommended that the Home Office, the government department responsible for issuing licences for animal research, require its licensees to share their negative results, possibly through a government website. But officials argued that publishing research findings is not the government's job, and the issue remains unresolved.

    Non-publication of negative results may also be skewing the debate over the safety of transgenic crops. Trials of genetically modified plants overwhelmingly reveal no adverse environmental consequences or health effects, argues Alan McHughen, a plant biotechnologist at the University of California, Riverside, and the results generally go unpublished. "Journal editors say: 'So what?'," McHughen says. Not that he blames them - he wouldn't want to pick up Nature and read a procession of negative, unsurprising findings. But as the trials tend to be catalogued in obscure government documents, the public and scientists outside the field are often unaware of them.

    Can anything be done? Clearly, journals that seek to maximize their visibility will continue to publish only high-impact papers. Occasionally a negative result falls into that category. On 27 February, Nature published a physics paper that ruled out certain types of string theory by searching for deviations from Newton's inverse-square law and finding none2. And The Lancet recently published a large study that failed to confirm a previous hypothesis that certain versions of the gene for apolipoprotein E make smokers more susceptible to heart disease3.

    Positive steps
    But these are exceptions. To handle the steady stream of lower-profile negative findings, some scientists are setting up their own publishing efforts - with mixed results. Bjorn Olsen, a cell biologist at Harvard Medical School, for instance, established the Journal of Negative Results in Biomedicine (JNRBM) last year. The main requirement is that the results be reproducible. Beyond that, anything biomedical goes, from failed clinical trials to reagents that don't work as advertised.

    Submissions to the JNRBM, which is published online by London-based BioMed Central, go out for peer review only if they are deemed interesting by the journal's editorial board. This should prevent the JNRBM from becoming a laundry list of experiments with predictably negative outcomes, says Olsen. Since the journal went live in November, it has received 11 submissions, 7 of which have gone out for review. Three of these have been accepted and two are now online, Olsen says.

    A modest beginning, perhaps, but better than a similar effort in computer science, set up in 1997. In an article4 in the Journal of Universal Computer Science, editorial board member Lutz Prechelt of the University of Karlsruhe in Germany announced a new section of the journal to be called the Forum for Negative Results (FNR). He argued that valuable insights in computer engineering are lost when people discard their failed solutions to problems, rather than reporting them. But since then, there has not been a single submission, leaving Prechelt to suspect that computer scientists just don't like facing their failures. "Maybe I should write up a submission to FNR describing the concept as a negative result," he jokes.

    Another publication that encourages the submission of negative findings is a new biotechnology journal. The Paris-based International Society for Biosafety Research, of which McHughen is a board member, recently launched Environmental Biosafety Research as a forum for publishing field trials of transgenic crops, including the majority that show nothing alarming or surprising. The first issue, which appeared in October, included four research articles, one of which reported the negative result that an insecticide-producing maize did not harm non-target species.

    Although journals may be part of the answer, they won't work well for fields such as gene association, in which negative results outweigh positives by orders of magnitude. "No one can read 150 papers and remember what they read," says Colhoun.

    In such cases, presentation of negative findings in a more abbreviated form on the Internet seems the obvious answer. To this end, Colhoun has assembled a group of colleagues to discuss possible approaches, with a view to publishing the results of their deliberations in The Lancet, which has taken an interest in the issue. One important issue is ensuring that sufficient experimental details are provided to allow negative results to be interpreted. Colhoun's group is, for instance, considering recommending something akin to the MIAME (minimum information about a microarray experiment) standards established last year to aid the comparison of gene-expression studies using DNA chips.

    Still, the availability of an online database into which scientists can deposit their negative results does not guarantee that it will be used, as Scott Kern has found. A cancer researcher at Johns Hopkins University School of Medicine in Baltimore, Maryland, Kern set up NOGO, which stands for the Journal of Negative Observations in Genetic Oncology, on his website six years ago. Although styled as a journal, NOGO is a repository for brief reports of negative results, including those that have been published elsewhere. When he set up the site, Kern provided a simple form for submitting negative results about genes suspected to be involved in cancer, approached colleagues at meetings and distributed flyers. Reactions were very positive, but contributions never rose above a trickle.

    So one of the tasks Colhoun has set for her working group is to come up with a system of carrots and sticks to prise out negative data from the genetic-epidemiology community. One idea might be for journals publishing positive findings to require authors to make any negative data available as a condition of publishing a positive gene association, says Mark McCarthy, one of Colhoun's colleagues at University College London, and a member of her working group.

    Whether journal editors will buy into that idea is unclear. But McCarthy is optimistic that a cultural shift is under way. "People are starting to realize the benefit of looking at other people's negative data," he says.

    Journal of Negative Results in Biomedicine

  2. #2
    Senior Member
    Join Date
    Apr 2002
    Location
    Newcastle, Australia
    Posts
    741
    Interesting article, Wise. I worked in the health promotion field of our area health service. They conducted health promotion research. They would search for the proverbial needle in a haystack to publish a finding, and I asked why they weren't as keen to publish results that confirmed the null hypothesis.
    Could it be that 'successful' researchers are regarded more highly and the prospect of future grant approvals increases? I hope not.
    Andrew

    "You can stand me up at the gates of hell
    But I won't back down"
    Tom Petty

  3. #3
    Andrew,

    There is one aspect of negative data that this article does not talk about. Scientists doing experiments are like chefs cooking a meal. Two chefs can start out with the same ingredients and tools but turn out very different meals. When a scientist gets a negative result on something that somebody else has reported to work, that scientist is under the onus of proving that he/she did the work well. It is much harder to replicate the work of another person than to cook your own meal.

    We have sometimes gotten negative results on therapies that others have reported to work. The nagging question is whether we did it right, whether there was some little detail in the experimental protocol that we missed, or whether there are differences in the spinal cord injury model that may have accounted for the negative results. I feel obligated to repeat the experiment until all the details have been accounted for before I am willing to publish something saying that a treatment does not work.

    What this article talks about is the reluctance of journals to publish negative results. Nobody wants to read about failures. They want to read about successes. That is why journals such as NOGO, which stands for the Journal of Negative Observations in Genetic Oncology, are having a hard time. Who would subscribe to a journal of negative observations? I think that such journals are not the answer as much as a change in the editorial policy of established journals to encourage and publish articles that report negative results.

    Negative results are very important because they tell other scientists what does not work. This saves a huge amount of time and resources, since other scientists do not then make the same mistakes. At present, the mistakes of science are being repeated in many laboratories because journals are unwilling to spend their space reporting them.

    Wise.
