
Thread: Lies, Damned Lies, and Medical Science

  1. #1
    Senior Member Rick1's Avatar
    Join Date
    Jul 2001
    Location
    Carlsbad, CA
    Posts
    716

    Lies, Damned Lies, and Medical Science

    Lies, Damned Lies, and Medical Science
    http://www.theatlantic.com/magazine/...science/8269/1

    Can medical research studies be trusted?

    Beyond the headlines, Ioannidis was shocked at the range and reach of the reversals he was seeing in everyday medical research. “Randomized controlled trials,” which compare how one group responds to a treatment against how an identical group fares without the treatment, had long been considered nearly unshakable evidence, but they, too, ended up being wrong some of the time. “I realized even our gold-standard research had a lot of problems,” he says. Baffled, he started looking for the specific ways in which studies were going wrong. And before long he discovered that the range of errors being committed was astonishing: from what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals.

    This array suggested a bigger, underlying dysfunction, and Ioannidis thought he knew what it was. “The studies were biased,” he says. “Sometimes they were overtly biased. Sometimes it was difficult to see the bias, but it was there.” Researchers headed into their studies wanting certain results—and, lo and behold, they were getting them. We think of the scientific process as being objective, rigorous, and even ruthless in separating out what is true from what we merely wish to be true, but in fact it’s easy to manipulate results, even unintentionally or unconsciously. “At every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded,” says Ioannidis. “There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded.”

    “If we don’t tell the public about these problems, then we’re no better than nonscientists who falsely claim they can heal,” he says. “If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.”

    We could solve much of the wrongness problem, Ioannidis says, if the world simply stopped expecting scientists to be right. That’s because being wrong in science is fine, and even necessary—as long as scientists recognize that they blew it, report their mistake openly instead of disguising it as a success, and then move on to the next thing, until they come up with the very occasional genuine breakthrough. But as long as careers remain contingent on producing a stream of research that’s dressed up to seem more right than it is, scientists will keep delivering exactly that.

    “Science is a noble endeavor, but it’s also a low-yield endeavor,” he says. “I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”
    Know Thyself

  2. #2
    The fact that there is good and bad science should not be surprising to anybody. After all, that is why we have rigorous peer review for NIH funding, to ensure that proposed trials are critically and carefully reviewed. Clinical trials that have been peer-reviewed and published in the best journals tend to be good science.

    People like Ioannidis help ensure the quality of clinical trials. I agree with him that we need to pay more attention to the questions that clinical trials are asking and not only focus on whether the trials are designed to answer the question. In my opinion, the least useful clinical trials are those that come up with ambiguous answers where you can't say whether the therapy is beneficial or not. A clinical trial that shows that a therapy does not work is not a failure. As long as the conclusions are trustworthy and credible, the trial is successful.

    Too often, we are defining a successful trial as one that shows that the treatment works. I define a failed trial as one that is unable to show whether the treatment worked or not. The trial is a waste because it did not provide credible information in one or the other direction. In addition, very often, a trial may fail to show that a therapy is effective and at the same time fail to provide any clues as to why the treatment did not work.

    Perhaps the trial failed because the clinical trial design was inadequate, not because the treatment did not work. For example, a trial may not have tested enough patients; the therapy may not have been administered at a high enough dose, at the right time, or for a long enough duration; or the condition may not have been well defined, lumping together heterogeneous states that the therapy affects differently. In such cases, the negative result is a false negative.

    Most clinical trials are designed to rule out false positives and do not guard enough against the possibility of false negatives. To design a phase 3 pivotal trial properly, investigators do phase 2 trials to optimize the dose, the treatment conditions, the inclusion and exclusion criteria, and the outcome measures, maximizing the chances of getting reliable information.
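    The false-negative risk described above comes down to statistical power. Here is a minimal, hypothetical sketch (not from the article or any actual trial) that simulates many small randomized trials of a genuinely effective therapy and counts how often a simple two-sample z-test detects the effect. The effect size, arm sizes, and trial counts are all illustrative assumptions.

    ```python
    import random
    import statistics

    def detect_effect(n_per_arm, effect=0.5, sd=1.0, trials=2000, seed=0):
        """Simulate `trials` randomized trials in which the therapy truly
        shifts the outcome by `effect` standard deviations, and return the
        fraction that reach two-sided significance at the 5% level (the
        statistical power). Uses a two-sample z-test with known sd."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            control = [rng.gauss(0.0, sd) for _ in range(n_per_arm)]
            treated = [rng.gauss(effect, sd) for _ in range(n_per_arm)]
            # Standard error of the difference between two arm means
            se = (sd**2 / n_per_arm + sd**2 / n_per_arm) ** 0.5
            z = (statistics.mean(treated) - statistics.mean(control)) / se
            if abs(z) > 1.96:  # two-sided 5% cutoff
                hits += 1
        return hits / trials

    # The same real effect (half a standard deviation) is usually missed
    # with 10 patients per arm, but reliably found with 100 per arm.
    print(f"power with n=10:  {detect_effect(10):.2f}")   # roughly 0.2
    print(f"power with n=100: {detect_effect(100):.2f}")  # roughly 0.9
    ```

    The point of the sketch: an underpowered trial that "fails" tells you almost nothing about the therapy, which is exactly why phase 2 work on dose and sample size matters before a pivotal trial.
    
    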

    Let me give an example of a recent trial that assumed a treatment would work miracles against graft-versus-host disease (GVHD). Osiris was testing the effects of mesenchymal stem cells on GVHD, a condition that occurs when bone marrow or cord blood cells are infused to replace the bone marrow of a patient with hematological or autoimmune disease. Because animal experiments suggested that the treatment would suppress the transplanted cells' attacks on the recipient's body, they thought that the treatment would be effective for all types of GVHD. In the end, the trial showed that the treatment was successful for some kinds of GVHD and not others. The FDA gave them a limited approval for certain types of GVHD, and the company is now doing more trials to understand the treatment effects. They were lucky. If the treatment had been less robust, they could have missed the treatment effects entirely.

    Wise.

  3. #3
    The best and quickest way to tell if a study is any good is to look at the data. When reading a paper, a scientist will generally skip straight to the results to decide whether it's worth reading any further.

    Happily it's quite easy to do. Follow David Vaux's ten rules of thumb, which are explained here:

    http://www.asbmb.org.au/magazine/stu...ge%20Aug08.pdf

  4. #4
    That's a great article, but it makes points that will be lost on most non-scientists. Here is one for more casual readers:
    http://hampshire.edu/~apmNS/design/R.../HOW_READ.html
    And a nifty Powerpoint w/ good links at the end
    http://www.google.com/url?sa=t&sourc...oPQb5WqokrX0xA

  5. #5
    Those are pretty good too!

    I'd suggest practice is the only real way to get the hang of reading data. Though when you're just starting it helps heaps to know what you're looking for.

  6. #6
    Senior Member diddlindoug's Avatar
    Join Date
    Nov 2010
    Location
    Circleville Ohio
    Posts
    103
    I did public opinion research studies (polls). Shhh back there, they are NOT marketing crap...lol. But I think the same principle applies: bias is very hard to see and understand, especially if there is no third-party intervention during the planning stages. Removing bias means having no slant either way. It's like needing a person who (1) does not know or understand ANYTHING about the topic and (2) has NOT been exposed to the rationale of the study or to whoever is running it. Just some stuff I learned...thx
