Bad Pharma, a new book by Ben Goldacre, looks into the research practices of Big Pharma. Apparently, negative findings about new drugs are systematically suppressed, even in the academic environment:
In 2010, researchers from Harvard and Toronto found all the trials looking at five major classes of drug…: were they positive, and were they funded by industry? They found more than 500 trials in total: 85% of the industry-funded studies were positive, but only 50% of the government-funded trials were. In 2007, researchers looked at every published trial that set out to explore the benefits of a statin….This study found 192 trials in total, either comparing one statin against another, or comparing a statin against a different kind of treatment. They found that industry-funded trials were 20 times more likely to give results favouring the test drug.
…In 2003, two [systematic reviews] were published. They took all the studies ever published that looked at whether industry funding is associated with pro-industry results, and both found that industry-funded trials were, overall, about four times more likely to report positive results….
In general, the results section of an academic paper is extensive: the raw numbers are given for each outcome, and for each possible causal factor, but not just as raw figures….In Fries and Krishnan (2004), this level of detail was unnecessary. The results section is a single, simple and – I like to imagine – fairly passive-aggressive sentence:
“The results from every randomised controlled trial (45 out of 45) favoured the drug of the sponsor.”
How does this happen? How do industry-sponsored trials almost always manage to get a positive result? Sometimes trials are flawed by design. You can compare your new drug with something you know to be rubbish – an existing drug at an inadequate dose, perhaps, or a placebo sugar pill that does almost nothing. You can choose your patients very carefully, so they are more likely to get better on your treatment. You can peek at the results halfway through, and stop your trial early if they look good. But after all these methodological quirks comes one very simple insult to the integrity of the data. Sometimes, drug companies conduct lots of trials, and when they see that the results are unflattering, they simply fail to publish them.
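The last mechanism Goldacre describes, running many trials and publishing only the flattering ones, can be illustrated with a toy simulation (mine, not the book's; the drug effect and trial sizes are made-up numbers). Even a drug no better than a coin flip looks good once the unflattering trials disappear:

```python
import random

random.seed(0)

def run_trial(n_patients=100, true_effect=0.5):
    """Return the observed response rate in one simulated trial
    of a drug whose true response rate is `true_effect`."""
    responders = sum(random.random() < true_effect for _ in range(n_patients))
    return responders / n_patients

# The sponsor runs 1000 trials but publishes only the positive-looking ones.
all_trials = [run_trial() for _ in range(1000)]
published = [rate for rate in all_trials if rate > 0.5]

avg_all = sum(all_trials) / len(all_trials)
avg_published = sum(published) / len(published)
print(f"True average response rate:      {avg_all:.2f}")
print(f"Average among published trials:  {avg_published:.2f}")
```

Anyone reading only the published trials sees a drug that reliably beats chance, even though the full set of trials shows no effect at all.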
Still feeling confident about those industry-sponsored cell phone radiation tests?