Methodology and Process

One aspect of this whole complicated issue that makes me feel the need to do this work is that I have become aware of a slowly building movement, within the scientific and medical communities, of criticism of various types of methodology and process. Some of the issues are indeed with statistics, but there are also issues with how journals handle retractions and disputations (e.g. journals often refuse to publish disputations because they retread the same work, even if the findings are completely different – this makes it challenging to actually dispute what seems like a wrong conclusion).

Another good example, which both Retraction Watch and Bad Science (links to both below) discuss, is that when a journal article is retracted there is no central body whose job it is to follow the citations back to the authors who cited the retracted article and advise them that the science or findings may be questionable. Without that step in the process, we end up with seminal articles that persist for years or decades, are in turn cited as valid, and yet may depend on a retracted paper for their primary findings. The break in the chain is often simply ignored, and scientists go about their business, frequently unaware of the retraction.

These articles/links talk about the methodology and process of doing science (and of government regulation and other processes we are meant to trust to protect us).


  • Goldacre, B. (2011, January 14). “None of your damn business”. Bad Science. Retrieved September 9, 2011, from
    (Talks about how retractions of papers with problems don’t always make it back to the folks who cite those papers in other works)
  • Goldacre, B. (2011, April 23). I foresee that nobody will do anything about this problem. Bad Science. Retrieved September 9, 2011, from
    (Talks about problems with journals not publishing duplicate work even if the finding is opposite – makes it hard to refute bad science)
  • Goldacre, B. (2011, September 9). The statistical error that just keeps on coming. The Guardian. Retrieved September 10, 2011, from
    (Talks about a study that reveals an extremely widespread statistical reporting error in academic/journal papers.)
  • Goldberg, N. H., Schneeweiss, S., Kowal, M. K., & Gagne, J. J. (2011). Availability of comparative efficacy data at the time of drug approval in the United States. The Journal of the American Medical Association, 305(17), 1786-1789. Retrieved from
    (Talks about how the drug approval process doesn’t require a new drug to work better than drugs already on the market, just that it works – even if it works worse.)
  • Marcus, A., & Oransky, I. (2011, August 3). Why write a blog about retractions? Retraction Watch. Retrieved September 9, 2011, from
    (Talks about recent high-profile retractions and general related logistics)
  • Nieuwenhuis, S., Forstmann, B. U., & Wagenmakers, E.-J. (2011). Erroneous analyses of interactions in neuroscience: a problem of significance? Nature Neuroscience, 14, 1105-1107. Retrieved from
    (Reveals an extremely widespread statistical reporting error in academic/journal papers.)
  • Smith, R. (2005). Investigating the previous studies of a fraudulent author. British Medical Journal, 331(7511), 288. doi:10.1136/bmj.331.7511.288
    (Discussion of problems in the process of policing academic fraud and of propagating news about retractions of fraudulent publications)

Other Links: