His model predicted, across different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials and as many as 10 percent of platinum-standard large randomized trials.
These were challenging facts for me, as a deep believer in science and rigorous research. It's worth considering what these findings from the health field mean for the new focus on evidence-based funding in the social sector and in foundations. The author argues, based on the above, that when science becomes big business, when one's livelihood depends on finding positive effects, the scientific ideal (the search for truth) regularly falls under the wheels. Are social interventions any easier to study than biological systems, and therefore any more likely to produce accurate results under the best of research conditions? Does the new pressure for evidence-based funding create high stakes that will generate bias and inaccuracy in purportedly scientific studies of programs? The arts aren't under the microscope of evidence-based funding today, but this shift is already starting to inform the thinking of the program officers and donors on whom we rely. If evidence-based funding is going to become a foundation of philanthropy, what can we do to ensure a reliable system of study in our field?