The Academic Culture of Fraud
Full article in Palladium magazine.
In 2006, Sylvain Lesné and seven coauthors published a paper on Alzheimer’s disease, “A specific amyloid-beta protein assembly in the brain impairs memory,” in Nature, the world’s most prestigious scientific journal. This was a major paper in the development of the “amyloid hypothesis,” a proposed mechanism for how Alzheimer’s disease afflicts its victims. About 50 million people suffer from Alzheimer’s disease, more than the entire population of California, making it the world’s most common cause of dementia. This population will grow as the world’s population ages. There is no effective treatment for Alzheimer’s disease, and its pathology is poorly understood. Any progress in understanding this disease represents a massive humanitarian victory. Encouraged by this paper and other promising studies, funding and talent poured into investigating the amyloid hypothesis. By 2022, such research had received over $1 billion in government funds.
That year, neuroscientist Matthew Schrag discovered doctored images in this and many of Lesné’s other papers, including several that purported to provide evidence for the amyloid hypothesis. These images had been manually edited and cropped together to falsely show support for the papers’ hypotheses. Notably, these frauds all made it through the formalized “peer review” processes of Nature and six other academic journals undetected, and were eventually uncovered through unrelated channels.
Schrag’s investigation that uncovered the fraudulent papers began as a tangent from his work uncovering doctored images used in studies supporting simufilam, an experimental drug for Alzheimer’s disease. The suspicion would prove vindicated in June 2024 when, following a 2021 petition to the Food and Drug Administration (a highly unusual if not unique channel for reporting research fraud), Hoau-Yan Wang, a paid adviser to simufilam’s developer, was indicted by a federal grand jury for fabricating data and images in simufilam studies for which he had obtained $16 million in National Institutes of Health (NIH) grants.
Follow-up on the evidence of Lesné’s fraud was slow. Schrag’s discovery kicked off two years of wrangling, eventually leading all of Lesné’s coauthors—but not Lesné himself—to agree to retract the 2006 Nature paper. As Science reported in 2022, “The Nature paper has been cited in about 2300 scholarly articles—more than all but four other Alzheimer’s basic research reports published since 2006, according to the Web of Science database. Since then, annual NIH support for studies labeled ‘amyloid, oligomer, and Alzheimer’s’ has risen from near zero to $287 million in 2021. Lesné and [his coauthor] Ashe helped spark that explosion, experts say.”
Scientists must now untangle the strands of fraud woven through decades of arguments stretching across a billion dollars’ worth of research. The paper’s contribution to the allocation of that billion dollars may also explain why such a widely-cited paper, presumably read by thousands of experts, some of whom must have spotted the fraud, wasn’t reported earlier. Whether the amyloid hypothesis survives or not, this fraud has likely delayed the arrival of life-saving medication for tens of millions of people, perhaps by many years. If so, it is a humanitarian disaster larger than most wars.
Continue reading my full article in Palladium magazine.


Good reportage. You've done a thoughtful job of unpacking the problems and latent potential for error in research studies. Now it's time to figure out how to introduce some rigor and accountability into the process.
In that respect, there's no time to lose, with AI upon us. We need to work out a proper framework to scrutinize the peculiar vulnerabilities of "Big Data" metastudies--because they're easy to crunch with AI and potentially useful in positing the most productive directions for future research. But they also compress the subtleties of data findings and the details found in individual studies. Intriguing research directions, possible confounders, and alternate hypotheses can get buried when multiple studies are condensed and statistically aligned to produce a metastudy conclusion. At their worst, metastudies partake of the same problems as financial instruments like CDOs that fold tranches of junk and AA-grade debt together.

Good metastudies guard against that problem by focusing their questions precisely and incorporating standards for individual study methodology. Not all of them are good; some are merely facile. It's convenient to plug some keywords into a database and generate a result that resembles a meaningful conclusion, but it's a little too easy. Data correlations can be misleading, especially when researchers seeking results that tell them what they want to hear narrow an inquiry to suit a preconceived narrative frame. A lot of review needs to be done before a given meta-analysis can be viewed as authoritative. And some subjects are more amenable to that mode of analysis than others, which means there's often no substitute for a human intelligence perusing an awful lot of individual studies and comparing the information they yield.

Just because a metastudy's interpretation and conclusion represent more data than what's found in the single-study components, it doesn't follow that the result has encompassed Big Data. Big Data is comprehensive, multifactorial, thorough in every relevant aspect. My opinion is that as a rule we don't have nearly enough of it, and it's a conceit to imagine that we're there yet in most fields of study.
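To make the point about misleading correlations concrete, here's a minimal sketch of how narrowing an inquiry across many candidate variables manufactures "significant" findings from pure noise. The setup is hypothetical; the sample sizes and variable names are mine, not anything from the article:

```python
# Sketch: spurious "significant" correlations from pure noise.
# Hypothetical illustration of the multiple-comparisons problem;
# all numbers here are made up for demonstration.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(seed=0)
n_subjects = 100    # observations per variable
n_predictors = 200  # candidate variables to "narrow the inquiry" over

outcome = rng.normal(size=n_subjects)  # the outcome is pure noise

false_positives = 0
for _ in range(n_predictors):
    predictor = rng.normal(size=n_subjects)  # each predictor is also noise
    r, p = pearsonr(predictor, outcome)
    if p < 0.05:
        false_positives += 1

# At alpha = 0.05 with 200 independent tests, roughly 10 "hits"
# are expected even though no real relationship exists anywhere.
print(f"{false_positives} of {n_predictors} noise predictors look significant")
```

Run it and roughly ten of the two hundred noise variables will clear the conventional significance bar. A facile analysis that reports only the hits, without accounting for how many comparisons were made, would read as evidence.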
The reticence to assert conclusions as definitive when results are equivocal is evidence of humility and scholarly discipline, and it is praiseworthy. The findings of studies with tentative conclusions should not be viewed as empty exercises, or as possessing insignificant value compared to studies that assert bold conclusions. Bold conclusions and breakthrough claims need to be treated as suspect precisely because they're the sort of conclusions that we all want--the ones that we all yearn for.
Also, looking around the most recent pages on the Palladium site, the topic choices and article content appear to me to have improved considerably since the last time I visited.
A related problem is that the vast majority of publications are useless. The tiny fraction which are fraudulent becomes less tiny once you start filtering out uninteresting papers, which is, in essence, the job of peer review.
Paul Meehl fretted about this problem in the 1980s. Thirty-six years later, we are still debating it:
https://www.argmin.net/p/youre-gonna-run-when-you-find-out