In 2006, Sylvain Lesné and seven coauthors published a paper on Alzheimer’s disease, “A specific amyloid-beta protein assembly in the brain impairs memory,” in Nature, the world’s most prestigious scientific journal. This was a major paper in the development of the “amyloid hypothesis,” a proposed mechanism for how Alzheimer’s disease afflicts its victims.
Good reportage. You've done a thoughtful job of unpacking the problems and latent potential for error in research studies. Now it's time to figure out how to introduce some rigor and accountability into the process.
In that respect, there's no time to lose, with AI upon us. We need to work out a proper framework for scrutinizing the peculiar vulnerabilities of "Big Data" metastudies, because they're easy to crunch with AI and potentially useful for positing the most productive directions for future research. But they also compress the subtleties of data findings and the details found in individual studies. Intriguing research directions, possible confounders, and alternate hypotheses can get buried when multiple studies are condensed and statistically aligned to produce a metastudy conclusion. At their worst, metastudies partake of the same problems as financial instruments like CDOs that fold tranches of junk and AA-grade debt together.

Good metastudies guard against that problem by focusing their questions precisely and setting standards for the methodology of the individual studies they include. Not all of them are good. Some are merely facile. It's a convenient process to plug some keywords into a database and generate a result that resembles a meaningful conclusion, but it's a little too easy. Data correlations can be misleading, especially if researchers are seeking results that tell them what they want to hear by narrowing an inquiry to suit a preconceived narrative frame. A lot of review needs to be done before a given meta-analysis can be viewed as authoritative. Some subjects are more amenable to that mode of analysis than others, which suggests there's often no substitute for a human intelligence perusing an awful lot of individual studies and comparing the information they yield.

My opinion on Big Data is that, as a rule, we don't have nearly enough of it. Just because a metastudy's interpretation and conclusion represent more data than any of its single-study components, that doesn't mean the result has encompassed Big Data. Big Data is comprehensive, multifactorial, thorough in every relevant aspect. It's a conceit to imagine that we're there yet in most fields of study.
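To make the pooling worry concrete, here is a minimal sketch with entirely invented numbers (not drawn from the article or any real dataset) of how an aggregate correlation can contradict every study it was built from when between-study differences go unmodeled:

```python
# Toy illustration: two studies that each show a clear positive dose-outcome
# relationship yield a *negative* pooled correlation when their data are
# stacked without modeling the between-cohort baseline difference
# (a Simpson's-paradox-style artifact of naive pooling).
import numpy as np

rng = np.random.default_rng(0)

# Study A: low-dose cohort with a high outcome baseline.
dose_a = rng.uniform(0, 5, 200)
outcome_a = 2.0 * dose_a + 20.0 + rng.normal(0, 1, 200)

# Study B: high-dose cohort with a low outcome baseline.
dose_b = rng.uniform(10, 15, 200)
outcome_b = 2.0 * dose_b - 20.0 + rng.normal(0, 1, 200)

print(np.corrcoef(dose_a, outcome_a)[0, 1])  # roughly +0.94 within study A
print(np.corrcoef(dose_b, outcome_b)[0, 1])  # roughly +0.94 within study B

# Naive pooling buries the within-study effect under the baseline difference.
dose = np.concatenate([dose_a, dose_b])
outcome = np.concatenate([outcome_a, outcome_b])
print(np.corrcoef(dose, outcome)[0, 1])      # roughly -0.84: the sign flips
```

A careful meta-analysis would model the study-level baselines rather than stacking raw data, which is exactly the sort of methodological standard that separates the good ones from the facile ones.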
Reticence to assert conclusions as definitive when results are equivocal is evidence of humility and scholarly discipline, and it is praiseworthy. Studies with tentative conclusions should not be viewed as empty exercises, or as having insignificant value compared to studies that assert bold conclusions. Bold conclusions and breakthrough claims need to be treated as suspect precisely because they're the sort of conclusions we all want, the ones we all yearn for.
Also, looking around the most recent pages on the Palladium site, the topic choices and article content appear to me to have improved considerably since the last time I visited.
A related problem is that the vast majority of publications are useless. The tiny fraction that is fraudulent becomes less tiny once you start filtering out uninteresting papers, which is, in essence, the job of peer review.
Paul Meehl fretted about this problem in the 1980s. 36 years later, we are still debating it.
https://www.argmin.net/p/youre-gonna-run-when-you-find-out
Well, I don't know about medical science. But there is considerable evidence that (for example) the psychology p-value problem and similar "most published research findings are false" claims were themselves dramatically overstated and, ironically, guilty of the exact same manipulations they accused others of.
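For what it's worth, the "most published research findings are false" argument boils down to a positive-predictive-value calculation, and its conclusion swings heavily with the assumed base rate of true hypotheses; the sketch below uses purely illustrative inputs to show how the same significance thresholds can look dire or benign:

```python
# Back-of-the-envelope PPV calculation behind "most findings are false."
# All inputs are illustrative assumptions, not measured quantities.
def ppv(prior, power=0.8, alpha=0.05):
    """Fraction of 'significant' results that reflect a real effect."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

print(ppv(prior=0.01))  # ~0.14: if only 1 in 100 tested hypotheses is true
print(ppv(prior=1/3))   # ~0.89: with a 1-in-3 prior, most positives are real
```

That sensitivity to assumed priors is part of why the headline claim can be overstated in either direction.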
The goal is publication in Nature, the prestige, not the science. Much of our system is set up to encourage liars, to make little lies necessary for the achievement of any end. What do we expect? Tragic.
Horrifying
Very interesting! Have you thought about how this might develop in the future, with Large Language Models meeting the replication crisis?