Executive Summary
The replication crisis—the widespread failure to reproduce published scientific findings—remains a critical challenge for science in 2026, more than a decade after it first gained widespread attention. Recent evidence demonstrates that this is not a historical problem that has been solved, but an ongoing threat to scientific credibility that continues to evolve and expand. While some disciplines, particularly psychology, have shown measurable improvements in research practices, fundamental structural problems persist across the scientific enterprise.
Large-scale replication studies conducted in 2025 and 2026 continue to reveal troubling success rates, with only approximately 50% of originally significant findings successfully replicating across disciplines. Beyond the original concerns about questionable research practices, a new dimension has emerged: systematic scientific fraud is now growing faster than legitimate research, with fraudulent publications doubling every 1.5 years compared to 15 years for legitimate science. This fraud epidemic, facilitated by sophisticated networks of paper mills and predatory journals, threatens to overwhelm the scientific literature, particularly in fields like cancer research where unreliable findings can directly harm patients.
The crisis matters in 2026 because its consequences extend far beyond academic debates. Failed replications waste billions in research funding, erode public trust in science at a time when evidence-based policy is critically needed, and in biomedical research, can lead directly to patient harm. While major institutions including the National Institutes of Health and the UK government have launched significant new initiatives to address reproducibility, and the growing field of metascience offers promising tools for reform, the fundamental incentive structures that reward publication quantity over quality remain largely unchanged. The path forward requires not just methodological reforms but a cultural transformation in how science is conducted, evaluated, and rewarded.
Background & Context
The replication crisis refers to the widespread failure to reproduce published scientific results, a phenomenon that undermines the credibility of established theories and challenges substantial portions of what we thought we knew scientifically. While concerns about reproducibility have existed throughout the history of science, the current crisis gained prominence in the early 2010s when large-scale systematic replication efforts began revealing the extent of the problem.
The crisis emerged most visibly in psychology, where a landmark 2015 study by the Open Science Collaboration attempted to replicate 100 studies published in three top psychology journals. The results were shocking: while 97% of the original studies reported statistically significant findings, only 36% of the high-powered replication attempts achieved significance [Open Science Collaboration, 2015]. This study crystallized concerns that had been building for years about questionable research practices, publication bias, and the pressure to produce novel, positive findings.
The term "replication crisis" itself encompasses several related problems. At its core is the simple failure of replication—when researchers following the same methods as a published study fail to obtain similar results. But the crisis also includes the "file drawer problem," where studies with null or negative results go unpublished, creating a distorted literature that overrepresents positive findings. It encompasses questionable research practices such as p-hacking (manipulating data analysis until statistical significance is achieved), HARKing (Hypothesizing After Results are Known), and selective reporting of outcomes. In recent years, the crisis has expanded to include outright fraud, with organized networks producing fabricated research at an alarming scale.
What began as a problem primarily associated with psychology has spread across the scientific landscape. Economics, biomedical research, cancer biology, earth sciences, and even physics have all documented significant replication failures. A 2024 survey found that 72% of biomedical researchers agreed there was a reproducibility crisis, with pressure to publish identified as the leading cause by 62% of participants [Biomedical Research Survey, 2024]. The crisis has thus evolved from a discipline-specific concern to a fundamental challenge facing the entire scientific enterprise.
The persistence of the crisis into 2026 reflects both the depth of the underlying problems and the difficulty of implementing effective solutions. While the scientific community has responded with various reforms, including preregistration of studies, open data sharing, registered reports, and increased emphasis on replication, these measures have achieved only partial success. As one researcher noted in February 2026, "the crisis is far from over" because "central empirical problems remain: unusually high rates of statistically significant results in journals, implausible success rates given typical power, and repeated failures to reproduce headline findings" [Replication Index, 2026].
Key Findings
Recent data from 2025 and 2026 provide a comprehensive picture of where the replication crisis stands today, revealing both areas of progress and persistent challenges.
Current Replication Rates Remain Concerning
A 2026 study by Tyner et al. reported that only about 50% of originally significant claims were successfully replicated across scientific disciplines [Tyner et al., 2026]. This figure represents a slight improvement over earlier estimates but remains far below what would be expected if published research reliably reflected true effects. The variation across disciplines is substantial, with some fields showing better reproducibility than others.
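One way to read a 50% replication rate is through a simple mixture model. The sketch below is a back-of-the-envelope calculation, not an analysis from Tyner et al.; the assumed 90% replication power and 5% false-positive rate are illustrative. Under those assumptions, the observed rate implies that only about half of published significant findings reflect true effects:

```python
# Assumes replications of true effects succeed with probability `power`
# and replications of false positives succeed with probability `alpha`.
def implied_true_fraction(observed_rate, power=0.9, alpha=0.05):
    # observed_rate = r * power + (1 - r) * alpha  =>  solve for r
    return (observed_rate - alpha) / (power - alpha)

print(f"{implied_true_fraction(0.50):.2f}")
# ≈ 0.53: only about half of published significant findings would be
# true effects under these illustrative assumptions.
```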
In cancer biology, a 2021 effort to replicate 53 different cancer research studies achieved a success rate of just 46% [Cancer Biology Replication Study, 2021]. Given that cancer research directly informs treatment decisions affecting millions of patients, this low replication rate has serious implications for clinical practice and patient outcomes.
Economics has shown somewhat better performance: a 2016 study that attempted to replicate 18 experimental studies from leading economics journals found that approximately 61% replicated successfully [Economics Replication Study, 2016]. However, even in the successful replications, effect sizes averaged only 66% of the originally reported sizes, suggesting that initial studies systematically overestimate effects. Additionally, about 20% of studies published in The American Economic Review are contradicted by other studies using similar data, indicating ongoing problems with robustness.
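The shrinkage of effect sizes has a well-understood statistical mechanism, often called the winner's curse: when only significant results are published, the published estimates are biased upward. A minimal simulation, assuming an illustrative true effect of d = 0.3, fifty subjects per group, and a normal approximation to the sampling distribution of Cohen's d, reproduces shrinkage of roughly the magnitude observed in economics:

```python
import numpy as np

# Illustrative assumptions: true effect d = 0.3, n = 50 per group,
# and a normal approximation to the sampling distribution of d.
rng = np.random.default_rng(1)
true_d, n, sims = 0.3, 50, 100_000
se = np.sqrt(2 / n)  # approximate standard error of Cohen's d

observed = rng.normal(true_d, se, size=sims)        # estimates across many studies
published = observed[np.abs(observed / se) > 1.96]  # only significant ones survive

print(f"true effect:            {true_d}")
print(f"mean published effect:  {published.mean():.2f}")
print(f"replication / published: {true_d / published.mean():.0%}")
# The published (significant) estimates average well above 0.3, so an
# exact replication recovers only a fraction of the published effect.
```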
Psychology Shows Mixed Progress
Psychology, where the crisis first gained widespread attention, has shown some measurable improvements. A comprehensive 2025 study examining p-values across 240,355 empirical psychology articles published from 2004 to 2024 found that "over this period and across every subdiscipline, the typical study has begun reporting markedly stronger p values" [Psychology P-Value Study, 2025]. This suggests that psychological research is becoming more rigorous, with larger effect sizes or better-powered studies.
However, this progress is uneven and incomplete. The same study found that researchers at the highest-ranked universities tend to publish articles with the weakest p-values, suggesting that career incentives at elite institutions still don't fully align with robust research practices. Furthermore, research based on certain theoretical frameworks appears particularly vulnerable. A 2026 study found that research predicated on the "empty-self metaphor"—including situationism and social priming—is "far less likely to replicate" than other psychological research [Social Psychology Replication Study, 2026].
The Fraud Epidemic Accelerates
Perhaps the most alarming development is the acceleration of outright scientific fraud. A 2025 Northwestern University study found that "the publication of fraudulent science is outpacing the growth rate of legitimate scientific publications" and discovered "broad networks of organized scientific fraudsters" [Northwestern Fraud Study, 2025]. According to research by Reese Richardson, the number of fraudulent scientific papers appears to be doubling every 1.5 years, compared to legitimate papers doubling every 15 years [Richardson, 2025].
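The implication of these doubling times is worth making explicit. In the hedged sketch below, only the doubling times come from Richardson; the 1:1000 starting ratio of fraudulent to legitimate papers is a hypothetical assumption for illustration. Exponential growth at these rates reaches parity within a couple of decades:

```python
import math

FRAUD_T2, LEGIT_T2 = 1.5, 15.0  # doubling times in years [Richardson, 2025]

def years_until_parity(start_ratio):
    """Years until fraudulent output equals legitimate output,
    given a hypothetical fraud:legitimate start_ratio."""
    # Solve start_ratio * 2**(t/1.5) = 2**(t/15) for t
    return math.log2(1 / start_ratio) / (1 / FRAUD_T2 - 1 / LEGIT_T2)

print(f"{years_until_parity(1 / 1000):.0f} years")  # ≈ 17 years to parity
```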
This fraud is not random or isolated. A 2025 PNAS study revealed sophisticated global networks including paper mills (sellers of mass-produced fabricated research), brokers (conduits between producers and publishers), and predatory journals that facilitate systematic scientific fraud [PNAS Paper Mills Study, 2025]. These networks operate at industrial scale, producing thousands of fabricated papers that infiltrate the legitimate scientific literature.
Cancer research appears particularly vulnerable to this fraud epidemic. Richardson stated that "Cancer is probably the most vulnerable field for fraudulent research" and that "a huge fraction of the cancer literature is probably completely unreliable" [Richardson, 2025]. The combination of high stakes (cancer affects millions), complex methodologies that are difficult to verify, and pressure to publish creates an environment where fraud can flourish.
The Crisis Extends Beyond Original Disciplines
Evidence from 2025 and 2026 confirms that the replication crisis has spread far beyond its origins in psychology. In January 2026, physicist Sergey Frolov reported that physics "faced the same issues as others" regarding replication. His team found alternative explanations for seemingly extraordinary results in four experiments they attempted to replicate [Frolov, 2026]. This finding is particularly significant because physics has traditionally been viewed as a "harder" science with more rigorous standards.
Earth sciences have also documented replication problems. A 2024 article identified insufficient analytic transparency as a common problem preventing replication in earth sciences, noting that "the replication crisis has spread throughout all sciences" [Earth Sciences Reproducibility, 2024]. Even mathematics, long considered immune to such problems, has been affected. A 2025 report documented fraudulent practices in mathematical publishing for the first time, with Clarivate excluding mathematics from its 2023 Highly Cited Researchers list due to metric gaming [Mathematical Fraud Report, 2025].
Multiple Perspectives
The replication crisis has generated diverse interpretations and responses from different stakeholders in the scientific community, each offering valuable insights into both the nature of the problem and potential solutions.
The Optimistic View: A Credibility Revolution
Some researchers frame the crisis not as a catastrophe but as a necessary correction that is ultimately strengthening science. A 2023 Nature Communications Psychology article reframed the crisis as a "credibility revolution," highlighting positive structural, procedural, and community-driven changes including data sharing, preregistration, and registered reports [Credibility Revolution, 2023]. From this perspective, the crisis represents science working as it should—self-correcting when problems are identified.
Proponents of this view point to concrete improvements: psychology journals increasingly require data sharing, preregistration has become common in many fields, and registered reports (where methods are peer-reviewed before data collection) are gaining acceptance. The growth of metascience—using empirical research methods to examine research practice itself—provides tools for understanding and addressing reproducibility problems. Tim Errington of the Center for Open Science noted that metascience has been "exploding," with the Metascience Alliance launched in 2025 bringing together researchers dedicated to improving scientific practices [Errington, 2025].
The Skeptical View: Reforms Are Insufficient
Other researchers argue that while reforms are welcome, they fail to address the fundamental structural problems driving the crisis. A 2024 review concluded that while measures like preregistration and data sharing have advanced transparency, "they often fall short in mitigating QRPs [questionable research practices] due to persistent incentive misalignments" [QRP Review, 2024].
From this perspective, the core problem is that scientists are rewarded for publishing novel, positive findings in high-impact journals, not for conducting rigorous, reproducible research. As one analysis noted, success rates over 90% in psychology journals have been documented repeatedly since 1959, indicating that publication bias has remained essentially unchanged despite decades of awareness: "This unscientific incentive structure is the root cause of the replication crisis" [Replication Index, 2026].
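The logic behind this skeptical argument can be stated compactly. In an honestly reported literature, the share of significant results is bounded by the base rate of true hypotheses and the average statistical power; the figures below (80% true hypotheses, 60% power) are deliberately generous assumptions, not measurements:

```python
# Expected share of significant results in a literature without selection:
#   P(sig) = P(true) * power + (1 - P(true)) * alpha
def expected_significant_share(p_true, power, alpha=0.05):
    return p_true * power + (1 - p_true) * alpha

# Even granting generous assumptions, only about half of honestly
# reported results should reach significance:
print(f"{expected_significant_share(p_true=0.8, power=0.6):.2f}")  # 0.49
# Observed rates above 90% therefore imply heavy selection for significance.
```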
Skeptics point out that preregistration, while helpful, has significant limitations. Only 16% of published preregistered studies fully adhered to their plans or disclosed all deviations [Preregistration Adherence Study]. Many preregistered protocols still leave room for p-hacking, and researchers rarely follow the exact methods they preregister. Without changing the underlying incentives, methodological reforms may simply shift questionable practices to less visible stages of the research process.
The Fraud-Focused View: A Criminal Enterprise
The discovery of organized fraud networks has led some researchers to emphasize that the crisis is not just about honest mistakes or questionable practices, but includes deliberate criminal activity. The Northwestern study on fraud networks called for "enhanced scrutiny of editorial processes, improved methods for detecting fabricated research, a greater understanding of the networks facilitating this misconduct and a radical restructuring of the system of incentives in science" [Northwestern Fraud Study, 2025].
This perspective emphasizes that paper mills and fraud brokers operate as profit-seeking businesses, exploiting the publish-or-perish culture to sell fabricated research to desperate academics. The problem requires not just scientific reforms but potentially legal action and international cooperation to dismantle these networks. The scale of the fraud—doubling every 1.5 years—suggests that without aggressive intervention, fraudulent papers could eventually outnumber legitimate research in some fields.
The Disciplinary Variation View: Context Matters
Some researchers argue that the crisis manifests differently across disciplines and that solutions must be tailored accordingly. Brian Nosek noted that we should expect high levels of reproducibility for findings translated into government policy, though we could tolerate lower reproducibility for exploratory research [Nosek, cited in C&EN, 2025]. This suggests that reproducibility standards should vary based on how research is used.
Physics, for example, faces different challenges than psychology. Sergey Frolov noted that in physics, replication attempts can damage careers when they challenge established results, creating disincentives for conducting replications [Frolov, 2026]. In biomedical research, the stakes are different because failed replications can lead directly to patient harm, making reproducibility particularly critical in clinical contexts [Biomedical Research Survey, 2024].
The Systemic Change View: Culture Over Methods
A growing perspective emphasizes that the crisis requires cultural transformation, not just methodological fixes. A 2026 Annual Review of Medicine article emphasized that "reproducibility concerns in biomedical research have persisted for more than a decade" and called for "balanced scientific reforms that strengthen reproducibility without stifling innovation" [Annual Review of Medicine, 2026].
This view recognizes the tension between reproducibility and innovation. Overly rigid requirements for replication could slow scientific progress and discourage exploratory research. The challenge is creating a culture that values both discovery and verification, rewards transparency and null results alongside novel findings, and recognizes that different types of research require different standards of evidence.
Analysis & Implications
The persistence of the replication crisis into 2026 reveals fundamental tensions in how modern science operates and raises critical questions about the reliability of scientific knowledge.
The Incentive Structure Problem
At the heart of the crisis lies a misalignment between what science needs (reliable, reproducible findings) and what scientists are rewarded for (novel, positive results published in high-impact journals). Academic careers depend on publication metrics, with hiring, promotion, and funding decisions heavily weighted toward publication count and journal impact factors. This creates powerful incentives to produce publishable results by any means necessary.
The finding that researchers at the highest-ranked universities publish articles with the weakest p-values is particularly revealing [Psychology P-Value Study, 2025]. It suggests that elite institutions, which should be setting the standard for rigorous research, may actually be environments where career pressures are most intense and quality standards most compromised. When the most prestigious positions reward quantity and novelty over reproducibility, the entire system is skewed.
This incentive problem explains why methodological reforms have had limited success. Preregistration, open data, and registered reports are valuable tools, but they operate within an unchanged reward structure. Researchers still need publications to advance their careers, journals still prefer positive findings, and funders still favor novel discoveries. Until these fundamental incentives change, reforms will remain partial solutions.
The Fraud Dimension Changes Everything
The emergence of organized fraud networks represents a qualitative shift in the nature of the crisis. While questionable research practices and publication bias are problems of scientific culture and methodology, systematic fraud is criminal activity. The finding that fraudulent publications are doubling every 1.5 years suggests that without intervention, fraud could eventually overwhelm legitimate science in some fields [Richardson, 2025].
The implications are profound. If a substantial fraction of the published literature is fabricated, then systematic reviews and meta-analyses—which synthesize multiple studies to identify robust findings—become unreliable. Researchers building on fraudulent work waste time and resources. In clinical fields, treatment decisions based on fabricated data can harm patients. The entire edifice of cumulative scientific knowledge becomes suspect.
Addressing fraud requires different tools than addressing questionable research practices. It demands better detection methods, potentially including forensic analysis of data and images. It requires publishers to invest in screening submissions, which conflicts with the profit motives of commercial publishers. It may require legal action against fraud networks and international cooperation to shut down paper mills. Most fundamentally, it requires acknowledging that science has a crime problem, not just a methodology problem.
The Discipline-Specific Nature of Solutions
The spread of the crisis across disciplines reveals that while the underlying causes may be similar, the manifestations and solutions vary by field. In physics, the problem includes reluctance to challenge established results and career risks for those who attempt replications [Frolov, 2026]. In psychology, the issue involves theoretical frameworks that generate non-replicable findings [Social Psychology Replication Study, 2026]. In cancer research, the combination of high stakes and complex methodologies creates vulnerability to fraud [Richardson, 2025].
This variation suggests that one-size-fits-all solutions are unlikely to succeed. Each discipline needs to examine its specific practices, incentives, and vulnerabilities. Physics might need to create career paths that reward replication work. Psychology might need to reconsider theories that consistently fail to replicate. Cancer research might need enhanced fraud detection given its particular vulnerability.
The Public Trust Dimension
The replication crisis has "compromised the public's trust in science" and undermined "the role of science and scientists as reliable sources to inform evidence-based policy and practice" [Credibility Revolution, 2023]. This erosion of trust comes at a particularly dangerous time, when evidence-based policy is needed to address challenges from climate change to public health.
The implications extend beyond individual studies. When the public learns that much published research doesn't replicate, it becomes easier to dismiss any scientific finding as potentially unreliable. This creates space for motivated reasoning, where people accept or reject scientific evidence based on whether it aligns with their preferences. The crisis thus threatens not just the internal workings of science but its social authority and ability to inform public decisions.
Rebuilding trust requires not just fixing the problems but demonstrating that they've been fixed. This is why transparency is so critical—open data, preregistration, and registered reports make the research process visible and verifiable. But transparency alone isn't enough if the underlying research remains unreliable. Trust requires both openness and actual reproducibility.
The Resource Allocation Question
Failed replications represent massive waste of research resources. When researchers build on false findings, they waste time and funding pursuing dead ends. When systematic reviews include non-replicable studies, they produce misleading conclusions. When funding agencies support work based on unreliable evidence, they misallocate scarce research dollars.
The NIH's commitment to devote resources to replication studies represents recognition of this problem [NIH Replication Initiative, 2026]. The Paragon Health Institute's recommendation that NIH devote at least 0.1% of its annual budget (about $48 million) to fund replication studies acknowledges that verification is a legitimate research activity deserving support [Paragon Health Institute, cited in C&EN, 2025].
However, this raises difficult questions about resource allocation. Should funding agencies devote more resources to replicating existing findings or to generating new knowledge? How much replication is enough? Should high-stakes findings (those informing policy or clinical practice) be held to higher replication standards than exploratory research? These questions have no easy answers, but the crisis forces us to confront them.
The Innovation-Reproducibility Tension
There is a genuine tension between reproducibility and innovation. Requiring extensive replication before publication would slow the pace of discovery. Demanding perfect reproducibility would discourage exploratory research and risk-taking. Science needs both reliable knowledge and new discoveries.
The challenge is creating a system that appropriately balances these goals. As the Annual Review of Medicine article noted, reforms must "strengthen reproducibility without stifling innovation" [Annual Review of Medicine, 2026]. This might mean different standards for different types of research—higher reproducibility requirements for findings that will inform policy or practice, more tolerance for variability in exploratory work.
It might also mean changing how we think about scientific progress. Rather than viewing each published study as a definitive finding, we might treat initial studies as preliminary evidence requiring verification. This would shift the culture from "publish and move on" to "publish and verify," making replication a normal part of the research cycle rather than an implicit criticism of the original work.
Open Questions
Despite more than a decade of attention to the replication crisis, fundamental questions remain unresolved as we move through 2026.
How Much Replication Is Enough?
There is no consensus on what proportion of studies should successfully replicate or what replication rate would indicate a healthy scientific literature. The current rate of approximately 50% across disciplines [Tyner et al., 2026] is clearly too low, but what should the target be? Should we expect 80% replication? 90%? 100%?
The answer likely varies by context. For findings that inform clinical practice or public policy, we should demand very high replication rates. For exploratory research in new areas, we might accept lower rates while requiring more verification before findings are treated as established. But we lack clear frameworks for making these distinctions and setting appropriate standards.
Can Fraud Be Controlled?
The exponential growth of fraudulent publications—doubling every 1.5 years [Richardson, 2025]—raises the question of whether fraud can be brought under control or whether it will continue to accelerate until it overwhelms legitimate science. Current detection methods are labor-intensive and catch only a fraction of fraudulent papers. Paper mills and fraud networks adapt to detection methods, creating an arms race between fraudsters and those trying to stop them.
It's unclear whether technological solutions (better fraud detection algorithms, blockchain verification of data) can stay ahead of increasingly sophisticated fraud networks. It's also unclear whether the scientific community has the will to invest the resources needed to combat fraud effectively, given that doing so diverts resources from research itself.
Will Incentive Structures Actually Change?
Despite widespread recognition that misaligned incentives drive the crisis, there's little evidence that the fundamental reward structures in science are changing. Universities still make hiring and promotion decisions based largely on publication metrics. Journals still prefer positive findings. Funders still favor novel discoveries.
Some initiatives attempt to change incentives—for example, some funders now consider registered reports and some journals have adopted policies to reduce publication bias. But these remain marginal changes to a system that fundamentally rewards quantity and novelty over quality and reproducibility. Whether more substantial changes are possible, or whether the current system is too entrenched to reform, remains an open question.
What Is the True Extent of the Problem?
We still don't know how much of the published scientific literature is unreliable. Replication studies sample only a tiny fraction of published work, and they may not be representative. Fields that haven't conducted large-scale replication efforts may have problems we haven't yet discovered. The fraud epidemic suggests the problem may be larger than even pessimistic estimates suggested.
Without knowing the true extent of the problem, it's difficult to calibrate solutions. If 50% of published findings don't replicate, that requires different interventions than if the true figure is 20% or 80%. The lack of systematic data across all fields leaves us uncertain about the scope of the challenge.
How Do We Balance Transparency and Privacy?
Open data and transparency are key reforms, but they raise questions about privacy, particularly in research involving human subjects. How do we make data open enough to verify findings while protecting participant privacy? How do we handle sensitive data that can't be fully shared? These questions become more complex as research increasingly uses large datasets and machine learning methods that may be difficult to fully document and share.
What Role Should Replication Play in Scientific Training?
Current scientific training emphasizes generating new findings, not verifying existing ones. Should graduate programs require students to conduct replications? Should replication be valued in dissertations and theses? How do we train the next generation of scientists to value reproducibility when current career incentives don't reward it?
Can We Distinguish Honest Error from Questionable Practices from Fraud?
Not all replication failures indicate misconduct. Some result from honest errors, some from questionable but not fraudulent practices, and some from deliberate fraud. But distinguishing among these categories is often difficult. Failed replications can damage reputations even when the original researchers acted in good faith. How do we create accountability for research quality while protecting researchers from unfair accusations?
What Is the Path to Rebuilding Public Trust?
Even if the scientific community successfully addresses the replication crisis, rebuilding public trust may take years or decades. What strategies can restore confidence in scientific findings? How do we communicate about the crisis and its solutions without further undermining trust? How do we help the public understand that science is self-correcting while also acknowledging that this correction process reveals substantial problems?
Will the Metascience Movement Deliver Solutions?
The growth of metascience—research on research itself—offers hope for evidence-based solutions to the crisis. But it's unclear whether metascience findings will translate into actual changes in research practice. The field is still young, and many of its recommendations require systemic changes that may be difficult to implement. Whether metascience can move from documenting problems to implementing solutions remains to be seen.
References
Annual Review of Medicine. (2026). Reproducibility concerns in biomedical research. Annual Review of Medicine. https://www.annualreviews.org/content/journals/10.1146/annurev-med-050124-050859
Biomedical Research Survey. (2024). Survey of biomedical researchers on reproducibility crisis. PMC. https://pmc.ncbi.nlm.nih.gov/articles/PMC11537370/
Cancer Biology Replication Study. (2021). Replication of cancer research studies. Chemical & Engineering News. https://cen.acs.org/research-integrity/reproducibility/Amid-White-House-claims-research/103/web/2025/06
Credibility Revolution. (2023). Reframing the replication crisis as a credibility revolution. Nature Communications Psychology. https://www.nature.com/articles/s44271-023-00003-2
Earth Sciences Reproducibility. (2024). Replication crisis in earth sciences. Science Direct. https://www.sciencedirect.com/science/article/pii/S1674987124000458
Economics Replication Study. (2016). Replication of experimental economics studies. Wikipedia. https://en.wikipedia.org/wiki/Replication_crisis
Errington, T. (2025). Growth of metascience. Nature. https://www.nature.com/articles/d41586-025-02065-0
Frolov, S. (2026). Replication crisis in physics. Pitt Wire. https://www.pittwire.pitt.edu/features-articles/2026/01/29/sergey-frolov-replication-crisis-research
Mathematical Fraud Report. (2025). Fraudulent practices in mathematical publishing. Wikipedia. https://en.wikipedia.org/wiki/Scientific_misconduct
NIH Replication Initiative. (2026). NIH launches replication and reproducibility initiative. National Institutes of Health. https://www.nih.gov/replicationandreproducibility
Northwestern Fraud Study. (2025). Networks of organized scientific fraudsters. Phys.org. https://phys.org/news/2025-08-scientific-fraud-alarming-uncovers.html
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science. https://www.nature.com/articles/s44159-025-00529-8
PNAS Paper Mills Study. (2025). Global networks of scientific fraud. PNAS. https://www.pnas.org/doi/10.1073/pnas.2420092122
Preregistration Adherence Study. (n.d.). Adherence to preregistration in published studies. Wikipedia. https://en.wikipedia.org/wiki/Preregistration_(science)
Psychology P-Value Study. (2025). Trends in p-values across psychology articles 2004-2024. SAGE Journals. https://journals.sagepub.com/doi/10.1177/25152459251323480
QRP Review. (2024). Questionable research practices and reform limitations. Science Direct. https://www.sciencedirect.com/science/article/abs/pii/S2214804324001642
Replication Index. (2026). The replication crisis is far from over. Replication Index. https://replicationindex.com/category/replication-crisis/
Richardson, R. (2025). Fraudulent scientific papers in cancer research. ASCO Post. https://ascopost.com/issues/november-25-2025/how-the-proliferation-of-fraudulent-scientific-papers-is-threatening-the-integrity-of-cancer-research/
Social Psychology Replication Study. (2026). Replication failures in social priming research. SAGE Journals. https://journals.sagepub.com/doi/abs/10.1177/17456916251401849
Tyner et al. (2026). Replication success rates across disciplines. Replication Index. https://replicationindex.com/category/replication-crisis/
UK Metascience Unit. (2025). A year in metascience. UK Government. https://www.gov.uk/government/publications/a-year-in-metascience-2025
Wikipedia. Replication crisis overview. Wikipedia. https://en.wikipedia.org/wiki/Replication_crisis