AlzPED: A roadmap for increasing rigor and reproducibility of preclinical studies
Author’s Note: The Division of Neuroscience is grateful to Health Scientist Specialist Shreaya Chakroborty for her contributions to the development of AlzPED and for conducting the analyses presented in this blog post.
In our quest to develop effective therapies for Alzheimer’s disease (AD), we have built an array of open-science, translational infrastructure programs that provide high-quality data and critical analytical and experimental tools to researchers. One success has been the Model Organism Development and Evaluation for Late-Onset Alzheimer’s Disease (MODEL-AD) consortium. In just four years, MODEL-AD has generated and made available more than 40 new genetically modified mouse models harboring various risk factor genes for late-onset AD and shared a wealth of data from these models via the NIA-supported AD Knowledge Portal.
Despite that positive step, it remains difficult to translate overwhelmingly positive preclinical study results into similarly positive clinical outcomes. One of the chief culprits is poor rigor in the design, methodology, and evaluation of preclinical studies. Rigor, the overall quality of the experimental process, is the essence of scientific research. More rigorous research produces more trustworthy and reproducible outcomes.
To help make our rigor more rigorous (so to speak), NIA joined forces about four years ago with the NIH Library, the Alzheimer’s Drug Discovery Foundation, and the Alzheimer’s Association to create the Alzheimer’s Disease Preclinical Efficacy Database (AlzPED). AlzPED is a searchable, publicly available knowledge base that hosts more than 1,000 published studies on the preclinical testing of candidate therapeutics in animal models of AD and Alzheimer’s disease-related dementias (ADRD). It aims to illuminate the experimental design and reporting practices of preclinical efficacy testing studies for researchers, funding agencies, and the public. Since our previous blog post, AlzPED has grown significantly. It now houses data on 188 animal models, 890 therapeutic agents, 173 therapeutic targets, and more than 1,500 AD-related outcome measures.
AlzPED: A systematic approach
The AlzPED team has identified a three-step process to improve rigor, reproducibility, and translatability:
- Identify missing but critical experimental design elements.
- Adopt a standardized set of best practices and experimental design guidelines and encourage investigators to follow them.
- Provide a platform for creating citable reports of (previously unpublished) studies with negative findings.
AlzPED uses a “rigor report card” listing a standardized set of study design elements to identify critical elements missing from preclinical studies and to monitor the rigor of every study we curate (see Figure 1 for a sample report card). Our most recent analysis of 1,030 studies points to serious gaps in the reporting of critical methodology elements such as sample-size calculation, blinding, and inclusion and exclusion criteria. On a positive note, we find that most studies do report experimental design elements such as the dose and formulation of the therapeutic agent being tested and the treatment paradigm (Figure 2).
Roadmap to increased rigor
Wider use of AlzPED’s roadmap, best practices, and guidelines can improve and refine the rigor and reproducibility of preclinical research in Alzheimer’s animal models and promote effective translation of drug testing data to the clinic.
We encourage you to use AlzPED to survey the existing literature, to check where your work stands on the rigor roadmap, and to improve the design of your future preclinical efficacy testing studies. NIH is committed to enhancing reproducibility through rigor and transparency, and AlzPED is a useful tool for strengthening the Research Strategy section as you prepare your next application.
Most importantly, if you have conducted a study that has arrived at negative findings and have not been able to publish it, contact us and we will help you create a citable report so we can reduce the publication bias that favors studies with positive findings. We welcome your questions or comments below!