AlzPED: Raising the standards for preclinical testing of Alzheimer’s drug candidates
As NIA continues to lead efforts to meet the research priorities of the National Plan to Address Alzheimer’s Disease, we emphasize rigorous, reproducible, and open science to develop effective therapies for Alzheimer’s and related dementias. A key endeavor has been the Alzheimer’s Disease Preclinical Efficacy Database (AlzPED), launched with the goal of improving the predictive power of preclinical testing in animal models. We last blogged about AlzPED two years ago, and since then, its public-private partnership has grown to include Sage Bionetworks and Cohen Veterans Bioscience. In this update, we offer a look at the trends emerging from the data collected so far and encourage those of you who haven’t used AlzPED yet to get involved.
The importance of research rigor
The nature of scientific inquiry and therapy development is to build upon existing knowledge. However, previous assessments of preclinical animal studies of potential Alzheimer’s therapeutics highlighted a lack of methodological rigor as well as inadequate reporting practices. This could be a major contributor to the preclinical-to-clinical translation gap. Improving rigor is a priority across NIH. For example, the Advisory Committee to the Director’s Working Group on Enhancing Rigor, Transparency, and Translatability in Animal Research released a report (PDF, 2.7M) emphasizing the need to improve design and reporting of preclinical studies.
NIA developed AlzPED to address these needs. Since 2020, AlzPED has grown to house curated summaries from 1,298 published and unpublished studies. These summaries include data related to 251 therapeutic targets, 1,123 therapeutic agents, 210 animal models, and more than 2,000 outcome measures for Alzheimer’s and related dementias. AlzPED’s “Rigor Report Card” checks a standardized set of design elements to rate the rigor of our curated studies.
Signs of improvement
We recently tracked trends in nine core study design elements identified as critical for rigor and reproducibility, evaluating them over five-year spans from 2000 to 2021. The results showed steady and significant improvement in the experimental design and reporting of some critical elements, such as conflict-of-interest statements and the genetic background and sex of the animal model used in the study. As a next step, it is crucial to drive similar improvement in other critical but underreported elements, such as power calculations, inclusion/exclusion criteria, and balancing a study for sex (see the graph below for more details). More detailed analyses of curated summaries can be found on the analytics page in AlzPED.
Connect to AlzPED to strengthen your studies
We see these results as encouraging evidence that wider use of AlzPED’s framework has the potential to improve the predictive validity of preclinical efficacy testing in dementia research animal models. Individual researchers can use it to survey the quality of preclinical efficacy testing studies, strengthen the study designs and reporting standards relevant to their own work, and publicize studies with negative findings, which provide critically important context for optimizing future studies. We are also eager to assist researchers in creating citable reports of studies with negative findings, which can be included in future grant applications.
Organizations funding dementia research can use AlzPED as a tool for implementing expectations for rigorous and reproducible preclinical research. NIA is seeking more AlzPED partner organizations: Reach out to us today!