Automated AI Framework Paves Way for Earlier Detection of Pancreatic Ductal Adenocarcinoma
An automated AI framework, called Radiomics-based Early Detection Model (REDMOD), identified pancreatic ductal adenocarcinoma with significantly higher sensitivity than radiologists. Findings published in Gut showed that REDMOD detected disease up to 3 years before clinical diagnosis, at a stage when tumors were not yet fully visible on standard imaging.
“REDMOD is an automated, mechanistically grounded, longitudinally stable, externally validated AI that surpasses radiologists for pancreatic ductal adenocarcinoma detection at its visually occult pre-diagnostic stage. These attributes position it for prospective validation in high-risk cohorts, a necessary step towards shifting the paradigm from late-stage symptomatic diagnosis to proactive preclinical interception,” the study authors, including senior author Ajit Harishkumar Goenka, MD, Department of Radiology, Mayo Clinic in Rochester, Minnesota, wrote in their published report.
The study authors suggested that once validated further, the REDMOD framework could serve as a non-invasive triage and longitudinal monitoring tool prior to confirmatory imaging for earlier pancreatic cancer detection.
Model Methods
The REDMOD framework was trained and validated on a large, multi-institutional data set reflecting real-world clinical conditions and the low prevalence of early pancreatic ductal adenocarcinoma. The study included 1,462 CT scans: 219 prediagnostic scans from patients later diagnosed with pancreatic ductal adenocarcinoma (median lead time to diagnosis, 427 days; range, 90–1092 days) and 1,243 control scans from individuals without cancer, all confirmed with at least 3 years of follow-up. These were divided into a training cohort (n = 969) and an independent test cohort (n = 493).
In the independent test set, the prediagnostic CT scans were analyzed to characterize the temporal detection window; because scans were gathered from multiple institutions and CT vendors, the test set also allowed assessment of external generalizability.
The fully automated pipeline integrates deep learning–based volumetric pancreas segmentation (based on the 3D nnU-Net architecture) with radiomic feature extraction, initially generating 968 quantitative imaging features per scan. These were reduced to 40 key features using minimum redundancy maximum relevance selection and incorporated into a heterogeneous ensemble model combining logistic regression, random forest, and extreme gradient boosting algorithms.
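The feature-reduction and ensemble steps described above can be illustrated with a brief sketch. This is not the authors' implementation: the simplified greedy mRMR scoring, the synthetic data, and all parameter choices below are assumptions for illustration, and scikit-learn's GradientBoostingClassifier stands in for extreme gradient boosting (XGBoost).

```python
# Illustrative sketch only: minimum redundancy maximum relevance (mRMR)-style
# feature selection followed by a heterogeneous soft-voting ensemble, loosely
# modeled on the REDMOD description. All names and parameters are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def mrmr_select(X, y, k):
    """Greedy mRMR (simplified): maximize relevance to the label while
    penalizing correlation with already-selected features."""
    relevance = mutual_info_classif(X, y, random_state=0)  # relevance term
    corr = np.abs(np.corrcoef(X, rowvar=False))            # redundancy term
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        candidates = [j for j in range(X.shape[1]) if j not in selected]
        scores = [relevance[j] - corr[j, selected].mean() for j in candidates]
        selected.append(candidates[int(np.argmax(scores))])
    return selected

# Synthetic stand-in for per-scan radiomic feature vectors (toy data)
X, y = make_classification(n_samples=300, n_features=100, n_informative=10,
                           random_state=0)
keep = mrmr_select(X, y, k=40)   # reduce to 40 features, as in the study
Xs = X[:, keep]

ensemble = VotingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(),
                             LogisticRegression(max_iter=1000))),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),  # XGBoost stand-in
    ],
    voting="soft",               # average the estimators' class probabilities
)
ensemble.fit(Xs, y)
probs = ensemble.predict_proba(Xs)[:, 1]  # per-scan risk scores in [0, 1]
```

Soft voting averages each estimator's predicted probabilities, which is one common way to combine heterogeneous classifiers of this kind; the study does not specify how the three models were fused.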
To benchmark the performance of REDMOD against current practice, the researchers conducted a head-to-head multireader study of all 493 CT scans in the test subset. Two board-certified abdominal radiologists completed reads of all CT scans and rated the likelihood of a stage 0 pancreatic ductal adenocarcinoma finding from 1 to 5. Metrics were calculated with the same ground-truth labels as the AI framework for consistency.
Key Results
Radiomic analysis revealed that 90% of the selected features were derived from multiscale, wavelet-filtered images, which significantly outperformed unfiltered data (area under the curve [AUC] = 0.82 vs 0.74; P = .007). The model was further evaluated for robustness across institutions and imaging platforms, as well as for longitudinal stability on repeat imaging.
In the independent test cohort, REDMOD achieved an AUC of 0.82 (95% confidence interval [CI] = 0.81–0.83), with a sensitivity of 73.0% (95% CI = 60.0%–78.7%) and specificity of 81.1% (95% CI = 75.2%–93.1%). This performance significantly exceeded that of radiologists, whose pooled sensitivity was 38.9% (P < .001), with the AI model demonstrating nearly double the detection rate. The advantage increased with longer lead times: more than 24 months before diagnosis, sensitivity was 68.0% for REDMOD vs 23.0% for radiologists.
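The performance metrics reported above can be computed from per-scan risk scores and ground-truth labels in the usual way. The sketch below uses invented toy values, not study data, and an arbitrary 0.5 operating threshold chosen only for illustration.

```python
# Hedged sketch: AUC, sensitivity, and specificity from model scores and
# ground-truth labels. The data here are synthetic toy values, not study data.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)          # 1 = later diagnosed with PDAC
scores = np.clip(y_true * 0.4 + rng.normal(0.4, 0.25, size=500), 0, 1)

auc = roc_auc_score(y_true, scores)            # area under the ROC curve
y_pred = (scores >= 0.5).astype(int)           # illustrative operating point
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                   # true-positive rate
specificity = tn / (tn + fp)                   # true-negative rate
```

Confidence intervals such as those reported in the study are typically obtained by bootstrap resampling of the test set, though the paper's exact procedure is not described here.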
Across prediagnostic intervals, sensitivity remained at 75.0% for both the 3-to-12-month and 12-to-24-month windows before diagnosis. The model detected cancers at a median lead time of 475 days and maintained consistent performance across internal and external validation cohorts, with specificity of 81.3% and 87.5%, respectively. REDMOD also demonstrated strong longitudinal stability, with 90% to 92% concordance on repeat imaging.
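Longitudinal concordance, as reported above, can be read as the fraction of individuals whose binary model call agrees between a baseline scan and a repeat scan. The toy example below illustrates the calculation with invented values; it is not study data.

```python
# Toy illustration of longitudinal concordance: the share of paired
# baseline/repeat scans on which the model's binary call agrees.
# Calls below are invented for illustration only.
import numpy as np

baseline_call = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
repeat_call   = np.array([1, 0, 1, 0, 0, 0, 1, 0, 1, 1])
concordance = float((baseline_call == repeat_call).mean())  # 9 of 10 agree
```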
“This study validates REDMOD as a fully automated AI framework capable of identifying the imaging signatures of stage 0 pancreatic ductal adenocarcinoma in normal pancreas, achieving this with substantial lead times and performance superior to expert radiologists. The demonstrated ability of the framework to consistently detect these occult signals on a large clinically oriented dataset, combined with its high longitudinal stability and validated specificity, establishes a robust foundation for AI-augmented early detection. This work overcomes key barriers in the field by providing a scalable objective tool that addresses a critical diagnostic gap,” Mukherjee et al concluded. “While prospective validation is paramount to confirm clinical utility, the REDMOD framework represents a significant advance towards shifting the paradigm for sporadic pancreatic ductal adenocarcinoma from a late-stage symptomatic diagnosis to proactive pre-clinical interception, offering tangible hope for improving outcomes in this challenging disease.”
DISCLOSURES: The study was funded by the National Institutes of Health, Mayo Clinic Comprehensive Cancer Center, Champions for Hope Pancreatic Cancer Research Program of the Funk Zitiello Foundation, Centene Charitable Foundation, and the Hoveida Family Foundation. For full disclosures of the study authors, visit bmj.com.
ASCO AI in Oncology is published by Conexiant under a license arrangement with the American Society of Clinical Oncology, Inc. (ASCO®). The ideas and opinions expressed in ASCO AI in Oncology do not necessarily reflect those of Conexiant or ASCO. For more information, see Policies.