MRI-Based Foundation Model Predicts Key Molecular Biomarkers and Posttreatment Outcomes in Glioma
A new AI foundation model trained on routine magnetic resonance imaging (MRI) scans may enable noninvasive prediction of key glioma molecular biomarkers and posttreatment outcomes, according to a study published in JCO Precision Oncology. The model—called the Unified Multimodal Brain Imaging Foundation (UMBIF)—uses large-scale self-supervised learning to extract clinically relevant imaging features and may support clinical decision-making in neuro-oncology.
Background
MRI is widely used to diagnose gliomas and monitor treatment response, but interpretation remains largely subjective and may not reliably predict tumor molecular features. Additionally, the identification of key biomarkers in patients with glioma typically requires tissue sampling through biopsy or surgery, procedures that may not always be feasible or safe.
To address these limitations, Junxian Li, PhD, of Tianjin Medical University, Tianjin, China, and colleagues developed an AI framework capable of identifying imaging patterns associated with tumor biology. As the authors noted, “the large volume of unlabeled data available in clinical settings creates a pathway to alleviate the scarcity of annotated samples required by supervised learning.”
Model Methods
The UMBIF model pairs a vision transformer–based encoder with a transformer decoder and is trained with self-supervised learning. In the first stage, the model was pretrained on 51,029 routine brain MRI examinations collected from 13 public data sets (n = 40,823 for training; n = 10,206 for testing), enabling it to learn visual representations from 3D medical images without labeled data. The pretrained encoder was then benchmarked against two self-supervised learning reference models, SSL-ImageNet and SSL-Cerebral.
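Self-supervised pretraining of the kind described here is commonly implemented as a masked-reconstruction pretext task: part of each image is hidden, and the network is trained to reconstruct it, so no labels are needed. The study does not detail UMBIF's exact objective, so the following framework-free toy sketch is purely illustrative (patch values, the 75% mask ratio, and the trivial mean "predictor" are all hypothetical stand-ins).

```python
import random

# Toy sketch of a masked-reconstruction pretext task (hypothetical;
# the paper does not specify UMBIF's exact objective). An "image" is a
# flat list of patch values; we mask a fraction of patches and score a
# reconstruction with mean squared error on the masked positions only.

def mask_patches(patches, mask_ratio, rng):
    """Zero out a random subset of patches; return the visible view
    and the set of masked indices."""
    n_mask = int(len(patches) * mask_ratio)
    masked_idx = set(rng.sample(range(len(patches)), n_mask))
    visible = [0.0 if i in masked_idx else p for i, p in enumerate(patches)]
    return visible, masked_idx

def reconstruction_loss(pred, target, masked_idx):
    # MSE over masked positions only, as in masked autoencoders
    errs = [(pred[i] - target[i]) ** 2 for i in masked_idx]
    return sum(errs) / len(errs)

rng = random.Random(0)
patches = [0.1 * i for i in range(16)]        # stand-in for image patches
visible, masked_idx = mask_patches(patches, 0.75, rng)

# A real encoder-decoder would predict the masked patches from
# `visible`; here a trivial "predictor" guesses the mean visible value.
guess = sum(visible) / len(visible)
loss = reconstruction_loss([guess] * len(patches), patches, masked_idx)
print(len(masked_idx), loss)
```

Minimizing such a loss over large volumes of unlabeled scans is what lets the encoder learn transferable representations before any fine-tuning on labeled clinical tasks.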
The model was then fine-tuned for several clinical tasks. The researchers extracted features from three convolutional neural networks to develop and evaluate multiple machine learning classifiers. Four binary classification tasks were defined, including distinguishing pseudoprogression from true tumor progression after treatment and predicting key glioma molecular biomarkers. UMBIF's performance was compared with that of other self-supervised learning models using metrics such as accuracy and area under the receiver operating characteristic curve (AUC).
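The two headline metrics used throughout the study, accuracy and AUC, can be computed directly from a classifier's outputs. As a minimal illustration (not the study's code; labels and scores below are made up), accuracy counts correct thresholded predictions, while AUC is the probability that a randomly chosen positive case is scored above a randomly chosen negative one:

```python
# Hypothetical binary task: 1 = true progression, 0 = pseudoprogression.

def accuracy(y_true, y_pred):
    """Fraction of thresholded predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def auc(y_true, scores):
    """Rank-based AUC: probability a random positive outscores a
    random negative (ties count half); equals the ROC-curve area."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.1]      # made-up model probabilities
y_pred = [int(s >= 0.5) for s in scores]     # threshold at 0.5

print(accuracy(y_true, y_pred))   # 4 of 6 correct
print(auc(y_true, scores))        # 8 of 9 positive/negative pairs ranked correctly
```

Note that AUC is threshold-free, which is why studies like this one report it alongside accuracy: it summarizes ranking quality across all possible cutoffs.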
Key Findings
Across multiple neuro-oncology tasks, UMBIF consistently outperformed the comparison models. For identifying posttreatment radiographic outcomes, the UMBIF model achieved an accuracy of 0.899 (AUC = 0.815; 95% confidence interval [CI] = 0.740–0.885; P = .001) and a specificity of 0.636. The positive predictive value was 0.990, and the negative predictive value was 0.878; the F1 score, which balances precision and recall, was 0.942.
By comparison, the SSL-Cerebral model showed a sensitivity of 0.875 and a specificity of 0.588, with a positive predictive value of 0.333, a negative predictive value of 0.643, and an F1 score of 0.483. The SSL-ImageNet model had a sensitivity of only 0.250 and a specificity of 0.735; the positive predictive value was 0.182, the negative predictive value was 0.806, and the F1 score was 0.211.
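All of the metrics quoted above derive from the same 2×2 confusion matrix of a binary classifier. The short sketch below shows those relationships using hypothetical counts (not the study's data), which makes it easy to see, for example, why a model can pair a high positive predictive value with a modest specificity:

```python
# Illustrative confusion-matrix counts (hypothetical, not from the study):
tp, fp, fn, tn = 80, 10, 20, 90   # true/false positives, false/true negatives

sensitivity = tp / (tp + fn)      # true-positive rate (recall)      -> 0.8
specificity = tn / (tn + fp)      # true-negative rate               -> 0.9
ppv = tp / (tp + fp)              # positive predictive value (precision)
npv = tn / (tn + fn)              # negative predictive value
f1 = 2 * ppv * sensitivity / (ppv + sensitivity)  # harmonic mean of
                                                  # precision and recall

print(round(sensitivity, 3), round(specificity, 3))  # 0.8 0.9
print(round(ppv, 3), round(npv, 3), round(f1, 3))    # 0.889 0.818 0.842
```

Because PPV and NPV depend on how many positives and negatives are in the test set, they can diverge sharply between models, as the SSL-Cerebral and SSL-ImageNet figures above illustrate.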
In terms of molecular profiling, the model reached accuracies of 0.898 (AUC = 0.916; 95% CI = 0.886–0.945; P = .001) for identifying 1p/19q codeletion status, 0.829 (AUC = 0.896; 95% CI = 0.857–0.929) for IDH mutation status, and 0.905 (AUC = 0.859; 95% CI = 0.817–0.900; P = .001) for MGMT promoter methylation classification.
These findings suggest that MRI-based AI models may help identify molecular features of gliomas without requiring invasive procedures. According to the study authors, “UMBIF showed robust transferability to both post-therapy imaging assessment and molecular status prediction in glioma. By leveraging large-scale self-supervised pretraining to boost performance while reducing dependence on manual annotations, the framework may facilitate more efficient and reliable diagnostic workflows.”
Qian Su, PhD, of Tianjin Medical University, Tianjin, China, is the corresponding author of the article in JCO Precision Oncology.
DISCLOSURE: This study was supported by Introduction-of-Talents and Doctoral Start-up Fund of Tianjin Medical University Cancer Institute and Hospital, the National Natural Science Foundation of China, and the Tianjin Key Medical Discipline Construction Project. For full disclosures of the study authors, visit ascopubs.org.
ASCO AI in Oncology is published by Conexiant under a license arrangement with the American Society of Clinical Oncology, Inc. (ASCO®). The ideas and opinions expressed in ASCO AI in Oncology do not necessarily reflect those of Conexiant or ASCO. For more information, see Policies.