Deep Learning Distinguishes Thymic Epithelial Tumor Subtypes
A deep learning model demonstrated the ability to diagnose thymic epithelial tumors and differentiate between their histological subtypes, achieving particularly high sensitivity for detecting thymic carcinomas.
The researchers suggested, in their report published in Annals of Oncology, that the tool could be integrated into pathology workflows to improve diagnostic consistency and support decision-making, especially in settings with limited thoracic pathology expertise.
“We created a tool that—in the hands of a nonexpert pathologist—is able to properly diagnose 100% of thymic carcinomas and outperform nonexpert diagnoses,” explained senior study author Marina Garassino, MD, Professor of Medicine at UChicago Medicine.
Thymic epithelial tumors are a rare group of tumors that can be challenging to diagnose due to their heterogeneous histologic patterns. Historically there has been significant interobserver variability in the classification of these tumors, even with standardized World Health Organization (WHO) classification criteria. About half of all thymic epithelial tumors may be reclassified on a second opinion.
The researchers believed that deep learning could reduce the diagnostic variability surrounding thymic epithelial tumors.
Previously, the research team had developed an ordinal regression approach to labeling thymic epithelial tumors from hematoxylin and eosin (H&E)-stained whole-slide images. Although that model could not reliably classify the tumors, differences in the distributions of its predicted scores suggested that morphological differentiation could be captured to improve diagnostic accuracy.
In this study, they refined their classification system and optimized their model architecture to improve diagnostic accuracy and robustness.
When tested on the three-group hierarchical classification, the researchers’ newer model achieved 91.1% accuracy (Cohen's κ = 0.859); on the six WHO subtypes, it achieved 77.7% accuracy (Cohen's κ = 0.716).
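Cohen's κ measures agreement between the model's predictions and the reference diagnoses after correcting for chance, via κ = (p_o − p_e)/(1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance. As a rough illustration (the labels below are invented for demonstration, not study data), it can be computed directly with scikit-learn:

```python
# Illustrative only: Cohen's kappa corrects raw agreement for chance.
# These labels are invented for demonstration; they are not study data.
from sklearn.metrics import cohen_kappa_score

reference = ["A", "AB", "B1", "B2", "B3", "TC", "AB", "B2"]  # TC = thymic carcinoma
predicted = ["A", "AB", "B1", "B2", "B2", "TC", "AB", "B2"]

print(cohen_kappa_score(reference, predicted))  # ≈ 0.84; 1.0 = perfect, 0 = chance
```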
The model achieved a sensitivity of 100% and an accuracy of 94.6% for detecting thymic carcinomas.
Sixty percent of the misclassifications occurred within the same clinical management group, limiting their impact on therapeutic decision-making.
Going forward, the research team is working to validate the tool on larger, international datasets and to accommodate data and slides prepared with differing procedures.
“In a larger population, harmonizing these steps is the biggest challenge,” Dr. Garassino said. “So, in the future, we plan to expand the algorithm so that it can correct for such differences, which will make the tool even more widely usable.”
Model Methods
For this study, the researchers trained an attention-based multiple instance learning model whose attention weights highlight the diagnostically relevant regions of each slide, supporting interpretability. The model architecture also incorporated a novel hierarchical loss function that encodes clinically relevant tumor groupings defined by treatment strategy and patient outcomes.
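The article does not publish the architecture in detail; the following is a minimal PyTorch sketch of what attention-based multiple instance learning with a hierarchical loss can look like. The layer sizes, the subtype-to-group mapping, and the loss weighting alpha are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of attention-based multiple instance learning (MIL) in
# PyTorch. Layer sizes, the subtype-to-group mapping, and the loss
# weighting are illustrative assumptions, not the authors' published code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=1536, hidden_dim=256, n_subtypes=6):
        super().__init__()
        # Scores each tile embedding; softmax turns scores into weights.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1)
        )
        self.classifier = nn.Linear(feat_dim, n_subtypes)

    def forward(self, tile_feats):                      # (n_tiles, feat_dim)
        weights = torch.softmax(self.attention(tile_feats), dim=0)
        slide_feat = (weights * tile_feats).sum(dim=0)  # attention pooling
        return self.classifier(slide_feat), weights.squeeze(-1)

# WHO subtypes A, AB, B1, B2, B3, thymic carcinoma mapped to three
# clinical management groups (As, Bs, carcinoma); mapping assumed here.
SUBTYPE_TO_GROUP = torch.tensor([0, 0, 1, 1, 1, 2])

def hierarchical_loss(logits, subtype, alpha=0.5):
    """Cross-entropy on the fine subtype plus a coarse group-level term."""
    fine = F.cross_entropy(logits.unsqueeze(0), subtype.view(1))
    group_logits = torch.stack(
        [torch.logsumexp(logits[SUBTYPE_TO_GROUP == g], dim=0) for g in range(3)]
    )
    coarse = F.cross_entropy(group_logits.unsqueeze(0),
                             SUBTYPE_TO_GROUP[subtype].view(1))
    return fine + alpha * coarse
```

The coarse term aggregates subtype logits within each management group (via log-sum-exp), so a prediction that lands in the right group but the wrong subtype is penalized less than one that crosses group boundaries.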
The model was trained on a dataset of H&E-stained whole-slide images from The Cancer Genome Atlas (n = 119 patients). Image tiles were extracted from pathologist-annotated regions of interest, and background tiles were removed. The tiles then underwent digital stain normalization and were fed into the pretrained foundation model H-optimus-0, an open-source vision transformer with 1.1 billion parameters trained on more than 500,000 H&E-stained whole-slide images.
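As a hedged sketch of the tile-embedding step, the snippet below loads a pretrained encoder and converts stain-normalized tiles into feature vectors. The timm model id, loading keyword, and normalization constants follow the publicly released H-optimus-0 checkpoint as an assumption; the study's exact preprocessing may differ.

```python
# Hedged sketch of the tile-embedding step: stain-normalized tiles pass
# through a frozen pretrained encoder, yielding one feature vector per
# tile. Model id and constants follow the public H-optimus-0 release as
# an assumption; the paper's actual pipeline may differ.
import timm
import torch
from torchvision import transforms

encoder = timm.create_model(
    "hf-hub:bioptimus/H-optimus-0", pretrained=True, init_values=1e-5
)
encoder.eval()

to_tensor = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.707, 0.579, 0.704),   # values from the
                         std=(0.212, 0.230, 0.178)),   # public model card
])

@torch.no_grad()
def embed_tiles(tiles):                # tiles: list of 224x224 RGB PIL images
    batch = torch.stack([to_tensor(t) for t in tiles])
    return encoder(batch)              # one feature vector per tile
```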
For each whole-slide image, the model generated two heat maps to explain its decision-making: one showing areas of attention derived from the attention weights and one showing the spatial patterns underlying the subtype score, allowing for high interpretability.
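In spirit, the attention heat map can be produced by scattering each tile's attention weight back to its position on the slide; a minimal sketch follows, with the tile grid layout assumed for illustration.

```python
# Minimal sketch: scatter per-tile attention weights back onto the tile
# grid to form an attention heat map. Tile coordinates and grid shape
# are illustrative assumptions.
import numpy as np

def attention_heatmap(weights, coords, grid_shape):
    """weights: (n_tiles,) attention weights from the MIL model.
    coords: (n_tiles, 2) integer (row, col) grid position of each tile."""
    heatmap = np.zeros(grid_shape, dtype=np.float32)
    for w, (r, c) in zip(weights, coords):
        heatmap[r, c] = w
    # Scale to [0, 1] so the map can be overlaid on a slide thumbnail.
    peak = heatmap.max()
    return heatmap / peak if peak > 0 else heatmap
```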
The researchers’ model was validated on a cohort of 112 cases from the University of Chicago, with the diagnoses of thymic epithelial tumors confirmed by an expert thoracic pathologist. Performance was tested both on a three-group hierarchical scheme (As: A + AB; Bs: B1 + B2 + B3; and thymic carcinoma) and on the six classes of the WHO classification (A, AB, B1, B2, B3, and thymic carcinoma).
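A hedged sketch of how the two evaluation granularities relate: predictions on the six WHO subtypes can be collapsed into the three management groups before scoring. The label strings and helper function below are hypothetical.

```python
# Hedged sketch of scoring both label granularities: collapse the six
# WHO subtypes into the three management groups, then report accuracy
# and Cohen's kappa at each level. Labels and helper are hypothetical.
from sklearn.metrics import accuracy_score, cohen_kappa_score

WHO_TO_GROUP = {"A": "As", "AB": "As",
                "B1": "Bs", "B2": "Bs", "B3": "Bs",
                "TC": "TC"}  # TC = thymic carcinoma

def evaluate(y_true, y_pred):
    groups_true = [WHO_TO_GROUP[y] for y in y_true]
    groups_pred = [WHO_TO_GROUP[y] for y in y_pred]
    return {
        "who_accuracy": accuracy_score(y_true, y_pred),
        "who_kappa": cohen_kappa_score(y_true, y_pred),
        "group_accuracy": accuracy_score(groups_true, groups_pred),
        "group_kappa": cohen_kappa_score(groups_true, groups_pred),
    }
```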
DISCLOSURES: The research was supported by grants from the National Institutes of Health and a scholarship “Pierluigi Galli and Eurovetro Recycling SRL” from Associazione TUTOR. The Department of Medicine, Section of Hematology/Oncology and Department of Pathology at The University of Chicago and the TCGA Research Network also supported the study. For full disclosures of the study authors as well as access to the code used for model training and evaluation, visit annalsofoncology.org.