
Clinical Staff Use of Natural Language Processing Model Enhances Accuracy of Clinical Trial Prescreening Process

February 20, 2026 By Lisa Astor

A randomized, noninferiority trial demonstrated that when trained research staff use a pretrained language model to assist with prescreening patients for enrollment in a clinical trial, the AI can improve the accuracy of the eligibility criteria assessment without slowing the review. Adding the AI language model to the prescreening process led to modest improvements in chart-level accuracy that achieved superiority over research staff review alone, with similar average times per chart review. A report on the performance of the human-plus-AI framework was published in Nature Communications.

Prescreening patient records to assess eligibility for inclusion in a clinical trial is typically a time-consuming and labor-intensive process. Clinicians or clinical research staff must manually comb through clinical notes to find the relevant criteria, so the process may be subject to human error, missed information, and bias.

Researchers explored the use of a natural language processing algorithm to overcome the pitfalls of clinical trial eligibility criteria screening and improve the accuracy and efficiency of the prescreening process as well as the overall inclusivity of the trial participants. They conducted a large, randomized, noninferiority trial to assess prescreening with or without AI augmentation of electronic health records from patients with non–small cell lung cancer (n = 195) or colorectal cancer (n = 160) who had received systemic therapy at a large community oncology clinic.

Chart-level accuracy, the primary endpoint, was determined in a paired study design between trial arms. Prescreening assessment with a human reviewer plus AI augmentation demonstrated an accuracy of 76.1% compared with 71.5% by human-alone assessment (P for noninferiority < .001; P for superiority = .002). Autonomous AI assessment alone showed an accuracy of 59.9%.
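For readers who want a concrete sense of what a paired noninferiority comparison of chart-level accuracy can look like, the following minimal Python sketch tests per-chart accuracy differences against an assumed margin using a normal approximation. The margin, the example data, and the test itself are illustrative assumptions and are not taken from the study's statistical analysis plan.

```python
import math
from statistics import mean, stdev

def noninferiority_p(ai_scores, human_scores, margin=0.10):
    """One-sided P value for H0: mean(ai - human) <= -margin (paired design, normal approximation)."""
    diffs = [a - h for a, h in zip(ai_scores, human_scores)]  # per-chart paired differences
    se = stdev(diffs) / math.sqrt(len(diffs))                 # standard error of the mean difference
    z = (mean(diffs) + margin) / se                           # distance above the noninferiority margin
    return 0.5 * math.erfc(z / math.sqrt(2))                  # upper-tail P value

# Hypothetical per-chart accuracies for the same charts reviewed in both arms.
ai = [0.80, 0.75, 0.90, 0.70, 0.85]
human = [0.75, 0.70, 0.85, 0.72, 0.80]
print(round(noninferiority_p(ai, human), 4))
```

A small one-sided P value here would indicate that the AI-augmented reviews are at least as accurate as human-alone reviews to within the chosen margin; superiority would then be assessed against a margin of zero.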

“AI can augment existing research staff screening processes to improve the accuracy of identifying patients for clinical trials,” said co-lead and corresponding study author Ravi B. Parikh, MD, MPP, Associate Professor in the Department of Hematology and Medical Oncology at the Emory University School of Medicine, and Medical Director of the Winship Data and Technology Applications Shared Resource at Winship Cancer Institute of Emory University. “We estimate that at a high-volume cancer center, that improvement could translate to 10 to 20 additional patients screened each week, meaning at least one extra patient per day being offered a potentially lifesaving clinical trial they otherwise might not have been offered.”

In terms of individual eligibility criteria, the addition of AI algorithms yielded greater accuracy than human-alone review in seven areas, including T, N, and M stages; biomarker testing and interpretation; and clinical outcome. Human review alone proved best at assessing ECOG performance status. AI algorithms alone, on the other hand, showed the best accuracy in assessing M stage, clinical outcome, and response to therapy compared with both the human-alone and human-plus-AI reviews.

In subgroup analyses, accuracy was similar for earlier vs later chart reviews and for less vs more complex charts, both with human-alone review and with coordinators using the AI algorithms. With and without AI augmentation, reviews were more accurate for shorter charts than for longer charts, and for charts of patients with non–small cell lung cancer than for charts of patients with colorectal cancer.

Mean times per chart review were similar between the human-alone and human-plus-AI reviews (34.7 vs 37.8 minutes; P = .513). Although review times increased from earlier to later chart reviews for the clinical research coordinators both with and without AI augmentation, the increase was significantly greater without AI assistance (31.0 to 44.6 minutes vs 32.6 to 42.2 minutes). However, the study authors noted that the efficiency results should be interpreted with caution, as efficiency was not the primary endpoint and the research staff may have been subject to automation bias.

“While this work was conducted at a single cancer center, the approach is designed to be scalable and could be adapted by other health systems seeking to improve clinical trial operations,” Dr. Parikh concluded.

Model Methods

Electronic health records were retrospectively collected and processed through an optical character recognition system to convert the documents into machine-readable data. The patient records were also de-identified and the dates were scrambled to ensure privacy. Transformer-based models trained on real-world data combined with pretrained neurosymbolic AI algorithms extracted and structured information from the health records on characteristics pertaining to 12 eligibility criteria from phase I–III oncology clinical trials. Medical experts had curated a symbolic reasoning system that guided the model in classifying the clinical report types and clinical events.
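To make the pipeline more concrete, the sketch below shows, in simplified Python, the kinds of steps described above: dates in a note are scrambled by a consistent per-patient offset, and a model's raw extractions are mapped through simple symbolic rules onto a structured eligibility record. The field names, rules, and regular expressions are hypothetical illustrations rather than the study's actual system.

```python
import re
import random
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class EligibilityRecord:
    """Structured fields for a small, illustrative subset of eligibility criteria."""
    t_stage: Optional[str] = None
    n_stage: Optional[str] = None
    m_stage: Optional[str] = None
    ecog_status: Optional[int] = None
    biomarkers: dict = field(default_factory=dict)

def scramble_dates(text: str, patient_id: str) -> str:
    """Shift every ISO-format date in a note by a consistent per-patient offset."""
    offset = timedelta(days=random.Random(patient_id).randint(-365, 365))
    def shift(match):
        original = datetime.strptime(match.group(0), "%Y-%m-%d")
        return (original + offset).strftime("%Y-%m-%d")
    return re.sub(r"\d{4}-\d{2}-\d{2}", shift, text)

def apply_symbolic_rules(raw: dict) -> EligibilityRecord:
    """Map free-text model outputs onto a controlled eligibility schema."""
    record = EligibilityRecord()
    stage = raw.get("stage", "")
    for label, attr in (("T", "t_stage"), ("N", "n_stage"), ("M", "m_stage")):
        match = re.search(rf"{label}(\d[a-c]?|x)", stage, re.IGNORECASE)
        if match:
            setattr(record, attr, f"{label}{match.group(1)}")
    if str(raw.get("ecog", "")).isdigit():
        record.ecog_status = int(raw["ecog"])
    return record

note = scramble_dates("CT 2023-05-14: stage T2N1M0 adenocarcinoma. ECOG 1.", "pt-001")
print(note)
print(apply_symbolic_rules({"stage": "T2N1M0", "ecog": "1"}))
```

In the published system, the extraction step is performed by transformer models rather than regular expressions; the rule layer above only illustrates how an expert-curated symbolic component can constrain model outputs to a fixed schema.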

The de-identified records were then presented separately to two clinical research coordinators through a secure, web-based platform that allowed the data sources to be traced to each clinical event. Charts were randomized in groups of 20 for assessment with or without AI augmentation in an alternating order. A third person compared the two sets of manual human abstractions.
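As a rough illustration of this kind of alternating block design, the sketch below shuffles chart identifiers and assigns consecutive blocks of 20 to review with or without AI augmentation in alternating order. It is an assumption-laden simplification, not the study's randomization code.

```python
import random

def assign_blocks(chart_ids, block_size=20, seed=42):
    """Shuffle charts and assign consecutive blocks of charts to alternating review arms."""
    rng = random.Random(seed)
    shuffled = list(chart_ids)
    rng.shuffle(shuffled)
    assignments = {}
    for start in range(0, len(shuffled), block_size):
        arm = "human-plus-AI" if (start // block_size) % 2 == 0 else "human-alone"
        for chart_id in shuffled[start:start + block_size]:
            assignments[chart_id] = arm
    return assignments

charts = [f"chart-{i:03d}" for i in range(355)]  # 195 NSCLC + 160 colorectal charts
arms = assign_blocks(charts)
print(arms["chart-000"], sum(arm == "human-plus-AI" for arm in arms.values()))
```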

To balance accuracy differences between coordinators, concordance was checked on eight health records that were not included in the final analysis. Additionally, all accuracy and efficiency assessments were reviewed against a data set that was manually developed by three clinician reviewers before the study started; this data set was considered the gold standard for the study.
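The comparison against the gold-standard data set can be pictured with a small sketch like the one below, which assumes chart-level accuracy is the fraction of criteria on each chart matching the clinician-curated gold standard, averaged across charts. The study's actual scoring rule is described in the published report; this is only an illustration.

```python
def chart_level_accuracy(reviews, gold):
    """Average, over charts, of the fraction of criteria matching the gold standard."""
    per_chart = []
    for chart_id, gold_fields in gold.items():
        reviewed = reviews.get(chart_id, {})
        correct = sum(reviewed.get(criterion) == value
                      for criterion, value in gold_fields.items())
        per_chart.append(correct / len(gold_fields))
    return sum(per_chart) / len(per_chart)

# Hypothetical abstractions for a single chart with three criteria.
gold = {"chart-001": {"T stage": "T2", "N stage": "N1", "M stage": "M0"}}
reviews = {"chart-001": {"T stage": "T2", "N stage": "N0", "M stage": "M0"}}
print(round(chart_level_accuracy(reviews, gold), 3))  # 0.667
```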

“Clinical trial enrollment in oncology is limited by an arduous and suboptimal prescreening process, performed solely by clinical staff. We find that integrating chart abstractions of unstructured clinical texts by a high-performing AI system into human workflows leads to comparable and improved prescreening accuracy, improves precision in abstracting biomarker and neoplasm trial inclusion criteria, and preserves prescreening efficiency,” the study authors wrote in their published report. “Using experimental frameworks such as ours to evaluate human-AI collaborations is critical to build trust in using AI in healthcare workflows.”

DISCLOSURES: The study was funded by Mendel.ai. Dr. Parikh reported receiving grants and personal fees from Mendel.ai, both outside the submitted work. For full author disclosures as well as open access to the code and data set, visit ascopubs.org.

 

AI in Practice: Have you explored using AI systems for clinical trial prescreening, enrollment, or analysis? If so, tell us about your experience. Connect with us by email. 

ASCO AI in Oncology is published by Conexiant under a license arrangement with the American Society of Clinical Oncology, Inc. (ASCO®). The ideas and opinions expressed in ASCO AI in Oncology do not necessarily reflect those of Conexiant or ASCO. For more information, see Policies.
