AI-Driven Support for Aesthetic Outcomes After Locoregional Breast Cancer Treatment
“[Women undergoing breast cancer surgery] are facing traumatic aesthetic impact, and, somehow, they have to know what will come,” commented Timo Schinköthe, PhD, of the University of the Bundeswehr Munich, Neubiberg, Germany. Thus, at the inaugural European Society for Medical Oncology (ESMO) Artificial Intelligence (AI) & Digital Oncology Congress (Abstract 70MO), he introduced an AI-powered mobile application aimed at enhancing patient engagement and satisfaction with aesthetic outcomes and presented exploratory findings from its evaluation in the multinational CINDERELLA trial.1
Dr. Schinköthe described the platform as a photo robot connected to a cloud-based electronic health record system, both linked to an AI tool called BreloAI, which analyzes captured wound images to estimate aesthetic outcomes based on the type and quality of surgery. Electronic patient-reported outcomes were integrated because “patient feedback is extremely important,” he noted, along with an electronic case report form system. The “most important” component, he continued, was the mobile application he and his colleagues built, which lets patients view potential outcome pictures along with other content they might find interesting.
“Preliminary insights [from our study] show patients’ high interest in AI-tailored visual content within a digital oncology setting,” the investigators commented, “but uptake varied by center, suggesting contextual influences (eg, culture, education, and local integration).”
Study Details
Approximately 1,000 patients with breast cancer were stratified by type of surgery (conservative vs mastectomy) and whether they had previously undergone radiotherapy. They were then randomly assigned in a 1:1 ratio to the intervention or control group. Patients in the former group had full access to the mobile Cinderella application, including its AI-tailored content, whereas those in the latter were restricted to using the application solely for electronic patient-reported outcomes and image capture. The AI-generated content comprised images drawn from a curated database that illustrated potential aesthetic outcomes of various surgical techniques, customized to the patient’s own upper body photograph.
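The stratified 1:1 assignment described above can be sketched in a few lines. This is a minimal illustration only, assuming permuted blocks of two within each stratum; the article does not describe the trial's actual randomization mechanics, and the field names used here are hypothetical.

```python
import random
from collections import defaultdict

def stratified_1to1_assignment(patients, seed=0):
    """Assign patients 1:1 to intervention or control within strata.

    Strata are defined by surgery type and prior radiotherapy, as in the
    trial. Permuted blocks of two keep the arms balanced within each
    stratum. Field names ('surgery', 'prior_radiotherapy') are
    illustrative assumptions, not taken from the study protocol.
    """
    rng = random.Random(seed)
    strata = defaultdict(list)
    for p in patients:
        strata[(p["surgery"], p["prior_radiotherapy"])].append(p)

    assignment = {}
    for group in strata.values():
        rng.shuffle(group)
        for i in range(0, len(group), 2):
            arms = ["intervention", "control"]
            rng.shuffle(arms)  # random arm order within each block of two
            for patient, arm in zip(group[i:i + 2], arms):
                assignment[patient["id"]] = arm
    return assignment
```

Blocking within each stratum guarantees that, for every combination of surgery type and radiotherapy history, the two arms differ in size by at most one patient.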
The mobile application was deployed in six languages across the following five centers, each in a different country: Champalimaud Foundation, Lisbon, Portugal; Heidelberg University Hospital, Germany; Gdańsk University Hospital, Poland; Ospedale San Raffaele, Milan, Italy; and Sheba Medical Center, Ramat Gan, Israel.
To evaluate usage behavior, the mobile application recorded all user activities, including the number of sessions, session duration, and navigation paths. Engagement metrics were not prespecified primary or secondary endpoints and were thus presented as exploratory process indicators, independent of clinical or patient-reported outcome analyses. The 1-year follow-up period will continue until May 2026.
Exploratory Findings
On average, each patient viewed 46.5 pages and completed 7.5 sessions. The mean number of sessions per patient varied significantly by country (P < .001):
Portugal (n = 142): 8.73 (95% confidence interval [CI] = 7.44–10.02);
Germany (n = 84): 6.12 (95% CI = 4.77–7.47);
Poland (n = 70): 7.64 (95% CI = 5.54–9.74);
Italy (n = 91): 7.59 (95% CI = 6.11–9.08);
Israel (n = 29): 4.66 (95% CI = 3.25–6.06).
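Per-country figures like those above can be computed from raw per-patient session counts. The following is a minimal sketch assuming a simple normal approximation (mean ± 1.96 × standard error); the article does not state which CI method the investigators used.

```python
import math
from statistics import mean, stdev

def mean_with_ci(sessions, z=1.96):
    """Mean session count with an approximate 95% confidence interval.

    `sessions` is a list of per-patient session counts for one country.
    Assumes a normal approximation, mean +/- z * (sample SD / sqrt(n));
    the study's actual CI method is not reported in the article.
    """
    m = mean(sessions)
    se = stdev(sessions) / math.sqrt(len(sessions))
    return m, m - z * se, m + z * se
```

For the small per-country samples reported here (n = 29 to 142), a t-distribution critical value would widen the interval slightly relative to z = 1.96.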
Dr. Schinköthe identified the surgery content, which contained the AI-tailored material, as the most important chapter of the mobile application for patients. Most users appeared to begin their application journey with this section.
Concerning the patient population, Dr. Schinköthe stated, “Comparing the different countries is very interesting because the patients who are enrolled from country to country differ in a lot of their characteristics, but also in the disease characteristics as well as in the type of surgery.” He added that he looks forward to seeing how these differences will impact the primary and secondary endpoint outcomes.
The investigators concluded, “Ongoing analyses will identify predictors of sustained engagement and relate usage to patient-reported outcomes.”
DISCLOSURE: Dr. Schinköthe reported financial, personal, and ownership interests with CANKADO, as its Chief Executive Officer and owner. The other study authors reported no conflicts of interest.
ASCO AI in Oncology is published by Conexiant under a license arrangement with the American Society of Clinical Oncology, Inc. (ASCO®). The ideas and opinions expressed in ASCO AI in Oncology do not necessarily reflect those of Conexiant or ASCO. For more information, see Policies.
Performance of a convolutional neural network in determining differentiation levels of cutaneous squamous cell carcinomas was on par with that of experienced dermatologists, according to the results of a recent study published in JAAD International.
“This type of cancer, which is a result of mutations of the most common cell type in the top layer of the skin, is strongly linked to accumulated [ultraviolet] radiation over time. It develops in sun-exposed areas, often on skin already showing signs of sun damage, with rough scaly patches, uneven pigmentation, and decreased elasticity,” stated lead researcher Sam Polesie, MD, PhD, Associate Professor of Dermatology and Venereology at the University of Gothenburg and Practicing Dermatologist at Sahlgrenska University Hospital, both in Gothenburg, Sweden.