Division of Labor in Oncology: The Limits of AI in Cancer Care
The past few years have seen the rapid adoption of AI across American workforces. Health care is no exception, and articles now frequently discuss the potential impact of this technology on the practice of medicine. These discussions, however, are often incomplete: they address particular tasks affected by AI without fully examining the technology that undergirds them or the implications its fundamental principles hold for oncology. The effective use of AI in oncology will require oncologists to recognize how the technology can enhance the execution of individual tasks in service of the overall work of oncology.
The Work and Tasks of Oncology
The idea of “work” is fundamentally and conceptually different from the idea of a “task.” In the very first book of The Wealth of Nations,1 Adam Smith captures this distinction in his discussion of the division of labor in the making of a simple pin:
“One man draws out the wire; another straights it; a third cuts it; a fourth points it; a fifth grinds it at the top for receiving the head; to make the head requires two or three distinct operations; to put it on is a peculiar business; to whiten the pins is another; it is even a trade by itself to put them into the paper; and the important business of making a pin is, in this manner, divided into about eighteen distinct operations.”
Each individual task is but a small component of the work of pin creation, and mastery of any single task, no matter the degree of proficiency, is insufficient for the production of the pin. Even if all 18 of these tasks were accomplished perfectly, a pin would still not exist unless they were performed in the right order, with the right materials, and with the supervision and communication needed to ensure that the entire process proceeds without interruption. Although the labor of pin making can be divided into tasks, the work is more than the rote execution of a set of tasks; it is the ongoing integration of many small tasks, each performed to the best possible quality and coordinated to generate a single end product: a pin. The work of oncologists should similarly be seen less as a series of individual tasks than as executing the responsibilities of providing high-quality medical care to patients with cancer and contributing to the understanding and treatment of cancer.
The Accreditation Council for Graduate Medical Education (ACGME) program requirements for medical oncology explicitly name dozens of tasks that fellows are expected to perform and imply hundreds of others that trainees must master to successfully complete the program and become fully fledged oncologists. Fellows are expected to take on the responsibilities they would shoulder as attendings: they must manage patients undergoing a wide variety of treatments for oncological conditions, participate in and lead scholarly activities, and contribute to their institution’s academic and service missions. Mastering the individual tasks is not enough; only upon demonstrating that they can integrate all of these activities and work within the health-care system are fellows allowed to complete the program.
Using AI in the Work of Oncology
Modern AI, built on a foundation of machine learning, allows oncologists to enjoy a “proportionable increase in the productive powers of labor” by allowing them to complete tasks they were previously unable to perform and enhancing their effectiveness at other tasks, thereby deepening the division of labor and then integrating its fruits.1 But given its complexity and the uncertainty inherent in its very nature, AI alone cannot be relied upon to complete the total work of oncologists, regardless of its execution of individual tasks.
The most prominent forms of AI in the public eye are readily available large language models (LLMs), such as ChatGPT. These LLMs appear able to transcend the bounds of rigid language and algorithms and, to a certain extent, participate in the actual “language games” that define human communication.2 This enables computers to engage with unprecedented subtlety, seemingly grasping rules of human expression that were never explicitly defined, such as the difference and the relationship in meaning between a report of “nausea” and an expression of “ugh!” from a patient. It also enables oncologists to instruct an LLM to perform the minutiae-oriented, repetitive tasks that computers are well suited for, something that previously required specialized syntax in which few oncologists have comprehensive training.
AI can help physicians without a background in computer science leverage basic programming to improve day-to-day administrative operations. For example, AI has facilitated the creation of spreadsheet formulas that identify schedule conflicts, count days off, and balance rotations in the Stony Brook University fellowship schedule, tasks that would otherwise be laborious and manual or require expensive software. In research, AI can write code and build reusable programs to generate Kaplan-Meier curves and run basic statistical inferences; it can just as easily write SQL queries. These uses have saved tremendous amounts of time without requiring any high-level interpretation of data.
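To give a concrete sense of the kind of reusable program an oncologist might prompt an LLM to produce, the sketch below is a minimal Kaplan-Meier survival estimator in plain Python. The follow-up data are invented for illustration, and a real analysis would use a validated library (such as lifelines) rather than hand-rolled code:

```python
# Minimal Kaplan-Meier estimator (illustrative sketch only; real analyses
# should use a validated statistical library such as lifelines).

def kaplan_meier(times, events):
    """times: follow-up in months; events: 1 = death observed, 0 = censored.
    Returns a list of (time, survival probability) steps."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        # Deaths and total subjects (deaths + censored) leaving at time t.
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        removed = sum(1 for tt, e in data if tt == t)
        if deaths > 0:
            # Survival drops only at event times, by the fraction surviving.
            survival *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed
        i += removed
    return curve

# Hypothetical follow-up data (months) for eight patients.
times = [3, 5, 5, 8, 10, 12, 15, 20]
events = [1, 1, 0, 1, 0, 1, 0, 0]
for t, s in kaplan_meier(times, events):
    print(f"t={t:>2} months  S(t)={s:.3f}")
```

From here, plotting the step function or comparing arms with a log-rank test is the sort of incremental extension an LLM can readily generate on request.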
But beneath this display lies something fundamentally more ethereal than real human-to-human communication: LLMs are trained to correctly predict the next word, generating signifiers without anything being directly signified in the “mind” of their author.3 These modern AIs are simply linear-algebra models built to isolate patterns in large amounts of data. In the same way that a retrospective cohort study is subject to certain biases, every LLM today is fundamentally a set of insights generated from retrospective data, and we must never lose sight of this. It means that these models are built on data that may contain bias, imperfections, inaccuracies, and historical treatments that are no longer relevant. Most concerningly, LLMs are architecturally incapable of learning beyond the supplied data, a limitation that Richard Sutton, the father of reinforcement learning, has emphasized: he argues that LLMs are incapable of real-time learning and will ultimately be superseded by systems that learn from experience rather than from existing human-generated text.4
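The next-word objective described above can be made concrete with a toy bigram model, a vastly simplified stand-in for an LLM trained on an invented miniature corpus. Even this caricature illustrates the retrospective principle: the model can only reproduce successions of words it has already seen, and it has nothing to say about words absent from its training data:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": counts which word follows which in a fixed,
# retrospective corpus, then predicts the most frequent successor.
# (A deliberately crude stand-in for next-token prediction; real LLMs learn
# far richer statistics, but the retrospective principle is the same.)

corpus = ("the patient reported nausea after chemotherapy "
          "the patient reported fatigue after chemotherapy "
          "the patient reported nausea after radiation").split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    # Return the most common successor seen in training, or None if the
    # word never appeared -- the model cannot go beyond its training data.
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(predict_next("reported"))       # the most frequent continuation seen
print(predict_next("immunotherapy"))  # absent from the retrospective corpus
```

Here "reported" yields "nausea" simply because that continuation was most frequent historically, while a term outside the corpus yields nothing at all, which is the bias of retrospective data in miniature.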
Thus, there is a limit to the possible alignment between AI systems and current oncology practice, meaning there is a limit to how much these systems can advance our intended objectives. An AI trained on historical data may recommend treatments that reflect an outdated standard of care, or omit novel therapies that postdate its training entirely. Even when the clinical facts are current, models trained on population-level data cannot reliably center the priorities of an individual patient, even among patients with similar clinical features.
Given these limitations, it is hard to accept information created by AI as-is. AI tools exist to substitute for tasks such as chart review, transcription, and note writing, but such substitution may be counterproductive. The priorities indicated in a generated note may not reflect the topics the writer intended, and the specific ideas that capture an individual physician’s expertise and years of experience may be lost as AI trends toward generalized outputs, a consequence of the fundamental nature of its construction. AI models also frequently generate words that appear coherent on the surface but signify no coherent meaning. In this sense, AI can enable a productive division of labor: the simplest tasks can be streamlined, freeing physicians to focus on larger tasks that demand more attention, involve more uncertainty, and depend on the integration of novel and personalized data.
AI has capabilities that can tackle many of the individual tasks that make up a significant portion of oncologists’ responsibilities. Yet due to the very nature of LLM construction and operation, such tools remain machines best suited to enabling the division of labor and giving physicians new capabilities most did not previously have, not truly autonomous agents. The final responsibility for integrating all of these activities into the provision of cancer care and cancer research can rest only in the hands of actual oncologists, as only they are equipped to center patient values, identify new information, handle ambiguity, and recognize developments that would truly improve the care of patients with cancer.
DISCLOSURES: Dr. Tian and Dr. Li reported no conflicts of interest.
Dr. Tian and Dr. Li are both Fellows in the Hematology/Oncology Program at Stony Brook University Hospital in New York.
REFERENCES
Smith A, Cannan E: The Wealth of Nations. Random House Inc; 2000.
Wittgenstein L: Philosophical Investigations. Basil Blackwell; 1968.
Radford A, Narasimhan K, Salimans T, Sutskever I: Improving Language Understanding by Generative Pre-Training. Available at cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf. Accessed March 14, 2026.
Patel D: Richard Sutton — Father of RL thinks LLMs are a dead end. The Dwarkesh Podcast. September 26, 2025. Available at www.dwarkesh.com/p/richard-sutton. Accessed March 14, 2026.
Disclaimer: This commentary represents the views of the authors and may not necessarily reflect the views of ASCO, Conexiant, or ASCO AI in Oncology.