AI Technology NYUTron Accurately Predicts Health Outcomes

NYU Center for Data Science
4 min read · Jun 30, 2023

CDS PhD Student Lavender Jiang discusses the limitations and possibilities of AI support tools for healthcare providers

CDS PhD Student, Lavender Jiang

As the role of artificial intelligence (AI) in healthcare settings grows, a new computer program is being used at NYU Langone Health hospitals to predict the chances of a discharged patient being readmitted within a month. NYUTron, a large language model (LLM), reads physicians’ notes and estimates patients’ risk of death, length of hospital stay, and other health factors.

A significant challenge in incorporating computer programs in healthcare settings lies in information processing. Physicians often write in individualized language, and the data reorganization required to compile the information into neat tables is time-consuming. NYUTron and other LLMs have been successful in their ability to “learn” from text without requiring specific formatting.

Designed by researchers at the NYU Grossman School of Medicine, NYUTron is trained on text from electronic health records. Results from the study “Health system-scale language models are all-purpose prediction engines,” published in the journal Nature, show that the AI program can predict 80% of readmissions, roughly 5% better than the standard, non-LLM computer model.

To learn more about how NYUTron was developed along with the limitations and possibilities of AI support tools for healthcare providers, CDS spoke with Lavender Jiang, PhD student at the NYU Center for Data Science and lead author of the study. As a medical fellow for NYU Langone Health, Lavender’s work explores clinical predictions using natural language processing. Read our Q&A with Lavender below!

Could you talk about how you got involved in this research and how the idea for NYUTron was developed?

I first heard about the research idea during my PhD interview process, when I met Assistant Professor at Langone (and CDS affiliated professor) Eric Oermann, my current advisor. The idea began to percolate when Eric was building machine learning tools to detect clinical depression from EEG signals at Google Brain. He saw the limitations of doing clinical research at a big tech company: a lack of quality data resulting from privacy issues, and limited deployment opportunities. He also saw that many published AI-health papers never got used in practice because they gave limited consideration to the deployment scenario. When Eric became a neurosurgeon, the idea fully took shape as he considered how useful it would be to have an AI assistant that could read along with him and chime in with advice.

Strides in NLP (particularly transformers) and the quality of the data in Langone’s electronic health record (EHR) system motivated him to envision NYUTron: a BERT model pretrained on ten years of EHR notes, fine-tuned on a battery of hospital-operations tasks, and deployed in a hospital environment to assess its potential impact.
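To make the “notes in, risk score out” shape of this pipeline concrete, here is a minimal toy sketch. NYUTron itself is a BERT encoder pretrained on EHR text and fine-tuned per task; as a self-contained stand-in, this sketch substitutes bag-of-words features plus logistic regression for the pretrained encoder, and all notes and labels below are invented for illustration.

```python
import math

# Toy stand-in for the fine-tuning stage: invented clinical notes paired
# with a readmission label (1 = readmitted within a month, 0 = not).
notes = [
    ("patient stable, discharged home with follow-up", 0),
    ("readmitted previously, ongoing infection, poor compliance", 1),
    ("routine recovery, no complications noted", 0),
    ("multiple comorbidities, unstable vitals at discharge", 1),
]

# Build a vocabulary from the training notes.
vocab = sorted({w for text, _ in notes for w in text.replace(",", "").split()})
index = {w: i for i, w in enumerate(vocab)}

def featurize(text):
    """Bag-of-words vector for one note (the stand-in for a BERT embedding)."""
    vec = [0.0] * len(vocab)
    for w in text.replace(",", "").split():
        if w in index:
            vec[index[w]] += 1.0
    return vec

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Logistic regression head trained with plain gradient descent.
weights = [0.0] * len(vocab)
bias = 0.0
lr = 0.5
for _ in range(200):
    for text, label in notes:
        x = featurize(text)
        pred = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
        err = pred - label
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err

def readmission_risk(text):
    """Risk score in [0, 1]; higher means likelier readmission."""
    x = featurize(text)
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)

high = readmission_risk("ongoing infection, unstable vitals")
low = readmission_risk("stable, routine recovery")
```

In the real system, the bag-of-words featurizer is replaced by a transformer encoder whose pretraining on a decade of EHR notes lets it handle the individualized language physicians use, without the manual table-building described above.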

I decided to join the NYUTron project because I enjoy researching tools that could potentially get used in practice and benefit people. Both of my grandparents suffered a lot in hospitals before they passed, so I’m motivated to help patients like them receive better care.

The resources NYU has are unique and valuable. We had access to large amounts of patient data, and strong support from different people (hospital administrators, physicians, data managers, data engineers, cluster managers, data scientists, computer scientists, NLP researchers, NVIDIA, etc.). The development of NYUTron is the result of hard work from a large team, and I truly appreciate everyone’s support!

What elements of this research process did you find most engaging?

It stood out to me how much collaboration is required for interdisciplinary research! I found it engaging to learn from different disciplines such as medicine and hospital management.

NYUTron works as a support tool for healthcare providers. What are some of the concerns or limitations of incorporating AI technology into hospital settings?

A few things come to mind, the first being fairness. It is possible that AI technologies could exhibit bias against minority groups, and more research needs to be done to evaluate and reduce that bias. Physicians could also over-rely on NYUTron for decision-making; we need to develop protocols and more HCI (human-computer interaction) research to address that concern. Finally, the research and development of language models relies on heavy compute, which is not commonly accessible to research groups at smaller hospitals.

In considering tools like NYUTron, what are some ways in which large language models (LLMs) can assist physicians with patient care?

NYUTron could potentially alert physicians to high-risk cases that might otherwise have been missed. It could also perform real-time inference, for example chiming in with advice as a physician finishes signing a clinical note. In the future, it is possible for clinical language models to summarize medical history, look up similar cases, and bill insurance for patients.

Are there any areas for further research that this project opened you up to?

Through this work, I have become interested in learning more about conversational AI, interpretability, privacy and fairness, and causality.

By Meryl Phair


NYU Center for Data Science

Official account of the Center for Data Science at NYU, home of the Undergraduate, Master’s, and Ph.D. programs in Data Science.