Democratize AI now, says Geisinger's medical director of artificial intelligence

Editor’s Note: This is the second in a series of articles and videos interviewing top voices in health IT on the subject of artificial intelligence. To read and view the first, an interview with Dr. John Halamka of the Mayo Clinic, click here.

Dr. Aalpen Patel is a unicorn: a medical director of artificial intelligence at a major healthcare provider organization. Few hospitals and health systems have a high-level clinician devoted expressly to AI. Yet. That number is sure to increase, and fast, to keep up with the pace of AI and machine learning evolution across healthcare.

Patel is part of the AI projects at Geisinger, the prominent health system where he also serves as chair of radiology. Giving a physician input into AI projects was a high priority for the forward-thinking health system.

“I act as a consultant to those who are looking for help with building AI-based systems at Geisinger,” Patel explained. “I help develop partnerships and collaborations with data scientists, clinicians, healthcare administrators and technology experts, and help facilitate the development and deployment of AI systems within Geisinger.

“And I help facilitate and collaborate on research initiatives to develop translational AI algorithms and their applications in diagnosis, treatment and patient care,” he added.

Improving healthcare practices

Here, Patel talks in-depth on Geisinger AI programs, including ones that deal with population health, radiology, and natural language processing and large language models. He discusses his passion to educate the healthcare workforce on AI to democratize the technology. And he offers more on his role at the large health system.

“As the medical director for AI, my primary responsibilities revolve around leveraging artificial intelligence and machine learning technologies to improve healthcare practices,” he said.

“I cochair the System AI Steering Committee to apprise the executive leadership team of new developments, opportunities and risks,” he continued. “This committee has members who are clinicians, data scientists, information technology experts, informaticists, ethicists, researchers and executive leadership members.”

The steering committee and cochairs review ethical concerns related to patient data privacy, algorithm bias and the responsible use of AI in decision-making processes.

“Our team collaborates to develop a strategic vision for integrating AI into healthcare to solve real problems,” he said. “This would involve identifying areas where AI can have the most impact, setting goals, and planning the development and implementation of AI-driven solutions.

“We also have developed a pragmatic and standard approach to determine if AI is appropriate for a healthcare problem and how to create a solution,” he added.

AI and population health

Geisinger has deployed AI to help with population health efforts. Patel has been part of the program and said it can help make great strides in the field.

“Geisinger has a large population of patients that we take care of, and within that population, we have some care gaps, and those need to be filled,” he explained. “So let me give you some examples. And a lot of these are in the screening of the population.

“So, whether it’s colorectal screening or whether it’s lung cancer screening or breast cancer screening, there are folks who are eligible for screening and are overdue to get that screening done, but they’re still at risk of developing whatever condition we’re looking at,” he continued. “So, we have deployed AI tools that will look at the population, especially those who are eligible for screening, and then risk stratify them, so we get to those who need the help first, those at the highest risk, in an active manner.”

The most mature model is colon cancer screening.

Going after the higher risk patients

“Looking at a few lab parameters, demographics and some of the EHR data, in conjunction with our partner Medial EarlySign, we have a model that predicts who is at the higher risk or re-stratifies those who are eligible, and we go after the highest risk patients so we can take care of them first,” he explained.
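In software terms, the outreach step Patel describes is a ranking problem: score the screening-eligible patients, then work the list from highest risk down. The sketch below is purely illustrative; the actual Medial EarlySign model is proprietary, and the patient fields and scores here are hypothetical stand-ins for its real inputs.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    risk_score: float           # hypothetical output of the prediction model
    overdue_for_screening: bool  # eligible but has not completed screening

def prioritize_outreach(patients):
    """Rank screening-eligible, overdue patients by predicted risk,
    highest first, so active outreach reaches them soonest."""
    eligible = [p for p in patients if p.overdue_for_screening]
    return sorted(eligible, key=lambda p: p.risk_score, reverse=True)

panel = [
    Patient("A", 0.12, True),
    Patient("B", 0.87, True),
    Patient("C", 0.95, False),  # already screened; not contacted
    Patient("D", 0.55, True),
]
queue = prioritize_outreach(panel)
print([p.name for p in queue])  # -> ['B', 'D', 'A']
```

The ranking itself is trivial; the value, as Patel notes, comes from the model that produces the scores from labs, demographics and EHR data.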

“For example, when we risk stratified the patients, one in 13 had a positive finding on colonoscopy, which is about eight times higher than in the average population,” he reported. “Similarly, we have a program for abdominal aortic aneurysm (AAA), where we look at some labs and some EHR data to see who’s at risk for developing or having AAA, and then we do ultrasound screening on those folks.”

This AI program was just deployed in January 2023, and staff already have scanned about 500 patients.

“And of those, we have found 40 new aneurysms that were not known,” Patel said. “So going after these patients who are at higher risk than average [is important]. So, for example, in AAA, the enriched population has about one-and-a-half to three times higher chances of finding a positive finding than the average population.

“And last, just last month, we deployed lung cancer screening stratification,” he added. “We know we as a nation are behind in lung cancer screening, but how do we get to those folks who are at the highest risk? [AI] will allow us to get to them much faster.”

Looking at the AI model, Patel believes there is about a four times higher likelihood of a positive finding in the stratified population.

AI and radiology

Geisinger has been using AI in radiology very successfully, Patel’s own field as chair of the department.

“Radiology is a perfect fit for using certain types of AI,” he said. “And especially in the middle of the last decade, convolutional neural networks became popular and really helped supercharge the effort in radiology AI.”
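The convolutional neural networks Patel mentions are built from one core operation: a small learned filter slid across an image to pick out local features such as edges. A minimal pure-Python sketch of that operation on a toy synthetic image (real imaging models stack many such learned filters, but the arithmetic is the same):

```python
def conv2d(image, kernel):
    """'Valid' 2-D convolution (cross-correlation, as in most deep
    learning frameworks): slide the kernel over the image and sum
    the elementwise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    h = len(image) - kh + 1
    w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(w)]
            for i in range(h)]

# A vertical-edge detector applied to a tiny synthetic "image".
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[1, -1]]  # responds wherever intensity changes left-to-right
print(conv2d(image, kernel))  # -> [[0, -1, 0], [0, -1, 0], [0, -1, 0]]
```

A CNN learns thousands of such kernels from labeled scans instead of hand-designing them, which is what made the radiology applications Patel describes feasible.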

Geisinger in 2014 realized it was sitting on a gold mine of data. For example, the health system had approximately three petabytes of imaging data at the time. This raised the question: What to do with this data? As new AI techniques were coming online, staff decided to try a few out.

“We initially started with doing some analysis on chest X-rays and even doing some calculations of visceral fat and subcutaneous fat,” he recalled. “And then eventually we moved on to projects we can deploy once we had cut our teeth on some of the smaller projects. We tackled intracranial hemorrhage, where the model would detect intracranial hemorrhage in those who we do CT scans on.

“Initially the challenges were that we didn’t have access to data and so on,” he continued. “So, we got over a lot of those things and when we had access to data, we were able to access about 47,000 CTs of the head.”

Using that, staff were able to teach the machine to find intracranial hemorrhages in patients. Staff were so confident that in January 2018 they put that AI technology into use. It is being used to this day.

Preventing strokes with AI

“Along with that, our research teams were working on electrocardiogram analysis as well as echocardiogram analysis and a combination of both to see who’s at the highest risk of death,” Patel said. “Just based on those findings, and in some cases, along with EHR findings, [we found] who is at the highest risk of developing atrial fibrillation so that we can prevent strokes in some of these patients.

“We have also currently deployed lung nodule detection models; these are commercially available models because we’re not going to be able to develop everything we need,” he continued. “We’ve even deployed it outside of radiology into radiation oncology, helping the radiation oncologists segment the treatment areas, which can take hours sometimes.”

This auto-segmentation helps reduce the time it takes to segment some of these things before treatment, he added.

“And finally, we are about to deploy a prostate lesion detection and segmentation model, which is also commercially available,” he noted. “So, a lot of activities are going on, but we do realize in radiology that, with at least 10 AI efforts underway, things are a little more fragmented than we would like.

“So, our next step is to develop or work with someone to build infrastructure that allows scalability, flexibility, and the ability to swap models based on monitored performance,” he continued. “That’s our next goal.”

Closing the loop with NLP

On another front, Geisinger is using natural language processing, a form of AI, for what Patel labels “close the loop” initiatives. He expects large language models, also known as generative AI (the kind found in applications like ChatGPT), will supercharge this effort.

“We realized there are many incidental findings that happen in radiology studies and even outside of radiology,” Patel said. “So, we have to have a way to detect these, because many of these reports are not structured data. But how do you detect these in all the onslaught of information we have, and then how do you act on it?”

The answer to detection in these mountains of data is NLP AI.
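The detection step can be pictured as a pipeline that scans free-text reports and flags finding types for follow-up. The sketch below uses simple keyword patterns purely for illustration; Geisinger's actual system, per the article, uses NLP models (and, going forward, large language models), not hand-written rules, and these pattern names are hypothetical.

```python
import re

# Hypothetical patterns standing in for a trained NLP model. The pipeline
# shape is the point: scan unstructured report text, emit finding labels.
FINDING_PATTERNS = {
    "lung_nodule": re.compile(r"\b(pulmonary|lung)\s+nodule\b", re.I),
    "aaa": re.compile(r"\babdominal\s+aortic\s+aneurysm\b", re.I),
}

def detect_incidental_findings(report_text):
    """Return the finding types mentioned in an unstructured report."""
    return [name for name, pattern in FINDING_PATTERNS.items()
            if pattern.search(report_text)]

report = "Incidental 6 mm lung nodule in the right upper lobe."
print(detect_incidental_findings(report))  # -> ['lung_nodule']
```

Each flagged finding then feeds the "act on it" half of the loop, which is where the clinical algorithms below come in.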

“There are multitudes of efforts where many of the closed-loop initiatives are mostly radiology-focused,” he continued. “We are going outside of radiology and saying once we find a finding, the loop is only closed when the patient’s care is finished. That means if you find a lung nodule that requires a follow-up CT, we’ll do that.”

But if that nodule grows, then staff will do a biopsy. And if it is cancer, then staff will take care of that cancer.

NLP and clinical algorithms

“So that’s the approach we’re taking,” he explained. “The first part is detection using NLP in a variety of reports. And the second is developing algorithms, clinical algorithms, that allow us to take care of that once we’ve made the detection.

“And that is a key component of what we’re doing, not just detection using NLP, but what we do after we make that finding,” he continued. “And in that, Geisinger physicians have collaborated and come up with a program called STAIR (System to Track Abnormalities of Importance Reliably). That means let us say that patient has a lung nodule. That patient will then be referred to the STAIR program.”

The STAIR program, for example, will make sure that if a lung nodule is only 4mm, the patient does not get an unnecessary follow-up. If the risk is low, the person simply goes to their primary care doctor. If it does require follow-up, STAIR schedules a follow-up CT scan. And if it requires a visit to the pulmonologist, STAIR will schedule that, as well.
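The routing STAIR performs is, at its core, a decision rule over the finding's characteristics. A minimal sketch of that shape follows; the size thresholds here are illustrative only, not Geisinger's actual clinical criteria, which would follow established guidelines such as the Fleischner Society recommendations.

```python
def route_lung_nodule(size_mm, high_risk_features=False):
    """Illustrative STAIR-style routing for an incidental lung nodule.
    Thresholds are hypothetical, chosen only to show the branching."""
    if size_mm <= 4 and not high_risk_features:
        return "no follow-up; note to primary care"
    if size_mm <= 8 and not high_risk_features:
        return "schedule follow-up CT"
    return "schedule pulmonology visit"

print(route_lung_nodule(4))   # -> no follow-up; note to primary care
print(route_lung_nodule(6))   # -> schedule follow-up CT
print(route_lung_nodule(12))  # -> schedule pulmonology visit
```

The value of encoding the rule explicitly is the longitudinal guarantee Patel describes: every flagged finding lands in exactly one tracked pathway instead of depending on a manual hand-off.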

“What it also does is not only take care of patients well longitudinally and make sure we’re not dropping the ball on this patient, but also allows us to reduce the unnecessary referrals to our specialty care and allow us to improve access,” he said. “No. 1, we’re doing longitudinal care better. No. 2, we’re improving access for those patients and using resources appropriately.”

Educating employees about AI

Patel is passionate about educating the workforce about AI, which he said is especially important in healthcare. He has begun discussions about creating an AI sandbox at Geisinger, designed to democratize AI by allowing gradual learning and experimentation with appropriate precautions.

“As with everything, we have to understand the tools we use, and especially in healthcare, because as physicians and healthcare providers, we always like to understand the tools we use,” he said. “And when we don’t understand them, we lose trust in those tools.

“So, allowing people an environment to learn what different machine learning techniques are, what the limitations are, what the advantages are, will really help them collaborate with the machine learning experts to devise these tools or even use them better in their practice,” he continued.

That’s one of the reasons Patel and colleagues are trying to develop an “AI sandbox.”

Lectures, experiments and more

“One of the compartments of the sandbox would have curated lectures, whether it’s our own or whether they are publicly available,” he explained. “Lectures for beginners, for intermediate and for advanced learners. And once we’ve done that, the second box would be an environment where they can experiment.

“Whether it’s the Google Colab or Jupyter Notebook or JupyterLab environment and associated data sets that are publicly available or synthetic data they can experiment with, there is nothing like actually doing some coding and experimenting with the data sets to really understand what the limitations and advantages are,” he continued.

The other two boxes would be the ability to collaborate within Geisinger and outside of Geisinger.

“So that provides an environment where you can do some developments without the data leaving Geisinger,” he concluded. “That’s the sandbox we’re trying to create.”

To view a video of the entire interview with Dr. Aalpen Patel, including bonus content not in this story, CLICK HERE.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: [email protected]
Healthcare IT News is a HIMSS Media publication.
