John Halamka is the President of the Mayo Clinic Platform and a leading expert in digital health and AI. He has traveled to 21 countries, helping to scale digital health solutions and address regulatory and ethical challenges in the reuse of healthcare data.
Topics addressed in this discussion:
1. Differences in regulatory frameworks and cultural expectations across countries.
2. Comparison of the U.S. and European approaches to AI and data exchange.
3. Risks associated with generative AI and the need for credibility scores.
4. Observations from various countries on AI adoption and the importance of local tuning for algorithm validation.
5. Data Standards and Future Trends.
6. Advice for Governments and Healthcare Institutions: encouraging a proactive approach to AI adoption, starting with low-risk projects and building trust and reliability.
Website: www.facesofdigitalhealth.com
Newsletter: https://fodh.substack.com/
YouTube: https://www.youtube.com/watch?v=tH9qYpFW-W8
[00:00:00] Dear listeners, welcome to Faces of Digital Health, a podcast about digital health and how healthcare systems around the world adopt technology, with me, Tjasa Zajc. Discussions around AI seem much less hyped and overexcited in 2024. After the initial enthusiasm, especially fueled by generative AI, it's time for real-world
[00:00:28] applications and figuring out the nitty-gritty details of regulation. In this episode, recorded at HLTH Europe 2024, you will hear from John Halamka, President of the Mayo Clinic Platform and a leading expert in digital health and AI. John has travelled to 21 countries helping to scale digital health solutions and address
[00:00:54] regulatory and ethical challenges in the reuse of healthcare data. And in this discussion, you will hear a little bit about the differences in regulatory frameworks and cultural expectations across countries, a comparison of the U.S. and European approaches
[00:01:12] to AI and data exchange, and the risks associated with generative AI and the need for credibility scores for different AI solutions. Enjoy the discussion, and if you like the show, make sure to leave a rating or a review wherever you get your podcasts. That's especially easy on Spotify.
[00:01:35] And if you have a little bit more energy, do leave a review wherever you listen to your podcasts. Thank you. Now let's dive in. John, thank you so much for taking the time to share a few thoughts about AI while we're here at HLTH.
[00:02:04] I know that your job is to touch the lives of 4 billion people, so how is that going? Well, so when you look at the number of patients in the world that have electronic health data and you look at the countries that have regulatory frameworks that are friendly
[00:02:18] and political stability, it's about 21 countries. And country by country, you say, how do we empower and scale digital health? It all starts with data. And the challenge, of course, every country has different regulatory frameworks but
[00:02:32] also different cultural expectations about how data of the past can be used for the future. So my role, traveling the world, 21 countries this year, is helping each country scale and answer the question of ethical, privacy-protected reuse of data, so the patients of the future
[00:02:49] benefit from the patients of the past. So far, so good. We see in the U.S. that there's a lot of connectivity of data, or creating of data pools for research: there's Epic Cosmos, there's Komodo Health, there's the Mayo Clinic
[00:03:03] Platform, which goes even beyond U.S. borders, so you're connected with Brazilians, with Koreans, with others. I wonder, because in Europe we are currently in the middle of the EU AI Act, trying to figure out how we're going to manage data, and there's GDPR.
[00:03:17] What's your perspective, with all your travels and all your knowledge, and also from building and being the president of the Mayo Clinic Platform? I love all countries equally, but as you ask: where are countries willing to take risk and work at large scale and move rapidly?
[00:03:33] I would say Denmark, the Netherlands, Sweden. And of course we're in the Netherlands today. Tonight I'll be in Sweden, and it's meetings at government level to ask: what's the right organizing principle? Is it a medical center? Is it a region? Is it a country?
[00:03:50] And to be clear, GDPR is not a restriction on the secondary use of data; it tells us how. The EU AI Act doesn't prohibit the use of AI; it gives us guardrails and guidelines. So I would say certainly Europe is moving fast.
[00:04:06] The U.S. is innovating quickly with technology, and there have been several regulatory and non-regulatory approaches to building guidance around AI, like the Coalition for Health AI. I think our job, because no one has the right answer, is to just work together.
[00:04:24] And if anything, my job is about building that togetherness. You often emphasize, or I heard you say at HLTH in Las Vegas, that when it comes to generative AI, which has been all the hype in the last 12 months, the challenge is that
[00:04:39] when the output is different every time, how do you actually regulate that? What's new in this field, in figuring out the best way to regulate generative AI in healthcare? Two approaches.
[00:04:52] First, we know it's going to have hallucinations; it's going to have some answers that are not so good. So what's the risk? Imagine that we decide on a use case: a summary of your historical record so I can read it more easily.
[00:05:06] Okay, if it's inaccurate, it's probably not going to do a huge amount of harm. If, on the other hand, we say generative AI should decide on your treatment plan, that could do real harm. So the first issue is to look at the risks of every use case and
[00:05:20] go forward with those administrative, low-risk, high-yield use cases, and that's happening now. Second, we're going to need a credibility score, because it may very well be, as we said, that one moment the answer is brilliant and the next moment the answer is horrible.
[00:05:34] So you need to assign a numeric credibility score, because if the answer is bad, then another prompt might be in order. Companies are starting to build credibility measures, quality measures, for generative AI. If I'm not mistaken, the Mayo Clinic uses around 240 algorithms, right?
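A minimal sketch of the credibility-scoring loop described above: generate an answer, score it against the source record, and re-prompt if the score falls below a threshold. The function names, the toy word-overlap score, the threshold, and the retry logic are illustrative assumptions, not any vendor's or the Mayo Clinic Platform's actual implementation.

```python
from typing import Callable

def score_credibility(answer: str, source_record: str) -> float:
    """Toy stand-in for a real credibility measure: the fraction of words
    in the answer that also appear in the source record (0.0 to 1.0)."""
    words = answer.lower().split()
    if not words:
        return 0.0
    source = set(source_record.lower().split())
    return sum(word in source for word in words) / len(words)

def answer_with_credibility(prompt: str, source_record: str,
                            generate: Callable[[str], str],
                            threshold: float = 0.8,
                            max_attempts: int = 3) -> tuple[str, float]:
    """Generate an answer, score it, and re-prompt while the score stays
    below the threshold; return the best answer and its numeric score."""
    best_answer, best_score = "", 0.0
    for _ in range(max_attempts):
        answer = generate(prompt)
        score = score_credibility(answer, source_record)
        if score > best_score:
            best_answer, best_score = answer, score
        if score >= threshold:
            break
        # A low score suggests another prompt is in order.
        prompt += "\nThe previous answer scored low; stick closely to the source record."
    return best_answer, best_score
```

Pass the model call as `generate`; keeping the best-scoring answer and capping the number of retries keeps the loop cheap, while the numeric score gives downstream workflows something concrete to act on.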
[00:05:51] So at the moment we have about 250 predictive algorithms, all validated, many through FDA and CE, but we have eight major generative AI projects. Those are all being done under a research framework because, as I say,
[00:06:07] generative AI, while useful, may not be ready to be given directly to patients at this point. In June 2024, we're crossing the hype, so we're decreasing the expectations a little bit. What kind of use cases do you have at the Mayo Clinic for AI and
[00:06:23] generative AI? So if you had to cluster all those 250 algorithms, is it 50% for administrative things? How many of them are used for diagnostics? And now we're talking just AI generally. All right, so I have a slide which goes through this in detail, but
[00:06:39] if you were to look at that slide, it's a wheel, and on the wheel you'll see about half are administrative in nature: supply chain, revenue cycle, workflow enhancement, staff augmentation. Then we go into fields like cardiology, early diagnosis of disease,
[00:06:55] prediction of cancer, assessment of neurological conditions like Parkinsonism or other neuromuscular diseases, radiation oncology treatment planning. So you've got a whole variety of clinical use cases, but that's predictive AI. The world of generative AI includes such things as summarizing a record, writing a discharge summary, helping with nursing
[00:07:17] notes, helping interpret genomes, and one speculative early project: can we predict a patient's journey? Looking at hundreds of millions of patients and every event in their lives, what's next for you? Do you need a lab test, a visit, a surgery? We're just starting on that journey.
[00:07:39] If I go back to the Mayo Clinic Platform: I mentioned earlier that you're connected to other countries, to institutions from other countries, and the platform works based on a federated approach. Exactly. And can you talk a little bit about the validation of algorithms? So
[00:07:54] oftentimes there's data drift, and the algorithms are not applicable to different institutions if they're designed in one institution. So I really wonder, when you even go across borders and you actually have a different
[00:08:08] population, a different race, what does that mean for algorithm validation, and how do you even fine-tune that? So on the one hand you could say, I want to look at an algorithm and how it was generated. What's its data card and model card?
[00:08:24] And that's great. You can at least get a sense of whether the algorithm is following good hygiene and good process, but every algorithm needs local tuning. So to your point, Mayo Clinic in Minnesota may not have a very
[00:08:36] large population of Slovenians, so what would you need to do? Validate the algorithm as having been put together in a reasonable fashion, but then take it to a Slovenian dataset and assess how well it works on subgroups of that population,
[00:08:53] and then tune it from there. So that's why our very federated approach is: yes, validate the overall process, but then also locally tune. And what are the biggest challenges in this regard so far when you're testing the models? What are your observations
[00:09:08] currently of the federated approach to using or scaling algorithms? It has to start with data curation, right? The data we gather in healthcare is not very clean, and so the first step, and it takes nine to twelve months,
[00:09:24] is to turn the data that was gathered for transactional purposes, medical care, into highly organized, vocabulary-controlled data for AI algorithms in a privacy-controlled environment. So data cleanup and curation is the hardest part. So if I understand correctly, when you connect
[00:09:42] different institutions, is this just for research purposes? You take specific data at a specific time and then try to clean it so it's comparable, and then you test and try to get research results based on that data?
[00:09:57] I mean, the answer is: for many different use cases. The data is never sold, the data is never exfiltrated, it doesn't leave an organization or a country. The data may be used for research: is this drug safe?
[00:10:11] Is there a new treatment protocol? But it's also used for the creation of products and services, like novel algorithms that will then be used in the marketplace, and for validation of algorithms so they can go through a regulatory process. So it's used for both.
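A minimal sketch of the local-validation step described earlier: take an externally developed model's predictions on a local dataset and compare performance across population subgroups before deciding where to tune. The column names, the subgroup variable, and the AUROC metric are illustrative assumptions rather than the Mayo Clinic Platform's actual pipeline.

```python
# Hypothetical sketch: evaluate an externally built model on a local
# dataset, subgroup by subgroup, to see where local tuning is needed.
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_performance(df: pd.DataFrame, subgroup_col: str = "ancestry") -> pd.DataFrame:
    """Expects columns 'y_true' (observed outcome, 0/1) and 'y_score'
    (the model's predicted risk), plus a subgroup column."""
    rows = []
    for group, g in df.groupby(subgroup_col):
        if g["y_true"].nunique() < 2:  # AUROC is undefined with only one class
            continue
        rows.append({
            subgroup_col: group,
            "n": len(g),
            "auroc": roc_auc_score(g["y_true"], g["y_score"]),
        })
    # Subgroups at the low end are the first candidates for local tuning.
    return pd.DataFrame(rows).sort_values("auroc")
```

Running the same report before and after tuning is a simple, transparent way to show that a locally tuned model works for the population it will actually serve.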
[00:10:25] Okay, in this whole story, and especially with generative AI and new AI approaches to structuring data or even coding the data, where do you see the role of data standards in healthcare?
[00:10:38] I know that it's going to be easier to structure data, but what's the need for different standards, and how do you see them working together in the future? Sure, whenever I talk about standards I talk about three kinds: content, vocabulary, and transport.
[00:10:55] So a content standard: how are we going to store data at rest? It's pretty clear internationally that OMOP and extensions to OMOP from OHDSI are the way to store data at rest. I see country after country moving to an OMOP model. Vocabularies: SNOMED for descriptions of
[00:11:12] medical observations, LOINC for labs; then there are other country-specific standards like RxNorm for pharmaceuticals or various kinds of World Health Organization standards for codifying demographics. But there's a set of vocabulary standards commonly in use globally, and then, for moving data from place to place,
[00:11:32] FHIR, with the R4 and R5 normative editions. So the great news in 2024 is that the number of data standards we have is much, much smaller than it used to be, and they are more generally accepted. Maybe just one last question: you travel
[00:11:48] a lot. What are your general observations in terms of how AI in healthcare is developing? Is there something specific that you saw recently that inspired you? Because, especially from Europe, we do tend to look to you as far as progress and
[00:12:05] interest in that scene go. I think two observations. Most countries that I visit have birth rates of less than 1.5, which means fewer babies born, which means fewer doctors and nurses, and yet they're aging societies: people are living longer and needing more services. Every country is saying, unless we have
[00:12:23] AI that will extend our existing workforce or refine our existing capacity to deliver care, we're never going to make it through this demographic shift. So that's, let's say, observation one. Two is that every country has what I'll call a Gaussian distribution of adoption:
[00:12:40] there are those who say, give me AI today, I'm willing to take the risk, and others that say, oh, that's pretty risky, why don't I wait until someone else does it. And so it's just interesting, as I visit
[00:12:51] country after country, that the dividing line between those who are early adopters and those who are fast followers is clear at a cultural level. Is there such a thing as common advice, or the most common thing that you usually say to governments or hospitals when you meet them and
[00:13:08] they ask you: we hear a lot about AI, we have no clue what to do with it. So what's your answer to that? The answer is: you need to start on your journey. So it's fine if you have a very low
[00:13:18] risk tolerance; let's do one hospital, let's do one algorithm, and then build from there once you have trust, once you have a sense of transparency, of consistency and reliability. That is to say, every country is different, but every country is willing to start somewhere. Don't just wait;
[00:13:36] it's time to move now. You've been listening to Faces of Digital Health, a proud member of the Health Podcast Network. If you enjoyed the show, do leave a rating or a review wherever you get your podcasts, subscribe to the show, or follow us on LinkedIn.
[00:13:52] Additionally, check out our newsletter; you can find it at fodh.substack.com. That's fodh.substack.com. Stay tuned.