The Agentic Patient 4: Finding Insurance and Red Team Analysis


When Demetri Giannikopoulos was diagnosed with multiple sclerosis, his community neurologist handed him a sheet with fifty medication options and told him to pick one. That was a long time ago. Today he's the Chief Innovation Officer at Rad AI, overseeing how artificial intelligence gets deployed in radiology across US health systems, and he's spent two decades learning how to navigate a healthcare apparatus that, in his words, "is not designed for sick patients."

In this conversation Demetri explains why the most valuable thing AI has done for him as a patient isn't clinical — it's the 50 pages of insurance underwriting documents he fed into ChatGPT to save several thousand dollars on a plan that looked, on paper, worse. He walks through his "red team" prompting technique, the error he caught in a radiology report where legacy speech-recognition software had dropped the word "no," and why he thinks the regulatory debate around AI in healthcare should look less like drug approval and more like how we regulate nuclear power. If you want a ground-level view of what AI can and cannot do inside the American medical system, this is where to start.


Additional resource with prompt tips: https://aipatients.org/
Scanxiety toolkit: https://edge.sitecorecloud.io/americancoldf5f-acrorgf92a-productioncb02-3650/media/ACR/Files/Clinical/Patient-Family-Centered-Care/PFCC-Scanxiety-Toolkit-Brochure-Digital-Version.pdf
Full Agentic Patient series: https://www.facesofdigitalhealth.com/agentic-patient-blog


Detailed summary and tips from Demetri: https://www.facesofdigitalhealth.com/agentic-patient-blog/red-teaming-your-health-plan-demetri-giannikopoulos-on-responsible-ai-the-cures-act-and-what-patients-should-actually-do

6 tips on AI use for patients: https://fodh.substack.com/p/the-agentic-patients-are-here

[00:00:06] Dear listeners, welcome to Faces of Digital Health and a special series called The Agentic Patient, which is a series about how real patients are using AI to navigate their health, such as finding insurance, managing symptoms, tracking disease and more. In this series, we go into details. We talk about which tools patients use, which prompts, what's working, what isn't.

[00:00:32] These discussions are intended for informational purposes only and should not be relied upon as a sole source of medical information or a substitute for professional medical advice, diagnosis or treatment. Always consult a qualified healthcare provider regarding any medical concerns or decisions.

[00:00:52] The speaker in today's episode is Demetri Giannikopoulos. He is someone who knows AI up close, from the vendor perspective, not just the patient perspective. Demetri is the Chief Innovation Officer at Rad AI, overseeing clinical integration of AI in radiology across US health systems.

[00:01:22] On the personal side, he has lived with multiple sclerosis for roughly 20 years and used AI to find the best insurance plan in the US. So he speaks from both sides of the microphone: as a patient who has been navigating a broken insurance apparatus for over two decades, and as an executive deploying AI into radiology workflows. Enjoy the show.

[00:01:52] Also, check out the agentic patient website on facesofdigitalhealth.com and check out our newsletter, FODH.substack.com, which also has a summary of the first few episodes of The Agentic Patient with six tips on how to use AI if you are facing a medical issue. Now let's dive into today's discussion.

[00:02:31] Demetri, hi, and thank you so much for joining me here on Faces of Digital Health, or actually a special series called The Agentic Patient, through which I am exploring how patients use AI, how we as patients should think about AI, what we should be mindful of, what can be very useful to know if you want to use AI as a patient,

[00:02:57] and to cover all the aspects of AI use in that context. You have been working in healthcare for years in several different roles. You're currently the Chief Innovation Officer at Rad AI, where you oversee the deployment and responsible clinical integration of AI technologies in radiology, mostly in systems across the US, but you had several different roles even before that.

[00:03:25] And you've also been living with MS, multiple sclerosis, for two decades. So I'm really curious to know: how did the fact that you're a patient impact everything that you do in healthcare? Did it have an impact? How do you see the role of the disease in everything that you do for healthcare? Tjasa, thank you for having me on today; I'm looking forward to the conversation.

[00:03:50] My path to diagnosis for multiple sclerosis took a decade from my initial presentations. I didn't even know that until probably five years ago. I found an old journal, a handwritten journal, where back in 2003, I was complaining about numbness in my shoulder, my arm, my hand. I think I chalked it up to an accident I had on my bike, so I went to a chiropractor. That's a symptom I've had ever since.

[00:04:19] With multiple sclerosis, there's a relapsing-remitting form, where symptoms go away and then come back. And that's one of those symptoms that has gone through that relapsing-remitting pathway. Even getting to the diagnosis took a long time. And it was, I would say, nobody's fault. It wasn't my fault as a patient. I also lived in Florida, and I'd been in a lot of car accidents. There's a whole long story behind the reasons why this got categorized other ways.

[00:04:46] I had things like MRIs of my lumbar spine instead of my C-spine, my neck; the hip area and lower back instead of the neck, because a pinched nerve is what people assume. The path to even get there took a really long time. And luckily for me, I've had a very benign course of progression. It's been some numbness and things like that; it hasn't really done much in the past decade. By the time I finally got the diagnosis, my wife said,

[00:05:16] Okay, you're going to get a PCP. My wife is a nurse practitioner and has practiced family care for more than a decade. But at that point, she was just starting her career. And she's like, you're going to a PCP and just have them talk about what's wrong, what's going on. And he ordered an MRI on my neck. And again, we thought it was a pinched nerve or something like that. And I distinctly remember this. I was on a business trip. I was at a friend's house in Massachusetts. And he called me at 6 p.m., which was strange.

[00:05:45] You don't usually get doctors' calls that late in the evening. And he was like, it looks like you might have multiple sclerosis. My only context for multiple sclerosis at that point in time was The West Wing, the TV show where the president of the United States, Jed Bartlet, had multiple sclerosis. And I was like, the TV thing? What? And I was 30...

[00:06:13] I think it was like 34, 35 at that point in time. And he said, we're going to get another study to confirm it, but these are really good indicators of it. And this was long before AI and all those areas. So I started Googling it, of course, with my friend there. I found some great resources on the National MS Society website and things like that. But ultimately, I went through all the diagnostic procedures and got the diagnosis.

[00:06:37] And by that point in time, it felt like a relief because I just had all these things happening. We didn't know why. And it's OK. At least now we've got something that it is and can actually start treatments at that point in time. I had one community neurologist that honestly I didn't have the best experience with. He handed me a piece of paper that had 50 different medicine options on it. He was like, you could do any of these. Which should I do? And he's like, whichever one you want to do.

[00:07:06] I ended up establishing care at Johns Hopkins instead, with a great neurologist who specializes in multiple sclerosis. And I was ultimately able to get on... now they have oral therapies, so it's no longer injections. Depending on your disease progression, you may need those. But for those at the lower-tier threshold, you can go with oral medications. It's two pills twice a day now. Yeah.

[00:07:31] I think the therapies for many autoimmune diseases have advanced a lot in the last 20 years, luckily for patients. So that makes things much easier. Now, that was the pre-AI era, and we're in the AI era. Have you used AI for your disease research, management, understanding, prognosis?

[00:07:57] Has AI had any impact on you and your disease management? For my personal disease management, the clinical side of it, not really. Again, it's very benign. I've had it for a long time, pre-dating AI. I have strong trust with my care team on how this is handled.

[00:08:21] Where artificial intelligence, and large language models in particular, have been absolutely priceless is on the insurance side. In the United States, we have many different types of plans. My organization recently switched our insurance plans coming into the new year. And one of the things I knew I had to do was get my full spine series, effectively brain all the way down. And that's three long MRI studies.

[00:08:50] It's almost 90 minutes to get those, and they're very expensive. I had been quoted under last year's plan, and my co-pay amount was going to be $3,500 for those three. I asked what the cash pay was, and they said the cash pay was $3,100. I was very confused. Chargemasters are a fascinating thing; most patients should know what they are, but hopefully they don't have to deal with them too much.

[00:09:18] So when we were looking at the new insurance plans, I pulled all of the information, all of the underwriting information; went to the websites, downloaded all of it. I put all of that into ChatGPT. And unexpectedly, I discovered that a high-deductible health plan was actually the best one for our use case. My wife also had cancer; no evidence of disease currently.

[00:09:44] And the better plan, the one that has higher premiums, still has co-pays on things like imaging, like surgeries, and all that kind of stuff. The high-deductible plan had none of those after you hit your deductible. So my co-pay would have been $3,500; my deductible is like $3,600. And now I don't have to pay for anything else. If I get more imaging, it's $0. That's another couple thousand dollars saved.
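The plan arithmetic here can be sketched with illustrative numbers. Only the roughly $3,500 imaging co-pay and $3,600 deductible come from this episode; the premiums and the traditional plan's deductible below are invented purely for the comparison:

```python
# Illustrative sketch only: the monthly premiums and the traditional plan's
# deductible are made-up numbers; the $3,500 imaging co-pay and $3,600 HDHP
# deductible come from the episode.
def annual_out_of_pocket(monthly_premium: int, deductible: int,
                         copays_after_deductible: int) -> int:
    """Yearly cost assuming the deductible is fully met: premiums plus
    deductible plus any co-pays that still apply afterward."""
    return monthly_premium * 12 + deductible + copays_after_deductible

# "Better" plan: higher premiums, and imaging still carries a co-pay.
traditional = annual_out_of_pocket(monthly_premium=600, deductible=1500,
                                   copays_after_deductible=3500)

# High-deductible plan: $0 co-pays once the ~$3,600 deductible is hit.
hdhp = annual_out_of_pocket(monthly_premium=350, deductible=3600,
                            copays_after_deductible=0)

print(traditional - hdhp)  # → 4400: the "couple thousand dollars" saved
```

The point the sketch makes is the counterintuitive one from the episode: for someone who will certainly hit the deductible early in the year, the high-deductible plan can come out cheaper overall.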

[00:10:11] So that's the way that I've absolutely gained so much. And it turned into an unexpected counseling-type thing, because I was like, thank you. I'm always trying to be nice to the robots. Thank you for helping me figure this out. I'm amazed this has to be so hard. I was using OpenAI, it was ChatGPT, and it came back with something like: healthcare is not designed for sick patients. You did everything right. You broke it down. You looked at the traps.

[00:10:41] You looked at the cash pay versus co-pay, all that kind of stuff. And I was like, wow, it got me. That it gave that emotional an answer to a relatively simple thank you. I think there's a lot there. And I think earlier in my disease progression, I probably would have gone back to it heavily, to consult and understand a little better. Yeah. What did you ask? How much information did you give to the chatbot? Was it a general question?

[00:11:07] I don't know if you remember the exact wording, but did you throw a lot of information in already, or did you ask a general question and the answer came based on everything that's in the background for the AI? So, I am a heavy prompter. I don't ever just ask a general question. I say: imagine you're an insurance underwriter and you're trying to find the best way to design a plan for a patient where you have to pay the least.

[00:11:35] So you're coming at this from the adversarial perspective, and I'm coming at this from the patient perspective: I want to pay the least. So help me find the best way. And then I fed in what had to be at least 50-plus pages of documentation, where I literally cut and pasted it. So it was a massive context window. Of the clinical documents, the clinical documentation? Of the insurance plans. So I was putting all that in and saying: compare this plan to that plan, because these were the three I could choose from.

[00:12:04] And given my history, my wife's history, the fact that I know I need imaging in January and my wife needs blood tests and imaging in January and February as well. So we're going to hit our deductible early. What is the best route to go for this and kind of gameplay it out for me? Yeah. Did it turn out that the AI was right? So did you manage to save costs in the end? Yes. We hit our deductible literally in the first two weeks of January.

[00:12:34] So we ended up paying a lot more at the very beginning because of this, versus spread out over a year. But again, I had gotten the cash pay and the co-pay quotes under the old plan, and the co-pay was basically the same as the new plan's deductible. We were going to pay just as much in the co-pay for my imaging alone. That was, I think, a good jump in the end.
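The prompt recipe Demetri describes, an adversarial persona plus the full plan documents plus the known upcoming care, can be sketched as a prompt-building function. Everything here is illustrative: the function, field names, and wording are a paraphrase of the conversation, not any real tool or API, and the plan texts would be the complete benefits documents pasted in, not summaries:

```python
# Sketch of the adversarial "underwriter" framing with full-document context.
# All names are hypothetical; plan_docs maps a plan name to the full text of
# its benefits documentation.
def build_plan_comparison_prompt(plan_docs: dict[str, str],
                                 known_needs: list[str]) -> str:
    persona = (
        "Imagine you're an insurance underwriter trying to find the best way "
        "to design a plan. You're coming at this from the adversarial "
        "perspective; I'm coming at it as the patient who wants to pay the "
        "least. Help me find the best option and game it out for the year."
    )
    # Concatenate every plan document in full: the whole point is to give the
    # model complete context, not a single excerpt.
    docs = "\n\n".join(f"=== {name} ===\n{text}"
                       for name, text in plan_docs.items())
    needs = "\n".join(f"- {n}" for n in known_needs)
    return (f"{persona}\n\nKnown upcoming care:\n{needs}\n\n"
            f"Full plan documents:\n{docs}\n\n"
            "Compare these plans against each other given that history.")

prompt = build_plan_comparison_prompt(
    {"Plan A": "...", "Plan B": "...", "Plan C": "..."},
    ["Full-spine MRI series in January",
     "Blood tests and imaging in January and February"],
)
```

The design choice worth noting is that the known care needs go in explicitly, so the comparison is grounded in the year you will actually have, not a generic average member.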

[00:12:57] The entire billing process, though: I had prior auth with my prior insurance, because I was having my imaging the second week of January. And it caused all kinds of confusion, because my old plan still showed as active, and now I had a new plan. And the hospital I was at, they got prior auth from my old plan. I told them I had left that one. I spent an hour on the phone with a wonderful guy; his name was Gilbert.

[00:13:25] Not going to name the facility, but he got on with the prior auth people for my new insurance plan, and he got all the confirmation codes. He made sure I was there, that I heard it; I wrote it all down. It was a wonderful experience. But unfortunately, because I had that change in insurance and it still showed my prior one was active, they billed both. And then my prior one paid, and then my current one rejected. And now it's the middle of March and I'm still trying to figure this out.

[00:13:52] And I've got denials of benefits coming from both, asking me to coordinate care. And I'm like, whoa. Yeah. I know that there are some efforts going on to digitize prior authorization and make it electronic on all ends. So hopefully that will take away some of the confusion and coordination. Yeah, absolutely. Hopefully. And by the way, I know I didn't answer the question: how has this impacted my professional career?

[00:14:19] I'd say, in 2013, my mother passed away from cancer. It was a curable cancer, realistically, one of the ones that can be easily addressed. After she had received prior care for that cancer, she herself did not have insurance. This preceded the Affordable Care Act. And because of that, she was basically one of those emergency room patients, and had a lot of white coat syndrome: you're scared of going to the doctor because you don't know how much it's going to cost you or what it's going to do to your life.

[00:14:47] And I remember distinctly, she got her "you're cured" letter from the cancer care process the month she died. And that was one of those unexpected daggers in the heart. When you get that kind of letter the same month that your family member passes away, it hurts. And I was working at that point in time for Nuance Communications.

[00:15:14] And we had these follow-up solutions for what has been a challenge in radiology in particular, where you have these incidental findings, or these treatment pathways through oncological care, where patients end up falling through the cracks. And it completely changed my perspective as a business person working with these solutions. It was no longer this abstract "we're going to help patients work through the system." It became: we're going to help people like my mom avoid this in the future.

[00:15:44] We're going to help them, with this technology, live their lives as patients. And we're going to enable the healthcare system to help these patients live their lives, because they've got so much more going on than just their cancer care or whatever it is. And that's just continued throughout my career. My personal diagnosis; my children were born a bit premature, so I learned a lot about NICUs with the birth of my children.

[00:16:10] And then my wife's cancer care journey most recently, watching her navigate the system like a pro, because she is a pro. But thinking: how would we have done this if we didn't have her? You know what I mean? How could this have been accomplished so seamlessly without somebody who knows every in and out and the people to call? Yeah. It's definitely a maze that you need special skills to navigate, unfortunately.

[00:16:34] And I guess this is exactly where AI, to some degree, can be super helpful. So, if you look back at how you found your insurance plan with the help of AI, if you had to write a recipe of the things that you did, what would that recipe look like? What were the steps that you took?

[00:16:58] And reflecting on that, what do you think are the necessary steps people need to take if they want to use AI to solve any medical-related challenge? Never ask a simple question, because you'll often get a simple answer, and that can take you down the wrong course. I very much believe that with most anything; that's the way I approach life too, so it's not unique to artificial intelligence. The big thing is: add the other information.

[00:17:25] Originally, when I was doing that prompt analysis for the insurance thing, I took the simple block: I went into my portal and it had the block of "these are the coverage things" and all that. And I put that in, and I saw that's not going to be enough, because there are all of the hidden loopholes. So I had to click "more information," and the more information took me to the insurance company website, which had the complete benefits package. And those were the large documents.

[00:17:53] So that's ultimately what I worked off of, the large documents, and as much as possible, work off of that. Don't feed in a single episode, a single piece of information, because that's not the way it works. That's just not the way healthcare works. If you're looking at the clinical side, the diagnosis isn't happening in a vacuum, or it shouldn't happen in a vacuum. Same thing with this type of analysis on insurance plans: you need the full context and the full comparison across all of them. So try and get as much information as you can.

[00:18:21] If you're ever trying to get some clinical analysis, download not just the one radiology report or the one encounter's notes; get your past nine months of encounters, put all that in, and see what it comes out with. Then also be skeptical, and always ask it to red team. Red teaming is a term from security: basically, the adversarial team. You have a blue team, and the red team tries to undermine what's going on on the blue team's side.

[00:18:46] Have it challenge itself, because you might get something that it thinks you want to hear. I use multiple models; I use Anthropic as well. And they each have their own quirks. I'll tell Anthropic: just stop asking me questions, you've got the answer. It's a bit more direct. But I always make sure to vet it, go through it, and really push it. I'll even ask it, if I'm trying to do something on writing... I don't love em dashes; I use them, but only a couple.

[00:19:15] So I'll be like, hey, go ahead and go through and remove all the em dashes, and I'll see it update. And they're still there. And I'll have to prompt it: did you remove all the em dashes? And it's like, oh, you're right, let me go through and update that again. So you've got to trust but verify, and then push, and then ask it to challenge itself before you can really work with it, at least today.
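The "red team" and "trust but verify" habits can be captured as a reusable follow-up prompt. A minimal sketch, with wording paraphrased from this conversation rather than an exact quote, and no real chat API assumed:

```python
# Sketch of a red-team follow-up: after getting a first answer, ask the model
# to attack its own conclusion instead of elaborating on it.
def red_team_followup(original_question: str, first_answer: str) -> str:
    return (
        "Now act as a red team. Assume your previous answer is wrong or "
        "incomplete, and argue against it as hard as you can.\n\n"
        f"Original question: {original_question}\n"
        f"Answer to attack: {first_answer}\n\n"
        "List the strongest objections, what evidence would change the "
        "conclusion, and anything you may have said just because I wanted "
        "to hear it."
    )

followup = red_team_followup(
    "Which of these three plans costs me the least next year?",
    "The high-deductible plan is cheapest for your situation.",
)
```

Sending a follow-up like this forces the model out of agreement mode; the last line targets exactly the failure mode mentioned above, an answer generated because it is what you want to hear.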

[00:20:05] Yeah. So: I'll have it outline the symptoms by timeline, severity, and context, then help me think about what patterns might matter and what questions they raise for my doctors. And there are more examples like that on that website, so I would encourage people to check it out, because it's one thing to say that you shouldn't ask general questions, but it's much harder to actually do that in practice.

[00:20:32] And at the same time, I guess, when we say that we should challenge what we ask: it's so fascinating to me, if I try to create a message for a specific recipient and I ask AI to help me do that, and then, when I already think it's good, I say, now imagine you're the other person; how would you perceive this message? And then you get tons of additional information: this profile cares about this.

[00:21:00] If you want to get their attention, you need to be mindful of this and this and this. So all this is super useful, but also, of course, very time-consuming and, I guess, not super easy. We definitely have much work to do when it comes to AI literacy and how to best use it. One trick I have, because I'm a talker, if that's not obvious, is I record voice notes and then I take the transcript of the voice notes.

[00:21:29] So most of my prompts are a couple thousand words, because I'm usually giving the context: this is where I'm thinking, this is what I want to accomplish, these are my base ideas, this is what I understand. And then I put that in, along with any supporting documentation that's already written. It can handle it; these things are amazing nowadays. It used to be, with the natural language processing algorithms of decades past, the context window was so small you could only give it simple queries.

[00:21:58] Nowadays, you can go all over the place. Yeah, that actually helps you. Yeah, absolutely. It's amazing how technology is changing. Amazing in the sense that it's very capable, but also, not exactly worrisome, but increasingly difficult for us as people to just follow all the developments.

[00:22:22] It's just impossible to digest all the advancements that are happening, which is exactly why the development and the responsibility are so important. And I mentioned earlier that you are the Chief Innovation Officer at Rad AI, where you oversee the deployment and responsible clinical integration of AI technologies. When I read this, I immediately thought: how do you see the evolution of AI in healthcare from the responsibility lens?

[00:22:52] Has it been used irresponsibly anywhere? What's your perspective on that? Has it been used irresponsibly anywhere? Not that I'm actively aware of. I would say... What do we even talk about when we talk about responsible use? I guess that's the core issue. Right, yeah.

[00:23:15] Because I remember, already a few years ago, Stat News had several reports, usually about insurance companies, where they would automate things and reject care for the elderly. And then there would be no person to turn to or talk to, because everything was automated. So people were denied care, and consequently, outcomes were much worse.

[00:23:41] Of course, the algorithm achieved its goal of saving money for the insurance. But the patients were the ones paying the price. So that might be one example. But what do you talk about when you talk about responsible AI? So, even in that insurance example: yes, we have a new technology powering the decision. But was there ever a pathway for the patient to speak to the insurance about the denial, clearly, in simple terms? That's the question. This is all happening in a system.

[00:24:10] So maybe AI was behind the scenes making the determination on coverage or non-coverage. But it was still operating in the already broken system, which didn't support simple responses for denials. So that's the part we need to figure out: the denial pathway. My medication was recently denied by my insurance. I'm on a very expensive specialty medication for the multiple sclerosis. It went to my physician. This has happened every time I've changed insurance.

[00:24:39] And I've changed insurance three times in the past 18 months. It's the course of care that happens with this kind of stuff. So now the question I have is: was the algorithm designed maliciously? And that's where, when I think responsible, I think malicious. I have not yet seen malicious algorithms per se, by design; maybe in function, but not by design. Now, are they being used responsibly? That's the question. Take a look at the insurance example.

[00:25:07] This is a personal opinion, to be clear. When I think about insurance and AI, I would be comfortable with AI approving. I would be on the flip side if it's going to deny: that then needs to go to expert human review, most likely a clinician at that point. And who knows whether clinicians are even involved before the denial step today, or whether it's rules-based determinations.

[00:25:30] On the denial side, rather than automated denial by AI, that's where you can have a human in the loop and bounce it out. Because there are so many claims to go through, and you need the approvers, the reviewers, to have the time to go through the ones that actually need to be reviewed. Getting rid of all the fluff, the ones that are going to be auto-approved, actually gives them the opportunity to go through and actually make the determination at that point in time. I remember what the problem was in that insurance case.

[00:25:58] It was that, basically, the reviewers were instructed that their clinical opinion should not deviate by more than, I don't know, 0.1% from what the algorithm suggested. So that was the issue, because you basically override the professional opinion. That is poor system design, implementation-level system design.

[00:26:25] And frankly, it creates concerns about scope of practice and all that, right? If you're supposed to have a clinician review at that point, and they're told not to vary, you did something wrong in implementing the solution. That's, again, the thing; I don't know how the solution was designed underneath it. But I've been trying to think of an analogy for AI, and there are many, of course. One that I've landed on recently is nuclear power. And that's both good and bad, and a lot of people are scared of it.

[00:26:55] There's a long history there. Nuclear power was this amazing new technology that was able to harness the power of the atom, literally split it, right? And use that to, fundamentally, power a steam turbine. So: the same technology we've had for hundreds of years, with a new power source. And that's where we're at with AI right now. We have this incredible new power source, this incredible new capability that can do different work than we've ever been able to do with software.

[00:27:24] But ultimately, it's powering a steam turbine. That is our healthcare system. And that's great, because steam creates a lot of energy, and the turbine does as well. So when I think about it, I don't want the same regulatory framework we have for nuclear, necessarily, because that's been a challenge. But we regulate the atom, we regulate the application of it; once it hits the distribution side, it's going back into your traditional, already-regulated energy grid,

[00:27:54] already regulated energy distribution requirements. Now, the challenge with AI in healthcare is that it's overwhelming the system. We're creating more power than we know what to do with as a system, to be clear. Again, opinions are my own. Let's dive into that a little bit. Before your current role, you were also the Chief Transformation Officer at Aidoc, where you led the clinical, operational, and governance

[00:28:19] transformation required to safely integrate AI medical devices into frontline physician workflows. Regulation in the US specifically is changing in the direction of speeding up the adoption of AI, making it easier to get AI into practice. What does the journey look like at the moment for AI solutions getting from idea to market?

[00:28:47] How do you observe that this is now changing? I'm mostly asking from the perspective of: how should we as patients react to these changes? Should we be worried? Should we be excited that speed is prioritized? What's your opinion, or your observations based on the industry insights that you have? The historical regulation framework has had some very clearly regulated silos.

[00:29:14] The FDA is a clear one, for anything that qualifies as a medical device, an AI-enabled medical device. Most of the radiology ones, simply because they touch images, fall into that silo, clearly. So anything that touches the image is regulated. There are many other AI use cases, but in radiology, that's straightforward, which has offered an acceleration opportunity in the space, I would argue. I really love to play board games: physical, on-the-table board games.

[00:29:42] And without a rulebook, that's just cardboard. It's plastic. I can play with it. I can throw things at my kids. We can have fun. All that good stuff. The rulebook is what gives me a game. And it's ultimately, if I'm thinking about winning, and I do not take it easy on my kids, I always try and win. It's what allows me to win. Having those guardrails actually enables innovation. And that's why if you look at the FDA-cleared medical devices, the vast majority are in radiology.

[00:30:11] It's like 75 to 80% of the 1,400 devices that have already been cleared, because there's a process; there's a rulebook. Is it perfect? No. Could it be updated for this brand-new technology? Well, really old technology: AI is 50 years old, but able to be leveraged in new ways because of the scale of inferencing, cloud technologies, hosting, etc., that we couldn't do in the past. We have a great opportunity to do that.

[00:30:37] I look at regulation and guardrails as being positive if they're not onerous. We've got to be careful on the over-defining or over-regulating, even a simple definition, because that can have implications.

[00:30:55] And if the definition changes, and then the technology fundamentally changes, which happens right now every six months, the definition could lag, and then it could actually hold back a lot of that. So, another example is the Office of the National Coordinator, which became part of the Office of ASTP, which regulates certified health IT vendors. And, I'm sorry, this is U.S. regulation. No problem.

Go ahead. We can cut it. No, it's fine. As part of the Cures Act, they regulate certified health IT vendors, or that's actually the HITECH Act, going even farther back. They updated it to also include AI. And there was a recent definition, a couple of years ago, of what artificial intelligence is. And it was a very broad definition. And I think that's the space it's sitting in right now: broad, with expectations of transparency, expectations of guidance.

Rather than saying "if this many tokens, do that": who knows when that's going to be irrelevant? That could be irrelevant tomorrow, with a new Mac mini coming out or something like that, with what you can do. So rather than specifying a number of tokens, say: if this is a device that is used in the practice of clinical care and provides a determination, then this. If it provides guidance based on evidence-based guidelines to a physician, then that. If it's explainable, then this way; if it's not, that way.

[00:32:23] There are all these broad edge conditions that could actually give you methods to say, okay, what is... I'm sorry, I'm all over the place. If we think about 510(k) clearance, the FDA clearance, that's the basic clearance that allows you to validate your solution based on something that's already been approved. Substantially similar. Substantially is the important part. They don't have to be the exact same.

[00:32:50] Like, what you have to do with any FDA clearance is you have to create labeling. You have to say the indication for use. Literally, what's it doing? Then who are the intended users? And you have to strictly define those. And if you want to update those, you have to go back to the FDA to get those updated. So that's where I think the regulation side actually provides an opportunity. Because that simple definition can resolve so many of the challenges that we run into.

[00:33:19] Like I mentioned before, I don't see any maliciously developed algorithms or solutions. But I do see them being put into places where the intended use is not clear, the intended user is not clear. So in that insurance example where they were coached, don't vary by X percent. Okay, that's a bad IFU, a bad indication for use. A physician needs to review this and make the determination in accordance with an evidence-based guideline. That's a clear utilization.

[00:33:47] And that will solve some of those problems where you still give the physician scope of practice and scope of determination. But you also tell them like, this isn't giving you the answer. It's giving you a directional arrow towards where you should go, not the actual answer itself. Yeah, absolutely. What do you see as the current biggest question for AI development?

[00:34:10] A few years ago, even before large language models were available to the general public, the biggest questions were, do you have a good data set? How do you make sure that data drift doesn't happen? Now with the agentic AI and hallucinations, the question isn't as much are the results hallucinated and potentially wrong?

[00:34:35] But what if at some point the whole agentic workflow gets hallucinated and the steps that you wanted to have aren't exactly the steps that you wrote in the first place? So where do you see the current biggest challenges?

[00:34:53] What are the key topics that need the most attention, basically, when we talk about LLMs, chatbots, and the increasing capabilities of AI and agents? So, modern AI, and particularly this new agentic world that we're moving towards. I don't think we're there in a lot of cases yet. I think it's a term that's being used. I haven't seen it implemented in healthcare in particular.

[00:35:20] I think partly because of that regulation and the time it takes to roll out a lot of pieces. It's the insidious confidence. The answers look so good and so confident, the FDA calls it automation bias, where you just tend to accept them. Because it's like that person who shows up with, yeah, that's the answer, and rallies the troops. Decision support software is not new. Again, decades. I've worked with it for decades.

[00:35:49] Not cleared, but the guidance style of it. Speech recognition is not new. The old deterministic methods, hidden Markov models and all those technical terms, have since morphed into large language model or transformer-based approaches. But the old ones were just good enough to use, not good enough to trust, particularly as a clinician. You always knew: I'm going to review my reports. I'm going to see if the word no is missing. I'm going to check to make sure that this went the way I expected it to go.

[00:36:18] Because they were basically replacing the keyboard, but you knew what to check for. Now it's harder because they're so close and they're so good. So if anything, the burden of review is higher at the physician level. And unfortunately also at the patient level. You have to check: hey, it put this medication history in. Medication history is always wrong, because people update prescriptions and the changes don't make it in there. But did I ever actually take that medicine?

[00:36:46] And you need to review that yourself, unfortunately, because sometimes things are inappropriately put in, or they're so close to what was said that they get categorized that way. We all have tools now, though. That's the good thing. You can actually feed this in and do some verification as a patient. Forget reviewing the quality of the diagnosis; ask, does this match what's happened in my record in the past, based on what you're seeing? And verify that way.
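That patient-side cross-check can be phrased as a reusable prompt. A minimal sketch in Python; the function name and prompt wording are illustrative assumptions based on the description above, not a prompt quoted from the episode:

```python
def consistency_check_prompt(new_report: str, prior_record: str) -> str:
    """Build a prompt asking a chatbot to compare a new clinical report
    against a prior record summary. The exact wording is an assumption,
    not a quoted prompt from the episode."""
    return (
        "Ignore the quality of the diagnosis itself. Compare the new "
        "report below against my prior record summary and list anything "
        "that does not match: medications I never took, conditions that "
        "never appeared before, or findings that contradict earlier "
        "results.\n\n== New report ==\n" + new_report
        + "\n\n== Prior record summary ==\n" + prior_record
    )

# The returned string is then pasted into whatever chatbot the patient
# already uses, together with their own documents.
```

The point is only the framing: ask the model to compare against your own history, rather than asking it to judge the medicine.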

[00:37:09] I think what we'll find is a balance where we eventually land: physicians will realize, hey, these things are really good, but they do make errors. So I've got to be a bit more focused on spot-checking the really important things, making sure they're not adding some prior history of depression or something like that, which could impact a patient's long-term care or their insurance underwriting for anything. And I think we're already close to that.

[00:37:37] When I look at a lot of the physicians I speak to, there's growing acceptance. There's the cognitive burden and the relief and the joy of using the solutions. But they're also like, it's not perfect. I got to make sure. And with that five minutes I saved documenting, 10 minutes, whatever the time is, I'll spend two minutes verifying. And I even look at here at RadAI, we automate the impression, which is the synthesis of the radiology report.

[00:38:03] And whenever I speak to patients at conferences or anything like that, or as part of the patient user groups I'm part of, I'm like, read the impression. It's the very first thing you do. If you read nothing else, read your impression, then go through and just verify. Check the rest of the report, but read that impression. And some physicians will say, I didn't say that patient had cancer in my report, but it says here on the impression it did.

[00:38:26] So the impression is wrong, but then they'll go back and look at their report, and they'll see that the non-AI-based speech recognition missed the word no. And "no evidence of neoplasm", cancer, became "evidence of neoplasm". And that's the difference between a positive and a negative diagnosis.

[00:38:44] So it actually offers the opportunity to catch errors elsewhere in the chart, elsewhere in the process, because it brings them to the attention of the licensed physician reviewing this to make sure it's correct. And it lets them change it and say, whoa, that was wrong. That saves a patient from potentially reading that report and thinking, oh my goodness, I have cancer, and then calling the PCP who ordered it, who had only looked at the impression and says, no, you don't have cancer. Oh, wait, it says up here that you do.

[00:39:12] Then they have to call the radiologist, and the radiologist is like, oh, that was an error, let me amend the report. Those are the efficiencies this can really drive: catching it the first time, bringing it to attention in a seamless way, and improving quality because of this level of integration. But again, trust, verify, interact. It's a whole process that we've got to go through. Verification has been a topic for several years by now.

[00:39:40] And there have been several ideas in place on how to best govern AI, how to best make sure that what you're buying as a healthcare provider is safe. The Coalition for Health AI did several things, from ideas about labels that would bring transparency, like a food label, about which data was used to build a model and how the model was tested.

[00:40:05] And additionally, the idea was also to establish several testing labs across the U.S. that would do the verification. You would get some sort of a stamp of approval as a vendor. But then that idea didn't get realized. So different institutions or different organizations are creating guidelines and recommendations. And when I look at this as a lay person or as a patient, I know that a lot of guidance is there.

[00:40:34] But I wonder what do healthcare providers use in the end to really make sure that what they buy is safe and responsible. So how do you see the development in this segment? What seems to be most promising at the moment? A couple of years ago with the launch of the Coalition for Health AI, I became involved effectively within months of them being founded. I saw the press release.

[00:41:02] I got some connections and became involved, partly because I looked at model cards as a great opportunity for standardization of at least basic information. That has been one of the challenges for non-FDA-cleared solutions. For FDA-cleared, class II, class III, you can give your summary, and that covers a lot of that basic information. For non-cleared, it's the Wild West in how that gets articulated.

[00:41:28] That also coincided with that update to the Cures Act I was talking about before. They gave some really good questions to consider, pieces of data to share, and all that. By the way, model cards, model facts, nutrition labels: they've had a couple of different names depending on the org. The Health AI Partnership has been involved in that, as well as, I think, DiMe and others. That's kind of table stakes. What is it? How does it work? How is it trained? There are varying degrees of information inside of it. That should be information that's there for everyone.

[00:41:58] Patients, job applicants. I used RadAI's model cards to vet whether or not they had the right approach to AI before I joined the company. I asked for them when I was speaking with the CEO. The next step is that validation at scale. I think that is a big, hairy problem. I applaud the principle, the idea that we could do this at scale.

[00:42:22] I think the challenges of rollout, the consolidation across the community, never quite emerged. And this is an industry problem; it's not unique to CHAI. An example I like to use when I'm speaking to this verification and validation side: in healthcare technology, there's something called a SOC 2 Type 2 report. As a cloud vendor, you have to go through this.

[00:42:46] It is a standard that was founded by the American Accounting Society. I don't remember the exact name of the society. But literally, accountants made this standard. It's not a health standard. It's just one that's used for the secure transmission of information throughout the world. It's based on ISO guidelines in that situation.

[00:43:06] And then there's also the National Institute of Standards and Technology in the United States. Basically, ISO and NIST, as they call it, got together and reconciled the guidelines between the two, by and large. Then there are independent auditing agencies, consulting firms, accounting firms, the Deloittes of the world, Ernst & Young, those types, that go through and perform a validation of you as an org using that independent standard set up at the high level.

[00:43:34] Which is how I look at a lot of the CHAI guidelines. Again, these high-level guidelines get adopted by orgs that then do independent verification. And it verifies you as an organization, as a product, as all these different parts and pieces. So the sooner we get to good baseline guidelines, ideally, frankly, coming from NIST, the better. They should be trusted at that level.

[00:44:01] And then you implement a certification standard to that at the auditing organization level. And then healthcare technology, the market, the enterprise software adopting this can accept that. Because it's not me as a vendor. It's not another company saying you're certified. It's like, we've looked at principles and standards that are up here at the high level. We've implemented them as an organization for auditing purposes. And we've given it the badge of approval.

[00:44:29] The sooner we can get there, the better it's going to be for everybody. Because right now, the biggest risk to adoption, the biggest challenge to innovation, is not the regulation or any of those pieces. It's that the sites need to feel empowered and able to do this in a way that's going to help their patients, help their institution, and drive the results that we're all expecting.

[00:44:50] And right now, that oftentimes means very complicated local governance committees that are fragmented on a per-site basis, with some requiring a couple pages of documentation, others requiring hundreds of pages of documentation. You get a patchwork of review at the individual site level. Yeah, yeah. Anything but easy. Still at the moment, very complex.

[00:45:13] So as a final question, if we try to return and put our patient hats back on, because patients are increasingly using AI to search for information, to try to get answers. Because as you mentioned at the beginning, just getting an answer to the medical problems, the health problems that you might have, is already huge progress.

[00:45:40] If you've been battling a specific issue for months or years. So what do you think are the basic requirements or skills that patients need to have to not be deceived by AI? What would be some of the practical things that you perhaps think about when you use AI and might be useful for everyone else as well?

[00:46:04] On top of what we mentioned in the beginning, which is to not ask simple questions and to make sure to verify and validate. Again, always red team. Use those exact words: red team. Because AI knows what that means. Okay, so that's an exact expression. How do you phrase it? When you have a discussion, do you say, now imagine you're a red team, what would you tell me? I'll just say: conduct a red team analysis of this.

[00:46:33] And it will say, hey, I'm not trying to be a jerk right now, but I'm going to give you the absolute worst interpretation of every single thing in here. It's pretty effective because, again, that's like a programming term, and ultimately these are technical tools that use plain language. The other thing I would really encourage as a patient is to get involved in patient and family care councils. I personally participate in the American College of Radiology's commission for patient- and family-centered care.
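The exact phrase is the whole technique, so a helper is trivial. A sketch in Python; everything beyond the trigger phrase "conduct a red team analysis" is assumed wording, not Demetri's verbatim prompt:

```python
def red_team(text: str) -> str:
    """Prefix pasted text (a health plan, a report, a contract) with the
    red-team trigger phrase described above. The wording after the phrase
    is illustrative, not a quoted prompt."""
    return (
        "Conduct a red team analysis of this. Give me the worst "
        "reasonable interpretation of every clause and tell me what "
        "could go wrong:\n\n" + text
    )
```

Using the exact words "red team" matters, on his account, because the models already associate that term with adversarial review.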

[00:47:00] These councils have clinicians and administrators; health systems run them, societies and colleges run them. And they bring patients together with all these people to talk about: what are our delivery principles as an organization? How can we involve the patient more in this journey? It's an amazing opportunity to provide feedback. A lot of people don't know they exist. If you're going to a health system and receiving care, look up the patient and family care council and just email them.

[00:47:29] And they'll probably welcome you on, because they're looking to recruit patients. Sometimes they provide little bonuses and things like that, but that's not the point. The point is to be involved. For example, I got involved in the American College of Radiology's PFCC because of my MS diagnosis and having to get regular imaging. At the time I joined, there was a patient co-chair, the first patient co-chair. Her name was Amanda.

[00:47:55] And she had been diagnosed with low-grade ovarian cancer a decade before and gone through a quite complex care journey. And a project that she was leading as that co-chair was around the anxiety that you feel when you're going to go get imaging or when you're going to the doctor. And the anxiety you feel when you're waiting on the results. Both sides of that. And there's a great opportunity for AI here, too, by the way, to shorten the cycle of results. She and the council were very focused on this.

[00:48:23] And they developed a scanxiety toolkit, which is meant to enable radiology practices to let patients know: you're coming in for a scan. It might have a small amount of radiation; that's okay, we track that closely. Or it might not, because it's an MRI. You're going to have the scan, you're going to go through it, and you'll get your results soon thereafter. Your physician will review those and give you the results.

[00:48:46] Now, the new layer that's emerged since then, because when I joined that five, six years ago, the Cures Act was newer, is the automatic distribution of results the minute the results are available. And now you might get your results on a Friday. And your physician's not going to review it until Monday or Tuesday. And that might be a week after you got your scan. So you've been waiting. And now you have these results. And they say progression or no progression. And that's just an incredible burden that patients feel. We've developed this toolkit to enable the practices.

[00:49:15] We're also working on a more patient-facing version of it. And these are the ways in which we can start to help patients, to let them know, look, you're going to have a scan. You're going to have results. You might get your results beforehand. Maybe put them in that AI. Maybe don't. It all depends on you and your personal tolerance for all this. Do call your physician if you haven't received a call in a couple days. With those kinds of things. Enabling and guiding. Because we're in a new world of connectedness.

[00:49:44] And that's what these councils are really focused on, I think: helping the patient journey. You brought it up. Yes, I'm just bringing it up. So I will make sure to add this link to the show notes as well. This is scanxiety, and basically a toolkit to reduce the anxiety around imaging and waiting for your results. So, yeah. Definitely. I had no idea how relevant it would become.

[00:50:12] Amanda was my initial introduction to low-grade ovarian cancer. That was what my wife was diagnosed with. Having that personal context because of this committee and interaction helped me go into that journey a lot more prepared. I thought I was anxious with MS and waiting on results. I had no idea, like, how much different it would be. And it wasn't even my results. It was my wife's. But every time she goes to get a confirmatory scan now, that three to seven days waiting for the results, you're like, how's it going to come?

[00:50:41] Is it going to come back with something, or is it going to be clean? And we've been blessed thus far with clean results. I hope it stays that way. Demetri, thank you so much for joining me today for this discussion. All the best with your health. And I will definitely be in touch as AI develops; there's going to be plenty for us to discuss in the future as well. Thank you. This was a pleasure.

[00:51:07] You've been listening to Faces of Digital Health, a proud member of the Health Podcast Network. If you enjoyed the show, do leave a rating or a review wherever you get your podcast, subscribe to the show, or follow us on LinkedIn. Additionally, check out our newsletter. You can find it at fodh.substack.com. That's fodh.substack.com. Stay tuned.