I Used AI for My Chronic Illness for a Year. Here's What Went Wrong. (Tjasa Zajc, Agentic Patient)

The Agentic Patient is here — and most healthcare systems don't have a plan for it. In this special reverse-role episode of Faces of Digital Health, Eric Sutherland interviews host Tjaša Zajc about what a year of using AI through her own chronic illness has actually taught her about patients, doctors, and the future of healthcare AI. 200 million people will ask ChatGPT a health question this week. The question is no longer whether patients will use AI to navigate their care — it's how to help them do it well, without harm, and in productive partnership with their clinicians.

In this episode:
- Why "patients know best" breaks down for chronic patients
- The three archetypes AI is creating: minimizers, cyberchondriacs, and informed collaborators
- What happens when doctors dismiss patients who use AI
- A two-model verification method for cross-checking medical AI advice
- Why "digital literacy" is the wrong name for the most important skill in modern healthcare
- Two prompts that genuinely change what AI gives you back
- What health ministries should actually do — and why we shouldn't offload patient AI education to doctors

⏱ CHAPTERS
00:00 Intro & reverse-role experiment
01:00 Eric Sutherland: "a data guy with personality"
01:36 A year as a chronic patient using AI
02:50 Same prompt, different LLMs — the trust problem
04:30 How The Agentic Patient series was born
06:00 Three patient archetypes
09:00 When doctors dismiss AI, patients start hiding
12:30 Dale Atkinson, HIMSS Europe, and data outside the clinic
13:30 200M weekly ChatGPT health queries — who's accountable?
15:30 The two-model cross-verification method
17:00 Making 7-minute appointments work with AI
19:30 Finland's Elements of AI — a model for healthcare
22:00 Why chronic patients may not know best
24:30 Five minutes with a health minister
27:00 Two prompts that change AI outputs
30:00 The agentic patient is a survivor, not a tech enthusiast

🎙 ABOUT THE AGENTIC PATIENT
The Agentic Patient is a series under Faces of Digital Health exploring how patients and clinicians are actually using AI in healthcare — the wins, the harms, and the best practices emerging across cancer care, chronic disease, and primary care.

🔗 LINKS
Newsletter: https://fodh.substack.com/p/the-agentic-patients-are-here
More episodes: https://www.facesofdigitalhealth.com/agentic-patient-blog
Tjaša Zajc on LinkedIn: https://www.linkedin.com/in/tjasazajc/
Eric Sutherland on LinkedIn: https://www.linkedin.com/in/esutherland272/?skipRedirect=true

#AgenticPatient #AIinHealthcare #DigitalHealth #FacesOfDigitalHealth #HealthcareAI #ChatGPT #PatientEmpowerment #ChronicIllness #AIliteracy #MedicalAI #PatientAdvocacy #DigitalTransformation

[00:00:00] When we talk about risks, and sometimes we like to say that patients know best, but I think that chronic patients actually potentially don't know best because when you're a chronic patient for 10, 20 years, your benchmark of what's normal changes a lot.

[00:00:18] This is what my relationship with AI looked like. I call it the curve that's similar to when you fall in love with someone. First, you're curious, then you're completely full of admiration, then something happens and you lose a little bit of trust, and then you need to find a way to navigate that relationship. And the same thing is happening with AI.

[00:00:50] Hello, everyone, and welcome to Faces of Digital Health and a special series that I call The Agentic Patient, where we explore how patients use AI, what kinds of tools they use, and what best practices we can share across the board to help more people get the best out of AI and potentially prevent harm. And today we're going to do things a little bit differently. Eric, please take over.

[00:01:18] It is my pleasure to take over. So my name is Eric Sutherland. I am one of the many faces of digital health, and specifically, I've been called the data guy with personality. Within that, my focus is really at the intersection of all things health, and there are intersections with artificial intelligence, data governance, and security.

[00:01:40] And really, my belief is that the future transformation of the health system will come because patients demand it. And that is why, when I first heard about the series around the agentic patient, it really caught my curiosity.

[00:01:59] And I really wanted to learn a lot more about it, and hence why I'm leading the discussion today, so I can learn more and hopefully take all of you on the same journey to really understand where this movement toward the agentic patient is going. So, with that said, let me start.

[00:02:17] So, as folks will know, Tjaša recently, in her newsletter, shared her personal experience over the last year, really learning and understanding how to work with AI. So, I was hoping, first of all, Tjaša, that you could share both how you're doing today and a bit more about the experiences you had in the last year.

[00:03:09] Thank you, Eric.

[00:04:11] Can I or should I trust AI? So, to illustrate: one AI said that I should rest more; a different LLM said that, based on what I shared and given the exercises I was doing, the fact that I could do them was a sign of positive improvement in my health. So it was really confusing to understand which one to believe.

[00:04:37] I think AI is going to have and is having immense positive impact also on patients, but we need to be really, really careful because it can go in various ways if you're not careful, if you're not assessing and thinking critically when you use it.

[00:04:57] That sounds like, well, first of all, an incredible journey that reminds me much of a roller coaster, which is actually nicely framed in the newsletter article where you talked about both the highs and lows that you experienced, both from your own personal health and in your experiences using the AI system.

[00:05:22] But really, it seems to me that this experience inspired you about the realm of possibilities. So could you take us on the journey of how you were inspired to develop this concept of the agentic patient? Right. So basically, I was invited to Canada to talk a little bit about the way that patients use AI.

[00:05:50] I started researching the topic in more detail. By the way, this is the curve that you mentioned before of how my relationship with AI looked like. I call this the curve that's similar to when you fall in love with someone. First, you're curious. Then you're completely full of admiration. Then something happens and you lose a little bit of trust. And then you need to find a way to kind of navigate that relationship.

[00:06:19] And the same thing is happening with AI. It's important to know that, you know, when you see one patient, you just see one patient. Everyone's different. Everybody has a different story. They react differently to the same medications potentially. So just because something helped me doesn't mean it's going to help someone else that has the same diagnosis. So that's one thing that's important to keep in mind when we try to take the best practices from others.

[00:06:45] And it was also the reason why I really wanted to talk to other patients as well and understand their stories. So far I have spoken with two cancer patients who used AI for research or for building their own GPTs. I also spoke with a researcher from an Italian children's hospital, who shared with me the other side of the coin.

[00:07:13] So, how doctors are now faced with parents who talk to AI and then come back to the hospital and basically deny that their child has a rare condition. So that becomes a whole new sphere: how does healthcare address those types of cases?

[00:07:36] There's also another challenge that AI is very kind of inclined to please you, which can mean that if you go to AI with the mindset of searching for comfort, you're going to get that. And if you go to AI with fear, it may amplify the fear.

[00:07:59] So in essence, I started, I kind of clustered patients into three different groups. So one is the so-called minimizers. So that's the patients that use AI to try to minimize the seriousness of their issues.

[00:08:21] And the challenge with those, and this is also reported in the literature, is that when patients delay seeking medical care and they have a serious condition, that can result in complications, which can mean higher costs for healthcare systems and worse patient outcomes. The other group is the so-called cyberchondriacs.

[00:08:48] So the people that, you know, have a headache for three days and immediately think it's a brain tumor. And if you talk to AI, it can very easily convince you that that's exactly what you have. So these are also the patients that are going to over-utilize healthcare services and increase the costs of healthcare utilization.

[00:09:07] And what I'm trying to do with this series and what in the ideal scenario we would all come to is to have as many patients educated and have them in the bucket of informed collaborators,

[00:09:21] where you have the ability to think critically, you're literate in health and in AI, and you use AI to organize your data and potentially get new ideas that you can then present to your clinician, potentially speeding up positive outcomes and decreasing healthcare costs.

[00:09:48] At the moment, according to the American Medical Association, about 80% of clinicians use AI, but mostly for research, and one in five uses AI for diagnosis.

[00:10:11] And the point is that hopefully doctors can approach insights that patients bring with an open mind. In my experience, I'm lucky to have such doctors, but patients that I spoke with for this series so far do not have that experience, which is a big issue.

[00:10:32] Because if the doctor tells you that, you know, it's stupid to use AI to search for answers, that just means that you're going to start hiding things. You're going to start doing things on your own based on AI recommendations and can go into a completely wrong direction. And healthcare won't even know about it.

[00:10:52] I listened to one of the podcasts this morning, and because the patient was hiding things from the doctor, he ended up in the ER. And because the treating physicians didn't know what drugs he was taking on the side, drugs that were not part of his formal medical record, he actually had a very dangerous episode in acute care.

[00:11:19] And that really was born out of a lack of respect between the patient and the provider.

[00:11:29] And so, I think what's really exciting about the agentic patient is that it really moves things forward from a patient perspective, to be on, not necessarily an equal footing, but at minimum a respectful footing in providing their own care.

[00:11:50] So, within that realm, I know from listening to the podcast this morning that you shared a little bit about your own experience and how you did feel more respected during your health journey, but also reflected some of the experiences of the other patients you've talked to, where that respect does not yet exist or hasn't really transitioned.

[00:12:14] So, I'm curious about your reflections on that new relationship between patient and clinician and what it feels like. Yeah. So, I think everybody is specific, I guess. They have their own personality. And the same thing goes for clinicians. We have been talking about a partnership relationship between clinicians and patients for probably more than a decade now,

[00:12:42] which doesn't mean that paternalism is gone. I don't think patients want to undermine the authority of clinicians, but with new tools, with the internet, with the changing asymmetry in access to information,

[00:13:03] patients still wish, I guess, to show that they want to be active participants in what their treatment looks like. After all, you know, clinicians treat, what, 2,000, 3,000 patients? And nobody is going to engage with your case as much as you will.

[00:13:27] So, I guess we still have some work to do there, but hopefully we can all work together to better outcomes. In the past, I worked as a medical journalist for several years. And I do think that especially in oncology, clinicians are aware that patients try alternative things as well.

[00:13:55] So, I am very curious to see what Dale Atkinson, who is a cancer patient from the UK, is going to share at HIMSS Europe. So, he's also joining us at HIMSS Europe, where he's going to talk about how there's this whole world of data and patient journeys that are happening outside of the awareness of clinicians.

[00:14:18] And how, if we really want to take a 360 approach to patient care, we need to start thinking about how we actually integrate that part of patients' lives into the data sets as well. And the reality is, we know that 200 million people are going to use ChatGPT this week with a health question.

[00:14:43] And that number is today's, after ChatGPT started to get good. As ChatGPT, Claude, and other such tools get to be really, really good in the next few years, I cannot see that number going anywhere but up.

[00:14:58] And so, it is imperative for the health system to understand how to work with patients who do bring more information to the table, how providers can respect those curiosities, respect the fact that patients are dealing with their own healthcare 24/7, and give patients guidance about how to use the systems responsibly and effectively.

[00:15:28] But having said that, my question for you is: who is accountable for helping patients understand this new way of being an agentic patient? Because, frankly, doctors are already overworked, already overburdened. We've known this for the last 10, 15 years.

[00:15:53] And if we leave it to patients to go find their own information, they're going to be beset with a whole bunch of shiny objects, as you said before: the confirmation-bias type of AI, as opposed to guidance on how to responsibly use the system. And one of the things that struck me from your article was how you used two AI systems. You got advice about questions to ask from one, and you asked questions of the other.

[00:16:22] And I think that back and forth was really smart, because you really mitigate the risk. So, can you share some of that? Where do you think this education, awareness, capability, and empowerment should come from, and where should patients be going to help themselves on that journey? Yeah.
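The two-model back and forth Eric describes here can be sketched in code. This is a minimal illustrative sketch, not any real product: the `model_a` and `model_b` callables are hypothetical stand-ins for two different, independently developed LLM clients (for example, two vendors' chat APIs).

```python
def cross_check(question, model_a, model_b):
    """Two-model cross-verification: use one model to generate the
    skeptical follow-up questions, then put the original question plus
    those checks to a second, independent model."""
    # Step 1: ask the first model what a careful reader should probe.
    probes = model_a(
        "Given this health question, list the follow-up questions a "
        "skeptical reader should ask about any answer:\n" + question
    )
    # Step 2: ask the second model, requiring it to address those checks.
    answer = model_b(
        question + "\n\nAddress these checks explicitly:\n" + probes
    )
    # Step 3: return both so the user can compare, not just trust one output.
    return {"probes": probes, "answer": answer}
```

Swapping which model plays which role on a second pass gives a further consistency check; disagreement between the two outputs is the signal to take to a clinician.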

[00:16:46] So, digital literacy is, in my view, one topic that's so important. It sounds so boring, though; I think we need to find a better name for it. It's kind of like information technology and information security versus cybersecurity. Cybersecurity sounds cool, while information security sounds like IT, and everybody falls asleep when they hear it. So, first of all, I think we need to find something similar for digital literacy, give it a different name.

[00:17:14] But apart from that, we need to find a way to increase education and the sharing of best practices. Someone recently said they wished their doctor would tell them how to use AI. I don't think this is something that we should offload onto doctors, because the number one thing that I want my doctors to do is to not quit their jobs. You know, I'm incredibly grateful for the doctors I have. They're great.

[00:17:44] I want them to give me feedback based on their clinical knowledge. I can ask AI for ideas, but I can't replicate the 20 years of medical training they already have. So that partnership, where you present information in a condensed way and then verify it with the clinician, is the best-case scenario.

[00:18:10] I think where we have the biggest opportunity here, and again, it's what I'm trying to do with The Agentic Patient, is raising awareness of how we, as patients, can make the most out of those 7 to 10 minutes. I'm not expecting healthcare to find a way to extend those visits to 15 to 30 minutes.

[00:18:31] I do think there's a way to, you know, do all your research and then tell AI: create a one-page summary, or give me three key questions I should ask my clinician at my next visit. And that can, I think, increase the efficiency of the visits. So, I don't know if that answers your question.
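The visit-prep prompts described here can be captured in a small template. A minimal sketch, assuming nothing beyond string formatting; the exact wording is illustrative, not a validated clinical template.

```python
def visit_prep_prompt(notes: str, n_questions: int = 3) -> str:
    """Turn a patient's own research into an appointment-prep request:
    a one-page summary plus a short list of questions for the clinician."""
    return (
        "Here are my notes and research about my condition:\n"
        f"{notes}\n\n"
        "1. Create a one-page summary a clinician can scan quickly.\n"
        f"2. List the {n_questions} key questions I should ask at my next visit."
    )
```

The point of the template is to make the 7-to-10-minute visit more efficient: the model condenses, the clinician verifies.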

[00:18:57] But healthcare, in my view, has always been struggling with the marketing part of increasing, you know, literacy. For example, even if you just look at the patient portals and everything that the healthcare ministries are investing in, in terms of improvement of how healthcare system works, there's a lot. I mean, all the countries are doing so much if I look at Europe. But how much do patients actually know? Where do you actually learn about that?

[00:19:24] Maybe when you come to a doctor's office, there's a leaflet. Or, if you're lucky, a doctor will tell you that, I don't know, you can get an overview of your therapy with a clinical pharmacist. But we really don't have enough official marketing on the healthcare side to make sure that people understand what's out there.

[00:19:49] And what then happens is that patient groups too often get stuck with assumptions and fears of what's going to happen to their data, how unsecured their data is because, you know, nobody told them how complex identity management or access controls are and that there's an audit for their data, etc., etc. I think we can do a lot on the communication side.

[00:20:17] Well, yeah, and I definitely think that what you've laid out here is the beginning of what a curriculum could actually be. There was a course that Finland put out a few years ago called Elements of AI, when Finland's aspiration was to train 1% of its population on AI,

[00:20:46] which from a Finnish perspective is around 60,000 people. But what they subsequently did was open up the course for everyone to take, so everyone would have a basic understanding of what AI was about. It's a free online course, and they believe something like two or three million people have taken it, from across Europe and around the world. So it's certainly an area.

[00:21:12] But a big part of the challenge is just getting patients to be aware that this opportunity exists. So awareness is really the key word. And if I can dare say, I think you should think about trademarking "the agentic patient", because it is a much better term than digital or AI literacy.

[00:21:35] Because the nice thing about the agentic patient is that it captures the whole journey around data and data management through the use of AI through to your empowerment of how to use your data and AI systems to achieve better health outcomes for yourself and to have more meaningful and productive conversations with your health providers and how to engage with the health system most efficiently and effectively.

[00:22:00] So I definitely think that is a very interesting and intriguing path that you've set yourself on. I can see in my own mind's eye a series of courses that patients offer to patients. I've talked to many chronic patients in Canadian context.

[00:22:21] And nobody has ever trained how to be the chronic patient, how to carry the binders around to be your own Sherpa for your information. You learn it on the fly. And the opportunities that we have for patients to work with patients to understand the capabilities and the strengths that we have in this space, I think are absolutely phenomenal. To be aware of both the opportunities, but also to be aware of the risks.

[00:22:49] So within that, I want to ask you a bit more. As much as the opportunity space is massive, there are risks associated with this pathway of the agentic patient. What do you think are some of those risks? And importantly, how do you think patients should be working, together and by themselves, to help mitigate and address those risks? That's a great question.

[00:23:18] I think the number one thing to keep in mind is to never, ever stop being skeptical about what the AI tells you. That's why I said earlier that I think the whole partnership perspective of me figuring stuff out with AI,

[00:23:37] even looking into actual sources, verifying through clinical studies, and then asking my clinicians if I got things right or if this is potentially not applicable to my particular case. That's like all super, super important. And as far as chronic patients go, I find it quite interesting when we talk about risks.

[00:24:01] And sometimes we like to say that patients know best, but I think that chronic patients actually potentially don't know best. Because when you're a chronic patient for 10, 20 years, your benchmark of what's normal changes a lot. So let me give you an example. I have a friend who has Crohn's disease and, you know, it's like bloody stool is completely normal for us. It won't be for somebody that's generally healthy.

[00:24:31] And she once called me and said: oh, you know, I'm not feeling very well. I think a piece of my intestine fell out. Do you think this is normal? Should I go see a doctor? And I was like, well, obviously, you have to go straight away. You have to run. It's just that when your benchmark of what's normal changes, I think it's easy to not take things as seriously as you should when something happens.

[00:25:01] And that's just one example. Caution, being skeptical, is always a good mindset to have, I would say. Yeah, absolutely. We've all heard about AI's hallucinations.

[00:25:22] But having said that, there is tremendous value to be had in there as long as you approach it with both caution and skepticism, as you say. Right. But there is a significant amount of hope in the using of these systems.

[00:25:39] So as I come toward the end of this, I'm going to think big: if you had the attention of a health minister for five minutes, what one thing would you ask them to do to help advance the work of the agentic patient?

[00:26:05] I would say: let's work together on spreading the message about, you know, the potential and also the dangers. Just invest in that literacy and awareness campaign part. And you actually gave me an idea earlier that maybe, after I have a few more of these discussions for The Agentic Patient,

[00:26:33] the next step that we could do is actually create open webinars where people can come, even if they didn't read anything that was shared so far and just come to these sessions and say, I want to use AI. This is my problem. I'm not sure what to do or I'm thinking of doing this. Do you think this is a smart idea?

[00:26:56] And then everybody in the room can share what could be the best approach or where one should really, really be careful. But as we get increasingly individualistic today, I really see a lot of power here in the community in just sharing of best practices and contributing to the broader population.

[00:27:26] And I don't just expect this to be done by healthcare, by hospitals, by primary care centers. Yeah, we can think of a lot together. Yeah, and I think that when you actually are doing that thinking together, it's not just talking about what is the art of the possible and what are the risks and things you have. It's about actually doing it.

[00:27:51] What struck me, again from your podcast that I listened to previously with one of the agentic patients, is how it took him two to three weeks, at two to three hours per day, to get the AI system to a place where he could effectively trust it and understand its output in a way that he could actually use. So it took a lot of time and energy for him to do that.

[00:28:21] But even knowing where to start on that journey helps: getting to the point of, here are some first questions to ask or first interactions to have with an AI system. Patients would then get that "oh, I see this is useful, I'm now curious" moment, and then they can start on the so-called love journey you described earlier. I think that is really fantastic. And I'm looking forward to joining.

[00:28:48] I know, frankly, dozens of people from certainly Canada and across Europe that would love to join such an online webinar, workshop, series, whatever the case may be, in order to really use these systems and unlock the power of this. Because patients are the driver that are going to really force the transformation of the health system from without.

[00:29:18] They are the ones who actually are most incented for the health system to be operating for them. And they now have a very powerful tool to help the system transform in the way that benefits them as individuals and collectively as communities to actually achieve better outcomes. So all that said, my last question for you is what's next?

[00:29:46] More stories, more ideas. You know, every discussion brings something new, brings a new insight. For example, one of the discussions that I am yet to publish, the speaker said that initially, when he was still learning how to create detailed prompts for AI, he simply asked AI, what else do you need from me?

[00:30:12] What kind of information would be useful for me to provide in order for you to be able to give me a better answer? And another person at a conference that I recently attended said that if you actually tell AI, don't hallucinate, that prompt in itself already improves the output that you're going to get and create some sort of a limit to the system. So I think I learn something every day.
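The two prompt additions described in this exchange, eliciting missing context and discouraging fabrication, can be bundled into a small helper. A sketch under the assumption that you are composing plain-text prompts; the phrasing mirrors what the speakers reported hearing, not a benchmarked technique, and how much it helps will vary by model.

```python
def refine_prompt(draft: str,
                  elicit_context: bool = True,
                  limit_speculation: bool = True) -> str:
    """Append the two prompt additions from the episode to a draft prompt."""
    parts = [draft]
    if elicit_context:
        # "What else do you need from me?" invites the model to ask for
        # the information it is missing before answering.
        parts.append(
            "Before answering: what else do you need from me? What "
            "information would help you give a better answer?"
        )
    if limit_speculation:
        # "Don't hallucinate" is reported to nudge the model toward
        # admitting uncertainty instead of inventing details.
        parts.append(
            "Do not hallucinate. If you are unsure, say so instead of "
            "inventing details."
        )
    return "\n\n".join(parts)
```

Either addition can be toggled off, so the same helper covers a plain draft, a context-eliciting draft, or both.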

[00:30:41] You know, there's only so many ideas that each one of us has. And I will keep trying to contribute to making the initial journey of every patient who starts using AI to help themselves a little bit easier.

[00:31:04] And what I've learned through this conversation is that the agentic patient is not somebody who is particularly AI literate or a tech enthusiast. It's really somebody who's trying to survive in an uncertain world and trying to achieve better health outcomes for themselves, using the tools that are now readily available.

[00:31:28] And I truly believe that the path that you're on is going to really help patients achieve better outcomes for themselves. Because it's not really about what's stopping patients from using AI. It's about how we can help them and how the system can help them to improve their awareness of literacy while being aware of the safeguards and ultimately approaching the whole topic with humility.

[00:31:56] So with that, I thank you very much for sharing this. I really have to say that this is exceptionally exciting for me because I do believe this is the beginning of the transformation journey that everyone needs. Eric, thank you so much for taking on the role of the moderator today for coming up with this idea in the first place. This reverse role thing was your idea. So thank you so much.

[00:32:25] And yeah, we will keep working together to do as much good as possible. Thank you. My pleasure. I hope I did you a good service. You did great. Thank you.