This is the first episode of a special series called The Agentic Patient, a series about how real patients are using AI to navigate their health. We go into the details: how do patients make AI help them do better, not worse, and what should we all be mindful of along the way? Which tools do they use? Which prompts? What's working, and what isn't? It is not just patients on the series; you will also hear from researchers and clinicians. These discussions are intended for informational purposes only and should not be relied upon as a sole source of medical information or as a substitute for professional medical advice, diagnosis, or treatment.
In the first episode, you will hear from Dale Atkinson. Dale was a financial crime investigator before his terminal cancer diagnosis, which is important for understanding the research he did on his cancer: his compliance training taught him to read dense regulatory documents, a skill that transfers directly to medical literature. He is a compelling interview subject and, simultaneously, a survivorship-biased sample of one.
Key insights:
1. ChatGPT confuses popularity with authority.
2. Clinician dismissal produces concealment, which produces real harm.
3. A large majority of advanced-stage cancer patients look for alternatives online, and many conceal their AI use from clinicians.
4. Use AI to narrow the search, not to summarize the answer. Read the papers yourself.
5. Context hallucination is the subtle killer: not invented studies, but correctly cited studies applied to the wrong disease.
6. Concealment is a safety emergency caused by clinician posture, and disclosure is non-negotiable regardless.
7. Custom GPTs with closed corpora are the step up from consumer chat, and require real time investment.
8. A clinical team you can bring AI findings to is a prerequisite, not a nice-to-have.
9. Clinician language and clinician posture shape patient behavior — agency begets partnership begets better care.
10. n=1 is n=1. Dale's outcome is extraordinary; his method is instructive; the two must be reasoned about separately.
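One practical takeaway from the episode is the guardrail prompting Dale describes: require at least five cited sources, demand links, and force the model to confirm the disease context of each source. As an illustrative sketch only, here is how those instructions might be assembled into a reusable system prompt in Python. The function name and exact wording are ours, not Dale's actual prompt, which he describes as far longer and built iteratively over weeks:

```python
# Sketch: assembling the guardrail rules discussed in the episode into a
# reusable system prompt. Illustrative only; not Dale's actual prompt.

def build_guardrail_prompt(min_sources: int = 5) -> str:
    """Compose a system prompt that forces citation and flags
    out-of-context ("context hallucination") answers."""
    rules = [
        f"Cite at least {min_sources} independent sources for every conclusion.",
        "Provide a link or full reference for each source.",
        "State which disease, subtype, and population each source studied.",
        "If a source concerns a different disease or subtype, say so explicitly.",
        "Explain the process you used to arrive at each definition or claim.",
        "Answer only from the documents provided; say 'not in corpus' otherwise.",
    ]
    return "Follow these rules strictly:\n" + "\n".join(
        f"{i}. {rule}" for i, rule in enumerate(rules, start=1)
    )

print(build_guardrail_prompt())
```

The point of keeping this as a single reusable string is the iteration Dale describes: each time the model drifts, you tighten one rule and re-paste the whole block, rather than correcting it conversationally mid-chat.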
[00:00:05] Dear listeners, welcome to Faces of Digital Health and a special series called The Agentic Patient, a series about how real patients are using AI to navigate their health. We go into the details: how do patients make AI help them do better, not worse, and what should we all be mindful of along the way? Which tools do they use, which prompts, what's working, what isn't?
[00:00:29] It's not just patients on the series, it's also researchers and clinicians. These discussions are intended for informational purposes only and should not be relied upon as a sole source of medical information or as a substitute for professional medical advice, diagnosis or treatment.
[00:00:56] In today's discussion, you are going to learn a little bit more about Dale Atkinson. Dale got a terminal cancer diagnosis and when he decided to fight his cancer, he took the research first approach, AI second.
[00:01:15] He used ChatGPT as a literature triage layer, feeding in his diagnosis and medical letters and asking where he should start researching and which papers he should read first. He then manually read through roughly 4,500 papers over three to six months, initially cover to cover, then skimming them for the sections that actually mattered.
[00:01:41] More about his story, what he did next, and how he built a custom GPT in this episode. If you enjoy the show, make sure to subscribe to the podcast. Also go to our website facesofdigitalhealth.com, where under the tab Agentic Patient you will find detailed summaries of the discussions with patients, with useful tips on AI use. Now let's dive into today's discussion.
[00:02:21] Dale, hi and thank you so much for joining me here for a special series that I called the agentic patient as part of the Faces of Digital Health podcast, in which I talk to different patients, researchers and clinicians about how patients use AI, what are the best practices that are worth sharing and more.
[00:02:45] We already spoke in the past, when you presented your cancer patient journey in detail. You had a very bad prognosis, but with a lot of research and your skills you managed to take individual steps personalized for you, and you are still here with us in what we can call quite good health. So that's great news. Yes, exactly.
[00:03:13] If we just very briefly wrap up or summarize your story and the research that you did, can you, for those that didn't hear the first episode, I will add it in the chat because we did talk a lot about the patient-doctor relationship and other stuff as well. But can you take us through the steps that you took when you started doing your research? What kind of mindset did you have?
[00:03:38] What were you paying attention to when you were trying to make sense of what your patient journey should look like? Yeah, of course. Let me try and slim it down as much as possible because it's a very long story. So essentially, I was given an inoperable, incurable diagnosis and told I had less than 12 months to live back in October 2024. Around that same time, my partner had lung cancer. She'd just been through her own journey and had a full lobectomy and was in recovery. And then my mother died.
[00:04:07] It threw my headspace into a pretty bad place. But instead of letting that defeat me, I actually used that as fuel in order to go off and essentially ingrain myself and hyper-focus, as it were, into research. With that, I came across things like next-generation sequencing. I came across repurposed medications. I came across mindset work, etc. And I used a combination of those in order to essentially provide myself with, first of all, the data from things like next-generation sequencing.
[00:04:33] Which then gave me sort of a really good steady platform and an understanding and a confidence in what I was doing. And then from there, I began to build on my own protocol using everything from hyperbaric oxygen through to red light therapy, infrared saunas, and a whole host of other bits and pieces. And I went from a 9.2-centimeter primary tumor with lots of secondaries and lots of metastases and lymph nodes involved and a really dire prognosis.
[00:05:00] I think it was T4bN3M1, which is about as bad a prognosis as an esophageal cancer patient can get. Through to, at this exact second in time, no visible signs of cancer. Which is very good to hear.
[00:05:19] In the previous episode, you explained in detail that you did a lot of research into clinical studies, into the academic papers, so everything that you could find. And you didn't use that much of AI at that time. So, do you use AI today? How do you see the rise of use of large language models to aid us as patients for disease management?
[00:05:49] Yes, I think I should not correct that as such, but update that slightly. So, I did use AI in certain parts and pieces. However, I did do my own research with that. So, in terms of the use of AI, I would use AI to try and identify which papers should be prioritized and which I should read first. I also created my own custom GPT into which I fed the papers that I felt were most relevant for my situation, etc.
[00:06:14] And with that, I then had that output things in order for me to then better focus my research. So, it wasn't that I didn't use AI. It was just I used it for very focused pieces as opposed to getting it to do all of the research for me and getting it to summarize things. I wanted to sit down and read through those papers myself. I wanted to fully understand what the purpose and direction of those were. And thankfully, due to my skill set having been a compliance officer for many years,
I was well-versed in how they were roughly written, because I was used to legal language as opposed to medical language, and the approach is very similar in the two. It allowed me to have more efficacy and efficiency in what I was doing, as opposed to just walking in blindly and trying to desperately research them myself. But with that, I did end up reading through somewhere around 4,500-plus research papers myself, over the span of about three to six months.
[00:07:10] Absolutely. It's important to emphasize that any research that we do needs to be verified. Anything that AI suggests needs to be verified. It's important to really see that the results that you're getting aren't hallucinated, but that you also actually get reliable sources and a discussion with clinical professionals who understand that in a different way than we can, because we don't have medical degrees and all the knowledge that is put into that.
[00:07:40] So let's slow down and explain the two things, the two use cases that you had. So the first one was to identify papers that you had to read, and then we'll go to the tailor-made GPT that you created for yourself. When you were looking for the articles, what exactly did you ask? Did you put in any of your, like, how detailed were you in the description of your condition? And what was the problem there?
[00:08:10] Did you just ask which papers should I read? Did you use a different type of wording? Let's try to be as specific as possible. Just to give a slight bit of background to that. So if I go to, say, PubMed and I type in, let's go for a big political topic at the moment, ivermectin. If you type ivermectin into PubMed or into any similar medical database, there are over 400 papers, I think it's 426 or 429, as it stood about two weeks ago.
[00:08:39] I don't know and I wouldn't have known and I wouldn't expect anybody on this call to know exactly where to start. So what I did was I took that to ChatGPT and I said, this is my diagnosis. I gave it my medical letters. I gave it all the information that I possibly had. And I said, if you were in my shoes, where would you start? Which would be the first paper on your reading list?
[00:09:01] And it came back with a list of about, I think it was 34, 35 papers that it told me were particularly relevant in my situation. So I started there. And it was just, it was really that simple at the beginning. And then from there, I started to realize that, first of all, it didn't necessarily understand the full context. It didn't necessarily have the cognition and ability to pull out some of the information that I did. And therefore, some of the relevance wasn't quite there.
[00:09:28] So it had picked up completely different types of cancer with completely different hallmarks and pathways. And had told me that this was the most relevant paper you could get, etc. Or it had picked up instances of other esophageal cancers. So looking at squamous cell versus adenocarcinoma, which is what I had, which weren't actually as relevant as ChatGPT thought they were. It was good as a starting point. It was good to give me a direction, but it still involved a lot of manual work.
[00:09:57] However, had I not had ChatGPT, I wouldn't have had that starting point. And it probably would have been too overwhelming. It's horses for courses. It's not one thing or the other. It gave me a starting point. It gave me a position to begin building from. Absolutely. When you were reading through papers and when you went to the research, how did you cope with the amount of new information that you were getting,
[00:10:27] given that you are not a clinical expert? So was it confusing? Was it overwhelming? Regardless of the fact that you were reading very specific pieces of paper, did you then try to elaborate and get additional explanations again with AI in the same chat, in a new chat? How did you go about that? So it was in the same chat. And yes, I absolutely did. I think anybody who can claim that they can go from no medical knowledge whatsoever
into suddenly understanding these sorts of things is lying. It's written in language that's difficult to understand. It's written in ways that are at points quite indecipherable. And it's designed to be a little bit prohibitive in some ways, because it's designed for a specific use case and for people who understand a specific focus and language type. Thankfully for me, having worked within, as I said earlier, the compliance world for so long, it meant that I had been used to digesting huge volumes of regulatory information,
[00:11:23] looking at laws, looking at financial regulations, looking at finance papers in general, which all tend to be written in a very similar way. And they tend to use very similar language. Now, of course, the medical terms themselves are different. And therefore, I took those back to ChatGPT. And I tried to use that to get a very rough understanding. But as with anything in this world, ChatGPT does tend to hallucinate and it gives you variations depending on what it believes the context is.
[00:11:49] And I then had to take that back and make sure that the context was correct for that exact paper. And not in all instances it was. I then sometimes had to take that to Google as well. And I had to go back to ChatGPT and basically explain to it that it had got the context wrong, that we didn't want to hallucinate. And then I had to design further prompts in order to stop it from essentially wandering off and doing its own thing. So with that, I would prompt: you have to cite the source for this.
[00:12:18] You have to look at least five different sources in order to give me a proper definition of things. I also want this cited into the individual papers. I want to know exactly where you are sourcing it from, why you're sourcing it from there. I want to know essentially the process that you went through to create this definition. And that then allowed me to understand where it was either going wrong or it allowed me to modify my thinking in order to understand what the answer was and how it was coming to that.
[00:12:46] So if we summarize the food for thought here is to say to any large language model that you use, find at least five sources that support your conclusion and give me also the links to those sources. Because sometimes AI can also hallucinate the links or the sources. So it's always good to double check that's actually coming from a valid source. Exactly. That's exactly it.
[00:13:15] Okay, awesome. Super interesting. When, if we go now to the tailor-made GPT that you created for yourself, can you take me through that? What exactly that is? How did you build it? How long did it take you to build it? Yeah, so that came a little bit later. So after I'd done all the research, after I thought I had a very good understanding of what was going on and what was happening to me with my cancer,
[00:13:45] and also after I'd done the next generation sequencing to essentially get the real data in place, I then decided that, first of all, I didn't understand enough about interactions. And second of all, that in order to essentially keep track of everything, I needed one sort of central repository for that. So what I did was I created my own custom GPT within ChatGPT itself. I asked it not to research onto the internet directly. I asked it to only focus on the papers that I gave it
[00:14:14] and the information that I gave it, and to only source any answers directly from within that. During my research, I'd obviously amassed quite a few papers that I thought were relevant and that I thought were what I wanted to focus on, etc. And I had a list of around, there was around 3,500 to 4,000 papers at that point. It varied right the way up until about 7,000 or 8,000 papers in total. But I gave it those initial papers that I'd noted down into my notes, etc. I asked it to feed itself on that,
[00:14:42] which took me about two weeks to do in total. And then I asked it to give me summaries of the individual pieces and what it felt the outcomes were and how it felt they were relevant to my situation based on the information that had come out of my next generation sequencing, etc. I then sat down and manually cross-referenced what it felt the summary and the answers were versus what I knew the answers were from my research and from the NGS stuff. And I then checked any sort of inefficiencies, discrepancies, etc.
[00:15:09] And then I spent about another two or three weeks writing in various different prompts in order to get it to the right focus and the right sort of, I say mindset, you can't really say mindset about an AI, but the right sort of process to get it into the right sort of, the right position for what I wanted. And then I fed it forward with other bits of information and asked it how relevant these were and started to then build this bigger picture
[00:15:36] of other things I could potentially then go and research myself, other interactions that I needed to be careful of, be that through diet, be that through my chemotherapy, be that through my immunotherapy or any of my off-label meds, etc. And basically to build up a bigger picture of where I was going and anything else that might help me along that journey. I think in total it took me about six weeks. So when you were reading the papers, like the 3,000 to 4,000 papers,
[00:16:06] did you, like how detailed were you? Did you actually read the whole thing word by word? Did you actually just focus on the introduction and the key findings to figure out if this is something that you should really dig deeper into? Because like, personally, that's what I would do. I would see if it seems relevant and I'm just asking because of the volume.
[00:16:34] And that's exactly what you then use AI for, to really turn the information into the most key things. So at first, I wasn't in that mind frame at all. At first, it was very much a case of sitting down, reading through the papers themselves, and whether they were relevant or not, each one taught me something. It taught me everything from language to direction, to how different people in the medical world write, to different focuses, whether researchers or doctors
[00:17:04] or people outside of that sort of industry as well. It taught me the difference between somebody who really had proper evidence and somebody who didn't. It allowed me to learn initially by reading the entire paper. It wasn't until probably after that initial four and a half thousand papers that I started to get a bit more of an understanding, or it was probably somewhere through that, but towards the end of that was where I started to get enough understanding to be able to read the summary at the top and the key findings
[00:17:32] and be able to discern from those whether it was relevant or not. But there are a lot of papers where the summaries, the key findings, the whatever you want to call it, the output of the paper may or may not look relevant, but the actual body of the text is or isn't. So for me, it was a case of trying to get as much information in as possible to then define it down and be able to work out what was relevant over the longer term.
[00:18:01] It really sounds as if, if you're not careful, you can very quickly go in the wrong direction, also with your mindset. When you were using all these tools, what were your key reflections or key thoughts around the use of AI for patient care, by patients? What did you wonder or observe that could really go wrong if you're not careful?
[00:18:30] And how did you also make sure that you knew that you were on the right path? So I think the number one thing that can go wrong is that AI can pull from what it believes to be the biggest authority source. If you look at the doctors who go on to Facebook, etc., half of them are there for likes. What they're spouting out quite often has no medical basis. There is sometimes absolutely no evidence behind what they're saying. And some of them aren't even actual doctors.
[00:18:59] They are other practitioners calling themselves doctor for whatever other reason. And unfortunately, ChatGPT goes and looks for how many times they've been mentioned and then calls that authority. And in reality, it isn't at all in any way, shape or form. So the first thing I would say to people is be extremely skeptical of anything that ChatGPT pushes out. It pulls things from Facebook groups. It pulls things from all sorts of sources, all sorts of conspiracy stuff and stuff that just isn't real
[00:19:29] in any way, shape or form. And it is far too easy to get trapped into that sort of rabbit hole and to chase down horse medications and all these things because somebody online said so. ChatGPT is really good at doing that to people, unfortunately, because that's the way it's been taught and trained is the more instances something's mentioned, the more likely it is to be true and therefore, the more it should focus on it. And that's just simply not the case. In reality, quite often, the least spoken ones
tend to be the ones that are the most grounded in evidence, etc. The ones with very little press tend to be the ones to really focus on, and the quiet doctors in the background, really doing the hard work and not touching social media virtually at all, are the ones to really look at. But ChatGPT doesn't surface those easily, and it doesn't find them easily unless you direct it to, unless you really push it to. So that would be my one observation on the negative side. And then the positive, the opposite side of that, is that ChatGPT is fantastic
[00:20:28] at digesting massive volumes of information. It is really good if you tell it to stick within its certain lane, if you give it guardrails and you spend the time to build something that is really efficient, effective, focused, that is very good at staying within that. You do still, of course, need to watch for hallucinations. You do still need to, of course, make sure that it hasn't gone off-piste and make sure it's not pulling in from other sources. But it is fantastic at pulling out abstract findings
[00:20:57] in huge data sets that us humans just, we're not built to understand. We're not built to be able to process in the same way. The challenge, of course, is how to prompt in order to introduce those guardrails. Sometimes we don't really, for example, I could know that I need to steer a model in a specific direction, but then I wouldn't really know
[00:21:27] what to tell it. So what would your advice be? Let's say that I am researching my condition, I am trying to figure out if I'm on the right path with the current medications that I have. What kind of prompts, for example, did you use for the guardrails? What did you say? So mine was a hugely iterative process, and it took me six weeks of spending probably two to three hours a day, every single day, if not longer, doing it. So I wouldn't
[00:21:57] necessarily be able to give you exactly how I designed it and what my prompt was, because it was massive. What I've come across recently, though, with the progression in AI, because we've leaped from, I think it was, version two or three of ChatGPT, we're now on version, I think, 5.4 or 5.5, and it's come on leaps and bounds. What I'm hearing from a lot of patients now, as opposed to what I did, is that they are actually taking it to a different AI. So they're going into Claude, they're going into any other AI
[00:22:26] that you could possibly want to imagine. They're building out the guardrails in that other AI, giving it very specific instructions: I have a diagnosis, I'm not going to tell you that diagnosis, but I want to create a prompt to go into ChatGPT or into Claude, whichever way around you want to do it. I then want to make sure that it doesn't hallucinate. I want to do this, I want to do this. I want to put in proper guardrails. Can you please ask me 20 questions to make sure that the focus is correct? To make sure that we're doing this safely. Can you then, and this is as another
[00:22:56] prompt afterwards and I would always say with AIs make sure to give it one focus at a time because otherwise it can cross things over and get confused. So do an iterative process. Number one is say I want to do this, give it the basis, try and give it as much context as you can. Number two, then tell it roughly what guardrails you want. Then ask it to ask you questions so that it understands better your focus. It understands why you're asking it to give guardrails. It wants to understand what safety issues are at play, etc. and the more information
[00:23:26] you give it, the longer you spend doing that, the better the prompt it'll end up with at the end, and then you can take that prompt and feed it into whichever other AI you are using for that process. And looking back now, I would actually suggest Claude is probably a slightly better system to use, because it's slightly better at data analysis, slightly better at pulling information directly from the sources, as opposed to ChatGPT, which is slightly more prone to hallucination but actually better at writing those prompts, I think, from what I hear from people,
[00:23:55] so I would probably do it the opposite way around if it were me now but I hear a lot of people having great success doing that. Super interesting and super useful. I must say that based on my recent experience I also, yeah, I fell in love with Claude. I think it's amazing how it structures the data, how it can create a patient summary that's really short for example. Especially Claude. Claude. Yeah, it's pretty amazing but since you mentioned different models
[00:24:25] what happened to me recently was that I wanted to see how the different outputs look like if I use different models so I used the same prompts in different models and I got just slight variations in the responses for example I would talk about exercise and let's say weight training and Gemini would say based on your current health situation maybe you should
[00:24:55] take it a little bit easy so the drugs can kick in better and then I went to Claude and Claude was like oh, so if you can do that and if you don't have any problem that indicates that you're actually doing really well. I was like I was contemplating one day based on the Gemini response if I'm not listening to my body enough and then Claude said something completely opposite and I was absolutely confused so given that you spent two to three hours reiterating prompt
[00:25:24] what was your experience in terms of the confusion the trust and how much time did you actually take to just step back a little bit maybe go away from AI and just really maybe also get some input from clinicians which we will get to a bit later as well yeah of course so that's a huge thing and problem at this moment in time is different AI systems have different focuses they pull from
[00:25:54] different places, and they have different ways of putting out those answers. The other side of that is that people use different AIs differently: every separate AI model you use, you will have talked to in different ways, you will have used slightly different language, it will have slightly different information on you, and you tend to feed different things into different models in different ways. Therefore it tailors the answer to that language structure, to the information
[00:26:23] it has and puts it out in a way that it thinks you want to hear not in terms of the actual reality and that's the one thing to really think about with AI and especially in this sort of a context is AI is essentially like a little puppy dog it is very eager to please it is very adaptable it just wants to do what you tell it but it doesn't always fully understand the commands it doesn't always fully understand that you need a realistic answer that you need a certain type of output so you have to manage your
[00:26:53] expectations in terms of what it gives you as much as you have to manage the prompts that go into it in order to get the answer in the direction that you want to so that aside in terms of what I then did so I didn't take any time away from it I at that point in time focused far too much on it and I put far too much information into it in order to get what I wanted out of it and very thankfully it gave me roughly the direction that I thought I wanted to go whether that was trying to please me or not I have no idea
[00:27:23] but I then also was very lucky to have a really great clinical team that I'd built outside of this I had Dr. Isabella Cooper who's an incredible researcher based out of the University of Westminster doing some amazing work in the metabolic and general sort of metabolic theory around cancer space I then had Amanda King who's an incredible naturopath and nutritionist and one of my favorite human beings on this whole planet who is supporting me on the diet supporting me with bloods and everything else and then I had Dr. Harry Cohen who is an absolutely
[00:27:53] phenomenal integrative oncologist. And I took everything that I had, I didn't hide anything from them. I said: I've put this into ChatGPT, it's told me this, it's given me this output. Do you think this is correct? Is this the right direction? Am I right in thinking this, and this is what I want to do, can you help me refine it? And they either said you're nuts, or yeah, absolutely, that's great, I never thought to put those two together, but that's probably the right direction. And we had really
[00:28:23] open and honest conversations about it. And without their input I probably wouldn't have had the confidence to do half of what ChatGPT said, because I'd seen the hallucinations, I'd seen it come up with crazy things. It told me some weird things at various points that I'm not even sure I want to repeat on anything that's recorded, ever. Yeah, it's all about making sure that you have your own guardrails with real human beings who understand the space, who have medical degrees, who know what they're doing, on top of just
[00:28:52] the AI. In the previous discussion that we had, you mentioned that you talk to a lot of patients on a daily basis. Do you by any chance know what their attitude is towards sharing with clinicians how they use AI? Are they afraid that doctors would be annoyed by the fact that people are doing research? Are they super excited about it? What are your observations
[00:29:22] in that sense? So if I come across somebody early in their journey, they tend to be extremely excited about AI. There are a lot of patients out there, and I do mean a lot, who are using AI in order to help them. When you take any of that to your, especially standard-of-care, clinician, you're quite often met with, you're met with mistrust, you're met with
[00:29:52] I can't think of the word I'm looking for here, but you know, that disgusted look. I had an hour-long meeting where they told me that even doing a keto-focused diet was really stupid, and how instead I should be eating lots of chocolate and sugar and all these things that cause mass inflammation
[00:30:22] and were the absolute opposite of what every research paper I had read and every piece of information I knew at that point said. But that was their focus, that was what they felt was best, and because of that they looked down their nose. And sadly that's pretty pervasive across the entire system at this point in time. I hear it from at least half of the patients I speak to who have gone through this sort of route and have used AI. They are looked at like they're crazy, they are basically told to
[00:30:52] stop using AI because everything they're looking at is complete fantasy, and all this kind of stuff. Whereas in reality patients need the opposite. They need somebody to sit down and say: look, I know ChatGPT has said this, but this is the real science, this is why that doesn't work. And to explain to them, and to hand-hold them:
[00:31:31] you can absolutely use it and we can help you build that in safely if that's what you want to do but please know that in reality it may not have the effect you're thinking we need to watch out for interactions with chemotherapy etc it causes liver toxicity it causes all these bits and pieces that we need to monitor and manage and therefore let's put it there but let's see what else we can do that is more effective in the meantime and let's work out how later afterwards we can combine that in to give you the best overall outcome
[00:32:00] and sadly I haven't heard of virtually a single NHS or standard of care practitioner who has said that or done the problem with that is it can cause interactions it can cause problems the doctors just can't account for if they don't know and in my journey that nearly
[00:32:30] happened as well. I ended up taking low-dose naltrexone, which is originally a drug created for people to come off of opioids, et cetera; it was for opioid addiction, but you use low doses because it has anti-tumor effects. So I was using low-dose naltrexone, I had a pain-management issue, and I was rushed into hospital, put on a palliative ward and all these sorts of things, and they started trying to give me morphine. Now, obviously that interacts massively with the anti-opioid, morphine being a
[00:33:00] huge opioid. But I hadn't noted it on my file; nobody knew about it, because my doctor had called me dangerous and stupid. So why would I want to willingly offer up something I do? Why would I want the extra pressure, in a time that was incredibly stressful, of trying to fight with somebody who did not want to help me? And sadly, mine is one
[00:33:31] of thousands of stories that I've heard in the last year alone around that. Until the medical system itself changes and starts to understand that, no matter what you do, patients are going to go that route... As it stands, I think it's something like 76% of all stage 3 and 4 patients now look for alternatives on the internet, whether that be ChatGPT or Google, and I think it's something like 70%, 72% very roughly (I can't remember the exact figure off the top of my head) of all patients who are diagnosed with
[00:34:01] cancer look for adjunct and alternative therapies. Until the standard-of-care system takes account of that and notes that, no matter what you tell a patient, they are going to go that route, because you are talking about their life... Until the standard of care realizes that, it
[00:34:36] I am hopeful, I must say. I am sure that this is going to change. If we look at the research that's been done by the American Medical Association (but this is the US, of course): 81% of physicians say they use AI, but mostly for research papers. If you ask them how many use AI for diagnosis, it's only 17%. So, yeah, I find it in a way
[00:35:05] understandable that the experiences you mentioned are as they are. Unfortunately, I must say that I am very grateful to have had a different experience. My GP is super excited about technology, and she knows that I'm doing a lot of research on digital health. So when I gave her the information on the
[00:35:35] clinician tools (one is European based, OpenEvidence is US based), she was super grateful about that. And also, when I went to my physician because I needed a new biologic therapy, and I
[00:36:05] come up with ideas, which I find unusual. Because I think that's a sign that you've become one of the joyous "problem patients", much like myself, because my clinician, my oncologist, has done the same recently. It's taken me over a year to get to this position, but my oncologist now says the same: what do you think we should do, where do you want to go with this, as opposed to how it...
[00:36:36] And I think that's because you've shown agency. You've shown that you have an understanding of what's going on, and therefore they are now part of your journey, which is what I hope for all patients: that the clinician becomes part of
[00:37:05] and make sure they're aware of all the dangers that might be there if you use AI, because we do see a lot of discussions; like, most of the discussions that I
[00:37:35] ... about patients and AI. Is there any last message that you would have for the listeners? As I think I've said on lots of different podcasts and in lots of different places, in the
[00:38:05] I think understanding both your own situation and how things like AI work, and, if you're using it, what it may do and its limitations and so on, is where true knowledge really comes from. It's all well and good plugging research papers in and getting an answer, but if you don't have faith and trust in where that answer comes from, if you don't have the knowledge of what it's done and how it's
[00:38:35] told you... And the same goes with a doctor; it works from both angles, both sides. Until you become your own researcher, until you have enough of an understanding... and sorry, I don't mean "become your own researcher" in the sense that you need to really go and get a PhD, et cetera; I didn't do that in any way, shape or form. I learned by making mistakes. I would always reach out to somebody like me. You can
[00:39:05] always reach out; there are some really wonderful people out there who can give you starting points, who can give you focus. Amanda King's Substack is truly amazing for that; she is great at
[00:40:04] patient. Each story is individual. Yeah, definitely consult your medical team with any steps that you want to take, because what we share here is meant to inform; it's not meant as medical advice. So, Dale, thank you so much for joining me again today, and I hope you stay in good health for a long time. You're also going to be in Copenhagen for HIMSS Europe, so I look forward to meeting you in person. Busy writing my speech for it right now.
[00:40:35] Awesome. You've been listening to Faces of Digital Health, a proud member of the Health Podcast Network. If you enjoyed the show, do leave a rating or a review wherever you get your podcasts, subscribe to the show, or follow us on LinkedIn. Additionally, check out our newsletter; you can find it at fodh.substack.com. That's fodh.substack.com. Stay tuned.


