Philippe Habets is a Dutch physician-scientist and entrepreneur specializing in computational psychiatry and artificial intelligence (AI) applications in healthcare. He is co-founder of Evidence Hunt, a health-tech company aimed at revolutionizing medical literature discovery using AI. The AI-powered platform streamlines the process of finding, analyzing, and utilizing medical evidence, enabling users to access summarized, evidence-based answers with cited sources in seconds. This discussion covers:
- Why measuring resilience with data is so elusive
- How Evidence Hunt reduces search fatigue for clinicians and researchers
- The philosophical challenge of translating emotions into data
- What makes a good prompt when searching for medical knowledge
Youtube: https://www.youtube.com/watch?v=F8tC0B4NvpM
www.facesofdigitalhealth.com
Newsletter: https://fodh.substack.com/
[00:00:00] Dear listeners, welcome to Faces of Digital Health, a podcast about digital health and how healthcare systems around the world adopt technology, with me, Tjasa Zajc. Medical knowledge and scientific papers are growing exponentially, and it can be hard to find relevant information for your use case.
[00:00:25] The ability to efficiently sift through vast amounts of data to find relevant and trustworthy information is more critical than ever, and it is getting increasingly easy with new AI tools. Evidence Hunt offers an innovative approach to data search. By extracting, labeling and prioritizing data, the platform provides users with refined search results tailored to their specific questions.
[00:00:52] Instead of sifting through a deluge of articles, users receive a concise list of high-relevance documents facilitated by AI-generated semantic search and ranking. In this discussion, you will hear a bit about how Evidence Hunt reduces search fatigue for clinicians and researchers, what makes a good prompt when searching for medical knowledge, and more.
[00:01:18] I spoke with Philippe Habets, a Dutch physician-scientist and entrepreneur specializing in computational psychiatry and AI applications in healthcare. He's the co-founder of Evidence Hunt, and because of his background, we also talked about how to quantitatively measure resilience with data,
[00:01:45] and the philosophical challenge of translating emotions into data. So, enjoy this discussion, and if you like the show, make sure to subscribe to the podcast, visit our YouTube channel where you can find all the episodes in video format as well, and also leave a rating or a review wherever you listen to your podcasts. I know this is annoying.
[00:02:09] I know every podcaster says this, but leaving your feedback really helps the show improve, and you can also send me a message on LinkedIn if you have a suggestion for the next topic that we might want to cover. The focus of the show is healthcare systems design, how healthcare systems adopt technology.
[00:02:33] So, I'm currently looking for speakers from the BRICS countries since we've mainly stayed in Europe and the US in the last few months. Now, let's dive into today's discussion. Philippe, hi, and thank you so much for joining me for a discussion on Faces of Digital Health,
[00:03:03] where we're going to talk about medical evidence, how to find the latest answers based on the latest research, and how this is possible in 2025 when we've got the LLMs, when we've got ChatGPT. There are also a lot of tools already on the market, and you are the co-founder of Evidence Hunt,
[00:03:27] one of the providers of engines that basically make it easier for healthcare providers to search for relevant answers. But before we dive into that, I would actually like to ask you a little bit about your background, because it's an interesting one. So, you have a PhD in computational psychiatry. Let's explain what that is in the first place, and then I've got a few other questions about your background. Yeah, so first of all, lovely to be here.
[00:03:56] Thanks for inviting me. Yeah, indeed, it's computational psychiatry. My background actually started with mechanical engineering, but I switched to medicine, then worked as a physician for a brief while, and then committed to a full-time PhD, indeed, in computational neuroscience and computational psychiatry. Computational psychiatry is sort of a subfield of computational neuroscience, so everything concerning computational modeling of brain function, and in psychiatry specifically it relates to clinical psychiatric disorders.
[00:04:25] So, what happens in the brain with depression, what happens in the brain with psychosis. This can be on a biological level, meaning what has changed in gene expression. Are there specific DNA sequences that match specific disease symptoms that we see? And basically, what I focused on in this broader field is two things. One thing was the prediction of clinical outcomes based on what's called multimodal data. So, incorporating genetic data, protein data, imaging data,
[00:04:54] but also clinical measurements, like the questionnaires you have for depression specifically, or for bipolar disorder. And basically, the idea for my thesis was that if you incorporate many different kinds of data, you would ultimately be able to have an accurate prediction of, for example, who's going to get remission from depression in two years, who is going to get an improvement of symptoms, and who's going to be unresponsive to any kind of treatment.
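To make the multimodal idea concrete, here is a minimal, purely illustrative sketch in Python; the modality names and synthetic data are invented for illustration and are not from the actual study:

```python
# Toy sketch of multimodal prediction: concatenate features from
# different modalities and fit a single predictive model.
# All data here is synthetic; modality names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
genetic = rng.normal(size=(n, 5))    # e.g. polygenic risk scores
imaging = rng.normal(size=(n, 3))    # e.g. regional brain volumes
clinical = rng.normal(size=(n, 4))   # e.g. questionnaire scores

X = np.hstack([genetic, imaging, clinical])  # one multimodal feature matrix
y = rng.integers(0, 2, size=n)               # remission yes/no (synthetic)

model = LogisticRegression().fit(X, y)
print("training accuracy:", model.score(X, y))
```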
[00:05:23] So, that was a large part of my PhD, and that's within the field of computational psychiatry. Another thing that I did was more focused on biology. With these psychiatric disorders, the difficult thing is always that there are a lot of different factors that come into play. There are definitely biological factors, but it's not a one-to-one causality. If you have a predisposition for a certain disorder, that doesn't mean that you will get it, because there are also other social and psychological influences. The idea is that these together,
[00:05:51] ultimately, determine whether or not you get a certain phenotype. What I focused on as the second part of my PhD was going a bit more in-depth into these genetic predispositions and linking them to biological measures, both in terms of gene expression in the brain and epigenetic alterations. How far did you manage to get? Because it sounds interesting to be able to predict who is going to relapse in mental health,
[00:06:19] but mental health is so related to social factors that I'm just wondering to what extent biology can prevail in this field? Yeah, so I would say, if it's deterministic, meaning that if you have all the variables you can 100% accurately predict what the outcome is, then we definitely don't have the full picture in terms of data yet. So, there was no 100% accuracy. We actually got close to 80%,
[00:06:48] which, for a super heterogeneous disease (this was specifically in depression), is pretty high. We also validated that in a separate cohort. Yeah, it's quite complicated. There are many factors, and also there is a difference between trying to predict an outcome and understanding why specific data predicts a specific outcome. Of course, there's bias, bias in model training, all of these things. So, yeah, it's a full picture of biological, psychological, and social variables. We collected as much of that data as possible and included it,
[00:07:18] but it definitely was not the full picture. Yeah, as I said, it's a complex field, especially if you think that depression and mental health can heavily depend on the relationships that you have, and it's difficult to predict this or even gather that data about a patient: how happy is the marriage, or the relationships? You might know that the person is alone, but anything else is probably, yeah, a whole field of unknowns. Yeah. I thought it was fascinating
[00:07:48] to just see that you were working on data-driven definitions of resilience. So maybe we can just make one more stop with that. I was recently just saying how, if you look at books from Gabor Maté and a lot of books around mental health, but also physical illnesses, how related they are to psychological health
[00:08:14] and how little we actually understand about those correlations, but science always tries to quantify things. So how far, yeah, again, did you get? What's the data-driven definition of resilience? A very good question. With these concepts like resilience, right, it's very intuitive to us. We all use them and we have a sort of understanding of what we mean by them,
[00:08:42] but to capture it in data is a problem because of two things, basically. First of all, when you go into the definition of resilience, what does it actually mean? You find out very quickly that although we use it very broadly and generously, we actually have no fixed definition that we can then measure. So for example, one definition would be homeostasis, or staying in a healthy type of balance mentally during adverse events or during trauma.
[00:09:11] But that is one definition. There could actually be many. So one thing that I looked at is this idea that it reflects a sort of innate ability, or maybe an acquired ability, or maybe something based on both, to stay relatively healthy during a stressful time, right? And what a stressful time is, there again, is one of those words that has a lot of meaning and we all use it, but how do you actually measure stressful times? So you can ask people
[00:09:39] how stressful they think their life is at the moment. There are, of course, also events that you can measure and score. Big life events, like going through a bankruptcy or a divorce. And then you could look at specific mental indicators of well-being and see if they drop relatively, maybe even more compared to others that have the same type of adverse events. So that was the setup of that project.
[00:10:07] And it was really difficult to see. So in the end, for me, it was a question: is this something that is a trait, or is it something that basically is acquired? And based on this, you get what you get a lot with research: more research is needed, basically. Yeah. That's the conclusion. Yeah, that's the conclusion, yeah. Are there any findings that you thought were super interesting out of all that research that you did? What kind of still sticks with you?
[00:10:38] Yeah, so from the multimodal data side, I thought it was really interesting to see that, indeed, by combining multiple modalities of data, you do get better predictions, which makes sense. And from specifically the resilience side, I thought it was interesting to see. So one of these emerging concepts of resilience is exactly what they call the delta approach, where you compare a lot of variables indicating how well you are doing against the specific amount of stress you've experienced. That could be childhood trauma,
[00:11:07] including adverse life events, including diseases, comorbidities. And if you look at how far off the average of well-being you are, corrected for how much stress you've gathered throughout life, this gives a sort of difference compared to other human beings: how well you are doing relative to how people would normally, or on average, be doing given your cumulative stress throughout life. So it's an intuitive concept,
[00:11:36] but looking at it more closely, it actually just boils down to how much stress you've got. So there was really not this distinguishable delta for people. It was just completely correlated with how much stress you have experienced. And there is not really a distinguishable fixed delta from the average baseline, because in a couple of years' time it could be completely different, in ways that could not be related to anything that seemed plausible at all.
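A small sketch may help pin down the delta idea; this is a hypothetical illustration with synthetic data, computing the delta as the residual of well-being after regressing out cumulative stress:

```python
# Sketch of the "delta" approach: resilience as the residual of
# well-being after correcting for cumulative stress. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
stress = rng.uniform(0, 10, size=100)                  # cumulative life stress
wellbeing = 8 - 0.5 * stress + rng.normal(0, 1, 100)   # well-being score

# Expected well-being given stress, from a simple linear fit
slope, intercept = np.polyfit(stress, wellbeing, 1)
expected = slope * stress + intercept

delta = wellbeing - expected  # positive = doing better than expected
print("most 'resilient' individual (by delta):", int(delta.argmax()))
```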
[00:12:05] So if it's not something innate to someone, and it's also not something whose change over time you can explain based on certain variables, yeah, it's really questionable what it actually is in terms of data, this resilience concept. It was very interesting because, so, I studied philosophy as well. And what I always found most interesting, in terms of Western philosophy at least, is this distinction between what we very intuitively understand through the way that we speak, so basically natural language, which is a topic that I guess
[00:12:34] we'll dive into in a couple of moments, but this intuitive understanding, and then on the other side, the quantifiable, more sort of mechanistic way of viewing things. So that's also: at what point does a specific neurotransmitter become an emotion? In itself, it's just a molecule, right? I know that there is even a lot of jewelry with the specific molecular structure of serotonin or dopamine, resembling or carrying the meaning that it is all about happiness or drive.
[00:13:03] But actually it's not the same thing, right? These neurotransmitters are part of a lot of very complex systems that have nothing to do with that specific thing that we intuitively understand in natural language. This sort of dichotomy between what we intuitively understand, because of the way that we understand the world around us, and on the other side what we can measure and predict, is something that was always very interesting to me. Yeah. Since we're talking about philosophy, and then we're really going to move to Evidence Hunt, but do you think
[00:13:32] not knowing what the mechanisms behind emotions are is part of the appeal of being a human? And I guess what makes us feel special is the lack of understanding of why we feel a certain way. So do you think we're going to, at one point, be able to decode all that, and what will that mean? Is there just going to be an intervention for every
[00:14:02] negative emotion? Yeah, it's a really spot-on question. I think on one hand we actually do understand very well what these emotions are and how they work, because we all experience them, and if someone is angry or if someone is sad, we immediately recognize it and we also know what that feeling is. So that is like one paradigm of looking at it. So we do have a very good understanding of those things. But the moment we try to translate it into molecules, which themselves do not have any emotion,
[00:14:31] it becomes difficult for the human brain to see the connection. So, this system that you can also try to simulate in a computational model, at what point does it actually become the emotion instead of just a model of emotion? I think that also has to do with the way that we naturally understand things, because that's from the paradigm, the domain of lived experience, to call it that. And on the other side, if we were able to basically model it completely from A to Z,
[00:15:00] meaning that there is some sort of deterministic worldview attached to that, then still you have to think: okay, now I have all of these systems computed, or I can explain everything, then how is that still that feeling? It's not, right? You have this system of neurotransmitters, you have this electrical system with signals that you understand, that you know correlates with a specific emotion, but it's not the experiencing of that emotion. So in that sense, I think it is for the manipulation of matter that this is useful,
[00:15:30] this calculation of things, understanding things, and of course manipulation of anything material could also be used in health. So basically any drug does the same: by understanding mechanisms, or not even understanding them but having figured out that it does something on a molecular level that helps, it manipulates the current state of molecules and matter in the human body and has a certain effect. But the actual experiencing is, I would say, yeah, something completely unrelated, in the sense that it's a different way of looking at something that might
[00:16:00] be the same thing, but just in a different language. Yeah, language is so important when it comes to emotions, and we have so little; the vocabulary is often missing. Like, I love the fact that you used sad, happy, angry. When we talk about emotions, we usually say, yeah, somebody's sad, happy, angry, frustrated, and I might find a few more, but there are so many nuances, like how do you know
[00:16:29] when you feel abandoned, when you feel left out, when you feel hopeless, when you feel I don't know what; the spectrum is just so broad around that. Yeah, and I think it's very fascinating, especially with LLMs, to see that they can mimic the way that we use language accurately. Because these things, like mentioning you're sad and intuitively understanding what that actually means, are, I think, due to the nature of our understanding itself.
[00:16:59] So we have a specific way of experiencing, so this is very Kantian, but I also love that just 10 minutes in, we're very deep into philosophy. But the way that we experience the world basically also means that we will interpret things that do not understand the world as if they actually do. So I don't know how many times you've heard someone say he or she about ChatGPT, but it's a common thing. People say, but he is saying, or he's not working now, or at this point
[00:17:29] he doesn't understand. But who is this he? There is no agent there. It's just a bunch of neural networks that run inference. But I think that's always the thing, that we tend to interpret things in ways that we understand reality, meaning lived experience. Actually, what we have built, for example with LLMs, is something that mimics something we can understand, but in itself, yeah, we cannot really attribute, well, there are people who would disagree, but I would say it's absolute nonsense to attribute lived experience to a model.
[00:17:59] Yeah, absolutely. Speaking of LLMs and AI, Evidence Hunt is an AI-powered medical research platform. Tell me a little bit more about how you started working on this. If we look at search engines for medical research, there's PubMed, there's Wolters Kluwer's UpToDate, there's Open, what's
[00:18:29] Open something? Open Evidence. Yes, Open Evidence. There's you, there's also ChatGPT and other models that have been tested for medical knowledge. So how, yeah, do you position yourself, and what was the driver for you and your co-founder to build Evidence Hunt? Yeah, so to start with, the origin was actually just trying to solve
[00:18:58] an issue I had, both during my brief time as a physician and later as a PhD student. Especially if you're a junior doctor, there are a lot of moments where you have to look up specific information and guidelines, or you get requested to find out what's been published in the literature. And definitely as a PhD student in the medical sciences, you have to do a lot of literature reviews, find out what's been published before and what hasn't. And yeah, with the existing platforms, that usually means drafting a very weird-looking query,
[00:19:28] for example, on PubMed. And then what you get is a long list of articles that you then manually need to go through and scan. And while you're doing this, you always have a fixed set of criteria in your mind, right? For example, if you're doing a literature review, you're looking at specific study types. For example, in the clinical fields: were there any randomized controlled trials? Was the control group blinded? Was there double blinding going on? What was the actual population? What outcomes were being measured? Was there enough sample size? What were the
[00:19:58] effect sizes? What is the risk of bias? So all of these things are based on basically some sort of schema for evaluating articles for relevancy and trustworthiness. So, I got this book called Automate the Boring Stuff with Python already, I think, 10 years ago. It's this sort of introduction to coding in Python. The idea is that for all the tasks that take a lot of time, are very repetitive, and where you would like to do something else, there's always a way to automate them with code. So when I
[00:20:28] was doing this, that really reminded me of that specific book. Of course, I don't want to say that literature review is only boring and should be automated, but I think not having the option to automate it hinders a lot of people who are not really into doing full systematic reviews. And even systematic reviews are all based on specific rules, criteria, and evaluations; it is very structured. So what I started working on, because I was already working with transformer models, that's the type of deep
[00:20:58] learning model that's very good at processing large sequences of data. I initially used them on DNA sequences, but of course, nowadays they're mostly used on text. All the large language models like ChatGPT are these transformer models. So what I started doing was first thinking: all these labels I have in my head for selecting something as relevant, I can probably pre-extract with a model. So I started fine-tuning models that could do that, also benchmarking how well they actually performed. So these
[00:21:27] were things like: what population is this study performed in, what was the intervention, what were the outcomes, but also things like sample size and some risk-of-bias measures. So, having this sort of collection of models that could extract these things from all the articles, what I ended up with was a database with not only all the articles, but also these extracted labels. And that, in a second iteration of what I was working on, can then be used for way more precise filtering and ranking of what is relevant to
[00:21:57] your specific question. So that's how it started. Then, of course, I wanted to get rid of any query language or keywords. It's very possible to turn any question or prompt from natural language into a search strategy, using both semantic search and keyword search. And then what I started seeing was that with this flow, of having natural language that gets converted into a different type of search on a different kind of database that's enriched with all of these labels, you actually can get a tenfold reduction in
[00:22:27] terms of the articles that you get returned as potentially relevant. And on top of that, you don't have to write any query, and you can actually ask follow-up questions. And especially with the LLM developments that came along, there was also the option to not only get a very short list of potentially relevant articles, but to actually read them, do the reading with an LLM that then summarizes whatever is relevant in those retrieved articles with regard to your question.
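To give a flavor of what such label pre-extraction can look like, here is a minimal Python sketch. It uses a generic off-the-shelf zero-shot classifier as a stand-in; Evidence Hunt's actual fine-tuned models and label set are not public, so the model name, labels, and abstract below are illustrative assumptions:

```python
# Illustrative only: a generic zero-shot classifier stands in for a
# fine-tuned extraction model that tags abstracts with study-type labels.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

STUDY_TYPES = ["randomized controlled trial", "cohort study",
               "case report", "systematic review"]

def extract_labels(abstract: str) -> dict:
    """Pre-extract a structured label from an abstract, so search can
    later filter and rank on it instead of on raw text alone."""
    result = classifier(abstract, candidate_labels=STUDY_TYPES)
    return {"study_type": result["labels"][0],  # top-scoring label
            "confidence": result["scores"][0]}

abstract = ("We conducted a double-blind randomized controlled trial of "
            "sertraline versus placebo in 240 adults with major "
            "depressive disorder...")
print(extract_labels(abstract))
```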
[00:22:56] So it basically started with my annoyance with the existing ways of going through that literature, because I'm very passionate about all the knowledge we have in the medical field. There is so much knowledge, and there is so much research out there with extremely valuable and robust findings. But there is also a huge chunk of research that is not relevant to your use case specifically, and that shouldn't be incorporated in whatever you're trying to do, because either it is not completely fitted to
[00:23:26] what you're trying to do, or it is of low quality. Having a way to navigate that ocean of medical literature was, to me, a no-brainer to work on. Initially for myself, but when I had built these components, I put out an MVP, and from the get-go it was very clear that a lot of people didn't enjoy the existing ways of searching. What about existing tools such as Open Evidence and
[00:23:56] other competitors that you might have on the market? Why did you not use those, for example? Yeah, because they didn't exist when I started this. Okay, and now you're in competition. Yeah, so it's a logical thing, and I think that's good. There is so much value in all the existing evidence, and it is really a bottleneck that there is so much evidence and not really an efficient way of processing it reliably, so it makes sense that a lot of
[00:24:26] people start working on a solution for that. Yeah. I think what we focused on from the beginning is specifically that the way that you would do it manually, the logic behind that, is really integrated in the product from day one. So specific labels, specific prioritization. For example, RAG-based searches, or retrieval-augmented generation, are something very commonly used now for these types of use cases, but there you just get a ranking based on semantic similarity, or maybe, if you use keywords, on whether these keywords have been found as a match in the original
[00:24:55] content. But that's still a long way from getting the actual top-ranked list of relevant articles, because there are, like I mentioned, things like: is this the right population? Was this a randomized trial? Was there huge loss to follow-up, any risk of bias here? So these are all different types of metrics that you use for prioritization and ranking that do not come natively with a RAG-based approach, but you can actually build that, and that's what we've done and what we focus on.
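As a rough illustration of ranking on more than semantic similarity, here is a minimal sketch; the field names, weights, and example articles are invented assumptions, not the actual ranking logic:

```python
# Sketch: re-rank retrieved articles by combining semantic similarity
# with pre-extracted quality labels. Weights are arbitrary for illustration.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    semantic_similarity: float  # from the embedding search, 0..1
    is_randomized: bool         # pre-extracted label
    high_risk_of_bias: bool     # pre-extracted label
    sample_size: int            # pre-extracted label

def rank_score(a: Article) -> float:
    score = a.semantic_similarity
    if a.is_randomized:
        score += 0.2            # prefer randomized trials
    if a.high_risk_of_bias:
        score -= 0.3            # demote likely-biased studies
    if a.sample_size < 30:
        score -= 0.1            # demote tiny samples
    return score

articles = [
    Article("Small uncontrolled pilot study", 0.91, False, True, 18),
    Article("Large double-blind RCT", 0.84, True, False, 480),
]
for a in sorted(articles, key=rank_score, reverse=True):
    print(f"{rank_score(a):.2f}  {a.title}")
```

Here the RCT outranks the pilot study even though its raw semantic similarity is lower, which is the kind of behavior a plain RAG ranking would miss.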
[00:25:25] So the beginning was this. What's your source of data? The key question for anyone building large language models or doing anything with generative AI these days is: what's your hallucination rate, and what are your data sources? I think that's also where the differentiation between different providers comes from. Definitely. In terms of the data side, we started with just peer-reviewed publications, right? So anything in
[00:25:54] Medline, which is also what PubMed uses. Then we expanded into medical guidelines as well, which are a different type of document. But the idea from the beginning was that you don't want to use all these different platforms; you basically want to have one interface that connects all of them. So it's also built in a way that it can handle different types of data relevant to medical evidence processing. And actually, what we have now is a feature where people can upload any data related to medical content. This could be guidelines, protocols, unpublished
[00:26:24] research. It could also be, for example, a key opinion leader's white paper about something, or a presentation in PDF format, that you want to include in whatever you're trying to create or get an answer to. Can I just ask you something very quickly? Since you mentioned guidelines, how does the fact that guidelines are very different from country to country, you've got national guidelines, and then hospitals create their own local
[00:26:53] guidelines, and then clinicians can have their own guidelines. How does that affect your vision and what you can achieve, given that, yeah, it's very hard to standardize clinical practice? Yeah, no, that's definitely true, and I think it also shouldn't be standardized, because every region has a different population and a different healthcare system, so it makes sense that there are actually deviations of
[00:27:22] the guidelines for every region. So as a simple starting point, I think having the right information for your region, for your specific use case, is key; that's why we have the option to include your own documents that are relevant to you. In terms of guidelines specifically, one of the biggest problems with guidelines is that they don't get updated that frequently. If you go to any guideline database, you'll see that some of the guidelines were last
[00:27:52] updated 10 years or more ago. And the reason is that this whole process of doing literature reviews in the traditional way is super time-consuming, and that means that for all these specific treatments, specific populations, specific diseases, it's just an incredible workload to do new iterations on a guideline on a yearly basis, and it's definitely impossible to do it on a monthly basis. But that's how things are now.
[00:28:22] I would say, what we're also building and focusing on is having super comprehensive retrieval of whatever is relevant, very specific to your use case. And in the end, what that would mean is that if you have an existing guideline, and you do the right retrieval of new evidence that either extends specific findings or contradicts them, and you can incorporate specific details about your healthcare system, about what metrics you want to focus on for deciding what the
[00:28:51] prescribed guidance is on a specific topic, you could actually do an automated update of that. So I think that's also the beauty of these developments in the LLM industry, because it is all about processing textual content in a very efficient way, but guided by the reasoning that we as humans think should be adhered to. So I see a lot of potential there, not only for incorporating the right documents for your specific use case, but for actually updating these knowledge documents, like guidelines, using these techniques and this platform.
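One way to picture that update step: for each guideline statement, retrieve new studies and ask an LLM whether each one extends or contradicts the statement. The sketch below is a hypothetical illustration using the OpenAI client as a stand-in for whatever model is actually used; the function and prompt are invented:

```python
# Hypothetical sketch: classify how a newly retrieved abstract relates
# to an existing guideline statement. Any LLM client could be swapped in.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def check_against_guideline(statement: str, new_abstract: str) -> str:
    prompt = (
        f"Guideline statement:\n{statement}\n\n"
        f"New study abstract:\n{new_abstract}\n\n"
        "Does the new study EXTEND, CONTRADICT, or have NO BEARING on "
        "the statement? Answer with one word plus a one-line reason."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```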
[00:29:21] So, what's your long-term vision in terms of where you would like to get? And I'm hoping that you can reflect on the challenges with data sources. A lot of data sources in medical research are closed, which is the
[00:29:50] main selling point of Open Evidence. They signed a huge agreement with the New England Journal of Medicine that basically serves to position them as a reliable source. So on the one hand we've got that, and on the other hand we've got these strong voices saying that knowledge should be open, and the encouragement of open innovation, open science, open data, open research.
[00:30:21] How do you fit into that in the long run? Yeah, so I think there's definitely a direction being taken towards more open accessibility of information, and that's definitely an improvement. Of course, there is a lot of value in content itself, so it's logical that there is also a publishing industry. I think a second thing that came up in recent decades, with more publications out there, is that there is not only value in the content itself, so the study that was
[00:30:51] done and knowing that the study was done, but actually in locating, from this insane ocean of papers and knowledge, specifically what's relevant to you in a couple of seconds, because that's something that wasn't possible before and definitely is possible now. So there is a second value on top of the content itself, which is the accessing of the content given this whole ocean of different types of data. So that's an important thing. For us,
[00:31:19] all of these processes regarding medical evidence are of course based on publications, but there is actually a lot more data that is relevant. For example, real-world evidence, meaning data that is not based on a trial, which is always very artificial, because people don't regularly use medication the same way that they do in controlled trials. That data is very valuable because it shows how, in daily practice, treatments are performed and how people adhere to
[00:31:49] medication prescriptions. So having that data out there, and being able to incorporate it or enrich it with whatever we have from published research, I think is an even bigger opportunity. So different types of data beyond just publications, integrated in a platform that is enormously flexible, depending on your use case, to tailor to whatever you're trying to do, is really what we envision Evidence Hunt should grow into.
[00:32:18] Also from the product side, right, our product is very simple. The basic version online has a chat interface: you can ask any question, give any instruction, draft a table, do an analysis of A and B, and it basically performs it. But I think that's really where the focus should be: having this interface that gives you control and access to all kinds of relevant data, which gives you a much more complete picture. That's also something I got from my time as a PhD student, this multimodal data idea,
[00:32:47] seeing that there are different modalities of data that basically say the same thing, but sometimes they actually enrich each other or they contradict each other, and having the full picture is definitely unlocking a new type of value and insight that I think we should strive for, and that's what we envision with Evidence Hunt. What do you consider your biggest success so far, and also the biggest challenge, apart from the rising competition, which wasn't there
[00:33:17] when you started? Yeah, biggest achievement: I think it's pretty cool that you start with a problem that you just solve for yourself, and it ends up being this company that grows, and you hire new people, and you have a team that's working on the same problem, and you get closer to solutions and even new use cases every day. So that's incredibly exciting, and yeah, I'm very, very lucky to experience that. Yeah, in terms of challenges, and I love this, it's an
[00:33:46] enormously fast-paced environment in terms of the techniques that exist. Every couple of months you have a new model that beats all the closed-source models, then there is another model that actually isn't a model at all but just Anthropic with a lot of prompts. So this is a super exciting field to be in, but it also means that you have to be very fast and focused, and given that there are so many options to go for, given that so much is possible, speed is really of the essence, and
[00:34:16] yeah, that's exciting, but also a challenge: to focus on the right things and execute in a short period of time. Yeah, especially, I guess, if you're based in Europe. Do you have any anticipations regarding the current developments around the announcement that the US government is not going to regulate AI? So God knows what's going to happen in terms of development there, and on the other hand there's you, in Amsterdam, in Europe, with the
[00:34:45] EU AI Act and tens of other types of legislation that entrepreneurs often complain are slowing things down. Yeah, I understand the complaining; I'm not the biggest fan of complaining. I think there is definitely so much possible, and you don't want to hinder any progress. Especially in Europe, you don't want to block innovation with a ton of regulations. On the other side, of course, there is a
[00:35:14] reason for coming up with regulation, because there are of course dangers to AI, misinformation, or actually fake data, so it makes sense to come up with frameworks and measures to counter that. I think for us, again, it's a developing field. It's a logical thing to want to have some sort of control, but how do you actually balance this? Right, you don't want to block any innovation; at the same time, you also want to have some checks in
[00:35:43] place. It's a developing field, so we'll see how to navigate that, but I'm in any case all for responsible use of AI. I think we've focused on that from the beginning, by including credible sources and incorporating specific ranking mechanisms that are based on robustness: if specific risk-of-bias criteria are met, then probably the article shouldn't be presented as the most promising or most trustworthy article for a given overview of the literature. On the other side, you
[00:36:13] also want to have flexibility, right? If you want to find out whether any publication has been put out on a specific subject, disregarding quality, then that's also something you should be able to retrieve. But yeah, it's a developing field; I think it's a logical thing. At the moment, I think I saw in one of the posts on the Evidence Hunt LinkedIn that you have 25,000 professionals using
[00:36:41] the platform, but at the same time, anyone can actually search on the platform, right? Do you plan on changing the access in that sense? Because, I keep talking about Open Evidence, but I believe it's really closed if you're not a clinician. Yeah, it seems to be the trend, right: if you use open in your name, then it's really closed, with OpenAI, Open Evidence. What's your advice on prompting?
[00:37:11] You recently also published some suggestions on what's a good prompt and what's a bad prompt. For the people listening to this discussion, what would be your advice, and an example of how not to ask the engine a question, and what is a good example? Yeah, so I think a good starting point is to view these types of AI systems as reasonably intelligent aliens. That means that they can
[00:37:40] understand what you're saying, or they can process what you're saying into something that looks like a proper response, but any context that these models don't have, they also don't know about. So if you're, for example, asking about specific treatments, but you have in mind specific context regarding the disease, the onset, the population, or anything, basically, that's relevant for getting the right answer, that's something that you should integrate. So that's, I think, the most basic tip for prompting:
[00:38:10] include the context that maybe you understand by default, but that a reasonably intelligent alien wouldn't know about. So that's the first tip.
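To make that tip concrete, a hypothetical before-and-after (the clinical details are invented for illustration):

- Too little context: "What's the best treatment for depression?"
- Context included: "In adults over 65 with treatment-resistant major depressive disorder who have failed two SSRIs, what randomized controlled trial evidence is there for augmentation strategies, and what were the effect sizes?"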
[00:38:39] I think another thing is that it's really a skill. Everyone has his or her own way of googling, and with tools like ChatGPT, people also develop their own ways of prompting, and it's using these things that really gives you the skill of using them most effectively. So what I always say is: if you start with the platform and you haven't used a lot of LLM tools yet, just ask a question in two different ways, see the difference in the responses, and see which response is actually what you were looking for; maybe even try a third. That actually helps. I hear a lot from users that doing that, even one time, the same question but in a different way, gives them an intuitive understanding of how the platform works. And I think that's also the beauty of this conversational interface, which of course can be expanded with voice
[00:39:09] to text, and maybe in the distant future with other types of connections, but it is very intuitive, so you rapidly develop an intuition for how you should prompt. But the starting point is: include all the relevant context, because the alien that just landed on Earth has learned how to speak but doesn't know any of the context it should be aware of. We're going to end on a philosophical note, because, as you said, prompting is a skill. So what I was thinking while you were talking was,
[00:39:39] going to ChatGPT and saying: help me create a good prompt for Evidence Hunt, this is what I'm trying to figure out. And then, I don't know what would happen, how ChatGPT would guide you through the creation of a good prompt. But since you said, you know, that you studied philosophy, one of the previous speakers on the podcast said that we are now becoming the interfaces between
[00:40:09] LLMs, because you go to ChatGPT to ask for a prompt, and then you put that prompt into Evidence Hunt, and Evidence Hunt produces something, and then you potentially go to ChatGPT to have it explain to you what you just got.
[00:40:40] Yeah, so it is all part of one workflow, right? And what you don't want is to have 50 tools that you need to use in this one workflow. What you want is one platform, one interface, that actually helps you end to end during this workflow. Integration, so increasing the range of the workflow that you can integrate within Evidence Hunt, is one of the most important things that we focus on with the new developments. So that's the first thing I think of. Then the other thing is more on the philosophical level: at this point, how many
[00:41:10] emails do you draft completely from scratch yourself, and how many times do you actually just summarize what you want and let ChatGPT write your email for you? It sounds quite amusing, it has some sort of funny twist to it, but I think it makes sense to automate
[00:41:40] things that are just not the things you want to spend time on. At the same time, because you interact with them, you also create new skills. So it's not that we are interfaces between LLMs; I think it's becoming a more and more natural way of improving efficiency, but also just having a more comfortable way of working, and leaving most of the automation up to models that do a better job at it. And that opens up possibilities for new skills, new
[00:42:09] types of thinking about things. Actually, I have this a lot when I think of specific communications, emails, or presentations: I have a discussion with LLMs. So again, there you have this thing where you get pointed to maybe new insights, even though it's not really a model that has creativity, but just having this new way of interacting with something, with thoughts, also opens up a lot of possibilities. And that's not just me being an interface between LLMs; it's actually me interacting with these
[00:42:39] LLMs, and I think it's just a new extension of interacting with the world. And if I put your answer into ChatGPT and asked what the bottom line is, it would be: you're optimistic about the future and the use of the tools that we have at our disposal. Exactly. So you can download the transcript generated by AI, put it into ChatGPT, and ask exactly whether this is the case. But yes, that's a good summarization.
Stay tuned.