What will health AI assurance labs look like, and who will pay for assessments?
Faces of Digital HealthDecember 10, 2024


Several organizations are thinking about the right way to regulate AI, and the idea of assurance labs that would test and validate AI solutions in US healthcare is taking shape. This was the topic we discussed with Brian Anderson, CEO of the Coalition for Health AI (CHAI): what assurance labs will look like, how much assessments will cost, who will pay for them, and what AI "nutrition labels" will look like.


Summary:

Assurance Labs in Healthcare AI

  • The Coalition for Health AI (CHAI) is developing a network of quality assurance labs to evaluate AI models in healthcare.
  • These labs aim to provide independent, transparent assessments of AI models' performance across different populations.
  • By the end of 2024, CHAI plans to have two certified labs operational, with more to follow in 2025.

Model Cards and Evaluation

  • CHAI has introduced "model cards" or "nutrition labels" for AI models, describing their training data, methodology, indications, and limitations.
  • Model cards are created by developers, while assurance labs provide independent evaluation reports.
  • CHAI is working on technical specifications for model cards to ensure consistency and transparency.
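To make the model card idea concrete, here is a minimal sketch of what a machine-readable model card might look like, loosely following the categories discussed in the episode (training data, methodology, indications, limitations, and a slot for independent evaluation metrics). All field names and values are illustrative assumptions, not CHAI's actual published schema.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Hypothetical model card ("nutrition label") for a health AI tool.

    The developer fills in the descriptive fields; an assurance lab's
    independent evaluation report could populate `evaluation_metrics`.
    """
    model_name: str
    developer: str
    training_data: str            # description of data sources and cohort
    methodology: str              # how the model was trained
    indications: list[str]        # intended uses
    limitations: list[str]        # known failure modes / out-of-scope uses
    warnings: list[str] = field(default_factory=list)
    evaluation_metrics: dict[str, float] = field(default_factory=dict)

# Example card for a (fictional) sepsis risk classifier.
card = ModelCard(
    model_name="sepsis-risk-classifier",
    developer="Example Health AI Inc.",
    training_data="De-identified EHR records from ~12,000 inpatients (single region)",
    methodology="Gradient-boosted trees on structured vitals and labs",
    indications=["Adult inpatient sepsis risk screening"],
    limitations=["Not validated for pediatric populations"],
)

# An assurance lab's evaluation could add stratified performance metrics,
# surfacing the kind of subpopulation gap discussed in the episode.
card.evaluation_metrics = {
    "auroc_overall": 0.84,
    "auroc_subpop_a": 0.88,
    "auroc_subpop_b": 0.61,
}

print(asdict(card)["model_name"])  # → sepsis-risk-classifier
```

The key design point is the separation of roles: the developer authors the descriptive fields, while the metrics section is reserved for independently produced evaluation results.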

Goals and Benefits

  • Assurance labs aim to balance innovation with safety in AI development.
  • They can help identify model performance issues across different populations and accelerate improvements.
  • The process is intended to build trust in AI among healthcare providers and patients.

Implementation and Challenges

  • CHAI is creating a competitive marketplace of quality assurance labs to keep costs reasonable.
  • Labs must be free from conflicts of interest with AI vendors.
  • Evaluation reports will be published in a public registry for transparency.
  • The cost of evaluations is expected to be in the range of thousands of dollars, not millions.

Future Plans

  • CHAI is exploring partnerships with health systems and NGOs to establish quality assurance labs in the EU.
  • The initiative aims to be scalable and adaptable to different geographic regions and populations.


www.facesofdigitalhealth.com

Newsletter: https://fodh.substack.com/

[00:00:00] Dear listeners, welcome to Faces of Digital Health, a podcast about digital health and how healthcare systems around the world adopt technology with me, Tjasa Zajc.

[00:00:14] If 2023 was full of excitement around the generative AI scribes that will decrease the burden on clinicians and their burnout,

[00:00:25] it seems that a lot of focus in the AI space in healthcare in 2024 is on regulation.

[00:00:33] Several organizations are thinking about the right way to regulate AI and the idea of assurance labs,

[00:00:41] which would test and validate AI solutions in the US healthcare, is taking shape.

[00:00:47] This was the topic we discussed with Brian Anderson, the CEO of the Coalition for Health AI, or CHAI.

[00:00:55] And I asked Brian everything about assurance labs.

[00:00:59] What will they look like?

[00:01:01] How much will an assessment cost?

[00:01:03] Who will pay for assessments?

[00:01:05] And what will so-called AI nutrition labels look like?

[00:01:09] And what will vendors need to put on them?

[00:01:13] Enjoy the show.

[00:01:15] And if you haven't yet, make sure to subscribe to the podcast and check out our newsletter,

[00:01:20] which you can find at fodh.substack.com.

[00:01:23] That's F-O-D-H dot sub-stack dot com.

[00:01:27] And if you enjoy the discussion, I will really, really appreciate it if you take a minute or two

[00:01:33] to leave a rating or a short review.

[00:01:36] I know the process is not super friendly, but your opinion matters and it's the fuel for the show.

[00:01:43] So thank you all the listeners from the past who already took the time to leave a review.

[00:01:49] And thank you to those that will do so in the future.

[00:01:53] Now let's dive in.

[00:02:08] Brian, hi, and thank you so much for joining me on Faces of Digital Health for a short discussion

[00:02:15] and kind of an overview of what's currently happening in the regulation of AI space.

[00:02:22] I must say that personally, I feel like if 2023 was the year where everybody was excited about AI,

[00:02:32] it seems that in 2024, there's more and more organizations that are trying to find solutions to regulate it.

[00:02:40] So the space is getting quite crowded, too.

[00:02:43] So I am glad to have you, the CEO of CHAI, the Coalition for Health AI, here.

[00:02:50] I think this was basically one of the first organizations or initiatives in the US that wanted

[00:02:57] to provide some sort of a structure around ensuring how we validate AI.

[00:03:04] But if I just mentioned a few other things that are currently happening in the US and globally,

[00:03:09] so there's TRAIN, the Trustworthy and Responsible AI Network, which was announced at HIMSS 2024

[00:03:15] at the HIMSS Global Conference.

[00:03:17] It aims to operationalize responsible AI principles.

[00:03:22] Then there's the Global Agency for Responsible AI in Healthcare.

[00:03:26] In Europe, we've got the HIPPO AI Foundation, which really advocates for open AI,

[00:03:32] meaning that the AI models are transparent.

[00:03:36] So in essence, it's agreed upon that we want to regulate AI.

[00:03:41] Everybody might have a little bit of a different idea on what the best approach is.

[00:03:45] So how do you see the evolving landscape of AI regulation and the way forward?

[00:03:53] Gosh, that was a long intro.

[00:03:54] Sorry about that.

[00:03:55] It's all right, it sets the scene.

[00:03:57] No, it's a complex space.

[00:03:58] So I appreciate the need to set the context.

[00:04:00] I think it would be helpful maybe to start a little bit about the genesis of CHAI and how

[00:04:05] that kind of frames a lot of how we see moving forward in this space of health AI.

[00:04:12] So CHAI was started back three years ago now, and it was started within the private sector

[00:04:18] by a group of doctors and technology companies and patient community advocates wanting to come

[00:04:24] together to set what we would call best practice frameworks.

[00:04:27] What are the best practice principles that we all are in violent agreement about?

[00:04:34] Things like transparency, fairness, security, robustness, accountability.

[00:04:40] We agree with those things at a high level, but when it gets down to a technical level,

[00:04:44] we don't have consensus.

[00:04:46] And it's really good in a rapidly evolving space like AI to have consensus, particularly

[00:04:52] when you're in a consequential space like health where patients' lives are on the line.

[00:04:58] And so because we were started in the private sector, it really was an effort where you have

[00:05:02] industry trying to define what those industry best practices are.

[00:05:07] And that has been the focus.

[00:05:09] And so when we pivot from that to how do we ensure that AI in health is safe and effective?

[00:05:16] I think the vision that we've always had in CHAI is that it is really important, particularly,

[00:05:24] again, in a rapidly evolving and maturing space like AI, to bring together the private sector,

[00:05:29] industry, health systems, payers, patient community advocates, to bring them alongside

[00:05:35] public sector officials, regulators, government officials that are looking to develop policies

[00:05:42] and inform whatever the emerging regulatory frameworks are, to have that be informed by

[00:05:48] what these industry best practices are, right?

[00:05:50] You don't want regulation to be uninformed by where industry is moving.

[00:05:55] And so we've always approached it from this is a private sector-led effort.

[00:05:59] We are going to create space for the public sector, for the regulators to join our meetings,

[00:06:06] to join our working groups so that they can understand what these best practices are that are being

[00:06:10] developed on the private sector side.

[00:06:12] Now, to your question, it is a real challenge to balance the need to spur innovation with

[00:06:23] the need to create safety.

[00:06:24] And I personally believe, and I think many of us in CHAI believe that the quality assurance

[00:06:30] labs are the place where that can happen.

[00:06:33] If you think about and you look at many other sectors of consequence across our economy,

[00:06:40] right, aerospace, automotive, consumer electrical devices, the lamps that I have here in my office,

[00:06:47] they all have independent entities that evaluate these airplanes, cars, machines, toaster ovens

[00:06:55] for safety and efficacy.

[00:06:57] And they do that in the private sector because that's what the industry demands, right?

[00:07:03] As a consumer, I do not want to buy a toaster oven that is going to cause a fire in my kitchen

[00:07:07] or a lamp that's going to cause a fire in my kid's bedroom.

[00:07:10] I don't want to buy a car that's not going to have airbags that deploy appropriately if I get into an accident, right?

[00:07:17] We demand and we put a premium and a value on those.

[00:07:20] And so that's why these assurance labs in those sectors have developed and continue to be there.

[00:07:25] Similarly, in the AI space within CHAI, we are hearing very loudly from the customers of AI models.

[00:07:33] So examples of customers of AI models might be health systems or payers or health plans or life science companies

[00:07:40] where they are engaged in procurement processes or an RFP looking to solicit for an AI tool to solve a problem.

[00:07:48] We're hearing very loudly from that community that they want to have greater level of transparency

[00:07:54] around how these models actually perform and how they're actually created.

[00:08:01] And so that kind of transparency can be found through the independent,

[00:08:07] intentional evaluation of how these models actually perform on testing data.

[00:08:12] And the ability to create that evaluation report informs the customer to make a more informed procurement decision.

[00:08:22] Now, the flip side of that, the benefit of these labs, how it spurs innovation and accelerates development of high quality AI models

[00:08:31] is if you have a developer that's trained a model on a particular set of data, they take it to the lab.

[00:08:38] The lab discovers, wow, this model works really well on a particular subpopulation, but works really poorly on this other subpopulation.

[00:08:45] And the developer may not even know that because they didn't have access to that training data for that particular kind of subpopulation.

[00:08:52] At that point, what these labs can do is they can actually accelerate the access to training data

[00:08:58] to enable these models to perform robustly across different populations of people,

[00:09:05] opening up new sales opportunities, different populations of people for these models to work safely and effectively on.

[00:09:12] That balance of the rigor around an evaluation, developing an independent network of labs to do that kind of evaluation,

[00:09:22] that can then also spur innovation, is I think a really good balance that is private sector led,

[00:09:28] that allows industry to both accelerate or remediate models that might perform poorly in a particular population

[00:09:37] or open up new opportunities for models to be developed in new populations,

[00:09:41] while also enabling the customers of these models to rapidly be able to identify who's the cream of the crop

[00:09:48] in a particular space and make the procurement decisions for those people.

[00:09:52] Now, what that means for the regulators, I'm not a politician, and so I can't really,

[00:09:56] I certainly don't want to speak for the regulatory community, but I can say that we are actively involved

[00:10:02] across the Republican and Democrat political spectrum, both at the federal level and at the state level,

[00:10:10] because I think everyone wants these tools to be safe and effective.

[00:10:13] No one wants AI tools to hurt people. We're all patients, we're all caregivers.

[00:10:18] And I think finding the right balance about how at a regulatory level, these kinds of assurance labs

[00:10:24] can be used to protect all of us is something that CHAI, myself and our team are eager to work with

[00:10:32] the political leaders from HHS and the Food and Drug Administration or ONC, which is the other regulatory

[00:10:38] body at the federal level, as well as at state levels.

[00:10:41] So I don't have an answer for you about what that exactly looks like, other than that CHAI is a ready

[00:10:45] and willing partner to work with them.

[00:10:48] The idea of assurance labs has come quite far from the initial idea to actually figuring out

[00:10:56] how you're going to do that. But just before we dive into those details, the Coalition for Health AI

[00:11:03] recently unveiled the first applied model card, a so-called nutrition label, which would basically

[00:11:10] tell an individual or an organization what the model is based on. So can you talk a bit about that?

[00:11:20] So what's the relationship? Is there a relationship between these nutrition labels and assurance labs?

[00:11:28] Yeah, it's a great question. And it's a complex one. So if you think about a nutrition label or a

[00:11:33] model card, it has the basic ingredients that go into what makes the tool or the can of soup,

[00:11:39] if you're going shopping at a grocery store. Similarly, for AI tools, model cards are principally

[00:11:45] things that are made by the developer, right? By the creator of it, the person putting all the

[00:11:52] ingredients together. So model cards describe the training data, the training methodology,

[00:11:57] the indications, the limitations, any particular kinds of warnings, things like that would go on a

[00:12:04] model card. It also importantly includes a section or a category that is an opportunity for the model

[00:12:12] developer to share important evaluation reports or metrics. But a model card fundamentally is developed

[00:12:19] by or made by the developer, the creator of the AI model. CHAI's applied model card adheres to the Office

[00:12:28] of the National Coordinator's HTI-1 rule, which was published in December of last year. It created

[00:12:33] a framework for 31 specific categories that would be used by a model developer using or creating an AI

[00:12:42] tool that would be deployed in an electronic health record. CHAI's model card use case is broader

[00:12:47] than that. We believe that the importance of model cards extends beyond certified EHRs and certainly

[00:12:54] across a variety of use cases. Again, we're focusing on the private sector use case of procurement,

[00:12:59] where you have model customers that value transparency around how these models are created,

[00:13:06] how they perform on specific populations, what their indications are, what their limitations are.

[00:13:11] That kind of transparency can only come from a model developer. And so they're asking the vendor

[00:13:17] community that work with them in these procurement processes to share these kinds of model cards.

[00:13:23] Now, separately, the assurance labs are an independent entity, right? With an assurance lab,

[00:13:29] at the end of the day, a model goes through the lab and an evaluation report is created. That quality

[00:13:36] assurance lab's evaluation report can be, give or take, between 10 and 20 pages, right? It's a fairly robust,

[00:13:43] fairly detailed thing. Now, you remember the model card had a section for certain key evaluation

[00:13:50] metrics. So you might imagine that the developer could take the evaluation report from the assurance

[00:13:56] lab and call out specific metrics that they want to then put in that model card. So that's the balance

[00:14:01] between the model card and the evaluation report from a quality assurance lab.

[00:14:05] Tons of questions there. So if it's all on the vendor side to decide what they're going to put

[00:14:15] on the model card, how do you monitor that the vendors don't just use this for their own promotion,

[00:14:24] saying, hey, we've got this model card? I looked at one of the examples and it seemed quite general to

[00:14:28] me, saying it's like 12,000 patients, but I don't know where these patients are coming from. And so

[00:14:35] that's one. And what happens if there's a discrepancy between the model card and the

[00:14:42] assessment?

[00:14:44] So, great question. Our strategy within CHAI is that we're going to be publishing an open source,

[00:14:50] freely available, technically specific version of the model card. We announced it

[00:14:53] at HLTH and we showed an image of what the model card looks like as a wireframe,

[00:14:59] but we haven't yet published it. What we will be publishing within the next week is actually the

[00:15:03] technical specifications and the technical data schema or standards that we want to be used within

[00:15:11] each one of those categories. So the hope is that that level of specificity will get to the point,

[00:15:19] the concern I think that you and probably many others have, which is if we don't clearly specify

[00:15:25] a level of granular detail, anything could be put in that category or in that space. And so we will be

[00:15:31] sharing within the next week, the specifics that have been coming out of the working group and there

[00:15:36] will be, and it'll use a variety of different standard references associated with it. Now,

[00:15:42] the other part you bring up is how do we ensure that there's a level of validity and trust to what

[00:15:48] vendors are putting in these? So what we intend to do in CHAI is, if a vendor wants to publish a model

[00:15:57] card and share it with their customer base, they certainly can do that. It's an open source tool and

[00:16:03] we'll make that available. If they want to have that model card validated and used with the CHAI logo,

[00:16:11] there will be an additional step, a lightweight validation where our team in CHAI will look to partner

[00:16:17] with the vendor to ensure that it meets the detail and the rigor of what they're describing in the model

[00:16:25] card. So there will be a level of trust that we're trying to, in partnering with the vendor,

[00:16:31] bring to the customer that they can trust what this vendor is saying because it has been validated

[00:16:36] to a level of where the vendor is allowing us to essentially work with them to make sure that

[00:16:43] what they're saying is accurate on the model card. And so that's how there will be an open source version

[00:16:49] and then there will be a CHAI branded version. The CHAI branded version will have a level of service

[00:16:53] validation layered on top of it. Okay. Yeah. Yeah. That makes sense. Going back to the assurance labs,

[00:17:02] if I'm not mistaken, there's already a clear idea on when they're going to start to operate. So can you

[00:17:10] just share a little bit more details in terms of who's going to work there? How are they going to

[00:17:15] get paid and how all that complexity is currently seen? Yeah, no, great question. So this, this is

[00:17:24] something I'm truly, really excited about. Let me start by saying this is a new space, right? The idea

[00:17:30] of creating the kinds of methodology and rigor around evaluation science for particular AI tools,

[00:17:39] particularly generative AI tools is something we are developing as the plane is being flown.

[00:17:44] And so there's a level of humility that we need to have and a level of collaboration that we certainly

[00:17:49] need to have in developing that science and that rigor around evaluation, particularly for generative

[00:17:53] AI models in the quality assurance lab pathway. So with that in mind, the network of labs that we've

[00:18:01] been working with for the past year, there are about 32 or so different health systems that are partnering

[00:18:05] with a variety of different technology companies to stand up these labs. As I think I've probably

[00:18:10] shared publicly, one of our major goals for CHAI by the end of the year is to have two labs certified. And so we

[00:18:18] launched and shared the certification framework at HLTH earlier in October. The team is right now actually

[00:18:24] actively beginning the process of working with those labs to develop the specific certification checklist to

[00:18:32] then go through the rigor of certifying them so that by the end of the year, so December 31st, we plan to have

[00:18:38] two labs that will be certified. And then starting in 2025, they will be able to work with the CHAI

[00:18:46] community to do that kind of evaluation of models that are brought to them. In terms of the business model

[00:18:52] and how they get paid and that sort of thing, the vision and mission that we're on here at CHAI is to

[00:18:57] create a competitive, vibrant marketplace of quality assurance labs. It's not meant to be a hegemony of one or

[00:19:05] two labs. I want a lot of labs, a lot of labs will be competing for customers, right? And so hopefully

[00:19:11] that will drive down the price point to whatever the market establishes that price point to be. It is up

[00:19:16] to the lab as a separate business entity to be partnering with their customers, which traditionally

[00:19:23] I would imagine would be the model developers that are bringing the model to the lab to have it be

[00:19:28] evaluated, to have some business relationship with a service level agreement and a contract that is

[00:19:34] compensated for the services by the lab. I don't want to speak for labs and give you a quote about

[00:19:39] what that price point is. I honestly don't know. It's yet to be established, but the vendor,

[00:19:45] I imagine the vendor would be paying the lab to do the evaluation. Now, an important point about that

[00:19:53] is that all of these labs, as part of the certification process, will be required to be

[00:19:59] conflict of interest free from any commercial entanglement with a vendor. Meaning a vendor with

[00:20:06] a model could not bring that model to a lab where they have a separate commercial partnership with

[00:20:12] that lab to do model training. Because if you remember what I just said, these labs can also help

[00:20:16] with model training. And so it's really important that these labs be independent, they be trustworthy.

[00:20:22] And part of that trust and independence is through having no conflict of interest with the vendors

[00:20:28] themselves in the evaluation process.

[00:20:30] Based on the discussions that you have on the ground, what are your expectations in terms of how all of this

[00:20:39] is going to unveil? We've had some unfortunately bad examples around the use of AI algorithms in insurance,

[00:20:47] in even sepsis algorithms and more. So do you think it's going to be the buyers,

[00:20:55] so the healthcare systems and hospitals that are going to demand from vendors to go through this

[00:21:01] before the procurement as part of the requirements? Or will vendors want to differentiate

[00:21:08] themselves by going through this process? Are you seeing any fear that the assurance labs

[00:21:17] would give a negative opinion on specific solutions?

[00:21:23] Yeah, so a lot to unpack in that one. Let me come back to something actually you said at the beginning.

[00:21:27] I actually have a little bit of a different perspective. So I see last year, 2023, as a year

[00:21:33] when everyone got very excited about some of the real tangible use cases for things like generative AI.

[00:21:39] And we all, probably many of us, those listening have been, had the chance to experiment with any of the

[00:21:46] generative AI frontier models out there. 2024 though, as far as what I've been hearing from many of the

[00:21:53] customers is a year of show me the proof in the pudding. Show me the value, the return on investment

[00:21:59] of this tool that you're looking for me to purchase and buy that on a health system, I might be getting

[00:22:04] pressure from my board to have some strategy about AI and make some investments and lean into that space.

[00:22:09] It's very clear from my perspective that many of the customers of AI models are demanding a level of

[00:22:17] proof that they see that these things actually perform as their claims seem to imply. And how this all

[00:22:24] plays out, I think we're, let me start by saying sunlight and transparency is the best solution when you

[00:22:31] don't have trust in something, right? Shining a light on something so that all can see truly how

[00:22:38] something is working. And in the case of AI, we have a profound trust problem, right? Poll after poll,

[00:22:45] survey after survey shows that the majority of Americans do not trust AI. And when you add health

[00:22:49] AI on top of that, the trust becomes even worse. And so what we are trying to do in CHAI is to create a space

[00:22:55] where these evaluation reports from these quality assurance labs can be shared publicly.

[00:23:05] One of the requirements to be a quality assurance lab certified by CHAI is that a version of the

[00:23:05] evaluation report will need to be shared on a public registry that we will be launching later this year

[00:23:11] that will allow, you know, patients, people like you and me, health system leaders to go to and see

[00:23:17] how these models actually perform against various populations of people. And that will be, in some

[00:23:24] sense, the beginning. That will be the start of the proof in the pudding that individuals in society,

[00:23:30] customers of models will begin to see, are these things actually performing as we think or hope they

[00:23:36] would? You bring up the example of sepsis, right? That was an example, a traditional AI model or risk

[00:23:44] classifier that looked to predict individuals that are high risk of developing sepsis. It performed really

[00:23:48] well on particular populations of people, but really poorly, worse than a coin flip on other

[00:23:53] populations of people. And we didn't know that until that evaluation and that rigor was brought to it.

[00:23:57] And people's lives could potentially have been lost because of it. We want to avoid that. And so part of

[00:24:03] avoiding that is going to be being honest with ourselves about what models are good and what models

[00:24:09] aren't good. And yeah, candidly, I think there's some anxiety around that. We are committed

[00:24:16] to being honest brokers and being transparent about the process by which we evaluate, right? How do we

[00:24:22] determine the results that we determine and sharing those results transparently with everyone. We're not

[00:24:29] going to get it right perfectly. I mentioned at the beginning, like this is a process where we are

[00:24:33] developing the science as the plane is flying, but we have to be doing that because AI is moving,

[00:24:41] as I'm sure everyone appreciates, a mile a minute and we need to keep up with it. And so this is our

[00:24:46] attempt to bring that kind of rigor and those kinds of safeguards for all of us through transparency,

[00:24:52] through being an honest broker and through sharing the results with everyone.

[00:24:56] Definitely welcome direction from the patient and the buyer's perspective and the patient safety

[00:25:03] perspective. One thing that I do wonder though, is Europe is often criticized for being too regulated

[00:25:09] and we are happy that AI is very regulated, especially because of the bad examples that we had in the past.

[00:25:16] But how do you see the impact of the assurance labs and just regulation in general on who is actually

[00:25:26] going to be able to afford to develop AI? This is already a question with large language models

[00:25:33] which are very limited in terms of who can actually afford to develop them.

[00:25:37] But then if you add all the costs related to the assurance process and everything else to the mix,

[00:25:44] I'm just wondering what impact is that going to have for startups that are just starting out

[00:25:49] and are not that financially strong?

[00:25:52] It's a fair question. And I think it's worth monitoring this process as it develops.

[00:25:59] And again, as the industry and the marketplace in front of us mature,

[00:26:02] I think I'm a big believer in capitalism. And I think that the market will set the right price point

[00:26:09] for what it will tolerate and what's appropriate for the end user. At the end of the day,

[00:26:14] the cost of these quality assurance labs is likely to be passed on to the

[00:26:19] customer. And so the customer ultimately will need to decide for themselves the value of that

[00:26:25] evaluation report that they are going to be paying for from the vendor. Now what that cost is,

[00:26:31] I can almost guarantee you it's not going to be in the hundreds of thousands or the millions of

[00:26:36] dollars. It's going to be in the fives to tens of thousands of dollars. So it's not an insignificant

[00:26:42] cost, but it's not an insurmountable cost, even for a startup, particularly if that kind of evaluation

[00:26:49] report can be used for multiple different customers. Meaning if I'm a vendor and I go through

[00:26:55] a lab, that report card, particularly if it's a robust evaluation report can be used for multiple

[00:27:02] potential customers. It's not like I'd have to go to get an evaluation done for each specific customer.

[00:27:08] That's the point of the registry where these evaluation reports will be published over time.

[00:27:12] There will be a diminished cost associated with it and a scalable approach to the use of these

[00:27:17] reports across a broader and broader customer base. And so the hope is again, that this will be an

[00:27:23] opportunity for vendors to both validate, but then also to accelerate the kinds of research and

[00:27:31] development of models moving forward. Now, how all this plays out in regulatory policy

[00:27:36] and frameworks, again, I don't want to speak for the US government, certainly not anyone in

[00:27:41] the EU, but I can tell you that it is absolutely a scalable approach and we are talking

[00:27:46] with many EU nations about this concept, right? Because to create a quality assurance lab network,

[00:27:53] it needs to ideally be geographically co-located with the populations of people that the model intends

[00:28:00] to be used on. Meaning if I am a vendor and I want to sell my model, I'll use an example here in the US

[00:28:09] on health systems in Florida. I might go to one of the quality assurance labs that is based in Florida,

[00:28:16] that draws on patient data from real people who live in Florida to test and validate my model and to

[00:28:23] presumably train my model on as well. I wouldn't go to a quality assurance lab from California or Arizona,

[00:28:30] right? I'd go to the one in Florida. Similarly, many folks in the EU are excited about this concept

[00:28:35] because of course you have health systems in EU and they can stand up quality assurance labs too.

[00:28:40] And so we're working to partner with a number of health systems and a number of nonprofit NGOs about

[00:28:46] this concept of a CHAI network of quality assurance labs in the EU. That's actually going to be a major

[00:28:53] effort that we haven't shared publicly, but that we'll announce much more detail about in 2025.

[00:28:59] You've been listening to faces of digital health, a proud member of the health podcast network.

[00:29:05] If you enjoyed the show, do leave a rating or a review wherever you get your podcasts,

[00:29:10] subscribe to the show or follow us on LinkedIn. Additionally, check out our newsletter.

[00:29:17] You can find it at FODH.substack.com. That's FODH.substack.com. Stay tuned.