Solera Health has created a digital platform that matches health plan members to more than 20 curated digital health solutions. Two of the company's executives recently sat down with Healthcare Innovation to discuss the company's business model and growth, as well as its approach to AI governance across its digital health partner network.
Glenn Alphen, Solera's chief commercial officer, spoke about the company's founding and growth, and Mike Levin, the company's general counsel and chief information security officer, described the complexity of developing an AI governance framework across its ecosystem of digital health solution partners.
As an example of the type of partnership it develops with payers, Blue Cross and Blue Shield of Texas just announced its Unity Health Hub, powered by Solera Health, which will link to customer service and condition management resources to provide members with a coordinated experience.
Healthcare Innovation: Could you give us an overview of the company's business model and talk about some of the digital health partners it works with?
Alphen: The company was founded under the Affordable Care Act to serve Medicare Advantage members and to drive them to diabetes prevention programs locally and potentially digitally, and then turn their progress into claims. We started to build a front end, using interviewing techniques to understand the individuals using it. Over time, our commercial customers who also had Medicare Advantage said that there were some digital programs that might be great for their commercial population in weight management and diabetes prevention. Could we do that as well? So we began to build out a model that steered people to those types of programs and found ways to build those as claims.
We began collecting information on engagement and outcomes. Are you actually losing weight? Are you actually doing the program? We built what is essentially our own EMR, where we keep track of all that data coming in through these partners over time. Now we're at eight conditions.
We have a number of large health plan customers that use all of our condition categories, primarily in commercial markets, whether fully insured or ASO [Administrative Services Only] sell-through.
When I'm at a conference and people ask what we do, I say, 'See everything in this room? We're trying to make it easy for an individual to navigate, and to take the point solution fatigue away from the health plan or the employer by being the place where a network for digital and virtual care exists. So we're really creating a network approach.'
HCI: Does Solera vet the digital health solutions in terms of their efficacy or trustworthiness? Or do the health plans tell you that they work with a particular company and would like you to make it part of your network?
Alphen: We do have plans say, 'Hey, we love these guys. We want to make them part of the network.' But because of our vetting process, it doesn't always happen. We start with clinical vetting. Then there's business alignment. Do they serve a care path that we already serve, or do they serve a new care path? Because that's how we think about it: what's the right care path? There's a very clinical lens. The trick is that they have to agree to more of a pay-for-performance model, which is that matching up of engagement with clinical outcomes. Can they share the data so that we can build a value-based framework around billing? There are different billing methodologies. They're often per member/per month, and that's where a lot of that point solution fatigue comes from. The employers or the health plans are always having to adapt to somebody's new methodology. We clean that up for them, generally speaking.
HCI: Solera just announced a new behavioral health network with the companies Calm and Lyra Health. Could you talk about that?
Alphen: Yes. We've been very successful in the mental health space with some prior partners. We thought we needed a bit more of an expansive category, to really meet our customers where they are. Calm grabs a lot of attention because of their deep consumer background, but they've launched Calm Health for Employers, which also asks questions about other conditions that we serve. We'll be able to map some of that data into other offerings that we have. Behavioral health gives us some flexibility to do some more specific offerings. I don't really want to get into what those are yet, but there are other areas that we can go into in behavioral health.
HCI: Let me turn to Mike. I saw some information about Solera unveiling a framework for the responsible and transparent use of AI in digital health, to be used across your partner ecosystem. Could you first talk about where governance most often breaks down once AI goes operational, and what effective, enforceable AI oversight needs to look like now in this space?
Levin: You're asking: how does AI governance break down? Typically, it's the same issues that you see in security. First of all, it's inventory drift. A lot of organizations don't even realize that they're using AI, specifically in production, or that their network partners are utilizing it, so they don't really have a proper inventory of where the AI is actually embedded.
Monitoring atrophy happens quite a bit, particularly when you're building out a governance program. The monitoring cadence starts to drift, and the people who are monitoring are not monitoring consistently, and that becomes a huge risk. The third thing is incident response gaps. When we engage with our payers, that's the one they're continually asking us about. A pilot doesn't actually surface real incidents because it's very limited in scope. But when you're actually out in the real world, production is very different. When an AI makes a problematic recommendation, how do you respond to it? In a live clinical context, you need an escalation path. You need to be pulling in the proper subject matter expertise. There are very tight 24- to 72-hour reporting windows as well. More than anything else, the incident response is not really thought through. It has to mirror what you do from a cyber perspective. If there are pre-existing models in security, you can basically copy them over to the AI side.
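The escalation path and severity-tiered reporting windows Levin describes can be sketched as a small runbook helper. This is an illustrative sketch only: the severity tiers, role names, and the mapping of severities onto 24- to 72-hour windows are invented for the example, not Solera's actual process.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical severity -> reporting window (hours), echoing the
# 24- to 72-hour windows mentioned above. Real windows depend on the
# contract or regulation involved.
REPORTING_WINDOW_HOURS = {"critical": 24, "high": 48, "moderate": 72}

@dataclass
class AIIncident:
    model_name: str
    severity: str          # "critical" | "high" | "moderate"
    detected_at: datetime
    description: str

    def reporting_deadline(self) -> datetime:
        """When notice must go out, based on severity."""
        window = timedelta(hours=REPORTING_WINDOW_HOURS[self.severity])
        return self.detected_at + window

def escalation_path(incident: AIIncident) -> list:
    """Who gets pulled in -- mirroring a cyber incident-response runbook,
    with clinical subject matter expertise added for AI recommendations."""
    path = ["security_on_call"]
    if incident.severity in ("critical", "high"):
        path += ["clinical_sme", "legal_compliance"]
    if incident.severity == "critical":
        path.append("ciso")
    return path

incident = AIIncident("triage-recommender", "critical",
                      datetime(2025, 6, 1, 9, 0),
                      "problematic recommendation surfaced in production")
print(escalation_path(incident))
print(incident.reporting_deadline())
```

The point of encoding this, rather than leaving it in a policy document, is exactly the "monitoring atrophy" problem: a deadline computed at detection time cannot quietly drift the way a manual cadence can.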
HCI: Solera is sitting in kind of a unique place at the center of a digital health ecosystem of separate companies. Is this governance framework one you're building to help all these companies, as a baseline you expect them to reach in terms of things like transparency?
Levin: We have a fairly expansive AI governance program for our digital health providers. That's something that we keep being asked about by our payers. There's a lot of anxiety around this, because it's an unknown, and there is a lot of overlapping and sometimes contradictory guidance around it. We see dual risks. There's the clinical and there's the compliance, and they don't always align. Clinical risk is about patient safety and care quality. Does the AI surface accurate recommendations? Does it hallucinate? Does it perform equitably? If the data coming in has bias, the results that come out also have bias. Could it lead to harm if the output is wrong?
Then there's the compliance risk, which is the one you hear more about from the legal side, and that's regulatory exposure. Everybody's familiar with HIPAA, but there are all these new laws, particularly in California and Colorado. Washington state has one, too. The FTC is looking like it will start enforcing this as well. So there's a lot of fear from the legal risk perspective, too.
We have a cross-functional oversight committee for our AI governance, which includes engineering, legal, security, and compliance. Each of them has a unique perspective on the AI problem, if you will. Those perspectives have to work together, because the risks that I identify are not the same risks that the engineering team or the clinical team will see. That's how you have to manage it. The practical reality is that good clinical governance often satisfies the compliance requirements. So if you do one right, it will often lead to the other. You have to document everything. You need a big paper trail.
HCI: Are the digital health companies in your network appreciative that you guys are doing this? Is it like you're helping them, or is it like you guys are the taskmasters who are making them do this stuff?
Levin: Well, some of them are less happy than others. We have a range of digital health partners because we have a pretty big portfolio, and some of them are much more mature: they're able to provide model cards, to explain risk, to explain bias and other things. Others we have to walk through this, but by doing that, they actually build out better practices internally.
The part that surprised me more than anything else is that you might think AI is everywhere, but it's really not always being applied directly in the delivery of care. It's in the back end. It's basically being used for coding or as a copilot in the office, but it's not actually built into a lot of these healthcare apps, because there's so much anxiety around it from a compliance perspective.
HCI: I read that full implementation across the partner network was expected by the end of the third quarter of 2025. Did that stay on schedule?
Levin: There have been some changes in our network, so since that statement we've had some partners join and others leave. But we do have visibility into the AI status across all of our partners. We know the posture of all of them, and we're helping the ones that need the help.
HCI: And Solera is developing an AI maturity scoring capability with interactive dashboards for security and compliance, expected to roll out this year?
Levin: We're working on that as part of our larger Halo platform. It's one of the product features. Think of it as a scoring mechanism for the digital health providers, from a security perspective as well as from an AI risk perspective. Think of it almost like a credit score.
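A credit-score-style maturity rating of this kind is typically a weighted roll-up of dimension scores mapped onto a familiar numeric band. The sketch below illustrates the general idea only; the dimension names, weights, and 300-850 band are invented for the example, since Solera has not published the Halo scoring methodology.

```python
# Toy "credit score"-style maturity rating: weighted 0-100 dimension
# scores mapped onto a 300-850 band. Dimensions and weights are
# hypothetical, chosen to echo the risks discussed in this interview.
WEIGHTS = {
    "security_controls": 0.35,
    "ai_inventory": 0.20,       # do they know where AI is embedded?
    "monitoring_cadence": 0.20, # is monitoring consistent, or atrophying?
    "incident_readiness": 0.25, # escalation paths, reporting windows
}

def maturity_score(dimension_scores: dict) -> int:
    """Collapse weighted 0-100 dimension scores into a 300-850 rating."""
    weighted = sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS)  # 0..100
    return round(300 + (weighted / 100) * 550)

partner = {"security_controls": 80, "ai_inventory": 60,
           "monitoring_cadence": 70, "incident_readiness": 90}
print(maturity_score(partner))  # -> 721
```

The appeal of the credit-score framing is that a single comparable number lets a payer rank partners at a glance, while the underlying dimensions remain inspectable on a dashboard.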
HCI: That already sounds like a lot, but are there any other big tasks on your to-do list for 2026?
Levin: That is a lot. I'd say that AI probably consumes about 50% of my team's time from a governance and oversight perspective, because there's so much unknown about it right now, and it's so dynamic. But we're not alone. I've seen this within the payer ecosystem as well. A lot of the payers have invested fairly heavily in building AI governance teams, and no two of them are the same. They all respond differently. They're all interpreting the regulations differently. If you've seen one AI governance program, you've seen one AI governance program.
