
Can We Ride the GenAI Wave Without Getting Subsumed by It?

By DAVID SHAYWITZ

“There are decades where nothing happens; and there are weeks where decades happen,” said Lenin, probably never. It’s also a remarkably apt characterization of the past year in generative AI (genAI) — the past week especially — which has seen the AI landscape shift so dramatically that even skeptics are now updating their priors in a more bullish direction.

In September 2025, Anthropic, the AI company behind Claude, released what it described as its most capable model yet, and said it could stay on complex coding tasks for about 30 hours continuously. Reported examples included building a web app from scratch, with some runs described as producing roughly 11,000 lines of code. In January 2026, two Wall Street Journal reporters who said they had no programming background used Claude Code to build and publish a Journal project, and described the capability as “a breakout moment for Anthropic’s coding tool” and for “vibe coding” — the idea of creating software simply by describing it.

Around the same time, OpenClaw went viral as an open-source assistant that runs locally and works through everyday apps like WhatsApp, Telegram, and Slack to execute multi-step tasks. The deeper shift, though, is architectural: the ecosystem is converging on open standards for AI integration. One such standard, called MCP — the “USB-C of AI” — is now being downloaded nearly 100 million times a month, suggesting that AI integration has moved from exploratory to operational.
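To make the “open standard” point concrete, here is a minimal sketch of an MCP tool server using the official Python SDK’s FastMCP helper; the server name and the refill-eligibility tool are invented for illustration and are not drawn from any company mentioned in this piece.

from mcp.server.fastmcp import FastMCP

# Illustrative only: a tiny MCP server exposing one tool that any MCP-compatible
# AI client (a chat assistant, an agent framework, etc.) could discover and call.
mcp = FastMCP("refill-demo")

@mcp.tool()
def check_refill_eligibility(medication: str, last_fill_days_ago: int) -> str:
    """Toy rule: a routine medication counts as 'eligible' for renewal after 30 days."""
    return "eligible" if last_fill_days_ago >= 30 else "not yet due"

if __name__ == "__main__":
    mcp.run()  # serves the tool over MCP's standard transport (stdio by default)

The point of a standard like this is that the tool is defined once and any compliant AI client can call it, rather than each assistant needing a bespoke integration.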

Markets are watching the evolution of AI agents into potentially useful economic actors and reacting accordingly. When Anthropic announced plans to move into high-revenue verticals — including financial services, law, and life sciences — the Journal headline read: “Threat of New AI Tools Wipes $300B Off Software and Data Stocks.”

Economist Tyler Cowen observed that this moment will “go down as some sort of turning point.” Derek Thompson, long concerned about an AI bubble, said his worries “declined considerably” in recent weeks. Heeding Wharton’s Ethan Mollick — “remember, today’s AI is the worst AI you’ll ever use” — investors and entrepreneurs are busily searching for opportunities to ride this wave.

Some founders are taking their ambition to healthcare and life science, where they see a slew of problems for which (they anticipate) genAI may be the solution, or at least part of it. The approach one AI-driven startup is taking toward primary care offers a glimpse into what such a future might hold (or perhaps what fresh hell awaits us).

Two Visions of Primary Care

There is a genuine crisis in primary care. Absurdly overburdened and comically underpaid, primary care physicians have fled the profession in droves — some to concierge practices where (they say) they can provide the quality of care that originally attracted them to medicine, many out of clinical practice entirely. Recruiting new trainees grows harder every year.

What’s being lost is captured with extraordinary power by Dr. Lisa Rosenbaum in her NEJM podcast series on the subject.

In a companion essay, Rosenbaum documents the measurable consequences when patients lose a primary care physician: a rise in mortality, emergency room visits, and hospitalizations, all in proportion to the relationship’s duration — suggesting, as she writes, “that the relationship itself conferred health benefits.” Worse, more than three quarters of patients never form a new PCP relationship after losing one.

But Rosenbaum’s deepest concern isn’t statistical. It’s about what she calls the “good doctor” phenotype — not a skill set but a way of being. She describes a physician whose hallmark was assuming responsibility for the totality of his patients’ problems. When Rosenbaum was caring for one of his hospitalized patients, the patient insisted she update the doctor, explaining simply: “He’ll want to know.” For Rosenbaum, having your patients intuit that you would want to know — far more than any quality metric — constitutes the essence of being a doctor. A “culture without a vision of the good doctor,” she warns, “is a profession without a soul.”

Her darkest fear: the system could morph into “some artificial-intelligence-enhanced triage system devoid of a relational core.”

Which is almost exactly what physician-entrepreneur Muthu Alagappan, co-founder of Counsel Health, aspires to deliver — for the sake of patients. His starting point: 100 million Americans don’t have a relationship with a doctor, good or otherwise. The relational ideal Rosenbaum celebrates is already inaccessible to vast swaths of the population.

At Counsel Health — recently backed by a $25M Series A from GV and Andreessen Horowitz — AI handles the upfront information gathering and preliminary medical reasoning, functioning, as Alagappan puts it, like “an extremely good medical resident that’s reasoning alongside them, serving up the plan and allowing them to approve or deny in one click.” Doctors see 15 to 20-plus patients per hour. The vision: primary care visits costing less than a dollar.

As Alagappan sees it, “It’s hard to fathom a cognitive aspect of the practice of medicine in primary care that a technology system is just not better suited to do than the human brain.”

He acknowledges that humans may still be necessary for pesky, hands-on tasks like wrapping an ankle or administering a vaccine, but beyond these, he seems to believe, the future belongs to the machines. He anticipates “regulation will ease and improve so that the AI can do more and more.”

In Utah, the approach pursued by a startup called Doctronic suggests such regulatory change may be closer than we think. The company’s AI prescribes renewals without a physician in the loop for 190 routine medications, at $4 per script — with a malpractice insurance policy covering the AI system itself, and escalation and oversight safeguards. Expansion is already contemplated to states like Texas, Arizona, and Missouri, with a national rollout under consideration as well.

Who’s in charge?

As AI capabilities compound rapidly, there is enormous temptation to apply them wherever they fit most naturally. Without intentionality, this approach risks quietly redefining disciplines by the tasks the technology performs well. Because AI can efficiently process symptoms, match protocols, and renew prescriptions, we may begin to define medicine as those particular tasks — in much the same way that because we can measure steps, sleep scores, and VO2 max, we’re tempted to define health as the optimization of dashboard metrics. As Kate Crawford astutely warned, we must not let the “affordances of the tools become the horizon of truth.”

This tension extends to biopharma R&D as well. Here, efforts to leverage AI have succeeded in limited domains with dense data and established benchmarks, but have struggled where the critical data are scarce, highly conditional, or both — as Andreas Bender, in particular, has eloquently discussed.

We’re always tempted to look where the light is. But difficult as it can be to maintain focus on what actually matters, rather than what technology most readily delivers, it can be done.

A Company Built on What Matters

For some time now, I’ve argued — in this space, at KindWellHealth, and elsewhere — that genuinely improving human flourishing requires attention to three broad dimensions: physiology (movement, nutrition, recovery, preventive screening), agency (your belief in your ability to shape a better future), and connection (the value of meaningful relationships and purposeful pursuits).

The news that caught my attention recently was that someone independently built a business around exactly this framework. Unbound, a UK-based preventive health company operating from a single just-opened location in London’s Shoreditch, describes itself as “built on the belief that physical, mental and social health are inseparable.”

Several design choices distinguish Unbound from the optimization-culture norm. They measure connectedness alongside biomarkers — literally assessing social connection as a clinical input. Their medical director, Dr. Elliott Roy-Highley, frames health as “not merely the result of internal cellular mechanics, but an emergent property of social integration, purpose, and communal regulation.” A coffee shop replaces the waiting room; community circles, run clubs, and art exhibitions aren’t wellness window-dressing but structural commitments — the social environment is treated as a meaningful part of the intervention.

Perhaps most distinctive is a post-assessment “future self” exercise — an evidence-backed positive psychology intervention that asks participants to envision their optimal future self and identify personal obstacles to achieving that vision. By strengthening the psychological connection between present and future selves, the exercise enhances goal clarity, self-efficacy, and motivation for behavior change. This process works through narrative mechanisms — imagining, evaluating, and orienting toward personally meaningful goals — that translate assessment insights into actionable health strategies.

Crucially, Unbound doesn’t reject measurement and technology. They offer a companion app for extending connection and tracking recommendations beyond the clinic; their assessments integrate blood work and physical performance testing alongside the emotional and social components. As Unbound puts it: “Yes, we use tools like medical testing — but not as a way to measure your worth or push you to chase perfection. We use them to guide and support a much bigger goal: helping you live the life you want, with clarity and confidence.” The intent: leverage science and technology with intentionality, pointing them where they should be aimed, rather than where they’re most inclined to go.

Of course, there’s a large gap between a compelling concept and improved health. It’s possible Unbound will prove to be savvy wellness marketing aimed at motivated, affluent urbanites. The people who walk into a trendy Shoreditch health studio are already relatively motivated and likely already drawn to purposeful engagement. The evidence that the program actually improves health, while theoretically grounded, remains to be seen.

But the interest Unbound has attracted reveals a substantial appetite for something beyond relentless metric optimization — and there’s little in their approach that seems especially proprietary. The same foundational principles — deepen connection, develop agency, attend (with compassion) to physiology — could all be applied at scale by incumbents and digital platforms. Peloton, for instance, has the community infrastructure and the user engagement; what it lacks is a framework that extends beyond leaderboards and performance dashboards toward something that can help users not just perform but flourish.

Bottom Line

GenAI is advancing at a pace that would have seemed fantastical even a year ago; the developments of the past few weeks have forced even seasoned skeptics to recalibrate. There is enormous incentive — and good reason — to ride this technology wave toward compelling opportunities like the crisis in primary care. But as these capabilities compound, the central challenge will be ensuring the technology serves what patients and people truly need, rather than allowing those needs to be defined by what the technology most readily delivers. The risk of essentially reducing health to what can be optimized by technology is real, as so many tech-powered companies in healthcare, biotech, and fitness demonstrate. But it is also possible to leverage technology in service of a more complete and less reductive vision — attending to physiology, agency, and genuine human connection — as Unbound suggests, and hopefully, many others pursue.

Dr. David Shaywitz, a physician-scientist, is a lecturer at Harvard Medical School, an adjunct fellow at the American Enterprise Institute, and founder of KindWellHealth, an initiative focused on advancing health through the science of agency. This piece was previously published in the Timmerman Report.
