OpEd
I want to tell you about a medicine cabinet.
In December 2005, at the White House Conference on Aging, I stood in a 10,000-square-foot technology exhibition and watched a demonstration of what was then called a “smart” medicine cabinet. It recorded medication comings and goings, displayed dose frequency, and alerted caregivers when doses were missed. It was also expensive, clunky and dependent upon a connectivity infrastructure that most American homes did not yet have. I wrote about it skeptically. Technology’s time had “not quite come,” I noted. The psychology of acceptance had not reached a critical juncture.
It is now 20 years later. The smart medicine cabinet exists in refined and functional forms. Medication management apps are ubiquitous. But the problem of medication non-adherence among older Americans, which costs an estimated $300 billion annually in preventable hospitalizations, remains substantially unsolved. The technology arrived. The adoption, the equity of access, and the integration with care systems did not follow.
Now we are living through a second version of the same conversation. The medicine cabinet has been replaced by AI. The promises are larger, the demonstrations more impressive, and the gap between what is being shown in conference halls and what is being experienced by older Americans who most need it is, if anything, wider than it was in 2005.
What Intel’s Chairman Promised
Craig Barrett, then Chairman of Intel, told a 2005 conference audience that the next 20–30 years of technology would resemble the leap from mainframes to personal computers. He identified four converging trends: device convergence, wireless broadband, sensing technology, and personalized software. Score his predictions against the 2025 reality and, on the technical side, the results are impressively accurate. The convergent wristwatch he described is the Apple Watch. Wireless broadband, while still inequitably distributed, is far more accessible. Sensing technology has advanced from clunky prototype to ambient reality. Personalized software is the foundational premise of every major consumer health platform.
‘He was right about the technology. He was silent about the system. And in healthcare, the system is everything.’
What Barrett did not address was adoption, the equity gap, and the difference between technology that exists and technology that reaches the people who need it most. He was right about the technology. He was silent about the system. And in healthcare, the system is everything.
Twenty Years of Technology Arriving to an Unprepared System
The technology Barrett predicted largely arrived on schedule. The care system capable of deploying it equitably did not.
Electronic health records are now nearly universal among hospitals, yet they have created new administrative burdens, contributed to clinician burnout, and failed to solve interoperability. A patient’s records still do not reliably follow them from hospital to nursing home to home care to primary care and back.
Telehealth expanded meaningfully during COVID, demonstrating real potential for extending access while also showing that telehealth requires reliable broadband, a device, digital literacy, and often a helper. None of those requirements are uniformly available.
Remote monitoring technologies are real and commercially available. Yet most care programs do not use them systematically, whether because costs are not covered by Medicaid or Medicare, because the workforce to respond to alerts does not exist, or because the integration between sensor data and care team workflow has not been built.
This last point deserves emphasis. A sensor that detects a fall is useful only if someone responds. A medication dispenser that flags a missed dose is useful only if a care system can follow up. Technology in healthcare is not self-executing. It requires human infrastructure of sufficient capacity to act on what the technology reveals. In 2025, that human infrastructure is in crisis—understaffed, underpaid, and turning over at rates that make continuity of care impossible in exactly the settings where technology is most needed.
The New Promise: AI and the Same Old Questions
AI demonstrations are genuinely impressive. Systems can analyze gait patterns and identify fall risk with accuracy exceeding clinical assessment. Natural language tools can scan records and flag medication interactions that busy clinicians miss. Predictive algorithms can identify residents at risk of acute deterioration before clinical signs appear.
The question is not whether this technology works in controlled settings. Much of it does. The question is the same one I should have asked more insistently in 2005: What are the conditions under which this technology will reach the older Americans who most need it, and who profits if it does not?
The more accurate language is augmented intelligence: technology that enhances human judgment by giving clinicians better information, flagging what might be missed, and extending analytical capacity. The human remains the decision-maker. Augmented intelligence is a tool. AI deployed without adequate oversight is a liability transfer.
When an AI system flags a resident’s fall risk as “high” in a building staffed too thinly for anyone to respond, the consequences are borne by the person served. When an AI scheduling algorithm determines that a resident requires fewer care minutes than the staffing plan allows, the consequences are borne by the person served. When a natural language system misclassifies a behavioral health presentation because its training data underrepresented minority older adults, the consequences are borne by the person served.
These are not hypothetical risks. They are predictable results of deploying powerful technology in a system where accountability has already been systematically obscured by related-party transactions, LLC structures, and management fee arrangements.
A Framework for Evaluating Tech Promises
Five questions cut through the noise.
Does this technology augment human judgment or replace it? Augmentation tools are valuable in proportion to the quality of the human system they support. Replacement tools transfer accountability to entities that cannot be held accountable.
‘A sensor that detects a fall is useful only if someone responds.’
Who bears the risk when technology fails? In healthcare, the resident bears the consequence. This is a policy question requiring explicit regulatory attention that current frameworks do not provide.
What is the reimbursement pathway? Technology not reimbursed by Medicare or Medicaid will not reach the populations who most need it. The history of healthcare technology is full of interventions that worked during demonstrations and disappeared when grant funding ran out.
What human infrastructure does this require? A fall-detection sensor requires someone to respond. An AI care-planning tool requires a clinician capable of reviewing its recommendations. Technology that demands more from an already depleted workforce is not a solution. It is an additional burden.
Who holds vendors responsible? AI vendors stand to gain from the contracts they secure even when the technology fails to perform as promised. Their profit motive drives them to showcase impressive demonstrations and favorable evaluations, and a good demo does nothing to change that incentive.
Which Is Better? The Answer the Question Deserves
The AI of 2025 is technically more powerful, by orders of magnitude, than anything demonstrated in 2005. The capability gap is not in question. What is in question is whether greater capability, without corresponding structural adequacy in the system deploying it, produces better care or simply more sophisticated demonstrations of better care.
For older Americans served by well-resourced, adequately staffed systems with the human infrastructure to deploy AI responsibly, AI may be genuinely transformative. For the majority who depend upon Medicaid-funded long-term care, who live in rural areas with inadequate broadband, who are served by facilities operating below minimum staffing standards, or who live in buildings whose financial architecture extracts resources rather than deploys them, the AI of 2025 will remain, absent deliberate policy intervention, a demonstration in a conference exhibit hall.
In 2005, I wrote that many delegates did not understand how the technology exhibition would directly change their lives. What I did not predict was that 20 years of accurate technical predictions and inadequate structural action would leave us with better tools and a worse system. The workforce crisis is deeper. The equity gap is wider. Financial extraction from care facilities is more sophisticated and more prevalent.
The smart medicine cabinet was not the problem in 2005. The problem was a system not structured to deploy it equitably, fund it adequately, or hold anyone accountable when it failed to reach the people who needed it most. The AI of 2025 is not the problem, either.
The technology has changed. The system has not changed enough.
James Lomastro, PhD, has more than 40 years’ experience as a senior administrator in healthcare, human services, behavioral health, and home- and community-based services. He was a surveyor with the Commission on Accreditation of Rehabilitation Facilities, working throughout the United States and Canada. Lomastro is a member of the Coordinating Committee of Dignity Alliance Massachusetts.
Photo credit: Shutterstock/MMD Creative