Abstract

Artificial intelligence (AI) offers significant potential to enhance the lives of older adults through innovations like digital assistants and self-driving cars. But AI also poses risks, including inaccurate decisions and biased results. This article explores AI’s impacts on employment, healthcare, and fraud. These three areas are important for older adults and offer examples and insights concerning the potential benefits and risks of AI technologies. The article also discusses efforts by policymakers, the private sector, and nonprofits to promote positive AI uses while mitigating possible negative effects.

Key Words

artificial intelligence, AI, hiring, fraud, healthcare, legislation, principles

The use of artificial intelligence (AI) has increased rapidly in the past few years, and AI has the potential to create meaningful innovations that can improve the lives of older adults. AI digital assistants, for example, can provide reminders for medications and appointments, while AI-powered self-driving cars, once fully developed, could expand access and reduce isolation for older adults who no longer drive. Telehealth and remote monitoring, meanwhile, can help older adults gain access to care from home when appropriate, which already has proven important in increasing healthcare access among those living in rural areas.

Nevertheless, AI comes with substantial risks that need to be addressed. AI can make inaccurate decisions and produce biased results. These can have significant implications, such as whether someone is offered a job, receives health services, or has access to credit.

This article highlights three areas where the use of AI has particular impact on older adults—employment, healthcare, and fraud—all of which provide insights on the potential benefits and risks of AI technologies. It also discusses a range of nascent efforts underway from policymakers, the private sector, and nonprofits to support and encourage positive AI uses while protecting against potential negative effects.

Artificial Intelligence and Key Risks

AI involves programming computers to complete tasks typically requiring human intelligence, such as comprehending language, making decisions, and solving problems. This broad term encompasses a number of technologies, including predictive and generative AI. Predictive AI analyzes existing data to make predictions and excels in tasks required for forecasting and optimizing decision-making. Generative AI generates new content based on learned patterns, including text, images, and video. Tools such as ChatGPT, Google Gemini, and Microsoft Copilot use generative AI. Advancements in generative AI have fueled much of the increased adoption of AI.

Predictive AI decision-making tools can produce biased results that reflect historic and ongoing societal prejudices, including against older adults. The potential for adverse outcomes resulting from the use of such tools is disproportionately high for groups that are discriminated against. For example, self-driving car technology has experienced problems identifying people of color, in part because training data may overrepresent white people, leading to cars less accurately detecting pedestrians with darker skin (Matousek, 2019; Price, 2023).

‘AI will never replace physicians—but physicians who use AI will replace those who don’t.’

Such risks are especially concerning as predictive AI tools already are in use across many health, financial, and government sectors to make or inform consequential decisions—including those that have a legal, material, or other significant impact on an individual’s life. Such tools can be used to recommend whether someone is offered a job; is approved for housing; gains access to insurance, credit, and other financial products at equitable costs; receives various health services; and is eligible for government benefits.

In the wrong hands, generative AI can be abused to create convincing “deepfakes”—videos, photos, or audio recordings that seem real but have been artificially manipulated. Deepfakes can depict someone appearing to say or do something they never said or did. This convincing and deceptive content has been used in attempts to suppress voting. In New Hampshire, thousands of voters received a call from a voice that claimed to be, and sounded like, President Biden, telling them not to vote in the 2024 presidential primary election and instead save their vote for the November election. The caller was not Biden but rather an AI-generated voice (Kornfield, 2024). The robocall was created by a Democratic political consultant who seemingly worked alone. He said he did so to warn about the dangers of AI (Shepardson, 2024).

Artificial Intelligence and Aging

While employment, healthcare, and fraud are not the only areas in which AI has a significant impact on older adults, they are extremely important in the context of aging and offer examples and insights concerning the potential benefits and risks of AI technologies. What follows is a look at AI’s implications specific to these areas.

Hiring Practices

Employers are increasingly using AI tools throughout the hiring process to increase efficiency, lower costs, improve the applicant experience, and improve the quality of new hires. But doing so can create and perpetuate biases that disadvantage older workers, people with disabilities, and groups that have historically faced discrimination.

As discussed above, this often happens if the data used to train the tools is inaccurate, unrepresentative, or otherwise biased. For instance, Amazon created a recruitment algorithm that unintentionally preferred male applicants over female applicants for certain positions and penalized resumes that included the word “women’s” (Dastin, 2018). The system was trained on resumes submitted to the company across a 10-year period, most of which were from men. A similar situation could arise if AI were trained using data from a time when people ages 60 and older were routinely excluded from the hiring process. This could lead to such older applicants being left out of the hiring pool, inadvertently perpetuating ageism in hiring practices.

Despite potential problems, AI tools also can be used to mitigate bias. Some provide text analysis to help employers debias their job descriptions. They can identify words and phrases such as “digital natives,” “recent graduates,” and “energetic person,” which may encourage some demographic groups to apply for a job while discouraging others (Terrell, 2022). In addition, AI tools trained on proper data can help reduce certain types of implicit HR bias, such as favoring candidates who remind hiring managers of themselves or who fit in with the existing culture.

Health and Long-term Care

AI is already employed in many aspects of health and long-term care, and its use continues to grow. When applied appropriately, AI tools in healthcare hold promise for improving the access, quality, efficiency, and safety of care. This includes administrative tasks such as scheduling and documentation, as well as assisting with decision-making. Some experts predict that AI will replace as much as 80% of what doctors currently do (Khosla, 2012). For instance, an AI tool could read and digest the latest medical research, then use that information, along with patients’ medical history and symptoms, to quickly recommend potential diagnoses and treatment options.

The president of the American Medical Association built on this sentiment when he said, “AI will never replace physicians—but physicians who use AI will replace those who don’t” (Schumaker et al., 2023). So, while AI tools can augment human capabilities by automating repetitive tasks, informing and enhancing decision-making, and boosting creativity, they cannot replace a clinician’s judgment, ethical responsibilities, and accountability.

‘AI has already begun to intensify the scale and effectiveness of scams and fraud, many of which target older adults.’

Moreover, it is well documented that AI tools used to make predictions or decisions also can cause harmful outcomes in healthcare. In June 2021, an electronic health record platform that used an AI tool to predict sepsis was found to miss cases and flood clinicians with false alarms (Ross, 2022). Following an academic review of the system, the company re-engineered it with several adjustments, including to the data variables used.

Also, for many years, Medicare Advantage Plans have been using AI tools in a variety of ways, including to predict patients’ need for services and clinical outcomes. Advocates claim, and courts have agreed in some cases, that health plans have at times denied access to necessary care due to rigid adherence to algorithm-based predictions (Ross & Herman, 2023). This has resulted in, among other consequences, beneficiaries being inappropriately discharged from skilled-nursing facilities. Physicians also have expressed concern about using AI-enabled tools, with little or no human review, to automatically deny prior authorizations. This can lead to patient harm and poor patient outcomes, as well as increased administrative burden (American Medical Association, 2025).

Fraud

AI has already begun to intensify the scale and effectiveness of scams and fraud, many of which target older adults. Deepfakes are increasingly being used to commit such fraud. The FBI warns that criminals can now quite easily and inexpensively create very realistic images, video, and audio (Federal Bureau of Investigation, 2024). For example, they can generate short audio clips to impersonate a close relative in crisis, asking for immediate financial assistance, or demanding a ransom. Similarly, the Federal Trade Commission has warned that voice cloning can make older adults more susceptible to grandparent scams (Puig, 2023).

But AI also can be an important tool for detecting fraud. By using technology that includes AI, the U.S. Department of the Treasury (2024) enabled the prevention and recovery of more than $4 billion in fraud and improper payments in fiscal year 2024. Credit card companies and banks also are leveraging AI to ensure that purchases, transfers, and transactions are legitimate (Cooley, 2024a).

Public Policy and AI Principles

Historically, most regulation of digital technologies, when it occurs at all, is implemented years or decades after the technologies in question are deployed and after their harms to individuals and society are already well documented. Understanding the enormity and consequential nature of AI, policymakers are working to reverse this trend. This is proving challenging, though, as AI is evolving much faster than the slower moving legislative and regulatory processes. To address this, policymakers, consumer groups, industry groups, and other leaders can work together to create dynamic and flexible standards that promote both innovation and consumer protection. This approach could lead to a regulatory system that remains relevant as AI evolves. The necessary groundwork is being laid, in part through legislation, as well as the identification of key principles.

At the international level, in May 2024, the European Council formally adopted the EU AI Act, the world’s first comprehensive AI regulation. The Act bans applications and systems that create an unacceptable risk. It subjects high-risk applications, such as resume-scanning tools that rank job applicants, to specific legal requirements. It leaves other applications not explicitly banned or listed as high-risk largely unregulated.

No comprehensive AI legislation has been passed at the federal level in the U.S., but there have been executive and congressional actions, plus considerable activity at the state level.

In the United States, no comprehensive AI legislation has been passed at the federal level, but there have been executive and congressional actions, in addition to considerable activity at the state level. President Biden issued an executive order in October 2023 aimed at directing and guiding AI implementation (Exec. Order No. 14110, 2023). It sought to enable beneficial uses of AI while mitigating harms through a balance between innovation and regulation.

On the congressional side, in December 2024, the Bipartisan House Task Force on Artificial Intelligence released its final report, which included seven high-level principles. Similar to Biden’s executive order, it stressed the need to promote AI innovation while protecting against AI risks and harms. The report also provided recommendations in several areas, including data privacy, civil rights, workforce, healthcare, and financial services.

In January 2025, President Donald Trump rescinded Biden’s AI executive order and signed his own, titled “Removing Barriers to American Leadership in Artificial Intelligence” (Exec. Order No. 14179, 2025). It calls for developing AI systems that are free from ideological bias or engineered social agendas and solidifying the U.S.’s position as the global leader in AI. It also revokes certain existing AI policies and directives that act as barriers to American AI innovation.

At the state level, Colorado is the only state to have enacted comprehensive AI legislation regulating the use and development of AI systems in consequential decision-making contexts, including protecting consumers from any known or reasonably foreseeable risks of algorithmic discrimination. California has passed a few significant AI bills, including one that requires AI system developers to publicly post certain information about training data used and another that requires large developers of generative AI systems to offer AI detection tools so users can determine whether an image, video, or audio content was altered or created by AI (Cooley, 2024b).

Almost every other state also introduced some type of AI legislation in 2024. These bills varied in approach and addressed a range of topics. The most commonly passed measures regulated AI use in campaign advertisements, prohibited deepfakes that falsely depict others without their consent, or created task forces or commissions to study AI (National Conference of State Legislatures, 2024).

Meanwhile, the private sector and nonprofit entities have stepped forward with principles or policies aimed at what is sometimes referred to as “responsible AI.” A sampling of entities involved in this work includes Google, the International Organization for Standardization, the Leadership Conference on Civil and Human Rights, Microsoft, and the Organization for Economic Co-operation and Development. A few core principles are often repeated, including fairness, explainability, auditability, and accuracy (Center for Democracy and Technology, 2019).

The Role of AARP

To address the impact of AI particularly on older adults, AARP has issued public policy relating to AI. The policy calls on leaders in the private and public sectors to ensure fairness, transparency, and accountability in algorithmic tools used for informing consequential decisions regarding health and financial well-being. AARP has advocated for these key principles at the federal level (AARP, 2024; Certner, 2024). AARP also continues to monitor this fast-moving area and is committed to ensuring that the rights of older adults are protected.

As a leader in introducing new technology to older adults, AARP’s Older Adults Technology Services (OATS) is taking a proactive approach to teaching older adults about the benefits and risks of AI. OATS has developed eight lectures to educate older adults on foundational aspects of AI and build awareness to protect against the use of deceptive AI content. The curriculum covers how AI tools can help with a variety of tasks, tips for identifying AI-generated content, best practices for using AI, and how to safeguard against fraud and scams. OATS also recently published a comprehensive brochure called “AI for Older Adults” (Senior Planet, n.d.), which provides facts about AI as a new technology, including its benefits, its risks, and how it can address the health-related, financial, and lifestyle needs of people ages 50 and older.

And, through its AgeTech Collaborative, AARP is working with startups leveraging AI to build products and services that improve the lives of older adults. These include products that help older Americans complete everyday healthcare tasks, from taking medications to scheduling doctor appointments, as well as those that help alleviate the problem of social isolation among older adults (Ogilbee, 2024).

Conclusion

Artificial intelligence holds immense promise for enhancing the quality of life for older adults, offering a range of advancements in virtually every sector, from transportation to healthcare. However, the deployment of AI also brings significant risks, including the potential for biased decision-making and the proliferation and amplification of scams targeting vulnerable populations. As AI continues to evolve and expand, it is crucial for policymakers, the private sector, and nonprofits to foster the beneficial applications of AI while ensuring robust safeguards against its potential harms. By striking this balance, the power of AI to support older adults can be harnessed while mitigating the risks associated with its use.

Dorothy Siemon, JD, is senior vice president, and Karen Lyons, MPA, is policy development and integration director for the Office of Policy Development and Integration at AARP in Washington, DC.

Photo credit: Shutterstock/Suri_Studio

References

AARP. (2024, February 8). Artificial intelligence and health care: Promises and pitfalls [Statement for the Record to the United States Senate Committee on Finance]. https://www.aarp.org/content/dam/aarp/politics/advocacy/2024/02/aarp-senate-finance-ai-health-statement-for-record-2-8-24.pdf

American Medical Association. (2025). Physicians concerned AI increases prior authorization denials [Press release]. https://www.ama-assn.org/press-center/press-releases/physicians-concerned-ai-increases-prior-authorization-denials

Bipartisan House Task Force on Artificial Intelligence. (2024). Bipartisan House Task Force Report on Artificial Intelligence. 118th Congress, U.S. House of Representatives. https://republicans-science.house.gov/_cache/files/a/a/aa2ee12f-8f0c-46a3-8ff8-8e4215d6a72b/6676530F7A30F243A24E254F6858233A.ai-task-force-report-final.pdf

Center for Democracy and Technology. (2019, November). AI & machine learning. https://cdt.org/ai-machine-learning/

Certner, D. (2024, August 1). Re: Request for information on uses, opportunities, and risks of Artificial Intelligence in the financial services sector [Letter to U.S. Department of the Treasury]. Government Affairs, AARP. https://www.aarp.org/content/dam/aarp/politics/advocacy/2024/08/treasury-ai-rfi-response.pdf

Cooley, P. (2024a, July 2). Payments industry to use AI to detect fraud, improve efficiency. Payments Dive. https://www.paymentsdive.com/news/payments-industry-to-use-ai-to-detect-fraud-improve-efficiency/720452/

Cooley. (2024b, October 16). California’s new AI laws focus on training data, content transparency. https://www.cooley.com/news/insight/2024/2024-10-16-californias-new-ai-laws-focus-on-training-data-content-transparency

Dastin, J. (2018, October 10). Insight—Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G/

Exec. Order No. 14110, 3 C.F.R. 657. (2023). https://www.govinfo.gov/app/details/CFR-2024-title3-vol1/CFR-2024-title3-vol1-eo14110/summary

Exec. Order No. 14179, 90 Fed. Reg. 20 (2025). https://www.govinfo.gov/app/details/FR-2025-01-31/2025-02172

Federal Bureau of Investigation. (2024). Criminals use generative artificial intelligence to facilitate financial fraud. U.S. Department of Justice. https://www.ic3.gov/PSA/2024/PSA241203

Khosla, V. (2012). Technology will replace 80% of what doctors do. Fortune. https://www.khoslaventures.com/fortune-technology-will-replace-80-of-what-doctors-do/

Kornfield, M. (2024, January 22). Fake Biden robocalls urge Democrats not to vote in New Hampshire primary. Washington Post. https://www.washingtonpost.com/politics/2024/01/22/biden-robocall-new-hampshire-primary/

Matousek, M. (2019). A new study found that self-driving vehicles may have a harder time detecting people with dark skin, and it could point to a bigger issue with how the technology is tested. Business Insider. https://www.businessinsider.com/self-driving-cars-worse-at-detecting-dark-skin-study-says-2019-3

National Conference of State Legislatures. (2024). Artificial intelligence 2024 legislation. https://www.ncsl.org/technology-and-communication/artificial-intelligence-2024-legislation

Ogilbee, M. (2024). How AI is being optimized in agetech: An in-depth guide. AgeTech Collaborative from AARP. https://agetechcollaborative.org/insights/how-ai-is-being-optimized-in-agetech-an-in-depth-guide/

Price, E. F. (2023). Driverless cars have more trouble detecting kids, dark-skinned pedestrians. PCMag. https://www.pcmag.com/news/driverless-cars-dark-skinned-pedestrians

Puig, A. (2023). Scammers use AI to enhance their family emergency schemes. Federal Trade Commission. https://consumer.ftc.gov/consumer-alerts/2023/03/scammers-use-ai-enhance-their-family-emergency-schemes  

Ross, C. (2022). Epic’s overhaul of a flawed algorithm shows why AI oversight is a life-or-death issue. STAT. https://www.statnews.com/2022/10/24/epic-overhaul-of-a-flawed-algorithm/

Ross, C., & Herman, B. (2023). Denied by AI: How Medicare Advantage plans use algorithms to cut off care for seniors in need. STAT. https://www.statnews.com/2023/03/13/medicare-advantage-plans-denial-artificial-intelligence/

Schumaker, E., Leonard, B., Paun, C., & Peng, E. (2023). AMA president: AI will not replace doctors. Politico. https://www.politico.com/newsletters/future-pulse/2023/07/10/ai-will-not-replace-us-ama-president-says-00105374

Senior Planet. (n.d.). AI for older adults. AARP. https://seniorplanet.org/wp-content/uploads/2024/11/AI-Guide-for-Older-Adults_Dig.pdf

Shepardson, D. (2024). US political consultant indicted over AI-generated Biden robocalls. Reuters. https://www.reuters.com/world/us/us-political-consultant-indicted-over-ai-generated-biden-robocalls-2024-05-23/

Terrell, K. (2022). Age bias in job postings hurts older workers, study finds. AARP. https://www.aarp.org/work/job-search/age-bias-job-ads/

U.S. Department of the Treasury. (2024, October 17). Treasury announces enhanced fraud detection processes, including machine learning AI, prevented and recovered over $4 billion in fiscal year 2024 [Press release]. https://home.treasury.gov/news/press-releases/jy2650
