Imagine this. 

It is the COVID-19 outbreak. The city is locked down, grocery stores close early, and stepping outside feels like a risk. Evelyn, a 76-year-old retired nanny, lives alone, her children and grandchildren hundreds of miles away, and she needs to try something she has never done before: ordering groceries online.

She taps the grocery app her granddaughter installed the last time she visited. The icons are unfamiliar, the text is too small for her declining eyesight, and the checkout process is complicated, requiring her to fill in her address, phone number, credit card, and coupon information. After struggling to find the items she needs, she realizes the store offers only email support, with no phone number or live help.

Evelyn tries to place the order but locks herself out of her account: the session times out after a few minutes of inactivity, and she mistypes her password three times while trying to log back in. She has to call a younger family member, who is very busy, for help. The suggestions she receives, to wait a few days for the lock to be lifted, write an email to customer service, or create a brand-new account, only confuse her further. Alone in her kitchen, Evelyn sighs and closes the app: “Perhaps this new gadget will never be for me.”

Evelyn’s experience is not unique. Many older people like her have faced similar moments of quiet frustration and isolation as essential services move online. This subtle but widespread exclusion has a name: digital ageism.

What Is Digital Ageism?

In the literature, the term digital ageism refers to age bias in technology such as AI—practices that exclude older adults, discriminate against them, or neglect their needs (Chu et al., 2022; Stypinska, 2023). Unlike racial and gender biases, which have been extensively discussed in AI ethics research, age-related bias, i.e., ageism (Butler, 1969), has received relatively little attention (Chu et al., 2022; Rosales & Fernández-Ardèvol, 2020). Nevertheless, digital ageism manifests in multiple tangible ways.

One phenomenon is the “physical–digital divide,” in which older people feel offended or isolated when those around them (e.g., their grandchildren) engage with information and communication technologies while they cannot (Ball et al., 2019). Younger generations are often referred to as “digital natives,” while older generations are called “digital immigrants”: those born before the age of ubiquitous technology who may feel socially excluded as its adoption accelerates (Prensky, 2001).

‘Product designers often focus on younger people as their main user group, leaving older adults with unmet needs.’

This divide is not solely a matter of access or skill—it is also shaped by implicit assumptions in technological design. Existing negative stereotypes often characterize older people as “technically inept” or having a “lack of interest” (Gallistl et al., 2020; Hargittai & Dobransky, 2017; Köttl et al., 2021).

Hence, in pursuit of profit, product designers often focus on younger people as their main user group, leaving older adults with unmet needs and turning them into “invisible users.” Such thoughtlessness ranges from poor interface accessibility (e.g., small font sizes) to the absence of non-digital alternatives for essential services such as banking, healthcare, and social welfare (Hou et al., 2022). As a result, older people may experience frustration, dependency on others, or even withdrawal from the activities that once empowered them (Lu et al., 2022).

Ageism also manifests in other aspects of technology, such as biased data used to train AI systems, spurious correlations that automated decision-making systems draw between outcomes and sensitive attributes (e.g., between employment decisions and age), and language use (Chu et al., 2023; Stypinska, 2023). In image datasets, older adults are often underrepresented and grouped into wide buckets such as “50+” or “60+,” while their younger counterparts are placed in narrower bins—effectively erasing the nuances within this demographic category (Chu et al., 2023). Zhao (2020) investigated popular Machine Learning (ML) models (such as Random Forests and XGBoost*) on the task of identifying citizens at high risk of long-term unemployment and found that they disproportionately flag older populations as high-risk.
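To make the bucketing problem concrete, here is a minimal sketch in Python (the bin labels and cut-offs are illustrative, not drawn from any particular dataset). A scheme with decade-wide bins for younger ages but a single open-ended bucket for older ones makes a 62-year-old and a 95-year-old indistinguishable to any model trained on the labels:

```python
# Illustrative annotation scheme: decade-wide bins for younger ages,
# but one open-ended "60+" bucket for everyone older.
def bin_age(age: int) -> str:
    """Map an exact age to its annotation bucket."""
    if age >= 60:
        return "60+"
    decade = (age // 10) * 10
    return f"{decade}-{decade + 9}"

# A 33-year gap vanishes for older adults, while younger ages
# keep their decade-level resolution.
print(bin_age(62), bin_age(95))  # 60+ 60+
print(bin_age(22), bin_age(38))  # 20-29 30-39
```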

Díaz et al. (2018) found that sentiment analysis models are more likely to associate terms such as “old” and “aging” with negative sentiment compared to terms with younger connotations. These examples highlight how AI systems, when built without mindful attention to age representation, can reinforce and even amplify societal biases against older adults.
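The kind of disparity Díaz et al. measured can be probed with paired template sentences that differ only in an age-connoting word. The sketch below uses an invented toy lexicon as a stand-in for a real sentiment model; any function mapping text to a numeric score could be substituted:

```python
# Toy lexicon standing in for a trained sentiment model; the negative
# weight on "old" mimics the bias Díaz et al. (2018) observed.
TOY_LEXICON = {"friend": 0.5, "story": 0.2, "young": 0.1, "old": -0.4}

def sentiment_score(text: str) -> float:
    """Sum word scores as a crude sentence-level sentiment."""
    return sum(TOY_LEXICON.get(w.strip("."), 0.0) for w in text.lower().split())

TEMPLATE = "My {} friend told me a story."
for term in ("young", "old"):
    print(term, sentiment_score(TEMPLATE.format(term)))
# A consistent gap between the paired scores flags age bias:
# here "young" scores 0.8 while "old" scores 0.3.
```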

Designing a Future That Includes Our Older Selves

Digital ageism is not a distant issue. It is the one prejudice that will inevitably affect everyone, regardless of gender or race (Stypinska, 2023). Aging is a universal trajectory, and the digital systems we build today are the ones we will rely upon in later life. If technology continues to be designed with only the young in mind, we are shaping a future that excludes our future selves. When older people cannot access tech-reliant essential services, such as healthcare portals, financial systems, or government benefits, their independence is compromised, their social isolation deepens, and inequality widens. These challenges are not the result of aging, but of systems that fail to accommodate the diversity of the human lifespan.

‘Ethical regulations should mandate transparency, accountability, and regular bias assessments in AI systems.’

To counter this, we must recommit to inclusivity in the digital age. This requires establishing ageism-focused technical mitigations and ethical regulations in the AI field to improve the well-being of older people in the digital world (Chu et al., 2023). One goal is to involve older adults not as passive users but as co-designers, taking their physical, cognitive, and emotional needs into account at every stage of development (Fischer et al., 2020). Non-digital alternatives should always remain available, so that no one is locked out of vital services for lack of access or digital literacy.

Additionally, as AI systems are generally data-driven, one positive practice is to improve the representation of older generations in the training and evaluation data of AI systems. For example, Georgopoulos et al. (2020) introduced a generative data augmentation technique that uses style-based face aging, capturing fine-grained aging patterns in images. Jung et al. (2018) presented a dataset called “100-celebrities,” balanced for demographic attributes including gender, race, and age; they also note that age, of all attributes, is the hardest for AI systems to predict, suggesting that age-specific considerations remain underdeveloped in model design and evaluation.
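A simpler, non-generative version of the same idea is to rebalance training data across age groups before fitting a model. The sketch below (column names and counts are hypothetical) oversamples underrepresented groups until every group matches the largest one:

```python
import pandas as pd

# Hypothetical training table: older adults are heavily underrepresented.
df = pd.DataFrame({
    "age_group": ["18-29"] * 500 + ["30-49"] * 400 + ["65+"] * 50,
    "label": [0, 1] * 250 + [0, 1] * 200 + [0, 1] * 25,
})

target = df["age_group"].value_counts().max()  # size of the largest group

# Resample each group (with replacement) up to the target size.
balanced = df.groupby("age_group", group_keys=False).apply(
    lambda g: g.sample(n=target, replace=True, random_state=0)
)
print(balanced["age_group"].value_counts())  # 500 rows per group
```

Oversampling is a blunt instrument compared to generative augmentation, since it duplicates rather than diversifies older records, but it makes the representation gap visible and is easy to audit.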

Another necessary direction is to implement age-related fairness auditing frameworks, which systematically assess model behavior across age groups and ensure that decision outcomes do not disproportionately disadvantage older individuals (Bellamy et al., 2019). Importantly, ethical regulations should mandate transparency, accountability, and regular bias assessments in AI systems, particularly those deployed in high-stakes sectors like healthcare and finance, to ensure that they do not perpetuate age-related discrimination. For instance, the World Health Organization (WHO) has reported on the global use of AI in healthcare for older people and presented legal, non-legal, and technical measures to minimize the risk of ageism and maximize AI’s benefits for this population (WHO, 2022).
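As a minimal sketch of what such an audit computes (the decision records and group labels below are invented for illustration), one can compare a model’s favorable-outcome rate across age groups and flag large gaps; toolkits such as AI Fairness 360 (Bellamy et al., 2019) implement this and many richer metrics with age as a protected attribute:

```python
from collections import defaultdict

# Invented (age group, model approved?) decision records for illustration.
decisions = [
    ("18-39", True), ("18-39", True), ("18-39", True), ("18-39", False),
    ("40-64", True), ("40-64", True), ("40-64", False),
    ("65+", True), ("65+", False), ("65+", False), ("65+", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
for group, favorable in decisions:
    counts[group][0] += favorable
    counts[group][1] += 1

rates = {g: fav / total for g, (fav, total) in counts.items()}
best = max(rates.values())
for group in sorted(rates):
    # Disparate-impact-style ratio; < 0.8 is a common warning threshold.
    print(f"{group}: rate={rates[group]:.2f}, ratio={rates[group] / best:.2f}")
```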

Together, these approaches point to a more age-inclusive AI ecosystem, which acknowledges the dignity and evolving needs of all individuals, not just the digital-native majority, and helps build a society that works for everyone, at every stage of life.

*Random Forest and XGBoost are ensemble ML algorithms typically used for structured data. They combine multiple Decision Trees, each of which classifies or regresses by recursively partitioning the data into smaller subsets based on feature values and making a prediction along each branch.
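For readers who want to see one in action, here is a minimal example using scikit-learn’s Random Forest implementation on synthetic data (the dataset is generated for illustration, not taken from any of the cited studies):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic structured data standing in for a real tabular dataset.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# 100 decision trees; each partitions the feature space on its own
# bootstrap sample, and the forest aggregates their votes.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict(X[:3]))
```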

Xinchen Yang is a doctoral student in the Computer Science department of the University of Maryland, College Park, working in the CLIP Lab under the supervision of Professor Marine Carpuat.

Photo credit: Shutterstock/Miljan Zivkovic

References

Ball, C., Francis, J., Huang, K.-T., Kadylak, T., Cotten, S. R., & Rikard, R. V. (2019). The physical–digital divide: Exploring the social gap between digital natives and physical natives. Journal of Applied Gerontology, 38(8), 1167–1184.

Bellamy, R. K. E., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., Lohia, P., Martino, J., Mehta, S., Mojsilovic, A., Nagar, S., Ramamurthy, K. N., Richards, J., Saha, D., Sattigeri, P., Singh, M., Varshney, K. R., & Zhang, Y. (2019). AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias. IBM Journal of Research and Development, 63(4/5), 4:1–4:15.

Butler, R. N. (1969). Age-ism: Another form of bigotry. The Gerontologist, 9(4), 243–246. https://doi.org/10.1093/geront/9.4_Part_1.243

Chu, C. H., Nyrup, R., Leslie, K., Shi, J., Bianchi, A., Lyn, A., McNicholl, M., Khan, S., Rahimi, S., & Grenier, A. (2022). Digital ageism: Challenges and opportunities in artificial intelligence for older adults. The Gerontologist, 62(7), 947–955. https://doi.org/10.1093/geront/gnab167

Chu, C. H., Donato-Woodger, S., Khan, S. S., Nyrup, R., Leslie, K., Lyn, A., Shi, T., Bianchi, A., Rahimi, S. A., & Grenier, A. (2023). Age-related bias and artificial intelligence: A scoping review. Humanities and Social Sciences Communications, 10(1), 1–17.

Díaz, M., Johnson, I., Lazar, A., Piper, A. M., & Gergle, D. (2018). Addressing age-related bias in sentiment analysis. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery.

Fischer, B., Peine, A., & Östlund, B. (2020). The importance of user involvement: A systematic review of involving older users in technology design. The Gerontologist, 60(7), e513–e523.

Gallistl, V., Rohner, R., Seifert, A., & Wanka, A. (2020). Configuring the older non-user: Between research, policy and practice of digital exclusion. Social Inclusion, 8(2), 233–243.

Georgopoulos, M., Oldfield, J., Nicolaou, M. A., Panagakis, Y., & Pantic, M. (2020). Enhancing facial data diversity with style-based face aging. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. IEEE.

Hargittai, E., & Dobransky, K. (2017). Old dogs, new clicks: Digital inequality in skills and uses among older adults. Canadian Journal of Communication, 42(2), 195–212.

Hou, G., Anicetus, U., & He, J. (2022). How to design font size for older adults: A systematic literature review with a mobile device. Frontiers in Psychology, 13, 931646.

Jung, S-G., An, J., Kwak, H., Salminen, J., & Jansen, B. (2018). Assessing the accuracy of four popular face recognition tools for inferring gender, age, and race. Proceedings of the International AAAI Conference on Web and Social Media. Association for the Advancement of Artificial Intelligence.

Köttl, H., Gallistl, V., Rohner, R., & Ayalon, L. (2021). “But at the age of 85? Forget it!”: Internalized ageism, a barrier to technology use. Journal of Aging Studies, 59, 100971.

Lu, X., Yao, Y., & Jin, Y. (2022). Digital exclusion and functional dependence in older people: Findings from five longitudinal cohort studies. eClinicalMedicine, 54, 101708. https://www.thelancet.com/journals/eclinm/article/PIIS2589-5370(22)00438-2/fulltext

Prensky, M. (2001). Digital natives, digital immigrants, Part 1. On The Horizon, 9, 3–6. http://dx.doi.org/10.1108/10748120110424816.

Rosales, A., & Fernández-Ardèvol, M. (2020). Ageism in the era of digital platforms. Convergence, 26(5–6), 1074–1087.

Stypinska, J. (2023). AI ageism: A critical roadmap for studying age discrimination and exclusion in digitalized societies. AI & Society, 38(2), 665–677. https://doi.org/10.1007/s00146-022-01553-5

World Health Organization. (2022). Ageism in artificial intelligence for health: WHO policy brief.

Zhao, L. F. (2020). Data-driven approach for predicting and explaining the risk of long-term unemployment. E3S Web of Conferences. https://www.e3s-conferences.org/articles/e3sconf/abs/2020/74/e3sconf_ebldm2020_01023/e3sconf_ebldm2020_01023.html
