Over the past decade, the growing use of digital technologies among older adults has attracted increasing attention from researchers, policymakers, and organisations working in the field of ageing. Across many countries, digital inclusion has become a key policy priority, accompanied by programmes aimed at improving older adults’ access to devices, connectivity, and digital skills.
Although these initiatives have helped reduce barriers to digital adoption, many still focus predominantly on basic skills, such as using messaging applications or search engines. Yet old age is highly heterogeneous, and so too are older adults’ digital interests and abilities. While some individuals are just beginning to develop foundational digital competencies, others are already exploring far more advanced technological domains, including programming, artificial intelligence (AI), and even the assembly and repair of computers.
This diversity becomes particularly evident when we turn to AI. Despite persistent assumptions that AI primarily attracts younger users, emerging evidence suggests that a significant number of older adults are already interacting with AI-driven systems in their everyday lives. Some are curious and wish to learn more about these technologies, while others already use them regularly. Evidence from the National Poll on Healthy Aging, conducted by the University of Michigan, shows that 55% of older adults surveyed reported using some form of interactive AI, such as voice assistants like Amazon Alexa or Siri, or chatbots such as ChatGPT, Gemini, or Copilot.
As artificial intelligence becomes increasingly embedded in digital environments, it may also be necessary to rethink how we conceptualise digital inclusion and exclusion. Much of the existing research on the digital divide has distinguished between three main dimensions: (a) access to digital devices and connectivity, (b) digital skills, and (c) the benefits individuals derive from technology use. Yet the rapid integration of AI into everyday technologies raises an important question: are we witnessing the emergence of a new layer of the digital divide?
If artificial intelligence systems are becoming a key interface through which people interact with information, services, and social environments, then analyses of digital inclusion and the digital divide must take it into account. This raises new questions: do older adults have access to AI systems? Do they have the skills needed to interact with them effectively? And perhaps most importantly, are they able to benefit from these technologies if they wish to do so?
However, the inclusion of older adults in the world of artificial intelligence is not solely a matter of expanding policies and initiatives in this area. It also raises important questions about how AI systems are developed. In this context, the concept of digital ageism becomes particularly relevant.
Ageism is commonly defined as stereotypes, prejudice, and discrimination based on age. Digital ageism extends this concept to technological environments. While the term has been used to describe exclusion and discrimination in online environments more broadly, recent work has increasingly framed it in the context of artificial intelligence.
One influential conceptualisation, proposed by Chu et al. (2022), situates digital ageism within a broader cycle of injustice in digital technologies affecting older adults. According to this perspective, AI systems can produce and reinforce ageist biases through multiple pathways that involve interactions between representation, design, technology, and allocation. For example, biased datasets can shape technological development in ways that fail to consider the diverse needs and interests of this age group, often focusing primarily on health-related aspects. As a result, AI systems trained on such datasets may struggle to accurately represent or respond to the experiences and realities of later life.
Furthermore, developers and designers, like all members of society, operate within cultural environments where ageist stereotypes remain widespread. These assumptions may therefore unintentionally shape technological design choices, influencing everything from interface design to the assumptions embedded in algorithmic models. If artificial intelligence systems are trained on biased data and developed within ageist cultural contexts, it is unsurprising that ageist representations may appear in their outputs.
At the same time, the limited participation of older adults in the design and testing of digital technologies has been widely criticised within Human–Computer Interaction research on ageing. Without the involvement of older users, technological development may fail to capture their experiences of interacting with digital systems and may hinder the identification of biased representations of ageing.
A simple exercise illustrates this phenomenon. Asking an image-generating AI system to “generate an image of an older person” often produces highly stereotypical representations. In my case, the result was a white-haired older woman with a gentle smile, an image reminiscent of the “kind grandmother” archetype from children’s stories. While such representations may seem harmless, they reveal how narrow and stereotypical portrayals of ageing can be reproduced through AI systems.
Empirical research has begun to document these patterns more systematically. For instance, a recent study analysing 164 images generated by OpenAI’s DALL-E using prompts related to geriatric concepts such as dementia identified ageist characteristics in the outputs across two time points (2022 and 2023). Notably, the images more often depicted negative emotional expressions than positive or neutral ones, illustrating how algorithmic outputs may reinforce problematic portrayals of ageing.
Taken together, these developments suggest that addressing digital ageism will require action across several domains.
First, greater attention must be paid to the development of AI systems to avoid reproducing ageist biases. This includes improving the representation of older adults in training datasets and actively involving them in the design and evaluation of technological systems.
Second, digital inclusion initiatives should expand beyond basic digital skills to include understanding how to interact with AI systems. Ensuring that older adults can benefit from these technologies, if they choose to do so, requires recognising AI as an important dimension of digital education.
Third, older adults, like individuals of all ages, need to develop a critical understanding of artificial intelligence. This includes identifying the potential benefits of these technologies while also being aware of their limitations and risks. Skills such as recognising misinformation generated by AI (i.e., hallucinations), understanding how algorithms work, and considering issues of data privacy related to these systems are becoming essential components of digital competence.
Considering the above, the intersection of AI and ageing demands both an inclusion agenda—one that recognises older adults as active users—and a fairness agenda that acknowledges them as an affected group whose needs must be considered. Together, these agendas will shape whether AI reduces or amplifies inequalities in ageing societies.
It is also important to recognise that, as AI continues to expand into everyday life, the interactions of older adults with these systems will inevitably contribute to shaping their future development. The data generated through these interactions will become part of the digital ecosystems that train and refine AI technologies. In this sense, the relationship between ageing societies and artificial intelligence is not simply one of adaptation but also one of co-evolution. Ensuring that this future does not reproduce existing inequalities requires a clear commitment to confronting digital ageism. If artificial intelligence is to serve ageing societies, it must be designed with ageing societies in mind.
About the Author
Javiera Rosell is an academic visitor at the Oxford Institute of Population Ageing. She is an Assistant Professor in the Department of Psychology, Faculty of Social Sciences, Universidad de Chile, and a researcher at the Millennium Institute for Care Research (MICARE).
Opinions of the blogger are their own and not endorsed by the Institute.

