Safe-guarding an ethical future for AI in education
Digital literacy experts are advocating for the need to change the way we engage with and teach Artificial Intelligence (AI), setting students up for success in the future.
Te Whare Wānanga o Waitaha | University of Canterbury (UC) Associate Professor of Digital Education Futures Kathryn MacCallum and colleagues have identified the critical components of an AI literacy curriculum that teachers can adopt with students at any level. These components are outlined in the initial findings of a three-phase Delphi study published in ASCILITE Publications.
Their findings suggest that an AI literacy curriculum must include an understanding of AI concepts, learning about how AI is applied, and the development of necessary technical skills, along with an understanding of the issues, challenges, and opportunities that AI brings, including ethical considerations.
“The concept of AI literacy has become increasingly prominent in recent years, and due to the increasing pervasiveness of AI technologies in every part of society, everyone must become AI literate,” says Associate Professor MacCallum.
She says fostering an understanding of AI at a young age is becoming more critical, but it is also something that should be infused into all tertiary programs.
“We need a future-focused curriculum to support our students to live in a digital society and for that, they need to understand how AI is developed, its diversity and its influences on us. AI literacy needs to sit alongside other digital literacies, where we support students to be more than just technology users; we want them to be the future creators of these technologies.”
She says her framework differs from others in that it explores the different levels of AI literacy, moving from informed users to developers and designers of AI systems. The intention of the framework is not to tie this literacy to a specific age, but to start from the framing that all students need a basic knowledge of AI to be aware users of it. So, even at a basic level, technical understanding is critical.
“AI is embedded in most systems we engage with today,” Associate Professor MacCallum says. “For example, AI technologies are embedded in social media and search engines and therefore influence what we see, engage with, and even what we listen to.
“We often saw ethics and social issues disconnected from the teaching of how AI works. In this framework, ethics is a critical part, but it also comes from understanding how AI systems are developed, so we can see the implications these systems have for us,” she says.
Associate Professor MacCallum says, “One outcome of the study is the framing of Aotearoa New Zealand’s unique bicultural focus, which is at the core of this study, and an important lens often missing from other frameworks.”
“Having a bicultural and ethical lens infused into the framework will hopefully support students to be more aware of the influence they have as future creators.”
More information:
Kathryn MacCallum et al, Identifying the components of foundational Artificial Intelligence (AI) literacy – Early results from a Delphi study, ASCILITE Publications (2023). DOI: 10.14742/apubs.2023.672
Provided by
University of Canterbury
Citation:
Safe-guarding an ethical future for AI in education (2024, March 28)
retrieved 28 March 2024
from https://phys.org/news/2024-03-safe-ethical-future-ai.html