International Review of Research in Open and Distributed Learning

Volume 24, Number 2

May - 2023

 

Stakeholder Perspectives on the Ethics of AI in Distance-Based Higher Education

 

Wayne Holmes1*, Francisco Iniesto2, Stamatina Anastopoulou3, and Jesus G. Boticario4
1University College London, UK, 2The Open University, UK, 3University of Leicester, UK, 4UNED, Spain, *Corresponding author

 

Abstract

Increasingly, Artificial Intelligence (AI) is having an impact on distance-based higher education, where its use is raising multiple ethical issues. However, to date, there has been limited research addressing the perspectives of key stakeholders about these developments. The study presented in this paper sought to address this gap by investigating the perspectives of three key groups of stakeholders in distance-based higher education: students, teachers, and institutions. Empirical data collected in two workshops and a survey helped identify what concerns these stakeholders had about the ethics of AI in distance-based higher education. A theoretical framework for the ethics of AI in education was used to analyse that data and helped identify what was missing. In this exploratory study, there was no attempt to prioritise issues as more or less important. Instead, the value of the study reported in this paper derives from (a) the breadth and detail of the issues that have been identified, and (b) their categorisation in a unifying framework. Together these provide a foundation for future research and may also usefully inform future institutional implementation and practice.

Keywords: Artificial Intelligence, ethics, distance-based higher education, students, teachers, institutions, theoretical framework

Stakeholder Perspectives on the Ethics of AI in Distance-Based Higher Education

Artificial Intelligence (AI) technologies are increasingly being applied in educational settings, such as schools and universities, a development that has many practical and ethical implications that are yet to be fully understood or addressed (Holmes & Porayska-Pomsta, 2023). (NB: Artificial Intelligence is capitalised to identify it as a field of enquiry rather than intelligence that is artificial; Holmes & Tuomi, 2022.) Given that distance-based higher education (HE) institutions typically operate online and gather huge amounts of student data, they are well placed to incorporate AI technologies in their systems (Dogan et al., 2023). However, little is currently known about the potential or actual consequences of such a development (Bates et al., 2020). Accordingly, to help institutions prevent or mitigate the negative consequences as they begin to reveal themselves over time, this paper investigated the perspectives of the three key groups of stakeholders in distance-based higher education—students, teachers, and institutions—regarding the ethics of AI in distance-based higher education.

Introduction

To ground the following discussion, first, what exactly is meant by AI? There have been many attempts to define AI during its more than 60-year history; see Holmes et al. (2022) for some of those definitions. Here, in line with Holmes and Tuomi (2022), we prefer the approach provided by the United Nations International Children’s Emergency Fund (UNICEF, 2021):

AI refers to machine-based systems that can, given a set of human-defined objectives, make predictions, recommendations, or decisions that influence real or virtual environments. AI systems interact with us and act on our environment, either directly or indirectly. Often, they appear to operate autonomously, and can adapt their behaviour by learning about the context. (p. 16)

AI has achieved some remarkable successes, such as the recently introduced large language models (LLMs) that can automatically generate human-like text in response to a prompt (e.g., ChatGPT; OpenAI, 2022). Meanwhile, AI has also been frequently challenged for its (a) biases that might lead to unfair and discriminatory outcomes, (b) apparently autonomous decisions that can have serious consequences, (c) impact on privacy given its use of large amounts of personal data, and (d) potential to be used for malicious purposes. AI has also been challenged for the hyperbole and the many myths surrounding it (e.g., Bender et al., 2021).

Second, what exactly is meant by AI and education (AI&ED; Holmes et al., 2022)? There are at least three dimensions of AI&ED: (a) learning with AI—using AI tools to support teaching and learning, either to deliver instruction or to accompany student learning, often referred to as AIED; (b) learning about AI—learning how AI works and how it can be created, sometimes known as the technological dimension of AI literacy; and (c) preparing for AI—learning what it means to live in a world increasingly impacted by AI, sometimes known as the human dimension of AI literacy (Holmes et al., 2022; Miao & Holmes, 2021). In the study presented in this paper, we focused specifically on learning with AI, which might be further subdivided into (a) institutional-facing AI, namely AIED tools that have been designed to support the functioning of institutions, changing decision making in all areas and addressing issues such as recruitment, finances, and timetabling; (b) teacher-facing AI, namely AIED tools designed to directly support teachers, of which there are very few examples; and (c) student-facing AI, namely AIED tools designed to directly support learning, which have been the subject of more than 40 years of research and have been commercialised by multiple multi-million-dollar commercial organisations (Holmes et al., 2019; Tahiru, 2021; Teng et al., 2022).

In fact, system-facing, teacher-facing, and student-facing AIED in HE are all developing rapidly, with AIED tools increasingly being provided by a fast-growing industry of commercial organisations (Knox, 2020). Examples include (a) adaptive learning platforms (Rivera Muñoz et al., 2022); (b) automated essay grading (Ramesh & Sanampudi, 2022); (c) writing assistance (e.g., Godwin-Jones, 2022); (d) research assistance (Wagner et al., 2022); and (e) student support (Goel & Polepeddi, 2017; Wollny et al., 2021); for a more detailed discussion of the state of the art of AIED, see Holmes and Tuomi (2022). Meanwhile, the distinctive characteristics of online distance learning, such as large numbers of students who work asynchronously with little if any face-to-face contact with faculty or peers (Ubachs et al., 2017), mean that distance-based universities are increasingly the focus of AI developers. In fact, the application of AI at scale in distance-based universities has long been explored (e.g., Boticario, 2019), while student-facing AI tools are already being used by thousands of distance students worldwide (e.g., to predict outcomes; Herodotou et al., 2020) and are likely to impact many more.

However, there remains little evidence at scale for the efficacy or impact of these applications (Holmes & Tuomi, 2022), and multiple issues are already beginning to reveal themselves. First, it has been suggested that teachers using AIED in HE rarely have sufficient experience or training to take advantage of the possibilities or to support their students in doing so (Bates et al., 2020; Nichols & Holmes, 2018). Second, students in HE have diverse cultural and economic backgrounds and varied experience with the use of AIED technologies (Hashakimana & Habyarimana, 2020), as well as varied accessibility needs (especially students who have a disability), which current AIED technologies rarely address (Iniesto et al., 2021; Miao & Holmes, 2021). Third, HE institutions perhaps need to better understand how AI algorithms have been designed, and their impact on data privacy, ownership, and use (Bell et al., 2021; Williamson, 2020). Fourth, universities must contend with AIED technologies that are developing faster than the curricula of their undergraduate and postgraduate degrees can adapt (Huang, 2021).

In addition, the growing relationship between AIED and HE has developed without serious engagement with the potential ethical consequences (Holmes et al., 2019; Holmes & Porayska-Pomsta, 2023). For example, what are the ethical implications of AIED tools designed to replace teacher functions (e.g., see XPRIZE)? In short, while the ethics of AI has been the focus of much work (Jobin et al., 2019), the ethics of the research and practice of AIED in HE has received limited attention (Bidarra et al., 2020). This is especially true of distance-based universities, where there is a lack of clear guidance, policies, and regulations to address the specific ethical issues raised by the use of AI to enhance distance teaching and learning. For these many reasons, we conducted a qualitative exploratory study with distance-based HE students, teachers, and institutions (Zawacki-Richter et al., 2019). There is no claim that the issues uncovered generalise, nor is there any attempt to prioritise which issues are more or less important. Instead, the value of the study reported in this paper derives from (a) the breadth and detail of the issues that have been identified, and (b) their categorisation in a unifying framework, which together provide a foundation for future research and might also usefully inform future implementation and practice.

The Ethics of AI in Education

Attention to the ethics of AI in general has resulted in multiple sets of ethics guidelines, as summarised by Jobin et al. (2019) and Hagendorff (2020), as well as international recommendations (e.g., United Nations Educational, Scientific and Cultural Organization [UNESCO], 2021), almost all of which broadly focus on data and algorithms. The ethics of data involves issues such as consent, privacy, ownership, data choices, data provenance, and proxies. Meanwhile, the ethics of algorithms involves issues such as biases, unintended consequences, human control, transparency, accountability, and the specificities of individual machine learning models (Crawford et al., 2019).

The ethics of AIED also raises a variety of complex issues centred on data and on how that data is analysed and exploited (i.e., the algorithms or computational approaches). However, for AIED, investigating the ethics of data and algorithms is necessary but not sufficient (Holmes et al., 2021): the ethics of learning with AI cannot be reduced to questions about data and algorithms alone. Any comprehensive ethics of learning with AI also needs to account for the ethics of education itself, which involves issues such as choice of pedagogy, what counts as useful knowledge, the teacher/student relationship, self-fulfilling expectations, student agency, surveillance, diversity, equity, inclusion, and the validity of assessments, among others (Holmes et al., 2021). In addition, some ethical issues may arise not from the decision to use AI, but from the choice of which AI approach to use (Jivet et al., 2017). This is especially true given that, all too frequently, assumptions made by some AI engineers are naïve, unsupported, or contested by the learning sciences (Malik et al., 2021).

Holmes et al. (2021) proposed a framework that includes all three areas that need to be addressed by any comprehensive ethics of learning with AI, namely data, algorithms, and education (Figure 1).

Figure 1

Framework for the Ethics of Learning with AI

Note. Adapted from “Ethics of AI in Education: Towards a Community-Wide Framework,” by W. Holmes et al., 2021, International Journal of Artificial Intelligence in Education, 32, pp. 504-526, https://doi.org/10.1007/s40593-021-00239-1. Copyright 2021 by Springer.

There is, however, as shown in Figure 1, a second level, in the overlaps between adjacent areas: (a) the ethics of data used in general AI, which has received a great deal of attention (Jobin et al., 2019); (b) the ethics of data used in education (more usually known as learning analytics or educational data mining), which again has received much attention (Kitto & Knight, 2019); and (c) the ethics of algorithms in educational contexts, which, so far, has received very little attention. To give just one example for this last overlap, both emotion detection algorithms and pass-rate estimation algorithms may be set up with the best of intentions, but by default they require a level of student surveillance and might all too easily lead to unexpected outcomes, such as misleading recommendations (Slade & Tait, 2019). The three main areas and the three main overlaps in Figure 1 are what Holmes et al. (2019) identified as the known unknowns. However, what remain to be identified or investigated are the unknown unknowns that exist at the overlap among all three areas, as marked with the question mark at the centre of Figure 1.
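To make the pass-rate estimation example concrete, the following minimal sketch (our illustration, not a system described in the literature; the activity features, coefficients, and synthetic data are all hypothetical) shows how such an algorithm typically works: a model is fitted to logged learning management system activity and used to predict each student's probability of passing. The prediction is only possible because every interaction is recorded, which is why a level of student surveillance is built in by default.

```python
# Minimal hypothetical sketch of a pass-rate estimator trained on logged
# LMS activity. The point: the prediction depends entirely on recorded
# student behaviour, i.e., surveillance by default.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic activity log, one row per student:
# [logins, forum_posts, quiz_attempts, minutes_on_platform]
X = rng.poisson(lam=[30, 5, 8, 600], size=(200, 4)).astype(float)

# Synthetic pass/fail outcomes loosely tied to activity (illustration only)
passed = (0.02 * X[:, 0] + 0.1 * X[:, 1] + 0.05 * X[:, 2]
          + 0.001 * X[:, 3] + rng.normal(0, 0.5, 200)) > 1.8

model = LogisticRegression(max_iter=1000).fit(X, passed)

# Predicted pass probability for a new (hypothetical) student
new_student = np.array([[12, 1, 3, 250]])
print(f"Estimated pass probability: {model.predict_proba(new_student)[0, 1]:.2f}")
```

A misleadingly low predicted probability, surfaced to a student or tutor, is exactly the kind of unexpected outcome, such as a misleading recommendation, that this overlap of the framework flags.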

It is important to acknowledge the inevitable limitations of such a framework. It does not suggest there are clear, unambiguous, or rigid differences between the various categories. Indeed, any particular issue might be placed in more than one area. Nonetheless, the framework is still useful for helping to illuminate connections and identify issues that have not yet been considered.

While discussions around the ethics of AI and education have recently begun to emerge (e.g., Holmes & Porayska-Pomsta, 2023; Holmes et al., 2021), little is yet known about the attitudes of students, teachers, and the institutions themselves regarding the ethical consequences, benefits, and risks. For example, do students and teachers welcome the introduction of AI technologies in their teaching and learning, or do they have objections (e.g., about the possible impact on human interactions)? In fact, with AI rapidly coming to distance learning, it is incumbent on the distance learning institutions to ensure that the use of AI technologies respects human values and attitudes (Holmes et al., 2022), for which knowing the opinions of key stakeholders is critical.

Accordingly, this paper set out to trigger and inform a discussion by exploring the ethics of AI in distance-based HE from the perspectives of the three key groups of stakeholders: the students, the teachers, and the institutions themselves. The overarching aim was to identify which ethical issues centred on learning with AI are of concern to these stakeholders, in order to provide a foundation for future research and to inform future implementation and practice. For this purpose, we used the framework proposed by Holmes et al. (2021; Figure 1), amended to include the perspectives of the three stakeholder groups (Figure 2), to analyse issues of concern. The framework also helped to identify some additional potential issues of concern that were missing from the empirical data.

Figure 2

The Stakeholder Framework for the Ethics of Learning with AI That Involves the Ethics of Data, Algorithms, and Education

Note. Adapted from “Ethics of AI in Education: Towards a Community-Wide Framework,” by W. Holmes et al., 2021, International Journal of Artificial Intelligence in Education, 32, pp. 504-526, https://doi.org/10.1007/s40593-021-00239-1. Copyright 2021 by Springer.

Methodology

This study explored the ethics of AI in distance-based higher education from the perspectives of three key groups of distance learning stakeholders: students, teachers, and the institutions themselves. It built on the student-facing, teacher-facing, system-facing trichotomy described by Holmes et al. (2019), with one key amendment. Rather than ‘system’, we focused on institutions, given that institutions comprise both the systems in place and the people who run them, who are, by definition, key stakeholders in the context under discussion. We used an indefinite article for each stakeholder perspective to acknowledge that there may be competing opinions within that group, and to reinforce that the identified issues were not generalised. We were interested in the views of the three groups of stakeholders as they pertain to the ethics of AI in distance learning and teaching.

Inevitably, the three different stakeholder groups raised different research challenges and required different research methods. The students at a distance university are by definition not on a campus, nor do they often attend conferences together. Hence, this study used an online survey of students from a single distance university, the Open University (OU-UK). However, for the teacher and institutional perspectives, this study took advantage of two key international academic gatherings of distance-based higher education teachers and administrators in order to hold two workshops.

Survey

To capture some perspectives of distance-based university students, an online survey was designed and implemented using Qualtrics. The survey method was adopted for its suitability for identifying rather than evaluating issues (Nayak & Narayan, 2019). It aimed to elicit a student voice on the application of AI in distance education (Holmes & Anastopoulou, 2019). In particular, the survey explored students’ thoughts, opinions, understanding of, and emotional disposition towards the application of AI to support students, staff, teaching, and learning.

The survey was conducted at a single online distance university, the OU-UK, with 2,500 randomly selected current distance students invited to participate. The survey was open for 21 days, during which time a self-selected sample of 221 (~9%) responded, with 155 answering all of the questions and the others answering most but not all of the questions. The low response rate was within the range expected by the university when surveying its students. Undertaking the survey was voluntary, no incentives were offered, and no questions were compulsory. The survey comprised 13 closed questions and 10 open-ended questions, which together covered a wide range of issues. For the study reported in this paper, we have included only the three open-ended questions that addressed the ethics of AI in online distance universities: students’ hopes for the use of AI (question 13), their fears about the use of AI (question 14), and their ethical concerns about the use of AI (question 15).

Workshops

Two workshops were held to capture some perspectives of online distance university teachers and institutions. Workshops were adopted as a research method for their suitability for identifying and discussing, rather than rigorously evaluating, issues (Ørngreen & Levinsen, 2017). Both workshops were held at conferences in 2019. One was held at the European Association of Distance Teaching Universities (EADTU) “Online, Open and Flexible Higher Education Conference” in Madrid in October 2019, which focused on trends in global and European higher education in blended and distance learning. The other was held at the International Council for Open and Distance Education (ICDE) “World Conference on Online Learning” in Dublin in November 2019, which aimed to anchor the growth of new models of open, online, and digital learning in the wider context of UNESCO’s sustainable development goals.

At each conference, the workshop was titled “The Ethics of Artificial Intelligence to Enhance Distance Teaching: Who Cares?” The workshops were designed and organised by the authors as an opportunity for researchers exploring ethical issues around the use of AIED in distance-based higher education to share their insights, identify key ethical issues, map out ways to address the multiple challenges, and inform best practice. They aimed to help establish a basis for the meaningful ethical reflection necessary for innovation, and they built on the experience of three earlier similar workshops organised by the authors at the AIED conferences in 2018 and 2019 (Holmes et al., 2018) and at the European Conference on Technology Enhanced Learning in 2019.

Participants in each workshop contributed to the discussions and were self-selected from the attendees at the EADTU and ICDE conferences named above. They comprised around 30 international distance education teachers and institutional stakeholders, including lecturers (professors), researchers, administrators, and institutional policymakers. The workshops used a participatory approach, with round-table small-group discussions, triggered by provocative statements, to address the challenges posed by AI in distance-based higher education, as well as whole-workshop discussions. Both workshops began by considering what the ethics of AI in distance education might look like in 2025 and what needs to be done to ensure its effects are worthwhile. Questions included the following: What data are collected, and what data should not be collected? How can informed consent be assured? What data, algorithmic, or other biases might need to be addressed? How do we protect student and teacher agency and protect against unintended consequences? How do we assure the accuracy and validity of AI-assisted assessments? The workshop participants were encouraged to add their reactions, thoughts, ideas, and concerns to a shared Padlet virtual bulletin board.

Analysis

For both the survey and workshop data, we undertook a thematic analysis (Joffe, 2012). First, both sets of data were read and coded by at least two researchers, using the novel framework shown in Figure 2. These codes were then reviewed by two different researchers, and then the data under each code was summarised. Every effort was made to represent and summarise the data accurately and fairly; even so, the authors were aware that they may still have introduced biases. Nonetheless, given the exploratory nature of the study reported in this paper, unlike in a systematic review, any such biases are unlikely to have notably skewed the results.
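To illustrate the bookkeeping behind this coding process, here is a minimal hypothetical sketch (the actual analysis was conducted manually by the researchers, and the example items below are invented): each contribution is tagged with a source, a stakeholder perspective, and a framework category, and the tags can then be tallied by framework category and stakeholder.

```python
# Hypothetical sketch of the coding bookkeeping: each survey or workshop
# contribution is tagged with a source, a stakeholder perspective, and a
# framework category (the actual coding was done manually by researchers).
from collections import Counter

coded_items = [
    # (source, stakeholder, framework_category, summary)
    ("W2", "students", "data", "consent for AI model development"),
    ("S",  "students", "data in AI", "vulnerability of personal data"),
    ("W1", "teachers", "algorithms", "AI to train teachers"),
    ("W2", "institutions", "education", "quality assurance"),
]

# Tally items per (framework category, stakeholder) cell of the framework
tally = Counter((category, stakeholder)
                for _, stakeholder, category, _ in coded_items)

for (category, stakeholder), n in sorted(tally.items()):
    print(f"{category:15s} | {stakeholder:12s} | {n} item(s)")
```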

Results

The survey responses and the contributions made in both workshops demonstrated that the topic of ethics of AI in distance-based higher education was thought to be, at least by these particular participants, of importance and thus worthy of further inquiry. To illustrate, we begin this section of the paper with some example direct quotations from the survey and workshops arranged according to the three stakeholder perspectives (student, teacher, institutional), in a tabular version (Table 1) of Figure 2.

Table 1

Illustrative Direct Quotations From the Three Stakeholder Perspectives, Organised According to the Stakeholder Framework for the Ethics of Learning With AI

Data
• Students: “Can we use student data to develop AI models without student agreement?” (W2)
• Teachers: “Teachers do not understand the consequences of how to use data of their students, not even from the educational viewpoint.” (W2)
• Institutions: “What frameworks should we trust the ‘ethics’ of AI enterprises?” (W2)

Data in AI
• “The system in order to operate more effectively will need to know more about the individual, this leaves data much more vulnerable as the temptation to malicious individuals who have nothing better to do.” (S)

Algorithms
• Students: “AI could override the socio-economic background of students by predicting their needs.” (W1)
• Teachers: “We need AI to train teachers to work (together) with AI tools.” (W1)
• Institutions: “Educational institutions are already keeping a lot of data that potentially can be used to help the students but can also be misused.” (W2)

Data in education
• “Any attempts to cover up the use of AI tools (e.g., by trying to make them too ‘human’ in their interactions). It should always be possible to distinguish an AI tool being used.” (S)

Education
• Students: “I feel by using AI this will lower the educational standards.” (S)
• Teachers: “Re-allocating teacher resources where AI is doing all the ‘boring’ stuff and teachers can concentrate on things that matter more, like helping disabled students.” (W2)
• Institutions: “The use of AI in education will change the whole ecosystem of education to build trust of all stakeholders.” (W1)

Algorithms in education
• “There is a huge asymmetry in an understanding of both the reality and potential of AI between commercial interests and policy-makers.” (W2)

Note. W1 = EADTU workshop; W2 = ICDE workshop; S = survey.

In the following sections, we summarise issues raised in the survey and workshops from student, teacher, and institutional perspectives. In Table 2, example issues are summarised according to the stakeholder framework for the ethics of learning with AI.

A Student Perspective

Issues raised by participants that were of particular relevance to students included informed consent, data ownership, privacy, personalisation, biases, and social impact. To begin, various participants argued that AI has the potential to improve learning, by providing more personalised support, perhaps delivered by personal lifelong learning companions, thus leading to better results. However, the actual meanings of the words personalised, improve, and better were not explored. For example, AI systems might help overcome the socio-economic disadvantages of at-risk students by predicting and addressing their specific needs—although students who are economically disadvantaged might not even be able to access the best technologies and so might lose out. In fact, personalised learning systems might also lead to students being homogenised, the polar opposite of individualised: the current crop of so-called personalised systems aim to ensure that all the students learn the same things.

Another key focus was informed consent. Do students have a genuine opportunity to choose whether to opt in or opt out of the AI system, a possibility that should be, but is not always, available (Khalil et al., 2018)? In particular, what about the data that the system collects? Currently, there is no clear understanding of (a) who owns the data (the student, the institution, or the private company that runs the system); (b) what the impact of that data is on privacy; or (c) how biases arising from partial data or algorithms might be identified and mitigated. Another risk noted by participants was that, by focusing on human-to-machine interactions over human-to-human interactions, and especially when the systems are driven by industry needs rather than student needs, learning might become dehumanised, lacking the benefits of social interaction, student-to-student collaboration, communities of learning, and emotional understanding.

A Teacher Perspective

Issues raised by participants that were of particular relevance to teachers included data, training and support, supporting versus replacing teachers, saving teacher time, and human interactions. The usefulness of data to support teacher decision making was mentioned by many participants, together with the acknowledgement that teachers are rarely experienced in using student data effectively. This leads to the second issue, the need for teacher training in AI—what it is and how it might be used in education, as well as the many implications that follow. In recent years, there has been a great deal of emphasis on teachers’ digital competencies and digital literacy, which now needs to be extended to include AI and should be embedded in teacher training. Similarly, participants suggested that teachers should be supported to navigate the many free resources online, to identify those videos and other materials that are of high quality, as that will help them better understand the potential and impact of AI. Therefore, it seems that teachers, as well as students, are demanding clearer messages about what, where, and how to use AIED.

Participants mentioned another issue of importance from a teacher perspective: whether the AI applied in educational contexts has been designed to support teachers or, as is the case with many current applications, to replace teacher functions and thus, by default, potentially to replace teachers. The rhetoric claims, for example, that AI will save teacher time and allow teachers to focus on other aspects of supporting their students; however, this argument has been made for educational technology since the 1930s, does not appear to have been realised (Watters, 2021), and currently has little supporting evidence, so teachers might understandably be concerned. As AI becomes more sophisticated, what will be the impact on their role (might they change from teacher to mentor?) or on their jobs, given that teachers will always be more expensive than machines? Similarly, with students spending more time engaging one-on-one with AI programmes, what will be the impact on human interactions (teacher-student and student-student) and on broader understandings of learning?

Table 2

Example Issues From the Three Stakeholder Perspectives, Organised According to the Stakeholder Framework for the Ethics of Learning With AI

Data
• Students: W1: The value of data; W2: Ownership of data; H: -; F: -; E: Informed consent
• Teachers: W1: Training teachers; W2: Teachers supported by AI; H: -; F: -; E: -
• Institutions: W1: Data misuse; W2: Anti-fraud assessment; H: -; F: -; E: Data breach

Data in AI
• W1: Less human more ethics in AI; W2: Data misuse; H: -; F: Privacy; E: Privacy

Algorithms
• Students: W1: Consent; W2: Personalising learning; H: Learning paths; F: Increased disadvantage; E: -
• Teachers: W1: -; W2: Training teachers; H: Better support for the teacher; F: Lack of human interaction; E: -
• Institutions: W1: -; W2: Trustable technology; H: Support institutional services; F: Not guaranteed value for money; E: -

Data in education
• W1: -; W2: -; H: Better teacher support; F: -; E: Being aware that it is AI

Education
• Students: W1: -; W2: Students are unique; H: AI to enhance learning; F: Lack of human interaction; E: Poorer learning experience
• Teachers: W1: Reallocating teachers’ resources; W2: Changing role of teachers; H: Keep educators; F: AI to replace teachers; E: AI to replace teachers
• Institutions: W1: Stakeholders’ trust; W2: Quality assurance; H: AI to provide better courses; F: AI not fit for purpose; E: -

Algorithms in education
• W1: Reality and potential of AI; W2: Biases and commercial aspects; H: -; F: -; E: Biases in decision making

Note. W1 = EADTU workshop; W2 = ICDE workshop; H = hopes (survey question 13); F = fears (survey question 14); E = ethical concerns (survey question 15).

An Institutional Perspective

Issues raised by participants that were of particular relevance to institutions included data, trust, the advantages of AI, and the challenges of implementation. To begin, participants noted that many distance-based institutions, particularly those that are mainly online, already collect a wide range of data that might potentially be used to improve institutional services and support students. However, participants also noted that, without care, this data could all too easily be misused or lead to unintended consequences.

Accordingly, participants suggested that, as AI is increasingly applied to support teaching and assessment, institutions will have to ensure that data models are accurate and well protected; they must place increasing emphasis on preventing data breaches and data fraud. Participants also noted an asymmetry of understanding between HE institutions and AI companies about the benefits that AI might genuinely bring, and about the implications—a disparity that needs to be negotiated (Renz & Hilbig, 2020).

Participants also broadly agreed that the application of AI in HE is generally beneficial for institutions, thanks to its ability to identify patterns of behaviour to profile students and make effective recommendations. However, they also noted the ever-present danger that mistakes from the past, such as gender biases, can be embedded unintentionally in AI systems, reducing both their acceptability and their effectiveness. Finally, participants noted the institutional challenges of implementing AI systems widely in HE settings. While specific technologies can easily be piloted in limited contexts, it is much more difficult to integrate AI into institution-wide IT systems without risking bringing the whole system down. Large-scale implementation will have pedagogical, organisational, legal, technical, and ethical consequences, all of which need to be identified and robustly addressed.

Discussion

The survey and workshops identified a wide range of issues pertaining to the ethics of AI used in online distance universities, many of which might be more widely applicable. However, when this data was aligned with the theoretical framework, various gaps appeared suggesting other issues that ought to be considered. To use a grandiose metaphor, consider how Mendeleev proposed the existence of gallium due to a gap in his periodic table (Uppenbrink, 2000). For example, even though the survey and workshop participants had not mentioned them, many potential issues centred on human rights (Holmes et al., 2022). Accordingly, in Table 3, we have summarised the empirical issues (i.e., those arising from the survey and the workshops), augmented by some theoretical issues (identified by italic font and square brackets) that participants did not mention but that emerge from a reflection on the extended framework. In other words, the theoretical framework helped identify some gaps in the empirical data. For example, it was notable that the ethics of education was mostly missing, with all but one response focusing on data or algorithms.

Table 3

Interpretation of the Empirical Issues and Theoretical Issues From the Three Stakeholder Perspectives

Data
Students:
• right to withhold personal data and consent
• right to data security and privacy
• [right to see/access data collected about them]
• right to own data that they created
• awareness that data has institutional and commercial value
Teachers:
• right to be trained about data
• right to know how data about their teaching is used by the institution
• [right to withhold personal data and consent]
• [right to own data that they created]
• [right to data security and privacy]
• [awareness that data has institutional and commercial value]
Institutions:
• responsibility to respect GDPR and data ownership
• [responsibility to institute clear informed consent practices and to respect the outcomes]
• awareness of the impact of data on student/teacher privacy [and individual agency]
• responsibility to keep datasets accurate and up to date
• responsibility to prevent data breaches and data fraud
• [awareness that commercial AI systems might mean commercial exploitation of student and teacher data]

Data in AI
• responsibility to ensure accurate, unbiased, and well-protected data models
• responsibility to avoid the misuse of data models

Algorithms
Students:
• awareness that personalisation can mean homogenisation
• right to algorithmic privacy (e.g., right for systems not to infer personal emotional states)
• right to opt in/opt out of algorithms
Teachers:
• right to be trained in AI, to enable rational choices
• right to be supported to navigate AI-powered online resources
• right to better understand the potential and impact of AI [right to learn how to interpret the outcomes of algorithmic analyses]
Institutions:
• [responsibility to reflect on how AI systems inform decision making]
• responsibility to understand how data is analysed
• responsibility to allow individuals to decide how their data is analysed
• [responsibility to interpret data in multicultural contexts]
• [responsibility to understand how inaccurate or outdated student models affect later decisions]
• [responsibility to consider the impact of predictions on student self-efficacy, resilience, and mental health]

Data in education
• [awareness that data in education is always limited: it only represents online activities (e.g., interaction with a learning management system) and does not include offline activities (e.g., reading a book or engaging in collaborative problem-solving)]

Education
Students:
• [right to high quality and appropriate pedagogy]
• [right to collaborative engagement with teachers and students]
• [right to individual agency]
Teachers:
• [right to high quality engagement and relationships with students]
Institutions:
• awareness of importance of trust in relationships between institutions, students, and commercial suppliers
• [awareness of importance of human agency in teaching and learning]
• [responsibility to ensure education is inclusive (i.e., does not discriminate based on gender, disability, or socio-economic status)]
• [responsibility to ensure students are free from surveillance]

Algorithms in education
• awareness of unintentional biases
• awareness of disparity between commercial and academic interests
• awareness that personalised support might not contribute to better results
• [awareness that AI often replaces teacher functions and so might replace teachers]
• [awareness that teachers need professional development]
• [requirement to take responsibility when AI goes wrong]
• [awareness that AI profiles students (engages in surveillance)]

Note. Empirical issues are shown in non-italic font, while theoretical issues are shown in italic font and square brackets.

Next, we discuss both the empirical and the theoretical issues for the ethics of learning with AI in terms of the three stakeholder perspectives.

A Student Perspective on the Empirical and Theoretical Issues

As mentioned by the participants, or emerging from the reflection on the framework, from a student’s perspective there are multiple ethical issues centred on data: the right to withhold personal data, the right to see/access data collected about them, the right to data security and privacy, and the right to own the data that they create when they engage with an AI system.

Participants also noted the need for students to be made aware, as part of the informed consent process, that data has institutional and commercial value. In fact, it has been argued that the meaning of consent in the digital age is negotiable (Tarran, 2018). In any case, there is a fundamental difference between legal consent, where users simply tick a box after having been presented with screeds of fine-print information, and ethical consent, where users fully understand and are comfortable with how their data is being used.

Similar issues arise regarding AI algorithms: the right to opt in or opt out of particular algorithms, and the right to algorithmic privacy, which includes the right not to be surveilled and not to have one’s personal emotional states inferred and used. While the aim of this algorithmic surveillance and profiling might be laudable (e.g., to move students from negative to positive emotional states in order to enhance their learning), it might be argued that it represents an unacceptable infringement of personal privacy. In addition, students should be aware that despite the putative benefits of so-called personalisation through algorithms, the unintended consequences could be homogenisation rather than enabling students to develop their individual potential or to self-actualise.

Finally, although participants mentioned few of them, there are also multiple education-specific rights that any application of AI in distance-based HE and elsewhere must address, including but not limited to the right to high quality and appropriate pedagogy, the right to collaborative engagement with teachers and students, and the right to individual agency. To give one example, what are the ethical consequences of data collected automatically by a learning management system being analysed in order to predict student success or failure? Given that students have the human right to view that data (Holmes et al., 2022), presumably they also have the right to view the prediction. However, if the prediction is that the student will fail, what is the potential impact on the student—will they redouble their efforts or give up? While learners have sometimes been asked for their general views on the use of predictive learning analytics (e.g., Rets et al., 2023), the ethical question of the impact of such a prediction on student self-efficacy, resilience, and mental well-being is yet to be properly considered.

A Teacher Perspective on the Empirical and Theoretical Issues

Regarding data, this study suggested that teachers should have the same rights as students (e.g., consent, privacy, and ownership). They also have the right to know how data about their teaching is being used, and to know that the data has institutional and commercial value. It also needs to be recognised that teachers are not necessarily familiar with how data is collected and analysed, how best to deal with it, and how it impacts on their teaching or their students (whether positively or negatively). Accordingly, professional development programmes for teachers need to be developed and made available, covering, for example, how to interpret data, what data might be missing, and the ethical consequences for teachers and their students. In particular, this should address the fact that data in education is always limited: it only represents online activities (e.g., interaction with a learning management system) and does not include offline activities (e.g., reading a book or engaging in collaborative problem-solving).

Professional development also needs to include algorithmic literacy: how the algorithms manipulate data and make recommendations, and how teachers can make humanistic use of AI in their classrooms (Miao & Holmes, 2021). Issues such as unintentional biases, the disparity between commercial and academic interests, the awareness that AI often replaces teacher functions and so could possibly replace teachers, and the fact that AI profiling might be considered surveillance all need to be addressed. Teachers should also be encouraged to engage with other challenging issues, such as whether the application of AI in education genuinely saves teacher time, something that educational technologists have promised (but not delivered) for almost 100 years, and whether so-called personalised support genuinely contributes to better student outcomes (in terms of knowledge, skills, and values—not just examination results).

An Institutional Perspective on the Empirical and Theoretical Issues

For institutions, the ethical issues related to data and algorithms tend to be responsibilities rather than rights, including the responsibility to (a) respect data regulations (such as GDPR); (b) respect student privacy and ownership of their data; (c) ensure that consent is fully informed and freely given (not just ticked by the student when they first enrolled many months previously); and (d) safeguard data security. An ethical institutional approach also involves ensuring that data is (a) accurate, up to date, unbiased, and inclusive (e.g., it does not discriminate based on gender, disability, or socio-economic status); (b) well protected (to prevent data breaches, data misuse, and data fraud); and (c) easily challenged by students and teachers, while recognising that data always provides only a partial picture of student achievements. Such an approach also means ensuring that algorithmic analyses are fair, transparent, valid, and reliable. At the same time, it is necessary to avoid (a) biased assumptions (perhaps arising from the multicultural contexts within which data is collected); (b) outdated medical models (such as disability classifications still used in many educational contexts); and (c) statistical apophenia (finding causal patterns where no meaningful patterns are present). Instead, the key is to focus on humanistic approaches to teaching and learning, such as promoting student agency and avoiding student surveillance.

Institutions also need to take care when partnering with commercial enterprises, whose values usually differ from the university’s, especially given that student data is usually exploited outside the institution by the commercial developer. Institutions should ensure that any commercial partners meet the highest ethical standards and that their practices are demonstrably trustworthy. This raises the issue of trust between institutions and students, as well as between institutions and the commercial organisations that provide the AI systems. For trust to develop, the systems and the companies need to be trustworthy: the onus should be put on the system developers themselves to ensure that they deserve trust, rather than on the students to trust something that might or might not be trustworthy.

Finally, it is critical to engage with the promises that AI is supposed to deliver (such as personalised learning) while encouraging the use of innovative approaches to teaching, learning and assessment—rather than simply automating poor pedagogic practices. For example, institutions might encourage the development of AI that enables more nuanced, accurate, and valid assessment of student achievements, rather than AI that simply automates or proctors exams.

Conclusion

This paper investigated the perspectives of key stakeholders on the ethics of artificial intelligence applied in distance-based higher education. Two workshops and a survey helped identify multiple concerns, to which were added some missing concerns that emerged from a reflection on an ethics of learning with AI framework (Holmes et al., 2021). The study identified multiple ethical issues (or issues with ethical implications) in terms of data, algorithms, and education, as well as their overlaps. The key takeaways, many of which are likely applicable beyond the specific context of distance-based HE to HE in general, are the ethical rights and responsibilities summarised in Table 3; no doubt, readers can think of other potential missing issues to input into the discussion that this paper aims to stimulate.

To reiterate, we do not claim that the range of ethical issues discussed in this paper is definitive. The ethical concerns that ought to be considered are only likely to grow further as new AI developments are deployed in educational contexts, as evidenced by the novel ethical issues raised relatively recently by LLMs, such as ChatGPT, potentially being used by students to write essays (Susnjak, 2022). Nor is there any attempt to prioritise which issues are more or less important. These and other limitations (e.g., that only students from one distance university were surveyed) are being addressed in ongoing research. Instead, the value of this paper derives from the breadth and detail of the ethical issues that have been identified, partly empirically (from the survey and workshop data) and partly theoretically (inferred by means of a framework). Together, these not only provide a foundation for future debate and research but might also usefully inform future institutional implementation and practice, as well as appropriate regulations. In particular, the paper highlights the value of engaging with all relevant stakeholders—students, teachers, and institutions—to help ensure that the application of AI in distance-based HE is genuinely for the benefit of all.

References

Bates, T., Cobo, C., Mariño, O., & Wheeler, S. (2020). Can artificial intelligence transform higher education? International Journal of Educational Technology in Higher Education, 17(1), 42. https://doi.org/10.1186/s41239-020-00218-x

Bell, G., Gould, M., Martin, B., McLennan, A., & O’Brien, E. (2021). Do more data equal more truth? Toward a cybernetic approach to data. Australian Journal of Social Issues, 56(2), 213-222. https://doi.org/10.1002/ajs4.168

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623). https://doi.org/10.1145/3442188.3445922

Bidarra, J., Simonsen, H., & Holmes, W. (2020, June). Artificial intelligence in teaching (AIT): A road map for future developments [Presentation]. EMPOWER Webinar Week (EADTU). https://doi.org/10.13140/RG.2.2.25824.51207

Boticario, J. G. (2019, October 16-18). A roadmap towards personalized learning based on digital technologies and AI at higher education [Conference presentation]. OOFHEC2019: The Online, Open and Flexible Higher Education Conference, Madrid, Spain. https://canal.uned.es/video/5da96278a3eeb0d93f8b4568

Crawford, K., Dobbe, R., Dryer, T., Fried, G., Green, B., Kaziunas, E., Kak, A., Mathur, V., McElroy, E., Sánchez, A. N., Raji, D., Rankin, J. L., Richardson, R., Schultz, J., West, S. M., & Whittaker, M. (2019). AI NOW 2019 report. AI Now Institute. https://ainowinstitute.org/AI_Now_2019_Report.pdf

Dogan, M. E., Goru Dogan, T., & Bozkurt, A. (2023). The use of artificial intelligence (AI) in online learning and distance education processes: A systematic review of empirical studies. Applied Sciences, 13(5), 3056. https://doi.org/10.3390/app13053056

Godwin-Jones, R. (2022). Partnering with AI: Intelligent writing assistance and instructed language learning. Language Learning and Technology, 26(2), 5-24. https://doi.org/10125/73474

Goel, A. K., & Polepeddi, L. (2017). Jill Watson: A virtual teaching assistant for online education (College of Computing Technical Report No. 503). Georgia Tech. https://smartech.gatech.edu/bitstream/handle/1853/59104/goelpolepeddi-harvardvolume-v7.1.pdf

Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99-120. https://doi.org/10.1007/s11023-020-09517-8

Hashakimana, T., & Habyarimana, J. de D. (2020). The prospects, challenges and ethical aspects of artificial intelligence in education. Journal of Education, 3(7), 14-27. https://stratfordjournals.org/journals/index.php/journal-of-education/article/view/655

Herodotou, C., Rienties, B., Hlosta, M., Boroowa, A., Mangafa, C., & Zdrahal, Z. (2020). The scalable implementation of predictive learning analytics at a distance learning university: Insights from a longitudinal case study. The Internet and Higher Education, 45, 100725. https://doi.org/10.1016/j.iheduc.2020.100725

Holmes, W., & Anastopoulou, S. (2019). What do students at distance universities think about AI? Proceedings of the Sixth ACM Conference on Learning @ Scale. Association for Computing Machinery (Article No.: 45; pp. 1-4). https://doi.org/10.1145/3330430.3333659

Holmes, W., Bektik, D., Whitelock, D., & Woolf, B. P. (2018). Ethics in AIED: Who cares? In C. Penstein Rosé, R. Martínez-Maldonado, H. U. Hoppe, R. Luckin, M. Mavrikis, K. Porayska-Pomsta, B. McLaren, & B. du Boulay (Eds.), Artificial intelligence in education (Vol. 10948, pp. 551-553). Springer International Publishing. https://doi.org/10.1007/978-3-319-93846-2

Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign.

Holmes, W., Persson, J., Chounta, I.-A., Wasson, B., & Dimitrova, V. (2022). Artificial intelligence and education: A critical view through the lens of human rights, democracy, and the rule of law. Council of Europe. https://rm.coe.int/artificial-intelligence-and-education-a-critical-view-through-the-lens/1680a886bd

Holmes, W., & Porayska-Pomsta, K. (Eds.). (2023). The ethics of AI in education. Practices, challenges, and debates. Routledge.

Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Buckingham Shum, S., Santos, O. C., Rodrigo, M. M. T., Cukorova, M., Bittencourt, I. I., & Koedinger, K. (2021). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 32, 504-526. https://doi.org/10.1007/s40593-021-00239-1

Holmes, W., & Tuomi, I. (2022). State of the art and practice in AI in education. European Journal of Education: Research, Development and Policies, 57(4), 542-570. https://doi.org/10.1111/ejed.12533

Huang, X. (2021). Aims for cultivating students’ key competencies based on artificial intelligence education in China. Education and Information Technologies, 26, 5127-5147. https://doi.org/10.1007/s10639-021-10530-2

Iniesto, F., Coughlan, T., & Lister, K. (2021). Implementing an accessible conversational user interface: Applying feedback from university students and disability support advisors. Proceedings of the 18th International Web for All Conference. Association for Computing Machinery (Article No.: 45; pp. 1-5) https://doi.org/10.1145/3430263.3452431

Jivet, I., Scheffel, M., Drachsler, H., & Specht, M. (2017). Awareness is not enough: Pitfalls of learning analytics dashboards in the educational practice. In É. Lavoué, H. Drachsler, K. Verbert, J. Broisin, & M. Pérez-Sanagustín (Eds.), Data driven approaches in digital education (pp. 82-96). Springer International Publishing. https://doi.org/10.1007/978-3-319-66610-5_7

Jobin, A., Ienca, M., & Vayena, E. (2019). Artificial intelligence: The global landscape of ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. https://doi.org/10.1038/s42256-019-0088-2

Joffe, H. (2012). Thematic analysis. In D. Harper & A. R. Thompson (Eds.), Qualitative research methods in mental health and psychotherapy (pp. 210-223). Wiley-Blackwell. https://doi.org/10.1002/9781119973249.ch15

Khalil, M., Prinsloo, P., & Slade, S. (2018). User consent in MOOCs: Micro, meso, and macro perspectives. The International Review of Research in Open and Distributed Learning, 19(5). https://doi.org/10.19173/irrodl.v19i5.3908

Kitto, K., & Knight, S. (2019). Practical ethics for building learning analytics. British Journal of Educational Technology, 50(6), 2855-2870. https://doi.org/10.1111/bjet.12868

Knox, J. (2020). Artificial intelligence and education in China. Learning, Media and Technology, 45(3), 298-311. https://doi.org/10.1080/17439884.2020.1754236

Malik, A., Demszky, D., Koh, P. W., Doumbouya, M., Hudson, D. A., Nie, A., Nilforoshan, H., Tamkin, A., Brunskill, E., Goodman, N., & Piech, C. (2021). Education. In R. Bommasani, D. A. Hudson, & E. Adeli (Eds.), On the opportunities and risks of foundation models (pp. 67-72). https://arxiv.org/abs/2108.07258

Miao, F., & Holmes, W. (2021). AI and education: Guidance for policy-makers. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000376709

Nayak, M., & Narayan, K. A. (2019). Strengths and weaknesses of online surveys. IOSR Journal of Humanities and Social Science, 24(5), 31-38. https://doi.org/10.9790/0837-2405053138

Nichols, M., & Holmes, W. (2018). Don’t do evil: Implementing artificial intelligence in universities. In J. M. Duart & A. Szűcs (Eds.), Towards personalized guidance and support for learning (pp. 109-117). European Distance and E-Learning Network. https://www.eden-online.org/proc-2485/index.php/PROC/article/view/1669

OpenAI. (2022, November 30). ChatGPT: Optimizing language models for dialogue. OpenAI. https://openai.com/blog/chatgpt/

Ørngreen, R., & Levinsen, K. (2017). Workshops as a research methodology. Electronic Journal of E-Learning, 15(1), 70-81. https://vbn.aau.dk/en/publications/workshops-as-a-research-methodology

Ramesh, D., & Sanampudi, S. K. (2022). An automated essay scoring systems: A systematic literature review. Artificial Intelligence Review, 55(3), 2495-2527. https://doi.org/10.1007/s10462-021-10068-2

Renz, A., & Hilbig, R. (2020). Prerequisites for artificial intelligence in further education: Identification of drivers, barriers, and business models of educational technology companies. International Journal of Educational Technology in Higher Education, 17(1), 14. https://doi.org/10.1186/s41239-020-00193-3

Rets, I., Gillespie, A., & Herodotou, C. (2023). Six practical recommendations enabling ethical use of predictive learning analytics in distance education. Journal of Learning Analytics, 10(1) (early access). https://doi.org/10.18608/jla.2023.7743

Rivera Muñoz, J., Berríos, H., & Arias-Gonzales, J. (2022). Systematic review of adaptive learning technology for learning in higher education. Eurasian Journal of Educational Research, 98, 221-233. https://doi.org/10.14689/ejer.2022.98.014

Slade, S., & Tait, A. (2019). Global guidelines: Ethics in learning analytics. International Council for Open and Distance Education. https://bit.ly/3kKXSvA

Susnjak, T. (2022). ChatGPT: The end of online exam integrity? arXiv. https://doi.org/10.48550/arXiv.2212.09292

Tahiru, F. (2021). AI in education: A systematic literature review. Journal of Cases on Information Technology, 23(1), 1-20. https://doi.org/10.4018/JCIT.2021010101

Tarran, B. (2018). What can we learn from the Facebook-Cambridge Analytica scandal? Significance, 15(3), 4-5. https://doi.org/10.1111/j.1740-9713.2018.01139.x

Teng, Y., Zhang, J., & Sun, T. (2022). Data-driven decision-making model based on artificial intelligence in higher education system of colleges and universities. Expert Systems, e12820. https://doi.org/10.1111/exsy.12820

Ubachs, G., Konings, L., & Brown, M. (2017). The envisioning report for empowering universities. EADTU. https://empower-new.eadtu.eu/images/report/The_Envisioning_Report_for_Empowering_Universities_1st_edition_2017.pdf

United Nations Educational, Scientific and Cultural Organization. (2021). Recommendation on the ethics of artificial intelligence. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000381137

United Nations International Children’s Emergency Fund. (2021). Policy guidance on AI for children. UNICEF. https://www.unicef.org/globalinsight/media/2356/file/UNICEF-Global-Insight-policy-guidance-AI-children-2.0-2021.pdf

Uppenbrink, J. (2000). Mendeleyev’s dream. Science, 289(5485), 1696. https://www.jstor.org/stable/i355109

Wagner, G., Lukyanenko, R., & Paré, G. (2022). Artificial intelligence and the conduct of literature reviews. Journal of Information Technology, 37(2), 209-226. https://doi.org/10.1177/02683962211048201

Watters, A. (2021). Teaching machines: The history of personalized learning. MIT Press.

Williamson, B. (2020). Datafication of education. In H. Beetham & R. Sharpe (Eds.), Rethinking pedagogy for a digital age (pp. 212-226). Routledge. https://doi.org/10.4324/9781351252805-14

Wollny, S., Schneider, J., Di Mitri, D., Weidlich, J., Rittberger, M., & Drachsler, H. (2021). Are we there yet? A systematic literature review on chatbots in education. Frontiers in Artificial Intelligence, 4, 654924. https://doi.org/10.3389/frai.2021.654924

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education: Where are the educators? International Journal of Educational Technology in Higher Education, 16(1). https://doi.org/10.1186/s41239-019-0171-0

 


Stakeholder Perspectives on the Ethics of AI in Distance-Based Higher Education by Wayne Holmes, Francisco Iniesto, Stamatina Anastopoulou, and Jesus G. Boticario is licensed under a Creative Commons Attribution 4.0 International License.