International Review of Research in Open and Distributed Learning

Volume 27, Number 2

May - 2026

Exploring the Potential of Generative AI for Academic Support in Open and Distance Learning: A Case Study of Learner Experiences

Sefa Emre Öncü, Merve Gevher, Erdem Erdoğdu, and Serpil Koçdar
Faculty of Open Education, Anadolu University, Eskişehir, Türkiye

Abstract

This exploratory case study provides an in-depth analysis of the potential of generative artificial intelligence (GenAI) to enhance academic support in open and distance learning (ODL) systems. The study examined learner experiences with a GenAI-based academic support application in an online web publishing course over a semester, focusing on two phases: free use and structured use. Data were collected through semi-structured interviews and dialogue transcripts from 10 distance learners. Findings highlighted both continuity and transformation in learner practices. In both phases, GenAI was valued for time-saving and accurate responses aligned with course materials. Structured tasks in phase 2 encouraged more purposeful engagement, including systematic self-assessment and information verification. Despite technical challenges such as device incompatibility and occasional hallucinations, learners expressed motivation, satisfaction, and a demand for institutional integration. The results, while preliminary, suggest that GenAI-based academic support holds strong potential for broader implementation in large-scale open universities, offering a pathway to balancing quality, access, and cost in addressing the enduring challenges of mass higher education.

Keywords: generative AI, academic support, open and distance learning, learner experiences

Introduction

Open and distance learning (ODL) has expanded significantly worldwide, serving millions of learners who require flexible access to education. While ODL provides opportunities for inclusivity and lifelong learning, it also presents persistent challenges in providing timely, personalized, and scalable academic support (Mejeh & Rehm, 2024). In large-scale open universities, where hundreds of thousands of learners are enrolled, limited opportunities for direct interaction with instructors often contribute to transactional distance and feelings of isolation (Moore & Kearsley, 2012). Learners frequently express the need for on-demand, interactive assistance that goes beyond prepared course materials and limited synchronous sessions. Beyond Moore and Kearsley’s (2012) foundational work, transactional distance persists as a critical and evolving challenge in modern ODL environments. Recent literature has confirmed that bridging this psychological gap remains essential for fostering learner engagement (Gökoğlu et al., 2024) and effectively integrating generative artificial intelligence (GenAI) technologies (Karataş & Yüce, 2024).

Recent advances in GenAI have raised the possibility of addressing these challenges by providing learners with continuous, responsive, and adaptive academic support (Rincon-Flores et al., 2024). Tools such as ChatGPT can simulate conversational interaction, supply explanations, and scaffold learning processes. However, despite the growing enthusiasm surrounding GenAI in education, empirical research on its role in academic support within ODL contexts remains limited, particularly in large-scale open universities. Most existing studies have examined the technical features of GenAI or its experimental classroom applications (Li et al., 2025), leaving a gap in understanding its potential for academic support and its role in shaping learners’ experiences in ODL environments.

This study contributes to filling that gap by exploring how a GenAI-based academic support application was experienced by learners in a web publishing course at a large-scale open university in Türkiye. The research was preliminary and exploratory in nature, involving a small group of volunteer learners, and the findings are therefore context-specific rather than generalizable. By examining how learners interacted with GenAI over the course of one semester, this study offers preliminary insights into both its potential and its limitations as a tool for academic support in ODL.

Literature Review

Recent years have witnessed a rapid growth in studies examining GenAI and chatbot-based support in higher education. Research has indicated that AI-driven systems have been implemented to address learner and staff needs, demonstrating both opportunities and limitations (Arun et al., 2019; Choque-Diaz et al., 2018; Hien et al., 2018; Murad et al., 2019). Collectively, these studies have emphasized the necessity of evaluating technological advancements through both pedagogical and ethical lenses.

For instance, Lappalainen and Narayanan (2023) developed Aisha, a ChatGPT-API-based chatbot designed to provide library services outside of working hours. Their findings demonstrated that GenAI could offer fast and effective support to learners and faculty alike. Similarly, Hmoud et al. (2024) explored the use of ChatGPT in higher education and found that it enhanced motivation and improved learners’ communication experiences. Comparable studies have highlighted ChatGPT’s potential as an alternative to traditional chatbot systems in libraries and academic support units.

The ODL literature has further suggested that GenAI can mitigate transactional distance and enhance social presence. Kandemir and Kılıç Çakmak (2024) showed that reduced dialogue in online course design leads to decreased academic achievement, while Gökoğlu et al. (2024) demonstrated complex relationships among social presence, transactional distance, and learner engagement. These findings suggest that GenAI-enabled support systems—by offering continuous and responsive dialogue—could help reduce psychological distance and strengthen engagement. Such benefits would be particularly critical in mega-scale open universities, where high student–teacher ratios make it challenging to sustain personalized interaction.

Möller et al. (2024) tested the AI-powered teaching assistant Syntea (https://www.iu.org/digital-tools/syntea/) with hundreds of distance learning students across more than 40 courses at the IU International University of Applied Sciences. Their analysis provided the first evidence that GenAI can substantially increase the speed of learning, showing that the use of Syntea reduced study time by about 27% on average in the third month after its release. In parallel, studies focusing on feedback have further highlighted GenAI’s potential. Zhan et al. (2025) illustrated how GenAI can automate feedback loops, thereby increasing learner participation. Similarly, Bhullar et al. (2024) identified that GenAI applications in higher education primarily focus on learner engagement and individualized guidance. Park and Doo (2024) similarly reported that, across the blended learning studies they reviewed, AI applications were used mainly in the online asynchronous individual learning component, with limited attention to AI-supported designs that connect online activities with classroom-based offline learning.

Beyond service-oriented use cases, empirical studies have increasingly documented the domain-specific learning affordances of GenAI. In AI-assisted language learning, ChatGPT-supported writing activities have been associated with improvements in EFL students’ academic writing performance and motivation (Song & Song, 2023). In skills-oriented contexts, access to ChatGPT has been linked to better adherence to coding standards and lower code complexity in novice programmers’ submissions, as indicated by static code analysis metrics (Haindl & Weinberger, 2024). Experimental evidence has also suggested that ChatGPT can enhance university students’ creative problem-solving outputs in terms of quality-related indicators (Urban et al., 2024). At a broader institutional level, early higher education scholarship has cautioned that AI may reshape teaching roles and influence core practices of content delivery, control, and assessment, with implications for governance (Popenici & Kerr, 2017). Finally, reliability remains a persistent concern: analyses of “AI hallucination” have demonstrated that distorted information in AI-generated content can negatively affect users and therefore requires systematic identification and management when GenAI is deployed in educational settings (Sun et al., 2024).

Ethical concerns constitute another important theme in the emerging GenAI literature. Davar et al. (2025) drew attention to privacy risks and the danger of uncontrolled reliance, while also highlighting the potential of AI chatbots to provide personalized support at scale. Kasneci et al. (2023) emphasized risks related to the opacity of large language models and pointed to the dangers of biased outputs. Strzelecki (2023) showed that learners’ trust, perceived fairness, and ethical judgements strongly shape their acceptance of GenAI tools. In large-scale ODL systems, these issues are exacerbated by the volume and sensitivity of learner data and by structural power asymmetries between institutions and students, making questions of data protection, transparency, and accountability particularly salient. From a critical perspective, Wieczorek (2025) warned that AI will not democratize education by default, as existing inequalities may be reproduced or intensified in the absence of institutional safeguards. Overall, these debates indicate that GenAI-based academic support in ODL should be embedded within robust ethical frameworks that explicitly address privacy, bias, overreliance, and transparency, rather than being treated as a neutral technological add-on.

A holistic pedagogical lens is therefore essential for conceptualizing academic support as an integrated process that spans instructional design, learner agency, and inclusive participation. From a pedagogical perspective, Shoufan (2023) noted that learners often struggle with prompt engineering, underscoring the need for explicit prompt pedagogy in instructional design. Furthermore, Koçdar et al. (2024) emphasized the necessity of universal design principles to ensure accessibility for learners with disabilities.

Overall, while prior studies provide valuable insights into the role of GenAI in higher education, most have examined general use cases or service-oriented applications. Few have investigated how GenAI-based academic support systems, trained for specific courses or contexts, affect learner experiences in ODL environments (Möller et al., 2024). This gap is especially significant in mega-scale open universities, where delivering scalable yet personalized academic support remains one of the greatest challenges. This study addresses that gap by exploring the potential and challenges of implementing GenAI as a transformative academic support mechanism for open and distance learners.

Conceptual and Theoretical Framework

This study was grounded in a synthesis of established theories that illuminate how GenAI can influence learning in ODL contexts: transactional distance, social presence, Bloom’s taxonomy, and Deweyan reflection. Together, these theories are unified under the Community of Inquiry (CoI) framework (Garrison et al., 1999), which integrates teaching, social, and cognitive presence.

Transactional Distance and Social Presence

Moore’s (1989) theory of transactional distance highlighted the psychological and communication gaps that arise in distance education, often due to reduced dialogue and rigid structures. GenAI, by offering 24/7 availability and responsive interaction, has the potential to mitigate this gap by creating new forms of dialogue. Complementing this, social presence theory (Gunawardena & Zittle, 1997; Rourke et al., 1999) emphasizes the learner’s ability to perceive others as real in online environments. GenAI systems, by embedding empathetic and reflective responses, can intentionally foster social presence and help reduce feelings of isolation.

Bloom’s Taxonomy

Bloom’s taxonomy (Bloom, 1956; Anderson & Krathwohl, 2001) classified cognitive processes from recall to creation. GenAI can accelerate lower-level tasks, such as recalling and summarizing, while also requiring learners to engage in new forms of higher-order thinking. Skills such as prompt design, critical evaluation of outputs, and ethical judgment (Gonsalves, 2024) are increasingly integral to analysis and evaluation. In creation, GenAI may facilitate AI–human collaboration (Lubbe et al., 2025), reframing how learners achieve creative outcomes.

Deweyan Reflection

Dewey (1933) emphasized learning as a process of reflective thinking and experiential engagement. GenAI can support this process by generating ideas or alternative perspectives, but risks arise if learners rely on it uncritically (Wieczorek, 2025). When used in structured activities such as reflective journaling or feedback loops, GenAI can stimulate critical thinking and deeper learning (Demir & Özdemir, 2025; Mandai et al., 2024). This is particularly relevant in ODL contexts, where learners may lack continuous instructor support.

Synthesized Framework

Within the CoI model, GenAI can enhance teaching presence through structured guidance, strengthen social presence through dialogue, and deepen cognitive presence by encouraging reflection and higher-order inquiry. GenAI can extend academic support without replacing the indispensable role of human educators (Bozkurt, 2024).

Research Aim

The main purpose of this study was to examine the usability of GenAI applications in academic support services for learners in ODL and to explore how such applications transform the learner experience. In line with this purpose, the study addressed the following research question:

How do learners experience and perceive the use of a GenAI-based academic support application across two modes of use—free use and structured use—within an ODL context?

Methodology

Research Design

This study employed a qualitative case study design to explore learners' experiences with a GenAI-based academic support system. A holistic single-case design was adopted (Yin, 2009), with the aim not to produce generalizable results but to generate preliminary insights into how GenAI might enhance academic support in large-scale ODL contexts. As a widely used qualitative research method, the case study approach is particularly suited to the in-depth exploration of complex educational phenomena (Creswell & Creswell, 2021).

Course Context

The research was conducted in the online course, Web Publishing, part of the online Web Design and Coding Associate Degree Program at a large-scale open university in Türkiye. This course was deliberately selected because learners have often reported difficulties in mastering both technical and conceptual components, and instructors have frequently received requests for additional guidance. The course therefore provided an appropriate context to examine how GenAI-based support could supplement limited instructor availability and offer just-in-time assistance.

The self-paced course had 4,768 registered learners. Course participants were provided with a range of online learning materials through the learning management system, including videos, textbooks in multiple formats, audio resources, self-assessment activities, quizzes, interactive exercises, and infographics. In addition to these asynchronous resources, the course also offered a 1-hour synchronous session each week, during which a content expert from the academic staff delivered lectures and responded to learners’ questions. Participation in both the asynchronous materials and the live sessions was optional. As a result, approximately 30 learners attended the synchronous sessions each week, while others had the opportunity to watch the recorded sessions later.

Beyond these regular learning opportunities, a dedicated GPT-based application—named Academic Support and powered by OpenAI's GPT-4o model—was developed specifically for this study. The relevant course textbook and associated learning materials were loaded into the Academic Support application. Subsequently, the necessary directives were defined and supplied to the application, allowing it to be configured under the guidance of academic and technical experts. The aim was to provide learners with an AI-based virtual assistant whose responses were grounded in the specific course textbook and materials, effectively serving as an extension of the instructor.
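The grounding approach described above—a directive plus course material supplied alongside each question—can be illustrated with a minimal sketch using the OpenAI chat API. This is an illustration only, not the study's actual implementation: the names COURSE_DIRECTIVE and build_messages, and the directive wording, are hypothetical, and the exact configuration used for the Academic Support application is not reported in the article.

```python
# Hypothetical sketch of grounding an assistant in course materials.
# The directive text and function names below are illustrative assumptions.

COURSE_DIRECTIVE = (
    "You are an academic support assistant for the Web Publishing course. "
    "Answer only from the supplied course material excerpt; if the excerpt "
    "does not cover the question, say so rather than guessing."
)

def build_messages(question: str, material_excerpt: str) -> list[dict]:
    """Compose a grounded chat payload: directive + course excerpt + question."""
    return [
        {"role": "system", "content": COURSE_DIRECTIVE},
        {
            "role": "user",
            "content": f"Course material:\n{material_excerpt}\n\nQuestion: {question}",
        },
    ]

messages = build_messages(
    "What does the <nav> element do?",
    "The <nav> element groups a page's major navigation links.",
)
print(messages[0]["role"])  # the directive travels as the system message
print(len(messages))        # directive + grounded user turn

# The payload would then be sent to the model, e.g.:
# client.chat.completions.create(model="gpt-4o", messages=messages)
```

Constraining answers to supplied excerpts in this way is one common tactic for reducing off-textbook responses, which is consistent with the learners' reports that answers aligned with the course materials.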

Participants

Since the aim of this study was to conduct an in-depth exploration of learners’ experiences over the course of one semester, it was deliberately designed with a small group of participants. The study was introduced during the first synchronous session of the semester by the course instructor, and 10 distance learners volunteered to participate. Ethical approval was obtained from the university’s Ethical Approval Committee, and all participants provided informed consent through a volunteer participation form, which outlined the study’s purpose, procedures, and data confidentiality measures. During the semester, two participants withdrew from the study, leaving eight learners who completed the process. This attrition reduced the diversity of perspectives and is acknowledged as a limitation of the research.

The participants represented a range of ages, from early twenties to over forty, and were nearly balanced in terms of gender. Their enrollment years spanned from 2017 to 2023, with most entering in the past 2 years. To preserve anonymity, participants were systematically coded (e.g., L1, L2), and all findings and direct quotations are presented using these codes.

Data Collection

Data were collected through (a) semi-structured interviews conducted twice with each learner (after the midterm exam and at the end of the semester), and (b) transcripts of learners’ written dialogues with the GenAI-based academic support application during weekly and individual tasks. The interviews elicited learners’ perceptions of satisfaction, motivation, teaching presence, and challenges they encountered.

Data Analysis

Thematic analysis was employed to interpret the data. An initial coding framework was developed based on the research question and conceptual framework, and additional codes emerged inductively during analysis. To enhance reliability, two researchers independently coded a subset of data (Saldaña, 2021). Disagreements were resolved through discussion until consensus was reached, and the coding scheme was refined accordingly. The finalized framework was then applied to the full dataset.

Findings

The findings are presented as exploratory and context-specific, reflecting the experiences of a small group of distance learners in one course. Four major themes emerged from the analysis: (a) motivation and engagement, (b) confidence and reassurance, (c) self-assessment and reflection, and (d) challenges and risks. Differences between phase 1 (free use) and phase 2 (structured use) are highlighted. It is important to note that the same learners participated in both phases; the differences observed therefore reflect changes in the same individuals' practices rather than differences between separate groups.

Themes

Motivation and Engagement

Learners consistently described the GenAI-based application as motivating. In phase 1, when learners used the tool freely, many engaged with it sporadically. In phase 2, structured tasks prompted more regular use, which learners associated with increased motivation. As one participant explained, “When I had weekly tasks, I felt encouraged to interact. It motivated me to study more.” The system also provided quick summaries and explanations, which most learners described as “shortcuts” that saved time and encouraged further study.

Learner comments illustrating this theme included the following statements: “It definitely saved time; I would keep using it and recommend it to others,” and “Because it was fast and always there, I felt more willing to sit down and study with it.”

Confidence and Reassurance

The application played a strong role in reassuring learners about their exam preparation. Learners emphasized that the tool’s alignment with course materials and expert-designed prompts made them feel secure. One learner noted, “Because it used the same textbook as the course, I trusted the answers. It gave me confidence before the exam.” This sense of reassurance was more pronounced in phase 2, when structured prompts guided learners through unit-based review.

In relation to this theme, learners explicitly voiced their trust in the system, for example, stating, “I feel like it follows the textbook, so I feel much safer when I study with it,” and “I know I am talking to an application, but that actually makes me feel in a safer space than when I ask a person.”

Self-Assessment and Reflection

The system also supported self-assessment and reflection. Learners reported that they were able to identify gaps in their knowledge and clarify misconceptions. One participant commented, “I realized what I didn’t know when I saw the explanations.” The unit-based activities in phase 2 were particularly effective in prompting reflective learning, helping learners evaluate their progress systematically.

Learner accounts further underscored this reflective dimension, for instance: “I realized how little I actually knew and could see exactly where my gaps were,” and “When the place of the words changed in a question I already knew, I suddenly couldn’t answer it, and that made me see my weaknesses much more clearly.”

Challenges and Risks

Despite these benefits, learners also reported challenges. Some expressed frustration with occasional incorrect or incomplete answers, particularly when uploading files or asking complex questions. A participant observed, “Sometimes it gave an answer that didn’t match the book. That was confusing.” Others admitted difficulty in knowing how to use the system effectively without structured prompts. This finding underscores the risk of overreliance and highlights the importance of task design and instructor guidance.

Learners’ descriptions of these challenges included remarks such as, “In the riddle game it marked my answer as wrong even though I was sure it was correct,” and “I couldn’t use voice interaction through the academic version of the tool, and that sometimes broke my concentration and study flow.”

Comparative Patterns Between Phase 1 and Phase 2

When the midterm and end-of-term interviews were considered together, patterns of both continuity and transformation emerged between the free-use phase and the structured-use phase. In both phases, the central elements of teaching–learning presence were framed around time saving and the expectation of accurate answers based on the textbook. However, in phase 2, the systematic design of unit-based tests made the learning experience more targeted and purposeful.

In terms of motivation and belonging, chat-based interaction triggered self-regulation in both phases, but after the structured activities, learners demonstrated a clearer recognition of their shortcomings and a stronger willingness to study. While overall satisfaction appeared sustainable, learners in phase 2 increasingly emphasized the need for seamless one-click integration of the Academic Support application into the institutional e-Campus platform where course materials are hosted. This demand shifted the conversation toward institutional-level scalability.

Technical barriers, such as tablet–mobile incompatibility and connectivity issues, recurred across both phases. Yet, in phase 2, the number of incorrect or hallucinatory responses decreased, suggesting that example-focused prompts partially alleviated these challenges. Regarding activity preferences, the app's Test Me and Where Do I Stand? features were consistently described as indispensable. In phase 2, learners also expressed more concrete demands for question-marking options and greater conceptual variety.

Development expectations in both phases centered on visually enriched interfaces, flashcards, and proactive notifications. In the structured phase, learners further highlighted the need for data-driven improvements such as statistical feedback and personalized task schedules. Collectively, these comparative findings indicate that while a GenAI-supported academic support system retains its time-efficient and motivating qualities, it also holds potential for advancement in technical flexibility and data-driven personalization.

Evolution of Learners’ Perceptions from Phase 1 to Phase 2

The analysis of both phases illustrates the functional evolution of learners' perceptions of GenAI. In phase 1, learners associated the tool primarily with fast answers and teacher-like support, as reflected in their use of terms such as "teacher," "prompt," and "answer." By phase 2, the vocabulary shifted toward "information," "question," "feedback," and "learning," highlighting a stronger emphasis on information verification and self-assessment. By the end of the semester, these dual functions converged into a holistic view of GenAI as an academic support ecosystem.

The prominence of terms such as “activity,” “feature,” “support,” “useful,” and “content” demonstrates that learners began to view GenAI not only as a question-answering tool but also as a multilayered learning partner capable of offering customizable activities, useful features, and rich content. The consistent responses of students across both phases underscore that the learner remained central, while the prominence of the terms “explanation” and “feedback” indicates that GenAI supported conceptual clarity and reinforced self-regulated learning cycles through instant feedback.

In sum, while core functions such as answering questions and supporting learning persisted, learners’ use of GenAI evolved from perceiving it as a fast-answering virtual teacher in phase 1 to adopting it as a personalized exam coach and learning assessment partner in phase 2. This shift reflects a qualitative deepening of interaction, where GenAI was increasingly positioned as both a digital teacher that strengthens cognitive presence and a formative assessment tool that promotes reflective, self-regulated learning.

Discussion

This study conducted an in-depth exploration of learners’ experiences with a GenAI-based academic support application in an ODL context. The findings should be interpreted as preliminary and context-specific, given the small sample size and short duration of the research. Nevertheless, they offer important insights into the opportunities and challenges of using GenAI for academic support in open universities.

The results highlight the potential of GenAI to reduce transactional distance (Moore, 1989) by providing continuous, responsive dialogue that learners described as similar to interacting with a teacher or peer. This aligns with the Community of Inquiry (CoI) model (Garrison et al., 1999), in which teaching and social presences are critical for learner engagement. The learners’ perception of the tool as a 24/7 accessible teacher illustrates how GenAI can strengthen teaching presence by delivering just-in-time explanations, while also reinforcing social presence by simulating supportive dialogue. These findings are consistent with prior studies that report positive effects of AI tools on learners’ sense of belonging and motivation (Popenici & Kerr, 2017; Urban et al., 2024).

From a broader perspective, these patterns also speak to how GenAI can be positioned within the learner-support ecology of large-scale open universities. Rather than operating solely as a generic productivity tool, the application in this study functioned as an additional support layer embedded in existing course structures and advising processes, which resonates with Li et al.’s (2025) call to redesign open and distance education through “digital bridges” that augment, rather than replace, human support.

Another key contribution of the study is the comparative analysis of phase 1 and phase 2. In phase 1, when learners used the system freely, engagement was sporadic and benefits varied widely. In contrast, in phase 2, structured weekly tasks acted as pedagogical triggers, resulting in more consistent use, deeper reflection, and greater confidence. This shift highlights that the educational value of GenAI is not inherent to the technology itself but depends on how it is embedded in instructional design. Structured prompts, scaffolding, and instructor oversight help transform GenAI from a convenience tool into a meaningful learning aid. This supports recent arguments that prompt pedagogy and carefully designed AI integration are necessary for sustainable impact (Gonsalves, 2024).

Furthermore, the contrast between phases 1 and 2 and the associated learning gains is consistent with recent work on AI-driven tutoring, adaptive support, and GenAI-enhanced feedback. Möller et al. (2024) showed that an AI-powered tutoring system for distance learners could reduce study time while maintaining progress, and Mejeh and Rehm (2024), together with Rincon-Flores et al. (2024), demonstrated how adaptive support systems can personalize learning processes in online settings. Zhan et al. (2025) likewise conceptualized GenAI as an enabler of more continuous feedback engagement. The present study contributes a qualitative ODL perspective to this body of work by illustrating how structured GenAI activities can channel learners' time-saving strategies into more sustained, reflective, and feedback-oriented engagement, even though issues such as hallucinations and overreliance remain salient.

At the cognitive level, learners reported that the system helped them identify gaps in their understanding and prepare more effectively for exams. In this sense, GenAI functioned as both a cognitive shortcut for lower-level tasks (e.g., summarizing) and a catalyst for higher-order reflection, in line with Bloom’s taxonomy (Anderson & Krathwohl, 2001). This finding resonates with research suggesting that GenAI can enhance learning performance by supporting both efficiency and depth (Haindl & Weinberger, 2024; Song & Song, 2023). At the same time, risks were observed: some learners became uncertain when confronted with inaccurate responses, and others struggled to use the tool effectively without structured guidance. These risks echo concerns in the literature about hallucinations, overreliance, and integration challenges (Hmoud et al., 2024; Sun et al., 2024).

From a motivational perspective, the tool appeared to satisfy key elements of self-determination theory (Ryan & Deci, 2000). Learners described how the application enhanced their sense of competence (“I realized what I didn’t know”), provided autonomy through flexible access, and fostered relatedness by simulating a supportive peer or tutor. These experiences suggest that GenAI can play a role in sustaining learner motivation, particularly in large-scale ODL systems where human support is limited. However, the motivational benefits are fragile and can be undermined by technical problems or unclear task design, which may discourage use over time.

Finally, the findings contribute to the growing discussion about the role of educators in the age of GenAI. The study illustrates how, rather than replacing instructors, GenAI can extend their presence and free them to focus on higher-level pedagogical tasks. Learners valued the system precisely because it was aligned with course materials and developed under expert supervision, which gave it credibility. This reinforces the idea that GenAI works best as part of a human–AI partnership, where the technology provides scalable support while educators design, monitor, and refine its use.

Limitations

This study has some limitations. First, it was conducted with a very small number of volunteer learners (n = 10) in a single course at one large-scale open university. The findings are therefore exploratory and context-specific rather than generalizable. Second, participant attrition during the semester, from ten to eight learners, reduced the diversity of perspectives and may have influenced the richness of the data. Third, the research was limited to one semester, which restricted the ability to observe the longer-term effects of GenAI-based support on persistence, performance, or engagement. Fourth, the study relied on self-reported data from interviews and learner–AI dialogue transcripts, which, while valuable, may be subject to bias. Fifth, technical limitations of the GenAI tool, such as occasional inaccuracies or file-upload problems, were reported by learners and may have shaped their experiences.

Sixth, although the interview data provided insights into how the eight participating learners used the GenAI-based Academic Support system, the study did not systematically track the types of questions directed to academic staff by the larger course cohort. Anecdotal observations suggest that learners tended to use the AI tool for concept clarification, content verification, and practice questions, whereas questions posed to instructors during synchronous sessions focused more on assessment expectations or technical issues. However, this distinction was not examined in a structured manner and therefore remains an important area for future research. Systematically mapping the kinds of queries addressed to GenAI versus those directed to human instructors would help clarify which aspects of academic support can be effectively augmented by AI and which require sustained human involvement, thereby providing a more nuanced understanding of how GenAI might free instructor time for higher-level pedagogical responsibilities.

Finally, the study did not include an analysis of exam scores or a comparison between the sampled participants and the wider cohort. This was a deliberate methodological decision. As an exploratory and preliminary qualitative case study with a small volunteer sample, the aim was not to measure learning outcomes, but to gain an in-depth understanding of learners’ experiences, interactions, and perceptions while using the GenAI-based Academic Support application. Given the small sample size and the self-selected nature of participation, quantitative comparisons such as exam score analysis would not yield meaningful or generalizable results. Future research with larger samples and experimental or quasi-experimental designs may explore whether GenAI-based academic support influences exam performance or other measurable learning outcomes.

Despite these limitations, the study provides important preliminary insights into the role of GenAI as an academic support tool in ODL and highlights directions for further research with larger, more diverse samples and longer observation periods.

Conclusion

This exploratory case study indicates that GenAI-based applications have the potential to extend academic support for distance learners in large-scale ODL systems. By reducing transactional distance, enhancing teaching and social presence, and supporting reflective engagement, such tools can help address one of the most persistent challenges of open universities: providing timely and personalized academic support at scale.

Interpreting the findings in light of broader institutional challenges faced by open universities, particularly those described by Daniel (2004), provides additional insight. Daniel (2004) identified the eternal triangle of education as access, quality, and cost, emphasizing the inherent tensions in balancing these three dimensions. The results of this study suggest that GenAI applications may help reconfigure this triangle in favour of open universities. First, by offering scalable, on-demand assistance aligned with course materials, GenAI can enhance the quality of learner support without proportionally increasing faculty workload. Second, the 24/7 accessibility of such systems broadens access for learners who combine study with work or family responsibilities. Third, while the initial development and integration of GenAI tools require investment, over time they may help reduce costs by automating routine support tasks and allowing educators to focus on higher-level pedagogical responsibilities such as designing meaningful learning activities, interpreting learning analytics to support at-risk learners, facilitating deep inquiry, mentoring students, and making curriculum-level pedagogical decisions. Thus, carefully designed GenAI applications could enable open universities to address Daniel’s (2004) challenge of simultaneously increasing access, improving quality, and reducing costs.

At the same time, the study emphasizes that GenAI is best understood as a supportive tool rather than a substitute for educators. Its effectiveness depends on careful task design, alignment with course materials, and continuous instructor oversight. Challenges such as hallucinations, technical limitations, and risks of overreliance highlight the need for cautious and critical integration.

As a preliminary investigation with a small number of learners, this study offers context-specific insights rather than generalizable conclusions. Future research should involve larger samples across multiple courses and institutions to examine the scalability and sustainability of GenAI-based academic support in ODL. Ultimately, the findings suggest that when combined with pedagogical frameworks such as the Community of Inquiry and supported by reflective learning design, GenAI can serve as a tool that helps educators and learners transform challenges into opportunities and enrich their learning experience (Bozkurt, 2024). Crucially, the successful integration of AI in education demands that institutions not only train purpose-specific models but also invest in continuous AI literacy and prompt engineering training for learners and employees to guarantee responsible and ethical engagement with these technologies.

Funding Details

This study was supported by the Anadolu University Scientific Research Projects Commission under project no. SÇB-2024-2593.

Acknowledgment

Based on the Academic Integrity and Transparency in AI-Assisted Research and Specification Framework, the authors of this paper acknowledge that the paper was translated, proofread, and edited with the assistance of DeepL and OpenAI’s ChatGPT (versions as of September 2025), complementing the human editorial process. Human authors critically assessed and validated the content to maintain academic rigor. The authors also assessed and addressed the potential biases inherent in the AI-generated content. The final version of the paper is the sole responsibility of the human authors.

References

Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Longman. https://eduq.info/xmlui/handle/11515/18824

Arun, K., Nagesh, A. S., & Ganga, P. (2019). A multi-model and AI-based CollegeBot management system (AICMS) for professional engineering colleges. International Journal of Innovative Technology and Exploring Engineering (IJITEE), 8(9), 2910-2914. https://doi.org/10.35940/ijitee.I8818.078919

Bhullar, P. S., Joshi, M., & Chugh, R. (2024). ChatGPT in higher education—A synthesis of the literature and a future research agenda. Education and Information Technologies, 29(16), 21501-21522. https://doi.org/10.1007/s10639-024-12723-x

Bloom, B. S. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. Longmans.

Bozkurt, A. (2024). Tell me your prompts and I will make them true: The alchemy of prompt engineering and generative AI. Open Praxis, 16(2), 111-118. https://doi.org/10.55982/openpraxis.16.2.661

Choque-Diaz, M., Armas-Aguirre, J., & Shiguihara-Juarez, P. (2018). Cognitive technology model to enhance academic support services with chatbots. In Proceedings of the 2018 IEEE 25th International Conference on Electronics, Electrical Engineering and Computing, INTERCON 2018. IEEE. https://doi.org/10.1109/INTERCON.2018.8526411

Creswell, J. W., & Creswell, J. D. (2021). Research design: Qualitative, quantitative, and mixed methods approaches (6th ed.). Sage Publications.

Daniel, J. (2004). Technology and education: Adventures in the eternal triangle. Commonwealth of Learning. https://oasis.col.org/

Davar, N. F., Dewan, M. A. A., & Zhang, X. (2025). AI chatbots in education: Challenges and opportunities. Information, 16(3), Article 235. https://doi.org/10.3390/info16030235

Demir, B., & Özdemir, D. (2025). AI voice journaling for future language teachers: A path to well-being through reflective practices. British Educational Research Journal. Advance online publication. https://doi.org/10.1002/berj.4174

Dewey, J. (1933). How we think: A restatement of the relation of reflective thinking to the educative process. Henry Regnery Company. https://archive.org/details/howwethink0000unse/page/n3/mode/2up

Garrison, D. R., Anderson, T., & Archer, W. (1999). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87-105. https://doi.org/10.1016/S1096-7516(00)00016-6

Gökoğlu, S., Yılmaz, F. G. K., & Yılmaz, R. (2024). Student engagement, Community of Inquiry, and transactional distance in online learning environments: A stepwise multiple linear regression analysis. The International Review of Research in Open and Distributed Learning, 25(4), 107-127. https://doi.org/10.19173/irrodl.v25i4.7660

Gonsalves, C. (2024). Generative AI’s impact on critical thinking: Revisiting Bloom’s taxonomy. Journal of Marketing Education. Advance online publication. https://doi.org/10.1177/02734753241305980

Gunawardena, C. N., & Zittle, F. J. (1997). Social presence as a predictor of satisfaction within a computer-mediated conferencing environment. American Journal of Distance Education, 11(3), 8-26. https://doi.org/10.1080/08923649709526970

Haindl, P., & Weinberger, G. (2024). Does ChatGPT help novice programmers write better code? Results from static code analysis. IEEE Access, 12, 114146-114156. https://doi.org/10.1109/ACCESS.2024.3445432

Hien, H. T., Cuong, P.-N., Nam, L. N. H., Nhung, H. L. T. K., & Thang, L. D. (2018). Intelligent assistants in higher-education environments: The FIT-EBot, a chatbot for administrative and learning support. In Y. Yagi, M. Bui, H. Q. Thang, & L. T. K. Oanh (Chairs), SoICT ’18: Proceedings of the 9th International Symposium on Information and Communication Technology (pp. 69-76). ACM. https://doi.org/10.1145/3287921.3287937

Hmoud, M., Swaity, H., Hamad, N., Karram, O., & Daher, W. (2024). Higher education students’ task motivation in the generative artificial intelligence context: The case of ChatGPT. Information, 15(1), Article 33. https://doi.org/10.3390/info15010033

Kandemir, B., & Kılıç Çakmak, E. (2024). Transactional distance’s influence on students’ social, cognitive, teaching presence, and academic achievement. American Journal of Distance Education, 39(4), 358-381. https://doi.org/10.1080/08923647.2024.2393490

Karataş, F., & Yüce, E. (2024). AI and the future of teaching: Preservice teachers’ reflections on the use of artificial intelligence in open and distributed learning. The International Review of Research in Open and Distributed Learning, 25(3), 304-325. https://doi.org/10.19173/irrodl.v25i3.7785

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., ... Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, Article 102274. https://doi.org/10.1016/j.lindif.2023.102274

Koçdar, S., Hamutoğlu, N. B., Erdoğdu, E., & Uçar, H. (2024). Academic performance of learners with special needs in open and distance learning: A study in Anadolu University open education system. European Journal of Special Needs Education, 40(3), 489-504. https://doi.org/10.1080/08856257.2024.2380594

Lappalainen, Y., & Narayanan, N. (2023). Aisha: A custom AI library chatbot using the ChatGPT API. Journal of Web Librarianship, 17(3), 37-58. https://doi.org/10.1080/19322909.2023.2221477

Li, K. C., Belawati, T., & Hou, H. (2025). Digital bridges and virtual scaffolds: Reimagining open and distance learning for 2025 and beyond [Editorial]. Asian Association of Open Universities Journal, 20(1), 1-3. https://doi.org/10.1108/AAOUJ-06-2025-178

Lubbe, A., Marais, E., & Kruger, D. (2025). Cultivating independent thinkers: The triad of artificial intelligence, Bloom’s taxonomy and critical thinking in assessment pedagogy. Education and Information Technologies, 30, 17589-17622. https://doi.org/10.1007/s10639-025-13476-x

Mandai, K., Tan, M. J. H., Padhi, S., & Pang, K. T. (2024). A cross-era discourse on ChatGPT’s influence in higher education through the lens of John Dewey and Benjamin Bloom. Education Sciences, 14(6), Article 614. https://doi.org/10.3390/EDUCSCI14060614

Mejeh, M., & Rehm, M. (2024). Taking adaptive learning in educational settings to the next level: Leveraging natural language processing for improved personalization. Educational Technology Research and Development, 72(3), 1597-1621. https://doi.org/10.1007/s11423-024-10345-1

Möller, M., Nirmal, G., Fabietti, D., Stierstorfer, Q., Zakhvatkin, M., Sommerfeld, H., & Schütt, S. (2024). Revolutionising distance learning: A comparative study of learning progress with AI-driven tutoring (arXiv:2403.14642). arXiv. https://doi.org/10.48550/arXiv.2403.14642

Moore, M. G. (1989). Editorial: Three types of interaction. American Journal of Distance Education, 3(2), 1-7. https://doi.org/10.1080/08923648909526659

Moore, M. G., & Kearsley, G. (2012). Distance education: A systems view of online learning (3rd ed.). Cengage Learning.

Murad, D. F., Fernando, E., Irsan, M., Murad, S. A., Akhirianto, P. M., & Wijaya, M. H. (2019). Learning support system using Chatbot in “Kejar C Package” homeschooling program. In A. Setyanto (Ed.), 2019 International Conference on Information and Communications Technology (pp. 32-37). IEEE. https://doi.org/10.1109/ICOIACT46704.2019.8938479

Park, Y., & Doo, M. Y. (2024). Role of AI in blended learning: A systematic literature review. The International Review of Research in Open and Distributed Learning, 25(1), 164-196. https://doi.org/10.19173/irrodl.v25i1.7566

Popenici, S. A. D., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12(1), Article 22. https://doi.org/10.1186/s41039-017-0062-8

Rincon-Flores, E. G., Castano, L., Guerrero Solis, S. L., Olmos Lopez, O., Rodríguez Hernández, C. F., Castillo Lara, L. A., & Aldape Valdés, L. P. (2024). Improving the learning-teaching process through adaptive learning strategy. Smart Learning Environments, 11(1), Article 27. https://doi.org/10.1186/s40561-024-00314-9

Rourke, L., Anderson, T., Garrison, D. R., & Archer, W. (1999). Assessing social presence in asynchronous text-based computer conferencing. International Journal of E-Learning & Distance Education / Revue Internationale Du e-Learning et La Formation à Distance, 14(2), 50-71. https://www.ijede.ca/index.php/jde/article/view/153/341

Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68-78. https://doi.org/10.1037/0003-066X.55.1.68

Saldaña, J. (2021). Coding techniques for quantitative and mixed data. In A. J. Onwuegbuzie & R. B. Johnson (Eds.), The Routledge reviewer’s guide to mixed methods analysis (pp. 151-160). Routledge. https://doi.org/10.4324/9780203729434-14

Shoufan, A. (2023). Exploring students’ perceptions of ChatGPT: Thematic analysis and follow-up survey. IEEE Access, 11, 38805-38818. https://doi.org/10.1109/ACCESS.2023.3268224

Song, C., & Song, Y. (2023). Enhancing academic writing skills and motivation: Assessing the efficacy of ChatGPT in AI-assisted language learning for EFL students. Frontiers in Psychology, 14, Article 1260843. https://doi.org/10.3389/fpsyg.2023.1260843

Strzelecki, A. (2023). To use or not to use ChatGPT in higher education? A study of students’ acceptance and use of technology. Interactive Learning Environments, 32(9), 5142-5155. https://doi.org/10.1080/10494820.2023.2209881

Sun, Y., Sheng, D., Zhou, Z., & Wu, Y. (2024). AI hallucination: Towards a comprehensive classification of distorted information in artificial intelligence-generated content. Humanities and Social Sciences Communications, 11(1), Article 1278. https://doi.org/10.1057/s41599-024-03811-x

Urban, M., Děchtěrenko, F., Lukavský, J., Hrabalová, V., Svacha, F., Brom, C., & Urban, K. (2024). ChatGPT improves creative problem-solving performance in university students: An experimental study. Computers & Education, 215, Article 105031. https://doi.org/10.1016/j.compedu.2024.105031

Wieczorek, M. (2025). Why AI will not democratize education: A critical pragmatist perspective. Philosophy and Technology, 38(2), Article 53. https://doi.org/10.1007/s13347-025-00883-8

Yin, R. K. (2009). Case study research: Design and methods (4th ed.). Sage Publications.

Zhan, Y., Boud, D., Dawson, P., & Yan, Z. (2025). Generative artificial intelligence as an enabler of student feedback engagement: A framework. Higher Education Research and Development, 44(5), 1289-1304. https://doi.org/10.1080/07294360.2025.2476513

Athabasca University

Exploring the Potential of Generative AI for Academic Support in Open and Distance Learning: A Case Study of Learner Experiences by Sefa Emre Öncü, Merve Gevher, Erdem Erdoğdu, and Serpil Koçdar is licensed under a Creative Commons Attribution 4.0 International License.