Volume 27, Number 2
Stefan Stenbom (KTH Royal Institute of Technology, Sweden) and D. Randy Garrison (University of Calgary, Canada)
Generative artificial intelligence (AI) is transforming education, creating opportunities for personalization, efficiency, and engagement while also raising concerns about misinformation, overreliance, and the erosion of critical thinking. To navigate these tensions, this article argues for the necessity of a coherent theoretical framework to guide the educational adoption of AI. Drawing on the Community of Inquiry (CoI) framework and its construct of shared metacognition, we outline how collaborative inquiry can integrate AI in ways that preserve human agency and sustain deep and meaningful learning.
We examine the potential for AI to assume multiple roles within a community of inquiry—supporting instructional design, guiding learners as an independent resource, assisting instructors through analytics, participating in discussions, and sustaining dialogical partnerships with students. While these roles highlight the capacity of AI to enrich learning communities, they also underscore risks of passivity, diminished authenticity, and overdependence if reflective inquiry is bypassed.
We argue that shared metacognition—collective monitoring and management of thinking—offers a responsible pathway for educators and learners to engage critically with AI-generated outputs, ensuring that technology strengthens rather than supplants collaborative inquiry. In conclusion, we contend that AI can contribute to worthwhile educational experiences only when framed within a coherent conceptual perspective that emphasizes skeptical engagement, collaborative reflection, and the preservation of human purpose. In this regard, the CoI framework has considerable potential to provide understanding and guidance in the adoption of AI tools.
Keywords: artificial intelligence (AI), the Community of Inquiry (CoI) framework, shared metacognition, critical thinking, collaborative inquiry
The development of artificial intelligence (AI) reflects a history marked by milestones that have reshaped how humans interact with technology. Early AI applications took the form of rule-based expert systems (Buchanan, 2005; Haenlein & Kaplan, 2019), designed to mimic decision-making in areas such as medical diagnosis, financial advice, and manufacturing (Waterman, 1986). The subsequent rise of machine learning marked a major breakthrough, enabling computers to learn from data, improve over time, and make predictions. This transformed sectors from health care to finance through applications such as predictive modeling and algorithmic trading (Jordan & Mitchell, 2015).
In recent years, AI has moved to the forefront of technological innovation through its ability to simulate human thinking. This is especially evident with the advent of generative AI, a leap from the deterministic and narrow applications of earlier systems (Corbeil & Corbeil, 2025). Generative AI refers to technology that creates content—whether it be text, images, audio, or video—that can be indistinguishable from human output. Its most visible form currently—chat-based interaction tools—has captured the imagination of the tech world and sparked broad interest across sectors. Among these, education stands out as a field poised for transformation, with many considering generative AI as a paradigm shift (Bond et al., 2024). This conviction arises from AI’s potential to reshape traditional educational models, redefining the boundaries of learning, teaching, and scholarly inquiry (Bozkurt & Sharma, 2023). Through personalized learning, enhanced interaction, and augmented teaching, AI stands at the threshold of reimagining education (Zawacki-Richter et al., 2019). Yet, as AI inevitably transforms education, we must preserve the core elements of a worthwhile educational experience (Selwyn, 2022).
To move beyond ad hoc adoption of AI technologies, we must recognize the unique characteristics of AI alongside the essentials of a worthwhile educational experience. The transformative potential of generative AI in education calls for a coherent understanding of the complexities, inherent potential, and risks of adopting such a powerful technology in an educational learning environment. A sound theoretical perspective offers the structure and guidance needed to integrate AI’s potential while mitigating its challenges. This article responds to that challenge by examining the characteristics of generative AI from an educational perspective and proposing a theoretical framework to guide educators in its responsible adoption. In doing so, we seek a vision and rationale for integrating generative AI in ways that sustain meaningful education. Although the analysis in this article is situated in higher education, we deliberately formulate the conceptual claims to have relevance beyond this context, extending to other forms and levels of learning.
The basic assumption underlying this discussion is that the adoption of AI requires critical thinking and discourse, entailing careful interrogation of information and reasoned dialogue (Rios et al., 2025; Sitepu et al., 2025). We argue that critical thinking and collaborative inquiry are essential for leveraging AI’s strengths while mitigating the risks of overreliance on this powerful technology. While AI excels at filtering and organizing vast amounts of information, it also raises concerns about the erosion of critical thinking and dialogue. The risk lies in accepting AI outputs uncritically, bypassing the metacognitive process of analysis and reflective inquiry.
Considering the transformative potential of AI from an educational perspective requires examining the frameworks guiding our understanding and design of effective learning environments. Theoretical frameworks that explicitly inform the educational adoption of AI technologies are notably absent. Among the theoretical approaches used to study teaching and learning in digital contexts, the Community of Inquiry (CoI) framework has been particularly influential and holds potential for understanding and guiding AI-related educational initiatives. The CoI framework was developed to analyze and design learning processes in collaborative-constructivist learning environments (Cleveland-Innes et al., 2024; Garrison, 2017). Garrison et al. (2000, 2001) conceptualized it as a theoretical construct to describe and support learning within, though not limited to, digital environments grounded in the theoretical perspectives of Dewey (1933), Peirce (1955), and Lipman (2003). Since its inception, the CoI framework has garnered widespread use, discussion, and examination (Bozkurt & Zawacki-Richter, 2021).
Three interrelated elements are central to the CoI framework: Teaching, Social, and Cognitive Presence. Teaching Presence “originates from the multidimensional roles and responsibilities of a teacher in collaborative and constructivist learning environments” (Stenbom & Cleveland-Innes, 2024, p. 8). Social Presence is the degree to which students and teachers “feel socially and emotionally connected with others in an online environment” (Swan, 2020, p. 80). Cognitive Presence is the core thinking and meaning-making element when participants engage in individual and cooperative practical inquiry (Garrison, 2015). The CoI framework is grounded in the principles of a worthwhile educational experience that promotes critical reflection and discourse. It was never considered to be unique to online learning. Therefore, we argue that the CoI framework has great potential to guide educators in understanding, designing, and implementing collaborative learning that effectively leverages AI tools to support deep and meaningful learning.
The introduction of AI, and specifically generative AI, into a community of inquiry offers a new lens through which to view and enhance CoI presences. AI’s capabilities for personalized learning, dynamic content generation, and interactive engagement have significant potential to enrich communities of inquiry (Anderson et al., 2025). Exploring this integration requires understanding how AI can complement and extend learning communities within educational contexts. Considering this, recent research highlights the need for “theoretical and conceptual frameworks for understanding and evaluating AI” (Namaziandost & Rezai, 2024, p. ii) in online learning. In this context, Nasr et al. (2025) applied the Cognitive Presence (practical inquiry) construct to examine how generative AI influences critical thinking across its phases, demonstrating the value of such a theoretical lens by showing that learner co-participation with AI can effectively support critical inquiry and thinking.
At its core, the CoI theoretical framework is a process model, representing the dynamic of collaborative inquiry grounded in personal reflection and critical discourse. In this way, educators can foster deep and meaningful educational experiences and outcomes. From a community of inquiry perspective, we suggest that AI should be understood not as a neutral tool operating outside the learning process, but rather as a sociotechnical actor whose functions intersect with Teaching, Social, and Cognitive Presence in different ways. AI tools may support instructional design, facilitation, and direction (Teaching Presence); influence psychological safety, belonging, and affective/emotional engagement (Social Presence); and support learners’ inquiry processes (Cognitive Presence) (Stenbom et al., 2026). In this article, we do not assume a single role for AI; rather, we propose that AI is capable of assuming multiple roles within a community of inquiry, depending on how it is designed, positioned, and engaged within the learning environment. We elaborate on these roles later in the article.
As we will see, the educational challenge of AI lies in maintaining academic integrity to ensure that learners actively construct personal meaning and shared knowledge. This is crucial for learners to critically assess AI-generated results and remain prepared to defend their reasoning. Thus, the CoI framework, with its emphasis on critical discourse, is well-suited to guide the educationally sound adoption of AI. Critical thinking and inquiry remain central to leveraging AI’s strengths while mitigating the risk of intellectual passivity that can accompany easy access to AI-generated information. The CoI framework offers both cognitive and methodological grounding for meaningful learning in AI-mediated environments.
Studies have noted that AI research has insufficiently explored issues of critical thinking and collaboration (Bozkurt & Sharma, 2023). Yet AI has the potential to support “collaborative learning by facilitating communication and cooperation among learners, instructors, and resources” (Namaziandost & Rezai, 2024, p. i). With this in mind, we argue that AI can amplify reflective interaction and foster skeptical, critical approaches through collaboration and shared inquiry. This is crucial as true knowledge (deep and meaningful learning) is constructed and confirmed through personal reflection and critical discourse guided by collaborative inquiry dynamics and supported by shared metacognitive awareness (Garrison, 2015). This argument also speaks to facilitating insight and creativity in the educational process. Therefore, this conceptual article addresses the research question: How can a collaborative-constructivist perspective inform our understanding of generative AI in digital learning environments? To answer this, we begin with a brief overview of the capabilities and limitations of generative AI.
AI refers to the use of computer systems designed to perform tasks that traditionally would require human intelligence, such as problem-solving, pattern recognition, language understanding, and decision-making. In educational settings, AI can take many forms, including adaptive learning platforms, automated assessment tools, intelligent tutoring systems, and conversational agents supporting learners in real time (Bond et al., 2024; Zawacki-Richter et al., 2019). The adoption of AI in education accelerated dramatically with the introduction of generative AI in 2022–2023, marked by the release of tools such as ChatGPT, Copilot, Gemini, and Claude. Since then, the field has continued to evolve rapidly, as AI systems can now create text, images, video, and audio, and new systems appear daily. As Beckman et al. (2025, p. 1) caution, “the rapid pace of technological change with generative artificial intelligence is accelerating much faster than our capacity to understand and regulate it.” Although AI is not new, it has become more accessible and user-friendly than ever before.
This rapid development further highlights the importance of clarifying key concepts, especially the distinction between deep learning as an AI method and deep learning as a pedagogical approach central to the CoI framework. AI deep learning trains algorithmic models on vast datasets (LeCun et al., 2015; Jordan & Mitchell, 2015), while in a community of inquiry, deep learning depends on critical discourse and systematic reflection (Lipman, 2003; Garrison, 2017). AI can reveal connections otherwise overlooked, yet its results are often unverifiable because sources are rarely transparent; AI is only as good as its training data. This calls for reflection and dialogue to assess plausibility and consider alternatives. Generative AI does not itself manage inquiry; in contrast, educational deep learning is a human-centered process of questioning, constructing meaning, and developing metacognitive awareness to guide inquiry. The challenge for educators is to use AI to support, not replace, collaborative inquiry that cultivates imagination and discovery.
AI has considerable potential to execute well-defined or repetitive activities with efficiency and precision. Moreover, generative AI can also support more open-ended and creative processes. In education, AI tools have been shown to support data-driven decision-making, provide adaptive learning guidance, help educators monitor progress, and enable more flexible and personalized pathways (Corbeil & Corbeil, 2025). However, while AI can open new avenues, its value in education depends on how it is used. Without an intentional focus on reflection and discourse, there is a serious risk that learners could become passive recipients of AI-generated information, accepting it without question and forfeiting the cognitive engagement needed for deep learning. In a collaborative-constructivist framework, where meaning emerges through sustained dialogue, reflection, and the testing of ideas, such passivity undermines the very processes that make learning transformative.
Instead, we call for an approach where AI serves as a catalyst for inquiry, prompting learners to consider diverse perspectives, alternative explanations, and new resources that can extend the conversation. It can help to structure complexity, visualize relationships, and provide feedback that encourages deeper questioning. Yet these affordances must be balanced against the danger of using AI as a shortcut that bypasses reasoning, debate, and negotiating meaning. The challenge is to integrate AI in ways that preserve human agency in sense-making, ensuring that technology enriches rather than replaces the collaborative discourse and critical thinking essential to deep learning.
The literature on AI and digital learning is growing rapidly, with many studies highlighting the potential of generative AI to enhance learner engagement. For instance, Kılınç (2023) argues that ChatGPT has the “ability to engage in dynamic, context-aware conversations that can facilitate a more engaging and interactive learning environment, thereby enhancing students’ critical thinking and problem-solving skills” (p. 206). We support this view, yet most people remain vulnerable to confirmation bias. In addition, people are now increasingly exposed to AI-driven misdirection and misinformation (Garrison, 2023). Unchecked, such misinformation risks reinforcing echo chambers that erode critical discourse and weaken learners’ capacity for reflective inquiry. This makes the educator’s role in fostering critical evaluation and open dialogue more essential than ever if AI is to strengthen, rather than diminish, deep learning. For this reason, much work is required to determine how best to achieve engaging but critical learning environments that use generative AI technologies. To this end, a sound theoretical framework is essential to explore and assess approaches that use generative AI tools.
Kılınç (2023) clearly outlines the benefits and limitations of generative AI when he states, “Limitations and hazards associated with using ChatGPT in education include the potential for perpetuating biases, producing, and spreading misinformation, positioning itself as the ultimate epistemic authority without sufficient evidence” (p. 207). All of this leads educators to “emphasize the need to harness technology, cultivate a sense of community, and encourage educators to pursue continual professional development” (p. 230). The greatest educational risk of generative AI is reducing learning to the passive consumption of easily digestible content, without the purposeful discourse needed to uncover distortions and hidden assumptions. Educators must not undermine critical reflection and discourse by overvaluing AI’s capacity for information generation and assimilation.
A review of online learning research noted that intelligent tutoring systems have played an important role (Hwang et al., 2022; Jansson et al., 2024). From the perspective of traditional distance education grounded in independent study, intelligent tutoring systems that personalize and support independent study have clear benefits. However, personalization and efficiency cannot replace the collaborative inquiry through which learners challenge claims and assumptions. This concern is evident in another article exploring the boundaries of AI, which states, “generative AI requires enhancing the scope of current educational roles or adopting new ones such as facilitators of learning, curators of learning resources, designers of learning experiences, and assessors of learning” (Bozkurt & Sharma, 2023, p. i). As we move forward in adopting AI technologies, we must address these responsibilities while balancing the advantages of personalization with the need for collaborative inquiry.
Another crucial issue closely linked to AI’s growing influence is learning analytics. Considerable overlap exists between AI and learning analytics in supporting cognitive presence within educational communities. A review of AI in online higher education identified performance assessment and prediction as its primary functions and reported positive effects on instructional quality and learning outcomes (Ouyang et al., 2022). This highlights how learning analytics can be used to evaluate learning processes and enhance critical engagement. AI can also strengthen analytic tools that identify metacognitive strategies and guide the monitoring and management of collaborative inquiry.
AI presents enormous challenges to educators. Addressing these challenges requires a coherent theoretical perspective to guide the meaningful integration of AI into education. The theoretical perspective offered here is based on metacognitively informed collaborative inquiry capable of monitoring and managing discourse in real time.
The CoI framework provides a well-established foundation for understanding how meaningful learning experiences are achieved through the interplay of teaching, social, and cognitive presence (Bozkurt & Zawacki-Richter, 2021). Within the framework, the concept of shared metacognition has a distinctive role as an explanatory model for how collaborative inquiry is monitored, regulated, and sustained over time (Garrison, 2015). Shared metacognition captures how learners collectively take responsibility for reflecting on their thinking, evaluating emerging understandings, and managing the direction and quality of inquiry—processes that become particularly critical in AI-rich learning environments.
Metacognition refers to the awareness and regulation of thinking and learning processes. In other words, it is the ability to reflect on what is known, how it is known, and how to adjust approaches to achieve better outcomes. It involves monitoring and controlling cognitive processes and jointly articulating and negotiating thinking with others (Flavell, 1987). Within the CoI framework, shared metacognition captures collective monitoring and regulation of thinking that occurs in a collaborative learning environment (Jansson et al., 2021). This offers a coherent way to harness the potential of AI while managing the inherent risks of such powerful and pervasive technology. Educational experiences in AI-rich contexts require maintaining reflective responsibility and control through metacognitive monitoring and managing of the collaborative inquiry process. In this regard, the Shared Metacognition construct (Garrison & Akyol, 2015a, 2015b) provides a starting point for the discussion of why and how shared metacognition is of relevance as we enter the age of AI.
The theoretical foundation of the shared metacognition construct is grounded in the literature on metacognition with a focus on regulation. The premise is that deep approaches to thinking and learning necessitate that we “communicate, explain, and justify ... one’s thinking to self and others” (Flavell, 1987, p. 27). In short, thinking collaboratively “reveals our thought processes and encourages us to think about our thinking” (Garrison, 2015, p. 82). Therefore, regulation of collaborative inquiry depends on participants taking responsibility for monitoring and managing the inquiry process. The shared metacognition construct consists of self- and co-regulation, each of which has monitoring and management functions, embedded in collaborative inquiry. These dynamic dimensions have been validated both structurally and transactionally (Garrison & Akyol, 2015a, 2015b). To be clear, shared metacognition relies on discourse, where critical feedback uncovers errors and misleading outputs produced by AI.
Shared metacognition is critical to understanding and effectively implementing inquiry in a community of learners. Through this process, participants learn to audit and verify AI results. To this end, the Shared Metacognition instrument was developed to empirically assess purposeful regulation of learning in communities of inquiry (Garrison & Akyol, 2015a, 2015b). Martha et al. (2023) used the Shared Metacognition construct and questionnaire to examine how metacognitive support, provided through teaching presence, influences self- and co-regulation in collaborative inquiry. They found significant improvement in both perspectives, supported by quantitative and qualitative data, and concluded that “integrating metacognitive and motivational scaffolds fosters cognitive engagement and manages learner motivation” (p. 582). These findings demonstrate the effectiveness of shared metacognition in regulating collaborative inquiry and its relevance for auditing AI results.
Generative AI will inevitably reshape how educators design deep and meaningful learning experiences. It can support online learning communities by curating resources and generating natural-language responses, yet its opacity and potential for fabricating content pose serious risks. Interactive AI may reduce these risks, but it also tempts educators to surrender academic direction. While AI can inform the regulation of inquiry, excessive reliance threatens to erode critical engagement. The essential challenge is to sustain a questioning attitude toward AI results and resist dependence on its generative power.
To reiterate, we argue that shared metacognition within the CoI framework provides a constructive means to integrate generative AI that can enhance learning while curbing uncritical use of flawed outputs. Shared metacognition lies at the core of understanding, monitoring, and managing collaborative inquiry, ensuring learners retain awareness and responsibility in constructing and validating knowledge. In this way, it offers a foundation for managing AI’s benefits and risks and realizing its transformative educational potential. Educational leaders must model and promote the critical use of AI tools to achieve meaningful learning outcomes—an essential challenge explored in the next section.
Fundamentally, AI adoption requires a mindset that is skeptical and critical yet also open-minded and curious, a balance essential for navigating AI’s opportunities and risks. In education, it is crucial to understand how generative AI can enrich learning communities while inevitably reshaping the educational experience. Effective implementation, therefore, requires insight into its potential across the design, facilitation, and direction of meaningful learning.
While AI underpins the core capabilities of a range of technologies, the focus here is on how AI can enhance and support collaborative learning. Within a community of inquiry, AI can help monitor and manage teaching, social, and cognitive presence, with learning analytics playing a key role. Its ability to synthesize vast information and engage interactively—as seen in tools such as ChatGPT, Copilot, Gemini, and Claude—offers significant opportunities for collaborative inquiry. Despite risks of misuse (e.g., automated essay generation), generative AI’s capacity for sustained dialogue can stimulate deeper questioning and reflection, supporting the goals of the CoI framework (Rospigliosi, 2023). A review of ChatGPT as a tool for supporting inquiry in higher education recently analyzed its role through the lens of the CoI framework (De Silva et al., 2025). The study assessed how ChatGPT fosters social and cognitive presence in online research communities and found that it positively influenced the quality, efficiency, and motivation of student research. Yet it also cautioned that ChatGPT cannot replace human supervision or peer collaboration, emphasizing the need for students to verify the accuracy and reliability of its outputs.
When designing and shaping learning methods in collaborative-constructivist contexts, the first key element is recognizing how AI can enhance learning within such communities. The focus must remain on critical thinking and inquiry—core principles of the CoI framework and shared metacognition—rather than on AI-generated results. Generative AI can be especially valuable in early inquiry phases by helping define problems, clarify solutions, and synthesize existing knowledge to reveal alternative perspectives. Reflecting this potential, studies have observed practical applications of AI in collaborative inquiry in areas such as writing and reflection. Southworth (2023) noted that focusing on the writing process helps students develop metacognitive and analytical skills, especially when peer feedback and self-evaluation are included. Similarly, Shen and Teng (2024) found that AI-assisted drafting allows learners to focus on substance rather than surface. Such practices reflect Teaching Presence in designing, facilitating, and directing critical engagement with AI-generated content.
The second element is using AI for learning analytics. AI supports learning analytics by transforming large, complex learner data into meaningful insights that reveal learning processes, predict risks, and support timely, informed pedagogical decisions (Mamede & Santos, 2025). By enabling pattern detection, personalization, and analysis of both quantitative and qualitative data, AI helps educators and learners move from retrospective reporting to actionable understanding and support (Ouyang et al., 2023; Sajja et al., 2025). AI-driven learning analytics fit naturally with text-based online learning transactions and can serve as a powerful diagnostic resource, revealing the personal and collaborative complexities of a learning community. Previous research suggests that analytics-based AI tools can enhance the effectiveness of a community of inquiry. Learning analytics depends on the timely assessment of engagement and performance to ensure the effective progression of collaborative inquiry. AI systems can also be trained on shared metacognitive strategies associated with collaborative inquiry. Finally, when the CoI framework was developed, the authors envisioned automatic coding of the presences for diagnostic purposes. One promising example is Castellanos-Reyes et al. (2025), who leveraged GPT-based large language models to automate the content analysis of Cognitive Presence. Their findings suggest that these models show promise in achieving high coding accuracy, offering a scalable approach to applying the CoI framework for research and diagnostic purposes.
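To make the idea of automated Cognitive Presence coding concrete, the sketch below shows a deliberately naive keyword heuristic for tagging discussion posts with the four inquiry phases. This is an illustrative assumption, not the method used by Castellanos-Reyes et al. (2025), who relied on large language models; the keyword lists and function names here are hypothetical, chosen only to show the shape of such a coding pipeline.

```python
# Illustrative sketch only: a naive keyword heuristic for tagging discussion
# posts with Cognitive Presence phases (triggering event, exploration,
# integration, resolution). A production approach would use a trained or
# LLM-based classifier; the keyword lists below are hypothetical.
PHASE_KEYWORDS = {
    "triggering_event": ["why", "problem", "puzzled", "question"],
    "exploration": ["maybe", "perhaps", "brainstorm", "one idea"],
    "integration": ["therefore", "combining", "this suggests", "in summary"],
    "resolution": ["we tested", "applied", "solution works", "resolved"],
}

def code_post(post: str) -> str:
    """Assign the phase whose keywords match the post most often."""
    text = post.lower()
    scores = {
        phase: sum(text.count(kw) for kw in kws)
        for phase, kws in PHASE_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "uncoded"

posts = [
    "I'm puzzled by this problem: why does engagement drop mid-course?",
    "Maybe we could brainstorm a few causes; one idea is workload spikes.",
    "Combining the survey and log data, this suggests pacing matters most.",
]
for p in posts:
    print(code_post(p))
```

Even this toy version highlights why human oversight remains necessary: keyword counts (and, in more sophisticated form, model probabilities) yield a suggested code, which instructors and researchers must still audit before drawing diagnostic conclusions.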
The CoI framework provides a means to explore how AI systems can be understood as actors within a community of inquiry. In this sense, learning activities may involve both human participants and virtual participants in the form of AI agents. The term AI agent is used to describe an AI-based system that can receive input, respond in context, and act with a certain degree of independence in ways that influence the learning activity. Through this kind of participation, AI agents may function in ways similar to human members of a community of inquiry. Integrating a high-quality AI agent into a community of inquiry would naturally bring several benefits, enriching the educational landscape. Ultimately, AI agents could support human teachers by serving as co-instructors, enhancing the learning experience through personalized and adaptive strategies. While the notion that AI agents could replace human instructors sparks debate, it is essential to focus on how these technologies may augment and support the pivotal aspects of facilitation and direction in the educational process. In synergy with human educators, AI can strengthen key dimensions of Teaching Presence—design, facilitation, and direction—thereby enriching the community of inquiry.
As AI becomes increasingly prevalent in education, its integration into communities of inquiry holds significant potential. Building on earlier discussions, this section examines how AI can enrich learning activities guided by the CoI framework’s focus on critical thinking and inquiry. We outline five distinct roles AI may assume within a community of inquiry, illustrating practical applications and their implications for educators and learners. These roles are not mutually exclusive and may intersect or coexist within the same learning experience.
As instructors design courses and learning activities guided by the CoI framework, AI can provide tools for structuring learning activities, sequencing content, and encouraging inclusivity. For instance, AI systems can generate presentation materials and thought-provoking questions or create case studies tailored to specific learning objectives, contributing to the design and organization component of Teaching Presence. By automating repetitive tasks, such as curating resources or mapping learning outcomes, AI allows educators to focus on higher-order planning. Additionally, AI can help design activities that encourage collaboration and reflection, ensuring that inquiry-based learning emphasizes shared metacognition and collaborative inquiry. AI tools can also identify potential barriers in course design, such as cultural or linguistic biases, and suggest adjustments to foster equitable participation. These design capabilities expand the CoI framework by emphasizing the preparatory work required to establish an effective community.
As external resources, AI tools provide students and instructors with flexible, on-demand material. Functioning outside formal learning management systems or communication tools, systems such as ChatGPT, Copilot, Gemini, and Claude act as information sources, synthesizers, and supports for critical thinking. Students might use AI to clarify complex concepts or generate counterarguments, which they then bring into learning activities to deepen Cognitive Presence. Specifically, AI can support students across the categories of Cognitive Presence, from generating new ideas during triggering events and exploration, through synthesizing information during integration, to evaluating solutions during resolution. These tools empower learners to engage in reflection on their learning processes and their role in group activities. AI can also support group efforts by serving as a shared resource for brainstorming or synthesizing knowledge, fostering collaboration and collective engagement. In short, AI tools offer opportunities to extend learners’ capacity to question, analyze, and collaboratively construct meaning—key features of Cognitive Presence within the CoI framework.
When integrated into the educational platform, AI can become an analytical and strategic support for instructors. Drawing on approaches from Learning Analytics, particularly profiling and prediction, AI systems may monitor participants’ activity, evaluate community interactions, and provide actionable insights to enhance Teaching and Social Presence while promoting Cognitive Presence. For example, AI can track participation trends, identify students in need of extra support, and suggest facilitation strategies to foster community and prompt collaborative inquiry. Yet profiling and prediction also carry risks: they may categorize learners in ways that oversimplify complex identities, reinforce stereotypes, or privilege certain perspectives, thereby challenging the inclusivity and openness of collaborative learning (Bond et al., 2024). Moreover, AI can help instructors assess where students are situated within the phases of Cognitive Presence—triggering events, exploration, integration, or resolution—and recommend individualized feedback or interventions tailored to their cognitive development. This role adds nuance to the CoI framework by emphasizing instructors’ capacity for data-driven decision-making. Here, the human instructor remains in charge, with the AI acting solely as an adviser providing suggestions for action. Through dynamic feedback loops, AI reduces the cognitive load on educators, allowing them to focus on fostering meaningful interactions. However, ethical considerations arise, particularly regarding transparency and bias in historical data, which may reproduce inequities and undermine inclusivity.
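The kind of participation-trend profiling described above can be illustrated with a minimal sketch. The data, names, and threshold below are hypothetical, not drawn from any real learning platform; the point is that the AI output is advisory, leaving the facilitation decision to the human instructor.

```python
# Minimal illustrative sketch (hypothetical data): flag students whose
# discussion activity falls below a threshold so a human instructor can
# decide whether, and how, to intervene.
from collections import Counter

def flag_low_participation(posts, roster, min_posts=3):
    """Return students with fewer than min_posts discussion posts."""
    counts = Counter(author for author, _ in posts)
    return sorted(s for s in roster if counts[s] < min_posts)

# Hypothetical forum activity as (author, message) pairs.
posts = [
    ("ana", "Initial question"), ("ana", "Follow-up"), ("ana", "Synthesis"),
    ("ben", "Reply"), ("ben", "Counterpoint"),
]
roster = ["ana", "ben", "cem"]

print(flag_low_participation(posts, roster))  # ['ben', 'cem']
```

The design choice worth noting is that the function only surfaces candidates; any risk of oversimplifying learner identities (Bond et al., 2024) is mitigated by keeping the instructor in the decision loop.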
In this transformative role, AI becomes a semi-autonomous member of a community of inquiry, actively contributing to Teaching, Social, and Cognitive Presence. As an instructor, co-instructor, or fictional student, AI agents can engage in discussions, provide real-time scaffolding, and adapt their input based on the group’s needs. A well-trained AI agent may be of value in all aspects of Teaching Presence (i.e., design, facilitation, and direction) and Social Presence (i.e., affective expression, open communication, and group cohesion), thereby supporting students’ practical inquiry (Cognitive Presence).
When a human and an AI instructor work together, the AI can complement the human instructor’s expertise by handling well-defined routine tasks and providing real-time support during discussions. All of the support functions mentioned above can be integrated into such AI agents, which in this case may act automatically, without requiring human approval. Although communities of inquiry led largely by AI are not ideal, it is reasonable to expect such approaches to emerge, driven by their promise of scalability and efficiency. This role challenges traditional boundaries, positioning AI as a participant agent rather than a supporting tool, and it raises important questions about the quality of collaborative learning and the preservation of human-centered elements within the community. In particular, such autonomy raises concerns about the authenticity of presence and the risk of dehumanizing collaborative learning: the Social Presence projected by an AI may lack genuine emotional intelligence, potentially undermining trust and engagement in the community. We therefore call for an integrated approach, in which AI manages well-defined and repetitive activities consistently and without fatigue while the human instructor ensures higher-order thinking, integrating perspectives, guiding inquiry, and sustaining the authenticity of collaborative inquiry.
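This integrated approach, where routine actions are handled automatically by the AI while higher-order decisions stay with the instructor, can be sketched as a simple decision gate. The task categories and field names below are hypothetical illustrations, not part of any actual platform or agent framework.

```python
# Illustrative sketch (hypothetical task categories): an AI agent acts
# automatically on well-defined routine tasks, while everything else is
# queued for human review, so the instructor retains pedagogical control.
ROUTINE_TASKS = {"post_reminder", "share_resource", "answer_faq"}

def route_action(task, human_queue, auto_log):
    """Execute routine tasks automatically; defer the rest to a human."""
    if task["kind"] in ROUTINE_TASKS:
        auto_log.append(task)       # AI acts without human approval
    else:
        human_queue.append(task)    # instructor decides and approves

human_queue, auto_log = [], []
for task in [{"kind": "post_reminder"},
             {"kind": "grade_essay"},
             {"kind": "answer_faq"},
             {"kind": "mediate_conflict"}]:
    route_action(task, human_queue, auto_log)

print([t["kind"] for t in auto_log])     # ['post_reminder', 'answer_faq']
print([t["kind"] for t in human_queue])  # ['grade_essay', 'mediate_conflict']
```

The explicit allow-list mirrors the argument above: autonomy is granted only for activities that are well defined and repetitive, never for judgments that sustain the authenticity of collaborative inquiry.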
This role is grounded in the concept of a partnership between a student and a tutor (Stenbom et al., 2016), reimagined by substituting an AI counterpart for the human tutor. Unlike the AI as an Independent Resource for Inquiry role—where AI is treated as a tool or information source—a relationship of inquiry emerges when interaction becomes sustained and dialogical. Here, the learner engages with the AI tutor not only as a provider of answers but as a partner whose outputs are continually scrutinized, adapted, and guided, reflecting elements of pedagogical trust and mutual shaping.
This type of interaction has been tested for years in the form of conversational agents, virtual tutors, and chatbots. However, with the advent of generative AI and large language models, these systems have become significantly more sophisticated. In a relationship of inquiry, the AI assumes the role of a tutor by contributing to Teaching Presence (designing, guiding, directing, explaining, and scaffolding learning), supporting Social Presence (interacting affectively and openly, and supporting relationship cohesion), and fostering Cognitive Presence (prompting inquiry). This constitutes a dynamic, one-to-one version of the AI Agents as Members of a Community of Inquiry role, rooted in the principles of the CoI framework.
This approach offers several advantages. It provides scalable and consistent tutoring support, making individualized learning more accessible. Learners benefit from real-time guidance tailored to their specific needs, which can help them better understand content and stay motivated. Furthermore, interacting with an AI tutor can foster metacognitive skills and promote self-directed learning by encouraging students to reflect on their understanding and strategies.
At the same time, however, the lack of human intelligence may reduce the authenticity of Social Presence and Teaching Presence. There is also a risk that learners may become overly dependent on AI, potentially missing opportunities to engage in peer collaboration and develop interpersonal skills. Lastly, ethical concerns must be considered, especially regarding how data is used and the potential for AI systems to reinforce biases present in their training data.
This article has examined the question: How can a collaborative-constructivist perspective inform our understanding of the adoption of generative AI in digital learning environments? We argued that collaborative inquiry shaped by shared metacognition most effectively facilitates and directs deep and meaningful thinking and learning in the context of generative AI technologies. This speaks directly to the essence of the CoI framework, and specifically to shared metacognition, which encourages critical analysis of AI-generated information and facilitates approaches to learning that extend beyond surface meaning. In the final analysis, educators and learners must manage the risks of AI by encouraging skepticism and maintaining control of educational decisions. AI offers clear benefits, such as supporting individualized learning, efficiently handling well-defined problems, and carrying out repetitive tasks without fatigue. Yet, however sophisticated its reasoning capabilities may appear, AI remains soulless. It can construct coherence and simulate meaning, but meaning becomes truly educational only when humans engage with, interpret, and integrate these outputs into shared understanding. AI may speed up information assimilation, but educators must slow inquiry to allow for critical reflection and discourse. That is, we need to take the time to reflect and openly share our understanding of generative AI outputs. Thoughtful and constructive use of generative AI necessitates critical thinking and discourse, best made possible through shared metacognitive inquiry that reflects collaborative monitoring and management of the learning process.
As the reasoning power of AI grows, the need to question and make sense of AI-generated information may seem to diminish. From a collaborative-constructivist perspective, this risk underscores the importance of educating individuals to resist complacency and of employing AI to enhance critical and collaborative thinking. This highlights the need for a theoretical framework to study and understand AI’s integration into educational practice. The increasing power of AI, and its potential dominance in generating coherent information, heightens the importance of human critical analysis of information and knowledge structures as knowledge evolves. Indeed, we are already in a situation where much of the information we encounter has been at least partially generated by AI. This uncertainty about the origins of information requires us to adopt a critical stance toward everything we engage with, demanding constant reflection, verification, and skeptical inquiry. The argument here is that an educational framework is essential for understanding and implementing AI to generate new insights and construct shared knowledge. Such a conceptual educational framework can serve as a guide and justification for a purposeful, critical, collaborative inquiry approach in which curious skepticism is the central tenet when incorporating AI tools in an educational context.
The key to the successful educational adoption of AI is to understand the challenge of such a potentially impactful technological tool. Every indication is that AI has started to transform educational practices, which makes it imperative, as we start down this transformational road, to have a coherent perspective on the elements and dynamics that must be considered. At the same time, we must recognize that the rapid pace of development means our present understanding is provisional. Rather than seeking final answers, the task is to remain open to continuous reflection and refinement, allowing our theoretical and educational perspectives to evolve alongside the technology itself. We must be sure to measure, analyze, and understand the impact of AI in terms of the nature and goals of the learning experience—in other words, to provide it with a soul grounded in human purpose. This is why it is essential to have a meaningful and defensible theoretical framework that can guide the application and assessment of AI in achieving deep and meaningful learning experiences. Only then will we understand the potential of AI to shape worthwhile learning experiences.
During the preparation of this work, the authors used ChatGPT (OpenAI) to support language editing and improve clarity of expression. After using this software, the authors reviewed and edited the content as needed and take full responsibility for the content of the published article.
Anderson, J. E., Nguyen, C. A., & Moreira, G. (2025). Generative AI-driven personalization of the Community of Inquiry model: Enhancing individualized learning experiences in digital classrooms. The International Journal of Information and Learning Technology, 42(3), 296-310. https://doi.org/10.1108/IJILT-10-2024-0240
Beckman, K., Apps, T., Howard, S. K., Rogerson, C., Rogerson, A., & Tondeur, J. (2025). The GenAI divide among university students: A call for action. The Internet and Higher Education, 67, 101036. https://doi.org/10.1016/j.iheduc.2025.101036
Bond, M., Khosravi, H., De Laat, M., Bergdahl, N., Negrea, V., Oxley, E., Pham, P., Chong, S. W., & Siemens, G. (2024). A meta systematic review of artificial intelligence in higher education: A call for increased ethics, collaboration, and rigour. International Journal of Educational Technology in Higher Education, 21(1). https://doi.org/10.1186/s41239-023-00436-z
Bozkurt, A., & Sharma, R. C. (2023). Challenging the status quo and exploring the new boundaries in the age of algorithms: Reimagining the role of generative AI in distance education and online learning. Asian Journal of Distance Education, 18(1). https://www.asianjde.com/ojs/index.php/AsianJDE/article/view/714
Bozkurt, A., & Zawacki-Richter, O. (2021). Trends and patterns in distance education (2014–2019): A synthesis of scholarly publications and a visualization of the intellectual landscape. The International Review of Research in Open and Distributed Learning, 22(2), 19-45. https://doi.org/10.19173/irrodl.v22i2.5381
Buchanan, B. G. (2005). A (very) brief history of artificial intelligence. AI Magazine, 26(4), 53-60. https://doi.org/10.1609/aimag.v26i4.1848
Castellanos-Reyes, D., Olesova, L., & Sadaf, A. (2025). Transforming online learning research: Leveraging GPT large language models for automated content analysis of cognitive presence. The Internet and Higher Education, 65. https://doi.org/10.1016/j.iheduc.2025.101001
Cleveland-Innes, M., Stenbom, S., & Garrison, D. R. (Eds.). (2024). The design of digital learning environments: Online and blended applications of the community of inquiry. Routledge. https://doi.org/10.4324/9781003246206
Corbeil, J. R., & Corbeil, E. M. (2025). Teaching and learning in the age of generative AI: Evidence-based approaches to pedagogy, ethics, and beyond. Routledge. https://doi.org/10.4324/9781032688602
De Silva, G. H. B. A., Sandanayake, T. C., Firdhous, M. F. M., & Senarathne, C. D. (2025). ChatGPT in higher education: A review of its impact on student research. Journal of Business and Technology, 129-138. https://doi.org/10.4038/jbt.v9i5.227
Dewey, J. (1933). How we think. D.C. Heath and Co.
Flavell, J. H. (1987). Speculations about the nature and development of metacognition. In F. E. Weinert & R. Kluwe (Eds.), Metacognition, motivation, and understanding (pp. 21-29). Lawrence Erlbaum.
Garrison, D. R. (2015). Thinking collaboratively: Learning in a community of inquiry. Routledge. https://doi.org/10.4324/9781315740751
Garrison, D. R. (2017). E-learning in the 21st century: A community of inquiry framework for research and practice (3rd ed.). Routledge.
Garrison, D. R. (2023, May 19). Editorial 41: Online learning and AI. The Community of Inquiry. https://www.thecommunityofinquiry.org/editorial41
Garrison, D. R., & Akyol, Z. (2015a). Corrigendum to “Toward the development of a metacognition construct for communities of inquiry.” The Internet and Higher Education, 26, 56. https://doi.org/10.1016/j.iheduc.2015.03.001
Garrison, D. R., & Akyol, Z. (2015b). Toward the development of a metacognition construct for communities of inquiry. The Internet and Higher Education, 24, 66-71. https://doi.org/10.1016/j.iheduc.2014.10.001
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2), 87-105. https://doi.org/10.1016/S1096-7516(00)00016-6
Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15(1), 7-23. https://doi.org/10.1080/08923640109527071
Haenlein, M., & Kaplan, A. (2019). A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review, 61(4), 5-14. https://doi.org/10.1177/0008125619864925
Hwang, G.-J., Tu, Y.-F., & Tang, K.-Y. (2022). AI in online-learning research: Visualizing and interpreting the journal publications from 1997 to 2019. The International Review of Research in Open and Distributed Learning, 23(1), 104-130. https://doi.org/10.19173/irrodl.v23i1.6319
Jansson, M., Hrastinski, S., Stenbom, S., & Enoksson, F. (2021). Online question and answer sessions: How students support their own and other students’ processes of inquiry in a text-based learning environment. The Internet and Higher Education, 51, 100817. https://doi.org/10.1016/j.iheduc.2021.100817
Jansson, M., Tian, K., Hrastinski, S., & Engwall, O. (2024). An initial exploration of semi-automated tutoring: How AI could be used as support for online human tutors. Proceedings of the International Conference on Networked Learning, 14(1). https://doi.org/10.54337/nlc.v14i1.8070
Jordan, M. I., & Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and prospects. Science, 349(6245), 255-260. https://doi.org/10.1126/science.aaa8415
Kılınç, S. (2023). Embracing the future of distance science education: Opportunities and challenges of ChatGPT integration. Asian Journal of Distance Education, 18(1), 205-237. https://doi.org/10.5281/zenodo.7857396
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444. https://doi.org/10.1038/nature14539
Lipman, M. (2003). Thinking in education (2nd ed.). Cambridge University Press.
Mamede, H. S., & Santos, A. (Eds.). (2025). AI and learning analytics in distance learning. IGI Global. https://doi.org/10.4018/979-8-3693-7195-4
Martha, A. S. D., Santoso, H. B., Junus, K., & Suhartanto, H. (2023). The effect of the integration of metacognitive and motivation scaffolding through a pedagogical agent on self- and co-regulation learning. IEEE Transactions on Learning Technologies, 16(4), 573-584. https://doi.org/10.1109/tlt.2023.3266439
Namaziandost, E., & Rezai, A. (2024). Special issue: Artificial intelligence in open and distributed learning: Does it facilitate or hinder teaching and learning? The International Review of Research in Open and Distributed Learning, 25(3), i-vii. https://doi.org/10.19173/irrodl.v25i3.8070
Nasr, N. R., Tu, C.-H., Werner, J., Bauer, T., Yen, C.-J., & Sujo-Montes, L. (2025). Exploring the impact of generative AI ChatGPT on critical thinking in higher education: Passive AI-directed use or human–AI supported collaboration? Education Sciences, 15(9), 1198. https://doi.org/10.3390/educsci15091198
Ouyang, F., Wu, M., Zheng, L., Zhang, L., & Jiao, P. (2023). Integration of artificial intelligence performance prediction and learning analytics to improve student learning in online engineering course. International Journal of Educational Technology in Higher Education, 20(1), 4. https://doi.org/10.1186/s41239-022-00372-4
Ouyang, F., Zheng, L., & Jiao, P. (2022). Artificial intelligence in online higher education: A systematic review of empirical research from 2011 to 2020. Education and Information Technologies, 27(6), 7893-7925. https://doi.org/10.1007/s10639-022-10925-9
Peirce, C. S. (1955). The fixation of belief. In C. S. Peirce & J. Buchler (Eds.), Philosophical writings of Peirce (pp. 5-22). Courier Dover.
Rios, T. C.-D. L., Solis-Trujillo, B., Perez-Ruiz, J., & Aquije-Mansilla, M. (2025). Systematic review of critical thinking using artificial intelligence. Edelweiss Applied Science and Technology, 9(3), 990-1001. https://doi.org/10.55214/25768484.v9i3.5405
Rospigliosi, P. A. (2023). Artificial intelligence in teaching and learning: What questions should we ask of ChatGPT? Interactive Learning Environments, 31(1), 1-3. https://doi.org/10.1080/10494820.2023.2180191
Sajja, R., Sermet, Y., Cwiertny, D., & Demir, I. (2025). Integrating AI and learning Analytics for data-driven pedagogical decisions and personalized interventions in education. Technology, Knowledge and Learning. https://doi.org/10.1007/s10758-025-09897-9
Selwyn, N. (2022). The future of AI and education: Some cautionary notes. European Journal of Education, 57, 620-631. https://doi.org/10.1111/ejed.12532
Shen, X., & Teng, M. F. (2024). Three-wave cross-lagged model on the correlations between critical thinking skills, self-directed learning competency and AI-assisted writing. Thinking Skills and Creativity, 52, 101524. https://doi.org/10.1016/j.tsc.2024.101524
Sitepu, M. S., Prasojo, L. D., Hermanto, H., Salido, A., Nurhakim, L., Setyorini, E., Disnawati, H., & Wiratsongko, B. (2025). Mapping and exploring strategies to enhance critical thinking in the artificial intelligence era: A bibliometric and systematic review. European Journal of Educational Research, 15(1), 305-322. https://doi.org/10.12973/eu-jer.15.1.305
Southworth, J. (2023). Rethinking university writing pedagogy in a world of ChatGPT. University Affairs. https://universityaffairs.ca/opinion/rethinking-university-writing-pedagogy-in-a-world-of-chatgpt/
Stenbom, S., & Cleveland-Innes, M. (2024). Introduction to the Community of Inquiry theoretical framework. In M. Cleveland-Innes, S. Stenbom, & D. R. Garrison (Eds.), The design of digital learning environments (1st ed., pp. 3-25). Routledge. https://doi.org/10.4324/9781003246206-2
Stenbom, S., Garrison, D. R., & Bozkurt, A. (2026). Augmenting inquiry, preserving the core: Stenbom and Garrison on AI’s role and human-centered learning within the Community of Inquiry (CoI) framework. Open Praxis 18(1), 181-191. https://doi.org/10.55982/openpraxis.18.1.1042
Stenbom, S., Jansson, M., & Hulkko, A. (2016). Revising the community of inquiry framework for the analysis of one-to-one online learning relationships. The International Review of Research in Open and Distributed Learning, 17(3), 36-53. https://doi.org/10.19173/irrodl.v17i3.2068
Swan, K. (2020). Teaching and learning in post-industrial distance education. In M. Cleveland-Innes & D. R. Garrison (Eds.), An introduction to distance education (2nd ed., pp. 67-89). Routledge. https://doi.org/10.4324/9781315166896
Waterman, D. A. (1986). A guide to expert systems. Addison-Wesley.
Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education — where are the educators? International Journal of Educational Technology in Higher Education, 16(1). https://doi.org/10.1186/s41239-019-0171-0

Artificial Intelligence and Communities of Inquiry: Reimagining Educational Experiences by Stefan Stenbom and D. Randy Garrison is licensed under a Creative Commons Attribution 4.0 International License.