International Review of Research in Open and Distributed Learning

Volume 22, Number 3

August - 2021

 

Instructor Impact on Differences in Teaching Presence Scores in Online Courses

 

Holly Fiock1, Yukiko Maeda2, and Jennifer C. Richardson3
1Department of Curriculum and Instruction, Purdue University, 2Department of Educational Studies, Purdue University, 3Department of Curriculum and Instruction, Purdue University

 

Abstract

The Community of Inquiry framework is a theoretical process model of online learning built on three interdependent constructs: social, cognitive, and teaching presence. Specifically, teaching presence contains three sub-elements—(a) facilitation of discourse, (b) direct instruction, and (c) instructional design and organization—that work together to create a collaborative-constructivist learning environment. Data from the Community of Inquiry survey from 160 learners in 11 course sections were analyzed using a one-way analysis of variance (ANOVA) to determine whether statistically significant differences existed in teaching presence scores between sections of two online courses with identical course design taught by different instructors. Results showed significant differences between individual instructors’ teaching presence scores for each of the two courses. Specifically, significant differences were found in each sub-element of teaching presence except for one course’s instructional design and organization. Conceptual and methodological explanations of the findings are provided, and implications and suggestions for future research are discussed.

Keywords: online learning, Community of Inquiry framework, teaching presence, higher education, direct instruction

Introduction

The rapid growth of online educational courses has created changes in class communication and community dynamics. In face-to-face courses, learners can physically see and immediately receive feedback from instructors, whereas in online courses, communication lacks vocal tones, nuances, and immediacy of response (Hailey et al., 2001). These issues have led students to report concerns such as feelings of alienation or disconnectedness from others (Boston et al., 2010; Hart, 2012; Phirangee & Malec, 2017). As such, the growth of online courses, the shift in communication they entail, and learner isolation issues have driven research into the role of community building, presence, and instructor interaction with learners in online environments (Phirangee et al., 2016).

Specifically, interaction between online learners and instructors is of great importance to community building, learner success, and course satisfaction (Akyol & Garrison, 2008; Arbaugh, 2008). The Community of Inquiry (CoI) framework provides guidelines on how to develop online communities of inquiry for meaningful and effective learning environments (Garrison et al., 2000). A CoI is “a group of individuals who collaboratively engage in purposeful critical discourse and reflection to construct personal meaning and confirm mutual understanding” (Garrison & Akyol, 2013, p. 105). Garrison et al. (2000) developed the CoI framework as a working, dynamic model with three core presences: cognitive, social, and teaching. They note that while both social and cognitive (content-related) presences and interactions are vital for learners in online contexts, teaching presence is needed to guide and focus interactions toward meeting the course goals and objectives (Arbaugh, 2008), serving as “a mechanism for bridging the transactional distance between learner and instructor commonly associated with distance education” (Arbaugh & Hwang, 2006, p. 17). Of the three presences, teaching presence is of great consequence because “what instructors do in the classroom is critical to learners’ sense of scholarly ‘belonging’ and ultimate persistence in their academic pursuits” (Shea et al., 2006, p. 176).

Literature Review

Community of Inquiry

The CoI framework represents a collaborative-constructivist model of learning in online environments (Castellanos-Reyes, 2020). Social presence refers to how connected, both socially and emotionally, learners are with others while in an online course or environment (Swan et al., 2008). Cognitive presence is the extent to which learners construct meaning in online environments where reflection and discourse are used (Swan et al., 2008). Teaching presence is defined as the design, facilitation, and direction of cognitive and social processes to support learning and is considered a key element in the establishment of online community (Garrison et al., 2000; Garrison & Arbaugh, 2007).

Teaching presence has three sub-elements: (a) facilitation of discourse, (b) direct instruction, and (c) instructional design and organization (Anderson et al., 2001; Caskurlu et al., 2020). However, it is important to note that some researchers (e.g., Shea et al., 2006) argue that teaching presence consists of only two sub-elements: (a) instructional design and organization and (b) facilitation of discourse and direct instruction combined. The authors of this study view the teaching presence sub-elements as independent concepts; therefore, in this research, we explored students’ perceptions of the three teaching presence sub-elements across different instructors of the same online course to add to the existing research base.

Teaching Presence

The first sub-element, facilitation of discourse (FD), is defined as the methods or means instructors use to help students engage with the content, course information, and instructional materials (Anderson et al., 2001). Frequently, FD occurs within the discussion board, where the instructor can work with students to develop a shared understanding of course topics. When facilitating discourse among learners, instructors make observations of the students and act accordingly: they may raise additional questions, change the direction(s) of discussion, manage ineffective student comments, encourage considerations from different points of view, draw out inactive students, and comment on and answer students’ concerns (Anderson et al., 2001; Brower, 2003; Coppola et al., 2004; Swan et al., 2008).

Furthermore, research shows learners are likely to feel an increased sense of community and feel more connected to their instructors when instructors are active in the discussions (Epp et al., 2017; Phirangee et al., 2016; Rovai, 2007). In a case study, Watson et al. (2017) found that 60% of teaching presence scores in a massive open online course were dedicated to facilitating discourse, underscoring learners’ desire for instructor guidance during discussion participation. However, the instructor alone cannot guarantee a learner’s engagement with course materials and content. As Anderson et al. (2001) state, “The teacher shares responsibility with each individual student for attainment of agreed upon learning objectives” (p. 7). Therefore, to encourage peer interactions within FD, the instructor can model appropriate behaviors, match students with similar ideas to elicit conversations, and provide opportunities for peer-to-peer interactions (Anderson et al., 2001; Richardson et al., 2009; Stewart, 2017).

The second sub-element, an instructor’s direct instruction (DI), is characterized as sharing of subject matter knowledge or expertise with students in the form of candid intellectual and scholarly leadership (Anderson et al., 2001). Sometimes confused with FD, DI goes beyond facilitating discussions and discourse to include providing intellectual reasoning. Specifically, as the subject matter expert, the instructor “must play this role because of the need to diagnose comments for accurate understanding, inject sources of information, direct discussions in useful directions, and scaffold learner knowledge to raise it to a new level” (Garrison & Arbaugh, 2007, p. 164). Thus, it is not surprising that DI is typically associated with feedback and assessment, as it provides learners with the necessary guidance to advance to complex topics while navigating through course materials, helping students achieve the course’s learning objectives. DI can also be given by peers, especially in situations where “students exchange and negotiate multiple perspectives with a group of knowledgeable peers,” allowing for “opportunities for constructing new knowledge” (Stewart, 2017, p. 69). Particularly in online environments, Gurley (2018) found that DI by itself was not enough for learners to be able to construct knowledge; all three sub-elements of teaching presence (facilitation of discourse, direct instruction, and instructional design and organization) are critical for effective development of “critical thinking and practical inquiry” skills in online learners (p. 199).

Last, Anderson et al. (2001) explain that the third sub-element, instructional design and organization (DO), is an aspect of teaching presence that involves the design, structure, process, interaction, and evaluative elements of an online course. These include the personalized facets the instructor places into the course such as organization, communication plans, explanation of activities, and assignments, all typically individualized by each instructor. Generally, the element of course design is developed and created prior to the start of the course (preplanned). Stewart (2017) explains that using the CoI framework is crucial in helping “instructors more consistently design activities that put students in situations where they are likely to benefit from interacting with peers” (p. 68), a key component within teaching presence. Peer-to-peer design activities include opportunities where instructors can create, apply, and use collaborative learning principles within course assignments, activities, group work, and course discussions (Lowenthal & Parscal, 2008; Richardson et al., 2009).

Numerous studies (Coppola et al., 2004; Palloff & Pratt, 1999; York & Richardson, 2012) have noted the need for instructors to clearly design their course, being as “transparent” as possible, “because the social cues and norms of the traditional classroom are absent” from online courses (Arbaugh & Hwang, 2006, pp. 11-12). Shea, Pickett, et al. (2003) state, “Good learning environments are knowledge centered in that they are designed to achieve desired learning outcomes” (p. 63). While course design is often preplanned, DO elements can (and should) be implemented and/or adjusted during the live course so that instructors can actively guide learners toward meeting the learning outcomes (Shea, Pickett, et al., 2003).

Purpose of the Study and Research Question

As learner enrollment in online courses increases, it is important to understand how the instructor contributes to teaching presence scores, specifically focusing on the three sub-elements (FD, DI, and DO) (Anderson et al., 2001). Previous studies have explored the relationships between teaching presence and online discussions (Blignaut & Trollip, 2005; Collison et al., 2000; Lowenthal & Parscal, 2008; Watson et al., 2017); however, an instructor’s teaching presence goes beyond just discussion board activity. As Fiock (2020) states, “we must not exclude how an instructor’s presence can be established in other aspects of the course (i.e., course announcements, weekly overviews, feedback to students or student groups, or design of assignment and course activities)” (p. 140). DI activities, such as giving detailed feedback to the learner, providing additional resources as needed, and serving as the content expert (Richardson et al., 2010), may have a greater influence than design elements of teaching presence on students’ reported perceptions. Therefore, understanding the perceived differences in the three teaching presence sub-elements is an important first step in helping instructors focus their attention on specific strategies and use of course activities when challenged with designing, facilitating, and directing online learning—especially since, as Stewart (2017) states, “CoI also helps instructors focus on what they can control—they may not be able to ensure that students will be considerate or task-oriented, but they can ensure that the activity design sets students up for success” (p. 79).

Commonly, there are two models for online course development in large online programs: (a) courses designed by instructors and (b) “standard” or “canned” courses (Puzziferro & Shelton, 2008, p. 130). In the first model, the faculty member or instructor who is teaching the course develops all the course materials and activities. In the second model, “standard” or master courses are designed by an instructor or a team of instructors and then copied or cloned in the learning management system to multiple sections of the same course, which may then be taught by different instructors. As no two instructors are the same, the instructional design and organization of class materials will typically vary from course to course and from instructor to instructor, especially in courses designed by the instructor. In situations where standard or canned courses are used, multiple sections of the same course share the same design elements, and therefore it may be possible to assess teaching presence differences due to instructor variability.

As such, the purpose of this study was to determine whether there are statistically significant differences in teaching presence scores among multiple sections of a “standard” course where each section has identical course design but is taught by a different instructor. Currently, the number of teaching presence studies focusing exclusively on the three sub-elements is small, and results are inconclusive (Caskurlu et al., 2020). We therefore focused on instructor differences, controlling for variation in course content and design by using data from multiple sections of the same course (i.e., “standard” courses). Consequently, the course sections as initially launched were identical, with differences emerging during implementation through the various instructors’ actions. The research questions for this study were as follows:

To what extent do students report different teaching presence (TP) scores in different sections of the same course having identical design but with different instructors?

  1. To what extent do student perceptions of FD of different sections of the same course vary due to the instructors?
  2. To what extent do student perceptions of DI of different sections of the same course vary due to the instructors?
  3. To what extent do student perceptions of instructional DO of different sections of the same course vary due to the instructors?

Method

Study Setting and Data Source

We used part of a sizable archival data set collected by an online master’s program in the field of instructional design offered by a large Midwestern public university. The program was the first at the university to go fully online, in 2011. Once admitted to the program, learners take eight-week courses over five semesters. On average, 250 students per year are enrolled in the online program (with three admission start periods, in the spring, summer, and fall semesters). While minimal demographic information was collected from the participants during data collection, students enrolled in the online program are generally full-time professionals and part-time students. Students range from 21 to 60 years of age, with a mean age of 37.5 years and a gender breakdown of 67.7% female and 32.3% male.

The data used for this study were obtained from two purposively selected graduate-level education courses in the fall 2017 semester: (a) Course A, an introduction to learning design and technology, and (b) Course B, a program assessment and evaluation course. The introductory course serves as a launch into the field and the master’s program, covering broad topics such as learning theories, instructional design models, and emerging trends in the field. The assessment and evaluation course helps learners develop their expertise in program evaluation design, using evaluation models to examine and create learning and performance interventions.

Student perceptions of TP were measured with the CoI survey (Arbaugh et al., 2008) every semester in the master’s program. The survey was administered during the last week of the learners’ online courses (week eight) as part of the program’s course evaluation. Learners were offered 2% extra credit if 90% of students completed the survey. As part of the course evaluation process, the entire fall 2017 student population received the survey via an e-mail or course announcement, with at least one reminder e-mail or course announcement. For the study, 160 students voluntarily completed the survey (n = 57 among four sections in Course A, 57% response rate; n = 103 among seven sections in Course B, 65% response rate). Anonymity was assured as no personal or identifiable information was asked of the learners, and the survey was sent by anonymous link.

Dependent Variables

The CoI survey contains 34 items measuring presence in online courses using the three constructs (teaching, social, and cognitive presence). This study focused only on TP and its three sub-elements (FD, DI, and DO; see Appendix). The dependent variables in this study were the three sub-elements of TP. Items 1-4 addressed DO, items 5-10 addressed FD, and items 11-13 addressed DI (see Appendix for item descriptions in each sub-element). Students responded on a Likert-type scale (5 = strongly agree; 4 = agree; 3 = neutral; 2 = disagree; 1 = strongly disagree). Sub-element scores were computed by taking the average of the responses on the items relevant to the specific sub-element. Arbaugh et al. (2008) reported a high Cronbach’s alpha for internal consistency of .94 for TP (M = 3.34, SD = 0.61) based on all 13 items and also reported construct validity evidence supporting the three-factor structure of the CoI with principal components analysis in graduate-level courses. For our study, Cronbach’s alpha was computed for each sub-element, supporting high internal consistency with the current sample. The FD sub-element (six items) had a Cronbach’s alpha for Course A of α = .954 and for Course B of α = .956. The DI sub-element (three items) had a Cronbach’s alpha for Course A of α = .887 and for Course B of α = .817. The DO sub-element (four items) had a Cronbach’s alpha for Course A of α = .906 and for Course B of α = .893.
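
For readers who wish to see the scoring procedure concretely, the sketch below shows one way to compute the sub-element scores and Cronbach’s alpha. It is a minimal illustration, not the program’s actual scoring code; the DataFrame `responses` and the column names `q1` through `q13` are hypothetical.

```python
# Minimal sketch (hypothetical data layout): computing TP sub-element scores
# and Cronbach's alpha from CoI survey responses. `responses` is assumed to be
# a pandas DataFrame whose columns q1..q13 hold the 5-point Likert answers.
import pandas as pd

ITEM_MAP = {  # item groupings for teaching presence (see Appendix)
    "DO": ["q1", "q2", "q3", "q4"],
    "FD": ["q5", "q6", "q7", "q8", "q9", "q10"],
    "DI": ["q11", "q12", "q13"],
}

def subelement_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Average the items belonging to each sub-element for every student."""
    return pd.DataFrame({name: responses[items].mean(axis=1)
                         for name, items in ITEM_MAP.items()})

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Example: alpha for the six FD items of one course's responses
# fd_alpha = cronbach_alpha(responses[ITEM_MAP["FD"]])
```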

Independent Variable

The instructor of the course served as the independent variable in this study. There were four instructors in Course A and seven instructors in Course B. As shown in Table 1, the instructors had varied backgrounds and experiences, but all held doctoral degrees in the field of instructional design (e.g., learning design and technology, learning technologies, instructional technologies, or distance education). Prior to teaching for the university in this study, all instructors went through a vetting process to ensure program and instructor quality. This vetting process included participation in a mentor/mentee program for instructors with little or no online teaching experience, to ensure they were prepared to teach in the program.

Table 1

Summary of Instructor Demographics by Course

Instructor Gender Experience
Course A
1 F 10 years instructional design, 3 years higher ed teaching
2 F 9 years higher ed teaching, 2 years K-12 teaching
3 F 6 years higher ed teaching, 22 years in business
4 F 9 years instructional design, 4 years higher ed teaching
Course B
5 M 17 years of instructional design, 12 years of online and face-to-face teaching
6 F 9 years online programing, 5 years K-12 teaching, 5 years higher ed teaching
7 F 6 years K-12 teaching, 5 years higher ed teaching
8 M 17 years in corporate training, 9 years higher ed teaching
9 F 7 years higher ed teaching, 6 years instructional design
10 F 25 years higher ed teaching, 6 years instructional design
11 F 9 years higher ed teaching, 6 years instructional design, 3 years K-12 teaching

Note. F = female; M = male.

Statistical Analysis Procedure

Analyses focused on participating students’ self-reported TP scores in relation to the instructor who taught their course. A one-way univariate fixed-effect between-subjects analysis of variance (ANOVA) was conducted to compare the instructor effect on the TP sub-elements (i.e., FD, DI, and DO) in courses with the same instructional design and organization but different facilitation of discourse and direct instruction. We chose to conduct a separate univariate analysis by course and by sub-element, rather than a multivariate analysis, for the following reasons. First, we were not interested in comparing TP differences by course; the analysis of the two courses aimed to cross-validate the findings and to verify whether the same conclusion was reached for the different courses. Second, while the sub-elements of TP were highly correlated in our study, ranging from r = .699 (DO and DI for Course B) to r = .930 (DI and FD for Course A), we view these sub-elements as independent concepts within TP (Anderson et al., 2001). Third, the focus of the analysis was to shed light on each element of TP, instead of TP as a whole, to understand its potential variation by instructor. While we acknowledge the risk of committing a Type I error by conducting multiple ANOVAs, Huberty and Morris (1989) support the use of multiple univariate analyses as applied in this study.
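
To make the analytic design concrete, the following sketch runs a separate one-way ANOVA for each course and each sub-element with instructor as the factor. This is an illustration under assumed names, not the authors’ analysis script: `df` is a hypothetical DataFrame with columns `course`, `instructor`, and the sub-element scores `FD`, `DI`, and `DO`.

```python
# Minimal sketch: one one-way ANOVA per course and per TP sub-element,
# grouping students' scores by the instructor who taught their section.
from scipy import stats

def anova_by_instructor(df, course, subelement):
    """Return (F, p) for instructor differences within one course."""
    section = df[df["course"] == course]
    groups = [g[subelement].dropna().values
              for _, g in section.groupby("instructor")]
    return stats.f_oneway(*groups)

for course in ("A", "B"):
    for sub in ("FD", "DI", "DO"):
        f, p = anova_by_instructor(df, course, sub)
        print(f"Course {course}, {sub}: F = {f:.3f}, p = {p:.3f}")
```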

Prior to the ANOVA analysis, a series of descriptive analyses were conducted to explore the impact of outliers in the dependent variables and to examine whether the underlying data assumptions for ANOVA were satisfied. In checking for the equality of variances, Levene’s test detected unequal variances for Course A—FD: F (3, 54) = 4.849, p = .005; DI: F (3, 53) = 4.231, p = .003; and DO: F (3, 54) = 4.786, p = .005. Course B showed unequal variances for DO—F (6, 97) = 2.238, p = .046—and DI—F (6, 96) = 2.359, p = .036—but equal variances for FD—F (6, 97) = 2.052, p = .066. This appears to be mainly due to the presence of outliers, which also contributed to negatively skewed distributions. In addition, we observed that score distributions for some instructors were affected by a ceiling effect, which may have restricted the score range for those distributions. We carefully evaluated these outliers and decided not to exclude them because we did not detect any issues with the data entries and considered them consistent with responses from the population. Consistent with the observed outliers, a set of Kolmogorov-Smirnov normality tests indicated that none of the TP sub-element data from either course followed a normal distribution. Course A showed the following: FD: D (57) = 0.244, p < .001; DI: D (57) = 0.302, p < .001; and DO: D (57) = 0.259, p < .001. Course B showed the following: FD: D (103) = 0.152, p < .001; DI: D (103) = 0.207, p < .001; and DO: D (103) = 0.219, p < .001.
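
The assumption checks described above can be reproduced along the following lines. This is a minimal sketch using SciPy; `groups` is assumed to be the list of per-instructor score arrays for one sub-element of one course, as built in the ANOVA sketch.

```python
# Minimal sketch of the ANOVA assumption checks (assumed `groups` input).
import numpy as np
from scipy import stats

# Levene's test: p < .05 signals unequal variances. Note that SciPy's default
# centers on the group medians; pass center="mean" for the classic Levene test.
levene_stat, levene_p = stats.levene(*groups)

# One-sample Kolmogorov-Smirnov test against a normal distribution fitted to
# the pooled scores (estimating the parameters from the same data makes this
# a Lilliefors-style approximation).
pooled = np.concatenate(groups)
ks_stat, ks_p = stats.kstest(pooled, "norm",
                             args=(pooled.mean(), pooled.std(ddof=1)))
```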

With some evidence of nonnormality of the data and unequal variances among instructors, we first explored the instructor variation on the TP sub-elements with a Kruskal-Wallis test, a nonparametric alternative to the one-way ANOVA (e.g., Harwell et al., 1992; Khan & Rayner, 2003). Because the statistical conclusions drawn from the nonparametric test were consistent with those based on the ANOVA, and because the ANOVA is usually robust to violations of the normality assumption even with small sample sizes unless the kurtosis statistic is high (Khan & Rayner, 2003), we concluded that any effect of these assumption violations was inconsequential and therefore report only the ANOVA results. Statistical significance for all inferential tests was evaluated at an alpha level of .05.
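
The nonparametric cross-check takes the same per-instructor groups; a minimal sketch:

```python
# Minimal sketch: Kruskal-Wallis H test on the same per-instructor groups
# that fed the one-way ANOVA (assumed `groups` input).
from scipy import stats

h_stat, h_p = stats.kruskal(*groups)
if h_p < .05:
    print(f"Instructor effect detected (H = {h_stat:.3f}, p = {h_p:.3f})")
```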

Results

Tables 2, 3, and 4 show descriptive summaries for each TP sub-element as functions of both course and instructor, as well as the ANOVA results.

Table 2

Descriptive Statistics of Facilitation of Discourse (FD) Scores as a Function of Instructor and Course

Course Instructor n M SD F p
A 1 10 4.38 0.778 3.745 .016*
A 2 16 4.86 0.318
A 3 16 4.39 0.614
A 4 16 4.01 1.021
B 5 10 4.65 0.552 2.346 .037*
B 6 24 3.81 1.073
B 7 10 3.67 0.926
B 8 19 4.04 0.821
B 9 16 3.79 0.830
B 10 14 4.70 1.241
B 11 10 4.70 0.436

Note. * p < .05.

Table 3

Descriptive Statistics of Direct Instruction (DI) Scores as a Function of Instructor and Course

Course Instructor n M SD F p
A 1 9 4.63 0.611 3.430 .023*
A 2 16 4.85 0.365
A 3 16 4.42 0.639
A 4 16 4.08 0.993
B 5 10 4.83 0.360 2.663 .020*
B 6 24 3.81 1.063
B 7 10 4.00 0.609
B 8 19 3.98 0.842
B 9 16 3.98 0.767
B 10 14 4.67 0.938
B 11 10 4.67 0.667

Note. * p < .05.

Table 4

Descriptive Statistics of Instructional Design and Organization (DO) Scores as a Function of Instructor and Course

Course Instructor n M SD F p
A 1 10 4.48 0.731 4.415 .008*
A 2 16 4.91 0.272
A 3 16 4.31 0.814
A 4 16 4.13 0.626
B 5 10 4.80 0.468 1.934 .083
B 6 24 4.21 0.803
B 7 10 4.15 0.412
B 8 19 4.49 0.852
B 9 16 4.23 0.790
B 10 14 4.84 0.896
B 11 11 4.84 0.358

Note. * p < .05.

In looking at the overarching research question—To what extent do students report different TP scores in different sections of the same course having identical design but with different instructors?—we found statistically significant differences. For the first research sub-question—To what extent do student perceptions of FD of different sections of the same course vary due to the instructors?—the ANOVA found statistically significant differences in both Course A, F (3, 54) = 3.745, p = .016, ω² = 0.12, and Course B, F (6, 96) = 2.346, p = .037, ω² = 0.07. For the second research sub-question—To what extent do student perceptions of DI of different sections of the same course vary due to the instructors?—the ANOVA likewise found statistically significant differences by instructor for both courses: Course A, F (3, 53) = 3.430, p = .023, ω² = 0.11, and Course B, F (6, 96) = 2.663, p = .020, ω² = 0.09. Last, in answering the third research sub-question—To what extent do student perceptions of DO of different sections of the same course vary due to the instructors?—results from the ANOVA were split. Course A showed significant differences by instructor, F (3, 54) = 4.415, p = .008, ω² = 0.15, whereas Course B, F (6, 97) = 1.934, p = .083, did not reach statistical significance. In summary, statistically significant instructor variation was observed for all TP sub-elements except DO in Course B. The effect sizes, expressed as omega-squared (ω²), a conservative estimate of the proportion of explained variance due to the independent variable (e.g., Privitera, 2017), are relatively small, ranging from 0.07 to 0.15. Thus, about 7% to 15% of the variation in students’ perceptions of the TP sub-elements is attributable to the different course instructors.
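
For reference, omega-squared can be derived from the one-way ANOVA sums of squares. The sketch below applies the standard formula, ω² = (SS_between − df_between × MS_within) / (SS_total + MS_within), to the per-instructor groups; it is an illustration, not the authors’ code.

```python
# Minimal sketch: omega-squared effect size for a one-way ANOVA,
# computed directly from per-group score arrays.
import numpy as np

def omega_squared(groups):
    """Conservative estimate of variance explained by the grouping factor."""
    scores = np.concatenate(groups)
    grand_mean = scores.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    ss_total = ss_between + ss_within
    df_between = len(groups) - 1
    ms_within = ss_within / (len(scores) - len(groups))
    return (ss_between - df_between * ms_within) / (ss_total + ms_within)
```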

Discussion and Implications

As online course offerings continue to grow, investigations into teaching presence are of great importance. Explaining how deep and meaningful learning occurs within a community through the interaction of the three presences (cognitive, social, and teaching), the CoI framework “describes and measures the elements of collaborative online learning experiences” (Caskurlu, 2018, p. 1). TP is crucial to students’ perceived and actual learning and satisfaction (Caskurlu et al., 2020; Garrison & Cleveland-Innes, 2005); determining the extent to which students report different TP scores in different sections of the same course with identical design but different instructors is therefore important. The findings from this study reveal that students do recognize differences in instructors’ direct instruction, facilitation of discourse, and the course’s instructional design and organization (Garrison & Arbaugh, 2007). Using a one-way ANOVA to compare students’ teaching presence scores (FD, DI, and DO), our findings show a significant instructor influence on students’ reported TP scores. Next, we discuss potential explanations as to what factors may have led to our findings.

First, and not surprisingly, our findings align with previous CoI framework research by showing that students do recognize differences between instructors of the same course for DI. As discussions are a medium in which instructors, as subject matter experts, provide DI by sharing “intellectual and scholarly leadership” (Caskurlu, 2018, p. 3), directing and providing feedback on the discussion boards is one way to ensure learners correctly understand and apply course topics (Garrison & Arbaugh, 2007). Beyond discussion commentary, learner feedback and assessment from the instructor are a focus of DI. While feedback is normally an individualized and personalized aspect, the use of “canned” feedback could demotivate students (Cole et al., 2017). York and Richardson (2012) state that “timely, relevant, and adequate feedback can influence a learner’s perception of interaction” (p. 88); thus, feedback characteristics, style, and use could explain differences in reported DI scores.

Additionally, discussions are generally the focus when investigating TP in online contexts (see Shea et al., 2010). Instructor and peer activity in the discussion boards may therefore have influenced both DI and FD scores and the variance we found. The difference between the design of the discussion questions (set prior to the start of the course) and instructors’ FD in discussions lies in how effectively instructors guide and direct students to connect with course content in their learning. Both Course A and Course B showed significant differences between instructors of the same course, leading us to believe that instructor or peer activity in the course discussions played a role in the differences we found. Further research, such as qualitative analysis of discussion content and the role of peers, is required to confirm our hypothesis.

Typically, FD includes activities where instructors “review and comment upon student responses, raise questions and make observations to move discussions in a desired direction, keep discussion moving efficiently, draw out inactive students, and limit the activities of dominating posters when they become detrimental to the learning of the group” (Garrison & Arbaugh, 2007, p. 164). Therefore, how students accept or interpret these interactions from their instructor may explain the reported differences we found. In a study conducted by Morgan (2011), considerable variation was found in how instructors perceive and use the discussion boards (e.g., active instructor discussion participation vs. minimal activity). This variance in instructor participation could also be amplified by an instructor’s FD. Arbaugh and Hwang (2006) explain that “Facilitating Discourse can be done by anyone with facilitation training and skills, but only content experts can recognize content-related misconceptions or refer students to additional materials relevant to course material” (p. 12). While each instructor had a variety of teaching and professional experience (see Table 1), it is unclear whether any instructor held additional training or skills, specifically in facilitation, which may have impacted learners’ perceived differences.

Dispersed between the instructor and students, TP helps to “provide students practical insights on how to be actively involved in the course thereby constructing their knowledge through collaboration, interaction with others, and experiencing others’ points of views” (Caskurlu et al., 2020, p. 11). While TP is most often thought of in terms of the instructor, and the CoI survey items all refer to the instructor’s actions, an often-overlooked component of FD is the role of peer interactions and their influence on reported FD scores. Given the focus on the meaningful (collaborative-constructivist) learning experience (Swan et al., 2009) in a CoI, peer interactions could be a factor in the differences found between the FD and DI scores of individual instructors in both courses in this study, not necessarily the instructors’ actions alone. Both instructor and peer interactions may have contributed to the 7% to 15% of variation in students’ perceptions of the TP sub-elements. This possibility is supported by Shea, Fredericksen, et al.’s (2003) results: they found that students’ reported perceptions of effective peer discourse facilitation were almost as high as those of the course instructor (i.e., peer FD scores were close to the reported instructor FD scores).

A finding we were not expecting was the significant difference between course instructors for the DO sub-element. Since the courses in this study follow the model of using “standard” courses (i.e., courses designed by a lead instructor and then copied across multiple sections), we did not expect to find differences. While Course B supported this hypothesis, Course A showed significant differences between instructors. A possible explanation is that Course A, as an introductory course, serves as a launch into the field, providing learners opportunities to explore a range of instructional design topics, including some of their own choosing. More specifically, the course lead for Course A advised individual instructors to bring in outside resources, information, and points of view. Instructors’ flexibility to add their own content to the course (via additional content, resources, or required readings) may have led students to report these differences as part of the design and organization of the course. Nonetheless, the findings indicate that teaching matters, and good teaching is likely to occur when good course design is in place.

Furthermore, as instructors had varied backgrounds (e.g., Instructor 1 had 10 years of instructional design experience, and Instructor 3 had 22 years of business experience), the content and resources each individual instructor added to the course (e.g., adding resources, creating videos, or changing readings or the focus of weekly topics) could differ widely and could spark (or deter) interest in the student population, thereby explaining the significant difference and explained variance. This possible explanation aligns with Anderson et al.’s (2001) finding that “the students and the teacher have expectations of the teacher communicating content knowledge that is enhanced by the teacher’s personal interest, excitement and in-depth understanding of the content” (p. 8), which, based on each instructor’s background, may differ from instructor to instructor. As described earlier, each course started with the same DO. However, while DO is generally part of the planned, pre-course portion of the course, it can also occur while the course is running, as it is meant to be flexible and adaptable to student needs (Shea, Fredericksen, et al., 2003). Therefore, the changes each individual instructor made to the live, running course could have affected the DO scores, leading to the reported differences seen in Course A.

Last, in looking specifically at the three sub-elements, Shea et al. (2006) argue that TP consists of only two sub-elements: (a) DO and (b) FD and DI combined. Caskurlu (2018) supports this claim with findings from a confirmatory factor analysis that yielded a high covariance between the latter two sub-elements. Garrison and Arbaugh (2007) found that students, especially at the undergraduate level, may not be able to differentiate between FD and DI; Caskurlu (2018) further explains this as students not being able to distinguish between the items used to measure FD and DI. In our study, we also found high correlations between these two sub-elements (e.g., r = .930 for Course A).

Limitations and Future Research

While our findings provide unique insights into instructional design by revealing variation in TP for the same course taught by different instructors, the study is not free from potential threats to internal and/or external validity. First, as this was an exploratory study of data retrieved from one online master’s program in education, the interpretation of the findings may be limited to programs with similar students and instructors. Additional studies in various online settings, courses, or disciplines are warranted to enhance the findings’ generalizability.

Second, while we found variation in students’ TP scores by instructor, it is still unknown what factors contributed to the observed variation and how peer interactions interplay in it. Thus, qualitative investigations will be crucial in developing a further understanding of these findings—for example, what specific strategies did each instructor use in their course (e.g., using audio and video elements, actively participating on discussion boards, answering e-mails quickly, providing frequent feedback, or sharing personal experiences) (Aragon, 2003)?

Finally, along with the explosion of online learning opportunities, discussion of the CoI framework from theoretical and psychometric perspectives has been evolving (see Kozan & Caskurlu, 2018). The results of this study suggest further opportunity for exploration through CoI survey redesign: TP is defined as being “distributed between students and instructor” (Garrison et al., 2000, as cited in Caskurlu et al., 2020, p. 11), yet the TP items on the CoI survey refer only to “the instructor” in the question stems (Caskurlu et al., 2020, p. 11). Additionally, Caskurlu et al. (2020) state that research into these peer interactions within a CoI is vital, as they “provide students practical insights on how to be actively involved in the course thereby constructing their knowledge through collaboration, interaction with others, and experiencing others’ points of views” (p. 11). Therefore, in its current state, by focusing only on the instructor, the CoI instrument misses measuring other dynamic interactions (e.g., peer-to-peer) crucial in a CoI (Kozan & Caskurlu, 2018). Moreover, the high correlations we report also illustrate that the three sub-elements of TP (FD, DI, and DO) have sizable conceptual overlaps or dependencies among them. We anticipate further development of and active discussion on defining TP will continue in the field, which may lead to better indicators of the role the instructor plays versus the roles peers play in online teaching presence scores. While these limitations set a boundary on the contributions of the current quantitative findings, they also suggest key directions for future studies to develop a deeper understanding of how TP is cultivated through the dynamic interactions of course design, instructors, and students. We hope our empirical quantitative evidence provides new insights for future research on TP.

Conclusion

Previous research (see Anderson et al., 2001; Archer, 2010; Shea et al., 2010) has called for additional examinations of online courses focusing on TP and its sub-elements; this study was designed to fill that void. Using the CoI framework, we found statistically significant differences in TP scores between sections of two online courses with identical course design taught by different instructors. While we have discussed possible reasons for these differences, we call for and anticipate further research to define TP and its sub-elements, especially regarding peer interactions and the role they play in a CoI. Ultimately, our hope is that this study and its findings help move both conversations and research forward regarding TP and its sub-elements.

Acknowledgments

We would like to acknowledge and thank Dr. James Lehman for his help and guidance on this paper.

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

References

Akyol, Z., & Garrison, D. R. (2008). The development of a Community of Inquiry over time in an online course: Understanding the progression and integration of social, cognitive and teaching presence. Online Learning Journal, 12(3), 3-22. https://doi.org/10.24059/olj.v12i3.72

Anderson, T., Rourke, L., Garrison, D. R., & Archer, W. (2001). Assessing teaching presence in a computer conference context. Online Learning Journal, 5(2), 1-17. https://www.doi.org/10.24059/olj.v5i2.1875

Arbaugh, J. B. (2008). Does the Community of Inquiry framework predict outcomes in online MBA courses? International Review of Research in Open and Distance Learning, 9(2), 1-21. https://doi.org/10.19173/irrodl.v9i2.490

Arbaugh, J. B., Cleveland-Innes, M., Diaz, S. R., Garrison, D. R., Ice, P., Richardson, J. C., & Swan, K. P. (2008). Developing a community of inquiry instrument: Testing a measure of the Community of Inquiry framework using a multi-institutional sample. The Internet and Higher Education, 11(3-4), 133-136. https://doi.org/10.1016/j.iheduc.2008.06.003

Arbaugh, J. B., & Hwang, A. (2006). Does “teaching presence” exist in online MBA courses? The Internet and Higher Education, 9(1), 9-21. https://doi.org/10.1016/j.iheduc.2005.12.001

Archer, W. (2010). Beyond online discussions: Extending the community of inquiry framework to entire courses. The Internet and Higher Education, 13(1-2), 69-69. https://doi.org/10.1016/j.iheduc.2009.10.005

Aragon, S. R. (2003). Creating social presence in online environments. New Directions for Adult and Continuing Education, 100, 57-68. https://doi.org/10.1002/ace.119

Blignaut, A. S., & Trollip, S. R. (2005). Between a rock and a hard place: Faculty participation in online classrooms. Education as Change, 9(2), 5-23. https://doi.org/10.1080/16823200509487114

Boston, W., Díaz, S. R., Gibson, A. M., Ice, P., Richardson, J., & Swan, K. (2010). An exploration of the relationship between indicators of the Community of Inquiry framework and retention in online programs. Online Learning Journal, 13(3), 3-19. https://doi.org/10.24059/olj.v13i3.1657

Brower, H. H. (2003). On emulating classroom discussion in a distance-delivered OBHR course: Creating an online community. Academy of Management Learning & Education, 2(1), 22-36. http://www.jstor.org/stable/40214163

Caskurlu, S. (2018). Confirming the subdimensions of teaching, social, and cognitive presences: A construct validity study. The Internet and Higher Education, 39, 1-12. https://doi.org/10.1016/j.iheduc.2018.05.002

Caskurlu, S., Maeda, Y., Richardson, J. C., & Lv, J. (2020). A meta-analysis addressing the relationship between teaching presence and students’ satisfaction and learning. Computers and Education, 157, 103966. https://doi.org/10.1016/j.compedu.2020.103966

Castellanos-Reyes, D. (2020). 20 Years of the Community of Inquiry framework. TechTrends, 64, 557-560. https://doi.org/10.1007/s11528-020-00491-7

Cole, A. W., Nicolini, K. M., Anderson, C., Bunton, T., Cherney, M. R., Fisher, V. C., Draeger, R., Jr., Featherston, M., Motel, L., Peck, B., & Allen, M. (2017). Student predisposition to instructor feedback and perceptions of teaching presence predict motivation toward online courses. Online Learning Journal, 21(4), 245-262. https://doi.org/10.24059/olj.v21i4.966

Collison, G., Elbaum, B., Haavind, S., & Tinker, R. (2000). Facilitating online learning: Effective strategies for moderators. Atwood Publishing.

Coppola, N. W., Hiltz, S. R., & Rotter, N. G. (2004). Building trust in virtual teams. IEEE Transactions on Professional Communication, 47(2), 95-104. https://doi.org/10.1080/07421222.2002.11045703

Epp, C. D., Phirangee, K., & Hewitt, J. (2017). Student actions and community in online courses: The roles played by course length and facilitation method. Online Learning, 21(4), 53-77. https://doi.org/10.24059/olj.v21i4.1269

Fiock, H. (2020). Designing a Community of Inquiry in online courses. The International Review of Research in Open and Distributed Learning, 21(1), 135-153. https://doi.org/10.19173/irrodl.v20i5.3985

Garrison, D. R., & Akyol, Z. (2013). The Community of Inquiry theoretical framework. In M. G. Moore (Ed.), Handbook of distance education (pp. 104-119). Routledge.

Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87-105. https://doi.org/10.1016/S1096-7516(00)00016-6

Garrison, D. R., & Arbaugh, J. B. (2007). Researching the Community of Inquiry framework: Review, issues, and future directions. The Internet and Higher Education, 10(3), 157-172. https://doi.org/10.1016/j.iheduc.2007.04.001

Garrison, D. R., & Cleveland-Innes, M. (2005). Facilitating cognitive presence in online learning: Interaction is not enough. American Journal of Distance Education, 19(3), 133−148. https://doi.org/10.1207/s15389286ajde1903_2

Gurley, P. (2018). Educators’ preparation to teach, perceived teaching presence, and perceived teaching presence behaviors in blended and online learning environments. Online Learning Journal, 22(2), 197-220. https://doi.org/10.24059/olj.v22i2.1255

Hailey, D. E., Grant-Davie, K., & Hult, C. A. (2001). Online education horror stories worthy of Halloween: A short list of problems and solutions in online instruction. Computers and Composition, 18, 387-397. https://doi.org/10.1016/S8755-4615(01)00070-6

Hart, C. (2012). Factors associated with student persistence in an online program of study: A review of the literature. Journal of Interactive Online Learning, 11, 19-42. http://www.ncolr.org/jiol/issues/pdf/11.1.2.pdf

Harwell, M. R., Rubinstein, E. N., Hayes, W. S., & Olds, C. C. (1992). Summarizing Monte Carlo results in methodological research: The One- and two-factor fixed effects ANOVA cases. Journal of Educational Statistics, 17(4), 315-339. https://doi.org/10.3102/10769986017004315

Huberty, C. J., & Morris, J. D. (1989). Multivariate analysis versus multiple univariate analyses. Psychological Bulletin, 105, 302-308. https://doi.org/10.1037/0033-2909.105.2.302

Khan, A., & Rayner, G. D. (2003). Robustness to non-normality of common tests for the many-sample location problem. Advances in Decision Sciences, 7(4), 187-206. https://doi.org/10.1155/S1173912603000178

Kozan, K., & Caskurlu, S. (2018). On the Nth presence for the Community of Inquiry framework. Computers and Education, 122, 104-118. https://doi.org/10.1016/j.compedu.2018.03.010

Lowenthal, P. R., & Parscal, T. (2008). Teaching presence online facilitates meaningful learning. The Learning Curve, 3(4), 1-2. https://www.researchgate.net/publication/265376234_Teaching_Presence_Online_Facilitates_Meaningful_Learning

Morgan, T. (2011). Online classroom or community-in-the-making? Instructor conceptualizations and teaching presence in international online contexts. International Journal of E-Learning and Distance Education, 25(1), 1-13. http://www.ijede.ca/index.php/jde/article/view/721/

Palloff, R., & Pratt, K. (1999). Building learning communities in cyberspace. Jossey-Bass.

Phirangee, K., Epp, C. D., & Hewitt, J. (2016). Exploring the relationships between facilitation methods, students’ sense of community and their online behaviours. Online Learning Journal, 20(2). https://doi.org/10.24059/olj.v20i2.775

Phirangee, K., & Malec, A. (2017). Othering in online learning: An examination of social presence, identity, and sense of community. Distance Education, 38(2), 160-172. https://doi.org/10.1080/01587919.2017.1322457

Privitera, G. (2017). Statistics for the behavioral sciences (3rd ed.). Sage.

Puzziferro, M., & Shelton, K. (2008). A model for developing high-quality online courses: Integrating a systems approach with learning theory. Online Learning Journal, 12(3-4), 119-136. https://doi.org/10.24059/olj.v12i3-4.1688

Richardson, J. C., Arbaugh, J. B., Cleveland-Innes, M., Ice, P., Swan, K., & Garrison, D. R. (2010). Using the Community of Inquiry framework to inform effective instructional design. In L. Moller & J. Huett (Eds.), The next generation of distance education (pp. 97-125). Springer. https://doi.org/10.1007/978-1-4614-1785-9_7

Richardson, J. C., Ice, P., & Swan, K. (2009, August 4-7). Tips and techniques for integrating social, teaching, & cognitive presence into your courses [Poster presentation]. Conference on Distance Teaching & Learning, Madison, WI.

Rovai, A. (2007). Facilitating online discussions effectively. The Internet and Higher Education, 10(1), 77-88. https://doi.org/10.1016/j.iheduc.2006.10.001

Shea, P., Hayes, S., Vickers, J., Gozza-Cohen, M., Uzuner, S., Mehta, R., Valchova, A., & Rangan, P. (2010). A reexamination of the community of inquiry framework: Social network and content analysis. Internet and Higher Education, 13(1-2), 10-21. https://doi.org/10.1016/j.iheduc.2009.11.002

Shea, P. J., Fredericksen, E. E., Pickett, A. M., & Pelz, W. E. (2003). A preliminary investigation of “teaching presence” in the SUNY learning network. In J. Bourne & J. C. Moore (Eds.), Elements of quality online education: Practice direction (Vol. 4, pp. 279-312). Sloan-C. http://hdl.handle.net/1802/2783

Shea, P., Li, C. S., & Pickett, A. (2006). A study of teaching presence and student sense of learning community in fully online and Web-enhanced college courses. The Internet and Higher Education, 9(3), 175-190. https://doi.org/10.1016/j.iheduc.2006.06.005

Shea, P. J., Pickett, A. M., & Pelz, W. E. (2003). A follow-up investigation of “teaching presence” in the SUNY Learning Network. Online Learning Journal, 7(2), 61-80. https://doi.org/10.24059/olj.v7i2.1856

Shea, P., Vickers, J., & Hayes, S. (2010). Online instructional effort measured through the lens of teaching presence in the community of inquiry framework: A re-examination of measures and approach. The International Review of Research in Open and Distance Learning, 11(3). https://doi.org/10.19173/irrodl.v11i3.915

Stewart, M. K. (2017). Communities of Inquiry: A heuristic for designing and assessing interactive learning activities in technology-mediated FYC. Computers and Composition, 45, 67-84. https://doi.org/10.1016/j.compcom.2017.06.004

Swan, K., Garrison, D., & Richardson, J. C. (2009). A constructivist approach to online learning: The Community of Inquiry framework. In C.R. Payne (Ed.), Information technology and constructivism in higher education: Progressive learning frameworks (pp. 43-57). IGI Global. http://doi.org/10.4018/978-1-60566-654-9.ch004

Swan, K., Shea, P., Richardson, J., Ice, P., Garrison, D. R., Cleveland-Innes, M., & Arbaugh, J. B. (2008). Validating a measurement tool of presence in online Communities of Inquiry. E-Mentor, 2(24), 1-12. http://www.e-mentor.edu.pl/_xml/wydania/24/543.pdf

Watson, S. L., Watson, W. R., Janakiraman, S., & Richardson, J. (2017). A team of instructors’ use of social presence, teaching presence, and attitudinal dissonance strategies: An animal behaviour and welfare MOOC. The International Review of Research in Open and Distributed Learning, 18(2), 69-91. https://doi.org/10.19173/irrodl.v18i2.2663

York, C. S., & Richardson, J. C. (2012). Interpersonal interaction in online learning: Experienced online instructors’ perceptions of influencing factors. Journal of Asynchronous Learning Network, 16(4), 83-98. https://files.eric.ed.gov/fulltext/EJ982684.pdf

Appendix

Community of Inquiry Survey Instrument (draft v. 14)

Teaching Presence

Design and Organization

1. The instructor clearly communicated important course topics.

2. The instructor clearly communicated important course goals.

3. The instructor provided clear instructions on how to participate in course learning activities.

4. The instructor clearly communicated important due dates/time frames for learning activities.

Facilitation

5. The instructor was helpful in identifying areas of agreement and disagreement on course topics that helped me to learn.

6. The instructor was helpful in guiding the class toward understanding course topics in a way that helped me clarify my thinking.

7. The instructor helped to keep course participants engaged and participating in productive dialogue.

8. The instructor helped keep the course participants on task in a way that helped me to learn.

9. The instructor encouraged course participants to explore new concepts in this course.

10. Instructor actions reinforced the development of a sense of community among course participants.

Direct Instruction

11. The instructor helped to focus discussion on relevant issues in a way that helped me to learn.

12. The instructor provided feedback that helped me understand my strengths and weaknesses relative to the course’s goals and objectives.

13. The instructor provided feedback in a timely fashion.

Social Presence

Affective Expression

14. Getting to know other course participants gave me a sense of belonging in the course.

15. I was able to form distinct impressions of some course participants.

16. Online or Web-based communication is an excellent medium for social interaction.

Open Communication

17. I felt comfortable conversing through the online medium.

18. I felt comfortable participating in the course discussions.

19. I felt comfortable interacting with other course participants.

Group Cohesion

20. I felt comfortable disagreeing with other course participants while still maintaining a sense of trust.

21. I felt that my point of view was acknowledged by other course participants.

22. Online discussions help me to develop a sense of collaboration.

Cognitive Presence

Triggering Event

23. Problems posed increased my interest in course issues.

24. Course activities piqued my curiosity.

25. I felt motivated to explore content-related questions.

Exploration

26. I used a variety of information sources to explore problems posed in this course.

27. Brainstorming and finding relevant information helped me resolve content-related questions.

28. Online discussions were valuable in helping me appreciate different perspectives.

Integration

29. Combining new information helped me answer questions raised in course activities.

30. Learning activities helped me construct explanations/solutions.

31. Reflection on course content and discussions helped me understand fundamental concepts in this class.

Resolution

32. I can describe ways to test and apply the knowledge created in this course.

33. I have developed solutions to course problems that can be applied in practice.

34. I can apply the knowledge created in this course to my work or other non-class-related activities.

5-Point Likert-Type Scale

1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree

 


Instructor Impact on Differences in Teaching Presence Scores in Online Courses by Holly Fiock, Yukiko Maeda, and Jennifer C. Richardson is licensed under a Creative Commons Attribution 4.0 International License.