International Review of Research in Open and Distributed Learning

Instructor Impact on Differences in Teaching Presence Scores in Online Courses



Introduction
The rapid growth of online educational courses has created changes in class communication and community dynamics. In face-to-face courses, learners can physically see and immediately receive feedback from instructors, whereas in online courses, communication lacks the vocal tones, nuances, and immediacy of responses (Hailey et al., 2001). These issues have led students to report areas of concern such as feelings of alienation or disconnectedness from others (Boston et al., 2010; Hart, 2012; Phirangee & Malec, 2017).
As such, the increase in online educational courses, online communication, and learner isolation issues have driven research into the role of community building, presence, and instructor interaction with learners in online environments (Phirangee et al., 2016). Specifically, interaction between online learners and instructors is of great importance to community building, learner success, and course satisfaction (Akyol & Garrison, 2008; Arbaugh, 2008). The Community of Inquiry (CoI) framework provides guidelines on how to develop online communities of inquiry for meaningful and effective learning environments (Garrison et al., 2000). A CoI is "a group of individuals who collaboratively engage in purposeful critical discourse and reflection to construct personal meaning and confirm mutual understanding" (Garrison & Akyol, 2013, p. 105). Garrison et al. (2000) developed the CoI framework as a working, dynamic model with three core presences: cognitive, social, and teaching. Garrison et al. (2000) state that while both social and cognitive (content-related) presences and interactions are vital for learners in online contexts, teaching presence is needed to help guide and focus interactions toward meeting the course goals and objectives (Arbaugh, 2008) and is used as "a mechanism for bridging the transactional distance between learner and instructor commonly associated with distance education" (Arbaugh & Hwang, 2006, p. 17). Of the three presences, teaching presence is of great consequence because "what instructors do in the classroom is critical to learners' sense of scholarly 'belonging' and ultimate persistence in their academic pursuits" (Shea et al., 2006, p. 176).

Community of Inquiry
The CoI framework represents a collaborative-constructivist model of learning in online environments (Castellanos-Reyes, 2020). Social presence refers to how connected, both socially and emotionally, learners are with others while in an online course or environment (Swan et al., 2008). Cognitive presence is the extent to which learners construct meaning in online environments where reflection and discourse are used (Swan et al., 2008). Teaching presence is defined as the design, facilitation, and direction of cognitive and social processes to support learning and is considered a key element in the establishment of online community (Garrison et al., 2000; Garrison & Arbaugh, 2007).
Teaching presence has three sub-elements: (a) facilitation of discourse, (b) direct instruction, and (c) instructional design and organization (Anderson et al., 2001; Caskurlu et al., 2020). However, it is important to note that some researchers (e.g., Shea et al., 2006) argue that teaching presence consists of only two sub-elements: (a) instructional design and organization and (b) facilitation of discourse and direct instruction combined. The authors of this study view the teaching presence sub-elements as independent concepts; therefore, in this research, we explored students' perceptions of the three teaching presence sub-elements across different instructors of the same online course to add to the existing research base.

Teaching Presence
The first sub-element, facilitation of discourse (FD), is defined as the methods or means instructors use to help students engage with the content, course information, and instructional materials (Anderson et al., 2001). Frequently, FD occurs within the discussion board, where the instructor can work with students to develop a shared understanding of course topics. When facilitating discourse among learners, instructors make observations of the students and act accordingly: they may raise additional questions, change the direction(s) of discussion, manage ineffective student comments, encourage considerations from different points of view, draw out inactive students, and comment on and answer students' concerns (Anderson et al., 2001; Brower, 2003; Coppola et al., 2004; Swan et al., 2008). Furthermore, research shows learners are likely to feel an increased sense of community and feel more connected to their instructors when instructors are active in the discussions (Epp et al., 2017; Phirangee et al., 2016; Rovai, 2007). Watson et al. (2017), in conducting a case study, found that 60% of teaching presence scores in a massive open online course were dedicated to facilitating discourse, showing the importance of learners' desire for instructor guidance during discussion participation. However, the instructor alone cannot guarantee a learner's engagement with course materials and content. As Anderson et al. (2001) state, "The teacher shares responsibility with each individual student for attainment of agreed upon learning objectives" (p. 7). Therefore, to encourage peer interactions within FD, the instructor can model appropriate behaviors, match students with similar ideas to elicit conversations, and provide opportunities for peer-to-peer interactions (Anderson et al., 2001; Richardson et al., 2009; Stewart, 2017).
The second sub-element, an instructor's direct instruction (DI), is characterized as sharing of subject matter knowledge or expertise with students in the form of candid intellectual and scholarly leadership (Anderson et al., 2001). Sometimes confused with FD, DI goes beyond facilitating discussions and discourse to include providing intellectual reasoning. Specifically, as the subject matter expert, the instructor "must play this role because of the need to diagnose comments for accurate understanding, inject sources of information, direct discussions in useful directions, and scaffold learner knowledge to raise it to a new level" (Garrison & Arbaugh, 2007, p. 164). Thus, it is not surprising that DI is typically associated with feedback and assessment as it provides learners with the necessary guidance to advance to complex topics while navigating through course materials, helping the students to achieve the courses' learning objectives. DI can also be given by peers, especially in situations where "students exchange and negotiate multiple perspectives with a group of knowledgeable peers," allowing for "opportunities for constructing new knowledge" (Stewart, 2017, p. 69). Particularly in online environments, Gurley (2018) found that DI by itself was not enough for learners to be able to construct knowledge; all three sub-elements of teaching presence (facilitation of discourse, direct instruction, and instructional design and organization) are critical for effective development of "critical thinking and practical inquiry" skills in online learners (p. 199).
Last, Anderson et al. (2001) explain that the third sub-element, instructional design and organization (DO), is an aspect of teaching presence that involves the design, structure, process, interaction, and evaluative elements of an online course. These include the personalized facets the instructor places into the course such as organization, communication plans, explanation of activities, and assignments, all typically individualized by each instructor. Generally, the element of course design is developed and created prior to the start of the course (preplanned). Stewart (2017) explains that using the CoI framework is crucial in helping "instructors more consistently design activities that put students in situations where they are likely to benefit from interacting with peers" (p. 68), a key component within teaching presence. Peer-to-peer design activities include opportunities where instructors can create, apply, and use collaborative learning principles within course assignments, activities, group work, and course discussions (Lowenthal & Parscal, 2008; Richardson et al., 2009).
Numerous studies (Coppola et al., 2004; Palloff & Pratt, 1999; York & Richardson, 2012) have noted the need for instructors to clearly design their course, being as "transparent" as possible, "because the social cues and norms of the traditional classroom are absent" from online courses (Arbaugh & Hwang, 2006, pp. 11-12). Shea, Pickett, et al. (2003) state, "Good learning environments are knowledge centered in that they are designed to achieve desired learning outcomes" (p. 63). While course design is often preplanned, DO elements can (and should) be implemented and/or adjusted during the live course so that instructors can actively guide learners toward meeting the learning outcomes (Shea, Pickett, et al., 2003).

Purpose of the Study and Research Question
As learner enrollment in online courses increases, it is important to understand how the instructor contributes to teaching presence scores, specifically focusing on the three sub-elements (FD, DI, and DO) (Anderson et al., 2001). Previous studies have explored the relationships between teaching presence and online discussions (Blignaut & Trollip, 2005; Collison et al., 2000; Lowenthal & Parscal, 2008; Watson et al., 2017); however, an instructor's teaching presence goes beyond just discussion board activity. As Fiock (2020) states, "we must not exclude how an instructor's presence can be established in other aspects of the course (i.e., course announcements, weekly overviews, feedback to students or student groups, or design of assignment and course activities)" (p. 140). DI activities, such as giving detailed feedback to the learner, providing additional resources as needed, and serving as the content expert (Richardson et al., 2010), may have a greater influence than design elements of teaching presence on students' reported perceptions.
Therefore, understanding the perceived differences in the three teaching presence sub-elements is an important first step in helping instructors focus their attention on specific strategies and course activities when challenged with designing, facilitating, and directing online learning, especially since, as Stewart (2017) states, "CoI also helps instructors focus on what they can control-they may not be able to ensure that students will be considerate or task-oriented, but they can ensure that the activity design sets students up for success" (p. 79).
Commonly, there are two models for online course development in large online programs: (a) courses designed by instructors and (b) "standard" or "canned courses" (Puzziferro & Shelton, 2008, p. 130). In the first model, where courses are designed by the instructor, the faculty member or instructor who is teaching the course develops all the course materials and activities. In the second model, "standard" or master courses are designed by one or more instructors in unison and then copied or cloned in the learning management system to multiple sections of the same course, which then may be taught by different instructors. As no two instructors are the same, typically the instructional design and organization of class materials will vary from course to course and from instructor to instructor, especially in courses designed by the instructor. In situations where standard or canned courses are used, there are multiple sections of the same course that share the same design elements, and therefore, it may be possible to assess teaching presence differences due to instructor variability.
As such, the purpose of this study was to determine if there are statistically significant differences in teaching presence scores among multiple sections of a "standard" course where each section has identical course design but is taught by different instructors. Currently, the number of teaching presence studies focusing exclusively on the three sub-elements is small, and results are inconclusive (Caskurlu et al., 2020). Therefore, we focused on instructor differences by controlling for the variation in course contents and design, as we used the data from multiple sections of the same course (i.e., "standard" courses).
Consequently, the course sections as initially launched were identical, with room for differences occurring during the implementation with the various instructors and their actions.

Dependent Variables
The CoI survey contains 34 items measuring presence in online courses using the three constructs (teaching, social, and cognitive presence). This study focused only on TP and its three sub-elements (FD, DI, and DO; see Appendix). The dependent variables in this study were the three sub-elements of TP. Items 1-4 addressed DO, items 5-10 addressed FD, and items 11-13 addressed DI (see Appendix for item descriptions in each sub-element). Students responded on a Likert-type scale (5 = strongly agree; 4 = agree; 3 = neutral; 2 = disagree; 1 = strongly disagree). Sub-element scores were computed by taking an average of the responses on the items relevant to the specific sub-element. Arbaugh et al. (2008) reported a high Cronbach's alpha for internal consistency of .94 for TP (M = 3.34, SD = 0.61) based on all 13 items and also reported construct validity evidence supporting the three-factor structure of the CoI with principal components analysis in graduate-level courses. For our study, Cronbach's alpha was computed for each sub-element, supporting high internal consistency with the current sample. The FD sub-element (5 items) had a Cronbach's alpha for Course A, α = .954, and Course B, α = .956. The DI sub-element (3 items) had a Cronbach's alpha for Course A, α = .887, and Course B, α = .817. The DO sub-element consisted of four items and had a Cronbach's alpha for Course A, α = .906, and Course B, α = .893.
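The scoring and reliability computations described above can be sketched in Python. This is an illustrative sketch only: the response matrix below is synthetic, not the study's data, and the function name `cronbach_alpha` is our own.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Hypothetical 1-5 Likert responses for a 3-item sub-element (e.g., DI):
# a shared "trait" per respondent plus item-level noise, clipped to the scale
base = rng.integers(3, 6, size=(30, 1))
responses = np.clip(base + rng.integers(-1, 2, size=(30, 3)), 1, 5)

alpha = cronbach_alpha(responses.astype(float))
sub_scores = responses.mean(axis=1)  # each student's sub-element score
```

Averaging the relevant items per student (as in `sub_scores`) reproduces the sub-element scoring rule stated above; the alpha function implements the standard item-variance formula.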

Independent Variable
The instructor of the course served as an independent variable in this study. There were four instructors in Course A and seven instructors in Course B. As shown in Table 1, the instructors for this study had varied backgrounds and experiences, but all held doctoral degrees in the field of instructional design (e.g., learning design and technology, learning technologies, instructional technologies, or distance education). Prior to teaching for the university in this study, all instructors went through a vetting process to ensure program and instructor quality. This vetting process included participation in a mentor/mentee program if the instructor had no or limited online teaching experience, to ensure they were prepared to teach in the program.

Table 1

Course | Instructor | Gender | Experience
Course A | 1 | F | 10 years instructional design; 3 years higher ed teaching

Statistical Analysis Procedure
Analyses focused on participating students' self-reported TP scores in relation to the instructor who taught their course. A one-way univariate fixed-effect between-subjects analysis of variance (ANOVA) was conducted to compare the instructor effect on TP sub-elements (i.e., FD, DI, and DO) in courses with the same instructional design and organization, but different facilitation of discourse and direct instruction.
The decision was made to conduct a separate univariate analysis by course and by sub-element instead of applying a multivariate analysis for the following reasons. First, we were not interested in comparing the TP differences by course. The analysis of the two courses aimed to cross-validate the findings and to verify whether the same conclusion was reached for the different courses. Second, while the sub-elements of TP were highly correlated in our study, ranging from r = .699 (DO and DI for Course B) to r = .930 (DI and FD for Course A), we view these sub-elements as independent concepts within TP (Anderson et al., 2001).
Third, the focus of our analysis was to shed light on each sub-element of TP, instead of TP as a whole, to understand its potential variation by instructor. While we acknowledge the risk of committing a Type I error by conducting multiple ANOVAs, Huberty and Morris (1989) support the use of multiple ANOVAs as applied in this study.
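The analytic choice above, one univariate ANOVA per sub-element and per course, can be sketched with SciPy. The groups below are simulated per-instructor scores under assumed means and spreads; they are not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated sub-element scores (1-5 scale) for four instructors of one course,
# 15 students each; the instructor means are illustrative assumptions
instructor_means = (4.2, 4.5, 3.9, 4.4)
groups = [np.clip(rng.normal(m, 0.5, size=15), 1, 5) for m in instructor_means]

# One-way fixed-effect between-subjects ANOVA: instructor -> sub-element score
f_stat, p_val = stats.f_oneway(*groups)
```

In the study's design this test would be repeated once per sub-element (FD, DI, DO) within each course, which is the source of the multiple-comparison concern acknowledged above.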
Prior to the ANOVA, a series of descriptive analyses were conducted to explore the impact of outliers in the dependent variables and to examine whether the underlying data assumptions for ANOVA were satisfied.

In checking for the equality of variances, Levene's test detected unequal variances for Course A: FD, F(3, 54) = 4.849, p = .005; DI, F(3, 53) = 4.231, p = .003; and DO, F(3, 54) = 4.786, p = .005. Course B showed unequal variances for DO, F(6, 97) = 2.238, p = .046, and DI, F(6, 96) = 2.359, p = .036, but equal variances for FD, F(6, 97) = 2.052, p = .066. This seems to be mainly due to the existence of outliers, which also contributed to negatively skewed distributions. In addition, we observed that score distributions for some instructors were affected by a ceiling effect, which may have restricted the score range for these distributions. We carefully evaluated these outliers and decided not to remove them.

With some evidence of nonnormality of data and unequal variances among instructors, we first explored the instructor variation on TP sub-elements with the application of a Kruskal-Wallis test, a nonparametric alternative to the one-way ANOVA (e.g., Harwell et al., 1992; Khan & Rayner, 2003). Because the statistical conclusions drawn from the results of the nonparametric test were consistent with those based on the ANOVA, and the ANOVA is usually robust to violation of the normality assumption even with small sample sizes unless the kurtosis statistic is high (Khan & Rayner, 2003), we concluded that any effect of these assumption violations is inconsequential; therefore, we report only the results of the ANOVA. Statistical significance for all inferential tests was evaluated at an alpha level of .05.
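The assumption checks described above, Levene's test for homogeneity of variance and the Kruskal-Wallis fallback, can be sketched as follows. The unequal group spreads are assumed for illustration; this does not reproduce the study's data or results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulated per-instructor score groups (1-5 scale) with deliberately
# unequal spreads, mimicking the heterogeneity the authors describe
params = ((4.4, 0.3), (4.0, 0.7), (4.5, 0.4), (3.8, 0.8))
groups = [np.clip(rng.normal(m, s, size=15), 1, 5) for m, s in params]

lev_stat, lev_p = stats.levene(*groups)   # homogeneity-of-variance check
kw_stat, kw_p = stats.kruskal(*groups)    # nonparametric alternative to ANOVA
```

A small Levene p-value flags unequal variances; running `kruskal` alongside `f_oneway` and comparing conclusions mirrors the cross-check the authors report.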

Results
Tables 2, 3, and 4 show descriptive summaries for each TP sub-element as functions of both course and instructor, as well as the ANOVA results. The effect sizes, represented as omega-squared (ω²), which is known as a conservative estimate of the proportion of explained variance due to the independent variable (e.g., Privitera, 2017), are relatively small, ranging from 0.07 to 0.15. Thus, about 7% to 15% of the variation in students' perceptions of the TP sub-elements is attributed to the different course instructors.
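The ω² estimate can be computed directly from the ANOVA sums of squares. This is a generic sketch with simulated groups, not a reproduction of Tables 2-4.

```python
import numpy as np

def omega_squared(groups):
    """ω² = (SS_between - df_between * MS_within) / (SS_total + MS_within)."""
    scores = np.concatenate(groups)
    grand_mean = scores.mean()
    k, n = len(groups), scores.size
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_within = ss_within / (n - k)
    return (ss_between - (k - 1) * ms_within) / (ss_between + ss_within + ms_within)

rng = np.random.default_rng(3)
# Simulated per-instructor score groups on a 1-5 scale (illustrative means)
groups = [np.clip(rng.normal(m, 0.5, size=15), 1, 5) for m in (4.3, 3.9, 4.6)]
effect = omega_squared(groups)  # proportion of variance due to instructor
```

Because ω² subtracts the expected within-group contribution from the between-group sum of squares, it runs smaller than eta-squared, which is why it is described above as conservative.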

Discussion and Implications
As the growth of online courses continues to rise, investigations into teaching presence are of great importance. Explaining how deep and meaningful learning occurs within a community through the interaction of the three presences (cognitive, social, and teaching), the CoI framework "describes and measures the elements of collaborative online learning experiences" (Caskurlu, 2018, p. 1). TP is crucial to students' perceived and actual learning and satisfaction (Caskurlu et al., 2020; Garrison & Cleveland-Innes, 2005); therefore, determining the extent to which students report different TP scores in different sections of the same course with identical design but different instructors is important. The findings from this study reveal that students do recognize differences in instructors' direct instruction, facilitation of discourse, and the course's instructional design and organization (Garrison & Arbaugh, 2007). Using a one-way ANOVA to compare students' teaching presence scores (FD, DI, and DO), our findings show a significant instructor influence on students' reported TP scores. Next, we discuss potential explanations as to what factors may have led to our findings.
First, and not surprisingly, our findings align with previous CoI framework research by showing that students do recognize differences between instructors of the same course for DI. As discussions are a medium in which instructors, as subject matter experts, provide DI by sharing "intellectual and scholarly leadership" (Caskurlu, 2018, p. 3), directing and providing feedback on the discussion boards is one way to ensure learners correctly understand and apply course topics (Garrison & Arbaugh, 2007). Beyond discussion commentary, the role of learner feedback or assessment from the instructor is one focus of DI. While normally an individualized and personalized aspect, the use of "canned" feedback could demotivate students (Cole et al., 2017). York and Richardson (2012) state that "timely, relevant, and adequate feedback can influence a learner's perception of interaction" (p. 88); feedback characteristics, style, and use could explain differences in reported DI scores.
Additionally, discussions are generally the focus when investigating TP in online contexts (see Shea et al., 2010). Therefore, taken together, peer and instructor activity in the discussion boards may have influenced both DI and FD scores and the variance we found. The difference between the design of the discussion questions (prior to the start of the course) and instructors' FD in discussions is in how instructors effectively guide and direct students to connect with course content in their learning. Both Course A and Course B showed significant differences between instructors of the same course, leading us to believe the instructor or peer activity in the course discussions played a role in the differences we found.
Further research, such as the use of qualitative analysis of discussion content and the role of peers, is required to confirm our hypothesis.
Typically, FD includes activities where instructors "review and comment upon student responses, raise questions and make observations to move discussions in a desired direction, keep discussion moving efficiently, draw out inactive students, and limit the activities of dominating posters when they become detrimental to the learning of the group" (Garrison & Arbaugh, 2007, p. 164). Therefore, how students accept or interpret these interactions from their instructor may explain the reported differences we found.
In a study conducted by Morgan (2011), considerable variation was found in how instructors perceive and use the discussion boards (e.g., active instructor discussion participation vs. minimal activity). This variance in instructor participation could also be amplified by an instructor's FD. Arbaugh and Hwang (2006) explain that "Facilitating Discourse can be done by anyone with facilitation training and skills, but only content experts can recognize content-related misconceptions or refer students to additional materials relevant to course material" (p. 12). While each instructor had a variety of teaching and professional experience (see Table 1), it is unclear whether any instructor held additional training or skills, specifically in facilitation, which may have impacted learners' perceived differences.
Dispersed between the instructor and students, TP helps to "provide students practical insights on how to be actively involved in the course thereby constructing their knowledge through collaboration, interaction with others, and experiencing others' points of views" (Caskurlu et al., 2020). While TP is most often thought of in terms of the instructor, and the CoI survey items all refer to the instructor's actions, an often-overlooked component of FD is the role of peer interactions and their influence on reported FD scores. Focused on the meaningful (collaborative-constructivist) learning experience (Swan et al., 2009) in a CoI, the role of peer interactions could be a factor in the differences found between the FD and DI scores of individual instructors in both courses in this study, not necessarily the instructors' actions alone. Both instructor and peer interactions may have contributed to the 7% to 15% of explained variance in students' perceptions of the TP sub-elements. This possibility is supported by Shea, Fredericksen, et al.'s (2003) results: they found students' reported perceptions of effective peer discourse facilitation were almost as high as those of the course instructor (i.e., peer FD scores were close to the reported instructor FD scores).
A finding we were not expecting was significant differences between course instructors for the DO sub-element. Since the courses in this study follow the model of using "standard" courses (i.e., courses designed by a lead instructor and then copied across multiple sections), we did not anticipate differences in this sub-element.
While Course B supported this hypothesis, Course A showed significant differences between instructors. A possible explanation is that Course A, as an introductory course, serves as a launch into the field, providing learners opportunities to explore a range of instructional design topics, including some of their own choosing. More specifically, the course lead for Course A advised individual instructors to bring in outside resources, information, and points of view. The instructors' flexibility to add their own content into the course (via additional content, resources, or required readings) may have led students to report these differences as part of the design and organization of the course. Nonetheless, the findings indicate that teaching matters, and good teaching is likely to occur when good course design is in place.
Furthermore, as instructors had varied backgrounds (e.g., Instructor 1 had 10 years of instructional design experience, and Instructor 3 had 22 years of business experience), the content and resources added to the course by each individual instructor (e.g., adding resources, creating videos, changing readings or the focus of weekly topics) could differ widely and could spark (or deter) interest in the student population, thereby explaining the significant difference and explained variance. This possible explanation aligns with Anderson et al.'s (2001) study, where they found that "the students and the teacher have expectations of the teacher communicating content knowledge that is enhanced by the teacher's personal interest, excitement and in-depth understanding of the content" (p. 8), which, based on each individual instructor's background, may differ from instructor to instructor. As described earlier, each course started with the same DO. However, while generally part of the planned portion of the course or pre-course work, DO can occur while the course is running, as it is meant to be flexible and adaptable to student needs (Shea, Fredericksen, et al., 2003). Therefore, the changes each individual instructor made to the live, running course could have impacted the DO scores, leading to the reported differences seen in Course A.
Last, in looking specifically at the three sub-elements, Shea et al. (2006) argue that TP consists of only two sub-elements: (a) DO and (b) FD and DI combined. Caskurlu (2018) supports this claim with findings from a confirmatory factor analysis that yielded a high covariance between the two sub-elements. Especially at the undergraduate level, Garrison and Arbaugh (2007) found in their study that students may not be able to differentiate between FD and DI. Caskurlu (2018) further explains this as students not being able to distinguish between the items used to measure both FD and DI. In our study, we also found high correlations between these two (e.g., r = .930 for Course A).

Limitations and Future Research
While our findings provide unique insights into instructional design by revealing variation in TP for the same course taught by different instructors, the study is not free from potential threats to internal and/or external validity. First, as this was an exploratory study on data retrieved from one online master's program in education, the interpretation of the findings may be limited to programs with similar students and instructors. Additional studies in various online settings, courses, or disciplines are warranted to enhance the findings' generalizability.
Second, while we found variation in students' TP by instructors, it is still unknown what factors contributed to the observed variations and how peer interactions interplay in the variation. Thus, qualitative investigations will be crucial in helping us develop further understanding of these findings: for example, what specific strategies did each instructor use in their course (e.g., using audio and video elements, actively participating on discussion boards, answering e-mails quickly, providing frequent feedback, sharing personal experiences) (Argon, 2003)?
Finally, along with the explosion of online learning opportunities, discussion of the CoI framework from theoretical and psychometric perspectives has been evolving (see Kozan & Caskurlu, 2018). The results of this study suggest further opportunity for exploration with CoI survey redesign, as TP is defined as being "distributed between students and instructor" (Garrison et al., 2000, as cited in Caskurlu et al., 2020, p. 11), yet the TP items on the CoI survey refer only to "the instructor" in the question stems (Caskurlu et al., 2020, p. 11). Additionally, Caskurlu et al. (2020) state that research into these peer interactions within a CoI is vital, as such interactions "provide students practical insights on how to be actively involved in the course thereby constructing their knowledge through collaboration, interaction with others, and experiencing others' points of views" (p. 11). Therefore, in its current state, by focusing only on the instructor, the CoI instrument misses measuring other dynamic interactions (e.g., peer-to-peer) crucial in a CoI (Kozan & Caskurlu, 2018). Moreover, our reported high correlations also illustrate that the three sub-elements of TP (FD, DI, and DO scores) have sizable conceptual overlaps or dependency among them. We anticipate that further development of, and active discussion on, defining TP will continue in the field, which may lead to a better indicator of the role that the instructor plays versus the roles peers play in online teaching presence scores. While these limitations set a boundary on the contributions of the current quantitative findings, they also suggest key directions and potential foci for future studies to develop deeper understanding of how TP is cultivated through the dynamic interactions of course design, instructors, and students. We hope our empirical quantitative evidence provides new insights for future research on TP.

Conclusion
Previous research (see Anderson et al., 2001; Archer, 2010; Shea et al., 2010) has called for additional inquiry into online course examinations focusing on TP and its sub-elements; this study was designed to fill this void. Using the CoI framework, we found statistically significant differences in TP scores between sections of two online courses with identical course design taught by different instructors. While reasons for the significant differences are discussed above, we call for and anticipate further research to define TP and its sub-elements, especially regarding peer interactions and the role they play in a CoI. Ultimately, our hope is that this study and its findings help move both conversations and research forward regarding TP and its sub-elements.

16. Online or Web-based communication is an excellent medium for social interaction.

Open Communication
17. I felt comfortable conversing through the online medium.
18. I felt comfortable participating in the course discussions.
19. I felt comfortable interacting with other course participants.

Group Cohesion
20. I felt comfortable disagreeing with other course participants while still maintaining a sense of trust.
21. I felt that my point of view was acknowledged by other course participants.
22. Online discussions help me to develop a sense of collaboration.

Triggering Event
23. Problems posed increased my interest in course issues.
25. I felt motivated to explore content-related questions.

Exploration
26. I used a variety of information sources to explore problems posed in this course.

27. Brainstorming and finding relevant information helped me resolve content-related questions.
28. Online discussions were valuable in helping me appreciate different perspectives.

5-Point Likert-Type Scale
1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree

Because course design was held constant, variation would emerge during the implementation with the various instructors and their actions. The research questions for this study were as follows: To what extent do students report different teaching presence (TP) scores in different sections of the same course having identical design but with different instructors?
1. To what extent do student perceptions of FD of different sections of the same course vary due to the instructors?
2. To what extent do student perceptions of DI of different sections of the same course vary due to the instructors?
3. To what extent do student perceptions of instructional DO of different sections of the same course vary due to the instructors?

Method

Study Setting and Data Source

We used part of a sizable archival data set collected by an online master's program in the field of instructional design offered by a large Midwestern public university. The program was the first to go fully online at the university in 2011. Once admitted to the program, learners take 8-week-long courses for five semesters. On average, 250 students per year are enrolled in the online program (with three admission start periods during the spring, summer, and fall semesters). While minimal demographic information was collected from the participants during data collection, students enrolled in the online program are generally full-time professionals and part-time students. Students range from 21 to 60 years of age, with a mean age of 37.5 years and a gender breakdown of 67.7% female and 32.3% male. The data used for this study were obtained from two purposively selected graduate-level education courses in the fall 2017 semester: (a) Course A, An Introduction to Learning Design and Technology; and (b) Course B, A Program Assessment and Evaluation course. The introduction course serves as a launch into the field and the master's program, covering broad topics such as learning theories, instructional design models, and emerging trends in the field. The assessment and evaluation course helps learners develop their expertise in program evaluation design, using evaluation models to examine and create learning and performance interventions. Student perceptions of TP were measured with the CoI survey (Arbaugh et al., 2008) every semester in the master's program. The survey was administered during the last week (week eight) of the learners' online courses as part of the program's course evaluation. Learners were offered 2% extra credit if 90% of students completed the survey. As part of the course evaluation process, the entire fall 2017 student population received the survey via an e-mail or course announcement, with at least one reminder e-mail or course announcement. For the study, 160 students voluntarily completed the survey (n = 57 among four sections in Course A, 57% response rate; n = 103 among seven sections in Course B, 65% response rate). Anonymity was assured, as no personal or identifiable information was requested and the survey was sent via anonymous link.
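Each CoI sub-scale score is simply the mean of its items' 5-point Likert responses. The following is a minimal sketch of that scoring step; the response values and the two-subscale key below are illustrative only, not the official CoI survey key or the study's data.

```python
from statistics import mean

# Hypothetical responses from one student on the survey's 5-point
# scale (1 = strongly disagree ... 5 = strongly agree), keyed by
# item number. Values are invented for illustration.
responses = {17: 5, 18: 4, 19: 4, 20: 3, 21: 4, 22: 5}

# Illustrative item-to-subscale grouping (check the published
# instrument for the actual key).
subscales = {
    "open_communication": [17, 18, 19],
    "group_cohesion": [20, 21, 22],
}

def subscale_scores(responses, subscales):
    """Average the Likert responses belonging to each subscale."""
    return {name: mean(responses[i] for i in items)
            for name, items in subscales.items()}

scores = subscale_scores(responses, subscales)
print(scores)
```

The same averaging, applied to the FD, DI, and DO item groups of the TP scale, yields the sub-element scores analyzed below.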
Table 1
Instructor Background and Teaching Experience as a Function of Course
Note. F = female; M = male.
We did not exclude them because we did not detect any issue with the data entries and considered them aligned with reported responses from the population. Consistent with the observed outliers, a set of Kolmogorov-Smirnov normality tests indicated that none of the TP sub-element data from either course followed a normal distribution. Course A showed the following: FD, D(57) = 0.244, p < .001; DI, D(57) = 0.302, p < .001; and DO, D(57) = 0.259, p < .001. Course B showed the following: FD, D(103) = 0.152, p < .001; DI, D(103) = 0.207, p < .001; and DO, D(103) = 0.219, p < .001.
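The Kolmogorov-Smirnov D statistic reported above is the largest gap between the empirical distribution of the scores and a theoretical normal CDF. The sketch below computes only D, to show what the test measures; in practice one would use scipy.stats.kstest (or a Lilliefors correction, since the mean and SD are estimated from the sample) to obtain p-values. The data are hypothetical, ceiling-skewed Likert-style scores.

```python
import math

def ks_statistic_normal(data):
    """One-sample K-S statistic D against a normal distribution
    parameterized by the sample's own mean and (n-1) SD."""
    n = len(data)
    mu = sum(data) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in data) / (n - 1))
    d = 0.0
    for i, x in enumerate(sorted(data)):
        # Normal CDF via the error function.
        cdf = 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))
        # Compare against the empirical CDF on both sides of the
        # step at x (the supremum can occur at either side).
        d = max(d, abs(cdf - i / n), abs(cdf - (i + 1) / n))
    return d

# Hypothetical skewed scores: a large D signals the departure from
# normality that the tests above detected in the TP sub-elements.
skewed = [5, 5, 5, 5, 4, 5, 5, 4, 5, 3]
print(round(ks_statistic_normal(skewed), 3))
```

Heavy ceiling effects like these are common in end-of-course Likert data, which is why the distributional checks above matter before running parametric tests.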
Statistically significant differences in DI scores by instructor were found for both courses for the first research sub-question—To what extent do student perceptions of DI of different sections of the same course vary due to the instructors?—with Course A showing F(3, 53) = 3.430, p = .023, η² = 0.11, and Course B showing F(6, 96) = 2.663, p = .020, η² = 0.09. The second research sub-question—To what extent do student perceptions of FD of different sections of the same course vary due to the instructors?—found statistically significant differences in both Course A, F(3, 54) = 3.745, p = .016, η² = 0.12, and Course B, F(6, 96) = 2.346, p = .037, η² = 0.07. Last, in answering the third research sub-question—To what extent do student perceptions of DO of different sections of the same course vary due to the instructors?—results from the ANOVA were split. Course A showed significant differences by instructor, F(3, 54) = 4.415, p = .008, η² = 0.15, but Course B, F(6, 97) = 1.934, p = .083, albeit trending toward significance, was not statistically significantly different. In summary, statistically significant instructor variation was observed among all TP sub-elements except DO for Course B.
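The F and η² values above come from standard one-way ANOVAs, where η² is the proportion of total variance in a TP sub-element attributable to section (i.e., instructor). A minimal sketch of that computation follows, using invented section means rather than the study's data.

```python
def one_way_anova(groups):
    """One-way ANOVA from raw group scores.
    Returns (F, df_between, df_within, eta_squared), where
    eta² = SS_between / SS_total."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    df_b = len(groups) - 1
    df_w = len(all_scores) - len(groups)
    f_stat = (ss_between / df_b) / (ss_within / df_w)
    eta_sq = ss_between / (ss_between + ss_within)
    return f_stat, df_b, df_w, eta_sq

# Hypothetical TP sub-element scores from three sections of one
# course taught by different instructors (illustrative only).
sections = [
    [4.2, 4.5, 4.0, 4.4, 4.3],
    [3.6, 3.8, 3.5, 3.9, 3.7],
    [4.1, 4.0, 4.2, 3.9, 4.3],
]
f_stat, df_b, df_w, eta_sq = one_way_anova(sections)
print(f"F({df_b}, {df_w}) = {f_stat:.2f}, eta-squared = {eta_sq:.2f}")
```

In reporting terms, a result like this would be written F(2, 12) with its η², the same format used for the course comparisons above. Note that with the non-normal distributions reported here, ANOVA results should be interpreted with its robustness assumptions in mind.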

Table 2
Descriptive Statistics of Facilitation of Discourse (FD) Scores as a Function of Instructor and Course

Table 3
Descriptive Statistics of Direct Instruction (DI) Scores as a Function of Instructor and Course

Table 4
Descriptive Statistics of Instructional Design and Organization (DO) Scores as a Function of Instructor In looking at the overarching research question-To what extent do students report different TP scores in different sections of the same course having identical design but with different instructors?-wefound statistically significant differences.Specifically, results from the ANOVA found statistically significant Integration 29.Combining new information helped me answer questions raised in course activities.30.Learning activities helped me construct explanations/solutions. 31.Reflection on course content and discussions helped me understand fundamental concepts in this class.32.I can describe ways to test and apply the knowledge created in this course.33.I have developed solutions to course problems that can be applied in practice.76 34.I can apply the knowledge created in this course to my work or other non-class-related activities.