International Review of Research in Open and Distributed Learning

A Meta-Analysis on the Effects of Synchronous Online Learning on Cognitive and Affective Educational Outcomes

Synchronous online learning (SOL) provides an opportunity for instructors to connect in real time with their students even though they are separated by geographical distance. This meta-analysis examines the overall effect of SOL on cognitive and affective educational outcomes.


Introduction
With the increase in the number of online courses (Seaman et al., 2018), research on online learning has grown (Martin et al., 2020). Primary research has made way for systematic reviews and meta-analyses conducted on various online learning models. While there are several meta-analyses of online learning, most focus on asynchronous online learning. There is still a need for a meta-analysis to examine the effects of synchronous online learning (SOL).

Synchronous Online Learning
SOL occurs when students and the instructor are together in "real time" but not at the "same place." SOL is a specific type of online learning gaining importance due to the convenience it offers to both students and instructors while enhancing interactivity. Instructors and students are realizing the necessity of immediate interaction in their online experience, which is often referred to as "same time, some place learning." Adding synchronous components to online courses can enrich meaningful interaction between student-instructor and student-student (Martin et al., 2012). As shown in Figure 1, SOL is considered a subset of online learning, and online learning a subset of distance education.

Figure 1
Synchronous Online Learning Conceptual Diagram
Synchronous online environments allow students and instructors to communicate using audio, video, text chat, interactive whiteboard, application sharing, instant polling, etc. as if face-to-face in a classroom.
Participants can talk, view each other through a webcam, use emoticons, and work together in breakout rooms. Zoom, Blackboard Collaborate, Elluminate, Adobe Connect, and Webex are some of the synchronous online technologies prevalent in higher education. Synchronous technologies can be incorporated into online courses for community-building or social learning and are better suited to discussing less complex issues, getting acquainted, or planning tasks (Hrastinski, 2008). Synchronous online technologies are less flexible in terms of time, but can be accessed from anywhere. They render immediate feedback and allow multi-modality communication (Martin & Parker, 2014).

Comparisons of Synchronous Online Learning
A number of empirical studies have compared SOL with the asynchronous online and face-to-face modes of learning, and a variety of significant findings have been reported in terms of specific learning outcomes, such as online interactions, sense of cooperation, sense of belonging, student emotions, cognitive presence, and critical and reflective thinking skills. We review these findings in the following sections.

Synchronous Versus Asynchronous Online Learning
Online learner interaction is one of the variables or outcomes empirically investigated when comparing synchronous and asynchronous learning environments. For example, using a content analysis method, Chou (2002) examined and compared online learners' interaction transcripts from synchronous and asynchronous discussions. In synchronous discussions, learners engaged in more social-emotional exchanges, using more two-way communication, whereas the interactions in asynchronous modes of learning were much more focused on the learning tasks, using primarily one-way communication with less interactive exchanges (Chou, 2002). Using a case study research design, Mabrito (2006) similarly explored differences in the patterns and nature of learner interactions between synchronous and asynchronous modes of communication by analyzing online learners' transcripts of discussions.
More recently, Peterson et al. (2018) found that asynchronous online cooperation yielded less sense of belonging and more negative emotions among learners, while the synchronous mode of communication positively influenced student sense of belonging, emotions, and cooperation in online groups. In a similar study, Molnar and Kearney (2017), as a result of their analysis of asynchronous and synchronous modes of online discussion, concluded that although both modes contributed to students' cognitive presence, students participating in synchronous Web discussions engaged in more cognitive presence than their peers in the asynchronous discussions. These studies clearly indicate that the synchronous mode of online communication can also positively influence cognitive processes and skills of online learners.

Synchronous Online Versus Face-to-Face Learning
Several studies have also empirically compared SOL with traditional face-to-face learning in terms of outcomes. Kunin et al. (2014) compared postgraduate dental residents' perceptions regarding the perceived effectiveness of synchronous and asynchronous modes of online learning to traditional face-to-face learning and found that participants perceived the face-to-face mode as being most conducive to their ability to learn, while also favoring the asynchronous over the synchronous mode after experiencing both. On the other hand, Garratt (2014) investigated whether a synchronous mode of instruction could be used effectively to teach a set of psychomotor skills to a cohort of paramedic students in comparison to face-to-face instruction of the same skills. Garratt (2014) found no significant difference in the skills performance results of the two groups, indicating that the synchronous mode of learning could be as effective as traditional face-to-face instruction to teach even complex psychomotor skills, although it should be noted that the very limited sample size was a serious limitation to the study. Haney et al. (2012) used synchronous and face-to-face modes of instruction to teach wound closure skills to two groups of paramedics. On tests of both knowledge and skills, the students who received the same instruction through videoconferencing performed at least as well as those who received traditional face-to-face instruction, while traditional face-to-face instruction was still perceived to be the more effective method of teaching (Haney et al., 2012).
In support of the equal or almost equal effectiveness of the synchronous mode of online learning in comparison to face-to-face learning, Siler and Vanlehn (2009) found that synchronous one-to-one tutoring worked at least as effectively as face-to-face tutoring in terms of students' gains in learning physics and several motivational outcomes, although the face-to-face tutoring was found to be more time-efficient and conducive to emotional exchanges, while also allowing more interaction. More recently, Francescucci and Rohani (2019) compared synchronous and face-to-face learning in terms of exam grades and perceived student engagement and found that students who received the synchronous online version of an introductory marketing course academically performed as successfully as their peers who took the face-to-face version of the same course. These studies cumulatively indicate that although the traditional face-to-face mode of learning is, as expected, perceived to be a more effective method of learning and instruction overall, the synchronous or asynchronous mode of online learning has the potential to help achieve desirable outcomes as effectively and successfully as conventional modes of learning and instruction.

Meta-Analysis on Synchronous Distance Education
Reviews of research have been conducted on distance education and exclusively on online learning. There have been a number of meta-analyses on distance education, specifically comparing face-to-face to online learning (Allen, 2004; Cook et al., 2008; Jahng et al., 2007; Shachar & Neumann, 2010; Todd et al., 2017; Zhao et al., 2005). However, we did not find a meta-analysis specifically examining SOL, comparing it to asynchronous online learning or to face-to-face learning, though we found a few studies examining SOL as a moderator variable (Bernard et al., 2004; Means et al., 2013; Williams, 2006). In the Bernard et al. (2004) review that examined 232 studies, synchronous and asynchronous delivery were examined as a moderator variable.
They found asynchronous distance education to have a small significant positive effect (g+ = 0.05) on student achievement, and synchronous distance education to have a small significant negative effect (g+ = -0.10). However, in this case, the studies were focused on all aspects of distance education and not specifically on online learning. Means et al. (2013) examined synchronicity as a moderator variable and found that it was not a significant moderator of online learning effectiveness. Williams (2006) also examined synchronicity in distance education and reported a positive effect size for synchronous relative to asynchronous delivery.

Purpose of the Study
While there are a few meta-analyses focusing on the broader comparison of online learning versus face-to-face or blended learning, there is a gap in the research comparing SOL with either face-to-face or asynchronous online learning. There is only one systematic review conducted on SOL (Martin et al., 2017) and a few moderator analyses on synchronous distance education (Bernard et al., 2004; Means et al., 2013; Williams, 2006). However, there is no meta-analysis focusing on SOL, though it is a critical aspect of online learning.
A meta-analysis can advance the field of SOL by providing information to contextualize what we know about online learning and technology and how it is applied (Oliver, 2014). Systematic reviews help develop a common understanding among researchers about the state of their field and improve future research to close gaps and eliminate inconsistencies. We hope to provide a quantitative synthesis of research literature on SOL from 2000 to 2019 and examine SOL's effectiveness in achieving educational outcomes.

Identification and Screening Process
We used the PRISMA flow model (Figure 2) to guide the process of identification, screening, eligibility, and inclusion of studies. The PRISMA guidelines were proposed by the Ottawa Methods Centre for reporting items for systematic reviews and meta-analyses (Moher et al., 2009). Our initial search identified n = 807 manuscripts, which was reduced to n = 529 after removing duplicate entries. To ensure consistent screening procedures, we hosted a discussion session with two team members and screened a random sample of five manuscripts for calibration purposes. After screening the titles and abstracts, full-text screening was conducted in two rounds with n = 28 manuscripts. After systematically applying our inclusion and exclusion criteria, n = 19 manuscripts qualified for final inclusion in the study. They were subjected to our coding and data extraction procedures.

Study Coding and Data Extraction
The research team developed and used a Google form to code the variables described in the sections that follow.

Control conditions and type
The number of control conditions was coded. This included one control with one synchronous condition, one control with more than one synchronous condition, one synchronous condition with more than one control, and more than one synchronous condition with more than one control. The control type was also coded as either asynchronous or face-to-face.

Course duration and synchronous session duration
The different options for course duration included: less than 15 weeks, 15 weeks, more than 15 weeks, and unknown. Synchronous session duration included: less than 30 minutes, 30 minutes to 2 hours, more than 2 hours, and unknown.

Instructor and student equivalence
Instructor equivalence was coded as same instructor, different instructor, and unknown, while student equivalence was coded as random assignment, non-random assignment with statistical control, non-random assignment without statistical control, and unknown.

Time and material equivalence
Time equivalence was coded as yes, no, and unknown, and material equivalence was coded as same curriculum materials, different curriculum materials, and unknown.
Interaction features
Learner-learner, learner-instructor, and learner-content interactions were coded as opportunity to interact, no opportunity to interact, and unknown.

Instructional teaching method
This was coded as lecture, interactive lesson, unknown, and other.

Synchronous technology
The type of synchronous technology, along with the synchronous features used, was coded.

Demographics
Types of synchronous learners (K-12, undergraduate, graduate, military, industry/business, professionals), discipline, gender and age of participants, and country were coded.

Effect sizes
Statistical information (M, SD, n) needed to calculate effect sizes was coded.

Dependent and Moderating Variables
Cognitive and affective educational outcomes were the dependent variables used in this study. Cognitive outcomes include measures such as learning, achievement, critical thinking skills, comprehension, and similar outcomes. The affective outcomes included learner satisfaction, emotions, attitudes, motivation, and related measures. Though it was our intention to also code for behavioral educational outcomes, only two studies reported on behavioral outcomes, and hence these were not part of this meta-analysis.
Several variables important in SOL were coded and examined as moderators. Though we coded for a number of variables, there was not sufficient information to examine all as moderators. Thus, only seven were chosen: two pedagogical (course duration and type of instructional method), one methodological (student equivalence), three demographic (learner level, discipline, country), and one publication type variable (publication source).

Effect Size Calculation and Data Analysis
Data were analyzed using the computer software Comprehensive Meta-Analysis, version 3 (CMA 3.0; Borenstein et al., 2014). The effect size used in the current meta-analysis was Hedges' g. First, the standardized mean difference (Cohen's d) was calculated by dividing the raw mean difference between the synchronous treatment condition and the control condition (asynchronous or face-to-face condition) by the pooled standard deviation of the two conditions: d = (X̄1 − X̄2) / Swithin, where Swithin = √[((n1 − 1)S1² + (n2 − 1)S2²) / (n1 + n2 − 2)]. Cohen's d was then converted to Hedges' g by applying the small-sample correction factor J = 1 − 3 / (4df − 1), with df = n1 + n2 − 2. Notations were borrowed from Borenstein et al. (2009).
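As a concrete illustration of this computation, the conversion from raw means to Hedges' g can be sketched in a few lines of Python. The numbers are invented for illustration and are not drawn from any of the primary studies.

```python
import math

def hedges_g(m_t, sd_t, n_t, m_c, sd_c, n_c):
    """Standardized mean difference (Cohen's d) with the small-sample
    correction factor J that yields Hedges' g (Borenstein et al., 2009)."""
    df = n_t + n_c - 2
    # Pooled standard deviation of the treatment and control conditions
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / df)
    d = (m_t - m_c) / sd_pooled
    # J removes the small-sample upward bias of d
    j = 1 - 3 / (4 * df - 1)
    return j * d

# Hypothetical data: a synchronous group vs. an asynchronous control;
# a positive g favors the synchronous condition
g = hedges_g(m_t=78.0, sd_t=10.0, n_t=30, m_c=74.0, sd_c=10.0, n_c=30)
```

With equal standard deviations of 10 and a raw difference of 4 points, d is 0.40 and the correction shrinks it slightly, to roughly g = 0.39.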
We had three types of effect size statistics. Most studies reported means, standard deviations, and sample sizes for the synchronous treatment condition and the control condition (i.e., asynchronous or face-to-face). One study reported the raw mean difference and significance of the difference (i.e., Cleveland-Innes & Ally, 2004), and one study reported Cohen's d (i.e., Francescucci & Rohani, 2019). The original data had 86 cases of effect size statistics in the 19 primary studies. Before conducting the meta-analysis, we had to deal with statistics that may have yielded dependent effect sizes within studies. For example, Peterson et al. (2018) reported multiple effect size statistics calculated from different affective measures. Ignoring the dependence issue would pose threats to the validity of meta-analytic results because it may result in a spuriously smaller standard error of the summary effect size and a higher risk of committing a Type I error (Ahn et al., 2012). The literature suggests procedures for handling the dependence, such as the averaging or weighted averaging method (Borenstein et al., 2009) or robust variance estimation (RVE; Hedges et al., 2010). Although RVE performs better than the averaging procedure in estimating unbiased standard errors (Moeyaert et al., 2017), it requires a large sample (i.e., number of primary studies) for accuracy (Tanner-Smith & Tipton, 2014). Therefore, we used the weighted averaging procedure to deal with the dependence issue. This resulted in 27 effect sizes in the 19 primary studies after we averaged effect size statistics of the same measure type (i.e., affective or cognitive) for each control group (i.e., asynchronous or face-to-face) within studies.
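The averaging step can be sketched as below, assuming a correlation of 1.0 between dependent effect sizes (the conservative assumption noted in the limitations) and using Borenstein et al.'s (2009) formula for the variance of a mean of correlated estimates. The inputs are hypothetical.

```python
import math

def average_dependent(gs, variances, r=1.0):
    """Combine m dependent effect sizes from one study into a single
    composite. With r = 1.0 the composite variance gains no precision
    over a single estimate, which is a conservative choice."""
    m = len(gs)
    g_bar = sum(gs) / m
    # Variance of the mean of correlated estimates:
    # (sum of variances + cross terms r * sqrt(vi * vj)) / m^2
    cross = sum(r * math.sqrt(variances[i] * variances[j])
                for i in range(m) for j in range(m) if i != j)
    var = (sum(variances) + cross) / m**2
    return g_bar, var

# Two affective measures from the same study and control group
g_bar, var = average_dependent([0.40, 0.20], [0.04, 0.04])
```

Note that with r = 1.0 and equal variances, the composite variance equals the variance of a single estimate, so no spurious precision is gained by combining measures.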
We employed a random-effects model for several reasons. First, the fixed-effect model assumes that all studies share one common effect size in the population (Borenstein et al., 2009), which allows only conditional inferences about the studies included in a meta-analysis (Field, 2001). Second, we hypothesized that the true effects were heterogeneous and that the proposed moderators may explain the heterogeneity.
Therefore, employing the random-effects model and assuming that the true effect sizes vary across studies was more appropriate and plausible. There were four conditions in the current meta-analysis, and we first estimated the overall effect size for each condition. The overall averaged effect size, standard error, confidence intervals, Z and its related p-value, and heterogeneity statistics (Q and its p-value, τ², and I²) were computed. The overall average effect size provides an estimate of the effects of SOL on educational outcomes. Its standard error and confidence intervals provide evidence of the estimation accuracy. The Z and its p-value show whether the effect size estimate is statistically significant. Heterogeneity statistics provide evidence of the variation of the true effect sizes across studies. We also conducted moderator analyses on the four conditions to determine if the heterogeneity (if any) in effect sizes could be accounted for by pedagogical, methodological, demographic, and publication variables. All the moderators are categorical variables, and analyses were conducted with the mixed effects analysis (MEA) as implemented in CMA 3.0.
Finally, it was important to address the issue of publication bias, which arises when the published research is not representative of the population of work in the domain. In this meta-analysis, both journal articles and dissertations were included, which means some grey literature was accounted for, but there was still a risk of publication bias. Several strategies were used to assess publication bias. Funnel plots showing the relationship between the standard errors of the included studies and their effect sizes (Borenstein, 2009) illustrate the spread of the studies. In addition, the classic fail-safe N (Rosenthal, 1979), which represents the number of missing studies needed to bring the p-value to a non-significant level, was included. Finally, we used Orwin's fail-safe N (Orwin, 1983), which computes the number of missing studies needed to bring the summary effect below a specified value other than zero.
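The two fail-safe N calculations can be sketched as follows; all inputs are made up for illustration and are not the study's actual values.

```python
import math

def rosenthal_fail_safe_n(z_values, z_alpha=1.645):
    """Classic fail-safe N (Rosenthal, 1979): how many unpublished
    null-result studies would be needed to raise the combined one-tailed
    p-value above alpha (z_alpha = 1.645 for alpha = .05)."""
    k = len(z_values)
    return max(0, math.floor((sum(z_values) / z_alpha) ** 2 - k))

def orwin_fail_safe_n(g_mean, k, g_criterion):
    """Orwin's fail-safe N (Orwin, 1983): how many missing zero-effect
    studies would reduce the summary effect below g_criterion."""
    return max(0, math.ceil(k * (g_mean - g_criterion) / g_criterion))

# Hypothetical inputs: seven studies with moderate z-scores,
# and a summary g of 0.37 judged against a trivial-effect criterion of 0.10
n_classic = rosenthal_fail_safe_n([2.0] * 7)
n_orwin = orwin_fail_safe_n(g_mean=0.37, k=7, g_criterion=0.10)
```

A large fail-safe N relative to the number of included studies suggests the summary effect is robust to unpublished null results; a small one signals vulnerability to publication bias.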

Publication Patterns
Table 3 shows the publication details of the 19 journal articles and dissertations included in this study. The studies were published in a wide array of journals in several different disciplines, and dissertations were completed at institutions of higher education across the United States. Among the studies, three were published in each of 2008, 2010, and 2014, while there were two in each of 2012, 2015, 2016, and 2018, and one study in each of 2004 and 2006.

Characteristics of the Primary Studies
Descriptive information about the 19 primary studies is presented in Table 4. The final sample consisted of k = 27 independent effect sizes (across the four models) and N = 4,409 participants. A total of n = 1,114 students received SOL, and the numbers of students who received asynchronous online learning and face-to-face learning were n = 1,079 and n = 2,216, respectively. Approximately half the studies were conducted with undergraduate students (n = 10, 52.6%), and the rest were conducted with graduates or professionals (n = 9, 47.4%). With respect to disciplines, the most frequently studied was education (n = 5, 26.3%), followed by business (n = 4, 21.1%) and medicine or nursing (n = 3, 15.8%). A majority of the studies were conducted in the United States (78.9%), and four others were conducted in Australia (i.e., Dyment & Downing, 2018), Canada (i.e., Cleveland-Innes & Ally, 2004), Japan (i.e., Shintani & Aubrey, 2016), and China (Taiwan) (i.e., Chen & Shaw, 2006). There were 12 journal articles (63.2%) and seven dissertations (36.8%).

Note. Asynch = asynchronous; F2F = face-to-face.

Overall Effect Sizes
Meta-analyses assume a normal distribution of observed effect sizes for accurate estimation (Borenstein et al., 2009). The distribution of Hedges' g is plotted in Figure 3, which suggests that effect sizes were approximately normally distributed. Given the within-study dependent effect sizes, we conducted meta-analyses of the four conditions separately (i.e., synchronous vs. asynchronous with cognitive outcomes, synchronous vs. asynchronous with affective outcomes, synchronous vs. face-to-face with cognitive outcomes, synchronous vs. face-to-face with affective outcomes). The overall effect size statistics for each of the four conditions are presented in Table 5. The effect size was statistically significant in only one model (synchronous vs. asynchronous with cognitive outcomes), and its confidence interval did not include zero.

Synchronous vs. Asynchronous With Cognitive Outcomes
Seven studies comparing SOL with asynchronous online learning in terms of cognitive outcomes are shown in Figure 4. The last line indicates the statistics for the summary effect. The results of the weighted average applying a random-effects model revealed a statistically significant effect size (g = 0.37, p = .02), with a 95% confidence interval of 0.055 to 0.679, indicating that SOL significantly and positively impacted students' cognitive outcomes. The significant Q-value suggests that the true effect sizes were heterogeneous across studies (Q = 28.63, p < .001), with 79% of the observed variance reflecting true heterogeneity (I² = 79.04).

Figure 4
Forest Plot of Cognitive Outcomes (Synchronous vs. Asynchronous)

Synchronous vs. Asynchronous with Affective Outcomes
The eleven studies that compared SOL to asynchronous online learning with affective outcomes are shown in Figure 5. The results of the weighted average applying a random-effects model revealed that SOL did not have a statistically significant effect on affective outcomes (g = 0.32, p = .051), with a 95% confidence interval of -0.001 to 0.641. The Q-value of homogeneity was statistically significant, indicating the true effect sizes varied across studies (Q = 50.19, p < .001), and a majority of the variation in the observed effect sizes was due to between-studies variation (I² = 80.08).

Synchronous vs. Face-to-Face with Cognitive Outcomes
Four studies comparing SOL with face-to-face learning in terms of cognitive outcomes are shown in Figure 6. Results revealed a statistically nonsignificant negative effect size (g = -0.20, 95% CI [-0.749, 0.352], p = .48), indicating that SOL did not significantly improve students' cognitive outcomes compared with traditional face-to-face learning. The Q-value was statistically significant, indicating that the true effect sizes varied across studies (Q = 29.82, p < .001), and a substantial proportion of the observed variation reflected true heterogeneity (I² = 89.94).

Figure 6
Forest Plot of Cognitive Outcomes (Synchronous vs. Face-to-Face)

Synchronous vs. Face-to-Face with Affective Outcomes
A final subset included five studies comparing SOL with face-to-face learning in affective outcomes, illustrated in Figure 7. Results revealed a statistically nonsignificant and small effect size (g = 0.20, 95% CI [-0.195, 0.568], p = .34), indicating that SOL did not significantly improve students' affective outcomes compared with the face-to-face learning mode. Heterogeneity statistics suggested that the true effect sizes varied across studies (Q = 22.52, p < .001), and a large proportion of the observed variance was between-study variation (I² = 82.24).

Figure 7
Forest Plot of Affective Outcomes (Synchronous vs. Face-to-Face)

Analysis of Moderator Variables
Since effect sizes were found to be heterogeneous across studies, moderator analyses were conducted to examine what factors may account for the heterogeneity of each condition. Seven moderating variables were chosen, falling into four categories: pedagogical, methodological, demographic, and publication variables.
The results from the moderator analyses can be found in the Appendix in Tables A through D.

Effect Sizes of Pedagogical Moderator Variables
Type of instructional method and course duration were examined as potential pedagogical variables moderating effect size estimates.
Instructional Method. For the condition of synchronous vs. asynchronous with cognitive outcomes, the type of instructional method did not moderate effect size estimates. Although studies employing interactive lessons had a significant effect size estimate (g = 0.626, p = .048) and studies employing lectures resulted in a nonsignificant effect size (g = 0.118, p = .302), there was no statistically significant difference between the two conditions (Q-value = 0.115, p = .735). The results of moderator analyses for the condition of synchronous vs. asynchronous with cognitive and affective outcomes are presented in Table A and Table B. For affective outcomes, we found a moderating effect of the type of instructional method on effect size results. Interactive lessons had an effect size estimate statistically significantly larger than lectures (Q-value = 10.756, p = .001) and the unknown condition (Q-value = 4.045, p = .044). Results of pedagogical moderator analyses for the condition of synchronous vs. face-to-face with cognitive outcomes and affective outcomes are presented in Table C and Table D, respectively. Since all studies employed lectures (k = 4) for the condition of synchronous vs. face-to-face with cognitive outcomes and all studies employed interactive lessons (k = 5) for the condition of synchronous vs. face-to-face with affective outcomes, the type of instructional method could not be examined as a moderator.
Course Duration. In the condition of synchronous vs. asynchronous with cognitive outcomes, studies with a course duration of less than one semester yielded statistically significantly larger effect sizes than those with a course duration of one semester or longer (Q-value = 5.364, p = .021). In the condition of synchronous vs. asynchronous with affective outcomes, although effect sizes were nonsignificant across all three conditions of course duration, there was a statistically significant difference between the duration of less than one semester and that of one semester or longer, with the former yielding a statistically significantly larger effect size than the latter (Q-value = 4.191, p = .041). However, course duration did not moderate effect size under the condition of synchronous vs. face-to-face with cognitive outcomes (Q-value = 0.050, p = .824). On the condition of synchronous vs. face-to-face with affective outcomes, it was found that effect sizes varied as a function of course duration, with shorter durations (i.e., less than one semester) having larger effect size estimates than durations of one semester or longer (Q-value = 14.019, p < .001).

Effect Sizes of Methodological Moderator Variables
Student Equivalence. Student equivalence was examined as a potential methodological variable moderating the effect size estimates. This variable indicates whether studies employed random or non-random assignment to distribute students to the treatment and control conditions. There were three studies employing random assignment and four studies employing non-random assignment when comparing the synchronous with the asynchronous condition in cognitive outcomes. Although both conditions yielded nonsignificant effect size estimates, non-random assignment had a statistically significantly larger effect size than random assignment (Q-value = 5.837, p = .016). On the condition of synchronous vs. asynchronous with affective outcomes, most studies employed non-random assignment (k = 6). Results revealed that student equivalence had moderating effects on effect sizes, with studies employing non-random assignment producing effect sizes statistically significantly larger than those employing random assignment (Q-value = 5.291, p = .021). Half the studies employed random assignment (k = 2) when the control type was face-to-face and the outcomes were cognitive variables. Student equivalence did not moderate the effect size estimates (Q-value = 0.136, p = .713). In the condition of synchronous vs. face-to-face with affective outcomes, there were three studies employing random assignment and only one study employing non-random assignment. An additional study did not report information on student assignment. Results revealed that student equivalence moderated the effect size estimates, with the study employing non-random assignment having a statistically significantly larger effect size than studies in the other two categories: studies employing random assignment (Q-value = 14.019, p < .001) and the study without information (Q-value = 19.331, p < .001).

Effect Sizes of Demographic and Publication Source Moderator Variables
Learner level, discipline, and country were examined as potential demographic variables to moderate effect sizes.We also hypothesized that effect sizes would vary as a function of publication source since studies with significant results or larger effect sizes tend to be published (Rothstein et al., 2005).
Learner Level. Results revealed that learner level did not moderate effect size in the two conditions with cognitive outcomes. However, effect sizes varied as a function of learner level when outcomes were affective. Although none of the effect sizes was significant, the effect size for graduate/professional learners was statistically significantly larger than that for undergraduates in both conditions (Q-value = 7.732, p = .005 for asynchronous, and Q-value = 10.570, p = .001 for face-to-face).

Note. synch = synchronous; asynch = asynchronous; F2F = face-to-face.

Figure 9
Funnel Plot for the Random Effect Model (Asynchronous vs. Synchronous) for the Affective Domain Note. k = 11; The diamond represents the average effect size (Hedges's g).

Figure 11
Funnel Plot for the Random Effect Model (Face-to-face vs. Synchronous) for the Affective Domain Note. k = 5; The diamond represents the average effect size (Hedges's g).

Limitations and Delimitations
Prior to discussing our results, we present our delimitations and limitations so readers can interpret the findings in light of these considerations. While we planned to examine three learning outcomes, there were not sufficient studies focusing on behavioral outcomes and, hence, that outcome was not examined. Also, among the studies examined, the numbers were still small because we ran four model comparisons and did not combine the face-to-face and asynchronous control groups, since each of these has different characteristics and some comparisons shared the same samples (which would violate independence of observations). While some meta-analyses report combined effects for affective and cognitive outcomes, we believe these two constructs are too different to report in a single model. When we framed the study, we coded for several variables; however, we realized that authors did not report several of the details in their methods. While we desired to examine types of interaction, we found this was not reported in most studies. The findings of the moderator analysis should be taken with caution since the number of studies, especially when comparing synchronous online to face-to-face, was very small. Also notable, we averaged effect sizes by combining multiple effect sizes, which ignores subject variability. We opted to do this as correlations are not usually reported, and we assumed a correlation value of 1.0. Finally, the common problem of publication bias was detected in all four models, and thus, additional studies could produce much different results.

Publication Source and Country
There were no differences between the groups based on country or publication source. As a reminder, most studies were conducted in the United States, and most were published as journal articles.
Overall, the findings of this study differ from those of Bernard et al. (2004), who found that synchronous distance education had a negative effect, and Means et al. (2013), who did not find synchronicity to be a significant moderator. Whereas those reviews spanned the early days of online learning, when synchronous distance education included other forms of synchronicity, this study found one model in which synchronous online learning had a small significant effect compared to the asynchronous online condition. This is similar to Williams (2006), who found a positive effect size when comparing synchronous distance education with asynchronous online learning.

Implications and Future Directions
SOL had a significant moderate effect over asynchronous online learning for cognitive outcomes. This shows that including synchronous sessions in online courses is important. In addition, it was found that interactive lessons had a significantly larger effect than lectures. This finding has implications for centers for teaching and learning and for faculty developers who provide training on the use of synchronous tools and offer workshops. Workshops focusing on synchronous online technology should emphasize designing interactive lessons so that students get the greatest benefit. For campuses without synchronous online tools, this study has implications for administrators to purchase and include a synchronous online tool in the learning management system. Also, for instructors who are teaching online or considering online teaching, this suggests that including synchronous online meetings in their courses would be helpful.
There were only 19 studies that we were able to identify and use in this meta-analysis. There is a need for more high-quality studies on this topic. Since the number of studies was small, the moderator analysis resulted in small cell sizes. There is also a need for more studies to focus on behavioral outcomes in addition to cognitive and affective outcomes. Another challenge we encountered during coding was insufficient information reported in the methodology to describe the synchronous online sessions. It is important for authors to give as much detail as possible about both the pedagogy and methodology. For example, we were unable to identify the various synchronous functionalities used in the intervention or whether all types of interaction (learner-learner, learner-instructor, and learner-content) occurred. We acknowledge this might be due to journal word count limits, but the important consideration is that pedagogical and methodological dimensions are equally relevant to report in a manuscript.

Figure 2
PRISMA Flow Model

Note. The four conditions were: a) synchronous treatment condition vs. asynchronous condition with cognitive outcomes, b) synchronous treatment condition vs. asynchronous condition with affective outcomes, c) synchronous treatment condition vs. face-to-face condition with cognitive outcomes, and d) synchronous treatment condition vs. face-to-face condition with affective outcomes.

Table 3
Journal Articles and Dissertation Details

Table 4
Descriptive Data for the Primary Studies

Table 5
Overall Effect Size Estimates for the Four Conditions

Table 6
Classic Fail-Safe N and Orwin's Fail-Safe N for Each Model