Design Matters: Development and Validation of the Online Course Design Elements (OCDE) Instrument

Course design is critical to online student engagement and retention. This study focused on the development and validation of an online course design elements (OCDE) instrument with 38 Likert-type scale items in five subscales: (a) overview, (b) content presentation, (c) interaction and communication, (d) assessment and evaluation, and (e) learner support. The validation process included implementation with 222 online instructors and instructional designers in higher education. Three models were evaluated: a one-factor model, a five-factor model, and a higher-order model. The five-factor and higher-order models aligned with the development of the OCDE. The frequency of use of OCDE items was rated above 4.0 on the 5-point scale, except for two items on collaboration and self-assessment. The overall OCDE score was related to self-reported level of expertise but not to years of experience. The findings have implications for the use of this instrument with online instructors and instructional designers in the design of online courses.


Introduction
Higher education campus enrollment has decreased; however, the number of online courses and online enrollment has continued to increase (Allen & Seaman, 2017). Though online enrollment has increased, online student dropout and lack of engagement in distance education are still issues of concern. Dropout can be prevented through well-designed online courses (Dietz-Uhler et al., 2007). It is clear that high-quality course design is critical to the success of online courses. Several researchers have examined course design in online learning. Jaggars and Xu (2016) examined the relationships between online course design features (course organization and presentation, learning objectives and assessment, interpersonal interaction, and technology) and student outcomes. They found that design features influenced student performance, and interaction affected student grades. Swan (2001) found that clarity of design, interaction with instructors, and active discussion influenced students' perceived learning and satisfaction. Laurillard et al. (2013) recommended effective pedagogy to foster individual and social processes and outcomes, promote active engagement, and support learning with a needs assessment.
Some higher education institutions have developed or adopted rubrics to not only provide guidance for instructors' course design efforts, but also to evaluate the design in online courses. Baldwin et al. (2018) reviewed six rubrics commonly used to evaluate the design of online courses. They identified 22 design standards that were included in several of the rubrics. We reviewed the research on five categories of design standards, namely (a) overview, (b) content presentation, (c) interaction and communication, (d) assessment and evaluation, and (e) learner support, and examined their impact on online learning.

Content Presentation
… images, and (c) header information for tables. Dell et al. (2015) highlighted the importance of including information about the accessibility of all course technologies in the course.

Interaction and Communication
Interaction and communication are critical in online courses. Some of the strategies to enhance interaction and communication include (a) providing opportunities for student-to-student interaction, (b) using activities to build community, (c) including collaborative activities to support active learning, and (d) using technology in such a way as to promote learner engagement and facilitate learning. Moore (1989) proposed an interaction framework, and listed student-student interaction as essential for online courses, in addition to student-content and student-instructor interaction. Moore stated that adult learners might be self-motivated to interact with peers, whereas younger learners might need some stimulation and motivation. Luo et al. (2017) highlighted that interaction assists in building a sense of community. These authors described a sense of community "as values students obtain from interactions" with the online community (p. 154). Hence, it is essential to intentionally design activities that build and maintain community in online courses. Strategies to build community include humanizing online courses by using videos and designing collaborative assignments that provide learners with opportunities to interact with peers (Liu et al., 2007). Shackelford and Maxwell (2012) found that using introductions, collaborative group projects, and whole-class discussions, as well as sharing personal experiences and resources, predicted a sense of community. Online collaboration supports active learning as it involves lateral thinking, social empathy, and extensive ideation (Rennstich, 2019). Salmon (2013) described the importance of designing e-tivities for online participation and providing learners with scaffolding to achieve learning outcomes. A variety of technology systems and tools have been used to promote online learner engagement.
Some of these technologies are e-mail, learning management systems, wikis, blogs, videos, social media, and mobile technologies (Anderson, 2017; Fathema et al., 2015; Pimmer et al., 2016).

Assessment and Evaluation
Assessment and evaluation are essential in an online course to measure students' learning outcomes and determine overall course effectiveness. Some of the strategies for well-designed assessment and evaluation include (a) aligning assessments with learning objectives, (b) providing several assessments throughout the course, (c) including grading rubrics for each assessment, (d) providing self-assessment opportunities for learners, and (e) giving students opportunities to provide feedback for course improvement. Dick (1996) emphasized the importance of aligned assessments in the instructional design process. Instructional design models recommend that assessments be aligned with learning objectives and instructional events. In addition, Quality Matters (2020) considered alignment between objectives, materials, activities, technologies, and assessments in online courses as essential because it helps students to understand the purpose of activities and assessments in relation to the objectives and instructional material.
Researchers have pointed out the importance of administering a variety of assessments throughout the course so that students can gauge their learning progress (Gaytan & McEwen, 2007). Martin et al. (2019) reported that award-winning online instructors recommend using rubrics for all types of assessments.
Rubrics not only save time in the grading process, but they can assist instructors in providing effective feedback and supporting student learning (Stevens & Levi, 2013). Self-assessments help learners identify their progress towards the course outcomes.
Evaluation is an important element in course improvement. Kumar et al. (2019) found that expert instructors used mid- and end-semester surveys and student evaluations. They also used data from learning management systems and institutional course evaluations to improve courses. These practices illustrate the importance of providing learners with opportunities to give feedback to instructors.

Learner Support
Support is essential for online learners to be successful. Some of the strategies for providing support to the online learner include providing (a) intuitive and consistent course navigation, (b) media that can be easily accessed and viewed, (c) details for minimum technology requirements, and (d) resources for accessing technology and institutional support services. Support can be offered at the course, program, and college or institution level. At the course level, it is essential to provide learner support for easy and consistent navigation (Graf et al., 2010); otherwise, students can become easily frustrated and dissatisfied. Because online learners come from different backgrounds and have access to different resources, they may use various devices and platforms to access courses. Therefore, it is important to specify technology requirements and to design the course with media and files that can be easily viewed and accessed with mobile devices (Han & Shin, 2016;Ssekakubo et al., 2013). Additionally, it is important for the institution to provide a variety of support services (e.g., academic, technical).

Experience and Expertise
Individuals with many years of experience in designing online courses tend to have a high level of expertise.
Award-winning faculty who had designed and taught online courses were interviewed to identify important course design elements. These faculty members mentioned that they followed a systematic process. They chunked course content, aligned course elements using a backwards design approach, provided opportunities for learner interaction, and addressed the needs of diverse learners.
Expert designers have "a rich and well-organized knowledge base in instructional design" (Le Maistre, 1998, p. 33). In general, compared to novice designers, they are more knowledgeable regarding design principles and are able to access a variety of resources (Perez et al., 1995).

Research Purpose
The purpose of this study was to develop the Online Course Design Elements (OCDE) instrument and establish its reliability and construct validity. Although a few rubrics focus on online course design, as reviewed in Baldwin et al. (2018), most of these instruments have not been validated. Some of these rubrics were created by universities or at the state level.
Building on design elements from across the six rubrics examined in Baldwin et al. (2018), the OCDE captures the most common design elements from these various rubrics. The OCDE fills this gap by providing a valid and reliable instrument that instructors and designers of online courses may use at no cost when developing or maintaining online courses. In addition to designing the instrument, we also examined whether years of experience or level of expertise was related to instructors' and instructional designers' use of design elements.
More specifically, the objectives of this study were to (a) develop an instrument to identify design elements frequently used in online courses, (b) validate the instrument by verifying its factor structure, and (c) examine the relationships of the latent variables to years of experience and self-reported level of expertise.
While the instrument was validated in higher education, researchers and practitioners can also adapt and use it in other instructional contexts, including K-12 and corporate settings.

Method
This research was carried out in two phases. The first phase focused on the development of the OCDE instrument, and the second phase focused on validating the instrument. During the first phase, the research team developed the instrument, which was then reviewed by a panel of experts in designing online courses and surveys. In the second phase, statistical analysis of the reliability and validity of the instrument was conducted through a confirmatory factor analysis (CFA) and a multiple indicators multiple causes (MIMIC) model. CFA was used to test the conceptual measurement model implied in the design of the OCDE. The MIMIC model was used to examine the relationships of the OCDE to participants' years of experience and self-reported levels of expertise. After seeking the authors' permission (Baldwin et al., 2018) to build on the results of their study, the 22 elements were used as the foundation of the OCDE instrument. We added critical elements to the instrument based on existing research (Jones, 2013; Luo et al., 2017; Stavredes & Herder, 2014). These included (a) a course orientation, (b) a variety of instructional materials, (c) student-to-instructor interaction, and (d) consistent course structure. The instrument that was reviewed by experts for face validity had 37 items in five categories. All items prompted respondents to indicate how frequently they used the design elements on a Likert scale ranging from 1 (Never) to 5 (Always).

Phase 1: Development of the OCDE Instrument
Four experts were provided with a digital copy of the instrument and instructions to evaluate the clarity and fit of all items, make changes, and add or delete relevant items. Once their review was completed, the experts returned the instrument with feedback by e-mail to the lead researcher. Experts were selected based on their expertise and experience in online or blended teaching in higher education and their expertise in survey research methodology. Two experts were research methodologists with expertise in teaching online, and two experts were online learning experts. The researchers discussed the expert feedback, and several items were revised based on the reviewers' feedback. Some of the changes recommended by the experts were to (a) provide examples for the items in parentheses, (b) add an item regarding major course goals, (c) delete additional items on course objectives, and (d) modify the wording of some items. The final version of the instrument included 38 items with Likert scale responses (Table 1).

Table 1

Category                          Baldwin et al. (2018) elements    OCDE items
Interaction and communication     4                                 7
Assessment and evaluation         6                                 7
Learner support                   4                                 7
Total                             22                                38

Procedure and Data Collection
Data were collected in the Spring 2020 semester with the use of an online Qualtrics survey that was housed on a protected server. All subscribers to e-mail distribution lists of two professional associations received an invitation to participate in the study. Members of these organizations work with information or instructional technologies in industry or higher education as instructors, instructional designers, or in different areas of instructional support. Therefore, these individuals have varied experience in designing and supporting online courses. Additionally, invitations to participate in the study were posted to groups of these organizations on one social networking site. In order to increase the response rate, one reminder was sent or posted after two weeks. All responses were voluntary and anonymous, and no incentives were provided to participants.

Participants
A total of 222 respondents completed the survey including 101 online instructors and 121 instructional designers who were involved with online course design. Most of the respondents identified as female (n = 158; 71%). The average age of respondents was 48 years (SD = 10.74) and the average years of experience was 10.54 (SD = 6.93). Nearly half of respondents (n = 107; 48%) rated their level of expertise as expert, 29% identified as proficient, 15% as competent, and 5% identified as advanced beginner. Only one individual was a novice.

Data Analysis
Descriptive statistics were reported at both the item level and the category level. After the data collection, three models were evaluated: (a) Model 1, a one-factor model; (b) Model 2, a five-factor model; and (c) Model 3, a five-factor higher-order model. The five-factor and higher-order models align with the development of the OCDE. Model 1 specified a unidimensional construct and endorsed the use of a total score instead of subscales. This model was examined to determine if the covariance among items was due to a single common factor. Model 2 specified a correlated five-factor model with eleven items loading on the overview factor (items 1-11), six items loading on the content presentation factor (items 13-18), and seven items loading on each of the remaining factors of interaction and communication (items 20-26), assessment and evaluation (items 28-34), and learner support (items 36-42). Model 3 specified the same factor structure as Model 2 but included a second-order OCDE factor. Correlated error variances were used to modify the model if the re-specification agreed with theory. In order to determine the best model, both statistical criteria and information about the parameter estimates were used. Because the models are not nested, and statistical tests of differences between models (e.g., DIFFTEST or Akaike's Information Criterion) were not available under weighted least square mean and variance adjusted (WLSMV) estimation, no statistical tests of differences were conducted.
The pattern coefficient for the first indicator of each latent variable was fixed to 1.00. The indices of model-data fit considered were the chi-square test, root mean square error of approximation (RMSEA), standardized root mean squared residual (SRMR), and comparative fit index (CFI). For RMSEA, Browne and Cudeck (1992) suggested that values greater than .10 might indicate a lack of fit. CFI values greater than .90, indicating that the fit of the proposed model improves on that of the baseline model by more than 90%, served as an indicator of adequate fit (Kline, 2016). Perfect model fit is indicated by SRMR = 0, and values greater than .10 may indicate poor fit (Kline, 2016). All models were overidentified, indicating there is more than enough information in the data to estimate the model parameters.
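As a hedged illustration of how these cutoffs operate (this is not the study's analysis code, and the chi-square values below are invented for a hypothetical model and baseline), CFI and RMSEA can be computed directly from chi-square statistics:

```python
import math

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index: improvement of the model over the
    baseline (independence) model, truncated to the [0, 1] range."""
    d_m = max(chi2_m - df_m, 0.0)   # model noncentrality
    d_b = max(chi2_b - df_b, 0.0)   # baseline noncentrality
    return 1.0 - d_m / d_b if d_b > 0 else 1.0

def rmsea(chi2_m, df_m, n):
    """Root mean square error of approximation for sample size n."""
    return math.sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))

# Hypothetical values, not results from this study
print(round(cfi(980.0, 650, 9500.0, 703), 3))   # close to 1 indicates good fit
print(round(rmsea(980.0, 650, 222), 3))         # below .10 indicates acceptable fit
```

With these hypothetical values, CFI comes out near .96 and RMSEA below .05, so both would clear the thresholds described above.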
After determining the best fitting model, a multiple indicators multiple causes (MIMIC) model was conducted to examine the a priori hypothesis that years of experience and level of expertise would each have a positive relationship with the latent variables of the OCDE.

Results
In this section, we review the data screening process, the descriptive statistics from the OCDE implementation, the validation of the OCDE, and the examination of the relationships between the OCDE and years of experience and level of expertise.

Data Screening
Initially, 238 individuals responded to the survey invitation; however, 16 cases had one-third or more of the data missing, and these cases were deleted from the data set. Missing values for all remaining variables did not exceed 1.4% (i.e., three respondents). Little's (1988) Missing Completely at Random (MCAR) test was not statistically significant, χ² = 113.76, df = 142, p = .961, suggesting that values could be treated as missing completely at random. These missing values were estimated using the expectation-maximization (EM) algorithm. All values were within range, and no univariate or multivariate outliers were detected. Because the data were ordinal in nature, WLSMV estimation was used to estimate all parameters of the model.
WLSMV is specifically designed for ordinal data (e.g., Likert-type data) and makes no distributional assumptions about the observed variables (Li, 2016). The variance inflation factors for all items were below 5.0, suggesting multicollinearity was not problematic.
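As a sketch of this multicollinearity check (an assumed implementation, not the authors' code), variance inflation factors can be read off the diagonal of the inverse correlation matrix, since VIF_j = 1 / (1 - R²_j) for item j regressed on all other items. The toy data below are simulated, not drawn from this study:

```python
import numpy as np

def vifs(X):
    """Variance inflation factors for the columns of data matrix X.
    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing item j
    on all other items; equals the diagonal of inv(corr(X))."""
    corr = np.corrcoef(X, rowvar=False)
    return np.diag(np.linalg.inv(corr))

# Toy Likert-style data: two moderately correlated items plus one
# independent item
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=200).astype(float)
X = np.column_stack([
    base + rng.normal(0, 1, 200),
    base + rng.normal(0, 1, 200),
    rng.integers(1, 6, size=200).astype(float),
])
print(np.all(vifs(X) < 5.0))  # check items against the common 5.0 cutoff
```

Moderate inter-item correlations of the kind expected on a coherent subscale yield VIFs well under the 5.0 cutoff; values above it would flag near-redundant items.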

Descriptive Statistics
The means and standard deviations for all items are reported in Table 2.

Confirmatory Factor Analysis (CFA)
The results of the CFA are shown in Table 3. In all of the analyses, the chi-square goodness-of-fit statistics were statistically significant, which suggests that none of the models fit perfectly. The other goodness-of-fit statistics suggested a reasonable fit for all models except Model 1. In the modified models, all of the correlated error variances were between items in the same factor. For the overview factor, the error variance for item 9 (instructor's response time to e-mails and/or phone calls) was allowed to correlate with item 10 (instructor's turnaround time for feedback on submitted assignments). The error variance for item 8 (a biography of the instructor) was correlated with item 9 (instructor's response time to e-mails and/or phone calls). The two items in the interaction and communication factor with correlated error variances were item 21 (required student-to-student interaction, such as graded activities) and item 22 (frequently occurring student-to-student interactions, such as weekly). In the learner support factor, the error variance for item 36 (easy course navigation, such as menus) correlated with item 37 (consistent course structure, such as design and look). For both modified models, the chi-square goodness-of-fit statistics were statistically significant, but the other fit statistics suggested an acceptable fit of the observed covariances to the model-implied covariances.

Modified Models
The correlations between the five factors (reported in Table 4) ranged from .48 to .85, suggesting shared variance among the factors. Given the size of the correlation coefficients and the large degree of overlap among the factors, the modified Model 3 appears to be the best model and is discussed in greater detail. The unstandardized and standardized pattern coefficients for the modified Model 3 are reported in Table 5. All coefficients are statistically significant (p < .001). Several of the standardized coefficients fell below .70, indicating that the factors accounted for less than half of the variance in those items. The path coefficients between the factors and the second-order factor ranged from .67 to .93 and were statistically significant. The recommended model is shown in Figure 2. Note that the covariances among the factors are not included in the figure.
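The "less than half of the variance" reading follows from one line of arithmetic: squaring a standardized pattern coefficient gives the share of the indicator's variance attributable to the factor, so coefficients below roughly .70 leave most of an item's variance unexplained. A quick sketch with illustrative loadings (not estimates from this study):

```python
def variance_explained(loading):
    """Share of an indicator's variance explained by its factor:
    the squared standardized pattern coefficient."""
    return loading ** 2

# Illustrative loadings only
print(round(variance_explained(0.70), 2))  # 0.49, just under half
print(round(variance_explained(0.93), 2))  # 0.86
```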

Figure 2
Best Fitting Higher-Order Model

Experience and Expertise on Design Strategies
A MIMIC model was conducted to examine the relationships between the OCDE higher-order latent factor and the predictor variables of years of experience and level of expertise. The results suggested that level of expertise was a statistically significant predictor of the OCDE (unstandardized coefficient = .09, SE = .04, standardized coefficient = .23), but years of experience was not (unstandardized coefficient < .01, SE < .01, standardized coefficient < .01). This suggests that for a one-unit increase in self-reported level of expertise, there was about a .23 standard deviation increase in the OCDE score.
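To unpack the relation between the two coefficients with a small arithmetic sketch (the standard deviations below are hypothetical, chosen only to show the rescaling): a standardized path coefficient is the unstandardized one rescaled by the ratio of the predictor's and outcome's standard deviations.

```python
def standardize(b, sd_x, sd_y):
    """Rescale an unstandardized path coefficient b into standard
    deviation units: beta = b * SD(x) / SD(y)."""
    return b * sd_x / sd_y

# Reported b = .09; with a hypothetical expertise SD of 1.0 and a
# hypothetical latent OCDE SD of 0.39, the rescaling recovers roughly
# the reported standardized coefficient of .23
print(round(standardize(0.09, 1.0, 0.39), 2))
```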

Discussion
In this section, we discuss instructors' and instructional designers' frequency of use of the design elements, validation of the OCDE instrument, and the significance of expertise but not experience in course design.

Frequency of Use
In this implementation with 222 respondents, the frequency of use of OCDE items was rated above 4.0 on the 5-point scale, except for two items on collaboration and self-assessment.

Validity of the OCDE Instrument
Evidence from Models 2 and 3 in this study supports inferences from the OCDE. The total score demonstrated good reliability, and the factor structure was supported. However, high correlations were found among the subscales, especially between overview and content presentation (.81) and between overview and assessment and evaluation (.85). Because of low reliability coefficients in some subscales, particularly content presentation (.66), the OCDE is recommended for use as a whole, and caution is advised when the individual subscales (overview, content presentation, interaction and communication, assessment and evaluation, and learner support) are used.
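For readers who want to check subscale reliability on their own data, Cronbach's alpha (the usual coefficient behind figures such as the .66 above, though the study does not publish its computation code) is α = k/(k-1) · (1 - Σ item variances / variance of the total score). A minimal sketch with toy responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy example: three items whose ratings mostly agree, giving high alpha
data = np.array([
    [5, 4, 5],
    [4, 4, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
])
print(round(cronbach_alpha(data), 2))
```

Values near the .80 benchmark cited in the limitations would support subscale-level interpretation; lower values argue for the total-score use recommended here.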

Expertise and Years of Experience in the Design of Online Courses
The overall OCDE score is related to self-reported level of expertise. However, years of experience is not related to the OCDE. The perceived level of expertise was reported as expert, proficient, competent, advanced beginner, or novice. The level of expertise in online course design was a statistically significant predictor of the OCDE score, whereas years of experience was not. Perez et al. (1995) stated that compared to novice designers, expert designers use more design principles and access a variety of knowledge sources.
A previous study suggested that experts are not just those with a wealth of experience from their years teaching online but also those who have expertise and fluency.
While expertise can be developed with experience over time, this is not the only way to acquire it. Research on online learning strategies that focuses on instructors' years of experience might help us understand whether online teaching experience obtained over time makes one an expert instructor. However, Shanteau (1992) recommended that experts be identified by criteria other than their years of teaching experience.

Limitations
There were some limitations to this study. Although the OCDE was developed from a summary of six instruments, research, and expert review, the elements included in this study are not an exhaustive list for the design and development of high-quality online courses. The reliability coefficients of some subscales were below .80; therefore, rather than supporting decisions based on individual subscales, the research-based instrument is best used to provide aggregated information to practitioners. As well, since the data are self-reported, social desirability may have been a factor in some of the participants' responses. In addition, the OCDE was implemented with a relatively small sample of instructors and instructional designers, most of whom were based in the United States.

Conclusion
The goal of the study was to develop and validate an instrument to address critical elements in online course design. Results show that the OCDE, with its five constructs (overview, content presentation, interaction and communication, assessment and evaluation, and learner support), is a valid and reliable instrument.
When relationships of the latent variables to years of experiences and self-reported level of expertise were examined, results indicated that the level of expertise in online course design was a statistically significant predictor of the OCDE score. The OCDE instrument was implemented in higher education. However, practitioners and researchers may adapt and use the instrument for design and research in different settings.
Researchers should continue to examine design elements that are not included in the OCDE and implement the instrument in different settings such as K-12, community colleges, and other instructional contexts. This study may be replicated with a larger sample or with participants who teach or support faculty in a variety of disciplines. Using the instrument in other countries, particularly where online teaching and learning is either a novelty or less established than in the United States, would be worthwhile.
The OCDE can be used to support online teaching and design professional development for instructors and instructional designers, particularly those who are novices or beginners. Instructional designers can offer training for instructors using the OCDE as a checklist. Instructors who are interested in teaching online may also use this rubric to guide their course design.