
Is Blended e-Learning as Measured by an Achievement Test and Self-Assessment Better than Traditional Classroom Learning for Vocational High School Students?


Chi-Cheng Chang1, Kuen-Ming Shu2, Chaoyun Liang3, Ju-Shih Tseng1, and Yu-Sheng Hsu1
1National Taiwan Normal University, Taiwan, 2National Formosa University, Taiwan, 3National Taiwan University, Taiwan

Abstract

The purpose of this study is to examine the effects of blended e-learning on electrical machinery performance (achievement test and self-assessment). Participants were two classes of 11th graders majoring in electrical engineering and taking the electrical machinery class at a vocational high school in Taiwan. The participants were randomly selected and assigned to either the experimental group (n = 33) which studied through blended e-learning or the control group (n = 32) which studied through traditional classroom learning. The experiment lasted for five weeks. The results showed that (a) there were no significant differences in achievement test scores between blended e-learning and traditional learning; (b) students in the experimental group obtained significantly higher scores on self-assessment than students in the control group; (c) students’ scores on self-assessment were significantly higher after studying through blended e-learning than before. Overall, blended e-learning did not significantly affect students’ achievement test scores, but significantly affected their self-assessment scores.

Keywords: Blended e-learning; self-assessment; electrical machinery; learning performance

Introduction

As information technology has developed in recent years, e-learning has reshaped education. However, e-learning is not appropriate for every curriculum: some curricula are better suited to traditional learning and others to e-learning, depending on the purpose of each curriculum (Bolliger & Martindale, 2004). Blended e-learning retains the advantages of both traditional learning (instructor-oriented) and e-learning (learner-oriented) (Bersin, 2004). The drawbacks of e-learning, including reduced real interaction and high drop-out rates due to frustration, can be offset by the advantages of traditional learning, so students’ learning quality and performance can be enhanced (Cottrell & Robison, 2003; Singh, 2003). Hence, blended e-learning has become a trend in education (Bonk, 2006) and is appropriate for most learners, who have different learning styles (Wakefield, Carlisle, Hall, & Attree, 2008).

In recent years, an increasing number of researchers have studied blended e-learning. Some results revealed that blended e-learning enhanced students’ learning performance (Gülbahar & Madran, 2009; Usta & Ozdemir, 2007; Vaughan & Garrison, 2005), but the specific aspects of learning performance enhanced by blended e-learning were not examined further. Some studies (Bersin, 2004; Hofmann, 2008; López-Pérez, Pérez-López, & Rodríguez-Ariza, 2011) reported that blended e-learning had more positive effects than traditional learning, but they mostly focused on higher education or employment training rather than on primary and secondary schools. Other studies examined the effects of individual differences or gender on learning performance, such as learning achievement, attitudes, and satisfaction, but did not compare the results with a control group (Alshwiah, 2009; Lee et al., 2007; Méndez & González, 2010). Although some results showed that blended e-learning facilitated students’ attitudes toward the course across three aspects, cognition, skill, and attitude (CSA) (Chen & Lin, 2002), the effects on the three aspects were not compared.

The Employment e-Training Platform in the Project of Multi-Employment e-Training, proposed by the Council of Labor Affairs in Taiwan, focuses on subjects such as electricity, electronics, and food and beverage service. Its learning unit on the transformer may serve as a complement to the electrical machinery course in vocational high schools. Electrical machinery plays an important role in electrical engineering and is therefore a graduation requirement. A key feature of vocational high schools is their practical training programs, most of which rely on learning by doing. Problems faced by instructors in practical training programs include large class sizes and insufficient facilities, which leave teachers unable to cater to individual differences and students unable to reach their learning goals (Roblyer, 2006). These problems can be overcome when practical training programs are delivered through blended e-learning, in which learning activities are extended outside the classroom (Garrison & Vaughan, 2008). Blended e-learning also promotes greater depth and breadth of learning. Furthermore, practical training programs delivered through both traditional learning and e-learning allow students to absorb knowledge and build skills through repeated reading and practice (Bersin, 2004), which enriches and facilitates their learning experiences.

Based on the study background above, the purpose of the present study was to compare the effects of blended e-learning and traditional learning on electrical machinery performance (achievement test and self-assessment). Statistical analyses were therefore applied to examine the differences in learning performance (achievement test and self-assessment) between blended e-learning and traditional learning. The learning material in the present study was the learning unit on the transformer from the Employment e-Training Platform (http://el.evta.gov.tw/). The learning objectives for the unit covered cognition, skill, and attitude. Because blended e-learning combines the advantages of e-learning and traditional learning, it enables students to learn at their own pace and practice repeatedly, which makes it well suited to electrical machinery. The research questions are as follows:

  1. Are there any significant differences in electrical machinery achievement test scores between blended e-learning and traditional learning?
  2. Are there any significant differences in self-assessment scores with three aspects including cognition, skill, and attitude between blended e-learning and traditional learning?
  3. Are there any significant differences in self-assessment scores with three aspects including cognition, skill, and attitude before and after studying through blended e-learning?

Research Method

Participants

Participants were two classes of 11th graders, 65 students in total, majoring in electrical engineering and taking the electrical machinery class at a vocational high school in Taiwan. The participants were randomly selected and assigned to either the experimental group (n = 33) or the control group (n = 32). The two groups were taught by the same teacher, who had more than 10 years of teaching experience and two years of experience with blended e-learning.

Experimental Design

The pretest-posttest nonequivalent-group quasi-experimental design was employed in the present study. The experimental design is shown in Table 1.

Table 1

Experimental Procedure

Preparation.

The teaching schedule and method for the experiment were discussed with the teacher. Before the experiment, an orientation on e-learning and learning guidance was provided so that students were ready to take the course via the Internet.

Pretest.

Students’ scores on the previous two midterm examinations were collected to examine the homogeneity of the two groups. The self-assessment pretest was administered so that students could assess their own performance before the experiment.

Learning activity.

The experiment lasted five weeks, as shown in Table 2, with three class hours per week, for a total of fifteen hours. The learning unit was the transformer, covering its principles, structures, characteristics, connections, tests, and maintenance.

Table 2

The learning methods of the two groups differed as follows: a) the control group received face-to-face lectures, paper-based handouts, and teaching materials, with three in-class hours per week; b) the experimental group received two in-class hours per week plus one hour per week in the computer classroom. During the computer-classroom hour, students logged into the Employment e-Training Platform to access the transformer learning unit, which supported review and repeated practice. The blended e-learning activity was based on the eight learning phases proposed by Baldwin-Evans (2006) and Bielawski and Metcalf (2005), as shown in Table 3. The differences between the two groups are shown in Table 4.

Table 3

Table 4

Figure 1

Posttest.

After the five-week experiment, students in both groups were required to take the posttest, which included both the achievement test and the self-assessment.

Validity and Reliability of Instruments

1) Achievement test

The achievement test was designed by the teacher based on the course material. The teacher had ten years of experience teaching electrical machinery at the vocational high school. The test had been used for many years and modified in response to changes in the course materials and students’ learning conditions. Therefore, the achievement test applied in the present study possessed face validity.

There were 25 multiple-choice questions in the achievement test, all related to the transformer and categorized into four dimensions: principles, structures and characteristics, connections, and tests and maintenance. Item analysis was performed to examine the reliability of the achievement test. The top 27% of total scores was assigned to the high score group, whereas the bottom 27% was assigned to the low score group (Kelley, 1939). A t-test was conducted to examine the difference in the score of each question between the high and low score groups; two questions showed non-significant differences and were deleted. Pearson’s correlation was then performed to examine the relationship between the score of each question and the overall test score; one question was non-significant and was deleted.
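To illustrate the procedure, the following Python sketch (using NumPy, pandas, and SciPy on simulated 0/1 response data, since the study’s actual responses are not available) applies the 27% extreme-groups comparison and the item-total correlation described above; the item and variable names are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Simulated 0/1 response matrix: rows = students, columns = items.
rng = np.random.default_rng(0)
responses = pd.DataFrame(rng.integers(0, 2, size=(65, 25)),
                         columns=[f"Q{i + 1}" for i in range(25)])

total = responses.sum(axis=1)
cut = int(round(len(responses) * 0.27))                  # Kelley's 27% rule
high = responses.loc[total.sort_values(ascending=False).index[:cut]]
low = responses.loc[total.sort_values().index[:cut]]

for item in responses.columns:
    # Extreme-groups comparison: does the item separate high from low scorers?
    t, p_t = stats.ttest_ind(high[item], low[item])
    # Item-total correlation, as in the text (item kept in the total score)
    r, p_r = stats.pearsonr(responses[item], total)
    flag = "keep" if (p_t < .05 and p_r < .05) else "review/delete"
    print(f"{item}: t = {t:.2f} (p = {p_t:.3f}), r = {r:.2f} (p = {p_r:.3f}) -> {flag}")
```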

The difficulty index refers to the percentage of students who answered the item correctly, whereas the discrimination index refers to how well the item discriminates between the low and high score groups (Ebel & Frisbie, 1991). The difficulty index is calculated as P = (PH + PL) / 2 and the discrimination index as D = PH - PL, where PH and PL are the proportions of correct answers in the high and low score groups, respectively.

If the difficulty index of an item is close to .5, the item has a moderate level of difficulty; if it is less than .25, the item is difficult; and if it is greater than .75, the item is easy (Ebel & Frisbie, 1991). The difficulty index for items in the achievement test ranged from .17 to .64. The overall difficulty index for the achievement test was .36, meaning that the difficulty level of the test was between moderate and difficult.

On the other hand, if the discrimination index of an item is greater than .4, the item is excellent; if it is greater than .3 and less than .4, the item is good; and the minimum standard for the discrimination index is .25 (Ebel & Frisbie, 1991). The discrimination indices for items in the achievement test were greater than .25, with three items greater than .3 and five items greater than .4. The overall discrimination index for the achievement test was .43, meaning that the discrimination level of the test was excellent.
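The difficulty and discrimination indices and the cut-offs quoted above can be computed directly from the proportions of correct answers in the high and low score groups. A minimal sketch, with made-up response vectors for a single item:

```python
import numpy as np

def item_indices(high_correct, low_correct):
    """high_correct/low_correct: 0/1 responses for one item from the
    top-27% and bottom-27% groups."""
    p_h = np.mean(high_correct)        # proportion correct in the high group
    p_l = np.mean(low_correct)         # proportion correct in the low group
    P = (p_h + p_l) / 2                # difficulty index
    D = p_h - p_l                      # discrimination index
    return P, D

def classify(P, D):
    difficulty = ("difficult" if P < .25 else
                  "easy" if P > .75 else "moderate")
    discrimination = ("excellent" if D > .4 else
                      "good" if D > .3 else
                      "acceptable" if D >= .25 else "poor")
    return difficulty, discrimination

# Hypothetical responses for one item (7 students in each extreme group)
P, D = item_indices([1, 1, 0, 1, 1, 0, 1], [0, 0, 1, 0, 0, 0, 1])
print(P, D, classify(P, D))
```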

2) Self-assessment

Based on the literature review, a self-assessment questionnaire on blended e-learning was developed for this study, containing three aspects: cognition (5 items), skill (5 items), and attitude (6 items) (see Appendix), for a total of 16 items. The questionnaire was revised several times by the researcher and the teacher, so it possessed content validity.

a) Item analysis

The top 27% of total scores was assigned to the high score group, whereas the bottom 27% was assigned to the low score group (Kelley, 1939). An independent-samples t-test was conducted to examine the difference in the score of each item between the two groups. The t values of all items were significant, indicating that the questionnaire possessed a good level of discrimination. Pearson’s correlation was then performed to examine the relationship between the score of each item and the overall questionnaire score. The results were consistent, so no item was deleted.

b) Factor analysis

The Kaiser-Meyer-Olkin (KMO) index was greater than .5 and Bartlett’s test of sphericity was significant (see Table 5), indicating that factor analysis could be performed (Gravetter & Wallnau, 2008; Kaiser, 1974). Principal component analysis (PCA) with an orthogonal rotation was conducted to examine construct validity. The factor loading of each item was greater than .5, indicating that no item needed to be deleted (Hair, Black, Babin, & Anderson, 2010). Factors with eigenvalues greater than 1 were extracted, corresponding to cognition, skill, and attitude. The explained variance of each of the three aspects was greater than 50%, revealing that the questionnaire possessed good construct validity (Hair et al., 2010), as shown in Table 5.
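A comparable construct-validity check can be sketched with the third-party factor_analyzer package (its availability and import paths are assumptions here, and the simulated Likert responses below merely stand in for the 16-item questionnaire): KMO, Bartlett’s test of sphericity, and a principal-component extraction with varimax rotation.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Simulated Likert-type responses for 16 items (5 cognition, 5 skill, 6 attitude).
rng = np.random.default_rng(3)
base = rng.normal(0, 1, size=(65, 3))                        # three latent aspects
cols = ([f"C{i}" for i in range(1, 6)] + [f"S{i}" for i in range(1, 6)]
        + [f"A{i}" for i in range(1, 7)])
loadmap = [0] * 5 + [1] * 5 + [2] * 6
items = pd.DataFrame(
    {c: np.clip(np.round(3 + base[:, f] + rng.normal(0, 0.5, 65)), 1, 5)
     for c, f in zip(cols, loadmap)})

chi2, p = calculate_bartlett_sphericity(items)               # should be significant
kmo_per_item, kmo_total = calculate_kmo(items)               # KMO should exceed .5
print(f"Bartlett chi2 = {chi2:.1f} (p = {p:.4f}), KMO = {kmo_total:.2f}")

fa = FactorAnalyzer(n_factors=3, method="principal", rotation="varimax")
fa.fit(items)
eigenvalues, _ = fa.get_eigenvalues()
print("Eigenvalues > 1:", eigenvalues[eigenvalues > 1])
print(pd.DataFrame(fa.loadings_, index=items.columns).round(2))   # look for loadings > .5
```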

c) Reliability

Cronbach’s α for each aspect of the questionnaire was greater than .7, indicating that the questionnaire had good reliability (Bryman & Cramer, 2011).
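Cronbach’s α has a simple closed form, scaling one minus the ratio of summed item variances to the variance of the total score by k/(k-1), so a minimal sketch needs only NumPy; the scores below are made-up examples.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: 2-D array, rows = respondents, columns = items of one aspect."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                              # number of items
    item_vars = item_scores.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = item_scores.sum(axis=1).var(ddof=1)       # variance of total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

# e.g., five hypothetical cognition items for four respondents
print(cronbach_alpha([[4, 5, 4, 4, 5],
                      [3, 3, 4, 3, 3],
                      [5, 5, 5, 4, 5],
                      [2, 3, 2, 3, 2]]))
```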

Table 5

Results

The Differences in Achievement Test Scores Between Blended e-Learning and Traditional Learning (Research Question 1)

The average score of the last two midterm examinations on electrical machinery was used as the covariate to control for the influence of prior knowledge. ANCOVA was performed to examine the differences in achievement test scores between blended e-learning and traditional learning. Levene’s test of equality of error variances was not significant (p = .858), meaning that the error variance was equal across groups and the homogeneity-of-variance assumption was met, as shown in Table 6. Furthermore, the test of homogeneity of regression slopes was not significant, suggesting that the relationship between the covariate and the dependent variable (posttest score) did not differ across groups, so this assumption was also met.
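An ANCOVA of this kind and its assumption checks can be reproduced with SciPy and statsmodels. The sketch below runs on simulated data whose column names (group, pretest, posttest) are illustrative assumptions rather than the study’s actual variables.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Simulated stand-in data: pretest = midterm average (covariate),
# posttest = achievement test score.
rng = np.random.default_rng(1)
n = 65
df = pd.DataFrame({
    "group": ["experimental"] * 33 + ["control"] * 32,
    "pretest": rng.normal(70, 8, n),
})
df["posttest"] = 0.6 * df["pretest"] + rng.normal(20, 5, n)

# Levene's test for equality of error variances across the two groups
exp = df.loc[df["group"] == "experimental", "posttest"]
ctl = df.loc[df["group"] == "control", "posttest"]
print(stats.levene(exp, ctl))

# Homogeneity of regression slopes: the group x covariate interaction
# should be non-significant before interpreting the ANCOVA.
slopes = smf.ols("posttest ~ C(group) * pretest", data=df).fit()
print(anova_lm(slopes, typ=2))

# ANCOVA: group effect on the posttest, adjusting for the midterm average
ancova = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
table = anova_lm(ancova, typ=2)
print(table)

# Partial eta squared for the group effect: SS_effect / (SS_effect + SS_error)
ss_group = table.loc["C(group)", "sum_sq"]
ss_error = table.loc["Residual", "sum_sq"]
print("partial eta^2 =", ss_group / (ss_group + ss_error))
```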

Table 6

The average achievement test score of the experimental group was slightly higher than that of the control group, but the ANCOVA result was not significant (p = .825), as shown in Tables 7 and 8. The difference between the two groups was therefore not significant, indicating that blended e-learning did not significantly affect achievement test scores.

Table 7

Table 8

The Differences in Self-Assessment Scores Between Blended e-Learning and Traditional Learning (Research Question 2)

MANCOVA, with the self-assessment pretest score as the covariate, was performed to examine the differences in self-assessment scores between blended e-learning and traditional learning. As shown in Table 9, Box’s test of equality of covariance matrices and Levene’s test of equality of error variances were not significant, meaning that the variances of cognition, skill, and attitude were equal across groups and the homogeneity assumption was met. Furthermore, Wilks’ Λ for the homogeneity of regression slopes was not significant (p = .250), suggesting that the assumption was met and the covariate had the same degree of impact on participants in both groups.

Table 9

As shown in Table 10, Wilks’ Λ was significant (p < .01), indicating that the two groups differed significantly on at least one dependent variable (cognition, skill, or attitude). The follow-up tests revealed significant differences in cognition (F = 13.309; p < .01) and skill (F = 6.246; p < .05) between the two groups, but no significant difference in attitude (F = 3.455; p = .068). The experimental group had significantly higher adjusted means on cognition, skill, and overall self-assessment than the control group, as shown in Table 11, indicating that blended e-learning students were significantly better than traditional learning students in self-assessed cognition and skill.
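A MANCOVA of this form can be approximated with statsmodels’ MANOVA class by entering the covariate in the model formula, followed by univariate ANCOVAs for each aspect; Box’s M test is omitted because it is not available in SciPy or statsmodels. The data and column names below are simulated stand-ins, not the study’s variables.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Simulated stand-in data: pre_total = self-assessment pretest total (covariate).
rng = np.random.default_rng(2)
n = 65
df = pd.DataFrame({
    "group": ["experimental"] * 33 + ["control"] * 32,
    "pre_total": rng.normal(3.2, 0.4, n),
})
boost = np.where(df["group"] == "experimental", 0.3, 0.0)
for dv in ["cognition", "skill", "attitude"]:
    df[dv] = 0.5 * df["pre_total"] + boost + rng.normal(2.0, 0.3, n)

# Multivariate test (Wilks' lambda and related statistics) for the group
# effect, with the pretest total entered as a covariate.
mv = MANOVA.from_formula("cognition + skill + attitude ~ pre_total + C(group)", data=df)
print(mv.mv_test())

# Follow-up univariate ANCOVAs for each aspect
for dv in ["cognition", "skill", "attitude"]:
    model = smf.ols(f"{dv} ~ pre_total + C(group)", data=df).fit()
    print(dv)
    print(anova_lm(model, typ=2))
```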

Table 10

Table 11

The criteria for interpreting the effect size (η2) in MANCOVA are: .010 is a small effect, .059 a medium effect, and .138 or above a large effect (Cohen, 1988). Among the three aspects of self-assessment, cognition had the largest effect (η2 = .177), a large effect, and skill had a medium effect (η2 = .092). In other words, blended e-learning had a large effect on students’ self-assessed cognition and a medium effect on their self-assessed skill. After the five-week experiment, there were significant differences in cognition and skill between the two groups, but no significant difference in attitude.

For the overall self-assessment, there was a significant difference between the two groups. The effect size (η2) of the overall self-assessment was .129, indicating a medium effect of blended e-learning on students’ overall self-assessment scores.

The Differences in Self-Assessment Before and After Blended e-Learning (Research Question 3)

A paired-samples t-test was conducted to examine the differences before and after blended e-learning. As shown in Table 12, there were significant differences in the overall self-assessment and in each of its three aspects (cognition, skill, and attitude) before and after blended e-learning. This result implies that blended e-learning had a significant impact on students’ self-assessment, which is consistent with the findings of Chen and Lin (2002).

The effect size of the t-test for each aspect of self-assessment is shown in Table 12. Cohen (1988) proposed an effect size coefficient, Cohen’s d, for examining the difference in an outcome before and after a treatment. Cohen’s d is the mean score of the pretest (μ1) subtracted from the mean score of the posttest (μ2), divided by the standard deviation of the pretest (σ1):

Cohen’s d = (μ2 - μ1) / σ1

The criteria for interpreting the effect size of a t-test are: a Cohen’s d of .2 or below is a small effect, a d between .5 and .8 is a medium to large effect, and a d of .8 or above is a large effect (Cohen, 1988). The effect sizes for the three aspects of self-assessment and for the overall self-assessment were medium to large, revealing that blended e-learning enhanced students’ self-assessment scores on electrical machinery (cognition, skill, attitude, and overall).
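The paired comparison and its effect size are straightforward to compute with SciPy; the sketch below applies the Cohen’s d formula given above (posttest mean minus pretest mean, divided by the pretest standard deviation) to made-up score vectors.

```python
import numpy as np
from scipy import stats

def cohens_d_pre_post(pre, post):
    """Effect size for a pre/post comparison: (mean_post - mean_pre) / sd_pre."""
    pre, post = np.asarray(pre, dtype=float), np.asarray(post, dtype=float)
    return (post.mean() - pre.mean()) / pre.std(ddof=1)

# Hypothetical overall self-assessment scores for one group of students
pre = np.array([3.1, 3.4, 2.9, 3.6, 3.2, 3.0])
post = np.array([3.8, 4.0, 3.5, 4.1, 3.9, 3.6])

t, p = stats.ttest_rel(post, pre)        # paired-samples t-test
d = cohens_d_pre_post(pre, post)
# Compare d with Cohen's (1988) .2 / .5 / .8 benchmarks.
print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")
```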

Table 12

Discussion

For research questions 1 and 2, there were no significant differences in achievement test scores between the two groups, but there were significant differences in self-assessment scores. The experimental group had significantly higher self-assessment scores than the control group, indicating that the experimental group perceived blended e-learning more positively but did not significantly outperform the control group on the achievement test. A possible explanation is that blended e-learning was a novel experience for the experimental group, which led to significantly higher self-assessment scores. However, the course lasted only five weeks, which was not enough time for students to get used to blended e-learning, so there was no significant difference in achievement test scores between the two groups. The effect of blended e-learning on achievement test scores should be examined over a longer period, so that students have enough time to become accustomed to blended e-learning as a complement to traditional learning. The differences in achievement test scores between the two groups could be better understood if the course lasted two months or more.

For research questions 2 and 3, there were significant differences in self-assessment scores between the two groups, and there was also a significant difference in the experimental group’s self-assessment scores before and after blended e-learning. This result confirms that blended e-learning can enhance students’ self-assessed learning performance (Chen & Lin, 2002; Garrison & Vaughan, 2008; Kim, Bonk, & Teng, 2009; Usta & Ozdemir, 2007; Vaughan & Garrison, 2005).

Compared with traditional learning, students who learned through blended e-learning had more positive perceptions of their cognition and skill because blended e-learning can make up for the drawbacks of traditional learning. The likely explanation is that blended e-learning provides a traditional learning environment and an e-learning environment at the same time, which enables students to review the material repeatedly and discuss it with peers online. However, there was no significant difference in attitude between the two groups, because attitudes develop more slowly, which confirms the viewpoint proposed by Linn and Miller (2005).

Conclusion and Implication

Implication for Practice

In blended e-learning, teachers need to put more effort into, and spend more time on, interactions with students (in the classroom and on the Internet) than teachers in traditional learning (Rovai & Jordan, 2004). Students in the experimental group had no prior experience of blended e-learning, and peer discussions and interactions on the Internet were infrequent because students were not used to an e-learning environment. Therefore, teachers are required not only to encourage students to discuss issues with their peers, but also to take part in those discussions to enhance peer interaction (Hiltz & Goldman, 2005).

The purpose of the Employment e-Training Platform was to fulfill students’ workplace needs and remove employment barriers; hence, the platform was revised each year in consultation with industry. To help vocational high school students meet workplace requirements, it is recommended that the Bureau of Employment and Vocational Training in Taiwan communicate and cooperate with industry and academia. In this way, teaching materials from vocational high schools and employment training organizations can be shared, and vocational high school teachers can employ the learning materials on the Employment e-Training Platform for blended e-learning, thereby enhancing students’ knowledge and skills.

The study results revealed that blended e-learning had significantly positive effects on self-assessed cognition and skill. It is recommended that the Employment e-Training Platform add more course content and materials with animated simulations. The study also found that blended e-learning had no significant impact on students’ achievement test scores but significantly affected their self-assessment scores. Therefore, it is suggested that teachers who engage in blended e-learning assess students’ learning performance not only with achievement tests but also with self-assessment, so that learning performance is evaluated both objectively and subjectively.

Limitation and Future Work

The sequence of the learning activities in the present study was traditional learning followed by e-learning, because e-learning was treated as a supporting tool provided after class. However, e-learning can play a different role, such as a tool for previewing the course. It is therefore suggested that future studies place e-learning before traditional learning, so that the results of the two learning sequences (traditional learning before e-learning vs. e-learning before traditional learning) can be compared.

References

Alshwiah, A. (2009). The effects of a blended learning strategy in teaching vocabulary on premedical students’ achievement, satisfaction and attitude toward English language (Unpublished master thesis). Arabian Gulf University, Bahrain.

Baldwin-Evans, K. (2006). Key steps to implementing a successful blended learning strategy. Industrial and Commercial Training, 38(3), 156-163.

Bersin, J. (2004). The blended learning book: Best practices, proven methodologies, and lessons learned. San Francisco, CA: Pfeiffer.

Bielawski, L., & Metcalf, D. (2005). Blended eLearning: Integrating knowledge, performance support, and online learning (2nd ed.). Amherst, MA: HRD Press.

Bolliger, D. U., & Martindale, T. (2004). Key factors for determining student satisfaction in online courses. International Journal of E-Learning, 3(1), 61-67.

Bonk, C. J. (2006). The future of online teaching and learning in higher education. Educause Quarterly, 11(4), 22-30.

Bryman, A., & Cramer, D. (2011). Quantitative data analysis with IBM SPSS 17, 18 & 19: A guide for social scientists. London, UK: Psychology Press.

Chen, N. S., & Lin, K. M. (2002). Analysis of learning behavior and learning performance in WBI. Journal of Information Management, 8(2), 121-133.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Cottrell, D. M., & Robison, R. A. (2003). Blended learning in an accounting course. The Quarterly Review of Distance Education, 4(3), 261-269.

Ebel, R. L., & Frisbie, D. A. (1991). Essentials of educational measurement. Englewood Cliffs, NJ: Prentice Hall.

Garrison, D. R., & Vaughan, N. D. (2008). Blended learning in higher education: Framework, principles, and guidelines. San Francisco, CA: Jossey-Bass.

Gravetter, F. J., & Wallnau, L. B. (2008). Statistics for the behavioral sciences (7th ed.). Belmont, CA: Thomson.

Gülbahar, Y., & Madran, R. (2009). Communication and collaboration, satisfaction, equity, and autonomy in blended learning environments: A case from Turkey. International Review of Research in Open and Distance Learning, 10(2), 117-138.

Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis: A global perspective. New Jersey, USA: Pearson Education.

Hiltz, S. R., & Goldman, R. (2005). Learning together online: Research on asynchronous learning networks. Mahwah, NJ: Lawrence Erlbaum Associates.

Hofmann, A. (2008). Development in blended learning. Economics and Organization of Enterprise, 1(1), 55-62.

Kaiser, H. F. (1974). Little Jiffy, Mark IV. Educational and Psychological Measurement, 34, 111-117.

Kelley, T. L. (1939). The selection of upper and lower groups for the validation of test items. Journal of Educational Psychology, 30(1), 17-24.

Kim, K. J., Bonk, C. J., & Teng, Y. T. (2009). The present state and future trends of blended learning in workplace learning settings across five countries. Asia Pacific Education Review, 10(3), 299-308.

Lee, C., Yeh, D., Kung, R., & Hsu, C. (2007). The influences of learning portfolios and attitudes on learning effects in blended e-learning for Mathematics. Journal of Educational Computing Research, 37(4), 331-350.

Linn, R. L., & Miller, M. D. (2005). Measurement and assessment in teaching (9th ed.). Englewood Cliffs, NJ: Prentice-Hall.

López-Pérez, M. V., Pérez-López, M. C., & Rodríguez-Ariza, L. (2011). Blended learning in higher education: Students’ perceptions and their relation to outcomes. Computers & Education, 56(3), 818-826.

Méndez, J. A., & González, E. J. (2010). A reactive blended learning proposal for an introductory control engineering course. Computers & Education, 54(4), 856-865.

Roblyer, M. D. (2006). Integrating educational technology into teaching (4th ed.). Upper Saddle River, NJ: Pearson/Merrill Prentice Hall.

Rovai, A. P., & Jordan, H. M. (2004). Blended learning and sense of community: A comparative analysis with traditional and fully online graduate courses. International Review of Research in Open and Distance Learning, 5(2), 1-13.

Singh, H. (2003). Building effective blended learning programs. Educational Technology, 43(6), 51-54.

Usta, E., & Ozdemir, S. M. (2007). An analysis of students’ opinions about blended learning environment. Paper presented at the International Educational Technology (IETC) Conference, Nicosia, Turkey.

Vaughan, N., & Garrison, D. R. (2005). Creating cognitive presence in a blended faculty development community. Internet and Higher Education, 8(1), 1-12.

Wakefield, A. B., Carlisle, C., Hall, A. G., & Attree, M. J. (2008). The expectations and experiences of blended learning approaches to patient safety education. Nurse Education in Practice, 8(1), 54-61.

Appendix
