International Review of Research in Open and Distributed Learning

Volume 23, Number 3

September - 2022

 

Qualifying with Different Types of Quizzes in an Online EFL Course: Influences on Perceived Learning and Academic Achievement

 

Ünal Çakıroğlu1, Esin Saylan2, İsak Çevik3, and Adem Özkan4
1Prof. Dr., Department of Computer Education and Instructional Technology, Trabzon University; 2Instructor, Vakfıkebir Vocational School, Trabzon University; 3Instructor, Ağrı Vocational School, Ağrı İbrahim Çeçen University; 4PhD Candidate, Trabzon University

 

Abstract

This quasi-experimental study explored how different online exam types differentiate learners’ academic achievement and perceived learning. The participants comprised 95 undergraduate students enrolled in an English course at a Turkish university, divided into three groups, each taking a different type of quiz: multiple-choice, open-ended, or mixed-type questions. The results indicated that the academic achievement of the students in the multiple-choice and open-ended groups increased and that quiz scores improved the most in the multiple-choice group relative to the other groups. The study found a moderate, significant relationship between cognitive and affective perceived learning and multiple-choice quiz scores; a weak, significant relationship between cognitive and affective perceived learning and mixed-design quiz scores; and a weak, significant relationship between cognitive learning and the academic achievement scores of the mixed-design group. Semi-structured online interviews, undertaken to further explain the quantitative data, revealed positive influences of the different types of quizzes on study behaviors and satisfaction. The findings are expected to shed light for practitioners aiming to use different online assessment types.

Keywords: quiz types, online learning, EFL course, perceived learning

Introduction

In recent years, technological and pedagogical improvements have made online learning more attractive, and a great number of students prefer to study English as a Foreign Language (EFL) in courses delivered online. Moreover, methods used in computer-assisted language learning have proven effective in delivering EFL courses (Ebadi & Rahimi, 2018; Yang, 2017) and in helping teachers monitor learner progress through online formative assessments (Alharbi & Meccawy, 2020).

Recent studies highlight the benefits of online assessment tools, such as improving student motivation, enhancing active learning, and deterring cheating as long as the questions are not too easy (Rinaldi et al., 2017; Schneider et al., 2018). The use of online exams in different formats, such as fill-in-the-blanks, multiple-choice, true-false, cloze tests, word ordering, column matching, and verb tables, has opened up new approaches to teaching and learning (Yadollahi & Rahimi, 2011). Some scholars have highlighted the advantages of various online formative assessment tools such as Google Forms, Blackboard, Plickers, Socrative, and Kahoot! (Alharbi & Meccawy, 2020; Fageeh, 2015; Jazil et al., 2020). These are perceived as positive tools that enhance achievement in different ways by improving learners’ responses (Elbasyouny, 2021). For example, Fageeh (2015) reported that online testing via Blackboard provides opportunities for repeated practice, automated scoring, and instant feedback, thereby influencing achievement. Another study found that students perceived their learning as effective via an online grammar assessment delivered through Google Forms, a supportive tool that provides immediate feedback after the exam is completed (Jazil et al., 2020). The perceived usefulness of online testing can also enhance students’ performance. In fact, a high correlation has been reported between students’ performance, perceived learning, and satisfaction with online learning (Gray & DiLoreto, 2016). Thus, applying different assessment approaches may result in different learning outcomes, such as academic achievement or perceived learning.

Assessment Tools in Online Learning

Online assessment tools can be used for formative assessments (quizzes) or summative assessments (exams). A series of studies has examined the learning outcomes of various types of online assessments. Sek et al. (2012) found that the most preferred assessment format was multiple-choice questions, followed by true/false and single-choice questions. Kılıç and Çetin (2018) reported that multiple-choice tests were the most preferred exam type because students believed they could perform better on them. In contrast, Ogange et al. (2018) found that students perceived no significant differences among the various types of formative online assessments. Accordingly, understanding the effects of different quiz types on students’ perceived performance and success has important implications for instructors’ decisions regarding online assessment.

The research to date has tended to focus on the effects of online exams on students’ performance, motivation, study style, and exam anxiety (Pan et al., 2019; Vayre & Vonthron, 2019) rather than on their perceived learning, which allows students to evaluate their own learning.

Perceived Learning in Online Learning

Perceived learning is an indicator of the effectiveness of online learning environments (Barbera et al., 2013) and is considered an evaluation of the learning experience (Caspi & Blau, 2011). Researchers define perceived learning along cognitive and socio-emotional dimensions: the cognitive dimension is the sense of having acquired new knowledge, while the socio-emotional dimension includes the student’s degree of involvement, experiences, and feelings in the learning process (Caspi & Blau, 2011).

Previous studies have reported that perceived learning has a significant and positive relationship with online course flexibility and student-student interaction (Marks et al., 2005); student-instructor interaction (Kang & Im, 2013); cognitive presence, social presence, and teaching presence (Arbaugh, 2008; Rockinson-Szapkiw et al., 2016); and learning content and course design (Barbera et al., 2013). Additionally, Paechter et al. (2010) report that students’ expectations regarding subject knowledge have predictive power for their perceived learning in online learning settings. Furthermore, Artino (2008) reports a significant correlation between perceived learning and satisfaction in online learning settings.

Perceived learning is also considered a significant predictor of students’ course grades in online learning settings (Rockinson-Szapkiw et al., 2016). Having different learning approaches gives students options to study with various tools over different time periods, with varying goals and expectations about their learning. In line with this expectation, students engage actively in online quizzes by paying more attention in class (Dobbins & Denton, 2017). Previous studies have reported positive perceptions of online quizzes in terms of learning outcomes, technology acceptance, and perceived usefulness (Raes & Depaepe, 2020). Accordingly, different types of online exams may differentiate perceived learning levels, and the relationship between online quizzes and students’ perceived learning is a variable worth examining. Within this framework, this study aims to reveal the relationship between perceived learning and the academic achievement of students studying with different types of online exams in an online EFL course.

Aim of the Study

One prominent area that uses online assessment is EFL classes, which are required first-year courses at universities across Turkey and target basic skills in reading, writing, speaking, listening, grammar, and vocabulary. Given the importance of quizzes in online EFL classes, this study aims to determine how online exam types differentiate learners’ perceived learning and academic achievement. We focused on an EFL course because its topics lend themselves to preparing different kinds of tests for the same learning outcomes. The motivation for the study was the idea that an online EFL course supported by different quiz types would differentiate students’ academic achievement and perceived learning. Focusing on the relationship between academic achievement and perceived learning, this research seeks answers to the following research questions:

  1. Is there a significant difference between the academic achievement (quizzes, EFL test) scores of the students who study with different quiz types?
  2. Is there a significant difference between the perceived learning scores of the students who study with different quiz types?
  3. What is the relationship between students’ academic achievement (with regard to question types in quizzes, EFL test) and their perceived learning scores?

Method

This quasi-experimental study, with three randomized study groups in a pre- and post-test design, was carried out in a university-level EFL course in Turkey. The same instructor taught the same instructional package to all groups during the fall semester of the 2020-2021 academic year. An academic achievement test covering all the targeted teaching modules was administered as a pre-test.

The students were introduced to the question types of the quizzes they would take after every module: Group A received multiple-choice questions; Group B received mixed-design questions (fill-in-the-blanks, true-false, matching); and Group C received open-ended questions. The study lasted 16 weeks, with a total of 18 online quizzes, 6 per group. The academic achievement test was administered as a post-test, followed by online interviews with volunteering students. The perceived learning scale was used to determine the perceived learning scores. Figure 1 shows the procedure followed during the study.

Figure 1

Study Procedure

In addition to the established Presentation, Practice, and Production methodology for teaching EFL, the online course was designed to fit the quizzes. Each group had one synchronous lesson of at least 90 minutes per week on the Adobe Connect Web Conferencing System. The quizzes were conducted as out-of-class activities at the end of each module and reviewed at the beginning of the following lesson, as shown in Figure 2.

Figure 2

Course Design with Quizzes

In part 1 of the course design, the quiz questions were answered, and students discussed the questions, contextual clues, and correct answers. In part 2, students wrote sample questions in their group’s quiz format and shared them with each other to become familiar with the question type(s); sample materials were presented for studying for the next quiz, shaping a learning approach fitting the question type. In part 3, the course topic was delivered via an appropriate teaching strategy, with examples matched to the type of the quiz questions. In part 4, a question-and-answer session was followed by a summary of the lesson and information about the next quiz (duration, number of questions, etc.).

Data Collection Tools

The quantitative data were gathered using the perceived learning scale, academic achievement test, and online quizzes.

Perceived Learning Scale

The original Cognitive, Affective, and Psychomotor (CAP) Perceived Learning Scale designed by Rovai et al. (2009) for both face-to-face and online learning includes a total of 9 items, 3 for each of the cognitive, affective, and psychomotor learning subscales. Since this study did not include psychomotor skills among its learning goals, we were concerned only with cognitive and affective learning, each represented by a single item. The item on cognitive perceived learning, as adapted by Çelik (2020) from Richmond et al. (1987) with a correlation coefficient of .806, read: “When you evaluate on a scale of 0 to 9, how much do you think you learned in this course? (0: meaning I think I learned nothing; 9: meaning I think I learned a lot).” Adapting this item with the help of four field experts, we tested affective perceived learning with the item: “When you evaluate on a scale of 0 to 9, how much do you think your attitude towards the course changed? (0: meaning I think my attitude didn’t change at all; 9: meaning I think my attitude changed a lot).” The Pearson correlation assessing the test-retest reliability of the affective item was r(74) = .802. Together, the two ten-point Likert items (0 to 9) had an internal consistency of Cronbach’s alpha = .933.
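
For readers wishing to reproduce these reliability checks, the sketch below shows how Cronbach’s alpha for a two-item scale and a test-retest Pearson correlation can be computed in Python. The sample responses are simulated placeholders, not the study’s data.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulated 0-9 Likert responses for the cognitive and affective items.
rng = np.random.default_rng(0)
base = rng.integers(0, 10, size=50)
scores = np.column_stack([base, np.clip(base + rng.integers(-1, 2, size=50), 0, 9)])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.3f}")

# Test-retest reliability: Pearson r between two administrations of one item.
retest = np.clip(scores[:, 1] + rng.integers(-1, 2, size=50), 0, 9)
r, p = stats.pearsonr(scores[:, 1], retest)
print(f"test-retest r = {r:.3f}, p = {p:.3f}")
```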

Academic Achievement Test

We assessed the effectiveness of the interventions on students’ academic achievement using a standardized test of 20 mixed-type questions, each worth 5 points, arranged in four parts selected from the end-of-term tests of the course book that frames the course. The Smart Choice course book (Wilson & Healy, 2016), published by Oxford University Press and widely used in K-12 and tertiary schools, includes tests targeting the outcomes of the foreign language curriculum and has served as the main instrument in several online EFL studies (Jakob & Afdaliah, 2019; Wongpornprateep & Boonmoh, 2019). The instructor and one field expert reviewed the test items for content validity. Google Forms was used to administer the tests.

Online Quizzes

The online quizzes, developed by the researchers (one of whom was also the course instructor), covered the relevant module of each week. While the content of the questions was the same across groups, the form of the questions differed for each group. Figure 3 outlines the forms of questions in the quizzes.

Figure 3

Examples of the Same Questions in Different Question Types

Each quiz included 20 questions targeting the same vocabulary and grammatical content with different question types. In all groups, some questions also included visuals for clarification and guidance. All the quiz questions were similar in content to the academic achievement test questions. Two field experts checked whether each question in the quizzes targeted the same objectives.

Interviews

Online interviews were carried out after the post-test with a total of 18 volunteering students, selected on the basis of their perceived learning scores (2 low, 2 medium, and 2 high scorers from each quiz group), to further explain the data from the scales. Participants were asked about the effects of the quizzes on their learning, study behavior, attitude towards the lesson, and academic achievement, as well as the factors they perceived as contributing to their learning.

Participants

Ninety-five freshman students (F = 64, M = 31, mean age = 19) enrolled in the English 1 course participated in this study. None of the participants had completed a preparatory English class, as this is not required for vocational school students; however, all had completed the same A1-level basic English classes. The participants were from the departments of Banking and Insurance, Finance, Public Relations, and Accounting, and they were randomly assigned to one of three groups: open-ended (n = 32), multiple-choice (n = 33), and mixed-design (n = 30).

Data Analysis

An analysis of variance (ANOVA) with the Tukey HSD post hoc test and the Kruskal-Wallis H test were used to determine significant differences among the students’ pre-test, post-test, and quiz scores. A split-plot ANOVA was used to show the change in academic achievement from pre-test to post-test, and paired samples t-tests were used to test pre-test to post-test differences within groups.
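
As a rough sketch of this decision logic (not the authors’ actual analysis scripts), the following Python snippet checks homogeneity of variance with Levene’s test and then applies either a one-way ANOVA or the Kruskal-Wallis H test; the group score arrays are hypothetical.

```python
from scipy import stats

def compare_groups(a, b, c, alpha=0.05):
    """Pick one-way ANOVA or Kruskal-Wallis based on Levene's test."""
    lev_stat, lev_p = stats.levene(a, b, c)
    if lev_p > alpha:                      # variances equal: parametric test
        f, p = stats.f_oneway(a, b, c)
        return "one-way ANOVA", f, p
    h, p = stats.kruskal(a, b, c)          # unequal variances: non-parametric test
    return "Kruskal-Wallis H", h, p

# Hypothetical pre-test scores for the three quiz-type groups.
mc    = [45, 50, 40, 55, 35, 60, 42, 48]
mixed = [50, 55, 45, 60, 40, 52, 47, 58]
oe    = [55, 60, 50, 65, 45, 58, 52, 62]
test, statistic, p = compare_groups(mc, mixed, oe)
print(f"{test}: statistic = {statistic:.3f}, p = {p:.3f}")
```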

The data obtained from the interviews were analyzed descriptively, and students’ opinions were presented with reference to the groups’ perceived learning scores. For example, in the open-ended quiz group, students with a total perceived learning score below 9 were classified as “low,” between 9 and 15 as “medium,” and above 15 as “high”; quotations from them were coded OEL-1, OEM-1, OEH-1, and so on.
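
A minimal sketch of this banding and coding rule follows; the thresholds come from the text above, while the function names and label format are our illustrative reconstruction.

```python
def band(total_score: int) -> str:
    """Map a total perceived learning score to the study's bands."""
    if total_score < 9:
        return "L"          # low: below 9
    if total_score <= 15:
        return "M"          # medium: 9 to 15 inclusive
    return "H"              # high: above 15

def interviewee_code(group_prefix: str, total_score: int, index: int) -> str:
    """E.g. interviewee_code('OE', 7, 1) -> 'OEL-1'."""
    return f"{group_prefix}{band(total_score)}-{index}"

print(interviewee_code("OE", 7, 1))    # OEL-1
print(interviewee_code("MIX", 12, 4))  # MIXM-4
```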

Results

The quantitative data are presented with regard to the research questions, and the interview data are used to explain the factors in the intervention process.

Academic Achievement Scores of the Groups

Is there a significant difference in the pre-test results of the students in the different quiz-type groups?

The pre-test data met the normality assumption, and Levene’s test confirmed homogeneity of variance across the quiz-type groups, F(2, 92) = 0.163, p = .850. We therefore performed a one-way ANOVA to compare the groups’ pre-test scores, as shown in Table 1.

Table 1

One-way ANOVA Results of the Groups for Pre-test

                  Sum of squares   df   Mean square       F     Sig.
Between groups         2330.705    2      1165.353    2.874     .062
Within groups         37299.295   92       405.427
Total                 39630.000   94

Note. *p < .05.

Table 1 indicates that there was no statistically significant difference between the groups’ pre-test results as determined by the one-way ANOVA, F(2, 92) = 2.874, p = .062.

Is there a significant difference in the post-test results of the students in the different quiz-type groups?

Although the post-test data met the normality assumption, Levene’s test showed that the variances were not equal, F(2, 92) = 3.281, p = .042. Therefore, the non-parametric Kruskal-Wallis H test was carried out; the difference between the post-test scores of the groups is shown in Table 2.

Table 2

Kruskal-Wallis H Test Results of the Groups for Post-test

Test statistic   Post-test
Chi-square           2.755
df                   2
Asymp. sig.           .252

Note. * p < .05.

Table 2 indicates that there was no statistically significant difference between the groups, χ2(2) = 2.755, p = .252. The change in the academic achievement scores is nevertheless shown in Figure 4.

Figure 4

Change of Pre-tests and Post-tests for Academic Achievement

Figure 4 shows that mean academic achievement increased from 43.18 to 55.90 in the multiple-choice group, from 48.83 to 52.66 in the mixed-design group, and from 55.15 to 62.81 in the open-ended group. The gain was largest in the multiple-choice group, whose post-test mean surpassed that of the mixed-design group, even though the latter had the higher pre-test score.

Is there a significant difference in the quiz scores of the students in the different quiz-type groups?

The quiz mean scores met the normality assumption, and Levene’s test showed that the variances were equal, F(2, 92) = 1.301, p = .277. The difference between the groups’ quiz mean scores is shown in Table 3.

Table 3

One-way ANOVA Results of the Groups for Average Quiz Scores

                  Sum of squares   df   Mean square        F     Sig.   Tukey HSD
Between groups         9746.200    2      4873.100   21.906     .000   A-B, A-C, B-C
Within groups         20465.484   92       222.451
Total                 30211.684   94

Note. A = multiple-choice quiz group; B = mixed-design quiz group; C = open-ended quiz group. * p < .05.

A statistically significant difference was found between the groups, F(2, 92) = 21.906, p < .001. Post hoc Tukey HSD test results revealed a significant difference between the multiple-choice (x = 64.81, SD = 13.42) and mixed-design (x = 54.90, SD = 14.85) groups in favor of the multiple-choice group, and between the multiple-choice and open-ended (x = 40.43, SD = 16.35) groups, again in favor of the multiple-choice group. In addition, there was a significant difference between the mixed-design and open-ended groups, in favor of the mixed-design group.
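
A Tukey HSD post hoc comparison of this kind can be reproduced with statsmodels’ pairwise_tukeyhsd. The sketch below simulates hypothetical scores using the group means and standard deviations reported above, so its exact output will differ from Table 3.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Simulated quiz averages; group labels follow Table 3 (A, B, C).
scores = np.concatenate([
    np.random.default_rng(1).normal(64.8, 13.4, 33),  # A: multiple-choice
    np.random.default_rng(2).normal(54.9, 14.9, 30),  # B: mixed-design
    np.random.default_rng(3).normal(40.4, 16.4, 32),  # C: open-ended
])
groups = ["A"] * 33 + ["B"] * 30 + ["C"] * 32
# Prints pairwise mean differences with adjusted p-values and reject flags.
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```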

Is there a significant difference in the academic achievement scores of the students within groups?

The differences in the academic achievement scores of the multiple-choice, mixed-design, and open-ended groups were examined with paired samples t-tests, as shown in Tables 4, 5, and 6, respectively.

Table 4

Paired Samples T-Test of the Multiple-choice Group for Academic Achievement

Pair 1: pre-test - post-test
Mean difference               -12.72727
Std. deviation                 20.99445
Std. error mean                 3.65467
95% CI of the difference      [-20.17158, -5.28296]
t                              -3.482
df                             32
Sig. (2-tailed)                  .001

Note. * p < .05.

Table 4 shows that the difference in the academic achievement of the multiple-choice quiz group between the pre-test (x = 43.18, SD = 18.86) and the post-test (x = 55.90, SD = 24.98) was significant, t(32) = -3.482, p = .001.
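
A paired samples t-test such as the one reported in Table 4 can be computed with scipy’s ttest_rel; the pre/post arrays below are hypothetical placeholders, not the study’s data.

```python
from scipy import stats

pre  = [40, 35, 50, 45, 30, 55, 42, 38]   # hypothetical pre-test scores
post = [55, 48, 60, 58, 45, 70, 50, 52]   # hypothetical post-test scores
t, p = stats.ttest_rel(pre, post)
print(f"t({len(pre) - 1}) = {t:.3f}, p = {p:.3f}")  # negative t: post > pre
```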

Table 5

Paired Samples t-Test of the Mixed-design Group for Academic Achievement

Pair 1: pre-test - post-test
Mean difference                -3.83333
Std. deviation                 20.66495
Std. error mean                 3.77289
95% CI of the difference      [-11.54975, 3.88309]
t                              -1.016
df                             29
Sig. (2-tailed)                  .318

Note. * p < .05.

The difference in the academic achievement of the mixed-design group between the pre-test (x = 48.83, SD = 20.32) and the post-test (x = 52.66, SD = 22.42) was not significant, t(29) = -1.016, p = .318.

Table 6

Paired Samples t-Test of the Open-Ended Group for Academic Achievement

Pair 1: pre-test - post-test
Mean difference                -7.65625
Std. deviation                 17.17953
Std. error mean                 3.03694
95% CI of the difference      [-13.85013, -1.46237]
t                              -2.521
df                             31
Sig. (2-tailed)                  .017

Note. * p < .05.

The difference in the academic achievement of the open-ended group between the pre-test (x = 55.15, SD = 21.19) and the post-test (x = 62.81, SD = 16.74) was significant, t(31) = -2.521, p = .017.

Perceived Learning Scores of the Groups

Is there a significant difference in the perceived learning scores of the students in the different quiz-type groups?

We examined the perceived learning scores of the groups under the affective and cognitive learning dimensions. Although the data met the normality assumption, Levene’s test showed that the variances were not equal for affective learning, F(2, 92) = 6.210, p = .003, or for cognitive learning, F(2, 92) = 5.345, p = .006. Table 7 therefore shows the Kruskal-Wallis H test results for the difference between the groups’ affective and cognitive perceived learning scores.

Table 7

Kruskal-Wallis H Test Results of the Groups for Affective and Cognitive Learning Scores

Test statistic   Affective   Cognitive
Chi-square           2.906       5.406
df                   2           2
Asymp. sig.           .234        .067

Note. * p < .05.


Table 7 shows that, as determined by the Kruskal-Wallis H test, there was no statistically significant difference between the groups in affective perceived learning, χ2(2) = 2.906, p = .234, or in cognitive perceived learning, χ2(2) = 5.406, p = .067.

Relationships between Academic Achievement and Perceived Learning Scores of the Groups

Is there a significant relationship between students’ learning performance and their perceived learning scores?

We used Pearson correlation coefficients to determine the relationships between affective learning, cognitive learning, average quiz scores, and academic achievement scores for the multiple-choice, mixed-design, and open-ended quiz groups, as shown in Tables 8, 9, and 10, respectively.
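
Correlation matrices of this kind can be produced with pandas; the sketch below is illustrative only, using simulated data whose column names (affective, cognitive, quiz_avg, post_test) are our own labels rather than the study’s variables.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 33  # group size, as in the multiple-choice group
quiz = rng.normal(65, 13, n)  # simulated quiz averages
df = pd.DataFrame({
    "affective": np.clip(quiz / 10 + rng.normal(0, 1, n), 0, 9),
    "cognitive": np.clip(quiz / 10 + rng.normal(0, 1, n), 0, 9),
    "quiz_avg": quiz,
    "post_test": quiz * 0.8 + rng.normal(0, 10, n),
})
# Pearson r for every pair of columns, as in Tables 8-10.
print(df.corr(method="pearson").round(3))
```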

Table 8

Multiple-choice Quiz Group Correlations between Affective Learning, Cognitive Learning, Quiz Averages, and Post-test as Academic Achievement Scores

                                       Affective    Cognitive    Quiz        Post-test
                                       learning     learning     averages
Affective learning   Pearson r         1            .933**       .449**      .336
                     Sig. (2-tailed)                .000         .009        .056
Cognitive learning   Pearson r         .933**       1            .418*       .263
                     Sig. (2-tailed)   .000                      .015        .140
Quiz averages        Pearson r         .449**       .418*        1           .741**
                     Sig. (2-tailed)   .009         .015                     .000
Post-test            Pearson r         .336         .263         .741**      1
                     Sig. (2-tailed)   .056         .140         .000

N = 33 for all correlations.

Note. * p < .05. ** p < .01.

The results revealed a very strong positive correlation between perceived affective learning and cognitive learning, a moderate positive correlation between perceived affective learning and average quiz scores, and a weak positive correlation between affective learning and academic achievement scores. In addition, there was a moderate positive correlation between perceived cognitive learning and the average quiz scores, a weak positive correlation between cognitive learning and academic achievement, but a strong positive correlation between average quiz scores and academic achievement, which means increases in quiz averages were correlated with increases in academic achievement.

Students’ perspectives about the process also provided clues to explain the positive effect of quizzes on their perceived learning. In this regard, students with high perceived learning scores stated that quizzes had a positive effect on their learning, while students with low perceived learning scores stated that quizzes had no effect on their learning. For example, MCH-5 stated, "The exams are very efficient as they are loaded right after finishing the subject and I understand the subjects better," while MCL-1 stated, "The quizzes are not very effective on my learning as they are easy to answer and I know I will pass the test very easily."

Table 9

Mixed-design Group Correlations between Affective Learning, Cognitive Learning, Quiz Means, and Posttest as Academic Achievement Scores

                                       Affective    Cognitive    Quiz        Post-test
                                       learning     learning     averages
Affective learning   Pearson r         1            .901**       .393*       .343
                     Sig. (2-tailed)                .000         .032        .064
Cognitive learning   Pearson r         .901**       1            .393*       .385*
                     Sig. (2-tailed)   .000                      .031        .036
Quiz averages        Pearson r         .393*        .393*        1           .756**
                     Sig. (2-tailed)   .032         .031                     .000
Post-test            Pearson r         .343         .385*        .756**      1
                     Sig. (2-tailed)   .064         .036         .000

N = 30 for all correlations.

Note. * p < .05. ** p < .01.

We found a very strong positive correlation between affective learning and cognitive learning, and a weak positive correlation between affective learning and average quiz scores. In addition, we found a weak positive correlation between affective learning and academic achievement scores, between cognitive learning and average quiz scores, and between cognitive learning and academic achievement. Finally, we found a strong positive correlation between quiz averages and academic achievement, indicating that higher quiz means were strongly associated with higher academic achievement.

Students’ perspectives on the factors explaining the effects of the interventions on the research variables were generally in line with the quantitative data. While students with low perceived learning scores stated that the quizzes had little impact on their learning, students with medium and high perceived learning scores expressed the positive effects of quizzes. In this sense, MIXL-1 stated, “Quizzes are good work but not beneficial for my learning,” while MIXM-4 stated, “Quizzes allow me to repeat the topics I have learned,” and MIXH-6 stated, “The exams help me improve what I learned in the lesson.” Overall, students reported that quizzes had positive effects on their learning.

Table 10

Open-Ended Group Correlations between Affective Learning, Cognitive Learning, Average Quiz Scores, and Post-test as Academic Achievement Scores

                                       Affective    Cognitive    Quiz        Post-test
                                       learning     learning     averages
Affective learning   Pearson r         1            .663**       .147        .101
                     Sig. (2-tailed)                .000         .421        .583
Cognitive learning   Pearson r         .663**       1            .259        .252
                     Sig. (2-tailed)   .000                      .152        .165
Quiz averages        Pearson r         .147         .259         1           .521**
                     Sig. (2-tailed)   .421         .152                     .002
Post-test            Pearson r         .101         .252         .521**      1
                     Sig. (2-tailed)   .583         .165         .002

N = 32 for all correlations.

Note. ** p < .01.

We found a strong positive correlation between affective learning and cognitive learning; a very weak positive correlation between affective learning and average quiz scores and between affective learning and academic achievement scores; and a weak positive correlation between cognitive learning and quiz scores and between cognitive learning and academic achievement scores. However, we found a moderate positive correlation between average quiz scores and academic achievement scores, indicating that increases in quiz averages were moderately associated with increases in academic achievement.

The perspectives of the open-ended quiz group’s students help explain the relationship between perceived learning scores and academic achievement, though it was weak. Unlike the other two groups, all of the volunteering interviewees in this group described positive effects of quizzes on their perceived learning. For example, OEL-2 stated, “I studied for quizzes, and they helped me learn by making it easier to learn,” and, similarly, OEM-4 claimed, “I think quizzes were difficult for me but having to study affected my learning.” OEH-5 stated, “Quizzes certainly have an effect on my learning. I realized that I understood and improved my English skills more with them,” and OEH-6 stated, “I think I improved what I learned in the lessons better with quizzes.”

Overall, the interviews revealed that students with high perceived learning scores in all groups reported that quizzes helped them learn by providing opportunities to review and practice the topics. Students with low perceived learning scores in the multiple-choice and mixed-design groups generally stated that quizzes had no effect on their learning.

Discussion

Various studies have reported the positive effects of using several assessments instead of a single final exam, such as improved student learning and retention (Rezaei, 2015), student engagement, and feedback opportunities (Holmes, 2015). Day et al. (2018) indicated that assessment leads to more effective study behavior, promoting student academic achievement, but that the type of continuous assessment does not influence academic achievement; that is, students’ performance does not differ depending on whether assessment is through a written assignment, a partial exam, or homework assignments. However, Brown and Wang (2013) claimed that the types of exams used for assessment lead students to use different learning approaches when preparing for the exam. In contrast with the former study and consistent with the latter, our study found that quiz scores were highest in the multiple-choice group, followed by the mixed-design and open-ended groups. In multiple-choice exams the correct answer is present among the choices, which act as retrieval cues and make it easier for students to recognize the correct answer, whereas open-ended exams require students to write answers in their own sentences. The mixed-design quizzes combined cued formats (true/false, matching) with production formats (fill-in-the-blanks), which may explain why that group’s quiz means fell between those of the other two groups.

The average quiz scores of the mixed-design group were lower than those of the multiple-choice group but higher than those of the open-ended group. However, the open-ended quiz group was the most successful in the post achievement test, which contained all question types. Comparing the descriptive pre- and post-test academic achievement results supports the role of the question types. Given that students in the multiple-choice and mixed-design groups, to some extent, chose among predetermined options or statements, they may have felt all the questions would be uncomplicated. This may have resulted in superficial studying, or none at all, as they assumed they would pass the quiz or the exam readily. In contrast, the open-ended quiz group faced questions with limited or no hints other than contextual clues or pictures, which required them to use all their academic knowledge, learning strategies, and skills. Students tend to use an in-depth learning approach to understand the subject when asked questions requiring answers based on interpretation. Because test items that invite recognition or guesswork demand less mental effort, the achievement scores of the multiple-choice and mixed-design groups may have fallen behind those of the open-ended group, who studied with deeper learning strategies. Notwithstanding their seeming disadvantage in the quizzes, the open-ended group attained higher perceived learning and achievement scores and expressed unreservedly positive views on the benefits of this quiz type. Confronted with challenging questions, they might have been compelled to study sentence structures and different expressions and to conclude that the only way to pass the quiz or the exam was to study hard, leading to greater learning.

Regarding perceived learning, this study did not find any statistically significant differences between groups in total perceived learning scores. However, the open-ended group had the highest total perceived learning scores, which may be explained by their use of deeper learning strategies while studying. The mixed-design group fell behind the multiple-choice group despite facing fill-in-the-blanks questions as well as true/false and matching questions; this might be because they felt their learning was superficial with the less familiar matching format.

In-depth analysis of the multiple-choice group revealed a moderate relationship between affective learning scores and average quiz scores, and between cognitive learning scores and average quiz scores, but a weak relationship between affective learning scores and academic achievement scores, and between cognitive learning scores and academic achievement scores. This variation may arise because students perceived greater learning in quizzes whose answer options eased decision-making, yet performed lower on the academic achievement test, which combined open-ended, short-answer, and fill-in-the-blanks questions with the more familiar multiple-choice format.

Previous studies have reported higher cognitive perceived learning in online courses as a result of increased student satisfaction (Baturay, 2011) and higher achievement (Rockinson-Szapkiw et al., 2016). Consistent with the literature, the students in the multiple-choice quiz group achieved higher scores in the quizzes and referred to the positive impacts of quizzes, implying satisfaction with their higher scores, which might have led them to believe they had learned more. The differences between the multiple-choice and mixed-design groups can be explained by the fact that the true/false questions the mixed-design group faced required less thinking, and their matching questions were simpler and easy to guess. Students frequently encounter multiple-choice tests or open-ended questions in their academic lives but seldom mixed-design exams, which may have adversely affected their learning approach and academic achievement.

Finally, the analysis of the open-ended group showed a very weak positive relationship between affective learning scores and average quiz scores, and between affective learning scores and academic achievement scores, and a weak relationship between cognitive learning scores and average quiz scores, and between cognitive learning scores and academic achievement scores. The discrepancy from the other two groups likely arose because students felt they had learned more while studying for questions that provided no hints, yet made more mistakes when answering without options or clues in the limited time given. As previous studies have shown, learner-content interaction strongly influences learners’ perceived learning and satisfaction (Alqurashi, 2019; Baber, 2020; Baturay, 2011; Lin et al., 2017). In this framework, all the students interviewed in this group confirmed the positive role of open-ended questions in guiding them to use deeper learning and studying strategies while interacting more with the content of the quizzes.

Overall, the types of questions given to the different groups might have changed students’ study behaviors, which in turn may have influenced the expectations and satisfaction that were indirectly related to perceived learning. In addition, the observed increase in the academic achievement of the open-ended group could be attributed to their study behavior.

Some researchers argue that the determinants of students’ perceived learning and satisfaction in online learning are course structure, instructor knowledge, and facilitation of the learning process through feedback (Baber, 2020; Cole et al., 2021). Others report that the variables principally influencing student satisfaction and perceived learning in online courses are course design, interaction, and learning content (Barbera et al., 2013; Cui, 2021). In accordance with these studies, the course structure in our study enabled the instructor to give feedback on the quizzes and learning processes by answering the quiz questions and having students create similar questions in the first lesson after each quiz. This allowed greater interaction, which may have positively affected students’ perceived learning.

Conclusion

This study set out to examine the relationships between quiz types, academic achievement, and perceived learning. The participants were most successful on multiple-choice questions and least successful on open-ended questions. Conversely, those who took open-ended quizzes were the most successful on the achievement test, suggesting that this question type improves study behaviors and deepens learning strategies for mixed-type exams. The students in the open-ended quiz group also displayed the highest affective and cognitive perceived learning scores, implying the impact of dealing with questions that require deeper learning strategies. Finally, the current study confirmed a positive relationship between overall perceived learning scores and academic achievement scores: the higher the perceived learning score, the higher the academic achievement score. The question types in this study shaped students’ study behaviors and also affected their expectations and satisfaction.

Limitations and Implications

This study is not free from limitations. The sample sizes in the groups were small, and the instructional package was specific to an English course. A larger sample and broader content coverage would strengthen the analysis. The study employed the most frequently used online quiz types; further studies could examine quizzes created with other assessment types. We hope the results of the study are helpful to online instructors who wish to make more effective use of various types of quizzes in online EFL courses.

References

Alharbi, A. S., & Meccawy, Z. (2020). Introducing Socrative as a tool for formative assessment in Saudi EFL classrooms. Arab World English Journal, 11(3), 372-384. https://dx.doi.org/10.24093/awej/vol11no3.23

Alqurashi, E. (2019). Predicting student satisfaction and perceived learning within online learning environments. Distance Education, 40(1), 133-148. https://doi.org/10.1080/01587919.2018.1553562

Arbaugh, J. B. (2008). Does the community of inquiry framework predict outcomes in online MBA courses? The International Review of Research in Open and Distributed Learning, 9(2). https://doi.org/10.19173/irrodl.v9i2.490

Artino, A. R. (2008). Motivational beliefs and perceptions of instructional quality: Predicting satisfaction with online training. Journal of Computer Assisted Learning, 24(3), 260-270. https://doi.org/10.1111/j.1365-2729.2007.00258.x

Baber, H. (2020). Determinants of students’ perceived learning outcome and satisfaction in online learning during the pandemic of COVID-19. Journal of Education and E-Learning Research, 7(3), 285-292. https://doi.org/10.20448/journal.509.2020.73.285.292

Barbera, E., Clara, M., & Linder-Vanberschot, J. A. (2013). Factors influencing student satisfaction and perceived learning in online courses. E-learning and Digital Media, 10(3), 226-235. https://doi.org/10.2304/elea.2013.10.3.226

Baturay, M. H. (2011). Relationships among sense of classroom community, perceived cognitive learning and satisfaction of students at an e-learning course. Interactive Learning Environments, 19(5), 563-575. https://doi.org/10.1080/10494821003644029

Brown, G. T., & Wang, Z. (2013). Illustrating assessment: How Hong Kong university students conceive of the purposes of assessment. Studies in Higher Education, 38(7), 1037-1057. https://doi.org/10.1080/03075079.2011.616955

Caspi, A., & Blau, I. (2011). Collaboration and psychological ownership: How does the tension between the two influence perceived learning? Social Psychology of Education, 14(2), 283-298. https://doi.org/10.1007/s11218-010-9141-z

Çelik, B. (2020). An examination of presage, process and product dimensions in massive open online courses [Doctoral thesis, Middle East Technical University]. https://hdl.handle.net/11511/69117

Cole, A. W., Lennon, L., & Weber, N. L. (2021). Student perceptions of online active learning practices and online learning climate predict online course engagement. Interactive Learning Environments, 29(5), 866-880. https://doi.org/10.1080/10494820.2019.1619593

Cui, Y. (2021). Perceived learning outcomes and interaction mode matter: Students’ experience of taking online EFL courses during COVID-19. English Language Teaching, 14(6), 84-95. https://doi.org/10.5539/elt.v14n6p84

Day, I. N., van Blankenstein, F. M., Westenberg, P. M., & Admiraal, W. F. (2018). Explaining individual student success using continuous assessment types and student characteristics. Higher Education Research & Development, 37(5), 937-951. https://doi.org/10.1080/07294360.2018.1466868

Dobbins, C., & Denton, P. (2017). MyWallMate: An investigation into the use of mobile technology in enhancing student engagement. TechTrends, 61(6), 541-549. https://doi.org/10.1007/s11528-017-0188-y

Ebadi, S., & Rahimi, M. (2018). An exploration into the impact of WebQuest-based classroom on EFL learners’ critical thinking and academic writing skills: A mixed-methods study. Computer Assisted Language Learning, 31(5-6), 617-651. https://doi.org/10.1080/09588221.2018.1449759

Elbasyouny, T. R. B. (2021). Enhancing students’ learning and engagement through formative assessment using online learning tools [Unpublished doctoral dissertation]. British University in Dubai. https://bspace.buid.ac.ae/handle/1234/1842

Fageeh, A. I. (2015). EFL student and faculty perceptions of and attitudes towards online testing in the medium of Blackboard: Promises and challenges. JALT CALL Journal, 11(1), 41-62. https://doi.org/10.29140/jaltcall.v11n1.183

Gray, J. A., & DiLoreto, M. (2016). The effects of student engagement, student satisfaction, and perceived learning in online learning environments. International Journal of Educational Leadership Preparation, 11(1), n1. https://files.eric.ed.gov/fulltext/EJ1103654.pdf

Holmes, N. (2015). Student perceptions of their learning and engagement in response to the use of a continuous e-assessment in an undergraduate module. Assessment & Evaluation in Higher Education, 40(1), 1-14. https://doi.org/10.1080/02602938.2014.881978

Jakob, J. C., & Afdaliah, N. (2019). Using Oxford Smart Choice Multi-ROM to develop the students’ listening ability. EduLite: Journal of English Education, Literature and Culture, 4(1), 25-34. http://dx.doi.org/10.30659/e.4.1.25-34

Jazil, S., Manggiasih, L. A., Firdaus, K., Chayani, P. M., & Rahmatika, S. N. (2020). Students’ attitudes towards the use of Google Forms as an online grammar assessment tool. In Proceedings of the International Conference on English Language Teaching (ICONELT 2019). Advances in social science, education and humanities research (pp. 166-169). Atlantis Press. https://doi.org/10.2991/assehr.k.200427.033

Kang, M., & Im, T. (2013). Factors of learner-instructor interaction which predict perceived learning outcomes in online learning environment. Journal of Computer Assisted Learning, 29(3), 292-301. https://doi.org/10.1111/jcal.12005

Kılıç, Z., & Çetin, S. (2018). Examination of students’ exam type preferences in terms of various variables. Elementary Education Online, 17(2), 1051-1065. https://doi.org/10.17051/ilkonline.2018.419353

Lin, C. H., Zheng, B., & Zhang, Y. (2017). Interactions and learning outcomes in online language courses. British Journal of Educational Technology, 48(3), 730-748. https://doi.org/10.1111/bjet.12457

Marks, R. B., Sibley, S. D., & Arbaugh, J. B. (2005). A structural equation model of predictors for effective online learning. Journal of Management Education, 29(4), 531-563. https://doi.org/10.1177/1052562904271199

Ogange, B. O., Agak, J. O., Okelo, K. O., & Kiprotich, P. (2018). Student perceptions of the effectiveness of formative assessment in an online learning environment. Open Praxis, 10(1), 29-39. https://search.informit.org/doi/10.3316/informit.423669258504414

Paechter, M., Maier, B., & Macher, D. (2010). Students’ expectations of, and experiences in e-learning: Their relation to learning achievements and course satisfaction. Computers & Education, 54(1), 222-229. https://doi.org/10.1016/j.compedu.2009.08.005

Pan, S. C., Cooke, J., Little, J. L., McDaniel, M. A., Foster, E. R., Connor, L. T., & Rickard, T. C. (2019). Online and clicker quizzing on jargon terms enhances definition-focused but not conceptually focused biology exam performance. CBE-Life Sciences Education, 18(4), ar54. https://doi.org/10.1187/cbe.18-12-0248

Raes, A., & Depaepe, F. (2020). A longitudinal study to understand students’ acceptance of technological reform. When experiences exceed expectations. Education and Information Technologies, 25(1), 533-552. https://doi.org/10.1007/s10639-019-09975-3

Rezaei, A. R. (2015). Frequent collaborative quiz taking and conceptual learning. Active Learning in Higher Education, 16(3), 187-196. https://doi.org/10.1177/1469787415589627

Rinaldi, V. D., Lorr, N. A., & Williams, K. (2017). Evaluating a technology supported interactive response system during the laboratory section of a histology course. Anatomical Sciences Education, 10(4), 328-338. https://doi.org/10.1002/ase.1667

Rockinson-Szapkiw, A. J., Wendt, J., Whighting, M., & Nisbet, D. (2016). The predictive relationship among the community of inquiry framework, perceived learning and online, and graduate students’ course grades in online synchronous and asynchronous courses. International Review of Research in Open and Distributed Learning, 17(3), 18-35. https://doi.org/10.19173/irrodl.v17i3.2203

Rovai, A. P., Wighting, M. J., Baker, J. D., & Grooms, L. D. (2009). Development of an instrument to measure perceived cognitive, affective, and psychomotor learning in traditional and virtual classroom higher education settings. The Internet and Higher Education, 12(1), 7-13. https://doi.org/10.1016/j.iheduc.2008.10.002

Schneider, J. L., Ruder, S. M., & Bauer, C. F. (2018). Student perceptions of immediate feedback testing in student centered chemistry classes. Chemistry Education Research and Practice, 19(2), 442-451. https://doi.org/10.1039/C7RP00183E

Sek, Y. W., Law, C. Y., Liew, T. H., Hisham, S. B., Lau, S. H., & Pee, A. N. B. C. (2012). E-assessment as a self-test quiz tool: The setting features and formative use. Procedia-Social and Behavioral Sciences, 65, 737-742. https://doi.org/10.1016/j.sbspro.2012.11.192

Vayre, E., & Vonthron, A. M. (2019). Relational and psychological factors affecting exam participation and student achievement in online college courses. The Internet and Higher Education, 43, 100671. https://doi.org/10.1016/j.iheduc.2018.07.001

Wilson, K., & Healy, T. (2016). Smart choice: Smart learning-on the page and on the move. Workbook with self-study listening. Starter Level. Oxford University Press.

Wongpornprateep, P., & Boonmoh, A. (2019). Students’ perceptions towards the use of VLE in a fundamental English course: A review of Smart Choice Online Practice and Smart Choice On the Move. Journal of Studies in the English Language, 14(2), 91-131. https://so04.tci-thaijo.org/index.php/jsel/article/view/200688

Yadollahi, S., & Rahimi, M. (2011). ICT use in EFL classes: A focus on EFL teachers’ characteristics. World Journal of English Language, 1(2), 17-29. https://doi.org/10.5430/wjel.v1n2p17

Yang, R. (2017). The use of questions in a synchronous intercultural online exchange project. ReCALL, 30(1), 112-130. https://doi.org/10.1017/S0958344017000210

 


Qualifying with Different Types of Quizzes in an Online EFL Course: Influences on Perceived Learning and Academic Achievement by Ünal Çakıroğlu, Esin Saylan, İsak Çevik, and Adem Özkan is licensed under a Creative Commons Attribution 4.0 International License.