International Review of Research in Open and Distributed Learning

Volume 17, Number 4

June - 2016

Using the mTSES to Evaluate and Optimize mLearning Professional Development


Robert Power1, Dean Cristol2, Belinda Gimbert3, Robin Bartoletti4, and Whitney Kilgore5
1University of Ontario Institute of Technology, Canada, 2,3Ohio State University, USA, 4University of North Texas Health Science Center, USA, 5University of North Texas, USA

Abstract

The impact of targeted professional development activities on teachers' perceptions of self-efficacy with mobile learning remains understudied. Power (2015a) used the Mobile Teacher's Sense of Efficacy Scale (mTSES) survey instrument to measure the effects of a mobile learning themed professional development course on teachers' confidence with, and interest in, mobile learning. The current study examines changes in perceptions of self-efficacy amongst participants in another open course about mobile learning, Instructional Design for Mobile Learning (ID4ML), which ran from May 4 to June 6, 2015 (Power, Bartoletti, & Kilgore, 2015). The purpose of this study is to verify the reliability and construct validity of the mTSES instrument developed by Power (2015a, 2015b) and Power, Cristol, and Gimbert (2014), and to explore trends in self-efficacy changes amongst a more diverse participant population. This paper reports the findings from the analysis of data collected using the mTSES tool. The findings provide useful feedback on the impacts of participating in the ID4ML course, and further support for the utility of the mTSES instrument as a measure of perceptions of self-efficacy with mobile learning. They also point to the potential utility of the mTSES as a tool for both planning and evaluating mLearning professional development training for teachers.

Keywords: CSAM, mLearning, mobile learning, mTSES, professional development, self-efficacy, teacher training

Introduction

Despite increasing calls for wider integration of mobile technologies into formal education, one of the most significant determinants of teachers' willingness to adopt mobile learning strategies remains understudied (Kenny, Park, Van Neste-Kenny, & Burton, 2010). A strong sense of confidence in their own abilities increases the likelihood that teachers will experiment with new technologies or teaching approaches (Tschannen-Moran & Woolfolk Hoy, 2001a). This study examined changes in participants' perceptions of self-efficacy after participating in a Massive Open Online Course (MOOC) called Instructional Design for Mobile Learning (ID4ML) (Power, Bartoletti, & Kilgore, 2015). The Mobile Teacher's Sense of Efficacy Scale (mTSES) (Power, Cristol, & Gimbert, 2014; Power, 2015a, 2015b) was used to gauge perceptions of self-efficacy before and after participation in the mobile learning themed professional development. The results revealed that the course helped participants gain confidence in their abilities to use mobile devices and applications to increase student engagement. However, ID4ML participants showed decreased confidence in their abilities with designing instruction and classroom management for mobile learning. The results were compared to those reported for participants in a recent MOOC with an explicit focus on a framework for pedagogical decisions about mobile learning design (Power, 2015a). Analyses of demographic trends in mTSES results from the two courses point to areas where changes could be made to increase the likelihood that participants will integrate mobile learning into their teaching practice. The results of this study demonstrate the utility of the mTSES instrument as a tool for assessing the effectiveness of mobile learning focused professional development. They also highlight the potential for the mTSES to be used by professional development planners to design training to meet the specific needs of target audiences. The mTSES instrument has the potential to complement other professional development planning and evaluation tools, allowing planners to specifically target perceptions of confidence alongside other intended learning outcomes.

Background

Teachers' adoption of new instructional technologies and pedagogical strategies is influenced by confidence in their ability to use them effectively. This perception of confidence is referred to as a teacher's sense of self-efficacy by Tschannen-Moran and Woolfolk Hoy (2001a), who defined it as "a judgement of… capabilities to bring about desired outcomes of student engagement and learning" (p. 783). Perceptions of self-efficacy can influence a teacher's "levels of planning and organization" and "willingness to experiment with new methods to meet the needs… of students" (p. 783). Higher levels of self-efficacy amongst teachers have also been demonstrated to be predictors of "persistence when things do not go smoothly and their resilience in the face of setbacks" (p. 783). In contrast, a lack of confidence in one's abilities results in greater tendencies amongst teachers to abandon new strategies and tools, or even to leave the profession altogether. Addressing perceptions of self-efficacy thus appears crucial in any effort to increase the adoption of new techniques and technologies.

The imperatives to integrate mobile technologies and mobile learning strategies are becoming increasingly commonplace in discourse on how to meet the changing needs of learners and education systems (Ally & Prieto-Blázquez, 2014; Groupe Spécial Mobile Association, 2012; Traxler, 2012). However, Ally and Prieto-Blázquez (2014, pp. 145-146) warned that current teacher training programs continue to be based on an outdated education system model that does not adequately prepare teachers to integrate mobile technologies into teaching practice. Teachers' perceptions of self-efficacy can be negatively impacted by a lack of training in instructional design for mobile learning (Kenny et al., 2010). Negative perceptions of self-efficacy have been highlighted as a significant hindrance to more widespread adoption of mobile learning strategies amongst teachers and education systems (Ally, Farias, Gitsaki, Jones, MacLeod, Power, & Stein, 2013; Kenny et al., 2010; Power, 2015a). Despite this, the concept of perceptions of self-efficacy "does not yet appear to have been examined in any detail in a mobile learning context" (Kenny et al., 2010, p. 2).

Power, Cristol, and Gimbert (2014) and Power (2015a) have attempted to address the absence of discourse about the promotion of teachers' perceptions of self-efficacy with mobile learning. One tool that has been developed is the Mobile Teacher's Sense of Efficacy Scale (mTSES). The mTSES instrument is based upon Tschannen-Moran and Woolfolk Hoy's (2001a, 2001b) Teacher's Sense of Efficacy Scale (TSES). The original TSES instrument consists of 24 questions, and uses a nine-point scale to measure teachers' levels of confidence in their ability to complete common, critical teaching tasks on the sub-domains of Student Engagement, Instructional Strategies, and Classroom Management. The mTSES consists of 38 questions, and uses the same nine-point scale and sub-domains. It yields scores both for the common instructional tasks of the original TSES scale and for the integration of mobile learning strategies (Power, 2015a, 2015b). By providing scores on both scales, the mTSES instrument allows teachers' perceptions of self-efficacy with teaching in general to be compared to their perceptions about the use of mobile learning strategies.

Power (2015a) used the mTSES instrument to measure the impact of professional development training on participants' perceptions of self-efficacy with the integration of mobile learning strategies. The professional development consisted of a MOOC called Creating Mobile Reusable Learning Objects Using Collaborative Situated Active Mobile (CSAM) Learning Strategies (Power et al., 2014; Power, 2015a). The three-week MOOC introduced the CSAM learning design framework (Power, 2013; Power et al., 2014), and explored the use of the framework to guide instructional design decisions about the integration of mobile reusable learning objects (RLOs) into participants' own teaching contexts. Participants built prototype mobile RLOs, and also used the CSAM framework as a post-assessment tool for their prototypes. The mTSES instrument was integrated as a learning activity at both the beginning and the end of the MOOC. Participants were provided with a tool to self-score their mTSES surveys, and were asked to reflect upon changes in their perceptions of self-efficacy. Power (2015a) analyzed participants' pre-course and post-course mTSES scores, and found an overall increase in their perceptions of self-efficacy with mobile learning in comparison to their original TSES sub-domain scores. While those gains diminished when the mTSES was re-administered as a follow-up three months after the completion of the course, Power (2015a) found that participants still had stronger perceptions of self-efficacy with mobile learning strategies than could be accounted for through maturation alone. Qualitative data were also collected to help gain a better understanding of how participation in the CSAM MOOC impacted perceptions of self-efficacy. Power (2015a) used open-response survey questions and follow-up interviews to ask about participants' perceptions of the CSAM MOOC, its impact upon their perceptions of self-efficacy, and what they perceived as necessary going forward to adopt mobile learning strategies. The mTSES results and qualitative data were used to identify potential improvements to the design of the professional development MOOC, as well as to make recommendations for further research and future professional development practice.

The CSAM MOOC studied by Power (2015a) had a total of 72 registered participants, who came from a relatively homogeneous North American background. The pre-course mTSES survey was completed by 36 study participants, and the post-course mTSES was completed by 22 participants. One of the recommendations for further research proposed by Power (2015a) was that the mTSES instrument be used to study mobile learning self-efficacy perceptions amongst a larger, more diverse sample of teachers and instructional developers. This paper presents findings from the use of the mTSES with participants in a free MOOC called Instructional Design for Mobile Learning (ID4ML) (Power et al., 2015).

ID4ML was conducted from May 4 – June 6, 2015, using the Canvas™ (Canvas, 2016; Instructure, 2016) open learning management system. The course consisted of five modules, as outlined in Table 1:

Table 1

Course Modules for Instructional Design for Mobile Learning (ID4ML)

Week Module
Week 0 Introduction to the Course
Week 1 Defining and Understanding Mobile Learning
Week 2 Instructional Design Principles for mLearning
Week 3 Hands on Mobile Learning
Week 4 Course Wrap Up

The primary focus of the ID4ML MOOC was on exploration of a variety of mobile applications and mobile learning tools, and discussion of the potential for integration of those resources into participants' teaching and learning practices. A specific focus on pedagogical design for mobile learning was limited to the Week 2: Instructional Design Principles for mLearning module. Content for the Week 2 module was drawn from the CSAM MOOC (Power et al., 2014; Power, 2015a). However, participants were not required to dedicate as much time to personal instructional design projects as in the original CSAM MOOC. Nor were they asked to design, produce, or evaluate a prototype mobile RLO using the CSAM framework.

A total of 2231 people were enrolled in ID4ML. Course participants came from all global regions. All course participants were invited to participate in the current research study through an information letter and a link to an online informed consent form posted in the Week 0 course orientation module. Research participation was strictly voluntary. Participants were provided with links to an online pre-course mTSES survey in the Week 0 module, and to an online post-course mTSES survey in the Course Wrap Up module.

Research Questions

This paper builds upon the findings from the use of the mTSES instrument by Power (2015a). The mTSES was administered to participants in the ID4ML MOOC with the aim of exploring its utility as a tool for planning and evaluating professional development about using mobile learning resources and strategies. The specific research questions explored were:

  1. Are measures of the construct validity and reliability of the mTSES tool consistent with previous measurements?
  2. What effect did participation in ID4ML have upon participants' perceptions of self-efficacy with the use of mobile learning strategies in teaching practice?

    a. Are there differences in the effects of participation in ID4ML upon participants' perceptions of self-efficacy with mobile learning strategies based upon demographic characteristics?

    b. How do changes in ID4ML participants' perceptions of self-efficacy with mobile learning strategies compare to those reported by Power (2015a)?

Methodology

Quantitative data were collected for this research using pre-course and post-course administrations of the mTSES instrument. Volunteers from the ID4ML course were invited to participate in the study, and to complete the two mTSES surveys. Participants used hyperlinks within the course to access the online mTSES surveys. The hyperlinks to the pre-course and post-course administrations of the mTSES were only available during designated times in the Week 0: Introduction to the Course and the Week 4: Course Wrap Up modules, respectively. Access to the surveys outside of these times was blocked so that all pre-course and post-course mTSES submissions measured perceptions of self-efficacy following uniform periods of exposure to the ID4ML training. Course participants who enrolled in ID4ML after the initial orientation week did not participate in the research study, and participants were unable to complete the post-course survey beyond the course completion date.

Changes in participants' perceptions of self-efficacy were determined using the procedures outlined by Power (2015a). Data from the pre-course and post-course administrations of the mTSES were analyzed using Microsoft™ Excel™. Mean scores were calculated on a nine-point scale for each of the 38 question items from the aggregate data from each mTSES administration. The overall mean scores were then used to calculate mean scores for each of the TSES and mTSES sub-domains. Mean TSES and mTSES scores were also calculated based upon the demographic categories of years of teaching experience, participant status, geographic region, and gender. Aggregate mean scores for the TSES and mTSES domains and sub-domains, as well as those for the different demographic categories, were compared to determine initial and post-course differences in perceptions of self-efficacy with teaching in general versus the use of mobile learning strategies. The aggregate and demographic category pre-course and post-course TSES and mTSES scores were also compared to determine the extent of changes in perceptions of self-efficacy along each scale.
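To make this scoring procedure concrete, the following sketch shows how aggregate sub-domain means and change scores of the kind reported below could be computed. It is a minimal illustration only: the authors' analysis was performed in Microsoft™ Excel™, and the item-to-sub-domain groupings in this sketch are hypothetical placeholders, not the actual mTSES item mappings.

```python
from statistics import mean

# Hypothetical item-to-sub-domain mappings (placeholders only; the real
# mTSES assigns its 38 nine-point items to the Student Engagement,
# Instructional Strategies, and Classroom Management sub-domains for
# both the TSES and mTSES scales).
SUBDOMAINS = {
    "TSES": {
        "Student Engagement": [1, 6, 9],
        "Instructional Strategies": [7, 10, 11],
        "Classroom Management": [2, 13, 15],
    },
    "mTSES": {
        "Student Engagement": [3, 5, 14],
        "Instructional Strategies": [4, 18, 20],
        "Classroom Management": [8, 19, 21],
    },
}

def subdomain_means(responses, scale):
    """Aggregate mean score (1-9) per sub-domain across all respondents.

    `responses` is a list of dicts mapping item number -> score.
    """
    return {
        name: mean(r[item] for r in responses for item in items if item in r)
        for name, items in SUBDOMAINS[scale].items()
    }

# Example: mean change per sub-domain between two administrations.
pre = [{1: 6, 6: 7, 9: 5, 7: 7, 10: 6, 11: 8, 2: 7, 13: 6, 15: 7}]
post = [{1: 7, 6: 7, 9: 6, 7: 8, 10: 7, 11: 8, 2: 7, 13: 7, 15: 7}]
pre_m, post_m = subdomain_means(pre, "TSES"), subdomain_means(post, "TSES")
change = {k: round(post_m[k] - pre_m[k], 2) for k in pre_m}
```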

Participant Demographics

Participants in the ID4ML study came from more diverse demographic backgrounds than those from Power (2015a). Participation in the ID4ML study was voluntary. Of the 2231 registered participants in the ID4ML MOOC, a total of 105 completed the pre-course mTSES survey, and 37 completed the post-course mTSES survey. Table 2 presents a comparison of the total number of participants and the demographic breakdowns of participants between Power (2015a) and the ID4ML study.

Table 2

Demographic Breakdowns of Participants in Power (2015a) and ID4ML

                                          Power (2015a)            ID4ML
                                       1st mTSES   2nd mTSES   1st mTSES   2nd mTSES
Gender
Female                                     -           -           62          20
Male                                       -           -           43          17
Region
Africa - Middle East                       -           -            7           4
Asia (Far East)                            -           -            6           2
Australia / New Zealand                    -           -            6           4
Europe                                     -           -           18           9
North America                             36          22           59          12
South / Central America                    -           -            9           6
Status
Student                                    5           1           13           7
  Undergraduate education student          -           -            3           3
  Graduate education student               -           -           10           4
Faculty                                   23          16           43          15
  K-12 teacher                             -           -           13           5
  Post-secondary instructor                -           -           30          10
Private sector training professional       -           -           17           4
Not currently employed                     -           -            4           0
Other                                      8           5           28          11
Total                                     36          22          105          37

Response rates were lower for the post-course mTSES administrations in both Power (2015a) and the ID4ML study. However, such attrition is not unusual in research studies involving repeated survey or questionnaire administrations (Cohen, Manion, & Morrison, 2011). Attrition in survey submission rates was also lower than typical MOOC participant attrition rates (Jordan, 2014; Parr, 2013).

Results

Construct Validity and Reliability of the mTSES

The construct validity and reliability of the mTSES instrument were assessed using the procedures outlined by Benton-Borghi (2006) and Power (2015a). Microsoft™ Excel™ was used to calculate total survey Cronbach's alpha scores for both the TSES and mTSES domains for the pre-course and post-course administrations. Cronbach's alpha scores were also calculated for the sub-domains of Student Engagement, Instructional Strategies, and Classroom Management, for both the TSES and mTSES domains. These scores were compared to the Cronbach's alpha scores obtained by Tschannen-Moran and Woolfolk Hoy (2001a, 2001b) for the original TSES instrument, by Benton-Borghi (2006) for the Teacher's Sense of Inclusion Efficacy Scale (I-TSES), and by Power (2015a) for the TSES and mTSES. The reliabilities of the various survey instruments are presented in Table 3.
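For readers wishing to replicate the reliability analysis, the standard Cronbach's alpha formula is sketched below. This reproduces the textbook computation rather than the authors' Excel workbook; `scores` is assumed to be a respondents-by-items matrix of nine-point ratings for a single scale or sub-domain.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents x items matrix of ratings.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(scores[0])  # number of question items
    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [sample_var([row[j] for row in scores]) for j in range(k)]
    total_var = sample_var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```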

Table 3

TSES, I-TSES and mTSES Reliabilities (Cronbach's alpha)

Scale                                          Cronbach's alpha (α)
                                               Engagement   Instruction   Classroom Management   Total
TSES (Tschannen-Moran & Woolfolk Hoy, 2001a)      .85           .89              .91              .93
I-TSES (Benton-Borghi, 2006)                      .86           .89              .88              .93
First TSES (Power, 2015a)                         .86           .87              .78              .93
First TSES (ID4ML)                                .89           .89              .91              .96
Second TSES (Power, 2015a)                        .91           .87              .93              .95
Second TSES (ID4ML)                               .89           .92              .90              .96
First mTSES (Power, 2015a)                        .88           .84              .77              .92
First mTSES (ID4ML)                               .90           .90              .90              .96
Second mTSES (Power, 2015a)                       .90           .89              .91              .96
Second mTSES (ID4ML)                              .90           .89              .89              .96

The Cronbach's alpha reliability scores were generally consistent for the total scales, as well as for the three sub-domains, across all instrument administrations. Power (2015a) noted that the comparability of reliability scores for the total scales and the sub-domains "supports the conclusion of comparable construct validity between the TSES and the modified mTSES" (p. 135). This conclusion is further supported by the consistency of the reliability scores obtained from the ID4ML surveys. Together, these similarities mean that researchers can place confidence both in comparisons of total scale and sub-domain scores between the original TSES (self-efficacy with common teaching tasks) and the mTSES (self-efficacy with the use of mobile learning strategies), and in the use of the mTSES as a tool for measuring changes in teachers' perceptions of self-efficacy with mobile learning.

Domain Score Analysis

Participants' mean scores on the sub-domains of Student Engagement, Instructional Strategies, and Classroom Management were calculated for both the TSES and mTSES scales for the pre-course and post-course administrations of the mTSES instrument. Mean scores for the pre-course mTSES were subtracted from those for the second survey administration to determine the mean change in scores for each sub-domain from the beginning of the course to the end of the course. Table 4 reports the mean sub-domain scores for each scale as obtained by Power (2015a), as well as for the participants from ID4ML.

Table 4

Changes in TSES and mTSES Subdomain Scores Between 1st and 2nd Administrations

Scale / Sub-domain                                     M (1st admin)   M (2nd admin)   MChange
TSES Scoring (Power, 2015a)
Efficacy in Student Engagement                             6.04            6.23           .19
Efficacy in Instructional Strategies                       6.94            7.25           .31
Efficacy in Classroom Management                           6.86            6.87           .01
mTSES Scoring (Power, 2015a)
Efficacy in Student Engagement with mLearning              5.90            6.48           .57
Efficacy in Instructional Strategies with mLearning        6.59            7.27           .68
Efficacy in Classroom Management with mLearning            6.78            6.89           .11
TSES Scoring (ID4ML)
Efficacy in Student Engagement                             6.40            6.91           .51
Efficacy in Instructional Strategies                       6.87            7.50           .64
Efficacy in Classroom Management                           6.60            7.09           .49
mTSES Scoring (ID4ML)
Efficacy in Student Engagement with mLearning              6.44            7.07           .64
Efficacy in Instructional Strategies with mLearning        6.80            7.43           .62
Efficacy in Classroom Management with mLearning            6.62            7.07           .45

The mean scores obtained for each sub-domain on the first mTSES administration for participants in ID4ML were consistent with those reported by Power (2015a). Most sub-domain scores for the ID4ML group varied between .1 and .2 points on the nine-point scale from those reported by Power (2015a), with only two sub-domain scores showing a greater difference: the Efficacy in Student Engagement score on the TSES scale was .36 points higher on the first administration for the ID4ML group, and the Student Engagement score on the mTSES scale was .54 points higher. A similar trend was observed for the mean sub-domain scores on the second mTSES administration, where scores for the ID4ML group and Power (2015a) typically varied between .05 and .18 points on the nine-point scale. Again, the greatest differences in scores between the ID4ML and Power (2015a) groups were observed for the Student Engagement sub-domain: the mean ID4ML score on the TSES scale was .68 points higher than that of the Power (2015a) group, and the mean score on the mTSES scale was .59 points higher. However, the mean changes in scores (MChange) were greater for the ID4ML participants than those reported by Power (2015a) for all three sub-domains on the TSES scale. The mean changes in TSES sub-domain scores for participants in Power (2015a) ranged between .01 and .31 points, compared to mean changes ranging between .49 and .64 points amongst the ID4ML participants. There was less variance in the changes in the mTSES sub-domain scores between the two groups: MChange on the mTSES scale for the Power (2015a) participants ranged from .11 to .68 points, while the ID4ML participants recorded MChange scores ranging from .45 to .64 points.

Net Changes Accounting for Maturation

Changes in participants' mean scores on the mTSES scale sub-domains appear generally consistent between the ID4ML participants and those reported by Power (2015a). However, participants in Power (2015a) showed lower mean changes in their scores on the TSES scale sub-domains. The procedures outlined by Power (2015a) were used to determine the actual extent to which ID4ML participants' perceptions of self-efficacy with mobile learning strategies (the mTSES scale) had changed as a result of participation in the professional development. The mean changes in each sub-domain score for the TSES scale (TSES2 – TSES1) were subtracted from the mean sub-domain score changes for the mTSES scale (mTSES2 – mTSES1) to yield the net change accounting for the effects of maturation upon participants. Table 5 reports the net change (intervention effect) for participants from ID4ML, as well as those reported by Power (2015a).
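The net change calculation itself is simple arithmetic. The sketch below illustrates it with the rounded mean changes from Table 4 for the Power (2015a) Student Engagement sub-domain, reproducing the .38 net change reported in Table 5.

```python
def net_change(tses_change, mtses_change):
    """Intervention effect: (mTSES2 - mTSES1) - (TSES2 - TSES1).

    Subtracting the change on the TSES scale removes gains attributable
    to maturation, leaving the change specific to mobile learning.
    """
    return mtses_change - tses_change

# Power (2015a), Student Engagement: mTSES change .57, TSES change .19
print(round(net_change(0.19, 0.57), 2))  # 0.38
```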

Table 5

Net Change (Intervention Effect)

Domain                                    Net Change
                                          (mTSES2 – mTSES1) – (TSES2 – TSES1)
Power (2015a)
Efficacy in Student Engagement                .38
Efficacy in Instructional Strategies          .37
Efficacy in Classroom Management              .11
ID4ML
Efficacy in Student Engagement                .12
Efficacy in Instructional Strategies         -.01
Efficacy in Classroom Management             -.04

Power (2015a) reported net increases in participants' perceptions of self-efficacy with mobile learning strategies for all three sub-domains. Participants' scores on the mTSES scale showed net increases for the Student Engagement (.38 points) and Instructional Strategies (.37 points) sub-domains, while the Classroom Management sub-domain showed a smaller increase of .11 points on the nine-point scale. In contrast, participants from ID4ML showed a net increase in their perceptions of self-efficacy only for the Student Engagement sub-domain (.12 points). Net decreases in perceptions of self-efficacy were observed for both the Instructional Strategies and Classroom Management sub-domains. The differences in the net changes per sub-domain between Power (2015a) and ID4ML are illustrated in Figure 1.


Figure 1. Differences in net sub-domain score changes (9-point scale) for Power (2015a) and ID4ML.

Demographic Analyses

Changes in perceptions of self-efficacy were further analyzed across four demographic categories: participants' years of teaching experience, status (with respect to the education profession), geographic region, and gender. These changes were compared to similar demographic analyses reported by Power (2015a).

Years of Teaching Experience

Participants from both research studies with less than five years of teaching experience were the least likely to show increases in their perceptions of self-efficacy. Mean scores for participants with less than five years of teaching experience in both ID4ML and Power (2015a) decreased on two of the three mTSES scale sub-domains. Participants with less than five years of teaching experience from Power (2015a) also showed decreases in their mean scores on all three TSES sub-domains, while the mean TSES scores for the corresponding participants from ID4ML showed almost no increase on two sub-domains and a small decrease on the third. Participants with 5-10 years of teaching experience, and those with 10-15 years, showed the most frequent and largest increases in their perceptions of self-efficacy with the use of mobile learning strategies. Amongst participants from both groups, decreases in mean scores for perceptions of self-efficacy with mobile learning strategies were most frequent for the Classroom Management sub-domain. However, participants with 10-15 years of teaching experience were the only ones from ID4ML to show an increase in their mean mTSES sub-domain score for Instructional Strategies. Table 6 reports the changes in TSES and mTSES scores for participants from both ID4ML and Power (2015a) according to years of teaching experience.

Table 6

Changes in TSES and mTSES Scores by Years of Teaching Experience

                      TSES Domains                                      mTSES Domains
Teaching Experience   Student Eng.  Instr. Strategies  Classroom Mgt   Student Eng.  Instr. Strategies  Classroom Mgt
                                                                       (mobile)      (mobile)           (mobile)
Power (2015a)
0-5 years                -.01            -.06              -.51            -.15            .06              -.31
5-10 years                .83             .91               .41            1.25           1.49               .48
10-15 years              -.18            -.09              -.02             .39            .39               .14
>15 years                -.03             .25              -.13             .49            .60              -.09
ID4ML
0-5 years                 .05             .00              -.01             .17           -.05              -.10
5-10 years               -.31            -.14               .02             .02           -.08               .14
10-15 years               .08            -.03               .02             .20            .14              -.15
>15 years                 .13            -.14               .06             .23           -.28               .09

The procedures outlined by Power (2015a) were used to determine the net changes in participants' perceptions of self-efficacy with the use of mobile learning strategies, accounting for the effects of maturation. Table 7 reports the net changes (intervention effect) for participants from both ID4ML and Power (2015a) based upon years of teaching experience.

Table 7

Net Change (Intervention Effect) According to Years of Teaching Experience

                      Net Change: (mTSES2 – mTSES1) – (TSES2 – TSES1)
Teaching Experience   Student Eng. (mobile)   Instr. Strategies (mobile)   Classroom Mgt (mobile)
Power (2015a)
0-5 years                    -.14                      .12                         .20
5-10 years                    .42                      .58                         .07
10-15 years                   .57                      .48                         .16
>15 years                     .52                      .35                         .04
ID4ML
0-5 years                     .11                     -.05                        -.09
5-10 years                    .33                      .05                         .13
10-15 years                   .12                      .17                        -.17
>15 years                     .10                     -.14                         .03

Amongst participants from Power (2015a) with less than five years of teaching experience, perceptions of self-efficacy with mobile learning strategies on the Student Engagement sub-domain showed a net decrease of .14 points on the nine-point scale. In contrast, participants with less than five years of teaching experience from ID4ML showed a net increase of .11 points in their mean scores for the Student Engagement sub-domain. However, the ID4ML participants with less than five years of teaching experience showed net score decreases for both remaining sub-domains. Participants from all other teaching experience groups from Power (2015a) showed net score increases across all three sub-domains, with those participants with 10-15 years of experience showing the greatest overall increases. Only those participants from ID4ML with 5-10 years of teaching experience showed net score increases for all three mTSES scale sub-domains.

Participant Status

Participants from Power (2015a) who identified themselves as graduate-level education students showed increases in mean scores for all three sub-domains on the mTSES scale. Student participants from Power (2015a) showed an increase in their mean score on the Student Engagement sub-domain of 1.19 points on the nine-point scale. Those participants from Power (2015a) who identified themselves as teachers showed a small overall decrease in their mean score on the Classroom Management sub-domain for the mTSES scale. In contrast, ID4ML participants who identified themselves as either undergraduate or graduate-level education students showed more frequent decreases in their mean sub-domain scores on both the TSES and mTSES scales. ID4ML participants who identified themselves as K-12 teachers, or as private-sector training professionals, also showed frequent decreases in their perceptions of self-efficacy. Those who identified themselves as post-secondary instructors from the ID4ML group showed increases in their mean scores for two of the three TSES scale sub-domains, and for all three mTSES scale sub-domains. Table 8 presents the mean changes by participant status in TSES and mTSES sub-domain scores for ID4ML participants, as well as those from Power (2015a).

Table 8

Changes in TSES and mTSES Scores by Participant Status

                                        TSES Domains                                      mTSES Domains
Status                                  Student Eng.  Instr. Strategies  Classroom Mgt   Student Eng.  Instr. Strategies  Classroom Mgt
                                                                                         (mobile)      (mobile)           (mobile)
Power (2015a)
Teacher                                    -.03            .26              -.13             .28            .56              -.01
Student                                     .68            .34               .25            1.19            .85               .35
ID4ML
Undergraduate education student            -.40            .58              -.20            -.07            .04              -.17
Graduate education student                 -.07           -.04              -.05             .06           -.31              -.10
K-12 teacher                                .23           -.06              -.02             .31           -.18               .01
Post-secondary instructor                   .02           -.05               .07             .24            .04               .01
Private sector training professional       -.07           -.15              -.01            -.22           -.41               .37
Other                                       .11           -.09               .04             .28            .05              -.13

Net changes accounting for the effects of maturation based upon participant status are presented in Table 9.

Table 9

Net Change (Intervention Effect) According to Participant Status

                                        Net Change: (mTSES2 – mTSES1) – (TSES2 – TSES1)
Status                                  Student Eng. (mobile)   Instr. Strategies (mobile)   Classroom Mgt (mobile)
Power (2015a)
Teacher                                        .31                       .30                         .12
Student                                        .51                       .51                         .10
ID4ML
Undergraduate education student                .33                      -.54                         .03
Graduate education student                     .12                      -.28                        -.05
K-12 teacher                                   .08                      -.12                         .03
Post-secondary instructor                      .22                       .08                        -.07
Private sector training professional          -.15                      -.25                         .38
Other                                          .17                       .14                        -.17

Net score increases for perceptions of self-efficacy with mobile learning strategies were reported for all mTSES sub-domains for participants from Power (2015a). Five of the six participant status groups from ID4ML showed net score increases for the Student Engagement sub-domain. Only those participants who identified themselves as private sector training professionals showed a net decrease (-.15 points on the nine-point scale) for the Student Engagement sub-domain. In contrast, private sector training professionals from ID4ML showed the greatest net score increase (.38 points) for the Classroom Management sub-domain. ID4ML participants who identified themselves as undergraduate education students and K-12 teachers both showed net score increases of .03 points for the Classroom Management sub-domain. All other categories of ID4ML participants showed net score decreases for Classroom Management. For the Instructional Strategies sub-domain, only those ID4ML participants who identified themselves as post-secondary instructors, or as belonging to the "Other" category, showed net score increases. Undergraduate education students from the ID4ML group showed a net score decrease of .54 points for the Instructional Strategies sub-domain.

Region

Participants from Power (2015a) were affiliated with four educational institutions. Three institutions were based in either Canada or the United States. The fourth institution was based in Qatar; however, the instructional faculty at the Qatari institution consisted exclusively of Canadian educators employed on one to three year teaching contracts. Thus, all participants from Power (2015a) are categorized as belonging to the North America category in Table 10. Participants from ID4ML were asked to self-identify their geographic region when completing the pre-course and post-course mTSES administrations. Table 10 reports changes in sub-domain scores for both the TSES and mTSES scales for participants from both ID4ML and Power (2015a).

Table 10

Changes in TSES and mTSES Scores by Region

                            TSES Domains                                      mTSES Domains
Region                      Student Eng.  Instr. Strategies  Classroom Mgt   Student Eng.  Instr. Strategies  Classroom Mgt
                                                                             (mobile)      (mobile)           (mobile)
Power (2015a)
North America                   .19            .31               .01             .57            .68               .11
ID4ML
Africa - Middle East            .19            .21              -.04             .10            .03              -.08
Asia (Far East)                -.25           -.17              -.08            -.04            .19              -.02
Australia / New Zealand         .09           -.12               .00             .11           -.19              -.05
Europe                          .15           -.13               .03             .31           -.15              -.04
North America                  -.04           -.08               .05             .15           -.08               .02
South / Central America         .43            .08              -.08             .14           -.02              -.02

North American participants from both ID4ML and Power (2015a) showed the strongest increases in their reported perceptions of self-efficacy with mobile learning strategies. Mean scores for the Student Engagement sub-domain increased by .57 points on the nine-point scale for North American participants from Power (2015a), and by .15 points for participants from the same region in ID4ML. For the North American groups, mean scores for the Instructional Strategies sub-domain on the mTSES scale increased by .68 points for participants from Power (2015a), but decreased by .08 points for participants from ID4ML. North American participants from Power (2015a) showed an increase of .11 points on the Classroom Management sub-domain on the mTSES scale, compared to an increase of .02 points for North American participants from ID4ML. However, unlike their counterparts from Power (2015a), the North American participants from ID4ML showed marginal decreases (-.04 and -.08 points) for two of the three sub-domain scores on the TSES scale.

ID4ML participants from Africa and the Middle East showed the most frequent increases in their mean scores across the TSES and mTSES scales. On both scales, African and Middle Eastern participants showed increased mean scores for the Student Engagement and Instructional Strategies sub-domains, and marginal decreases (-.04 and -.08 points) for their mean Classroom Management sub-domain scores. Participants from all other regions showed overall decreases in their mean scores on at least three of the six combined TSES and mTSES sub-domains.

The trend of decreases in ID4ML participants' mean sub-domain scores also holds after the net changes accounting for maturation during the course are calculated. Table 11 presents the net changes in participants' mTSES sub-domain scores from both ID4ML and Power (2015a).

Table 11

Net Change (Intervention Effect) According to Region

                            Net Change: (mTSES2 – mTSES1) – (TSES2 – TSES1)
Region                      Student Eng. (mobile)   Instr. Strategies (mobile)   Classroom Mgt (mobile)
Power (2015a)
North America                      .38                       .37                         .10
ID4ML
Africa - Middle East              -.08                      -.18                        -.04
Asia (Far East)                    .21                       .35                         .06
Australia / New Zealand            .02                      -.06                        -.05
Europe                             .15                      -.03                        -.07
North America                      .19                       .00                        -.03
South / Central America           -.29                      -.10                         .06

North American participants from Power (2015a) showed net increases for all three mTSES sub-domains after accounting for the effects of maturation. ID4ML participants from four out of six regions showed net score decreases for the Instructional Strategies and Classroom Management sub-domains. ID4ML participants from the Africa – Middle East and South / Central America regions showed net score decreases on the Student Engagement sub-domain. Net sub-domain score increases that were observed for participants from ID4ML were also smaller than those observed amongst the participants from Power (2015a).

Gender

Power (2015a) did not report differences in TSES and mTSES sub-domain scores by gender. Table 12 reports the mean pre-course and post-course TSES and mTSES sub-domain scores for female and male participants from ID4ML, as well as the changes in participants' mean scores for each sub-domain.

Table 12

Changes in TSES and mTSES Scores in ID4ML by Gender

                      TSES Domains                                      mTSES Domains
Gender                Student Eng.  Instr. Strategies  Classroom Mgt   Student Eng.  Instr. Strategies  Classroom Mgt
                                                                       (mobile)      (mobile)           (mobile)
Female
1st Administration        6.38           6.91              6.61            6.48           6.80              6.68
2nd Administration        7.01           7.65              7.14            7.06           7.39              7.17
Change                     .63            .74               .52             .58            .59               .49
Male
1st Administration        6.42           6.81              6.58            6.39           6.81              6.53
2nd Administration        6.79           7.33              7.04            7.10           7.47              6.96
Change                     .37            .52               .46             .71            .66               .42

Mean scores for both the TSES and mTSES scale sub-domains were fairly homogeneous between female and male participants, with differences in sub-domain scores ranging between .01 and .15 points on the nine-point scale. However, the genders differed in the scales on which each group showed greater increases. Female participants showed greater increases in their mean sub-domain scores on the TSES scale from the beginning of ID4ML to the end of the course. In contrast, male participants showed greater increases in their mean scores for the sub-domains on the mTSES scale. The procedures outlined by Power (2015a) were used to calculate the net changes in mean sub-domain scores for each gender accounting for the effects of maturation. Table 13 reports the net changes (intervention effect) in mTSES sub-domain scores by gender.

Table 13

Net Change (Intervention Effect) According to Gender

Domain                                    Net Change
                                          (mTSES2 – mTSES1) – (TSES2 – TSES1)
Female
Efficacy in Student Engagement               -.05
Efficacy in Instructional Strategies         -.15
Efficacy in Classroom Management             -.03
Male
Efficacy in Student Engagement                .34
Efficacy in Instructional Strategies          .14
Efficacy in Classroom Management             -.04

Calculations of the net changes (intervention effects) show that male participants from ID4ML displayed increases in their mean scores of .34 points on the nine point scale for the Student Engagement sub-domain, and .14 points for the Instructional Strategies sub-domain. Mean scores for the Classroom Management sub-domain decreased by similar margins for both female (-.03 points) and male (-.04 points) participants. Female participants also displayed marginal overall decreases in their mean scores for the Student Engagement and Instructional Strategies sub-domains.

Discussion

Verification of the construct validity and reliability of the mTSES instrument was the first objective set out by this study's research questions. The total scale and sub-domain reliability scores obtained from ID4ML participants' mTSES survey submissions were consistent across the TSES and mTSES scales for both the pre-course and post-course administrations. The reliability scores obtained were also consistent with those reported by Tschannen-Moran and Woolfolk Hoy (2001a), Benton-Borghi (2006), and Power (2015a). The consistencies of the reported reliability scores support confidence in the use of the mTSES instrument as a tool to measure perceptions of self-efficacy with mobile learning strategies, and in comparisons between participants' TSES and mTSES sub-domain scores. The mTSES instrument is useful for comparing teachers' perceptions of confidence with common teaching tasks to their perceptions of self-efficacy with mobile learning strategies.

The second research question relates to what the mTSES survey administrations reveal about the effects of the ID4ML MOOC upon participants' perceptions of self-efficacy. Many participants supplied enthusiastic endorsements of the perceived value of the course through social media and the MOOC's LMS platform. For example, one participant commented:

Thank you for ID4ML! I'm not a teacher but as a web developer / lifelong learner I found the class exceptionally well done. I've been taking MOOCs… for several years now and this course ranks near the top for an engaging mix of media types and interactive projects (Canvas user, May 30, 2015).

This study sought to determine whether the enthusiasm expressed by some ID4ML participants corresponded with real changes in confidence in their abilities to adopt mobile learning in teaching practice. Participants' mean pre-course and post-course mTSES scores, and changes in their mean mTSES scores, were compared across the demographic categories of years of teaching experience, participant status, geographic region, and gender. The ID4ML mTSES results were also compared to those reported by Power (2015a). The analyses provide insights into the impact of the ID4ML MOOC. They also provide insights into the potential of the mTSES instrument as a needs assessment tool, and as a post-training assessment tool, when planning mobile learning themed professional development for specific target audiences.

The analysis of the net mTSES scale score changes revealed that participants in ID4ML did not show the same improvements in perceptions of self-efficacy with mobile learning strategies as participants from Power (2015a). However, analysis of participant demographics from each group points to possible reasons for these differences. Participants from Power (2015a) were almost exclusively practicing K-12 or post-secondary teachers, or graduate-level education students. In contrast, just over half of the ID4ML respondents to the pre-course mTSES were K-12 or post-secondary teachers or teacher-training students. The remaining ID4ML respondents consisted of private-sector training professionals, participants who were not currently employed, and participants who identified themselves as "other." Similar ratios were seen amongst ID4ML respondents for the post-course mTSES. It is possible that participants who had previous training and experience with educational theory and practice were better prepared to benefit from the professional development experience. This possibility is supported by analysis of net mTSES sub-domain score changes based on participants' years of teaching experience. For both the ID4ML and Power (2015a) groups, participants with more years of teaching experience tended to show the greatest score increases for all three sub-domains.

Another potential contributor to the differences in observed net mTSES score changes between the ID4ML and Power (2015a) groups is the structure and content of the training itself. Participants in the Power (2015a) MOOC were exposed to three weeks of training focused exclusively on making, implementing, and evaluating instructional design decisions for mobile learning. Participants in the four-week ID4ML MOOC were exposed to a one-week module that introduced the same instructional design framework (the CSAM learning design framework) as presented in Power (2015a). However, they were not required to use the framework either to prepare a detailed instructional design plan, or to evaluate a mobile learning instructional design plan once a prototype had been implemented. The ID4ML MOOC placed a greater degree of emphasis on the range of available applications for mobile learning, and on hands-on experience with the mechanics of using selected mobile applications. Perceptions of self-efficacy with mobile learning strategies amongst participants from Power (2015a) may have increased to a greater degree because their training focused more on pedagogical decision-making than did that of their counterparts in ID4ML.

The impact of the differences in focus of the ID4ML and Power (2015a) MOOCs is also evident in analyses of the net score changes for the three individual mTSES sub-domains. Whereas participants from Power (2015a) showed nearly equal net score increases for the Student Engagement and Instructional Strategies sub-domains (.38 and .37 points), participants from the ID4ML group showed a net score increase only for the Student Engagement sub-domain. Net score changes for the Instructional Strategies and Classroom Management sub-domains showed decreased perceptions of self-efficacy amongst participants from ID4ML. These changes indicate that the exposure to various mobile learning applications in ID4ML increased participants' confidence in the ability of mobile learning tools and strategies to engage their students. However, the training did not leave participants with more confidence in their abilities to design mobile learning instruction, or to manage a classroom where mobile learning strategies were being used. Confidence in classroom management abilities for mobile learning was also lower than confidence on the Student Engagement and Instructional Strategies sub-domains for participants from Power (2015a). This lower net sub-domain score points to a need for more emphasis specifically on classroom management skills for mobile learning in future professional development for teachers.

Geographic region does not appear to play as significant a role as other demographic factors in participants' perceptions of self-efficacy with mobile learning strategies for either the pre-course or post-course mTSES administrations, or in observed levels of net sub-domain score changes. North American participants from Power (2015a) showed greater net score changes than those observed for any regional group from ID4ML. Amongst ID4ML participants, mTSES respondents from the North American and Asia (Far East) regions showed net score increases on the greatest number of mTSES sub-domains. Net score decreases were observed on either two or three of the three mTSES sub-domains for ID4ML participants from all other regions. However, the majority of the ID4ML group's net sub-domain score changes (14 of 18 scores) varied within a range of only .31 points on the nine-point scale. The two most extreme net sub-domain score changes differed by .66 points.

Differences in mTSES score changes were not reported by gender for the Power (2015a) MOOC. However, female and male participants from ID4ML did perform differently on the TSES and mTSES scales. Female ID4ML participants showed greater increases over the duration of the training in their perceptions of self-efficacy on the TSES scale (common teaching related tasks) than did their male counterparts. In contrast, male ID4ML participants displayed greater increases in their sub-domain scores on the mTSES scale (perceptions of self-efficacy with the use of mobile learning strategies). Compared to their male counterparts, changes in female ID4ML participants' sub-domain scores were more consistent across both the TSES and mTSES scales. Female participants' sub-domain score changes across both the TSES and mTSES scales varied within a range of .25 points, whereas the TSES and mTSES sub-domain score changes for male participants varied within a range of .34 points. When analyzed for the effects of maturation, only the male ID4ML participants showed any net sub-domain score increases. Net score increases were observed for two of the three mTSES sub-domains for male participants, compared to net score decreases on all three sub-domains for female participants.

Recommendations for Research and Practice

The ID4ML MOOC (Power et al., 2015) and the CSAM MOOC (Power, 2015a) had different instructional focuses and different demographic compositions. Further research is recommended to compare trends in mTSES score changes between more similar professional development courses and demographic groups. It is also recommended that future research into the effects of mobile learning themed professional development include a mixed-methods approach, as outlined by Power (2015a). Quantitative data analyses from mTSES survey administrations should be augmented with qualitative analyses of open-response questionnaires and participant interviews in order to gain a broader understanding of how particular professional development programs affect perceptions of self-efficacy. Power (2015a) attempted to minimize the effects of cognitive load associated with device and application mastery, in order to focus on the effects of scaffolding pedagogical decision-making on teachers' perceptions of self-efficacy. Additional research would be beneficial to explore the degree to which lack of device and application mastery affects self-efficacy and subsequent adoption rates of mobile learning strategies. Additionally, follow-up surveys and interviews with participants in Power (2015a) inquired about their interest in and intentions to adopt mobile learning strategies. Longitudinal research exploring actual adoption rates would be beneficial with target groups such as the participants from both Power (2015a) and ID4ML. An examination of differences in adoption rates compared to changes in specific mTSES sub-domain scores would help to identify which sub-domains of self-efficacy, if any, have the greatest impact on integration into teaching and learning practice.

How Can Professional Development Planners Use the mTSES to Improve Targeted PD?

Teachers are more likely to integrate new technologies and new instructional strategies if they feel confident in their abilities to do so (Tschannen-Moran & Woolfolk Hoy, 2001a). If the aim of a professional development program is to increase the adoption of mobile technologies and mobile learning strategies, then professional development planners must aim to increase participants' perceptions of self-efficacy. The mTSES instrument can be used by professional development planners to determine the extent to which a training intervention has impacted perceptions of self-efficacy with mobile learning. This information can point to potential training program revisions. It can also be used to help make decisions about follow-up support and additional training for participants. For instance, participants from ID4ML most consistently showed increases in their confidence in the use of mobile learning strategies to engage their students, but demonstrated less confidence in their own abilities with mobile learning instructional design and with classroom management. For participants demonstrating such trends, professional development planners could integrate more content and learning activities targeting these two sub-domains into ID4ML. Planners could also develop further training interventions targeting the Instructional Strategies and Classroom Management sub-domains.

Professional development planners need not wait until a training program has been developed and implemented to make use of the mTSES instrument. The mTSES could be administered to target participants during a needs assessment phase. The results from target participants' sub-domain scores could then be used to make decisions about their preparedness for a training intervention, and about areas of focus for the intervention. The mTSES tool could also be re-administered at the start and end of the developed training program, and again as a longer-term post-training assessment of the impacts on perceptions of self-efficacy.

Conclusions

Education stakeholders are calling more frequently for the integration of mobile technologies and mobile learning strategies into instructional design in formal education systems. However, teachers' perceptions of confidence in their abilities to use mobile learning strategies have been cited as a barrier to larger scale adoption of mobile learning (Ally et al., 2013). At the same time, there has been a lack of research into self-efficacy with respect to mobile learning (Kenny et al., 2010). The Mobile Teacher's Sense of Efficacy Scale (mTSES) instrument was developed in an attempt to address the lack of mobile learning self-efficacy research (Power et al., 2014; Power, 2015a, 2015b). The mTSES instrument has been shown to have consistent reliability and construct validity compared to previous versions of the original TSES scale (Benton-Borghi, 2006; Tschannen-Moran & Woolfolk Hoy, 2001a, 2001b). The use of the mTSES revealed changes in teachers' perceptions of self-efficacy with the use of mobile learning strategies amongst participants in the CSAM MOOC (Power, 2015a). However, the mTSES tool revealed that participants from the ID4ML MOOC showed increases only in their levels of confidence with their abilities to use mobile learning to improve student engagement. Analyses of changes in the mTSES sub-domain scores for ID4ML participants point to a need for more emphasis in future professional development training on instructional design decisions and strategies. The mTSES changes reported for both ID4ML and Power (2015a) also revealed that participants from both courses remain least confident in their classroom management skills for mobile learning. Use of the mTSES instrument pointed to potential improvements that professional development designers could make to the ID4ML and Power (2015a) MOOCs.

Future teacher professional development endeavors related to mobile learning must focus on increasing perceptions of self-efficacy. It is recommended that professional development planners utilize the mTSES instrument as a needs assessment tool to determine the preparedness of target participants for proposed training. The mTSES survey can also be used to gauge the success of training interventions at increasing teachers' confidence in their abilities to use mobile learning strategies. Effectively assessing teachers' training needs, and impacts upon their perceptions of self-efficacy, is a critical precursor to increasing the integration of mobile learning into teaching practice.

Notes

  1. Power, R. (2015). A framework for promoting teacher self-efficacy with mobile reusable learning objects (Doctoral dissertation, Athabasca University), 220-224. Retrieved from http://hdl.handle.net/10791/63

References

Ally, M., Farias, G., Gitsaki, C., Jones, V., MacLeod, C., Power, R., & Stein, A. (2013, October). Tablet deployment in higher education: Lessons learned and best practices. Panel discussion at the 12th World Conference on Mobile and Contextual Learning, Doha, Qatar.

Ally, M., & Prieto-Blázquez, J. (2014). What is the future of mobile learning in education? Mobile Learning Applications in Higher Education [Special Section]. Revista de Universidad y Sociedad del Conocimiento (RUSC), 11(1), 142-151. doi: 10.7238/rusc.v11i1.2033

Benton-Borghi, B. (2006). Teaching every student in the 21st century: Teacher efficacy and technology (Doctoral dissertation). Ohio State University. Retrieved from http://www.pucrs.br/famat/viali/tic_literatura/teses/BentonBorghi%20Beatrice%20Hope.pdf

Canvas (2016). About us. Retrieved from http://www.instructure.com/about-us

Cohen, L., Manion, L., & Morrison, K. (2011). Research methods in education (7th ed.). New York: Routledge.

Groupe Spécial Mobile Association (2012). Mobile education landscape report. Retrieved from http://www.gsma.com/connectedliving/wp-content/uploads/2012/03/landscape110811interactive.pdf

Instructure (2016). Canvas learning management system. Retrieved from https://canvas.instructure.com/login

Jordan, K. (2014). Initial trends in enrollment and completion in Massive Open Online Courses. The International Review of Research in Open and Distance Learning, 15(1), 134-160. Retrieved from http://www.irrodl.org/index.php/irrodl/article/viewFile/1651/2813

Kenny, R.F., Park, C.L., Van Neste-Kenny, J.M.C., & Burton, P.A. (2010). Mobile self-efficacy in Canadian nursing education programs. In M. Montebello, V. Camilleri and A. Dingli (Eds.), Proceedings of mLearn 2010, the 9th World Conference on Mobile Learning, Valletta, Malta.

Parr, C. (2013, May 9). MOOC completion rates 'below 7%.' Times Higher Education. Retrieved from https://www.timeshighereducation.co.uk/news/mooc-completion-rates-below-7/2003710.article

Power, R. (2013). Collaborative situated active mobile (CSAM) learning strategies: A new perspective on effective mobile learning. Learning and Teaching in Higher Education: Gulf Perspectives, 10(2). Retrieved from http://lthe.zu.ac.ae/index.php/lthehome/article/view/137

Power, R. (2015a). A framework for promoting teacher self-efficacy with mobile reusable learning objects (Doctoral dissertation). Athabasca University. Retrieved from http://hdl.handle.net/10791/63

Power, R. (2015b). The Mobile Teacher's Sense of Efficacy Scale (mTSES). Retrieved from http://robpower.weebly.com/mtses.html

Power, R., Bartoletti, R., & Kilgore, W. (2015). Instructional design for mobile learning (ID4ML) [Massive Open Online Course]. Retrieved from https://www.canvas.net/courses/instructional-design-mobile-learning

Power, R., Cristol, D., & Gimbert, B. (2014). Exploring tools to promote teacher efficacy with mLearning. In M. Kalz, Y. Bayyurt, & M. Specht (Eds.), Mobile as a mainstream – Towards future challenges in mobile learning (Communications in Computer and Information Science, Vol. 479, pp. 61-68). Retrieved from http://link.springer.com/chapter/10.1007/978-3-319-13416-1_7

Traxler, J. (2012). Mobile learning: It's here but what is it? Interactions, 9(1). Retrieved from http://www2.warwick.ac.uk/services/ldc/resource/interactions/issues/issue25/traxler/

Tschannen-Moran, M., & Woolfolk Hoy, A. (2001a). Teacher efficacy: Capturing an elusive construct. Teaching and Teacher Education, 17(7), 783-805.

Tschannen-Moran, M., & Woolfolk Hoy, A. (2001b). Teacher's sense of efficacy scale. Retrieved from http://anitawoolfolkhoy.com/instruments/#Sense

Appendix A
Combined Teacher's Sense of Efficacy Scale (TSES) and Mobile Teacher's Sense of Efficacy Scale (mTSES) Survey1

Introduction

This questionnaire is designed to help gain a better understanding of your level of comfort with the kinds of tasks that you would need to do when integrating technology-based resources (such as mobile devices and mobile reusable learning objects) in school activities. Indicate your opinion about each of the statements below.

Teacher Beliefs: How much can you do?
Rate each item on a 9-point scale anchored at (1) Nothing, (3) Very Little, (5) Some Influence, (7) Quite a Bit, and (9) A Great Deal.
1 How much can you do to get through to the most difficult students? (1) (2) (3) (4) (5) (6) (7) (8) (9)
2 How much can you do to control disruptive behavior during collaborative learning activities? (1) (2) (3) (4) (5) (6) (7) (8) (9)
3 How much can you use alternative (technology-based) resources to motivate students who show low interest in school work? (1) (2) (3) (4) (5) (6) (7) (8) (9)
4 How much can you gauge student comprehension of content delivered using technology resources? (1) (2) (3) (4) (5) (6) (7) (8) (9)
5 How much can you use alternative (technology-based) resources to get through to the most difficult students? (1) (2) (3) (4) (5) (6) (7) (8) (9)
6 How well can you respond to difficult questions from your students? (1) (2) (3) (4) (5) (6) (7) (8) (9)
7 How much can you do to adjust your lessons to the proper level for individual students? (1) (2) (3) (4) (5) (6) (7) (8) (9)
8 To what extent can you craft good collaborative learning activities for your students? (1) (2) (3) (4) (5) (6) (7) (8) (9)
9 How well can you provide appropriate challenges for very capable students? (1) (2) (3) (4) (5) (6) (7) (8) (9)
10 How well can you respond to defiant students? (1) (2) (3) (4) (5) (6) (7) (8) (9)
11 How much can you do to calm a student who is disruptive? (1) (2) (3) (4) (5) (6) (7) (8) (9)
12 How much can you use alternative (technology-based) resources to help your students value learning? (1) (2) (3) (4) (5) (6) (7) (8) (9)
13 How much can you do to get students to follow classroom rules? (1) (2) (3) (4) (5) (6) (7) (8) (9)
14 How well can you implement alternative (technology-based) strategies in your classroom? (1) (2) (3) (4) (5) (6) (7) (8) (9)
15 How much can you use a variety of technology-based assessment strategies? (1) (2) (3) (4) (5) (6) (7) (8) (9)
16 How much can you use alternative (technology-based) resources to help your students think critically? (1) (2) (3) (4) (5) (6) (7) (8) (9)
17 To what extent can you make your expectations clear about student behavior? (1) (2) (3) (4) (5) (6) (7) (8) (9)
18 How much can you gauge student comprehension of what you have taught? (1) (2) (3) (4) (5) (6) (7) (8) (9)
19 How much can you do to foster student creativity? (1) (2) (3) (4) (5) (6) (7) (8) (9)
20 How much can you use a variety of assessment strategies? (1) (2) (3) (4) (5) (6) (7) (8) (9)
21 How well can you implement alternative strategies in your classroom? (1) (2) (3) (4) (5) (6) (7) (8) (9)
22 How much can you assist families in helping their children do well in school? (1) (2) (3) (4) (5) (6) (7) (8) (9)
23 How well can you establish a classroom management system with each group of students? (1) (2) (3) (4) (5) (6) (7) (8) (9)
24 How much can you do to improve the understanding of a student who is failing? (1) (2) (3) (4) (5) (6) (7) (8) (9)
25 How much can you do to help your students think critically? (1) (2) (3) (4) (5) (6) (7) (8) (9)
26 How much can you do to motivate students who show low interest in school work? (1) (2) (3) (4) (5) (6) (7) (8) (9)
27 How well can you establish routines to keep activities running smoothly? (1) (2) (3) (4) (5) (6) (7) (8) (9)
28 How much can you do to help your students value learning? (1) (2) (3) (4) (5) (6) (7) (8) (9)
29 How much can you use technology to foster student creativity? (1) (2) (3) (4) (5) (6) (7) (8) (9)
30 How much can you use alternative (technology-based) resources to improve the understanding of a student who is failing? (1) (2) (3) (4) (5) (6) (7) (8) (9)
31 How much can you use technology to adjust your lessons to the proper level for individual students? (1) (2) (3) (4) (5) (6) (7) (8) (9)
32 To what extent can you provide an alternative explanation or example when students are confused? (1) (2) (3) (4) (5) (6) (7) (8) (9)
33 How well can you keep a few problem students from ruining an entire lesson? (1) (2) (3) (4) (5) (6) (7) (8) (9)
34 How much can you do to get students to believe they can do well in school work? (1) (2) (3) (4) (5) (6) (7) (8) (9)
35 How much can you do to control disruptive behavior in the classroom? (1) (2) (3) (4) (5) (6) (7) (8) (9)
36 To what extent can you craft good questions for your students? (1) (2) (3) (4) (5) (6) (7) (8) (9)
37 How well can you keep a few problem students from ruining an entire collaborative learning activity? (1) (2) (3) (4) (5) (6) (7) (8) (9)
38 How well can you use technology to provide appropriate challenges for very capable students? (1) (2) (3) (4) (5) (6) (7) (8) (9)

Directions for Scoring the combined Teacher's Sense of Efficacy Scale (TSES) and Mobile Teacher's Sense of Efficacy Scale (mTSES)

(adapted from Tschannen-Moran & Woolfolk Hoy, 2001a)

Factor Analysis

It is important to conduct a factor analysis to determine how your participants respond to the questions. We have consistently found three moderately correlated factors: Efficacy in Student Engagement, Efficacy in Instructional Strategies, and Efficacy in Classroom Management, but at times the make-up of the scales varies slightly.
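For researchers who score the instrument with statistical software, the extraction step might look like the following Python sketch. It is illustrative only: the input file name is hypothetical, and the use of the third-party factor_analyzer package (chosen because its oblique oblimin rotation suits moderately correlated factors) is an assumption, not part of the instrument.

    import pandas as pd
    from factor_analyzer import FactorAnalyzer  # third-party package; assumed installed

    # Hypothetical file: one row per respondent, one column per item (1-38), values 1-9.
    responses = pd.read_csv("mtses_responses.csv")

    # Extract three factors with an oblique (oblimin) rotation, since the
    # engagement, instruction, and management factors are moderately correlated.
    fa = FactorAnalyzer(n_factors=3, rotation="oblimin")
    fa.fit(responses)

    # Items-by-factors loading matrix; inspect it to see which items group together.
    loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
    print(loadings.round(2))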

Subscale Scores

To determine the Efficacy in Student Engagement, Efficacy in Instructional Strategies, Efficacy in Classroom Management, Efficacy in Student Engagement with mLearning, Efficacy in Instructional Strategies with mLearning, and Efficacy in Classroom Management with mLearning subscale scores, we compute unweighted means of the items that load on each factor (a short scoring sketch follows the item groupings below). Generally, these groupings are:

TSES

Efficacy in Student Engagement: Items 1, 19, 22, 24, 25, 26, 28, 34

Efficacy in Instructional Strategies: Items 6, 7, 9, 18, 20, 21, 32, 36

Efficacy in Classroom Management: Items 10, 11, 13, 17, 23, 27, 33, 35

mTSES

Efficacy in Student Engagement with mLearning: Items 3, 5, 12, 16, 22, 29, 30, 34

Efficacy in Instructional Strategies with mLearning: Items 4, 6, 8, 14, 15, 21, 32, 38

Efficacy in Classroom Management with mLearning: Items 2, 10, 11, 13, 17, 23, 27, 37
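As a worked illustration of the unweighted-mean scoring described above, the sketch below computes all six subscale scores from a participants-by-items response matrix. The item groupings are taken directly from the lists above; the function and variable names are hypothetical.

    import numpy as np

    # Items (numbered 1-38) that load on each factor, as listed above.
    SUBSCALES = {
        "Engagement (TSES)": [1, 19, 22, 24, 25, 26, 28, 34],
        "Instructional Strategies (TSES)": [6, 7, 9, 18, 20, 21, 32, 36],
        "Classroom Management (TSES)": [10, 11, 13, 17, 23, 27, 33, 35],
        "Engagement (mTSES)": [3, 5, 12, 16, 22, 29, 30, 34],
        "Instructional Strategies (mTSES)": [4, 6, 8, 14, 15, 21, 32, 38],
        "Classroom Management (mTSES)": [2, 10, 11, 13, 17, 23, 27, 37],
    }

    def subscale_scores(responses):
        """Unweighted mean of the items loading on each factor.

        responses: array of shape (n_participants, 38), values 1-9.
        Returns a dict mapping subscale name to per-participant mean scores.
        """
        responses = np.asarray(responses, dtype=float)
        return {name: responses[:, np.array(items) - 1].mean(axis=1)
                for name, items in SUBSCALES.items()}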

Reliabilities

In Tschannen-Moran, M., & Woolfolk Hoy, A. (2001). Teacher efficacy: Capturing an elusive construct. Teaching and Teacher Education, 17, 783-805, the following were found:

Subscale             Mean    SD      alpha
OSTES (full scale)   7.1     .94     .94
Engagement           7.3     1.1     .87
Instruction          7.3     1.1     .91
Management           6.7     1.1     .90
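The alpha values in the table are Cronbach's alpha coefficients. For readers replicating the reliability analysis, a minimal sketch of the computation, assuming a participants-by-items response matrix for a given subscale, might look like:

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_participants, n_items) response matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

Applying the function to the eight items of a subscale yields a reliability estimate comparable to those tabled above.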

 


Using the mTSES to Evaluate and Optimize mLearning Professional Development by Robert Power, Dean Cristol, Belinda Gimbert, Robin Bartoletti, and Whitney Kilgore is licensed under a Creative Commons Attribution 4.0 International License.