Using the mTSES to Evaluate and Optimize mLearning Professional Development

The impact of targeted professional development activities on teachers’ perceptions of self-efficacy with mobile learning remains understudied. Power (2015a) used the Mobile Teacher’s Sense of Efficacy Scale (mTSES) survey instrument to measure the effects of a mobile learning themed professional development course on teachers’ confidence with and interest in mobile learning. The current study looks at changes in perceptions of self-efficacy amongst participants in another open course about mobile learning called Instructional Design for Mobile Learning (ID4ML), which took place from May 4 – June 6, 2015 (Power, Bartoletti & Kilgore, 2015). The purpose of this study is to verify the reliability and construct validity of the mTSES instrument developed by Power (2015a, 2015b) and Power, Cristol and Gimbert (2014), and to explore trends in self-efficacy changes amongst a more diversified participant population. This paper reports on the findings from the analysis of data collected using the mTSES tool. The findings provide useful feedback on the impacts of participating in the ID4ML course. They also provide further support for the utility of the mTSES instrument as a measure of perceptions of self-efficacy with mobile learning. These findings point to the potential utility of the mTSES as a tool for both planning and evaluating mLearning professional development training for teachers.



Introduction
Despite increasing calls for wider integration of mobile technologies into formal education, one of the most significant determinants of teachers' willingness to adopt mobile learning strategies remains understudied (Kenny, Park, Van Neste-Kenny, & Burton, 2010). A strong sense of confidence in their own abilities increases the likelihood that teachers will experiment with new technologies or teaching approaches (Tschannen-Moran & Woolfolk Hoy, 2001a). This study examined changes in participants' perceptions of self-efficacy after participating in a Massive Open Online Course (MOOC) called Instructional Design for Mobile Learning (ID4ML) (Power, Bartoletti, & Kilgore, 2015). The Mobile Teacher's Sense of Efficacy Scale (mTSES) (Power, Cristol, & Gimbert, 2014; Power, 2015a, 2015b) was used to gauge perceptions of self-efficacy before and after participation in the mobile learning themed professional development. The results revealed that the course had helped participants gain confidence in their abilities to use mobile devices and applications to increase student engagement. However, ID4ML participants showed decreased confidence in their abilities with designing instruction and classroom management for mobile learning. The results were compared to those reported for participants in a recent MOOC with an explicit focus on a framework for pedagogical decisions about mobile learning design (Power, 2015a). Analyses of demographic trends in mTSES results from the two courses point to areas where changes could be made to increase the likelihood that participants will integrate mobile learning into their teaching practice. The results of this study demonstrate the utility of the mTSES instrument as a tool for assessing the effectiveness of mobile learning focused professional development. They also highlight the potential for the mTSES to be used by professional development planners to design training to meet the specific needs of target audiences.
The mTSES instrument has the potential to complement other professional development planning and evaluation tools, allowing planners to specifically target perceptions of confidence alongside other intended learning outcomes.

Background
Teachers' adoption of new instructional technologies and pedagogical strategies is influenced by confidence in their ability to do so effectively. This perception of confidence is referred to as a teacher's sense of self-efficacy by Tschannen-Moran and Woolfolk Hoy (2001a), who defined it as "a judgement of… capabilities to bring about desired outcomes of student engagement and learning" (p. 783). Perceptions of self-efficacy can influence a teacher's "levels of planning and organization" and "willingness to experiment with new methods to meet the needs… of students" (p. 783). Higher levels of self-efficacy amongst teachers have also been demonstrated to be predictors of "persistence when things do not go smoothly and their resilience in the face of setbacks" (p. 783). In contrast, lack of a sense of confidence in one's abilities results in greater tendencies amongst teachers to abandon new strategies and tools, or even to leave the profession altogether. Addressing perceptions of self-efficacy appears crucial in any effort to increase the adoption of new techniques and technologies.
The imperatives to integrate mobile technologies and mobile learning strategies are becoming increasingly commonplace in discourse on how to meet the changing needs of learners and education systems (Ally & Prieto-Blázquez, 2014; Traxler, 2012; Groupe Spécial Mobile Association, 2012).
However, Ally and Prieto-Blázquez (2014, pp. 145-146) warned that current teacher training programs continue to be based on an outdated education system model that does not adequately prepare teachers to integrate mobile technologies into teaching practice. Teachers' perceptions of self-efficacy can be negatively impacted by a lack of training in instructional design for mobile learning (Kenny et al., 2010).
Negative perceptions of self-efficacy have been highlighted as a significant hindrance to wider-spread adoption of mobile learning strategies amongst teachers and education systems (Ally, Farias, Gitsaki, Jones, McLeod, Power & Stein, 2013; Kenny et al., 2010; Power, 2015a). Despite this, the concept of perceptions of self-efficacy "does not yet appear to have been examined in any detail in a mobile learning context" (Kenny et al., 2010, p. 2). Power, Cristol and Gimbert (2014) and Power (2015a) have attempted to address the absence of discourse about the promotion of teachers' perceptions of self-efficacy with mobile learning. One tool that has been developed is the Mobile Teacher's Sense of Efficacy Scale (mTSES). The mTSES instrument is based upon Tschannen-Moran and Woolfolk Hoy's (2001a, 2001b) Teacher's Sense of Efficacy Scale (TSES). The original TSES instrument consists of 24 questions. It uses a nine-point scale to measure teachers' levels of confidence with their ability to complete common, critical teaching tasks on the sub-domains of Student Engagement, Instructional Strategies, and Classroom Management. The mTSES consists of 38 questions, and uses the same nine-point scale and sub-domains. It provides teachers' scores with respect to common instructional tasks for the original TSES scale, as well as with respect to the integration of mobile learning strategies (Power, 2015a, 2015b). By providing scores for the original TSES and the mTSES scales, the mTSES instrument compares teachers' perceptions of self-efficacy with teaching in general to their perceptions about the use of mobile learning strategies. Power (2015a) used the mTSES to measure changes in teachers' perceptions of self-efficacy amongst participants in a MOOC about the CSAM learning design framework (Power et al., 2014; Power, 2015a).
The three week long MOOC introduced the CSAM learning design framework (Power, 2013; Power et al., 2014), and explored the use of the framework to guide instructional design decisions about the integration of mobile reusable learning objects (RLOs) into participants' own teaching contexts. Participants built prototype mobile RLOs, and also used the CSAM framework as a post-assessment tool for their prototypes. The mTSES instrument was integrated as a learning activity at both the beginning and the end of the MOOC. Participants were provided with a tool to self-score their mTSES surveys, and were asked to reflect upon changes in their perceptions of self-efficacy. Power (2015a) analyzed participants' pre-course and post-course mTSES scores, and found an overall increase in their perceptions of self-efficacy with mobile learning in comparison to their original TSES sub-domain scores. While those gains diminished when the mTSES was re-administered as a follow-up three months after the completion of the course, Power (2015a) found that participants still had stronger perceptions of self-efficacy with mobile learning strategies than could be accounted for through maturation alone. Qualitative data were also collected to help gain a better understanding of how participation in the CSAM MOOC impacted perceptions of self-efficacy. Power (2015a) used open-response survey questions and follow-up interviews to ask about participants' perceptions of the CSAM MOOC, its impact upon their perceptions of self-efficacy, and what they perceived as necessary going forward to adopt mobile learning strategies. The mTSES results and qualitative data were used to identify potential improvements to the design of the professional development MOOC, as well as to make recommendations for further research and future professional development practice.
The CSAM MOOC studied by Power (2015a) had a total of 72 registered participants, who came from a relatively homogeneous North American background. The pre-course mTSES survey was completed by 36 study participants, and the post-course mTSES was completed by 22 participants. One of the recommendations for further research proposed by Power (2015a) was that the mTSES instrument be used to study mobile learning self-efficacy perceptions amongst a larger, more diverse sample of teachers and instructional developers. This paper presents findings from the use of the mTSES with participants in a free MOOC called Instructional Design for Mobile Learning (ID4ML) (Power et al., 2015).
ID4ML was conducted from May 4 – June 6, 2015, using the Canvas™ (Canvas, n.d.; Instructure, n.d.) open learning management system. The course consisted of five modules, as outlined in Table 1:

Table 1
Course Modules for Instructional Design for Mobile Learning (ID4ML)

Week      Module
Week 0    Introduction to the Course
Week 1    Defining and Understanding Mobile Learning
Week 2    Instructional Design Principles for mLearning
Week 3    Hands on Mobile Learning
Week 4    Course Wrap Up

The primary focus of the ID4ML MOOC was on exploration of a variety of mobile applications and mobile learning tools, and discussion of the potential for integration of those resources into participants' teaching and learning practices. A specific focus on pedagogical design for mobile learning was limited to the Week 2: Instructional Design Principles for mLearning module. Content for the Week 2 module was drawn from the CSAM MOOC (Power et al., 2014; Power, 2015a). However, participants were not required to dedicate as much time to personal instructional design projects as in the original CSAM MOOC. Nor were they asked to design, produce, or evaluate a prototype mobile RLO using the CSAM framework.

Research Questions
This paper builds upon the findings from the use of the mTSES instrument by Power (2015a). The mTSES was administered to participants in the ID4ML MOOC with the aim of exploring its utility as a tool for planning and evaluating professional development about using mobile learning resources and strategies.
The specific research questions explored were:
1. Are measures of the construct validity and reliability of the mTSES tool consistent with previous measurements?
2. What effect did participation in ID4ML have upon participants' perceptions of self-efficacy with the use of mobile learning strategies in teaching practice?
a. Are there differences in the effects of participation in ID4ML upon participants' perceptions of self-efficacy with mobile learning strategies based upon demographic characteristics?
b. How do changes in ID4ML participants' perceptions of self-efficacy with mobile learning strategies compare to those reported by Power (2015a)?

Methodology
Quantitative data were collected for this research using pre-course and post-course administrations of the mTSES instrument. Volunteers from the ID4ML course were invited to participate in the study, and to complete the two mTSES surveys. Participants used hyperlinks within the course to access the online mTSES surveys. The hyperlinks to the pre-course and post-course administrations of the mTSES were only available during designated times in the Week 0: Introduction to the Course and the Week 4: Course Wrap Up modules, respectively. Access to the surveys outside of these times was blocked so that all pre-course and post-course mTSES submissions measured perceptions of self-efficacy following uniform periods of exposure to the ID4ML training. Course participants who enrolled in ID4ML after the initial orientation week did not participate in the research study, and participants were unable to complete the post-course survey beyond the course completion date.
Changes in participants' perceptions of self-efficacy were determined using the procedures outlined by Power (2015a). Data from the pre-course and post-course administrations of the mTSES were analyzed using Microsoft™ Excel™. Mean scores were calculated on a nine-point scale for each of the 38 question items from the aggregate data from each mTSES administration. The overall mean scores were then used to calculate mean scores for each of the TSES and mTSES sub-domains. Mean TSES and mTSES scores were also calculated based upon the demographic categories of years of teaching experience, participant status, geographic region, and gender. Aggregate mean scores for the TSES and mTSES domains and sub-domains, as well as those for the different demographic categories, were compared to determine initial and post-course differences in perceptions of self-efficacy with teaching in general versus the use of mobile learning strategies. The aggregate and demographic category pre-course and post-course TSES and mTSES scores were also compared to determine the extent of changes in perceptions of self-efficacy along each scale.
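The mean-score procedure above can be sketched in a few lines of Python. This is a minimal illustration with invented item identifiers and an invented item-to-sub-domain mapping (the real mTSES has 38 items), not the study's actual analysis:

```python
from statistics import mean

# Hypothetical survey data: each dict holds one respondent's ratings
# on the nine-point scale, keyed by item identifier.
responses = [
    {"q1": 7, "q2": 8, "q3": 6, "q4": 9},
    {"q1": 5, "q2": 6, "q3": 7, "q4": 8},
]

# Illustrative item-to-sub-domain mapping (not the real mTSES items).
sub_domains = {
    "student_engagement": ["q1", "q2"],
    "instructional_strategies": ["q3", "q4"],
}

def item_means(responses):
    """Mean score for each question item across all respondents."""
    return {item: mean(r[item] for r in responses) for item in responses[0]}

def sub_domain_means(responses, sub_domains):
    """Sub-domain score: the mean of the item means in each grouping."""
    means = item_means(responses)
    return {name: mean(means[i] for i in items)
            for name, items in sub_domains.items()}

print(sub_domain_means(responses, sub_domains))
# {'student_engagement': 6.5, 'instructional_strategies': 7.5}
```

Differencing the resulting pre-course and post-course means then yields the per-sub-domain change scores described above.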

Participant Demographics
Participants in the ID4ML study came from more diverse demographic backgrounds than those from Power (2015a). Participation in the ID4ML study was voluntary. Of the 2231 registered participants in the ID4ML MOOC, a total of 105 completed the pre-course mTSES survey, and 37 completed the post-course mTSES survey. Table 2 presents a comparison of the total number of participants and the demographic breakdowns of participants between Power (2015a) and the ID4ML study. Response rates were lower for the post-course mTSES administrations for both Power (2015a) and the ID4ML study. However, such attrition is not unusual in research studies involving repeated survey or questionnaire administrations (Cohen, Manion, & Morrison, 2011). The decline in survey submission rates was also lower than typical MOOC participant attrition and completion rates (Jordan, 2014; Parr, 2013).

Construct Validity and Reliability of the mTSES
Determination of the construct validity and reliability of the mTSES instrument was conducted using the procedures outlined by Benton-Borghi (2006). The reliabilities of the various survey instruments are presented in Table 3. The Cronbach's alpha reliability scores were generally consistent for the total scales, as well as for the three sub-domains, across all instrument administrations. Power (2015a) noted that the comparability of reliability scores for the total scales as well as for the sub-domains "supports the conclusion of comparable construct validity between the TSES and the modified mTSES" (p. 135). This conclusion is further supported by the consistency of the reliability scores obtained from the ID4ML surveys. The similarities in the reliability scores and construct validities mean that researchers can place confidence in comparisons of total scale and sub-domain scores between the original TSES (self-efficacy with common teaching tasks) and the mTSES (self-efficacy with the use of mobile learning strategies). The similarities in reliability also mean that researchers can place confidence in the use of the mTSES as a tool for measuring changes in teachers' perceptions of self-efficacy with mobile learning.
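Cronbach's alpha itself is straightforward to reproduce from item-level data. The sketch below implements the standard formula (number of items, sum of item variances, and total-score variance) against invented scores; it is an illustration, not the study's analysis:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / total variance).

    item_scores: list of k lists, each holding one item's scores
    across all n respondents.
    """
    k = len(item_scores)
    sum_item_var = sum(pvariance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # per-respondent totals
    return (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))

# Invented data: three items, four respondents, nine-point scale.
items = [
    [7, 8, 6, 9],
    [6, 8, 5, 9],
    [7, 9, 6, 8],
]
print(round(cronbach_alpha(items), 3))  # 0.944
```

Values approaching 1.0 indicate highly consistent responses across the items of a scale or sub-domain.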

Domain Score Analysis
Participants' mean scores on the sub-domains of Student Engagement, Instructional Strategies, and Classroom Management were calculated for both the TSES and mTSES scales for the pre-course and post-course administrations of the mTSES instrument. Mean scores for the pre-course mTSES were subtracted from those for the second survey administration to determine the mean change in scores for each sub-domain from the beginning of the course to the end of the course. Table 4 reports the mean sub-domain scores for each scale as obtained by Power (2015a), as well as for the participants from ID4ML.

Net Changes Accounting for Maturation
Changes in participants' mean scores on the mTSES scale sub-domains appear generally consistent between the ID4ML participants and those reported by Power (2015a). However, participants in Power (2015a) showed greater net gains in perceptions of self-efficacy once the effects of maturation were accounted for.
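The maturation adjustment can be expressed compactly. The sketch below assumes, as one interpretation of the procedure attributed to Power (2015a), that the change on the general-teaching TSES items is treated as maturation and subtracted from the change on the mobile learning mTSES items; the numbers are invented:

```python
def net_change(pre_tses, post_tses, pre_mtses, post_mtses):
    """Net (intervention) change in mobile learning self-efficacy,
    treating the change on the general TSES scale as maturation."""
    maturation = post_tses - pre_tses      # change on general teaching items
    raw_change = post_mtses - pre_mtses    # change on mobile learning items
    return raw_change - maturation

# Illustrative sub-domain means on the nine-point scale.
print(round(net_change(pre_tses=6.8, post_tses=7.0,
                       pre_mtses=6.0, post_mtses=6.9), 2))  # 0.7
```

A positive net value suggests a gain in mobile learning self-efficacy beyond what general maturation over the course period would explain.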

Demographic Analyses
Changes in perceptions of self-efficacy were further analyzed along four different demographic categories, including participants' years of teaching experience, status (with respect to the education profession), geographic region, and gender. These changes were compared to similar demographic analyses reported by Power (2015a).

Years of Teaching Experience
Participants from both research studies with less than five years of teaching experience were the least likely to show increases in their perceptions of self-efficacy. Mean scores for participants with less than five years of teaching experience in both ID4ML and Power (2015a) decreased for the mTSES sub-domain score for Instructional Strategies. Table 6 reports the changes in TSES and mTSES scores for participants from both ID4ML and Power (2015a) according to years of teaching experience. The procedures outlined by Power (2015a) were used to determine the net changes in participants' perceptions of self-efficacy with the use of mobile learning strategies accounting for the effects of maturation. Table 7 reports the net changes (intervention effect) for participants from both ID4ML and Power (2015a) based upon years of teaching experience.

Participant Status
Net changes accounting for the effects of maturation based upon participant status are presented in Table 9.

Geographic Region
Participants were asked to self-identify their geographic region when completing the pre-course and post-course mTSES administrations. Table 10 reports changes in sub-domain scores for both the TSES and mTSES scales for participants from both ID4ML and Power (2015a).

Gender
Differences in TSES and mTSES sub-domain scores were not reported by gender by Power (2015a). Table 12 reports the mean pre-course and post-course TSES and mTSES sub-domain scores for female and male participants from ID4ML, as well as the changes in participants' mean scores for each sub-domain.
Thank you for ID4ML! I'm not a teacher but as a web developer / lifelong learner I found the class exceptionally well done. I've been taking MOOCs… for several years now and this course ranks near the top for an engaging mix of media types and interactive projects (Canvas user, May 30, 2015).

Discussion
This study sought to determine whether the enthusiasm expressed by some ID4ML participants corresponded with real changes in confidence in their abilities to adopt mobile learning in teaching practice.
Participants' mean pre-course and post-course mTSES scores, and changes in their mean mTSES scores, were compared across the demographic categories of years of teaching experience, participant status, geographic region, and gender. The ID4ML mTSES results were also compared to those reported by Power (2015a). The analyses provide insights into the impact of the ID4ML MOOC. They also provide insights into the potential of the mTSES instrument as a needs assessment tool, and as a post-training assessment tool, when planning mobile learning themed professional development for specific target audiences.
The analysis of the net mTSES scale score changes revealed that participants in ID4ML did not show the same improvements in perceptions of self-efficacy with mobile learning strategies as participants from Power (2015a). However, analysis of participant demographics from each group points to possible reasons for these differences. Participants from Power (2015a) were almost exclusively practicing K12 or post-secondary teachers, or graduate-level education students. In contrast, just over half of the respondents to the pre-course mTSES from the ID4ML group were K12 or post-secondary teachers and teacher-training students. The remaining ID4ML respondents consisted of private-sector training professionals, participants who were not currently employed, and participants who identified themselves as "other." Similar ratios were seen amongst ID4ML respondents for the post-course mTSES. It is possible that participants who had previous training and experience with educational theory and practice were better prepared to benefit from the professional development experience. This possibility is supported by analysis of net mTSES sub-domain score changes based on participants' years of teaching experience. For both the ID4ML and Power (2015a) groups, participants with more years of teaching experience tended to show the greatest score increases for all three sub-domains.
Another potential contributor to the differences between the ID4ML and Power (2015a) groups' observed net mTSES score changes is the structure and content of the training itself. Participants in the Power (2015a) MOOC were exposed to three weeks of training focused exclusively on making, implementing, and evaluating instructional design decisions for mobile learning. Participants in the four-week ID4ML MOOC were exposed to a one-week module that introduced the same instructional design framework (the CSAM learning design framework) as presented in Power (2015a). However, they were not required to use the framework to either prepare a detailed instructional design plan, or to evaluate a mobile learning instructional design plan once a prototype had been implemented. The ID4ML MOOC placed a greater degree of emphasis on the range of available applications for mobile learning, and on hands-on experience with the mechanics of using selected mobile applications. Perceptions of self-efficacy with mobile learning strategies amongst participants from Power (2015a) may have increased to a greater degree because their training focused more on pedagogical decision-making than did that of their counterparts in ID4ML.
The impact of the differences in focus of the ID4ML and Power (2015a) MOOCs is also evidenced in analyses of the net score changes for the three individual mTSES sub-domains. Whereas participants from Power (2015a) showed equal net score increases for both the Student Engagement and Instructional Strategies sub-domains, participants from the ID4ML group only showed a net score increase for the Student Engagement sub-domain. Net score changes for the Instructional Strategies and Classroom Management sub-domains showed decreased perceptions of self-efficacy amongst participants from ID4ML. These changes indicate that the exposure to various mobile learning applications in ID4ML increased participants' confidence in the ability of mobile learning tools and strategies to engage their students. However, the training did not leave participants with more confidence in their abilities to design mobile learning instruction, or to manage a classroom where mobile learning strategies were being used.
Confidence in classroom management abilities for mobile learning was also lower than in the Student Engagement and Instructional Strategies sub-domains for participants from Power (2015a). This lower net sub-domain score points to a need for more emphasis specifically on classroom management skills for mobile learning in future professional development for teachers.
Geographic region does not appear to play as significant a role as other demographic factors in participants' perceptions of self-efficacy with mobile learning strategies for either the pre-course or post-course mTSES administrations, or in observed levels of net sub-domain score changes. North American participants from Power (2015a) showed greater net score changes than those observed for any regional group from ID4ML. Amongst ID4ML participants, mTSES respondents from the North American and

Recommendations for Research and Practice
The ID4ML MOOC (Power et al., 2015) and the CSAM MOOC (Power, 2015a) had different instructional focuses, and different demographic compositions. Further research is recommended to compare trends in mTSES score changes between more similar professional development courses and demographic groups.
It is also recommended that future research into the effects of mobile learning themed professional development include a mixed-methods approach, as outlined by Power (2015a). Quantitative data analyses from mTSES survey administrations should be augmented with qualitative analyses of open-response questionnaires and participant interviews in order to gain a broader understanding of how particular professional development programs affect perceptions of self-efficacy. Power (2015a) attempted to minimize the effects of cognitive load associated with device and application mastery, in order to focus on the effects of scaffolding pedagogical decision-making on teachers' perceptions of self-efficacy. Additional research would be beneficial to explore the degree to which lack of device and application mastery affects self-efficacy and subsequent adoption rates of mobile learning strategies.
Additionally, follow-up surveys and interviews with participants in Power (2015a)

Professional development planners need not wait until a training program has been developed and implemented to make use of the mTSES instrument. The mTSES could be administered to target participants during a needs assessment phase. The results from target participants' sub-domain scores could then be used to make decisions about preparedness for a training intervention, and areas of focus for the intervention. The mTSES tool could also be re-administered at the onset of the training program, at the end of the program, and as a longer-term post-training assessment of the impacts on perceptions of self-efficacy.

Conclusions
Education stakeholders are calling more frequently for the integration of mobile technologies and mobile learning strategies into instructional design in formal education systems. However, teachers' perceptions of confidence in their abilities to use mobile learning strategies have been cited as a barrier to larger scale adoption of mobile learning (Ally et al., 2013). At the same time, there has been a lack of research into self-efficacy with respect to mobile learning (Kenny et al., 2010). The Mobile Teacher's Sense of Efficacy Scale (mTSES) instrument was developed in an attempt to address the lack of mobile learning self-efficacy research (Power et al., 2014; Power, 2015a, 2015b).

Factor Analysis.
It is important to conduct a factor analysis to determine how your participants respond to the questions.
We have consistently found three moderately correlated factors: Efficacy in Student Engagement, Efficacy in Instructional Practices, and Efficacy in Classroom Management, but at times the make-up of the scales varies slightly.

Subscale Scores.
To determine the Efficacy in Student Engagement, Efficacy in Instructional Practices, Efficacy in Classroom Management, Efficacy in Student Engagement with mLearning, Efficacy in Instructional Practices with mLearning, and Efficacy in Classroom Management with mLearning subscale scores, we compute unweighted means of the items that load on each factor. Generally these groupings are:
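As a worked illustration of the unweighted-means rule, the sketch below scores one respondent on all six subscales. The item groupings shown are placeholders only; the actual item numbers are those given by the scoring guide following the colon above:

```python
from statistics import mean

# Placeholder groupings; the real mTSES item numbers differ.
subscales = {
    "engagement": [1, 4],
    "instruction": [2, 5],
    "management": [3, 6],
    "engagement_mlearning": [7, 10],
    "instruction_mlearning": [8, 11],
    "management_mlearning": [9, 12],
}

def subscale_scores(answers, subscales):
    """Unweighted mean of the items that load on each factor.

    answers: dict mapping item number to a respondent's 1-9 rating.
    """
    return {name: mean(answers[i] for i in items)
            for name, items in subscales.items()}

# One invented respondent's ratings.
answers = {1: 7, 2: 8, 3: 6, 4: 9, 5: 7, 6: 5,
           7: 6, 8: 7, 9: 4, 10: 8, 11: 6, 12: 5}
scores = subscale_scores(answers, subscales)
print(scores["engagement"], scores["engagement_mlearning"])  # 8 7
```

Comparing each general-teaching subscale with its mLearning counterpart gives the TSES-versus-mTSES contrast used throughout the study.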