Developing Open Education Literacies with Practicing K-12 Teachers

This study seeks to understand how to use formal learning activities to effectively support the development of open education literacies among K-12 teachers. Considering pre- and post-surveys from K-12 teachers (n = 80) who participated in a three-day institute, this study considers whether participants entered institutes with false confidence or misconceptions related to open education, whether participant knowledge grew as a result of participation, whether takeaways matched expectations, whether time teaching (i.e., teacher veterancy) impacted participant data, and what specific evaluation items influenced participants’ overall evaluations of the institutes. Results indicated that 1) participants entered the institutes with misconceptions or false confidence in several areas (e.g., copyright, fair use), 2) the institute was effective for helping to improve participant knowledge in open education areas, 3) takeaways did not match expectations, 4) time teaching did not influence participant evaluations, expectations, or knowledge, and 5) three specific evaluation items significantly influenced overall evaluations of the institute: learning activities, instructor, and website/online resources. Researchers conclude that this type of approach is valuable for improving K-12 teacher open education literacies, that various misconceptions must be overcome to support large-scale development of open education literacies in K-12, and that open education advocates should recognize that all teachers, irrespective of time teaching, want to innovate, utilize open resources, and share in an open manner.



Introduction
Despite decades of work in the area and hundreds of initiatives and research studies focused on utilizing technology to improve classroom teaching and learning, effective technology integration remains a "wicked problem," complicated by diverse learning contexts, emerging technologies, and social trends that make formalized approaches to technology integration and theory development difficult (Kimmons, in press; Mishra & Koehler, 2007). Within this space, those intent upon improving K-12 teaching and learning with technology have had difficulty agreeing upon what constitutes effective integration, what the purposes of integration might be, and how such integration might help to solve some of the persistent problems plaguing educational institutions without falling prey to technocentric approaches to change (Papert, 1987).
Most proponents of open education focus exclusively upon higher education, despite much excitement among teachers for expanding open practices to K-12 and preliminary evidence that open education can help to address persistent K-12 problems. Reasons for lack of spill-over into K-12 vary, but it is likely that this difference stems in part from the fact that change in K-12 must either occur at the highly bureaucratic state level or at the hidden local level, whereas higher education institutions and their professors have more flexibility to try innovative approaches and also enjoy greater visibility for sharing results. Nonetheless, advances are being made in bringing open practices to K-12 through both practice and research.
Perhaps the most well-known study in this regard was completed by Wiley, Hilton, Ellington, and Hall (2012), wherein they conducted a preliminary cost impact analysis on K-12 school use of open science textbooks and found that these resources may be a cost-effective alternative for schools if certain conditions are met (e.g., high volume).
Beyond driving down costs, however, others have suggested that open education can help support the emergence of "open participatory learning ecosystems" (Brown & Adler, 2008, p. 31), can counterbalance the deskilling of teachers that occurs through the purchasing of commercial curricula (Gur & Wiley, 2007), and can provide a good basis for creating system-wide collaborations in teaching and learning (Carey & Haney, 2007). These potentials represent promising aims for K-12 and have even led to the development of open high schools intent upon democratizing education and treating access to educational materials as a fundamental human right (Tonks, Weston, Wiley, & Barbour, 2013).
However, it is also recognized that the shift to open is problematic for a number of reasons (Baraniuk, 2007; Walker, 2007), not least of which is the fact that K-12 teachers must develop new information literacies to become effective open educators (Tonks, Weston, Wiley, & Barbour, 2013), and little work has been done to study how to best support these professionals in developing the literacies and practices necessary to embrace openness or to utilize and create their own open educational resources (cf. Jenkins, Clinton, Purushotma, Robinson, & Weigel, 2006; Rheingold, 2010; Veletsianos & Kimmons, 2012). Participants were organized into professional learning communities (PLCs); groupings varied by institute but typically included subject area specialization (e.g., science, mathematics).
The actual structure of learning activities at each institute was also atypical as compared to most K-12 professional development experiences. Each institute consisted of three phases, one per day. Day One was the most traditional in the sense that it was largely instructor-centered and focused on presentations, provocative videos, and class-wide discussions. During Day One, a small portion of the time was also devoted to helping participants get to know their PLCs and begin making plans for how they would work together through the institute. Day Two was completely different. At the start, participants took a few minutes for a planning session with their PLCs to set goals and gather thoughts from the day before and then began a series of development sprints in which each PLC worked together to create open educational resources that would be valuable to their members' schools and classrooms. During Day Two, the instructor interjected occasionally to provide guidance and support, but all learning and activities were driven by the goals each PLC established autonomously.
During Day Three, the PLCs were given time to wrap up their projects, the instructor provided final guidance on sharing, and each PLC presented their products to the larger group and also made their resources available to the public on the web.
Throughout this process, technology was heavily used to support collaboration and communication. The open course website was made available to participants and the public before the institute began and remains open and available indefinitely (Kimmons, 2014). This decision was surprising to participants, who were accustomed to professional development experiences in which information was initially provided but access was severed upon completion. Making information and resources perpetually available gave participants more freedom to focus on creating their own products and critically evaluating learning experiences, as opposed to laboriously taking notes in preparation for the time when access to information resources would cease.
Within the lab space utilized for the institutes, each PLC was assigned to a horseshoe-shaped table with a display switching matrix and a large-screen interactive display, along with personal computing devices that connected to the table. This allowed each participant to wirelessly access information resources and work on institute materials individually while also working within a group setting where they could autonomously and effectively collaborate, share, and present their information to other group participants. Throughout this process, collaborative document creation software (i.e., Google Drive) was used so that participants could work on the same documents simultaneously and share resources in a common, cloud-based folder.
Before these institutes, many participants had never used these types of software and hardware tools, and most had never used them in a synchronous, collaborative setting. Furthermore, the lab also provided access to a variety of other hardware and software tools.

Methods
This study employed a longitudinal survey design methodology (Creswell, 2008) to collect and analyze data from institute participants before and after the institute. This method was deemed appropriate because the research questions lent themselves to quantitative analysis of trends among institute participants over the course of the three-day experience.

Sample
Survey respondents included eighty (n = 80) participants in the targeted Technology and Open Education summer institutes. In total, over one hundred K-12 educators participated in the institutes, but not all elected to participate in the study. Participants were predominantly female, reflecting an uneven gender distribution of the K-12 labor force in the target state, came from all geographic regions of the target state, and were generally veteran teachers (72% having taught for five or more years). More detailed participant demographic information was not collected, because it was deemed unnecessary to answer the research questions.

Data Collection
Throughout the institutes, both quantitative and qualitative feedback was elicited from participants, but this report deals primarily with quantitative results. Data sets for this study included two online surveys: one conducted immediately before the institute and one conducted immediately after the institute.

Survey Instruments
Both surveys were delivered online, and participants completed them by following a link on their personal or provided laptops or mobile devices while at the institute. Surveys consisted of a number of questions that may be categorized as eliciting one of the following:
• fact (e.g., years teaching);
• expectation (e.g., personal learning goal);
• knowledge (e.g., self-assessment);
• evaluation (e.g., instructor evaluation);
• open response (e.g., general feedback).

Pre-survey.
The pre-survey consisted of two factual questions, a knowledge question, and an expectation question.

Response rate.
A complete response was defined as the presence of both a pre-survey and a post-survey for each participant. Since all study participants were encouraged to complete surveys on-site, the response rate was high (80%), and missing surveys likely reflected improper entry of unique identification numbers or accidental failure to complete one survey.

Analysis
Data from the pre-survey and post-survey were merged using a unique identifier provided by participants in each survey. Participant data that did not include both surveys were considered incomplete and were excluded from analysis. If multiple responses existed for a participant, timestamps were used to select the earliest submission for the pre-survey (to avoid post-surveys mistakenly taken as pre-surveys) and the latest submission for the post-survey (to avoid pre-surveys mistakenly taken as post-surveys). All other submissions were discarded. Several tests were run on the data to answer pertinent research questions; each research question and its accompanying test(s) are explained below.
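This merge-and-deduplicate step can be sketched in a few lines of pandas. This is an illustrative sketch, not the study's actual analysis code; the column names (participant_id, timestamp) are hypothetical:

```python
import pandas as pd

def merge_surveys(pre: pd.DataFrame, post: pd.DataFrame) -> pd.DataFrame:
    """Merge pre- and post-survey responses on a participant identifier.

    Keeps the earliest pre-survey and the latest post-survey per
    participant (guarding against surveys taken in the wrong order)
    and drops participants missing either survey.
    """
    pre_clean = (pre.sort_values("timestamp")
                    .drop_duplicates("participant_id", keep="first"))
    post_clean = (post.sort_values("timestamp")
                      .drop_duplicates("participant_id", keep="last"))
    # An inner join discards incomplete participant records.
    return pre_clean.merge(post_clean, on="participant_id",
                           suffixes=("_pre", "_post"))
```

The inner join implements the study's exclusion rule directly: any participant lacking one of the two surveys simply disappears from the merged frame.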

RQ1: False confidence and misconceptions.
H0: There was no difference between self-evaluations of prior knowledge collected before the institute and after the institute.
H1: Self-assessments of prior knowledge collected before the institute were different than self-assessments of prior knowledge collected after the institute.
In the pre-survey, participants were asked "How well do you understand each of the following concepts or movements?" and then were expected to self-evaluate their understanding of six open or general education knowledge domains ("Common Core," "open education," "copyright," "fair use," "copyleft," and "public domain") according to a 5-point Likert scale. It was believed that participants might initially rate themselves one way on these knowledge areas but that, upon completion of the institute, they might come to realize that their initial self-assessments were incorrect. For this reason, the post-survey included the same question, reworded as follows: "How well did you understand each of the following concepts or movements before the institute?" These data were then compared using paired samples t-tests.
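A paired comparison of matched pre/post self-ratings can be sketched with SciPy. This is an illustrative stand-in (the excerpt does not spell out the exact test configuration used):

```python
from scipy import stats

def compare_self_assessments(pre_ratings, post_ratings):
    """Paired samples t-test on matched 5-point Likert self-ratings.

    pre_ratings and post_ratings must cover the same participants in
    the same order. Returns the t statistic and two-tailed p value;
    a significant positive t (post ratings lower) would be consistent
    with initial false confidence.
    """
    result = stats.ttest_rel(pre_ratings, post_ratings)
    return result.statistic, result.pvalue
```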

RQ3: Expectations and takeaways.
H0: Valued takeaways from the institute matched initial expectations.
H1: Valued takeaways from the institute did not match initial expectations.
In the pre-survey, participants were asked "What do you hope to gain from this institute (please rank with the most valuable at the top)?" and were provided with the following four items: technology integration, open education, technology skills, and professional learning community. All of these were topics addressed in the institute. In the post-survey, participants were again asked to rank these same four items in accordance with this question: "What was the most valuable knowledge or skills that you gained from this institute (please rank from most valuable to least)?" Paired samples t-tests were then run on each item with the expectation that a change in average ranking of an item would reflect a difference between participants' initial expectations of the institute and actual takeaways.

RQ4: Time teaching.
H0: Time teaching has no effect on expectation, knowledge, or evaluation metrics.
H1: Time teaching has an effect on expectation, knowledge, or evaluation metrics.
In the pre-survey, participants were asked "How long have you been teaching?" and were provided with the following three options: "1 year or less," "2-5 years," or "more than 5 years." A one-way ANOVA with Bonferroni post hoc test was then run with time teaching as the factor and each expectation, knowledge, and evaluation item from the pre-survey and post-survey as a dependent variable. It was expected that this test would reveal any cases where time teaching had an effect on survey outcomes.
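The ANOVA-plus-post-hoc procedure can be sketched as follows. The group labels are taken from the survey options above; the manual Bonferroni correction is an assumption about how the post hoc test was configured, not a reproduction of the study's software:

```python
from itertools import combinations
from scipy import stats

def anova_with_bonferroni(groups):
    """One-way ANOVA across teaching-experience groups, then pairwise
    t-tests with a Bonferroni correction when the omnibus test is
    significant. `groups` maps a label to a list of item scores.
    """
    labels = list(groups)
    f_stat, p_value = stats.f_oneway(*(groups[label] for label in labels))
    pairwise = {}
    if p_value < 0.05:
        pairs = list(combinations(labels, 2))
        for a, b in pairs:
            t, p = stats.ttest_ind(groups[a], groups[b])
            # Bonferroni: scale each p value by the number of comparisons.
            pairwise[(a, b)] = min(p * len(pairs), 1.0)
    return f_stat, p_value, pairwise
```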

RQ5: Influences on overall evaluation.
H0: There is no linear correlation between participants' overall evaluations and specific evaluation items.
H1: There is a linear correlation between participants' overall evaluations and specific evaluation items.
In the post-survey, participants were asked "How would you rate this institute?" and were then expected to evaluate the institute overall and on ten specific evaluation items. A stepwise linear regression was then run with the overall rating as the dependent variable and all ten specific evaluation items as the independent variables to determine whether linear correlations existed between specific evaluation items and the overall score, thereby revealing which specific evaluation items informed the overall rating.

Findings
Descriptive statistics revealed that participants believed their institutes to be highly valuable and effective. The average overall rating for the institute was 4.86 on a 5-point Likert scale; 44% of participants believed their institute was the best professional development experience they had ever had, and another 44% believed that it was much better than most other professional development experiences they had had in the past. In their evaluations, participants rated all aspects of the institute highly, and participants strongly agreed that the institutes were a good use of their time, that they were of practical value to their classroom practice, and that the institutes encouraged them to think critically about technology integration. One item was formulated on a 7-point Likert scale (M = 6.29, SD = .8), but results were converted to a 5-point scale to allow for uniformity in reporting.
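The paper does not state the formula used to convert the 7-point item to a 5-point scale; a simple linear rescaling is one plausible approach and can be sketched as:

```python
def rescale_likert(x, src=(1, 7), dst=(1, 5)):
    """Linearly map a Likert score from one scale range to another.

    For example, mapping [1, 7] onto [1, 5] sends 1 -> 1, 4 -> 3,
    and 7 -> 5. This is an assumed conversion, not necessarily the
    one used in the study.
    """
    s_lo, s_hi = src
    d_lo, d_hi = dst
    return d_lo + (x - s_lo) * (d_hi - d_lo) / (s_hi - s_lo)
```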

RQ1: False Confidence and Misconceptions
The comparison of pre-survey prior knowledge with post-survey prior knowledge yielded a number of significant differences between how participants initially evaluated their knowledge on topics related to open education and how they later came to assess their prior knowledge. In the cases of open education, copyright, fair use, and public domain, participants' self-assessments went down in the post-survey, so we must reject the null hypothesis and conclude that self-assessments differed significantly before and after the institute for these cases (cf. Table 2). This finding suggests that initial participant self-assessments might have been based on false confidence or misconceptions about what the terms meant, but that as participants became more familiar with terms through the institutes, they came to recognize how little they actually knew before entering the institute. Differences on Common Core and copyleft were not significant, suggesting that the institute did not change participant understanding of what these terms meant (as is likely the case with Common Core) or that participants had no prior knowledge of the term (as is likely the case with copyleft).

RQ2: Knowledge Growth
The comparison of pre-survey prior knowledge with post-survey final knowledge and also the comparison of post-survey prior knowledge with post-survey final knowledge yielded significance in every case (cf. Table 3 and Table 4). Thus, we must reject the null hypothesis and conclude that participants reported knowledge growth as a result of the institute in every domain.

RQ3: Expectations and Takeaways
The comparison of pre-survey expectations with post-survey outcomes yielded significant results in every case (cf. Table 5). Thus, we must reject the null hypothesis and conclude that valued takeaways did not match initial participant expectations.

Comparison of Pre-Survey Expectations and Post-Survey Outcomes
To clarify this finding further, if we were to list expectations and outcomes in accordance with their rankings, we would see that the largest changes occurred in the cases of technology integration, wherein participants expected to learn about technology integration but did not count it as a valuable outcome, and PLCs, wherein participants did not expect their PLCs to be valuable but then evaluated them highly as an outcome (cf. Table 6).

Table 6
Expectations and Outcomes in Ranked Order

Expectations from pre-survey          Outcomes from post-survey
1. Technology integration             1. Open education
2. Open education                     2. Professional learning community
3. Technology skills                  3. Technology integration
4. Professional learning community    4. Technology skills

RQ4: Time Teaching
ANOVA tests on knowledge items generally did not reveal differences between participants when grouped according to time teaching or teacher veterancy. The only significant main effects between groups were found on the Common Core and fair use items in the pre-survey and on the Common Core item in the post-survey (cf. Table 7).
Bonferroni post hoc tests revealed that this difference can be attributed to the least experienced teaching group, which self-assessed lower than more experienced groups in all three metrics, with an average difference ranging between .71 and 1.14 points on the 5-point scale.

RQ5: Influences on Overall Evaluation
Participants rated sessions highly across all ten specific evaluation items, but the stepwise linear regression revealed that three specific evaluation items (activities, instructor, and website) significantly predicted overall ratings (cf. Table 9). The regression model for all three of these predictors also explained a significant proportion of variance in overall ratings, R² = .649, F(3, 68) = 41.94, p < .001. Of these factors, activities and instructor had a positive linear correlation with overall ratings, while website had a negative linear correlation. All other factors were excluded from the regression model due to lack of significance.

Discussion
This finding was corroborated in the knowledge growth analysis, which found that participants' self-evaluations on specific knowledge items grew significantly in every domain. This corroborates our anecdotal findings that teachers tend to believe that they understand what these concepts mean and what they entail, but that upon examination and the completion of focused learning activities, participants come to recognize that they did not understand the concepts very well to begin with. This is problematic for open education, because it is difficult to appeal to a need when teachers do not recognize that a need exists. If teachers already believe that they understand copyright and fair use, for instance, then they have no impetus to learn about these concepts (Gur & Wiley, 2007).
Fourth, though there is no theoretical basis for assuming that innovation adoption is correlated with age factors (cf. Rogers, 2003), it has been our experience that many advocates for innovation and technology integration resort to a narrative of innovation which considers younger teachers to be more willing to innovate than their more experienced peers. Our findings, however, reveal that time teaching had no impact on participants' expectations of the institutes or their evaluations of the experience, which means that veteran teachers responded just as positively to the learning activities as did their less experienced counterparts. The only significant differences we found related to two knowledge items: Common Core (pre-survey and post-survey) and fair use (pre-survey only). In the case of Common Core, it makes sense that more veteran teachers would self-assess higher than less experienced teachers, because they have had more experience teaching and adapting to new standards or ways of teaching and also work in districts that have devoted a sizable amount of training to Common Core, while the less experienced teachers would have just recently completed their teacher education programs and likely would not have completed many district or school level trainings.
The difference with fair use, on the other hand, reveals that veteran teachers entered the institute with greater perceived knowledge of fair use than did their novice counterparts but that this difference disappeared by the end of the institute. This means that either veteran teachers truly began the institute with a greater knowledge of fair use than their novice counterparts or they had more false confidence in this regard. Given that training on issues of copyright and fair use is uncommon for teachers, we believe that the latter interpretation is likely more accurate and that as teachers spend time in the classroom and use copyrighted works, they develop a false sense of confidence related to fair use. This interpretation is corroborated by the fact that when novice teachers and veteran teachers self-assessed their prior knowledge on the post-survey, differences between groups disappeared, meaning that after participants had focused training related to fair use, they rated their initial knowledge equally low. This is problematic, because it suggests that as teachers gain experience in the classroom, they also develop a false sense of confidence related to fair use and therefore likely begin utilizing copyrighted materials in ways that may not be permissible. This also means that although the development of open education literacies is essential for ongoing diffusion (Tonks, Weston, Wiley, & Barbour, 2013), teachers may not recognize the need to learn more about open education, because they assume that they already sufficiently understand these topics.

Interestingly, though participants provided anecdotal feedback that the website and online resources were valuable, their ratings in this regard were negatively correlated with overall satisfaction with the institutes.
The reason for this is unknown, but it may be that those teachers who valued the ability to peruse resources on their own and to learn at their own pace via provided online resources found the face-to-face institute to be less valuable, whereas those who found the online resources to be less useful needed to rely more heavily on the institute and valued the experience more as a result. This may mean that some educators might be more effectively introduced to open education via online, asynchronous learning experiences, while others may be more effectively reached through face-to-face, synchronous experiences.