A wide range of projects and organizations is currently making digital learning resources (learning objects) available to instructors, students, and designers via systematic, standards-based infrastructures. One standard that is central to many of these efforts and infrastructures is known as Learning Object Metadata (IEEE 1484.12.1-2002, or LOM). This report builds on Report #11 in this series, and discusses the findings of the author's recent study of ways in which the LOM standard is being used internationally.
“Metadata” refers to systematically created and formatted descriptions of resources, intended for learning, informational, or other purposes. The LOM standard has become the most widely used solution for classifying and describing digital resources intended specifically for learning and education. It is only one way of describing digital and online resources, however. Other metadata standards and methods have been developed for the same purpose, including Dublin Core and the Rich Site Summary (RSS): see Report #11 in this series. A common feature of these standards and methods is that each defines the function and structure of a set of data elements, such as the title, author, and location of the resource. RSS, for example, focuses on just three such elements (title, link, and description), while Dublin Core specifies only 15 metadata elements. The LOM standard, on the other hand, includes 76 data elements, covering a wide range of characteristics attributable to learning objects, including their size, level and type of interactivity, and the educational context to which they are best suited.
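To make the contrast concrete, the sketch below describes the same hypothetical resource under each of the three approaches. It is illustrative only: the element names are simplified labels and the values are invented, not excerpts from any standard's official XML binding.

```python
# Illustrative only: one hypothetical resource sketched under three metadata schemes.
# Element names are simplified labels and values are invented; these are not the
# standards' official bindings.

resource_rss = {            # RSS item: three core elements
    "title": "Introduction to Photosynthesis",
    "link": "http://example.org/objects/photosynthesis",
    "description": "An interactive tutorial on photosynthesis for secondary-school biology.",
}

resource_dublin_core = {    # Dublin Core: a flat set of 15 simple elements (subset shown)
    "title": "Introduction to Photosynthesis",
    "creator": "Jane Doe",
    "subject": "Biology; Photosynthesis",
    "format": "text/html",
    "identifier": "http://example.org/objects/photosynthesis",
}

resource_lom = {            # LOM: nested categories, each with many sub-elements (fragment only)
    "general": {"title": "Introduction to Photosynthesis", "language": ["en"]},
    "technical": {"format": ["text/html"], "size": "524288"},
    "educational": {"interactivityType": "active", "context": ["school"]},
}
```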
The LOM defines its data elements within interrelationships that are both hierarchical and iterative. At the top of the hierarchy are nine broad category elements: General, Lifecycle, Meta-metadata, Technical, Educational, Rights, Relation, Annotation, and Classification. Each category element contains sub-elements, which in turn often contain further sub-elements. Many of the category elements, sub-elements, and subordinate elements can be repeated. The result is a complex hierarchical and iterative structure that allows for a total of over 16,000 possible, concatenated element repetitions. Some of the sub-elements in the LOM (e.g., the title element) can be assigned an alphanumeric value. Others are associated with a limited set of pre-defined values (e.g., for describing educational contexts such as school, higher education, or training); such a set of values is often referred to as a “vocabulary” or “controlled vocabulary.” Still other elements contain descriptions of persons (authors, editors, etc.) that are specially formulated and formatted using a specification known as vCard.
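A minimal sketch of how these structural features combine in a single record may help. The nesting, repetition, vocabulary values, and vCard string below loosely follow the LOM data model, but the record is simplified and invented rather than a conformant binding.

```python
# A simplified, invented LOM-style record as nested Python data (not a conformant binding).
lom_record = {
    "general": {
        # Free-text elements such as title take language-tagged string values.
        "title": {"en": "Introduction to Photosynthesis"},
        # Many elements are repeatable (iterative), hence the list.
        "keyword": [{"en": "photosynthesis"}, {"en": "biology"}],
    },
    "lifecycle": {
        # 'contribute' can be repeated once per contributor.
        "contribute": [
            {
                "role": "author",  # value drawn from a controlled vocabulary
                # Persons are described using vCard-formatted strings.
                "entity": "BEGIN:VCARD\nVERSION:2.1\nFN:Jane Doe\nEND:VCARD",
                "date": "2004-03-15",
            }
        ],
    },
    "educational": {
        # 'context' also takes values from a pre-defined vocabulary,
        # e.g., "school", "higher education", or "training".
        "context": ["school"],
        "interactivityType": "active",
    },
}
```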
Given the LOM's relative size and complexity, as well as the fact that it is the first technical e-learning standard to be widely adopted, its implementation presents an excellent opportunity for study and research. By looking at how the standard has been implemented in projects and in specific metadata records, it is possible to learn valuable lessons about e-learning standards implementation, and about how to develop and refine further standards to meet implementers' and educators' needs.
The current report presents the basic findings of an international survey of the implementation of the LOM standard. This survey was undertaken as a part of ongoing Canadian work in an international e-learning standardization forum: the ISO/IEC (International Organization for Standardization/International Electrotechnical Commission) subcommittee on Information Technology for Learning, Education and Training. The survey was conducted in two phases. The first involved the manual analysis of very small sets of randomly selected metadata records from a variety of collections and projects. The second involved the statistical, aggregate analysis of much larger sets of sample records, taken from five large collections in widely varying regions, including the European Union, Canada, and China. The findings of both phases were consistent and mutually reinforcing (see below). Only general findings and conclusions are reported in this paper. More detailed survey data and analysis are available in the original survey reports submitted to the ISO/IEC committee (Friesen, Nirhamo, and Knoppers, 2003; Friesen, 2004).
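The statistical methods themselves are documented in the original reports. Purely as a sketch of the kind of aggregate element-usage tally such an analysis involves, and not the survey's actual procedure or code, one might count how many records in a sample populate each element path:

```python
from collections import Counter

def used_element_paths(record, prefix=""):
    """Yield a dotted path for every element in a nested record that carries a value."""
    for name, value in record.items():
        path = prefix + name
        if isinstance(value, dict):
            yield from used_element_paths(value, path + ".")
        elif isinstance(value, list) and value and isinstance(value[0], dict):
            # Repeated (iterative) elements: recurse into each instance.
            for instance in value:
                yield from used_element_paths(instance, path + ".")
        elif value not in (None, "", [], {}):
            yield path

def element_usage(records):
    """Tally how many records in a sample populate each element path."""
    counts = Counter()
    for record in records:
        counts.update(set(used_element_paths(record)))  # count each path once per record
    return counts

# Example, using the invented record sketched above:
# element_usage([lom_record]).most_common()
# -> a list of (element_path, record_count) pairs, most frequently used first
```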
The survey of LOM implementation was guided by three specific questions. Each question relates to the data elements of the LOM, and to the way in which each element is understood and used (or alternatively, not used). These questions, and their contextualizing explanations, are provided here:
The findings of the current survey are presented as responses to each of the three questions raised above.
A number of other findings pointed to issues additional to those raised in the questions above.
What do these findings mean for learning object implementation, and for the many projects and initiatives where learning object metadata are being used? On a positive note, the survey has revealed considerable convergence among implementations in element choice and utilization. Implementers have consistently opted to use roughly the same subset of elements, focusing on the description of the intellectual content of the resource. The fact that these same elements are also included in other, simpler metadata solutions, however, raises an important question: “What is the value added by the multiplicity and complexity of elements and element structures in the LOM?” The fact that a range of elements, and many of the possible element iterations in the LOM, remain unused means that their value is not being realized. At the same time, the price paid for this complexity and multiplicity, in terms of implementation work and data portability issues, is appreciable. These conclusions suggest that a very considerable return on learning object investment will be required for profit ultimately to accrue to learners and end-users.
———————————————————————
The next report in the series examines recent developments in the WebCT course management system.
N.B. Owing to the speed with which Web addresses are changed, the online references cited in this report may be outdated. They can be checked at the Athabasca University software evaluation website: http://cde.athabascau.ca/softeval/. Italicised product names in this report can be assumed to be registered trademarks.
JPB, Series Editor, Technical Evaluation Reports
Friesen, N. (2004). Final Report on the International LOM Survey. Retrieved 20 October, 2004, from: http://dlist.sir.arizona.edu/archive/00000403/
Friesen, N., Nirhamo, L., and Knoppers, J. (2003). International Survey of Learning Object Metadata Implementation for ISO/IEC. Retrieved 20 October, 2004, from: http://mdlet.jtc1sc36.org/doc/SC36_WG4_N0029.pdf
Najjar, J., Ternier, S., and Duval, E. (2003). The Actual Use of Metadata in ARIADNE: an empirical analysis. Proceedings of the 3rd Annual Ariadne Conference. Retrieved 20 October, 2004, from: http://www.cs.kuleuven.ac.be/~najjar/papers/EmpiricalAnalysis_ARIADNE2003.pdf