Searching for and Positioning of Contextualized Learning Objects

Learning object economies are marketplaces for the sharing and reuse of learning objects (LO). There are many motivations for stimulating the development of the LO economy. The main reason is the possibility of providing the right content, at the right time, to the right learner according to adequate quality standards in the context of a lifelong learning process; this is, in fact, also the main objective of education. However, some barriers to the development of a LO economy, such as the granularity and editability of LO, must be overcome. Furthermore, some enablers, such as learning design generation and standards usage, must be promoted in order to enhance the LO economy. In this article, we introduce the integration of distributed learning object repositories (DLOR) as sources of LO that can be placed in adaptive learning designs to assist teachers' design work. Two main issues arise: how to access distributed LO, and where to place each LO in the learning design. To address these issues, we introduce two processes: LORSE, a distributed LO searching process, and LOOK, a micro-context-based positioning process. Using these processes, teachers were able to reuse LO from different sources to semi-automatically generate an adaptive learning design without leaving their virtual environment. A layered evaluation yielded good results for the process of placing learning objects from controlled learning object repositories into a learning design, and allowed educators to identify open issues that must be addressed when uncontrolled learning object repositories are used for this purpose. We also verified users' satisfaction with our solution.


Introduction

Basic Concepts of the Learning Object Economy
Over the years, the concept of the learning object (LO) has been considered by many diverse and qualified groups. The IEEE Learning Technology Standards Committee (LTSC, 2009), in its work on the Learning Object Metadata standard (2002), defined a learning object as any element, digital or non-digital, that may be used for learning, education, or training. Such a definition categorizes almost everything as a learning object, yet not just anything qualifies as one. According to Polsani (2005), a LO needs to be accessible, reusable, and interoperable while also being intended for a learning process. Wiley (2000) reinforced the concept of reuse by borrowing the definition of "object" from the object-oriented programming paradigm of computer science, where it is understood as a component that can be reused in multiple contexts. In this manner, a learning object is presented as a small instructional component that can be reused in different learning contexts when required. This definition is important to us because our study is based on the learning object economy (Duncan, 2004), where reuse is a key aspect.
Learning object economies are marketplaces for the sharing and reuse of LO. As in any economy, different actors play different roles. Ochoa (2008) identifies eight actors: market-makers, authors, resellers, publishers, teachers, end-users, assemblers, and regulators.
Market-makers are researchers and trainers who provide support for LO interchange through learning object repositories (LOR), open courseware sites, and learning object technologies. Authors, such as teachers or learning designers, are LO creators. Resellers are those who have acquired the rights to exploit LO, for example, universities or private companies. Publishers put together and publish LO. Teachers use the LO for instructional purposes. End-users use LO for learning. Assemblers reuse small LO to construct more complex LO. Finally, regulators set the rules by which the sharing takes place.

Barriers to Assembling a Learning Object Economy
Offering a learning process that is available to all is a motivation for stimulating the development of the learning object economy. However, to make this a reality, some barriers in the learning object economy must be overcome, as shown by Duncan (2004).
There are two main technical barriers to reusing LO: granularity and editability. Granularity refers to how complex a learning object should be. Wiley (2000) introduced two different viewpoints for deciding this: an efficiency and an instructional point of view. From the efficiency point of view, Wiley indicates that the decision regarding learning object granularity can be viewed as a trade-off: The possible benefits of reuse come at the expense of cataloguing. Conversely, from the instructional point of view, the major issues are the scope and sequence of the learning design.
Editability is important because any aspect of a learning object can be changed if it is available in a suitable form. If a LO is editable, its granularity can be modified. Many distributed LO are not editable; in fact, this is one of the most common excuses teachers give for not reusing LO.
Counting on editable and open LO requires agreements among the LO economy actors. In particular, adequate author rights management would increase their confidence in distributing editable and open content. Implementing authoring tools to support LO editability, which would address the accessibility issues in the content, is one of the most important requirements for the successful establishment of this economy.
Barriers from the pedagogical view are basically related to the LO context. According to Dey and Abowd (2000), context is any information that can be used to characterize the situation of an entity, in this case, the LO. Context in education is essential, but in practice, incorporating context into LO inhibits reuse. Addressing the context issues would allow instructors to use LO in different scenarios. Small granularity reduces the context issues, and LO editability allows teachers to contextualize the LO according to the learners' needs.

Enablers of the Learning Object Economy
Along with the barriers, some enablers must be promoted in order to develop the learning object economy: learning design generation and standards promotion.

Learning design generation.
Learning design is a term coined by a pedagogical movement asking for more consistent approaches to describing and documenting teaching practices in order to facilitate communication and sharing, while also improving teaching practice. However, there is currently no standard definition for learning design (Koper & Yongwu, 2009). A well-accepted definition for the instructional design process is simple: the process that should be followed by teachers in order to plan and prepare instruction (Reigeluth, 1999). This process should address people's cognitive, emotional, social, and physical needs in an integral way. Given that LO are only content, to foster real learning experiences they need to be administered properly.
Adequate pedagogical theories and techniques need to be in place in order to ensure that the LO have real impact (Koper & Yongwu, 2009).
Automatic learning design generation is an important topic in the research areas of adaptive learning systems and technology-enhanced learning. Some researchers (Duque, Méndez, Ovalle Carranza, & Jiménez Builes, 2002; Morales, Castillo, & Fernández-Olivares, 2009; Ulrich & Melis, 2009; Karampiperis & Sampson, 2006; Hernández et al., 2009; Baldiris, Graf, & Fabregat, 2011) have proposed approaches to help teachers generate learning designs adjusted to user characteristics such as learning styles and competences, which is not an easy task, particularly for teachers. In practice, this problem implies that teachers need to know the different instructional theories; they must also be able to control the different user variables in the learning design construction, such as learning styles and competences, among others. Furthermore, teachers need to know how to develop standardized learning designs for the specific learning platform they use. Besides the personalization problem, another important issue for learning design generation is how to place learning objects from different learning object repositories into the generated designs.

Standards promotion.
If a global learning object economy is the goal, there must be common standards that every party agrees on to enable LO sharing among heterogeneous systems (Ochoa, 2008).

Contributions and Outline of the Paper
In this paper, we aim to stimulate the enablers of the learning object economy to support the generation of standardized and adapted learning designs. Our investigation promotes LO reuse by encouraging instructors to access distributed learning object repositories (DLOR) as sources of LO with diverse granularity that could be elements in a generated learning design. Our proposal consists of two different parts: the distributed learning object metadata searching process (LORSE) and the micro-context-based positioning process (LOOK).
The distributed learning object metadata searching process is a mechanism to promote reuse. It is supported by agent technologies, and its main purpose is to look for external LO, not developed by the teachers themselves, that could be used as inputs to a learning design generation process. The micro-context-based positioning process analyzes a learning object's current micro-context (in the LOR) and future micro-contexts (in the learning design), using disambiguation techniques to establish the most promising micro-context for the LO in a learning design, and supports the placement of the object in its correct context.
The rest of this article is structured as follows. In Section 2, we introduce the distributed learning object metadata searching process. The third section describes the micro-context-based positioning process. In the fourth section, we present the results of a layered evaluation. Finally, in the fifth section, we draw conclusions and comment on future research.

Section 2 LORSE: A Metadata Searcher of Open Learning Objects in Distributed Learning Repositories Based on Intelligent Agents
In order to facilitate the distributed learning object metadata search process, we developed LORSE, a distributed learning object metadata searcher, to promote reuse in the learning object economy. With LORSE, teachers, students, and external institutions can search different learning object repositories through a unified interface. At the implementation level, LORSE (Baldiris, Bacca, Noguera Rojas, Guevara, & Fabregat, 2011) has been modelled as an independent set of JADE intelligent agents that collaborate to support users in the LO search process. When the Merlot agent (the specific search agent in charge of integration with the Merlot repository) is created, the Merlot search service is registered with the directory facilitator agent so that other agents or processes can locate it and send requests to it. The Merlot agent is activated when a search request is received. It implements a particular behavior: a client for the RESTful web service offered by the Merlot repository. When a request is sent to the agent, according to the terms and conditions of the query, the agent connects to the service, sends the corresponding parameters, and obtains a response as an XML document (metadata). The implementation of both the Connexions and UDG agents is similar to that of the Merlot agent; they have behaviors designed to interact with the RESTful web services offered by these applications.
To integrate the DalSpace digital repository, the Deep Blue Repository from the University of Michigan, the DLESE Repository, ARIADNE, SMETE, and GATEWAY into the multiagent platform, we created an intelligent agent for each. This agent presents indexer behavior, using the OAI-PMH harvester protocol to index the categories (catalogues) and records in the categories (resources) of each particular repository. Each metadata resource is stored in a database as a tree. In this manner, the information is available for a search process.
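As a rough illustration of this indexer behavior, the following Python sketch parses an OAI-PMH ListRecords response and files each record under its setSpec, which we treat here as the catalogue category. The sample response and the in-memory dictionary are illustrative assumptions; the actual agents are JADE behaviors backed by a database tree.

```python
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest(xml_text):
    """Index the records of an OAI-PMH ListRecords response by setSpec
    (used here as the catalogue category of the repository)."""
    catalogue = {}
    for rec in ET.fromstring(xml_text).iter(OAI + "record"):
        category = rec.find(OAI + "header").findtext(OAI + "setSpec", "uncatalogued")
        meta = rec.find(OAI + "metadata")
        catalogue.setdefault(category, []).append({
            "title": [t.text for t in meta.iter(DC + "title")],
            "subject": [s.text for s in meta.iter(DC + "subject")],
        })
    return catalogue

# A minimal, hypothetical ListRecords response for illustration.
SAMPLE = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header>
        <identifier>oai:example.org:1</identifier>
        <setSpec>cs:uml</setSpec>
      </header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Introduction to OMG's Unified Modelling Language</dc:title>
          <dc:subject>Science and Technology</dc:subject>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""
```

Calling harvest(SAMPLE) yields a one-entry catalogue under the category cs:uml, with the record's Dublin Core title and subject available for the later search process.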
In order to test the extended version of LORSE independently, we integrated our development into an OpenACS/dotLRN learning environment. For the integration process, it was necessary to install the LORSE client package on this platform, which implements a web service client upon .LRN in order to send requests to the LORSE multiagent platform and process its responses. This package offers a user interface that allows users to search several repositories in a transparent way. Therefore, when teachers use the learning environment, they are able to search for LO in those repositories to enhance the activities designed in the platform without leaving the learning environment.

Section 3 LOOK: A Micro-Context-Based Positioning Process

The main purpose of this section is to introduce the micro-context-based positioning process LOOK, which aims to place learning objects previously found by LORSE in learning designs.
To achieve this objective, two different sources of information are available: (1) the information from the LOR, particularly the catalogue or indexing mechanism of the LO, and the LO metadata; and (2) the information provided by the teacher in the competence definition, which defines the appropriate knowledge that a person should possess and show in a specific context. The competence definition consists of four categories of information: competence general information, which provides general data about the competence; competence elements, which are smaller learning purposes that provide more specific and concrete learning process outcomes; didactical guidelines; and the competence context of application.
Competence elements describe the essential knowledge that students should use in a specific context to demonstrate that they have acquired new information, and competence evidence is a mechanism that measures students' levels of achievement in each particular competence element. Schum (1994) explained how the evidence coming from different sources can be evaluated. In our case, analysis of the evidence is related to the relevance of the learning object that will address what the teacher is looking for, which he or she has defined in the competence definition of the course. In the following section, we introduce the main topics of relevance.
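The four-part competence definition described above can be pictured as a simple data structure. The field names below are our own illustrative choices, not a schema taken from the article:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CompetenceElement:
    purpose: str                # a smaller, concrete learning outcome
    knowledge: List[str]        # essential knowledge the student must apply
    evidence: List[str] = field(default_factory=list)  # achievement measures

@dataclass
class CompetenceDefinition:
    general_information: str            # general data about the competence
    elements: List[CompetenceElement]   # the competence elements
    didactical_guidelines: str
    context_of_application: str

# Example: a fragment of the UML course competence used later in the article.
uml = CompetenceDefinition(
    general_information="Design object-oriented software using UML",
    elements=[CompetenceElement(
        purpose="Define UML and identify its main diagrams",
        knowledge=["Unified Modelling Language", "UML diagrams"])],
    didactical_guidelines="",
    context_of_application="Formal university course",
)
```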
Learning Object Relevance

Borlund (2003) mentioned three central conclusions about the nature of relevance and its role in information behavior:
• Relevance is a multidimensional cognitive concept whose meaning is largely dependent on users' perceptions of information and their own information need situations.
• Relevance is a dynamic concept that depends on users' judgments of the quality of the relationship between information and information need at a certain point in time.
• Relevance is a complex but systematic and measurable concept if approached conceptually and operationally from the user's perspective.

Saracevic (1996) distinguished between five basic types of relevance: (1) system or algorithmic relevance, which describes the relation between the query (terms) and the collection of information expressed by the information object(s); (2) a topical-like type, associated with aboutness; (3) pertinence or cognitive relevance, related to the information need as perceived by the user; (4) situational relevance, depending on the task interpretation; and (5) motivational and affective relevance, which is goal-oriented.
Ochoa (2008) used a modified version of Saracevic's categories (eliminating the motivational and affective dimension) as the basis to define a set of complete metrics for LO relevance identification. These metrics are shown in Table 1.

Learning Object Relevance in the Micro-Context

Automatic word sense disambiguation (WSD) has been an interest and concern since the earliest days of computational language processing in the 1950s. It is defined as the association of a given word in a text or discourse with a definition or meaning distinguishable from other meanings potentially attributable to that word (Ide, 1997). All disambiguation work involves matching the context of the instance of the word to be disambiguated with either information from an external knowledge source (knowledge-driven WSD) or information about the contexts of previously disambiguated instances of the word derived from corpora (data-driven or corpus-based WSD).
The assignment of senses to words is accomplished by relying on two major sources of information:
• the context of the word to be disambiguated in the broad sense, including information in the text or discourse in which the word appears, together with extra-linguistic information about the text;
• external knowledge sources, including lexical and encyclopedic resources, among others, and hand-devised knowledge sources, which provide data useful for associating words with meanings.
Most disambiguation work uses the local context of a word occurrence as the primary information source for WSD. The local or "micro" context is generally considered to be some small window of words surrounding a word occurrence in a text or discourse, from a few words of context to the entire sentence in which the target word appears.
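As a minimal sketch of this notion of local context, assuming whitespace-tokenized text and a symmetric window (both simplifications):

```python
def micro_context(tokens, target, window=3):
    """Collect the words within `window` positions of every occurrence
    of `target` -- the local ('micro') context used in WSD."""
    context = []
    for i, token in enumerate(tokens):
        if token == target:
            context.extend(tokens[max(0, i - window):i])   # words before
            context.extend(tokens[i + 1:i + 1 + window])   # words after
    return context

words = "a class diagram shows the static structure of a system".split()
print(micro_context(words, "diagram", window=2))  # ['a', 'class', 'shows', 'the']
```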
We consider the micro-context of a learning object to be a part of the curricular structure where the learning object should be placed (the learning design to be generated).
Consider the curriculum structure in Table 2, which belongs to a course teaching the Unified Modelling Language (UML) and was generated from the competence definition provided by a teacher. We analyzed two possible micro-contexts: the micro-context of the LO in the repository structure (catalogue), where the LO is currently placed, and the micro-context of the LO in the curricular structure, where the LO will be placed. By comparing these possible micro-contexts, a user can decide the best location for the learning object in the learning design.
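This comparison can be sketched end to end in Python: represent each micro-context as a TF-IDF vector and score candidate locations with the Dice coefficient and cosine similarity, the measures the process adopts in its second step. The tokenization and weighting details below are our simplifications, not the LOOK implementation:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of token lists -> list of {term: tf-idf weight} vectors."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))  # document frequency
    vectors = []
    for d in docs:
        tf = Counter(d)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def dot(u, v):
    return sum(u[t] * v[t] for t in u if t in v)

def cosine(u, v):
    nu, nv = math.sqrt(dot(u, u)), math.sqrt(dot(v, v))
    return dot(u, v) / (nu * nv) if nu and nv else 0.0

def dice(u, v):
    denom = dot(u, u) + dot(v, v)
    return 2 * dot(u, v) / denom if denom else 0.0

# Toy micro-contexts: one LO and two candidate curricular-structure slots.
lo = "uml class diagram static structure".split()
cs_class = "class diagram design requirements".split()
cs_activity = "activity diagram flow".split()
v_lo, v_class, v_act = tfidf_vectors([lo, cs_class, cs_activity])
# The class-diagram slot should score higher than the activity-diagram slot.
```

With these toy micro-contexts, both measures rank the class-diagram slot above the activity-diagram slot, which is the decision the positioning process needs.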
Then, the first step is to define the micro-context of each learning object (LO) to be placed and also the possible micro-contexts in the curriculum structure.
The micro-context where a LO is placed in a LOR catalogue is provided by equation 1.
In equation 1, LO is the learning object and C is the catalogue in the LOR; loMicroContext defines the LO micro-context in a particular LOR catalogue. Table 3 shows the loMicroContext of one LO, Introduction to OMG's Unified Modelling Language. The possible micro-contexts of the curricular structure (CS) in Table 2 are shown in Table 4.
Now, the second step is to calculate the similarity between the different CS micro-contexts and the LO micro-context in order to place the LO in the structure. For this step, we proposed the use of different metrics to calculate the similarity between the TF-IDF (term frequency-inverse document frequency) vectors inferred from the analyzed micro-contexts (CS and LO). We used similarity measures that have been extensively validated in information retrieval: the Dice coefficient and cosine distance (Dice, 1945).

Section 4: Evaluation

Description of the Proposed Evaluation Process
After implementing our solutions for searching and locating LO, we conducted an evaluation of our developments. As we mentioned in the introduction, this article introduces our solution for looking up learning objects in distributed learning object repositories and positioning them in the most promising micro-contexts of learning designs that will be generated in the future.
Brusilovsky, Karagiannidis, and Sampson (2001) reported that layered evaluation is a good approach for completely validating the elements of adaptive hypermedia systems. We used a layered evaluation process to measure the results of our research because the most important associated decision process (placing a learning object in a learning design structure) supports an adaptive mechanism (an adaptive learning design generation process based on students' and teachers' preferences). According to adaptive system evaluation theory, different layers should be considered in order to test all the elements of the adaptive system (Brusilovsky et al., 2001; Karagiannidis & Sampson, 2000; Brusilovsky & Sampson, 2004). We defined the following set of evaluation layers for our study:
1) The decision-making evaluation layer, where the question is: Are the decisions about where the learning objects should be placed valid and meaningful for teachers?
2) The user satisfaction evaluation layer, where the question is: Does the proposed solution match the teachers' expectations?
Test Course: Object-Oriented Design with UML

Object-Oriented Design with UML is a course offered by the University of Girona in the formal education system. The course is intended to establish student competence in UML: "The student will be able to design object oriented software using the unified modelling language (UML). The student will identify the most adequate diagrams to support the specification of each step in the object oriented development process." To complete this competence, five different competence elements and the associated competence knowledge were defined.

• First competence element: Student defines Unified Modelling Language and identifies its main associated diagrams. Competence knowledge: Unified Modelling Language and its diagrams.
• Second competence element: Student understands the concept of use case diagrams and their associated concepts, such as actors, inclusion, extension, and generalization. Competence knowledge: Use case diagrams.
• Third competence element: Student understands the concept of class diagrams and designs class diagrams considering users' requirements. Competence knowledge: Class diagrams.
• Fourth competence element: Student understands the concept of interaction diagrams, particularly sequence and collaboration diagrams. He or she expresses the dynamic view of the software using these diagrams. Competence knowledge: Interaction diagrams, sequence and collaboration diagrams.
• Fifth competence element: Student understands the concept of activity diagrams and uses them to construct activity flows. Competence knowledge: Activity diagrams.
For this course, 87 open learning objects were constructed. These learning objects were placed in an instance of the Fedora Commons Repository available at University of Girona.
The set of learning objects supporting the learning process included diverse types of atomic resources with specific pedagogical intentions: exercises, simulations, diagrams, figures, graphs, indices, slides, tables, narrative texts, experiments, problem statements, lectures, questionnaires, exams, and self-assessments. Furthermore, each learning object had an associated LOM metadata record in which the most relevant information about the learning object was defined through a labelling process.

The Decision-Making Evaluation Layer
The main purpose of this evaluation layer is to validate our process for placing learning objects from different learning object repositories in the curricular structure of a learning design.
According to the typologies from McGreal (2008)

Method
We looked for the catalogue provided by each defined repository. We performed different kinds of searches in the defined repositories using diverse search criteria. The criteria were defined using the information provided by the metadata in each repository and the searching mechanism provided by each one. We then selected the 10 most relevant LO for our study.
Using the previous information, we constructed the LO micro-context (loMicroContext) in the repository in two different ways. The first was built as described in the LOOK section above. The second also considered the LO metadata as part of the LO micro-context. This was necessary because, in many cases, the LO micro-context based on the LO catalogue alone was not significant for our study; it did not support the proposed similarity analysis.
The next step was building the micro-context in the curricular structure (cuMicroContext).
We defined six micro-contexts: five according to the five competence requirements defined in the course competencies list, and a general course micro-context consisting of the title, description, and all the knowledge associated with the competence requirements.
With all the micro-contexts involved (loMicroContext and cuMicroContext), we proceeded to compare them, calculating the similarity measures among the micro-contexts. We calculated the similarity of each learning object to each curricular structure micro-context. Then, we consolidated an average similarity, grouping the learning objects according to the repository in which they were placed. Table 5 shows the most relevant results of this study. The first column defines the different criteria used for searching the considered learning object repositories; the same criteria were used to define the LO micro-contexts. The remaining columns represent the results of the average similarity consolidation for the general course micro-context.
Let us introduce an example: 0.2368 is the average similarity calculated over the 10 learning objects retrieved from Merlot using the metadata, in this case, abbreviated keywords. For each learning object, the similarity of its micro-context was calculated with respect to the general course micro-context.
We do not show the analysis of the other partial curricular structure micro-contexts based on the competence knowledge because the similarity measures were very small and extremely close together, which did not permit us to determine the most promising micro-context for a learning object.
One of the most important conclusions we drew from this study was that using the definitions from the catalogue provided by uncontrolled repositories to define the learning object micro-context in a new learning design is very difficult. This can be seen in row six of Table 5. The reason is simple: the catalogue definition is too general for the LOOK positioning process to place the learning objects in a micro-context defined by the competence. The micro-context of the catalogue does not match the micro-context extracted from the competence definition.

In order to test our proposal in a controlled environment, we prepared a complete course of Object-Oriented Design with UML. The main objective of this study was to analyze our approach's capacity to adequately place the learning objects into a specific course structure. The starting point was the "correct" classification developed by an expert teacher; that is, a teacher told us how he or she would place the objects into the proposed curricular structure.

Results and Conclusions
Tables 6 and 7 present the LOOK system's precision in placing the LO in the best curricular structure micro-context. The results were obtained by calculating the average similarity for each set of learning objects previously placed by teachers in a particular csMicro-context. The results show a correspondence between the teacher's classification and the LOOK process classification, and indicate that, in general, LOOK places the LO in the best csMicro-context according to the teacher's opinion.
In Tables 6 and 7, the rows show the identified csMicro-contexts (introduction, activity diagram, class diagram, use case diagram, and interaction diagram), and the columns represent the micro-contexts in which the teacher previously classified the sets of learning objects. The values in the tables indicate the average similarity between the micro-context of each set of LO previously classified by the teachers and each csMicro-context.
For example, in the first column, we calculated the average similarity between the set of LO previously classified by a teacher in the introduction micro-context and each csMicro-context. In particular, Table 6 presents the results of applying the Dice similarity measure. The Dice analysis yields a precision of 100%, which means the process located 100% of the set of learning objects in the adequate curricular structure micro-contexts. Likewise, the cosine analysis yields 100% precision with respect to the classification provided by the teacher.
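The reported precision can be reproduced schematically: place each LO in its highest-similarity csMicro-context and count agreement with the teacher's prior classification. The similarity scores below are invented for illustration:

```python
def placement_precision(similarity, teacher_labels, contexts):
    """Fraction of LO whose best-scoring micro-context matches the
    teacher's prior classification."""
    hits = 0
    for lo, expected in teacher_labels.items():
        best = max(contexts, key=lambda c: similarity[(lo, c)])
        hits += (best == expected)
    return hits / len(teacher_labels)

contexts = ["introduction", "class diagram"]
teacher = {"lo1": "introduction", "lo2": "class diagram"}
scores = {("lo1", "introduction"): 0.42, ("lo1", "class diagram"): 0.07,
          ("lo2", "introduction"): 0.11, ("lo2", "class diagram"): 0.35}
print(placement_precision(scores, teacher, contexts))  # 1.0
```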
In general, the results of the study presented in Tables 6 and 7 confirm this correspondence between the teacher's classification and the LOOK placement in a controlled environment.

User Satisfaction Evaluation Layer Description
Our main objective in this evaluation layer was to develop a qualitative study (Hernández Sampieri & Baptista Lucio, 2004) that would permit us to achieve a better understanding of potential opportunities for improving our approach and show us more effective ways to support this task. The strategy we used was to develop case studies, which permitted us to concentrate on a particular situation; in our case, the use of distributed learning objects for creating learning designs.
The analysis was based on interviews with teachers and case studies in which we applied a gap model instrument (Hernández Sampieri & Baptista Lucio, 2004) to evaluate their satisfaction level. The gap model allowed us to capture the difference between the teachers' expectations and the satisfaction they actually obtained from the offered service.
The gap model was applied through a particular instrument (a survey) to measure user satisfaction with four aspects of our proposal:
• satisfaction with the searching process (SEQ1), that is, the possibility of searching different distributed repositories in a unique environment;
• the usability of the tool, developed on the dotLRN platform, that integrates LORSE (SEQ2);
• satisfaction with the results offered by the search process (SEQ3);
• satisfaction with the possible location of LO in a curricular structure available for testing (SEQ4).
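One simple operationalization of the gap model, which we assume here since the article does not give the exact formula, scores each aspect as satisfaction minus expectation, so a negative gap flags an aspect in which the service fell short of the teachers' expectations:

```python
def gap_scores(expectations, satisfactions):
    """Per-aspect gap: satisfaction minus expectation, on the same
    rating scale (e.g., a 1-5 Likert scale)."""
    return {k: satisfactions[k] - expectations[k] for k in expectations}

# Hypothetical ratings from one teacher for the four aspects.
expect = {"SEQ1": 5, "SEQ2": 4, "SEQ3": 5, "SEQ4": 4}
satisf = {"SEQ1": 5, "SEQ2": 4, "SEQ3": 4, "SEQ4": 4}
print(gap_scores(expect, satisf))  # {'SEQ1': 0, 'SEQ2': 0, 'SEQ3': -1, 'SEQ4': 0}
```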
The instrument was applied to 15 teachers (cases) at the University of Girona, Spain, as part of descriptive research in which teachers had the opportunity to test our proposed application. These instructors teach courses in different areas of knowledge at the university: pedagogy, economics, law, psychology, tourism, and administration science.
Some of these courses are already supported by a virtual learning environment (Moodle).

Methodology
We arranged sessions with teachers from the University of Girona. The main researcher introduced the teachers to the learning object repository landscape, showing them some of the most important repositories, and then presented LORSE, its functionality, and its integration into the dotLRN learning management system as a portlet. The teachers had the opportunity to conduct some searches using the system. The LOOK process was then described to the teachers, who observed the candidate learning objects included in the test course. Finally, a discussion and brainstorming session was held with every teacher to gather their opinions about our research; they were very motivated during this session.

Results and Conclusions
The results presented in Figure 2 show a very close relationship between the importance users assigned to the evaluated issues and their satisfaction with the solution. One of the most important outcomes of the descriptive analysis was the set of conclusions and opinions highlighted by the teachers: They all thought that reusing learning objects could facilitate the virtual learning process because the efforts of teachers at different universities might be united. All the teachers emphasized the necessity of guaranteeing the quality of the learning objects selected to support learning design. For them, quality means both that a selected learning object is contextualized to the teachers' and students' needs and that it guarantees the quality of the learning design.
Based on the interviews with each teacher, we concluded that 60% of teachers consider it good practice for universities to include in their strategic plans the creation of spaces to keep teachers up to date on the learning and teaching resources available around the world and in their own institutions. Teachers think that much of the research and knowledge developed by important institutions is not well known in the academic context and, for this reason, may not be widely used by teachers. This is the case for the available open learning object repositories.

Conclusions and Future Work
The main purpose of this article was to introduce our research on searching for learning objects in distributed learning object repositories and on positioning them in the most promising micro-contexts of future learning designs. Our solution includes two processes, both introduced here: the distributed learning object metadata searching process (LORSE) and the micro-context-based positioning process (LOOK).
We presented our results in two evaluation layers: the decision-making layer and the user satisfaction layer. The decision-making layer led us to conclude that, on one hand, searching for LO over controlled LOR to feed learning designs is a promising option: the learning objects selected and placed in the learning design matched the teachers' choices in a previous manual positioning process. This process also demonstrated the importance of the metadata labelling process and the competence definition. On the other hand, deciding whether to include learning objects from uncontrolled learning object repositories in semi-automatically generated learning designs is difficult. In fact, to achieve a viable solution with these repositories, the object metadata needs to be refined; the metadata currently available in the involved repositories contains limited information.
To obtain a closer view of the teachers' satisfaction with our proposal, we used a user satisfaction evaluation layer. The results obtained with teachers from the University of Girona permitted us to define some improvements from a user-centered design perspective. Although the results were promising and we obtained a high level of user satisfaction, we still need to address some important elements.
Some teachers suggested improving the appearance of the learning design player because they believe it could be difficult for students to manage. The teachers also suggested simplifying both the LORSE and LOOK interfaces in order to make the programs easier to use and to improve the usability of our solution. The results obtained in the descriptive analysis stimulated the development of evaluation scenarios in which the main issues were testing the usability and accessibility of the proposed solution.
Currently, our research is focused on some of the issues identified in this work. A good way to improve our solution for uncontrolled learning object repositories could be to characterize the repositories using ontologies. This would optimize the search process and yield more contextualized LO, because the ontologies would add the semantics needed to support the selection of repositories for a specific design process. In particular, as a result of the evaluation, we identified the need for the following knowledge: the character and granularity of the LOR, technical details, and main knowledge areas (e.g., math and languages). Finally, we need to develop a usability and accessibility testing scenario in order to verify in more detail the ability of our solution to meet user needs.
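In a much simpler form than an ontology, the repository characterization envisaged above can be approximated by filtering repositories on the attributes listed in the evaluation. The repository names and attribute values below are hypothetical:

```python
# Hypothetical repository descriptions using the characteristics
# identified in the evaluation: character (controlled or not),
# granularity, and main knowledge areas.
repositories = [
    {"name": "RepoA", "controlled": True, "granularity": "fine",
     "areas": {"math", "computer science"}},
    {"name": "RepoB", "controlled": False, "granularity": "coarse",
     "areas": {"languages", "history"}},
]

def select_repositories(repos, area, controlled_only=True):
    """Pick repositories covering a knowledge area, optionally
    restricted to controlled repositories."""
    return [r["name"] for r in repos
            if area in r["areas"]
            and (r["controlled"] or not controlled_only)]

print(select_repositories(repositories, "math"))  # ['RepoA']
```

An ontology-backed version would replace the flat attribute sets with concept hierarchies, so that, for instance, a query for "algebra" could also match a repository annotated with the broader area "math".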