Learning Technology

publication of

IEEE Computer Society's
Technical Committee on Learning Technology (TCLT)

Volume 5, Issue 1
January 2003
ISSN 1438-0625

From the editor
Guest editorial: Learning objects metadata: implementations and open issues (Kateryna Synytsya)
eduSource: Creating learning object repositories in Canada (Rory McGreal, Griff Richards, Norm Friesen, Gilbert Paquette and Stephen Downes)
The Le@rning Federation Metadata Application Profile (Jon Mason and Nigel Ward)
Implementing metadata collection: a project’s problems and solutions (Ben Ryan and Steve Walmsley)
Learning Object Metadata in Operations Research/Management Science (Leena Suhl and Stephan Kassanke)
Implementing and Extending Learning Object Metadata For Learning-directed Assembly of Computer-based Training (Robert Farrell, Samuel S. Dooley, John C. Thomas, William Rubin and Stephen Levy)
Introduction of the Core Elements Set in Localized LOM Model (Xin Xiang, Zhongnan Shen, Ling Guo and Yuanchun Shi)
Two Scenarios Using Metadata (Javier Sarsa Garrido)
IEEE LOM Standard Not Yet Ready For "Prime Time" (Frank Farance)
Metadata vocabularies for describing learning objects: implementation and exploitation issues (Andrew Brasher and Patrick McAndrew)
On the integration of IEEE-LOM Metadata Instances and Ontologies (Miguel-Ángel Sicilia Urbán and Elena García Barriocanal)
Versioning of Learning Objects (Christopher Brooks, John Cooke and Julita Vassileva)
A Framework for Creation, Integration and Reuse of Learning Objects (Liliana Patricia Santacruz-Valencia, Ignacio Aedo, Peter T. Breuer and Carlos Delgado Kloos)
Using the IEEE LTSC LOM Standard in Instructional Planning (Bruno Q. Pinto, Carlos R. Lopes and Marcia A. Fernandes)
Toward decoupling instructional context, learning objectives, and content in learning object metadata (Tom Murray)
Standardized Content Archive Management – SCAM (Fredrik Paulsson and Ambjörn Naeve)
Online Discussions: The First Premise for Interactive Learning (Muhammad K. Betz)
University Infoline Service – Internet telephony for interactive communication in e-learning (Tatiana Kováciková and Pavol Segec)
Call for papers

From the editor


Welcome to the January 2003 issue of Learning Technology.

This issue contains a special section on "Learning objects metadata: implementations and open issues", guest edited by Dr Kateryna Synytsya from the International Research and Training Center of Information Technology and Systems, Kiev, Ukraine.

The IEEE International Conference on Advanced Learning Technologies (ICALT 2003), to be held in Athens, Greece (July 9-11, 2003), is shaping up to be a very high-quality conference. The call for papers is included in this newsletter below.

You are also welcome to complete the FREE MEMBERSHIP FORM for Learning Technology Task Force. Please complete the form at:

In addition, if you are involved in research and/or implementation of any aspect of advanced learning technologies, I invite you to contribute your work in progress, project reports, case studies, and event announcements to this newsletter. For more details, please refer to the author guidelines at


Learning Technology Newsletter


Back to contents



3rd IEEE International Conference on Advanced Learning Technologies (ICALT 2003)
July 9-11, 2003
Athens, Greece

* Important Dates

February 28, 2003: original paper proposal submission
March 30, 2003: notification of acceptance
April 14, 2003: final camera-ready manuscript
April 18, 2003: author registration deadline

* Proceedings

All accepted Full and Short Papers and Poster Extended Summaries will appear in a single volume to be published by the IEEE Computer Society Press. Extended versions of selected papers will be invited for a Special Issue of the journal Educational Technology & Society (ISSN 1436-4522).

* Topics of Interest

Adaptive and Intelligent Applications
Advanced uses of Multimedia and Hypermedia
Ambient Intelligence and Ubiquitous learning
Application of Artificial Intelligence Tools in Learning
Architecture of Learning Technology Systems
Building Learning Communities
Computer Supported Collaborative Learning
Distance Learning
e-Learning for All: Accessibility Issues
Educational Modelling Languages
Evaluation of Learning Technology Systems
Instructional Design Theories
Integrated Learning Environments
Interactive Simulations
Knowledge Testing and Evaluation
Life-Long Learning Paradigms
Learning Styles
Media for Learning in Multicultural Settings
Metadata for Learning Resources
Mobile Learning Applications
Pedagogical and Organisational Frameworks
Practical Uses of Authoring Tools
Robots and Artefacts in Education
Simulation-supported Learning and Instruction
Socially Intelligent Agents
Speech and (Natural) Language Learning
Learning Objects for Personalised Learning
Teaching/Learning Strategies
Technology-facilitated Learning in Complex Domains
Virtual Reality

* Program Co-Chairs

- J. Michael Spector, Syracuse University, USA
- Vladan Devedzic, University of Belgrade, Yugoslavia

* General Co-Chairs

- Kinshuk, Massey University, New Zealand
- Demetrios G Sampson, CERTH-ITI and University of Piraeus, Greece

* Paper Submissions

Please follow the submission procedure given at the conference website:

* For general information, please contact:

ICALT 2003 Administration Office
c/o Mrs Ioanna Veletza
Informatics and Telematics Institute
Centre for Research and Technology - Hellas
42 Arkadias and Taygetou Str., Chalandri, Athens GR-15234
Tel.: +30-210-6839916/17
Fax: +30-210-6896082


Back to contents



Learning objects metadata: implementations and open issues


Introduction to the special issue

In contrast to traditional industry standards, which often document established solutions already validated in multiple implementations, learning technology standards are typically enabling and future-oriented. The IEEE Learning Object Metadata (LOM) standard and a number of recommended specifications for educational metadata are no different in this respect. They serve to facilitate the development of large-scale distributed learning frameworks and new models of learning resource design and delivery, but have limited implementation experience behind them.

The idea of using metadata to describe objects is not new. In library science and archiving, metadata formats have a solid background in standards of the pre-Internet era. Such approaches enabled an end-user to search for a stored object by typical publisher-provided query fields such as “title”, “author”, or “year of publication”. The Dublin Core Metadata Initiative has been the leading cross-domain metadata initiative since the invention of the Web. It has distilled the key aspects of larger library-based metadata schemas and removed repository borders, offering a user-oriented approach to describing a variety of information objects on the Web by defining a minimal set of essential and extensible descriptors. However, while this has helped establish a foundation for cross-domain resource discovery, many educational organizations have felt a need to provide more specific information about the objects they create, store, or deliver, and have thus offered their own visions of metadata for educational resources.

Approval of the IEEE LOM standard was a significant step in bringing together efforts in this field. Its data model supports the description of a learning object - any entity that may be used in the educational process - and thus aims to facilitate the search, evaluation and exchange of products, components and learning content. It enables targeted object descriptions by making all elements optional, so any subset of LOM elements may be chosen for a specific purpose. It supports interoperability of descriptions by offering a detailed list of elements and, in some cases, a set of recommended values for them. It also recognizes that not all needs may be met by the proposed standardized elements and allows for extensions.
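As a rough illustration of this "pick a subset" approach, the sketch below builds a minimal metadata record containing only a few General-category elements. The element names follow the LOM data model, but this flat encoding is invented for illustration and is not the official IEEE LOM XML binding.

```python
# Illustrative only: a minimal LOM-style record using a hand-picked subset
# of elements (all LOM elements are optional, so any subset is permitted).
# This is NOT the official IEEE LOM XML binding.
import xml.etree.ElementTree as ET

def make_lom_record(title, description, language="en"):
    lom = ET.Element("lom")
    general = ET.SubElement(lom, "general")
    ET.SubElement(general, "title").text = title
    ET.SubElement(general, "language").text = language
    ET.SubElement(general, "description").text = description
    return lom

record = make_lom_record("Simplex method basics",
                         "An introductory module on linear programming.")
print(ET.tostring(record, encoding="unicode"))
```

A real implementation would also choose vocabularies for value-typed elements and declare any local extensions alongside the standardized ones.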

This special issue is devoted to the discussion of learning object metadata within and beyond the LOM framework, their implementations, extensions, limitations, and potential. A diversity of opinions, experiences, and solutions presented in this Newsletter stresses the variety of research directions and implementation issues related to metadata that may give birth to further guidelines and standards in this area.

A number of papers discuss the implementation of metadata collections at different stages of a project and for different purposes. Rory McGreal et al. introduce a large Canadian project aimed, among other things, at the promotion and refinement of a metadata framework for learning object repositories (see “eduSource: Creating learning object repositories in Canada”). Jon Mason and Nigel Ward present their vision of the role of metadata and share their five-year experience in the development and implementation of a Metadata Application Profile in Australia (see “The Le@rning Federation Metadata Application Profile”). Ben Ryan and Steve Walmsley describe some intermediate results of a UK-based project on metadata collection for an interoperable learning object repository, related in particular to the qualities of software that supports metadata authoring (see “Implementing metadata collection: a project’s problems and solutions”). Leena Suhl and Stephan Kassanke report on a practical implementation of LOM and present a LOM editor and visualization mechanisms in the context of a repository of reusable learning objects for a specific field of study (see “Learning Object Metadata in Operations Research/Management Science”).

Whereas the focus of the above papers is on implementation per se, another group of papers considers the usability of the standard for specific cases. Robert Farrell et al. present a model and discuss the need for consistent LOM extensions in the context of learner-centered or job-oriented learning (see “Implementing and Extending Learning Object Metadata For Learning-directed Assembly of Computer-based Training”). Xin Xiang et al. argue that a large number of optional metadata elements may be misleading for end-users and introduce a localized LOM model with mandatory core elements in a Chinese e-learning technology standard (see “Introduction of the Core Elements Set in Localized LOM Model”). A similar problem is addressed by Javier Garrido, who suggests distinguishing the purposes of metadata in the learning process (see “Two scenarios using metadata”). This block of critical papers is concluded by the practical recommendations of Frank Farance, who addresses the key LOM pitfalls (see “IEEE LOM Standard Not Yet Ready For "Prime Time"”).

The real value of interoperability can be achieved only by consistent description of objects in the proposed metadata format. Andrew Brasher and Patrick McAndrew discuss the role of vocabularies in this respect (see “Metadata vocabularies for describing learning objects: implementation and exploitation issues”). Miguel Sicilia and Elena García explore the potential of ontologies for more precise object description by specifying classifications, tagging formal assertions, or identifying relations between objects (see “On the integration of IEEE-LOM Metadata Instances and Ontologies”). Christopher Brooks et al. explain the importance of version specification for the authoring and use of a learning object and offer some practical solutions (see “Versioning of Learning Objects”).

The final group of papers is devoted to research on advanced applications of metadata. Liliana Santacruz-Valencia et al. present some steps towards a hierarchical representation of learning objects similar to the ADL and Cisco models (see “A framework for creation, integration and reuse of learning objects”). Bruno Queiroz et al. present a study on applying learning object metadata to arrange adaptive learning experiences for a student (see “Using the IEEE LTSC LOM Standard in Instructional Planning”). Tom Murray aims to increase metadata usability by suggesting a complementary metadata scheme that distinguishes between three types of information related to learning: content, knowledge and context (see “Toward decoupling instructional context, learning objectives, and content in learning object metadata”). Fredrik Paulsson and Ambjörn Naeve assert that metadata should not be considered static and homogeneous, and offer an advanced metadata model that supports the use of multiple metadata sets on the same resources (see “Standardized Content Archive Management – SCAM: Storing and Distributing Learning Resources”).

I hope you find this issue interesting and thought-provoking.


Kateryna Synytsya


Back to contents



eduSource: Creating learning object repositories in Canada


The eduSource project is a pan-Canadian collaborative project to create a testbed of linked and interoperable learning object repositories. The project is providing leadership in the ongoing development of the associated tools, systems, protocols and practices that will support such an infrastructure. The primary delivery mechanism for this testbed will be the broadband Internet, and in particular CA*net 3/4. This project is based on national and international standards; it is fully bilingual; it will be accessible to all Canadians, including those with disabilities; and it will share and disseminate its findings across Canada and internationally. Each of the partners and their associates is bringing considerable resources to the project. Collectively the contributions of the partners amount to $5,280,000 of the total project value of $9,530,000; CANARIE, Canada’s Advanced Internet Development Organization, is contributing up to $4,250,000.

To simplify its activities, eduSource has six designated primary partners who will lead this work – Athabasca University, Alberta’s Netera Alliance, the New Brunswick Distance Education Network (NBDEN), the New Media Innovation Centre in British Columbia, Téléuniversité du Québec, and the University of Waterloo, Ontario – with the Netera Alliance serving as the lead contractor. In addition, the project includes a host of associates in the private and public sectors representing learning institutions from across the country.

eduSource project objectives

The primary objective of the project is the design and testing of a prototypical but functional national learning object repository infrastructure, and the development of specifications and tools for a “Repository in a Box” that can be shared with organizations across the country. This will be accomplished by bringing together Canada’s leading experts, researchers and practitioners in the field to share best practices for the large-scale, country-wide deployment of a network of learning object repositories.

In this context, the project team has identified four broad goals for the project:

  1. To promote and refine a repository metadata framework through the ongoing development of the CanCore protocol;
  2. To support experimental research in key areas such as pedagogy, accessibility, protocols, network engineering, hardware integration, quality of service, security, rights management, content development and software applications;
  3. To implement a national testbed to investigate processes such as peer review, content repurposing, user support, professional development and content transactions; and
  4. To communicate and disseminate its findings through cooperation and partnership with other federal and provincial agencies, institutions and the private sector.

To meet these goals, the eduSource project has identified a number of specific objectives:

  1. It will address and examine issues of interoperability by connecting a critical mass of learning objects housed in repositories across the country.
  2. It will play a leadership role in developing and promoting national and international standards.
  3. It will develop a blueprint for the rights management of learning objects.
  4. It will link and integrate the development of repository software programs.
  5. It will create a physical testbed of servers linked together through CA*net 4.
  6. It will build a bilingual pan-Canadian community of practice.
  7. It will examine new business and management models for object repositories.
  8. It will develop a communications plan for the dissemination of these results.
  9. It will accomplish these goals within the context of a comprehensive program of evaluation and feedback.
  10. It will make certain that these repositories are accessible to all Canadians, and particularly to those learners with disabilities.

Project Description

Canada’s recently announced Innovation Strategy is predicated on an ever-increasing supply of well-educated and skilled workers in all parts of the economy and in all parts of the country. Learning object repositories are the first step in ensuring that this demand can be met. Individual provinces, learning institutions and even the private sector are all capable of developing their own separate and discrete repositories but the value of these projects grows exponentially when they are connected together.

The CanCore metadata profile is being developed as a set of generic guidelines for the practical implementation of the IEEE LOM. It recommends a subset of the LOM, which has been documented with implementation guidelines and is now being developed to cover the entire LOM field set. CanCore is an attempt to standardize the vocabulary and structure of the data entered into the LOM. This is considered essential for semantic web applications and the use of intelligent agents.
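One way to picture what such an application profile enforces is a checker that accepts only a recommended subset of elements and only controlled vocabulary values for them. The tiny profile below is invented for illustration and is far simpler than the actual CanCore guidelines.

```python
# Hypothetical mini-profile: a recommended element subset plus controlled
# vocabularies. None of these entries reflect the real CanCore documents.
PROFILE = {
    "general.title": None,                    # free text, no vocabulary
    "general.language": {"en", "fr"},         # bilingual requirement
    "educational.learningResourceType": {"exercise", "simulation", "lecture"},
}

def check_record(record):
    """Return a list of problems in a flat element -> value record."""
    problems = []
    for element, value in record.items():
        if element not in PROFILE:
            problems.append(f"{element}: not in profile")
        elif PROFILE[element] is not None and value not in PROFILE[element]:
            problems.append(f"{element}: '{value}' not in controlled vocabulary")
    return problems
```

Standardizing both which elements are used and which values they may take is what makes records from different repositories comparable, and hence machine-processable by agents.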

EduSource will be designed around a series of complementary and interwoven work packages. What holds these pieces together is the idea of developing and testing a shared prototypical “Repository in a Box.” Each package has been designed to support this concept, and taken together they provide much more than just a testbed for a network of linked repositories – in effect, they provide a blueprint for others across Canada to create a proliferation of repositories that can all work and communicate together.


EduSource is a comprehensive project that ties together various work packages and creates synergies between its partners and associates. All of the partners are Canadian organizations and meaningful work will be carried out in every region of the country. Moreover each partner brings unique skills to the project and the work packages have been organized to ensure that those skills bolster and complement one another. For example, CanCore guidelines are developed in Alberta through Athabasca University but implemented in the CanLOM metadata repository in New Brunswick and in Explor@ in Quebec. In fact, the wide-scale adoption of CanCore demonstrates how the principles of interoperability and open systems have guided our previous projects and are the foundation of our present work together.

Because the project builds on and integrates pre-existing repository work by experienced project teams, eduSource can ensure the soundness of its methods and the feasibility of the technologies involved. In addition, not only are eduSource members aware of the best national and international standards, such as CanCore, they are helping to create those standards. All of the partners are connected to CA*net 3 and have helped develop working applications such as client-server and peer-to-peer applications and learning content management systems.
The expertise of the team and the members’ experience in collaborative product development are powerful indicators that eduSource will be beneficial to the future of the Canadian economy not only through the exporting and commercialization of its results, products and expertise to a growing global market, but also through the significant competitive advantage it will impart to Canada and Canadians through the effective exploitation of broadband networks for the delivery of education and training. More information on this project is available at


Edusource Canada project description. (2002, December 1). Retrieved December 29, 2002, from
*** This site links to all the partner sites.

Friesen, N., & McGreal, R. (2002, Fall). Learning object and metadata specification bodies. International Review of Research in Open and Distance Learning, 2 3. Retrieved December 2, 2002 from

Friesen, N., Mason, J., & Ward, N. (2002, October). Building educational metadata application profiles. Paper presented at the International Conference on Dublin Core and Metadata for e-Communities, Florence, Italy. Available

Innes, J., McGreal, R., & Roberts, A. (2002). Metadata specifications. In H. H. Adelsberger, B. Collis & J. M. Pawlowski (Eds.), Handbook on information technologies for education and training (pp. 273 - 288). Stuttgart: Springer-Verlag.

Richards, G., McGreal, R., & Friesen, N. (2002). The evolution of learning object repository technologies: Portals for on-line objects for learning (POOL). In E. Cohen & E. Boyd (Eds.), Proceedings of IS2002, the Informing Science + IT Education Conference, June 2002 (pp. 176-182). Cork, Ireland: IS2002.

Special contributors:
T. Anderson, N. Friesen, M. Sosteric: Athabasca University
Ken Hewitt, Janelle Ring, Douglas MacLeod: Netera Alliance
Marek Hatala, Tom Calvert: NewMIC/Simon Fraser Univ.
Margot Chiasson, Toni Roberts: TeleEducation NB
Tom Carey, Kevin Harrigan: University of Waterloo
Thanks to CANARIE.


Rory McGreal
Athabasca University

Griff Richards
NewMIC/Simon Fraser Univ.

Norm Friesen
CAREO/Athabasca University

Gilbert Paquette
CIRTA (LICEF) Téléuniversité du Québec

Stephen Downes
National Research Council


Back to contents



The Le@rning Federation Metadata Application Profile


The Le@rning Federation (TLF) is a five year initiative aimed at developing a shared national pool of quality online learning content for Australian schools within a framework that facilitates distributed access. It has been co-funded within a policy context developed in collaboration between the Australian Commonwealth government and State and Territory education authorities and focused on the strategic importance of fostering online culture in school education.

Within this collaborative context metadata plays a pivotal role. It is required to support the access, search, selection, use, trade and management of learning objects, where a learning object is defined as ‘a digital resource facilitating learning experiences related to a particular educational purpose’. After careful review of all requirements, the Le@rning Federation has developed an application profile that combines or references a number of metadata schemes or namespaces, including the IEEE LOM. Importantly, its use of the term ‘learning object’ has been developed with a view to also meeting content packaging requirements developed by the IMS Global Learning Consortium.

TLF learning objects typically contain resources (files), organisations, metadata, and other learning objects. The files and subordinate learning objects are used to facilitate a range of learning experiences. An organisation specifies a navigation path through the learning object; a learning object may have many organisations, and hence many possible navigation paths. Within a learning object, metadata is structured information about the learning object, its resources and its organisations. TLF metadata supports learning object and resource management, description of educational purpose, technical interoperability, digital rights management and accessibility.
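The structure just described can be modelled roughly as follows, loosely in the spirit of IMS Content Packaging: a learning object bundles resources, one or more organisations (each an ordered path over those resources), and metadata. The names below are illustrative, not the IMS manifest schema.

```python
# Rough, illustrative model of a packaged learning object: resources (files),
# organisations (navigation paths over resources), and metadata. Not the
# actual IMS Content Packaging schema.
from dataclasses import dataclass, field

@dataclass
class LearningObject:
    metadata: dict
    resources: dict = field(default_factory=dict)      # id -> file path
    organisations: dict = field(default_factory=dict)  # name -> [resource ids]

    def navigation_path(self, organisation):
        """Resolve one organisation into an ordered list of files."""
        return [self.resources[rid] for rid in self.organisations[organisation]]
```

For example, an object with resources `{"r1": "intro.html", "r2": "quiz.html"}` and organisation `{"default": ["r1", "r2"]}` resolves the `"default"` path to `["intro.html", "quiz.html"]`; a second organisation over the same resources would give an alternative navigation path.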

Guided by principles of interoperability and pragmatism the TLF has recognised that adoption of international metadata standards is critical. It also recognises that adoption of metadata standards should not compromise the ability of school education systems to achieve their own educational priorities. Navigating a pragmatic ‘middle path’ between international and national standards has been a challenging process. In particular, Australia had been an early adopter of DC-based metadata for general resource discovery at a whole-of-government level.

The TLF metadata application profile has been developed as a response to the fact that no existing metadata standard met all TLF requirements. Consequently, metadata elements are sourced from different metadata specifications or namespaces:

Furthermore, some TLF requirements were not met by any standard. TLF describes the educational purpose of its learning objects in great detail. Educational purpose is defined in terms of learning area, strand, outcome, skills, student activity and learning design. These descriptions can be represented indirectly using the general-purpose Classification element of the IEEE LOM standard. However, TLF decided to define new education metadata elements to represent these concepts more directly.
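The trade-off can be illustrated as follows: the same educational-purpose information expressed indirectly through a generic LOM-style classification (a purpose plus a taxon path) versus directly through dedicated elements. The element names below are invented for illustration; TLF's actual profile differs.

```python
# Indirect representation: a generic classification with a taxon path,
# in the style of LOM's Classification category (illustrative encoding).
indirect = {
    "classification": [{
        "purpose": "educational objective",
        "taxonPath": ["Mathematics", "Number", "Fractions", "Compare fractions"],
    }]
}

# Direct representation: dedicated (hypothetical) education elements.
direct = {
    "learningArea": "Mathematics",
    "strand": "Number",
    "outcome": "Compare fractions",
}
```

The indirect form is more interoperable with generic LOM tooling, while the direct form makes each concept addressable by name, which simplifies search and validation against curriculum frameworks.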

The resulting TLF metadata element set is grouped into five categories:

The management category groups the information related to both the management and discovery of the digital resource as a whole. It contains some common descriptive elements as well as lifecycle and contribution information.

The technical category groups the technical requirements and characteristics of the digital resource. For example, it contains information on the file types, software and hardware requirements of the digital asset.

The educational category supports description of the educational integrity of a Learning Object and includes elements for describing:

The rights category groups the intellectual property rights and conditions of use of the digital resources. To place a pool of legally reusable educational material within the reach of all Australian students and teachers requires it to be managed in a way that negotiates and provides agreed reimbursement to owners of intellectual property and that facilitates the creation, trade and usage of online content. To achieve this, TLF curriculum content needs to meet relevant statutory and contractual obligations. TLF metadata contains support for digital rights management by including both text and Open Digital Rights Language (ODRL) statements.

The accessibility category incorporates an Accessibility Specification developed by the TLF that conforms to Commonwealth laws concerning accessibility. The Specification aims to ensure that online resources and services are inclusive of a range of teaching and learning capacities, contexts and environments. It affirms policy commitments by State and Territory education systems to inclusive educational provision. TLF metadata contains support for describing the accessibility of Learning Objects in terms of W3C Web Accessibility Checkpoints and TLF-defined learner accessibility profiles.

For more information see:

The Le@rning Federation:

Dublin Core Metadata Initiative

EdNA Metadata Standard

IEEE Learning Object Metadata


Jon Mason

Nigel Ward


Back to contents



Implementing metadata collection: a project’s problems and solutions



Project background

The HLSI XML repository of learning objects, together with the tools produced by the project, enables partner materials, created as a series of Learning Objects, to be used and re-used, re-authored and re-tasked to serve a number of curriculum demands. Once produced, Learning Objects have currency across a wide range of subject areas and can be authored and re-authored into very high-quality online learning materials with little more than word-processing skills.

Software development to support the project

To achieve the aims and objectives of the project, a number of software systems were developed for use by educational practitioners, focused on automating and supporting the following:

The two main software artefacts were the re-authoring tool, XML Tools, and the repository.

XML Tools and the repository

XML Tools is a customised Microsoft Word environment that provides an authoring system for educational practitioners to create learning content suitable for on-line delivery. It provides simple interfaces for inserting multi-media objects and for collecting metadata describing these objects. Stylistic and visual formatting is done using the normal Word functionality.

The repository is a database driven application that utilises IMS specifications to allow the import and export of learning content, and the discovery of learning content through searches over a subset of the IEEE LOM (Learning Object Metadata) standard.

The primary function of the repository is to support the re-use and re-authoring of educational content using XML Tools.

Problems with metadata collection

In the first eighteen months of the project, educational content was commissioned from various sources, including partners in the project and commercial vendors. The objective was to create an initial collection of content that could be used as a stimulus to creating new content through re-use and re-authoring.

As the focus of the repository was to support re-use through enabling practitioners to search for content that could then be re-authored, it was essential that the content in the repository was described by relevant and accurate metadata.

It soon became apparent as practitioners searched for material that either the search software was not functioning correctly or there was a problem with the metadata. This posed an acute problem because if practitioners could not find relevant and useful content they could not re-use the content.

The metadata was identified as the problem by rigorously defining search test cases and comparing the results returned to those expected, based on direct examination of the metadata records.
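That diagnostic can be sketched as a small harness: replay the defined search test cases against the repository's search function and report any case where the identifiers returned differ from those expected from direct inspection of the metadata records. All names here are illustrative, not the project's actual software.

```python
# Illustrative diagnostic harness: compare search results against expected
# ids derived from direct examination of the metadata records.
def run_search_tests(search, test_cases):
    """test_cases: iterable of (query, expected_ids); returns failing cases."""
    failures = []
    for query, expected in test_cases:
        got = set(search(query))
        if got != set(expected):
            failures.append((query, sorted(expected), sorted(got)))
    return failures

# Toy repository: id -> keyword text drawn from each metadata record.
records = {"lo1": "fractions number", "lo2": "algebra equations"}

def search(query):
    return [rid for rid, text in records.items() if query in text]

print(run_search_tests(search, [("fractions", ["lo1"]), ("algebra", ["lo2"])]))
```

An empty failure list points at the search software being correct; systematic failures on well-formed queries point at the metadata itself, which is what the project found.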

The main problem areas were:

To correct the problems with the metadata the only solution was to edit the existing records and apply rigorous quality control procedures. However, this proved to be impractical at the time due to lack of resources and suitably qualified personnel to undertake the task.

A decision was taken to concentrate on redesigning the metadata collection strategy and to develop improved software tools to assist and control the process. The issue of editing the existing metadata records was to be addressed when resources became available.

Our solution

To address the issue of metadata collection in the next round of content development, the project focused on developing an application profile to define, describe and explain the metadata elements required. Fundamental to this strategy was a commitment to personnel and professional development that would:

Application profile

The application profile utilises a subset of the IMS Metadata 1.2.2 specification. The elements were chosen to reflect the needs of the partners within the project and would be reviewed on a regular schedule in light of feedback, comment and consultation from the practitioners and developers of content.

Improved software

The improvements, re-design and re-development of the authoring software were founded on the following principles:

User selectable vocabularies — to address the issue of terminology use and interpretation of terminology the software would support a number of pre-defined vocabularies the user could select when entering metadata.

User selectable templates — to assist the user when entering repetitive metadata within one piece of content and across subject areas, the software would allow the creation of metadata templates in which users pre-enter standard information, and then apply these templates when creating new metadata records.

Automation of technical metadata collection — to relieve the burden of entering technical metadata, automate the collection by processing all digital resources used to extract the fullest possible information and pre-populate the metadata entry forms.

Ask only for what is needed — where the application profile defines an optional or advisable status for metadata elements, only ask for the mandatory elements and then allow the user to choose whether to enter the optional or advisable elements.

Do not allow default values — if the software allows metadata element values to be defaulted the projects experience is that users will opt for the default value. This severely affects the performance and accuracy of the repository search as metadata record elements that have a default value might match a search criterion, although the author of the educational content did not made an explicit decision to give that metadata element a value.
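The automation principle above can be sketched in a few lines. The helper below is hypothetical, not the project's actual tool; it guesses a resource's format and size from the file itself so the technical fields arrive pre-filled rather than defaulted or hand-typed:

```python
import mimetypes
import os

def technical_metadata(path):
    """Pre-populate the technical metadata fields (format, size) for a
    digital resource by inspecting the file rather than asking the user.
    The field names loosely follow the IMS/LOM Technical category; the
    helper itself is illustrative."""
    mime, _ = mimetypes.guess_type(path)
    return {
        "format": mime or "application/octet-stream",  # MIME type of the resource
        "size": str(os.path.getsize(path)),            # size in bytes
    }
```

An authoring tool would run something like this over every resource referenced by a piece of content and present the results as editable, pre-filled form fields.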


In the first two years of the project a number of important lessons were learnt.

If educational content is not described with relevant and accurate metadata, you cannot reliably and accurately use that metadata for searching.

If you cannot search for an educational resource because it has no metadata, or a search returns several hundred or several thousand results, then either you cannot re-use the resource because you cannot locate it, or you cannot decide which resource is relevant to your needs because of the time required to assess the search results.

If you cannot re-use an educational resource, the whole concept of a repository and a tool to re-author content will not work.

We therefore have to convince educational practitioners and content developers that the effort they invest in metadata will be rewarded with a method and system of content authoring and re-authoring whose benefits more than make up for that effort.


Ben Ryan
Software Development Manager
HLSI Project

Steve Walmsley
Project Director
HLSI Project


Back to contents



Learning Object Metadata in Operations Research/Management Science



The project OR-World is funded within the Fifth Framework Programme of the European Community and started in 2000. Its goal is to develop a hypermedia network of learning objects in which every object subject to reuse is well described by appropriate metadata. Learning object granularity varies from simple media elements to complex thematic metastructures, and objects can be reused in the sense of combining objects of lower granularity into objects of higher complexity. The OR-World framework distinguishes between several types of objects (case studies, theory documents, etc.), which are XML-based and rendered to varying target representations such as HTML and PDF (see [Kassanke/Steinacker 2001]).

The knowledge domain Operations Research and Management Science (OR/MS) is highly interrelated: basic algorithms and methods can be applied to practical problems in varying contexts. Case studies that demonstrate the use of certain methods in practical problem situations are therefore of crucial importance for our students, who must learn in which situations a specific method is appropriate and where it cannot be applied. A prior structuring of a content area is helpful for building the links between objects in a hypermedia network; the subject area OR/MS is already well structured and is thus well suited for representation within OR-World. Reusability of learning material is one of the key issues of the project and is realized by using the "relation" section of the LOM standard. The conceptual framework of LOM has been implemented in the project by a component called the LOM editor.

LOM Editor

The LOM editor is used for entering and browsing learning object metadata. The LOM recommendation (the categories general, educational, technical, etc.) is available for describing learning objects. An author makes a learning object available to the system by describing it with appropriate metadata and linking it to other learning objects. The metadata descriptions can be multilingual: an English description is obligatory and can be enhanced by additional descriptions, currently in German, Finnish and Dutch. The metadata language versions available are denoted in the upper right corner of Figure 1.

General section of a learning object
Figure 1: General section of a learning object

With regard to reusability, LOM offers the prerequisites for describing learning objects appropriately and finding them before actually reusing them. A media element (in LOM terms: an atomic element) is the most likely candidate for reuse in other contexts, owing to its general nature compared with a learning object that is specific to a particular context. A good example of a general media element is an optimization applet used in several course elements and case studies: the applet solves optimization problems and can be applied in different contexts, giving it a high degree of reusability. In general, a media element is embedded in, and supplemented by, comments and additional notes that are specific to the context. Still, reusability is not limited to lower-level elements; case studies, for example, can appear as an enhancement of theory elements as well as part of a case study collection. In both cases the composition of learning objects is expressed in the metadata.

The relationships between learning objects are expressed in the relation section, where typed relationships such as IsPart, HasPart, IsRelatedTo, etc. may be specified. Depending on the granularity level of a learning object, other learning objects can be included by defining a HasPart relationship to another object. The learning object "Red Brand Canners" is a case study on applying methods from linear programming to a specific problem situation. This object is included in a basic course on optimization systems as well as in a dedicated selection of case studies, which is expressed by adding the "IsPart" relationships as shown in Figure 2. Simultaneously, the "HasPart" relationships are added to the higher-level objects.

Relationship section of a learning object
Figure 2: Relationship section of a learning object
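The bookkeeping just described, where entering an "IsPart" relationship simultaneously adds the inverse "HasPart" to the higher-level object, can be sketched as follows. This is an in-memory illustration only; the actual editor keeps its relations in a database:

```python
# Inverse pairs for the typed relationships mentioned above.
INVERSE = {"IsPart": "HasPart", "HasPart": "IsPart", "IsRelatedTo": "IsRelatedTo"}

def add_relation(relations, source, kind, target):
    """Record source --kind--> target and automatically add the inverse
    relationship on the target, mirroring the editor's behaviour."""
    relations.setdefault(source, []).append((kind, target))
    relations.setdefault(target, []).append((INVERSE[kind], source))

relations = {}
# "Red Brand Canners" belongs to two higher-level objects (cf. Figure 2).
add_relation(relations, "Red Brand Canners", "IsPart", "Basics of optimization systems")
add_relation(relations, "Red Brand Canners", "IsPart", "Case studies")
```

Maintaining both directions automatically is what lets views such as the hyperbolic tree be generated from the metadata alone.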

Technically speaking, the metadata is stored in a relational database schema that covers the LOM standard. The database approach has several advantages in speed and consistency over a file-based approach in which metadata is stored as XML documents. A few simplifications have been introduced: the Meta-Metadata section of LOM has been dropped, since in our case it offers hardly any value to the users of the LOM editor. Informal user evaluations show that authors tend to stick to the general attributes (title, description, keywords, etc.) rather than describing objects in as much detail as possible. However, authors must fill in the relationship section in order to make objects known to the OR-World system.

We use the relationship data in the database to create both a list view of the available learning objects and their included objects, and a graphical representation in the form of a hyperbolic tree. Users may drag the nodes of this representation with the mouse to navigate the hyperspace. As Figure 3 indicates, the "Red Brand Canners" case study appears twice, as part of the course "Basics of optimization systems" and as part of the case studies section, although only one instance of the learning object is defined. The visualisation is based entirely on the metadata; the tree is generated automatically from the relationships defined within the editor. Additional information such as title and description may be retrieved from the database as well.

Hyperbolic tree representation
Figure 3: Hyperbolic tree representation

Access to the LOM editor is available via [LOM Editor 2002] (login with id “guest” and password “guest”).


OR-World provides methods to author, describe and publish learning elements. Although the project has finished, development of the framework and of the editor continues. The next steps will focus on integrating semantic aspects into the framework and extending decision support for course building. Once a significant number of learning objects is available, stronger classification methods than simple value searches will be necessary for efficient course construction. A domain ontology will be integrated in which relationships such as prerequisites and related links are predefined for specific domain areas. This approach avoids specifying these relationships for each learning object separately; instead a learning object is classified as belonging to parts of the ontology, allowing additional navigational links and references to be derived automatically.

The current version of the LOM editor is closed in terms of accessibility from other applications. Generating serialized XML-based representations of the learning objects from the database would allow course-brokering systems and search engines to access the OR-World metadata descriptions. Conforming to LOM is a first step in building distributed applications that share a common understanding and structure of learning object metadata. In this sense, LOM is the key that opens the door to a worldwide library of educational learning material.
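Such an export could be generated directly from the database records. The sketch below uses the standard library; the element names loosely follow the LOM general and relation categories and are not the official IEEE XML binding:

```python
import xml.etree.ElementTree as ET

def lom_xml(record):
    """Serialize one learning-object record into a small LOM-like XML
    fragment that external search engines or course brokers could consume."""
    lom = ET.Element("lom")
    general = ET.SubElement(lom, "general")
    ET.SubElement(general, "title").text = record["title"]
    ET.SubElement(general, "description").text = record["description"]
    for kind, target in record["relations"]:
        relation = ET.SubElement(lom, "relation")
        ET.SubElement(relation, "kind").text = kind
        ET.SubElement(relation, "resource").text = target
    return ET.tostring(lom, encoding="unicode")

xml_doc = lom_xml({
    "title": "Red Brand Canners",
    "description": "Case study on linear programming",
    "relations": [("IsPart", "Basics of optimization systems")],
})
```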


[Hypertree 2002] OR-World Hypertree implementation, Last access: 2002-12-14.

[Kassanke/Steinacker 2001] Kassanke S. & Steinacker A. (2001). Learning Objects Metadata and Tools in the Area of Operations Research, Proceedings of EdMedia 2001, Tampere, Finland, June 2001.

[LOM Editor 2002] LOM Editor, Last access: 2002-12-14.

[OR-World 2002] OR-World web site, Last access: 2002-12-14.


Leena Suhl
University of Paderborn, FB5, DS & OR Lab
Warburger Straße 100
33098 Paderborn, Germany
Fon: +49 5251 60 52 46 / Fax: +49 5251 60 35 42

Stephan Kassanke
University of Paderborn, FB5, DS&OR Lab
Warburger Straße 100
33098 Paderborn, Germany
Fon: +49 5251 60 24 16 / Fax: +49 5251 60 35 42


Back to contents



Implementing and Extending Learning Object Metadata
For Learning-directed Assembly of Computer-based Training


The Learning Objects Framework project at IBM Research is developing new models for generating on-demand interactive web-based learning experiences from interchangeable modular learning objects, by implementing and extending the IEEE LTSC Learning Object Metadata (LOM) 1.0 standard [1]. This article explains the motivation for our work, our model of content reuse, our content and metadata models, and relevant features of our LOM XML schema. We have submitted our base XML schema to the LOM XML Binding Working Group [2].


Complete courses are the primary deliverables of most training programs. However, full courses often take too long to develop and deliver to have significant business impact. Technical learners often need only small portions of courses. We have developed a learning environment where users discover and assemble modular learning objects into short, personalized web-based training courses as needed.

Our customers identified IBM Redbooks [3] as a valuable knowledge source that could be repurposed for e-learning. Over 4,000 IBM Redbooks exist on various technical topics. For our implementation, we chose seven key Redbooks, focusing on IBM WebSphere [4], that comprise over a million words of text and over 2,000 images. Our goal was to transform these books into learning objects for on-demand, learner-directed assembly for web-based training.

We were faced with several questions:

Reuse Model

We identified three possible sources for metadata (see Figure 1):

Metadata Processing Model for Reuse
Figure 1: Metadata Processing Model for Reuse

In this model, learning object content and metadata are created and shared by authors, subject matter experts, instructional designers, and programs performing automatic processing on original sources.

Content Model

Creating modular learning objects from the original book sources involves both automatic and user-driven content extraction. Our initial experiments suggested the material could not consistently be divided into learning objects along chapter or section boundaries. We therefore employed twenty WebSphere experts to divide the books into modules. Experts were asked to create modules of up to 30 minutes in length by identifying portions of a book with at least one clear learning objective. This process produced over 200 learning object modules and took approximately two to four hours per expert. We mapped learning object modules back to the original book using two 1.1 General Identifiers: one for the position of the learning object in the table of contents (a book numbering catalog) and one for a reference to the content within the original book document file (a source URL).

Learning object modules identified by experts consisted of files containing diverse information: system architecture diagrams, lists of definitions, code listings, and more. We thus allowed module files to contain one or more learning resources that consisted of sections within the learning object content file. We linked learning resources to modules using the 7. Relation “partof”. We also extended the 5.2 Learning Resource Type vocabulary, requiring experts to identify at least one learning resource type for each resource. In our Redbooks application, this consisted of identifying the target platform or product.

Metadata Model

Our metadata information model needed to support both the LOM standard and extensions for searching, selecting, and assembly of learning objects by end users of the learning delivery system.

When prompted to identify information they would need for finding relevant learning modules, users identified keywords, audience, context, and level of difficulty as most important. We were able to map 1.5 Keywords and 5.8 Difficulty directly to the IEEE LTSC LOM. However, we needed to create a custom element for the target audience of the learning object, and we needed to extend 5.6 Context to include the various usage contexts where the knowledge learned could be applied. We developed a custom vocabulary for audience that included the various job positions of our users.

We also asked users which information elements would help them decide to select specific learning objects for inclusion in their custom course. Users identified that titles (1.2 Title), short descriptions (1.4 Description), estimated learning time (5.9 Typical Learning Time), and estimated difficulty would help. Some users also identified the author’s reputation as important, so we included a name and picture and extended the standard to include a short biography in the 2.3 Contribute element.

Finally, we asked instructional designers to identify which information elements would help them correctly sequence learning objects. They identified prerequisites, corequisites, and learning objectives. We developed metadata extensions for these elements.

XML Schema

A flexible and extensible LOM schema enables improved interoperability between applications, authoring tools, and content across the e-learning industry. We joined the IEEE LOM XML Binding Working Group to help establish the standard and submitted our base LOM XML Schema for consideration. Our base LOM XML Schema implements all of the LOM elements and is extensible enough to support a wide range of e-learning applications.

Our XML schema uses schema profiles composed from among alternative schema modules. Each module allows for different levels of schema validation. For example, one application may include a schema module that performs no vocabulary validation while another may include a schema module that validates the standard LOM vocabularies. We added a schema module to support validation of custom vocabularies alongside the standard LOM vocabularies, allowing arbitrary element ordering within LOM aggregate elements, and strictly enforcing child element multiplicity constraints.

The LOM data model anticipates the need to introduce elements not included in the standard collection. Extension elements must retain the value space and data type of existing elements. Our schema supports full validation of custom elements, an important requirement of our target applications.
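The idea of selectable validation levels can be illustrated with vocabulary checking. The standard values below are the LOM 5.6 Context vocabulary; the custom term and the helper function are assumptions for illustration, not part of the schema submission:

```python
# Standard LOM 5.6 Context vocabulary.
STANDARD_CONTEXT = {"school", "higher education", "training", "other"}

def make_validator(standard, custom=()):
    """Build a validator analogous to choosing a schema module: the set of
    accepted values is the standard vocabulary plus any declared custom
    extension vocabulary."""
    allowed = set(standard) | set(custom)
    def validate(value):
        return value in allowed
    return validate

standard_only = make_validator(STANDARD_CONTEXT)
with_custom = make_validator(STANDARD_CONTEXT, {"field deployment"})
```

Swapping in a different validator corresponds to including a different schema module: one application validates only the standard vocabulary, another also accepts its declared custom terms, and a third could accept anything.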


The LOM standard provided our project with a useful implementation guide, but did not supply all of the metadata elements needed for learner-directed assembly of web-based training. Building on the work of other participants in the IEEE LOM XML working group, we developed a flexible, extensible XML schema for LOM that supports the ability to both extend LOM metadata and optionally validate those extensions. We have demonstrated the utility of our approach by building a prototype application using IBM Redbooks that extends the base schema to support learner-directed assembly of computer-based training targeted at the Information Technology industry.


[1] IEEE 1484.12.1-2002 Learning Object Metadata v1 Final Draft,

[2] IEEE P1484.12.3/D1, 2002-12-15 Draft Standard for XML Binding for Learning Object Metadata Data Model,

[3] IBM Redbooks Web site,

[4] IBM WebSphere software platform,


Robert Farrell
Samuel S. Dooley
John C. Thomas
William Rubin
Stephen Levy

IBM Research
Hawthorne, NY


Back to contents



Introduction of the Core Elements Set in Localized LOM Model


The LOM model has become too complex and hard to use. In our localized LOM model we therefore divide all LOM elements into a core elements set and an optional elements set, so as to make the model more compact, flexible, and easy to comply with.

LOM Draft Standard and CELTS-3.1

The LOM draft standard is arguably the most mature of the LTSC draft standards. As stated in its "Purpose" section, the purpose of the LOM standard is to facilitate the search, evaluation, acquisition, and use of learning objects, for instance by learners, instructors or automated software processes [1].

Adopting the hierarchy and all elements of the LOM model, CELTS-3.1 (Chinese E-Learning Technology Standard) [2] is the localized version in China of the up-to-date IEEE 1484.12.1 draft standard. Among its revisions to the LOM model, the foremost is the introduction of the core elements set.

Why the Core Elements Set ?

The LOM model was designed to be hierarchical and rather complex, which makes it hard to understand and to use in practice. It reportedly takes an average teacher approximately half an hour to fill in all the LOM elements for one resource. Consequently, it is very costly and ineffective to construct a resource database if all LOM elements are mandated for all resources.

Scrutinizing all the LOM elements and their hierarchy, we found that their importance varies: some elements, for example 1.2 "title" and 2.3 "contribute", are indispensable for nearly all learning objects, while 4.7 "duration" may make no sense for a JPEG image, and 7 "relation" is not useful for most raw media either. Hence it is neither effective nor flexible to make all LOM elements mandatory for all users indiscriminately.

Nor can we simply leave users to decide which elements to choose for describing their own resources, as that defeats the purpose of developing a standard. The result would be chaos: some applications would pick one subset of LOM elements and others another, the chosen subsets might or might not intersect, and little portability or interoperability would survive.

A tradeoff is to identify some elements of the LOM model as "mandatory" and the rest as "optional", thereby dividing the elements into two parts: the core elements set and the optional elements set.

All elements in the core elements set must be contained in a metadata instance, while elements in the optional elements set may or may not occur. With this distinction, metadata applications need no longer hesitate to adopt the LOM model simply because they must digest its complex hierarchy; instead they may freely choose whichever optional elements interest them, provided all elements of the core elements set are included. We also avoid the chaos in element choice that would arise if no obligation attribute were imposed, i.e., if all elements were optional.
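This conformance rule, all core elements present and only LOM elements used, reduces to two set comparisons. The element names below are illustrative placeholders, not the normative CELTS-3.1 core set:

```python
# Illustrative stand-ins for the core set and the full LOM element set.
CORE = {"general.title", "lifecycle.contribute", "metametadata.metadatascheme",
        "educational.learningresourcetype", "classification.taxonpath"}
ALL_LOM = CORE | {"general.keyword", "technical.format", "relation.kind"}

def conforms(instance_elements):
    """An instance conforms if it contains every core element and consists
    solely of LOM data elements."""
    elements = set(instance_elements)
    return CORE <= elements and elements <= ALL_LOM
```

An instance containing the core set plus any optional LOM elements conforms; an instance missing a core element, or containing a non-LOM element, does not.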

Content of the Core Elements Set

All the elements in the core elements set are listed in Table 1.

[Table 1 could not be fully recovered; its surviving entries include the Life Cycle category, Metadata Schema, Learning Resource Type and Taxon Path.]

Table 1. The core elements set in CELTS-3.1

The relationship of the CELTS-3.1 core elements set to SCORM [3], CanCore [4] and Dublin Core [5] can be schematized in a Venn diagram (the diameter of each circle roughly reflects the number of simple elements in the core set that circle represents):

Relationship of several core sets
Figure 1. Relationship of several core sets

As can be seen from this diagram, the core set of CELTS-3.1 is basically a subset of CanCore, which in turn is a subset of LOM. We think that CanCore is still too large and complex.

Dublin Core is a rather compact, general-purpose metadata model. The elements that we drop relative to Dublin Core concern the coverage and rights of a resource and its relationships to other learning objects. We regard coverage and relation as minor, while rights are currently neglected in China (a localization feature) and may be added in the future. The elements Dublin Core lacks, however, are those describing the metadata itself and classification information; both are important for constructing and managing resource databases and for the search and evaluation of learning resources, and hence are included in the CELTS-3.1 core set.

The same philosophy applies when we reference the SCORM application profiles. Apart from "rights", the difference between the mandatory elements of the SCORM CA/SCO application profile and the CELTS-3.1 core set is trivial, while the mandatory elements of the SCORM Asset application profile are too few to describe most learning resources.

It is arguably difficult to provide a general core set feasible for all learning resources of any granularity, but we do call for a core elements set that is localized, compact, understandable and practicable.

• mandatory
As we have mentioned, the most important feature of the core elements set is that all its elements must be contained in a conforming metadata instance. This is the additional constraint we impose on the localized application of the IEEE LOM model.

• minimal
The core elements set is the minimal set which a LOM instance must contain to conform to the localized LOM model: a metadata instance that contains all the elements in the core elements set is thereby considered a conforming metadata instance.

• sufficient
The elements in the core elements set are chosen to be sufficient to describe the most important characteristics of a learning object; that is, they are indispensable for almost all learning objects.

• conforming
The original definition of conformance, "A strictly conforming LOM metadata instance shall consist solely of LOM data elements", has been revised to "A strictly conforming LOM metadata instance shall contain all mandatory data elements and shall consist solely of LOM data elements", to reflect the introduction of the core elements set.

In summary, the core elements set is a constraint we impose on the localized application of the LOM model to make it easier to understand, adopt and use in practice. It is a tradeoff between the descriptive completeness and the semantic simplicity of the LOM model.

Status of the Core Elements Set

The core elements set is a recently imposed constraint on the localized LOM model; it has not yet undergone feasibility evaluation through practical use throughout the country and is therefore subject to change. A resource database conforming to this model, as well as a conformance test suite, are under development. We await the results.


[1] IEEE 1484.12, “Final 1484.12.1 LOM draft standard”,

[2] Chinese E-Learning Technology Standard, “CELTS-3.1, Learning Object Metadata: Information Model”,

[3] Advanced Distributed Learning, “The SCORM Content Aggregation Model, Version 1.2”,

[4] CanCore Initiative, “CanCore Element Set 1.1: Draft Version”,

[5] Dublin Core Metadata Initiative, “Dublin Core Metadata Element Set, Version 1.1”,


Xin Xiang, Zhongnan Shen, Ling Guo, Yuanchun Shi
Computer Science Department
Tsinghua University


Back to contents



Two Scenarios Using Metadata


Context of using metadata in education

For the majority of teachers in Spain, applying metadata to learning materials is anecdotal. This seems to be the general situation in education, in contrast with private companies, where the use of metadata is demanded; digital libraries, accordingly, use and recommend metadata for retrieving learning materials. The main difference lies in the fact that teachers, as composers of learning materials, need training to do this, while private publishers have specialized staff to attach metadata to materials that yield economic benefits. Much metadata should be filled in automatically by web page creation tools (Thomas & Griffin, 1998). "Labeling" has thus become more widespread among institutions that can make the effort profitable. Furthermore, on the Internet the process of "labeling" is not generating the expected rise in "clicks" that better cataloguing of information was supposed to deliver.

On the other side, authoring tools such as web page or web animation applications offer no facility for classifying learning materials using metadata standards, perhaps because software companies are waiting for greater consensus among the different international initiatives. Only recently have the most popular learning management systems (LMS) begun to include metadata import, export and retrieval mechanisms. Even software with this "metadata capability", or the specific tools designed for entering learning-related metadata, can be questioned as regular working tools for teachers because of the effort required. Many teachers lack even the initial literacy for making quality web pages, let alone for using standards.

Two scenarios using metadata

Any discussion of the utility of metadata must start by splitting the current situation into two scenarios: the informative space and the formative space.

- The informative space aims to serve the right content to a search process; examples are an electronic encyclopedia or a thematic web search engine. In this space the goal is profitability more than effective learning. Teachers may be involved, but these are often the lucrative businesses of private companies or large publishing groups, where the effort invested in classifying materials pays back quickly and is economically justified. Consider: how many of us keep our personal digital photo album keyword-indexed? Does that imply too much effort? Is a simple folder structure enough? A company that sells digital photographs, however, cannot afford a badly indexed repository: every photograph must be reachable through multiple keyword definitions.

What can teachers expect as a reward for using metadata in informative spaces?

In an informative system, every piece of metadata provided adds value whenever the system takes it into account. Only resources with metadata can be sought directly, and the set of retrieved materials will form a knowledge set only as coherent as its metadata is descriptive. The student's active participation is then responsible for creating the conceptual relations: in informative systems, knowledge transfer follows constructivist theories rather than a prescriptive, behaviourist sequencing of learning. A typical case is the simple retrieval of content through a web search engine.

- The formative space aims to achieve better learning. This space, composed mainly of organized educational materials, is often constructed by teachers; a typical example is an educational website or a more powerful learning management system (LMS). The standard metadata sets (even their mandatory subsets alone) define an excessive group of properties for cataloguing web pages, PDFs, images, sounds or videos. In the words of Hatala and Richards, "the business and educational communities have been slow to adopt the full IMS standard mainly due to the high number of the fields and vagueness with which the values for these fields have been defined. Too much information results in too much time spent cataloguing that no one will bother". IMS Core 1.1 was a first attempt to compile a simplified set of IEEE elements, but has been dropped. ADL SCORM 1.2, also based on IEEE LOM 1.2, distinguishes between mandatory and optional metadata elements. Both illustrate this drive to simplify.

What can teachers expect as a reward for using metadata in formative spaces?

Unlike informative systems, formative ones have a defined instructional sequence and do not need full "labeling" of every learning object. The organizations contained in a content package are enough to guarantee the consistency of the relations among objects. Therefore, in order to minimize effort and redundancy, a minimum set of metadata must be defined for the objects in formative spaces. This simplified set should centre on the two categories of greatest educational relevance: "General" and "Educational". How can the "Educational" category be merely optional in a learning model (as it is in ADL SCORM 1.2 and IMS Core 1.1)?

Category | Educational relevance | Relevant fields | Remarks
General | High | Catalog *, Entry *, Language * | Main fields to identify the learning object. Most of them are not necessary when "metadating" single assets. Teachers should complete these fields in order to provide better search capability.
Lifecycle | Medium | Version * |
Meta-metadata | Low | Metadatascheme |
Technical | Medium | Format +, Size + | Metadata editors and web page editors should provide an automated way of filling in these fields.
Educational | High | Interactivity Type, Learning Resource Type, Typical Age Range *, Typical Learning Time | Other hidden fields are related to these to a greater or lesser degree; for example, the Interactivity Level field is often supplied by the Interactivity Type field.
Rights | Medium | Copyright * | Teachers demand intellectual protection (often not minimally guaranteed in web content).
Relation | Low | - | Covered by the relations in organizations.
Classification | Medium | Taxon Path * | In order to define a search process, this information completes the "General" category.

* User-predefinable fields. The user could predefine these fields when needed. Generally, the resources inside a course package share common characteristics such as language, typical age range and copyright, which should be predefinable and assignable to multiple resources at once.

+ Automateable fields. These fields should be filled in by the application. Any other field that can be automated should be added to this set.
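Combining the two notes, package-level predefined fields (*) applied to many resources plus application-filled fields (+), might look like the following sketch. The field names follow the table above; the merge logic itself is hypothetical:

```python
# Fields a teacher predefines once for a whole course package (* fields).
PREDEFINED = {"language": "es", "typicalagerange": "12-14", "copyright": "yes"}

def describe(title, automated):
    """Merge the shared predefined fields with fields the application fills
    automatically (+ fields, e.g. format and size), so the teacher only
    supplies what is genuinely per-resource."""
    record = {"title": title}
    record.update(PREDEFINED)   # assigned to every resource in the package
    record.update(automated)    # filled in by the authoring tool
    return record

page = describe("unit1.html", {"format": "text/html", "size": "2048"})
```

The teacher types the shared values once per course package; the tool supplies the technical values; only the title and other truly per-resource fields remain manual.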


In order to realise the expected benefits of "metadating" and its generalized use by content-creating teachers, at least two aims have to be pursued:

References (Web)

ADL SCORM v.1.2. The SCORM Content Aggregation Model:

Dublin Core Tools for Automatic Production of Metadata:

Eulers Dublin Core Metadata Template:

Hatala, M.; Richards, G. Global vs. Community Metadata Standards: Empowering Users for Knowledge Exchange:

IEEE Information Technology - Learning Technology - Learning Objects Metadata.(P1484.12), Learning Object Metadata Working Draft (WD5):

IMS Core 1.1 - IMS Learning Resource Meta-data Best Practices and Implementation Guide:

IMS Learning Resource Meta-data Information Model Version 1.2 - Final Specification:

Thomas, CF; Griffin, LS: Who will create the metadata for the Internet? First Monday, 2(12), December 1998.


Javier Sarsa Garrido
Prof. New Technologies applied to Education
University of Zaragoza


Back to contents



IEEE LOM Standard Not Yet Ready For "Prime Time"


There are significant deficiencies in the IEEE LOM Standard that cause many major stakeholders to have a "wait and see" attitude towards the adoption and implementation of LOM. For example, after several years of working with major publishers in K-16, we have found strong resistance to significant adoption of LOM (there has been little resistance to insignificant adoption of LOM, e.g., the Dublin Core-overlap elements).

Largely, the concerns relate to a variety of significant technical problems within the LOM Standard. This paper highlights some of these problems, but does not attempt to be exhaustive. It makes some recommendations to stakeholders and standards developers, and is aimed at promoting a more mature version of the standard that will hopefully become available in the near future.

Good Interoperability Standards

A good interoperability standard may be characterized by:

In this regard, the LOM Standard fails these measures. Some of the issues related to the recent version of the LOM standard are analyzed below (in arbitrary order).

Issue: Most of LOM has been done better by Dublin Core (DC)

Many of the data elements are already covered by DC. Many stakeholders believe that they should just choose DC because DC is better-defined and there is significantly more IT infrastructure to support it.

Recommendation: Just adopt the DC and add "extensions" for learning, education, and training.

Issue: The non-Dublin Core portion of LOM is done poorly

Many stakeholders find the learning-related portions of LOM very weak from a learning perspective. Several data elements, due to their imprecise definitions, are interpreted according to loose opinions of what the data element means and how it relates to content. Based on samples of LOM records, it is difficult to see (1) how one can code these values consistently, and (2) how one can search on such poorly coded metadata. The following data elements seem to be most problematic:

1.7 Structure
1.8 Aggregation Level
2.2 Status
2.3.1 Role
4.7 Duration
5.1 Interactivity Type
5.2 Learning Resource Type
5.3 Interactivity Level
5.4 Semantic Density
5.5 Intended End User Role
5.6 Context
5.8 Difficulty
5.9 Typical Learning Time
6 Rights
7 Relation
9 Classification

These stakeholders believe: (1) if it can't be coded consistently, then (2) it can't be searched consistently, so (3) why spend the significant development cost to add LOM metadata? Additionally, there are considerable internationalization issues that need to be addressed before LOM can satisfy an international audience.

Recommendation: Either improve the definitions, or remove the data elements, or identify which data elements aren't useful for IT interoperability.

Issue: The approach towards LOM value domains is fundamentally flawed

LOM calls these "vocabularies", but they are properly called value domains (VD). In the LOM Standard:

... A vocabulary is a recommended list of appropriate values. Other values, not present in the list, may be used as well. However, metadata that rely on the recommended values will have the highest degree of semantic interoperability, i.e., the likelihood that such metadata will be understood by other end users or systems is highest. ...

This statement misleads users in coding their LOM metadata. To use an analogy, the value single has two different meanings, depending upon whether it is from VD #1 {single, married} or VD #2 {single, married, divorced, widowed}. In other words, it is impossible to determine the meaning of single unless one knows which context single is from (VD #1 or VD #2). Continuing this analogy, if LOM were to have VD #1 for a particular data element, then the LOM standard would encourage users to use single from VD #1, regardless of whether the semantics of single came from VD #1 or VD #2. Thus, the undesired happens: "metadata that rely on the recommended values" will have lower semantic interoperability. This flaw is pervasive throughout LOM's value domains ("vocabularies").
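The ambiguity can be made concrete with a short sketch (the value-domain identifiers are invented; this is a simplified illustration, not an ISO/IEC 11179 implementation): comparing bare values succeeds even when the terms come from different value domains, whereas qualifying each value with its domain exposes the mismatch.

```python
from collections import namedtuple

# A value qualified by the value domain (VD) it was drawn from.
# The VD identifiers below are hypothetical.
QualifiedValue = namedtuple("QualifiedValue", ["domain", "value"])

VD1 = "urn:example:marital-status:v1"   # {single, married}
VD2 = "urn:example:marital-status:v2"   # {single, married, divorced, widowed}

a = QualifiedValue(VD1, "single")
b = QualifiedValue(VD2, "single")

# Comparing bare values loses the context needed for semantics:
print(a.value == b.value)   # True  -- looks interoperable, but isn't
# Comparing qualified values preserves the distinction:
print(a == b)               # False -- same token, different meaning
```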

Recommendation: Adopt the ISO/IEC 11179 (ISO metadata registries) approach towards the definition of value domains.

Issue: Who are the intended stakeholders for the LOM standard?

While most people can imagine some descriptive information about a learning resource, LOM fails to address any significant IT interoperability for any major stakeholder. For example, stakeholders such as developers, librarians, libraries, researchers, teachers, students, publishers, etc. would have a hard time finding LOM satisfying a critical IT interoperability need. In other words, it might be nice to code LOM information, but who will really depend upon it for critical IT interoperability outside their own organization?

Despite the number of ambiguities in LOM, the author has heard very few complaints in the IEEE LOM Working Group (and elsewhere) about interoperability issues of LOM. The author can only conclude that for the current users of LOM: (1) they are so isolated that outside organizations perform no real processing of LOM records (i.e., nothing more than store/retrieve), or (2) they have minimal/insignificant use of LOM so interoperability isn't an issue (e.g., use of only the Dublin Core overlap elements).

For the publishers we have contacted, there seems to be little motivation to invest in LOM right now: (1) it's expensive to code, (2) the results will be inconsistent with other publishers, (3) it won't generate any significant revenue, (4) web search engines (e.g., Google) don't match META tags or other kinds of metadata. In other words: large cost, little benefit.

For teachers, it is difficult to perform wide searches and to search efficiently. Realistically, a better approach for searching would be for content developers to add/embed the words "learning content" or "learning resource" in web pages and use traditional web search engines. A brief comparison of existing LOM repositories vs. traditional web-based search supports this conclusion.

For content developers, LOM was sold on the promise of supporting reusable "learning objects", i.e., chunks of learning content that could be quickly mixed together to form better "bundles" of learning content. Unfortunately, the idea of (significantly) reusable "program objects" (software) has failed for the same reasons that (significantly) reusable "learning objects" will fail: (1) content developers don't want to unbundle all the pieces, for technical, legal, and monetary reasons, and (2) developing such reusable frameworks requires significant consensus on what the framework itself should be (a politically charged and contentious issue).

Recommendation: The LOM standard should concentrate on satisfying the inter-organizational IT interoperability needs of major stakeholders.

In conclusion, these are just a sampling of problems. If these were solved, major stakeholders would reconsider further commitments towards adoption of LOM.

Biographical note

The author has been involved with national and international standards development for more than 20 years. Over the past 5 years, the author has worked with a variety of content developers, publishers, infrastructure engineers, and conformance testers (collectively: stakeholders) on the Learning Object Metadata standard. The author has also participated in the development of the IEEE 1484.12.1 standard (IEEE LOM Standard).


Frank Farance
Farance Inc.


Back to contents



Metadata vocabularies for describing learning objects: implementation and exploitation issues



In the education domain there are proposals [1] for the use of metadata descriptions of e-learning resources which enable computer systems to identify instances relevant to a user’s needs. However, such systems can only work effectively if metadata is associated consistently and accurately with instances of the resources, and if the search facilities provided to users enable the users to exploit the metadata. This paper examines how to address the need for a production process for e-learning resources to incorporate descriptions and classification of the resources within metadata, and considers how to enable users to exploit this metadata. It describes an approach for the prototyping, implementation, exploitation and maintenance of controlled vocabularies [2] for use within metadata descriptions of ‘learning objects', where a ‘learning object' is a reusable self-contained digital entity with an educational intent.

Production of descriptive metadata for learning objects

Some descriptors of e-learning resources, or learning objects, can be determined easily by software. Descriptors such as file size, file format, and location (e.g. the URL) of a resource fall into this automatically determinable category. Terms contained within a resource can also be identified automatically. However, there are many other useful characteristics of e-learning resources which are not automatically determinable, but require human intervention, e.g. because they are characteristic of the expected use of a resource [3].
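A sketch of the automatically determinable category (stdlib only; the descriptor names are illustrative): size, format and contained terms can be computed directly from the resource, whereas descriptors of expected use cannot.

```python
import mimetypes
import os
import re

def automatic_descriptors(path):
    """Descriptors software can determine without human intervention."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        text = f.read()
    # A crude term index: distinct lower-cased words of 4+ letters.
    terms = sorted(set(re.findall(r"[A-Za-z]{4,}", text.lower())))
    return {
        "size": os.path.getsize(path),
        "format": mimetypes.guess_type(path)[0],
        "terms": terms[:20],
    }

# Descriptors such as intended audience or expected educational use
# are characteristics of how the resource will be used, and still
# require human intervention.
```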

Experience at the Open University and elsewhere [4] has shown that such human produced information is often neither complete nor consistent. Problems in obtaining useful metadata can be considered to be:

For the first of these factors, motivation, it is clear that the provision of external standards and systems for handling stores of resources [1, 5] is providing an incentive for organisations to include metadata production within their workflow. For individuals within the system the motivating factors can be more difficult to determine: if the benefits lie only in reuse or in the subsequent finding of objects, the perceived value will be low at the time the object is produced.

Workflow solutions to accuracy and consistency often take a centralised approach, for example focussing the work of adding metadata on a few individuals, such as information science specialists who bring cataloguing experience. For educational material, characterising the material requires subject-specialist knowledge as well as the skill of producing descriptions. In a system where metadata is seen as an additional aspect, often added at the end of a workflow, there is also a risk of adopting simplified systems and missing the opportunity to exploit the descriptions in the initial production of the materials. Greenberg et al. [6] have described how a simple Web form can assist authors in generating "good quality" Dublin Core metadata.

A technical approach to improving accuracy and consistency has been developed, drawing on the experiences at the Open University, Greenberg's work [6] and research within the GUARDIANS project [13]. This considered how to assist those completing metadata, with the aim of allowing a larger group of people, including authors and editors, to supply metadata of sufficient quality. The approach uses the current range of XML editors in conjunction with purpose-built schemas to address the issues of accuracy and consistency. Two aspects are tackled:

  1. vocabularies are developed to control the terminology that can be used in completing a metadata instance,
  2. the information is augmented with descriptions to help people to understand the metadata requirements.

The use of vocabularies was formalised in two ways: (i) by defining a schema for vocabulary structure, and (ii) by referring explicitly, via a URL, to the location of the vocabularies used within a metadata instance.

The structure schema (see e.g. ) offers the facility to define a vocabulary with the properties of a thesaurus, which provides features such as the explicit definition of relationships between terms in the vocabulary. Making such relationships explicit aids users of the vocabulary (in this case metadata authors), because it helps to reduce uncertainty about which is the most appropriate term.

Referencing by location means that vocabularies are stored and referenced via a URL. This offers access to the complete vocabulary when the metadata is interpreted, and also provides a record of vocabulary decisions as they are made. In contrast, existing IMS recommendations [7] suggest referring to vocabularies by name (e.g. Library of Congress, LOMv1.0), and the now-approved IEEE standard (IEEE 1484.12.1-2002 [14]) does recommend the use of URIs to reference vocabularies as good practice, but does not explain how to encode the vocabularies in a way that facilitates their exploitation. Other attempts to codify these practices have tended to rely on separate thesauri (e.g. [8]) or on guidelines [9, 10]. These are valuable but do not provide a tight coupling with the creation process.

The tools that have been developed are: an XML schema for instantiating vocabularies; XSL transformations which transform XML documents conforming to this schema into enumerations; and metadata schemas which include these enumerations. These tools enable rapid prototyping and evaluation of specific thesaurus and metadata schema combinations, and their working relationship is shown in Figure 1.
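The vocabulary-to-enumeration step can be sketched as follows (the vocabulary element names are invented for illustration, and the authors' actual pipeline uses XSLT rather than Python; this only shows the shape of the transformation):

```python
import xml.etree.ElementTree as ET

XS = "http://www.w3.org/2001/XMLSchema"

# A toy vocabulary instance; <vocabulary>, <term> and <label>
# are hypothetical element names.
vocab_xml = """
<vocabulary name="interactivityType">
  <term><label>active</label></term>
  <term><label>expositive</label></term>
  <term><label>mixed</label></term>
</vocabulary>
"""

def vocabulary_to_enumeration(xml_text):
    """Turn a vocabulary document into an XSD simpleType restriction
    whose enumeration facets are the vocabulary's terms."""
    vocab = ET.fromstring(xml_text)
    simple = ET.Element(f"{{{XS}}}simpleType", name=vocab.get("name"))
    restriction = ET.SubElement(simple, f"{{{XS}}}restriction", base="xs:string")
    for label in vocab.iter("label"):
        ET.SubElement(restriction, f"{{{XS}}}enumeration", value=label.text)
    return simple

enum = vocabulary_to_enumeration(vocab_xml)
values = [e.get("value") for e in enum.iter(f"{{{XS}}}enumeration")]
print(values)  # ['active', 'expositive', 'mixed']
```

A metadata schema that includes the generated enumeration then rejects any value outside the vocabulary, which is the coupling between vocabulary and creation process that the article argues for.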

Relationship of tools for rapid prototyping

Figure 1: Relationship of tools for rapid prototyping

Exploitation of descriptive metadata

The dominant purpose of learning object metadata is to increase the effectiveness of retrieval systems. The use of thesauri to store the allowable values for descriptive metadata elements can increase the effectiveness of both machine only and machine/person systems by allowing the retriever to use the additional information within a thesaurus (e.g. relationships) to modify their search criteria [see e.g. 11]. Figure 2 illustrates this and other ways in which an XML encoded thesaurus can be exploited.
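For example, thesaurus relationships can drive simple query expansion (a sketch with an invented narrower-term table; a real system would read these relationships from the XML-encoded thesaurus discussed above):

```python
# Narrower-term relationships from a toy thesaurus.
NARROWER = {
    "mathematics": ["algebra", "geometry"],
    "algebra": ["linear algebra"],
}

def expand(term, depth=1):
    """Expand a search term with its narrower terms, to the given depth.
    The retriever can use the expanded list to broaden a search."""
    terms = [term]
    if depth > 0:
        for nt in NARROWER.get(term, []):
            terms += expand(nt, depth - 1)
    return terms

print(expand("mathematics", depth=2))
# ['mathematics', 'algebra', 'linear algebra', 'geometry']
```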

Diagram summarising mechanisms for exploiting XML encoded thesauri
Figure 2: Diagram summarising mechanisms for exploiting XML encoded thesauri

If retrieval is not the prime motivating factor, e.g. in the case of individuals within the production system, other factors are needed. Accurate, consistent metadata can help authors choose learning objects to fulfil a specific objective, such as the creation of a new object, and help them decide how to process each of the chosen objects to fulfil that objective.

Descriptive metadata can also be used by learning objects themselves. For example, a query and a mechanism for displaying the corresponding results can be encoded in an object, thus making the object's behaviour reflect the environment that is being queried (database(s), internet). Learning objects with these characteristics were implemented within the GUARDIANS project [13]. Again, accurate and consistent metadata is essential if the behaviour of such objects is to be predictable.


This paper has described a particular implementation of vocabularies as thesauri that can then be integrated into the workflow and tools involved in producing metadata. We believe this approach supports the accuracy and consistency in metadata that is a vital part of establishing the value in metadata and its application in the education domain. This approach has already shown itself to be of value at a small scale within our own projects, and we hope that there will be wider adoption of this or similar systems. We recognise further issues need to be addressed, for example:

This work was supported in part by the GUARDIANS project [13].


[1] IMS Global Learning Consortium, Inc,

[2] International Organization for Standardization (1986), “Documentation — Guidelines for the establishment and development of monolingual thesauri”, ISO 2788, 2nd ed., ISO, Geneva.

[3] Marshall, C. "Making Metadata: a study of metadata creation for a mixed physical-digital collection" in Proceedings of the ACM Digital Libraries '98 Conference, Pittsburgh, PA (June 23-26, 1998) pp. 162-171.

[4] Chan, Lois Mai. (1989). Inter-indexer consistency in subject cataloging. Information Technology and Libraries, 8(4):349--58.

[5] The Gateway to Educational Materials,

[6] Greenberg, J., Pattuelli, M. C., Parsia, B., & W. D. Robertson. (2001). Author-generated Dublin Core Metadata for Web Resources: A Baseline Study in an Organization. Journal of Digital Information (JoDI), 2(2)

[7] IMS Learning Resource Meta-Data XML Binding Version 1.2.1 Final Specification

[8] Engineering Information Thesaurus,

[9] EEVL resource suggestion guidelines

[10] FAILTE Metadata guidelines

[11] French, J. C., Powell, A. L., Gey, F. & Perelman, N. (2001). Distributed Information Retrieval: Exploiting a controlled vocabulary to improve collection selection and retrieval effectiveness. In: Proceedings of the Tenth International Conference on Information and Knowledge Management, October 2001.

[12] Van Assche, F., Anido-Rifon, L., Campbell, L. M. & Willem, M. (2002). “Vocabularies repository”, Version 0 Draft 6, July 2002.

[13] GUARDIANS project (IST-1999-20758),

[14] IEEE 1484.12.1-2002, Learning Object Metadata standard.


Andrew Brasher
Institute of Educational Technology
The Open University

Patrick McAndrew
Institute of Educational Technology
The Open University


Back to contents


On the integration of IEEE-LOM Metadata Instances and Ontologies



The Learning Object Metadata (LOM) standard [IEEE 2002] represents, in our view, an important step towards fostering the construction of a new generation of artificial-intelligence-based Web learning systems. The reason for this belief is that it provides a common conceptual vocabulary for describing content elements – generically referred to as learning objects – through metadata items that can be treated as instances of a plain, reduced “knowledge representation language” at the epistemological level, in McCarthy's sense of the term [McCarthy, 1977]. These metadata elements can then be combined with richer knowledge representation formalisms [Davis et al., 1993] or advanced information representation models. This way, learning objects can be integrated into systems that use artificial intelligence techniques to provide extended functionalities, all without breaking the original semantic assumptions of the metadata specifications.

In this article, we briefly describe our experience in early prototype implementations of software that combines LOM metadata items with more advanced forms of information or knowledge representation. Specifically, we have found that metadata records can be effectively linked to shared conceptualizations in the form of ontologies. To clarify our discussion, we will refer to the conceptual UML diagram depicted in Figure 1, which summarizes the major definitions in the LOM standard.

Major elements of the LOM described as a UML Diagram

Figure 1. Major elements of the LOM described as a UML Diagram.

In Figure 1, learning objects are associated with metadata instances, which in turn are aggregations of a number of metadata element instances, each consisting of one or several values. The form and type of the values are constrained by the metadata element they represent. Additionally, these elements are logically grouped into metadata categories (General, Lifecycle, etc.). Both simple and aggregated metadata elements are permitted.

Ontologies and the LOM Standard

The LOM specification explicitly states that its scope does not include “how a learning technology system represents or uses a metadata instance for a learning object”. This means that the LOM is intended to stay at a higher level of abstraction than that of specific knowledge representation formalisms such as description logics, on which current ontology languages like DAML+OIL are based. Nonetheless, LOM metadata will probably be combined with ontologies frequently in the coming years, since its ultimate aim (although not its scope and approach) is similar to the generic aim of the so-called Semantic Web effort, which assumes that shared, well-engineered, consensual ontologies for any imaginable domain will be available on the Web in the near future [Ding & Fensel, 2001].

Ontologies would give coherence to metadata instances by linking any metadata item that refers to a specific domain, e.g. ‘computer programming’ or ‘art history’, to the same knowledge item within a rich conceptualization system. The LOM Meta-metadata category is an appropriate place to declare the dependencies of the metadata record on ontologies, and links to ontology terms can be put into the Classification category (this is preferable to using the Annotation category, since the latter is used in the LOM standard to attach unstructured, informal comments). This form of annotation (depicted conceptually in Figure 2, where a TaxonPath element is associated with a class in an ontology, denoted generically by the <<ontology-term>> UML stereotype) represents a concrete sentence depending on the specified purpose (an enumeration defined in the standard: idea, prerequisite, educational objective, etc.). This way, we can state sentences like “the discipline of the object is programming”, its “prerequisite knowledge is boolean logic”, or its “educational objective is control structures”, connecting the learning object with the ontology terms “programming”, “boolean logic” and “control structures” in different ways. A similar approach has been used in [Palomar et al., 2002] to link data in learner profiles to ontology terms.
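A minimal sketch of purpose-dependent annotation (the ontology URIs and the record layout are invented for illustration; an actual binding would follow the LOM XML schema for the Classification category):

```python
def classification(purpose, term_uri):
    """Build a classification-style entry whose taxon points at an
    ontology term via a (hypothetical) URI. The purpose value comes
    from the enumeration defined in the LOM standard."""
    return {"purpose": purpose, "taxonPath": [term_uri]}

# The same style of entry yields different sentences about the object
# depending on the purpose attached to the ontology term.
entries = [
    classification("discipline", "urn:example:onto#programming"),
    classification("prerequisite", "urn:example:onto#boolean-logic"),
    classification("educational objective", "urn:example:onto#control-structures"),
]

print([e["purpose"] for e in entries])
# ['discipline', 'prerequisite', 'educational objective']
```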

Annotating Learning Objects through Classifications.

Figure 2. Annotating Learning Objects through Classifications.

But we can go further in the use of ontologies to annotate learning objects. If we represent the whole learning object by a term LearningObject in a concrete ontology definition language, we can assert properties about the object (and even about its parts, if we model them inside the ontology) by using arbitrary properties or axioms defined in an ontology. For example, we can state that the prerequisites of the LearningObject are knowledge of the relational model and knowledge of either Java or C++ (note that this and many other arbitrarily complex assertions about learning objects cannot be directly represented by the current LOM metadata elements). Figure 3 shows a screenshot of the definition of that restriction with the “Expression Editor” of the OILEd ontology editor.

Annotating Learning Objects with Assertions inside an ontology.
Figure 3. Annotating Learning Objects with Assertions inside an ontology.

A pointer to the markup describing these kinds of assertions can be put in the metadata instance of the learning object. There is no clear place in the structure of the LOM for this kind of general-purpose assertion, and, for now, we have used the Annotation category for that purpose.

Ontologies can also be used to describe the types of links (a link can be considered a special case of learning object, according to the highly generic definition of the term in the LOM), enabling additional kinds of applications, as described in [Sicilia et al., 2002]. Since this is a specialized form of the case depicted in Figure 2, we will not cover it in detail here.


From the viewpoint of the designer of ontology-based systems, the LOM standard provides the necessary hooks to combine learning objects with reasoning or knowledge-based search functionalities. The purposes of the Classification data element can be used to assert simple sentences about an object, and more complex sentences can be specified by reifying the learning object as an ontology term, so that assertions can be expressed in existing ontology definition languages and generated automatically by existing ontology editors.


[Davis et al., 1993] Davis, R., Shrobe, H. & Szolovits, P. (1993). “What is a Knowledge Representation?”, AI Magazine, 14(1): 17–33

[Ding & Fensel, 2001] Ding, Y. & Fensel, D. (2001). “Ontology library systems: the key for successful ontology reuse”. In: First Semantic Web Working Symposium, California, USA., pp. 93-112

[IEEE 2002] IEEE Learning Technology Standards Committee (2002). Learning Object Metadata. IEEE 1484.12.1-2002

[McCarthy, 1977] McCarthy, J. (1977). “Epistemological problems of artificial intelligence”. In: Proceedings of the Fifth International Joint Conference on Artificial Intelligence, Cambridge, Massachusetts, 1038–1044

[Palomar et al., 2002] Palomar, D., Sicilia, M.A. & García, E. (2002). “Modeling And Interchange Of Enhanced Life-Long Learning Profiles”. In: Proceedings of the International Conference on Information and Communication Technologies in Education (ICTE), Badajoz, Spain

[Sicilia et al., 2002] Sicilia, M.A., García, E., Díaz, P., Aedo, I. (2002). Learning Links: Reusable Assets with Support for Vagueness and Ontology Based Typing. In: Proceedings of the ICCE Workshop on Concepts and Ontologies in Web-based Educational Systems, Technische Universiteit Eindhoven CS-Report 02-15, 35–40


Miguel-Ángel Sicilia Urbán
DEI Laboratory, Computer Science Department
Carlos III University
Leganés (Madrid), Spain

Elena García Barriocanal
Computer Science Department
University of Alcalá
Alcalá de Henares (Madrid), Spain


Back to contents



Versioning of Learning Objects



This article is a report on an ongoing project within the ARIES group at the University of Saskatchewan, which seeks to build a learning object authoring environment. The Learning Object Metadata (LOM) standard [1] has been a welcome specification for cataloguing educational resources, but it contains only weak support for capturing the temporal nature of these resources. This article argues that learning objects will be used in a distributed and mutable fashion. It outlines how versioning is currently supported in the LOM and illustrates the issues associated with this support. Finally, it outlines an ongoing learning object authoring environment implementation that will be used to capture and codify the temporal nature of these educational resources.


The LOM standard provides a general method for describing educational resources to help facilitate the “search, evaluation, acquisition, and use of learning objects” [1]. This implies that a significant feature of a learning object is its ability to be both shared and differentiated from other learning objects. Given this, it seems evident that communities of individuals will form to build, disseminate, and use learning objects. People will modify learning objects to suit their needs, and feed these new learning objects back into the community. The rise of peer-to-peer-based educational tools and repositories (e.g. [2, 3]) furthers this idea by suggesting that not only will objects be constantly modified, but access to objects (and their associated metadata) will change over time as peers join and leave the community.


Core to the idea of creating derivative works is the necessity to capture the temporal history of a learning object. The LOM claims support for this through the lifecycle entity which captures the “history and current state of this learning object and those entities that have affected this learning object during its evolution” [1]. This entity includes a number of child entities, though the two most relevant ones for this discussion are the version and status entities (entities 2.1 and 2.2 respectively). The version entity describes the edition of the object using human readable plain text, while the status entity describes the completion status of the object using a small, predefined vocabulary (draft, final, revised, or unavailable).

Is this information adequate to represent the change in a learning object? Consider a learning management system filled with intelligent software agents whose job is to customize courses for students based on student profiles. Is the string value associated with the version entity enough information for an agent to reason about when comparing two learning objects that may have been derived from some other common learning object? As an example, is a learning object with a version of “1.2.alpha”, which appeared within the LOM specification as an example, better or worse for a student if the agent has already found a learning object with the version “”, and the preferred version of the learning object is “”?

At first it might seem that standardizing on a grammar or vocabulary for representing the version would be an appropriate solution, but this still leaves problems. Consider a textbook as a learning object. A textbook has two versioning elements: the edition number and the printing date. Different printings of the same edition usually change very little – the content stays the same and only minor issues such as spelling, typos, and grammatical errors are fixed. On the other hand, different editions of a textbook can vary greatly – they can add, remove, and heavily modify content, which can change the meaning of the textbook and thus its suitability as a learning resource for a given student. Thus there are at least three different compatibility states for a derived learning object with respect to the object it was derived from – fully compatible, partially compatible, and incompatible.
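The textbook example can be sketched as a comparison rule (the “edition.printing” encoding and the mapping to compatibility states are invented for illustration; as the article notes, a real scheme would need consensus on the grammar and still could not capture edition-level content changes from the number alone):

```python
def compatibility(old, new):
    """One possible mapping from a hypothetical "edition.printing"
    version encoding to the compatibility states discussed above."""
    old_ed, old_pr = (int(x) for x in old.split("."))
    new_ed, new_pr = (int(x) for x in new.split("."))
    if old_ed != new_ed:
        # A new edition may add, remove or rewrite content; the
        # number alone cannot tell partial from full incompatibility.
        return "partially compatible or incompatible"
    if old_pr != new_pr:
        # A new printing only fixes typos and similar minor issues.
        return "fully compatible"
    return "same version"

print(compatibility("2.1", "2.3"))  # fully compatible
print(compatibility("2.3", "3.1"))  # partially compatible or incompatible
```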

Towards a Learning Object Versioning Environment

The lifecycle information within the LOM specification is too coarse-grained to be appropriate for learning objects that will be dynamically discovered and arranged by software agents. While the LOM supports simple human-readable versioning metadata, and a way of relating two learning objects to one another, there is no machine-understandable method of capturing the semantics of the change in a learning object from one version to another.

The ARIES group within the University of Saskatchewan is developing a learning object authoring tool that uses human-in-the-loop techniques to capture semantic changes of learning objects as they are being authored. These changes are then mapped to the individual metadata entities that they affect, and are serialized with a predefined vocabulary. The authoring environment will include the capture of semantics at both a human and a software agent level. Human readable semantics will be encoded without changing the LOM data model by using a combination of the lifecycle (section 2), relation (section 7), and annotation (section 8) categories. Semantics meant for software agent consumption will be captured in a new category and encoded according to best practice guides (such as [4]) where appropriate. This will allow for machine readable and fine grained versioning of learning objects, helping to support the dynamic discovery of appropriate objects by software agents.

In addition to this mapping with the LOM, there are at least two core versioning concepts from the field of Software Configuration Management (SCM) that are being implemented:

  1. Branching – the act of creating a derivative artifact of some artifact which already has a derivative artifact creates a new “branch”. For instance, a learning object describing database management systems might be later customized by two different authors for two specific database classes. This object would thus have two branches to it. Branches are usually identified within an artifact using a unique branch number.
  2. Roll-backs – the act of converting an artifact to an earlier version of itself. An educational institution might purchase a license to use a set of learning objects every year. These objects might be updated every year, but individual educators might want to use older versions that fit better with course content or assessment mechanisms that have already been developed.
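These two SCM concepts can be sketched with a minimal version tree (an invented data structure; the ARIES tool itself is not described at this level of detail in the article):

```python
class VersionNode:
    """A node in a learning object's version history."""
    def __init__(self, number, parent=None):
        self.number = number          # e.g. "1.1" or a branch id "1.1.2.1"
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

    def branch_count(self):
        """A node with more than one child is a branch point."""
        return len(self.children)

    def roll_back(self, steps=1):
        """Walk back to an earlier version of this artifact."""
        node = self
        for _ in range(steps):
            if node.parent is None:
                break
            node = node.parent
        return node

# A database-systems object later customized by two different
# authors for two specific classes -> two branches.
root = VersionNode("1.0")
v11 = VersionNode("1.1", parent=root)
course_a = VersionNode("1.1.1.1", parent=v11)
course_b = VersionNode("1.1.2.1", parent=v11)

print(v11.branch_count())           # 2
print(course_a.roll_back().number)  # 1.1
```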

Implementation of these concepts allows for an agent that finds newer versions of learning objects to manipulate them into older versions if the older versions are more appropriate for the user’s purpose. It also allows an agent to make quantitative comparisons between learning objects when an optimal learning object can’t be found.


[1] IEEE P1484.12.1-2002 Draft Standard for Learning Object Metadata, Learning Object Metadata Working Group of the IEEE Learning Technology Standards Committee (2002) available online at

[2] M. Hatala and G. Richards. (2003) Making a Splash: A Heterogeneous Peer-To-Peer Learning Object Repository. WWW 2003, Budapest, Hungary.

[3] W. Nejdl, B. Wolf, C. Qu, S. Decker, M. Sintek, A. Naeve, M. Nilsson, M. Palmér, T. Risch. (2002) EDUTELLA: A P2P Networking Infrastructure Based on RDF. Presented at the 11th International World Wide Web Conference, Honolulu, Hawaii, USA.

[4] IMS Global Learning Consortium. (2001) IMS Learning Resource Meta-Data Best Practice and Implementation Guide, Version 1.2.1. Available online; accessed December 23, 2002.


Christopher Brooks
Computer Science Department
University of Saskatchewan
Saskatoon, SK, S7N 5A9 Canada

John Cooke
Computer Science Department
University of Saskatchewan
Saskatoon, SK, S7N 5A9 Canada

Julita Vassileva
Computer Science Department
University of Saskatchewan
Saskatoon, SK, S7N 5A9 Canada


Back to contents



A Framework for Creation, Integration and Reuse of Learning Objects



The lack of standards supporting interoperability and reusability of learning content is a major concern in educational technology. Several academic and business initiatives have started to promote the use of learning objects technology in providing strong connections between learners, learning content, content developers and training managers. However, interoperability between different tools is difficult to achieve. In this paper we propose a framework which supports generation, integration and reuse of different kinds of learning objects, following the educational standard specifications and providing teachers/designers with a tool for the creation of didactical units in Web-based self-contained courses.


Learning Objects, metadata specifications, interconnection mechanism, XML.


Learning objects [1] are a promising way to create modules of reusable learning content tagged with meta-data [2][3]. Such meta-data supports effective search mechanisms, providing advantages for students and teacher-developers alike.

Certain initiatives are trying to resolve practical difficulties related to the use of learning object technology. These arise in the indexation and retrieval of material (ARIADNE [4], Warwick Framework [5]), the creation of new learning content based on individual learning requirements (LALO [6]), and the development of standards, specifications and tools (IMS [7], LTSC [8], ADL-SCORM [9]). Stimulated by these initiatives, several computer-based training vendors have implemented their own tools, which have begun to provide us with a wide range of learning objects to choose from. However, interoperability between different learning objects is not always supported.

ELO (Electronic Learning Object) aims to be a framework for the generation, integration and reuse of different kinds of learning objects. It is supported by the ELO-Tool development environment. ELO objects include in their structure a software mechanism which provides content and facilitates access, encouraging the incorporation of heterogeneous object types. In the following sections we present the ELO conceptual model, its representation and its approach to interconnectivity.

The ELO Concept

Learning objects have inherited some concepts from object-oriented software design. For example, an ELO has attributes, behaviours and interfaces which define its interactions with other objects as well as internal and public actions (methods) [10]. A useful analogy is with a 'puzzle piece' in a puzzle (see figure 1). The contour represents the interface, specification and type data of an ELO. Any puzzle piece that matches can go in the hole, but the complete puzzle (course) is the composition of properly matched pieces (ELOs).



ELO Framework

Figure 2 shows the ELO development environment, called ELO-Tool. It is composed of three main modules:


The ELO Model Hierarchy

The ELO Model is characterized by a multi-layer structure which permits the generation of ELOs at different levels, based on three hierarchical elements whose internal structures control the connections between different ELOs. These elements are:




Didactical Units (D.U.): These constitute the highest level of the ELO model hierarchy. They represent a complete process of education and learning, consisting of objectives, contents, detailed activities and evaluation activities, and are the elementary units for programming pedagogical actions (see figure 5). A didactical unit has two components. The first is meta-data, which describes the didactical unit's structure and contains:

  - Requirements: a list of skills and knowledge necessary to complete each particular didactical unit.
  - Objectives: these let the teacher identify the kinds of practices and evaluations to be included in the didactical unit, in order to ensure the acquisition of the skills needed to work with it.
  - Summary: key concepts that must be assimilated during and after each didactical unit.
  - Evaluation: methods and practices that allow the evaluation of the objectives.
  - Skills and knowledge: abilities attained through interaction with a particular didactical unit.

The second component is the multimedia composition, which represents information units, or a set of multimedia contents related in a multimedia presentation (implemented with SMIL [15]).
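The two components of a didactical unit described above might be modelled as follows; the Python layout, field names and sample values are illustrative assumptions, not the ELO implementation:

```python
# Illustrative sketch of a didactical unit's two components: meta-data
# (Requirements, Objectives, Summary, Evaluation, Skills and knowledge)
# and a multimedia composition. All concrete values are hypothetical.
from dataclasses import dataclass

@dataclass
class DidacticalUnitMetadata:
    requirements: list          # skills/knowledge needed to start the unit
    objectives: list            # guide the choice of practices and evaluations
    summary: list               # key concepts to be assimilated
    evaluation: list            # methods that assess the objectives
    skills_and_knowledge: list  # abilities attained through the unit

@dataclass
class DidacticalUnit:
    metadata: DidacticalUnitMetadata
    multimedia_composition: str  # e.g. a reference to a SMIL presentation

du = DidacticalUnit(
    DidacticalUnitMetadata(
        requirements=["basic SQL"],
        objectives=["design relational schemas"],
        summary=["normalization"],
        evaluation=["schema-design exercise"],
        skills_and_knowledge=["relational design"]),
    multimedia_composition="unit1.smil")   # hypothetical file name
```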



ELO Interconnections

The ELO framework must support interconnection between ELOs. The following example shows how three content units (CU(A), CU(B) and CU(C)) are connected to create a didactical unit (DU) (see Figure 6).


Each content unit has its own requirements R(A), R(B) and R(C) and skills and knowledge (S&K)(A), (S&K)(B) and (S&K)(C). CU(A) will be linked to CU(B) only if R(B) is satisfied, for which (S&K)(A) must be included in R(B). In the same way, CU(C) will be linked to CU(A) and CU(B) only if (S&K)(C) is included in R(A) and R(B). The didactical unit created will have a new set of skills and knowledge (S&K)(DU) and requirements R(DU). This can be expressed by:

link (CU(A) , CU(B) ) :- sat (R(B) , CU(A) )
sat (R(B) , CU(A) ) :- implies ( (S&K)(A) ∪ R(A) , R(B) )

link (CU(A) , CU(B))
{R(A)} CU(A) {(S&K)(A)}
{R(B)} CU(B) {(S&K)(B)}
{R(A)-R(B)} CU(A) ; CU(B) {(S&K)(A) ∪ (S&K)(B)}
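The linking rule above admits a direct reading in Python under one plausible interpretation: satisfying R(B) means that every requirement in R(B) is covered by (S&K)(A) together with the initial requirements R(A). The function and set names below are ours, not part of the ELO framework:

```python
# One plausible reading of the ELO linking rule: CU(A) may be linked to
# CU(B) only if B's requirements are covered by A's skills and knowledge
# plus the initial requirements. This interpretation is our assumption.

def can_link(sk_a, r_a, r_b):
    """sat(R(B), CU(A)): everything CU(B) requires is provided."""
    return set(r_b) <= set(sk_a) | set(r_a)

def compose(r_a, sk_a, r_b, sk_b):
    """Requirements and skills/knowledge of the composed unit CU(A);CU(B)."""
    if not can_link(sk_a, r_a, r_b):
        raise ValueError("R(B) not satisfied by CU(A)")
    return set(r_a) | set(r_b), set(sk_a) | set(sk_b)

# Example: CU(A) requires 'sql' and teaches 'tables'; CU(B) requires
# 'tables' and teaches 'joins'. The link is valid, and the composed
# didactical unit accumulates both skill sets.
req_du, sk_du = compose(r_a={"sql"}, sk_a={"tables"},
                        r_b={"tables"}, sk_b={"joins"})
```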


The purpose of the ELO framework is to promote the use of meta-data tagging as a means to interconnect different kinds of learning objects. The ELO-Tool characteristics are such that they allow the creation and integration of learning objects, modification of their educational meta-data schemas, translation between meta-data schemas and support of different XML technologies.


This work is funded by the MCYT (Spanish Ministry of Science and Technology) under project AURAS (TIC-16-50-C02-01).


[1] Wiley, D. A. "Connecting learning objects to instructional design theory: A definition, a metaphor and a taxonomy". In D. Wiley (Ed.), The Instructional Use of Learning Objects. Bloomington: Association for Educational Communications and Technology, 2000.

[2] LOM Standard, "Learning Object Metadata", IEEE P1484.12.1, the last working draft of this standard is available from, 2002.

[3] Katzman J., Caton J., "Evaluating Learning Content Management Systems (LCMS)", Peer3 white paper, May 15, 2001, pp. 7-13.

[4] ARIADNE, Alliance of Remote Instructional Authoring and Distribution Networks for Europe, 2000, Available from:

[5] Lagoze, C. , "The Warwick Framework: A Container Architecture for Diverse Sets of Metadata", D-Lib Magazine, July/August, ISSN 1082-9873, 1996.

[6] LALO Learning Architectures and Learning Objects, 2000. Available from :

[7] IMS Learning Resource Meta-data Specification, Version 1.2 Available from:

[8] LTSC. Learning Technology Standards Committee, 2000. Available from:

[9] ADL. Advanced Distributed Learning, Available from:

[10] Ip A., Morrison I., "Learning Objects in Different Pedagogical Paradigms", Proc. ASILITE 2001. Available from:

[11] XML eXtensible Markup Language, 2000. Available from:

[12] DCMI (Dublin Core Metadata Initiative), Available from:

[13] LOM (Learning Object Metadata). Available from:

[14] El Saddik et al., "Metadata for Smart Multimedia Learning Objects", Proc. Fourth Australian Computing Conf., ACM Press, New York, 2000, pp. 87-94.

[15] Hoschka P., "An Introduction to the Synchronized Multimedia Integration Language”, IEEE MultiMedia, Vol. 5, No 4., September-October 1998, pp. 84-88.


Liliana Patricia Santacruz-Valencia
Carlos III University of Madrid
Communications Technologies PhD. Student

Ignacio Aedo
Computer Science Department
Carlos III University of Madrid

Peter T. Breuer
Telematic Engineering Department
Carlos III University of Madrid

Carlos Delgado Kloos
Telematic Engineering Department
Carlos III University of Madrid


Back to contents



Using the IEEE LTSC LOM Standard in Instructional Planning


We are developing a Multi-Agent System for Web-based Education that has characteristics of intelligence and adaptability [1]. To achieve this, we are using Artificial Intelligence Planning techniques for automatic curriculum generation, a task known as Instructional Planning [2]. It is accomplished by the pedagogical agent, one of the agents of the system [3] [4]. The efficiency of the system depends on a good representation of the domain knowledge, which includes the preferences and knowledge of a student.

The knowledge base of the system stores the domain knowledge as a set of properties about each of the existing concepts. This information allows the system to identify the pedagogical characteristics and the interdependence among the concepts, and it is essential for the process of instructional planning, which is divided into two phases [5]:

To satisfy these requirements we chose the IEEE LTSC Learning Object Metadata (LOM) Standard. In this model a concept is represented as a learning object, a logical container that can be presented on the Web. A learning object metadata standard defines the minimal set of properties required to allow these objects to be managed, located, and evaluated [6].

Table 1 presents the categories and data that make up the knowledge base of our system.

Table 1. Categories and data that make up the knowledge base of our system.

In our work we excluded the classification category and some fields in other categories described in the IEEE LTSC LOM Standard. Omitting this information does not impair use of the model, and its absence simplifies the authoring work.

During content planning, the system uses the information contained in the domain, structure and aggregation level fields of the general category, as well as the information about relationships among objects contained in the relation category. During delivery planning, the system uses the information of the educational category to select the learning objects best adapted to the preferences of the student [4]. It is important to notice that a concept can be represented by several similar learning objects, each with different features contained in the fields of the educational category.

An example will clarify the process of delivery planning. Suppose that the knowledge base has five different learning objects defining the same concept. The information in the educational category of each of these objects is described in Table 2.

Table 2. Learning Objects.

Assume that the other fields in the knowledge base have similar values in each of the five objects. Also, suppose that the student's preferences have the following values: TP = Narrative Text, TI = Expositive, NI = High, DS = Very High. When the delivery planner has to present the concept mentioned above, it checks the student's preferences and compares them with the information in the educational category of each of the five learning objects. It then selects the object most appropriate to the student's preferences. In this example, object 3 would be selected.
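Since Table 2 is not reproduced here, the field values below are invented for illustration, arranged so that object 3 matches every preference, as in the example above; the match-counting scoring rule is likewise our assumption about how such a selection could work:

```python
# Hedged sketch of the delivery-planning selection step: score each
# candidate learning object by how many educational-category fields
# match the student's preferences, then pick the best. The concrete
# field values are invented; only object 3's row is implied by the text.

STUDENT = {"TP": "Narrative Text", "TI": "Expositive",
           "NI": "High", "DS": "Very High"}

OBJECTS = {
    1: {"TP": "Hypertext", "TI": "Active", "NI": "Low", "DS": "Low"},
    2: {"TP": "Narrative Text", "TI": "Active", "NI": "High", "DS": "High"},
    3: {"TP": "Narrative Text", "TI": "Expositive",
        "NI": "High", "DS": "Very High"},      # matches all preferences
    4: {"TP": "Hypertext", "TI": "Expositive", "NI": "Medium", "DS": "High"},
    5: {"TP": "Narrative Text", "TI": "Mixed", "NI": "High", "DS": "Medium"},
}

def select(objects, prefs):
    """Return the id of the object with the most matching fields."""
    def score(obj):
        return sum(obj[k] == v for k, v in prefs.items())
    return max(objects, key=lambda oid: score(objects[oid]))

chosen = select(OBJECTS, STUDENT)   # object 3, as in the example
```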

Figure 1 shows the flow of information among the knowledge base, the instructional planner and the delivery planner.

Figure 1. Flow of information among the knowledge base, the instructional planner and the delivery planner.

The technical category contains information relevant to the presentation of the content, such as software and hardware requirements. The Life Cycle, Meta-Metadata, Rights and Annotation categories contain information important for good management of these objects.

The implementation of our system will produce software for web-based education with innovative features regarding the adaptation of the course to the students [1] [4]. This software is being developed using MySQL as the database system, and the programming languages PHP and Java.


[1] B. Queiroz, F. Dorça, C. R. Lopes and M. A. Fernandes, “An Intelligence Multi-Agent System for Web-based Education.”, In: Proceedings of the XXII Conference of Brazilian Society of Computation, Florianópolis, Brazil, 2002. (In Portuguese)

[2] P. Brusilovsky, “Adaptive and Intelligent Technologies for Web-based Education.” In: Rollinger, C. & Peylo, C. (eds.), Künstliche Intelligenz, Special Issue on Intelligent Systems and Teleteaching, 4, 1999, 19-25.

[3] B. Queiroz, C. R. Lopes and M. A. Fernandes, “A Proposal of a pedagogic agent for Instructional Planning.”, In: Proceedings of the XIII Brazilian Symposium of Computers in Education, São Leopoldo, Brazil, 2002, p. 515. (In Portuguese)

[4] B. Queiroz, C. R. Lopes and M. A. Fernandes, “Automatic Curriculum Generation for a Web-based Educational System.”, Accepted to: The International Conference on Computers in Education (ICCE2002), Auckland, New Zealand, 2002.

[5] B. J. Wasson, “Determining the Focus of Instruction: Content planning for intelligent tutoring systems”, Ph.D. dissertation, Research Report 90-5, Department of Computational Science, University of Saskatchewan, Canada, 1990.

[6] “Draft Standard for Learning Object Metadata”, IEEE Learning Technology Standards Committee (LTSC), [database online], April 18 2001: [cited Jun. 12, 2002]. Available


Bruno Q. Pinto
Department of Computing
Federal University of Uberlândia

Carlos R. Lopes
Department of Computing
Federal University of Uberlândia

Marcia A. Fernandes
Department of Computing
Federal University of Uberlândia


Back to contents



Toward decoupling instructional context, learning objectives, and content in learning object metadata


Many efforts are underway to develop web-based repositories and catalogues that give educators and learners easy access to a wide range of quality educational materials (including EDUTELLA: Nejdl et al. 2002; IMS & IEEE LOM: Hodgins et al. 2002; OCW & OKI: Kumar & Long 2002; GEM: Fitzgerald 2001; ARIADNE: Forte et al 1997; MERLOT: Wetzel 2001; also SCORM and Educause). These efforts aim to support e-learning by ensuring interoperability, reusability, manageability, accessibility, and durability of on-line resources. Successfully serving the needs of teachers, learners, parents, administrators, content developers, and distributors requires a large number of services and resources to be put into place. While the pace and convergence of these e-learning projects is significant, currently available systems, tools, and practices still have many limitations. In particular, the systems tend to lack pedagogical depth. We propose a complementary metadata scheme that will better match the ways teachers think about content and construct lessons, better serve the diverse learning needs of students, and more clearly reflect recent advances in cognitive learning theories. Our proposed metadata design scheme is based on discussions and experiences with many teachers, teacher trainers, and curriculum developers (including Murray & Galton 2002 and Murray 1999).

Many progressive instructional theories (including cooperative learning, inquiry investigations, project-based learning, etc.) involve a teacher selecting instructional materials to meet the pedagogical and pragmatic needs of the situation, often in real time. We have found that teachers are assemblers and composers of content. They repurpose, modify, and combine small-grained resources and parts of lessons more often than they use resources or lessons exactly as found. Learning object metadata schemes (such as LOM), as currently offered and used, tend to limit, confound, and over-simplify three aspects of education and pedagogy that are best represented separately: content, knowledge, and context. "Content" refers to any artifact, multimedia, activity, etc. that can be presented to a student or which a student can use. "Knowledge" here refers to abstract topics or cognitive entities, such as "Newton's Third Law," an automobile maintenance procedure, or "the capital of Italy". "Context" refers to the instructional or learning context, and includes parameters such as the grade level, the teacher's goals, computer hardware limitations, etc. In actual applications of learning objects in learning/teaching situations, the teacher or learner gathers and uses particular content units with the goal of teaching/learning particular knowledge units, all within a particular learning context. What is important to realize is that there is a degree of independence, and of interdependence, and a many-to-many mapping among these three types of learning objects. A content object, such as a picture or a science activity specification, could potentially be used in many contexts, and could be useful to teach many knowledge units.
Current emerging standards, as used, tend to combine the three types of information into a single learning object or metadata specification, and do not adequately represent these three entities and their interrelationships separately. Our scheme adds two new types of entities (learning objects) to the existing types: knowledge units (KUs) and learning contexts (LCs). We use the term Content Unit (CU) to refer to the types of objects that are usually associated with "learning objects."

Knowledge Units. Our goal here is to decouple learning objectives from traditional learning objects so that KUs and CUs each become first-class objects that can flexibly refer to each other. We use a knowledge-based and ontology-based approach (Mizoguchi & Murray 1999) to representing subject matter categories. We replace the semantically impoverished terms in controlled vocabularies/thesauruses with first-class objects called knowledge units (KUs) that represent the concepts, skills, principles, etc. to be learned. Relationships such as prerequisite and generality are defined among KUs, creating a semantic network. The conceptual structures thus created will provide a richer and more pedagogically useful representation of domain categories, and lead to a more powerful indexing scheme. A KU object can point to many CUs that might be used to teach it. (See the discussion on "distributed models for curriculum design" in Murray 1998, 1999.)

Learning Contexts. Our goal here is to decouple learning context from traditional learning objects so that CUs and LCs (and KUs) each become first-class objects that can flexibly refer to each other. When a teacher takes the effort to identify a set of CUs that fit her instructional objectives (KUs) for a particular classroom situation (LC), this combination of KU, CU, and LC should be saved for use by other teachers, as it adds significant value to the educational repository. To do this we propose an object called the Learning Context (LC) that binds a set of KUs and CUs with the parameters that define a particular learning context. Learning context parameters include pedagogical style, student characteristics and situational attributes (many of these parameters already exist in the LOM model).
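The three decoupled first-class object types proposed above can be sketched as follows; the class layout, field names and example values (including the URI) are illustrative assumptions, not a published schema:

```python
# Sketch of the proposed decoupling: KUs, CUs and LCs as separate
# first-class objects that refer to each other. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class KnowledgeUnit:            # KU: a concept, skill or principle
    name: str
    prerequisites: list = field(default_factory=list)  # other KUs

@dataclass
class ContentUnit:              # CU: a presentable resource
    uri: str

@dataclass
class LearningContext:          # LC: binds KUs and CUs to a situation
    kus: list
    cus: list
    parameters: dict            # grade level, pedagogical style, hardware, ...

# The prerequisite relation among KUs forms a semantic network; an LC
# records one teacher's binding of content to objectives in a context.
newton2 = KnowledgeUnit("Newton's Second Law")
newton3 = KnowledgeUnit("Newton's Third Law", prerequisites=[newton2])
activity = ContentUnit("http://example.org/collision-sim")  # hypothetical URI
lc = LearningContext(kus=[newton3], cus=[activity],
                     parameters={"grade": 9,
                                 "pedagogical_style": "inquiry"})
```

Note that the same CU could appear in many LCs, and a KU in many LCs and CU bindings, which is the many-to-many mapping the text describes.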

Our data modeling scheme decentralizes and democratizes metadata creation. Whereas currently the metadata describing the characteristics of a LO is centralized and not particularly malleable, this scheme allows any certified member of the teaching community to post an LC object ascribing properties to the LOs in the context of a particular purpose/context. The scheme effectively encapsulates (provides wrappers for) LOs, allowing any internet resource, regardless of its legacy status or the prior existence of metadata descriptors, to have metadata created for it without modifying the original LO (as suggested in Nejdl et al. 2002). We plan to implement and test this system contingent upon funding.


Fitzgerald, M (2001). The Gateway to Educational Materials: An evaluation Study: Year 2. ERIC Clearinghouse technical report.

Forte, E., Wentland, M. & Duval, E. (1997). The ARIADNE Project: Knowledge Pools for Computer-based and Telematics-supported Classical, Open, and Distance Learning. European Journal of Engineering Education 22(1).

Hodgins, W. et al. (2002). Making Sense of Learning Specifications & Standards: A Decision Maker's Guide to their Adoption. Industry Report by the MASIE Center: Saratoga Springs, NY.

Kumar, M.S.V. & Long , P. (2002) MITs Open Courseware Initiative (OCW) and Open Knowledge Initiative (OKI). At

Mizoguchi, R. and Murray, T. (Eds.) (1999); Proceedings of "Ontologies for Intelligent Educational Systems," Workshop at AIED-99, LeMans France, July 1999.

Murray, T (1998). A Model for Distributed Curriculum on the World Wide Web. J. of Interactive Media in Education 98(5). On-line journal at

Murray, T. (1999). "A Model for Distributed Curriculum: From Tutor-Centered to Topic-Centered Representations," AIED-99 panel "Ontologies for Intelligent Educational Systems," LeMans France, July 1999.

Murray, T. & Galton, A. (2002). "Professional Development for the Integration of Inquiry-based Simulation Software into Secondary School Science Classes." Working paper.

Murray, T. & Leighton, P. (2002). Toward Adding Conceptual and Pedagogical Depth to Educational Digital Library Metadata. Working paper.

Murray, T. (1999). Authoring Intelligent Tutoring Systems: Analysis of the state of the art. Int. J. of AI and Education. Vol. 10 No. 1, pp. 98-129.

Nejdl, W., Wolf, B., Staab, S., & Tane, J. (2002). EDUTELLA: Searching and Annotating Resources within an RDF-based P2P Network. White paper at

Wetzel, M., & Hanley, G. (2001). Evaluation of MERLOT Tools, Processes, and Accomplishments. Center for Usability in Design and Assessment: Long Beach CA


Tom Murray
Hampshire College School of Cognitive Science
Amherst, MA


Back to contents



Standardized Content Archive Management – SCAM

- Storing and Distributing Learning Resources



The use of metadata and international standards is essential for the distribution of learning material as learning objects and learning components. SCAM is an archive system for storing and distributing learning components. By using existing standards for learning-related metadata and metadata in general, such as IMS metadata and RDF, an advanced metadata model is implemented. The metadata model in SCAM supports the use of multiple metadata sets on the same resources, using different vocabularies and taxonomies in a layered manner. Based on the SCAM system, different types of archives can be built, such as portfolios or general learning component archives. Archives can be connected to each other through an Edutella peer. SCAM addresses some of the more common metadata-related problems of storing and distributing learning components.


E-learning, semantic web, learning object, learning component, SCAM, interoperability, archive, metadata, RDF, learning technology standards.


E-learning and the use of ICT in education have exploded during the last couple of years. Much of today's e-learning is distributed using web technology, whether via the World Wide Web, an intranet or an extranet, often through Virtual Learning Environments (VLEs) based on web technology, or through a back-end that uses web technology for presentation and interaction. The rise and growth of e-learning introduces a number of new problems (and, of course, many opportunities). One of the more significant problems concerns the organization and distribution of learning resources. This is the problem that this article addresses, and it is also the purpose of the SCAM project, which is the focus of this article.

The KMR-group at KTH is presently coordinating a collaborative effort that involves the Swedish Educational Broadcasting Company (UR), the National Agency for Education (Skolverket) and the National Centre for Flexible Learning (CFL). These three dominant Swedish public service e-learning players have teamed up and are now jointly contributing to a Public e-Learning Platform (PeLP) based on open source code and emerging international e-learning standards. The SCAM system constitutes a vital part of this platform.

It is becoming more and more common to distribute learning resources as Learning Objects or Learning Components, and the SCAM system uses these concepts to implement the basis for a general archive system. Unfortunately, the definitions of the terms “Learning Object” and “Learning Component” are not well calibrated. There are several definitions (and visions), but most of them have some characteristics in common. One such commonality is the use of international Learning Technology Standards. Another is a modular design ambition and the aim to be reasonably context-independent, and hence reusable. One of the more common visions is the LEGO™-like model, in which several independent Learning Objects can be assembled and contextualized to obtain a compilation of learning material that suits a specific pedagogical situation. This vision has been criticized as too simplified and unrealistic, and other, more complex models, such as the atomic model, have been suggested [Wiley, 2002]. A more exhaustive discussion of the subtleties of the concept “Learning Object” is, however, outside the scope of this article. Here we give a high-level description of the SCAM system, the ideas behind it and the problems it addresses.

The SCAM system was developed to constitute a general basis for constructing standardized archives for digital learning resources. This means that the use of international Learning Technology Standards (as well as other technology standards) is essential. The work is based on the assumption that the adoption of metadata and international metadata standards is hindered by the complexity of the implementations required. At the same time, a great part of the implementation is similar for most projects in this domain. Hence, a common basis would greatly increase effectiveness by enhancing reuse, hiding complex implementation details and providing a higher abstraction level for the average programmer.

The metadata problem

One of the most important missions of SCAM is to serve as a metadata catalogue for learning resources. The resources themselves may be distributed and referred to by URIs. One of the fundamental problems, and a common misconception, regarding metadata is the belief that metadata is objective, static and has logically defined semantics [Nilsson, Palmér, Naeve, 2002]. Since this is not really true, especially not for learning resources, we need a mechanism supporting a metadata ecosystem of dynamically evolving metadata over multiple metadata sets, using different metadata models, vocabularies and taxonomies for the same set of resources. We also need a mechanism that provides metadata semantics. To address these problems, we have turned to the Resource Description Framework (RDF). As Bray expresses it: “RDF is a framework for describing and interchanging metadata.” [Bray, 1998]. The use of RDF is a good start towards designing a system with the desired characteristics. For this reason, the SCAM metadata implementation is based on the newly released RDF bindings for LOM/IMS Metadata. To describe the structure of learning resources, IMS Content Packaging is implemented using an RDF-based version of the IMS Content Packaging specification. This specification was developed by Mikael Nilsson and Matthias Palmér in order to solve this problem (and others) for the SCAM project as well as for the Edutella project. Edutella is an RDF-based peer-to-peer (P2P) infrastructure for metadata interchange and interoperability on the semantic web [Nejdl et al. 2002].
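The layered-metadata idea can be illustrated with plain RDF-style triples: several independent metadata sets describe the same resource, and a query merges the chosen layers. The URIs, property names and merge rule below are invented for illustration; SCAM's actual model uses the LOM/IMS RDF bindings:

```python
# Minimal illustration of multiple metadata sets on one resource, stored
# as RDF-style (subject, predicate, object) triples. The layer names,
# URIs and properties are hypothetical, as is the override semantics.

LAYERS = {
    "publisher": [("urn:lo:42", "dc:title", "Photosynthesis"),
                  ("urn:lo:42", "dc:language", "en")],
    "teacher":   [("urn:lo:42", "ex:gradeLevel", "7"),
                  ("urn:lo:42", "dc:title", "Photosynthesis basics")],
}

def describe(resource, layer_order):
    """Merge layers in order; later layers override earlier ones per property."""
    description = {}
    for layer in layer_order:
        for subject, predicate, obj in LAYERS[layer]:
            if subject == resource:
                description[predicate] = obj
    return description

# A teacher's layer can re-title a resource for a local context without
# touching the publisher's original metadata.
view = describe("urn:lo:42", ["publisher", "teacher"])
```

This captures, in miniature, why metadata need not be static or come from a single provider: each layer evolves independently, and consumers decide which layers to trust and in what order.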

Current state of SCAM

The first version of SCAM (SCAM I) was released in March 2002 and is available at SourceForge under a combined GPL/MPL Open Source license. This release is a working prototype whose main purpose is to prove the concept.

Since SCAM is a basis for building metadata-based archives, it is more or less invisible. To visualize SCAM I, a teacher/student portfolio system was built on top of it, implemented by providing a simple form-based metadata editor and a simple web-based user interface. This SCAM I portfolio can be used for organizing and storing metadata on personal as well as shared learning resources.

As of December 2002, SCAM is approaching version II. Many of the identified metadata problems have been addressed. At the same time, SCAM has gone through a major architecture revision and refactoring. The most significant architectural change is that SCAM II is based on Enterprise Java (J2EE) for reasons of performance and scalability. The release of SCAM II is planned for Q2 2003.

Unsolved problems and the future of SCAM

Despite the successful work on SCAM II, there are still many issues to investigate further in future research.


One of the most important prerequisites for storing and distributing learning material is a sound use of standards. However, the work on the SCAM project has shown us that we need a more flexible metadata model that supports multiple metadata layers for multiple vocabularies, taxonomies and ontologies. This is absolutely essential in order to promote efficient reuse and contextualization of learning resources. We must realise that metadata is not static and that metadata is not necessarily provided once by a single provider. We must establish the right conditions for creating a dynamic, democratic metadata ecosystem, described by [Nilsson, 2001] as:

“a place where metadata can flourish and cross-fertilize, where it can evolve and be reused in new and unanticipated contexts, and where everyone is allowed to participate.”

SCAM is a step in that direction, but more research is needed in this area.


Bray, T. (1998), RDF and Metadata, June 09, 1998.

Nejdl. W., Wolf, B., Qu, C., Decker, S., Sintek, M., Naeve, A., Nilsson, M., Palmér, M., Risch, T.(2002), Edutella: A P2P Networking Infrastructure Based on RDF, Proc. of the 11th World Wide Web Conference (WWW2002), Hawaii, May 7-11, 2002.

Nilsson, M. (2001), The Semantic Web: How RDF will change learning technology standards, Feature article, Centre for Educational Technology Interoperability Standards (CETIS), October 2001,

Nilsson, M. & Palmér, M. & Naeve, A. (2002), Semantic Web Meta-data for e-Learning - Some Architectural Guidelines, Proceedings of the 11th World Wide Web Conference, Hawaii, May 7-11, 2002.

Nilsson, M. et al. (2001), IMS meta-data 1.2 RDF binding, Appendix to the IMS 1.2 metadata binding specification., 2001,,

Wiley, D. A. (2002), Connecting Learning Objects to Instructional Design Theory: A Definition, a Metaphor and a Taxonomy, in The Instructional Use of Learning
Objects [], Agency for Instructional Technology - Association for Educational Communication & Technology, 2002.


Fredrik Paulsson
KMR (Knowledge Management Research)
CID (Centre for user oriented IT Design)
NADA (Dept. of Numerical Analysis and Computing Science)
KTH (Royal Institute of Technology)
100 44 Stockholm, Sweden

Ambjörn Naeve
KMR (Knowledge Management Research)
CID (Centre for user oriented IT Design)
NADA (Dept. of Numerical Analysis and Computing Science)
KTH (Royal Institute of Technology)
100 44 Stockholm, Sweden


Back to contents



Online Discussions: The First Premise for Interactive Learning



There are several essential components of online classes that determine the unique praxis of this form of distance education. A short list would include the all-important syllabus, written lectures, and discussion questions (Betz, 2002). Arguably the most important of these components, from the point of view of pedagogy and learning, are discussion questions.

Because the online learning environment is devoid of face-to-face (F2F) interaction between students, and between students and the teacher, online courses are susceptible to a deficiency of interaction. As a result, online educators have come to depend upon discussion forums to create interaction. It is the interaction of course participants that distinguishes online educational endeavors from correspondence courses, and it can be said that online courses without interaction are reduced to electronic correspondence courses. To say that electronic correspondence courses underutilize the potential for learning available through World Wide Web-based courses is an understatement.

As Muirhead (2002a) points out, “[Online] Teachers will need to develop a class structure and online teaching style that encourages creativity, reflective thinking, and self-directed learning” (¶ 5). In order to foster and realize the full potential for learning online, teachers need to adjust their roles, end the unilateral dispensation of knowledge that characterizes traditional courses, and become facilitators of a student learning that derives bilaterally, from learning ventures shared by all participants. As noted by Brown (2002), the facilitator of learning in an online class “poses questions, moves toward ever higher thinking skills, encourages students to question each other, and provides mini-summaries” (p. 9).


As a backdrop to the topic of online discussions, it is important to understand the meaning of the term learner-centered instruction. Traditional instruction, which is instructor-centric, is inappropriate in the Information Age, and the practice of prominent universities in the area of online education evidences this fact. Novices, and faculty from more traditional universities and public schools, are still trying to make traditional education a viable form of online instruction, often with disastrous results (Muirhead, 2002b). The problem is that traditional efforts, which are endemically unilateral, nullify the prospects of interaction between students and between students and faculty. Given the absence of face-to-face teacher/student interactions, traditionally taught online courses are reduced to electronic correspondence courses.

Facilitation of courses is based on the premise of learner-centered instruction. From the point of view of learning theory, students’ unique long-term memories require that students actively broker learning activities in order to ensure the integration of new learning with existing knowledge and skills. The faculty Facilitator then becomes a prompter and a guide of learning, not a dispenser of knowledge. The premise of learner-centered instruction also fits perfectly with online courses because of students’ continual access to the World Wide Web. With the Information Superhighway passing across each student’s desktop, what additional resources, other than prompting and guidance, can the “teacher” provide to augment learning?

Discussion Questions and Participation

Discussion questions are, in reality, discussion prompts that are added to course content, usually on a weekly basis, to stimulate student interaction. Discussion questions (DQs) can range in number from two to ten or more each week, at the Facilitator’s discretion. The protocol for using DQs is that the first contingent be posted on the first day of the week. Students first post their initial responses to the prompts and then add comments and questions to other students’ responses and comments. The Facilitator reads all postings and asks questions to elicit student discovery. Guidelines for the facilitation of DQs include: (a) the Facilitator should maintain a friendly, positive tone for discussions; (b) the Facilitator should ask deeper, more thought-provoking questions of students based on their responses; and (c) the Facilitator should contribute as much to the discussions as the most active student (Betz, 2002).

A potential problem in the use of DQs in online courses is the level of student participation. For that reason, the Facilitator must set rules for student participation that require students to post a defined number of responses and comments to DQs, subject to penalty in the form of lost points. For example, students could be required to post messages on five of the seven days of the week, or three of the five days of a work week, in order to receive full credit for participating in class discussions. Further, trivial, unrelated or inadequate responses need not be counted towards fulfilling the participation requirements. The Facilitator can set the parameters for acceptable participation, requiring, perhaps, one hundred words for each post or a total of five hundred words across all posts for the week. The number of words and posts or days online is subject to the judgment of the Facilitator; but without a participation requirement that is backed by a corresponding effect on the course grade, student participation in course discussions will likely deteriorate.
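A participation rule of this kind reduces to a simple threshold check. The sketch below merely restates the example figures from the text (five of seven days, five hundred words in total); the class and method names are hypothetical, not part of any actual course-management system:

```java
// Hypothetical check of the example participation rule: full credit requires
// posting on at least five of seven days and at least five hundred words in total.
public class ParticipationRule {

    static final int MIN_DAYS_POSTED = 5;   // example threshold from the text
    static final int MIN_TOTAL_WORDS = 500; // example threshold from the text

    static boolean earnsFullCredit(int daysPosted, int totalWords) {
        return daysPosted >= MIN_DAYS_POSTED && totalWords >= MIN_TOTAL_WORDS;
    }

    public static void main(String[] args) {
        System.out.println(earnsFullCredit(5, 620)); // meets both thresholds
        System.out.println(earnsFullCredit(6, 340)); // too few words in total
    }
}
```

In practice the Facilitator would also have to decide which posts count at all, since trivial or unrelated responses are excluded from the tally.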


Online courses run the risk of becoming inert electronic correspondence courses if traditional teaching techniques are relied upon to instigate student learning. In the absence of face-to-face interactions between students, and between students and instructors, online discussions must be created that promote interaction for learning. Online discussions, if they are to have optimal effect, presuppose a shift from the traditional, instructor-centric mode of learning to a clearly more Socratic, learner-centric mode. The logistics of hosting online discussions require student discipline and a refined protocol of accountability, the oversight of which is among the newly defined duties of the Facilitator of an online course.


Betz, M. (2002). A case study of essentials of practice at an online university. USDLA Journal, 16(10). Retrieved December 17, 2002, from

Brown, D. (2002). The role you play in online discussions. Syllabus: Technology for Higher Education, 16(5), 9.

Muirhead, B. (2002a). Integrating critical thinking into online classes. USDLA Journal, 16(11). Retrieved December 17, 2002, from

Muirhead, B. (2002b). Salmon’s e-activities: The key to active online learning. USDLA Journal, 16(8). Retrieved December 17, 2002, from


Muhammad K. Betz
University of Phoenix Online


Back to contents



University Infoline Service – Internet telephony for interactive communication in e-learning


Internet telephony now enables a wealth of new communication services. Traditional telephony services, such as call forwarding, call waiting or call transfer, can be enhanced with directory services, the Web, or e-mail. The programmability of Internet telephony services is a crucial issue in providing them. There is a strong connection between programming Internet telephony services and the protocols used for their delivery; among these, signalling protocols play an important role. A number of signalling protocols have been defined for the Internet. In this paper, we concern ourselves only with the Session Initiation Protocol (SIP). Using the example of the University Infoline service, we introduce a new category of service features that enhance so-called “click-to-call” services. The service can extend an e-learning system with the possibility of on-line voice communication between a student and a teacher.

Session Initiation Protocol (SIP) [1] is a signalling protocol developed to set up, modify and tear down multimedia sessions over the Internet. It provides signalling and control functionality for a large range of multimedia communications. SIP was designed to work hand in hand with other core Internet protocols such as HTTP and SMTP, and many functions in a SIP-based network rely upon complementary protocols, including IP. SIP’s text-based message encoding is borrowed from the Web-browsing scheme (HTTP).
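As an illustration of this text-based encoding, a minimal SIP INVITE request (all addresses below are hypothetical examples, not part of the Infoline service) looks much like an HTTP request:

```
INVITE sip:tutor@example.org SIP/2.0
Via: SIP/2.0/UDP student-pc.example.com:5060
From: Student <sip:student@example.com>
To: Tutor <sip:tutor@example.org>
Call-ID: 3298420296@student-pc.example.com
CSeq: 1 INVITE
Contact: <sip:student@student-pc.example.com>
Content-Length: 0
```

Each header is a readable text line, which makes SIP messages straightforward to generate, parse and debug with ordinary text-processing tools.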

For defining services, SIP takes a different approach from standard telephony: SIP services can be programmed either by trusted users (such as administrators) or by untrusted users (such as end users).

Using the example of the University Infoline service, we show the implementation of a SIP service by trusted users, using the Java programming language and a proprietary SIP API. The service logic is implemented on the side of the SIP application server (which combines the functions of a SIP registrar, a SIP proxy and a Third Party Call Control module). Thus, the service logic is a Java programme that directs the application server’s actions based on inputs from the SIP messages exchanged between the SIP elements, the service logic and the location database, and on the data obtained from the Web page.

To run the University Infoline Service (Figure 1), an account for a tutor has to be created by an administrator (1) and the account data stored in a database (2). Logging in with his username and password (3), the tutor edits his personal information and time availability (4). The tutor then registers and authenticates his SIP UA with the SIP service server (5). The SIP server collects and stores the location data of tutors in the database (6).

Figure 1: University Infoline Service

If a student browsing the University web space wishes to get detailed information on a course, he clicks on the hyperlink (7). He gets a window with the course description, together with the presence information of the course tutor (retrieved from the database (8)) and a proposal to establish a SIP call. If the student provides his current SIP address and clicks on the “Call” button, his HTTP request is sent to the web server, and a new web page is presented in a new browser window. Into this window, a Java applet is loaded (9). The applet then establishes a communication socket with the SIP server (service logic) and sends the information about the caller (student) and callee (tutor) to it (10). The service logic accepts the request for a call and, to get the current status of the tutor, inquires of the database (11).
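Step (10) can be sketched as follows. This is a minimal, hypothetical illustration only: the host name, port and the line-based message format are our assumptions, not the actual protocol used between the applet and the service logic:

```java
import java.io.PrintWriter;
import java.net.Socket;

// Hypothetical sketch of step (10): the applet opens a TCP socket to the
// service logic and sends the caller's and callee's SIP addresses.
public class CallRequest {

    // Builds the line-based message; this format is an assumption for illustration.
    static String formatRequest(String studentSip, String tutorSip) {
        return "CALLER: " + studentSip + "\n" + "CALLEE: " + tutorSip + "\n";
    }

    // Sends the request over a plain socket (host and port are placeholders).
    static void sendRequest(String host, int port, String studentSip, String tutorSip)
            throws java.io.IOException {
        try (Socket socket = new Socket(host, port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            out.print(formatRequest(studentSip, tutorSip));
            out.flush();
        }
    }

    public static void main(String[] args) {
        System.out.print(formatRequest("sip:student@example.com", "sip:tutor@example.org"));
    }
}
```

The service logic at the other end of the socket would parse these two lines and proceed with the database lookup of step (11).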

We handle the different states of a SIP call that may occur between a student and a tutor according to the tutor’s personal preferences (a tutor can set the times of day when he/she is available and when he/she wishes to accept calls initiated from the web on his/her personal SIP UA (User Agent)).
Based on the current status of a tutor, the service logic performs different actions:

  1. Tutor is online and available for calls initiated from the Web
  2. Tutor is online and available for calls initiated from the Web (but he is busy with another call)
  3. Tutor is online and not available
  4. Tutor is off-line
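The four cases above can be sketched as a simple dispatch in the service logic. The enum values and action names below are illustrative only; the paper does not specify the actual implementation:

```java
// Illustrative dispatch over the four tutor states described above.
public class StatusDispatch {

    enum TutorStatus { ONLINE_AVAILABLE, ONLINE_BUSY, ONLINE_UNAVAILABLE, OFFLINE }

    // Returns the (hypothetical) action the service logic takes for each status.
    static String decideAction(TutorStatus status) {
        switch (status) {
            case ONLINE_AVAILABLE:
                // Case 1: ask 3rd party call control to connect student and tutor.
                return "ESTABLISH_CALL";
            case ONLINE_BUSY:
                // Case 2: collect the student's contact data for later notification.
                return "COLLECT_CONTACT_DATA";
            default:
                // Cases 3 and 4 proceed the same way as case 2 (from step 13).
                return "COLLECT_CONTACT_DATA";
        }
    }

    public static void main(String[] args) {
        System.out.println(decideAction(TutorStatus.ONLINE_AVAILABLE));
    }
}
```

This mirrors the fact, noted below, that cases 2 through 4 share the same contact-collection path and differ only in how the student is later notified.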

In case 1, the service logic asks the 3rd party call control to establish a call between the student and the tutor (12).

In case 2, the service logic instructs the Java applet to allow the student to leave his/her contact data, which is sent to the service logic in the SIP proxy server (13) and to the database (14). When the tutor becomes free, the service logic informs the student in the preferred way (either by setting up a 3rd party call (15), by sending an e-mail notification (16), or both). In cases 3 and 4, the procedure is the same as in Figure 1 up to step 11. At that point, the service logic finds out that the tutor is off-line or not available, and thus proceeds as in Figure 1 from step 13.


Efficient programming of new communication services is a key issue for Internet telephony. With SIP, services can be created that combine elements of telephony with other applications such as e-mail, messaging, the Web and video streaming. With the example of the University Infoline service, we have presented a category of new, enhanced “click-to-call” services, which can be easily integrated into any type of voice-enhanced e-commerce service.


[1] M. Handley, H. Schulzrinne, E. Schooler, J. Rosenberg, “SIP: Session Initiation Protocol,” RFC 2543, Internet Engineering Task Force, March 1999.


Tatiana Kováciková
Department of Information Networks
University of Zilina
Moyzesova 20, 010 26 Zilina, Slovakia

Pavol Segec
Department of Information Networks
University of Zilina
Moyzesova 20, 010 26 Zilina, Slovakia


Back to contents



Call for papers


Conference Name: CAL'03 Conference, 21st Century Learning
Date: 8 - 10 April 2003
Venue: Queen's University Belfast, Northern Ireland
Web site:


Back to contents


End of newsletter