Monthly Archives: July 2005

DIDET project


The DIDET project impressed me with their presentation at the recent HEA workshop on “Designing for Blended Learning” in Southampton. We were told DIDET stands for ‘Digital Libraries for Global Distributed Innovative Design, Education and Teamwork’. This collaboration between the University of Strathclyde and Stanford University in the US aims to harvest digital resources produced by design engineering students, and to enhance student learning in the process.

Several points resonated with me as examples of sound and successful e-learning:

Firstly, students were working across sites, which enhances their learning experience and also provides them with a simulated industrial environment in a global market. Secondly, the technologies used for the student projects provided a rich, flexible and interactive learning environment.

It is sadly not very common for universities in the UK to view their students as authors and knowledge creators, and that is a mistake. The project demonstrates that student creativity is immeasurable and often surpasses that of the lecturers. Products resulting from projects such as the ice-crusher design ideas that DIDET used are therefore usable in the context of institutional repositories.

Content produced through project work also opens up possibilities for student e-portfolios and future employability. This, of course, comes with additional issues that the project is looking into.

The infrastructure provided to the project confirms and supports my claim from a previous post about a dual sharing infrastructure. Content is managed at two levels: (a) informally and dynamically via a wiki, and (b) formally, more permanently, and quality-assured in a digital library. The former allows group collaboration and ad-hoc knowledge structuring; the latter caters for reuse, quality, metadata and standards, and granularity.
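To make the two levels concrete, here is a minimal, hypothetical sketch (the class and field names are my own illustration, not anything used by DIDET): the wiki page carries almost no required structure, while the digital library record insists on metadata, granularity and quality assurance before anything is accepted for reuse.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class WikiPage:
    """Informal level: free-form, editable by the whole group, no metadata demands."""
    title: str
    body: str
    contributors: List[str] = field(default_factory=list)

@dataclass
class LibraryRecord:
    """Formal level: quality-assured and described for reuse and discovery."""
    identifier: str                 # persistent id for reuse
    title: str
    description: str
    keywords: List[str]             # metadata supporting search and standards mapping
    granularity: str                # e.g. "asset", "learning object", "course"
    quality_assured: bool = False
    source_page: Optional[WikiPage] = None  # provenance: where the content matured

def promote(page: WikiPage, identifier: str, keywords: List[str],
            granularity: str = "learning object") -> LibraryRecord:
    """Move content from the informal wiki into the formal library once it has stabilised."""
    return LibraryRecord(
        identifier=identifier,
        title=page.title,
        description=page.body[:200],   # short abstract drawn from the page
        keywords=keywords,
        granularity=granularity,
        quality_assured=False,         # QA happens before the record is published
        source_page=page,
    )
```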

The project tasks concentrated on concept mapping and the mapping of design problems in teams. The supporting online platform (TikiWiki) provided the distributed groups with a wide set of IT tools, including blogs, forums, a shoutbox, content storage, file galleries, FAQs and a survey tool. The wiki environment allowed the small groups to contribute jointly to content through easy editing and upload mechanisms.

All in all, an impressive example of how e-learning can enhance the learning experience, engage students and benefit the institution at the same time!

Primitives to Patterns


To me, (educational and other) patterns have two connotations. One is to do with ‘repetitiveness’ or ‘re-occurrence’; in planning, this can mutate to become reuse, i.e. a planned re-occurrence. The other meaning is to do with some ‘holistic notion’ of something, e.g. a ‘face’ instead of ‘nose, mouth and eyes’. The latter suggests that a pattern consists of components – something that fits in well with activity structures and UoLs. The first notion is obviously also very useful when considering LD patterns.

In LD there is no necessity to adhere strictly to Alexander’s definition of patterns. They can, in my opinion, be treated as domain-specific. Indeed, they must have been around in child psychology long before Alexander and before software development, because that is the way children learn – by recognising social, physical, linguistic and behavioural patterns (sometimes reinforced through feedback). A typical example would be first language acquisition, or a child coming to recognise and understand the concept of “weekend” as opposed to weekdays: it is, for instance, the day when mum and dad do not go away to work, or some such thing. This is what we are looking at in Learning Design, i.e. the recognition of specific behaviours, which enables us to (1) better understand what we are doing, and (2) re-apply it in other situations in an analogous way.

I have some apprehensions about “best practice” or even “good practice”. These are twofold: firstly, who says what is good or best? Secondly, what is good today is outdated tomorrow. Whatever Alexander thought, I believe patterns should carry as little in-built value as possible. The value attached to them will reside externally, e.g. in the amount of reuse over a period of time.

Thus Learning Design is about capturing practice – full stop. Patterns may help us to understand, express and re-apply practice.

Patterns are not primarily about creating something new or finding new solutions. A pattern can, however, be an analogous or adapted manifestation of something earlier, and in rare cases it may even provoke a new or complementary approach.

PS: See the full discussion on this topic at learningnetworks.

The ELF is dead – long live the ELF!


Were you around when VLEs and then MLEs got us all excited? Now it is the ELF, or e-Learning Framework, that gets us funding from JISC and therefore enthuses the HE community. But what is it? The recent JISC programme meeting in Cambridge should have clarified this for the community so that we could all sing from the same hymn sheet. Instead it highlighted the differences.

The ELF is meant to be a reference framework of e-learning services along which institutions can develop their service architecture. It is not a wholly new idea (cf. the much wider and more learner-oriented approach by George Copa from Oregon State University in 2000); it is at present incomplete and, being by definition open, it will never be complete. That is fine, but are we certain about the bits that are there at present? The answer is “not quite”. JISC programme funding is directed towards reference models as if they were the holy grail. Institutions bidding to go on the quest can hope that their adventure will receive monetary support. Quickly, however, a debate broke out between the honourable knights of the order as to what a reference model is supposed to do.

Some argued that a reference model should be prescriptive in nature. That provoked resistance, for we (the community) want freedom of choice, and prescription would work against competition principles – also for providers of commercial software. It is not JISC’s role to force-feed us a particular solution, only to help institutions make a choice that suits their needs. The other opinion held was that a reference model is just another name for an example of good practice: the projects involved in developing reference models merely aim to exemplify how it could be done if one chose to follow their approach.

The next question is what constitutes a service and how you define one. It’s a bit like the eternal juggling of faculties and their associated domains: have not most universities moved from three to five and back to four faculties over their long existence? So how do we distinguish services, e.g. repository services from hosting services and archival services? It’s not easy, and although I can foresee future convergence between some of these (bless the Integrated Information Environment), different standards, data models, business processes and tools are currently associated with each of them.

So is the ELF grail a fancy idea that will go away as we leave the Dark Middle Ages of e-learning? Sooner than many might think, I believe. However, there may be some legacy of harmonisation between schools of thought and implementations.

I see the ELF as potentially providing institutions with a Richter scale of e-learning service provision that they can hold on to as they evolve and develop. You can measure your progress against it, and it leaves it to the HEIs to decide how far they want to go.

Standards Jungle


DIDL, METS, OAI-PMH, MPEG-21, LOM, PREMIS, IMS-CP, SCORM, RAMLET, DC, DREL, SAML, … has the e-learning world gone mad?! The overload of data reference models, expression languages and metadata standards may soon implode under its own weight.

Has anyone actually got a grip on all these emerging and existing standards and successfully implemented either a service-based or system-based e-learning architecture that conforms to all of these? A recent JISC programme summit exposed that even the guys from CETIS have little to no clue as to what they are, how they work with each other and how they should be handled.

And that’s just the theory (and in theory everything usually works!). There is extremely limited practical experience that would support the abundance of models and standards.

They are created primarily to guarantee interoperability between systems. In the current standards jungle, however, I see this aim as very much threatened by overkill and confusion.

So why are there so many of them? Is it certain personalities who seek recognition in the community by inventing a “specification” or “reference model” a day? A sign of technological virility? Is it organisations trying to give themselves purpose, a place in the market, and a future? Or competing communities, such as librarians versus technologists? Or is it just overzealous goodwill and the naivety of believing that more standards will do better than fewer?

Not only are they all “open” standards – which, unfortunately, has come to mean “open to interpretation” (e.g. by commercial companies that want to protect their product in the market) – but they have also become an unmanageable overload on a system that still requires human input to create the majority of appropriate support and metadata. And what for? So that a tiny digital object like a semi-colon can be wrapped like a Russian doll in dozens of onion-layered data packages.

Hmmm. I do think some standards are needed. However, not every electron microscope, telescope, or web application needs to produce its own individual data reference model and metadata scheme. The effort that goes into standards implementation looks set to outweigh the benefits. Once created and certified, a standard is far from easy to get rid of, even when superseded by something better. What we urgently need in this overcrowded standards space, therefore, is natural selection in which only the fittest survive.

OAIS and PREMIS


Digital preservation is an emerging field in the digital age. It concerns itself with the long-term preservation of digital data beyond the current data structures, formats and storage media. The UK’s Digital Curation Centre organised an international conference in this field which provided me with a steep learning curve.

The term ‘ingest’ was totally new to me. It denotes all the [human] processes that go into preservation. It is front-loaded to a digital archive or repository, which is where the data goes. It is a summary term for activities such as checking integrity, duplicating, describing, applying metadata, cataloguing, and so on – the whole lot.

There is a cost even to deciding whether to ingest or discard something. In the current environment, discarding is always cheaper than ingesting. But what if it were cheaper to ingest than to make a decision? And what if there were analysis tools that made huge collections easy to search? The example that came to my mind was Google’s Gmail approach of providing users with a powerful search tool instead of folder structures.

Maybe manually applied metadata will eventually become a thing of the past, as I outlined in an earlier posting. There are certainly efforts being undertaken to produce as much metadata as possible in an automated way. New laboratory equipment has metadata creation functionality built in, and advanced picture searches are beginning to analyse the semantic content of images rather than seeing only pixels. So maybe – just maybe – in future we will need to supply less of this descriptive data ourselves.
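As a small illustration of the direction of travel, the sketch below (plain Python standard library, not tied to any particular repository product) derives a handful of technical metadata fields from a file automatically – the kind of information no human should have to type in by hand.

```python
import hashlib
import mimetypes
import os
from datetime import datetime, timezone

def technical_metadata(path: str) -> dict:
    """Derive basic technical metadata from a file without human input."""
    stat = os.stat(path)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    mime, _ = mimetypes.guess_type(path)
    return {
        "filename": os.path.basename(path),
        "size_bytes": stat.st_size,
        "last_modified": datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat(),
        "mime_type": mime or "application/octet-stream",
        "sha256": digest,   # fixity information, useful for preservation
    }
```

Descriptive metadata (what the object is about) is of course harder to automate than these technical fields, which is exactly where the semantic analysis efforts mentioned above come in.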

The key standard in the area of preservation is the OAIS (Open Archival Information System) reference model for long-term preservation. It covers the most vital parts of the processes and defines key elements, such as the principle that ‘information’ is not only a string of bits but must be usable. Basically, the OAIS model bundles the digital object with representation information containing all the explanations necessary to understand the object.

[Figure: OAIS Information Model]
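The bundling idea can be expressed in a few lines. The sketch below is only my own shorthand for the concept, not the OAIS specification itself: the bit stream on its own is meaningless; only together with its representation information does it become usable information.

```python
from dataclasses import dataclass

@dataclass
class RepresentationInformation:
    """Everything needed to interpret the bits, e.g. format specs, code books, software notes."""
    format_description: str
    semantic_notes: str

@dataclass
class InformationObject:
    """OAIS-style bundle: the raw data plus the knowledge required to understand it."""
    data: bytes
    representation: RepresentationInformation
```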

The OAIS also describes the management process required for preservation, starting with the creator submitting a SIP (Submission Information Package). This undergoes the ingest process into the archive, where it is stored and managed. At the other end of the OAIS, the user retrieves a DIP (Dissemination Information Package), which is not necessarily identical to the SIP. What sits in the archive is an AIP (Archival Information Package) together with PDI (Preservation Description Information), the documentation of how it was preserved.

[Figure: OAIS Reference Model]
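Put as a pipeline, and again only as an illustrative sketch of the flow described above (the function names and field choices are mine; only the package acronyms come from OAIS):

```python
import hashlib
from dataclasses import dataclass

@dataclass
class SIP:
    """Submission Information Package: what the creator hands over."""
    content: bytes
    descriptive_metadata: dict

@dataclass
class PDI:
    """Preservation Description Information: documentation of how the object is preserved."""
    provenance: str
    fixity_sha256: str
    ingest_date: str

@dataclass
class AIP:
    """Archival Information Package: what the archive actually stores and manages."""
    content: bytes
    descriptive_metadata: dict
    pdi: PDI

@dataclass
class DIP:
    """Dissemination Information Package: what the user retrieves (not necessarily the SIP)."""
    content: bytes
    descriptive_metadata: dict

def ingest(sip: SIP, provenance: str, ingest_date: str) -> AIP:
    """Ingest a submission: record fixity and provenance alongside the content."""
    digest = hashlib.sha256(sip.content).hexdigest()
    return AIP(sip.content, dict(sip.descriptive_metadata),
               PDI(provenance=provenance, fixity_sha256=digest, ingest_date=ingest_date))

def disseminate(aip: AIP) -> DIP:
    """Produce a dissemination package; the archive may transform content on the way out."""
    return DIP(aip.content, dict(aip.descriptive_metadata))
```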

One question I put forward was how the model links different digital objects. Sometimes an object does not make sense unless it is part of a complete set. Take, for example, an astronomical photograph of a quadrant of the sky: only together with a shot taken 10 minutes later does it become evident that there is a moving comet in the photograph. The answer to my question, however, was that the OAIS does not link objects – perhaps a weakness?! The OAIS also does not specify a taxonomy to facilitate retrieval; it leaves this to the implementation.

However, I found out a little later that some data models mapped onto OAIS, such as PREMIS (Preservation Metadata Implementation Strategies), do reference Intellectual Entities that consist of multiple objects, e.g. a movie made up of an audio track and a video track, each of which can be treated as an individual object. PREMIS is a specification with widespread acceptance and many implementations, and many projects aim to be PREMIS conformant.
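The relationship can be pictured roughly like this – an informal sketch of the idea, not the actual PREMIS data dictionary or its XML serialisation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DigitalObject:
    """An individually preservable object, e.g. one track of a movie."""
    identifier: str
    mime_type: str

@dataclass
class IntellectualEntity:
    """A coherent unit of content made up of one or more objects."""
    title: str
    objects: List[DigitalObject] = field(default_factory=list)

# A movie treated as one intellectual entity with separately managed tracks.
movie = IntellectualEntity(
    title="Example film",
    objects=[
        DigitalObject("obj-001", "video/mpeg"),
        DigitalObject("obj-002", "audio/mpeg"),
    ],
)
```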

An interesting concept, contrasting with other repository ideas, is that a file under OAIS is not modifiable: modification leads to a new file, i.e. a new object. So there are no versioning issues, but any change requires a separate ingest from scratch.
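A content-addressed store makes the point nicely. In the hypothetical sketch below (my own illustration, not anything mandated by OAIS), an object’s identifier is derived from its bits, so any ‘modification’ necessarily produces a new object with a new identifier rather than a new version of the old one.

```python
import hashlib
from typing import Dict

class ImmutableStore:
    """Write-once store: objects are keyed by the hash of their content."""

    def __init__(self) -> None:
        self._objects: Dict[str, bytes] = {}

    def ingest(self, content: bytes) -> str:
        object_id = hashlib.sha256(content).hexdigest()
        # Re-ingesting identical bits is a no-op; changed bits become a new object.
        self._objects.setdefault(object_id, content)
        return object_id

    def retrieve(self, object_id: str) -> bytes:
        return self._objects[object_id]

store = ImmutableStore()
first = store.ingest(b"report, draft 1")
second = store.ingest(b"report, draft 2")   # a separate ingest, a separate object
assert first != second
```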

I also found it interesting that curators look at their work from the perspective of a string of bits: if you were to unearth a string of bits in 20 years’ time, what tools would you need to be able to understand this ‘information’?

Digital Preservation Archives


At the Digital Curation Centre’s workshop on long-term curation within digital repositories, it was initially unclear to me why an institutional repository such as the Learning Objects Repository (LOR) we are operating cannot be the same thing as a digital archive for preservation. Some speakers seemed to suggest there needs to be coherence in strategy and processes for both, yet they kept the two quite separate. My immediate gut reaction was: oh no, not another data silo with workflow management, institutional commitment, system integration issues and cost!

Pondering this question for a while, at least one distinguishing factor became clear as to why they cannot be one and the same repository, and why we cannot use the OAIS model for both. There is one crucial difference: the digital archive needs to contain the real content object, while other repositories can and will contain links to external objects (metatagged references) but not necessarily the objects themselves. The latter makes no sense in preservation, because who would want to preserve a link to nowhere? The other institutional repositories, however, will have external items referenced in them which, for (copy)right or other reasons, cannot be physically stored. They are nevertheless critical to day-to-day operations, e.g. a BBC video or audio programme.
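A hypothetical catalogue entry makes the difference visible: an institutional repository can legitimately hold a record whose content lives elsewhere, whereas a preservation archive must hold the bits themselves. The names, fields and URL below are my own illustration, not any real repository schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CatalogueEntry:
    identifier: str
    title: str
    metadata: dict
    content: Optional[bytes] = None      # the bits themselves, if we hold them
    external_url: Optional[str] = None   # or a reference to content held elsewhere

    def preservable(self) -> bool:
        """Only entries that contain the real object can go into a preservation archive."""
        return self.content is not None

# Usable in day-to-day teaching, but not preservable by us:
bbc_programme = CatalogueEntry("lor-042", "Broadcast programme", {"rights": "external"},
                               external_url="https://example.org/programme")
assert not bbc_programme.preservable()
```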

To put the experts to the test, I asked them a provocative question: now that we know how to preserve data objects, how do we preserve the object repositories themselves? They were quite puzzled by the question, but I found it only fair. If you collect the vocabulary of a dying language in a dictionary, you also need to preserve the dictionary. The answers were along the lines of “we produce the archives in open source” – hmmm, that won’t be enough: (1) who documents and preserves the open source code? and (2) the application runs on an operating system; if that is no longer available, the code won’t run.