
RFC Version of the Cookbook released

Thursday, October 14th, 2010 Cookbook - Request for Comment

The demand for powerful and rich Digital Libraries capable of supporting a broad variety of interdisciplinary activities, together with the pressing need to address the data deluge, is intimately bound up with the growing need for "building by re-use" and "sharing". Interoperability plays a crucial role in responding to these needs. Despite efforts to address interoperability, current solutions remain limited. The lack of a systematic approach on the one hand, and limited knowledge of the solutions currently adopted on the other, are among the main impediments to interoperability. What's more, solutions are all too often confined to the systems they were designed for.

Chartered with addressing interoperability challenges, the project and its contributing experts have produced a Request for Comment version of the Technology and Methodology Digital Library Cookbook. The Cookbook aims to collect and describe a portfolio of best practices and pattern solutions to the common challenges faced when developing large-scale interoperable Digital Library systems.

This first Request for Comment (RFC) version of the Cookbook should be considered neither authoritative nor final, but rather a "work in progress" to be enhanced through external feedback.

Contributing to the Cookbook
Comments are welcome both on the Cookbook as a whole and on any of its components, with the aim of leveraging expertise outside the project. The Cookbook's main components are:

  • Interoperability Levels & Digital Libraries
  • Interoperability Model/Framework
  • Interoperability Model in Action
  • Best Practices for organisational, semantic and technical interoperability across six core DL concepts (content, functionality, user, policy, quality and architecture)
  • Interoperability Scenarios

Feedback on the Cookbook is requested until the end of November 2010. To provide feedback in the form of a blog posting, please contact the project. Before sending feedback, we strongly advise you to read the terms and conditions.


Expert View – Edward Fox on Credible Interoperability Requirements

Thursday, August 12th, 2010


In addition to Europeana, it would help to have a number of examples where interoperability is useful. One is the National Science Digital Library in the U.S., where different Pathways and other projects manage sub-areas of Science, Technology, Engineering, and Mathematics education. But while there is OAI-PMH based interoperability, the user experience is far from convenient: for example, one never knows what will come up when a metadata record is brought forth. Nor is it possible to browse on more than a very superficial set of facets.
The Networked Digital Library of Theses and Dissertations has interesting interoperability issues regarding operations by students, their work with their mentors/examiners, their graduate program administrators, their library, the national library, the NDLTD Union Catalog, etc.

  • Again, how can we browse on topic?
  • What happens when metadata records are incomplete or erroneous?
  • How can we, over time, deal with all types of content?
  • What about policies of the department, college, university, nation?
  • What about access restrictions, especially if for a fixed time period (e.g., 1 year)?
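The incomplete-metadata problem raised above is easy to illustrate concretely. The following is a minimal sketch (standard-library Python only; the embedded OAI-PMH response fragment is invented for illustration, not taken from any real repository) of harvesting-side validation: parsing a Dublin Core record out of a ListRecords response and flagging the fields a downstream service would need, for instance, to support browsing by topic.

```python
import xml.etree.ElementTree as ET

# Namespace prefixes used in OAI-PMH ListRecords responses with Dublin Core.
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

# A hypothetical (invented) response fragment, for illustration only.
SAMPLE = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>A Sample Thesis</dc:title>
          <dc:creator>Jane Student</dc:creator>
          <!-- dc:subject is absent: browsing by topic fails for this record -->
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

# Fields a consuming service might require; this particular list is an
# assumption for the sketch, not a rule from the OAI-PMH specification.
REQUIRED = ("title", "creator", "subject", "date")

def check_record(xml_text):
    """Return (fields present, required fields missing) for the first record."""
    root = ET.fromstring(xml_text)
    fields = {}
    for rec in root.iter(f"{OAI}record"):
        for elem in rec.iter():
            if elem.tag.startswith(DC):
                # Strip the namespace to get the bare Dublin Core element name.
                fields[elem.tag[len(DC):]] = (elem.text or "").strip()
    missing = [f for f in REQUIRED if f not in fields]
    return fields, missing

fields, missing = check_record(SAMPLE)
print(missing)  # → ['subject', 'date']
```

In a real harvester the XML would come over HTTP from a repository's OAI-PMH endpoint and there would be many records per response; the per-record validation step, however, would look much like this, and it is exactly where erroneous or incomplete records must be caught before they degrade search and browsing.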

Another challenging example is the Crisis, Tragedy, and Recovery Network. We have many stakeholder groups with different needs: those affected, their families, their friends, their caregivers, emergency workers, volunteers, psychiatrists, mental health professionals, health care professionals, administrators, policy makers, students, researchers, and so on. There are many types of content: papers, news stories, videos, help manuals, emergency preparedness plans, testimonials, blogs, tweets, emails, cell phone messages, reports, lawsuits, government reports, and survey data. Such a system needs to run in a distributed way, with nodes in each location working across different languages and cultures while sharing data. There also needs to be data mining across the distributed collection, along with browsing, searching, GIS connection, visualization, data analysis, etc.

Edward Fox, Professor of Computer Science, Virginia Tech, U.S.
