December 29, 2004

Open Amazon - lesson for libraries?

There's a fascinating article, 'Amazon: Giving Away the Store', in the January issue of Technology Review.

It describes how Amazon.com has opened up access to the riches of its product database via web services, allowing developers anywhere and everywhere to grab data and re-use it to enhance their own sites. Sales have to be routed through Amazon, but the satellite site gets a commission.
This exposes Amazon to an even wider potential market whilst outsourcing the development cost and creativity (as well as some of the profit).
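For a rough flavour of the model, here is a minimal sketch (in Python) of the kind of call a satellite site might make to pull product data and re-present it on its own pages. The endpoint, parameter names and XML element names below are placeholders for illustration, not Amazon's actual interface.

```python
# Minimal sketch of a 'satellite' site pulling product data from a
# vendor web service and re-presenting it. Endpoint, parameters and
# element names are illustrative placeholders only.
import urllib.request
import urllib.parse
import xml.etree.ElementTree as ET

def lookup_book(isbn, access_key, associate_tag):
    # Hypothetical REST-style query; the response is XML that the caller
    # re-presents however it likes, with purchase links routed back to the
    # vendor so the satellite site earns its commission.
    params = urllib.parse.urlencode({
        "Operation": "ItemLookup",      # illustrative parameter names
        "ItemId": isbn,
        "AccessKeyId": access_key,
        "AssociateTag": associate_tag,
    })
    url = "http://webservices.example.com/xml?" + params
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    # Pull out just the fields this site wants to re-use.
    return {
        "title": tree.findtext(".//Title"),
        "author": tree.findtext(".//Author"),
        "detail_url": tree.findtext(".//DetailPageURL"),  # purchase routed via the vendor
    }
```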

Apart from the possibilities for libraries to use Amazon web services, which has been happening for some time, there is a clear parallel here with what libraries and their systems need to be doing with their own content and services: separating presentation from business logic and content, so that content and services can be offered beyond the OPAC, in the places where the users are, presented in ways appropriate to those places. This renews the old library adage: 'get the stuff to the chaps.'

Much has been written about the chaps being mostly at Google and Amazon and, more generally, on the 'open' web. One example of the open web that uses Amazon web services is AllConsuming.net, a site that monitors books being discussed in blogs. In addition to the link to Amazon, there should also be an option to find the book in the user's preferred library. This is provided at the independent Amazon Light but, for a global audience, a single list of libraries from all over the world is a crude and ineffective mechanism. What it needs is an embedded service linking to a maintained directory of libraries, providing robust links and a good way for the user to select their preferred library from the huge number available.
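A sketch of the kind of 'find this book in my library' lookup such a directory could support is below; the library identifiers and OPAC URL templates are invented for illustration, and a real service would sit on a maintained, worldwide directory rather than a hard-coded table.

```python
# Sketch of an embedded 'find this book in my preferred library' service.
# Directory entries and OPAC URL templates are invented examples.
LIBRARY_DIRECTORY = {
    "leeds-uni": "http://library.example.ac.uk/search?isbn={isbn}",
    "smalltown-public": "http://opac.example.org/find?index=isbn&term={isbn}",
}

def library_link(isbn, preferred_library):
    """Return a deep link into the user's chosen OPAC, or None if unknown."""
    template = LIBRARY_DIRECTORY.get(preferred_library)
    return template.format(isbn=isbn) if template else None

# A site like AllConsuming.net could then show this link alongside the
# 'buy at Amazon' link, e.g. library_link("0747532699", "leeds-uni").
```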

December 24, 2004

Alerting tools - changing the focus

I’ve been reading an interesting article in D-Lib about scientific publishers investigating the use of RSS, which has got me thinking about whether alerting will finally reach mass take-up in libraries: http://www.dlib.org/dlib/december04/hammond/12hammond.html

Profiling and alerting have hitherto been features built into the systems that people use, eg a ‘provide alerts’ option to watch a topic or subject, or an ‘email me’ option for new tables of contents as a journal is published.

The increasing availability and adoption of RSS tools is disrupting this model – shifting the tools known by librarians as ‘SDI’ (that’s Selective Dissemination of Information, rather than Strategic Defense Initiative) into the hands of the users. There are now lots of RSS readers available for free download – tools that allow users to subscribe to ‘feeds’ (ie notifications of changes to a site) and manage the content from them. Systems and web sites are increasingly becoming RSS-aware and publishing RSS feeds. The key advantage for users is that they have a single interface for setting up and managing their feeds – that’s why RSS will be used where current models aren't. The question is, who’s using RSS to date – is this a tool used by the users of libraries?
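For a flavour of that user-side model (one tool, many feeds), here is a minimal sketch using the separately installed Python feedparser library; the feed URLs are placeholders.

```python
# Minimal sketch of the user-side model: one reader, many subscriptions.
# Uses the Python feedparser library; the feed URLs are placeholders.
import feedparser

subscriptions = [
    "http://www.example.org/new-titles.rss",          # a library's new-acquisitions feed
    "http://journals.example.com/current-issue.rss",  # a publisher's table-of-contents feed
]

for url in subscriptions:
    feed = feedparser.parse(url)
    print(feed.feed.get("title", url))
    for entry in feed.entries[:5]:
        # Each entry carries a title and a link back to the source site.
        print("  -", entry.get("title"), entry.get("link"))
```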

I guess this will spawn demand for a new set of tools allowing discovery of and subscription to relevant RSS feeds - maybe aggregation services for the best feeds in a subject?

It also prompts libraries to change the way they think about their content – how can they make their content and services available through ‘push’ technologies?
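For example, a library could expose a 'new acquisitions in this subject' feed for readers to subscribe to. A minimal sketch is below; the item fields and URLs are invented for illustration.

```python
# Sketch of a library 'pushing' content out as RSS: a feed of new
# acquisitions in a subject area. Item data and URLs are invented.
from xml.sax.saxutils import escape

def acquisitions_feed(subject, items):
    """items: list of (title, opac_url, summary) tuples."""
    entries = "".join(
        "<item><title>{}</title><link>{}</link><description>{}</description></item>".format(
            escape(title), escape(link), escape(summary)
        )
        for title, link, summary in items
    )
    return (
        '<?xml version="1.0"?><rss version="2.0"><channel>'
        "<title>New titles: {}</title>"
        "<link>http://opac.example.org/</link>"
        "<description>Recently catalogued items</description>"
        "{}</channel></rss>"
    ).format(escape(subject), entries)
```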

Suspect the world will change further if Microsoft embed RSS in the next version of IE/Outlook, as expected.

December 13, 2004

What’s in an I-Name?

Up until recently my question was “What is an I-Name?” Then I listened to the ITConversations interview with Owen Davis on the subject of IdentityCommons, which underpins I-Names, and it all became clearer.

The simple answer appears to be “DNS, but for people”. I-Names are assigned to people in a similar way to the way domain names are allocated to their owners. You identify an unused I-Name, pay your money, and it’s yours! You pay an Identity Service Provider such as 2idi.

I’ve now got mine, =Richard.Wallis. It only cost me a donation of $25, and it is mine, all mine, for the next 50 years. That should impress the other Richard Wallises out there – I got in first! It raises an interesting point though: all I-Names are unique, but people’s names are not. When was the last time you saw an eBay User Id that was the user’s actual name? But then again, selecting an identity, or handle, that describes you is an interesting exercise in itself.

So what? What can I use my I-Name for, beyond showing off that I have got one by putting it in my email signature? Today not much, but it has potential.

As an I-Name is a guaranteed unique universal private address, or identity, it could be used by all sorts of systems to confirm who you are. It picks up on the same ideas as Microsoft Passport, but without the perception of world domination.

I-Names are also applicable to organisations, so as well as being able to uniquely identify me, they should be able to identify the me that works at Talis separately from the me that is at home buying stuff off eBay. The same ‘me’, but in two different contexts.
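A purely illustrative way to picture that idea is sketched below; none of this reflects the actual I-Name/XRI data model, just the notion of one unique identifier with context-specific views.

```python
# One globally unique I-Name, with separate context-specific personas
# hanging off it. Purely illustrative; not the real I-Name/XRI data model.
identity = {
    "i_name": "=Richard.Wallis",
    "contexts": {
        "work": {"organisation": "Talis", "role": "employee"},
        "home": {"ebay_handle": "a-made-up-handle"},
    },
}

def persona(identity, context):
    """Return the same 'me', seen through a particular context."""
    view = {"i_name": identity["i_name"]}
    view.update(identity["contexts"][context])
    return view

# persona(identity, "work") and persona(identity, "home") share the same
# I-Name but expose different context-specific details.
```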

Extending that concept to the ‘me’ in a university course context, who because of it has licensed access to a particular eJournal, starts to make things interesting. Add to that the possibility of Amazon knowing my I-Name and then trusting me for one-click purchases, and things could get very interesting.

So is this the Holy Grail of identity management that will solve all the problems Shibboleth, Athens, WS-Federation, etc. have all tried to address with differing levels of success? I doubt it, as Jon Udell has quite rightly pointed out in his thoughts on the subject:
“having spent more hours than I care to admit poring over specs and architecture diagrams from the Passport, Shibboleth, Liberty, and WS-Federation projects, I suspect (as does Doc Searls) that some other identity standard will prevail.”

But there again, it could be one of the lights at the end of the tunnel that together will solve the travelling identity problem, and it will all seem so obvious [like DNS is now] after we have all given in and started using the de facto standards that emerge.

December 08, 2004

JISC IESR event

I've been invited to speak at a workshop organised by the JISC IESR in January.

The JISC IESR - the Information Environment Service Registry - is all about collection and service descriptions. I find the development of collection descriptions an interesting area. It probably stems from my dealings with the RIDING project many years ago at Leeds and Sheffield universities!

I'm particularly interested in their application when it comes to electronic resources. Is there such a thing as a generic description, and what is its true application?

The way I see it, every electronic resource is being described on web sites and in metasearch tools by librarians all over the world, so there must be a need to share those descriptions. But would you take a generic description and then change it? Is it as simple as a download, or is there a more complex need to share?
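To make the question concrete, here is a hypothetical, much-simplified description record and a 'download it, then localise it' step. The field names are illustrative only, not the actual IESR schema.

```python
# A hypothetical, much-simplified collection/service description record.
# Field names are illustrative only, not the IESR schema.
generic_description = {
    "title": "Example Abstracts Database",
    "subjects": ["chemistry", "materials science"],
    "type": "bibliographic database",
    "access_url": "http://provider.example.com/search",
    "description": "Abstracts and indexing for journals in chemistry.",
}

def localise(description, holdings_note, local_access_url=None):
    """Take a shared, generic description and overlay local details."""
    local = dict(description)            # start from the shared record
    local["local_note"] = holdings_note  # eg licence coverage, user groups
    if local_access_url:
        local["access_url"] = local_access_url  # eg via the local link resolver
    return local

# 'Download then change it': share the generic record, keep a local overlay.
our_copy = localise(generic_description, "Licensed for campus use, 1996 onwards")
```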

I guess I'd better figure it out before the workshop in January!

December 07, 2004

A vision for the E-Learning Framework

The E-Learning Framework is a major international initiative that has important implications for libraries. The UK part of it is the JISC e-Learning Programme supported by CETIS. Where could the international e-Learning Framework be in five years' time? Some of the key international partners have outlined their vision.

Dan Rehak of the Learning Systems Architecture Lab, Carnegie Mellon University in the US
... hopes that in five years’ time there will be sufficient web service alternatives in each of the ELF service definitions or ‘bricks’ to allow institutions to choose the services most relevant to them and their institutional e-learning infrastructure. We mustn’t lose sight of the ultimate aim, which is better learning opportunities for students.
Kerry Blinco and Neil McLean of the Department of Education, Science and Training (DEST) in Australia
... have an air of confidence that the service-oriented approach will succeed. That confidence is probably built on the experience of working with the Tasmanian Education Department, which has successfully built a service-based education environment. The Learning Architecture Project (LeAP) is delivering a number of interoperable online applications to enhance teaching and learning in 218 schools and colleges across Tasmania. ...

Neil thinks that the framework is now at the cottage-industry phase, where academics, software developers and policy makers are involved in its development. In five years’ time, Neil predicts, open source web services will have taken off and there will be a proliferation of teaching applications for people to use. At this stage it is important to keep both academics and software developers involved by using an iterative development process for the ELF that everyone feels they can be part of.
Neil McLean co-authored, with Clifford Lynch of the Coalition for Networked Information, a key white paper on Interoperability between library information services and learning environments.