January 25, 2005

Revolution or another competitor in library software?

I noticed a paper called THE COMING REVOLUTION IN LIBRARY SOFTWARE by David Dorman on IndexData’s Web site. I check this site regularly because they provide good, reliable, efficient open source technology for search and retrieval of meta-data with packages such as Yaz and Zebra. In the paper he argues that a new business model for delivering software to libraries, called commercial open source, will cause a paradigm shift in the market. He makes a plea for libraries to fund the initial development costs with a 10-point plan, and sees further development and support being charged to customers. This doesn’t seem so different from a traditional commercial model, where development costs are recovered from customers through purchase costs and further development is funded through recurrent payments.

He claims open source software development results in ‘less expensive and better designed software, and speedier development’ than development by traditional vendors. In the competitive market of library software supply this seems difficult to justify. Competition is driving down customer costs because cost is a factor in winning bids. ‘Better designed’ software can give more reliable and usable solutions, generating fewer support calls. Lower maintenance costs are attractive to vendors because they make lower recurrent charges possible, and lower total cost for customers over the lifetime of a system is an important competitive advantage. Vendors seek speedier development to reduce time to market as competitors fight to attract new customers.

He says that for commercial open source vendors ‘Development, rather than being an opportunity to sell more licenses, or a burdensome overhead cost to be avoided if possible, becomes the primary revenue generator’. My experience with Talis is that we enjoy our development; our purpose is to develop solutions for our customers. And if we don’t develop attractive solutions we don’t sell, so development is central to our success.
On quality, he implies peer review of open source code is more likely to achieve high quality software than a software engineering process incorporating reviews at each major milestone of analysis, design, implementation and test. In addition to its software development processes, Talis publishes end user documentation, database schemas, stored procedure code and scripts for useful utilities. He suggests abandoning proprietary development tools in favour of open source alternatives; we use third party programming tools such as Microsoft’s Visual Studio, a best of breed tool, to speed delivery and enhance quality.

On commercial open source he says ‘This Model requires a new and closer relationship between vendors and libraries’. Isn’t this what all vendors are striving for? Talis fosters a community who share useful tools in source code, provided by customers and partners, through our Talis Developer Network. His vision assumes there are cohorts of willing library programmers with sufficient knowledge, skill, free time and resources to develop library software. Instead I see a community of customers behaving with enlightened self-interest to pass on experiences to fellow customers. In practice my guess is IndexData will have a core development team with a few external trusted developers providing code fixes. I don’t see a paradigm shift; I see another competitor in the library software market.

Mobile and PDA technologies and their future use in education

This is the title of the latest JISC Techwatch report, published in November 2004, which I've just dipped into. Here's their overview:
In recent years there has been a phenomenal growth in the number and technical sophistication of what can loosely be termed 'mobile devices' such as PDAs, mobile phones and media players. Increasingly these devices are also internet-enabled. This JISC report reviews the current state of the art, explores the potential uses within education and discusses some of the trends in technological development such as wireless networking, device convergence and 'always-on' connectivity.
An email update from one of the authors, via the Techwatch email list, last week, points out that there remains considerable uncertainty ('fog') around fast wireless access technologies, but the following conclusion serves to emphasise, for me, the need for libraries and their systems suppliers to be focusing on delivering data and services to these technologies:
... widespread adoption by students and staff of always-on mobile devices will partly be driven by the development of wireless broadband networks that can deliver the Internet to these devices. As the competition to deliver high speeds through the various technology paths increases so the likely time to market for low cost consumer solutions is likely to fall. As currently planned by manufacturers this kind of high speed access should be relatively normal by the end of the decade.
Although this has an academic library perspective, it will surely apply equally to actual and potential users of public libraries because this is about general consumer technology. Once again it's a reminder to take the library to the users, use the technology that they use (redefining the meaning of 'mobile' for libraries!), or be ignored.

January 19, 2005

Amazon make queueing a reliable experience

An Amazon Web Services announcement which snuck under my radar recently was the launch of their Beta [aren't they all nowadays!] Simple Queue Service.

This is not, as you may at first think, something to keep people amused while they queue behind the person who is checking out every book on their favourite subject whilst returning all the items found in their three-year-old's toy box, so they don't hassle the person waving the bar-code reader when they eventually reach the front of the queue.

No, this is a bit of technology delivered as a service which should excite the developers of interactive applications, whether or not those applications access Amazon content. It provides a general purpose service to manage a set of queues, each holding up to 4,000 data messages of up to 4 kbytes in size, with a maximum message lifetime of 30 days.

When developers are building applications and services which involve the interaction between more than one system, they very quickly bang up against the need to pass messages between those systems. Most developers will tell you that this is not rocket science, even when the message delivery has to be reliable [some form of guarantee that a message is not lost, or incorrectly delivered].

The problem that is often tripped over when implementing such systems is that for messages to be delivered reliably they need to pass through a messaging system which keeps temporary copies of messages and manages queues etc. Such systems need to be managed, maintained, backed up, etc. The overhead of such housekeeping operations is often considered to be such a pain that it can detract from the business case for delivering a new service.
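The core trick a queue service like this performs can be sketched with a toy in-memory queue. This is purely illustrative, not Amazon's actual API: a received message is locked rather than removed, so if the consumer crashes before confirming delivery, the message becomes visible again instead of being lost.

```python
import time
import uuid
from collections import OrderedDict

class ToyQueue:
    """A toy reliable queue: received messages stay invisible for a
    lock window and must be explicitly deleted, or they reappear."""

    def __init__(self, max_messages=4000, max_size_bytes=4096, lock_seconds=30):
        self.max_messages = max_messages
        self.max_size_bytes = max_size_bytes
        self.lock_seconds = lock_seconds
        self._messages = OrderedDict()  # id -> (body, locked_until)

    def send(self, body: str) -> str:
        if len(body.encode("utf-8")) > self.max_size_bytes:
            raise ValueError("message exceeds size limit")
        if len(self._messages) >= self.max_messages:
            raise OverflowError("queue full")
        msg_id = str(uuid.uuid4())
        self._messages[msg_id] = (body, 0.0)
        return msg_id

    def receive(self):
        """Return the oldest visible message, locking it for lock_seconds."""
        now = time.time()
        for msg_id, (body, locked_until) in self._messages.items():
            if locked_until <= now:
                # Lock rather than delete: a crashed consumer's message
                # reappears once the lock expires.
                self._messages[msg_id] = (body, now + self.lock_seconds)
                return msg_id, body
        return None

    def delete(self, msg_id: str):
        """Consumer confirms successful processing."""
        self._messages.pop(msg_id, None)
```

The housekeeping that detracts from the business case lives entirely inside this class (and, in the real thing, the servers behind it) — the application just sends, receives and deletes.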

So what are Amazon up to in launching something that will be hidden under the hood of other people's applications and, unlike their other Web Services, will not necessarily lead to clicks back to buy stuff from them?

Firstly, I would expect it to be a low cost service to provide. They have almost certainly been using this technology in-house to support their own services for some time. Adding a few publicly visible servers to their set would not add much overhead.

Secondly, they may be dipping a toe into the emerging market for the supply of software component services. A software equivalent of Sun's $1 per CPU-hour service?

Whatever their commercial strategy on this, what they are doing is floating it on the trusted Amazon brand.

OK, you want to delegate to some third party the job of looking after the messaging queues that underpin your application. So who do you pick? Someone you trust, with a 'good name', so why not Amazon? Would you choose them over some little known hosting company, or maybe another little company with their headquarters in Seattle?

I'll leave you to ponder on that....

January 16, 2005

Meditations from ALA

Ken Chad, Executive Director, Talis

Where's the innovation?

"Where do you see the innovation coming from?" asked Andrew Pace from North Carolina State University towards the end of Friday's "View from the Top" seminar. The question was addressed to me and the other panelists: the CEOs and chief executives in the library and information industry. Judging from the response of the main US library system vendors, not from there! Roland Dietz, President and CEO of Endeavor (owned by Elsevier), had earlier, and not surprisingly, singled out Google as a major challenge. So is the innovation going to come from outside: Amazon, Google, even Microsoft? For me at Talis this is a fundamental question. We are putting a lot of investment into smart people and have some smart ideas too. Of course we have to keep our focus on evolving our core products and services, but we won't survive long unless we innovate.

Where's the value?

The seminar provoked a lot of discussion about the "value" of libraries and how that appears not to be expressed in the dollars spent with the library automation vendors. Libraries (especially public libraries) are faced with budget cuts. Money is being spent not so much on library technology but rather on other enterprise wide systems for the institution as a whole. We see this too in the UK. Money (lots of it in some cases) in universities and local authorities is going into human resources (or CRM) systems, finance packages, portals and, notably in HE, Virtual Learning Environments (VLEs).

Bob Walton, the Vice President for Business and Finance at the College of Wooster and a recent purchaser of such systems wondered why it is that, in his view, compared to library systems, these other enterprise systems are:-

Less sophisticated
Less reliable
More expensive in terms of software licensing
More expensive in ongoing maintenance
Take much longer (three times longer?) to implement
Hugely (ten times?) more expensive in terms of training

Maybe it's simply because there is less competition? That market is continuing to consolidate, as will the library market. But the short answer is that's where the institution sees the value. It's true that over the last 25 years librarians and vendors have jointly done a good job in implementing and developing reliable high quality systems. Rob McGee (head of RMG Consultants) remarked that maybe, as the library vendors had done such a great job, they should get into this "ERP" sector. "The entry costs are now too high," thought Vinod Chachra from VTLS.

Value on my mind

The value thing is on my mind a lot of course. On the plane over I was reading the Guardian Life section. The job ads in the IT section are just one way to see what's going on in the industry, especially in universities. I note that a major UK university is going to be spending around £25,000 a year (more if you take all costs related to employment into account) on a person primarily working on integrating the library system with the VLE. A friend of mine recently got a similar job at another university. It's not a short-term contract job either, so over five years that's a substantial sum (certainly compared to the cost of library software) being spent on just one aspect of "integration". That's just some indication of where universities see the value and, not surprisingly, it's about improving the overall learning environment.

So what about public libraries? Where do they really see the value (i.e. where are they going to be spending their money)? The Government (DCMS), in its recent (Nov 2004) report to Parliament on "Public Library Matters" puts some emphasis on learning too and also sees public libraries as "community hubs". So is that where the money will go? How will those goals be supported by technology?

Heritage and museums have been seen as a way to help with building cohesive communities. Phrases like a "sense of place" come to mind. So why doesn't that sector put much value on technology? Of course there are a few major projects in the big museums and, in the UK, the New Opportunities Fund (NOF) kick-started some projects on digitisation a few years ago. I got involved in some of that and saw some brilliant work being done. But overall it seems museums and archives too don't place much value on IT to support their collections. I bring this up in the context of ALA as I was having coffee with one of the "greats" in our industry, a past president of one of the most successful library vendors. She had been looking to start a new business and thought she saw a great need for better museum and archive systems in the US and the UK. "There seemed so much good stuff I could do.." She even thought of buying one of the companies. "No money in it, Ken: I just couldn't see a good return". Is she right?


January 15, 2005

Blogging from Boston

ALA Midwinter got underway yesterday in Boston, MA. I arrived late on Thursday (although not as late as some!) and by 11am (it took me a while to get going!) I was ready to go for what is going to be a busy schedule over the next few days.

My first meeting was RMG Consultants annual event at Midwinter - the Annual Presidents' Seminar: The View from the Top. The topic up for discussion was "THE NEW INTEGRATED LIBRARY SYSTEM: AN ENTERPRISE SOLUTION".

Ken Chad from Talis was 'up there' with the rest of them, representing our view of the world. I saw some old and not so old faces, and it was great to catch up with a few people. It was also good to see a few faces from across the pond represented - alongside Ken was Robin Murray from Fretwell-Downing and Sebastian Hammer from Index Data. A full list of panelists is available on the RMG web site.

So - how did it go? Well, a few common themes started to appear after a little while (it was a 3 hour session!).

There was some discussion of enterprise systems and ERP, but it was never really pinned down what was meant by this! What did become clear, though, was how libraries need to integrate and become more outward facing. The need to provide library services and content through non-library channels came across well from a few panelists. It was clear that the library's content and services are part of a wider organisation in a way they have never been before, and vendors and libraries alike have to meet this challenge.

Open source, and the sense of community across the library world, came up repeatedly throughout the session. The discussion was varied, but it was emphasised that open source does not mean free - a popular misconception! However, open source in the ILS market won't evolve in the same way as it has in e-learning, for example. The ILS market is mature and saturated, the e-learning market is emergent - one plays well to open source, the other not so well. However, if we talk about the problems that libraries have today, and the solutions that software or technology could provide, then maybe open source will come into its own.

Market consolidation is inevitable - there are too many players currently. There are two ways this could come about - by merger/acquisition or by a 'best of breed' approach to the problem. There was some discussion about cross-licensing and partnering both within the industry and beyond.

Finally, libraries need to improve the way they sell themselves to their organisation. Nothing new there!

After a wander round the exhibition hall, and a chat with a few old faces, I had my first face to face (informal!) meeting with my colleagues in the VIEWS group. It was great to put faces to names - a social meeting more than anything at the end of an interesting day.

Saturday started early - 6am, and on a Saturday too!

It was Endeavor's Digital Breakfast - a really interesting session which, apart from the great breakfast and freebies, told us about the work they've been doing to improve the usability of their product set. They've undertaken a user-centered design approach to Encompass, EJOS and Meridian, and are planning some work on a new version of the OPAC that promises to be user-centered. They're clearly positioning usability as their differentiator this ALA. Elsevier have a usability team of 20 staff - what an amazing resource to have available to you!

After taking in the exhibition for an hour or so, and grabbing coffee with a former colleague, I headed off to the RUSA MARS Hot Topics Discussion Group - "Metasearch: what it is, what it could be, and how standards can help us get there!". It was a standing room only session, and a lively debate followed two presentations. The first presentation was by Andrew Pace from NCSU, one of the co-chairs of the NISO Metasearch Initiative which a couple of us at Talis are involved with. This was followed by a presentation from Boston College about their Metalib implementation. The subsequent discussion was primarily around the connectors used to do the metasearching itself - meaning screen scraping, mainly - and vendors, information providers and librarians joined in the debate. It was interesting to hear certain vendors defend their screen scrape approach as what the customer wants, while the information providers asked why this method was being used when they have a perfectly good Z-target. The debate will rage on - there are no easy answers to this one!

I've had a few nice chance meetings with people I've worked with on projects in the past as I've gone from meeting to meeting - that's one of the best bits of ALA - catching up with old faces!

January 14, 2005

ALA, Boston, day 1

As usual, it has been a varied, busy and enjoyable first day at ALA here in Boston.

I kicked off by participating in a small discussion group hosted by OCLC on implementing the concepts of the Functional Requirements for Bibliographic Records, commonly known as FRBR (pronounced ferber). By exploiting the relationships between our current manifestation level records, search results can be grouped and presented to users in more meaningful ways, as well as retrieving relevant results that otherwise would be missed and eliminating irrelevant results. There are good examples already in the VTLS system and RedLightGreen, and OCLC’s FictionFinder and xISBN services. But there is scope for more, such as grouping and filtering results according to the user’s preferences. Cataloguing efficiencies could be achieved and quality and consistency of cataloguing improved by sharing records at the Work level. This is all getting closer to becoming a reality, with the changes to content rules coming through in AACR3 and with XML-based technologies.
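The grouping idea can be sketched in a few lines. This is a deliberately crude illustration, not how VTLS, RedLightGreen or OCLC actually do it: it assumes a simplified "work key" of normalised title plus primary author, whereas real FRBRisation also draws on uniform titles, dates and authority control.

```python
from collections import defaultdict

def work_key(record):
    """Crude work key: normalised title plus first author.
    Real FRBR grouping uses far richer matching than this."""
    title = record["title"].lower().strip().rstrip(".:;/ ")
    author = record.get("author", "").lower().strip()
    return (title, author)

def group_by_work(records):
    """Group manifestation-level records under a shared Work key,
    so search results can be presented one Work per hit."""
    works = defaultdict(list)
    for rec in records:
        works[work_key(rec)].append(rec)
    return works
```

Even this toy version shows the payoff: ten editions of the same novel collapse into one result that can then be expanded, filtered by format, or ranked by the user's preferences.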

Standards were the theme of my other two sessions. ‘Codified Innovations: Data Standards and their Useful Applications’ focused on standards relating to the control of e-journals. This is a field that is suffering from a combination of a lack of standardisation and a lack of implementation of available standards. Frieda Rosenberg and Diane Hillman have done some interesting work recently on holdings data, where a lack of standardisation is, for example, impeding the quality of results from link resolvers. Their work also called on FRBR concepts: An approach to Serials with FRBR in Mind. We also had an update on the revision of the ISSN, which has had a very troublesome time finding its way through deeply conflicting interests. It seems that consensus has formed around re-affirming the current definition, with the expectation or hope that the process of doing this will lead to publishers being more consistent in applying the ISSN assignment rules. There will also be a new, title-level ISSN to support library requirements, and it is hoped that a place in MARC field 024 can be defined for it.

Finally, the Automation Vendor Information Advisory Committee (AVIAC) explored the issues for systems vendors around the implementation of 13-digit ISBNs. Those present seemed to have a fair grasp of the implications and we heard some useful background information from a member of the ISO ISBN Revision Committee. A key point for me was that library system vendors should not ignore the possibility that their customers might want to use the 14-digit Global Trade Item Number (GTIN), where the extra digit specifies an aggregation of a particular product such as a carton of the new Harry Potter. More on this when I give my presentation on Monday.
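For the implementers, the good news is that the 13-digit ISBN and the 14-digit GTIN share the same GS1 check-digit algorithm, so the conversions are mechanical. A minimal sketch (the ISBN-10 conversion uses the standard 978 prefix; the indicator digit shown is just an example packaging level):

```python
def gs1_check_digit(payload: str) -> str:
    """GS1 check digit: weight digits 3, 1, 3, ... from the right,
    then take the amount needed to round the sum up to a multiple of 10."""
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(payload)))
    return str((10 - total % 10) % 10)

def isbn10_to_isbn13(isbn10: str) -> str:
    """Prefix 978, drop the old ISBN-10 check digit, recompute."""
    payload = "978" + isbn10.replace("-", "")[:9]
    return payload + gs1_check_digit(payload)

def isbn13_to_gtin14(isbn13: str, indicator: str = "1") -> str:
    """Prepend a packaging-level indicator digit (e.g. '1' for a carton)
    and recompute the check digit over the new 13-digit payload."""
    payload = indicator + isbn13.replace("-", "")[:12]
    return payload + gs1_check_digit(payload)
```

For example, ISBN-10 0-306-40615-2 becomes ISBN-13 9780306406157. The point for vendors is that the check digit changes whenever the length does, so a system cannot simply pad an ISBN-13 with a leading digit and store it as a GTIN-14.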

January 13, 2005

RSS is not just another TLA

The quiet appearance of those three letters RSS on the scene back in 1999 was not an earth shattering event, but the uses to which the technology is now being put are growing by the day.

It gave rise to the now infamous podcasting last summer, which allows the automatic download of [or 'tuning in' to] Internet broadcasts or 'Podcasts'. So when you want to listen to your favourite hour of the week, it is already loaded on to your iPod or PC drive.

At Talis, as part of Project Bluebird, we are researching the usefulness of RSS as a way of alerting library users to events that take place in their library account.

Then there was the Tony Hammond article in D-Lib on the Role of RSS in Science Publishing, and the recent announcement from IngentaConnect of:

in excess of 20,000 new RSS feeds containing the latest table of contents data for the academic journals that are still being actively loaded into our databases. Like our friends at Nature

Now MSN have released a Beta version of RSS Feeds for Search Results. So enter a search into your RSS reader and get alerted when new results turn up!

Cool, so now applying that idea to the library world it won't be long before an OPAC is brought to its knees with all its users' RSS readers polling their favourite subject search for new items!

Whither RSS next?