I have spoken about library logistics before.
Logistics is about moving information, materials and services through a network cost-effectively. Resource sharing is supported by a library logistics apparatus. The emerging e-resource discovery to delivery chain, tied together with resolution services, is a logistics challenge. Many of the e-resource management issues are like supply-chain management issues. [Lorcan Dempsey's weblog: Library logistics]
It seems to me that recent developments highlight the logistics theme. Think of the systemwide inventory management questions that are beginning to arise in relation to off-site storage and mass digitization. Or the issues that arise when we connect multiple discovery environments to back-end library - or other - fulfillment options.
I like the UPS slogan about synchronizing commerce. It reminds us of the central role of data in logistics and of the need for integrity of data along supply chains or other processes. I was reminded of this while reading Michael Cairns' interesting post about Booknet Canada and the Global Data Synchronization Network.
Industries other than publishing also battle data reliability and timeliness and, over the years, led by umbrella groups such as UCC and EAN (now combined into one organization named GS1), they have developed programs to embrace supply chain efficiency and its correlation with data integrity. The Global Data Synchronisation Network (GDSN) is such a program, which I have noted a few times in the past (Post). The objective of the GDSN is to ensure that all trading partners are working with the same set of product details that are simultaneously synchronized at a network level and in transaction details such as purchase orders and shipping details. The benefits of synchronised data can extend from 'simple' efficiency improvements in the ordering and receipt process to higher effectiveness in marketing and promotions programs. [PersonaNonData: Five Questions on Global Data Synchronization]

Michael interviews Michael Tamblyn, President of Booknet Canada, which is offering services based on GDSN. Among the advantages he suggests are:
Then there is the more forward-looking work: collaborative sales data mining for independents, backlist optimization and forecasting research, industry cost analysis on returns, digital publishing trends, our annual Technology Forum. And on it goes. [PersonaNonData: Five Questions on Global Data Synchronization]
There is a temptation in library discussions to focus on discovery and end-user issues when thinking of bibliographic data. However, bibliographic data is increasingly important to efficient library operations more generally. Think of the blurring of circulation and resource sharing in consortial arrangements, the issues of managing and tracking print collections in the context of the mass digitization and off-site storage initiatives, connections between external discovery environments and library systems, resolution and the management of knowledge bases, and so on. Systemwide data synchronization and data integrity issues are becoming more central. Increasingly we recognize that efficient management of resources imposes data needs.
Some examples: What books have been digitized by Google and others? Is an available-for-use digitized copy of this book more easily obtained than getting it in 3 days on ILL? How would last copies be registered and curated within a systemwide framework (Ohio, for example, or the UK, or ...)? Can I let a user make an optimum request based on a price/speed-of-delivery balance? Can I build recommender systems across aggregate circulation data, or aggregate resolution data? Can I develop core collection recommendations based on aggregate holdings data? Can I make selection decisions based on a view of what my regional partners are selecting? Can I begin to do some modelling of collections based on the aggregate holdings of off-site storage facilities? Can I receive collection development recommendations based on my users' use of Google Scholar? Can I be assured that my users will be linked correctly - and as seamlessly as possible - into my collections from Google, or Worldcat.org, or a growing range of other potential discovery venues? Can I make collection development decisions based on aggregate COUNTER data?
There is an earlier discussion of some similar data issues by my colleagues and me in a Library Journal article: Making data work harder.