Monthly Archives: February 2015

Abbott government’s metadata plan tipped to cost $300m

This article once again brings metadata into the news. Reading the latest about this situation in Australia, I was left wondering just how this data is to be kept and handled. The reported political component of the issue seems to revolve around who will foot the bill for the effort of retaining the data (referred to as “the scheme”), but I failed to find information describing how this data would be handled. More importantly, who will be handling this data, and what will their qualifications be? The article indicates that the question of cost sharing has yet to be decided, and it does not directly mention who will handle the data. Presumably the Australian government is requiring companies to employ and train individuals to be caretakers of this information. Given the current climate of hostile data breaches, it is alarming that more focus is not given to the means and personnel of “the scheme” described in the article. The secure and proper management of this information will be critical to successfully executing the government’s mandate.


Digital Public Library of America: Young but Well Connected

Unlike the previous article I examined, which was a bit of an introduction to the Digital Public Library of America, this article discusses more of the function of the DPLA. Specifically, the authors discuss the “service hub” and “content hub” concepts through which the DPLA distributes its workflow. The article’s content is a dated but measured description of these functions. What caught my attention most were the comments left since the article was published in 2013. They address deficiencies in both the service hubs and the content hubs. One comment thread discusses the shortcomings of the DPLA’s discovery service, while the final comment asks why there is no coverage of contemporary material. Metadata interoperability is the key to the speed at which the DPLA can expand its offerings. Though I don’t think it is necessary for the DPLA to incorporate material as rapidly as that final comment suggests, it would serve the DPLA’s interest to account for the information that is being sought and not found, even if it never adopts that material into its collection management.


Revelations from the literature

I enjoyed the analysis presented in “Revelations from the literature,” which discusses how web scale discovery services have already changed professional practice and analyzes the literature on these services to demonstrate the current state of web scale delivery. As the article shows, the literature concerning web scale discovery services grew exponentially from 2010 to 2012. This literature forms the basis for the authors’ analysis of how web scale searching has changed professional practice. As web scale implementations proliferate, it is important to have a base understanding of how the Google phenomenon has changed practices, policies, and behavior for both the professional and the end user. However, like many things in the digital realm, implementation is the easy part; efficient development is the difficult part of the equation. Also challenging is changing the perception of these library tools as simply another form of “Google search” and impressing upon users the differences in integrity, quality, relevancy, and other factors between a simple web search tool and a library-implemented web scale discovery service.


Considering Emulation for Digital Preservation

The blog post “Considering Emulation for Digital Preservation” captivated me for several reasons. First, I was unaware of the Preserving.exe initiative at the Library of Congress, and I find the endeavor absolutely engaging. As computer equipment has become more common and more affordable, it has become disposable. That is to say, much like a toaster or a similarly inexpensive household appliance, when it breaks or becomes obsolete the tendency is to throw it out, recycle it, or otherwise dispose of it without much thought. However, it is important to retain at least some examples of this hardware and software so that future generations can study, observe, and learn from past attempts at innovative design. Secondly, I was captivated by the discussion of emulation. Software emulation is something I am very familiar with, having implemented VMware installations for both personal and professional use. On the more recreational side, I have used the Multiple Arcade Machine Emulator mentioned in the post for many years, since its inception in the 90s, to explore software either unavailable to me or deemed obsolete before I was able to interact with it. In this way I completely understand the argument for emulation in preserving software and even hardware: to demonstrate its functionality, study its operability, and further engage students, patrons, and anyone curious about the history of devices that are commonplace now but, 30 years ago, were only emerging into the American home.

Article link:

Toward element-level interoperability in bibliographic metadata

The article “Toward element-level interoperability in bibliographic metadata” contains technical information about the quintessential problem facing libraries that want to interact with one another: interoperability. The authors give an example of what they call a “crosswalk” between metadata schemas, in this case ONIX and MARC, and note that crosswalks exist across many different sets of schemas. They identify, though, that this approach, while effective, is limiting because each crosswalk must be designed for the specific schemas being translated to and from. The need they identify is for a ubiquitous crosswalk, a method of transferring metadata from any schema to any other schema. To that end, they identify the basic processes needed to read the input, translate from schema to schema, and finally write the output in the desired format. The authors continue their examination of this crosswalk service by exploring the finer points and challenges of translating across standards, not the least of which are the scope of description and the problem of elements that do not equate across schemas. I found this work fascinating because of its importance to creating interoperable institutions, which I believe is one of the most important tasks facing libraries along with digital curation. As demand for information increases, patrons will assuredly seek resources beyond the local scope of their library, which necessitates that their institution be able to operate functionally with outside institutions. The first step toward successful interoperation is the ability to communicate effectively across metadata boundaries.
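To make the pivot idea concrete for myself, here is a minimal Python sketch of a “universal crosswalk” that routes every schema through a shared intermediate vocabulary instead of pairing schemas one-to-one. The element names (“TitleText”, “245$a”) and mappings are my own illustrative stand-ins, not the authors’ actual service or a complete ONIX or MARC mapping:

```python
# Per-schema mappings into a shared intermediate vocabulary.
# With N schemas, this needs N mappings rather than N*(N-1) pairwise crosswalks.
TO_INTERMEDIATE = {
    "onix": {"TitleText": "title", "ContributorName": "creator"},
    "marc": {"245$a": "title", "100$a": "creator"},
}

# Inverse mappings, from the intermediate vocabulary back out to each schema.
FROM_INTERMEDIATE = {
    schema: {concept: element for element, concept in mapping.items()}
    for schema, mapping in TO_INTERMEDIATE.items()
}

def crosswalk(record, source, target):
    """Translate a flat metadata record from one schema to another via the pivot."""
    out = {}
    for element, value in record.items():
        concept = TO_INTERMEDIATE[source].get(element)
        if concept is None:
            continue  # no equivalent concept: the scope problem the authors note
        target_element = FROM_INTERMEDIATE[target].get(concept)
        if target_element is not None:
            out[target_element] = value
    return out

onix_record = {"TitleText": "Moby-Dick", "ContributorName": "Melville, Herman"}
marc_record = crosswalk(onix_record, "onix", "marc")
# marc_record now holds the same data keyed by MARC-style elements
```

The `continue` branch is where the hard part actually lives: elements with no equivalent across schemas simply fall out of the translation, which is exactly the loss the authors warn about.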

Article link:

What is the DPLA?

The article “What is the DPLA?” provides an overview of the Digital Public Library of America contemporary to the time it was preparing to launch its first iteration. Personally, I find the DPLA initiative fascinating for the scope of its mission and the methods of its execution. The collection at launch, as now, contains mostly public domain material, which alleviates copyright concerns, but the true scope of the DPLA is to be an inclusive record of the participating institutions’ digital offerings. This scope is grandiose, to be sure, but it seems ever more achievable as collections are digitized and born-digital material is curated. The presentation methods use common internet technology to provide effective exhibition, including HTML5, JavaScript, and CSS3. As the DPLA adds partner institutions, the collection will certainly reflect this growth. This will allow smaller institutions to draw on the resources of the partner institutions and offer their users a greater variety of services than they might have otherwise, at essentially no cost, as most will already have sufficient equipment for accessing the DPLA. The author sums up the potential of the DPLA best with his closing sentences: “The form that the DPLA will take in five, ten, 20 years? That’s up to all of us. And the best is yet to come.” I think the DPLA has the potential to be a great resource if it is used as the community-enriching, collaborative resource it was conceived to be.
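As an aside, the DPLA exposes its collection programmatically as well as through the browser. This Python sketch only constructs a search-request URL for what I understand to be the public v2 items endpoint; the endpoint, parameter names, and the API-key requirement are assumptions based on my reading, and a real (freely requestable) key would be needed to actually run a query:

```python
from urllib.parse import urlencode

# Assumed public endpoint for DPLA item search (v2 API as I understand it).
BASE = "https://api.dp.la/v2/items"

def build_query(keyword, api_key="YOUR_KEY", page_size=5):
    """Build a DPLA item-search URL; parameter names are assumptions."""
    params = {"q": keyword, "page_size": page_size, "api_key": api_key}
    return BASE + "?" + urlencode(params)

url = build_query("whaling")
# url is a ready-to-fetch search URL once a real api_key is supplied
```

Even a small institution with nothing more than a web browser and a few lines of script like this could surface DPLA records alongside its own catalog, which is part of what makes the “essentially no cost” point above plausible.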

Article link:

Competencies Required for Digital Curation: An Analysis of Job Advertisements

The article “Competencies Required for Digital Curation: An Analysis of Job Advertisements” is fascinating to me, as I have now worked primarily with digital assets for over a year. At first the job was daunting, and it was difficult to find standardized information about procedures and policies for digital material. For this article the authors surveyed almost 200 job advertisements for positions they gather under the term “digital curation”. Even this term is not common currency among employers offering these types of jobs or among potential employees seeking them. Essentially, the role of digital curator is a niche being carved out by those pioneering ways to effectively, efficiently, and pragmatically organize, manage, and apply information principles to digital material. The article details attempts to create a standardized job description and the roles, tasks, and duties that should be expected of a digital curator. Interesting to me was the discovery that the posted qualifications for many of these advertisements are essentially the same as for many jobs in information institutions, that is, an ALA-accredited degree in LIS. Many activities relating to the management of digital content, per the analysis of job advertisements, are similar or indeed identical to those traditionally associated with print collections. The concept I took away from the article is essentially that the “devil is in the details”. That is to say, the refinement occurring in the specific functions of digital curation through job advertisements is crucial to establishing the digital curator as a member of the library staff.


Ockham’s Bathroom Scale, Lego blocks, and Microformats

The blog post “Ockham’s Bathroom Scale, Lego blocks, and Microformats” presents an interesting concept for information professionals dealing with the onslaught of so-called microformats as a “quick and dirty” fix for simple organizational needs. Essentially, the author compares these formats to Lego bricks. These toys are modular, meaning they may be combined in any number of ways, but the pieces will always work together. While this concept seems simple, it is deceptively so, the author contends, because these toys have been designed and tested through a half century of development and implementation. He represents the sinister side of the toys with an analogy to the dreaded Lego on the floor, a painful occurrence many parents can attest to having experienced. The analogy serves as a warning about the rapid adoption of varying microformats: while they solve simple problems and are easily implemented, leaving fragmented and isolated systems in play across many institutions may prove as painful as that Lego on the floor when it comes time to integrate, collaborate, and perform inter-institutional functions.


Defining “Born Digital”

The essay “Defining ‘Born Digital’” is a very nice overview of the challenges associated with digitally created content. The types of digital content it identifies are the most prevalent forms one would encounter when managing a digital collection, and the author does a very nice job explaining these types of material and their origins. Most important is the identification of key issues in digital content management. Much as procedures and techniques had to be constantly developed and refined for print collections, so too does digital content present challenges both for current management initiatives and for future endeavors. Changing formats, file types, program capabilities, and many other factors must be considered to keep digital content accessible and relevant to end users’ needs. Also important in this essay is the mention of the lack of turnkey solutions for managing digital content. I think this is due to the almost trailblazing nature of digital content management, that is, the need in many instances to develop completely new policies for handling digital content and to constantly review those policies for practicality. Having worked with digital content for over a year, I can attest to the unwieldy nature of differing formats, file types, and the rest of the overwhelming variety of digital content specifications.
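Having wrestled with that variety, I can say the first chore is usually just figuring out what a file actually is, since extensions lie. Here is a simplified Python sketch of format triage using a few well-known file signatures (“magic bytes”) with a filename-extension fallback; the signature table is a tiny illustrative subset, and real preservation work would lean on a proper registry and tool such as PRONOM/DROID or libmagic:

```python
import mimetypes

# A few common file signatures ("magic bytes") -> MIME type.
# Deliberately incomplete; real tools carry thousands of entries.
SIGNATURES = {
    b"%PDF-": "application/pdf",
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"\xff\xd8\xff": "image/jpeg",
    b"PK\x03\x04": "application/zip",  # also DOCX/EPUB/ODF containers
}

def identify(first_bytes, filename):
    """Guess a file's format from its leading bytes, then its extension."""
    for magic, mime in SIGNATURES.items():
        if first_bytes.startswith(magic):
            return mime
    # Fall back to the (less trustworthy) filename extension.
    guess, _ = mimetypes.guess_type(filename)
    return guess or "application/octet-stream"
```

The ZIP entry hints at why this is only triage: a DOCX, an EPUB, and a plain archive all share the same signature, so deeper inspection is needed, which is precisely the kind of detail that makes turnkey solutions so elusive.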

The essay: