PTA Issue 3 (2014)

  • Strategies for Implementing a Mass Digitization Program
    Moore, Erik (2014-11)
    [Excerpt] In 2007, OCLC published the report Shifting Gears: Gearing Up to Get Into the Flow to bring to the forefront a much-needed conversation about the digitization of archival collections and access to the rich content available only in paper or other analog formats. The authors emphasized that any successful large digitization program would focus on access and quantity. They challenged archivists to rethink policies, procedures, and technologies that either slowed the process of mass digitization or were unfriendly to the implementation of a rapid capture program. Recent articles, blog posts, and columns demonstrate that we as a profession continue to grapple with ways to implement digitization programs that are both sustainable and efficient. The strategies offered in this paper highlight a practical program for the mass digitization of organizational archival records using a rapid capture process that is replicable regardless of the size or resources of the repository. The paper reviews the establishment of a rapid capture workflow at the University of Minnesota Archives; provides details on how it functions, including equipment information, scanner settings, and workflow procedures; explains the selection process for scanning; describes how it has helped to create inreach opportunities; and, finally, examines how it has changed not only daily operations but also the perspective on what it means to provide broad access to the collections.
  • XQuery for Archivists: Understanding EAD Finding Aids as Data
    Wiedeman, Gregory (2014-11)
    [Excerpt] XQuery is a simple yet powerful scripting language designed to enable users without formal programming training to extract, transform, and manipulate XML data. Moreover, the language is an accepted standard and a W3C recommendation, much like its sister standards, XML and XSLT. In other words, XQuery’s raison d’être coincides perfectly with the needs of today’s archivists. What follows is a brief, pragmatic overview of XQuery for archivists that will enable those with a keen understanding of XML, XPath, and EAD to begin experimenting with manipulating EAD data using XQuery. [An illustrative code sketch for this article appears after this list.]
  • From the Editor
    Miles, Randall (2014-11)
    [Excerpt] Well, this new venture of ours has made it to Issue no. 3: our one-year anniversary. Overall I have been pleased, both with the work submitted by our authors and with the response from our readers. With this issue we will have published thirteen articles. The journal has not developed as I had initially hoped (more on that below), but it seems to be having an impact within the profession. The first two issues have had just under 3,000 page views each, and we have 76 followers. Not the 6,000-issue circulation of American Archivist, but quite respectable, especially for a new journal.
  • Doc to PDF and HTML
    Willey, Eric (2014-11)
    [Excerpt] Max J. Evans notes the paradox the digital age presents to archivists: the explosion of information, combined with budget cuts, means increasing backlogs and less time to gain detailed subject knowledge of collections, while users believe that all information is “quickly and easily available” if not already digitized and on the web. For some institutions the lack of digitization extends not only to collections but also to access points such as finding aids. In a 2004 survey of seventeen institutional repositories, Christina J. Hostetter found that “in most cases, archives have approximately 10 percent or less of their descriptions to holdings online.” In a 2010 paper, Christopher J. Prom found that among surveyed institutions “the ‘average’ institution makes descriptive information at any level of completeness available on the Internet for a paltry 50% of its processed collections and 15% of its unprocessed collections.” While these statistics include information regarding processed collections not available in any form (online or off), Prom notes that many respondents to his survey identified a strong need for “better tools to do their descriptive work,” including a “streamlined process for creating finding aids in an open source format that can be viewed on the web.” Prom concludes, in part, that “it is currently beyond the capacity of many institutions to implement MARC and EAD in a cost-effective fashion” and that more economical means of providing online access points are needed. The current article provides one means of batch-creating HTML or PDF documents from existing word processing documents. The method described has a relatively low barrier to entry and is particularly targeted at smaller institutions that might face challenges in creating online access points due to lack of funding and specialized training. [An illustrative code sketch for this article appears after this list.]
  • Processing Unidirectional E-Mail Memos for Preservation and Access
    Schmidt, Lisa M. (2014-11)
    [Excerpt] Like most institutions, Michigan State University now communicates official university business through e-mail messages rather than traditional paper memos. The university’s IT Services department periodically sends out aggregations of these Deans, Directors, and Chairs (DDC) messages, which originate in various offices and departments. Many of the messages include supplementary information as attachments, typically in PDF, DOC, XLS, and/or PPT file formats. Per MSU’s retention schedules, DDC messages have historical value and must be preserved and made accessible. The original file formats, MSG (Windows PC) or EML (Apple Macintosh), could be converted to MBOX or EML (if necessary), the de facto preservation format standards for e-mail, and accessed in nearly any e-mail program. With the DDC messages functioning as unidirectional memos, however, the University Archives & Historical Collections (UAHC) decided to take a simpler approach and convert them to PDF format. The original appearance of each message would be retained, including the header. [An illustrative code sketch for this article appears after this list.]
  • Moving Digital Images
    Sweetser, Michelle (2014-11)
    For over six years, the Marquette University Archives managed patron-driven scanning requests using a desktop version of Extensis Portfolio while building thematically based digital collections online using CONTENTdm. The purchase of a CONTENTdm license with no item limit allowed the department to move more than 10,000 images previously cataloged in Portfolio into the online environment. While metadata in the Portfolio database could be exported to a text file and immediately imported into CONTENTdm’s project client, we recognized an opportunity to analyze and clean our metadata using OpenRefine as part of the process. We also hoped to update our Portfolio database, and the metadata embedded in the files themselves, to reflect the results of this cleanup. This article will discuss the process we used to clean metadata in OpenRefine for ingest into CONTENTdm, as well as the use of Portfolio and the VRA Panel Export-Import Tool for writing metadata changes back to the original image files. [An illustrative code sketch for this article appears after this list.]
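
For “XQuery for Archivists,” a minimal sketch of the finding-aid-as-data idea the excerpt describes. The article itself works in XQuery; Python with lxml and XPath stands in here so all of the sketches below share one language, and the file name and element paths are illustrative assumptions rather than code from the article.

    # Minimal sketch: list series-level titles from an EAD 2002 finding aid.
    # The article does this kind of work in XQuery; lxml + XPath is a
    # stand-in. File name and element paths are assumptions for illustration.
    from lxml import etree

    NS = {"ead": "urn:isbn:1-931666-22-9"}  # EAD 2002 namespace

    tree = etree.parse("finding_aid.xml")   # hypothetical EAD file

    # Roughly the shape of an XQuery FLWOR expression over //c01:
    for c01 in tree.xpath("//ead:dsc/ead:c01", namespaces=NS):
        title = "".join(c01.xpath("ead:did/ead:unittitle//text()", namespaces=NS))
        print(title.strip())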
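
For “Doc to PDF and HTML,” the excerpt does not name Willey’s tooling, so the sketch below shows one common low-cost route: LibreOffice’s headless converter driven from Python. The directory names are assumptions, and the article’s actual method may differ.

    # Sketch of batch-converting word-processing documents to PDF and HTML
    # with LibreOffice's command-line converter (assumes soffice is on PATH).
    # One plausible approach, not necessarily the article's own.
    import pathlib
    import subprocess

    SOURCE = pathlib.Path("finding_aids")   # hypothetical input folder
    OUTPUT = pathlib.Path("web_access")     # hypothetical output folder
    OUTPUT.mkdir(exist_ok=True)

    for doc in sorted(SOURCE.glob("*.doc*")):        # .doc and .docx
        for fmt in ("pdf", "html"):
            subprocess.run(
                ["soffice", "--headless", "--convert-to", fmt,
                 "--outdir", str(OUTPUT), str(doc)],
                check=True,
            )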
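
For “Processing Unidirectional E-Mail Memos,” a rough illustration of the EML-to-PDF conversion the excerpt describes: read one message, keep its header fields, and write a simple PDF. reportlab stands in for whatever tool UAHC actually used, and the file names are assumptions.

    # Sketch: render an EML message's header and plain-text body to PDF.
    # reportlab is a stand-in, not necessarily UAHC's tool; file names
    # are hypothetical.
    from email import policy
    from email.parser import BytesParser
    from reportlab.lib.pagesizes import letter
    from reportlab.pdfgen import canvas

    with open("ddc_memo.eml", "rb") as f:   # hypothetical DDC message
        msg = BytesParser(policy=policy.default).parse(f)

    # Preserve the header, as the UAHC approach retains it.
    header = [f"{h}: {msg.get(h, '')}" for h in ("From", "To", "Date", "Subject")]
    body = msg.get_body(preferencelist=("plain",))
    lines = header + [""] + (body.get_content().splitlines() if body else [])

    pdf = canvas.Canvas("ddc_memo.pdf", pagesize=letter)
    y = 750
    for line in lines:
        pdf.drawString(72, y, line[:95])    # crude clipping of long lines
        y -= 14
        if y < 72:                          # start a new page when full
            pdf.showPage()
            y = 750
    pdf.save()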
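
For “Moving Digital Images,” OpenRefine is an interactive tool, but the flavor of a typical cleanup pass over a tab-delimited Portfolio export can be sketched in script form. The column and file names here are assumptions, not Marquette’s actual schema.

    # Sketch of an OpenRefine-style cleanup on a tab-delimited metadata
    # export before CONTENTdm ingest: trim whitespace, normalize case.
    # Column and file names are assumed for illustration.
    import csv

    with open("portfolio_export.txt", newline="", encoding="utf-8") as src, \
         open("contentdm_ready.txt", "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src, delimiter="\t")
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames, delimiter="\t")
        writer.writeheader()
        for row in reader:
            row = {k: (v or "").strip() for k, v in row.items()}  # trim spaces
            if "Subject" in row:            # assumed column name
                # Normalize case so variants like "campus"/"CAMPUS" merge.
                row["Subject"] = row["Subject"].title()
            writer.writerow(row)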