PTA Issue 7 (2017)

Recent Submissions

  • Item
    Access and Preservation in Archival Mass Digitization Projects
    Yolkowski, John; Jamieson, Krista (2017-01)
    [Excerpt] In 2014, the Dalhousie University Archives began its first archival mass digitization project with the Elisabeth Mann Borgese fonds. The successful completion of this project required the project team to address both broad and specific technical and intellectual challenges, from rights management in an online access environment to the durability of the equipment used. To best understand the challenges faced, the article first gives a brief introduction to the fonds and to the project goal of balancing preservation and access, then discusses these challenges in further detail, and concludes with some considerations, best practices, and lessons learned from the project.
  • Item
    Streamlining Delivery of Online Oral History Metadata through LibGuides
    Fox, Heather; Holtze, Terri; Kuehn, Randy (2017-01)
    [Excerpt] Founded in 1968, the University of Louisville Oral History Center (OHC) houses over 2,000 interviews with people ranging from politicians to everyday citizens. Collectively, the oral histories represent an incredibly rich source of historical information. The challenge with a collection of this type and scope is making that information accessible to the people who might use it. Some of the material has been transcribed; some hasn’t. Some of the interviews have been digitized; others are still on cassette tapes. Having the full text or full audio of the entire collection online is just not possible at this time; transcribing or digitizing the materials would take an enormous amount of labor. Creating hierarchical finding aids would not accommodate the item-level description necessary for meaningful access to the oral histories. So we looked for ways to make information about the collection available: who was interviewed, who conducted the interview, when, what topics were covered, etc. Over the years, access to this metadata evolved from typed lists available at the reference desk to records in the library’s catalog.
  • Item
    Using LibAnswers in the Archives: A Review and Implementation Report
    Hutchinson, Tim (2017-01)
    [Excerpt] The implementation of LibAnswers by the University of Saskatchewan represents the culmination of fundamental changes to the way reference service is delivered in University Archives & Special Collections. In 2013, there was an amalgamation of two units that shared space but were organizationally independent. Previously, e-mail reference was primarily handled by one employee from each unit, with assistance and referrals as needed. With the 2013 amalgamation, the delivery model changed to have all staff members – archivists, librarians, and senior library/archives assistants – take half-day shifts on the reference desk, which would include walk-in traffic, phone calls and e-mail.
  • Item
    Using Google Analytics, Voyant and Other Tools to Better Understand Use of Manuscript Collections at L. Tom Perry Special Collections
    Lee, Ryan K.; Nimer, Cory L.; Daines, J. Gordon III; Rupp, Shelise (2017-01)
    [Excerpt] Developing strategies for making data-driven, objective decisions for digitization and value-added processing based on patron usage has been an important effort in the L. Tom Perry Special Collections (hereafter Perry Special Collections). In a previous study, the authors looked at how creating a matrix using both Web analytics and in-house use statistics could provide a solid basis for making decisions about which collections to digitize as well as which collections merited deeper description. Along with providing this basis for decision making, the study also revealed some intriguing insights into how our collections were being used and raised some important questions about the impact of description on both digital and physical usage. We have continued analyzing the data from our first study, and that data forms the basis of the current study. It is helpful to review the major outcomes of our previous study before looking at what we have learned in this deeper analysis. In the first study, we utilized three sources of statistical data to compare two distinct data points (in-house use and online finding aid use) and determine if there were any patterns or other information that would help curators in the department make better decisions about the items or collections selected for digitization or value-added processing. To obtain our data points, we combined two data sources related to the in-person use of manuscript collections in the Perry Special Collections reading room and one related to the use of finding aids for manuscript collections made available online through the department’s Finding Aid database (http://findingaid.lib.byu.edu/). We mapped the resulting data points onto a four-quadrant graph (see figure 1).
    (A short illustrative sketch of this kind of quadrant mapping follows the item list below.)
  • Item
    Python for Archivists: Breaking Down Barriers Between Systems
    Wiedeman, Gregory (2017-01)
    [Excerpt] Working with a multitude of digital tools is now a core part of an archivist’s skillset. We work with collection management systems, digital asset management systems, public access systems, ticketing or request systems, local databases, general web applications, and systems built on smaller systems linked through application programming interfaces (APIs). Over the past several years, more and more of these applications have evolved to support a variety of archival processes. We no longer expect a single tool to solve all our needs, and we have embraced the “separation of concerns” design principle: smaller, problem-specific, modular systems are more effective than large monolithic tools that try to do everything. All of this has made the lives of archivists easier and empowered us to make our collections more accessible to our users. Yet this landscape can be difficult to manage. How do we get all of these systems, which rely on different software and use data in different ways, to talk to one another in ways that help, rather than hinder, our day-to-day tasks? How do we develop workflows that span these different tools while performing complex processes that remain compliant with archival theory and standards? How costly is it to maintain these relationships over time as our workflows evolve and grow? And how do we make all these new methods simple and easy to learn for new professionals, so that archives do not become even more esoteric?
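
As a rough illustration of the kind of cross-system scripting the Python for Archivists excerpt describes, the sketch below pulls JSON records from a web API and re-exports a few fields as CSV for use in another system. It is a minimal sketch only; the endpoint URL and field names are hypothetical placeholders, not drawn from the article or from any particular system's API.

# A minimal, illustrative sketch (not the article's code): fetch JSON records
# from a web API and re-export a few fields as CSV for use in another system.
# The URL and field names below are hypothetical placeholders.
import csv
import json
import urllib.request

API_URL = "https://example.org/api/records"  # hypothetical endpoint

def fetch_records(url):
    """Request JSON from the API and return the parsed records."""
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

def export_csv(records, path):
    """Write a few fields from each record to a CSV file."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "title", "date"])
        for record in records:
            writer.writerow([record.get("id"), record.get("title"), record.get("date")])

if __name__ == "__main__":
    export_csv(fetch_records(API_URL), "records.csv")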
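
The Perry Special Collections excerpt above describes pairing in-house use with online finding aid use and mapping each collection onto a four-quadrant graph. The sketch below shows one way such a mapping might look in Python; the collection names, counts, and median-based cut points are invented for illustration and are not data or thresholds from the study.

# An illustrative sketch only: the collection names, counts, and median cut
# points are assumptions, not data or thresholds from the study.
from statistics import median

usage = {
    # collection: (in-house uses, online finding aid views)
    "Collection A": (12, 450),
    "Collection B": (3, 900),
    "Collection C": (25, 40),
    "Collection D": (1, 15),
}

in_house_cut = median(v[0] for v in usage.values())
online_cut = median(v[1] for v in usage.values())

def quadrant(in_house, online):
    """Label a collection by whether each measure is above or below its cut point."""
    if in_house >= in_house_cut and online >= online_cut:
        return "high in-house / high online"
    if in_house >= in_house_cut:
        return "high in-house / low online"
    if online >= online_cut:
        return "low in-house / high online"
    return "low in-house / low online"

for name, (in_house, online) in usage.items():
    print(f"{name}: {quadrant(in_house, online)}")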