POST: Open Access Publishing and Geo-Spatial Tools for (Music) Research

Anna Kijas (University of Connecticut) has posted the text and slides from her Digital Frontiers 2014 talk, “Open Access Publishing and Geo-Spatial Tools for (Music) Research.” Kijas discusses her process for selecting a platform to document and disseminate her research on the Venezuelan pianist and composer Teresa Carreño, as her “goal evolved from simply publishing a print book to publishing an open access, knowledge site, where I can document a representative selection of performances from Carreño’s career from the early 1860s through 1917 with content and data derived from primary source materials, as well as newly created metadata, controlled vocabulary, and geo-spatial and temporal visualizations.” After reviewing Viewshare, WordPress, and Omeka, Kijas developed the project, Documenting Teresa Carreño, using Omeka with the Neatline and Scripto plugins to generate maps, timelines, and transcriptions.

In an earlier post focused on Carreño’s appearances at Carnegie Hall, Kijas combines her close research on these performances with data curation and the web-based RAW tool to generate visualizations of the compositions and composers performed over the course of nineteen years and thirty-two appearances.

POST: Doing Web Accessibility

Michael Rodriguez (Hodges University) has written a post at LITA Blog about improving web accessibility for library webpages. Rodriguez discusses several ways to do this work, including using the W3C Markup Validator, the WAVE Tool, and browser developer tools. He notes,

You’re not a web developer, you say? Neither am I. But even if your job has nothing to do with digital services, librarians need to know about these technical matters so as to make the case for prioritizing web accessibility and to be able to speak the language of colleagues (often the IT department) who do engage in web development. Web accessibility builds equal access and diverse communities.
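As a concrete illustration of the kind of automated check these tools perform (a minimal sketch, not code from Rodriguez’s post), one of the simplest accessibility audits is flagging images that lack alt text. The snippet below assumes the requests and beautifulsoup4 packages and uses a placeholder URL:

```python
# Minimal sketch (not from Rodriguez's post): flag <img> tags missing alt text.
# Assumes requests and beautifulsoup4 are installed; the URL is a placeholder.
import requests
from bs4 import BeautifulSoup

URL = "https://library.example.edu/"  # hypothetical library homepage

html = requests.get(URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# WCAG requires a text alternative for non-decorative images.
images = soup.find_all("img")
missing_alt = [img for img in images if not img.get("alt")]

for img in missing_alt:
    print("Missing alt text:", img.get("src", "(no src)"))
print(f"{len(missing_alt)} of {len(images)} images lack alt text")
```

Checks like this complement, rather than replace, the manual review and validator reports Rodriguez describes.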

POST: What Does it Take to Be a Well-Rounded Digital Archivist?

Peter Chan (Stanford University) has written a post examining the requirements listed in eight job advertisements for digital archivists posted over the past year. He found that

all of them required formal training in archival theory and practice. Some institutions placed more emphasis on computer skills and prefer applicants to have programming skills such as PERL, XSLT, Ruby, HTML and experience working with SQL databases and repositories such as DSpace and Fedora. Others required knowledge on a variety of metadata standards. A few even desired knowledge in computer forensic tools such as FTK Imager, AccessData Forensic Toolkits and writeblockers.

Chan also provides a useful outline of many of the tasks that may fall under the responsibilities of digital archivists and lists the knowledge, skills, software, and tools needed to accomplish them.

POST: Evaluating Non-Traditional Digital Humanities Dissertations

Amanda Visconti (MITH) has written a post about how to “chart progress and record effort through the non-traditional dissertation.” She notes,

Having a unique format scrubs a lot of the traditional methods for evaluating these factors: maybe there’s no “chapter” to turn in, or you’ve undertaken a critical building project in a programming language your committee can’t read (and even if your committee could read through the entire body of code you author, you wouldn’t want them to need to—just as with a monograph project, you want to be respectful of mentors’ time by asking for feedback at specific milestones and with some kind of concise artifact to assess).

Visconti suggests leveraging tools like GitHub to track writing and code, Basecamp to chart project progress, and Google Apps to manage communication and accountability between student and advisor.

POST: We Have Never Been Digital

Geoffrey Rockwell (University of Alberta) has written a post responding to Thomas Haigh’s reflection “on the intersection of computing and the humanities,” which concludes that “we have never been and never will be entirely digital.” Rockwell explores the implications of this for the digital humanities and whether the discipline can evolve to meet new scholarly demands. Citing Haigh:

There is a sense in which historians of information technology work at the intersection of computing and the humanities. Certainly we have attempted, with rather less success, to interest humanists in computing as an area of study. Yet our aim is, in a sense, the opposite of the digital humanists: we seek to apply the tools and methods of the humanities to the subject of computing.

Rockwell responds:

On this I think he is right – that we should be doing both the study of computing through the lens of the humanities and experimenting with the uses of computing in the humanities. I would go further and suggest that one way to understand computing is to try it on that which you know, and that is the distinctive contribution of the digital humanities. We don’t just “yack” about it, we try to “hack” it. We think-through technology in a way that should complement the philosophy and history of technology. Haigh should welcome the digital humanities or imagine what we could be rather than dismiss the field.

POST: For God’s Sake, Stop Digitizing Paper

Joshua Ranger has written a post on the AVPreserve blog that calls on archivists (and others) to examine their digitization practices and priorities. Arguing that audiovisual materials are in greater danger of obsolescence, Ranger declares, “We should agree to stop digitizing paper and other stable formats for a set period because, in a way, it is bad for preservation.” Though his focus is on audiovisual materials, Ranger draws attention to the underlying rationale for digitization in general. He notes:

[A] lot of digitization work is essentially a wasted effort if it needs to be done again for access, or future preservation work, if files, access portals, metadata, and digital humanities projects are lost. And I’m not just saying lost as in the fretting about the unreliability of digital files, but lost due to human failure in managing servers, migrating data, or letting websites go dead.

POST: Analysis of Privacy Leakage on a Library Catalog Webpage

Eric Hellman (unglue.it) has written up a recent presentation at the Code4Lib-NYC meeting in which he performed an “Analysis of Privacy Leakage on a Library Catalog Webpage.”

Hellman selected a single webpage for a book in the NYPL online catalog and traced “all the requests my browser made in the process of building that page.” Noting that “my browser contacts 11 different hosts from 8 different companies,” Hellman investigates each company’s privacy policy and use of cookies to give an alarming picture of the way that patron browsing data is shared via cloud-based library catalogs. He concludes:

In 1972, Zoia Horn, a librarian at Bucknell University, was jailed for almost three weeks for refusing to testify at the trial of the Harrisburg 7 concerning the library usage of one of the defendants. That was a long time ago. No longer is there a need to put librarians in jail.
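Hellman’s request-tracing can be approximated, very roughly, with a static scan of the resources a catalog page embeds. The sketch below is a hypothetical illustration, not his code, and undercounts real leakage because browsers also make requests triggered by scripts and cookies; the URL is a placeholder:

```python
# Rough sketch (not Hellman's code): list third-party hosts referenced by a
# catalog page's embedded resources. Assumes requests and beautifulsoup4.
from urllib.parse import urlparse
import requests
from bs4 import BeautifulSoup

PAGE = "https://catalog.example.org/record/12345"  # hypothetical record page
first_party = urlparse(PAGE).hostname

soup = BeautifulSoup(requests.get(PAGE, timeout=10).text, "html.parser")

hosts = set()
for tag, attr in (("script", "src"), ("img", "src"), ("link", "href"), ("iframe", "src")):
    for el in soup.find_all(tag):
        url = el.get(attr)
        if url and url.startswith("http"):
            host = urlparse(url).hostname
            if host and host != first_party:
                hosts.add(host)

print(f"{len(hosts)} third-party hosts referenced:")
for host in sorted(hosts):
    print(" ", host)
```

Each host on that list is a company whose privacy policy and cookie practices would need the kind of scrutiny Hellman gives them.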

POST: Getting Digital Humanities Done: Schedule, Software, Etc. for a Digital Dissertation

Amanda Visconti (University of Maryland) has written a post that sheds light on how a research developer works “on a daily basis.” Visconti describes the elements that comprise her physical and digital work environments.

Similar to the productivity interviews at The Setup, Visconti says, “I’ll describe the workplace set-up, schedule, and software that help me make progress on my Infinite Ulysses project, with the hope of hearing more from others about the day-to-day environment and behavior that produces their digital humanities work.”

POST: Making Scanned Content Accessible Using Full-text Search and OCR

Chris Adams (Library of Congress) has written a guest post for The Signal detailing how the library community can affordably meet the challenge of creating metadata for “our terabytes of carefully produced and diligently preserved TIFF files” to promote discovery and engagement.

In “Making Scanned Content Accessible Using Full-text Search and OCR,” Adams documents how to get “from scan to search” in four steps. Adams also offers possible directions for the future including “a simple web application which would display images with the corresponding OCR with full version control, allowing the review and correction process to be a generic workflow step for many different projects.”
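To make the scan-to-search idea concrete (a minimal sketch, not Adams’s workflow), a page image can be OCRed with Tesseract and its text added to a toy inverted index. The file names below are placeholders, and the snippet assumes pytesseract, Pillow, and the Tesseract binary are installed:

```python
# Minimal scan-to-search sketch (not Adams's workflow): OCR TIFFs with
# Tesseract and build a toy inverted index mapping words to page files.
from collections import defaultdict
import re

from PIL import Image
import pytesseract

pages = ["page_001.tif", "page_002.tif"]  # hypothetical scanned pages

index = defaultdict(set)  # word -> set of page files containing it
for path in pages:
    text = pytesseract.image_to_string(Image.open(path))
    for word in re.findall(r"[a-z]+", text.lower()):
        index[word].add(path)

def search(term):
    """Return the pages whose OCR text contains the term."""
    return sorted(index.get(term.lower(), []))

print(search("library"))
```

A production pipeline would of course feed the OCR output into a real search engine and preserve word coordinates for highlighting, which is part of what Adams’s four steps address.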

POST: Why Digital Humanities Researchers Support Google’s Fair Use Defense

Matthew Sag (Loyola University Chicago School of Law) has contributed a guest post for the Authors Alliance blog, explaining “Why Digital Humanities Researchers Support Google’s Fair Use Defense.” Sag co-authored the amicus brief “urging the Second Circuit Court of Appeals to side with Google in this dispute.” He explains:

Digital Humanities scholars fervently believe that text mining and the computational analysis of text are vital to the progress of human knowledge in the current Information Age. Digitization enhances our ability to process, mine, and ultimately better understand individual texts, the connections between texts, and the evolution of literature and language.

 

POST: The Networked Catalog

Matt Miller (NYPL Labs) has written a post introducing an experimental interactive network visualization of the New York Public Library’s catalog data. The NYPL Labs team have been “fascinated with our catalog and the possibilities its data represent,” and asked:

[W]hat if the catalog had a “See All” button? What if you could see everything at once, to get the big picture about what subjects the library has information on and what are the related topics? 

Using the catalog’s subject heading information, the team created a force-directed network to “explore the vast materials living at NYPL.” Miller also includes fascinating visualizations and links to a simulation from the project.
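A simplified version of the underlying idea (a sketch over made-up data, not the NYPL Labs pipeline) is to link subject headings that co-occur on the same catalog record and then hand the weighted graph to a force-directed layout, for example with networkx:

```python
# Simplified sketch (not the NYPL Labs pipeline): build a co-occurrence
# network of subject headings, linking headings that share a catalog record.
# Assumes networkx is installed; the sample records are invented.
from itertools import combinations
import networkx as nx

records = [  # hypothetical catalog records with their subject headings
    {"title": "Record A", "subjects": ["New York (N.Y.)", "Maps", "History"]},
    {"title": "Record B", "subjects": ["Maps", "Cartography"]},
    {"title": "Record C", "subjects": ["History", "New York (N.Y.)"]},
]

G = nx.Graph()
for record in records:
    for a, b in combinations(sorted(set(record["subjects"])), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# The weighted graph can then be positioned with a force-directed layout
# (e.g. nx.spring_layout) or exported for a browser-based viewer.
print(G.number_of_nodes(), "subjects,", G.number_of_edges(), "links")
```

Scaled up to millions of records, this kind of subject co-occurrence graph is what gives the NYPL visualization its “see all at once” view of related topics.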