POST: Announcing the first subscription journal to flip to open access through the Open Library of Humanities

The Journal of British and Irish Innovative Poetry, a formerly subscription-access journal in publication since 2009, has become the first of its kind to adopt the Open Library of Humanities open access model. According to journal editors Scott Thurston, Gareth Farmer, and Vicky Sparrow:

This move to a fully open-access platform with the OLH is a game-changing moment for the journal in its sixth year of operations… That our new form is open-access (and free of author-charges) sits particularly well with the ethos of a Journal so concerned with liberated and liberating writing, and we hope the move will open our work up to a wider community of thinkers. We are all thrilled about the potential for this new phase of operations and keen to experiment!

Open Library of Humanities founder Dr. Martin Paul Eve reports that a range of new journals are on the horizon for OLH as well.

POST: Open-Sourcing MoMA’s Digital Vault

In “Open-Sourcing MoMA’s Digital Vault,” Ben Fino-Radin (Museum of Modern Art) explains the process and rationale behind the creation of Binder, the museum’s new open-source software for “overseeing and managing the active preservation of digital collections.” Binder was developed in conjunction with Artefactual Systems, and designed for use alongside Archivematica and Arkivum.

What is good for preservation is not always a boon to access or management. Therefore, before packages [created by Archivematica] are sent to “the warehouse” [Arkivum], Binder sifts through them, indexes their contents, and stores what it finds in a database that is built to be very good at queries across large sets of data. Binder allows us to see the bigger picture in our collection.

Binder is available on GitHub and features a REST API. A video introduction to the tool is also available.
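Since Binder exposes a REST API, it can in principle be queried programmatically. The endpoint path and parameters below are hypothetical placeholders (Binder's actual routes are documented in its GitHub repository); this sketch only shows how such a query would be assembled in Python:

```python
from urllib.request import Request
from urllib.parse import urlencode

# Hypothetical host and endpoint for illustration only; consult Binder's
# GitHub documentation for the real API routes and query parameters.
base = "https://binder.example.org/api/fileinformation"
params = {"query": "format:TIFF", "limit": 10}

# Build (but do not send) a JSON request against the hypothetical endpoint.
req = Request(base + "?" + urlencode(params),
              headers={"Accept": "application/json"})
print(req.full_url)
```

Sending the request would then be a matter of `urllib.request.urlopen(req)` (or a client library of your choice) against a real Binder installation.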

POST: Becoming Digital Public Historians

Trevor Owens (Institute of Museum and Library Services) has written a post on his blog, reflecting on his experience teaching a Digital Public History seminar at the University of Maryland’s College of Information Studies. “Becoming Digital Public Historians” references both the title of a course unit and, Owens argues, a “kind of identity work [that] is at the core of what graduate education is supposed to be about.”

The post details the eight students’ final projects, which cover a broad range of collections, approaches, tools, and platforms. Owens concludes:

It took me a bit of time to shift gears from teaching a digital history course to public history students to teaching a digital public history course to iSchool students. With that said, the experience made me realize how relevant I think digital public history is to the future of libraries and archives.

POST: Learning How Fair Use Works

Kevin Smith (Duke University) has written a post introducing the new Fair Use Index released by the U.S. Copyright Office. The index, which will be updated and expanded by the Copyright Office, contains searchable summaries of approximately 170 fair use cases in U.S. courts and, as Smith notes, doubles as a learning tool: “the best way to understand fair use, and to become comfortable with it, is to look closely at the cases, both in the aggregate and individually.”

Smith takes a close look at what is or isn’t included in the index, and compares it to existing resources, including IP Watchdog and the Stanford Copyright & Fair Use site.


POST: Lessons Learned from “Bridging the Gap: Women, Code, and the Digital Humanities”

Celeste Sharpe and Jeri Wieringa (George Mason University) have written a post on the Association for Computers and the Humanities blog about their experience with DH Bridge, “an open curriculum and workshop framework for teaching computational thinking in the context of the humanities.”

Taking a cue from RailsBridge and Rails Girls, the project:

set out to adapt the model of a distributed pedagogy to the needs of humanities scholars who, while not looking to become programmers per se, want to develop the skills and patterns of thinking necessary to apply computational methods to their scholarship.

The course explored the Digital Public Library of America’s collections using Python and the Natural Language Toolkit. Pedagogically, Sharpe and Wieringa chose to lead participants from concrete problems and data toward more abstract concepts, the reverse of many coding tutorials, because they found that this approach struck a better balance between the conceptual and the technical.
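To give a flavor of the kind of concrete-first exercise described above, here is a minimal word-frequency sketch. The sample text is invented, standing in for an item description fetched from the DPLA API, and the tokenization uses only the standard library (NLTK's `word_tokenize` and `FreqDist` would be the richer choices in the actual workshop context):

```python
import re
from collections import Counter

# Invented sample text standing in for a DPLA item description.
description = (
    "A photograph of the main reading room of the public library, "
    "showing the library's original card catalog and reading tables."
)

# Lowercase and split into word tokens (apostrophes kept inside words).
tokens = re.findall(r"[a-z']+", description.lower())

# Count token frequencies and show the most common words.
counts = Counter(tokens)
print(counts.most_common(3))  # → [('the', 3), ('of', 2), ('reading', 2)]
```

Starting from a tangible question ("what words dominate this collection's descriptions?") and only later naming the underlying concepts (tokenization, frequency distributions) mirrors the concrete-to-abstract sequencing the organizers describe.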

The organizers also argue that their training model fills a gap in the DH training landscape: local and regional events that are informal and offer intensive learning opportunities, yet are not resource-intensive to organize or attend (no registration costs, only one or two days, and minimal travel requirements).

POST: We’re Overdue on Altmetrics for the Digital Humanities

Stacey Konkiel (Altmetric) has written a post reflecting on her recent efforts to understand how research in the humanities is evaluated, with an eye towards the digital humanities in particular.

Observing that scholarly societies have begun to issue guidelines for evaluating digital scholarship projects, Konkiel notes that they:

tend not to address newer types of quantitative and qualitative data, sourced from the web, that can help reviewers understand the full scope of the impacts your work may have. This data can include newer impact metrics like numbers of website visitors, what other scholars are saying about your work on their research blogs and social media, how many members of the public have reviewed your books on GoodReads and Amazon, and so on.

Konkiel’s post includes links and slides from her recent presentations on this topic, and she invites readers to contribute to a conversation on where altmetrics might fit into the humanities.

POST: Digital Texas Round-Up!

Charlotte Nunes (Southwestern University) has written a round-up of two Texas digital scholarship conferences: the Texas Digital Humanities Conference and the Texas Conference on Digital Libraries.

Chock-full of links to projects and presentations, Nunes’s post covers presentations from Tanya Clement, Rebecca Frost Davis, Liz Grumbach, Adeline Koh, Alan Liu, Bess Sadler, George Siemens, and more.

POST: Libraries Looking Across Languages: Seeing the World Through Mass Translation

Kalev Hannes Leetaru (George Washington University) wrote a post for The Signal exploring the landscape of statistical machine translation (SMT) and outlining the advances made in the last few years to move toward a more “post-lingual society.” Leetaru provides an overview of mass translation projects with a particular focus on GDELT Translingual (Global Database of Events, Language, and Tone) which “live-translates all global news media that GDELT monitors in 65 languages in real-time, representing 98.4% of the non-English content it finds worldwide each day.” Leetaru reflects:

Machine translation has truly come of age to a point where it can robustly translate foreign news coverage into English, feed that material into automated data mining algorithms and yield substantially enhanced coverage of the non-Western world. As such tools gradually make their way into the library environment, they stand poised to profoundly reshape the role of language in the access and consumption of our world’s information. Among the many ways that big data is changing our society, its empowerment of machine translation is bridging traditional distances of geography and language, bringing us ever-closer to the notion of a truly global society with universal access to information.

POST: Notes from HathiTrust Uncamp 2015

Melissa Levine, in collaboration with Alix Keener and Rick Adler (all University of Michigan), has compiled a thorough recap of the HathiTrust Uncamp, held March 30-31, 2015, in Ann Arbor, Michigan.

The event “featured a diverse range of perspectives, ideas, demos, and poster sessions all featuring non-consumptive uses of HathiTrust,” and Levine’s post provides both summary and context (and links!) to the breakout sessions and keynote addresses.

POST: Tracking Digital Collections at the Library of Congress, from Donor to Repository

In a post on The Signal, Mike Ashenfelder introduces the work of senior archives specialist Kathleen O’Neill (Library of Congress) and her team as they shepherd digital collections through the “chain of custody” that takes them from acquisition to researcher access.

Written for a non-specialized audience, the post provides a useful introduction to the process and professional considerations of managing born-digital materials.