POST: HILT2015, iSchools+DH, and deconstructive/critical digital pedagogy.

Micah Vandegrift (Florida State University) shared a post in which he considers how to build on the iSchools+DH project, which cross-pollinated iSchool students with digital humanities centers. Vandegrift is now focused on compiling syllabi from DH courses taught in library schools, asking:

How are DH courses being taught in LIS programs and iSchools? Is there any continuity or curricular similarities across programs? Are DH courses at different programs teaching on similar topics? Are we all reading the same things, learning the same tools, gaining the same skills?

Vandegrift ends with a call for collaborators interested in reviving a program of hands-on training for iSchool students studying digital humanities.

POST: #guerrilladh

Alex Gil (Columbia University) shared a post collocating thoughts on the #dhpoco summer school in 2013 with more recent ruminations inspired by the work of Bess Sadler and Chris Bourg, Miriam Posner, and Tim Sherratt. Exploring questions of activism and the potential for digital humanities to work towards “one of the broadest universals we have known,” Gil points out that there are serious barriers to realizing this dream:

For many of us who work in A© post-1923, the material past we are called to remediate is under literal negotiation. As we trudge along, one ethical hand locked behind our backs by professional demands to publish and not make public, shadow and pirate libraries have sprouted to remind us that the mechanisms we have set in motion will ignore the ticket-booths to knowledge central. These remediations are happening with our half-baked collaboration but without our scholarship.

POST: Omeka Curator Dashboard

Jess Waggoner (University of California, Santa Cruz) has written a post providing an overview of the Omeka Curator Dashboard, “a suite of fifteen plugins (though a bonus sixteenth will be coming soon!) designed to facilitate object import and export, manage metadata, and curate collections.”

Waggoner explains how working on the Grateful Dead Archive Online project helped her and her colleagues identify a need for “curatorial workflow tools, metadata and file management tools” within Omeka. The post also outlines what each plugin offers to users and points to resources for getting started with them.

POST: Down the Rabbit Hole

In “Down the Rabbit Hole,” Scott Weingart (Carnegie Mellon University) links his search for the source behind a map he’d seen in a tweet (and the resulting difficulties and dead-ends) to the work of the Viral Texts project, drawing parallels between 19th-century newspaper citation practices and (the failures of) modern-day citation practice online.

A single snippet of text could wind its way all across the country, sometimes changing a bit like a game of telephone, rarely-if-ever naming the original author.

Isn’t that a neat little slice of journalistic history? Different copyright laws, different technologies of text, different constraints of the medium, they all led to an interesting moment of textual virality in 19th-century America. If I weren’t a historian who knew better, I’d call it something like “quaint” or “charming”.

You know what isn’t quaint or charming? Living in the so-called “information age”, where everything is intertwingled, with hyperlinks and text costing pretty much zilch, and seeing the same gorram practices.

Weingart’s many-layered citation chase, and what it turned up, argue for the importance of examining the data behind a publication and for designing systems, and reinforcing practices, that enable sharing with attribution.

POST: Towards monocultural (digital) Humanities?

Domenico Fiormonte (University of Roma Tre) has published a blog post on InfoLet, “Towards monocultural (digital) Humanities?” Writing partly in response to Gregory Crane’s (Tufts University) recent article, “The Big Humanities, National Identity and the Digital Humanities in Germany,” Fiormonte analyzes linguistic diversity in digital humanities research.

English native speakers get a free ride, but the incommensurate economical, rhetorical and semiotic power of Anglophones undermine and inhibit the right to express ideas in our own native language. If biology is a model, then we should remember that monoculture is pushing species towards extinction in the most effective way.

A colleague and I have an article coming out on the relationship between the language of DH publications and the languages of sources (i.e. bibliographic references and citations). Our data, although gathered from a relatively small sample (seven main DH journals worldwide), show that DH is monolingual regardless of the country and/or working institution/affiliation of authors.

Fiormonte goes on to discuss language bias issues in the sources used for Crane’s article (Scopus, the Science Citation Index, the Social Sciences Citation Index, and the Arts & Humanities Citation Index), as well as concerns surrounding monolingualism and English as the dominant language of DH research.

These data show that the real problem is not that English is the dominant language of academic publications (and of DH), but that both Anglophone and a high percentage of non-Anglophone colleagues barely use/quote non-Anglophone sources in their research. On the long run, this trend could have a devastating effect on Humanities research as a whole, and lead to the disappearance of cultural diversity (at least in academic publications). In educational institutions worldwide we keep hearing “go English if you want to be international”, a mantra that can be also translated as “your local language is useless for intellectual expression”.

POST: Acceptances to Digital Humanities 2015 (series)

Each year, Scott Weingart (Indiana University) analyzes the publicly available data on accepted submissions to the Alliance of Digital Humanities Organizations’ Digital Humanities conference. This year, he has drafted a series of posts, each tackling a different aspect of the data, and we’ve reproduced the tl;dr for each here:

Part 1:

Part 1 is about sheer numbers of acceptances to DH2015 and comparisons with previous years. DH is still growing, but the conference locale likely prohibited a larger conference this year than last. Acceptance rates are higher this year than previous years. Long papers still reign supreme. Papers with more authors are more likely to be accepted.

Part 2:

This post’s about the topical coverage of DH2015 in Australia. If you’re curious about how the landscape compares to previous years, see this post. You’ll see a lot of text, literature, and visualizations this year, as well as archives and digitisation projects. You won’t see a lot of presentations in other languages, or presentations focused on non-text sources. Gender studies is pretty much nonexistent. If you want to get accepted, submit pieces about visualization, text/data, literature, or archives. If you want to get rejected, submit pieces about pedagogy, games, knowledge representation, anthropology, or cultural studies.

Part 3:

There’s a disparity between gender diversity in authorship and attendance at DH2015; attendees are diverse, authors aren’t. That said, the geography of attendance is actually pretty encouraging this year. A lot of this work draws on a project on the history of DH conferences I’m undertaking with the inimitable Nickoal Eichmann. She’s been integral on the research of everything you read about conferences pre-2013.

Part 4:

Women are (nearly but not quite) as likely as men to be accepted by peer reviewers at DH conferences, but names foreign to the US are less likely than either men or women to be accepted to these conferences. Some topics are more likely to be written on by women (gender, culture, teaching DH, creative arts & art history, GLAM, institutions), and others more likely to be discussed by men (standards, archaeology, stylometry, programming/software).

POST: Digital Manuscripts as Critical Edition

The Schoenberg Institute for Manuscript Studies has posted a lightly edited version of “Digital Manuscripts as Critical Edition,” a talk given by Christoph Flüeler (University of Fribourg) at the 50th International Congress on Medieval Studies. In his examination of the relationship between originals and digital reproductions, Flüeler calls for a “critical theory of the digital manuscript” that addresses the potential for digitized manuscripts to themselves constitute a critical apparatus.

What we need to do is to ask the following question: what preconditions must be met in order for a digital manuscript to be understood as a reliable resource for scholarly research, such that a scholarly researcher can, without any great misgivings or doubts, utilize the digital object as the basis for serious research and make use of it to the fullest possible extent?

POST: Text Capture and Optical Character Recognition 101

Simon Tanner (King’s College London) offers an introduction to Text Capture and OCR in a recent blog post. Tanner outlines the various ways in which digital humanities textual datasets are created from physical artifacts, and the strengths and weaknesses of OCR, rekeying, handwriting recognition, and speech recognition as methods for creating them.

This post is particularly helpful for those considering starting digitization projects from scratch and serves as a good, readable primer for those who may not have had much exposure to the processes through which print documents are transformed into digital textual data. Tanner also provides advice on choosing a suitable approach for original projects, with consideration of levels of representation, indexing, metadata, and mark-up.
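As a concrete illustration of the OCR route Tanner describes (this is not code from his post), here is a minimal sketch using the open-source Tesseract engine through the pytesseract Python wrapper; the file name page_scan.png is a hypothetical stand-in for a digitized page image.

    # Minimal OCR sketch: turn one scanned page image into plain text.
    # Assumes the Tesseract engine is installed locally and available to pytesseract.
    from PIL import Image
    import pytesseract

    # Hypothetical input: a single digitized page from a scanning workflow.
    page = Image.open("page_scan.png")

    # Run OCR; the output is raw, uncorrected text that would normally be
    # proofread or post-corrected before entering a research dataset.
    text = pytesseract.image_to_string(page)

    with open("page_scan.txt", "w", encoding="utf-8") as out:
        out.write(text)

Even this happy path says nothing about the questions of accuracy, representation, indexing, and mark-up that Tanner weighs when comparing capture methods.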

POST: Exploring Digital Humanities and Media History

Lisa Spiro (Rice University) has written a post recapping her experience at the Arclight Symposium at Concordia University, an event which “brought together film and media historians with digital humanists to explore the possibilities and pitfalls of digital methods for media history.”

In “Exploring Digital Humanities and Media History,” Spiro details the core principles, challenges, and approaches that were present at the symposium, along with links to projects and related research.

POST: Reviving the Statistical Atlas of the United States with New Data

What would the Statistical Atlas of the United States (first published in 1874) look like with current data? In “Reviving the Statistical Atlas of the United States with New Data,” Nathan Yau (Flowing Data) has created a project doing just that, using the programming language R.

I used similar styling, and had one main rule for myself. All the data had to be publicly available and come from government sites.

Yau’s visualizations maintain the look of the original Statistical Atlas while drawing on much larger and more detailed datasets, which are all helpfully linked in the post.
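Yau works in R, but the general recipe, loading a public government dataset and re-plotting it in a restrained, atlas-like style, can be sketched in a few lines of Python; the CSV URL and column names below are hypothetical placeholders, not taken from Yau’s project or his sources.

    # Illustrative sketch of the approach (Yau's actual project uses R):
    # load a public, state-level government dataset and chart it in a plain
    # style loosely reminiscent of the 1874 Statistical Atlas.
    # The URL and column names are hypothetical placeholders.
    import pandas as pd
    import matplotlib.pyplot as plt

    DATA_URL = "https://example.gov/state_population.csv"  # placeholder for a government data file

    df = pd.read_csv(DATA_URL)                 # expected columns: "state", "population"
    df = df.sort_values("population")

    fig, ax = plt.subplots(figsize=(6, 10))
    ax.barh(df["state"], df["population"], color="#777777")
    ax.set_xlabel("Population")
    ax.set_title("State populations, from a public government dataset")
    fig.tight_layout()
    fig.savefig("statistical_atlas_remake.png", dpi=200)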

POST: So you want to be a Data Visualization Librarian?

The latest “So What Do You Do?” feature from Hack Library School consists of interviews with four data visualization professionals in different roles at the University of Michigan: Marci Brandenburg, Justin Joque, Stephanie O’Malley, and Ted Hall.

The contributors detail what drew them to work in data visualization, describe their day-to-day practice, and offer advice for those interested in entering this emerging field of librarianship. The piece gives both LIS students and professional librarians an overview of the necessary skills and suggests steps one might take to prepare for work in data visualization.