POST: The Archive as Data Platform

In a post prompted by the release of the Carter Cables by WikiLeaks, Ed Summers asks the question:

What if instead of trying to build the ultimate user experience for archival content, archives focused first and foremost on providing simple access to the underlying data first?

Responding to the “URL inspection” required to figure out how to download archival records in bulk from NARA, Summers imagines a reorientation among archivists that would make this kind of bulk access central to digital archives. Noting some reticence about a discussion stemming from a controversial source such as WikiLeaks, Summers puts the idea in perspective for scholars:

Researchers should feel that downloading data from the archive is a legitimate activity. 
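
As a concrete picture of what “simple access to the underlying data” might look like, here is a minimal sketch of bulk downloading from a paginated JSON API. The endpoint, parameters, and response fields are hypothetical, not NARA’s actual interface.

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "https://archive.example.org/api/records"  # hypothetical endpoint

def fetch_records(query, pages=3, per_page=100):
    """Yield records from a paginated JSON API, one page at a time."""
    for page in range(1, pages + 1):
        params = urllib.parse.urlencode(
            {"q": query, "page": page, "rows": per_page}
        )
        with urllib.request.urlopen(f"{BASE_URL}?{params}") as response:
            payload = json.load(response)
        # "records", "id", and "title" are assumed field names.
        for record in payload.get("records", []):
            yield record

if __name__ == "__main__":
    for record in fetch_records("carter cables"):
        print(record.get("id"), record.get("title"))
```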

Thoughts, anyone?

POST: Affinity of Ideas: Using an Affinity Wall to Map Out My Digital Dissertation

Amanda Visconti (University of Maryland), who recently posted about her digital dissertation, “Infinite Ulysses,” has written a post introducing the organizational technique of affinity mapping. She explains:

It’s a way to take a bunch of separate ideas and visually map out how they’re related (thus, “affinity”); this helps your areas of focus (or paper section headings) rise organically out of an overview of all the things you want to cover, rather than being pushed onto the ideas from the top down.

Visconti also explains how she used the technique to help structure an academic journal article based on her dissertation project.

POST: Neotopology

Elijah Meeks (Stanford University Libraries) has posted notes from his talk at the Texas Digital Humanities Consortium’s First Annual Conference, Networks in the Humanities (#txdhc). Among several topics, Meeks addresses the notion of scholarly “interlopers” with respect to the digital humanities:

I think interloping, more than computational approaches or the digital broadly construed as the object of study, defines digital humanities. And scholars are not the only ones interloping. We find ourselves awash in accessible, powerful tools and techniques that seem well-suited for our research and entice us into fields and disciplines with which we haven’t the wealth of domain expertise that we do in our primary fields.

POST: Mellon Funding for the Open Library of the Humanities

Adeline Koh (Stockton College) announced on ProfHacker that the Open Library of the Humanities, under the direction of Martin Paul Eve (University of Lincoln) and Caroline Edwards (University of London), has received “a substantial Mellon Foundation grant to build its technological platform, business model, journal and monograph pilot scheme.” In an interview with Koh this week, Eve explains the origins of the project:

[A]t the end of 2013, amid a swirl of Twitter conversations, the idea for the Open Library of Humanities was born: a high-volume gold open access publisher with strict quality controls run on a not-for-profit, but for-sustainability, basis. At first, it was to be a project like PLOS – that is, based on Article Processing Charges at an affordable rate. However, gold does not have to mean APCs and we soon realised that there might be a route to achieving gold open access without publication fees…

POST: Digital History and the Death of Quant

In a post on the British Library’s Digital Scholarship Blog, James Baker (Curator, Digital Research) poses the question, “What do historians need to do good digital research?”

In his answer, Baker laments the lack of basic training in statistical and quantitative methods offered to historians in undergraduate programs:

As someone who was trained during the apotheosis of cultural history and the associated agonising over post-structuralist and post-modernist theory, I was not taught to count as a historian. And I wish I had been, for when first attempting to do good digital research I would have benefited from possessing this core skill, a skill the profession still has the capacity to teach.

Baker compares several editions of an introductory history textbook, noting the shrinking number of pages devoted to quantitative methods with each successive revision between 1984 and 2012.
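
As a gesture toward the kind of counting Baker has in mind, here is a minimal sketch of basic descriptive statistics over a set of record dates. The figures are invented for illustration.

```python
from collections import Counter
from statistics import mean

# Hypothetical sample: the year each record in a small collection was created.
years = [1841, 1843, 1843, 1847, 1852, 1852, 1852, 1858, 1861, 1865]

# Count records per decade.
per_decade = Counter(year // 10 * 10 for year in years)
for decade, count in sorted(per_decade.items()):
    print(f"{decade}s: {count} records")

print(f"mean year: {mean(years):.1f}")
```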

POST: The Dividends of Difference: Recognizing DH’s Diverse Family Tree/s

Tom Scheinfeldt (University of Connecticut) has posted a recent talk, “The Dividends of Difference: Recognizing Digital Humanities’ Diverse Family Tree/s,” in which he advocates for a more nuanced genealogy of DH.

Framed by his realization that minimizing differences in digital humanities “in the name of collegiality” can be problematic (nodding to #dhpoco for its important work in this area), Scheinfeldt offers a narrative of DH that stems not from Father Busa and textual studies, but from Alan Lomax and the oral and public history that took shape in the 1940s and 1950s.

Thus, from my perspective, the digital humanities family tree has two main trunks, one literary and one historical, that developed largely independently into the 1990s and then came together in the late-1990s and early-2000s with the emergence of the World Wide Web. That said, I recognize and welcome the likely possibility that this is not the whole story. I would love to see this family tree expanded to describe three or more trunks (I’m looking at you anthropology and geography). We should continue to bring our different disciplinary histories out and then tie the various strains together.

PROJECT: Beyond Citation

Eileen Clancy (City University of New York) discusses the ideas behind Beyond Citation, a project from students in the Digital Praxis Seminar at the CUNY Graduate Center that seeks to understand how “databases shape the questions that can be asked and the arguments that can be made by scholars through search interfaces, algorithms, and the items that are contained in or absent from their collections.” Among the shortcomings the project highlights are OCR errors and the difficulty of locating the provenance information needed to properly contextualize search results.

The Beyond Citation project plans to launch an early version of its website in May 2014, aggregating “bibliographic information about major humanities databases so that scholars can understand the significance of the material they have gleaned.”

POST: Critical Code Studies Working Group: In Review

Viola Lasmana (University of Southern California) provides a summary of the recently completed month-long Critical Code Studies Working Group, an online discussion group based there that seeks to apply “humanities hermeneutics to the interpretation of the extra-functional significance of computer source code.” The first week of discussion centered on exploratory programming, “a form of computing that is flexible, unpredictable, does not require expert programming skills, and iterative, always in a process of revision.” The second week focused on feminist programming and sought to answer the questions, “What is feminist code? What is feminist coding?”
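
To make the first week’s notion of exploratory programming a bit more concrete, here is a toy sketch in that spirit: a crude first pass, an inspection of the output, and a revision. The text is invented for illustration.

```python
from collections import Counter

text = "Rose is a rose is a rose is a rose"

# First pass: a naive split, just to see what comes out.
words = text.split()
print(words)  # 'Rose' and 'rose' differ in case -- normalize before counting.

# Second pass: lowercase before counting, then look again.
counts = Counter(word.lower() for word in words)
print(counts.most_common(3))
```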

Jacqueline Wernimont (Scripps College), a participant in the working group, commented:

[T]here is something generative in allowing the absent-presence of feminist executable code to operate as an irritant, an occasion to continue to question the structures that have not permitted such a thing to exist.

Lasmana will soon post the highlights of weeks three and four, which focused on “PostColonial CritCode: Coding in Global Englishes” and “ACLS Workbench Collaborative Reading.”

POST: The Walt Whitman Archive

The Maryland Institute for Technology in the Humanities (MITH) shared an update on a new project related to the Walt Whitman Archive, a longstanding DH project that “sets out to make Whitman’s vast work, for the first time, easily and conveniently accessible to scholars, students, and general readers.”

Working in collaboration with the University of Texas at Austin and the Center for Digital Research in the Humanities at the University of Nebraska–Lincoln, the project team is using the tools developed for MITH’s Shelley-Godwin Archive to build a digital publication that “will allow users to read semi-diplomatic transcriptions of the texts alongside facsimile images, as well as visually distinguish regions of text annotated by Whitman.” The topical focus of the project is “Whitman’s annotations and commentary about history, science, theology, and art being discussed during his time,” which have been closely encoded according to TEI standards.

As MITH research programmer Raffaele Viglianti writes in the update: “By adapting our Shelley-Godwin tools for Whitman, we found that Open Annotation was particularly suited for modeling Whitman’s own annotations, as the data model offered a basic and open system to represent generic annotation acts (for example by relating a piece of Whitman’s commentary to the specific portion of text that it annotates).” A launch date is expected to be announced in the coming weeks.
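
For readers curious what such an annotation act looks like in practice, here is a minimal sketch of an Open Annotation resource relating a piece of commentary to the passage it targets. It follows the general shape of the Open Annotation data model, but the URIs and quoted text are invented and the structure is simplified; it is not drawn from the Whitman project itself.

```python
import json

# A simplified Open Annotation resource (JSON-LD). Only the vocabulary
# terms come from the Open Annotation data model; everything else is
# invented for illustration.
annotation = {
    "@context": "http://www.w3.org/ns/oa-context-20130208.json",
    "@id": "http://example.org/annotations/1",
    "@type": "oa:Annotation",
    # Body: the commentary Whitman wrote, as its own resource.
    "hasBody": "http://example.org/whitman/commentary/1",
    # Target: the exact stretch of source text the commentary annotates.
    "hasTarget": {
        "@type": "oa:SpecificResource",
        "hasSource": "http://example.org/whitman/texts/page-12",
        "hasSelector": {
            "@type": "oa:TextQuoteSelector",
            "exact": "the learn'd astronomer",
            "prefix": "When I heard ",
            "suffix": ", When the proofs",
        },
    },
}

print(json.dumps(annotation, indent=2))
```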

POST: Literary Texts and the Library in the Digital Age, or, How Library DH is Made

Glen Worthey (Stanford University) has posted a written version of his talk from last year’s ALA Annual Conference in Chicago, “Literary texts and the library in the digital age, or, How library DH is made.” As part of a panel on new roles for European and American Studies librarians in digital literary scholarship, Worthey touches on current threads of library debate surrounding the digital humanities — with generous praise for contributors to dh+lib’s ebook, Make It New (thanks, Glen!), among others.

The talk draws parallels between Russian Formalists of the 1920s and digital humanists of today:

In approaching the literary text, we focus on “how it’s made” – how literary history, genre systems, narrative lines, character networks, and even language itself are “made.” Like the Russian Formalists, we in the textual digital humanities focus on “The Word as Such” (to use the title of a manifesto by two poets who were close comrades to the Formalists, Aleksei Kruchenykh and Velimir Khlebnikov); the advantage we claim in a particular digital approach is that we can do that at scale: our focus can be telescopic. But the object in view is very much the same as that of our predecessors.

Worthey goes on to assert that:

[W]e in the library should make long-term, structural commitments to digital humanities work, rather than relying on short-term hires or crudely tacking on new job responsibilities to those of already-busy librarians.

POST: The Red Herring of Big Data

Brian Croxall (Emory University) has shared text and slides from an August 2013 talk entitled “The Red Herring of Big Data,” in which he offers an engaging overview of the ways that digital humanities scholars can use digital technologies to go beyond pattern recognition and into humanities interpretation. Croxall reviews recent projects concerned with both big data and small data, and demonstrates that (a toy illustration follows the list):

  • Data need interpretation
  • Data don’t have to be big
  • Data aren’t always the answer
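
To illustrate the first point, a toy sketch: the counts below are invented, and while the spike in them is trivially easy to detect, nothing in the numbers themselves says what it means.

```python
# Invented counts of mentions of some term per year.
mentions = {1914: 12, 1915: 48, 1916: 51, 1917: 95, 1918: 60}

baseline = mentions[1914]
for year, count in sorted(mentions.items()):
    print(f"{year}: {count:3d} mentions ({count / baseline:.1f}x the 1914 baseline)")

# Detecting the spike is pattern recognition; explaining it is the
# interpretive work that, in Croxall's terms, the data alone cannot do.
```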