Note: As the dh+lib Review editors work behind the scenes this summer, we have invited a few members of our community to step in as guest editors and share with us what they are reading and why the dh+lib audience might want to read it too. This post is from Rebekah Cummings, Digital Matters Librarian at University of Utah.
Hello, dh+lib community! Summer is winding down, but we still have a few weeks (well, two for me, sob) to add readings to our fall syllabi or reading group schedules. While I hoped my readings would have a cohesive theme like humanities data curation, ultimately, I decided to discuss four completely unrelated readings that stood out to me the most this summer. Topics include a thoughtful critique of data cleaning, humanities video game scholarship, a must-read white paper on contingent labor in digital libraries, and a critique of computational literary studies. Enjoy, and let me know your thoughts in the comments or on Twitter!
Rawson, K., & Muñoz, T. (2019). Against Cleaning. In Debates in the Digital Humanities 2019. University of Minnesota Press.
“Against Cleaning” was originally posted in 2016, but its inclusion in Debates in the Digital Humanities 2019 made me revisit it this summer with fresh eyes. In my former life as a data management librarian, it was almost sacrosanct that good data is clean data. It wasn’t uncommon to hear researchers say that they spent the majority of their research time getting their data into a usable or useful form through the process of “tidying” it up. In “Against Cleaning,” Muñoz and Rawson make a strong case that humanists need to resist the urge to KonMari our data, or to simply adopt data cleaning methods wholesale from the sciences and social sciences, and instead consider the ramifications of stripping diversity and nuance out of data, particularly in these early stages of DH work when methods and norms are still being established.
To unpack this idea, Rawson and Muñoz look to Anna Tsing’s nonscalability theory, which resists the notion that everything is scalable. Rather, scalability is a quality that some things have and some things don’t. The example used in the article is data from NYPL’s What’s on the Menu? project, which contains both scalable and unscalable elements. In their project, Curating Menus, Rawson and Muñoz aim to preserve data diversity and create scalability through the use of indexes for ingredients, cooking techniques, and meal structures. Norms and rules can be applied that allow “2 eggs and bacon” and “bacon and two eggs” to be conceptually bound while preserving meaningful difference between items that are truly distinct. This article is a refreshing reminder to the DH community that as we adopt and adapt methods from other disciplines, humanistic inquiry is still the goal, and we should be wary of privileging cleanliness over meaning.
Coltrain, J., & Ramsay, S. (2019). Can Video Games Be Humanities Scholarship? In Debates in the Digital Humanities 2019. University of Minnesota Press.
One more from the new Debates in the Digital Humanities 2019! I have a confession that my personal interest in video games is virtually nil. As a text person through and through, it’s just not my preferred genre. However, it just so happens that the University of Utah, where I work, is home to the EAE (Entertainment, Arts, and Engineering) program, one of the top video game design programs in the country. My library even has an IMLS-funded project for archiving EAE theses, which are video games fraught with archival challenges like proprietary software, complicated authorship, and diverse file formats. So I feel as though the question of games as humanistic scholarship is relevant to my institution and my work in Digital Matters and is, therefore, a topic with which I should be familiar.
Coltrain and Ramsay acknowledge upfront that games are not, in most humanities departments, considered scholarship. But can they be? What are the necessary elements for something to be considered scholarly? If games are to cross the boundary from entertainment to scholarship, what new conventions would need to be developed for things like attribution? Do games even have the necessary features for analysis and interpretation? Ultimately, the authors make a compelling case for what a scholarly game might look like and for the potential of scholarly games. As with other forms of digital scholarship, anticipated obstacles include adjusting the expectations of promotion and tenure committees and measuring the impact of video game scholarship.
It’s probably no surprise to dh+lib Review readers that much of the “digital” work happening in libraries is grant-funded, contingent, and precarious. While there is certainly a place for temporary library labor — e.g., learning opportunities for undergraduate and graduate students — too much of the Library, Archive, and Museum (LAM) labor force is entering and staying in the profession in contingent positions with little in the way of benefits, salary, and stability. The DLF Working Group on Labor in Digital Libraries, Archives, and Museums recently released a draft of its eagerly anticipated (in my Twitterverse) white paper on contingent and grant-funded labor. The white paper presents the outcomes of the first phase of Collective Responsibility, an IMLS project that seeks to understand the experience of workers in contingent positions, and it is required reading for anyone in the digital library world.
The rise of adjunct labor and the shift to a gig economy over the past several decades have conditioned many of us to think that short-term, grant-funded positions are a normal or even enviable way to enter the workforce. My personal experience mirrors this ideal. After graduating from library school, my first professional role was funded by a one-year subgrant from the Digital Public Library of America. While there was certainly a period of insecurity, it was an amazing opportunity and a springboard to a tenure-track position. While this narrative is held up as the norm, the reality is that most contingent workers go on to more contingent work, half of contingent laborers make less than $40,000/year, and there is a startling lack of attention to the professional and personal futures of contingent workers.
The white paper sheds light on the ways contingent labor disproportionately affects women and racial minorities and contributes to the overwhelming whiteness of our profession. It also highlights the disparity between the prestige and funding that grants bring to institutions and PIs and the few benefits that trickle down to the contingent workers fulfilling the work of the grant. If you are curious what institutions, funders, and workers can do to reverse this trend, read this excellent report and share it broadly with the decision-makers in your institution.
Da, N. Z. (2019). The Computational Case against Computational Literary Studies. Critical Inquiry, 45(3), 601–639. https://doi.org/10.1086/702594
The last reading I’ll put forward from my summer is “The Computational Case against Computational Literary Studies” by Nan Z. Da. As a librarian who has lightly dabbled in text mining and topic modeling, I was a bit chagrined when I read her scathing but compelling critique. Da is clear at the outset that the article is not a critique of DH writ large, just of one prominent strand of DH, computational literary analysis, which she defines as “running computer programs on large (or usually not so large) corpora of literary texts to yield quantitative results which are then mapped, graphed, and tested for statistical significance and used to make arguments about literature or literary history.”
Da asserts that computational literary analysis provides little explanatory power and shores up weak or inconclusive results with traditional literary criticism. She claims that the small amount of computing power required for computational literary analysis (with the singular exception of large-scale digitization efforts) cannot justify the existence of literary labs or the disproportionate funding directed toward the field. The paper critiques the results of various high-profile CLS studies and divides them into two categories: papers that produce no statistically meaningful results and papers that produce results but get them wrong. The problem is not merely that the field isn’t mature enough or that methods are still being developed, but that the nature of the data does not lend itself to computation. According to Da, “Word frequencies and the measurement of their differences over time or between works are asked to do an enormous amount of work, standing in for vastly different things.” She concludes her argument by saying, “It may be the case that computational textual analysis has a threshold of optimal utility, and literature—in particular, reading literature well—is that cut-off point.” This compelling piece reminded me of another excellent reading on the nature of humanities data, Miriam Posner’s “Humanities Data: A Necessary Contradiction.”
Happy reading, and stay tuned for next week’s final installment of our “What Are You Reading This Summer?” series.