RECOMMENDED: Training Information Professionals in the Digital Humanities

“Training Information Professionals in the Digital Humanities: An Analysis of DH Courses in LIS Education,” by Chris Alen Sula (Pratt Institute) and Claudia Berger (Sarah Lawrence College), provides a look at DH courses offered in LIS programs in light of the growth DH has seen since 2014, when it was identified as one of the top trends in academic libraries.

From the abstract:

The digital humanities (DH) remain a growing area of interest among researchers and a locus of new positions within libraries, especially academic libraries, as well as archives, museums, and cultural heritage organizations. In response to this demand, many programs that train information professionals have developed specific curricula around DH. This paper analyzes courses offered within two overlapping contexts: American Library Association (ALA) accredited programs and iSchools. In addition to documenting the scope and extent of DH courses in these settings, we also analyze their contents, relating our findings to previous research, including analysis of job ads and interviews with professionals.

Data were collected in Spring 2020 from institutional course catalogs and program webpages, and syllabi and course descriptions were obtained for 69 percent of the courses identified. The researchers discuss their findings with a focus on course offerings, course descriptions and key concepts, learning outcomes, technologies, and sources.

RECOMMENDED: DHQ Issue 17.2

The most current issue of Digital Humanities Quarterly (DHQ), issue 17.2, focuses on Critical Code Studies and Tools Criticism, which the editors define as “the application of the hermeneutics of the humanities to the interpretation of the extra-functional significance of computer source code. ‘Extra’ here does not mean ‘outside of’ or ‘apart from’ but instead it refers to a significance that is ‘growing out of’ an understanding of the functioning of the code.” In other words, the articles in this issue seek to understand code as a text which humanists can interpret, rather than just a means to a computational end.
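To make the notion of extra-functional significance concrete, consider a small, contrived Python sketch (not drawn from the issue): two functions that compute exactly the same thing but whose names, parameters, and comments invite very different readings, which is the sort of difference a critical code studies analysis attends to.

    # Hypothetical example for illustration; both functions behave identically.
    # A critical code studies reading looks past behavior to naming, framing,
    # and the assumptions embedded in comments.

    def flag_suspects(scores, cutoff=0.5):
        # Frames people as "suspects" and treats the cutoff as self-evident.
        return [i for i, s in enumerate(scores) if s > cutoff]

    def list_items_for_human_review(scores, threshold=0.5):
        # Same computation, but the name defers judgment to a human reviewer
        # and "threshold" signals that the value is a policy choice.
        return [i for i, s in enumerate(scores) if s > threshold]

    # The outputs are identical; the interpretive interest lies in everything else.
    assert flag_suspects([0.2, 0.9]) == list_items_for_human_review([0.2, 0.9])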

The issue includes articles on topics such as Close Code Readings, Code Legibility and Critical AI, and Code Languages and Linguistics, as well as Tools Criticism. The editors describe these articles in three groupings:

“In addition to demonstrating established methods and best practices, scholars in this issue offer new and nuanced approaches to a wide range of code objects as well as developing new approaches, expanding the realm of what can be analyzed through critical code studies — accompanied by in-depth readings performed by top scholars in the field. This first issue presents three groupings of articles: 1) exemplary close readings of code, 2) new directions in critical code studies (such as code legibility and Critical AI), and 3) new work in programming languages and linguistics (including esoteric programming languages and indigenous programming languages).”

RECOMMENDED: Reviews in Digital Humanities, Vol. 4.4

The latest issue of Reviews in Digital Humanities (vol. 4, no. 4, April 2023) has been released, featuring projects sourced from the journal’s open submission process. The Editors’ Note highlights the role reviewers play in the journal and the editors’ philosophy towards review. From the piece:

We’ve been fortunate that all the reviewers we’ve worked with have approached the projects they reviewed with kindness and generosity. We say that “Reviewer 2” has no place here, and we’ve actually never had anyone act like Reviewer 2 when working with us.

Part of a supportive review culture, however, is ensuring that project directors who submit their work to us receive constructive feedback. Most of our mentoring with reviewers is focused on helping them provide constructive criticism.

RECOMMENDED: Diversity of Digital Humanities in IJHAC: Exemplary Publications, 2012-2022 Virtual Issue

The International Journal of Humanities and Arts Computing: A Journal of Digital Humanities has published a virtual special issue, “Diversity of Digital Humanities in IJHAC: Exemplary Publications, 2012-2022,” which makes available selected pieces from the last decade of the journal’s publications.

From the introduction:

IJHAC: A Journal of Digital Humanities has been published since 1989, initially under the name History and Computing. It is one of the longest running journals in digital humanities. Recently, the journal broadened its thematic scope and geographical impact. Our Editorial Board comes from 14 different countries, from all the continents, with experience in topics as diverse as history, literature, linguistics, environmental studies, urban studies, Asian Studies, Native American and Indigenous Studies, African Studies, gender studies, cultural heritage, and archaeology. The range of methodological expertise is also wide, with text analysis, spatial analysis, network analysis, databases, digital infrastructures, big data, digital pedagogy, digital curation, digital archives, and digital storytelling being prominent.

With this virtual special issue, available for download, we want to show the global reach of the journal and give greater visibility to the diversity of digital humanities approaches that we have been publishing in the last decade. The articles presented here range from Linked Open Data to 3D reconstruction of historical sites, and include a critical review about Artificial Intelligence, an important contribution at a time when everyone is chatting about this topic. In addition to the emerging technologies that have captured the attention of our authors, the journal has a long commitment to spatial analysis methods, with examples that range from the spatial representation of the Holocaust to the introduction of disability studies in the classroom. This special issue also highlights digital research infrastructures, historical data repositories, and concerns about web archiving. Moreover, methodologies now consolidated in the digital humanities, such as xml annotation, network analysis, and crowdsourcing, are represented in several studies regarding music, movies, and literature.

We hope that this special issue will help you engage with our community of digital humanities authors. We look forward to continuing to publish your cutting-edge research in the near future. Enjoy!

Of particular interest to dh+lib readers may be pieces such as “Lost in the Infinite Archive: The Promise and Pitfalls of Web Archives,” by Ian Milligan (Volume 10, Issue 1, March 2016), “The Missing Voice: Archivists and Infrastructures for Humanities Research” by Reto Speck and Petra Links (Volume 7, Issue 1-2, October 2013), and “Crowdsourcing Bentham: Beyond the Traditional Boundaries of Academic History” by Tim Causer and Melissa Terras (Volume 8, Issue 1, April 2014).

RECOMMENDED: Ithaka S+R Brief: “Are the Humanities Ready for Data Sharing?”

An issue brief from Ithaka S+R, “Are the Humanities Ready for Data Sharing?”, reports on the paucity of data sharing practices in the humanities. It arrives on the heels of the “Nelson Memo,” which states that publicly funded research (including research funded by the NEH) must deposit its datasets into publicly accessible repositories. The brief introduces its relevance to digital humanists:

While the NEH funds only a tiny percentage of research and publications in the humanities, its inclusion in the Nelson memo and in the “year of open science” is clear evidence that humanists—who have largely existed on the margins of major trends towards mandatory data sharing that are transforming research practices and scholarly communication in other fields—must now consider their place in this policy landscape. It is not yet clear how the NEH will define data for the purposes of compliance with the Nelson memo, but the requirement that they do so should stimulate conversation about data sharing in the humanities. When should the evidence humanists collect be considered data? How might humanists adopt STEM-oriented norms around data sharing, and what might humanists bring to the table that would help other fields improve their data sharing practices?

The brief includes interviews with four scholars involved in DH projects, who offer insights on planning ahead for data sharing, how and where they made their data available, and how they addressed barriers to this practice.

RECOMMENDED: “DH Eh? A Survey of Digital Humanities Courses in Canadian LIS Education”

Marcela Y. Isuster (McGill University) and Donna Langille (University of British Columbia – Okanagan) analyzed course offerings in eight Library and Information Science programs at Canadian universities in the March 2023 issue of College & Research Libraries. Their article finds that “[w]hile all institutions offer at least a few elective courses on data management topics such as metadata, digital libraries, data mining, data science, digital curation, and database design, DH-specific courses are less prevalent both in terms of how many institutions offer them and the number of courses they offer.” Additionally, while these courses introduce the technical skills of DH, skills such as project management and collaboration are less prevalent. The article provides a breakdown of the themes present in each DH course offered in these programs.

RECOMMENDED: Rethinking Data and Rebalancing Digital Power

The Ada Lovelace Institute released a new report, “Rethinking data and rebalancing digital power.” Developed by the Rethinking data working group, co-chaired by Diane Coyle (Bennett Professor of Public Policy, University of Cambridge) and Paul Nemitz (Principal Adviser on Justice Policy, European Commission and visiting Professor of Law at College of Europe), the report sought to “imagine rules and institutions that can shift power over data and make it benefit people and society.”

The working group co-chairs note that:

We began this work in 2020, only a few months into the pandemic, at a time when public discourse was immersed in discussions about how technologies – like contact tracing apps – could be harnessed to help address this urgent and unprecedented global health crisis.

The potential power of data to affect positive change – to underpin public health policy, to support isolation, to assess infection risk – was perhaps more immediate than at any other time in our lives. At the same time, concerns such as data injustice and privacy remained.

It was in this climate that our working group sought to explore the relationship people have with data and technology, and to look towards a positive future that would centre governance, regulation and use of data on the needs of people and society, and contest the increasingly entrenched systems of digital power.

The working group discussions centred on questions about power over both data infrastructures, and over data itself. Where does power reside in the digital ecosystem, and what are the sources of this power? What are the most promising approaches and interventions that might distribute power more widely, and what might that rebalancing accomplish?

The report highlights four intersecting interventions:

  1. Transforming infrastructure into open and interoperable ecosystems.
  2. Reclaiming control of data from dominant companies.
  3. Rebalancing the centres of power with new (non-commercial) institutions.
  4. Ensuring public participation as an essential component of technology policymaking.

The full report is available for download.


RECOMMENDED: Tech Won’t Save Us: Don’t Fall for the AI Hype with Timnit Gebru

Explore the latest episode of Tech Won’t Save Us, a podcast that “examine[s] the tech industry, its big promises, and the people behind them. Tech Won’t Save Us challenges the notion that tech alone can drive our world forward by showing that separating tech from politics has consequences for us all, especially the most vulnerable.”

In this recent episode, “Don’t Fall for the AI Hype,” host Paris Marx interviews Timnit Gebru, founder and executive director of the Distributed AI Research Institute (DAIR), about artificial intelligence and the buzz around ChatGPT. Their conversation focuses on “the misleading framings of artificial intelligence, her experience of getting fired by Google in a very public way, and why we need to avoid getting distracted by all the hype around ChatGPT and AI image tools.”

This episode will likely be of interest to digital humanities and library folks who are exploring the potential impacts — positive and negative — of ChatGPT and other AI tools for digital pedagogy.


RECOMMENDED: Data Primer: Making Digital Humanities Research Data Public

Published earlier in 2022, the collaboratively authored and edited Data Primer: Making Digital Humanities Research Data Public (Felicity Tayler, Marjorie Mitchell, Chantal Ripp, and Pascale Dangoisse) provides an overview of current practices in data management and curation for digital humanities practitioners.

Book Description

Data management and curation are important processes for digital humanists: without proper planning and management, the value of the data as well as the labour involved in researching, collecting, and analyzing the data, could be lost!

Data Primer: Making Digital Humanities Research Data Public helps integrate best practices when writing a Data Management Plan for research funding applications; it will also improve data curation strategies for collecting, managing, and publishing digital files and formats alongside traditional textual scholarship.

The primer offers a Data Flow and Discovery Model that “helps digital humanists assess and plan their data curation and management needs as an iterative process that can be conducted throughout the life of their research project.” It covers a broad range of topics, including data management, consent and intellectual property, copyright and (open) licenses, data collection and ownership, working with data, transforming data management into scholarly/creative work, and publishing/archiving your data.
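To ground what these topics look like in practice, here is a minimal, hypothetical Python sketch (not taken from the primer) of packaging a small dataset for public deposit with a README and a machine-readable metadata record; the file names, fields, and license are illustrative assumptions, and a real deposit would follow the target repository’s own schema.

    # Hypothetical example: bundle a small CSV dataset with a README and
    # machine-readable metadata before depositing it in a public repository.
    import csv
    import json
    from pathlib import Path

    deposit = Path("deposit")
    deposit.mkdir(exist_ok=True)

    # Illustrative research data.
    rows = [{"letter_id": "001", "year": 1854, "sender": "A. Smith"}]
    with open(deposit / "letters.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["letter_id", "year", "sender"])
        writer.writeheader()
        writer.writerows(rows)

    # Minimal descriptive metadata; real deposits use the repository's schema.
    metadata = {
        "title": "Correspondence dataset (sample)",
        "creators": ["Researcher, A."],
        "description": "Transcribed letters with basic descriptive fields.",
        "license": "CC-BY-4.0",
        "date_created": "2022-01-01",
    }
    (deposit / "metadata.json").write_text(json.dumps(metadata, indent=2))
    (deposit / "README.txt").write_text(
        "Describes the files, how the data were collected, and how to cite them.\n"
    )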

The Primer is available open-access through eCampusOntario’s Pressbooks platform.

RECOMMENDED: 3D Data Creation to Curation: Building Community Standards for 3D Data Preservation

3D Data Creation to Curation: Community Standards for 3D Data Preservation, edited by Jennifer Moore (Washington University), Adam Rountrey (University of Michigan), and Hannah Scates Kettler (Iowa State University), provides valuable information on various aspects of handling 3D data. Topics addressed include best practices for preservation, management and storage, metadata requirements, copyright and legal issues, and access. From the introduction:

There has been rapid growth in the production and usage of 3D data over the last decade, yet the preservation of these data has lagged behind to the detriment of scholarship and innovation. While the need for digital 3D data preservation is widely recognized, the ongoing development of 3D data creation processes and the evolving usage of content still present many open-ended questions about how to ensure the stability and durability of this data type. Creators, curators, and users of 3D datasets are disadvantaged by the lack of shared guidelines, practices, and standards. This volume, which includes surveys of current practices, recommendations for implementation of standards, and identification of areas in which further development is required, is a result of the efforts of a large practicing community coming together under the Community Standards for 3D Data Preservation (CS3DP) initiative to move toward establishment of standards. The goal of this work is to identify the broad, shared preservation needs of the whole community, and it is viewed as essential to use a collaborative approach for standards development that promotes individual investment and broad adoption. The authorship of the chapters recognizes those who worked to discuss particular aspects of preservation in detail, but throughout the process of development, the entire community has been engaged, shaping the content to meet needs across a diverse base of stakeholders.
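As a rough illustration of one practice the volume addresses, recording preservation metadata and fixity information alongside a 3D file, here is a short, hypothetical Python sketch; the field names and values are assumptions made for the example and are not the CS3DP recommendations themselves.

    # Hypothetical example: write a metadata sidecar with a checksum for a 3D
    # model so provenance and fixity information travel with the file.
    import hashlib
    import json
    from pathlib import Path

    model_path = Path("artifact_scan.obj")
    model_path.write_text("# placeholder OBJ content\n")  # stand-in for a real scan

    sidecar = {
        "file": model_path.name,
        "format": "OBJ",
        "units": "millimeters",
        "capture_method": "structured-light scan",
        "created": "2022-06-01",
        "rights": "CC-BY-4.0",
        "sha256": hashlib.sha256(model_path.read_bytes()).hexdigest(),
    }
    Path("artifact_scan.metadata.json").write_text(json.dumps(sidecar, indent=2))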

This book shares important community knowledge and expertise for anyone interested in or working with 3D data curation lifecycles. It is also available as an open access edition at https://bit.ly/ACRL3Ddata.

RECOMMENDED: Ian Linkletter on Gettin’ Air with Terry Greene

Earlier this month, Gettin’ Air with Terry Greene released a podcast episode featuring Ian Linkletter, an educational technology librarian who is currently being sued by Proctorio, a surveillance technology company. The episode addresses his transition to librarianship from educational technology, and he emphasizes the need for librarians to critically engage with the tools and technologies they use, support, and promote, which increasingly capitalize on students’ data without their permission. Proctorio was, and continues to be, used at colleges and universities to administer tests online, unethically tracking students’ eye movements to identify potential cheating; this practice has caused extreme mental and physical health reactions for students, including weeping from stress and urinating at their desks because they could not look away from their screens. The software was quickly adopted during the COVID-19 pandemic, and in the initial rush, “We weren’t getting full ethical approval, full privacy approval.” From the transcript:

“I think that there’s a disconnect right now between people that are entering the field and what the field actually is like at a lot of the larger institutions where you’re not necessarily just supporting educational technology, but you might be supporting more corporate technology. You might be supporting more surveillance technology than educational technology. And I guess, I guess, my advice to people would be that your voice really matters.”

Digital humanities librarianship regularly requires evaluating new tools and software, and in this case Linkletter focuses on the ethical implications and real consequences of surveillance software. His case demonstrates how important it is to evaluate the tools that aid digital humanities research, teaching, and pedagogy, as technology vendors carelessly and increasingly capitalize on user data to the detriment of individuals and society.