Note: As the dh+lib Review editors work behind the scenes this summer, we have invited a few members of our community to step in as guest editors and share with us what they are reading and why the dh+lib audience might want to read it too. This week, we hear from Benedikt Kroll, a cultural anthropologist who works as a product developer at the Center for Electronic Publishing at the Bavarian State Library in Munich, Germany.
Project Xanadu: Sketches of an alternative web
Nelson, Theodor Holm. Literary Machines: The Report On and Of Project Xanadu. Sausalito: Mindful Press, 1992.
To be honest, I’m not usually excited when picking up my book orders at the library. But when this book arrived as an interlibrary loan, things were different: I was about to read the documentation of Project Xanadu, one of the first hypertext projects, predating the web as we know it today. I had stumbled across the reference in a footnote of a more recent article on digital quotation practices, and had been trying to find out more ever since. When I removed the rubber band around the book and opened the cover, it immediately fell apart. This allowed me a first overview while sorting the pages back in: Xanadu, as the basis for a digital media library, was meant to cover many special use cases, such as a payment method for using media from that library: “Each published document has a cash register” (page no. “5/13”). Although it would be perfectly fine to read the Project Xanadu documentation for nostalgic reasons alone, I’d like to propose this book as a helpful read for people interested in the publishing and library side of digital humanities. As we work in a digital world full of proprietary software, non-disclosure agreements, and electronic surveillance, it may be refreshing to take a few steps back and consider what people may have had in mind earlier, when laying the foundation of what is today the World Wide Web and the wiki paradigm known from sites like Wikipedia. In fact, Project Xanadu itself was never realized in all its complexity, but its ideas and concepts are said to have influenced both of the aforementioned technologies that still shape our daily digital lives.
Web links in terms of time and space
Shields, Rob. “Hypertext Links. The Ethic of the Index and Its Space-Time Effects.” In The World Wide Web and Contemporary Cultural Theory, edited by Andrew Herman and Thomas Swiss, 145-160. London: Routledge, 2000.
In my everyday work, the link (as in hyperlink, used in web sites) risks becoming a mere mechanical object: click here and you will be taken there. It takes a read like Shields’ reflections on the non-technical dimensions of the internet to keep one’s eyes open to the conceptual implications of the hyperlink. Metaphors playing with spatial or temporal aspects of online content are frequently used in everyday language; in that same moment, the perception of “imaginary worlds” (page 154) is created in the user. When designing, or even when actually programming, a web site or service, new parts of worlds are thus being created. In the “real world,” numerous laws regulate aspects of online content and activities. I appreciated the theoretical discussions presented in Shields’ text when tasked with making existing web platforms conform to new regulations, but also when researching and analysing existing places on the internet with regard to how they secure themselves against privacy and social engineering threats.
Right, wrong and in between: An in-depth look at bibliometric databases
Tüür-Fröhlich, Terje. The Non-trivial Effects of Trivial Errors in Scientific Communication and Evaluation. Schriften zur Informationswissenschaft 69. Glückstadt: Verlag Werner Hülsbusch, 2016.
Visibility of one’s written work—in the era of search engines based on big data, learning algorithms, and high-end OCR technology—may seem like a natural thing to occur. The contrary might even apply: one has to actively prevent being found, if so desired. However, this type of “fast food visibility,” as one might call it, comes with downsides. Aside from “intelligent” search engines, fields like the quantitative evaluation of scientific activity, such as publication metrics, strongly depend on correct data being provided. In her recent dissertation, Tüür-Fröhlich shows how faulty information makes its way into bibliometric databases and what the consequences of these unfortunately not-too-rare events can be for the authors, journals, or institutions involved. Of special interest to the digital humanities community, she points out how languages, alphabets, and subject-specific publication traditions contradict the idea of a quantitative, relatively unbiased analysis of academic activity. The study offers insights relevant to a broad range of publication data analysis. Remarkably, Tüür-Fröhlich also points out how certain processes and rules applied by bibliometric databases not only lag behind the possibilities of digital computing, but also actively refrain from opening up to more flexibility.
The mobility of source code snippets in digital humanities projects
An, Le, et al. “Stack Overflow: A Code Laundering Platform?” In Proceedings of the 24th IEEE International Conference on Software Analysis, Evolution, and Reengineering (SANER), 2017. https://arxiv.org/abs/1703.03897v1
Do you program? And if yes, have you ever used a question-and-answer website like Stack Overflow to find a hint on how to solve a specific problem? Especially those of us who do not program regularly, or only do so part-time, might find ourselves looking up programming solutions more often – which is fine, but these days, licenses are in place to formalize the rules of knowledge exchange and the reuse of information. The paper by Le An et al. presents a range of quantitative inquiries into the use of source code snippets from Stack Overflow, regarding both the provenance and reuse of code. I kept this article in mind because it highlights an interesting layer of the technical work involved in many digital humanities projects: How does the circulation of source code work? What pathways are there, and how do people think about code at the size of a snippet? A related question is, of course, the inquiry into the uniqueness of source code, as undertaken by Mark Gabel and Zhendong Su. I find these readings inspiring to look more closely at source code as a tool at our fingertips, and to understand the specifics of those fingertips as belonging mostly or partly to academics who frequently are not full-blooded programmers.