POST: MITH Graphs

Ed Summers (University of Maryland) has published a post on his blog about his collaborations with Matt Kirschenbaum’s Critical Topics in Digital Studies course. The goal of the collaboration between Summers and Kirschenbaum is to “provide a gentle introduction to the use of network analysis, aka graphs, in the digital humanities, while providing the students with some hands on experience using some tools.”

Summers describes working with the demonstration visualizations he created using MITH’s Research Explorer data, and shares some of his inspiration for the class project (such as Miriam Posner’s Cytoscape tutorials, and a recent conversation with Posner, Thomas Padilla, and Scott Weingart on network visualization pedagogy).

Data-Driven Art History: Framing, Adapting, Documenting

This is the first post in Data Praxis, a new series edited by Thomas Padilla.

Matthew Lincoln is a PhD candidate in Art History at the University of Maryland, College Park. Matthew is interested in the potential for computer-aided analysis of cultural datasets to help model long-term artistic trends in iconography, art markets, and social relations between artists in the early modern period. Last summer, Matthew held a fellowship at the Harvard MetaLab workshop Beautiful Data, and presented research at the Alliance for Digital Humanities Organizations’ annual international conference, DH2015, in Sydney, where his paper, “Modeling the (Inter)national Printmaking Networks of Early Modern Europe,” was a finalist for the ADHO Paul Fortier Prize.

Thomas: I’m always interested in the hows and whys of folks getting involved in digitally inflected research. Can you tell us a bit about yourself and describe what motivated you to take a path that brings Art History and digital research together?

Matthew: I suppose my digital art history “origin story” is one of a series of coincidences. I’ve always been interested in programming, and, as an undergraduate, even took a few computer science courses while I was majoring in art history at Williams College. But I’d never seriously considered how to apply those digital skills to historical research while at Williams, nor did I start my graduate work at the University of Maryland with any intention of doing computationally-aided art history there, either. However, as it happened, the same generous donation that made my attendance at UMD possible (a Smith Doctoral Fellowship in Northern European Art), had also funded the Michelle Smith Collaboratory for Visual Culture, an innovative physical space in the Department of Art History & Archaeology that was intended to serve as a focal point for experimenting with new digital means for sharing ideas and research. I was already several years into my coursework before I took a semester-long graduate assistantship in the Collaboratory, where I was given remarkable leeway to explore how the so-called “digital humanities” might inflect research in art history. During that semester, I developed a little toy map generated from part of Albrecht Dürer’s diary of his trip to the Netherlands in 1520-1521. But I also had my eyes opened to the vibrant discourse about digital research in the humanities that had, up to that point, been totally outside my field of view. What is more, data-driven approaches held particular promise for my own corner of art historical research on early modern etchings and engravings. Because of the volume of surviving impressions from this period, a lot of scholarship on printmakers and print publishers comprises a wealth of quantitative description and basic cataloging. My dissertation seeks to mine this existing work for larger synthetic conclusions about print production practices in the Dutch golden age.

Thomas: Over the summer you presented a paper at DH2015 that would become a finalist for the ADHO Paul Fortier Prize, “Modeling the (Inter)national Printmaking Networks of Early Modern Europe.” What were the primary research questions in the paper, and what methods and tools (digital and otherwise) did you employ to pursue those questions?

Matthew: I’m interested in how etchings and engravings can serve as an index of past artistic and professional relationships. Most of these objects are the result of many hands’ work: an artist who produced a drawn or painted design, a platecutter who rendered the image onto a printing plate, and often a publisher who coordinated this effort and printed impressions. Seen in this light, the extensive print holdings in modern-day collections offer an interesting opportunity to see what kinds of structures emerge from all of this collaboration. In this paper, I wanted to examine how artists tended to connect (or not) across national boundaries. In the history of seventeenth-century Dutch art in particular, there has been a lot of well-deserved attention to the influence and prestige of Dutch painters traveling abroad. But what about printmakers? Did Dutch printmakers tend to connect to fellow Dutch artists more frequently, or did they prefer international collaborators? And how might this ratio have changed over time? It’s easy to intuitively argue either side of this question based on a basic understanding of Dutch history at the time, so this was a good opportunity to introduce some empirical observations and formal measurement to the discussion. In this vein, I’d argue one of my most crucial methods was doing a good old-fashioned literature review in order to properly understand the stakes of the question that I wanted to operationalize.

from DH2015 paper, “Modeling the (Inter)national Printmaking Networks of Early Modern Europe”

I drew on two major datasets for this paper: the collections data of the British Museum, and that of the Rijksmuseum. The British Museum has released their collections data as Linked Open Data, which meant that I needed to invest a considerable amount of time learning SPARQL (the query language for LOD databases) and how to build my own mirror of their datastore in Apache Fuseki, as my queries were too large to submit to their live server. On the other hand, once I had mastered the basic infrastructure of this graph database, it was easy to produce tables from these data exactly suited to the analyses I wanted to do. The Rijksmuseum offers a JSON API service, allowing you to download one detailed object record at a time. The learning curve for understanding the Rijksmuseum’s data model was lower than that for the British Museum’s LOD. However, I had to battle many more technical issues, from building Bash scripts to laboriously scrape every object from the Rijksmuseum’s cantankerous API, to figuring out how to break out just the information I needed from the hierarchical JSON that I got in return (jq was a fantastic utility for doing this).
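
To give a concrete sense of this kind of extraction step, here is a minimal Python sketch standing in for the Bash and jq pipeline Lincoln describes; the endpoint, parameters, and field names are placeholders rather than the Rijksmuseum’s actual API, and the point is only the shape of the task: page through a JSON service and flatten each nested record into a table row.

```python
# A minimal sketch (not the Rijksmuseum's actual API): page through a
# hypothetical JSON endpoint and pull a few fields out of each nested record.
import csv
import json
from urllib.request import urlopen

BASE_URL = "https://example.org/api/objects"  # placeholder endpoint

def fetch_page(page, per_page=100):
    """Download one page of object records as parsed JSON."""
    with urlopen(f"{BASE_URL}?page={page}&per_page={per_page}") as response:
        return json.load(response)

def flatten(record):
    """Keep only the fields needed for analysis, much as a jq filter would."""
    return {
        "object_id": record.get("id"),
        "title": record.get("title"),
        # dig into nested structures for the first listed maker, if any
        "maker": (record.get("makers") or [{}])[0].get("name"),
    }

with open("objects.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=["object_id", "title", "maker"])
    writer.writeheader()
    page = 1
    while True:
        batch = fetch_page(page)
        if not batch.get("results"):
            break  # no more records to download
        for record in batch["results"]:
            writer.writerow(flatten(record))
        page += 1
```

A jq filter expresses the same flattening far more tersely; either way, the output is a plain table ready for analysis.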

Because I was more interested in looking at particular metrics of these networks than in producing “spaghetti monster” visualizations of the kind you get from a program like Gephi, I turned to the statistical programming language R to perform the actual quantitative analyses. R has been fantastic for manipulating and structuring huge tables of data, running network analysis algorithms (or just about any other algorithm you’d like to run), and then producing publication-quality visualizations. Because everything is scripted, it was easy to document my work and iterate through several different versions of an analysis. In fact, you can download the data and scripts for my DH2015 paper yourself and reproduce every single visualization.
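
The analysis itself was done in R, but for readers who want to see what “measuring a network rather than drawing it” can look like, the following is a minimal sketch in Python with networkx (not the author’s code), using invented artists and collaborations; the attribute assortativity coefficient asks roughly the paper’s question, namely whether artists tend to collaborate within or across national lines.

```python
# A minimal sketch, not the author's R code: measure national "mixing" in a
# tiny, invented collaboration network instead of drawing it.
import networkx as nx

G = nx.Graph()

# invented artists tagged with a nationality attribute
artists = {
    "printmaker_A": "Dutch",
    "printmaker_B": "Dutch",
    "publisher_C": "Flemish",
    "designer_D": "Italian",
    "printmaker_E": "Dutch",
}
for name, nationality in artists.items():
    G.add_node(name, nationality=nationality)

# invented collaborations (e.g. designer, platecutter, and publisher on one print)
G.add_edges_from([
    ("printmaker_A", "publisher_C"),
    ("printmaker_B", "publisher_C"),
    ("designer_D", "printmaker_A"),
    ("printmaker_E", "printmaker_B"),
])

# Positive values mean artists mostly connect within their own nationality;
# negative values mean they mostly connect across national lines.
print(nx.attribute_assortativity_coefficient(G, "nationality"))

# Centrality metrics of the kind a "spaghetti monster" plot would obscure
print(nx.degree_centrality(G))
print(nx.betweenness_centrality(G))
```

R’s igraph package offers comparable measures; the point is simply that the output is a number to interpret rather than a picture.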

from DH2015 paper, “Modeling the (Inter)national Printmaking Networks of Early Modern Europe”

Thomas: Based on your comments and prior blog posts such as “Tidy (Art) Historical Data,” it seems that you put a great deal of care into thinking about how your data and research processes are documented and shared. Perhaps it’s a bit of a brusque way to ask, but what made you care? How did you learn how to care? Who did you learn from?

Matthew: I started caring because I saw smart people doing it. I still care because I experienced the practical benefits in a real way. Many of my DH role models put forward careful documentation of their work: Lincoln Mullen’s openly-accessible code, Miriam Posner’s bevy of public DH syllabi, or Caleb McDaniel’s lengthy “backwards survey course” reflection. Here were people doing really useful work, and I was directly benefitting from their openness – so that was absolutely something that I wanted to emulate. On the other side of it, I’ve also had to deal with anti-patterns in documentation. Because I work almost exclusively with data that other people have assembled, I’m painfully conscious of how much the lack of documentation, and/or the assumption that people will only ever use your data the same way that you did, can hinder productive re-use of data.

Now, to be honest, I am not sure if anyone else has directly benefitted yet from looking at my code and data. However, I’ve certainly benefitted from my own documentation! I have been revising an article in response to peer reviews. We all know what that timeline looks like: I “completed” (ha!) the data analysis almost a year ago, finalized and submitted the text with my co-author a month or so after that, then waited many more months before the reviews came back. In just the past month I’ve had to go back in and re-run everything with an updated dataset, clarify some of the analytical decisions made, and enhance several of the visualizations. And I didn’t need to rip my hair out, because all of my work is documented in simple code files, and I don’t have to try and reverse-engineer my own product without the original recipe. (I should note that the R programming community is great for this. It is filled with particularly vocal advocates for reproducible code, like knitr author Yihui Xie, who produce great tools for practicing what they preach.)

By writing documentation notes as I go, I’ve also become much better at explaining – in natural language – what I am doing computationally. This is crucial for any kind of quantitative work, but all the more so in humanities computing, where you can usually count on the fact that most of your audience will have no background in your methodology.

Thomas: Thinking on the digitally inflected research you’ve conducted to date, and the directions you seek to go in the future, what are the most significant challenges you anticipate you will encounter? Accessing data? Sharing your data? Venturing into new methodological terrain? Recognition of the work en route to tenure?

Matthew: I agree with Jacob Price’s assessment of data-driven methods in history: that, however promising, they present major challenges, both in the logistics of producing interoperable data and in producing interoperable scholarship: if the skills required to interpret and evaluate data-driven humanistic scholarship remain concentrated in a small corner of our respective fields, and never make it into, say, graduate methodology courses, then the long-term impact of that scholarship will also remain cloistered. One might argue this is surely a solvable problem… but I cite Price because he wrote that in 1969. I am excited to help other scholars implement these approaches in their own research (*cough*I’m available for hire!*cough*), but it is sobering to remember how enduring these problems have been.

Thomas: What recent research has inspired you?

Matthew: Ruth and Sebastian Ahnert’s recent article on English Protestant communities in the 1530s thoughtfully maps formal network concepts onto interesting disciplinary research questions – in their case, examining how Queen Mary I’s campaign to stifle evangelical organization failed to target the most structurally-important members of the dissident correspondence network. Also, I’ve found Ted Underwood’s and Jordan Sellers’ work on machine classification of literary standards to be one of the most fluently-written and compelling explanations of how predictive statistical tools can be used for hypothesis testing in the humanities.

Thomas: Whose data praxis would you like to learn more about?

Matthew: For all the work that I do with art history, I’ve actually done surprisingly little work directly with image data! There are some really interesting questions of stylistic history that I suspect could be informed by applying some fairly basic image processing techniques. I’d like to better understand methods for generating and managing image data and metadata (like color space information), from both the repository/museum perspective (how and why is it produced in the way it is?) as well as a computer vision perspective (how should that metadata be factored into analysis?).
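
Purely as an illustration of the “fairly basic image processing techniques” Lincoln mentions (and not drawn from his work), the sketch below uses Python and the Pillow library to read an image, note its color mode, check whether an ICC color profile is embedded, and compute a crude color summary; the filename is a placeholder.

```python
# A minimal, hypothetical sketch of basic image inspection with Pillow:
# check color mode and embedded ICC profile, then summarize average color.
from PIL import Image

def describe_image(path):
    with Image.open(path) as img:
        info = {
            "size": img.size,                      # (width, height) in pixels
            "mode": img.mode,                      # e.g. "RGB", "CMYK", "L"
            "has_icc_profile": "icc_profile" in img.info,
        }
        # Convert to RGB and compute the mean value of each channel as a
        # crude stand-in for more serious color analysis.
        rgb = img.convert("RGB")
        pixels = list(rgb.getdata())
        n = len(pixels)
        info["mean_rgb"] = tuple(sum(channel) / n for channel in zip(*pixels))
    return info

print(describe_image("some_print_scan.jpg"))  # placeholder filename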

 

This work is licensed under a Creative Commons Attribution 4.0 International License.

Pedal to the Metal: Our Year of DH

How did Virginia Commonwealth University librarians John Glover, Humanities Research Librarian, and Kristina Keogh, formerly the Visual Arts Research Librarian, build a DH initiative from the ground up? In this post, they detail their process for dreaming up, planning, developing, deploying, and evaluating Digital Pragmata over the course of its first year. 

Impetus

ALA Annual in 2012 featured the first meeting of the ACRL Digital Humanities Discussion Group and a preconference entitled “Digital Humanities in Theory and Practice: Tools and Methods for Librarians.” The former contributed to the creation of dh+lib, and the general interest in both demonstrated the demand for things DH-related within ALA. They also inspired the two of us to create Digital Pragmata, an ongoing digital arts and humanities initiative at Virginia Commonwealth University, based primarily at VCU Libraries, which kicked off with an event series. We did this within an academic year, with no formal structure to accommodate the work, no local past initiatives to draw on as examples, and only a minimal visible on-campus DA/DH community. Digital Pragmata has grown VCU Libraries’ DH profile on campus, reached hundreds of VCU faculty and students interested in digital scholarship, and paved the way for us to offer new kinds of outreach and support.

First Steps

Early in July of 2012, the two of us met to review our recent liaison activities and plans for the coming academic year. Not for the first time, we noted that we were continuing to encounter faculty and graduate students at VCU interested in the digital arts or digital humanities, whether in scholarly, pedagogical, or creative capacities, many of whom weren’t prepared to “do” DH, and who seemed to be looking for community.

Multiple developments relevant to the digital arts and humanities are moving forward at VCU, but no unit on campus is currently devoted solely to the digital arts or humanities. This is somewhat surprising, as Virginia Commonwealth University is a large urban research institution, with an FTE around 31,752 and various departments, programs, and interdisciplinary centers working in these areas–including top-ranking arts programs. On the other hand, as is often observed, libraries occupy a neutral ground, and finding the right blend of people, place, and resources takes time.

Based on what John had learned at ALA in Anaheim, he broached the subject of collaborating to create a digital arts and humanities initiative based out of VCU Libraries, and Kristina enthusiastically agreed. After a brief discussion, we decided that we wanted a real shot at creating something sustainable that would dovetail with library and university strategic goals: not just a workshop, lecture, or online presence, but a combination of all three, with growth potential. We decided provisionally, at John’s suggestion, to name it “Digital Pragmata,” reflecting the drive toward usefulness at the core of “more hack, less yack,” as well as the general concept of “digital things.”

Our interest in the project was strong, but we faced various potential hurdles. In our time at VCU, no liaison librarians had run, let alone started, a project on the scale we planned. Initiatives from our division, Public Services (since renamed “Research and Learning”), had not by and large previously been characterized by agile project management. We didn’t know how many people we would have to convince or collaborate with, or whom to seek out as partners. We had never attempted a project requiring substantial financial support from our library’s leadership. Perhaps our biggest hurdle was overcoming our own preconceived notions, both of what constituted feasible projects for librarians at our level and what kind of support we could expect from our institution.

The Landscape

As part of our initial planning process, we studied other institutions’ approaches. We learned, for instance, that the Institute for Advanced Study at the University of Minnesota has established Digital Humanities 2.0, a collaborative working group “to investigate and create ways of advancing artistic creation and scholarly research in the humanities by exploring digitization and Web 2.0 technologies.” We also looked at SUNY Buffalo’s Humanities Institute (HI) Research Workshops, which sponsors guest lectures and hosts presentations of research in progress by faculty and graduate students from diverse disciplines.

We were particularly interested in initiatives based out of university libraries. A good model is the Digital Arts & Humanities Lecture Series developed and hosted by the Brown University Library and the John Nicholas Brown Center for Public Humanities and Cultural Heritage. This series closely aligned with our own goals of bringing together faculty and students from different disciplines engaged in digital projects.

We also looked at developing projects in the digital humanities at VCU. Though there are a growing number of DH projects based in various departments, at our institution there has been no one central place or structure where scholars and students who work on digital arts and humanities projects can come together. VCU has, however, been working toward a number of initiatives that would offer likely partnerships if we were to successfully establish a DH initiative. These include the Institute for Contemporary Art (ICA) and the Center for Advanced Research in the Humanities, which is currently recruiting for a Director. In addition, at the time, VCU Libraries was in the process of recruiting a Head for the newly conceived Innovative Media Studio, which will become part of the new addition to the James Branch Cabell Library set to open in Fall 2015. In the meantime, the continuing lack of one (or any) zone of interaction for those interested in this type of activity was becoming a pressing issue.

Stakeholders and Speakers

While we were waiting for final approval from the Libraries’ Administration, we set up meetings with people and groups inside and outside the library in order to begin laying our groundwork. We knew there would be many moving parts, but getting buy-in on campus was important. Our first meetings were with two targeted units outside the library – the Office of Research and the Center for Teaching Excellence (CTE). By bringing these units on board as named co-sponsors, we knew we could – from the start – increase our network of contacts. Their connections would also offer another venue for promotion.

After we received final approval, we met with stakeholders inside the library, including other research and collections librarians and department heads from Special Collections & Archives to discuss Digital Pragmata. Our colleagues offered many suggestions for potential speakers and knowledge of relevant projects around the country. Our web presence would not have been possible without the work of Erin White and Joey Figaro, members of the web team from our Digital Technologies department. Finally, we contacted and met with likely faculty and department heads around campus to publicize the events and our reasons for starting the initiative.

The speaker list developed from names we gathered ourselves, as well as suggestions from others we spoke with during this initial process. We received one piece of advice that shifted our initial thinking about our first two panel sessions: outside speakers (i.e., non-VCU people), we were told, were more likely to elicit interest from faculty and students as we worked to establish Digital Pragmata. We decided to refocus our two panels to feature outside speakers, with VCU faculty acting as moderators for each event. Based on this advice, we felt the outside speakers would garner interest in the concept, so that we could focus more on VCU projects down the road.

In the third week of December, we met with our supervisor, Bettina Peacemaker, and the Associate University Librarian for Public Services, Dennis Clark, to discuss Digital Pragmata. Administrative Council had met, discussed, and endorsed our proposal for two panels and a digital projects funding workshop, all of which would be designed to appeal to faculty and graduate students across the range of arts and humanities disciplines. We were given the go-ahead to begin planning in earnest, empowered to work with those colleagues we thought could contribute time or expertise, with the knowledge that we had financial support to make the event a success.

There was to be no task force, working group, or standing committee. In addition to this vote of confidence, we were simply asked to check in when we had questions or there were developments (e.g. speakers confirmed). This was simultaneously liberating and nerve-wracking: we had been entrusted with a high-profile project, the success or failure of which could affect the library and its perception on campus, students and faculty in our disciplines, and our own work life and careers.

Into the Weeds

Figure 1 - Digital Pragmata Mailer

Our initial proposed budget was $600-$800. This, we argued, would be sufficient to cover light refreshments as well as travel, parking, and lunch to bring one speaker to each event from outside the Richmond metropolitan area. As our proposal’s parameters expanded, however, we were lucky to be approved for a much larger and more flexible budget, allowing us to offer honoraria for six outside speakers, travel and hotel accommodation for our out-of-town speakers, lunches for the speakers and university and library administration, receptions following two events, and gift bags for our speakers and moderators. Our process was heavily influenced by Gregory Kimbrell, VCU Libraries’ Membership and Events Coordinator, who both guided us and did or oversaw much of the events coordination work himself.

We spent a substantial amount of time trying to determine how best to publicize Digital Pragmata. One of the most important meetings in January was with our Director of Communication and Public Relations, Sue Robinson, with whom we discussed our overall publicity strategy and online presence. She helped us to think more effectively about our message and audience, and to target our promotion.

Facebook Page

Sue, in turn, worked with a graphic designer on design concepts, one of which eventually led to the image that currently illustrates print materials like posters and mailers, and is the header image for Digital Pragmata’s blog, Facebook page, and Twitter feed (hashtag #digprag). Throughout the spring, colleagues, students, and faculty spoke effusively about the image’s eye-catching nature.

Showtime, and After

The March 26 and April 25 events each unfolded in similar fashion, on similar schedules. Library facilities and events colleagues ensured that our location, a multipurpose room seating around 65 people, was clean, with chairs set. Colleagues in library systems helped ensure that our technology was ready, and (see below) were indispensable when a travel debacle prevented one panelist from presenting in person. Colleagues from library events and administration helped to direct traffic, check attendees in, and keep everything running smoothly.

Our first panel, on March 26, had 49 attendees and focused on the “front ends” of digital projects, with speakers including Ed Ayers of the University of Richmond, Amanda French of the Roy Rosenzweig Center for History and New Media, and Emily Smith of 1708 Gallery. Each represented very different aspects of “front ends,” including Ed’s award-winning work creating many high-profile DH projects over the years, Amanda’s introduction to tools for DH newcomers, and Emily’s experience with large-scale art projects involving image projection. The panel began with comments from multiple people, starting with John Ulmschneider, VCU’s University Librarian, and it ended with a Q&A session led by moderator Roy McKelvey, of VCU’s Department of Graphic Design.

Our second panel, on April 25, had 54 attendees and focused on the “back ends” of digital projects, with speakers including Ben Fino-Radin of Rhizome and MoMA, Francesca Fiorani of the University of Virginia, and Mike Poston of the Folger Shakespeare Library. These speakers took different approaches to the topic, including Ben’s work recreating and emulating defunct BBSes, Francesca’s process in building Leonardo and His Treatise on Painting, and Mike’s hands-on experience creating Folger Digital Texts. The panel began with comments from several people, starting with Dennis Clark, our administrator and advocate, and it ended with a Q&A session led by moderator Joshua Eckhardt, of VCU’s Department of English.

The funding workshop, held on May 2, had 20 attendees and ran somewhat differently. We chose not to film it, so that attendees might feel more free to speak about their own projects, though this wound up not being the case. Our presenters were Jessica Venable, from VCU’s Office of Research, and David Holland, from VCU’s School of the Arts, each of whom has expertise in grantsmanship and funding. Attendance for this workshop was lower than for the panels, which was initially somewhat disappointing, but at twenty people, it was a tremendous turnout compared to most other VCU Libraries open workshops, particularly as it occurred during final exams.

Stumbles, Challenges, and Surprises

The main problems we experienced were those associated with the planning and execution of almost any event. These include issues such as deciding when during the semester, on which day of the week, and at what time of day to schedule programming to allow for maximum attendance. Similarly, finding rooms on campus large enough to hold as many attendees as possible, without being too large for the number that do show up, proved a challenge. We also grappled with travel issues for our speakers, specifically a canceled flight that made it necessary for one of our panelists to present virtually from the Philadelphia International Airport.

Perhaps more specific to this type of endeavor were the problems we faced with audience expectations. If your proposed DH initiative is something completely new, the audience may be happy with almost any level or type of programming provided, having no real expectations. Later on, as our post-event surveys revealed, our audience attended with some expectations about the nature of the programming.

Attendee Survey

Different people want different things or all things – including lectures, conversational and networking events, and active learning opportunities. There was also some tension between an interest in the opportunity to learn something potentially new and innovative from outside speakers and an interest in (and even a demand for) Digital Pragmata’s role and perceived mandate to highlight VCU projects.

Various other results were unexpected. Many attendees were attracted to the topic of “digital scholarship” and “digital objects,” but came from departments outside the arts or humanities. Likewise, while we expected a positive response overall based on early conversations with stakeholders, only one survey respondent felt that the panel they attended did not meet their expectations. It was a pleasure to succeed, by and large, but the margin by which we exceeded expectations and the intensity of interest across the university were remarkable.

The Road Ahead

Attendees’ response to Digital Pragmata was overwhelmingly positive, and the year ended with the initiative counted a success by stakeholders inside and outside of the library. Survey comments heavily influenced our plans for 2013-2014, which gradually took shape over the summer. Upcoming programming will feature a blend of events, from a brown bag series to multiple large events, at and beyond the scale of Spring 2013.

The complexion of the project has changed with Kristina’s move from VCU Libraries to Indiana University, where she is Head of their Fine Arts Library, though she retains an interest in and hopes to continue to contribute to Digital Pragmata. John is now working with new partners at VCU Libraries, both to enlarge the initiative’s base of expertise and to accommodate a more ambitious schedule of programming for the new year. The project was time-consuming and sometimes exhausting, but it allowed us to engage with hundreds of faculty and students in the arts and humanities, as well as the broader VCU and local communities, teaching us about events planning, programming, publicity, outreach, and more about the digital arts and humanities in the process.


POST: From Trees to Webs: Uprooting Knowledge through Visualization

Scott B. Weingart has posted a preprint [pdf] of “From Trees to Webs: Uprooting Knowledge through Visualization,” in which he discusses the shift from visualizing the classification of knowledge as hierarchical and linear (tree) to the modern conception of knowledge as rhizomatic and networked (web).

In the blog post announcing the work, which also contains several images that are not included in the preprint, Weingart explains:

It’s basically two stories: one of how we shifted from understanding the world hierarchically to understanding it as a flat web of interconnected parts, and the other of how the thing itself and knowledge of that thing became separated.

The article will be published as part of the proceedings of the Universal Decimal Classification Seminar in The Hague, taking place in October 2013.

POST: Patchwork Libraries

In a new post on his Sapping Attention blog, Ben Schmidt offers a visualization of the library sources of books included in Bookworm. Bookworm, a project that “explores new means of library data visualization,” takes books and metadata included in the Internet Archive’s Open Library as its source material. The visualization, beyond drawing attention to the number of books contributed to the Internet Archive by particular libraries over time, points to “temporal patterns” around the 1923 copyright cutoff or particular concentrations of institutional book collections. His lesson is that one must know the source material making up the aggregate and understand where and why shifts in the overall collection have occurred:

The digital libraries we’re building are immense patchworks. That means they have seams. It’s possible to tell stories that rest lightly across the many elements. But push too hard in the wrong direction, and conclusions can unravel.

This attention to the source library materials is reflected in other visualizations and tools with possible application in the Digital Public Library of America. A dh+lib review post last week described Harvard metaLab’s Data Artifacts project, “which seeks to understand the collections data of libraries and other institutions as cultural objects.” This week’s dh+lib review of Matthew Jockers and Julia Flanders’ keynote from the Boston Area Days of DH 2013 looks at the question of scale and the artificial divide between macro and micro reading in DH.

Is aggregation spurring a return to close examination, and, in the language of metaLab (“artifacts, things assembled by human hands and minds, with stories to tell and values to express”), to an almost artisanal sense of the small, handwrought particulars of the sources themselves? Critiquing some of the aggregate-data-driven claims of other scholars, Schmidt has commented: “The explanations for patterns like this might be solved by algorithmic firepower, but just as often they’ll be solved by arcane knowledge from history, literature, or library science.” Might algorithmic firepower and arcane knowledge be complementary?

POST: Visualizations and Digital Collections

In a previous post on dh+lib, Jefferson Bailey outlined some of the ways in which the digital humanities could enhance access to and discovery of cultural heritage materials. Now, in “Visualizations and Digital Collections,” he explores the potential of visualization as a technique for appraisal in born-digital collections:

[G]iven the ever-increasing volume of material in born-digital archival collections, visualizations are increasingly a crucial tool in a variety of managerial functions for digital stewards, from analyzing directory contents prior to acquisition, to risk assessment, to visualizing contextual relations between collections.
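
By way of illustration only (this is not from Bailey’s post), the kind of directory analysis mentioned above might begin with a simple tally like the following Python sketch, which walks a transferred directory tree and totals file counts and sizes by extension before any visualization is drawn; the path is a placeholder.

```python
# A minimal, hypothetical sketch: summarize a directory tree by file extension
# and total size, the kind of tally that appraisal visualizations build on.
import os
from collections import Counter

def summarize(root):
    counts, sizes = Counter(), Counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower() or "(no extension)"
            counts[ext] += 1
            sizes[ext] += os.path.getsize(os.path.join(dirpath, name))
    return counts, sizes

counts, sizes = summarize("/path/to/accession")  # placeholder path
for ext, n in counts.most_common():
    print(f"{ext}\t{n} files\t{sizes[ext] / 1_000_000:.1f} MB")
```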