POST: Zines vs. Google Vision API — Part 1: Process

Matt Miller (Pratt Institute) has written a post on Medium entitled “Zines vs. Google Vision API — Part 1: Process,” which details the first phase of a project he has begun with digitized zines from the Solidarity! Revolutionary Center and Radical Library. Miller describes the project’s early stages and walks readers through getting started with the Google Vision API suite:

While browsing them I thought about how complex they are from a digital surrogate point of view. Up there with digitized newspaper, zines are often a combination of text, various fonts, images, orientations, anything imaginable. I wondered what a digital discovery system would look like for a collection of zines. These zines also have very minimal metadata, a title, creator and sometimes a description and subject terms. Simultaneously I’ve been looking at the Google Vision API suite and wondered what commodity computer vision API could do with this corpus. This is not deep learning model building, just very generic methods. But I thought it might be possible that they would be good enough to create some compelling use cases for a Zine discovery system. Plus when you sign up you get a $300 API credit, so…of course, let’s do that.

Miller goes on to share a link to an interface to the dataset that highlights the initial results.
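For readers curious about the kind of request Miller’s experiment involves: the Vision API features he mentions (full-page OCR and generic image labeling) are exposed through Google’s `images:annotate` REST endpoint. The following is a minimal sketch of building such a request in Python; the helper name, feature choices, and filenames are illustrative assumptions, not taken from Miller’s post.

```python
import base64
import json

# Google Cloud Vision REST endpoint for image annotation.
VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"


def build_annotate_request(image_bytes: bytes, max_labels: int = 10) -> dict:
    """Build a request body asking for full-page OCR and generic labels.

    DOCUMENT_TEXT_DETECTION handles dense, mixed-layout text (well suited
    to zine pages); LABEL_DETECTION returns generic descriptive tags.
    """
    return {
        "requests": [
            {
                # The API expects image content as base64-encoded bytes.
                "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
                "features": [
                    {"type": "DOCUMENT_TEXT_DETECTION"},
                    {"type": "LABEL_DETECTION", "maxResults": max_labels},
                ],
            }
        ]
    }


# To actually send the request, POST the JSON body with an API key, e.g.:
#   import requests  # third-party; hypothetical usage
#   with open("zine_page.jpg", "rb") as f:  # "zine_page.jpg" is a placeholder
#       body = build_annotate_request(f.read())
#   resp = requests.post(f"{VISION_ENDPOINT}?key=YOUR_API_KEY",
#                        data=json.dumps(body))
```

The response would include the recognized text and label annotations per page, which is the raw material a zine discovery interface could index.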

dh+lib Review

This post was produced through cooperation among Kelsey George, Joseph Koivisto, Stephen Lingrell, Stephen McLaughlin, Megan Martinsen, Allison Ringness, and Chella Vaidyanathan (Editors-at-large for the week), Patrick Williams (Editor for the week), and Caitlin Christian-Lamb, Caro Pinto and Roxanne Shirazi (dh+lib Review Editors).