POST: Word Embedding Models

Benjamin Schmidt (Northeastern University) has shared two posts about word embedding models, “Vector Space Models for the digital humanities” and “Rejecting the gender binary,” on his blog. Each post explores facets of algorithms that can be used in digital humanities research. The first post provides an overview of word embedding models, contrasting them with topic models:
DHers use topic models because it seems at least possible that each individual topic can offer a useful operationalization of some basic and real element of humanities vocabulary: topics (Blei), themes (Jockers), or discourses (Underwood/Rhody). The word embedding models offer something slightly more abstract, but equally compelling: a spatial analogy to relationships between words.
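The spatial analogy Schmidt describes can be sketched in a few lines: each word becomes a vector, and relatedness becomes closeness in the space, conventionally measured by cosine similarity. The tiny three-dimensional vectors below are invented for illustration only; real models learn hundreds of dimensions from large corpora.

```python
import numpy as np

# Toy word vectors (invented for illustration; real embeddings are learned).
vectors = {
    "king":    np.array([0.9, 0.8, 0.1]),
    "queen":   np.array([0.85, 0.75, 0.2]),
    "cabbage": np.array([0.1, 0.05, 0.9]),
}

def cosine(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# In this toy space, "king" sits much nearer "queen" than "cabbage".
print(cosine(vectors["king"], vectors["queen"]) >
      cosine(vectors["king"], vectors["cabbage"]))  # True
```

The point is not the particular numbers but the geometry: relationships between words are read off as distances and directions in the space.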
The second post is a “more substantive look at how the method can help us better imagine a version of English without gendered language through some tricks of linear algebra.”
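One common linear-algebra trick along these lines (a minimal sketch of vector rejection, not necessarily Schmidt's exact procedure) is to define a gender axis from paired words such as "he" and "she," then subtract each word vector's component along that axis, leaving a vector that is orthogonal to, and so neutral with respect to, the gendered direction. All vectors below are toy values invented for illustration.

```python
import numpy as np

# Toy word vectors (invented for illustration; real embeddings are learned).
vectors = {
    "he":    np.array([1.0, 0.2, 0.3]),
    "she":   np.array([-1.0, 0.25, 0.28]),
    "nurse": np.array([-0.6, 0.8, 0.1]),
}

# Define the gender axis as the (unit-length) difference of a gendered pair.
gender_axis = vectors["he"] - vectors["she"]
gender_axis = gender_axis / np.linalg.norm(gender_axis)

def reject(v, axis):
    """Vector rejection: subtract v's projection onto a unit-length axis."""
    return v - np.dot(v, axis) * axis

neutral_nurse = reject(vectors["nurse"], gender_axis)

# The de-gendered vector has no remaining component along the gender axis
# (zero up to floating-point error).
print(np.dot(neutral_nurse, gender_axis))
```

Applying the same rejection to every vector in the model yields a space in which the gendered direction has been flattened out, which is one way to "imagine a version of English without gendered language."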

dh+lib Review

This post was produced through a cooperation between Talea Anderson, Caroline Barratt, Camille Cooper, Katrien Deroo, Kristina De Voe, Shilpa Rele, Amy Wickner, and Aparna Zambare (Editors-at-large for the week), Caro Pinto (Editor for the week), Sarah Potvin (Site Editor), Caitlin Christian-Lamb, Roxanne Shirazi, and Patrick Williams (dh+lib Review Editors).
