
Crowdsourcing Image Extraction and Annotation: Software Development and Case Study

  • Ana Jofre
  • Vincent Berardi
  • Kathleen P.J. Brennan
  • Aisha Cornejo
  • Carl Bennett
  • John Harlan

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

We describe the development of web-based software that facilitates large-scale, crowdsourced image extraction and annotation within image-heavy corpora of interest to the digital humanities. An application of this software is then detailed and evaluated through a case study in which it was deployed on Amazon Mechanical Turk to extract and annotate faces from the archives of Time magazine. Annotation labels included categories such as age, gender, and race, which were subsequently used to train machine learning models. The systematization of our crowdsourced data collection and worker quality verification procedures is detailed within this case study. We outline a data verification methodology that used validation images and required only two annotations per image to produce high-fidelity data, yielding results comparable to methods that use five annotations per image. Finally, we provide instructions for customizing our software to meet the needs of other studies, with the goal of offering this resource to researchers analyzing objects within other image-heavy archives.
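The verification methodology summarized above can be illustrated with a minimal sketch: use known-answer validation images to estimate each worker's reliability, then accept a label only when two trusted workers agree. All function names, data shapes, and the accuracy threshold below are illustrative assumptions, not the authors' actual implementation.

```python
from collections import defaultdict

def score_workers(annotations, validation_truth):
    """Fraction of validation images each worker labeled correctly."""
    correct, total = defaultdict(int), defaultdict(int)
    for image_id, worker_id, label in annotations:
        if image_id in validation_truth:
            total[worker_id] += 1
            if label == validation_truth[image_id]:
                correct[worker_id] += 1
    return {w: correct[w] / total[w] for w in total}

def verify_labels(annotations, validation_truth, min_accuracy=0.8):
    """Keep labels where two reliable workers agree; flag the rest."""
    scores = score_workers(annotations, validation_truth)
    by_image = defaultdict(list)
    for image_id, worker_id, label in annotations:
        if image_id in validation_truth:
            continue  # validation images are not part of the output data
        if scores.get(worker_id, 0.0) >= min_accuracy:
            by_image[image_id].append(label)
    accepted, flagged = {}, []
    for image_id, labels in by_image.items():
        # Two annotations that agree constitute an accepted label;
        # anything else is flagged for further review.
        if len(labels) >= 2 and len(set(labels)) == 1:
            accepted[image_id] = labels[0]
        else:
            flagged.append(image_id)
    return accepted, flagged

# Example usage with hypothetical data:
annotations = [
    ("val1", "w1", "male"), ("val1", "w2", "male"), ("val1", "w3", "female"),
    ("img1", "w1", "male"), ("img1", "w2", "male"),
    ("img2", "w1", "female"), ("img2", "w3", "male"),
]
accepted, flagged = verify_labels(annotations, {"val1": "male"})
```

In this toy run, workers w1 and w2 pass the validation check, so their agreeing labels on img1 are accepted, while img2 (one trusted label only) is flagged for additional annotation.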

Original language: English
Journal: Digital Humanities Quarterly
Volume: 14
Issue number: 2
State: Published - 2020
