Generating Animations of American Sign Language
through Motion-Capture and
Participation of Native ASL Signers

Linguistic and Assistive Technology Laboratory
Queens College
The City University of New York

American Sign Language (ASL) is the primary means of communication for about 500,000 people in the United States (Mitchell, 2006).  ASL is a distinct language from English; in fact, a majority of deaf U.S. high school graduates read English at only a fourth-grade (age 10) level (Holt, 1993).  Consequently, many deaf people find it difficult to read English text on computers, in television captioning, and in other settings.  Software that translates English text into an animation of a human character performing ASL would make more information and services accessible to deaf Americans.

Unfortunately, essential aspects of ASL are not yet modeled by modern computational linguistic software.  Specifically, ASL signers associate entities under discussion with 3D locations around their bodies, and the movements of many types of ASL signs, including pronouns, determiners, many noun phrases, and many types of verbs, change based on these locations.  To create software that understands or generates ASL, we must answer several questions: When do signers associate entities under discussion with locations in space?  Where do they position them?  How must ASL sign movements be modified based on their arrangement?
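
For illustration only, the short Python sketch below shows one way a generation system might represent this association between discourse entities and 3D loci in the signing space.  The class name, coordinate conventions, and example values are hypothetical assumptions, not the representation used in this project.

from dataclasses import dataclass, field
from typing import Dict, Tuple

Point3D = Tuple[float, float, float]  # (x, y, z) in meters, relative to the signer's torso

@dataclass
class SpatialReferenceModel:
    """Tracks which discourse entities have been assigned loci in signing space."""
    loci: Dict[str, Point3D] = field(default_factory=dict)

    def establish(self, entity: str, locus: Point3D) -> None:
        """Associate an entity under discussion with a 3D locus."""
        self.loci[entity] = locus

    def locus_of(self, entity: str) -> Point3D:
        """Look up the locus that a pronoun or verb referring to this entity should use."""
        return self.loci[entity]

# Example: MOTHER is set up on the signer's right, DOCTOR on the left.
model = SpatialReferenceModel()
model.establish("MOTHER", (0.35, 1.25, 0.40))
model.establish("DOCTOR", (-0.35, 1.25, 0.40))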

The objective of this project is to discover techniques for generating ASL animations that automatically predict when to associate conversation topics with 3D locations, where to place them, and how these locations affect ASL sign movements.  Our research methods include: creating the first annotated corpus of ASL movement data recorded from native signers wearing a motion-capture suit and gloves; annotating this corpus with features relating to the establishment of entity-representing locations in space; using machine learning to analyze when and where these locations are established and how the 3D motion paths of signs are parameterized on them; incorporating the resulting models into ASL generation software; and recruiting native ASL signers to evaluate the 3D animations that result.
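
As a simplified illustration of how a sign's 3D motion path can be parameterized on entity-representing locations, the Python sketch below moves an indicating verb (such as GIVE) from its subject's locus to its object's locus along a shallow arc.  The function name, arc shape, and coordinate values are illustrative assumptions; they stand in for the models learned from the corpus rather than reproducing them.

from typing import List, Tuple

Point3D = Tuple[float, float, float]  # (x, y, z) in meters, relative to the signer's torso

def indicating_verb_path(subject_locus: Point3D,
                         object_locus: Point3D,
                         arc_height: float = 0.08,
                         steps: int = 20) -> List[Point3D]:
    """Return a sequence of hand positions moving from the subject's locus to the object's."""
    path: List[Point3D] = []
    for i in range(steps + 1):
        t = i / steps
        # Linear interpolation between the two loci ...
        x = subject_locus[0] + t * (object_locus[0] - subject_locus[0])
        y = subject_locus[1] + t * (object_locus[1] - subject_locus[1])
        z = subject_locus[2] + t * (object_locus[2] - subject_locus[2])
        # ... with a shallow vertical arc that peaks midway through the movement.
        y += arc_height * 4.0 * t * (1.0 - t)
        path.append((x, y, z))
    return path

# "MOTHER GIVE DOCTOR": the hand travels from MOTHER's locus to DOCTOR's locus.
give_path = indicating_verb_path((0.35, 1.25, 0.40), (-0.35, 1.25, 0.40))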

This material is based upon work supported by the National Science Foundation under Grant No. 0746566.  Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF).

Resources

Motion-Capture Glove Calibration Protocol: http://latlab.cs.qc.cuny.edu/glove/index.html

Publications

Matt Huenerfauth, Pengfei Lu. 2010 (in press). “Annotating Spatial Reference in a Motion-Capture Corpus of American Sign Language Discourse.” Proceedings of the Fourth Workshop on the Representation and Processing of Signed Languages: Corpora and Sign Language Technologies, the 7th International Conference on Language Resources and Evaluation (LREC 2010), Valletta, Malta.

Pengfei Lu, Matt Huenerfauth. 2009. “Accessible Motion-Capture Glove Calibration Protocol for Recording Sign Language Data from Deaf Subjects.” Proceedings of the 11th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2009), Pittsburgh, Pennsylvania, USA.

Matt Huenerfauth. 2009. “A Linguistically Motivated Model for Speed and Pausing in Animations of American Sign Language.” ACM Transactions on Accessible Computing.

Matt Huenerfauth. 2009. “Improving Spatial Reference in American Sign Language Animation through Data Collection from Native ASL Signers.”  Proceedings of the International Conference on Universal Access in Human-Computer Interaction.  San Diego, CA.