Visual Communications and Technology Education Faculty Publications

Document Type

Conference Proceeding

Abstract

Translating between English and American Sign Language (ASL) requires an avatar to display synthesized ASL. Essential to the language are nonmanual signals that appear on the face, which have historically posed a difficult challenge for signing avatars: previous systems were hampered by an inability to portray simultaneously occurring nonmanual signals on the face. This paper presents a method designed to support co-occurring nonmanual signals in ASL. Animations produced by the new system were tested with 40 members of the Deaf community in the United States. Participants identified all of the nonmanual signals even when they co-occurred. Co-occurring question nonmanuals and affect information were distinguishable, which is particularly striking because the two processes move an avatar’s brows in competing ways. This breakthrough brings the state of the art one step closer to the goal of an automatic English-to-ASL translator.
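
To make the brow conflict concrete, the sketch below illustrates the problem the paper addresses; it is a minimal, hypothetical example and not the paper's actual composition scheme. Two concurrent processes (here, a question nonmanual and an affect signal) drive the same brow channel in opposite directions, and a blend must reconcile them. The channel representation, weights, and the weighted-average rule are all assumptions made for illustration.

    def blend_brow_channels(channels):
        """Combine competing brow offsets from co-occurring processes.

        channels: list of (offset, weight) pairs, where offset is a signed
        brow displacement (negative = lowered, positive = raised) and weight
        reflects that process's contribution at the current frame.
        """
        total_weight = sum(w for _, w in channels)
        if total_weight == 0.0:
            return 0.0  # no process is active; brows stay neutral
        # A weighted average keeps the result inside the range of the inputs,
        # so neither process simply overwrites the other.
        return sum(o * w for o, w in channels) / total_weight

    # Example: a question nonmanual lowers the brows (-0.8) while an affect
    # signal raises them (+0.6); the blend yields an intermediate pose.
    frame_offset = blend_brow_channels([(-0.8, 1.0), (0.6, 0.5)])
    print(frame_offset)  # ≈ -0.33: brows lowered, but less than for the question alone

A flat average like this tends to dilute both signals, which is precisely why rendering co-occurring nonmanuals that remain individually distinguishable, as the paper reports achieving, is a hard problem.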

Conference proceedings from the International Conference on Computer Graphics Theory and Applications and the International Conference on Information Visualization Theory and Applications, Barcelona, Spain, 21-24 February 2013.

Edited by Sabine Coquillart, Carlos Andújar, Robert S. Laramee, Andreas Kerren, and José Braz. SciTePress, 2013, pp. 407-416.

Publication Date

February 21, 2013

Start Page No.

407

End Page No.

416
