"On-Demand" Remote Sign Language Interpretation

Kitch Barnicle, Gregg Vanderheiden, Al Gilman
Trace R & D Center, University of Wisconsin - Madison
{barnicle@trace.wisc.edu, gv@trace.wisc.edu, asgilman@iamdigex.net}


Experiments were carried out at the Supercomputing 99 (SC99) conference in Portland, Oregon to assess the feasibility of providing sign language "interpreter-on-demand" services to conference attendees who are deaf. In the future, such "pop-up interpreters" could be accessed through standard web browsers, making interpreters reachable on any web-capable device with a video display and a sufficiently fast connection.


Traditionally, interpreters provide sign language interpretation services in person. However, in-person delivery limits the availability of these services. Web-based video communication technology promises to widen access to interpretation services. Access to remote interpreters via the web eliminates the time interpreters spend traveling to a location to provide services, which can lower cost and increase interpreter availability. Similarly, "on-demand" interpreters can provide interpretation for as little as a few minutes at a time, rather than the customary two-hour minimum, yielding additional cost savings. Finally, with remote access, interpreters from anywhere in the world, including interpreters with expertise in a particular discipline, can be hired. As people and businesses gain access to the web, on-demand, anytime, anywhere interpretation services become feasible.

Sign language interpretation involves rapid hand, arm, and finger movements, changes in facial expression, and lip movements. These fast, often small movements can be difficult to detect unless the video achieves high fidelity in both detail and timing. Data communication over today's commodity Internet is subject to performance limitations and fluctuations that degrade video fidelity to an unacceptable degree. Fortunately, working with SC99 gave us access to advanced networks, allowing us to avoid this problem and carry out the experiments.
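A rough calculation illustrates why commodity connections of the era fell short. The figures below (CIF resolution, 15 frames per second, a 100:1 compression ratio) are illustrative assumptions for the sketch, not measurements from these experiments:

```python
# Back-of-envelope bitrate estimate for sign-language-quality video.
# All parameter values are illustrative assumptions, not figures
# reported by the SC99 experiments.

def raw_bitrate_bps(width, height, fps, bits_per_pixel=12):
    """Uncompressed bitrate in bits/s, assuming 4:2:0 chroma subsampling
    (an average of 12 bits per pixel)."""
    return width * height * bits_per_pixel * fps

# CIF resolution (352x288) at a frame rate high enough to capture
# fingerspelling and facial expression changes.
raw = raw_bitrate_bps(352, 288, 15)   # 18,247,680 bits/s, ~18.2 Mbit/s raw
compressed = raw / 100                # assume ~100:1 video compression

print(f"raw: {raw / 1e6:.1f} Mbit/s")
print(f"compressed: {compressed / 1e3:.0f} kbit/s")
```

Even under this generous compression assumption, the stream needs on the order of 180 kbit/s, several times the capacity of a 56 kbit/s modem, before accounting for packet loss and jitter, which is why the conference's advanced network infrastructure mattered.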


The two primary objectives for this project were 1) to demonstrate to the high performance computing community the potential application of high-speed networks for the provision of remote sign language interpretation and 2) to develop an understanding of the technical issues surrounding the provision of remote sign language interpretation over high performance and wireless networks.


Four experiments were carried out:

  1. Interpretation of Keynote and Plenary Sessions - Interpreters at the remote site listened to the keynote and plenary sessions over a speakerphone and signed the sessions. The video image of the interpreter was sent back to the convention center via Microsoft NetMeeting and Internet2. This image was projected onto an 8' screen in a room that held over 1,000 people.
  2. Interpretation of Informal Conversations - An individual who is deaf used an "interpreter-on-demand" during informal conversations. He carried a Sony PictureBook mini-notebook computer with a wireless network connection as he roamed the convention center. Upon request, a remote interpreter signed informal conversations. Audio and video were transmitted back and forth via NetMeeting and a wireless network. The interpreter's image was displayed on the PictureBook.
  3. Individualized Interpretation of a Conference Session - Tests were also carried out to see if the wireless system could support the user during an individual conference session. A wireless assistive listening device fed the audio from the speaker's presentation into the PictureBook and then to the interpreter over the wireless and Internet infrastructure of the conference. Since the PictureBook had a built-in camera, the user could also sign back to the interpreter to confirm a sign or request clarification.
  4. Interpretation Delivered through a Head Mounted Display - A final series of tests was carried out using a head mounted display (HMD). With this configuration, the user was able to view the presenter and presentation screen by looking "through" the HMD while simultaneously viewing the interpreter on the HMD.

Findings and Next Steps


The research team sought to integrate off-the-shelf hardware and software with high-speed networks in order to demonstrate a useful and practical application of these technologies: the delivery of remote interpretation services. These experiments, along with developments in networks with Quality of Service (QoS) capabilities, high-speed networks, and mobile devices, suggest that remote interpretation services are feasible and can be practical in the near future.

By working with research programs and emerging commercial services, the goal is to eventually create mechanisms for combining computer speech recognition and translation technologies with human assistance, when and where needed, to yield low-cost text and sign language "interpretation on demand." Even before such devices become standard tools, "pop-up interpreter" windows could be built into standard browsers so that wherever there is a browser, there could be an interpreter.


This project was funded by the National Institute on Disability and Rehabilitation Research (NIDRR) and the Education, Outreach and Training (EOT) program of the Partnerships for Advanced Computational Infrastructure, which is funded by the National Science Foundation (NSF).