Publications


Toward an interagency language roundtable based assessment of speech-to-speech translation capabilities

Published in:
AMTA 2006, 7th Biennial Conf. of the Association for Machine Translation in the Americas, 8-12 August 2006.

Summary

We present observations from three exercises designed to map the effective listening and speaking skills of an operator of a speech-to-speech translation system (S2S) to the Interagency Language Roundtable (ILR) scale. Such a mapping is nontrivial, but will be useful for government and military decision makers in managing expectations of S2S technology. We observed domain-dependent S2S capabilities in the ILR range of Level 0+ to Level 1, and interactive text-based machine translation in the Level 3 range.

Two experiments comparing reading with listening for human processing of conversational telephone speech

Published in:
6th Annual Conf. of the Int. Speech Communication Association, INTERSPEECH 2005, 4-8 September 2005.

Summary

We report on results of two experiments designed to compare subjects' ability to extract information from audio recordings of conversational telephone speech (CTS) with their ability to extract information from text transcripts of these conversations, with and without the ability to hear the audio recordings. Although progress in machine processing of CTS speech is well documented, human processing of these materials has not been as well studied. These experiments compare subjects' processing time and comprehension of widely available CTS data in audio and written formats -- one experiment involves careful reading and one involves visual scanning for information. We observed a very modest improvement using transcripts compared with the audio-only condition for the careful reading task (speed-up by a factor of 1.2) and a much more dramatic improvement using transcripts in the visual scanning task (speed-up by a factor of 2.9). The implications of the experiments are twofold: (1) we expect to see similar gains in human productivity for comparable applications outside the laboratory environment and (2) the gains can vary widely, depending on the specific tasks involved.

Measuring translation quality by testing English speakers with a new Defense Language Proficiency Test for Arabic

Published in:
Int. Conf. on Intelligence Analysis, 2-5 May 2005.

Summary

We present results from an experiment in which educated native English speakers answered questions from a machine translated version of a standardized Arabic language test. We compare the machine translation (MT) results with professional reference translations as a baseline for the purpose of determining the level of Arabic reading comprehension that current machine translation technology enables an English speaker to achieve. Furthermore, we explore the relationship between the current, broadly accepted automatic measures of performance for machine translation and the Defense Language Proficiency Test, a broadly accepted measure of effectiveness for evaluating foreign language proficiency. In doing so, we intend to help translate MT system performance into terms that are meaningful for satisfying Government foreign language processing requirements. The results of this experiment suggest that machine translation may enable Interagency Language Roundtable Level 2 performance, but is not yet adequate to achieve ILR Level 3. Our results are based on 69 human subjects reading 68 documents and answering 173 questions, giving a total of 4,692 timed document trials and 7,950 question trials. We propose Level 3 as a reasonable near-term target for machine translation research and development.

Measuring human readability of machine generated text: three case studies in speech recognition and machine translation

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Vol. 5, ICASSP, 19-23 March 2005, pp. V-1009 - V-1012.

Summary

We present highlights from three experiments that test the readability of current state-of-the-art system output from (1) an automated English speech-to-text system, (2) a text-based Arabic-to-English machine translation system, and (3) an audio-based Arabic-to-English MT process. We measure readability in terms of reaction time and passage comprehension in each case, applying standard psycholinguistic testing procedures and a modified version of the standard Defense Language Proficiency Test for Arabic called the DLPT*. We learned that: (1) subjects are slowed down about 25% when reading STT system output, (2) text-based MT systems enable an English speaker to pass Arabic Level 2 on the DLPT* and (3) audio-based MT systems do not enable English speakers to pass Arabic Level 2. We intend for these generic measures of readability to predict performance of more application-specific tasks.

New measures of effectiveness for human language technology

Summary

The field of human language technology (HLT) encompasses algorithms and applications dedicated to processing human speech and written communication. We focus on two types of HLT systems: (1) machine translation systems, which convert text and speech files from one human language to another, and (2) speech-to-text (STT) systems, which produce text transcripts when given audio files of human speech as input. Although both processes are subject to machine errors and can produce varying levels of garbling in their output, HLT systems are improving at a remarkable pace, according to system-internal measures of performance. To learn how these system-internal measurements correlate with improved capabilities for accomplishing real-world language-understanding tasks, we have embarked on a collaborative, interdisciplinary project involving Lincoln Laboratory, the MIT Department of Brain and Cognitive Sciences, and the Defense Language Institute Foreign Language Center to develop new techniques to scientifically measure the effectiveness of these technologies when they are used by human subjects.

Two new experimental protocols for measuring speech transcript readability for timed question-answering tasks

Published in:
Proc. DARPA EARS Rich Translation Workshop, 8-11 November 2004.

Summary

This paper reports results from two recent psycholinguistic experiments that measure the readability of four types of speech transcripts for the DARPA EARS Program. The two key questions in these experiments are (1) how much speech transcript cleanup aids readability and (2) how much the type of cleanup matters. We employ two variants of the four-part figure of merit for measuring readability defined at the RT02 workshop and described in our Eurospeech 2003 paper [4], namely: accuracy of answers to comprehension questions, reaction time for passage reading, reaction time for question answering, and a subjective rating of passage difficulty. The first protocol employs a question-answering task under time pressure. The second employs a self-paced line-by-line reading paradigm. Both protocols yield similar results: all three types of cleanup in the experiment improve readability by 5-10%, but the self-paced reading protocol needs far fewer subjects for statistical significance.

The effect of text difficulty on machine translation performance -- a pilot study with ILR-related texts in Spanish, Farsi, Arabic, Russian and Korean

Published in:
4th Int. Conf. on Language Resources and Evaluation, LREC, 26-28 May 2004.

Summary

We report on initial experiments that examine the relationship between automated measures of machine translation performance (Doddington 2003; Papineni et al. 2001) and the Interagency Language Roundtable (ILR) scale of language proficiency/difficulty that has been in standard use for U.S. government language training and assessment for the past several decades (Child, Clifford, and Lowe 1993). The main question we ask is how technology-oriented measures of MT performance relate to the ILR difficulty levels, where we understand that a linguist with ILR proficiency level N is expected to be able to understand a document rated at level N, but to have increasing difficulty with documents at higher levels. In this paper, we find that some key aspects of MT performance track with ILR difficulty levels, primarily for MT output whose quality is good enough to be readable by human readers.
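The Papineni et al. measure cited in this abstract is BLEU, which scores MT output by its n-gram overlap with reference translations. As a rough illustration of what such an automated measure computes, here is a minimal single-reference, sentence-level sketch; the example sentences are invented, and this simplified version omits the smoothing and multi-reference handling used in practice:

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Minimal sentence-level BLEU: geometric mean of modified n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    log_precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        # Clipped (modified) precision: each candidate n-gram counts only
        # as often as it appears in the reference.
        overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
        total = sum(cand.values())
        if total == 0 or overlap == 0:
            return 0.0  # no smoothing in this sketch
        log_precisions.append(math.log(overlap / total))

    # Brevity penalty discourages overly short candidates.
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(log_precisions) / max_n)

reference = "the interagency language roundtable scale rates proficiency".split()
candidate = "the interagency language roundtable scale rates fluency".split()
score = bleu(candidate, reference)
```

With one differing word, the n-gram precisions here are 6/7, 5/6, 4/5, and 3/4, so the score is (3/7)^(1/4), roughly 0.81; real corpus-level BLEU aggregates counts over many segments.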

High-level speaker verification with support vector machines

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Vol. 1, ICASSP, 17-21 May 2004, pp. I-73 - I-76.

Summary

Recently, high-level features such as word idiolect, pronunciation, phone usage, prosody, etc., have been successfully used in speaker verification. The benefit of these features was demonstrated in the NIST extended data task for speaker verification; with enough conversational data, a recognition system can become familiar with a speaker and achieve excellent accuracy. Typically, high-level-feature recognition systems produce a sequence of symbols from the acoustic signal and then perform recognition using the frequency and co-occurrence of symbols. We propose the use of support vector machines for performing the speaker verification task from these symbol frequencies. Support vector machines have been applied to text classification problems with much success. A potential difficulty in applying these methods is that standard text classification methods tend to smooth frequencies, which could degrade speaker verification. We derive a new kernel based upon standard log likelihood ratio scoring to address limitations of text classification methods. We show that our methods achieve significant gains over standard methods for processing high-level features.
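The pipeline the abstract describes (symbol sequences reduced to frequency vectors, then scored with a linear SVM under a log-likelihood-ratio-motivated weighting) can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the symbol streams are simulated rather than produced by a recognizer, and the 1/sqrt(background-frequency) weighting is one common way to make a linear kernel on frequencies approximate log likelihood ratio scoring:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_symbols = 20  # size of the discrete symbol inventory (e.g., phone labels)

def symbol_frequencies(seq, n_symbols):
    """Normalized unigram frequencies of a symbol sequence."""
    counts = np.bincount(seq, minlength=n_symbols)
    return counts / counts.sum()

# Simulate a target speaker and an impostor with different symbol-usage
# distributions; each "conversation" is a sequence of 500 symbols.
p_target = rng.dirichlet(np.ones(n_symbols))
p_impostor = rng.dirichlet(np.ones(n_symbols))

def sample_conversations(p, n_conv, length=500):
    return [rng.choice(n_symbols, size=length, p=p) for _ in range(n_conv)]

X = np.array([symbol_frequencies(s, n_symbols)
              for s in sample_conversations(p_target, 20)
              + sample_conversations(p_impostor, 20)])
y = [1] * 20 + [0] * 20

# Weight each frequency by 1/sqrt(background frequency): the inner product
# of weighted vectors then approximates a log-likelihood-ratio score.
p_background = X.mean(axis=0) + 1e-6
Xw = X / np.sqrt(p_background)

clf = SVC(kernel="linear").fit(Xw, y)

# Score an unseen conversation from the target speaker; a positive
# decision-function value accepts the target hypothesis.
test_freqs = symbol_frequencies(sample_conversations(p_target, 1)[0], n_symbols)
score = clf.decision_function((test_freqs / np.sqrt(p_background)).reshape(1, -1))
```

In this toy setting a plain linear kernel would also separate the two speakers; the weighting matters when rare but speaker-characteristic symbols should carry more weight than common ones.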

Beyond cepstra: exploiting high-level information in speaker recognition

Summary

Traditionally, speaker recognition techniques have focused on using short-term, low-level acoustic information such as cepstral features extracted over 20-30 ms windows of speech. But speech is a complex behavior conveying more information about the speaker than merely the sounds that are characteristic of the vocal apparatus. This higher-level information includes speaker-specific prosodics, pronunciations, word usage and conversational style. In this paper, we review some of the techniques to extract and apply these sources of high-level information, with results from the NIST 2003 Extended Data Task.

Biometrically enhanced software-defined radios

Summary

Software-defined radios and cognitive radios offer tremendous promise but have a great need for user authentication. Authenticating users is essential to ensuring authorized access and actions in private and secure communications networks. User authentication for software-defined radios and cognitive radios is our focus here. We present various means of authenticating users to their radios and networks, authentication architectures, and the complementary combination of authenticators and architectures. Although devices can be strongly authenticated (e.g., cryptographically), reliably authenticating users is a challenge. To meet this challenge, we capitalize on new forms of user authentication combined with new authentication architectures to support features such as continuous user authentication and varying levels of trust-based authentication. We generalize biometrics to include recognizing user behaviors and use them in concert with knowledge- and token-based authenticators. An integrated approach to user authentication and user authentication architectures is presented here to enhance trusted radio communications networks.