Publications


Automated forecasting of road conditions and recommended road treatments for winter storms

Published in:
19th Int. Conf. of Interactive Information Processing Systems for Meteorology, Oceanography and Hydrology, 9-13 February 2003.

Summary

Over the past decade there have been significant improvements in the availability, volume, and quality of the sensors and technology utilized to both capture the current state of the atmosphere and generate weather forecasts. New radar systems, automated surface observing systems, satellites and advanced numerical models have all contributed to these advances. However, the practical application of this new technology for transportation decision makers has been primarily limited to aviation. Surface transportation operators, like air traffic operators, require tailored weather products and alerts and guidance on recommended remedial action (e.g. applying chemicals or adjusting traffic flow). Recognizing this deficiency, the FHWA (Federal Highway Administration) has been working to define the weather related needs and operational requirements of the surface transportation community since October 1999. A primary focus of the FHWA baseline user needs and requirements has been winter road maintenance personnel (Pisano, 2001). A key finding of the requirements process was that state DOTs (Departments of Transportation) were in need of a weather forecast system that provided them both an integrated view of their weather, road and crew operations and advanced guidance on what course of action might be required to keep traffic flowing safely. As a result, the FHWA funded a small project (~$900K/year) involving a consortium of national laboratories to aggressively research and develop a prototype integrated Maintenance Decision Support System (MDSS). The prototype MDSS uses state-of-the-art weather and road condition forecast technology and integrates it with FHWA anti-icing guidelines to provide guidance to State DOTs in planning and managing winter storm events (Mahoney, 2003). The overall flow of the MDSS is shown in Figure 1. Basic meteorological data and advanced models are ingested into the Road Weather Forecast System (RWFS). 
The RWFS, developed by the National Center for Atmospheric Research (NCAR), dynamically weights the ingested model and station data to produce ambient weather forecasts (temperature, precipitation, wind, etc.). More details on the RWFS can be found in (Myers, 2002). Next, the RCTM (Road Condition Treatment Module) ingests the forecast weather conditions from the RWFS, calculates the predicted road conditions (snow depth, pavement temperature), and generates a recommended treatment plan. Once a treatment plan has been determined, the recommendations are presented in map and table form through the MDSS display. The display also allows users to examine specific road and weather parameters and to override the algorithm-recommended treatments with a user-specified plan. A brief test of the MDSS was performed in Minnesota during the spring of 2002. Further refinements were made, and an initial version of the MDSS was released by the FHWA in September 2002. While this basic system is not yet complete, it does ingest all the necessary weather data and produce an integrated view of the road conditions and recommended treatments. This paper details the RCTM algorithm and its components, including the current and potential capabilities of the system.
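The dynamic weighting idea behind the RWFS can be illustrated with a minimal sketch. This is a hypothetical simplification, not the operational NCAR algorithm: each forecast source is weighted by the inverse of its recent mean-squared error against station observations, so sources that have verified better recently contribute more to the blend. The function and variable names are invented for illustration.

```python
# Sketch of dynamic forecast weighting in the spirit of the RWFS
# (hypothetical simplification; the operational system is far richer).
# Each source is weighted by the inverse of its recent mean-squared
# error versus station observations.

def blend_forecasts(forecasts, recent_errors):
    """Combine forecasts (dict: source -> value) with inverse-MSE weights.

    recent_errors maps each source to its recent mean-squared error
    against observations; lower error earns a higher weight.
    """
    weights = {s: 1.0 / recent_errors[s] for s in forecasts}
    total = sum(weights.values())
    return sum(weights[s] * forecasts[s] for s in forecasts) / total

# Example: two models predicting 2 m air temperature (deg C); the
# lower-error model pulls the blend toward its value.
blended = blend_forecasts({"modelA": -2.0, "modelB": -4.0},
                          {"modelA": 0.5, "modelB": 1.5})
```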

Marathon evaluation of optical materials for 157-nm lithography

Published in:
J. Microlithogr., Microfab., Microsyst., Vol. 2, No. 1, January 2003, pp. 19-26.

Summary

We present the methodology and recent results on the long-term evaluation of optical materials for 157-nm lithographic applications. We review the unique metrology capabilities that have been developed for accurately assessing optical properties of samples both online and offline, utilizing VUV spectrophotometry with in situ lamp-based cleaning. We describe ultraclean marathon testing chambers that have been designed to decouple effects of intrinsic material degradation from extrinsic ambient effects. We review our experience with lithography-grade 157-nm lasers and detector durability. We review the current status of bulk materials for lenses, such as CaF2 and BaF2, and durability results of antireflectance coatings. Finally, we discuss the current state of laser durability of organic pellicles.

Phonetic speaker recognition with support vector machines

Published in:
Adv. in Neural Information Processing Systems 16, 2003 Conf., 8-13 December 2003, p. 1377-1384.

Summary

A recent area of significant progress in speaker recognition is the use of high-level features such as idiolect, phonetic relations, prosody, and discourse structure. A speaker not only has a distinctive acoustic sound but uses language in a characteristic manner. Large corpora of speech data available in recent years allow experimentation with long-term statistics of phone patterns, word patterns, etc. of an individual. We propose the use of support vector machines and term frequency analysis of phone sequences to model a given speaker. To this end, we explore techniques for text categorization applied to the problem. We derive a new kernel based upon a linearization of likelihood ratio scoring. We introduce a new phone-based SVM speaker recognition approach that halves the error rate of conventional phone-based approaches.
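The term-frequency kernel idea can be sketched roughly as follows. This is a loose illustration of the linearized likelihood-ratio flavor of kernel, not the paper's exact derivation: each phone n-gram frequency is scaled by the inverse square root of its background probability before taking an inner product, so rare n-grams that a speaker uses often carry more weight. All names and the toy background model are assumptions.

```python
import math

# Hedged sketch of a term-frequency kernel over phone n-gram counts,
# loosely following the linearized likelihood-ratio idea: frequencies
# are scaled by 1/sqrt(background probability) before the inner product.

def tf_kernel(counts_a, counts_b, background):
    """counts_* map n-grams to counts; background maps n-grams to
    probabilities estimated over a large population."""
    na, nb = sum(counts_a.values()), sum(counts_b.values())
    k = 0.0
    for gram, p in background.items():
        fa = counts_a.get(gram, 0) / na          # relative frequency in a
        fb = counts_b.get(gram, 0) / nb          # relative frequency in b
        k += (fa / math.sqrt(p)) * (fb / math.sqrt(p))
    return k
```

A kernel of this form can be dropped into any standard SVM trainer, since it is an inner product in an explicit (background-normalized) feature space.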

Modeling prosodic dynamics for speaker recognition

Published in:
Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, ICASSP, Vol. 4, 6-10 April 2003, pp. IV-788 - IV-791.

Summary

Most current state-of-the-art automatic speaker recognition systems extract speaker-dependent features by looking at short-term spectral information. This approach ignores long-term information that can convey supra-segmental information, such as prosodics and speaking style. We propose two approaches that use the fundamental frequency and energy trajectories to capture long-term information. The first approach uses bigram models to model the dynamics of the fundamental frequency and energy trajectories for each speaker. The second approach uses the fundamental frequency trajectories of a pre-defined set of words as the speaker templates and then, using dynamic time warping, computes the distance between the templates and the words from the test message. The results presented in this work are on Switchboard 1 using the NIST extended data evaluation design. We show that these approaches can achieve an equal error rate of 3.7%, a 77% relative improvement over a system based on short-term pitch and energy features alone.
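The template-matching step rests on dynamic time warping, which aligns two trajectories of different lengths before measuring their distance. A minimal textbook DTW sketch is below; it is one hypothetical realization of the comparison described above, not the paper's implementation, and the absolute-difference local cost is an assumption.

```python
# Minimal dynamic time warping: align two fundamental-frequency
# trajectories and return the cost of the best alignment path.

def dtw(a, b):
    """DTW distance between two sequences of floats, using an
    absolute-difference local cost and unit step sizes."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: insertion, deletion, match
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

In a template system, the test word's pitch trajectory would be compared against each speaker's stored template with a call like `dtw(test_f0, template_f0)`, with the smallest distance indicating the best-matching speaker.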

Cluster detection in databases : the adaptive matched filter algorithm and implementation

Published in:
Data Mining and Knowledge Discovery, Vol. 7, No. 1, January 2003, pp. 57-79.

Summary

Matched filter techniques are a staple of modern signal and image processing. They provide a firm foundation (both theoretical and empirical) for detecting and classifying patterns in statistically described backgrounds. Application of these methods to databases has become increasingly common in certain fields (e.g. astronomy). This paper describes an algorithm (based on statistical signal processing methods), a software architecture (based on a hybrid layered approach) and a parallelization scheme (based on a client/server model) for finding clusters in large astronomical databases. The method has proved successful in identifying clusters in real and simulated data. The implementation is flexible and readily executed in parallel on a network of workstations.
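The core matched-filter operation can be shown in one dimension. This sketch assumes a white, zero-mean background, the simplest statistically described case; the paper's adaptive filter for astronomical catalogs is substantially richer, and the function name is invented.

```python
import math

# One-dimensional matched filter against a white, zero-mean background:
# correlate the data with the expected pattern (template), normalized by
# the template energy so the score is comparable across templates.

def matched_filter_score(data, template):
    """Normalized correlation of data with template (white-noise case)."""
    num = sum(d * t for d, t in zip(data, template))
    norm = math.sqrt(sum(t * t for t in template))
    return num / norm
```

Detection then reduces to sliding the template across the data and thresholding the score; for colored backgrounds, the data would first be whitened using the background covariance.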

A constrained joint optimization approach to dynamic sensor configuration

Published in:
36th Asilomar Conf. on Signals, Systems, and Computers, Vol. 2, 3-6 November 2002, pp. 1179-1183.

Summary

Through intelligent integration of sensing and processing functions, the sensing technology of the future is evolving towards networks of configurable sensors acting in concert. Realizing the potential of collaborative real-time configurable sensor systems presents a number of challenges, including the need to address the massive global optimization problem resulting from incorporating a large array of control parameters. This paper proposes a systematic approach to addressing complex global optimization problems by constraining the problem to a set of key control parameters and recasting a mission-oriented goal into a tractable joint optimization formulation. Using idealized but realistic physical models, a systematic methodology for approaching complex multi-dimensional joint optimization problems is used to compute system performance bounds for dynamic sensor configurations.
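The "constrain, then jointly optimize" idea can be illustrated with a toy grid search over two key control parameters under a resource constraint. Everything here is invented for illustration: the notional dwell-time and bandwidth parameters, the square-root utility, and the multiplicative resource budget are stand-ins for the paper's physical models.

```python
# Toy illustration: restrict the global problem to two key control
# parameters (notional per-sensor dwell time and bandwidth) and
# grid-search a composite objective subject to a resource constraint.
# Objective and constraint are invented for illustration only.

def best_config(dwells, bandwidths, budget):
    """Return the feasible (dwell, bandwidth) pair maximizing a
    notional detection utility, and its score."""
    best, best_score = None, float("-inf")
    for d in dwells:
        for b in bandwidths:
            if d * b > budget:                # resource constraint
                continue
            score = d ** 0.5 + b ** 0.5       # notional utility
            if score > best_score:
                best, best_score = (d, b), score
    return best, best_score
```

Sweeping the budget in an outer loop yields exactly the kind of performance-versus-resource bound the paper computes for dynamic sensor configurations, albeit with a trivial model here.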

ADS-B Airborne Measurements in Frankfurt

Published in:
21st AIAA/IEEE Digital Avionics Systems Conf., 27-31 October 2002, pp. 3.A.3-1 - 3.A.3-11.

Summary

Automatic Dependent Surveillance-Broadcast (ADS-B) was the subject of airborne testing in Frankfurt, Germany in May 2000. ADS-B is a system in which latitude-longitude information is broadcast regularly by aircraft, so that receivers on the ground and in other aircraft can determine the presence and accurate locations of the transmitting aircraft. In addition to the latitude and longitude, ADS-B transmissions include altitude, velocity, aircraft address, and a number of other items of optional information. The tests in Germany were aimed at assessing the performance of Mode S Extended Squitter, which is one of several possible implementations of ADS-B. Extended Squitter uses a conventional Mode S signal format, specifically the 112-bit reply format at 1090 MHz, currently being used operationally for air-to-ground communications and air-to-air coordination in TCAS (Traffic Alert and Collision Avoidance System).
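The 112-bit extended squitter layout (5-bit downlink format, 3-bit capability, 24-bit ICAO address, 56-bit message field, 24-bit parity) can be unpacked from a 28-hex-character frame. The sketch below slices fields only; real decoding (CPR position recovery, CRC verification) is omitted, and the function name is invented. The example frame is a commonly used DF17 sample message.

```python
# Field-slicing sketch of a 112-bit Mode S extended squitter frame.
# Layout: DF (5 bits) | CA (3) | ICAO address (24) | ME (56) | PI (24).

def unpack_extended_squitter(hex_frame):
    bits = bin(int(hex_frame, 16))[2:].zfill(112)
    return {
        "df": int(bits[0:5], 2),          # downlink format (17 for ADS-B)
        "ca": int(bits[5:8], 2),          # transponder capability
        "icao": hex(int(bits[8:32], 2)),  # 24-bit aircraft address
        "me": bits[32:88],                # 56-bit ADS-B message field
        "pi": int(bits[88:112], 2),       # parity / interrogator field
    }

fields = unpack_extended_squitter("8D4840D6202CC371C32CE0576098")
```

Dispatch on the type code in the first five bits of the ME field would then select between identification, position, and velocity decoding.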

Validation techniques for ADS-B surveillance data

Published in:
21st DASC: Proc. of the Digital Avionics Systems Conf., Vol. 1, 27-31 October 2002, pp. 3.E.2-1 - 3.E.2-9.

Summary

Surveillance information forms the basis for providing traffic separation services by Air Traffic Control. The consequences of failures in the integrity and availability of surveillance data have been highlighted in near misses and, more tragically, by midair collisions. Recognizing the importance and criticality of surveillance information, the U.S. Federal Aviation Administration (FAA), in common with most other Civil Aviation Authorities (CAAs) worldwide, has implemented a surveillance architecture that emphasizes the independence of surveillance sources and the availability of crosschecks on all flight critical data. Automatic Dependent Surveillance Broadcast (ADS-B) changes this approach by combining the navigation and surveillance information into a single system element. ADS-B is a system within which individual aircraft distribute position estimates from onboard navigation equipment via a common communications channel. Any ADS-B receiver may then assemble a complete surveillance picture of nearby aircraft by listening to the common channel and combining the received surveillance reports with an onboard estimate of ownship position. This approach makes use of the increasing sophistication and affordability of navigation equipment (e.g. GPS-based avionics) to improve the accuracy and update rate of surveillance information. However, collapsing the surveillance and navigation systems into a common element increases the vulnerability of the system to erroneous information, due to both intentional and unintentional causes.
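One simple form such a crosscheck could take is flagging an ADS-B report whose position disagrees with an independent surveillance source by more than a tolerance. The sketch below is illustrative only: the flat-plane distance, the planar coordinates, and the function name are all simplifying assumptions, not a validation technique from the paper.

```python
import math

# Illustrative crosscheck: compare an ADS-B reported position against
# an independent measurement (e.g., a radar track) of the same target.
# Positions are planar coordinates in nautical miles (a flat-plane
# simplification of real geodetic distance).

def adsb_consistent(adsb_xy, radar_xy, tolerance_nm):
    """True if the two position estimates agree within tolerance."""
    dx = adsb_xy[0] - radar_xy[0]
    dy = adsb_xy[1] - radar_xy[1]
    return math.hypot(dx, dy) <= tolerance_nm
```

A track flagged inconsistent could be excluded from separation services until independent surveillance confirms it, preserving the independence crosscheck the legacy architecture provides.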

Analysis and comparison of separation measurement errors in single sensor and multiple radar mosaic display terminal environments

Published in:
MIT Lincoln Laboratory Report ATC-306

Summary

This paper presents an analysis to estimate and characterize the errors in the measured separation distance between aircraft that are displayed on a radar screen to a controller in a single sensor terminal environment compared to a multiple radar mosaic terminal environment. The error in measured or displayed separation is the difference between the true separation or distance between aircraft in the air and the separation displayed to a controller on a radar screen. In order to eliminate as many variables as possible and to concentrate specifically on the differences between displayed separation errors in the two environments, only full operation Mode S secondary beacon surveillance characteristics are considered in this analysis. A summary of the Mode S secondary radar error sources and characteristics used to model the resultant errors in measured separation between aircraft in single and multi-radar terminal environments is presented. The analysis of average separation errors shows that the performance of radars in providing separation services degrades with range. The analysis also shows that when using independent radars in a mosaic display, separation errors will increase, on average, compared to the performance when providing separation with a single radar. The data presented in the section on average separation errors are summarized by plotting the standard deviation of the separation error as a function of range for the single radar case and for the independent mosaic display case. The sections on typical and specific errors in separation measurements illustrate that the separation measurement errors are highly dependent on the geometry of the aircraft and radars. That applying average results to specific geometries can lead to counterintuitive results is illustrated by an example case presented in the analysis.
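A back-of-the-envelope model hints at why a mosaic of independent radars inflates separation error: independent per-radar position errors add in quadrature, whereas with a single radar a common error component (e.g., a shared azimuth bias) affects both aircraft alike and cancels in the measured separation. The sigma values and the decomposition into correlated and uncorrelated parts below are notional assumptions, not the report's error model.

```python
import math

# Notional error-propagation sketch. All sigmas are 1-sigma position
# errors in arbitrary distance units.

def mosaic_separation_sigma(sigma1, sigma2):
    """Std dev of separation error when the two aircraft are tracked
    by different, independent radars: errors add in quadrature."""
    return math.sqrt(sigma1 ** 2 + sigma2 ** 2)

def single_radar_separation_sigma(sigma_uncorrelated):
    """Std dev of separation error with one radar: the correlated
    error component cancels between the two aircraft, leaving only
    each aircraft's uncorrelated component, added in quadrature."""
    return math.sqrt(2.0) * sigma_uncorrelated
```

Because the uncorrelated component is typically smaller than the total per-radar error, the single-radar separation error is smaller, consistent with the report's conclusion that independent mosaic displays increase average separation error.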

The effect of identifying vulnerabilities and patching software on the utility of network intrusion detection

Published in:
Proc. 5th Int. Symp. on Recent Advances in Intrusion Detection, RAID 2002, 16-18 October 2002, pp. 307-326.

Summary

Vulnerability scanning and installing software patches for known vulnerabilities greatly affect the utility of network-based intrusion detection systems that use signatures to detect system compromises. A detailed timeline analysis of important remote-to-local vulnerabilities demonstrates that (1) vulnerabilities in widely-used server software are discovered infrequently (at most 6 times a year) and (2) software patches to prevent vulnerabilities from being exploited are available before or simultaneously with signatures. Signature-based intrusion detection systems will thus never detect successful system compromises on small secure sites when patches are installed as soon as they are available. Network intrusion detection systems may detect successful system compromises on large sites where it is impractical to eliminate all known vulnerabilities. On such sites, information from vulnerability scanning can be used to prioritize the large numbers of extraneous alerts caused by failed attacks and normal background traffic. On one class B network with roughly 10 web servers, this approach successfully filtered out 95% of all remote-to-local alerts.
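The prioritization step can be sketched as a join between IDS alerts and scan results: an alert for an exploit against a host that the scanner found not vulnerable is almost certainly a failed attack and can be deprioritized. The alert and scan-result schema below is invented for illustration; it is not the paper's data format.

```python
# Sketch of filtering network-IDS alerts using vulnerability-scan
# results, in the spirit of the paper. Schema is hypothetical:
# alerts are (host, vulnerability id) pairs, and vuln_db maps each
# host to the set of vulnerability ids the scanner found on it.

def filter_alerts(alerts, vuln_db):
    """Keep only alerts targeting a vulnerability the scanned host
    actually has; everything else is deprioritized as a failed attack."""
    return [a for a in alerts if a[1] in vuln_db.get(a[0], set())]

alerts = [("web1", "VULN-A"), ("web1", "VULN-B"), ("db1", "VULN-A")]
kept = filter_alerts(alerts, {"web1": {"VULN-A"}})
```

Here only the first alert survives: `web1` is vulnerable to `VULN-A`, while the other two alerts target vulnerabilities the scan did not find and so are filtered out, mirroring the 95% reduction reported above.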