Application-Specific Design Elements

Historically, the function that a camera performed was simple and straightforward. A lens collected light from a scene and refocused it to form an image on a piece of film. Somewhat later, electronic imagers were developed in which the film was replaced by an array of light-sensitive “pixels.” Each pixel detects the light, converting it to an electrical signal that indicates the brightness of the corresponding point in the scene. In one common type of imager, the charge-coupled device (CCD) invented in 1969 by Boyle and Smith at Bell Telephone Laboratories, each pixel converts the light to a packet of electrons. At the end of the exposure time, these electron packets are shifted to the periphery of the CCD in order to be sensed and converted to digital image data.

In many military and scientific applications, the camera performs very specialized and demanding tasks. It does not just take pictures; it enables the user to extract information that would otherwise be hard to obtain. The event of interest may be faintly illuminated and therefore hard to distinguish from its background or from noise in the electronics of the camera system, or the image may be moving quickly or jumping around on the imaging surface of the device. In some applications, the information sought includes more than just brightness, such as spectral signatures or scene depth. In others, the sheer amount of image data is the challenge; astronomers surveying large portions of the night sky need billions of pixels. Electronic imagers, unlike film, can be designed to enhance their performance in delivering the desired output.

The Advanced Imaging Technology program area at Lincoln Laboratory develops imagers that are customized to solve specific and difficult problems. Some of these imagers have an extremely large number of pixels, as many as 1.4 billion [1]. Others are exquisitely sensitive and able to capture images in near darkness. Many perform information-extraction functions that reduce the need for high-speed data transfers and off-chip computations. Sometimes this information extraction is not performed by conventional computing circuits; rather, it may involve manipulations that are convenient to do in the pixel, so that the “processing” is inherent in the way the light is detected.

We have developed some design elements or methods that can help with different specific applications and have applied them to build useful imagers. Here we discuss four novel enabling methods: the orthogonal-transfer CCD (OTCCD), an electronic shutter for back illuminated imagers, the Geiger-mode avalanche photodiode (GMAPD) circuit element, and the curved focal-plane CCD.

Orthogonal-Transfer CCD

The OTCCD is an imager that performs noiseless two-dimensional shifting of charge to improve performance. This CCD can shift the electron packets from pixel to pixel either horizontally (right or left) or vertically (up or down) during the exposure time. This capability is useful in applications in which the image translates across the array, for example, when vibration causes relative motion between the camera and the scene. The study of faint stars by ground-based telescopes is made difficult by atmospheric turbulence, which causes the images of stars to “dance around” and creates image blur. By using a bright star to measure the turbulence, astronomers can use the OTCCD to compensate for it: the electronic image is shifted to follow the turbulence-induced motion in the scene, resulting in a sharper and brighter image of the objects being studied.

A four-pixel portion of an OTCCD is shown in Figure 1. In this design, there are four independent CCD gates in each pixel (numbered 1–4 in Figure 1(a)); all type 1 gates are connected together in the imager, and likewise for types 2, 3, and 4. To transfer charge, consecutive gates are switched to a high positive voltage while all other gates are held at a low voltage, thereby forming a moving potential well for a packet of photoelectrons.

Figures 1(a) (left) and 1(b) (right) show a four-pixel portion of an OTCCD.

In Figure 1(a), gates of type 4 are held low, while a high voltage is applied consecutively to 1, 2, and 3, causing charge to move in a vertical direction. In Figure 1(b), gates of type 1 are held low, while a high voltage is applied consecutively to 2, 3, and 4, causing charge to move only in a horizontal direction. Therefore, by using various combinations of these two modes of operation, we can cause charge to move to any adjacent pixel (a diagonal move requires a combination of horizontal and vertical moves).
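The gate-clocking scheme above amounts to moving every charge packet one pixel at a time in any of the four directions, with diagonal moves built from a vertical move plus a horizontal one. A minimal software model of that behavior (an illustration of the charge-transfer logic, not the device electronics):

```python
import numpy as np

def shift_once(charge, direction):
    """Model one OTCCD transfer: every charge packet moves to the adjacent
    pixel in the given direction. Charge clocked past the array edge is
    lost, and the vacated row or column is left empty."""
    out = np.zeros_like(charge)
    if direction == "up":
        out[:-1, :] = charge[1:, :]
    elif direction == "down":
        out[1:, :] = charge[:-1, :]
    elif direction == "left":
        out[:, :-1] = charge[:, 1:]
    elif direction == "right":
        out[:, 1:] = charge[:, :-1]
    else:
        raise ValueError(f"unknown direction: {direction}")
    return out

def shift_by(charge, dy, dx):
    """Compose single-pixel transfers: a diagonal move is a sequence of
    vertical moves followed by horizontal ones, as described in the text."""
    for _ in range(abs(dy)):
        charge = shift_once(charge, "down" if dy > 0 else "up")
    for _ in range(abs(dx)):
        charge = shift_once(charge, "right" if dx > 0 else "left")
    return charge

# a single 100-electron packet at row 2, column 2
frame = np.zeros((5, 5))
frame[2, 2] = 100
moved = shift_by(frame, -1, 1)  # one pixel up, one pixel right
print(moved[1, 3])  # -> 100.0
```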

To demonstrate the improvement in imagery, an OTCCD imager was mounted on a spring and, while bouncing, imaged a stationary picture on the wall. A point source of light on the wall was used to determine the motion of the spring-mounted camera.

Figure 2(a) (left) shows an image taken with the OTCCD feature disabled, and Figure 2(b) (right) shows the image with the OTCCD enabled.

Figure 2(a) was taken with the OTCCD feature disabled, so the imager was operating like a conventional CCD. The blurring is caused by the motion of the image across the device during the image-integration period.  Figure 2(b) is an image taken by the same imager, again mounted on a spring and bouncing, but this time operating as an OTCCD imager with the charge moving in synchronization with the motion of the image across the imager. The improvement in the second image is obvious.

Figure 3. Two surface plots of imagery from a portion of a star cluster.

The OTCCD has also been used in ground-based astronomical imaging to remove most of the jitter caused by atmospheric turbulence. Figure 3 shows two surface plots of imagery from a portion of the star cluster M71. The data in the left-hand image of Figure 3 were taken with no compensating shifts of the OTCCD pixels (that is, the OTCCD was operating like a normal CCD), while the data in the right-hand image were taken with the device operating as an OTCCD, using a bright guide star to measure the jitter of the image. The star images show a 1.5× improvement in signal-to-noise ratio in the compensated image.
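The compensation loop can be sketched in software: measure the guide star's centroid for each short interval, then shift the accumulating image by the opposite amount, rounded to whole pixels (the OTCCD shifts only by whole pixels). This toy model with a single-pixel "star" illustrates the idea only; it is not the Laboratory's implementation:

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (row, col) of the guide-star image."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

def integrate(frames, compensate):
    """Co-add short frames; if compensating, shift each frame so the
    guide star stays at its position in the first frame."""
    acc = np.zeros_like(frames[0])
    y0, x0 = centroid(frames[0])
    for f in frames:
        if compensate:
            y, x = centroid(f)
            f = np.roll(f, (round(y0 - y), round(x0 - x)), axis=(0, 1))
        acc += f
    return acc

# toy jittering star: one bright pixel wandering by +/-2 pixels per frame
rng = np.random.default_rng(0)
frames = []
for _ in range(20):
    f = np.zeros((15, 15))
    f[7 + rng.integers(-2, 3), 7 + rng.integers(-2, 3)] = 1.0
    frames.append(f)

blurred = integrate(frames, compensate=False)
sharp = integrate(frames, compensate=True)
print(sharp.max(), blurred.max())  # the compensated peak is higher
```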

Recently, we have incorporated many relatively small OTCCDs in an array with on-chip controls in a device that is called an orthogonal-transfer-CCD array (OTA). This device is designed to be abutted on all four sides, and therefore is able to be assembled into very large focal plane arrays for ground-based astronomy.

Electronic Shutter

Standard commercial imagers introduce light through the front surface of the device. To produce a shutter function, the charge is moved behind an opaque metal line in the pixel to block further accumulation of charge. The back-illuminated CCD was developed to greatly improve sensitivity over commercial imagers by bringing light into the device through the back surface, unobstructed by structures on the front surface. As a consequence, however, the normal commercial method of producing a shutter function cannot be used in a back-illuminated CCD.

Figure 4. Cross section of the electronic shutter.

We have developed an electronic shutter specifically designed for back-illuminated devices. To illustrate how the electronic shutter is formed and how it operates, Figure 4 represents a cross section of the physical structures in the silicon device. One feature not found in a normal CCD is the p+ buried layer, implanted with a high-voltage implanter (about 1 MeV boron). This layer forms a potential barrier that separates the illuminated back surface (bottom) from the front surface (top) of the device.

Figure 5. Depletion region of the device storage well.

Figure 5 represents the depletion region of the device storage well (blue region) when the storage well gate (VIA) is moved to a very high potential (18 V). The depletion region has reached through the p+ buried layer, and so photoelectrons (represented by the – symbols) may move from the back surface of the device, where they are generated, to the storage well under VIA on the CCD front side. This is the “shutter open” condition.

Figure 6 represents the potential configuration of the device when the storage well gate is moved to an intermediate voltage (<12 V), which is high enough to maintain a storage well (and the charge in it) under the gate (blue region), but not high enough for the depletion region to reach through the deep implant.

Figure 6. Potential configuration of the device when the storage well gate VIA is moved to an intermediate voltage.

The shutter function also uses shutter drains running down the channel stops (shown in the physical cross section of Figure 4 as two n+ diodes embedded in the p+ channel-stop region and connected to a voltage VSD). While the storage-well voltage is reduced, the shutter-drain voltages are increased, allowing their depletion regions (red in Figure 6) to reach through the deep buried layer and provide a path for photoelectrons to be drained out of the pixel and discarded. Use of the shutter drain prevents photocharge generated during the shutter-closed condition from building up to the point of leaking over the potential barrier and contaminating previously collected charge in the CCD well.
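The open/closed distinction above comes down to whether a depletion region reaches through the p+ buried layer. As a rough illustration, the one-sided abrupt-junction estimate W = sqrt(2·eps_Si·V / (q·N)) shows how gate voltage controls reach-through; the doping density and buried-layer depth below are illustrative assumptions, not values from the actual device:

```python
import math

EPS_SI = 11.7 * 8.854e-12   # permittivity of silicon, F/m
Q = 1.602e-19               # electron charge, C

def depletion_width(v_gate, n_dope=1e21):
    """One-sided abrupt-junction depletion width (m) for an applied
    potential v_gate (V) and doping density n_dope (m^-3).
    n_dope = 1e21 m^-3 (1e15 cm^-3) is an illustrative value."""
    return math.sqrt(2.0 * EPS_SI * v_gate / (Q * n_dope))

BARRIER_DEPTH = 4.5e-6  # assumed depth of the p+ buried layer, m

# shutter open: an 18 V gate depletes past the buried layer
print(depletion_width(18.0) > BARRIER_DEPTH)   # True
# shutter closed: a 12 V gate does not reach through
print(depletion_width(12.0) > BARRIER_DEPTH)   # False
```

With these assumed numbers the depletion edge sits near 4.8 µm at 18 V but only about 3.9 µm at 12 V, reproducing the open/closed behavior described in the text.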

The electronic shutter structure has enabled a number of devices, such as a four-sample high-speed burst imager and a fifty-sample imaging device, by making it possible to shelter previously collected charge from current photocharge once the integration period has ended. Both of these devices operate with effective sampling rates well above 1 MHz. Once the event is over, the multiple stored images are read out with low noise at manageable data-transfer speeds.

Depth, the Third Dimension: Geiger-Mode Avalanche Photodiodes

An ordinary camera gives information about the brightness of each point in the scene, but no information about how far away each point is. This depth information is absent because the image is a flat, two-dimensional mapping. The goal of three-dimensional imaging is to measure depth explicitly. One 3D imaging technique is flash laser radar. In this technique, the scene is illuminated with a very short (<0.5 ns) flash of light and, as in a conventional camera, imaged with a lens. Instead of measuring the amount of light, however, each imager pixel measures the time of arrival of the light, which indicates the round-trip time to the corresponding point in the scene and therefore its depth. It is challenging to build compact, high-performance LADAR (LAser Detection And Ranging) systems because most of the light scatters off the scene in random directions, and only a minuscule fraction returns to the collection lens of the camera. Most previous LADAR systems were not able to produce an image from a single flash of light, but used various techniques requiring much longer exposures.
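The time-to-depth conversion itself is simple: depth is half the round-trip time multiplied by the speed of light, so subnanosecond timing corresponds to centimeter-scale depth resolution. A sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def depth_from_tof(round_trip_seconds):
    """One-way distance corresponding to a measured round-trip time."""
    return C * round_trip_seconds / 2.0

print(depth_from_tof(100e-9))   # a 100 ns round trip -> ~15 m
print(depth_from_tof(0.5e-9))   # 0.5 ns of timing -> ~7.5 cm depth resolution
```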

Lincoln Laboratory has developed imagers specifically for LADAR, based on Geiger-mode avalanche photodiodes (GMAPDs), which are single-photon-sensitive detectors. An array of GMAPDs is bonded to high-speed digital timing circuits to make the imager. Each pixel can detect a single photon and measure its time of arrival with a precision of a fraction of a nanosecond. The pixel extracts and digitizes the relevant piece of information, relieving the rest of the system of the burdens associated with sensing and processing extremely weak pulses of light. This device is the first all-solid-state, single-photon-sensitive area-array imager capable of subnanosecond resolution of the time of return from a single flash of light.

A GMAPD acts essentially as a digital device; a single photon can cause the diode to discharge, in a very short time (tens of picoseconds), to a voltage just below its breakdown voltage. This voltage step is compatible with the signal levels of modern complementary metal-oxide semiconductor (CMOS) digital logic. We developed methods to fabricate arrays of GMAPDs and to integrate the diode array with CMOS logic, making one connection to a GMAPD per pixel. The CMOS logic was then designed with control and timing circuitry that is independent for each pixel.
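With one digitized timestamp per pixel, assembling the 3D image is a per-pixel computation: convert each arrival time to range and project through a camera model. A minimal sketch using a pinhole model; the focal length and principal point are illustrative assumptions, not parameters of the actual system:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def point_cloud(arrival_times, focal_px=1000.0, cx=16.0, cy=16.0):
    """Turn a per-pixel photon-arrival-time array (seconds; NaN where no
    photon was detected) into an (N, 3) array of 3D points using a simple
    pinhole model. focal_px, cx, cy are illustrative camera parameters."""
    rows, cols = np.indices(arrival_times.shape)
    z = C * arrival_times / 2.0        # range from round-trip time
    x = (cols - cx) * z / focal_px     # small-angle pinhole projection
    y = (rows - cy) * z / focal_px
    pts = np.stack([x, y, z], axis=-1)
    return pts[~np.isnan(arrival_times)]  # keep only pixels that fired

times = np.full((32, 32), np.nan)
times[16, 16] = 100e-9                 # one detected photon, 100 ns round trip
pts = point_cloud(times)
print(pts.shape)                       # (1, 3): one on-axis point, ~15 m away
```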

Figure 7. Flash LADAR image of a van.

Using this type of array, we have built a number of very compact and sensitive LADAR systems; compactness is possible because the laser source needs only enough power to return a few photons per pixel. The 3D image in Figure 7 is of a van. The depth information is color coded: red points are closest to the LADAR system and blue points are the most distant. Because every point in the image is known in three dimensions, the scene can be viewed from different directions on a 2D monitor. The LADAR system was directed at the front of the van to collect the image data, and this is the view shown in Figure 7.

Curved Focal-Surface Imager

Many optical systems (the eye, for example) naturally produce a curved, rather than planar, focal surface. Optically correcting this curved surface to a planar one often compromises other attributes of the system: the optics become larger and heavier, the field of view shrinks, reflection losses grow, and so on. During the era when film was used extensively for image capture, the film could be curved to conform to some non-planar surfaces, so these trade-offs did not have to be made.

Figure 8(a) shows the modulation-transfer function (MTF) for a small lens suitable for a micro-air vehicle that must be very lightweight. If a spherical focal surface is used instead of the conventional planar surface, the MTF is improved dramatically.

Figure 8. (a) The MTF for a small lens suitable for a micro-air vehicle; (b) a thin silicon membrane; (c) a silicon membrane cut into petals.
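The MTF penalty of forcing a flat focal plane can be estimated geometrically: at field height h, a spherical focal surface of radius R sits a sag s = h²/(2R) away from a flat sensor, and that sag acts as defocus, blurring the spot by roughly s divided by the f-number. A sketch with illustrative numbers, not those of the actual lens:

```python
def sag(h, radius):
    """Paraxial sag (m) of a spherical focal surface of the given radius
    at field height h; valid for h much smaller than the radius."""
    return h**2 / (2.0 * radius)

def defocus_blur(h, radius, f_number):
    """Approximate geometric blur-spot diameter on a flat sensor when the
    lens's natural focal surface is a sphere of the given radius: the sag
    acts as defocus spread over the cone angle set by the f-number."""
    return sag(h, radius) / f_number

# illustrative numbers: 10 mm focal-surface radius, f/2 lens
print(sag(3e-3, 10e-3))                # ~0.45 mm sag at a 3 mm field height
print(defocus_blur(3e-3, 10e-3, 2.0))  # ~0.23 mm blur, many pixels wide
```

A curved sensor matched to the focal surface removes this defocus term entirely, which is why the MTF in Figure 8(a) improves so dramatically.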

Conventional solid-state imagers are made with a planar surface. However, we have extensive experience in producing back-illuminated devices, in which the silicon is thinned to several tens of microns. We have developed methods to apply this thinning to entire wafers (currently 150 mm in diameter) and to handle the thinned membranes. Thin silicon membranes are flexible and robust, as demonstrated in Figure 8(b). Figure 8(c) shows a silicon membrane cut into petals and formed into a spherical surface over a mandrel. The model shown subtends a solid angle of one steradian. We have designed and built a novel CCD to fit this petal format; in this device, charge is clocked radially out along each petal into an output register located at the petal's outer edge.

The non-planar focal surface technology has been applied to special devices we have designed and fabricated for a large optical system.

Information, Not Just Pictures

Electronic imaging, then, is not just photography. It involves sensing the relevant property of the light (intensity, time of arrival, wavelength) and in many cases manipulating the image while it is still on the imager in order to facilitate information extraction. Lincoln Laboratory has developed a unique collection of advanced imagers supporting capabilities not available in commercial cameras and enabling high-performance systems. Many of these imagers are enabled by an application-specific design element such as those presented here.

Reference
  1. B.E. Burke, J.L. Tonry, M.J. Cooper, P.E. Doherty, A.H. Loomis, D.J. Young, T.A. Lind, P. Onaka, D.J. Landers, P.J. Daniels, and J.L. Daneu, "Orthogonal Transfer Arrays for the Pan-STARRS Gigapixel Camera," Sensors, Cameras, and Systems for Scientific/Industrial Applications VIII, M.M. Blouke, ed., Proceedings of SPIE, vol. 6501, 2007, pp. 650107-1 to 650107-9.