High-Sensitivity Imaging

Several imaging-device parameters can be optimized for high sensitivity. These include quantum efficiency (including fill-factor), charge-transfer efficiency (moving the charge from the pixel to the output port without loss or added spurious charge), and read noise (the noise added when the charge is converted to a signal and sent off chip). The overall goal is to convert most or all of the photons that impinge on the device to photoelectrons and then to read out these photoelectrons without losing any and without adding significant read noise.

This section specifically assumes a charge-coupled-device (CCD) type of imaging device, but many of the considerations apply to any silicon imager.

Quantum Efficiency (QE) 

We define QE to be the external quantum efficiency, that is,

QE = (number of photoelectrons created in a pixel and read out off-chip) / (number of photons incident on the pixel area)

This is a systems-based definition that takes account of all loss factors, such as

  • Reflection loss at the air-silicon interface
  • Absorption loss of photons in any dead layer or absorbing layer on the silicon surface
  • Fill-factor, in which part of the pixel area may be obscured by opaque material such as metal and therefore is not sensitive to the incident photons 
  • Any internal loss of photoelectrons from the instant they are created to the time they are read out

Measurement of the QE is straightforward, but requires care, since it is an absolute measurement. The QE is wavelength dependent, as discussed below.
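
As a rough illustration of such a measurement, the sketch below computes QE from a hypothetical flat-field exposure, assuming that the irradiance at the focal plane has been measured with a calibrated photodiode and that the camera gain (electrons per digital number) is known. All numerical values are invented for illustration.

```python
# Illustrative QE calculation from a flat-field measurement. All numerical values
# (irradiance, pixel pitch, gain, signal levels) are hypothetical, not measured data.

H = 6.626e-34          # Planck constant, J*s
C = 2.998e8            # speed of light, m/s

wavelength = 550e-9    # test wavelength, m
irradiance = 7.5e-9    # optical power density at the focal plane, W/cm^2 (calibrated photodiode)
pixel_pitch = 15e-4    # pixel pitch, cm (15-um pixel)
exposure = 1.0         # exposure time, s

signal_dn = 21500.0    # mean pixel signal in the illuminated frame, digital numbers
dark_dn = 500.0        # mean pixel signal in a dark frame, digital numbers
gain_e_per_dn = 2.0    # camera gain, electrons per digital number (e.g., from a photon-transfer curve)

photon_energy = H * C / wavelength                                  # J per photon
photons_per_pixel = irradiance * pixel_pitch**2 * exposure / photon_energy
electrons_per_pixel = (signal_dn - dark_dn) * gain_e_per_dn

qe = electrons_per_pixel / photons_per_pixel
print(f"photons incident per pixel: {photons_per_pixel:.3e}")
print(f"photoelectrons read out:    {electrons_per_pixel:.3e}")
print(f"external QE:                {qe:.2f}")
```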

Figure 1. Comparison of QE for selected imagers.

Compared to other common detectors, the QE of present-day scientific-quality CCDs can be nearly ideal over much of the visible wavelength range (400 nm to 700 nm), as shown in Figure 1.

How did CCDs reach this advanced state of performance? Early CCDs were built to receive light on the circuit side of the device, so the light first had to pass through the various layers on that surface. Some of these layers were partially absorbing, some reflected considerable light away from the collecting volume, and part of the pixel area could be covered with opaque material such as metal.

Back-Illuminated Imagers 

To avoid light loss due to films and structures on the front of the device, Lincoln Laboratory and other researchers have developed back-illuminated (BI) device technology. The concept is that the device is first bonded circuit side down to a support wafer, then thinned to a thickness that will allow the fields from the CCD wells to penetrate to the back (non-circuit side) of the device.

Figure 2. Cross section of back-illuminated device.

The exposed silicon back surface must then be passivated (the blue top surface in Figure 2 represents the passivation layer) to prevent surface states or other external fields from penetrating into the silicon and disrupting the internal device operation. This passivation layer must be thin compared to the absorption length at the wavelengths to be detected, since many of the photons absorbed in this layer are lost. Such a “dead layer” is usually associated with a particular passivation technique. To guarantee high QE, the dead-layer thickness should be much less than the wavelength-dependent absorption length of the incoming photons.

Back-illuminated devices present a planar surface to the incident radiation, unlike front-illuminated (FI) devices that have many features for light to pass through or between. Because of this planar surface, BI devices may also be effectively coated with an antireflection layer to minimize reflection losses at the air-silicon interface.
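
The benefit of the antireflection layer can be estimated from the normal-incidence Fresnel reflectance of the air-silicon interface. The sketch below uses a representative silicon index of about 4 in the visible and an idealized single quarter-wave coating; it is illustrative only.

```python
# Normal-incidence Fresnel reflectance of the air-silicon interface, with and without
# an ideal single-layer quarter-wave antireflection (AR) coating. The refractive
# indices are representative values (silicon n ~ 4 in the visible); absorption in the
# coating and the dispersion of silicon are ignored.

import math

n_air = 1.0
n_si = 4.0

# Bare air-silicon interface: roughly a third of the photons are reflected away.
r_bare = ((n_air - n_si) / (n_air + n_si)) ** 2

# Ideal quarter-wave AR layer: index sqrt(n_air * n_si), thickness lambda / (4 * n).
wavelength = 550e-9
n_ar = math.sqrt(n_air * n_si)
t_ar = wavelength / (4.0 * n_ar)
# Quarter-wave-layer reflectance at the design wavelength:
r_coated = ((n_ar**2 - n_air * n_si) / (n_ar**2 + n_air * n_si)) ** 2

print(f"bare silicon reflectance:  {r_bare:.2f}")
print(f"AR layer: n = {n_ar:.2f}, thickness = {t_ar * 1e9:.0f} nm, "
      f"reflectance at {wavelength * 1e9:.0f} nm = {r_coated:.2e}")
```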

The process of fabricating a BI device is more difficult and expensive than for an FI one, so the use of these devices is limited to applications that demand the highest performance. Most commercial devices are FI, with lower performance, because of cost considerations.

Wavelength Dependence of QE

Figure 3 shows how the absorption length of silicon (the distance over which the light intensity falls by a factor of 1/e) depends on the wavelength of the incident radiation.

Figure 3. Wavelength dependence of absorption length in silicon.

The silicon substrate of an imager must absorb a photon in order to create a photoelectron, so as the absorption length increases, so must the thickness of the silicon for high QE. If the device is thinner than the absorption length, some of the radiation will simply pass through the device undetected. Figure 3 shows that for wavelengths approaching 1 µm, and also for X-ray radiation with energy greater than about 5 keV, the silicon must be many tens of micrometers thick. However, as the thickness of the silicon is increased, its resistivity must be increased (that is, its doping decreased) in order for the fields of the CCD wells to penetrate to the back surface. If there is a field-free region near the back surface, photoelectrons will tend to move laterally before being collected, producing a blurred, low-resolution image.
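
A simple Beer-Lambert estimate makes the thickness requirement concrete. The sketch below uses representative room-temperature absorption lengths and ignores reflection and dead-layer losses.

```python
# Beer-Lambert estimate of the fraction of incident photons absorbed in silicon of a
# given thickness. Absorption lengths are representative room-temperature values;
# reflection and dead-layer losses are ignored.

import math

absorption_length_um = {550: 1.5, 700: 5.0, 900: 30.0, 1000: 150.0}  # wavelength (nm) -> approx. length (um)

for thickness_um in (15.0, 45.0, 75.0):
    absorbed = ", ".join(
        f"{wl} nm: {1.0 - math.exp(-thickness_um / l_abs):.2f}"
        for wl, l_abs in absorption_length_um.items()
    )
    print(f"{thickness_um:4.0f} um of silicon absorbs  {absorbed}")
```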

Deep-Depletion Imagers

Processing the high-resistivity (~5000 Ω-cm) silicon needed for deep-depletion imagers is challenging. To make high-resistivity silicon, most impurities must be removed from the material. However, some impurities are useful in limiting the spread of dislocations as a silicon device is processed at high temperatures and undergoes thermal stresses; dislocations can give rise to increased dark current and decreased charge-transfer efficiency. In processing high-resistivity silicon devices, careful attention must therefore be paid to minimizing thermal and mechanical stresses during high-temperature steps, for example, by ramping temperatures up and down very slowly. Because of the care and time involved in processing high-resistivity silicon, most commercial devices are fabricated from much lower-resistivity material.

We have solved the problems of producing high-quality CCD devices on high-resistivity silicon with high yield and, in fact, use this material now for almost all the CCDs we fabricate.

The standard thickness for most Lincoln Laboratory–produced BI CCDs is about 45 µm. We have experimented with making BI devices thicker in order to improve the QE at wavelengths greater than 900 nm. In this case, however, we use special techniques to ensure that the device is not only fully depleted but also has sufficiently strong drift fields to maintain a charge point-spread function that is small compared to the pixel size.

To supply the greater field, we have used a design feature that enables biasing of the substrate independently of the device circuitry, as shown in Figure 4 [1].

Figure 4. Design feature enables biasing of the substrate independently of device circuitry.

We have fabricated and tested devices 75 µm thick and shown that, by using substrate bias, we are able to fully deplete the device. Supplying this back-to-front bias requires extra diodes and guard rings, as shown in Figure 4. This design change enables higher QE while also giving us the ability to maintain the charge point-spread function at a desired level.
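
A step-junction estimate illustrates why high resistivity combined with a modest substrate bias can fully deplete a 75-µm-thick device. The doping, mobility, and bias values below are assumptions chosen for illustration, not a model of our devices.

```python
# Step-junction estimate of depletion depth versus substrate resistivity and bias:
# d = sqrt(2 * eps_Si * V / (q * N)). The doping N is inferred from resistivity
# assuming p-type silicon with a hole mobility of ~480 cm^2/V-s. The values are
# illustrative and do not describe any particular device.

import math

Q = 1.602e-19                # electron charge, C
EPS_SI = 11.7 * 8.854e-14    # permittivity of silicon, F/cm
MU_P = 480.0                 # approximate hole mobility, cm^2/V-s

def depletion_depth_um(rho_ohm_cm, bias_v):
    n_a = 1.0 / (Q * MU_P * rho_ohm_cm)          # acceptor concentration, cm^-3
    d_cm = math.sqrt(2.0 * EPS_SI * bias_v / (Q * n_a))
    return d_cm * 1e4

for rho in (50.0, 5000.0):                       # commercial-grade vs. high-resistivity silicon
    for bias in (10.0, 30.0):
        print(f"rho = {rho:6.0f} ohm-cm, bias = {bias:4.0f} V "
              f"-> depletion depth ~ {depletion_depth_um(rho, bias):6.1f} um")
```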

Extreme Ultraviolet and Low-Energy Back-Illuminated Passivation

In the ultraviolet and extreme ultraviolet, where the absorption length drops to much less than 0.1 µm, a different problem arises. Even a dead layer of moderate thickness on the back surface of the device can cause a very significant loss of photoelectrons before they can be collected in a CCD well. The back-surface passivation process for these wavelengths must therefore result in a very thin passivation layer and dead layer.

Using molecular-beam epitaxy (MBE), we have developed a novel process for producing very thin (~5 nm) passivation layers. This process results in excellent QE over the range from near UV to soft X-ray. This passivation layer also has proven to be very stable with respect to UV exposure and radiation. Lincoln Laboratory has a 6" MBE system in its fabrication facility (this system is highly unusual for a silicon facility).
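
A worst-case estimate, which treats the passivation layer as completely dead, shows how strongly the deep-UV response depends on dead-layer thickness. The absorption lengths below are representative values, not measured data.

```python
# Worst-case estimate of deep-UV photon loss in a back-surface dead layer, treating
# the layer as completely absorbing with silicon's absorption length. This is a
# pessimistic bound; absorption lengths are representative values.

import math

absorption_length_nm = {200: 5.0, 250: 6.0, 400: 100.0}  # wavelength (nm) -> approx. length (nm)

for dead_layer_nm in (5.0, 20.0, 50.0):
    surviving = ", ".join(
        f"{wl} nm: {math.exp(-dead_layer_nm / l_abs):.2f}"
        for wl, l_abs in absorption_length_nm.items()
    )
    print(f"{dead_layer_nm:4.0f} nm dead layer transmits  {surviving}")
```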

High Charge-Transfer Efficiency (CTE)

Once photoelectrons are collected, it is important to minimize the loss of charge as it is transferred to the output amplifier. Any loss or addition of charge will be a source of noise in the device. Modern silicon processing techniques and methods make it possible to routinely achieve a charge-transfer efficiency (CTE) of greater than 0.999995 per transfer, even in devices fabricated on high-resistivity silicon. Radiation exposure, such as proton radiation in Earth orbit, can make CTE substantially worse.
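
To put such numbers in perspective, the sketch below computes the fraction of a charge packet that reaches the output from the far corner of a hypothetical 2048 × 2048 imager for several values of CTE.

```python
# Fraction of a charge packet that survives readout from the far corner of a
# hypothetical 2048 x 2048 imager (about 2048 parallel + 2048 serial transfers).

n_transfers = 2048 + 2048

for cte in (0.999995, 0.99999, 0.9999):
    remaining = cte ** n_transfers
    print(f"CTE = {cte:.6f} -> {remaining:.3f} of the charge reaches the output")
```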

Low Read Noise  

A photon detector converts the energy of the incoming photons to a charge packet, and ultimately this charge must be converted to voltage or current and sent off chip. Figure 5 shows schematically how the charge-to-voltage conversion takes place in a CCD; the charge is passed onto a very low capacitance node (shown as an n+ diode in this picture) and a voltage is developed. This voltage is sensed with a low-noise field-effect transistor (FET) whose output is then sent off chip.

Figure 5. Schematic picture of CCD output amplifier.

The amplifier that performs this conversion adds “read noise” to the detection process. We have used a number of techniques to design and build very-low-noise readout amplifiers (the Sense FET in Figure 5), including designing for very low input capacitance (which gives a high voltage per unit charge) and for low 1/f noise (taking care to avoid trapping noise as the electrons traverse the amplifier channel).
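
The value of a low-capacitance node follows directly from the conversion gain q/C, as the sketch below illustrates with hypothetical capacitance values.

```python
# Charge-to-voltage conversion at the output node: one electron on a node of
# capacitance C produces a voltage step of q/C. The capacitance values are
# illustrative only.

Q_E = 1.602e-19    # electron charge, C

for node_capacitance_fF in (50.0, 10.0, 2.0):
    uv_per_electron = Q_E / (node_capacitance_fF * 1e-15) * 1e6
    print(f"node capacitance {node_capacitance_fF:4.0f} fF "
          f"-> conversion gain ~ {uv_per_electron:5.1f} uV per electron")
```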

Figure 6 shows a typical layout design of an output node in a plane view looking down on the surface of the CCD circuit.

Figure 6. Layout design of output node.

Figure 7 is a corresponding circuit microphotograph of the above design taken from an actual finished CCD device.

Figure 7. Microphotograph of output node.

The chart shown in Figure 8 is a plot of read noise for several types of output amplifiers as a function of output data rate. The noise increases for higher data rates because the filters in the output circuit must be increased in bandwidth, therefore letting through more thermal noise generated by the amplifier. Low-noise amplifiers show a slow rise in read noise as the bandwidth of these filters is increased.

Figure 8. Plot of read noise for several types of output amplifiers as a function of output data rate.

The black and white dots are data taken for our standard metal-oxide-semiconductor FET (MOSFET) low-noise amplifiers, while the red dots are data from a more recent junction FET (JFET) design. Both types of amplifiers can be constructed in our standard CCD process. The JFET amplifier has substantially lower Johnson and flicker noise and lower capacitance than the MOSFET design, and therefore performs with lower read noise than the conventional output.
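
The shape of the curves in Figure 8 can be captured with a simple toy model in which a fixed flicker-noise floor adds in quadrature with a thermal term proportional to the square root of the pixel rate. The coefficients below are invented for illustration and are not fits to the measured data.

```python
# Toy model of read noise versus pixel rate: a rate-independent flicker-noise floor
# added in quadrature with a thermal term that grows as the square root of the pixel
# rate (i.e., of the filter bandwidth). The coefficients are invented for illustration
# and do not describe any particular amplifier in Figure 8.

import math

def read_noise_electrons(pixel_rate_hz, floor_e=2.0, white_e_per_sqrt_mhz=1.5):
    thermal = white_e_per_sqrt_mhz * math.sqrt(pixel_rate_hz / 1e6)
    return math.hypot(floor_e, thermal)

for rate in (1e5, 1e6, 1e7, 5e7):
    print(f"{rate / 1e6:6.1f} Mpixel/s -> ~{read_noise_electrons(rate):4.1f} e- rms")
```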

 

Reference
  1. S.E. Holland et al., "A 200 x 200 CCD Image Sensor Fabricated On High-Resistivity Silicon," International Electron Devices Meeting (IEDM) Technical Digest, 1996, pp. 911–914.