Charge-coupled devices. Design and basic operating principles.

Sergey Ivanovich Neizvestny
Oleg Yuryevich Nikulin

CHARGE-COUPLED DEVICES —
THE BASIS OF MODERN TELEVISION TECHNOLOGY.
MAIN CHARACTERISTICS OF CCD.

Continued; the beginning appeared in issue 4/99.

The previous article provided a brief analysis of existing semiconductor light receivers and a detailed description of the structure and operating principle of charge-coupled devices.

This article will discuss the physical characteristics of CCD matrices and their impact on the general properties of television cameras.

Number of elements of the CCD matrix.

Perhaps the most basic characteristic of CCD matrices is the number of elements. The overwhelming majority of models have a standard element count oriented towards the television standard: 512×576 pixels (these matrices are usually used in simple and cheap video surveillance systems) or 768×576 pixels (such matrices allow the maximum resolution for a standard television signal to be obtained).

The largest CCD manufactured and described in the literature is a single-crystal device from Ford Aerospace Corporation measuring 4096×4096 pixels with a pixel side of 7.5 microns.

During production, the yield of high-quality large-size devices is very low, so a different approach is used when creating CCD video cameras for shooting large-format images. Many companies manufacture CCDs with leads located on three, two or one side (buttable CCDs), and mosaic CCDs are assembled from such devices. For example, Loral Fairchild manufactures a very interesting and promising 2048×4096 device with 15 µm pixels whose leads are located on one narrow side. The achievements of the Russian industry are somewhat more modest: NPP Silar (St. Petersburg) produces a 1024×1024 CCD with 16 µm pixels, a bulk charge-transfer channel, a virtual phase, and leads on one side of the device. This architecture allows the devices to be butted to each other on three sides.

It is interesting to note that several specialized large-format light detectors based on CCD mosaics have been developed. For example, eight 2048×4096 Loral Fairchild CCDs are used to assemble an 8192×8192 mosaic with overall dimensions of 129×129 mm. The gaps between individual CCD crystals are less than 1 mm. In some applications, relatively large gaps (up to 1 cm) are not considered a serious problem, since the full image can be obtained by summing several slightly offset exposures in computer memory, thus filling the gaps. The image produced by an 8192×8192 mosaic contains 128 MB of information, which is equivalent to approximately a 100-volume encyclopedia with 500 pages in each volume. While these figures are impressive, they are still small compared to the size and resolution of photographic emulsions, which can be produced in huge sheets. Even the coarsest 35 mm film contains up to 25 million resolvable grains (pixels) in a frame.
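The 128 MB figure quoted above is easy to reproduce with a quick calculation; the sketch below assumes 16-bit (2-byte) digitization per pixel, which the article does not state explicitly.

```python
# Rough check of the 128 MB figure for an 8192x8192 mosaic, assuming each
# pixel is digitized to 16 bits (2 bytes) -- an assumption, since the sample
# depth is not stated in the article.
width = height = 8192        # mosaic size in pixels
bytes_per_pixel = 2          # 16-bit samples (assumed)

total_bytes = width * height * bytes_per_pixel
print(total_bytes / 2**20, "MiB")   # -> 128.0 MiB
```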

Resolving Power of Television Cameras

One of the main parameters of a TV camera, resolution (or resolving power), directly depends on the number of elements of the CCD matrix. The camera resolution as a whole is also affected by the parameters of the electronic signal processing circuit and the optics parameters.

Resolution is defined as the maximum number of black and white stripes (i.e. the number of transitions from black to white or vice versa) that can be transmitted by the camera and distinguished by the recording system at the maximum detectable contrast.

If the camera's data sheet states that its resolution is N television lines, this means that the camera allows one to distinguish N/2 dark vertical lines on a light background, arranged within a square inscribed in the image field. In relation to a standard television test chart, this implies the following: when choosing the distance and focusing the image of the chart, one must ensure that the upper and lower edges of the chart image on the monitor coincide with the outer contours of the chart, marked by the tops of the black and white prisms. Then, after final focusing, the number is read at the point of the vertical wedge where the vertical lines first cease to be distinguishable. The last remark is very important, since on the test fields of the chart corresponding to 600 or more lines alternating stripes are often visible which are, in fact, moire, formed by the beating of the spatial frequencies of the chart lines against the grid of sensitive elements of the CCD matrix. This effect is especially pronounced in cameras with high-frequency spatial filters.

The unit of measurement of resolution in television systems is the TVL (TV line). The vertical resolution of all cameras is almost the same, because it is limited by the television standard of 625 scanning lines: no more than 625 elements can be transmitted along this coordinate. What is usually indicated in technical descriptions is therefore the difference in horizontal resolution.

In practice, in most cases, a resolution of 380-400 TV lines is quite sufficient for general surveillance tasks. However, for specialized television systems and tasks, such as television monitoring of a large area with one camera, viewing a large perimeter with a camera with variable angular magnification (zoom), surveillance at airports, railway stations, piers, supermarkets, identification and recognition systems for car numbers, facial identification systems, etc., a higher resolution is required (for this, cameras with a resolution of 570 or more TV lines are used).

The resolution of color cameras is somewhat worse than that of black-and-white cameras. This is because the pixel structure of CCD matrices used in color television differs from that of black-and-white matrices. Figuratively speaking, a pixel of a color matrix consists of a combination of three pixels, each of which registers light in either the red (R), green (G) or blue (B) part of the optical spectrum. Thus, three signals (an RGB signal) are taken from each element of a color CCD matrix. The effective resolution in this case should be three times worse than that of black-and-white matrices. In practice, however, the resolution of color matrices deteriorates less, since their pixel size is one and a half times smaller than that of a comparable black-and-white matrix, which results in a resolution loss of only 30-40%. The downside is a decrease in the sensitivity of color matrices, since the effective area registering an image element becomes significantly smaller. The typical resolution of color TV cameras is 300-350 TV lines.

In addition, the resolution of the camera is affected by the frequency band of the video signal produced by the camera. To transmit a 300 TVL signal, a frequency band of about 2.75 MHz is required (150 periods per 55 µs of the active TV scanning line). The relationship between the video signal bandwidth f and the resolution TVL is given by:

f = (TVL / 2) × f_h,

where TVL is the resolution in TV lines and f_h = 18.2 kHz is the effective horizontal scanning frequency (the reciprocal of the 55 µs active line time); for 300 TVL this gives 150 × 18.2 kHz ≈ 2.73 MHz.
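As a rough illustration, this relationship can be put into a few lines of Python; the function name and the 18.2 kHz default are taken from the formula above, everything else is illustrative.

```python
# Sketch of the bandwidth/resolution relation: f = (TVL / 2) * f_h,
# with f_h = 18.2 kHz (the reciprocal of the 55 microsecond active line time).
def required_bandwidth_mhz(tvl: float, f_h_khz: float = 18.2) -> float:
    """Video bandwidth in MHz needed to transmit `tvl` TV lines."""
    return (tvl / 2) * f_h_khz / 1000.0

print(required_bandwidth_mhz(300))  # ~2.73 MHz, matching the figure quoted above
print(required_bandwidth_mhz(570))  # ~5.2 MHz, needed for a high-resolution 570 TVL camera
```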

Nowadays, many semiconductor amplifiers with good frequency characteristics are available, so the bandwidth of camera amplifiers usually exceeds the necessary one significantly (by a factor of 1.5-2) so as not to affect the final resolution of the system in any way; the resolution is therefore limited by the topology (discreteness) of the light-receiving area of the CCD matrix. Sometimes the use of a good electronic amplifier is dressed up in attractive terms such as "resolution enhancement" or "edge enhancement". It is necessary to realize that such processing does not improve the resolution itself; only the clarity of the black-to-white transitions is improved, and even then not always.

However, there is one case where no tricks of modern electronics can raise the video signal bandwidth above 3.8 MHz: the composite color video signal. Since the color signal is transmitted on a carrier (in the PAL standard, at a frequency of about 4.4 MHz), the luminance signal is forcibly limited to a bandwidth of 3.8 MHz (strictly speaking, the standard assumes comb filters for separating the color and luminance signals, but real equipment simply uses low-pass filters). This corresponds to a resolution of about 420 TVL. Currently, some manufacturers declare the resolution of their color cameras to be 480 TVL or more, but they, as a rule, do not emphasize that this resolution is realized only if the signal is taken from the Y-C (S-VHS) or component (RGB) output. In this case the luminance and color signals are transmitted from the camera to the monitor over two (Y-C) or three (RGB) separate cables, and the monitor, as well as all intermediate equipment (switchers, multiplexers, video recorders), must also have Y-C (or RGB) inputs and outputs. Otherwise, a single intermediate element processing a composite video signal will limit the bandwidth to the aforementioned 3.8 MHz and negate the expense of the costly cameras.

Quantum efficiency and quantum yield of a CCD camera.

By quantum efficiency we mean the ratio of the number of registered charges to the number of photons that hit the light-sensitive area of the CCD crystal.

However, one should not confuse the concepts of quantum efficiency and quantum yield. Quantum yield is the ratio of the number of photoelectrons formed in a semiconductor or near its boundary as a result of the photoelectric effect to the number of photons incident on this semiconductor.

Quantum efficiency is the quantum yield of the light-registering part of the receiver, multiplied by the coefficient of conversion of the photoelectron charge into the registered useful signal. Since this coefficient is always less than one, the quantum efficiency is also less than the quantum yield. This difference is especially large for devices with a low-efficiency signal registration system.

In terms of quantum efficiency, CCDs have no equal. For comparison, out of every 100 photons entering the pupil of the eye, only one is perceived by the retina (quantum yield is 1%), the best photoemulsions have a quantum efficiency of 2-3%, vacuum tubes (for example, photomultipliers) — up to 20%, for CCDs this parameter can reach 95% with a typical value from 4% (low-quality CCDs, usually used in cheap video cameras of the «yellow» assembly) to 50% (a typical unselected video camera of Western assembly). In addition, the width of the wavelength range to which the eye reacts is much narrower than that of CCDs. The spectral range of photocathodes of traditional vacuum television cameras and photoemulsions is also limited. CCDs react to light with a wavelength from a few angstroms (gamma and X-rays) to 1100 nm (IR radiation). This huge range is much larger than the spectral range of any other detector known to date.


Fig. 1. An example of the quantum efficiency of a CCD matrix.

Sensitivity and spectral range

Closely related to the concepts of quantum efficiency and quantum yield is another important parameter of a television camera: sensitivity. While quantum efficiency and quantum yield are used mainly by developers and designers of new television systems, sensitivity is what commissioning engineers, maintenance services and designers of working installations at enterprises deal with. In essence, sensitivity and the quantum yield of a receiver are related by a linear function. While the quantum yield connects the number of photons incident on a light detector with the number of photoelectrons generated by those photons through the photoelectric effect, sensitivity describes the response of a light detector in electrical units (for example, in mA) to a certain value of the incident light flux (for example, in W or in lx/sec). A distinction is made between bolometric sensitivity (i.e. the total sensitivity over the entire spectral range of the receiver) and monochromatic sensitivity, measured, as a rule, with a radiation flux of 1 nm (10 angstroms) spectral width. When the sensitivity of a receiver is quoted at a given wavelength (for example, 450 nm), this means that the sensitivity is calculated for the flux in the range from 449.5 nm to 450.5 nm. This definition of sensitivity, measured in mA/W, is unambiguous and does not cause confusion.

However, for consumers of television equipment used in security systems, a different definition of sensitivity is more often used. Most often, sensitivity is understood as the minimum illumination on an object (scene illumination), at which it is possible to distinguish the transition from black to white, or the minimum illumination on the matrix (image illumination).

From a theoretical point of view, it would be more correct to specify the minimum illumination on the matrix, since in this case there is no need to specify the characteristics of the lens used, the distance to the object and its reflection coefficient (sometimes called the albedo). The albedo is usually defined at a specific wavelength, although there is also the notion of a bolometric albedo. It is very difficult to operate objectively with a definition of sensitivity based on the illumination of the object. This is especially true when designing television recognition systems for long distances: many matrices cannot register an image of a person's face at a distance of 500 meters, even if it is illuminated by very bright light.*

Note

* Tasks of this kind appear in the practice of security television, especially in places with an increased threat of terrorism, etc. Television systems of this kind were developed in 1998 in Japan and are being prepared for mass production.

But when selecting a camera, it is more convenient for the user to work with the illumination of the object, which he knows in advance. Therefore, the minimum illumination on the object is usually indicated, measured under standardized conditions: the reflection coefficient of the object is 0.75 and the aperture ratio of the lens is 1.4. The formula linking the illumination on the object and on the matrix is given below:

Iimage = Iscene × R / (π × F²),

where Iimage and Iscene are the illumination of the CCD matrix and of the object, respectively (Table 1);
R is the reflection coefficient of the object (Table 2);
π ≈ 3.14;
F is the aperture ratio (f-number) of the lens.

The values of Iimage and Iscene usually differ by more than 10 times.
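For convenience the conversion can be scripted; the sketch below simply evaluates the formula above with the standardized conditions already mentioned (R = 0.75, F = 1.4), and the names are illustrative.

```python
import math

# Evaluate Iimage = Iscene * R / (pi * F^2) for the standardized test
# conditions mentioned in the text (R = 0.75, F = 1.4).
def image_illumination(scene_lux: float, reflectance: float = 0.75,
                       f_number: float = 1.4) -> float:
    """Illumination on the CCD matrix (lux) for a given scene illumination."""
    return scene_lux * reflectance / (math.pi * f_number ** 2)

print(image_illumination(100_000))  # cloudless sunny day -> roughly 12,000 lux on the matrix
print(image_illumination(0.1))      # deep twilight       -> roughly 0.012 lux on the matrix
```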

Illumination is measured in lux. Lux is the illumination created by a point source of one international candle at a distance of one meter on a surface perpendicular to the rays of light.

Table 1. Approximate illumination of objects.

Outdoors (latitude of Moscow)  
Cloudless sunny day 100,000 lux
Sunny day with light clouds 70,000 lux
Cloudy day 20,000 lux
Early morning 500 lux
Twilight 0.1 — 4 lux
“White nights”* 0.01 – 0.1 lux
Clear night, full moon 0.02 lux
Night, moon in the clouds 0.007 lux
Dark cloudy night 0.00005 lux
Indoors  
Room without windows 100 – 200 lux
Well-lit room 200 – 1000 lux

* “White nights” — lighting conditions that satisfy civil twilight, i.e. when the sun sinks below the horizon without taking into account atmospheric refraction by no more than 6°. This is true for St. Petersburg. For Moscow, the conditions of the so-called “navigational white nights” are met, i.e. when the disk of the sun sinks below the horizon by no more than 12°.

Often, camera sensitivity is specified for an “acceptable signal,” which means a signal with a signal-to-noise ratio of 24 dB. This is an empirically determined noise floor at which an image can still be recorded on videotape and still hope to be seen on playback.

Another way to define an "acceptable" signal is the IRE scale (Institute of Radio Engineers). The full video signal (0.7 V) is taken as 100 IRE units. A signal of about 30 IRE is considered "acceptable". Some manufacturers, in particular BURLE, specify 25 IRE, others 50 IRE (a signal level of -6 dB). The choice of "acceptable level" is determined by the signal-to-noise ratio. Amplifying an electronic signal is not difficult; the problem is that the noise is amplified along with it. The most sensitive mass-produced CCD matrices today are Sony's Hyper-HAD matrices, which have a microlens on each photosensitive cell; they are used in most high-quality cameras. The spread in the parameters of cameras built on them mainly reflects the differing approaches of manufacturers to defining the concept of an "acceptable signal".

An additional problem with determining sensitivity is that the unit of illumination, the lux, is defined for monochromatic radiation with a wavelength of 550 nm. In this regard, it makes sense to pay special attention to such a characteristic as the spectral dependence of the sensitivity of a video camera. In most cases, the sensitivity of black-and-white cameras extends, compared to the human eye, much further into the infrared, up to 1100 nm. Some modifications have an even higher sensitivity in the near infrared than in the visible region; these cameras are designed to work with infrared illuminators and in some respects are close to night vision devices.

The spectral sensitivity of color cameras is approximately the same as the human eye.


Fig. 2. Example of spectral sensitivity of a color CCD matrix with standard RGB bands.

Table 2. Approximate values of reflectance of various objects.

Object Reflectance coefficient (%)
Snow 90
White paint 75-90
Glass 70
Brick 35
Grass, trees 20
Human face 15 – 25
Coal, graphite* 7

 

* It is interesting to note that the reflection coefficient of the lunar surface is also about 7%, i.e. the Moon is actually black.

Special mention should be made of ultra-high-sensitivity cameras, which are in fact a combination of a regular camera and a night vision device (for example, a microchannel image intensifier tube, EOP). Such cameras have unique properties (their sensitivity is 100-10,000 times higher than that of regular cameras, and in the mid-infrared range, where the human body's radiation peaks, the body itself glows), but, on the other hand, they are also uniquely capricious: the mean time between failures is about one year, and the cameras should not be switched on during the day; it is even recommended to cover the lens to protect the EOP cathode from burnout. At a minimum, lenses with an automatic iris range of up to F/1000 or more should be installed. During operation, the camera must be regularly rotated slightly to avoid "burning" the image into the EOP cathode.

It is interesting to note that, unlike CCD matrices, EOP cathodes are very sensitive to strong backlight. While the light-sensitive area of a CCD camera returns to its original state relatively easily after bright illumination (it is practically unaffected by backlight), the EOP cathode takes a very long time (sometimes 3-6 hours) to recover after bright illumination. During this recovery, even with the input window closed, a residual "burned-in" image is read from the EOP cathode. As a rule, after strong backlighting, due to reabsorption effects (gas release caused by bombardment of the channel walls by accelerated electron flows) over a large area of the microchannel plates, EOP noise, in particular multi-electron and ion noise, increases sharply. The latter appears as frequent bright flashes of large diameter on the monitor screen, which greatly complicates extracting a useful signal. At even higher input light fluxes, irreversible processes may occur in both the cathode and the output phosphor screen of the image intensifier tube: under the influence of a high flux, individual sections of them fail ("burn out"). In further operation these sections have reduced sensitivity, which subsequently drops to zero.

Most ultra-high-sensitivity TV cameras use image intensifiers with yellow or yellow-green output phosphor screens. In principle, the glow of these screens can be considered a monochromatic radiation source, which automatically means that systems of this type can only be monochrome (i.e. black and white). Taking this into account, system designers select corresponding CCD matrices: with maximum sensitivity in the yellow-green part of the spectrum and no sensitivity in the IR range.

A negative consequence of the high sensitivity of matrices in the IR range is the increased dependence of device noise on temperature. Therefore, IR-sensitive matrices used for evening and night work without image intensifiers (unlike television systems with image intensifiers) are recommended to be cooled. The main reason for the shift of CCD sensitivity into the IR region compared to other semiconductor radiation receivers is that redder photons penetrate deeper into silicon: the transparency of silicon is greater in the long-wave region, and the probability of a photon being captured (converted into a photoelectron) there tends to unity.


Fig. 3. Dependence of the depth of photon absorption in silicon on the wavelength.

For light with a wavelength longer than 1100 nm, silicon is transparent (the energy of such photons is not enough to create an electron-hole pair in silicon), while photons with a wavelength shorter than 300-400 nm are absorbed in a thin surface layer (already within the polysilicon structure of the electrodes) and do not reach the potential well.

As mentioned above, when a photon is absorbed, an electron-hole carrier pair is generated, and the electrons are collected under the electrodes if the photon is absorbed in the depleted region of the epitaxial layer. With such a CCD structure, a quantum efficiency of about 40% can be achieved (theoretically, at this boundary, the quantum yield is 50%). However, polysilicon electrodes are opaque for light with a wavelength shorter than 400 nm.

To achieve higher sensitivity in the short-wave range, CCDs are often coated with thin films of substances that absorb blue or ultraviolet (UV) photons and re-emit them in the visible or red wavelength range.

Noise

Noise is any source of signal uncertainty. The following types of CCD noise can be distinguished.

 

Photon noise. This is a consequence of the discrete nature of light. Any discrete process obeys Poisson statistics, and the photon flux (S, the number of photons falling on the photosensitive part of the receiver per unit time) follows them as well. According to Poisson statistics, the photon noise equals √S. Thus, the signal-to-noise ratio (denoted S/N) for the input signal is:

S/N = S / √S = √S.
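Because the noise is simply the square root of the accumulated signal, the attainable signal-to-noise ratio can be estimated directly from the photon count; a small illustrative sketch:

```python
import math

# Poisson (shot) noise from the relation above: noise = sqrt(S), so S/N = sqrt(S).
def photon_snr(photons: float) -> float:
    """Signal-to-noise ratio of a signal limited only by photon noise."""
    return math.sqrt(photons)

for s in (100, 10_000, 200_000):
    print(f"S = {s:>7}:  S/N = {photon_snr(s):.0f}")
# 100 photons -> S/N = 10; 10,000 -> 100; a 200,000-electron packet -> about 447
```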

Dark signal noise. If no light reaches the matrix (for example, the video camera lens is tightly covered with a light-proof cap), the system output will still produce so-called "dark frames", otherwise known as noise "snow". The main component of the dark signal is thermionic emission: the lower the temperature, the lower the dark signal. Thermionic emission also obeys Poisson statistics, and its noise equals √Nt, where Nt is the number of thermally generated electrons in the total signal. As a rule, the CCDs in video cameras used in CCTV systems have no active cooling, as a result of which dark noise is one of the main noise sources.

Transfer noise. During the transfer of a charge packet across the CCD elements, some electrons are lost: they are captured by defects and impurities in the crystal. This transfer inefficiency varies randomly as a function of the number of transferred charges (N), the number of transfers (n), and the inefficiency of an individual transfer (ε). Assuming that each packet is transferred independently, the transfer noise can be written as:

σ = √(2 × ε × n × N).

Example: for a transfer inefficiency of 10⁻⁵, 300 transfers, and 10⁵ electrons in a packet, the transfer noise is about 25 electrons.
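The worked example is reproduced by a one-line function (names are illustrative):

```python
import math

# Transfer noise from the expression above: sigma = sqrt(2 * eps * n * N).
def transfer_noise(eps: float, n_transfers: int, n_electrons: float) -> float:
    return math.sqrt(2 * eps * n_transfers * n_electrons)

print(transfer_noise(1e-5, 300, 1e5))  # ~24.5 electrons, matching the ~25 quoted above
```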

Readout noise. When the signal accumulated in a CCD element is output from the matrix, converted to voltage and amplified, additional noise called readout noise appears in each element. Readout noise can be thought of as a base noise level that is present even in an image with zero exposure, when the matrix is in complete darkness and the dark signal noise is zero. Typical readout noise for good CCD samples is 15-20 electrons. In the best CCDs manufactured by Ford Aerospace Corporation using Skipper technology, readout noise of less than 1 electron and a transfer inefficiency of 10⁻⁶ have been achieved.

Reset noise, or kTC noise. Before a signal charge is introduced into the detection node, the previous charge must be removed; a reset transistor is used for this. The electrical reset level depends only on the temperature and the capacitance of the detection node, which introduces a noise of:

σr = √(kTC) / q,

where k is the Boltzmann constant, T is the absolute temperature, C is the capacitance of the detection node, and q is the electron charge (dividing by q expresses the noise in electrons).

For a typical capacitance C of 0.1 pF at room temperature, the reset noise is about 130 electrons. kTC noise can be completely suppressed by a special signal processing method, correlated double sampling (CDS). The CDS method also effectively suppresses low-frequency interference, usually introduced by power supply circuits.
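The 130-electron estimate can be checked directly from the formula; a minimal sketch using the standard physical constants (variable names are illustrative):

```python
import math

# kTC (reset) noise in electrons: sigma_r = sqrt(k * T * C) / q.
K_BOLTZMANN = 1.380649e-23   # J/K
Q_ELECTRON = 1.602176e-19    # C

def reset_noise_electrons(capacitance_f: float, temperature_k: float = 300.0) -> float:
    return math.sqrt(K_BOLTZMANN * temperature_k * capacitance_f) / Q_ELECTRON

print(reset_noise_electrons(0.1e-12))  # ~127 electrons for C = 0.1 pF at room temperature
```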

Since the main load on CCTV systems falls on the dark time of day (or poorly lit rooms), it is especially important to pay attention to low-noise video cameras, which are more effective in low-light conditions.

The parameter describing the relative magnitude of noise, as mentioned above, is called the signal-to-noise ratio (S/N) and is measured in decibels.

S/N = 20 x log(<video signal>/<noise>)

For example, a signal-to-noise ratio of 60 dB means that the signal is 1000 times greater than the noise.

At a signal/noise ratio of 50 dB or more, the monitor will show a clear picture without visible signs of noise, at 40 dB, flickering dots are sometimes noticeable, at 30 dB, “snow” is all over the screen, at 20 dB, the image is practically unacceptable, although large contrasting objects can still be seen through a solid “snowy” veil.
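The decibel figures map back to plain signal-to-noise ratios through the formula above; a short illustrative sketch:

```python
import math

# Convert a signal/noise ratio to decibels: S/N [dB] = 20 * log10(signal / noise).
def snr_db(signal: float, noise: float) -> float:
    return 20 * math.log10(signal / noise)

print(snr_db(1000, 1))  # 60 dB -- the example quoted above
for ratio, quality in [(316, "clear picture"), (100, "occasional flicker"),
                       (32, "'snow' over the whole screen"), (10, "barely usable")]:
    print(f"{snr_db(ratio, 1):4.0f} dB  ~ {quality}")
```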

The data provided in the camera descriptions indicate signal/noise values ​​for optimal conditions, for example, with 10 lux illumination on the matrix and with automatic gain control and gamma correction turned off. As the illumination decreases, the signal becomes smaller, and the noise, due to the action of AGC and gamma correction, increases.

Dynamic Range

Dynamic range is the ratio of the maximum possible signal generated by the light detector to its own noise. For CCDs, this parameter is defined as the ratio of the largest charge packet that can be accumulated in a pixel to the readout noise. The larger the CCD pixel, the more electrons it can hold; for different types of CCDs this value ranges from 75,000 to 500,000 and higher. With a noise of 10 e- (CCD noise is measured in electrons, e-), the dynamic range of such a CCD reaches 50,000. A large dynamic range is especially important for recording images outdoors in bright sunlight or at night, when there is a large difference in illumination within the scene: the bright light of a lantern and the unlit shadow side of an object. For comparison, the best photographic emulsions have a dynamic range of only about 100.
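The definition translates directly into a two-line check; the 500,000 e- full well and 10 e- readout noise are the figures quoted above.

```python
import math

# Dynamic range = full-well capacity / readout noise, optionally expressed in dB.
def dynamic_range(full_well_e: float, read_noise_e: float) -> float:
    return full_well_e / read_noise_e

dr = dynamic_range(500_000, 10)
print(dr, "->", round(20 * math.log10(dr)), "dB")   # 50,000 -> about 94 dB
```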

For a more visual understanding of some characteristics of CCD detectors and, above all, the dynamic range, we will briefly compare them with the properties of the human eye.

The eye is the most universal light detector.

To this day, the most effective and perfect light detector in terms of dynamic range (and, in particular, in terms of the efficiency of image processing and restoration) is the human eye. The point is that the human eye combines two types of light detectors: rods and cones.

The cones are small in size and have relatively low sensitivity. They are concentrated mainly in the area of the central yellow spot and are practically absent at the periphery of the retina. The cones distinguish light of different wavelengths well, or rather have a mechanism for forming a different neural signal depending on the color of the incident flux. Therefore, under normal lighting conditions, the normal eye has its maximum angular resolution and its maximum discrimination of color shades near the optical axis of the lens. Some people, however, have pathological deviations associated with a reduced, and sometimes absent, ability to form different neural signals depending on the wavelength of light; this pathology is called color blindness. People with acute vision are almost never color blind.

The rods are distributed almost evenly over the retina, are larger in size and therefore have greater sensitivity.

In daylight conditions, the signal from the cones significantly exceeds the signal from the rods, and the eye is tuned to work with bright lighting (so-called "daytime" vision). Compared to the rods, the cones have a higher level of "dark" signal (in the dark we see false light "sparkles").

If a person with normal vision who is not tired is placed in a dark room and allowed to adapt ("get used") to the darkness, the "dark" signal from the cones is greatly reduced and the rods begin to work more effectively in perceiving light ("twilight" vision). In the famous experiments of S. I. Vavilov, it was proven that the human eye (in its "rod" mode) is capable of registering individual portions of 2-3 light quanta.

Thus, the dynamic range of the human eye, from the bright sun to individual photons, is 10¹⁰ (i.e. 200 decibels!). The best artificial light detector in this respect is the photomultiplier tube (PMT). In photon-counting mode it has a dynamic range of up to 10⁵ (i.e. 100 dB), and with an automatic device for switching to analog recording mode the dynamic range of a PMT can reach 10⁷ (140 dB), which is still a thousand times worse than the dynamic range of the human eye.

The spectral sensitivity range of the cones is quite wide (from 4200 to 6500 angstroms) with a maximum at a wavelength of approximately 5550 angstroms. The rods have a narrower spectral range (from 4200 to 5200 angstroms) with a maximum at a wavelength of approximately 4700 angstroms. Therefore, when switching from daytime to twilight vision, an ordinary person loses the ability to distinguish colors (it is not for nothing that they say "at night all cats are gray"), and the effective wavelength shifts towards the blue, into the region of higher-energy photons. This shift of spectral sensitivity is called the Purkinje effect. Many color CCD matrices whose RGB channels are not balanced to white exhibit it (indirectly). This should be taken into account when obtaining and using color information in television systems with cameras that do not have automatic white balance.

Linearity and gamma correction.

CCDs have a high degree of linearity. In other words, the number of electrons collected in a pixel is strictly proportional to the number of photons that hit the CCD.

The parameter «linearity» is closely related to the parameter «dynamic range». The dynamic range, as a rule, can significantly exceed the linearity range if the system provides hardware or further software correction of the device's operation in the nonlinear region. Usually, a signal with a deviation from linearity of no more than 10% can be easily corrected.

The situation is completely different in the case of photographic emulsions. Emulsions have a complex dependence of reaction to light and, at best, allow achieving a photometric accuracy of 5% and only in part of their already narrow dynamic range. CCDs, on the other hand, are linear with an accuracy of up to 0.1% over almost the entire dynamic range. This makes it relatively easy to eliminate the influence of non-uniformity of sensitivity over the field. In addition, CCDs are positionally stable. The position of an individual pixel is strictly fixed during the manufacture of the device.

The CRT in a monitor has a power-law dependence of brightness on signal (with an exponent of 2.2), which reduces contrast in dark areas and increases it in bright ones, while, as already noted, modern CCD matrices produce a linear signal. To compensate for the overall nonlinearity, a device (gamma corrector) is usually built into the camera that pre-distorts the signal with an exponent of 1/2.2, i.e. 0.45. Some cameras offer a choice of pre-distortion coefficient; for example, the option 0.60 leads to a subjective increase in contrast, which gives the impression of a "clearer" picture. A side effect is that gamma correction means additional amplification of weak signals (in particular, noise), i.e. the same camera with gamma correction (G ≈ 0.45) enabled will appear approximately four times more "sensitive" than with G = 1. However, let us remind you once again that no amplifier can increase the signal-to-noise ratio.
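The compensation is easy to illustrate numerically; the sketch below works on a signal normalized to the 0..1 range and uses the 0.45 and 2.2 exponents quoted above.

```python
# Gamma pre-correction: the camera raises the normalized signal to the power
# 1/2.2 (~0.45) so that the CRT's 2.2-power response gives an overall linear transfer.
def gamma_correct(signal: float, gamma: float = 0.45) -> float:
    """`signal` must be normalized to the 0..1 range."""
    return signal ** gamma

weak = 0.05                        # a weak signal at 5% of full scale
print(gamma_correct(weak))         # ~0.26 -> weak signals (and noise) are boosted ~5x
print(gamma_correct(weak) ** 2.2)  # ~0.05 -> the CRT response restores overall linearity
```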

Charge spread.

The maximum number of electrons that can be accumulated in a pixel is limited. For matrices of average manufacturing quality and typical sizes, this value is usually about 200,000 electrons. If the total number of photons during the exposure (frame) reaches this limit (200,000 or more at a quantum yield of 90% or higher), the charge packet begins to flow into neighboring pixels and image details begin to merge. The effect is enhanced when the "extra" light flux not absorbed by the thin body of the crystal is reflected from the base substrate. At light fluxes within the dynamic range, the photons do not reach the substrate; almost all of them (at high quantum yield) are converted into photoelectrons. But near the upper limit of the dynamic range saturation occurs, and unconverted photons begin to "wander" around the crystal, largely maintaining the direction in which they entered it. Most of these photons reach the substrate, are reflected, and thus increase the probability of subsequent conversion into photoelectrons, oversaturating the charge packets already at the spreading boundary. However, if an absorbing layer, so-called anti-blooming protection, is applied to the substrate, the spreading effect is greatly reduced. Many modern matrices manufactured with new technologies have anti-blooming, which is one of the components of the backlight compensation system.

Stability and photometric accuracy.

Even the most sensitive CCD video cameras are useless for use in low-light conditions if they have unstable sensitivity. Stability is an inherent property of a CCD as a solid-state device. Here, first of all, we mean the stability of sensitivity over time. Temporal stability is verified by measuring fluxes from special stabilized radiation sources. It is determined by the stability of the quantum yield of the matrix itself and the stability of the electronic system for reading, amplifying and recording the signal. This resulting stability of the video camera is the main parameter in determining photometric accuracy, i.e. the accuracy of measuring the recorded light signal.

For good matrix samples and a high-quality electronic system, the photometric accuracy can reach 0.4-0.5%, and in some cases, under optimal matrix operating conditions and using special signal processing methods, 0.02%. The resulting photometric accuracy is determined by several main components:

  • temporal instability of the system as a whole;
  • spatial non-uniformity of sensitivity and, above all, non-uniformity of high-frequency (i.e. from pixel to pixel) sensitivity;
  • quantum efficiency of the video camera;
  • accuracy of video signal digitization for digital video cameras;
  • the amount of noise of different types.

Even if the CCD matrix has large non-uniformities in sensitivity, their influence on the resulting photometric accuracy can be reduced by special signal processing methods, provided of course that these non-uniformities are stable over time. On the other hand, if the matrix has high quantum efficiency but poor stability, the resulting accuracy of registering the useful signal will be low. In this sense, for unstable devices, the accuracy of recording the useful signal (i.e. photometric accuracy) is a more important characteristic than the signal-to-noise ratio.
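One such signal-processing method is flat-field correction: each frame is divided by a normalized image of a uniformly illuminated scene, which removes stable pixel-to-pixel sensitivity variations. The article does not name a specific algorithm, so the sketch below is only a generic illustration.

```python
import numpy as np

# Generic flat-field correction: stable sensitivity non-uniformities are removed
# by dividing each dark-subtracted frame by a normalized "flat" frame taken
# under uniform illumination.
def flat_field_correct(raw: np.ndarray, dark: np.ndarray, flat: np.ndarray) -> np.ndarray:
    flat_corr = flat - dark                   # remove the dark signal from the flat frame
    flat_norm = flat_corr / flat_corr.mean()  # unit-mean map of relative sensitivity
    return (raw - dark) / flat_norm           # corrected frame

# Tiny synthetic example with ~10% pixel-to-pixel sensitivity non-uniformity.
rng = np.random.default_rng(0)
sensitivity = 1 + 0.1 * rng.standard_normal((4, 4))
dark = np.full((4, 4), 5.0)
flat = 1000 * sensitivity + dark
raw = 200 * sensitivity + dark
print(flat_field_correct(raw, dark, flat))    # approximately 200 everywhere
```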
