
Neizvestny Sergei Ivanovich
Nikulin Oleg Yuryevich

CHARGE-COUPLED DEVICES —
THE BASIS OF MODERN TELEVISION TECHNOLOGY.
MAIN CHARACTERISTICS OF CCD.

Source: magazine «Special Technology»

The previous article provided a brief analysis of existing semiconductor light receivers and a detailed description of the structure and operating principle of charge-coupled devices.

This article will discuss the physical characteristics of CCD matrices and their impact on the general properties of television cameras.

Number of elements in a CCD matrix.

Perhaps the most “basic” characteristic of CCD matrices is the number of elements. The overwhelming majority of models have a standard number of elements matched to the television standard: 512×576 pixels (these matrices are usually used in simple and inexpensive video surveillance systems) and 768×576 pixels (such matrices provide the maximum resolution obtainable for a standard television signal).

The largest CCD manufactured and described in the literature is a single-crystal device from Ford Aerospace Corporation measuring 4096×4096 pixels with a pixel side of 7.5 µm.

The production yield of high-quality large-size devices is very low, so a different approach is used to create CCD video cameras for shooting large-format images. Many companies manufacture CCDs with leads located on three, two or only one side (buttable CCDs). Such devices are used to assemble mosaic CCDs.

For example, Loral Fairchild produces a very interesting and promising 2048×4096 device with a 15 µm pixel, whose leads are located on one narrow side. The achievements of the Russian industry are somewhat more modest: NPP Silar (St. Petersburg) produces a 1024×1024 CCD with a 16 µm pixel, a buried (bulk) charge-transfer channel, a virtual phase and leads on one side of the device. This architecture allows the devices to be butted against one another on three sides.

It is interesting to note that several specialized large-format light detectors based on CCD mosaics have already been created. For example, eight 2048×4096 CCDs from Loral Fairchild are assembled into an 8192×8192 mosaic with overall dimensions of 129×129 mm. The gaps between the individual CCD crystals are less than 1 mm.

In some applications, relatively large gaps (up to 1 cm) are not considered a serious problem, since the full image can be obtained by summing several exposures in the computer memory, slightly offset from each other, thus filling the gaps.

The image obtained by the 8192×8192 mosaic contains 128 MB of information, which is equivalent to roughly a 100-volume encyclopedia with 500 pages in each volume.
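As a rough arithmetic check of this data volume, here is a small sketch (Python); the 2-byte (16-bit) sample per pixel is our assumption, not stated in the article:

    # Data volume of one frame from the 8192 x 8192 mosaic, assuming 2 bytes
    # (16-bit digitisation) per pixel; the bit depth is our assumption.
    pixels = 8192 * 8192            # 67,108,864 pixels
    size_mb = pixels * 2 / 2**20    # bytes -> mebibytes
    print(size_mb)                  # 128.0 MB, as quoted in the text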

Although these figures are impressive, they are still small compared to the size and resolution of photographic emulsions, which can be produced in huge sheets. Even the coarsest 35 mm film contains up to 25 million resolvable grains (pixels) in a frame.

Resolving power of TV cameras

One of the main parameters of a TV camera, resolution (or resolving power), directly depends on the number of elements in the CCD matrix. The resolution of the camera as a whole is also affected by the parameters of the electronic signal processing circuit and the parameters of the optics.

Resolution is defined as the maximum number of black and white stripes (i.e. the number of transitions from black to white or vice versa) that can be transmitted by the camera and distinguished by the recording system at the maximum detectable contrast.

If the camera's data sheet states a resolution of N television lines, this means that the camera can resolve N/2 dark vertical lines on a light background, laid out in a square inscribed in the image field.

In relation to a standard television test chart, this means the following: by choosing the distance and focusing the image of the chart, one must make the upper and lower edges of the chart image on the monitor coincide with the outer contours of the chart, marked by the tops of the black and white prisms.

Then, after final focusing, the number is read at the point on the vertical wedge where the vertical lines cease to be distinguishable for the first time.

The last remark is very important, since the image of the chart's test fields with 600 or more lines often shows alternating stripes which are, in fact, moiré formed by the beating of the spatial frequencies of the chart lines against the grid of sensitive elements of the CCD matrix. This effect is especially pronounced in cameras with high-frequency spatial filters.

The unit of measurement of resolution in television systems is the TV line (TVL). The vertical resolution of all cameras is almost the same, since it is limited by the television standard of 625 scan lines: no more than 625 objects can be transmitted along this coordinate. What is usually quoted in technical descriptions is therefore the difference in horizontal resolution.

In practice, in most cases, a resolution of 380-400 TV lines is quite sufficient for general television surveillance tasks.

However, for specialized television systems and tasks, such as television monitoring of a large area with one camera, viewing a large perimeter with a television camera with variable angular magnification (zoom), surveillance at airports, railway stations, piers, supermarkets, identification and recognition systems for vehicle license plates, facial identification systems, etc., a higher resolution is required (for this, cameras with a resolution of 570 or more TV lines are used).

The resolution of color cameras is slightly worse than that of black-and-white cameras. This is because the pixel structure of CCD matrices used in color television differs from the pixel structure of black-and-white matrices. Figuratively speaking, a pixel of a color matrix is a combination of three pixels, each of which registers light in either the red (R), green (G) or blue (B) part of the optical spectrum. Thus, three signals (an RGB signal) are taken from each element of a color CCD matrix.

One might therefore expect the effective resolution to be several times worse than that of black-and-white matrices. In practice, however, the resolution of color matrices deteriorates less, since their pixels are about one and a half times smaller than those of a comparable black-and-white matrix, and the loss of resolution is only 30-40%.

The downside is a decrease in the sensitivity of color matrices, since the effective area registering an image element becomes significantly smaller. The typical resolution of color TV cameras is 300-350 TV lines.

In addition, the camera resolution is affected by the bandwidth of the video signal produced by the camera. To transmit a 300 TVL signal, a bandwidth of 2.75 MHz is required (150 periods per 55 µs of a TV scanning line). The relationship between the video bandwidth f and the resolution (TVL) is given by:

f = (TVL / 2) × fline,

where f is measured in MHz, TVL is the resolution in TV lines, and fline = 18.2 kHz is the horizontal TV scanning frequency.
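A minimal sketch of this relation (Python), using the 18.2 kHz line frequency quoted above; the function name is ours:

    # Required video bandwidth in MHz: (TVL / 2) * f_line, with the 18.2 kHz
    # horizontal scanning frequency quoted above.
    def required_bandwidth_mhz(tvl, f_line_khz=18.2):
        return (tvl / 2) * f_line_khz / 1000.0

    print(required_bandwidth_mhz(300))   # ~2.73 MHz, close to the 2.75 MHz quoted
    print(required_bandwidth_mhz(570))   # ~5.2 MHz for a high-resolution camera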

Nowadays, many semiconductor amplifiers with good frequency characteristics are available, so the bandwidth of camera amplifiers usually exceeds the required value significantly (by a factor of 1.5-2) so as not to affect the final resolution of the system in any way. The resolution is therefore limited by the discrete topology (pixel structure) of the light-sensitive area of the CCD matrix.

Sometimes the use of a good electronic amplifier is dressed up in attractive terms such as “resolution enhancement” or “edge enhancement”.

It is important to realize that this approach does not improve the resolution itself; it only improves the sharpness of black-and-white boundaries, and even then not always.

However, there is one case where no tricks of modern electronics can raise the video signal bandwidth above 3.8 MHz: the composite color video signal. Since the color signal is transmitted on a subcarrier (in the PAL standard, at a frequency of about 4.4 MHz), the luminance signal is forcibly limited to a bandwidth of 3.8 MHz (strictly speaking, the standard assumes comb filters for separating the color and luminance signals, but real equipment simply uses low-pass filters).

This corresponds to a resolution of about 420 TVL. Currently, some manufacturers declare the resolution of their color cameras to be 480 TVL or more. But they, as a rule, do not emphasize the fact that this resolution is realized only if the signal is taken from the Y-C (S-VHS) or component (RGB) output.

In this case, the brightness and color signals are transmitted by two (Y-C) or three (RGB) separate cables from the camera to the monitor.

The monitor, as well as all intermediate equipment (switches, multiplexers, video recorders) must also have Y-C (or RGB) inputs/outputs. Otherwise, a single intermediate element processing the composite video signal will limit the bandwidth to the aforementioned 3.8 MHz and make all the costs of expensive cameras useless.

Quantum efficiency and quantum yield of a CCD camera.

By quantum efficiency we mean the ratio of the number of registered charges to the number of photons that hit the light-sensitive area of ​​the CCD crystal.

However, one should not confuse the concepts of quantum efficiency and quantum yield. Quantum yield is the ratio of the number of photoelectrons formed in a semiconductor or near its boundary as a result of the photoelectric effect to the number of photons that hit this semiconductor.

Quantum efficiency is the quantum yield of the light-recording part of the receiver multiplied by the coefficient of conversion of the photoelectron charge into the registered useful signal.

Since this coefficient is always less than one, the quantum efficiency is also less than the quantum yield. This difference is especially large for devices with a low-efficiency signal recording system.
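As an illustration of this relation, a minimal sketch (Python); the numerical values are hypothetical and chosen only to show that the quantum efficiency is always below the quantum yield:

    # Quantum efficiency = quantum yield * charge-to-signal conversion coefficient (< 1).
    def quantum_efficiency(quantum_yield, conversion_coeff):
        assert 0.0 <= conversion_coeff <= 1.0
        return quantum_yield * conversion_coeff

    # Hypothetical illustrative numbers: a quantum yield of 0.6 and a conversion
    # coefficient of 0.8 give a quantum efficiency of 0.48.
    print(quantum_efficiency(0.6, 0.8))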

CCDs have no equal in quantum efficiency.

For comparison: of every 100 photons entering the pupil of the eye, only one is perceived by the retina (a quantum yield of 1%); the best photographic emulsions have a quantum efficiency of 2-3%; vacuum devices (e.g. photomultipliers) reach up to 20%; for CCDs this parameter can reach 95%, with typical values ranging from 4% (low-quality CCDs, usually found in cheap video cameras of so-called “yellow” assembly) to 50% (a typical unselected Western-assembled video camera).

In addition, the width of the range of wavelengths to which the eye reacts is much narrower than that of a CCD.

The spectral range of photocathodes of traditional vacuum television cameras and photo emulsions is also limited. CCDs respond to light with wavelengths from a few angstroms (gamma and X-rays) to 1100 nm (IR radiation). This huge range is much larger than the spectral range of any other detector known to date.


Fig. 1. Example of quantum efficiency of a CCD matrix.

Sensitivity and spectral range

Another important parameter of a television camera, sensitivity, is closely related to the concepts of quantum efficiency and quantum yield. While quantum efficiency and quantum yield are used mainly by developers and designers of new television systems, sensitivity is used by commissioning engineers, operations staff, and designers of working installations at enterprises.

In essence, the sensitivity and quantum yield of the receiver are related to each other by a linear function. If the quantum yield relates the number of photons incident on the light receiver and the number of photoelectrons generated by these photons as a result of the photoelectric effect, then the sensitivity determines the response of the light receiver in electrical units of measurement (for example, in mA) to a certain value of the incident light flux (for example, in W or in lx/sec).

In this case, the concept of bolometric sensitivity (i.e. the total sensitivity of the receiver over the entire spectral range) and monochromatic sensitivity, measured, as a rule, by the radiation flux with a spectral width of 1 nm (10 angstroms) are distinguished.

When the sensitivity of a receiver is quoted at a particular wavelength (for example, 450 nm), this means that the sensitivity is calculated for the flux in the range from 449.5 nm to 450.5 nm. Such a definition of sensitivity, measured in mA/W, is unambiguous and causes no confusion in use.

However, for consumers of television equipment used in security systems, a different definition of sensitivity is more often used. Most often, sensitivity is understood as the minimum illumination on an object (scene illumination), at which it is possible to distinguish the transition from black to white, or the minimum illumination on the matrix (image illumination).

From a theoretical point of view, it would be more correct to specify the minimum illumination on the matrix, since in this case there is no need to specify the characteristics of the lens used, the distance to the object, or its reflection coefficient (sometimes called the albedo).

Albedo is usually defined at a specific wavelength, although there is such a thing as bolometric albedo. It is very difficult to operate objectively with the definition of sensitivity based on the illumination of the object. This is especially true when designing television recognition systems at long distances. Many matrices cannot register the image of a person's face located at a distance of 500 meters, even if it is illuminated by very bright light.*

Note

* Tasks of this kind appear in the practice of security television, especially in places with an increased threat of terrorism, etc. Television systems of this kind were developed in 1998 in Japan and are being prepared for mass production.

But when selecting a camera, it is more convenient for the user to work with the illumination of the object, which he knows in advance. Therefore, the minimum illumination on the object is usually indicated, measured under standardized conditions – the reflection coefficient of the object is 0.75 and the lens aperture is 1.4. The formula linking the illumination on the object and on the matrix is ​​given below:

Iimage = Iscene × R / (π × F²),

where Iimage and Iscene are the illumination of the CCD matrix and of the object, respectively (Table 1);
R is the reflection coefficient of the object (Table 2);
π is approximately 3.14;
F is the lens aperture (f-number).

The values of Iimage and Iscene usually differ by more than a factor of 10.
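A minimal sketch of this formula (Python), with the standardized test conditions mentioned above (R = 0.75, F/1.4) used as defaults; the 200-lux example value is arbitrary:

    import math

    # Illumination on the CCD: I_image = I_scene * R / (pi * F^2), with the
    # standardized test conditions (R = 0.75, F/1.4) as defaults.
    def image_illumination(scene_lux, reflectance=0.75, f_number=1.4):
        return scene_lux * reflectance / (math.pi * f_number ** 2)

    print(image_illumination(200.0))   # ~24 lux on the matrix for a 200-lux room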

Illumination is measured in lux. One lux is the illumination created by a point source of one international candle at a distance of one meter on a surface perpendicular to the rays of light.

Table 1. Approximate illumination of objects.

Outdoors (latitude of Moscow)
Cloudless sunny day 100,000 lux
Sunny day with light clouds 70,000 lux
Cloudy day 20,000 lux
Early morning 500 lux
Twilight 0.1 – 4 lux
“White Nights”* 0.01 – 0.1 lux
Clear night, full moon 0.02 lux
Night, moon in clouds 0.007 lux
Dark cloudy night 0.00005 lux
Indoor
Room without windows 100 – 200 lux
Well-lit room 200 – 1000 lux

 

* “White nights” are lighting conditions corresponding to civil twilight, i.e. when the sun sinks no more than 6° below the horizon (not counting atmospheric refraction). This is true for St. Petersburg. For Moscow, the conditions of so-called “navigational white nights” are met, i.e. the disk of the sun sinks no more than 12° below the horizon.

Camera sensitivity is often specified for an “acceptable signal,” which means a signal with a signal-to-noise ratio of 24 dB. This is an empirically determined limiting value of noise, at which an image can still be recorded on videotape and one can still hope to see something during playback.

Another way to define an “acceptable” signal is the IRE scale (Institute of Radio Engineers). A full video signal (0.7 volts) is taken as 100 IRE units. A signal of about 30 IRE is considered “acceptable”.
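A small sketch of the IRE-to-volts conversion implied by this definition (Python); the helper name is ours:

    # The IRE scale described above: a full 0.7 V video signal corresponds to 100 IRE.
    def ire_to_volts(ire):
        return 0.7 * ire / 100.0

    print(ire_to_volts(30))   # 0.21 V, the "acceptable" signal level
    print(ire_to_volts(50))   # 0.35 V, i.e. about -6 dB relative to the full signal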

Some manufacturers, in particular BURLE, specify 25 IRE, some – 50 IRE (signal level -6 dB). The choice of “acceptable” level is determined by the signal-to-noise ratio. It is not difficult to amplify an electronic signal.

The trouble is that the noise will also increase.

The most sensitive CCD matrices of mass production today are the Hyper-HAD matrices from Sony, which have a microlens on each light-sensitive cell. They are used in most high-quality cameras.

The spread in the parameters of cameras built on these matrices mainly reflects differences in the manufacturers' approaches to defining an “acceptable signal”.

An additional problem with determining sensitivity is that the unit of measurement of illumination “lux” is defined for monochromatic radiation with a wavelength of 550 nm. In this regard, it makes sense to pay special attention to such a characteristic as the spectral dependence of the sensitivity of a video camera.

In most cases, the sensitivity of black-and-white cameras extends considerably further into the infrared range, up to 1100 nm, than that of the human eye.

Some modifications have even higher sensitivity in the near infrared region than in the visible region. These cameras are designed to work with infrared spotlights and are close to night vision devices in some parameters.

The spectral sensitivity of color cameras is approximately the same as the human eye.


Fig. 2. An example of the spectral sensitivity of a color CCD matrix with standard RGB stripes.

Table 2. Approximate values ​​of the reflection coefficients of various objects.

Object Reflectance (%)
Snow 90
White paint 75-90
Glass 70
Brick 35
Grass, trees 20
Human face 15 – 25
Coal, graphite* 7

 

* It is interesting to note that the reflectivity of the lunar surface is also about 7%, i.e. the Moon is actually black.

Special mention should be made of ultra-high-sensitivity cameras, which are essentially a combination of a conventional camera and a night-vision device (for example, a microchannel image intensifier tube).

Such cameras have unique properties (their sensitivity is 100 to 10,000 times higher than usual, and the maximum of the human body's own radiation, i.e. its own glow, lies in the mid-infrared range), but, on the other hand, they are also uniquely capricious: the mean time between failures is about one year, the cameras must not be switched on during the day, and it is even recommended to keep the lens covered to protect the cathode of the image intensifier from burning out.

At a minimum, lenses with an automatic iris with a range of up to F/1000 or more should be installed. During operation, the camera must be turned slightly from time to time in order to avoid “burning” the image into the cathode of the image intensifier.

It is interesting to note that, unlike CCD matrices, image intensifier cathodes are very sensitive to excessive illumination. Whereas the light-sensitive area of a CCD camera returns to its original state relatively easily after bright illumination (it is practically immune to flare), the image intensifier cathode “recovers” for a very long time (sometimes 3-6 hours) after bright illumination.

During this recovery, even with the input window closed, a residual, “burned-in” image is read from the cathode of the image intensifier tube. As a rule, after large exposures the image intensifier noise, in particular multi-electron and ion noise, rises sharply because of gas-release effects (desorption caused by bombardment of the channel walls by streams of accelerated electrons) over a large area of the microchannel plates.

The latter appear as frequent bright flashes of large diameter on the monitor screen, which greatly complicates the extraction of a useful signal.

At even higher input light fluxes, irreversible processes may occur both with the cathode and with the output fluorescent screen of the image intensifier tube: under the influence of a high flux, individual sections of them fail (“burn out”). During further operation, these sections have reduced sensitivity, which subsequently drops to zero.

Most ultra-high-sensitivity TV cameras use brightness amplifiers (image intensifiers) with yellow or yellow-green output fluorescent screens. In principle, the glow of these screens can be considered a monochromatic radiation source, which automatically means that systems of this type can only be monochrome (i.e. black-and-white). Taking this into account, system designers select the corresponding CCD matrices: with maximum sensitivity in the yellow-green part of the spectrum and no sensitivity in the IR range.

A negative consequence of the high sensitivity of the matrices in the IR range is the increased dependence of the device noise on temperature.

Therefore, IR-sensitive matrices used for evening and night work without brightness amplifiers, unlike TV systems with image intensifiers, are recommended to be cooled.

The main reason for the shift in sensitivity of CCD television cameras to the IR region compared to other semiconductor radiation receivers is that redder photons penetrate further into silicon, since silicon is more transparent in the long-wave region and the probability of capturing a photon (converting it into a photoelectron) tends to unity.


Fig. 3. Dependence of the depth of photon absorption in silicon on the wavelength.

For light with a wavelength greater than 1100 nm, silicon is transparent (the energy of such photons is not sufficient to create an electron-hole pair in silicon), while photons with a wavelength of less than 300-400 nm are absorbed in a thin surface layer (already in the polysilicon structure of the electrodes) and do not reach the potential well.

As mentioned above, when a photon is absorbed, an electron-hole carrier pair is generated, and the electrons are collected under the electrodes if the photon is absorbed in the depleted region of the epitaxial layer.

With such a CCD structure, a quantum efficiency of about 40% can be achieved (theoretically, the quantum yield at this boundary is 50%). However, polysilicon electrodes are opaque to light with a wavelength shorter than 400 nm.

To achieve higher sensitivity in the short-wave range, CCDs are often coated with thin films of substances that absorb blue or ultraviolet (UV) photons and re-emit them in the visible or red wavelength range.

Noise

Noise is any source of signal uncertainty. The following types of CCD noise can be distinguished.

Photon noise.

It is a consequence of the discrete nature of light. Any discrete process obeys Poisson statistics.

The photon flux S (the number of photons falling on the photosensitive part of the receiver per unit of time) also follows these statistics. According to them, the photon noise equals √S.

Thus, the signal-to-noise ratio (denoted S/N) for the input signal will be:

S/N = S / √S = √S.
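A minimal sketch of this photon-noise relation (Python):

    import math

    # Photon (shot) noise obeys Poisson statistics: noise = sqrt(S), so the
    # signal-to-noise ratio of the input light signal is S / sqrt(S) = sqrt(S).
    def photon_snr(photons):
        return math.sqrt(photons)

    print(photon_snr(10_000))   # 100: S/N grows only as the square root of the signal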

Dark signal noise.

If no light signal reaches the matrix (for example, the video camera lens is tightly covered with a light-proof cap), then at the system output we will still receive so-called “dark” frames, whose noise looks like “snow”.

The main component of the dark signal is thermionic emission. The lower the temperature, the lower the dark signal. Thermionic emission also obeys Poisson statistics, and its noise is √Nt, where Nt is the number of thermally generated electrons in the total signal. As a rule, in all video cameras used in CCTV systems, CCDs operate without active cooling, as a result of which dark noise is one of the main sources of noise.

Transfer noise.

During the transfer of a charge packet across the CCD elements, some electrons are lost. They are captured by defects and impurities existing in the crystal.

This transfer inefficiency varies randomly as a function of the number of transferred charges (N), the number of transfers (n), and the inefficiency of an individual transfer event (ε). If we assume that each packet is transferred independently, the transfer noise can be written as:

σ = √(2 × ε × n × N).

Example: for a transfer inefficiency of 10⁻⁵, 300 transfers and a packet of 10⁵ electrons, the transfer noise is about 25 electrons.
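A quick numeric check of this example (Python), using the transfer-noise formula above:

    import math

    # Transfer noise: sigma = sqrt(2 * epsilon * n * N), where epsilon is the
    # inefficiency of a single transfer, n the number of transfers and N the
    # number of electrons in the packet.
    def transfer_noise(epsilon, n_transfers, electrons):
        return math.sqrt(2 * epsilon * n_transfers * electrons)

    print(transfer_noise(1e-5, 300, 1e5))   # ~24.5 electrons, i.e. the ~25 e- quoted above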

Readout Noise.

When the signal stored in a CCD element is output from the sensor, converted to voltage and amplified, additional noise called readout noise is introduced into each element.

Readout noise can be thought of as a baseline noise level that is present even in an image with zero exposure, when the sensor is in total darkness and the dark signal noise is zero.

Typical readout noise for good CCDs is 15-20 electrons. The best CCDs, manufactured by Ford Aerospace using Skipper technology, achieve readout noise of less than 1 electron and a transfer inefficiency of 10⁻⁶.

Reset noise or kTC noise.

Before the signal charge is introduced into the detection unit, the previous charge must be removed. This is done using a reset transistor.

The electrical level after reset depends only on the temperature and the capacitance of the detection node, which introduces a noise of:

σr = √(k × T × C) / q,

where k is the Boltzmann constant, T is the absolute temperature, C is the capacitance of the detection node, and q is the electron charge.

For a typical node capacitance C of 0.1 pF at room temperature, the reset noise is about 130 electrons. kTC noise can be completely suppressed by a special signal-processing method: correlated double sampling (CDS). The CDS method also effectively suppresses low-frequency noise, usually introduced by power-supply circuits.
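A small sketch reproducing this estimate (Python); the constants are standard physical values:

    import math

    # Reset (kTC) noise in electrons: sigma_r = sqrt(k*T*C) / q.
    BOLTZMANN = 1.38e-23        # J/K
    ELECTRON_CHARGE = 1.6e-19   # C

    def reset_noise_electrons(capacitance_farads, temperature_kelvin=300.0):
        return math.sqrt(BOLTZMANN * temperature_kelvin * capacitance_farads) / ELECTRON_CHARGE

    print(reset_noise_electrons(0.1e-12))   # ~127 electrons for C = 0.1 pF, i.e. the ~130 e- quoted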

Since the main load on CCTV systems falls on the dark time of day (or poorly lit rooms), it is especially important to pay attention to low-noise video cameras, which are more effective in low-light conditions.

The parameter describing the relative magnitude of noise, as mentioned above, is called the signal-to-noise ratio (S/N) and is measured in decibels.

S/N = 20 x log(<video signal>/<noise>)

For example, a signal/noise ratio of 60 dB means that the signal is 1000 times greater than the noise.

With a signal/noise ratio of 50 dB or more, the monitor will show a clear picture without visible signs of noise; at 40 dB, flickering dots will sometimes be visible; at 30 dB, “snow” will appear all over the screen; at 20 dB, the image is practically unacceptable, although large contrasting objects can still be seen through the continuous “snow” blanket.
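A minimal sketch of the decibel conversion defined above (Python), annotated with the picture-quality levels just described:

    import math

    # S/N in decibels: 20 * log10(signal / noise), as in the formula above.
    def snr_db(signal, noise):
        return 20 * math.log10(signal / noise)

    print(snr_db(1000, 1))   # 60 dB: clear picture, signal 1000 times the noise
    print(snr_db(100, 1))    # 40 dB: occasional flickering dots
    print(snr_db(10, 1))     # 20 dB: image practically unacceptable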

The data provided in camera descriptions indicates signal/noise values ​​for optimal conditions, for example, with 10 lux illumination on the matrix and with automatic gain control and gamma correction turned off. As the illumination decreases, the signal becomes smaller, and the noise, due to the action of AGC and gamma correction, becomes larger.

Dynamic range

Dynamic range is the ratio of the maximum possible signal generated by the light receiver to its own noise.

For CCDs, this parameter is defined as the ratio of the largest charge packet that can be accumulated in a pixel to the readout noise. The larger the CCD pixel size, the more electrons can be held in it.

For different types of CCDs, this value ranges from 75,000 to 500,000 and higher. With a readout noise of 10 e⁻ (CCD noise is expressed in electrons, e⁻), the dynamic range of the CCD reaches 50,000.
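A small sketch of this definition (Python); the conversion to decibels uses the same 20·log10 convention as the signal-to-noise ratio above:

    import math

    # Dynamic range: largest charge packet (full-well capacity) divided by the
    # readout noise, expressed both as a ratio and in decibels (20*log10).
    def dynamic_range(full_well_electrons, readout_noise_electrons):
        ratio = full_well_electrons / readout_noise_electrons
        return ratio, 20 * math.log10(ratio)

    print(dynamic_range(500_000, 10))   # (50000.0, ~94 dB), matching the 50,000 quoted above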

A large dynamic range is especially important for recording images outdoors in bright sunlight or at night, when there is a large difference in illumination: bright light from a street lamp and the unlit shadow side of an object. For comparison: the best photo emulsions have a dynamic range of only about 100.

For a more visual understanding of some characteristics of CCD detectors and, above all, the dynamic range, we will briefly compare them with the properties of the human eye.

The eye is the most universal light receiver.

To this day, the most effective and perfect light receiver in terms of dynamic range (and, in particular, in terms of the efficiency of image processing and restoration) is the human eye. The point is that the human eye combines two types of light detectors: rods and cones.

Cones are small in size and have relatively low sensitivity. They are located mainly in the area of the central yellow spot and are practically absent on the periphery of the retina of the fundus.

Cones distinguish light of different wavelengths well, or, more precisely, they have a mechanism for generating different neural signals depending on the color of the incident light.

Therefore, under normal illumination conditions, the normal eye has the maximum angular resolution near the optical axis of the lens, the maximum difference in color shades.

Some people, however, have pathological deviations associated with a reduced, or sometimes absent, ability to form different neural signals depending on the wavelength of light. This pathology is called color blindness. People with acute vision are almost never color blind.

Rods are distributed almost evenly throughout the retina, are larger in size and are therefore more sensitive.

In daylight conditions, the signal from the cones significantly exceeds the signal from the rods, and the eye is tuned to work in bright lighting (so-called “daytime” vision). Compared to the rods, the cones have a higher level of “dark” signal (in the dark we see false light “sparkles”).

If a person with normal vision who is not tired is placed in a dark room and allowed to adapt (“get used to”) to the darkness, the “dark” signal from the cones is greatly reduced and the rods begin to work much more effectively in perceiving light (“twilight” vision). In the famous experiments of S.I. Vavilov, it was proven that the human eye (in its “rod” mode of operation) is capable of registering flashes of just 2-3 quanta of light.

Thus, the dynamic range of the human eye, from the bright sun down to individual photons, is about 10¹⁰ (i.e. 200 decibels!).

The best artificial light detector in this respect is the photomultiplier tube (PMT). In photon-counting mode it has a dynamic range of up to 10⁵ (i.e. 100 dB), and with an automatic device for switching to analog recording mode the dynamic range of a PMT can reach 10⁷ (140 dB), which is still a thousand times worse than the dynamic range of the human eye.

The spectral range of sensitivity of the cones is quite wide (from 4200 to 6500 angstroms) with a maximum at a wavelength of approximately 5550 angstroms. The spectral range of the rods is narrower (from 4200 to 5200 angstroms) with a maximum at a wavelength of approximately 4700 angstroms.

Therefore, when switching from daytime to twilight vision, an ordinary person loses the ability to distinguish colors (it is not for nothing that they say: “all cats are gray at night”), and the effective wavelength shifts to the blue part, to the region of high-energy photons. This effect of shifting spectral sensitivity is called the Purkinje effect.

Something similar occurs (indirectly) in many color CCD matrices whose RGB signal is not white-balanced. This should be taken into account when obtaining and using color information in television systems with cameras that do not have automatic white balance.

Linearity and gamma correction.

CCDs have a high degree of linearity. In other words, the number of electrons collected in a pixel is strictly proportional to the number of photons that hit the CCD.

The “linearity” parameter is closely related to the “dynamic range” parameter.

The dynamic range, as a rule, can significantly exceed the linearity range if the system provides hardware or further software correction of the device’s operation in the nonlinear region. Usually, a signal with a deviation from linearity of no more than 10% can be easily corrected.

The situation is completely different in the case of photographic emulsions. Emulsions have a complex dependence of reaction to light and, at best, allow achieving a photometric accuracy of 5% and only in part of their already narrow dynamic range. CCDs, on the other hand, are linear with an accuracy of up to 0.1% in almost the entire dynamic range.

This makes it relatively easy to eliminate the influence of non-uniformity of sensitivity across the field. In addition, CCDs are positionally stable. The position of an individual pixel is strictly fixed during the manufacture of the device.

The kinescope (picture tube) in the monitor has a power-law dependence of brightness on the signal (with an exponent of 2.2), which reduces contrast in dark areas and increases it in bright ones; at the same time, as has already been noted, modern CCD matrices produce a linear signal. To compensate for the overall nonlinearity, a gamma corrector is usually built into the camera, which pre-distorts the signal with an exponent of 1/2.2, i.e. 0.45.

Some cameras offer a choice of pre-distortion coefficient, for example, the 0.60 option leads to a subjective increase in contrast, which gives the impression of a “sharper” picture.

A side effect is that gamma correction means additional amplification of weak signals (in particular, noise), i.e. the same camera with G=0.4 enabled will be approximately four times “more sensitive” than with G=1. However, we remind you once again that no amplifier can increase the signal-to-noise ratio.
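A minimal sketch of such gamma pre-correction (Python), assuming the signal has been normalised to the 0..1 range; the example levels are arbitrary:

    # Gamma pre-correction: raise the normalised linear CCD signal to the power
    # 1/2.2 (about 0.45) so that the monitor's 2.2 power-law response is cancelled.
    def gamma_correct(linear_level, gamma=1 / 2.2):
        level = min(max(linear_level, 0.0), 1.0)   # clamp to the 0..1 range
        return level ** gamma

    for level in (0.01, 0.1, 0.5, 1.0):
        print(level, round(gamma_correct(level), 3))   # weak signals are boosted the most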

Charge spread.

The maximum number of electrons that can accumulate in a pixel is limited. For matrices of average manufacturing quality and typical sizes, this value is usually 200,000 electrons. And if the total number of photons during the exposure (frame) reaches the limit (200,000 or more with a quantum yield of 90% or more), then the charge packet will begin to flow into neighboring pixels. Image details begin to merge.

The effect is enhanced when the “extra” light flux not absorbed by the thin body of the crystal is reflected from the substrate-base. At light fluxes within the dynamic range, the photons do not reach the substrate, they are almost all (at a high quantum yield) transformed into photoelectrons.

However, near the upper limit of the dynamic range, saturation occurs and untransformed photons begin to “wander” around the crystal, mainly maintaining the direction of the initial entry into the crystal.

Most of these photons reach the substrate, are reflected and thus increase the probability of subsequent transformation into photoelectrons, oversaturating the charge packets already located at the spreading boundary.

However, if an absorbing layer, a so-called anti-blooming coating, is applied to the substrate, the spreading effect is greatly reduced. Many modern matrices produced using new technologies have anti-blooming protection, which is one of the components of the backlight compensation system.

Stability and photometric accuracy.

Even the most sensitive CCD video cameras are useless for low-light applications if they have unstable sensitivity. Stability is an inherent property of a CCD as a solid-state device.

Here, first of all, we mean the stability of sensitivity over time. Temporal stability is verified by measuring fluxes from special stabilized radiation sources.

It is determined by the stability of the quantum yield of the matrix itself and the stability of the electronic system for reading, amplifying and recording the signal. This resulting stability of the video camera is the main parameter in determining the photometric accuracy, i.e. the accuracy of measuring the recorded light signal.

For good matrix samples and a high-quality electronic system, the photometric accuracy can reach 0.4 – 0.5%, and in some cases, under optimal matrix operating conditions and using special signal processing methods, 0.02%.

The resulting photometric accuracy is determined by several main components:

  • temporal instability of the system as a whole;
  • spatial non-uniformity of sensitivity and, above all, high-frequency non-uniformity (i.e. from pixel to pixel);
  • the magnitude of the quantum efficiency of the video camera;
  • the accuracy of video signal digitization for digital video cameras;
  • the magnitude of noise of different types.

Even if the CCD matrix has large non-uniformities in sensitivity, their influence on the resulting photometric accuracy can be reduced by special methods of signal processing, if, of course, these non-uniformities are stable over time.

On the other hand, if the matrix has a high quantum efficiency but poor stability, the resulting accuracy of registration of the useful signal will be low.

In this sense, for unstable devices the accuracy of registration of the useful signal (the photometric accuracy) is a more important characteristic than the signal-to-noise ratio.

 
