Digital Photography Essentials #002 "Pixel Size"
note by
Dick Merrill (Foveon)
The total number of pixels in an image sensor is determined by the size of the sensor field (see Digital Photography Essentials #1) and the size of the pixel. A smaller pixel size allows more pixels to fit in a given field size, increasing resolution. However, as the pixel size becomes smaller, fundamental physical limits become increasingly important, placing a practical lower limit on pixel size.

One of the most important pixel attributes that changes with pixel size is the ability to capture photons. As shown in figures 1a and 1b, a pixel captures photons in proportion to its area. Capturing more photons in an exposure means lower noise. This is because most common light sources emit photons randomly, and the statistics of such a random process are such that the uncertainty in the number of photons in a sample equals the square root of the total number of photons captured. As figures 1a and 1b show, the signal to noise ratio therefore increases directly with pixel pitch: if the pixel pitch doubles from 4 um to 8 um, the pixel area and photon count quadruple, so the signal to noise ratio limited by the random arrival of photons (referred to as photon shot noise) doubles. Larger pixels will thus provide higher useful ISO ratings for a digital camera.

Another important pixel attribute that changes with pixel size is the ability to resolve features projected onto the focal plane by the camera optics. As shown in figure 2, optical diffraction limits the spot size that a lens can project, determined by aperture, wavelength, and focal length. Diffraction describes how light redirects its path when it encounters an obstruction, in this case the edges of the aperture. The equation in figure 2 predicts that spot size can decrease continuously as the focal length / aperture ratio (commonly known as the f/#) decreases. As a practical matter, however, resolution is limited at large apertures by aberrations.
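The shot-noise scaling described above can be illustrated with a minimal numerical sketch. The photon flux value here is an arbitrary illustrative assumption, not a figure from the text:

```python
import math

def shot_noise_snr(pixel_pitch_um, photon_flux_per_um2=100.0):
    """Photon shot-noise-limited SNR for a square pixel.

    photon_flux_per_um2 is an assumed illustrative exposure:
    photons collected per square micron during the exposure.
    """
    photons = photon_flux_per_um2 * pixel_pitch_um ** 2  # captured photons scale with area
    noise = math.sqrt(photons)                           # shot noise = sqrt(photon count)
    return photons / noise                               # SNR = sqrt(photon count)

# Doubling the pitch from 4 um to 8 um quadruples the area and the
# photon count, so the SNR doubles (square root of a 4x increase).
ratio = shot_noise_snr(8.0) / shot_noise_snr(4.0)  # 2.0
```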
Reducing aberrations to allow a smaller spot size requires more expensive and bulky lens designs. Most lenses used in good consumer cameras have a minimum resolvable spot size in the range of 3 um to 5 um. One consequence of the dependence of resolution on f/# is that small pixels will not be able to resolve additional detail in images taken at small aperture (large f/#) settings.
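The equation in figure 2 is not reproduced in this text; a standard form of the diffraction limit, assumed here for illustration, is the Airy disk diameter d = 2.44 x wavelength x f/#:

```python
def airy_disk_diameter_um(wavelength_um, f_number):
    # Diameter of the Airy disk (distance between the first minima
    # of the diffraction pattern): d = 2.44 * wavelength * (f/#).
    return 2.44 * wavelength_um * f_number

# Green light (0.55 um) at f/8 gives a spot of roughly 10.7 um,
# far larger than a 2 um pixel pitch -- stopping down erases any
# resolution advantage of very small pixels.
spot_um = airy_disk_diameter_um(0.55, 8)
```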
Once the minimum spatial resolution capability of the optical system is determined, the consequences of spatial oversampling or undersampling of the projected features must be considered. Figure 3 shows an example of spatial oversampling and undersampling. At the top of the figure is a projected white / black line pair signal with an 8 um period. This situation is often described as 125 lp/mm, or 125 line pairs per mm. It can be shown mathematically that the most efficient way to sample this spatial signal is with a 4 um pitch pixel array, as shown in the middle of figure 3. The smaller pixel pitch of 2 um in figure 3 does not add new information about the signal. On the other hand, the larger 6 um pitch pixel sampling at the bottom of figure 3 produces a completely erroneous result. This kind of incorrect data resulting from undersampling is called "spatial aliasing". In its most obvious form, spatial aliasing causes the Moire patterns that form when a digital camera images a subject whose features project onto the image plane at a spacing close to the pixel pitch. Some digital cameras place a blur filter in the optical path to suppress the high spatial frequencies that cause aliasing; however, that necessarily reduces image sharpness as well.
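The sampling example of figure 3 can be sketched numerically. The simulation below (an illustrative construction, not the figure's exact data) box-averages an 8 um period line-pair signal over pixels of different pitches:

```python
def line_pair_signal(x_um, period_um=8.0):
    # White/black line pairs: 1.0 for the first half of each period,
    # 0.0 for the second half.
    return 1.0 if (x_um % period_um) < period_um / 2 else 0.0

def sample(pitch_um, n_pixels=6, period_um=8.0, sub=100):
    """Average the signal over each pixel aperture (aperture = full pitch)."""
    out = []
    for i in range(n_pixels):
        start = i * pitch_um
        vals = [line_pair_signal(start + (j + 0.5) * pitch_um / sub, period_um)
                for j in range(sub)]
        out.append(sum(vals) / sub)
    return out

# 4 um pitch: pixels alternate 1.0 / 0.0 -- the line pairs are fully resolved.
# 6 um pitch: every pixel mixes black and white, producing a false
# low-frequency pattern (spatial aliasing) instead of the 8 um line pairs.
well_sampled = sample(4.0)
aliased = sample(6.0)
```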
Cost is another issue in determining pixel size. As discussed in Digital Photography Essentials #1, imager die cost depends on both die area and defect density. Defect density increases as the number of pixels per unit area increases, because there are more pixels that can fail per unit area. Nevertheless, to achieve a high pixel count at low cost, many manufacturers scale pixel size and die area down together (figure 4, case D), which usually results in lower cost, but at a price. Because of the relationship between signal to noise ratio and pixel area described in figures 1a and 1b, small arrays with small pixels will have noisier images. There will also be less inter-pixel contrast in the output file, because the optics are unlikely to scale down such that a feature in the scene has the same contrast over the smaller pixel pitch (appendix A illustrates this latter point).

In summary, it is clear that there are many trade-offs associated with pixel size. In principle the same issues apply to film cameras, which is why different types of film with varying ISO ratings evolved to support different photographic applications. Fine-grain film (analogous to small pixels) has high resolution capability but low ISO, while large-grain film has high ISO but poor resolution (noticeable grain when magnified). A digital camera that can switch between a small pixel mode and a large pixel mode can be as flexible as a film camera. As shown in figure 5, the most effective way to achieve multiple pixel sizes is to average together the output of a group of pixels. This averaging is much easier to accomplish if all the pixels are identical, as opposed to having an asymmetry imposed by an array of color filters.
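The pixel-averaging idea of figure 5 can be sketched as follows; the pixel values are hypothetical, chosen only to show the mechanics:

```python
def bin_pixels(values, group=4):
    """Average groups of identical pixels to emulate a larger pixel.

    Averaging N pixel values combines N photon counts, so the
    shot-noise-limited SNR improves by sqrt(N) -- e.g. a 2x2 group
    (N = 4) doubles the SNR, matching a pixel of twice the pitch.
    """
    return [sum(values[i:i + group]) / group
            for i in range(0, len(values), group)]

raw = [96, 104, 100, 100, 52, 48, 50, 50]  # hypothetical noisy counts
binned = bin_pixels(raw)                   # [100.0, 50.0]
```

Note that this straightforward averaging assumes identical neighboring pixels; with a color filter array, pixels of different colors cannot simply be averaged together, which is the asymmetry the text refers to.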