THE IMAGING PROCESS

The imaging process describes how the scene is imaged by the optics onto the sensing device (the camera plane). To a first approximation the light rays pass through the lens focus and produce an inverted image on the camera plane. An object point (x, y, z) in the real world is mapped to the point (x', y') on the camera plane such that:

x' = x · f / z
y' = y · f / z

where f is the lens focal length. This formula does not take into account distortions and other second-order effects.
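The projection formula can be sketched in a few lines of Python; the focal length and the scene point below are illustrative values, not taken from the text.

```python
# Ideal (pinhole-style) projection: x' = x * f / z, y' = y * f / z.
# Distortions and other second-order effects are ignored, as in the text.

def project(x, y, z, f):
    """Map a world point (x, y, z) to camera-plane coordinates (x', y')."""
    return (x * f / z, y * f / z)

# A point 2 m in front of a lens with f = 50 mm (0.05 m):
xp, yp = project(0.4, 0.2, 2.0, 0.05)
print(xp, yp)  # ≈ 0.01 m, 0.005 m
```

Note that the image on the camera plane is inverted; the sign convention above simply folds the inversion into the coordinates.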

The sensing device then produces an electrical signal S which depends on the intensity I of the light striking it.

S = F(I)

A digital image is formed in which the light intensity over a small area is transformed into the value of a pixel (picture element). The digital image therefore has an imaging resolution corresponding to the size (or angle) of the scene that maps to a single pixel.

For example, a digital scanner's resolution is expressed in mm per pixel, or in dots per inch (1 inch = 25.4 mm), i.e., pixels per inch. 300 dpi means about 12 pixels per millimeter, that is, a pixel of about 0.085 mm. A 20 cm image becomes a digital image of about 2360 pixels.
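The resolution arithmetic can be checked directly; the 300 dpi and 20 cm figures are the ones used in the text.

```python
# Scanner-resolution arithmetic (1 inch = 25.4 mm).

DPI = 300
MM_PER_INCH = 25.4

pixels_per_mm = DPI / MM_PER_INCH      # ~11.8 pixels per millimeter
pixel_size_mm = MM_PER_INCH / DPI      # ~0.085 mm per pixel
width_pixels = 200 * pixels_per_mm     # a 20 cm (200 mm) scan: ~2362 pixels
print(pixels_per_mm, pixel_size_mm, width_pixels)
```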

The dependence of the pixel value on the light intensity is not uniform, and a calibration is necessary to relate the pixel intensities to a common reference value R:

P(x,y) = a · (S(x,y) - R(x,y))

In fact there are also second-order effects: the reference value depends not only on the pixel position but also on the light intensity. Likewise the coefficient a depends on the pixel and on the light intensity. Furthermore, these quantities vary with the wavelength of the light.
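A minimal sketch of the calibration P(x,y) = a · (S(x,y) - R(x,y)), ignoring the second-order dependencies on intensity and wavelength just mentioned; the gain a and the readings are illustrative numbers.

```python
# Per-pixel calibration: subtract the reference value R(x,y) from the
# sensor signal S(x,y) and scale by a gain coefficient a.

def calibrate(S, R, a):
    """Apply reference subtraction and gain, pixel by pixel."""
    return [[a * (s - r) for s, r in zip(srow, rrow)]
            for srow, rrow in zip(S, R)]

S = [[120, 130], [125, 128]]  # raw sensor readings
R = [[20, 22], [21, 20]]      # per-pixel reference values
P = calibrate(S, R, a=0.5)
print(P)  # [[50.0, 54.0], [52.0, 54.0]]
```

A real calibration would make both a and R functions of the intensity and wavelength, as the text notes.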


Mathematically, a digital image is a map from a domain D in R^n to R (a scalar image) or to R^m (a multidimensional image).

By subsampling the signal, the domain of the image is reduced to a discrete finite set of points in R^n (an n-dimensional "lattice").
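Subsampling can be illustrated in one dimension: a continuous signal is kept only at a finite set of lattice points. The cosine profile below is an illustrative choice, not from the text.

```python
# Subsample a continuous 1-D signal onto a finite lattice of n points
# spaced by `step`, starting at `start`.
import math

def sample(f, start, step, n):
    """Evaluate f at the lattice points start, start + step, ..."""
    return [f(start + k * step) for k in range(n)]

samples = sample(math.cos, 0.0, math.pi / 2, 5)
print(samples)
```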

The pixel values are encoded with a finite number of bits: the image values are therefore quantized. For example, grey-level images have values in the set {0, ..., 255}, i.e., they are quantized to 256 values. This also implies a saturation effect: pixel values below the minimum of the set are set to the minimum, and those above the maximum are set to the maximum.

The image tessellation defines how each pixel is connected with its neighboring pixels (the image topology): it determines the neighborhood and the connectivity. Usual choices are 4-neighbor and 8-neighbor connectivity on a square lattice. Other lattices (e.g., hexagonal) have different connectivity.
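The two usual square-lattice neighborhoods can be written out explicitly; bounds checking at the image border is omitted for brevity.

```python
# 4-neighbor and 8-neighbor sets of a pixel on a square lattice.
# Coordinates are (row, column).

def neighbors4(r, c):
    """Pixels sharing an edge with (r, c)."""
    return [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]

def neighbors8(r, c):
    """Pixels sharing an edge or a corner with (r, c)."""
    return [(r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)]

print(len(neighbors4(5, 5)), len(neighbors8(5, 5)))  # 4 8
```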

The image representation denotes how the pixel values are encoded in bits.

Marco Corvi - Page hosted by geocities.com.