CORNER DETECTION

Corners are image points where the local luminosity variation has a spatial structure like a junction of type 'T', 'X', or 'L'.

The cornericity is defined at edge points as the amount of spatial "structure", i.e., how much the edge changes its direction. The derivative of the gradient direction along the edge can be used as a measure of the cornericity. If G is the image and T = atan(DyG / DxG) is the gradient direction, the edge has tangent vector (-DyG, DxG). Therefore,

K = Dedge T = -DyG Tx + DxG Ty
  = ( DxxG (DyG)^2 - 2 DxyG DyG DxG + DyyG (DxG)^2 ) / ( (DxG)^2 + (DyG)^2 )
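The formula above can be sketched in code. This is only an illustration, assuming NumPy with central-difference derivatives via np.gradient; the function name and the small epsilon guarding the denominator are choices made here, not part of the original text.

```python
import numpy as np

def cornericity_curvature(G):
    """Second-derivative cornericity K at every pixel (illustrative sketch)."""
    # np.gradient on a 2-D array returns (d/dy, d/dx): axis 0 is rows (y).
    Gy, Gx = np.gradient(G.astype(float))
    # Second derivatives, obtained by differentiating the first ones.
    Gxy, Gxx = np.gradient(Gx)   # d(DxG)/dy, d(DxG)/dx
    Gyy, _   = np.gradient(Gy)   # d(DyG)/dy
    num = Gxx * Gy**2 - 2.0 * Gxy * Gy * Gx + Gyy * Gx**2
    den = Gx**2 + Gy**2
    # Guard against division by zero where the gradient vanishes.
    return np.where(den > 1e-12, num / np.maximum(den, 1e-12), 0.0)
```

Note that the denominator is zero wherever the gradient vanishes, so the measure is only meaningful on edge points.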

This measure of the cornericity has a number of problems: the second-order derivatives it requires are very sensitive to noise, and the ratio is undefined where the gradient vanishes. Therefore, alternative definitions based on first-order derivatives only have been proposed.

The cornericity based on the pseudo-hessian is K = det(H) / tr(H), where the matrix H is

H = | (DxG)^2    DxG DyG |
    | DxG DyG    (DyG)^2 |

In practice the entries of H are averaged over a small window around the point; computed at a single pixel, H is rank-one and det(H) vanishes identically.

If H1 and H2 are the eigenvalues of H, then

K = H1 H2 / ( H1 + H2 )

A large value of K denotes a gradient without any single dominant direction (both eigenvalues are large). On the other hand, when the gradient points in a fixed direction, H is (almost) singular and K is small.
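A minimal sketch of this first-derivative detector, assuming NumPy; the gradient scheme, the box window of half-width `win` used to average the entries of H, and the epsilon guarding the trace are illustrative choices, not part of the original text.

```python
import numpy as np

def cornericity_pseudo_hessian(G, win=2):
    """Cornericity K = det(H) / tr(H) from the window-averaged pseudo-hessian."""
    Gy, Gx = np.gradient(G.astype(float))

    def box_mean(A):
        # Average A over a (2*win+1) x (2*win+1) window (edge-padded).
        k = 2 * win + 1
        P = np.pad(A, win, mode='edge')
        out = np.zeros_like(A)
        for i in range(A.shape[0]):
            for j in range(A.shape[1]):
                out[i, j] = P[i:i + k, j:j + k].mean()
        return out

    # Entries of H, averaged over the window so that H is not rank-one.
    Hxx = box_mean(Gx * Gx)
    Hyy = box_mean(Gy * Gy)
    Hxy = box_mean(Gx * Gy)

    det = Hxx * Hyy - Hxy**2   # product of the eigenvalues H1 * H2
    tr  = Hxx + Hyy            # sum of the eigenvalues H1 + H2
    return np.where(tr > 1e-12, det / np.maximum(tr, 1e-12), 0.0)
```

On a synthetic step image, K is large at the 'L' corner of the step and (near) zero on its straight edges, which matches the eigenvalue argument above.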



Marco Corvi