Corners are image points whose local luminosity variation is characterized by a spatial structure such as a 'T', 'X', or 'L' junction.
The cornericity is defined for edge points as the amount of spatial "structure", i.e., the rate at which the edge changes its direction. We can use the derivative of the gradient direction along the edge as a measure of the cornericity. If G is the image and T = atan(DyG / DxG) is the gradient direction, the edge has tangent vector (-DyG, DxG). Therefore,
K = Dedge T = -DyG Tx + DxG Ty
  = (DxxG (DyG)^2 - 2 DxyG DyG DxG + DyyG (DxG)^2) / ((DxG)^2 + (DyG)^2)
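As a sketch, this second-order measure can be evaluated over a whole image with NumPy finite differences; the function name and the eps regularizer (which avoids division by zero away from edges) are illustrative choices, not part of the original derivation:

```python
import numpy as np

def cornericity_2nd_order(G, eps=1e-12):
    """Second-order cornericity (a sketch; derivatives via finite differences).

    K = (Gxx*Gy^2 - 2*Gxy*Gx*Gy + Gyy*Gx^2) / (Gx^2 + Gy^2)
    """
    G = G.astype(float)
    Gy, Gx = np.gradient(G)        # np.gradient returns d/daxis0 (y), d/daxis1 (x)
    Gxy, Gxx = np.gradient(Gx)     # derivatives of Gx: d/dy, d/dx
    Gyy, _ = np.gradient(Gy)       # derivative of Gy along y
    num = Gxx * Gy**2 - 2 * Gxy * Gx * Gy + Gyy * Gx**2
    den = Gx**2 + Gy**2
    return num / (den + eps)       # eps keeps the ratio finite where the gradient vanishes
```

In practice the image would be smoothed first, since the second derivatives amplify noise.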
This measure of the cornericity has a number of problems, chief among them that it relies on second-order derivatives of the image, which are very sensitive to noise, and that it is only defined at edge points where the gradient does not vanish. Therefore, alternative definitions, based on first-order derivatives only, have been proposed.
The cornericity based on the pseudo-Hessian is K = det(H) / tr(H), where the matrix H is

H = | <(DxG)^2>   <DxG DyG> |
    | <DxG DyG>   <(DyG)^2> |

with <.> denoting an average over a neighborhood of the point (without this averaging det(H) would be identically zero).
If H1 and H2 are the eigenvalues of H, then K = H1 H2 / (H1 + H2).
A large value of K indicates that the gradient has no single dominant direction, as happens at a corner. On the other hand, when the gradient points in one given direction H is (almost) singular and K is small.
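A minimal sketch of the pseudo-Hessian measure in NumPy follows; the window radius r and the simple box average are illustrative assumptions (any local smoothing of the products of first derivatives would do):

```python
import numpy as np

def box_avg(A, r):
    """Average over a (2r+1)x(2r+1) window, via edge padding and summed shifts."""
    P = np.pad(A, r, mode='edge')
    out = np.zeros_like(A, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += P[dy:dy + A.shape[0], dx:dx + A.shape[1]]
    return out / (2 * r + 1) ** 2

def cornericity_pseudo_hessian(G, r=2, eps=1e-12):
    """K = det(H)/tr(H) with H the window-averaged matrix of gradient products."""
    Gy, Gx = np.gradient(G.astype(float))
    a = box_avg(Gx * Gx, r)        # <(DxG)^2>
    b = box_avg(Gx * Gy, r)        # <DxG DyG>
    c = box_avg(Gy * Gy, r)        # <(DyG)^2>
    det = a * c - b * b
    tr = a + c
    return det / (tr + eps)        # = H1*H2 / (H1 + H2)
```

On a synthetic step-corner image, K is strictly positive near the corner and essentially zero along a straight edge, matching the discussion above.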