Next: Laplacian of Gaussian (LoG) Up: gradient Previous: Edge Detection

The Laplace Operator

The Laplace operator is a scalar operator defined as the dot product (inner product) of the gradient vector operator with itself:

\begin{displaymath}
\bigtriangleup =\bigtriangledown \cdot \bigtriangledown
=\left[ \begin{array}{c} \partial/\partial x_1 \\ \vdots \\ \partial/\partial x_N
\end{array} \right]
\cdot
\left[ \begin{array}{c} \partial/\partial x_1 \\ \vdots \\ \partial/\partial x_N
\end{array} \right]
=\sum_{n=1}^N \frac{\partial^2}{\partial x_n^2}
\end{displaymath}

In $N=2$ dimensional space, we have:

\begin{displaymath}
\bigtriangleup =\bigtriangledown\cdot\bigtriangledown
=\left[ \begin{array}{c} \partial/\partial x \\ \partial/\partial y
\end{array} \right]
\cdot
\left[ \begin{array}{c} \partial/\partial x \\ \partial/\partial y
\end{array} \right]
=\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}
\end{displaymath}

When applied to a 2-D function $f(x,y)$, this operator produces a scalar function:

\begin{displaymath}\bigtriangleup f(x,y)
=\frac{\partial^2 f}{\partial x^2}+\frac{\partial^2 f}{\partial y^2}
\end{displaymath}
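As a sanity check, the continuous Laplacian can be approximated numerically by central differences; the sketch below does this for an illustrative test function of our own choosing (for $f(x,y)=x^2+y^2$ the Laplacian is exactly $2+2=4$ everywhere).

```python
# Approximate the continuous Laplacian by central differences
# (an illustrative sketch; the test function f is our own choice).

def laplacian_2d(f, x, y, h=1e-4):
    """Approximate (d^2f/dx^2 + d^2f/dy^2) at (x, y) by central differences."""
    d2x = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    d2y = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    return d2x + d2y

# For f(x, y) = x^2 + y^2 the Laplacian is 4 everywhere.
f = lambda x, y: x**2 + y**2
print(laplacian_2d(f, 1.5, -0.7))  # close to 4
```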

In the discrete case, the second order differentiation becomes the second order difference. In the 1-D case, if the first order difference is defined as

\begin{displaymath}
\bigtriangledown f[n]=f[n+1]-f[n]
\end{displaymath}

then the second order difference is
\begin{eqnarray*}
\bigtriangleup f[n]=\bigtriangledown\;(\bigtriangledown f[n])
&=& \bigtriangledown f[n]-\bigtriangledown f[n-1] \\
&=& (f[n+1]-f[n])-(f[n]-f[n-1])=f[n+1]-2f[n]+f[n-1]
\end{eqnarray*}

Note that $\bigtriangleup f[n]$ is defined so that it is symmetric about the center element $f[n]$. The Laplace operation can be carried out by 1-D convolution with the kernel $[1, -2, 1]$.
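The 1-D second order difference can be sketched directly; the step signal below is an illustrative choice of our own (interior samples only, no boundary handling):

```python
# A minimal sketch of the 1-D discrete Laplacian: each interior output
# sample is f[n+1] - 2 f[n] + f[n-1], i.e. convolution with [1, -2, 1].

def second_difference(f):
    """Second order difference of a 1-D sequence (interior samples only)."""
    return [f[n + 1] - 2 * f[n] + f[n - 1] for n in range(1, len(f) - 1)]

# A step edge: the second difference is zero on the flat parts and has a
# positive/negative pair straddling the edge.
signal = [0, 0, 0, 1, 1, 1]
print(second_difference(signal))  # [0, 1, -1, 0]
```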

In the 2-D case, the Laplace operator is the sum of the second order differences in the two dimensions:

\begin{eqnarray*}
\bigtriangleup f[m,n] &=& \bigtriangleup_m[f[m,n]]+\bigtriangleup_n[f[m,n]] \\
&=& f[m+1,n]-2f[m,n]+f[m-1,n]+f[m,n+1]-2f[m,n]+f[m,n-1] \\
&=& f[m+1,n]+f[m-1,n]+f[m,n+1]+f[m,n-1]-4f[m,n]
\end{eqnarray*}

This operation can be carried out by convolution with the 2-D kernel:

\begin{displaymath}
\left[ \begin{array}{rrr} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0
\end{array} \right]
\end{displaymath}

Other Laplace kernels can also be used, for example:

\begin{displaymath}
\left[ \begin{array}{rrr} 1 & 1 & 1 \\ 1 & -8 & 1 \\ 1 & 1 & 1
\end{array} \right]
\end{displaymath}

We see that these Laplace kernels are actually the same as the high-pass filtering kernels discussed before.
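Both kernels can be applied by direct 2-D convolution; the sketch below is a minimal pure-NumPy version (valid region only, no padding, which is an illustrative choice of ours). On a linear ramp image both kernels give zero, since all second differences of a linear function vanish.

```python
import numpy as np

# The two Laplace kernels from the text.
K4 = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])
K8 = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]])

def convolve2d_valid(f, k):
    """Apply a 3x3 kernel k to image f over the valid (interior) region.
    These kernels are symmetric, so correlation equals convolution."""
    rows, cols = f.shape
    out = np.zeros((rows - 2, cols - 2))
    for i in range(rows - 2):
        for j in range(cols - 2):
            out[i, j] = np.sum(f[i:i + 3, j:j + 3] * k)
    return out

# A linear ramp image: both Laplace kernels respond with zero everywhere.
ramp = np.arange(25, dtype=float).reshape(5, 5)
print(convolve2d_valid(ramp, K4))
print(convolve2d_valid(ramp, K8))
```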

The gradient operation is an effective detector for sharp edges, where the pixel gray levels change very rapidly over space. But when the gray levels change slowly from dark to bright (red in the figure below), the gradient operation produces a very wide edge (green in the figure). In this case it is helpful to use the Laplace operation: the second order derivative of the wide edge (blue in the figure) has a zero crossing in the middle of the edge. The location of the edge can therefore be obtained by detecting the zero-crossings of the second order difference of the image.

fat_edge.gif

fat_edge_negate.gif

One dimensional example:

fat_edge_detection_1d.gif
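The 1-D behavior can also be sketched numerically; the slow-edge signal below is an illustrative choice of our own. The first difference is nonzero across the whole transition (a wide response), while the second difference changes sign exactly once, at the middle of the edge.

```python
# A 1-D sketch of the slow edge: first difference (gradient) vs. second
# difference (Laplacian) on a gradual ramp from 0 to 4.

signal = [0, 0, 1, 2, 3, 4, 4, 4]                           # a slow edge
grad = [signal[n + 1] - signal[n] for n in range(len(signal) - 1)]
lap = [grad[n] - grad[n - 1] for n in range(1, len(grad))]

print(grad)  # [0, 1, 1, 1, 1, 0, 0] -- wide response across the ramp
print(lap)   # [1, 0, 0, 0, -1, 0]   -- a +/- pair bracketing the edge
```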

In the two-dimensional example below, the image is on the left; the two Laplace kernels generate two similar results, with zero-crossings, on the right:

fat_edge_detection_2d.gif

Edge detection by Laplace operator followed by zero-crossing detection:

forest_laplace.gif

If both polarities exist in the neighborhood (3x3, 5x5, 7x7, etc.) of a given pixel, i.e., there are pixel values both greater than and smaller than zero, then the pixel is a zero-crossing. (Note that the pixel in question does not itself have to be zero.) Specifically, we find the maximum and minimum among all pixels in the neighborhood of the pixel under consideration. If the maximum is greater than zero and the minimum is smaller than zero, the pixel is a zero-crossing.

Due to the existence of random noise, some false zero-crossings may be detected. In this case we check whether the difference between the maximum and the minimum is greater than a threshold value. If so, the pixel is on an edge; otherwise the zero-crossing is assumed to be caused by noise and is suppressed.

zerocrossing_pseudocode.gif
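The zero-crossing test with the noise threshold can be sketched as follows; the 3x3 neighborhood size, the threshold value, and the test image are illustrative choices of our own:

```python
import numpy as np

def zero_crossings(lap, threshold=0.0):
    """Detect zero-crossings in a Laplacian image using 3x3 neighborhoods:
    a pixel is marked if its neighborhood contains both polarities and the
    max-min spread exceeds the noise threshold."""
    rows, cols = lap.shape
    edges = np.zeros((rows, cols), dtype=bool)
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            window = lap[i - 1:i + 2, j - 1:j + 2]
            lo, hi = window.min(), window.max()
            # both polarities present, and spread above the noise threshold
            if lo < 0 < hi and hi - lo > threshold:
                edges[i, j] = True
    return edges

# Laplacian of a vertical step edge: a +/- column pair; the detector marks
# the pixels between the two polarities.
lap = np.zeros((5, 5))
lap[:, 1], lap[:, 2] = 1.0, -1.0
print(zero_crossings(lap, threshold=0.5).astype(int))
```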


Ruye Wang 2018-12-12