Kernel (image processing)

In image processing, a kernel, convolution matrix, or mask is a small matrix. It is used for blurring, sharpening, embossing, edge detection, and more. This is accomplished by doing a convolution between a kernel and an image.

Details

Depending on the element values, a kernel can cause a wide range of effects.

Operation                     Kernel

Identity                      [ 0  0  0 ]
                              [ 0  1  0 ]
                              [ 0  0  0 ]

Edge detection                [ -1  -1  -1 ]
(one common form)             [ -1   8  -1 ]
                              [ -1  -1  -1 ]

Sharpen                       [  0  -1   0 ]
                              [ -1   5  -1 ]
                              [  0  -1   0 ]

Box blur                      1/9 × [ 1  1  1 ]
(normalized)                        [ 1  1  1 ]
                                    [ 1  1  1 ]

Gaussian blur 3 × 3           1/16 × [ 1  2  1 ]
(approximation)                      [ 2  4  2 ]
                                     [ 1  2  1 ]

Gaussian blur 5 × 5           1/256 × [ 1   4   6   4   1 ]
(approximation)                       [ 4  16  24  16   4 ]
                                      [ 6  24  36  24   6 ]
                                      [ 4  16  24  16   4 ]
                                      [ 1   4   6   4   1 ]

Unsharp masking 5 × 5         -1/256 × [ 1   4     6   4   1 ]
(based on Gaussian blur                [ 4  16    24  16   4 ]
with amount 1 and                      [ 6  24  -476  24   6 ]
threshold 0, with                      [ 4  16    24  16   4 ]
no image mask)                         [ 1   4     6   4   1 ]

The above are just a few examples of effects achievable by convolving kernels and images.
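
For instance, here is a minimal sketch (assuming NumPy and SciPy are available, and using a random placeholder array in place of a real image) of applying two of the kernels above to a grayscale image:

import numpy as np
from scipy.ndimage import convolve

image = np.random.rand(128, 128)               # placeholder grayscale image, values in [0, 1]

sharpen = np.array([[ 0., -1.,  0.],
                    [-1.,  5., -1.],
                    [ 0., -1.,  0.]])
box_blur = np.full((3, 3), 1.0 / 9.0)          # normalized 3 x 3 box blur

sharpened = convolve(image, sharpen, mode='nearest')   # 'nearest' extends the border pixels
blurred = convolve(image, box_blur, mode='nearest')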

Origin

The origin is the position of the kernel which is above (conceptually) the current output pixel. This could be outside of the actual kernel, though usually it corresponds to one of the kernel elements. For a symmetric kernel, the origin is usually the center element.
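
As a small illustration (assuming SciPy, whose ndimage filters expose this notion through an origin parameter; the array values here are made up), shifting the origin changes which input neighborhood contributes to each output pixel:

import numpy as np
from scipy.ndimage import correlate

signal = np.array([0., 0., 0., 9., 0., 0., 0.])
kernel = np.array([1., 1., 1.]) / 3.0            # simple averaging kernel

centered = correlate(signal, kernel, origin=0)   # origin at the central kernel element
shifted = correlate(signal, kernel, origin=1)    # origin moved by one element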

Convolution

Convolution is the process of adding each element of the image to its local neighbors, weighted by the kernel. This is related to a form of mathematical convolution. Note that the matrix operation being performed, convolution, is not traditional matrix multiplication, even though both are denoted by *.

For example, if we have two three-by-three matrices, the first a kernel and the second an image piece, convolution is the process of flipping both the rows and columns of the kernel and then multiplying locally similar entries and summing. The element at coordinates [2, 2] (that is, the central element) of the resulting image would be a weighted combination of all the entries of the image matrix, with weights given by the kernel. Writing the kernel entries as k[m, n] and the image entries as p[m, n], with m and n running from 1 to 3, this central value is the sum over m and n of k[m, n] · p[4 - m, 4 - n], so each kernel entry is paired with the image entry at the position reflected through the center.

The other entries would be similarly weighted, where we position the center of the kernel on each of the boundary points of the image, and compute a weighted sum.
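
A minimal worked sketch of this description (assuming NumPy and SciPy, with made-up numeric values): the kernel is flipped along both axes, multiplied element-wise with the image patch, and the products are summed to obtain the central output value.

import numpy as np
from scipy.signal import convolve2d

kernel = np.array([[0., 1., 2.],
                   [3., 4., 5.],
                   [6., 7., 8.]])
patch = np.array([[10., 11., 12.],
                  [13., 14., 15.],
                  [16., 17., 18.]])

flipped = np.flip(kernel)                  # flip rows and columns
center = np.sum(flipped * patch)           # weighted sum for the central output pixel

# The same value falls out of a library convolution restricted to full overlap.
assert np.isclose(center, convolve2d(patch, kernel, mode='valid')[0, 0])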

The value of a given pixel in the output image is calculated by multiplying each kernel value by the corresponding input image pixel value and summing the results. This can be described algorithmically with the following pseudo-code:

for each image row in input image:
   for each pixel in image row:

      set accumulator to zero

      for each kernel row in kernel:
         for each element in kernel row:

            if element position corresponds* to a valid pixel position then
               multiply element value by corresponding* pixel value
               add result to accumulator
            endif

      set output image pixel to accumulator

*corresponding input image pixels are found relative to the kernel's origin.
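
A runnable sketch of this pseudo-code (assuming NumPy for array storage; the kernel origin is taken to be its central element and, as in the pseudo-code, kernel elements that fall outside the input image are skipped). As the following paragraphs note, a non-symmetric kernel has to be flipped before being passed to such a routine:

import numpy as np

def apply_kernel(image, kernel):
    img_h, img_w = image.shape
    k_h, k_w = kernel.shape
    oy, ox = k_h // 2, k_w // 2              # kernel origin: the central element
    output = np.zeros((img_h, img_w))
    for y in range(img_h):                   # for each image row in input image
        for x in range(img_w):               # for each pixel in image row
            acc = 0.0                        # set accumulator to zero
            for ky in range(k_h):            # for each kernel row in kernel
                for kx in range(k_w):        # for each element in kernel row
                    # corresponding input pixel, found relative to the kernel's origin
                    iy, ix = y + ky - oy, x + kx - ox
                    if 0 <= iy < img_h and 0 <= ix < img_w:
                        acc += kernel[ky, kx] * image[iy, ix]
            output[y, x] = acc               # set output image pixel to accumulator
    return output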

If the kernel is symmetric, place the center (origin) of the kernel on the current pixel; the kernel then also overlaps the neighboring pixels. Multiply each kernel element by the pixel value it overlaps and add up all of the resulting products. The resulting sum becomes the new value of the current pixel, the one under the center of the kernel.

If the kernel is not symmetric, it has to be flipped around both its horizontal and vertical axes before the convolution is calculated as above.[1]
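
That flip is a one-liner in array libraries; for example, a sketch assuming NumPy:

import numpy as np

kernel = np.array([[1., 2., 3.],
                   [4., 5., 6.],
                   [7., 8., 9.]])
flipped = np.flip(kernel)        # reversed along both the horizontal and vertical axes
assert np.array_equal(flipped, kernel[::-1, ::-1])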

Edge Handling

Kernel convolution usually requires values from pixels outside of the image boundaries. There are a variety of methods for handling image edges.

Extend
The nearest border pixels are conceptually extended as far as necessary to provide values for the convolution. Corner pixels are extended in 90° wedges. Other edge pixels are extended in lines.
Wrap
The image is conceptually wrapped (or tiled) and values are taken from the opposite edge or corner.
Mirror
The image is conceptually mirrored at the edges. For example, attempting to read a pixel 3 units outside an edge reads one 3 units inside the edge instead.
Crop
Any pixel in the output image which would require values from beyond the edge is skipped. This method can result in the output image being slightly smaller, with the edges having been cropped.
Kernel Crop
Any pixel in the kernel that extends past the input image is not used, and the normalization is adjusted to compensate.
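
A hedged sketch (assuming NumPy; the mode names below are NumPy's, not the names used above) of how the first three strategies can be realized by padding the image before convolving:

import numpy as np

image = np.arange(16, dtype=float).reshape(4, 4)   # tiny placeholder image
pad = 1                                            # enough padding for a 3 x 3 kernel

extended = np.pad(image, pad, mode='edge')         # Extend: repeat the nearest border pixels
wrapped = np.pad(image, pad, mode='wrap')          # Wrap: take values from the opposite edge
mirrored = np.pad(image, pad, mode='reflect')      # Mirror: reflect about the edge pixels

# Crop corresponds to computing the convolution only where the kernel fits
# entirely inside the image, which yields a slightly smaller output.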

Normalization

Normalization is defined as the division of each element in the kernel by the sum of all kernel elements, so that the sum of the elements of a normalized kernel is one. This will ensure the average pixel in the modified image is as bright as the average pixel in the original image.
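
For example, a minimal sketch (assuming NumPy) of normalizing the 3 × 3 Gaussian approximation from the table above:

import numpy as np

kernel = np.array([[1., 2., 1.],
                   [2., 4., 2.],
                   [1., 2., 1.]])
normalized = kernel / kernel.sum()   # each element divided by 16, so the elements now sum to 1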

References

  • Ludwig, Jamie (n.d.). Image Convolution (pdf). Portland State University.
  • Lecarme, Olivier; Delvare, Karine (January 2013). The Book of GIMP: A Complete Guide to Nearly Everything. No Starch Press. p. 429. ISBN 978-1593273835.
  • Gumster, Jason van; Shimonski, Robert (March 2012). GIMP Bible. Wiley. pp. 438–442. ISBN 978-0470523971.
  • Shapiro, Linda G.; Stockman, George C. (February 2001). Computer Vision. Prentice Hall. pp. 53–54. ISBN 978-0130307965.