
Authors: F.K. Achilova, G.Z. Toyirova
14.04.2018 13:36

IMAGE RESTORATION IN THE PRESENCE OF NOISE BY SPATIAL FILTERING

F.K.Achilova, G.Z.Toyirova

 Karshi Branch TIUT, Uzbekistan

 

INTRODUCTION

Images formed by various optoelectronic systems and recorded by various receivers are distorted by interference of various kinds. Distortions are introduced by every component of the imaging device, starting with the illumination system (for example, uneven illumination of the object). Distortions introduced by the optical system are known at the design stage and are called aberrations. Distortions introduced by electronic radiation receivers, such as CCD matrices, are called electronic noise. Interference hampers both visual analysis of the image and its automatic processing.

Interference is attenuated by filtering. During filtering, the brightness (signal) of each point of the original image distorted by interference is replaced by another brightness value, one judged to be the least distorted by the interference. To perform such filtering, principles of transformation must be developed; they rest on the fact that the intensity of the image varies with the spatial coordinates more slowly than the interference function does. In other cases, on the contrary, a sharp change in brightness is the sign of the useful signal.

 

MAIN PART

Neighborhood processing of an image consists of the following steps:

1. selecting a center point (x, y);

2. performing an operation that uses only the values of the pixels in a predefined neighborhood around the center point;

3. assigning the result of this operation as the "response" of the process at that point;

4. repeating the whole process for every point of the image.

As the center point moves, new neighborhoods are formed, one for each pixel of the image. The terms neighborhood processing and spatial filtering are commonly used for this procedure, the latter being the more common. As will be explained in the next section, if the operations performed on the neighborhood pixels are linear, the whole procedure is called linear spatial filtering (sometimes also spatial convolution); otherwise it is called nonlinear spatial filtering.
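The four steps above can be sketched in pure Python (the article itself works with MATLAB's IPT; the function names here are illustrative only, not part of any toolbox). The operation passed in is the median, a typical nonlinear neighborhood operation, and borders are handled by replicating the nearest edge pixel:

```python
# Illustrative sketch of neighborhood processing on a grayscale image
# stored as a list of lists. For each center point (x, y) an operation
# is applied to its 3x3 neighborhood and the result becomes the
# "response" at that point.

def process_neighborhoods(image, operation):
    rows, cols = len(image), len(image[0])

    def pixel(x, y):
        # replicate border pixels for out-of-range coordinates
        x = min(max(x, 0), rows - 1)
        y = min(max(y, 0), cols - 1)
        return image[x][y]

    result = [[0] * cols for _ in range(rows)]
    for x in range(rows):              # step 4: repeat for every point
        for y in range(cols):          # step 1: choose the center (x, y)
            neighborhood = [pixel(x + i, y + j)
                            for i in (-1, 0, 1) for j in (-1, 0, 1)]
            result[x][y] = operation(neighborhood)   # steps 2-3
    return result

# Example nonlinear operation: the median of the neighborhood.
def median(values):
    s = sorted(values)
    return s[len(s) // 2]

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
print(process_neighborhoods(img, median))
# -> [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```

Note how the isolated value 9 (a single-pixel impulse) disappears from the output: this is exactly why nonlinear median filtering is effective against point-like noise.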

 

Linear spatial filtration

The linear operations considered in this chapter consist of multiplying each pixel of the neighborhood by a corresponding coefficient and summing these products to obtain the resulting response of the process at each point (x, y). If the neighborhood has size m×n, then mn coefficients are required. These coefficients are grouped into a matrix called a filter, a mask, a filter mask, a kernel, a template, or a window, the first three terms being the most common. When convolution is meant (a case discussed shortly), we will also use the terms convolution filter, convolution mask, or convolution kernel.


The process consists in moving the center of the filter mask w from point to point of the image f. At each point (x, y) the response of the filter is the sum of the products of the filter coefficients and the corresponding neighborhood pixels covered by the mask. For a mask of size m×n it is usually assumed that m = 2a + 1 and n = 2b + 1, where a and b are nonnegative integers; that is, attention is focused on masks of odd dimensions, the smallest meaningful size being 3×3 (the 1×1 mask is excluded as trivial). Preferring odd-sized masks is fully justified, since such a mask has a well-defined center point.


There are two closely related concepts that must be well understood when performing linear spatial filtering. The first is correlation, the second is convolution. Correlation is the process of passing the mask w over the image. Mechanically, convolution is performed in the same way, except that w must first be rotated by 180° before being passed over the image. These two concepts are best explained by simple examples.

To perform the convolution, rotate the mask w by 180° and align its rightmost element with the beginning of the function. The sliding computation then proceeds exactly as in the derivation of the correlation.

The function is a discrete unit impulse: it equals 1 at a single coordinate and 0 everywhere else. From the example it is clear that the convolution simply "copies" w to the position of the unit impulse. This copy property (called the shift property) is a fundamental concept of linear system theory, and it explains why one of the functions must be rotated by 180° when performing convolution. Note that permuting the order of the functions in a convolution yields the same result, which is not true of correlation. If the shifted function is symmetric, then correlation and convolution obviously give the same results.
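The copy (shift) property can be checked with a minimal one-dimensional sketch in Python (illustrative code, not the IPT; zero padding is assumed at the borders, and the function names are inventions of this sketch):

```python
# correlate1d slides the mask over the signal; convolve1d does the
# same after rotating the mask by 180 degrees (reversal, in 1-D).

def correlate1d(signal, mask):
    a = len(mask) // 2            # mask is assumed to have odd length
    padded = [0] * a + list(signal) + [0] * a
    return [sum(mask[k] * padded[x + k] for k in range(len(mask)))
            for x in range(len(signal))]

def convolve1d(signal, mask):
    return correlate1d(signal, mask[::-1])   # 180-degree rotation

impulse = [0, 0, 1, 0, 0]        # discrete unit impulse
w = [1, 2, 3]

print(convolve1d(impulse, w))    # -> [0, 1, 2, 3, 0]: w is copied
print(correlate1d(impulse, w))   # -> [0, 3, 2, 1, 0]: w rotated 180°
```

Convolution reproduces w at the impulse position, while correlation reproduces a rotated copy, which is precisely the difference described above.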


To perform the convolution in two dimensions, first rotate w(x, y) by 180° and then follow the same procedure as for the correlation. As in the one-dimensional example, the convolution gives the same result regardless of which of the two functions is shifted, whereas for correlation the order of the functions matters. In the IPT package it is always the filter mask that moves when these procedures are implemented. Note also that the results of correlation and convolution are obtained from each other by a 180° rotation. This is not surprising, since convolution is nothing more than correlation with a rotated filter mask.

In the IPT package, linear spatial filtering is implemented by the imfilter function, which has the following syntax:

>> g = imfilter(f, w, filtering_mode, boundary_options, size_options)

where f is the input image, w is the filter mask, and g is the filtering result. The filtering_mode parameter determines whether the filter performs correlation ('corr') or convolution ('conv'). The boundary_options parameter controls how the image borders are extended, the size of the extension being determined by the size of the filter.
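As an illustration of the 'replicate' boundary option, here is a small Python sketch (illustrative only; the IPT performs this internally, and the helper name is an assumption of this sketch) that pads an image by repeating its outermost rows and columns, one pixel on each side, enough for a 3×3 mask:

```python
# Replicate padding: the border rows and columns of the image are
# repeated outward, so the mask always has valid neighbors to read.

def replicate_pad(image, pad=1):
    # extend each row left and right with its edge values
    rows = [row[:1] * pad + row + row[-1:] * pad for row in image]
    # extend the image top and bottom with its edge rows
    return rows[:1] * pad + rows + rows[-1:] * pad

print(replicate_pad([[1, 2],
                     [3, 4]]))
# -> [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```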

Most often the imfilter function is used in the form of the command

g = imfilter(f, w, 'replicate')

This form of the command is used when the standard linear spatial filters of the IPT are applied. Those filters are already rotated by 180°, so the correlation procedure, which imfilter performs by default, can be used. Correlation with the rotated filter is equivalent to convolution with the original filter. If the filter is symmetric about its center, both variants give the same result.

When working with filters that have not been pre-rotated or are asymmetric, and convolution is required, one can proceed in two ways. The first uses the syntax

g = imfilter(f, w, 'conv', 'replicate')

The other approach is to pre-rotate the mask w with rot90(w, 2), which rotates w by 180°, and then call imfilter(f, w, 'replicate'). Of course, these two steps can be combined into a single expression. The result is an image of the same size as the original (i.e., the 'same' option discussed earlier is assumed by default).
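Both routes rest on the same identity: convolution with w equals correlation with w rotated by 180°. It can be checked with a small Python sketch (illustrative, not IPT code; rot180 plays the role of MATLAB's rot90(w, 2), and zero padding is used for simplicity instead of 'replicate'):

```python
# rot180 reverses the rows and then each row, i.e. a 180-degree turn.
def rot180(mask):
    return [row[::-1] for row in mask[::-1]]

# Zero-padded 2-D correlation for an odd-sized mask.
def correlate2d(image, mask):
    m, n = len(mask), len(mask[0])
    a, b = m // 2, n // 2
    rows, cols = len(image), len(image[0])

    def pixel(x, y):
        if 0 <= x < rows and 0 <= y < cols:
            return image[x][y]
        return 0                       # zero padding at the borders

    return [[sum(mask[i][j] * pixel(x + i - a, y + j - b)
                 for i in range(m) for j in range(n))
             for y in range(cols)] for x in range(rows)]

# Convolution = correlation with the pre-rotated mask.
def convolve2d(image, mask):
    return correlate2d(image, rot180(mask))

f = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
w = [[1, 0, 0],
     [0, 2, 0],
     [0, 0, 3]]

assert convolve2d(f, w) == correlate2d(f, rot180(w))
```

For a mask that is symmetric about its center, rot180(w) equals w, so correlation and convolution coincide, as the text notes for symmetric filters.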

Each element of the filtered image is computed in double-precision floating-point arithmetic, but at the end of the operation imfilter converts the output image back to the class of the input. Consequently, if f is an integer array, elements of the result that fall outside the range of that integer class are clipped, and fractional values are rounded. If a more accurate result is required, the image should first be converted to the double class with the im2double or double functions before imfilter is applied.
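For a uint8-class image this rounding-and-clipping behavior can be illustrated with a tiny Python sketch (the 0..255 range is the uint8 range; the helper name is hypothetical, not an IPT function):

```python
# Convert one floating-point filter response back to a uint8-like
# value: round the fractional part, then clip to the class range.

def to_uint8(value):
    rounded = int(round(value))
    return min(max(rounded, 0), 255)

print(to_uint8(127.4))   # -> 127 (fractional values are rounded)
print(to_uint8(300.0))   # -> 255 (values above the range are clipped)
print(to_uint8(-12.6))   # -> 0   (values below the range are clipped)
```

The clipping at 255 and 0 is exactly the precision loss that converting to double with im2double beforehand avoids.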

CONCLUSION

In filtering methods, the estimate of the true signal at a given point of the frame takes into account a set (neighborhood) of neighboring points, exploiting a certain similarity of the signal over these points. The concept of a neighborhood is rather conventional: it may be formed only by the nearest neighbors, or it may contain many fairly distant points of the frame. In the latter case, the degree of influence (the weight) of distant and close points on the decision the filter takes at a given point will be quite different. Thus, the ideology of filtering is based on the rational use of data from both the working point and its neighborhood.



 