Proceedings of 2017 IEEE
International Conference on Mechatronics and Automation
August 6 - 9, Takamatsu, Japan
Multi-Scale Adaptive Corner Detection and Feature
Matching Algorithm for UUV Task Target Image
Jian Xu , Xingyu Zhou, Xiaoyuan Chen and Mingze Xu
Department of Automation
Harbin Engineering University
Harbin, Heilongjiang Province, China
[email protected]
signal points, addressing the shortcomings of Moravec's algorithm, namely the Harris operator. The method is inspired by the autocorrelation function: corners are obtained through the curvature of the autocorrelation matrix. It is invariant to noise, rotation and illumination, but not to scale, and its detection time is long, which leaves the Harris algorithm with low precision and poor real-time performance.
The FAST algorithm proposed by Edward Rosten and Tom Drummond is a simple and rapid method for corner detection [7]. A corner detected by this algorithm is defined as a point whose neighborhood contains enough pixels lying in a gray region different from that of the point itself. The algorithm shortens the execution time and strengthens real-time performance, but because its corner detection is based on the profile distribution and arrangement of corners, it easily detects adjacent feature points, resulting in clustering of feature points, which causes false matching in stereo image matching.
Unfavorable factors such as uneven underwater illumination, low visibility and scattering by water particles give the collected images low resolution; the target texture is richer than the background region, and the images contain large chunks of background area. To overcome these disadvantages so that the collected images can be matched accurately, a multi-scale self-adaptive improved Harris-FAST corner detection algorithm is proposed in this paper. The algorithm combines the advantages of the FAST and Harris algorithms, and moreover optimizes the threshold setting, the single scale and the non-maxima suppression respectively, which shortens the corner detection time and effectively improves the image processing speed.
Abstract—In order to ensure that collected images can be matched exactly when a UUV operates autonomously underwater, an improved Harris-FAST algorithm is proposed in this paper. The algorithm takes full advantage of the FAST and Harris algorithms, making the threshold self-adaptive and detecting corners in multiple scales and all directions. It can overcome the corner information loss and position offset caused by single-scale corner detection, and it optimizes the non-maxima suppression. After corner detection is completed, a feature descriptor is used to generate the eigenvector, and corner matching and mismatch elimination are completed based on the Euclidean distance method. A UUV binocular vision simulation test system is built in this paper, and underwater images are captured. A contrast test verifies the validity of the proposed algorithm applied to underwater images.
Index Terms—UUV, image for task target, Harris-FAST, corner detection, feature matching
When an Unmanned Underwater Vehicle (UUV) executes tasks automatically, e.g. pipeline survey or cable maintenance, it needs to collect images for stereo matching to obtain the target position and attitude. Since the detected features determine the matching accuracy, corners must be detected in the image as the premise of matching.
Nowadays, myriad corner detection methods have been explored. They are mainly divided into two classes: methods based on image edges and methods based on image gray values. The former needs to encode the image edges, depends greatly on image segmentation and edge extraction, involves considerable difficulty and computation, and a local change of the target is likely to make the operation fail, so it is rarely used [1, 2]. Methods based on image gray values mainly detect corners by calculating point curvature and gradient, which avoids these defects. At present, they mainly include the Harris operator [3, 4], the Moravec operator [5] and the SUSAN operator [6], etc.
Moravec proposed the Moravec corner detection operator, which mainly uses the gray variance to extract point features, but it only considers changes in each 45° direction rather than omnidirectionally, so its response to noise is strong and its corner positioning is inaccurate. Afterwards, C. Harris and M. J. Stephens extended it into an operator based on extracting the feature of
978-1-5090-6759-6/17/$31.00 ©2017 IEEE
Harris Algorithm
The Harris operator is based on the local autocorrelation function of the signal and is invariant to rotation, noise and illumination [8]. The principle of the Harris operator is as follows:
For the image I(x, y), select a window w of known size. Moving it from point (x, y) by any (Δx, Δy), the self-similarity can be given by the following autocorrelation function:

c(x, y, Δx, Δy) = Σ_{(u,v)∈W(x,y)} w(u, v)·[I(u, v) − I(u + Δx, v + Δy)]²    (1)

where W(x, y) is the window centered at point (x, y) and w(u, v) is a weighting function, which can be a constant or a Gauss weighting function. According to the Taylor expansion, translating I(x, y) by (Δx, Δy) and making a first-order approximation, formula (1) can be approximated by a quadratic form in a matrix M(x, y).
In this paper, we take the underwater images collected by a UUV binocular vision system as the research object. The Harris algorithm lacks scale invariance and has a long detection time; combining it with the FAST algorithm, which extracts corners quickly, an improved FAST-Harris corner detection algorithm is proposed [10, 11].
Step 1: Preliminary Screening of Candidate Corners
First, select pixels in the image (except the edge) as the center in turn, and compare each pixel value with the intensity values of pixels 1, 5, 9 and 13 on the neighborhood circle of radius 3. Only when the absolute differences between the gray values of three or more of these pixels and the gray value of the center exceed a certain threshold is the center point regarded as a candidate corner.
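The pre-screening rule above can be sketched as follows (a minimal Python illustration, not the paper's MATLAB code; the image is a plain list of rows of gray values and the threshold t is assumed given):

```python
# Hypothetical sketch of Step 1: quick rejection using circle pixels 1, 5, 9, 13
# on a Bresenham circle of radius 3 around the candidate (i, j).

def is_candidate_corner(img, i, j, t):
    """Keep (i, j) only if >= 3 of the 4 test pixels differ from the
    center by more than threshold t (the paper's pre-screening rule)."""
    center = img[i][j]
    # Offsets of circle positions 1, 5, 9, 13 (top, right, bottom, left)
    tests = [(-3, 0), (0, 3), (3, 0), (0, -3)]
    count = sum(1 for di, dj in tests
                if abs(img[i + di][j + dj] - center) > t)
    return count >= 3

# Tiny synthetic example: bright center on a dark background
img = [[10] * 7 for _ in range(7)]
img[3][3] = 200
print(is_candidate_corner(img, 3, 3, 50))  # True: all 4 test pixels differ by 190
```

Pixels that fail this cheap four-point test are discarded before the full circle is examined, which is where most of the speed-up comes from.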
Using the FAST algorithm to quickly eliminate a large number of false corners, the Harris algorithm avoids traversing the entire image, which greatly improves the efficiency of corner detection. Since the complex underwater environment weakens the gray-value contrast within the UUV's field of view, with a threshold set by experience the FAST algorithm does not handle regions of scarce texture well. To guarantee that a UUV operating autonomously with binocular vision can accurately determine the target position and attitude, after many experiments an optimized self-adaptive threshold is proposed as follows:
t = 1/(16·4) · Σ_{u=−3}^{3} Σ_{v=−3}^{3} | I(i + u, j + v) − I_ave |

where I(i + u, j + v) (u = −3 ~ 3, v = −3 ~ 3) is the pixel intensity in the neighborhood of the candidate point, I(i, j) is the pixel intensity at the center, and I_ave is the average gray value of the neighborhood.
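A sketch of this self-adaptive threshold, assuming I_ave is the mean gray value of the 7×7 neighborhood (the paper does not define I_ave explicitly, so that choice is an assumption here):

```python
# Hypothetical sketch of the adaptive threshold: mean absolute deviation of the
# 7x7 neighborhood (u, v in -3..3) from the neighborhood average, scaled by
# 1/(16*4) as in the reconstructed formula. I_ave = neighborhood mean (assumed).

def adaptive_threshold(img, i, j):
    window = [img[i + u][j + v] for u in range(-3, 4) for v in range(-3, 4)]
    i_ave = sum(window) / len(window)
    return sum(abs(p - i_ave) for p in window) / (16 * 4)

img = [[(r * 7 + c) % 50 for c in range(9)] for r in range(9)]
t = adaptive_threshold(img, 4, 4)
print(round(t, 3))
```

A textured neighborhood yields a larger t (stricter test), while a flat one yields t near zero, which matches the intent of adapting to scarce underwater texture.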
According to the Taylor expansion of I(u + Δx, v + Δy) with respect to (Δx, Δy), and making a first-order approximation, formula (1) can be approximated as

c(x, y, Δx, Δy) ≈ [Δx  Δy] · M(x, y) · [Δx  Δy]ᵀ

where

M(x, y) = Σ_w [ Ix(u, v)²          Ix(u, v)·Iy(u, v) ;
                Ix(u, v)·Iy(u, v)  Iy(u, v)²         ] = [ A  B ; B  C ]

We can then get the formula of the corner response function R:

R = det M − α·(trace M)²

where det M = AC − B², trace M = λ1 + λ2 = A + C, and α is an empirical constant whose value ranges from 0.04 to 0.06.

For the FAST test, the pixel at position x on the circle relative to the candidate p is classified into one of three states:

S_{p→x} = d (darker),   if I_{p→x} ≤ I_p − t
         = s (similar),  if I_p − t < I_{p→x} < I_p + t
         = b (brighter), if I_p + t ≤ I_{p→x}

where I_p is the intensity of the candidate pixel p, t is the threshold, and I_{p→x} is the intensity of an arbitrary pixel at position x on the circle around it.
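The response R can be illustrated with a minimal sketch that accumulates the structure-tensor entries A, B, C from central-difference gradients over a small window (illustrative only; the window size, the gradient operator and α = 0.04 are assumptions, not the paper's exact implementation):

```python
# Illustrative computation of the Harris response R = det(M) - alpha * trace(M)^2
# from the structure-tensor entries A, B, C accumulated over a window.
# Gradients are simple central differences; alpha = 0.04 (empirical constant).

def harris_response(img, x, y, win=1, alpha=0.04):
    A = B = C = 0.0
    for u in range(x - win, x + win + 1):
        for v in range(y - win, y + win + 1):
            ix = (img[u + 1][v] - img[u - 1][v]) / 2.0  # dI/dx
            iy = (img[u][v + 1] - img[u][v - 1]) / 2.0  # dI/dy
            A += ix * ix
            B += ix * iy
            C += iy * iy
    det_m = A * C - B * B
    trace_m = A + C
    return det_m - alpha * trace_m ** 2

# A corner (two edges meeting) gives a larger R than a flat region
flat = [[10] * 8 for _ in range(8)]
corner = [[100 if (r >= 4 and c >= 4) else 10 for c in range(8)] for r in range(8)]
print(harris_response(corner, 4, 4) > harris_response(flat, 4, 4))  # True
```

Flat regions give R ≈ 0, edges give R < 0, and only true corners (both eigenvalues large) give a large positive R, which is why thresholding R isolates corners.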
FAST Algorithm
The corner detected by the FAST algorithm is defined as a point such that enough pixels in the area around it lie in a gray region different from that of the point itself. Consider any point C in the image and the circular area centered on it: a discrete Bresenham circle of radius 3. If the intensity values of at least 12 contiguous pixels on the circle are all lower or all higher than the intensity of C by a threshold t, a feature point is detected at C, as shown in Fig. 1. To reject false corners more quickly, the detection can be optimized by first examining pixels 1, 5, 9 and 13: a feature can exist only when three or more of these pixels are all below or all above the intensity of the center by the threshold.
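The three-state classification of a circle pixel relative to the candidate p can be written directly (a small illustrative sketch):

```python
# Sketch of the three-state FAST classification for a circle pixel x relative
# to candidate p: 'd' (darker), 's' (similar), 'b' (brighter), threshold t.

def classify(ip, ipx, t):
    if ipx <= ip - t:
        return 'd'   # darker:  I_{p->x} <= I_p - t
    if ip + t <= ipx:
        return 'b'   # brighter: I_p + t <= I_{p->x}
    return 's'       # similar:  within (I_p - t, I_p + t)

print(classify(100, 60, 20), classify(100, 95, 20), classify(100, 140, 20))
# d s b
```

Counting runs of consecutive 'd' or 'b' labels around the 16-pixel circle is then all that the full segment test requires.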
Step 2: Multi-scale Harris Corner Detection
Because the traditional Harris corner detection algorithm does not have scale invariance and can detect corners only in a single direction, once a corner is blocked or loses some information it cannot be detected. In order to detect corners comprehensively and in multiple scales, following reference [3], the original Harris detector is improved as follows:
Fig. 1 FAST feature detection in an image patch

For each point on the circle x ∈ {1…16}, the pixel at the position relative to p, denoted by p→x, takes one of the three states d, s, b given above.

In the Cartesian coordinate system, the Gaussian function is expressed as G(x, y) = e^(−(x² + y²)), and the direction of its gradient is controllable. The n-th derivative of G is defined as G_n, and θ is defined as the angle.

The first-order derivative in the x direction of the gradient is expressed as:

G_1x = ∂/∂x e^(−(x² + y²)) = −2x·e^(−(x² + y²))

The first-order derivative in the y direction of the gradient is expressed as:

G_1y = ∂/∂y e^(−(x² + y²)) = −2y·e^(−(x² + y²))

Therefore, the first-order derivative at any angle θ can be expressed as:

G_1θ = cos(θ)·G_1x + sin(θ)·G_1y    (9)
According to (9), the following equation is obtained:

c(u, v) = Σ_{u,v} W(u, v)·[I(x_i + u, y_i + v) − I(u, v)]²
        = Σ_{u,v} W_{u,v}·[u·cos(θ)·G_1x + v·sin(θ)·G_1y]²
        = A·u² + 2C·uv + B·v² = (u, v) [ A  C ; C  B ] (u, v)ᵀ    (10)

In (10), A = cos²(θ)·(G_1x)², C = sin(θ)·cos(θ)·G_1x·G_1y and B = sin²(θ)·(G_1y)².
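The expansion in (10) is a simple algebraic identity — with the definitions above, A·u² + 2C·uv + B·v² equals (u·cos(θ)·G_1x + v·sin(θ)·G_1y)² — which a short numeric check confirms:

```python
# Numerical check of identity (10): for A = cos^2(t)*g1x^2,
# C = sin(t)*cos(t)*g1x*g1y, B = sin^2(t)*g1y^2, the quadratic form
# A*u^2 + 2*C*u*v + B*v^2 equals (u*cos(t)*g1x + v*sin(t)*g1y)^2.

import math

def quad_form(u, v, theta, g1x, g1y):
    A = math.cos(theta) ** 2 * g1x ** 2
    C = math.sin(theta) * math.cos(theta) * g1x * g1y
    B = math.sin(theta) ** 2 * g1y ** 2
    return A * u * u + 2 * C * u * v + B * v * v

u, v, theta, g1x, g1y = 1.5, -0.7, 0.6, 2.0, -1.0
lhs = quad_form(u, v, theta, g1x, g1y)
rhs = (u * math.cos(theta) * g1x + v * math.sin(theta) * g1y) ** 2
print(abs(lhs - rhs) < 1e-12)  # True
```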
Step 3: Non-maxima Suppression
Usually, non-maxima suppression sets a square template region (e.g., a 3×3 square area) in order to judge the response function value of the corner at the center of the area; only when that corner has the maximum response function value is it identified as a final corner. The same method is used to traverse the whole image until all corners are detected. The position of the target in the image rotates as the UUV is disturbed by the external environment. Because a square has only partial rotation invariance, the 3×3 square template region is improved to a circular template region with radius 3 to overcome this drawback. When the circle rotates arbitrarily around its center, the area it covers remains unchanged, which improves the non-maxima suppression effect to some extent.
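The circular-template suppression can be sketched as follows (illustrative; the response map is a plain 2-D list, and keeping only strictly positive local maxima within the radius-3 disc is an assumed tie-breaking rule):

```python
# Sketch of non-maxima suppression with a circular template of radius 3:
# a corner survives only if its response is the maximum within the disc.

def circular_nms(resp, radius=3):
    h, w = len(resp), len(resp[0])
    # Offsets inside the disc of given radius, excluding the center itself
    disc = [(di, dj) for di in range(-radius, radius + 1)
                     for dj in range(-radius, radius + 1)
                     if di * di + dj * dj <= radius * radius and (di, dj) != (0, 0)]
    corners = []
    for i in range(radius, h - radius):
        for j in range(radius, w - radius):
            r = resp[i][j]
            if r > 0 and all(r > resp[i + di][j + dj] for di, dj in disc):
                corners.append((i, j))
    return corners

resp = [[0.0] * 10 for _ in range(10)]
resp[4][4] = 5.0   # strong response
resp[5][5] = 3.0   # weaker neighbour inside the disc: suppressed
print(circular_nms(resp))  # [(4, 4)]
```

Because the disc covers the same set of neighbours regardless of how the template is rotated, the surviving corner set is more stable under target rotation than with a 3×3 square.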
The program flow of the improved FAST-Harris corner detection is shown in Fig. 2.

Fig. 2 Program flow of the improved algorithm: select a pixel (x, y) and test pixels 1, 5, 9, 13 on the circle with the adaptive FAST threshold, accumulating the number N satisfying the requirement; if the test fails, the pixel is not a corner; otherwise the Harris algorithm combined with the contourlet transform makes a further judgment; a circular template performs non-maxima suppression; repeat until all pixels have been detected.
The improved FAST-Harris algorithm can quickly detect the homogeneous feature points of the target; the feature descriptor of the SIFT algorithm is then used to describe the extracted feature points in vector form, which achieves fast and accurate image matching.

A. Corner Feature Vector
For a certain corner, the magnitude and direction of the gradient are calculated as in (11):

m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² )
θ(x, y) = tan⁻¹( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )    (11)

In view of (11), m(x, y) and θ(x, y) represent the magnitude and direction of the gradient at (x, y), and L(x, y−1), L(x, y+1), L(x−1, y), L(x+1, y) are the gray values of the pixels adjacent to (x, y). Vector 1 in Fig. 3 represents this gradient. Since the pixels around a corner may exhibit the same characteristics, the gradient vector defined by these two formulas cannot fully describe the characteristic direction of a FAST-Harris corner, especially for images with rich texture information; therefore, this paper defines three vectors to describe the corner feature.
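Formula (11) translates directly into code (a small sketch; atan2 is used instead of a bare arctangent so the direction is quadrant-aware):

```python
# Sketch of the gradient magnitude m and direction theta in (11),
# using the four neighbours of (x, y) in a gray image L.

import math

def gradient(L, x, y):
    dx = L[x + 1][y] - L[x - 1][y]
    dy = L[x][y + 1] - L[x][y - 1]
    m = math.sqrt(dx * dx + dy * dy)
    theta = math.atan2(dy, dx)   # tan^-1(dy/dx), quadrant-aware
    return m, theta

L = [[0, 0, 0], [0, 5, 4], [3, 0, 0]]
m, theta = gradient(L, 1, 1)
print(round(m, 4), round(math.degrees(theta), 2))  # 4.0 90.0
```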
Fig. 3 Direction description of FAST-Harris corner

B. Corner Feature Descriptor
Because the feature descriptor of the SIFT operator has the advantages of translation, rotation and illumination invariance in image stereo matching, this paper adopts the SIFT feature descriptor, on the basis of the above three vectors, to describe the FAST-Harris corner, as shown in Fig. 4.

Fig. 5 Simulation of UUV binocular vision test scenarios
A. The Comparison Test of Corner Detection
The improved algorithm is run in the MATLAB 2010 environment, and the numbers of corners detected and the running times of the Harris algorithm, the FAST algorithm and the improved algorithm are compared, as shown in Fig. 6. The threshold T in the FAST algorithm is 12; the threshold of the Harris algorithm and the improved algorithm is adaptive, 0.01·Rmax.
Fig.4 Feature descriptor
C. Corner Matching Algorithm
Compute the feature descriptors of the feature points in the left image and of the search sub-graph in the right image respectively, then calculate the Euclidean distance between the eigenvectors and regard it as the similarity measure between the feature points and the candidate matching points. Finally, the point with the minimum Euclidean distance is selected as the matching point of the feature point.
The specific calculation is shown in (12):

D = sqrt( (x1 − x1′)² + (x2 − x2′)² + ⋯ + (x128 − x128′)² )    (12)

where X1 = (x1, x2, …, x128) and X2 = (x1′, x2′, …, x128′) are the eigenvectors of the two feature points to be matched in the two images. The smaller D is, the higher the matching degree of the feature points.

D. Eliminate Mismatch
In the process of image stereo matching, the nearest and second-nearest proportional method is used: wrong matching points are eliminated by the ratio of the nearest to the second-nearest Euclidean distance, as shown in (13):

K = D_nearest / D_hypo-nearest    (13)

where D_nearest is the nearest Euclidean distance and D_hypo-nearest is the second-nearest Euclidean distance. K is a set threshold; tuning the value of K to reduce the matching points of SIFT will improve the matching accuracy.

(a) Test image (b) FAST algorithm (c) Harris algorithm (d) Improved algorithm
Fig. 6 Corner detection results by three algorithms
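Matching by (12) with the ratio test (13) can be sketched as follows (illustrative; the descriptors are short toy vectors rather than 128-D SIFT descriptors, and accepting a match when K falls below 0.8 is an assumed convention):

```python
# Sketch of descriptor matching by Euclidean distance (12) with the
# nearest / second-nearest ratio test (13).

import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(left_desc, right_desc, k_max=0.8):
    """Return (i, j) pairs whose nearest/second-nearest ratio K is below k_max."""
    pairs = []
    for i, d1 in enumerate(left_desc):
        # Sort all right-image descriptors by distance to d1
        dists = sorted((euclid(d1, d2), j) for j, d2 in enumerate(right_desc))
        (d_near, j), (d_second, _) = dists[0], dists[1]
        if d_second > 0 and d_near / d_second < k_max:
            pairs.append((i, j))
    return pairs

left = [[1.0, 0.0], [0.0, 1.0]]
right = [[0.9, 0.1], [0.0, 1.1], [5.0, 5.0]]
print(match(left, right))  # [(0, 0), (1, 1)]
```

An ambiguous feature, whose nearest and second-nearest distances are similar, yields K near 1 and is rejected, which is exactly the mismatch-elimination behaviour described above.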
In order to verify the effectiveness of the improved algorithm, a UUV binocular vision simulation system is built, in which a lamp installed between the binocular cameras serves as the underwater light source, and gray images are collected in the pool, as shown in Fig. 5.

Fig. 6 shows the corner detection results of the target test image under the three algorithms; the number of detected corners and the running time of the corresponding algorithms are shown in Table 1.

Table 1 Detected corners and running time of each detection algorithm (columns: detection algorithm, detected corners, accurate time (s))
Fig. 6(b) shows the target corners detected by the FAST algorithm. Fig. 6(b) and Table 1 show that the FAST algorithm can quickly detect feature corners on the target, with high real-time performance, but it also detects a large number of false corners. Fig. 6(c) shows that the Harris algorithm produces numerous clustered feature points and false corners, which will interfere with accurate image matching by the UUV. At the same time, a large number of corners increases the computation in stereo image matching and degrades real-time performance. Fig. 6(d) shows the proposed algorithm. On one hand, the proposed algorithm removes most of the false corners and produces stable corners which accurately reflect the main characteristics and structure of the target. On the other hand, compared with the traditional Harris algorithm, the corner detection speed of the improved algorithm is greatly increased.
B. Comparison of Corner Detection Before and After Object Rotation
In this test, the threshold of both the Harris algorithm and the proposed algorithm is adaptive, 0.01·Rmax. The corners detected by the Harris algorithm and the proposed algorithm before and after object rotation are shown in Fig. 7, and Table 2 gives the contrast data.

(a) Harris before object rotation (b) Harris after object rotation (c) The proposed method before object rotation (d) The proposed method after object rotation
Fig. 7 Corners extracted by the two algorithms before and after target rotation

Table 2 Number of corners detected by each algorithm (Harris, the proposed algorithm) before and after rotation

(a) SIFT matching results (b) The improved algorithm results
Fig. 8 Calibration plate matching results

Fig. 8(a) is the result of image matching with the SIFT algorithm, and Fig. 8(b) is the result of matching with the improved algorithm. Fig. 8(a), (b) and Table 3 illustrate that the SIFT algorithm extracts a large number of false corners which cannot reflect the structural characteristics of the target and has a low matching success rate, while most of the feature points detected by the improved algorithm reflect the characteristics of the target. In the stereo matching of images, eigenvector matching and the elimination of mismatches improve the matching accuracy and greatly reduce the time, which meets the real-time demand when the UUV executes tasks automatically.
In order to improve the quality of corner detection and match images exactly, an improved FAST-Harris corner detection method is proposed in this paper. Firstly, the adaptive-threshold FAST algorithm is used for preliminary corner screening. Secondly, the multi-scale Harris algorithm is used to select the feature points. Finally, a circular domain of radius 3 is used for non-maxima suppression of the corners. After corner detection is completed, the feature descriptor of the SIFT operator is combined to generate the eigenvector, and corner matching and mismatch elimination are completed based on the Euclidean distance. This paper builds a UUV binocular vision simulation test system and carries out a simulated UUV pool experiment. The comparison of the three algorithms proves the superiority of the improved FAST-Harris corner detection method in orientation and real-time performance. For feature matching, this paper compares the improved image matching algorithm with the SIFT matching algorithm, and verifies the validity of the proposed algorithm applied to underwater images.
Fig. 7 and Table 2 show that the number of corners detected by the Harris algorithm changes a lot with rotation, while the number detected by the proposed algorithm changes very little. Non-maxima suppression in Harris corner detection uses a square template, so target rotation affects the number of detected corners. This paper adopts a circular area of radius 3 instead of the 3×3 square area, which makes the number of detected corners stable and reduces the number of false corners. In this way, the rotated image can still be matched precisely when the UUV pitches and rolls.
C. The Feature Matching Experiment
The following pictures show the analysis and comparison of the matching properties of the improved algorithm and the SIFT algorithm in binocular vision applications. The resolution of the test binocular camera is
This work is supported by the National Natural Science Foundation (NNSF) of China under Grant 51179038, and partly supported by the Fundamental Research Funds for the Central Universities under Grant HEUCFX041402 and Harbin Engineering University Science and Technology on
[1] Hu Zhen, Yuan Xiaohai and Chen Rongsheng. A visual system for underwater robots used for target detection and precise positioning [J]. China Shipbuilding, 2000, 41: 89-94.
[2] Chen Rongsheng, Hu Zhen. Optical vision information acquisition and processing algorithms for underwater robots [J]. Ship Mechanics, 1999, 3: 64-69.
[3] C. Harris and M. Stephens. A combined corner and edge detector [C]. Proceedings of the 4th Alvey Vision Conference, 1988: 147-151.
[4] David G. Lowe. Object recognition from local scale-invariant features [C]. International Conference on Computer Vision, Corfu, Greece, 1999: 1150-1157.
[5] C. Schmid, R. Mohr and C. Bauckhage. Evaluation of interest point detectors [J]. International Journal of Computer Vision, 2000, 37(2): 151-172.
[6] S. M. Smith and J. M. Brady. SUSAN - a new approach to low level image processing [J]. International Journal of Computer Vision, 1997, 23(1): 45-78.
[7] E. Rosten and T. Drummond. Machine learning for high-speed corner detection [C]. European Conference on Computer Vision (ECCV), 2006.
[8] Zhao Xiaochuan. MATLAB Image Processing and Application Cases [M]. Beijing: Beijing University of Aeronautics and Astronautics Press, 2014.
[9] Liu Ying. Optimization of the Harris corner detection algorithm [J]. Journal of Yunnan University for Nationalities, 2011, 20(2): 136-138.
[10] Li Yibo, Li Junjun. Harris corner detection algorithm based on improved contourlet transform [J]. Procedia Engineering, 2011, 15: 2239-2243.
[11] Na Yao, Tie Chengbai. Improved Harris corner detection for Chinese characters [C]. Fourth World Congress on Software Engineering, 2013.