Proceedings of the 2017 IEEE International Conference on Information and Automation (ICIA), Macau SAR, China, July 2017

A New Method for Indoor Low-cost Mobile Robot SLAM*
Weina Xi, Yongsheng Ou, Jiansheng Peng, and Gang Yu
Abstract – Simultaneous Localization and Mapping (SLAM) is an active area of robot research, and the localization of mobile robots is a critical issue in the field of SLAM. Low cost and high performance are hard to balance in commercial robots, and we aim to achieve better performance with low-cost sensors. This paper focuses on complex indoor environments and presents a new method for point cloud matching and low-cost mobile robot localization. First, we convert the laser data within a given range into images by building grid maps, and we assume that the maps have no scaling relation to one another. Second, the fast Fourier transform (FFT) is used to obtain the rotation angle. To obtain the translation parameters, the one-dimensional Fourier transform is applied to the horizontal and vertical projections of the maps. Maps can then be built from the positioning results. Finally, we perform comparative experiments against other common methods.
Index Terms – Mobile robot, indoor, SLAM, point cloud, fast Fourier transform.

I. INTRODUCTION

With the rapid development of artificial intelligence and pattern recognition technology, intelligent robots have entered all aspects of industrial automation and human life. The realization of intelligent robots rests on autonomous navigation and Simultaneous Localization and Mapping (SLAM). SLAM consists of localization and mapping, where localization is an important prerequisite for mapping: once localization results are available, the map can be created and updated from them. SLAM can be carried out indoors or outdoors, and each setting has its own way of achieving SLAM. Depending on whether the environment is known, positioning can be divided into localization in a known environment and localization in an unknown environment [1]. Localization in a known environment is mainly concerned with positioning accuracy. Localization in an unknown environment requires external sensors to gather information, from which the robot's position is obtained after processing. In addition, self-localization methods can be divided into two categories: relative positioning and absolute positioning. Relative positioning estimates the relative distance and direction a robot has moved over a short time. It measures the change of the robot's current location relative to its initial position and is usually based on dead reckoning, the Kalman filter, Markov localization, Monte Carlo localization, or the Extended Kalman Filter (EKF). Absolute positioning determines the position directly in a world coordinate system and mainly includes GPS and map-matching positioning [2].

Meanwhile, the cost of robots is an important factor to take into account. High performance and low cost are difficult to balance, and this is a central concern in the realization of commercial robot SLAM. At present, robot localization is almost always achieved by installing additional sensors, which generally fall into two categories: vision and laser. The choice between the two depends largely on cost constraints. Vision-based approaches utilize natural landmarks and salient features in the environment [3-5], but visual sensors are usually expensive. Laser sensors play an essential role in robot navigation because of their better price compared with visual systems.

In this paper, we propose an effective method to achieve high performance with a cleaning robot, aimed mainly at indoor environments. The only sensor we use is a low-cost laser scanner. One of the key issues in laser navigation is to localize the robot by registering the laser scan data: we need to find the transformation parameters from the correspondence between two laser data sets.

The method most commonly used for point cloud registration is the Iterative Closest Point (ICP) algorithm, proposed by Besl and McKay in 1992 [6]. Because of its simplicity and low complexity, ICP is very popular. Although it is widely used, it still has some defects [7], such as falling into local extrema, depending on the initial registration, and requiring a high overlap rate.

This paper focuses on complex indoor environments, as shown in Fig. 1. The commonly used ICP method often fails in such scenarios because of the complexity and self-similarity of the environment, the low overlap rate, and fast robot motion. We present a more robust approach to robot localization based on a low-cost laser. By selecting valid data from the laser point cloud and converting them into images, localization is realized through image registration. The method achieves better accuracy and robustness when the robot moves fast.

Fig. 1 Indoor environment and CAD model.

* This work was supported by the National High-Tech Research and Development Program of China (863 Program, Grant No. 2015AA042303), the National Natural Science Foundation of China (Grant No. 61640305), the Shenzhen Overseas Innovation and Entrepreneurship Research Program (KQCX2015033117354155) and the Shenzhen Fundamental Research Programs (JCYJ2016428154842603, JCYJ20150630114942265).
Weina Xi is with the Guangdong Provincial Key Laboratory of Robotics and Intelligent System, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 518055 Shenzhen, P. R. China. Weina Xi is also with Harbin Institute of Technology, Shenzhen.
Yongsheng Ou is with the CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 518055 Shenzhen, P. R. China. Yongsheng Ou is also with The Chinese University of Hong Kong, Hong Kong, and is the corresponding author (email: [email protected]).
Jiansheng Peng is with Hechi University.
Gang Yu is with Harbin Institute of Technology, Shenzhen.
II. ICP ALGORITHM

This paper uses relative positioning and finds the transformation parameters between the current location and the location at the previous moment. We establish two coordinate systems, the model coordinate system ($X_M O_M Y_M$) and the scene coordinate system ($X_S O_S Y_S$). The transformation parameters are $(\Delta X, \Delta Y, \theta)$, where $\Delta X$ and $\Delta Y$ are the translation parameters between the two locations and $\theta$ is the rotation angle, as shown in Fig. 2. We often use two matrices to represent the transformation between the locations, namely $R$ and $T$:

$R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}, \qquad T = \begin{bmatrix} \Delta X \\ \Delta Y \end{bmatrix}.$

In different locations the robot obtains different point sets from its laser, and the change of the robot's position can be recovered through point registration.

Fig. 2 Different locations at different moments.

The ICP algorithm is commonly used for robot localization; it was proposed mainly for the point registration problem. In short, given two overlapping data sets, the first set is called the model point set and the other the scene point set, as shown in Fig. 3, and we need to find the transformation parameters from the correspondence between the two data sets, as shown in Fig. 4.

Fig. 3 Model point set and scene point set.

Fig. 4 Correspondence between the two sets.

Using $\{M_i \mid M_i \in \mathbb{R}^2,\; i = 1, 2, \ldots, p\}$ to represent the model point set and $\{S_i \mid S_i \in \mathbb{R}^2,\; i = 1, 2, \ldots, q\}$ to represent the scene point set, where $p$ and $q$ are the numbers of points, the ICP algorithm finds the optimal transformation parameters $R$ and $T$ that minimize

$\sum_{i=1}^{n} \left\| M_i - (R S_i + T) \right\|^2,$  (1)

where $R$ is the rotation matrix, $T$ is the translation vector, and $n$ is the number of corresponding points.

The main steps are as follows:
1) Selecting points: select some points from the scene point set through uniform or random sampling, denoted $S_i \in S$;
2) Matching points: find the points $M_i \in M$ in the model point set that satisfy $\min \| M_i - S_i \|$; these are the closest points to $S_i \in S$;
3) Finding parameters: calculate the $R$ and $T$ that minimize $\| M_i - (R S_i + T) \|^2$;
4) Updating points: $S_i' = R S_i + T$, where $S_i'$ is the updated point set;
5) Calculating deviation: $d = \frac{1}{n} \sum_{i=1}^{n} \| M_i - S_i' \|^2$;
6) Judging: if $d$ is less than a given threshold, or the number of iterations reaches a given limit, the iteration is terminated; otherwise take $S_i'$ as the new scene point set and return to step 2).
III. METHOD DESCRIPTION
This paper presents a new localization method, since ICP occasionally fails in complex indoor environments. Robot localization is achieved with a laser scanner in this paper.

A. Data Pretreatment

The scan data sets cannot be used in the calculation directly and need to be preprocessed.

If too many data points are involved in the computation, memory usage and running speed suffer greatly. This paper therefore chooses the data within a certain distance from the robot as the source for the subsequent calculation. For convenience of calculation, the selected data are translated into the first quadrant. The next step is to convert them into images. Image information is stored as a matrix, and registration between images is achieved by operations between matrices, so the point cloud data sets must be converted into matrices. Grid cells are commonly used to represent points: a grid map consists of 1s and 0s, where 1 indicates an obstacle and 0 stands for free space, as shown in Fig. 5. Both memory and precision should be considered when deciding how many grid cells the map is divided into.

Fig. 5 Grid map and matrix.

B. One-dimensional Fast Fourier Transform (1-D FFT)

The 1-D shift relation can be written as

$f_2(x) = f_1(x - x_0),$  (2)

where $f_1(x)$ and $f_2(x)$ are two functions and $x_0$ is the shift between them, so that $f_1(x)$ is related to $f_2(x)$ by (2). Taking the FFT of both sides of (2) and using the shift theorem gives

$F_2(u) = F_1(u)\, e^{-j 2\pi u x_0}.$  (3)

Taking the cross power spectrum of (3),

$\dfrac{F_1(u) F_2^{*}(u)}{\left| F_1(u) F_2^{*}(u) \right|} = e^{-j 2\pi u x_0},$  (4)

where $F^{*}$ is the conjugate of $F$ and $|\cdot|$ is the magnitude. Taking the inverse FFT of both sides of (4) gives an impulse function, and the displacement $x_0$ can be read from the position of the peak of this impulse [8-11].

Similarly, the 2-D shift relation can be written as

$f_2(x, y) = f_1(x - x_0, y - y_0),$  (5)

where $f_1(x, y)$ and $f_2(x, y)$ are two 2-D functions and $x_0$, $y_0$ are the displacements between them. The cross power spectrum of (5) is

$\dfrac{F_1(u, v) F_2^{*}(u, v)}{\left| F_1(u, v) F_2^{*}(u, v) \right|} = e^{-j 2\pi (u x_0 + v y_0)}.$  (6)

C. Rotation Parameter

Let $A(x, y)$ and $B(x, y)$ be two grid maps, and let $A(x, y)$ be a translated, rotated and scaled replica of $B(x, y)$ with rotation angle $\theta$, translation $(x_0, y_0)$ and scale factor $\sigma$, such that $A(x, y)$ is related to $B(x, y)$ by the transformation

$A(x, y) = B\bigl(\sigma(x \cos\theta + y \sin\theta) - x_0,\; \sigma(-x \sin\theta + y \cos\theta) - y_0\bigr).$  (7)

In this paper there is no scaling relation between the two grid maps, so $\sigma = 1$ and (7) becomes

$A(x, y) = B\bigl(x \cos\theta + y \sin\theta - x_0,\; -x \sin\theta + y \cos\theta - y_0\bigr).$  (8)

Taking the FFT of both sides of (8) and keeping only the magnitude spectra (the translation affects only the phase) gives

$\left| A(u, v) \right| = \left| B(u \cos\theta + v \sin\theta,\; -u \sin\theta + v \cos\theta) \right|,$  (9)

where $A(u, v)$ and $B(u, v)$ denote the Fourier transforms of the two grid maps. Performing the polar coordinate transformation

$u \cos\theta + v \sin\theta = \rho \cos(\phi - \theta),$  (10)

$-u \sin\theta + v \cos\theta = \rho \sin(\phi - \theta),$  (11)

we can write (9) as

$A(\rho \cos\phi, \rho \sin\phi) = B\bigl(\rho \cos(\phi - \theta), \rho \sin(\phi - \theta)\bigr).$  (12)

Let

$A(\rho, \phi) = A(\rho \cos\phi, \rho \sin\phi),$  (13)

$B(\rho, \phi) = B(\rho \cos\phi, \rho \sin\phi);$  (14)

then

$A(\rho, \phi) = B(\rho, \phi - \theta).$  (15)

Equation (15) has the same form as the 1-D shift relation (2), so the rotation angle $\theta$ can be solved using the 1-D FFT theorem.

In contrast to traditional image registration, there is no scaling relationship between the grid maps. Therefore the rotation angle can be obtained by the polar coordinate transformation alone, without a further conversion to logarithmic (log-polar) coordinates.

D. Translation Parameters

After the rotation parameter is calculated, the translation parameters can be solved from the projections of the image in the horizontal and vertical directions.

Let $C(x, y)$ be the grid map obtained after $B(x, y)$ is rotated by the rotation angle $\theta$; its projections can be represented by

$C_x(y) = \int_{-\infty}^{\infty} C(x, y)\, dx,$  (16)

$C_y(x) = \int_{-\infty}^{\infty} C(x, y)\, dy.$  (17)

Similarly, the projections of $A(x, y)$ can be represented by

$A_x(y) = \int_{-\infty}^{\infty} A(x, y)\, dx,$  (18)

$A_y(x) = \int_{-\infty}^{\infty} A(x, y)\, dy.$  (19)

By taking projections of the image, the 2-D problem is converted to 1-D, and the 1-D FFT can be used to calculate the translation parameters [12].

From (15) to (19), we can see that both the rotation parameter and the translation parameters are eventually solved with the 1-D FFT.
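To make the pipeline of (2)-(19) concrete, the Python sketch below shows one way it could be implemented: the rotation angle is recovered by 1-D phase correlation along the angular axis of the polar-resampled FFT magnitude (Eqs. (9)-(15)), and the translations are recovered by 1-D phase correlation of the row and column projections (Eqs. (16)-(19)). The grid sizes, angular resolution, interpolation scheme and helper names are illustrative assumptions, not the authors' exact settings.

# Illustrative FFT-based registration of two binary grid maps A and B
# (same-size 2-D NumPy arrays), following Sections III.B-III.D. Not the authors' code.
import numpy as np
from scipy.ndimage import rotate

def shift_1d(f1, f2):
    """Return the circular shift x0 such that f2(x) ~ f1(x - x0), via Eqs. (2)-(4)."""
    F1, F2 = np.fft.fft(f1), np.fft.fft(f2)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12                        # normalized cross power spectrum
    x0 = int(np.argmax(np.abs(np.fft.ifft(cross))))       # position of the impulse peak
    return x0 - len(f1) if x0 > len(f1) // 2 else x0      # wrap to a signed shift

def angular_signature(img, n_rho=128, n_phi=360):
    """FFT magnitude resampled to polar coordinates (Eqs. (13)-(14)), summed over rho."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = (np.array(img.shape) - 1) / 2.0
    rho = np.linspace(1.0, min(cx, cy) - 1.0, n_rho)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    ys = np.rint(cy + rho[:, None] * np.sin(phi)[None, :]).astype(int)
    xs = np.rint(cx + rho[:, None] * np.cos(phi)[None, :]).astype(int)
    return mag[ys, xs].sum(axis=0)                        # 1-D function of phi

def estimate_pose(A, B, n_phi=360):
    """Estimate rotation (deg) and translation (cells) such that A ~ rotated, shifted B."""
    # Rotation: Eq. (15) is a pure shift along phi, recovered by 1-D phase correlation.
    # The FFT magnitude of a real image is symmetric, so theta is found modulo 180 deg;
    # the correct branch can be chosen by checking which one gives the better overlap.
    theta = shift_1d(angular_signature(B, n_phi=n_phi),
                     angular_signature(A, n_phi=n_phi)) * 360.0 / n_phi
    C = rotate(B, theta, reshape=False, order=0)          # sign of theta may need flipping
                                                          # for a given image axis convention
    # Translation: project onto each axis (Eqs. (16)-(19)) and phase-correlate in 1-D.
    dy = shift_1d(C.sum(axis=1), A.sum(axis=1))           # shift of the projection onto y
    dx = shift_1d(C.sum(axis=0), A.sum(axis=0))           # shift of the projection onto x
    return theta, dx, dy

Multiplying dx and dy by the grid resolution converts the result from cells back to metres.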
IV. EXPERIMENTAL RESULTS
The experimental facilities used are shown in Fig. 6: the left is the iRobot platform and the right is the RPLidar. The RPLidar is a laser scanner with a 360-degree field of view and a range of up to six meters. The computer used in the tests was a Lenovo G470 with an Intel Core i3 processor and 2 GB of memory. The experimental data were acquired with the RPLidar, and the algorithm was implemented on the MATLAB and Visual Studio platforms. The data within 2 m of the robot were chosen as the source for the subsequent calculation, using a 250×250 grid. The experimental environment is shown in Fig. 7, where the red dots represent the robot's positions. We chose six positions and obtained six sets of data, as shown in Fig. 8.
Fig. 8 Six sets of data corresponding to six positions. a)-f) Positions 1-6 (position x vs. position y, in meters).
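As a small illustration of the data pretreatment of Section III-A with the settings above (returns within 2 m, a 250×250 grid), the following Python sketch converts one laser scan into a binary grid map after translating the points into the first quadrant. The scan format (range-angle pairs) and the helper name are assumptions for illustration, not the authors' interface.

# Illustrative conversion of a laser scan to a 250x250 binary grid map,
# as in Section III-A (data within 2 m of the robot; 1 = obstacle, 0 = free).
import numpy as np

def scan_to_grid(ranges, angles, max_range=2.0, cells=250):
    """ranges/angles: 1-D arrays of laser returns (meters, radians) in the robot frame."""
    keep = (ranges > 0) & (ranges <= max_range)        # select nearby valid returns
    x = ranges[keep] * np.cos(angles[keep])
    y = ranges[keep] * np.sin(angles[keep])
    x -= x.min()                                       # translate into the first quadrant
    y -= y.min()
    res = (2.0 * max_range) / cells                    # meters per cell
    grid = np.zeros((cells, cells), dtype=np.uint8)
    i = np.clip((y / res).astype(int), 0, cells - 1)   # row index from y
    j = np.clip((x / res).astype(int), 0, cells - 1)   # column index from x
    grid[i, j] = 1                                     # mark occupied cells
    return grid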
Taking the data obtained at position 1 as the model point set, the remaining five sets of data were taken in turn as the scene point set. First, the ICP algorithm was used to calculate the conversion parameters; the results are shown in Table I and the converted graphs in Fig. 9, where red represents the model point sets and blue the scene point sets.
Fig. 6 Experimental facilities.
TABLE I
RESULTS OF CALCULATION USING ICP

Match pair   Translation of x (m)   Translation of y (m)   Rotation (deg)
1-2                0.1097                -0.0115                1.1950
1-3                0.1879                -0.0198                1.2485
1-4               -0.5185                 0.1399                4.1152
1-5               -0.3437                 0.1639                9.2671
1-6               -0.1852                 0.1268               12.3167
Fig. 7 Experimental environment and CAD model. a) Experimental environment; b) CAD model.

Fig. 9 Matching results using ICP. a)-e) Results of matching position 1 with positions 2-6, respectively.
Similarly, taking the data obtained at position 1 as the model point set, the remaining five sets of data were taken in turn as the scene point set. The algorithm described in this paper was used to calculate the conversion parameters; the results are shown in Table II and the converted graphs in Fig. 10, where red represents the model point sets and blue the scene point sets.
TABLE II
RESULTS OF CALCULATION USING FFT

Match pair   Translation of x (m)   Translation of y (m)   Rotation (deg)
1-2                0.0960                 0.0000                1.4400
1-3                0.1760                -0.0160                1.4400
1-4                0.5120                -0.0320                2.8800
1-5                0.7840                -0.0800                5.7600
1-6                0.9120                -0.1120                7.2000

Fig. 10 Matching results using FFT. a)-e) Results of matching position 1 with positions 2-6, respectively.
As can be seen from Fig. 9, the results for positions 1 and 2 and for positions 1 and 3 are accurate, while the other three groups are inaccurate. Meanwhile, as can be seen from Fig. 10, the results of all five groups obtained with the algorithm proposed in this paper are correct and reliable. This is because the ICP algorithm requires the overlap rate between the two data sets to be sufficiently high; in other words, the two data sets must be close enough to each other, whereas the FFT algorithm does not require such a high overlap rate.

The above calculation is based on static data. In addition, this paper used ICP and the proposed algorithm to carry out a dynamic localization experiment at different moving speeds on the iRobot mobile platform. The robot started at (0,0), went to (6,0) and then to (6,3), as shown in Fig. 11. ICP and FFT were used to localize the robot; the results are shown in Table III, and the curves drawn from the results are shown in Fig. 12.

It can be seen from Table III and Fig. 12 that when the speed exceeds 0.15 m/s, the accuracy of the ICP algorithm is poor and the error is up to 0.5 m. In addition, because of the vibration of the robot, the curves generated by the ICP algorithm fluctuate greatly. In contrast, the algorithm proposed in this paper is robust to speed: the localization error is about 0.2 m at a speed of 0.2 m/s. The curves generated by the proposed algorithm are also smoother, so it is robust to vibration as well.
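The end-point errors quoted above can be checked directly against Table III; the short snippet below computes the absolute error of each reported end point in each coordinate relative to the target (6, 3). It is a worked check on the published numbers, not part of the original experiments.

# Absolute end-point errors (per coordinate) from Table III, target (6, 3).
import numpy as np

target = np.array([6.0, 3.0])
speeds = [0.1, 0.125, 0.15, 0.175, 0.2]                       # m/s
icp_end = np.array([[5.9560, 2.9540], [5.9130, 2.9450], [5.8730, 2.9420],
                    [5.6470, 2.7660], [5.5160, 2.5970]])
fft_end = np.array([[5.9300, 3.0260], [5.9050, 2.9900], [5.8710, 2.9990],
                    [5.8100, 2.9120], [5.8070, 2.8740]])
icp_err = np.abs(icp_end - target)                            # |dx|, |dy| for ICP
fft_err = np.abs(fft_end - target)                            # |dx|, |dy| for FFT
for v, (ex1, ey1), (ex2, ey2) in zip(speeds, icp_err, fft_err):
    print(f"{v:5.3f} m/s  ICP ({ex1:.3f}, {ey1:.3f}) m   FFT ({ex2:.3f}, {ey2:.3f}) m")

At 0.2 m/s this gives an ICP x-error of about 0.48 m, consistent with the 0.5 m figure above, while the FFT errors stay near 0.2 m or below.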
V. CONCLUSIONS
In this paper, a new approach to point cloud matching and low-cost mobile robot localization is presented. After preprocessing the point cloud data, we use the FFT to obtain the robot's pose. Experiments and comparison with the traditional ICP method show that the proposed method is effective and robust. However, this work still needs improvement: the test is open-loop and there is no real-time correction of the pose. Closing the loop would greatly improve the accuracy.
REFERENCES
[1] J. A. Castellanos, J. M. Martinez, J. Neria, and J. D. Tardos,
“Simultaneous Map Building and Localization for Mobile Robots: A
Multisensor Fusion Approach,” Proceedings. 1998 IEEE International
Conference on Robotics and Automation, vol. 2, pp. 1244-1249, 1998.
[2] Y. Zhao, F. Liu, and R. Wang, “Location Technology of Indoor Robot
Based on Laser Sensor,” 2016 7th IEEE International Conference on
Software Engineering and Service Science (ICSESS), pp. 683-686, 2016.
[3] S. Se, D. Lowe, and J. Little, “Local and Global Localization for Mobile
Robots using Visual Landmarks,” Proceedings of the 2001 IEEE/RSJ
International Conference on Intelligent Robots and Systems, vol. 1, pp.
414-420, Oct. 2001.
[4] T. Uchimoto, S. Suzuki, and H. Matsubara, “A Method to Estimate
Robot’s Location Using Vision Sensor for Various Type of Mobile
Robots,” 2009 International Conference on Advanced Robotics, pp. 1-6,
2009.
[5] G. N. Desouza, A. C. Kak, “Vision for Mobile Robot Navigation: A
Survey,” IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 24, no. 2, pp. 237-267, 2002.
[6] P. J. Besl, Neil D. Mckay, “A Method for Registration of 3-D Shapes,”
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14,
no. 2, pp. 239-256, 1992.
[7] L. Zhu, H. F. Yang, “Study on Fourier-Merlin Transform for Field Laser
Scanning Point Cloud Registration,” Journal of Beijing University of
Architecture and Technology, vol. 6, no. 2, pp. 55-59, 2015.
[8] H. S. Stone, M. T. Orchard, Ee-Chien Chang, and S. A. Martucci, “A Fast
Direct Fourier-based Algorithm for Subpixel Registration of Images,”
IEEE Transactions on Geoscience and Remote Sensing, vol. 39, no. 10,
pp. 2235-2243, 2001.
[9] A. B. Abche, F. Yaacoub, A. Maalouf, and E. Karam, “Image
Registration based on Neural Network and Fourier Transform,” 2006
International Conference of the IEEE Engineering in Medicine and
Biology Society, pp. 4803-4806, 2006.
[10] R. Gonzalez, “Fourier Based Registration of Differentially Scaled
Images,” 2013 IEEE International Conference on Image Processing, pp.
1282-1285, 2013.
[11] X. Guo, Z. Xu, Y. Lu, and Y. Pang, “An Application of Fourier-Mellin
Transform in Image Registration,” The Fifth International Conference on
Computer and Information Technology (CIT’05), pp. 619-623, 2005.
[12] S. Alliney, C. Morandi, “Digital Image Registration Using Projections,”
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.
PAMI-8, no. 2, pp. 222-233, 1986.
Fig. 11 Walking route of robot.
TABLE III
RESULTS OF LOCALIZATION USING ICP AND FFT

Speed (m/s)   End point (m)   End point of ICP (m)   End point of FFT (m)
0.1              (6,3)          (5.9560, 2.9540)       (5.9300, 3.0260)
0.125            (6,3)          (5.9130, 2.9450)       (5.9050, 2.9900)
0.15             (6,3)          (5.8730, 2.9420)       (5.8710, 2.9990)
0.175            (6,3)          (5.6470, 2.7660)       (5.8100, 2.9120)
0.2              (6,3)          (5.5160, 2.5970)       (5.8070, 2.8740)

Fig. 12 Localization results using ICP and FFT. a) ICP; b) FFT.