ICCSNT.2016.8070184

2016 5th International Conference on Computer Science and Network Technology (ICCSNT), Changchun, China
IMOPSO: An Improved Multi-objective Particle Swarm Optimization Algorithm
Borong Ma, Jun Hua, Zhixin Ma, Xianbo Li
School of Information Science and Engineering
Lanzhou University
Lanzhou, China
{ [email protected], [email protected], [email protected], [email protected]}
In this paper, an improved algorithm is proposed to enhance the performance of MOPSO, including its convergence, diversity and distribution, through the following improvement strategies: ① dynamic acceleration coefficients based on a sine transform to improve the global search ability; ② a drift motion applied to the best positions to find better solutions and expand the search scope; ③ mutation rates generated by a modified Lévy flight[5], aimed at keeping the diversity of the Pareto solutions. Hence, an improved multi-objective particle swarm optimization algorithm (IMOPSO) is proposed. Moreover, the simulation test results show the effectiveness of IMOPSO.
Abstract—An improved multi-objective particle swarm optimization (IMOPSO) algorithm is presented to address the differing demands for decision and state variables in engineering optimization. IMOPSO adopts a new dynamic scheme for the acceleration coefficients, based on a sine transform, to improve the global search ability in the early period and the local search ability in the later runs of the algorithm. To expand the search area of particles, a drift motion is applied to the personal best positions. Moreover, a dynamic mutation strategy, in which the mutation rates are generated by a modified Lévy flight, is used to help particles escape from local optima. Finally, the efficiency of the algorithm is verified with test functions, and the experimental results show that IMOPSO is superior to the MOPSO algorithm in several respects, such as obtaining better convergence to the true Pareto fronts with good diversity and uniformity.
II. RELATED WORKS
A. Multi-objective Optimization Problems
Usually, the mathematical model of a multi-objective
optimization problem which has n independent variables and
m objective functions can be formulated as shown below:
Keywords—Particle swarm optimization algorithm; Multi-objective optimization; Acceleration coefficients; Drift motion; Mutation
I. INTRODUCTION
min F(x) = (f1(x), f2(x), …, fm(x))    (1)
s.t. gi(x) ≤ 0, i = 1, 2, …, q
     hj(x) = 0, j = 1, 2, …, p
Most optimization problems in engineering decisions are multi-objective optimization problems. Because of the actual needs of production and daily life, such problems have become a focus in the field of optimization.
where x = (x1, x2, …, xn) ∈ X ⊆ R^n is the decision variable and X is the n-dimensional decision space. This problem has m objectives: fi(x) (i = 1, …, m) is the ith objective to be minimized, gi(x) (i = 1, …, q) is the ith inequality constraint, and hj(x) (j = 1, …, p) denotes the jth equality constraint. Additionally, the set of feasible solutions determined by all the constraints is denoted Ω, and Y = {F(x) | x ∈ Ω} is the objective space.
To solve complex multi-objective optimization problems, many methods have been proposed, and swarm intelligence algorithms have become prevalent. Multi-objective Evolutionary Algorithms (MOEAs) are well-known search algorithms based on swarm intelligence that imitate the process of species inheritance, variation and selection. Many excellent evolutionary algorithms have been proposed after the pioneering effort of Schaffer[1].
B. MOPSO and Previous Research
The particle swarm optimization algorithm is an evolutionary algorithm inspired by the social behavior of birds. In PSO, a bird is called a particle, and each particle represents a solution. Each particle decides its flying distance and direction by a velocity, and all particles have a fitness value based on the optimization function. PSO keeps track of the personal best position gained in foregoing iterations and the global best position found by the whole population so far. The position and velocity of a particle are updated using the following equations:
Inspired by birds' flocking behaviors, the particle swarm optimization (PSO) algorithm[2] is another widely used evolutionary algorithm, with fast convergence, simple operation and few parameters. The multi-objective particle swarm optimization (MOPSO) algorithm was first proposed by Coello[3] in 2002; an adaptive grid was used to maintain the external archive, a mutation strategy was employed, and the algorithm became a milestone in multi-objective optimization[4]. Since then, the MOPSO algorithm has attracted extensive attention from scholars, and many improved versions have followed in quick succession.
978-1-5090-2129-1/16/$31.00 ©2016 IEEE
Vi^(t+1) = w ∗ Vi^t + c1 ∗ rand1 ∗ (pbest_i^t − Xi^t) + c2 ∗ rand2 ∗ (gbest_i^t − Xi^t)    (2)

Xi^(t+1) = Xi^t + Vi^(t+1)    (3)
where Xi = (xi1, xi2, …, xiD) and Vi = (vi1, vi2, …, viD) are the position vector and velocity vector of the ith particle in a D-dimensional space, respectively; w denotes the inertia weight; c1 and c2 represent the acceleration coefficients, within (0,4); rand1 and rand2 are two random numbers within [0,1]; t = 1, 2, … indicates the iteration number; pbest_i^t is the best position in the search space already visited by particle i, and gbest^t is the global best position discovered so far.
In order to avoid local optima, Parsopoulos[6] proposed a method for online adaptation. Sierra and Coello[7] divided the population into several sub-populations with different mutation operators, and also introduced crowding distance and ε-dominance to maintain the diversity of solutions. Raquel[8] presented an approach incorporating the crowding distance computation of NSGA-II into PSO to solve multi-objective optimization problems. Huang[9] integrated the Pareto dominance concept into a comprehensive learning particle swarm optimizer (CLPSO)[10]. Zheng used decomposed decision variables and cooperative coevolutionary sub-swarms in a MOPSO[11]. The decomposition-based multi-objective particle swarm optimizer (dMOPSO)[12] was also presented. The multi-objective particle swarm optimization with preference-based sorting, named MOPSO-PS, was proposed, in which a global best position is selected randomly from the non-dominated solutions in an archive sorted by a global evaluation that considers the user's preference for multiple objectives[13].
III. THE PROPOSED APPROACH
c1 = c1i − (c1i − c1f) ∗ sin(π ∗ t / (2 ∗ maxT))    (4)

c2 = c2i + (c2f − c2i) ∗ sin(π ∗ t / (2 ∗ maxT))    (5)
Fig.1. Variation diagram of acceleration coefficients based on sine
transform.
where t is the iteration number and maxT is the maximum number of iterations; c1i and c2i represent the initial values of c1 and c2, while c1f and c2f denote their final values. With this setting, the cognitive component c1 is given a larger value and the social component c2 a smaller value in the early period of the algorithm, which makes fast search and high efficiency easy to realize. Furthermore, a small value of the cognitive component c1 and a large value of c2 improve the speed of convergence in the later part of the optimization while keeping high accuracy. The variation of the acceleration coefficients based on the sine transform over time is shown in Fig.1.
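As an illustrative sketch (not the authors' code; the function name is hypothetical), a sine-based schedule consistent with the description, in which c1 decays from c1i to c1f and c2 rises from c2i to c2f over maxT iterations, could look like:

```python
import math

def acceleration_coefficients(t, max_t, c1i=2.0, c1f=0.5, c2i=0.5, c2f=2.0):
    """Sine-transform schedule for the acceleration coefficients.

    As t runs from 0 to max_t, c1 decays from c1i to c1f (less cognition)
    while c2 grows from c2i to c2f (more social attraction).
    """
    s = math.sin(math.pi * t / (2.0 * max_t))  # rises smoothly from 0 to 1
    c1 = c1i - (c1i - c1f) * s
    c2 = c2i + (c2f - c2i) * s
    return c1, c2
```

At t = 0 this yields (2.0, 0.5), favoring exploration; at t = max_t it yields (0.5, 2.0), favoring exploitation, matching the early/late behavior described above.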
B. Drift Motion
In the real world, flying birds often change their trajectory
slightly as a result of external influences like strong winds or
obstacles. Inspired by this phenomenon, random deviation
motion will be applied to the pbest i of the particle population
to expand the scope of search in this essay. The drift motion
can be formulated by the following equation:
The details of the proposed IMOPSO algorithm are described in this section.
A. Dynamic Acceleration Coefficients based on Sine
Transform
pbest'_id = pbest_id + (2 ∗ rand() − 1) ∗ sin(2 ∗ π ∗ rand())    (6)
where d is the spatial dimension, pbest id represents the dth
dimension value of pbest i in the search space so far, and rand()
is a function that can generate random numbers in [0,1].
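As an illustration (a hypothetical helper, not the authors' code, assuming the perturbation term (2·rand()−1)·sin(2π·rand()) as in (6)):

```python
import math
import random

def drift_pbest(pbest):
    """Apply a random drift to every dimension of a personal best position.

    Each component is shifted by (2*rand() - 1) * sin(2*pi*rand()),
    a random deviation bounded by [-1, 1], to widen the search scope.
    """
    return [p + (2.0 * random.random() - 1.0) * math.sin(2.0 * math.pi * random.random())
            for p in pbest]
```

In the full algorithm, the drifted position would only replace pbest_i if it turns out to be better, so the drift can only expand, not degrade, the search.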
In MOPSO, the search for the optimal solution is directed by the cognitive component c1 and the social component c2, which are generally both set to 2. Many scholars have studied the acceleration coefficients and found that a large value of c1 results in excessive wandering of individuals through the search space, leading to slower convergence; in contrast, a high value of c2 can cause the population to lose diversity and run into local optima earlier.
C. Mutation Rates generated by Modified Lévy Flight
As the number of iterations increases, the decrease in population diversity may cause premature convergence in the MOPSO algorithm. Especially for high-dimensional problems, the swarm easily falls into local optima, which makes converging to the true Pareto front difficult.
Therefore, to improve the diversity of the Pareto solutions and avoid getting trapped in local best solutions, mutation is introduced. The mutation operation is implemented using a dynamic mutation strategy in which the mutation rates are generated by a modified Lévy flight.
Considering the stagnation in the later period of the MOPSO algorithm caused by the scarce diversity of particles, a new method of setting the acceleration coefficients is proposed, in which the coefficients change nonlinearly and dynamically based on a sine transform; the modified equations for c1 and c2 are given in (4) and (5).
① Update the position and velocity of each particle by formulas (2) and (3), in which the acceleration coefficients are calculated by (4) and (5), and update pbest_i;
② Update pbest_i again by adopting the drift motion (6);

③ Store and update the non-dominated solutions in the external archive At.

Step 3 (Mutation operation). Determine a mutation rate for each particle of the population by the modified Lévy flight according to (7), (8) and (9). Then update the new population and the external archive At after mutation.

The mutation rates are generated by a modified Lévy flight: Lévy(β) is a random number drawn from a Lévy distribution, calculated as follows:

Lévy(β) ~ μ / |v|^(1/β)    (7)

σ = [ Γ(1 + β) ∗ sin(π ∗ β / 2) / ( Γ((1 + β)/2) ∗ β ∗ 2^((β−1)/2) ) ]^(1/β)    (8)

where β is a constant, usually set to 1.5; μ and v are normally distributed random numbers, with μ drawn with mean 0 and standard deviation σ and v drawn with mean 0 and standard deviation 1; Γ is the gamma function.
Step 4 (Maintain external archive). When the external archive's size exceeds ArchiveMax, maintain it by crowding distance.

Step 5 (Update gbest_i). Obtain gbest_i by the crowding distance of the particles in the external archive.
The random number Lévy(β) is generated such that large and small values alternate randomly according to the Lévy flight, and they are often greater than 1; consequently, a value in [0, 1) is obtained as the mutation rate Pa through a function called floor. The equation of the mutation rate Pa is shown below, and Fig.2 shows the tendency chart of Pa:

Pa = Lévy(β) − floor(Lévy(β))    (9)
Step 6 (End condition). If the stop condition is satisfied, stop; otherwise, jump to Step 2.
In MOPSO, the update step simulates the activity of particles, in which the movement of a particle is primarily driven by its personal experience and the experience of the population in the search space. gbest_i is updated by the crowding distance of the non-dominated solutions generated by the dominance relationship.
Moreover, the non-dominated solutions are saved in an external archive, which is significant because gbest_i is selected from it. When its size exceeds the maximum, redundant solutions are deleted by crowding distance to maintain the diversity and uniformity of the solutions.
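The dominance relation and archive update just described can be sketched in Python (an illustrative sketch, not the authors' code; `dominates` and `update_archive` are hypothetical helper names, and the crowding-distance truncation is omitted):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (for minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidates):
    """Merge candidate objective vectors into the external archive,
    keeping only the non-dominated ones (size control by crowding
    distance, as in the paper, is omitted from this sketch)."""
    pool = archive + candidates
    return [p for i, p in enumerate(pool)
            if not any(dominates(q, p) for j, q in enumerate(pool) if j != i)]
```

Selecting gbest_i would then draw from this archive, e.g. preferring solutions in sparsely populated regions by crowding distance.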
where floor(x) returns the largest integer not greater than x.
To avoid obtaining only locally optimal solutions, the mutation is realized by adding a small number, bounded by the related variable's upper limit, to a randomly chosen dimension of the particle, and the mutation rates of the particles change dynamically with the modified Lévy flight method.
Fig.2. Tendency chart of Pa
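A sketch of the mutation-rate generator, assuming Mantegna's algorithm for the Lévy step (consistent with (7) and (8)); `levy` and `mutation_rate` are illustrative names, not the authors' code:

```python
import math
import random

def levy(beta=1.5):
    """Draw one Lévy-distributed step via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma)  # mu ~ N(0, sigma^2)
    v = random.gauss(0.0, 1.0)    # v  ~ N(0, 1)
    return u / abs(v) ** (1 / beta)

def mutation_rate(beta=1.5):
    """Mutation rate Pa in [0, 1): the fractional part of a Lévy step,
    so occasional large Lévy jumps translate into varied mutation rates."""
    x = levy(beta)
    return x - math.floor(x)
```

Because x − floor(x) always lies in [0, 1), the rate is a valid probability while still inheriting the heavy-tailed variability of the Lévy flight.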
D. Steps of the IMOPSO
Based on all above improvement strategies, an improved
multi-objective particle swarm optimization algorithm
(IMOPSO) is proposed. The specific steps of IMOPSO are
described as follows:
IV. RESULTS AND ANALYSIS
A. Multi-objective Test Functions and Parameter Settings
To validate the proposed IMOPSO, four test functions, ZDT1~ZDT3 and ZDT6[14], are used in this paper. The parameter settings are as follows:
For the test functions ZDT1, ZDT2 and ZDT3, d=30; for ZDT6, d=10. For both algorithms, the size of the initial population is N=50, the maximum number of iterations is maxT=200, the external archive size is ArchiveMax=100, and w=0.7298. c1 = c2 = 1.4962 in MOPSO; c1i = c2f = 2 and c1f = c2i = 0.5 in IMOPSO.
Step 1 (Initialization). Randomly generate the position and velocity of each particle within the predefined decision variable range, together with the other related parameter values, including the population size N, the maximum iteration number maxT and the maximum size of the external archive ArchiveMax. Evaluate the objective values of each particle in the initial population Pt, then find and store the non-dominated solutions in the external archive At. Initialize the present position as pbest_i; gbest_i is obtained by the crowding distance of the non-dominated solutions.
B. Performance Measures
In order to evaluate the proposed algorithm IMOPSO and
compare it with other established multi-objective algorithms
fairly, three widely used performance metrics are adopted:
convergence index (γ), diversity index (η) and uniformity
index (σd ).
Step 2 (Update operation). Take the following operations for all particles:
Convergence index γ: It indicates the degree of
approximation from the obtained front to the true Pareto front.
Diversity index η: The spread of the solutions on the Pareto front:

η = (1/M) ∗ Σ_{k=1..M} (z_k^max − z_k^min) / (f_k^max − f_k^min)    (10)
Fig.5. Pareto front obtained by IMOPSO and MOPSO on ZDT3
where M is the total number of the objectives. zkmax and zkmin
are the obtained maximum and minimum values of the kth
objective. fkmax and fkmin are the true maximum and minimum
values of the corresponding objective on the Pareto front,
respectively.
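Equation (10) is straightforward to compute; a small illustrative sketch (hypothetical helper name, not the authors' code):

```python
def diversity_index(z_min, z_max, f_min, f_max):
    """Diversity index eta: average, over the M objectives, of the
    obtained objective range divided by the true Pareto-front range.

    z_min/z_max: obtained per-objective extremes;
    f_min/f_max: true per-objective extremes on the Pareto front.
    """
    m = len(z_min)
    return sum((z_max[k] - z_min[k]) / (f_max[k] - f_min[k]) for k in range(m)) / m
```

If the obtained front spans the full true range on every objective, η = 1, its ideal value.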
Uniformity index σd: The degree of uniformity of the obtained solutions:

σd = [ (1/n) ∗ Σ_{i=1..n} (d_i − d̄)^2 ]^(1/2)    (11)

where d_i is the normalized distance between neighboring solutions on the obtained front and d̄ is the mean of these distances.
Fig.6. Pareto front obtained by IMOPSO and MOPSO on ZDT6
where the normalized Euclidean distance between solutions a and b is calculated by:

d(a, b) = [ Σ_{k=1..M} ( (f_k(a) − f_k(b)) / (f_k^max − f_k^min) )^2 ]^(1/2)    (12)

The performance indices calculated for IMOPSO and MOPSO are shown in Tables I-III:
TABLE I. CONVERGENCE INDEX
C. Results and Discussion
All simulation tests are based on MATLAB 2010b under Windows 8. To reduce the influence of the randomness of the data on the test results and to reflect the performance more accurately, each algorithm was run 20 times independently. Based on these parameter settings, the Pareto fronts produced by IMOPSO and MOPSO for the four test functions are shown in Fig.3-6. It is clear that IMOPSO obtains better Pareto fronts with more uniformly distributed solution points.
             MOPSO        IMOPSO
ZDT1  mean   7.1773E-06   2.3264E-11
      var    1.3678E-04   1.7163E-07
ZDT2  mean   1.3631E-03   1.0904E-05
      var    3.6810E-06   4.2426E-11
ZDT3  mean   7.2693E-04   4.4839E-05
      var    1.4268E-06   1.6795E-09
ZDT6  mean   7.9520E-06   3.9656E-06
      var    7.1707E-11   1.5928E-11
The results of the convergence index for each function and algorithm are summarized in Table I. Both MOPSO and IMOPSO can properly approximate the true Pareto fronts, but IMOPSO converges better than MOPSO, which further confirms the advantages of our algorithm. For ZDT3, the convergence performance is improved by one order of magnitude, and for ZDT1 and ZDT2 by two orders of magnitude. In addition, IMOPSO has more stable performance.
Fig.3. Pareto front obtained by IMOPSO and MOPSO on ZDT1
TABLE II. DIVERSITY INDEX
             MOPSO        IMOPSO
ZDT1  mean   0.7855       1.0000
      var    7.2283E-02   0.0000E+00
ZDT2  mean   0.9408       1.0000
      var    7.0118E-03   0.0000E+00
ZDT3  mean   1.0164       1.0001
      var    4.0163E-04   1.5558E-08
ZDT6  mean   1.2190       1.0000
      var    9.5880E-01   1.6859E-11

Fig.4. Pareto front obtained by IMOPSO and MOPSO on ZDT2
We will continue to enhance the performance of IMOPSO and to extend it to solve real-world multi-objective problems.
Table II presents the comparative results of MOPSO and IMOPSO on these test functions regarding the diversity index. IMOPSO performs best on all the test functions; in particular, the Pareto fronts of ZDT1 and ZDT2 show the best diversity.
REFERENCES
As shown in Table III, the uniformity of IMOPSO is better than that of MOPSO. The uniformity indices for ZDT2, ZDT3 and ZDT6 are one order of magnitude smaller than those of MOPSO, which demonstrates the better performance of IMOPSO.
The above results show that the performance of the improved MOPSO algorithm (IMOPSO) is much better than that of MOPSO, especially on ZDT2. In conclusion, the comparison of the test results of the two algorithms indicates that IMOPSO obtains much better solutions approximating the true Pareto fronts, with better diversity and uniformity overall.
TABLE III. UNIFORMITY INDEX
             MOPSO        IMOPSO
ZDT1  mean   8.8038E-04   8.0539E-04
      var    2.7770E-08   4.3868E-08
ZDT2  mean   8.6978E-03   9.5252E-04
      var    1.8230E-06   5.6018E-08
ZDT3  mean   1.0397E-03   5.7511E-04
      var    6.7660E-07   3.3808E-08
ZDT6  mean   1.3324E-03   9.5196E-04
      var    2.9934E-06   1.2684E-07
V. CONCLUSIONS AND FUTURE WORK
In this paper, an efficient improved multi-objective particle swarm optimization algorithm (IMOPSO) is presented to solve multi-objective optimization problems. A novel dynamic change of the acceleration coefficients based on a sine transform is applied to enhance the global search ability. To make the population move towards better positions, a drift motion applied to the personal best positions of the particles is presented. Additionally, a mutation operation whose rates are generated by a modified Lévy flight is performed to keep the non-dominated solutions diverse and uniform. Compared with the MOPSO algorithm, the experimental results illustrate that IMOPSO performs best on almost all of the test functions.
For future work, we need to experiment with test functions of higher dimensionality and more complexity.
[1] J. D. Schaffer, "Multiple objective optimization with vector evaluated genetic algorithms," in Proceedings of the 1st International Conference on Genetic Algorithms, 1985, pp. 93-100.
[2] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, 1995, pp. 1942-1948, vol. 4.
[3] C. A. C. Coello and M. S. Lechuga, "MOPSO: A proposal for multiple objective particle swarm optimization," in Proceedings of the 2002 Congress on Evolutionary Computation (CEC'02), 2002, pp. 1051-1056.
[4] C. A. C. Coello, G. T. Pulido, and M. S. Lechuga, "Handling multiple objectives with particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, pp. 256-279, Jun. 2004.
[5] P. Barthelemy, J. Bertolotti, and D. S. Wiersma, "A Lévy flight for light," Nature, vol. 453, pp. 495-498, May 2008.
[6] K. E. Parsopoulos and M. N. Vrahatis, "On the computation of all global minimizers through particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, pp. 211-224, Jun. 2004.
[7] M. R. Sierra and C. A. C. Coello, "Improving PSO-based multi-objective optimization using crowding, mutation and ε-dominance," in International Conference on Evolutionary Multi-Criterion Optimization, 2005, pp. 505-519.
[8] C. R. Raquel and P. C. Naval, "An effective use of crowding distance in multiobjective particle swarm optimization," in Genetic and Evolutionary Computation Conference, 2005, pp. 257-264.
[9] V. L. Huang, P. N. Suganthan, and J. J. Liang, "Comprehensive learning particle swarm optimizer for solving multiobjective optimization problems," International Journal of Intelligent Systems, vol. 21, pp. 209-226, Feb. 2006.
[10] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Transactions on Evolutionary Computation, vol. 10, pp. 281-295, Jun. 2006.
[11] X. W. Zheng and H. Liu, "A scalable coevolutionary multi-objective particle swarm optimizer," International Journal of Computational Intelligence Systems, vol. 3, pp. 590-600, Oct. 2010.
[12] S. Zapotecas Martínez and C. A. Coello Coello, "A multi-objective particle swarm optimizer based on decomposition," in Genetic and Evolutionary Computation Conference (GECCO 2011), Dublin, Ireland, Jul. 2011, pp. 69-76.
[13] S. Cheng, M. Y. Chen, and P. J. Fleming, "Improved multi-objective particle swarm optimization with preference strategy for optimal DG integration into the distribution system," Neurocomputing, vol. 148, pp. 23-29, Jan. 2015.
[14] A. C. Godínez, L. E. M. Espinosa, and E. M. Montes, "An experimental comparison of multiobjective algorithms: NSGA-II and OMOPSO," in Electronics, Robotics and Automotive Mechanics Conference, 2010, pp. 28-33.