TELKOMNIKA Indonesian Journal of Electrical Engineering
Vol. 12, No. 12, December 2014, pp. 8286 ~ 8291
DOI: 10.11591/telkomnika.v12i12.5999
Received March 20, 2014; Revised August 28, 2014; Accepted September 19, 2014
Particle Swarm Optimization with Simulated Binary
Crossover
Lei Yang, Caixia Yang*, Yu Liu
Department of Electrical Information Engineering, Wuhan Polytechnic University, Wuhan 430023, China
*Corresponding author, e-mail: s_t_p@aliyun.com
Abstract
Particle swarm optimization (PSO) is a new intelligent search technique inspired by swarm intelligence. Although PSO has shown good performance on many benchmark optimization problems, it suffers from premature convergence when solving complex multimodal problems. In this paper, we propose a novel PSO algorithm, called PSO with a simulated binary crossover operator (SCPSO), to improve the performance of PSO. Experimental results on several benchmark problems show that SCPSO achieves better performance than standard PSO.
Keywords: particle swarm optimization, evolutionary algorithms, swarm intelligence, global optimization
Copyright © 2014 Institute of Advanced Engineering and Science. All rights reserved.
1. Introduction
Many real-world problems can be formulated as optimization problems in continuous or discrete variable spaces. In the past decades, some intelligent algorithms inspired by nature have been proposed to solve optimization problems, such as Genetic Algorithms (GAs) [1], Particle Swarm Optimization (PSO) [2], Differential Evolution (DE) [3], etc. PSO is one of the most popular intelligent optimization algorithms and has shown good abilities in finding solutions. Because of its simple concept, easy implementation, and efficiency, PSO has attracted much attention, and its study has become a hot spot in optimization algorithms.
Shi and Eberhart introduced a parameter w, called the inertia weight, into the original PSO to balance the global and local search abilities [4]. A large w is better for global search, while a small w is better suited for local search. It was pointed out in [4] that a w decreasing linearly over the evolutionary process is efficient. Afterwards, the version of PSO with inertia weight came to be called standard PSO.
Suganthan proposed a neighborhood technique, which uses the local best particle instead of the global best, where the local best indicates the best particle in the neighborhood of each particle. For each particle, a predefined number of particles are considered as its neighbors. If the predefined number equals the swarm size, the local best becomes the global best [5]. Wang et al. [6] introduced opposition-based learning and Cauchy mutation into standard PSO. The proposed approach (OPSO) evaluates not only the current particle but also the corresponding opposite particle, which helps provide more chances of finding better solutions. In addition, OPSO employs a Cauchy mutation operator on the global best particle, in the hope that the long tail of the Cauchy distribution can help trapped particles escape from local minima. Liang [7] proposed a comprehensive learning particle swarm optimizer (CLPSO) for global optimization of multimodal functions. CLPSO uses a novel learning strategy whereby all other particles' historical best information is used to update a particle's velocity. This strategy preserves the diversity of the swarm and discourages premature convergence. The presented simulation results show that CLPSO outperforms eight other recently proposed PSO variants. Li and He [8] proposed a novel PSO algorithm (GMPSO), which employs a Gaussian mutation and a dynamically adapted inertia weight. The presented results showed that GMPSO outperforms LOWPSO and PSODR on all benchmark functions.
In this paper, we propose an improved PSO variant, called PSO with a simulated binary crossover operator (SCPSO). In SCPSO, we generate two trial particles based on the simulated binary crossover operator. Then, the better trial particle competes with the worst particle in the current population, and the fitter one is selected as the new particle. To verify the performance of SCPSO, we test it on several benchmark problems. Experimental results show that SCPSO outperforms standard PSO in all test cases.
The rest of the paper is organized as follows. In Section 2, we briefly introduce standard PSO. In Section 3, the proposed approach SCPSO is presented. In Section 4, simulation studies are conducted, including test problems, parameter settings, results and discussions. Finally, conclusions and future work are given in Section 5.
2. Particle Swarm Optimization
Particle swarm optimization (PSO) was first developed by Kennedy and Eberhart, motivated by the social behavior of bird flocking and fish schooling [2]. It is a stochastic optimization algorithm which maintains a swarm of candidate solutions, referred to as particles. In PSO, each particle has two vectors, position (X) and velocity (V). Particles fly through the search space by following their own previous best positions and the global best particle. Based on this model, the positions and velocities of particles are updated from generation to generation. There are several main versions of the PSO algorithm; the following version, modified by Shi and Eberhart [4], is used in this paper.
V_i(t+1) = w * V_i(t) + c1 * rand1() * (pbest_i - X_i(t)) + c2 * rand2() * (gbest - X_i(t))    (1)

X_i(t+1) = X_i(t) + V_i(t+1)    (2)
Where X_i and V_i are the position and velocity of the i-th particle; pbest_i and gbest are the previous best position of the i-th particle and the global best position found by all particles so far, respectively; w is an inertia factor [4]; rand1() and rand2() are two random numbers independently generated within the range [0,1]; and c1 and c2 are two learning factors which control the influence of the social and cognitive components.
In Equation (1), the first part is the momentum vector, which represents the inertial motion of the current velocity. The second part is the cognitive vector, which reflects the particle's previous experience and helps it find better positions. The third part is the social vector, which shares the information of the global best particle among all particles.
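The update rules of Equations (1) and (2) can be sketched in Python as follows (a minimal illustration; the default parameter values are the ones used later in Section 4.2, and the function name `pso_update` is ours):

```python
import random

def pso_update(x, v, pbest, gbest, w=0.72984, c1=1.49618, c2=1.49618):
    """One application of Equations (1) and (2) to a single particle.

    x, v, pbest are the i-th particle's position, velocity and previous
    best position; gbest is the global best position found so far.
    """
    new_v = [w * v[j]
             + c1 * random.random() * (pbest[j] - x[j])   # cognitive part
             + c2 * random.random() * (gbest[j] - x[j])   # social part
             for j in range(len(x))]                      # Equation (1)
    new_x = [x[j] + new_v[j] for j in range(len(x))]      # Equation (2)
    return new_x, new_v
```

Note that when a particle sits exactly on both pbest and gbest, the cognitive and social parts vanish and only the momentum term w * V remains, which is why an inertia factor w < 1 lets the swarm settle.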
3. PSO
w
i
th
a Simulated Binar
y
Crossov
e
r Operator
Like other evolutionary algorithms (EAs), PSO is a population-based random search algorithm. Although EAs and PSO have some commonalities in searching for solutions, they differ in their evolutionary operators. A general EA has three evolutionary operators: selection, crossover and mutation, while PSO has no such operators, only the velocity and position updating models.
In this paper, we propose a novel PSO algorithm which employs a simulated binary crossover operator to achieve good performance. The simulated binary crossover operator was proposed by Deb and Beyer and has been successfully applied to GAs [9]. In the simulated binary crossover operator, two offspring, Y1 = (y_{1,1}, y_{1,2}, ..., y_{1,D}) and Y2 = (y_{2,1}, y_{2,2}, ..., y_{2,D}), are generated from two parent vectors, X1 = (x_{1,1}, x_{1,2}, ..., x_{1,D}) and X2 = (x_{2,1}, x_{2,2}, ..., x_{2,D}), as follows:

y_{1,j} = (1/2) * [(1 + β) * x_{1,j} + (1 - β) * x_{2,j}]    (3)
y_{2,j} = (1/2) * [(1 - β) * x_{1,j} + (1 + β) * x_{2,j}]    (4)

Where β is a random spread factor, generated as follows [9]:
β = (2u)^(1/(η+1))               if u ≤ 1/2
β = (1 / (2(1-u)))^(1/(η+1))     if u > 1/2    (5)
Where u is a uniformly distributed random number within (0,1), and η is a distribution index parameter, which is set to 5.0 in this paper.
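As a concrete illustration, Equations (3)-(5) can be sketched as follows (our own minimal Python; a fresh β is drawn per dimension, as is common for SBX, and the function names are ours):

```python
import random

def sbx_beta(eta=5.0):
    """Spread factor beta of Equation (5); eta is the distribution index."""
    u = random.random()                    # uniform in (0, 1)
    if u <= 0.5:
        return (2.0 * u) ** (1.0 / (eta + 1.0))
    return (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))

def sbx_crossover(x1, x2, eta=5.0):
    """Equations (3) and (4): generate two offspring from two parents."""
    y1, y2 = [], []
    for a, b in zip(x1, x2):
        beta = sbx_beta(eta)
        y1.append(0.5 * ((1 + beta) * a + (1 - beta) * b))   # Equation (3)
        y2.append(0.5 * ((1 - beta) * a + (1 + beta) * b))   # Equation (4)
    return y1, y2
```

One useful property follows directly from (3) and (4): for every β, y_{1,j} + y_{2,j} = x_{1,j} + x_{2,j}, so the offspring stay centered on the parents' mean, with η controlling how far they spread from it.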
In our proposed SCPSO, we use the above crossover operator to generate two trial particles Y1 and Y2. Then, we select the fittest one among Y1, Y2 and the worst particle P_w as the new P_w. For each particle X_i, we use the modified simulated binary crossover as follows:
y_{1,j} = (1/2) * [(1 + β) * x_{i,j} + (1 - β) * Best_j]    (6)

y_{2,j} = (1/2) * [(1 - β) * x_{i,j} + (1 + β) * Best_j]    (7)
In order to search for better solutions, we introduce the information of the global best particle Best into the simulated binary crossover operator. The main steps of SCPSO are described in Algorithm 1, where ps is the population size, rand(0,1) is a uniformly distributed random number within [0,1], pc is the crossover rate, FEs is the number of function evaluations, and MAX_FEs is the maximum number of function evaluations.
Algorithm 1: SCPSO
Begin
  While FEs < MAX_FEs do
    For i = 1 to ps do
      Calculate the velocity of X_i according to (1);
      Calculate the position of X_i according to (2);
      Calculate the fitness value of X_i;
      FEs++;
      If rand(0,1) < pc then
        Generate Y1 and Y2 according to (6) and (7);
        Calculate the fitness values of Y1 and Y2;
        FEs = FEs + 2;
        Find the worst particle P_w in the current population;
        Select the fittest one among Y1, Y2 and P_w as the new P_w;
      End If
    End For
  End While
End
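A runnable sketch of Algorithm 1 on the sphere function (our own Python; the pbest/gbest bookkeeping, which Algorithm 1 leaves implicit, follows standard PSO, and positions are left unclamped since the paper does not describe boundary handling):

```python
import random

W, C1, C2 = 0.72984, 1.49618, 1.49618   # common parameter settings from Section 4.2
PC, ETA, PS = 0.05, 5.0, 10             # crossover rate, SBX index, population size

def sphere(x):
    """f1 from Section 4.1, used here as the objective."""
    return sum(v * v for v in x)

def beta(eta=ETA):
    """Spread factor of Equation (5)."""
    u = random.random()
    if u <= 0.5:
        return (2.0 * u) ** (1.0 / (eta + 1.0))
    return (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))

def scpso(f, dim=30, lo=-100.0, hi=100.0, max_fes=150_000):
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(PS)]
    vs = [[0.0] * dim for _ in range(PS)]
    fits = [f(x) for x in xs]
    pbest, pfits = [x[:] for x in xs], fits[:]
    g = min(range(PS), key=lambda i: pfits[i])
    gbest, gfit = pbest[g][:], pfits[g]
    fes = PS
    while fes < max_fes:
        for i in range(PS):
            # Equations (1) and (2): velocity and position update
            vs[i] = [W * vs[i][j]
                     + C1 * random.random() * (pbest[i][j] - xs[i][j])
                     + C2 * random.random() * (gbest[j] - xs[i][j])
                     for j in range(dim)]
            xs[i] = [xs[i][j] + vs[i][j] for j in range(dim)]
            fits[i] = f(xs[i])
            fes += 1
            if fits[i] < pfits[i]:                 # standard pbest/gbest bookkeeping
                pbest[i], pfits[i] = xs[i][:], fits[i]
                if fits[i] < gfit:
                    gbest, gfit = xs[i][:], fits[i]
            if random.random() < PC:
                # Equations (6) and (7): cross X_i with the global best particle
                y1, y2 = [], []
                for j in range(dim):
                    b = beta()
                    y1.append(0.5 * ((1 + b) * xs[i][j] + (1 - b) * gbest[j]))
                    y2.append(0.5 * ((1 - b) * xs[i][j] + (1 + b) * gbest[j]))
                fy1, fy2 = f(y1), f(y2)
                fes += 2
                w_idx = max(range(PS), key=lambda k: fits[k])   # worst particle P_w
                cand, cfit = min([(y1, fy1), (y2, fy2), (xs[w_idx], fits[w_idx])],
                                 key=lambda t: t[1])
                xs[w_idx], fits[w_idx] = cand[:], cfit          # fittest becomes new P_w
    return gbest, gfit
```

The replaced particle keeps its old velocity and personal best, since Algorithm 1 does not specify resetting them; that detail is a modeling choice in this sketch.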
4. Experimental Studies
4.1. Benchmark Problems
In this paper, ten well-known benchmark functions are selected for the experiments. According to their properties, we divide them into two classes: unimodal functions (f1-f4) and multimodal functions (f5-f10). These functions were chosen from an early study in [10]. All the functions used in this paper are to be minimized. The descriptions of the benchmark functions and their global optima are listed as follows:
f1(x) = Σ_{i=1}^{D} x_i², where x_i ∈ [-100, 100], D = 30, and the global optimum is 0.

f2(x) = Σ_{i=1}^{D} |x_i| + Π_{i=1}^{D} |x_i|, where x_i ∈ [-10, 10], D = 30, and the global optimum is 0.

f3(x) = Σ_{i=1}^{D} (⌊x_i + 0.5⌋)², where x_i ∈ [-100, 100], D = 30, and the global optimum is 0.

f4(x) = Σ_{i=1}^{D} i·x_i⁴ + rand[0, 1), where x_i ∈ [-1.28, 1.28], D = 30, and the global optimum is 0.

f5(x) = Σ_{i=1}^{D} -x_i·sin(√|x_i|), where x_i ∈ [-500, 500], D = 30, and the global optimum is -12569.5.

f6(x) = Σ_{i=1}^{D} [x_i² - 10·cos(2πx_i) + 10], where x_i ∈ [-5.12, 5.12], D = 30, and the global optimum is 0.

f7(x) = -20·exp(-0.2·√((1/D)·Σ_{i=1}^{D} x_i²)) - exp((1/D)·Σ_{i=1}^{D} cos(2πx_i)) + 20 + e, where x_i ∈ [-32, 32], D = 30, and the global optimum is 0.

f8(x) = (1/4000)·Σ_{i=1}^{D} x_i² - Π_{i=1}^{D} cos(x_i/√i) + 1, where x_i ∈ [-600, 600], D = 30, and the global optimum is 0.

f9(x) = 0.1·{sin²(3πx_1) + Σ_{i=1}^{D-1} (x_i - 1)²·[1 + sin²(3πx_{i+1})] + (x_D - 1)²·[1 + sin²(2πx_D)]} + Σ_{i=1}^{D} u(x_i, 10, 100, 4), where x_i ∈ [-50, 50], D = 30, and the global optimum is 0.

f10(x) = (π/D)·{10·sin²(πy_1) + Σ_{i=1}^{D-1} (y_i - 1)²·[1 + 10·sin²(πy_{i+1})] + (y_D - 1)²} + Σ_{i=1}^{D} u(x_i, 5, 100, 4), with y_i = 1 + (x_i + 1)/4, where x_i ∈ [-50, 50], D = 30, and the global optimum is 0.

Here u(x_i, a, k, m) is the penalty function from [10]: u = k·(x_i - a)^m if x_i > a; u = 0 if -a ≤ x_i ≤ a; and u = k·(-x_i - a)^m if x_i < -a.
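Three of the benchmarks above, written out in Python as a quick sanity check of the definitions (f1, f6 and f8; our own code):

```python
import math

def f1(x):
    """Sphere function: unimodal, global optimum 0 at x = 0."""
    return sum(v * v for v in x)

def f6(x):
    """Rastrigin function: multimodal, global optimum 0 at x = 0."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def f8(x):
    """Griewank function: multimodal, global optimum 0 at x = 0."""
    s = sum(v * v for v in x) / 4000.0
    p = 1.0
    for i, v in enumerate(x, start=1):
        p *= math.cos(v / math.sqrt(i))
    return s - p + 1.0
```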
4.2. Results
In the experiments, we compare SCPSO with standard PSO on the ten test problems. For the sake of fair comparison, we use the same parameter settings for both PSO and SCPSO. For the common parameters, the population size (ps), w, c1, and c2 are set to 10, 0.72984, 1.49618 and 1.49618, respectively. For SCPSO, the crossover probability pc is set to 0.05. For all test problems, the maximum number of function evaluations MAX_FEs is set to 150,000. Both PSO and SCPSO are run 25 times, and the mean fitness values and the standard deviations are recorded.
Table 1 presents the comparison results of standard PSO and SCPSO on the test suite, where 'Mean' indicates the mean best function values. As seen, SCPSO achieves better results than standard PSO on all test functions. Especially on f2, f3, f4, f8 and f10, SCPSO significantly improves the results. This demonstrates that the proposed crossover operator is effective. Figure 1 shows the convergence curves of standard PSO and SCPSO on f1 and f4. As seen, SCPSO converges very fast over the evolution.
Table 1. The results achieved by standard PSO and SCPSO

Functions | Standard PSO Mean | Standard PSO Std Dev | SCPSO Mean | SCPSO Std Dev
f1        | 2.64e-10          | 3.17e-09             | 8.89e-13   | 5.58e-13
f2        | 0.133             | 0.74                 | 2.43e-06   | 6.82e-06
f3        | 8.00              | 5.93                 | 0          | 0
f4        | 0.329             | 0.69                 | 4.92e-03   | 3.96e-03
f5        | -6646.3           | 725.2                | -7149.7    | 531.6
f6        | 58.7              | 39.3                 | 46.6       | 29.7
f7        | 8.19              | 3.89                 | 4.17       | 2.32
f8        | 0.58              | 0.42                 | 1.40e-09   | 3.62e-09
f9        | 0.62              | 0.59                 | 0.31       | 0.17
f10       | 0.02              | 3.11e-03             | 8.90e-09   | 6.35e-09
Figure 1. The evolutionary processes of standard PSO and SCPSO on four functions
5. Conclusion
This paper presents an improved PSO algorithm, called PSO with a simulated binary crossover operator (SCPSO). The proposed approach employs a modified simulated binary crossover operator. First, we generate two trial particles based on the current particle and the global best particle. Then, the better of the two trial particles is compared with the worst particle in the current swarm, and the fitter one becomes the new worst particle. Simulation results show that SCPSO achieves better results than standard PSO in all test cases. This demonstrates that the proposed strategy is effective. In our future work, we will apply SCPSO to solve some real-world problems.
References
[1] DE Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. New York: Addison-Wesley Longman Publishing Co. 1989.
[2] J Kennedy, RC Eberhart. Particle swarm optimization. IEEE International Conference on Neural Networks. Perth, Australia. 1995: 1942-1948.
[3] R Storn, K Price. Differential evolution - A simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization. 1997; 11: 341-359.
[4] Y Shi, RC Eberhart. A modified particle swarm optimizer. Proceedings of the IEEE Conference on Evolutionary Computation. Piscataway. 1998: 69-73.
[5] PN Suganthan. Particle swarm optimiser with neighbourhood operator. Proceedings of the IEEE Congress on Evolutionary Computation. Washington DC. 1999: 1958-1962.
[6] H Wang, Y Liu, SY Zeng, H Li, CH Li. Opposition-based particle swarm algorithm with Cauchy mutation. Proceedings of the IEEE Congress on Evolutionary Computation. Brisbane. 2007: 4750-4756.
[7] JJ Liang, AK Qin, PN Suganthan. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Transactions on Evolutionary Computation. 2006; 10(3): 281-295.
[8] L Li, X He. Gaussian mutation Particle Swarm Optimization with dynamic adaptation inertia weight. World Congress on Software Engineering. Xiamen. 2009; 4: 454-459.
[9] K Deb, HG Beyer. Self-adaptive genetic algorithms with simulated binary crossover. Evolutionary Computation Journal. 2001; 9(2): 195-219.
[10] J Brest, S Greiner, B Boskovic, M Mernik, V Zumer. Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems. IEEE Transactions on Evolutionary Computation. 2006; 10(6): 646-657.