TELKOMNIKA, Vol. 13, No. 1, March 2015, pp. 331–340
ISSN: 1693-6930, accredited A by DIKTI, Decree No: 58/DIKTI/Kep/2013
DOI: 10.12928/TELKOMNIKA.v13i1.1267
Received October 18, 2014; Revised December 21, 2014; Accepted January 10, 2015
A Self-Adaptive Chaos Particle Swarm Optimization Algorithm

Yalin Wu*, Shuiping Zhang
Faculty of Information Engineering, Jiangxi University of Science and Technology, Ganzhou, Jiangxi, China
*Corresponding author, e-mail: 1159376855@qq.com
Abstract
As a new evolutionary algorithm, particle swarm optimization (PSO) achieves integrated evolution through the information shared between the individuals. All the particles have the ability to adjust their own speed and remember the optimal positions they have experienced. This algorithm has solved many practical engineering problems and achieved good optimization results. However, PSO can easily get trapped in a local extremum, making it fail to reach the global optimal solution and reducing its convergence speed. To remedy these deficiencies, this paper proposes an adaptive chaos particle swarm optimization (ACPSO) based on the idea of chaos optimization, after analyzing the basic principles of PSO. This algorithm improves the population diversity and the ergodicity of particle search through the properties of chaos; adjusts the inertia weight according to the premature convergence of the population and the individual fitness; balances global and local optimization; and thus effectively avoids premature convergence and improves algorithm efficiency. Experimental simulation has verified its effectiveness and superiority.

Keywords: chaotic theory, particle swarm optimization, self-adaptation
1. Introduction
PSO is a new evolutionary algorithm developed in recent years. As a novel swarm intelligence search technique, it has attracted the attention of a great number of researchers since it was proposed, and they have applied it actively in many fields because of its simple computation form, few parameter settings and quick search speed. As a kind of evolutionary algorithm, PSO starts from a random solution, searches for the optimal solution via iterations and evaluates solution quality through fitness, but its rules are simpler than those of the genetic algorithm (GA). It doesn't have the "crossover" and "mutation" operations of GA; it searches for the global optimum by following the current optimal value [1]. This kind of algorithm has drawn the attention of the academic community for its advantages of ease of realization, high accuracy and fast convergence, and it has demonstrated its superiority in solving practical problems.
In practical applications, since the initialization of the basic PSO is random, it has a certain blindness. Although random initialization can basically guarantee a uniform distribution of the initial population, it can't guarantee the quality of every particle; some particles may lie far away from the optimal solution, which affects the convergence speed of the algorithm [2]. PSO can easily get trapped in a local extremum and then fails to reach the global optimal solution. The convergence speed of PSO is also quite slow: it usually takes considerable time to reach the required accuracy in solving practical problems, and sometimes it is not worthwhile to spend a long time obtaining a feasible solution [3]. The cause of this problem is that PSO doesn't make full use of the information generated during the computation; instead, it only uses the information of the global optimum and the individual optimum in every iteration. Besides, the algorithm itself doesn't have a mechanism to eliminate bad candidate solutions, so it has a slow convergence speed [4],[5].
To address the above-mentioned shortcomings of basic PSO, this paper proposes adaptive chaos particle swarm optimization (ACPSO) based on the theory of chaos optimization. Generally, chaos refers to a random-looking motion state obtained from a deterministic formula, and it is a common phenomenon in non-linear systems. It is complicated and has characteristics similar to randomness. However, the seemingly disordered chaotic change is not totally random; on the contrary, it follows a certain inherent law. ACPSO performs chaotic initialization of the basic particle swarm, exploiting the unique ergodicity of chaos optimization: it generates a series of initial solution groups and selects the initial population from them. In this way, it effectively improves the diversity of PSO and the ergodicity of particle search. The inertia weight in PSO adjusts itself according to the fitness value of every particle individual so as to enhance its "global coarse search" and "local fine search" capacities and improve its convergence speed and accuracy. The quality of the current particle is judged according to its fitness value, and a chaotic operation is conducted on some selected particles to help "inert" particles jump out of the local solution region and quickly search for the global optimal solution. The last part of this paper verifies the effectiveness and advancement of the proposed algorithm through experimental simulation.
2. Overview of Standard Particle Swarm Optimization
The initial purpose of Doctor Eberhart and Doctor Kennedy was to establish a model in a 2D space to schematize the motion of a bird flock. Assume a scenario in which a group of birds randomly search for food in a region where there is only one piece of food. None of the birds knows where the food is; all they know is how far they are from the food. What, then, is the optimal strategy to find the food? The simplest and most effective strategy is to search the surrounding area of the bird which is the closest to the food. PSO is inspired by this kind of model and is used to settle optimization problems. The bird is abstracted as a particle with no mass or volume and extended to an N-dimensional space, where the position of particle i is expressed as the vector X_i = (x_i1, x_i2, ..., x_iN) and its flying speed as the vector V_i = (v_i1, v_i2, ..., v_iN). Every particle has a fitness value determined by the objective function; it knows the best position p_i it has found so far and its current position, which can be seen as its own flying experience. In addition, every particle also knows the optimal position p_g found by all particles in the group (p_g is the optimal value among all p_i). The particle decides its next motion according to its own experience and the best experience of its companions.
For the kth iteration, every particle in the PSO changes according to the formulas below:

v_id^(k+1) = v_id^k + c1·rand()·(p_id − x_id^k) + c2·rand()·(p_gd − x_id^k)    (1)

x_id^(k+1) = x_id^k + v_id^(k+1)    (2)
In Formulas (1) and (2), i = 1, 2, ..., M, where M is the total number of particles in the group. v_id^k is the dth component of the velocity vector of particle i in the kth iteration and x_id^k is the dth component of its position vector. p_id is the dth component of the optimal position p_i of particle i, and p_gd is the dth component of the optimal position p_g of the group. c1 and c2 are weight factors, and rand() is a random function generating a random number within the range (0,1). Formula (1) computes the new speed of particle i from three parts: the speed of particle i at the former moment; the distance between the current position of particle i and its own optimal position; and the distance between the current position of particle i and the optimal position of the group. Particle i computes the coordinates of its new position through Formula (2), and thus decides its next motion position through Formulas (1) and (2). From the angle of sociology, the first part of Formula (1) is called the memory item, standing for the influence of the last speed and direction; the second part is the self-cognition item, a vector pointing from the current point to the particle's own optimal point, representing that the particle's motion comes from its own experience; and the third part is the group-cognition item, namely a vector from the current point to the optimal point of the group, reflecting the coordination and information sharing between the particles. Therefore, the particle determines its next motion through its own experience and the best experience of its partners [6],[7]. The 2D example in Figure 1 describes the principle by which the particle moves from position X^k to X^(k+1) according to Formulas (1) and (2).
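The updates in Formulas (1) and (2) are compact enough to state directly in code; a minimal sketch (the values c1 = c2 = 2 and the toy particle are illustrative assumptions, not settings from this paper):

```python
import random

def pso_step(x, v, p_best, g_best, c1=2.0, c2=2.0):
    """One velocity and position update per Formulas (1) and (2)."""
    new_v = [vd + c1 * random.random() * (pd - xd)
                + c2 * random.random() * (gd - xd)
             for vd, xd, pd, gd in zip(v, x, p_best, g_best)]
    new_x = [xd + vd for xd, vd in zip(x, new_v)]
    return new_x, new_v

# toy usage: one 2-D particle pulled toward its own and the group's best
x1, v1 = pso_step([1.0, -2.0], [0.0, 0.0], [0.5, -1.0], [0.0, 0.0])
```

Each dimension is updated independently, with fresh random numbers for the self-cognition and group-cognition items.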
Figure 1. Schematic diagram of particle motion
In 1998, Shi and others revised Formula (1) and introduced the inertia weight factor w:

v_id^(k+1) = w·v_id^k + c1·rand()·(p_id − x_id^k) + c2·rand()·(p_gd − x_id^k)    (3)
The non-negative w is called the inertia factor. The bigger w is, the stronger the global optimization ability and the weaker the local optimization ability; a small w is good for local search. Initially, Shi took w as a constant, but later experiments found that a dynamic w can obtain better optimization results than a fixed value. A dynamic w can change linearly during the search process of PSO, and it can also change dynamically according to some measure function of the performance of PSO. At present, the linearly decreasing weight (LDW) strategy proposed by Shi has been used frequently, namely:

w_k = w_max − (w_max − w_min)·k/N    (4)
In this formula, N is the maximum number of iterations, and w_max and w_min are the maximum and minimum inertia weights respectively. The introduction of the inertia weight factor can greatly improve the performance of PSO; it also allows the global and local search capacities to be adjusted for different search problems, making PSO applicable to many practical problems. Formulas (2) and (3) are called standard PSO [8].
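Formula (4) amounts to a one-line schedule; a minimal sketch (the bounds 0.9 and 0.4 are common defaults in the PSO literature, not values reported here):

```python
def ldw_weight(k, n_iter, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight of Formula (4)."""
    return w_max - (w_max - w_min) * k / n_iter

# w decays from w_max at the first iteration to w_min at the last
ws = [ldw_weight(k, 100) for k in (0, 50, 100)]
```

Early iterations thus favor the global coarse search (large w) and late iterations the local fine search (small w).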
3. Chaos Optimization Algorithm and Its Ergodic Property
In non-linear systems, chaos is a common motion phenomenon with such excellent characteristics as ergodicity, randomness and "regularity". Chaotic motion can visit all the states in the state space without repetition according to a certain "rule" within a certain motion range. Even a tiny change in the initial value can cause huge changes in the later motion, which is called strong sensitivity to the initial value. Researchers have exploited these features of chaos and proposed the chaos optimization algorithm, which can easily jump out of a local solution region and has high computation efficiency [9]. The most typical chaotic system comes from the Logistic formula, whose iterative form is as follows:

x_(n+1) = μ·x_n·(1 − x_n),  x_0 ∈ (0, 1),  n = 0, 1, ..., N,  0 < μ ≤ 4    (5)
When μ = 4, the orbit {x_n, n = 0, 1, ..., N} is chaotic. It can be seen from Figure 2 that two orbit points close to each other in the initial state diverge after only 3 iterations. After several iterations, since the errors of these two orbits increase continuously, it is difficult to identify which state the system is in, and the Logistic map is totally in the chaotic state [10].
Figure 2. The logistic map when μ = 4
Randomly take an initial point x_0 ∈ (0, 1); the orbit of the Logistic map is chaotic in (0, 1). That is to say, regardless of how many of the first N iterative points are known, the position of the (N+1)th iterative point can't be predicted. In fact, for x_0 ∈ (0, 1), any point x in (0, 1) is the limit of some sub-orbit {x_(n_k)} of {x_n}, i.e. x_(n_k) → x.
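The sensitivity and divergence of the Logistic map in Formula (5) are easy to reproduce numerically; a minimal sketch (the two nearby starting values are arbitrary assumptions, not from the paper):

```python
def logistic_orbit(x0, n, mu=4.0):
    """Iterate the Logistic map of Formula (5): x_{n+1} = mu*x_n*(1 - x_n)."""
    orbit = [x0]
    for _ in range(n):
        orbit.append(mu * orbit[-1] * (1.0 - orbit[-1]))
    return orbit

a = logistic_orbit(0.300000, 30)
b = logistic_orbit(0.300001, 30)   # starts only 1e-6 away
sep = max(abs(x - y) for x, y in zip(a, b))   # orbits drift far apart
```

Both orbits stay inside (0, 1), yet their separation grows from 1e-6 to order one within a few dozen iterations, which is the strong sensitivity to the initial value described above.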
Although the chaotic dynamic system has a complicated "chaotic" state, observing the system from a statistical perspective shows that chaos has regularity and ergodicity in statistics. As indicated in Figure 3, perform 1500 iterations with x_0^1 = 0.41795 and x_0^2 = 0.81042 as the initial values and obtain two chaotic sequences {x_n^1} and {x_n^2}. A dot in the figure stands for a point in the space with coordinates (x_j^1, x_j^2).
Figure 3. Ergodicity of the logistic map
From the above figure, it is clear that the chaotic variable is sensitive to the initial value, as evidenced by the fact that the chaotic trajectories of these two close initial values in the state space are completely different. A chaotic variable can visit all the states in the state space without repetition according to its own "rule", and it appears as disordered as a random variable. When μ = 4, the chaotic iterative map of the Logistic formula moves unstably in the range [0,1] and, over a long-time dynamic motion, it reflects the characteristic of randomness [11],[12]. Although the chaotic variable looks random, once the initial value is determined its trajectory is also determined; the apparent randomness is the inherent regularity of chaotic motion. The specific steps of the chaos optimization algorithm are indicated in Figure 4:
Figure 4. Procedures of the chaos optimization algorithm
4. Adaptive Chaos Particle Swarm Optimization & Example Test
Because PSO uses randomly-generated particles, it may leave unsearched dead zones for multimodal functions. Since the chaos optimization algorithm has the excellent feature of ergodicity, this paper uses chaos optimization for initialization in the early phase of the particle swarm optimization and selects the initial particle population. The specific process includes two parts: the first part performs a global search with basic particle swarm optimization, while the second part implements a local search on the result of PSO by using chaos optimization.
4.1. Chaotic Local Search
Chaotic local search (CLS) is mainly intended to improve the search performance of the particles and prevent a particle from getting trapped in a local extremum. Take the Logistic map as an example. CLS first maps the particle variable into a chaotic variable and then transforms the chaotic variable back into a particle variable after the iteration. The basic formulas of these two transformations are as follows:

γ_i^k = (x_i^k − x_(i,min)) / (x_(i,max) − x_(i,min))    (6)
(Figure 4 shows the flow: a first carrier with a Logistic or Tent map produces the chaos sequence; the sequence is searched until the set iterations are reached or the error requirement is met; a small-perturbation search follows; the result is output when the termination condition of the algorithm is reached.)
x_i^(k+1) = x_(i,min) + γ_i^(k+1)·(x_(i,max) − x_(i,min))    (7)
Here, x_(i,min) and x_(i,max) represent the lower and upper bounds of the ith particle variable respectively. x_i^k is the decision variable (namely the individual extremum of the particle swarm) within the range [x_(i,min), x_(i,max)], while γ_i^k is the chaotic variable, with its value in the range (0, 1). The specific steps of CLS are as follows:
(1) Set k = 0 and transform x_i^k into γ_i^k through Formula (6).
(2) According to the current γ_i^k, determine the chaotic variable γ_i^(k+1) of the next iteration.
(3) According to Formula (7), transform γ_i^(k+1) into the decision variable x_i^(k+1).
(4) Evaluate the new x_i^(k+1).
(5) If the new solution is superior to the optimal solution found before the local search of the particle swarm, or the preset maximum number of iterations is reached, output the solution found by the chaotic local search; otherwise, set k = k + 1 and return to Step (2) [13],[14].
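Steps (1)–(5) can be sketched as follows (a one-variable illustration; the objective, bounds and iteration limit are hypothetical, and minimization is assumed):

```python
def chaotic_local_search(x, f, x_min, x_max, max_iter=50, mu=4.0):
    """CLS per Formulas (6)-(7): map x to a chaotic variable,
    iterate the Logistic map, map back, keep the best solution."""
    best_x, best_f = x, f(x)
    g = (x - x_min) / (x_max - x_min)           # Formula (6)
    for _ in range(max_iter):
        g = mu * g * (1.0 - g)                  # Logistic iteration
        cand = x_min + g * (x_max - x_min)      # Formula (7)
        if f(cand) < best_f:
            best_x, best_f = cand, f(cand)
    return best_x, best_f

# toy usage: refine x = 0.8 on f(x) = x^2 over [-1, 1]
bx, bf = chaotic_local_search(0.8, lambda t: t * t, -1.0, 1.0)
```

Because the chaotic variable sweeps (0, 1) ergodically, the candidates cover the whole interval rather than a neighborhood, which is what lets a trapped particle escape a local region.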
4.2. The Steps and Procedures of Adaptive Particle Swarm Optimization
By integrating the search processes of the two phases of chaos particle swarm optimization, the overall search steps of this algorithm are as follows:
(1) Set the size of the particle population N and the maximum number of iterations, and randomly initialize the position and speed of the particles within the feasible value range.
(2) Evaluate the fitness of every particle; set the objective fitness value of the first particle as the global optimal value, and take each particle's initial position as its own individual extremum value.
(3) Update the particle speed and position according to Formulas (2) and (3).
(4) Evaluate the particle fitness; compare each particle's fitness value with its previous value and take the superior objective function fitness value as the current individual extremum value; compare the current optimal particle fitness with the previous one and take the superior objective fitness value as the current global optimal value.
(5) Keep the best N/5 particles of the population.
(6) Update these particle positions by using chaotic local search and take the CLS results. If the termination standard is met, output the currently optimal solution.
(7) Narrow down the search space and randomly generate 4N/5 new particles in the narrowed search space.
(8) Form a new population from the particles updated through CLS and the 4N/5 new particles.
(9) Set k = k + 1 and return to Step (3).
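The loop of steps (1)–(9) might look as follows (a compact sketch assuming a sphere objective and the LDW weight of Formula (4); the search-space narrowing of steps (7) and (8) is omitted for brevity, and every parameter value is a hypothetical choice, not one from the paper):

```python
import random

def sphere(x):
    return sum(t * t for t in x)

def acpso(f, dim=5, n=20, n_iter=60, lo=-5.0, hi=5.0):
    """Sketch of the ACPSO loop: standard PSO updates plus a chaotic
    local search on the best N/5 particles each generation."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                        # individual extremums
    g = min(P, key=f)[:]                         # global best
    for k in range(n_iter):
        w = 0.9 - (0.9 - 0.4) * k / n_iter       # Formula (4)
        for i in range(n):
            for d in range(dim):                 # Formulas (3) and (2)
                V[i][d] = (w * V[i][d]
                           + 2.0 * random.random() * (P[i][d] - X[i][d])
                           + 2.0 * random.random() * (g[d] - X[i][d]))
                X[i][d] = max(lo, min(hi, X[i][d] + V[i][d]))
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
        # chaotic local search on the best fifth of the population
        elite = sorted(range(n), key=lambda i: f(P[i]))[: n // 5]
        for i in elite:
            gam = [(t - lo) / (hi - lo) for t in P[i]]    # Formula (6)
            for _ in range(20):
                gam = [4.0 * t * (1.0 - t) for t in gam]  # Logistic map
                cand = [lo + t * (hi - lo) for t in gam]  # Formula (7)
                if f(cand) < f(P[i]):
                    P[i] = cand
        g = min(P, key=f)[:]
    return g, f(g)

random.seed(1)
best, best_val = acpso(sphere)
```

The CLS pass touches only the elite fifth of the population, which matches step (5): the cheap PSO update explores, and the chaotic sweep refines the best candidates.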
4.3. Algorithm Testing & Result Analysis
This paper uses four classical testing functions in the experiments to test the performance of ACPSO, and it also compares the testing results of ACPSO with those of DE and PSO. In order to investigate the algorithms' scalability, every function used in the testing has 20 variable dimensions. In the experiments, every configuration is run 30 times and the mean optimal value is taken as the basis of performance comparison. The parameter selections in these experiments are: for PSO and ACPSO, w decreases from 0.8 to 0.3 with the evolutionary generations; the scaling factor and crossover factor of DE are 0.5 and 0.9 respectively.

(1) Function f1 (Griewank Function)
f1(x) = (1/4000)·Σ_(i=1)^n x_i² − Π_(i=1)^n cos(x_i/√i) + 1,  |x_i| ≤ 300

min f1(x*) = f1(0, 0, ..., 0) = 0
The Griewank function reaches its global minimal point when x_i = 0 and has local minimal points near x_i = ±kπ√i (i = 1, 2, ..., n; k = 1, 2, ...). The computation results of these algorithms are demonstrated in Table 1.
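For reference, the Griewank function itself is only a few lines (a sketch, not the authors' experimental code):

```python
import math

def griewank(x):
    """Griewank: (1/4000)*sum(x_i^2) - prod(cos(x_i/sqrt(i))) + 1."""
    s = sum(t * t for t in x) / 4000.0
    p = 1.0
    for i, t in enumerate(x, start=1):
        p *= math.cos(t / math.sqrt(i))
    return s - p + 1.0

print(griewank([0.0] * 20))  # → 0.0 at the global minimum
```

The product of cosines is what creates the dense grid of local minima that makes this function a standard multimodal benchmark.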
Table 1. The Mean Fitness Value for the Griewank Function

Algorithm            PSO     DE      ACPSO
Variable Dimension   n=20    n=20    n=20
Mean Optimal Value   3.36    4.52    0.81
When the variable dimension n = 20, the changing curve of the mean fitness value for the Griewank function over the iterations is as indicated in Figure 5.
Figure 5. Iterations on the Griewank function
(2) Function f2 (Rastrigin Function)

f2(x) = Σ_(i=1)^n [x_i² − 10·cos(2π·x_i) + 10],  |x_i| ≤ 5.12

min f2(x*) = f2(0, 0, ..., 0) = 0
As a multimodal function, the Rastrigin function reaches the global minimal point when x_i = 0. There are about 10n local minimal points within S = {x | −5.12 ≤ x_i ≤ 5.12, i = 1, 2, ..., n}. See the computation results of these algorithms in Table 2.
Table 2. The Mean Fitness Value for the Rastrigin Function

Algorithm            PSO      DE       ACPSO
Variable Dimension   n=20     n=20     n=20
Mean Optimal Value   25.36    100.62   3.98
When the variable dimension n = 20, the changing curve of the mean fitness value for the Rastrigin function over the iterations is as indicated in Figure 6.
Figure 6. Iterations on the Rastrigin function
(3) Function f3 (Ackley Function)

f3(x) = −20·exp(−0.2·√((1/n)·Σ_(i=1)^n x_i²)) − exp((1/n)·Σ_(i=1)^n cos(2π·x_i)) + 20 + e,  |x_i| ≤ 32

min f3(x*) = f3(0, 0, ..., 0) = 0
The Ackley function reaches the global minimal point when x_i = 0. The computation results of these algorithms are shown in Table 3.
Table 3. The Mean Fitness Value for the Ackley Function

Algorithm            PSO     DE      ACPSO
Variable Dimension   n=20    n=20    n=20
Mean Optimal Value   1.18    2.03    1.21
When the variable dimension n = 20, the changing curve of the mean fitness value for the Ackley function over the iterations is as indicated in Figure 7.
Figure 7. Iterations on the Ackley function
(4) Function f4 (Rosenbrock Function)

f4(x) = Σ_(i=1)^(n−1) [100·(x_(i+1) − x_i²)² + (x_i − 1)²],  |x_i| ≤ 50

min f4(x*) = f4(1, 1, ..., 1) = 0
Rosenbrock is a non-convex, pathological quadratic function; its minimal point is easy to locate, but it is very difficult to converge to the global minimum. The computation results of the several algorithms can be seen in Table 4.
Table 4. The Mean Fitness Value for the Rosenbrock Function

Algorithm            PSO     DE      ACPSO
Variable Dimension   n=20    n=20    n=20
Mean Optimal Value   3.14    4.21    3.5e-3
When the variable dimension n = 20, the changing curve of the mean fitness value for the Rosenbrock function over the iterations is as indicated in Figure 8.
Figure 8. Iterations on the Rosenbrock function
It can be seen from the above experimental tests that ACPSO is superior to PSO and DE in performance on the test functions. It not only has fast convergence speed, but also strong global search ability. In the four testing functions, the mean optimal fitness of ACPSO is the minimum; therefore it has higher accuracy and stability than PSO and DE, and its advantage is more obvious in the cases of higher dimensions and relatively sharp changes of the function value, where the effect of PSO falls significantly. From the mean experimental results, it is clear that the mean fitness and standard deviation of DE are large; therefore this algorithm has bigger fluctuations, and the higher the dimension, the more unstable it is. It can be seen from the above experimental results that ACPSO is a suitable tool to solve the global optimization problems of complicated functions. For high-dimensional functions with multiple extreme points, the global minimum at a given accuracy can be obtained at a lower computation cost.
5. Conclusion
This paper has analyzed the constitution and optimization principle of particle swarm optimization and has pointed out that basic particle swarm optimization can easily get trapped in a local optimal solution during the optimization iterations, and that its convergence speed is very slow in the late stage since it generates the particles representing variable values randomly. Therefore, this paper integrates the chaos optimization algorithm into particle swarm optimization and proposes adaptive chaos particle swarm optimization to solve objective optimization problems. This algorithm uses the chaos principle to enhance the diversity of variable values; computes the fitness value corresponding to every particle, namely the objective function value of the problem, through particle swarm optimization; selects variable values on which to perform chaos optimization according to their fitness, in order to help the variables jump out of local extremal regions; and self-adaptively adjusts its inertia weight coefficient according to the objective function value of the problem, so as to improve the global and local search capacities of the algorithm.
References
[1] Masoud JS, Ramin V, Mehrdad B, Hassan S, Aria A. Application of Particle Swarm Optimization in Chaos Synchronization in Noisy Environment in Presence of Unknown Parameter Uncertainty. Communications in Nonlinear Science and Numerical Simulation. 2012; 17(2): 742-753.
[2] Amir HG, Gun JY, Xin SY, Siamak T. Chaos-enhanced Accelerated Particle Swarm Optimization. Communications in Nonlinear Science and Numerical Simulation. 2013; 18(2): 327-340.
[3] Wei ZR, Jing C. A Chaos Particle Swarm Optimization Ranging Correction Location in Complex Environment. The Journal of China Universities of Posts and Telecommunications. 2012; 19(1): 1-5.
[4] Na D, Chun HW, Wai HI, Zeng QC, Kai LY. Chaotic Species Based Particle Swarm Optimization Algorithms and Its Application in PCB Components Detection. Expert Systems with Applications. 2012; 39(15): 12501-12511.
[5] Shijun Z, Ting J. A Novel Particle Swarm Optimization Trained Support Vector Machine for Automatic Sense-through-foliage Target Recognition System. Knowledge-Based Systems. 2014; 65: 50-59.
[6] MR Tanweer, S Suresh, N Sundararajan. Self Regulating Particle Swarm Optimization Algorithm. Information Sciences. 2015; 294(10): 182-202.
[7] Oguz ET, Mert ST, Mustafa TC. Chaotic Quantum Behaved Particle Swarm Optimization Algorithm for Solving Nonlinear System of Equations. Computers & Mathematics with Applications. 2014; 68(4): 508-530.
[8] Keiji T, Takeru Ibuki, Tetsuzo T. A Chaotic Particle Swarm Optimization Exploiting a Virtual Quartic Objective Function Based on the Personal and Global Best Solutions. Applied Mathematics and Computation. 2013; 219(17): 8991-9011.
[9] Yaoyao H, Qifa X, Shanlin Y, Li L. Reservoir Flood Control Operation Based on Chaotic Particle Swarm Optimization Algorithm. Applied Mathematical Modelling. 2014; 38(17): 4480-4492.
[10] Shen X, Liu J, Liu Y. A New Particle Swarm Optimization Solution to Power Flow Regulation for Unsolvable Case. Energy Procedia. 2012; 14: 775-781.
[11] Wei FG, San YL, Ling LH. Particle Swarm Optimization with Chaotic Opposition-based Population Initialization and Stochastic Search Technique. Communications in Nonlinear Science and Numerical Simulation. 2012; 17(11): 4316-4327.
[12] Huimin J, CK Kwong, Zengqiang C, YC Ysim. Chaos Particle Swarm Optimization and Fuzzy Modeling Approaches to Constrained Predictive Control. Expert Systems with Applications. 2012; 39(1): 194-201.
[13] Omid MN, Anvar B, Paria J. Dynamic Diversity Enhancement in Particle Swarm Optimization (DDEPSO) Algorithm for Preventing from Premature Convergence. Procedia Computer Science. 2013; 24: 54-65.
[14] Mehdi S, Hassan S, Aria A. Minimum Entropy Control of Chaos Via Online Particle Swarm Optimization Method. Applied Mathematical Modelling. 2012; 36(8): 3931-3940.