TELKOMNIKA, Vol. 12, No. 4, December 2014, pp. 997~1004
ISSN: 1693-6930, accredited A by DIKTI, Decree No: 58/DIKTI/Kep/2013
DOI: 10.12928/TELKOMNIKA.v12i4.533
Received August 28, 2014; Revised October 29, 2014; Accepted November 14, 2014
Application of Chaotic Particle Swarm Optimization in Wavelet Neural Network
Cuijie Zhao*1, Guozhen Wang2
1Pearl River College, Tianjin University of Finance and Economics, Tianjin, 301811, China
2Bohai Professional and Technical College, Tianjin, 300402, China
*Corresponding author, e-mail: 36872763@qq.com
Abstract
Optimizing the wavelet neural network with particle swarm optimization improves the convergence speed and accuracy to some extent; however, it does not solve the problems of falling into local extrema and poor global search ability. To address these problems, this paper puts forward an improved method based on particle swarm optimization: introducing a chaos mechanism into the algorithm to obtain chaotic particle swarm optimization. A series of comparative simulation experiments shows that applying this algorithm to optimize the wavelet neural network avoids falling into local extrema, improves the convergence speed of the network, reduces the output error, and improves the search ability of the algorithm. Overall, it substantially improves the performance of the wavelet neural network.
Keywords: chaotic particle swarm optimization, convergence speed, wavelet neural network
1. Introduction
Optimization theory and methods have existed since ancient times; a representative early example is the golden section method. Optimization mainly solves the problem of finding the best solution among many solutions. We can define optimization as: under certain restrictions, making a problem reach a best measurement, or finding a set of parameters that makes certain indicators reach their maximum or minimum. As an important branch of science, the optimization method is gaining more and more attention, and plays important roles in many fields, such as engineering technology, electrical engineering, and image processing.
However, in real-life applications, because many problems are complex and nonlinear, their target functions are often discrete and multi-valued; furthermore, modeling the problem itself is also very difficult. When applying traditional optimization methods such as Newton's method, dynamic programming, or branch and bound to these complex optimization problems, one usually needs to traverse the entire search space, which wastes a great deal of time and cannot meet practical requirements for the convergence of the problem and the speed of the optimization calculation. Therefore, the key task in the current field of optimization is to seek efficient optimization methods.
Particle swarm optimization (PSO) rapidly gained the attention of many international scholars in related fields after its advent. First, Kennedy J and Eberhart R. C. put forward the binary particle swarm optimization in 1997. Then, in 1998, in order to improve the convergence of the algorithm, Shi Y and Eberhart R. C. introduced the inertia weight parameter into the speed item of the PSO and proposed dynamically adjusting the inertia weight to balance the convergence speed during the process of evolution; this algorithm is called the standard PSO. They then put forward the linear decreasing inertia weight LDW-PSO; however, if the swarm deviates from the overall optimum solution in the initial state, the linear decreasing will continuously enhance the local search ability, which may end in local optima. Clerc et al. in 1999 put forward CF-PSO, introducing shrinkage factors into the evolution equation to ensure the convergence of the algorithm. In order to overcome the problem of premature convergence of LDW-PSO, they put forward the random inertia weight RandW-PSO, so that within a certain range of accuracy, multimodal functions can converge quickly. At present, improvements of particle swarm optimization mainly include: first, introducing a variety of mechanisms into the particle swarm optimization to study various improved PSOs;
second, combining the PSO with other intelligent optimization methods and studying a variety of mixed algorithms that complement each other and improve the performance of the algorithm [1].
This paper introduces the definitions and theories related to the wavelet neural network, as well as some frequently used training methods for wavelet neural networks. It also elaborates the principle, definition, and basic working process of particle swarm optimization, and explains in detail the improved method applied in this paper. It then introduces the basic idea and design approach of the method of optimizing the wavelet neural network with the particle swarm. It shows the feasibility and superiority of chaotic particle swarm optimization through comparative experiments, and proves the feasibility and superiority of the proposed improved algorithm by applying the chaotic particle swarm optimized wavelet neural network to simple target tracing.
2. Basic Particle Swarm Optimization
2.1. Basic Idea of the Basic Particle Swarm Optimization
The basic idea of the basic particle swarm optimization is: the potential solution of every optimization problem is a particle in the search space. Every particle has a fitness value determined by the optimized function, and has a speed vector determining its flying direction and distance. These particles then follow the current optimized particle in searching the solution space.
Particle swarm optimization is initialized with a swarm of random particles, and then finds the optimized solution through iteration. In every iteration, the particles update themselves by tracing two extrema. One is the optimized solution found by the particle itself so far, which is the individual optimized solution. The other is the optimized solution found by the whole particle swarm so far, which is the overall optimized solution. Obviously, the particle swarm optimization also relies on individual cooperation and competition to complete the search for the optimized solution in a complex space.
It is an evolutionary computation technique based on the swarm intelligence method. The particle swarm optimization conducts its search by each particle following the optimized particle. Therefore, it is simple and easy, and does not need to adjust many parameters [2].
Advantages of the PSO:
(a) No crossing and mutation operations; it depends on particle speed to complete the search, and has a high convergence speed;
(b) It addresses more than one particle in the particle swarm at the same time, simultaneously searching a certain area of the design space, so it has the nature of parallelism;
(c) It adopts real-number coding and solves the problem directly on the problem domain; there are fewer parameters to set, and they are easy to adjust, so the algorithm is simple and easy to implement.
Disadvantages of the PSO:
(a) Easy to fall into local extrema;
(b) Low search accuracy;
(c) The highly efficient information sharing mechanism might lead to over-concentration of particles when they are seeking the optimized solution, which makes all the particles move to a certain overall optimized point, so it cannot be applied to multimodal function optimization;
(d) When solving optimization problems with discrete variables, rounding the discrete variables may produce great errors;
(e) The algorithm theory is not perfect, especially lacking practical guidelines for specific practice.
The mathematical description is: each particle is considered as a point in the D-dimensional space. The location of the i-th particle is marked as $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$, the particle's individual extremum is marked as $P_i = (p_{i1}, p_{i2}, \ldots, p_{iD})$, the overall extremum's subscript is represented by "g", and particle i's speed is marked as $V_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$. Particles adjust their speed and location according to the following equations:

$v_{id}^{t+1} = v_{id}^{t} + c_1 r_1 (p_{id} - x_{id}^{t}) + c_2 r_2 (p_{gd} - x_{id}^{t})$   (1)
$x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}$   (2)

Among which, d = 1, 2, …, D and i = 1, 2, …, m, where "m" is the swarm scale. "t" is the current evolution generation. "c1" and "c2" are acceleration constants, which are positive. "r1" and "r2" are two random numbers within the range [0, 1]. Moreover, in order to control the particle speed, one can set a speed limit $V_{max}$; that is, in equation (1), when $v_{id} > V_{max}$, take $v_{id} = V_{max}$, and when $v_{id} < -V_{max}$, take $v_{id} = -V_{max}$. The first part of equation (1) is the previous speed item; the second part is the cognition part of the particle itself, that is, the impact of the particle's historical best location on its current location; the third part represents the information sharing and cooperation among particles [3],[4].
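Equations (1) and (2), together with the $V_{max}$ speed limit described above, can be sketched as a single update step. This is a minimal illustration, not the paper's code; the function name, parameter defaults, and test values are assumptions.

```python
import random

def pso_update(x, v, pbest, gbest, c1=2.0, c2=2.0, v_max=1.0):
    """One PSO step per equations (1)-(2): pull the velocity toward the
    individual extremum pbest and the overall extremum gbest, clamp it
    to [-v_max, v_max], then move the particle."""
    new_x, new_v = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        vd = max(-v_max, min(v_max, vd))   # speed limit: |v_id| <= V_max
        new_v.append(vd)
        new_x.append(x[d] + vd)            # equation (2)
    return new_x, new_v

x, v = pso_update([0.0, 0.0], [0.0, 0.0], pbest=[1.0, 1.0], gbest=[2.0, 2.0])
```

Because both extrema lie in the positive direction here, every clamped velocity component is non-negative and bounded by `v_max`.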
2.2. Basic PSO Procedure
(a) Randomly initialize the location and speed of the particle swarm; they are usually generated randomly within the allowed range. The pbest coordinate of each particle is set to its current location, and its corresponding individual extremum (i.e., the individual fitness value) is calculated. The overall extremum (i.e., the overall fitness value) is the best of the individual extrema. Mark the number of the particle with the best value as "g", and set gbest to the current location of that best particle.
(b) Calculate each particle's fitness value.
(c) Compare each particle's fitness value with its individual extremum; if better, update the current individual extremum.
(d) Compare each particle's fitness value with the overall extremum; if better, update the current overall extremum.
(e) Update each particle's location and speed according to equations (1) and (2).
(f) If the previously set termination standard (usually the largest number of iterations) has not been reached, return to step (b); if it has been reached, stop calculating [5].
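The procedure (a)-(f) above can be sketched as a minimal basic-PSO implementation. The sphere objective, search range, and parameter defaults are illustrative assumptions, not values from the paper.

```python
import random

def basic_pso(f, dim, n_particles=30, iters=300, c1=2.0, c2=2.0,
              lo=-5.0, hi=5.0, v_max=1.0):
    """Basic PSO: (a) random initialization, (b) fitness evaluation,
    (c)/(d) individual and overall extremum updates, (e) movement by
    equations (1)-(2), (f) stop at the iteration limit."""
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                       # (a) pbest = current location
    pfit = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    for _ in range(iters):                           # (f) iteration limit
        for i in range(n_particles):
            for d in range(dim):                     # (e) equations (1) and (2)
                r1, r2 = random.random(), random.random()
                v = vs[i][d] + c1*r1*(pbest[i][d]-xs[i][d]) + c2*r2*(gbest[d]-xs[i][d])
                vs[i][d] = max(-v_max, min(v_max, v))
                xs[i][d] += vs[i][d]
            fit = f(xs[i])                           # (b) fitness value
            if fit < pfit[i]:                        # (c) individual extremum
                pfit[i], pbest[i] = fit, xs[i][:]
                if fit < gfit:                       # (d) overall extremum
                    gfit, gbest = fit, xs[i][:]
    return gbest, gfit

best, val = basic_pso(lambda x: sum(xi * xi for xi in x), dim=2)
```

On the 2-D sphere function the returned fitness shrinks toward zero, since gbest only ever improves.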
3. Improvement of the Particle Swarm Optimization Based on Chaotic Mechanism
3.1. Idea of the Chaotic Particle Swarm Procedure
Strictly speaking, the chaos phenomenon refers to the internal random behavior produced by a completely deterministic system without any random factors. Chaotic optimization conducts its search mainly by making use of the ergodicity of chaotic motion, so as to avoid falling into local minima. Chaotic optimization possesses features like randomness, ergodicity, regularity, nonlinearity, and long-term behavior unpredictability. Track ergodicity means that the chaos sequence can go through all the states within a certain range without repetition. It is the fundamental starting point of function optimization through chaos.
Usually, the search process based on chaos dynamics is divided into two stages. First, the ergodic track generated by deterministic iteration inspects the entire solution space. When a certain termination condition is met, the best state discovered during the search is considered close to the optimal solution, and it is regarded as the search starting point of the second stage.
Second, taking the result gained in the first stage as the center, conduct further in-depth local search by adding slight perturbations, until the termination standard is met. The added slight perturbations can be chaos variables, random variables based on the Gaussian distribution, Cauchy distribution, or uniform distribution, etc., and can also be the offset value generated by calculation based on the gradient descent mechanism. Based on the above idea, a carrier-wave-like method is adopted to introduce the chaos variables generated by Logistic mapping into the optimized variables, in the meantime transferring the ergodic range of the chaos motion into the optimized variable domain, and then searching with the chaos variables [6],[7].
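The Logistic mapping and the carrier-style transfer of its ergodic range into the optimized variable's domain can be sketched as follows; μ = 4 gives the fully chaotic regime, and the function names, seed, and domain are illustrative assumptions.

```python
def logistic_sequence(z0=0.31, n=100, mu=4.0):
    """Generate a chaos variable sequence z_{k+1} = mu * z_k * (1 - z_k);
    for mu = 4 the track wanders ergodically over (0, 1)."""
    zs, z = [], z0
    for _ in range(n):
        z = mu * z * (1.0 - z)
        zs.append(z)
    return zs

def to_domain(z, lo, hi):
    """Carrier-style transfer of a chaos variable z in (0, 1) into the
    optimized variable domain [lo, hi]."""
    return lo + (hi - lo) * z

chaos = logistic_sequence()
points = [to_domain(z, -1.0, 1.0) for z in chaos]
```

The resulting points spread over the whole target interval without settling into a short repeating cycle, which is what chaotic optimization exploits.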
Figure 1. The periodicity of chaotic variables
Figure 2. The randomness of chaotic motion
3.2. Basic Procedures of the Improved Particle Swarm Optimization
Combining the two-stage search process with the chaotic particle swarm optimization, the overall search steps are as follows [8]:
(a) Set the particle swarm size as "N" and the maximum number of iterations, and randomly initialize the location and speed of the particles within the feasible range.
(b) Evaluate the fitness of each particle; set the particle whose fitness ranks first as the overall optimum; the initial location of each particle is the particle's individual extremum.

$V_i^{(k+1)} = V_i^{(k)} + c_1 \, rand() \, (P_{best_i} - X_i^{(k)}) + c_2 \, Rand() \, (G_{best} - X_i^{(k)})$   (3)

$X_i^{(k+1)} = X_i^{(k)} + V_i^{(k+1)}$   (4)

(c) Update the speed and location of the particles according to equations (3) and (4).
(d) Evaluate the fitness of each particle; compare it with its previous fitness, and update the individual extremum with the better fitness; compare the fitness of the current optimized particle with its previous fitness, and update the overall optimal value with the better fitness.
(e) Reserve the first N/5 particles of the swarm.
(f) Update the locations of these particles through chaos local search (CLS) and keep the CLS result. If the termination standard is met, output the current optimized solution.
(g) Narrow the search space, and randomly generate 4N/5 new particles in the narrowed search space.
(h) Constitute a new swarm from the particles updated through CLS and these 4N/5 new particles.
(i) Set k = k + 1 and return to step (c).
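The steps above can be sketched as follows. The chaos local search perturbs the retained best particles with Logistic-map chaos variables; the objective, search range, perturbation scale, and narrowing factor are illustrative assumptions, not the paper's experimental settings.

```python
import random

def cls(x, f, lo, hi, n_iter=20):
    """Chaos local search: perturb x with Logistic-map chaos variables
    and greedily keep the best location found."""
    best, best_fit = x[:], f(x)
    z = [random.uniform(0.01, 0.99) for _ in x]
    for _ in range(n_iter):
        z = [4.0 * zi * (1.0 - zi) for zi in z]              # Logistic mapping
        cand = [min(hi, max(lo, xi + 0.1 * (hi - lo) * (zi - 0.5)))
                for xi, zi in zip(best, z)]                   # slight perturbation
        fit = f(cand)
        if fit < best_fit:
            best, best_fit = cand, fit
    return best, best_fit

def chaotic_pso(f, dim, n=30, iters=100, lo=-5.0, hi=5.0):
    """Sketch of steps (a)-(i): keep the elite N/5, refine them with CLS,
    and regenerate 4N/5 new particles in a narrowed region."""
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    fits = [f(x) for x in xs]
    for _ in range(iters):
        order = sorted(range(n), key=lambda i: fits[i])
        for i in order[:n // 5]:              # (e)/(f) elite refined by CLS
            xs[i], fits[i] = cls(xs[i], f, lo, hi)
        g = order[0]                          # (g) narrow around the best particle
        r = 0.45 * (hi - lo)
        for i in order[n // 5:]:              # (h) regenerate 4N/5 new particles
            xs[i] = [min(hi, max(lo, xg + random.uniform(-r, r))) for xg in xs[g]]
            fits[i] = f(xs[i])
    g = min(range(n), key=lambda i: fits[i])
    return xs[g], fits[g]

best, val = chaotic_pso(lambda x: sum(xi * xi for xi in x), dim=2)
```

Because the elite's fitness only improves through CLS, the overall optimum decreases monotonically across iterations.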
4. Experiment Simulation and Related Applications
4.1. Experimental Model Construction and Data Analysis
In order to verify the effectiveness of the chaotic particle swarm optimization proposed by this paper in optimizing the wavelet neural network, this paper adopts simulation software to conduct simulation experiments. The experiments use the following function group:
$f(x) = \begin{cases} -2.186x - 1.286, & -1 \le x < -0.2 \\ 4.246x, & -0.2 \le x < 0 \\ e^{-0.5x-0.5} \sin[(3x+7)x], & 0 \le x \le 1 \end{cases}$   (5)
In the interval [-1, 1], generate 50 points with equal intervals and mark them as $x_k$, in which k = 1, 2, ..., 50. In this model, the CPSO optimized wavelet network and the basic PSO optimized wavelet network are respectively applied to train the functions. On the hidden layer, we choose the Morlet wavelet as the wavelet function, since the Morlet function possesses the features of continuous differentiability and good time-frequency localization, and moreover its expression is simpler. Then apply the chaotic particle swarm optimized wavelet network to train the functions, in which the number of particles is selected as "N = 50", the learning factors as "c1 = c2 = 2", the maximum and minimum inertia factors as "$\omega_{max} = 0.9$" and "$\omega_{min} = 0.6$", and the maximum number of iterations as "5000".
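The sampling scheme and hidden-layer wavelet described above can be sketched as follows. The Morlet form cos(1.75t)·exp(-t²/2) is the common choice in the literature; the paper does not print its exact expression, so it is an assumption here, as are the dilation/translation parameter names.

```python
import math

def morlet(t):
    """A common Morlet mother wavelet: cos(1.75 t) * exp(-t^2 / 2);
    continuously differentiable with good time-frequency localization."""
    return math.cos(1.75 * t) * math.exp(-t * t / 2.0)

def hidden_node(x, a, b):
    """A wavelet-network hidden node: the mother wavelet dilated by a
    and translated by b."""
    return morlet((x - b) / a)

# 50 equally spaced sample points on [-1, 1], marked x_k, k = 1..50
xs = [-1.0 + 2.0 * (k - 1) / 49 for k in range(1, 51)]
```

A node centered at b with dilation a responds most strongly near x = b, which is how the hidden layer localizes the piecewise features of the target function.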
Figure 3. CPSO optimized wavelet network output
Figure 4. Basic PSO optimized wavelet network output
The above are the output curves generated from the same function trained respectively by the CPSO optimized wavelet network and the BPSO optimized wavelet network. It can be seen that such wavelet neural networks possess fine fitting: the target value and the training output value are basically consistent, which avoids the risk of falling into local extrema. The following are the output error curves of the CPSO and BPSO optimized wavelet neural networks respectively. As to the CPSO, before the 2500th training iteration the error decreases quickly; after the 2500th training iteration the error is basically stabilized, the change is relatively small, and the error value approaches zero. It can be seen that at that moment the network has gradually begun to converge. As to the BPSO, it is not until after the 3500th training iteration that the error becomes stable and the network begins to converge; however, the error at that moment is still relatively large. Through the comparative experiments, it can be seen that the wavelet neural network based on CPSO is better than the wavelet neural network based on BPSO, for it accelerates the convergence speed, improves the error accuracy, and avoids falling into local extrema. Since the wavelet neural network possesses the feature of high-speed convergence, during the training process the optimal number of convergence iterations is within 3000, and the smaller the error accuracy the better; otherwise it will seriously affect the structure of the network, lead to a loose structure and weakened generalization ability, and even result in the "Butterfly Effect" of the network output.
Figure 5. CPSO optimized wavelet network error curve
Figure 6. Basic PSO optimized wavelet network error curve
In order to assure the reliability of the experiment data, this paper conducts several repeated experiments on this model; the experiment data are as follows.
Table 1. The data comparison of the two optimizations

Experiment   Training error of BPSO   Training error of CPSO
First        0.0835                   0.0475
Second       0.0087                   0.0073
Third        0.05354                  0.01267
5. Conclusion
This paper mainly introduces particle swarm optimization. It starts from the introduction of basic theories, gradually explores the parameter selection methods, and then elaborates the whole algorithm by analyzing its pros and cons and introducing the algorithm procedure. In order to verify the superiority of the algorithm proposed here, this paper adopts the chaotic particle swarm optimization and the basic particle swarm optimization to respectively calculate the minimum of two testing functions. It can be seen from the experiment data analysis that the chaotic particle swarm optimization can not only improve the error accuracy, but also accelerate the convergence speed and enhance the ability to avoid local extrema. By conducting comparative simulation experiments, respectively using the basic particle swarm optimized wavelet neural network and the chaotic particle swarm optimized wavelet neural network to train the functions, it is shown that the chaotic particle swarm optimized wavelet neural network possesses not only a higher convergence speed but also a smaller error, and is a feasible training method.
References
[1] Lin Wang, Bo Yang, et al. Improving particle swarm optimization using multi-layer searching strategy. Information Sciences. 2014; 274(1): 70-94.
[2] Guohua Wu, Dishan Qiu, et al. Superior solution guided particle swarm optimization combined with local search techniques. Expert Systems with Applications. 2014; 41(16): 7536-7548.
[3] Nabila Nouaouria, Mounir Boukadoum. Improved global-best particle swarm optimization algorithm with mixed-attribute data classification capability. Applied Soft Computing. 2013; 21: 554-567.
[4] Sarthak Chatterjee, Debdipta Goswami, et al. Behavioral analysis of the leader particle during stagnation in a particle swarm optimization algorithm. Information Sciences. 2014; 279(20): 18-36.
[5] Xinchao Zhao, Ziyang Liu, et al. A multi-swarm cooperative multistage perturbation guiding particle swarm optimizer. Applied Soft Computing. 2012; 22(9): 77-93.
[6] Xiang Yu, Xueqing Zhang. Enhanced comprehensive learning particle swarm optimization. Applied Mathematics and Computation. 2014; 242(1): 265-276.
[7] Jianli Ding, Jin Liu, et al. A particle swarm optimization using local stochastic search and enhancing diversity for continuous optimization. Neurocomputing. 2014; 137(5): 261-267.
[8] Amer Fahmy, Tarek M. Hassan, et al. Improving RCPSP solutions quality with Stacking Justification - Application with particle swarm optimization. Expert Systems with Applications. 2013; 41(13): 5870-5881.