TELKOMNIKA, Vol. 11, No. 12, December 2013, pp. 7344~7350
e-ISSN: 2087-278X
Received June 27, 2013; Revised July 30, 2013; Accepted August 17, 2013
PSO Algorithm Based on Accumulation Effect and
Mutation
Ji Weidong, Zhang Jun
Harbin Normal University, Harbin, China
Corresponding author, e-mail: kingjwd@126.com, hsd_zj@163.com
Abstract
The Particle Swarm Optimization (PSO) algorithm is a recent swarm-intelligence optimization technique. Because of its simplicity, few parameters, and good performance, PSO has been widely used to solve various complex optimization problems. However, PSO suffers from premature and local convergence. We propose an improved particle swarm optimization based on an aggregation effect and a mutation operator: the algorithm determines whether aggregation occurs during the search, and if it does, a Gaussian mutation is applied to the global extremum, so that the algorithm overcomes the defect of particle swarm optimization falling into local optimal solutions. Testing the new algorithm on typical test functions shows that, compared with the standard genetic algorithm (SGA), it not only improves the ability of global optimization but also effectively avoids premature convergence.

Keywords: PSO, aggregation effect, mutation, premature convergence

Copyright © 2013 Universitas Ahmad Dahlan. All rights reserved.
1. Introduction
PSO was proposed by Dr. Eberhart and Dr. Kennedy in 1995 [1, 2]. It is a swarm-intelligence algorithm that guides the search through population, cooperation, and competition between the particles of the swarm: particles compare and share information between individuals. Its advantages are that the idea is simple and easy to understand, it involves few parameters, and it has fast convergence speed and strong global optimization capability. It is an evolutionary computation method based on swarm intelligence.

Particle Swarm Optimization found a wide range of applications and research soon after it was put forth [3-7]. However, because the PSO algorithm traces the location of the population by searching iteratively, all particles move near to the best position and slowly tend to become the same, seriously damaging the diversity of the group and producing a "mass effect": the convergence speed is reduced gradually, even to a standstill, making the algorithm converge in advance, i.e., prematurely. PSO is thus easy to trap in a local optimum, it is difficult for it to find the global optimum, and the search accuracy is limited. Many experts and scholars have studied it and proposed a number of improved particle swarm optimization algorithms [8-12]. Although some of the above algorithms improved PSO performance to varying degrees in global searching ability, convergence, and accuracy, the effect is still not fully satisfactory.

Since PSO is prone to the "cluster effect" in the late stage, this paper introduces a judgment mechanism for the cluster effect and a mutation operator into the particle swarm mechanism [13], to enhance particle diversity and improve the ability to escape local extreme points, so that particles can continue searching in other areas. This avoids the deficiency of falling easily into local values, enhances the opportunity of finding the global optimum, and improves the speed and accuracy of late convergence.
2. PSO Algorithm
PSO originates from the simulation of bird predation and is similar to the genetic algorithm. The PSO algorithm first initializes a group of random particles. Every particle is a possible solution of the optimization problem; it has its own position and velocity, and its objective function value is its fitness degree [1]. In each iteration, each particle memorizes and follows the best particle
currently, updating itself by tracking two "extremes": one is the optimal solution found by the particle itself, the individual extremum pbest; the other is the optimal solution found by the entire population, called the global extremum gbest. After finding these two optimal values, the particles update their velocity and position iteratively according to the following Equations (1) and (2). The motion of a particle in d-dimensional space can be described by a group of difference equations and constraints, as shown in Equations (1) and (2).
v_{id}^{t+1} = \omega v_{id}^{t} + c_1 r_1 (pbest_{id}^{t} - x_{id}^{t}) + c_2 r_2 (gbest_{d}^{t} - x_{id}^{t})    (1)

x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}    (2)
Here t is the number of iterations, \omega is the inertia factor, x_{id}^{t} is the current position vector of the particle, v_{id}^{t} is the particle velocity, pbest_{id}^{t} is the best position reached by particle i in dimension d, gbest_{d}^{t} is the best position reached by the population in dimension d, c_1 and c_2 are acceleration factors, r_1 and r_2 are two random numbers distributed uniformly between 0 and 1, and v_{id}^{t+1} indicates the new state of the particle. Formula (1) is made of three items: the first item is the "momentum" section; the second item is the "cognitive" section, considering the particle's own experience; the third item is the "society" section, the social interaction between the particles. Particles move constantly by searching with their own information pbest and the group information gbest until the stopping condition is satisfied.
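For illustration, the update in Equations (1) and (2) can be sketched in Python (a minimal sketch; the array shapes and default parameter values are illustrative, not prescribed by the paper):

```python
import numpy as np

def pso_update(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One PSO iteration: velocity update (Eq. 1) and position update (Eq. 2).

    x, v, pbest : arrays of shape (n_particles, n_dims)
    gbest       : array of shape (n_dims,)
    """
    n, d = x.shape
    r1 = np.random.rand(n, d)  # uniform in [0, 1], drawn per particle and dimension
    r2 = np.random.rand(n, d)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (1)
    x_new = x + v_new                                              # Eq. (2)
    return x_new, v_new
```

Each particle is pulled toward its own memory (cognitive term) and the swarm's memory (social term), with the inertia term w * v carrying over part of the previous velocity.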
3. PSO Based on Accumulation Effect and Mutation
3.1. The Mechanism for Determining when the "Cluster Effect" Occurs
Several definitions are given to support the determination.

Definition 1: The PSO group fitness variance can be defined as Equation (3).
\sigma^2 = \sum_{i=1}^{n} \left( \frac{f_i - f_{avg}}{f} \right)^2    (3)
Among them, f_i is the fitness degree of the i-th particle, f_{avg} is the current average fitness degree of the swarm, computed according to Equation (4), and f is used to limit the size of \sigma^2; its value can be defined as in Equation (5).

f_{avg} = \frac{1}{n} \sum_{i=1}^{n} f_i    (4)

f = \max\left\{ 1, \max_i |f_i - f_{avg}| \right\}    (5)
The fitness variance reflects the degree of convergence of the swarm of particles: the smaller \sigma^2 is, the greater the degree of aggregation of the PSO, and the more convergent the particle swarm is. On the contrary, the particles are still in the random search stage.
Definition 2: The particle maximum distance (MaxDist) is the maximum Euclidean distance between the particles' current positions and the position of the global optimum; it can be defined as in Equation (6):
MaxDist = \max_i \sqrt{ \sum_{d=1}^{m} (p_{gd} - x_{id})^2 }, \quad i = 1, 2, \ldots, n    (6)
Here n is the number of particles in the particle swarm, m is the dimension of the particle, p_{gd} is the best position found by the particle swarm, and x_{id} is the current search position of the i-th particle.
Definition 3: The average particle aggregation distance (MeanDist) is the average Euclidean distance of the particle swarm from the global optimum; it can be defined as in Equation (7):
MeanDist = \frac{1}{n} \sum_{i=1}^{n} \sqrt{ \sum_{d=1}^{m} (p_{gd} - x_{id})^2 }    (7)
Whether convergence is local or global, particle convergence produces a "gathering" phenomenon. In the iterative process, the "gathering" of particles is not a bad phenomenon in itself, since it can promote the optimization of the particles; what we need to address are the gatherings that do not meet the conditions: the early "gathering" phenomenon.
When a particle is moving at a rate equal to 0, the particles come together and find it very difficult to move; if p_{gd} is not the global optimal solution, the algorithm will fall into a local optimum. We can determine whether aggregation occurs by counting how many times the optimal position stays unchanged and by the change of the optimal location: aggregation happens if this count is greater than a threshold, or if the change in position is less than a certain threshold. One can also detect it by the fitness variance: the aggregation effect occurs when the variance is less than a set threshold. This paper bases the determination on \sigma^2, MaxDist, and MeanDist. The judgment mechanism for the aggregation effect is shown in Figure 1.

Figure 1. Judgment Mechanism of Aggregation Effect
When \sigma^2 tends to 0:
(1) If the maximum distance is less than the average aggregation distance, MaxDist < MeanDist, we believe the particles have reached global convergence.
(2) If the maximum distance is greater than the average aggregation distance, MaxDist > MeanDist, we believe the "cluster effect" has occurred and the swarm is falling into local convergence.
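The judgment mechanism above can be sketched in Python (a minimal sketch; the threshold value eps and the function names are illustrative assumptions):

```python
import numpy as np

def fitness_variance(fit):
    """Group fitness variance, Eqs. (3)-(5)."""
    f_avg = fit.mean()                              # Eq. (4)
    f = max(1.0, float(np.max(np.abs(fit - f_avg))))  # Eq. (5): normalization factor
    return float(np.sum(((fit - f_avg) / f) ** 2))  # Eq. (3)

def aggregation_state(x, p_g, fit, eps=1e-6):
    """Classify the swarm as 'searching', 'global', or 'local' (cluster effect)."""
    if fitness_variance(fit) > eps:                 # variance not near 0: still searching
        return "searching"
    dists = np.sqrt(((x - p_g) ** 2).sum(axis=1))   # Euclidean distances to the optimum
    max_dist = dists.max()                          # MaxDist, Eq. (6)
    mean_dist = dists.mean()                        # MeanDist, Eq. (7)
    return "global" if max_dist < mean_dist else "local"
```

The "local" outcome is the one that triggers the mutation operator of Section 3.2.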
3.2. Mutation Operator
When swarm aggregation occurs, the particles tend toward homogenization. The introduction of a mutation operator then increases the diversity of the PSO and enhances its ability to jump out of the local optimal answer, continuing the search in other areas, making it possible to find the global optimal solution [14]. This article introduces the Gaussian mutation into the particle
swarm optimization algorithm, adding a disturbance at the position of the received optimal value. The Gaussian mutation combines a strong ability to escape local extrema with good global searching capability and is easy to implement, so it becomes possible to find the global optimal solution. When the "premature" phenomenon appears, mutation is performed according to Equation (8).
P_{gd} = P_{gd} (1 + \eta)    (8)
Herein, \eta obeys the Gaussian distribution, \eta \sim Gauss(0, 1). Thus we obtain the particle swarm optimization algorithm GPSO.
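A minimal sketch of the mutation step of Equation (8) in Python (the function name is illustrative):

```python
import numpy as np

def gaussian_mutation(p_g, rng=None):
    """Disturb the global optimum, Eq. (8): P_gd = P_gd * (1 + eta), eta ~ N(0, 1)."""
    rng = rng or np.random.default_rng()
    eta = rng.standard_normal(p_g.shape)  # one Gaussian draw per dimension
    return p_g * (1.0 + eta)
```

The disturbance is multiplicative, so components far from zero are perturbed more strongly than components near zero.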
3.3. Basic Process of Particle Swarm Optimization with Mutation
Step 1: Initialize the position and speed of the PSO population, with V \in [-V_{max}, V_{max}] and X \in [-X_{max}, X_{max}]. Initialize the population parameters: set the swarm size N, set the dimension D, set the maximum number of iterations T_{max}, and set the initial iteration counter t = 0.
Step 2: Calculate the fitness value F_i of every particle; update the individual optimum P_i and the global optimum P_g.
Step 3: Update the particle velocity and position according to Formulas (1) and (2); adjust the inertia weight according to Formula (9) [15]:

\omega = \omega_{max} - (\omega_{max} - \omega_{min}) \cdot iter / iter_{max}    (9)

where iter_{max} is the maximum evolution algebra and iter is the current evolution algebra.
Step 4: Determine whether the algorithm has reached the maximum number of iterations T_{max}. If it has, turn to Step 8; otherwise, do Step 5.
Step 5: Determine whether particle swarm aggregation occurs: calculate the particle swarm fitness variance according to Formulas (3), (4), (5). If \sigma^2 tends to 0, aggregation occurs, so do Step 6; otherwise go to Step 3.
Step 6: According to Equations (6) and (7), calculate MaxDist and MeanDist. If MaxDist < MeanDist, turn to Step 8; if MaxDist > MeanDist, turn to Step 7.
Step 7: According to Formula (8), mutate the global optimum; turn to Step 3.
Step 8: Output P_g.
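The steps above can be sketched as one compact Python loop (a minimal sketch under assumed parameter values; the bound handling with np.clip and the aggregation threshold eps are illustrative choices, not specified by the paper):

```python
import numpy as np

def gpso(obj, dim=30, n=40, t_max=500, x_max=10.0, v_max=2.0,
         w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, eps=1e-6, seed=0):
    """GPSO sketch: PSO with aggregation detection and Gaussian mutation (Steps 1-8)."""
    rng = np.random.default_rng(seed)
    # Step 1: initialize positions and velocities within their bounds
    x = rng.uniform(-x_max, x_max, (n, dim))
    v = rng.uniform(-v_max, v_max, (n, dim))
    fit = np.apply_along_axis(obj, 1, x)
    pbest, pfit = x.copy(), fit.copy()                 # individual optima
    g = int(np.argmin(pfit))
    gbest, gfit = pbest[g].copy(), float(pfit[g])      # global optimum
    for t in range(t_max):                             # Step 4: iteration limit
        w = w_max - (w_max - w_min) * t / t_max        # Eq. (9): inertia weight
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = np.clip(w * v + c1 * r1 * (pbest - x)
                    + c2 * r2 * (gbest - x), -v_max, v_max)   # Eq. (1)
        x = np.clip(x + v, -x_max, x_max)                     # Eq. (2)
        # Step 2: evaluate fitness, update individual and global optima
        fit = np.apply_along_axis(obj, 1, x)
        better = fit < pfit
        pbest[better], pfit[better] = x[better], fit[better]
        g = int(np.argmin(pfit))
        if pfit[g] < gfit:
            gbest, gfit = pbest[g].copy(), float(pfit[g])
        # Step 5: aggregation test via fitness variance, Eqs. (3)-(5)
        f_avg = fit.mean()
        f_norm = max(1.0, float(np.max(np.abs(fit - f_avg))))
        if np.sum(((fit - f_avg) / f_norm) ** 2) < eps:
            # Step 6: compare MaxDist and MeanDist, Eqs. (6)-(7)
            d = np.sqrt(((x - gbest) ** 2).sum(axis=1))
            if d.max() < d.mean():
                break                                  # global convergence reached
            # Step 7: Gaussian mutation of the global optimum, Eq. (8)
            gbest = gbest * (1.0 + rng.standard_normal(dim))
            gfit = float(obj(gbest))                   # re-score the disturbed point
    return gbest, gfit                                 # Step 8
```

If the mutated optimum turns out worse, the individual optima are untouched, so a better pbest re-takes the global-optimum role on the next iteration.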
4. Experimental Study
In order to assess the effectiveness of the proposed method GPSO, this article compares GPSO with SGA, selecting four benchmark functions [16]:
(1) The Schwefel functions are shown in Equations (10) and (11) as follows:

f_1(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|    (10)

f_2(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2    (11)
(2) The Griewank function is shown in Equation (12) below:

f_3(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1    (12)
(3) The Ackley function is shown in Equation (13) below:

f_4(x) = -20 \exp\left( -0.2 \sqrt{ \frac{1}{n} \sum_{i=1}^{n} x_i^2 } \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(2 \pi x_i) \right) + 20 + \exp(1)    (13)
The Schwefel functions f_1(x) and f_2(x) are unimodal. The Griewank function f_3(x) cannot be separated by variable; it is a multimodal function. The Ackley function f_4(x) is a continuous, rotational, and inseparable multimodal test function, which mainly adjusts the exponential function through a cosine waveform. Its global optimal value falls on the edge; if the initial value of the algorithm falls on the edge, it is very easy to solve this kind of problem. Its topology is characterized as follows: since the outer region is dominated by an exponential function, it is very flat; because the cosine waveform adjusts the middle, an aperture or a summit appears there, and the surface turns out not to be flat. The multimodal functions have a large number of partial optimal points. All four functions are to be minimized. The function dimensions and feasible solution spaces are shown in Table 1.
Table 1. The Four Test Functions

Test function | Dimension | Feasible solution space
f_1(x), Eq. (10) | 30 | [-10, 10]^n
f_2(x), Eq. (11) | 30 | [-100, 100]^n
f_3(x), Eq. (12) | 30 | [-600, 600]^n
f_4(x), Eq. (13) | 30 | [-32, 32]^n
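The four benchmarks can be transcribed directly from Equations (10)-(13) (a straightforward NumPy transcription; all four have global minimum 0 at the origin):

```python
import numpy as np

def f1(x):  # Schwefel, Eq. (10): sum plus product of absolute values
    a = np.abs(x)
    return a.sum() + a.prod()

def f2(x):  # Schwefel, Eq. (11): squared prefix sums
    return (np.cumsum(x) ** 2).sum()

def f3(x):  # Griewank, Eq. (12)
    i = np.arange(1, x.size + 1)
    return (x ** 2).sum() / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1

def f4(x):  # Ackley, Eq. (13)
    n = x.size
    return (-20 * np.exp(-0.2 * np.sqrt((x ** 2).sum() / n))
            - np.exp(np.cos(2 * np.pi * x).sum() / n) + 20 + np.e)
```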
Experiment parameters. The GPSO algorithm parameters are set as follows: particle swarm size n = 40, dimension d = 30, c_1 = c_2 = 2, \omega_{max} = 0.9, \omega_{min} = 0.4, V_{max} = 2. The SGA algorithm parameters are set as follows: fitness-proportional selection operator, arithmetic crossover, and uniform mutation operator, with crossover probability p_c = 0.8 and mutation probability p_m = 0.02.
The GPSO algorithm and SGA were each tested on the four test functions 20 times, and the average fitness value and standard deviation were computed. Test results are shown in Table 2; the simulation curves of GPSO and SGA solving the four optimization functions are shown in Figures 2, 3, 4, and 5.
Table 2. Test Results

Test Function | GPSO: Average fitness value (standard deviation) | SGA: Average fitness value (standard deviation) | Global minimum
F1 | 3.9125E-115 (2.0841E-114) | 2.6624E-1 (6.3625E-2) | 0
F2 | 8.7445E-181 (0) | 671.3289 (122.4790) | 0
F3 | 2.4797E-14 (1.1206E-13) | 4.8423 (1.1184) | 0
F4 | 1.1685E-13 (2.1318E-13) | 5.9209 (6.4707E-1) | 0
Figure 2. Fitness Value Comparison Chart of F1 in 2 Algorithms
Figure 3. Fitness Value Comparison Chart of F2 in 2 Algorithms
Figure 4. Fitness Value Comparison Chart of F3 in 2 Algorithms
Figure 5. Fitness Value Comparison Chart of F4 in 2 Algorithms
The experimental results were analyzed and compared. From the two algorithms on the four test functions we can see that GPSO obtains a smaller answer than SGA on all four functions, indicating that the GPSO algorithm can search for a better solution and avoids premature particles falling into a local point. Through the analysis of the standard deviation, the GPSO algorithm is better than the SGA algorithm: it obtains a smaller standard deviation, so it has better stability. Therefore, the solving accuracy and robustness of GPSO are both better than those of the SGA algorithm. After analyzing the fitness value comparison charts, we see that although neither algorithm reaches the global optimal value within the set maximum number of iterations, the convergence accuracy of the GPSO algorithm is much better than that of the SGA algorithm: unlike the SGA algorithm, which falls into a local point prematurely, the results of the GPSO algorithm are very close to the target value. The simulation experiments show the validity of the GPSO algorithm for improving the quality of solutions.

(Figures 2-5 plot fitness value against number of generations for GPSO and SGA.)
5. Conclusion
In this paper, the PSO algorithm has been improved: we put the ideas of the aggregation effect and mutation into PSO and proposed an improved particle swarm optimization based on an accumulation effect and a mutation operator. Testing the standard genetic algorithm and the improved particle swarm on the complex Schwefel functions, Griewank function, and Ackley function, the results show that the improved PSO is better than the standard genetic algorithm in convergence speed and global search capability, avoiding falling into premature and local convergence.
Acknowledgement
This work is supported by the Youth project of the National Natural Science Fund (41001243), the Key Laboratory of intelligent education and information engineering project in Heilongjiang province, and the Heilongjiang province key discipline of computer application technology (081203).
References
[1] J Kennedy, R Eberhart. Particle Swarm Optimization. Proc. IEEE Int. Conf. Neural Networks. 1995; 1942-1948.
[2] YW Leung, YP Wang. An Orthogonal Genetic Algorithm with Quantization for Global Numerical Optimization. IEEE Transactions on Evolutionary Computation. 2001; 5(1): 41-53.
[3] Das Sharma K, Chatterjee A, Rakshit A. A Random Spatial lbest PSO-Based Hybrid Strategy for Designing Adaptive Fuzzy Controllers for a Class of Nonlinear Systems. IEEE Transactions on Instrumentation and Measurement. 2012; 61(6): 1605-1621.
[4] Chia-Nan Ko, Ying-Pin Chang, Chia-Ju Wu. A PSO Method with Nonlinear Time-Varying Evolution for Optimal Design of Harmonic Filters. IEEE Transactions on Power Systems. 2009; 24(1): 437-444.
[5] Vlachogiannis JG, Lee KY. Economic Load Dispatch: A Comparative Study on Heuristic Optimization Techniques with an Improved Coordinated Aggregation-Based PSO. IEEE Transactions on Power Systems. 2009; 24(2): 991-1001.
[6] Benzheng Wei, Zhimin Zhao, Xin Peng. Spatial Information Based Medical Image Registration using Mutual Information. Journal of Multimedia. 2009; 6(3): 236-243.
[7] Xiaohui Chen, Canfeng Gong, Jiangbo Min. A Node Localization Algorithm for Wireless Sensor Networks based on Particle Swarm Algorithm. Journal of Networks. 2012; 7(11): 1860-1867.
[8] Zhang Feizhou, Cao Xuejun, Yang Dongkai. Intelligent scheduling of public traffic vehicles based on a hybrid genetic algorithm. Tsinghua Science and Technology. 2008; 13(5): 625-631.
[9] Taejin Park, Kwang Ryel Ryu. A Dual-Population Genetic Algorithm for Adaptive Diversity Control. IEEE Transactions on Evolutionary Computation. 2010; 14(6): 865-884.
[10] Xiao Min Hu, Jun Zhang, Yan Yu, et al. Hybrid Genetic Algorithm Using a Forward Encoding Scheme for Lifetime Maximization of Wireless Sensor Networks. IEEE Transactions on Evolutionary Computation. 2010; 15(4): 766-781.
[11] Yi-Tung Kao, Erwie Zahara. A hybrid genetic algorithm and particle swarm optimization for multimodal functions. Applied Soft Computing. 2008; 8: 849-857.
[12] Shutao Li, Mingkui Tan, Tsang IW, et al. A Hybrid PSO-BFGS Strategy for Global Optimization of Multimodal Functions. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics. 2011; 41(4): 1003-1014.
[13] MG Epitropakis, DK Tasoulis, NG Pavlidis. Enhancing Differential Evolution Utilizing Proximity-Based Mutation Operators. IEEE Transactions on Evolutionary Computation. 2011; 15(1): 99-119.
[14] S Das, A Abraham, UK Chakraborty, A Konar. Differential Evolution Using a Neighborhood-Based Mutation Operator. IEEE Transactions on Evolutionary Computation. 2009; 13(3): 526-553.
[15] Y Shi, RC Eberhart. A Modified Particle Swarm Optimizer. Proceedings of IEEE International Conference on Evolutionary Computation. Anchorage, AK, USA. 1998; 69-73.
[16] YW Leung, YP Wang. An Orthogonal Genetic Algorithm with Quantization for Global Numerical Optimization. IEEE Transactions on Evolutionary Computation. 2001; 5(1): 41-53.