International Journal of Electrical and Computer Engineering (IJECE)
Vol. 6, No. 5, October 2016, pp. 2470~2477
ISSN: 2088-8708, DOI: 10.11591/ijece.v6i5.10899
Journal homepage: http://iaesjournal.com/online/index.php/IJECE
Adaptive Adjustment of PSO Coefficients Taking the Notion from the Bee Behavior in Collecting Nectar

Abbas Fadavi1, Karim Faez2, Zeinab Famili3
1 Department of Mechatronics, Science and Research Branch, Islamic Azad University, Semnan, Iran
2 Department of Electrical Engineering, Amirkabir University of Technology, Tehran, Iran
3 Adiban Higher Education Institute
Article Info

Article history:
Received Apr 16, 2016
Revised Jul 2, 2016
Accepted Jul 18, 2016

ABSTRACT

In particle swarm optimization, a set of particles moves towards the global optimum point according to their own experience and the experience of other particles. Parameters such as the particle rate, the particle's best experience, the best experience of all the particles, and the particle's current position are used to determine the next position of each particle. Certain relationships receive the input parameters and determine the next position of each particle. In this article, these relationships are assessed accurately and the amount of the effect of each input parameter is set separately. To set the coefficients adaptively, the notion is taken from the bee behavior in collecting nectar. This method was implemented in software and examined in the standard search environments. The obtained results indicate the efficiency of this method in increasing the rate of convergence of particles towards the global optimum.
Keyword:
Adaptive setting
Global optimum
Particle swarm optimization
Standard search environment
The rate of convergence of particles
Copyright © 2016 Institute of Advanced Engineering and Science. All rights reserved.
Corresponding Author:
Abbas Fadavi,
Department of Mechatronics, Science and Research Branch, Islamic Azad University, Semnan, Iran.
Email: abbas_fadavi@yahoo.com
1. INTRODUCTION
The Particle Swarm Optimization algorithm [1]-[2] is composed of a set of particles. The aim of all the particles is approaching the optimum response and reducing error. The error of each particle is the particle's distance to the response. Each particle can be a potential response. Each particle determines its future position by consulting with other particles and its own experiences. The position of each particle is a result of its experiences and other particles' experiences. For example, we consider a person as a smart particle and the purpose as buying a suitable automobile. The person pays attention to two factors in buying a suitable automobile: first, his last experiences of buying an automobile and second, consulting with other people and asking their opinion about their experiences of buying an automobile. The person, regarding his experiences and others' experiences in buying an automobile, selects his optimum automobile.
Figure 1 indicates how a hypothetical particle performs in the particle swarm optimization algorithm. The horizontal axis indicates the scope of the search space and the vertical axis indicates the amount of error according to the consistent function. As shown in Figure 1, there is a search space in which a particle tries to reach a global optimum. x(t) is the position of a particle at time t, v(t) is the rate of a particle at time t, pbest(t) is the best experience of a particle up to time t, and gbest(t) is the best experience of all the particles up to time t. In the PSO method, each particle tends to move towards its own best experience and the best experience of the other particles. pbest - x(t) is the distance of the particle to its best experience and gbest - x(t) is the distance of the particle to the best experience of the other particles. The rate v(t+1) is the resultant of the two
components pbest - x(t) and gbest - x(t). Based on these two components, the experience gained during time and the experiences exchanged with other particles, particle x can move towards the optimum point.
Equation (1) indicates the calculation method of the particle rate at time t+1.

v(t+1) = w*v(t) + c1*r1*(pbest(t) - x(t)) + c2*r2*(gbest(t) - x(t))    (1)
Figure 1. Performance of a hypothetical particle in the particle swarm optimization algorithm
Component v(t) is the rate of the particle at time t and the coefficient w specifies the impact factor of v(t) on v(t+1). Component pbest - x(t) is the distance of the particle to its best experience and c1 is the impact coefficient of this parameter on v(t+1). Component gbest - x(t) is the distance of the particle to the best experience of the other particles and c2 is the impact coefficient of this parameter on v(t+1).
Having the rate and the current position, we can specify the next step x(t+1) by Equation (2). r1 and r2 are two random coefficients whose amounts are between zero and one. These two coefficients are used to prevent the particles' involvement in the local optima.

x(t+1) = x(t) + v(t+1)    (2)
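The update in Equations (1) and (2) can be sketched in a few lines of Python; this is a minimal illustration, where the helper name `pso_step` and the default coefficient values are assumptions of the example only:

```python
import random

def pso_step(x, v, pbest, gbest, w=1.0, c1=2.0, c2=2.0):
    """One PSO update: Equation (1) for the rate, Equation (2) for the position."""
    r1, r2 = random.random(), random.random()  # random coefficients in [0, 1)
    v_next = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Equation (1)
    x_next = x + v_next                                             # Equation (2)
    return x_next, v_next
```

Repeatedly calling `pso_step` while refreshing `pbest` and `gbest` from the evaluated error is the core loop of the standard algorithm.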
1.1. Search space limitation
It is possible that the particles exit from the search space range while performing the algorithm. To alleviate this problem, Equation (3) is used.

x(t+1) = xmin,  if x(t+1) < xmin
x(t+1) = xmax,  if x(t+1) > xmax    (3)

Where xmin is the minimum of the search space and xmax is the maximum of the search space. Equation (3) limits the particle to the range of xmin and xmax.
1.2. Speed limit
Decreasing and increasing the particles' rate have a great influence on the time of finding the response in the PSO optimization algorithm. If the rate of a particle is low, it must take more steps to reach where the response is. If the rate is high, the particle moves towards the response by taking larger steps and approaches the response area faster. If the maximum rate is not limited, the particles become divergent and will be removed from the search space. For this reason, Equation (4) is used to limit the rate of each particle.

v(t+1) = vmax,   if v(t+1) > vmax
v(t+1) = -vmax,  if v(t+1) < -vmax    (4)
If v(t+1) calculated by Equation (1) exceeds the allowed amount, it will be limited by Equation (4).
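Equations (3) and (4) both amount to clamping a value to a fixed range; a small sketch, where the bound names `x_min`, `x_max` and `v_max` are illustrative:

```python
def clamp(value, low, high):
    """Limit a value to [low, high], as Equations (3) and (4) do."""
    return max(low, min(high, value))

# Equation (3): keep the particle inside the search space [x_min, x_max].
x_min, x_max = -5.0, 5.0
x_next = clamp(7.3, x_min, x_max)    # -> 5.0

# Equation (4): keep the rate inside [-v_max, v_max].
v_max = 2.0
v_next = clamp(-3.1, -v_max, v_max)  # -> -2.0
```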
1.3. Introduction of some of the proposed methods in PSO
SPSO [3]: in this method, a certain amount is devoted to w which reduces as time passes. In fact, it can be said that the particles initially move taking larger steps towards the area where the response is. Then, as time passes, the particles take smaller steps to be able to search more carefully.
SAPSO [4]: in neural networks, whenever one of the components has a good response it will be encouraged, but it will be punished in case of an unsuitable response. In this method, all the particles are examined in each iteration. If the pbest of a particle is not improved compared with its last value, that particle will be punished. Here, the weights of the particle will change.
DNSPSO [5]: in this PSO optimization algorithm, each particle pays attention to its own best experience, all the other particles' best experience, and its neighbors' best experience.
TCPSO [6]: in this method, PSO is composed of a Slave PSO and a Master PSO. These two PSOs cooperate with each other to reach the optimum response. pbest in the Slave PSO does not mean the best experience of the particle, but is defined as the best experience of the particle and its neighbors. The Master PSO uses the best experience of the Slave PSO as well as its own best experience and the other particles' best experience.
PTPSO [7]: materials occur in the three phases of gas, liquid and solid. Gas molecules have the highest movement rate while solid molecules have the least. In this method, the notion of the movement rate of molecules is used. Each particle has one of the material forms and moves according to the formula relevant to the same material type. Particles change their phase based on various conditions. Particles move at different rates when conditions vary so that they can reach the optimum response.
Adaptive PSO [8]: this method is the same as standard PSO, but the only difference is that the amount of w is selected adaptively. The equation determining w is chosen in a way that PSO reduces the amount of w through finding the best gbest. This causes the particles initially to move by taking larger steps; then, by finding a better gbest, they look for the response by taking smaller steps.
RPSO [9]: in this method, a parameter called abest is used instead of gbest to position the particles. The amount of the particles' pbest is examined and the better one is specified as the particle leader or best agent. Each particle moves towards the global optimum according to its own best experience and the best position of the agent.
MPSO [10]: in this method, there are four different equations for determining the position of the particle. In each attempt, each particle uses one of the equations randomly. Using different equations reduces the possibility of particles being involved in the local optima.
In the first section of this article, standard PSO was examined completely, after which some of the proposed methods were discussed. In the second section, the offered method will be explained. Finally, in the third section, the results of performing the offered method and the other proposed methods will be compared.
2. DESCRIPTION OF THE OFFERED METHOD
Equations (1) and (2) specify the position of a particle in PSO. These two equations can be combined and written as follows:

x(t+1) = x(t) + w*v(t) + c1*r1*(pbest(t) - x(t)) + c2*r2*(gbest(t) - x(t))    (5)
To simplify the discussion, r1 and r2 are ignored and Equation (5) is written as Equation (6).

x(t+1) = x(t) + w*v(t) + c1*(pbest(t) - x(t)) + c2*(gbest(t) - x(t))    (6)
According to Equation (2), we can write Equation (7).

v(t) = x(t) - x(t-1)    (7)
Then by combining (6) and (7), we can write

x(t+1) = x(t) + w*(x(t) - x(t-1)) + c1*(pbest(t) - x(t)) + c2*(gbest(t) - x(t))    (8)
By simplifying Equation (8), we can write

x(t+1) = (1 + w - c1 - c2)*x(t) - w*x(t-1) + c1*pbest(t) + c2*gbest(t)    (9)
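The simplification can be verified numerically: substituting Equation (7) into Equation (6) must give the same x(t+1) as Equation (9). A quick check with arbitrary values:

```python
def x_next_eq6(x_t, v_t, pbest, gbest, w, c1, c2):
    # Equation (6): x(t+1) = x(t) + w*v(t) + c1*(pbest - x(t)) + c2*(gbest - x(t))
    return x_t + w * v_t + c1 * (pbest - x_t) + c2 * (gbest - x_t)

def x_next_eq9(x_t, x_prev, pbest, gbest, w, c1, c2):
    # Equation (9): x(t+1) = (1 + w - c1 - c2)*x(t) - w*x(t-1) + c1*pbest + c2*gbest
    return (1 + w - c1 - c2) * x_t - w * x_prev + c1 * pbest + c2 * gbest

x_prev, x_t = 0.3, 1.1          # arbitrary positions at t-1 and t
v_t = x_t - x_prev              # Equation (7)
a = x_next_eq6(x_t, v_t, 2.0, 4.0, w=1.0, c1=2.0, c2=2.0)
b = x_next_eq9(x_t, x_prev, 2.0, 4.0, w=1.0, c1=2.0, c2=2.0)
assert abs(a - b) < 1e-12       # both forms agree
```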
Equation (9) shows the four factors of x(t-1), x(t), pbest and gbest as the input, and the amount x(t+1) is calculated as the output. The amount of the parameters' impact is determined by the coefficients w, c1 and c2. For example, the impact of x(t) is determined by the amount 1+w-c1-c2.
2.1. New relation outline
The new relation is posed by the notion that it can receive the four factors of x(t-1), x(t), pbest and gbest as inputs, with the impact of each of the four determined by its own coefficient. Equation (10) is used to calculate x(t+1).

x(t+1) = (c1*x(t-1) + c2*x(t) + c3*pbest(t) + c4*gbest(t)) / (c1 + c2 + c3 + c4)    (10)
As is specified in Equation (10), four factors are used to determine the amount x(t+1). The impact of each one of these factors is determined by the coefficients c1, c2, c3 and c4: c1 is the amount of impact of x(t-1), c2 the amount of impact of x(t), c3 the amount of impact of pbest, and c4 the amount of impact of gbest.
In the method presented in this article, we tried to control the amount of impact of each factor by a separate coefficient. In this method, the position of the best experience of the particle becomes prominent instead of the distance to the best experience of the particles. The coefficients r1, r2, r3 and r4 are random numbers in the range of zero and one, used to prevent particles from falling in the local optima.
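Reading Equation (10) as a weighted combination of the four input factors, a short sketch follows; the normalization by the coefficient sum is an assumption of this illustration, used to keep the combined position in scale:

```python
def next_position(x_prev, x_t, pbest, gbest, c):
    """Weighted combination of the four factors of Equation (10).

    c is a dict of the per-particle coefficients c1..c4; each factor's
    share of the output is its coefficient divided by the coefficient sum
    (the normalization here is an illustrative assumption).
    """
    total = c["c1"] + c["c2"] + c["c3"] + c["c4"]
    return (c["c1"] * x_prev + c["c2"] * x_t
            + c["c3"] * pbest + c["c4"] * gbest) / total

# With equal coefficients every factor contributes equally:
coeffs = {"c1": 1.0, "c2": 1.0, "c3": 1.0, "c4": 1.0}
x = next_position(0.0, 1.0, 2.0, 5.0, coeffs)  # -> (0+1+2+5)/4 = 2.0
```

Raising one coefficient pulls the next position towards the corresponding factor, which is exactly the lever the adaptive rules of Section 2.3 operate on.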
2.2. Determining the coefficients c1, c2, c3 and c4
Each one of the particles has its own specific coefficients. This enables PSO to regulate the amount of impact of the parameters of each particle according to the conditions of that same particle. Each of the four coefficients that has a greater amount gives a greater impact to the parameter relevant to it.
In this study, we applied the notion of the bees' method of collecting nectar in groups [11]-[14]; bees perform a group of operations to collect nectar. We call those bees collecting nectar the working bees. After a period of time, the nectar available in the garden will reduce and, as such, a number of working bees turn into searching bees. The searching bees are obliged to leave the garden and enter a new garden. Whenever one of the searching bees finds a new garden, it refers to the other bees and gives the address to the rest of the bees. The more nectar is available in the garden, the more working bees there will be. The lesser the nectar in the garden, the more searching bees there will be.
As we know, pbest is the best experience of the particle. In some tries the response gained by pbest is better; therefore, the amount of pbest varies. This indicates that the considered particle could find a better response. If the number of particles whose pbest has been optimized increases, more particles will find a better response. Here we consider each particle as a bee. We consider the percentage of the number of particles that optimize their pbest in a try as the nectar.
At the start of the algorithm, all the particles of PSO were considered as working particles. These particles search for the optimum point according to their own experiences and those of other particles. If the percentage of the particles in each try that have not optimized their own pbest is not reduced below a certain amount, one of the particles varies randomly from a working particle to a searching particle.
Whenever one of the particles selected as a searching particle finds a position better than pbest, all the searching particles turn into working particles and move towards the new optimum point. In fact, each particle can be placed in the searching and working modes.
Why do all the particles turn into searching particles when the amount of particles whose pbest is not optimized becomes lower than a certain amount? There are two reasons why the percentage of the particles that have not optimized their pbest amount in each attempt is reduced below a certain amount:
1. The particles have approached the global optimum point and are finding the final response, taking smaller steps.
2. The particles are trapped in a local optimum by mistake.
It is possible that the particles are in the first case and it is not necessary to turn all the particles into searching particles. In fact, the working and searching particles are controlled based on various conditions. The condition is good and does not need to change when a large portion of the particles is being optimized. However, if a large portion of the particles is not being optimized, there must be a change in the general behavior of the PSO particles.
2.3. Description of the working mode
Here it is assumed that the optimum point is a minimum. The following points are implemented to determine the amount of the coefficients of each working particle.
1. All the particles are inexperienced at t=0 and no particle is superior to another particle. For this reason, all the coefficients are equal at t=0.
2. As the performance time of the algorithm is spent, the experience of the particles increases. If a particle is randomly placed in a proper position in the primary attempts, it is possible for it to be determined as gbest. This particle loses its position over time, because other particles find better responses by their movement. In fact, it is shown that it is not valuable for a particle to be gbest in the first moments. Over time, a particle cannot be gbest at random. For this reason, more attention should be paid to gbest over time. According to what was explained, with each iteration the amount of c4 increases by one unit.
3. Whenever f(x(t-1)) > f(x(t)), it indicates that the particle is probably moving in the right direction towards the response. Therefore, we increase c2 by one unit. In the opposite case, the direction of movement is probably not suitable, and the amount of c1 increases by one unit.
4. Whenever a particle is selected as gbest, this shows that it has a good experience. For this reason, this particle should pay more attention to its personal experiences. Therefore, whenever f(gbest(t)) = f(x(t)), the amount of c3 increases by one unit.
5. In [15] we presented a method for recognizing the particles that are trapped in the local optimum. The standard parameter is defined as gbest in PSO; a parameter called gworst is thus introduced in this study. The particle that has the worst efficiency function is known as gworst and will thus be rearranged. This means its position varies randomly to move to another point until it probably joins the total active particles. If a particle is selected as gworst, the amount of its next position will be selected randomly and all the coefficients c1, c2, c3 and c4 are equal to one until the particle starts to move from the new point.
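Points 1 to 5 can be collected into one per-iteration coefficient update; the sketch below assumes minimization (f is the consistent function), and the function and flag names are illustrative:

```python
def update_coefficients(p, f_x_prev, f_x_t, is_gbest, is_gworst):
    """Apply the working-mode rules to one particle's coefficients c1..c4.

    p is a dict holding c1..c4 (all start equal, per point 1); the flags
    say whether the particle currently holds gbest / gworst.
    """
    if is_gworst:
        # Point 5: the rearranged particle restarts with all coefficients at one.
        p["c1"] = p["c2"] = p["c3"] = p["c4"] = 1.0
        return p
    p["c4"] += 1.0          # point 2: gbest earns more attention each iteration
    if f_x_prev > f_x_t:
        p["c2"] += 1.0      # point 3: moving in a promising direction
    else:
        p["c1"] += 1.0      # point 3: direction probably unsuitable
    if is_gbest:
        p["c3"] += 1.0      # point 4: the gbest particle trusts its own experience
    return p

p = {"c1": 1.0, "c2": 1.0, "c3": 1.0, "c4": 1.0}   # point 1: equal at t=0
p = update_coefficients(p, f_x_prev=5.0, f_x_t=3.0, is_gbest=True, is_gworst=False)
# error decreased and the particle is gbest, so c2, c3 and c4 each grew by one
```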
2.4. Searching mode
If a particle varies from the working mode to the searching mode, we determine the amount x(t) as a random amount in the search space and consider v(t) as zero. The searching particles use Equation (11) to determine their position.

x(t+1) = x(t) + w*v(t) + c1*r1*(pbest(t) - x(t))    (11)
As is clear from Equation (11), the equation does not pay attention to the amount gbest and the particle moves independently, regardless of the experience of the other particles. This causes the particle to independently look for other responses in other points.
2.5. Using the random coefficients in combination
To prevent particles from being trapped in the local optimum, random coefficients are used in PSO. A combination of random coefficients has been used in previous studies [16]-[17]. The reports of these articles indicate better efficiency by combining random coefficients. In this study, we used a combination of random coefficients to increase efficiency. Therefore, Equation (10) changes into Equation (12).

x(t+1) = (c1*r1*x(t-1) + c2*r2*x(t) + c3*r3*pbest(t) + c4*r4*gbest(t)) / (c1*r1 + c2*r2 + c3*r3 + c4*r4)    (12)
3. RESULTS AND DISCUSSION
In this section, the results of the offered method are examined. The offered method is implemented on software that tests standard search environments [18]. Our aim is to present a new method so that it can reduce the amount of calculations. To examine the reduced calculations of the offered method, we implemented some different proposed PSO methods on software. These methods have been implemented in the search environments and compared with the obtained results of the offered method.
To compare the different PSO methods, we used the standard environments of Ackley, Griewank and Rastrigin. The algorithms offered in this study are compared with the algorithms of SAPSO and DNSPSO. In each performance, 10 different particles search for the optimum response. We assumed w=1, c1=2 and c2=2. Dimensions of the search environment are considered as 1, 10 and 100.
We illustrate the diagram of the performance time against the amount of the obtained consistent function in each algorithm. Each line of the diagram is the result of the average of 100 iterations of the algorithm in the definite environment. As is clear from Figures 2, 3 and 4, the offered algorithms show better results compared with the other methods and move towards the optimum response quickly. The efficiency function in the standard search environments makes us hopeful about the performance of the algorithm in the real environment.
The sum of the level below in every figure can be a suitable criterion for comparing two methods. For example, in Figure 4, the sum of the level below the DNSPSO method equals 390, whereas the sum of the level below in the offered method equals 80. This indicates that the offered method has better results compared with the DNSPSO method.
In this experiment, each method was performed 100 times and the figure of performance time against the consistent function of each method was obtained. The sum of the below level of the figures is shown as a measuring criterion of each method. Each method whose sum of the below level is lower could gain a faster response. Table 1 indicates the sum of the below level of the different methods in the different dimensions.
Figure 2. The results of the performance of different algorithms in the Ackley environment of dimension 1

Figure 3. The results of the performance of different algorithms in the Ackley environment of dimension 10

Figure 4. The results of the performance of different algorithms in the Ackley environment of dimension 100
Table 1. Sum of the below level of different methods with dimensions 1, 10, 100

Method        Rastrigin               Griewank                Ackley
              1      10     100       1      10     100       1     10    100
SAPSO         5      1321   20063     9      1175   18840     28    301   397
DNSPSO        7      3418   53717     12     1136   17308     55    368   476
Present PSO   0.6    651    11790     3      351    5507      18    72    81
As is shown in Table 1, the presented PSO shows a better result compared with the other methods. For example, the sum of the below level of the figure in the Ackley search environment of dimension 10 using the DNSPSO method equals 368, but this amount equals 72 in the presented PSO.
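The "sum of the below level" of a figure can be computed as the area under the performance-time versus consistent-function curve, for example with the trapezoidal rule; the two sample curves below are made up for illustration (a lower area means a faster response):

```python
def level_below(times, values):
    """Trapezoidal area under a convergence curve; lower means a faster response."""
    area = 0.0
    for i in range(1, len(times)):
        area += 0.5 * (values[i - 1] + values[i]) * (times[i] - times[i - 1])
    return area

t = [0, 1, 2, 3, 4]
slow = [10, 8, 6, 4, 2]    # hypothetical slowly converging method
fast = [10, 4, 2, 1, 1]    # hypothetical quickly converging method
assert level_below(t, fast) < level_below(t, slow)
```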
REFERENCES
[1] J. Kennedy and R. Eberhart, "Particle Swarm Optimization," Neural Networks, 1995. Proceedings, IEEE International Conference on, vol. 4, pp. 1942-1948, 1995.
[2] Y. Shi and R. Eberhart, "A modified particle swarm optimizer," Evolutionary Computation Proceedings, 1998. IEEE World Congress on Computational Intelligence, The 1998 IEEE International Conference on, pp. 69-73, 1998.
[3] Q. Li, et al., "Optimization study on resource equilibrium with fixed time limit for a project based on SPSO algorithm," Intelligent Information Technology Application Workshops, 2008 IITAW'08 International Symposium on, pp. 70-73, 2008.
[4] K. Yasuda and K. Yazawa, "Parameter self-adjusting strategy for Particle Swarm Optimization," Intelligent Systems Design and Applications (ISDA), 2011 11th International Conference on, pp. 265-270, 2011.
[5] H. Wang, et al., "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences, vol. 223, pp. 119-135, 2013.
[6] A. Afshar, et al., "Honey-bee mating optimization (HBMO) algorithm for optimal reservoir operation," Journal of the Franklin Institute, vol. 344, pp. 452-462, 2007.
[7] J. Ma, et al., "Phase transition particle swarm optimization," Evolutionary Computation (CEC), 2014 IEEE Congress on, pp. 2531-2538, 2014.
[8] D. Wu and H. Gao, "Research of an adaptive particle swarm optimization on Engine Optimization Problem," Intelligent Human-Machine Systems and Cybernetics (IHMSC), 2013 5th International Conference on, vol. 1, pp. 42-45, 2013.
[9] M. Anantathanavit and M. A. Munlin, "Radius particle swarm optimization," Computer Science and Engineering Conference (ICSEC), 2013 International, pp. 126-130, 2013.
[10] M. Pluhacek, et al., "Investigation on the Performance of a New Multiple Choice Strategy for PSO Algorithm in the task of Large Scale Optimization Problems," 2013 IEEE Congress on Evolutionary Computation, CEC 2013, 2013.
[11] D. Karaboga, "An idea based on honey bee swarm for numerical optimization," Technical report-tr06, Erciyes university, engineering faculty, computer engineering department, 2005.
[12] D. Karaboga and B. Akay, "A comparative study of artificial bee colony algorithm," Applied Mathematics and Computation, vol. 214, pp. 108-132, 2009.
[13] V. Nayak, et al., "Implementation of Artificial Bee Colony Algorithm," IAES International Journal of Artificial Intelligence (IJ-AI), vol. 1, pp. 112-120, 2012.
[14] B. Akay and D. Karaboga, "A modified artificial bee colony algorithm for real-parameter optimization," Information Sciences, vol. 192, pp. 120-142, 2012.
[15] A. Fadavi and K. Faez, "The Effect of Rearrangement of the Most Incompatible Particle on Increase of Convergence Speed of PSO," International Journal of Electrical and Computer Engineering (IJECE), vol. 3, pp. 238-245, 2013.
[16] M. A. Arasomwan and A. O. Adewumi, "An adaptive velocity particle swarm optimization for high-dimensional function optimization," Evolutionary Computation (CEC), 2013 IEEE Congress on, pp. 2352-2359, 2013.
[17] S. Sun and J. Li, "A two-swarm cooperative particle swarms optimization," Swarm and Evolutionary Computation, vol. 15, pp. 1-18, 2014.
[18] K. Deep, et al., "A new fine grained inertia weight Particle Swarm Optimization," Information and Communication Technologies (WICT), 2011 World Congress on, pp. 424-429, 2011.
BIOGRAPHIES OF AUTHORS

Abbas Fadavi was born in Sari, Iran in 1978. He received the B.S. degree in electronic engineering from Azad university of Garmsar, Garmsar, Iran, in 2005 and the M.Sc. degree in Mechatronics from Science and Research Branch, Islamic Azad University Semnan, Semnan, Iran in 2012. His research interests include Image Processing, Pattern Recognition, Algorithm Optimization, and Neural Networks.
Karim Faez was born in Semnan, Iran. He received his BSc. degree in Electrical Engineering from Tehran Polytechnic University as the first rank in June 1973, and his MSc. and Ph.D. degrees in Computer Science from University of California at Los Angeles (UCLA) in 1977 and 1980 respectively. Professor Faez was with Iran Telecommunication Research Center (1981-1983) before joining Amirkabir University of Technology (Tehran Polytechnic) in Iran in March 1983, where he holds the rank of Professor in the Electrical Engineering Department. He was the founder of the Computer Engineering Department of Amirkabir University in 1989 and he served as the first chairman during April 1989-Sept. 1992. Professor Faez was the chairman of the planning committee for Computer Engineering and Computer Science of the Ministry of Science, Research and Technology (during 1988-1996). His research interests are in Biometrics Recognition and authentication, Pattern Recognition, Image Processing, Neural Networks, Signal Processing, Farsi Handwritten Processing, Earthquake Signal Processing, Fault Tolerance System Design, Computer Networks, and Hardware Design. Dr. Faez coauthored a book in Logic Circuits published by Amirkabir University Press. He also coauthored a chapter in the book: Recent Advances in Simulated Evolution and Learning, Advances in Natural Computation, Vol. 2, Aug. 2004, World Scientific. He published about 300 articles in the above areas. He is a member of IEEE, IEICE, and ACM, a member of the Editorial Committee of the Journal of the Iranian Association of Electrical and Electronics Engineers, and of the International Journal of Communication Engineering.
Emails: kfaez@aut.ac.ir, kfaez@ieee.org, kfaez@m.ieice.org.
Zeinab Famili was born in Semnan, Iran in 1980. She received her B.Sc. degree in Electronic Engineering from Azad university of Garmsar, Garmsar, Iran, in 2005 and the M.Sc. degree in Electronics from Islamic Azad University Gazvin, Semnan, Iran in 2009. Her research interests include Image Processing and Neural Networks.
Email: z_electron590@yahoo.com