TELKOMNIKA Indonesian Journal of Electrical Engineering
Vol. 12, No. 10, October 2014, pp. 7404 ~ 7411
DOI: 10.11591/telkomnika.v12i8.5369
Received December 15, 2013; Revised June 25, 2014; Accepted July 22, 2014
Research on Grain Yield Prediction Method Based on Improved PSO-BP

Liguo Zhang*1, Jiangtao Liu2, Lifu Zhi3
1College of Information Science & Technology, Agricultural University of Hebei, Hebei Baoding, 071001, China
2College of Mechanical and Electronic Engineering, Agricultural University of Hebei, Hebei Baoding, 071001, China
3Department of Electric and Electronic Engineering, Shijiazhuang Vocational Technology Institute, Shijiazhuang, Hebei 050081, China
*Corresponding author, e-mail: Zhangliguo2006@126.com, Liujiangtao2003@126.com
Abstract
Aimed at the highly nonlinear and uncertain nature of grain yield changes, a new method for grain yield prediction based on improved PSO-BP is proposed. By introducing a mutation operation and adaptive adjustment of the inertia weight, the problems of PSO, namely easily falling into local optima, premature convergence, low precision and low efficiency in later iterations, are solved. By using the improved PSO to optimize the BP neural network's parameters, the learning rate and optimization capability of conventional BP are effectively improved. The simulation results of grain production prediction show that the prediction accuracy of the new method is significantly higher than that of the conventional BP neural network method, and that the method is effective and feasible.
Keywords: particle swarm optimization (PSO), mutation, adaptive adjustment, back-propagation neural network, grain yield prediction
Copyright © 2014 Institute of Advanced Engineering and Science. All rights reserved.
1. Introduction
The grain problem is a major issue for any country. Although in recent years there has been a pattern of grain oversupply, the grain security issue cannot be ignored. For a large agricultural country, protecting and maintaining the country's grain security is particularly important. Rationally analyzing and forecasting grain production capacity has important reference value for setting and achieving grain security objectives. Many scholars, at home and abroad, have carried out related research and constructed a number of valuable theoretical hypotheses and prediction models [1-6]. Ref [4] compared and analyzed the forecasting performance of stepwise regression, the back-propagation (BP) neural network and the GM(1,N) gray system. Using a nonlinear artificial neural network (BP) model, Ref [5] predicted corn production in China. Based on historical data, Ref [6] applied the gray system to grain yield prediction and put forward a gray relational analysis BP artificial neural network model for corn production prediction. It is generally agreed that many factors affect grain yield; the main ones are planted area, water, farming techniques, seeds, fertilizers, etc. However, the grain yield fluctuation trend shows high nonlinearity and uncertainty, which makes accurate prediction of grain yield difficult. Artificial neural network prediction methods can handle such nonlinear and uncertain problems well [7-10], but they also have shortcomings: model training is slow, time and space complexity is high, and the network easily falls into a local optimum.
Particle swarm optimization (PSO) is a population-based stochastic optimization technique developed by Dr. Eberhart and Dr. Kennedy in 1995. As a swarm-intelligence search algorithm, it guides the group search through cooperation within the population and competition between the particles. It has many merits, such as parallel global search, a simple and convenient model, few parameters to adjust, fast convergence and easy implementation [11]. Thus, using the PSO algorithm as a pre-search for the BP neural network can overcome the deficiencies of the BP algorithm. However, when there are many local optima, the standard PSO algorithm also falls easily into a local optimum. Many researchers have studied improvements to the PSO algorithm and achieved some success [12-16]. This paper proposes a
grain yield prediction method based on improved PSO-BP, and the prediction results show that the prediction model can effectively improve the prediction accuracy.
2. BP Neural Network
Artificial neural networks are powerful tools for the prediction of nonlinearities. These mathematical models comprise individual processing units called neurons that resemble neural activity. Each processing unit sums its weighted inputs and then applies a linear or nonlinear function to the resulting sum to determine the output. The neurons are arranged in layers and are combined through extensive connectivity. Among hierarchical feed-forward network architectures, the back-propagation network has received the most attention. Typically, a three-layer BP neural network (input layer, hidden layer and output layer) can realize the functional mapping from n independent variables to m dependent variables.
In the study of the BP neural network, the main features are the forward transformation of the input signal and the back-propagation of error. The network's weights and threshold values are adjusted according to the prediction error. The signal inputted from outside spreads to the output layer and gives the result after being processed layer by layer by the neurons of the input layer and the hidden layer. If the expected output cannot be obtained at the output layer, the process shifts to the reversed spreading, and the error between the true value and the network output is returned along the original coupled path. The error is reduced by modifying the connection weights of the neurons in every layer; the process then shifts back to the forward spreading and iterates until the error is smaller than the given value [4-5]. The topology of the BP neural network is shown in Figure 1.
Figure 1. The Topology of the BP Neural Network
Here, $X_1, X_2, \ldots, X_n$ are the input values of the neural network, $Y_1, Y_2, \ldots, Y_m$ are the predictive values, and $\omega_{ij}$ and $\omega_{jk}$ are the network's weights. Before use, the first task is to train the network. The training process includes the following steps.
Step 1: Initialize the network. According to the input and output of the actual system, determine the numbers of input layer nodes, hidden layer nodes and output layer nodes; initialize $\omega_{ij}$, $\omega_{jk}$ and the threshold values of both the hidden layer and the output layer; and set the learning rate and the neuron activation function.
Step 2: Calculate the hidden layer output based on formula (1).

$$H_j = f\Big(\sum_{i=1}^{n} \omega_{ij} x_i - a_j\Big), \quad j = 1, 2, \ldots, l \qquad (1)$$
Where $l$ is the number of nodes in the hidden layer, $a_j$ is the threshold value, and $f$ is the activation function of the hidden layer. In this paper, we select formula (2) as $f$.
$$f(x) = \frac{1}{1 + e^{-x}} \qquad (2)$$
Step 3: Calculate the output value of the output layer based on formula (3).

$$O_k = \sum_{j=1}^{l} H_j \omega_{jk} - b_k, \quad k = 1, 2, \ldots, m \qquad (3)$$
Where $b_k$ is the threshold value of the output layer node.
Step 4: Calculate the prediction error according to the network predicted output and the desired output.

$$e_k = Y_k - O_k, \quad k = 1, 2, \ldots, m \qquad (4)$$
Step 5: Update the connection weights by the prediction error $e_k$.

$$\omega_{ij} = \omega_{ij} + \eta H_j (1 - H_j) x_i \sum_{k=1}^{m} \omega_{jk} e_k, \quad i = 1, 2, \ldots, n; \; j = 1, 2, \ldots, l \qquad (5)$$

$$\omega_{jk} = \omega_{jk} + \eta H_j e_k, \quad j = 1, 2, \ldots, l; \; k = 1, 2, \ldots, m \qquad (6)$$
Here, $\eta$ is the learning rate.
Step 6: Update the threshold values based on formulas (7) and (8).

$$a_j = a_j + \eta H_j (1 - H_j) \sum_{k=1}^{m} \omega_{jk} e_k, \quad j = 1, 2, \ldots, l \qquad (7)$$

$$b_k = b_k + e_k, \quad k = 1, 2, \ldots, m \qquad (8)$$
Step 7: Determine whether the iteration ends; if not, return to Step 2.
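To make Steps 1-7 concrete, the following Python/NumPy sketch runs one training pass of the three-layer network as formulas (1)-(8) are written above. It is a minimal illustration only: the formulas themselves come from the text, while the array layout, function names and the default learning rate are assumptions of this sketch, not part of the paper.

```python
import numpy as np

def sigmoid(x):
    # Hidden-layer activation function, formula (2)
    return 1.0 / (1.0 + np.exp(-x))

def bp_train_epoch(X, Y, w_ih, w_ho, a, b, eta=0.1):
    """One pass over the training samples following formulas (1)-(8).
    X: (samples, n) inputs; Y: (samples, m) desired outputs (float arrays).
    w_ih: (n, l) input-hidden weights; w_ho: (l, m) hidden-output weights;
    a: (l,) hidden thresholds; b: (m,) output thresholds; eta: learning rate."""
    for x, y in zip(X, Y):
        H = sigmoid(x @ w_ih - a)            # formula (1): hidden layer output
        O = H @ w_ho - b                     # formula (3): output layer (linear)
        e = y - O                            # formula (4): prediction error
        grad_h = H * (1 - H) * (w_ho @ e)    # common factor of formulas (5) and (7)
        w_ih += eta * np.outer(x, grad_h)    # formula (5)
        w_ho += eta * np.outer(H, e)         # formula (6)
        a += eta * grad_h                    # formula (7)
        b += e                               # formula (8)
    return w_ih, w_ho, a, b
```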
3. PSO Algorithm and its Improvement
3.1. Standard PSO Algorithm
Given a Q-dimensional search space, consider a particle community composed of $n$ particles. The relevant parameters of the $i$-th particle are denoted as follows: the position vector is denoted by $x_i = (x_{i1}, x_{i2}, \ldots, x_{iQ})$, $i = 1, 2, \ldots, n$, and the flying speed is denoted by $v_i = (v_{i1}, v_{i2}, \ldots, v_{iQ})$. The best location found so far by the $i$-th particle is denoted by $p_i = (p_{i1}, p_{i2}, \ldots, p_{iQ})$ (namely $P_{best}$), and the best location found so far by the whole particle community is denoted by $p_g = (p_{g1}, p_{g2}, \ldots, p_{gQ})$ (namely $G_{best}$). Searching for the optimal solution in the Q-dimensional space amounts to searching for the particle in the best position.
According to three principles, namely maintaining its inertia, maintaining its own optimal position and maintaining the community optimal position, each particle updates its status at every moment. In every iteration, the particles update their velocity and position by formula (9).
$$\begin{cases} v_{id}^{k+1} = \omega v_{id}^{k} + c_1 r_1 \big(P_{best}^{k} - x_{id}^{k}\big) + c_2 r_2 \big(G_{best}^{k} - x_{id}^{k}\big) \\ v_{id}^{k+1} = v_{\max}, & \text{if } v_{id}^{k+1} > v_{\max} \\ v_{id}^{k+1} = -v_{\max}, & \text{if } v_{id}^{k+1} < -v_{\max} \\ x_{id}^{k+1} = x_{id}^{k} + \alpha v_{id}^{k+1} \end{cases} \qquad (9)$$
Here, $\omega$ denotes the inertia weight and is used to maintain the original rate coefficient; $c_1$ and $c_2$ denote the learning factor and the acceleration coefficient, respectively; $r_1$ and $r_2$ are uniformly distributed random numbers between 0 and 1; $\alpha$ is the constraint factor; and $[-v_{\max}, v_{\max}]$ is the velocity range for each dimension of the particle. The standard PSO algorithm flow is shown in Figure 2.
Figure 2. Standard PSO Algorithm Flow
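As a concrete reading of the reconstructed formula (9), the sketch below performs one velocity/position update for a whole population. Only the update rule itself comes from the text; the function name, array shapes and the default constraint factor are illustrative assumptions.

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w, c1, c2, v_max, alpha=1.0):
    """One velocity/position update for all particles, formula (9).
    x, v, p_best: arrays of shape (n_particles, Q); g_best: shape (Q,)."""
    n, Q = x.shape
    r1 = np.random.rand(n, Q)              # uniform random numbers in [0, 1]
    r2 = np.random.rand(n, Q)
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    v = np.clip(v, -v_max, v_max)          # keep velocities inside [-v_max, v_max]
    x = x + alpha * v                      # alpha is the constraint factor
    return x, v
```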
3.2. The Improvement of PSO
Since the standard PSO algorithm easily falls into a local optimum, this paper introduces a mutation operation into the PSO algorithm. The basic idea is to re-initialize a particle after each update with a certain probability. The adaptive mutation operation for the $i$-th particle is as follows:
$$x_{ij} = \begin{cases} x_{random}, & r \le P \\ x_{ij}, & r > P \end{cases} \qquad (10)$$
Here, $x_{ij}$ denotes the $j$-th component of particle $x_i$, $P$ denotes the mutation probability, $r$ is a uniformly distributed random number between 0 and 1, and $x_{random}$ denotes a random number between the maximum and minimum positions of the individual particle.
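The adaptive mutation of formula (10) can be read as re-sampling components of a particle inside its position range with probability P. A minimal sketch, assuming the position bounds x_min and x_max are known from the problem:

```python
import numpy as np

def mutate(x, P, x_min, x_max):
    """Adaptive mutation, formula (10): with probability P, replace a
    component of a particle with a random value inside its position range.
    x: (n_particles, Q) positions; x_min, x_max: position bounds."""
    n, Q = x.shape
    r = np.random.rand(n, Q)                       # uniform random in [0, 1]
    x_random = x_min + np.random.rand(n, Q) * (x_max - x_min)
    return np.where(r < P, x_random, x)            # mutate where r < P
```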
Research shows that a linearly decreasing inertia weight can better balance the global search ability and the local search ability. This paper adopts the following method to compute the inertia weight value:
$$\omega(k) = \omega_{start} - (\omega_{start} - \omega_{end}) \Big( \frac{2k}{T_{\max}} - \big(\tfrac{k}{T_{\max}}\big)^2 \Big) \qquad (11)$$
Here, $\omega_{start}$ denotes the initial inertia weight, $\omega_{end}$ denotes the inertia weight at the maximum iteration number, $k$ denotes the current iteration number, and $T_{\max}$ denotes the maximum iteration number.
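Formula (11) decreases the inertia weight from ω_start to ω_end over T_max iterations along a quadratic curve (the decrease is fastest at the beginning and flattens near the end). A direct transcription; the default values of ω_start and ω_end are common choices, not values stated in the paper:

```python
def inertia_weight(k, T_max, w_start=0.9, w_end=0.4):
    """Inertia weight schedule, formula (11)."""
    t = k / T_max
    return w_start - (w_start - w_end) * (2 * t - t ** 2)
```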
4. The Improved PSO-BP Network
The BP neural network learning process is the process of updating the connection weights and thresholds of the network. The purpose of using the PSO algorithm to optimize the BP neural network is to obtain better initial network weights and thresholds. The basic idea is to use the position of each individual particle in PSO to represent a complete set of initial network connection weights and thresholds, take the prediction error of the BP neural network initialized by that individual as the individual's fitness value, and find the best initial weights and thresholds through the particle swarm optimization.
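To make the encoding concrete, the sketch below shows one possible way to unpack a flat particle position into the network's weights and thresholds. The ordering of the segments is an assumption of this sketch, since the paper does not specify a layout.

```python
import numpy as np

def decode_particle(pos, n_in, n_hidden, n_out):
    """Unpack a flat particle position into BP parameters.
    Assumed layout: [input-hidden weights | hidden-output weights |
    hidden thresholds a | output thresholds b]."""
    i = 0
    w_ih = pos[i:i + n_in * n_hidden].reshape(n_in, n_hidden)
    i += n_in * n_hidden
    w_ho = pos[i:i + n_hidden * n_out].reshape(n_hidden, n_out)
    i += n_hidden * n_out
    a = pos[i:i + n_hidden]
    i += n_hidden
    b = pos[i:i + n_out]
    return w_ih, w_ho, a, b

# For example, a 6-5-1 network would need 6*5 + 5*1 + 5 + 1 = 41 components.
```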
The detailed algorithm can be summarized as follows:
1) Design and initialize the network, and normalize the samples.
2) Initialize PSO, including the population size, particle structure, location and speed.
3) Calculate the fitness value of each particle. This paper takes formula (12), built on the prediction error of formula (4), as the particle fitness function (a code sketch of this fitness evaluation is given after these steps).
$$fitness = \sum_{i=1}^{N} |y_i - d_i| \qquad (12)$$
Here, $N$ denotes the number of training samples, $d_i$ denotes the desired output of the $i$-th sample, and $y_i$ denotes the network computed value of the $i$-th sample.
4) According to the fitness value of each particle, update its personal best position $P_{best}$ and the global best position $G_{best}$.
5) According to formula (9), adjust the position and velocity of each particle.
6) According to formula (10), perform the adaptive mutation operation.
7) If the convergence criterion is met (the number of iterations is reached or the error is acceptable), stop the iteration; $G_{best}$ then provides the initial parameter values of the BP network, and the prediction model is formed through further learning and training by the BP algorithm. Otherwise, go to step 3 for the next iteration.
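Putting steps 1)-7) together, a compact sketch of the optimization loop might look as follows. It reuses the helper functions sketched earlier (pso_step, mutate, inertia_weight, decode_particle); the mutation probability, position/velocity bounds and other defaults are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fitness(pos, X, D, n_in, n_hidden, n_out):
    """Formula (12): sum of absolute errors of the network whose initial
    weights and thresholds are decoded from the particle position."""
    w_ih, w_ho, a, b = decode_particle(pos, n_in, n_hidden, n_out)
    H = 1.0 / (1.0 + np.exp(-(X @ w_ih - a)))     # formulas (1) and (2)
    Y = H @ w_ho - b                              # formula (3)
    return np.sum(np.abs(Y - D))

def improved_pso(X, D, n_in, n_hidden, n_out, n_particles=20, T_max=50,
                 c1=1.49445, c2=1.49445, P=0.1, bound=5.0, v_max=1.0):
    Q = n_in * n_hidden + n_hidden * n_out + n_hidden + n_out
    x = np.random.uniform(-bound, bound, (n_particles, Q))    # step 2: positions
    v = np.random.uniform(-v_max, v_max, (n_particles, Q))    #         and speeds
    fit = np.array([fitness(p, X, D, n_in, n_hidden, n_out) for p in x])  # step 3
    p_best, p_fit = x.copy(), fit.copy()
    g_best = x[np.argmin(fit)].copy()
    for k in range(T_max):
        w = inertia_weight(k, T_max)                              # formula (11)
        x, v = pso_step(x, v, p_best, g_best, w, c1, c2, v_max)   # step 5, formula (9)
        x = mutate(x, P, -bound, bound)                           # step 6, formula (10)
        fit = np.array([fitness(p, X, D, n_in, n_hidden, n_out) for p in x])
        better = fit < p_fit                                      # step 4: update bests
        p_best[better], p_fit[better] = x[better], fit[better]
        g_best = p_best[np.argmin(p_fit)].copy()
    return g_best    # step 7: G_best then seeds further BP training
```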
5. Grain Yield Prediction Based on Improved PSO-BP
According to previous studies and the China Statistical Yearbook, there are many factors affecting grain yield, including the effective irrigated area, the total number of people engaged in agricultural production, the grain sown area, the disaster area, the village hydropower generating capacity, the total agricultural mechanical power, the agricultural infrastructure investment, the consumption of fertilizer and other factors. In the proposed prediction model, the effective irrigated area (kilo hm²), the consumption of fertilizer (million tons), the disaster area (kilo hm²), the grain sown area (kilo hm²), the total agricultural mechanical power (MW), and the agricultural infrastructure investment (billion yuan) are taken as inputs, and the grain yield (million tons) is taken as the output.
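Because the six input factors and the yield output have very different ranges, step 1) of the algorithm normalizes the samples before training. A typical min-max normalization sketch; the paper only states that the samples are normalized, so the use of min-max scaling and the column order are assumptions:

```python
import numpy as np

# Assumed column order: effective irrigated area, fertilizer consumption,
# disaster area, grain sown area, total agricultural mechanical power,
# agricultural infrastructure investment; target column: grain yield.
def minmax_normalize(data):
    """Scale each column of a (samples, features) array into [0, 1]."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    return (data - lo) / (hi - lo), lo, hi

def denormalize(scaled, lo, hi):
    """Map normalized predictions back to the original units."""
    return scaled * (hi - lo) + lo
```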
Thus, the BP neural network structure is shown in Figure 3. The collected sample data from 1990 to 2001 are taken as the training sample data and the sample data from 2002 to 2007 as the testing sample data. In the test, the relevant parameters of the PSO algorithm are as follows: the number of iterations is 50, the population size is 20, c1 = 1.49445, c2 = 1.49445, and the length of each particle is 41. The best individual fitness curve of each generation during the improved PSO optimization process is shown in Figure 4. The training error curves of the standard BP network and the BP network optimized by the improved PSO are shown in Figure 5 and Figure 6, respectively. The obtained optimal initial weights and thresholds of the BP neural network are shown in Table 1. The curves of the predicted grain yield and the actual grain yield from 2002 to 2007 are shown in Figure 7. The prediction contrast between the proposed method and the other method is shown in Table 2.
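The reported particle length of 41 is consistent with a 6-5-1 network: Table 1 lists five hidden-layer thresholds, and with the six inputs and one output named above this gives 6*5 input-hidden weights, 5*1 hidden-output weights, 5 hidden thresholds and 1 output threshold, i.e. 41 parameters. The check below simply restates that arithmetic; the 6-5-1 topology is inferred from Table 1 rather than stated explicitly in the text.

```python
def particle_length(n_in, n_hidden, n_out):
    # Number of PSO-encoded parameters of a three-layer BP network.
    return n_in * n_hidden + n_hidden * n_out + n_hidden + n_out

assert particle_length(6, 5, 1) == 41   # matches the particle length used in the test
```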
Figure 3. BP Network Structure for Grain Yield Prediction
Figure 4. Best Individual Fitness Curve of Improved PSO
Figure 5. Training Error Curve of Standard BP Network
Figure 6. Training Error Curve of BP Network Optimized by Improved PSO
Figure 7. Predicted Yield and Real Yield
Table 1. Optimal Initial Weights and Thresholds
The initial weights between input layer nodes and hidden layer nodes: 1.1783, -3.5159, -1.2645, 0.6073, 0.9899, ..., -1.3646, 0.2707
The thresholds of hidden layer nodes: -1.7820, 1.6677, 0.6473, 1.1162, -1.6561
The initial weights between output layer nodes and hidden layer nodes: -0.2745, -1.9078, 0.1258, 1.6990, -1.1197
The thresholds of output layer nodes: -1.1587
Table 2. Prediction Contrast
Year | Real grain yield (ten million tons) | BP network model: predicted yield / absolute error / relative error (%) | Improved PSO-BP network model: predicted yield / absolute error / relative error (%)
2002 | 4.5705 | 4.4372 / 0.1333 / 2.9 | 4.699 / 0.1285 / 2.8
2003 | 4.3069 | 4.7892 / 0.4823 / 11.2 | 4.493 / 0.1861 / 4.3
2004 | 4.6946 | 4.5323 / 0.1623 / 3.5 | 4.58 / 0.1146 / 2.4
2005 | 4.8402 | 5.1229 / 0.2827 / 5.8 | 4.654 / 0.1862 / 3.8
2006 | 4.9804 | 5.2229 / 0.2425 / 4.9 | 4.75 / 0.2304 / 4.6
2007 | 5.0160 | 5.1229 / 0.1069 / 2.1 | 5.015 / 0.001 / 0.01
As can be seen from Figure 4, the best individual fitness obtained by the improved PSO-BP neural network method shows better optimization capability during evolution than that of the standard BP neural network. Under the same training accuracy, comparing Figure 5 and Figure 6 shows that the improved PSO method meets the convergence criterion (0.00001) at the 12th generation and is obviously superior to the conventional BP network (25th generation). By the data comparison in Table 2, the prediction accuracy of the improved PSO-BP method is superior to that of the conventional BP network method for the same statistical data: the maximum relative errors of the BP network method and the improved PSO-BP method are 11.2% and 4.6%, respectively. Also, for grain yield production, the maximum relative errors of the proposed method and the method of Ref [2] are 0.01% and 5.7%, respectively. From the comparison, it can be seen that the proposed grain yield prediction method is effective and feasible.
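The absolute and relative errors in Table 2 follow directly from the real and predicted yields; for example, for 2002 the improved PSO-BP absolute error is |4.699 - 4.5705| = 0.1285, i.e. 0.1285/4.5705 ≈ 2.8%. A small sketch of this check over the test years, using the improved PSO-BP column of Table 2:

```python
real      = [4.5705, 4.3069, 4.6946, 4.8402, 4.9804, 5.0160]   # 2002-2007, Table 2
predicted = [4.699,  4.493,  4.58,   4.654,  4.75,   5.015]    # improved PSO-BP column

for year, r, p in zip(range(2002, 2008), real, predicted):
    abs_err = abs(p - r)
    rel_err = 100 * abs_err / r
    print(f"{year}: absolute error {abs_err:.4f}, relative error {rel_err:.1f}%")
```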
6. Conclusion
As a complex agricultural and statistical issue, grain yield prediction is affected by many factors, and its historical data are limited, which makes accurate prediction difficult. This paper proposed an improved PSO-BP based grain yield prediction method, which optimizes the BP neural network parameters through the improved PSO, effectively improves the overall learning ability, and overcomes the problem of easily falling into a local optimum. The test results for the 2002-2007 grain yield show that the proposed method is significantly better than the BP neural network method and the Grey-Relational support vector machine based method, and has good application prospects.
References
[1] Ma Wenjie, Feng Zhongchao. China grain production factors analysis-Based on the empirical analysis on Panel data. Shanxi Journal of Agricultural Science. 2008; 1: 163-166.
[2] Nie Shaohua. Grain production prediction based on Grey-Relational support vector machine. Computer Simulation. 2012; 29: 220-223.
[3] Li Jinxia. A projection of Henan grain production based on gray system models. Journal of Henan University of Technology (Social Science Edition). 2009; 5: 1-3, 7.
[4] Su Bo, Liu Lu, Yang Fangting. Comparison and research of grain production forecasting with methods of GM(1,N) gray system and BPNN. Journal of China Agricultural University. 2006; 11: 99-104.
[5] Wu Yuming, Li Jianxia. Non-linear artificial neural network model and its application in corn production prediction. Journal of Henan Normal University (Natural Science). 2002; 30: 35-38.
[6] Chen Wei, Liu Guobi. Application of the gray system in grain yield prediction. Journal of Beijing Electronic Science and Technology Institute. 2008; 16: 62-64.
[7] Xiong Xin, Nie Mingxin. BP network principle and Matlab simulation. http://www.paper.edu.cn.
[8] BHM Sadeghi. A BP-neural network predictor model for plastic injection molding process. Journal of Materials Processing Technology. 2000; 103: 411-416.
[9] Hepsiba K Anga, PC Panchariya, AL Sharma. Authentication of Indian wines using voltammetric electronic tongue coupled with artificial neural networks. Sensors & Transducers: Soft Sensors and Artificial Neural Networks. 2012; 145: 65-76.
[10] M Lazri, F Ouallouche, S Ameur, etc. Identifying Convective and Stratiform Rain by Confronting SEVERI Sensor Multi-spectral Infrared to Radar Sensor Data Using Neural Network. Sensors & Transducers: Soft Sensors and Artificial Neural Networks. 2012; 145: 19-32.
[11] Kennedy J, Eberhart RC. Particle Swarm Optimization. Proceedings of the IEEE International Conference on Neural Networks. 1995: 1942-1948.
[12] Xue Ting. The synthesis and improvement of PSO. Dalian: Dalian Maritime University. 2008.
[13] Zhang Dingxue, Guan Zhihong, Liu Xinzhi. Adaptive particle swarm optimization algorithm with dynamically changing inertia weight. Control and Decision. 2008; 23: 1253-1257.
[14] Zhao Chengye, Yan Zhengbing, Liu Xinggao. Improved adaptive parameter particle swarm optimization algorithm. Journal of Zhejiang University (Engineering Science). 2011; 39: 1039-1042.
[15] Wu Peifeng, Gao Liqun, Zou Kaixuan, etc. An improved particle swarm optimization algorithm. Journal of Northeastern University (Natural Science). 2005; 45: 2099-2102.
[16] Yi Da, Ge Xiurun. An improved PSO based ANN with simulated annealing technique. Neurocomputing. 2005; 63: 527-533.