TELKOMNIKA Indonesian Journal of Electrical Engineering
Vol. 12, No. 4, April 2014, pp. 3224 ~ 3229
ISSN: 2302-4046, DOI: http://dx.doi.org/10.11591/telkomnika.v12i4.4928
Received September 24, 2013; Revised November 21, 2013; Accepted December 10, 2013
The Combined Forecasting Model of Discrete Verhulst-BP Neural Network Based on Linear Time-Varying
Shang Hongchao¹, Long Xia*², He Tingjie³
¹School of Computer Science, Sichuan University of Science & Engineering, 643000 Zigong, China, Phone: 1808046657 8
²Lecturer, School of Computer Science, Sichuan University of Science & Engineering, 643000 Zigong, China, Phone: 13890058580
³Institute of Automation and Electronic Information, Sichuan Institute of Technology, 643000 Zigong, China, Phone: 18227735574
*Corresponding author, e-mail: 35262719@qq.com¹, longxia10-28@163.com², 731129344@qq.com³
Abstract
Aiming at the errors produced when the differential equation of the traditional gray Verhulst model is transformed directly into a difference equation, this paper first constructs the discrete Verhulst model based on linear time-varying (LTDVM model) by generating the reciprocal of the original data sequence. Then, taking the LTDVM predicted value as the input value and the original data as the mentor training value, it puts forward the combined forecasting model of discrete Verhulst-BP neural network based on linear time-varying. Meanwhile, in order to improve the training speed and agility and effectively avoid the saturation region of the S-type function, this article normalizes the input data and mentor training values in advance, to better ensure the usefulness, self-learning ability and fault tolerance of the model. Finally, case studies demonstrate that the model has high modeling and forecasting accuracy.
Keywords: discrete Verhulst model, linear time-varying, BP neural network, combination forecasting
Copyright © 2014 Institute of Advanced Engineering and Science. All rights reserved.
1. Introduction
Forecasting refers to measuring and calculating future things, on the basis of mastering existing information and in accordance with certain means and rules, in order to know in advance the development process and results of things. In actual forecasting, we generally use statistical methods or system identification methods to establish mathematical models for prediction from historical data variables. Although there are many existing predictive models, the time series model, which is based on simple regression analysis and the theory of probability and statistics, only gives good prediction results for data with linear variation and thus can hardly describe time series trends accurately. Therefore, nonlinear prediction models such as gray system theory and neural network models emerged. The gray model, based on limited information, can fit the overall trend of the time series and improve the prediction accuracy, but gray system theory cannot approximate a nonlinear function in spite of training and learning. The neural network model is able to solve this problem very well, especially the BP neural network, which possesses a strong nonlinear mapping ability. Through the efforts of a large number of scholars, the BP neural network now has a complete theoretical system and a clear algorithmic process, with strong analog and recognition abilities. However, with the deepening of BP neural network application research, some of its problems have come to light, such as long learning time, slow convergence and poor generalization ability, and these problems have a serious impact on the prediction accuracy of the BP neural network [1]. Thus, researchers began to focus on BP neural network improvement and combination forecasting.
Li Huanrong [2] (2000) proposed an approach to optimize the traditional neural network which can not only reduce the amount of sample input but also improve the convergence speed. Cao Jianhua et al. [3] (2008) built a combination forecasting model by determining the combined weights of the gray model and the neural network model according to their errors. Dai Yu (2010) [4] constructed a combined-weights forecasting model of the
RBF neural network method, the gray GM(1,1) method and the ARIMA method, but since the weights of the combined forecasting model were determined from error and other limited information, the effective complementary advantages of two or more models could not be formed. Therefore, there has been research on the effective integration of more than two prediction models. For example, Li Weiguo [5] (2007) used gray system theory to extract the trend item of a time series and applied the sample periodogram to fit periodic terms, finally creating a combination forecasting model. Shi Biao [6] (2009), by means of using the PSO algorithm to train the BP neural network, optimized the neural network parameters and improved the generalization ability of the neural network. Liu Rentao et al. [7] (2008) used a real-coded acceleration genetic algorithm to optimize the GM(1,1) parameters and, taking the improved GM(1,1) prediction value as input values and the original data as output values, combined it with the BP neural network, finally obtaining a higher prediction accuracy. Tong Xinan et al. [8] (2011), based on the Verhulst model and the BP neural network model, made a research on the combination of these two models and concluded that the combination model has good stability, while the modeling prediction results for oscillation sequences of "S" shape were not ideal.
This paper, firstly, according to the problems arising from the transformation of the differential equation of the traditional gray Verhulst model directly into a difference equation, and through generating the reciprocal of the original data sequence, establishes a gray model with no bias for "S"-type sequence simulation: the discrete Verhulst model based on linear time-varying (LTDVM model). Then, taking the LTDVM predicted value as the input value and the original data as the mentor training value, this paper proposes the combined forecasting model of discrete Verhulst-BP neural network based on linear time-varying. Meanwhile, in order to improve the training speed and agility and effectively avoid the saturation region of the S-type function, this article first normalizes the input data and mentor training values, to better make sure that the model has higher practicability, self-learning ability and fault tolerance.
2. The Discrete Verhulst-BP Neural Network Combination Forecast Model Based on Linear Time-varying
2.1. Discrete Verhulst Model Based on Linear Time-varying (LTDVM Model)
Definition 1. Take the observation value of a behavioral characteristic sequence of the system as $X^{(0)} = (x^{(0)}(1), x^{(0)}(2), \ldots, x^{(0)}(n))$. $Y^{(0)}$ is the reciprocal sequence of $X^{(0)}$, that is, $y^{(0)}(k) = 1/x^{(0)}(k)$, $k = 1, 2, \ldots, n$. $Y^{(1)}$ is the cumulative sequence of $Y^{(0)}$, that is, $y^{(1)}(k) = \sum_{i=1}^{k} y^{(0)}(i)$. Then

$y^{(1)}(k+1) = (\beta_1 k + \beta_2)\, y^{(1)}(k) + \beta_3 k + \beta_4, \quad k = 1, 2, \ldots, n-1$    (1)

is the discrete Verhulst model based on linear time-varying (LTDVM model).
The solving process of this model is as follows:
1) Use the least squares method to find the model parameters $\beta_1, \beta_2, \beta_3, \beta_4$: with $B$ the data matrix whose $k$-th row is $(k\,y^{(1)}(k),\; y^{(1)}(k),\; k,\; 1)$, $k = 1, 2, \ldots, n-1$, and $Y = (y^{(1)}(2), \ldots, y^{(1)}(n))^{T}$, the estimate is $\hat{\beta} = (\beta_1, \beta_2, \beta_3, \beta_4)^{T} = (B^{T}B)^{-1}B^{T}Y$.
2) Apply the recursion formula $\hat{y}^{(1)}(k+1) = (\beta_1 k + \beta_2)\, \hat{y}^{(1)}(k) + \beta_3 k + \beta_4$ to find the sequence $\hat{y}^{(1)}(k)$.
3) Use $\hat{x}^{(0)}(k) = 1/\hat{y}^{(0)}(k)$, where $\hat{y}^{(0)}(k) = \hat{y}^{(1)}(k) - \hat{y}^{(1)}(k-1)$, to find the simulation prediction value of the original sequence.
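The three solving steps can be sketched as a small routine (our illustration under the definitions above, not code from the paper; the function name and the use of NumPy's least-squares solver are our choices):

```python
import numpy as np

def ltdvm_fit_predict(x0, n_pred=0):
    """Fit the LTDVM model y1(k+1) = (b1*k + b2)*y1(k) + b3*k + b4 to a
    positive 'S'-shaped sequence x0 and return fitted/predicted values."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    y0 = 1.0 / x0                      # reciprocal sequence Y(0)
    y1 = np.cumsum(y0)                 # cumulative sequence Y(1)
    # Step 1: least squares for the parameters b1..b4
    k = np.arange(1, n)                # k = 1, ..., n-1
    B = np.column_stack([k * y1[:-1], y1[:-1], k, np.ones(n - 1)])
    Y = y1[1:]
    b1, b2, b3, b4 = np.linalg.lstsq(B, Y, rcond=None)[0]
    # Step 2: recursion to rebuild y1-hat (extend n_pred steps past the sample)
    y1_hat = [y1[0]]
    for kk in range(1, n + n_pred):
        y1_hat.append((b1 * kk + b2) * y1_hat[-1] + b3 * kk + b4)
    y1_hat = np.array(y1_hat)
    # Step 3: back out the original sequence: x0-hat(k) = 1/(y1(k) - y1(k-1))
    y0_hat = np.diff(y1_hat)
    x0_hat = np.concatenate([[x0[0]], 1.0 / y0_hat])
    return x0_hat, (b1, b2, b3, b4)
```

Applied to the cumulative cost data of Section 3, the fitted values track the raw sequence closely, which is what makes the LTDVM output usable as the BP network input later on.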
2.2. Improved BP Neural Network
The essence of prediction is the exploration of laws in seemingly chaotic historical data. The neural network model has these features: self-learning information processing, knowledge reasoning, and self-adaptation to non-deterministic rule systems. Through training on sample data it achieves some kind of mapping from input to output, and by this mapping the inherent law of the sample data can be discovered.
After a long period of development, artificial neural network research has achieved fruitful results, and currently the most widely used model is the BP neural network model.
In the application of the BP neural network to prediction, the primary task is to establish the BP neural network model, and during the modeling process, determining the number of network layers and the number of neurons in each layer is the key.
(1) Network layers
In the BP neural network model, the hidden layers determine the speed of model training, but in practice, adding hidden layers needs more training time, so generally only the structure containing the input layer, one hidden layer and the output layer is selected.
(2) Determination of hidden nodes
The selection of hidden layer nodes is also very important. If the number of neurons in the hidden layer is too small, the network performance is poor or the training cannot be completed. If the selected number of nodes is too large, the number of iterations may increase, the training time may be prolonged, the network fault tolerance may decrease, and the generalization capacity may diminish. All these issues can lead to deterioration of the model prediction. In order to select a reasonable number of hidden nodes, there is an empirical formula:

$i = \sqrt{m + n} + a$    (2)
Thereinto, $i$ stands for the number of hidden layer nodes, $m$ for the number of input nodes, $n$ for the number of output nodes, and the range of $a$ is 1-10.
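Since formula (2) yields a range rather than a single value, the candidate node counts it implies can be enumerated directly (an illustrative helper of ours, not from the paper):

```python
import math

def hidden_node_candidates(m, n):
    """Empirical range of hidden-layer node counts from formula (2):
    i = sqrt(m + n) + a, with a running over 1..10."""
    base = math.sqrt(m + n)
    return [round(base + a) for a in range(1, 11)]

# For the 9-8-1 network used in Section 3 (m = 9 inputs, n = 1 output),
# sqrt(10) is about 3.16, so the candidates run from 4 up to 13,
# and the paper's choice of 8 hidden nodes falls inside this range.
print(hidden_node_candidates(9, 1))
```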
(3) Data preprocessing
The input layer and hidden layer in the BP neural network apply the tansig function, which is an "S"-type transfer function whose range is [-1, 1] or [0, 1]. In order to improve the training speed and agility and effectively avoid the saturation region of the S-type function, the range of the input data is generally required to be between [-1, 1] or [0, 1]. This article first normalizes the input data and mentor training values to make their range [0, 1], then brings the processed data into the BP neural network to train, and finally anti-normalizes the estimated results to get the required data. Normalization formula [10]:
$T = 0.1 + 0.8 \times \dfrac{x - x_{\min}}{x_{\max} - x_{\min}}$    (3)

Thereinto, $T$ represents the normalized target data and $x$ is the original data. Anti-normalization formula:

$x = x_{\min} + \dfrac{(T - 0.1)(x_{\max} - x_{\min})}{0.8}$    (4)
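Formulas (3) and (4) form an exact round trip, which a minimal sketch makes easy to check (function names are ours):

```python
import numpy as np

def normalize(x):
    """Formula (3): map data into [0.1, 0.9], keeping it inside [0, 1]
    and away from the saturation region of the S-type transfer function."""
    x = np.asarray(x, dtype=float)
    xmin, xmax = x.min(), x.max()
    return 0.1 + 0.8 * (x - xmin) / (xmax - xmin), xmin, xmax

def denormalize(t, xmin, xmax):
    """Formula (4): invert the mapping to recover data on the original scale."""
    return xmin + (np.asarray(t, dtype=float) - 0.1) * (xmax - xmin) / 0.8
```

Note that the extremes of the data land on 0.1 and 0.9 rather than 0 and 1, which is the point of the 0.1/0.8 constants.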
2.3. The Discrete Verhulst-BP Neural Network Combination Forecast Model Based on Linear Time-varying
Integrating the gray system and the neural network into a gray neural network model can make them complement each other. With the gray prediction method, building a model requires a small amount of computation, and in the case of small samples this approach can achieve higher accuracy; the use
of the BP neural network contributes to building a model with high precision and error control. Therefore, integrating the two together can give full play to the advantages of both. At the same time, because the gray prediction model was constructed based on the linear time-varying discrete Verhulst model (LTDVM model), this model has no bias for data of the "S" type and can simulate and predict oscillatory data very well. Moreover, the BP neural network, further improved by data normalization, effectively increases the training speed and agility as well as avoiding the saturation of the S-type function, so that the combination forecasting model attains higher simulation and prediction accuracy.
3. Case Analyses
We select one of the application examples in the literature [9] and analyze and compare the cost sample of one torpedo in 1995-2003. Table 1 shows the raw data. According to the statistics, the cumulative cost of the torpedo approximates an "S"-type curve, suitable for the establishment of the new discrete Verhulst model (LTDVM model).
Table 1. A Type of Torpedo Development Cost (unit: million)
years  1995  1996  1997  1998  1999  2000  2001  2002  2003  2004
cost    496   779  1187  1025   488   255   157   110    87    79
Table 2. A Type of Torpedo Cumulative Development Cost (unit: million)
years  1995  1996  1997  1998  1999  2000  2001  2002  2003  2004
cost    496  1275  2462  3487  3975  4230  4387  4497  4584  4663
This type of torpedo's cumulative development cost is shown in Table 2. It can be seen from the table that the growth of the torpedo's cumulative development cost slows down and the data series presents an "S" shape. Therefore, according to the discrete Verhulst model (LTDVM model) based on linear time-varying, the data shown in Table 2 can be simulated and predicted.
Let $y^{(0)}(k) = 1/x^{(0)}(k)$, $k = 1, 2, \ldots, n$; thereinto, $y^{(1)}(k) = \sum_{i=1}^{k} y^{(0)}(i)$, $k = 1, 2, \ldots, n$. According to the definition and by the least squares method, find the model parameters $\beta_1, \beta_2, \beta_3, \beta_4$ and get the prediction expression:

$\hat{y}^{(1)}(k+1) = (-0.0076k + 0.3012)\, \hat{y}^{(1)}(k) + 0.0002k + 0.002, \quad k = 1, 2, \ldots, n-1.$

The results are shown in Table 3.
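With the fitted coefficients read as $\beta_1 = -0.0076$, $\beta_2 = 0.3012$, $\beta_3 = 0.0002$, $\beta_4 = 0.002$ (our reading of the printed expression, so the exact values should be treated as approximate), the recursion can be evaluated directly:

```python
# Evaluate the fitted LTDVM recursion on the cumulative torpedo cost (Table 2).
# Coefficient placement follows our reading of the printed prediction expression.
x0 = [496, 1275, 2462, 3487, 3975, 4230, 4387, 4497, 4584, 4663]
y1_hat = [1.0 / x0[0]]                 # y1-hat(1) = y0(1) = 1/x0(1)
for k in range(1, len(x0)):
    y1_hat.append((-0.0076 * k + 0.3012) * y1_hat[-1] + 0.0002 * k + 0.002)
# Recover the cumulative cost: x0-hat(k) = 1 / (y1-hat(k) - y1-hat(k-1))
x0_hat = [x0[0]] + [1.0 / (y1_hat[k] - y1_hat[k - 1])
                    for k in range(1, len(y1_hat))]
print([round(v, 1) for v in x0_hat])
```

The rounded four-digit coefficients do not reproduce the table values exactly, but the recursion stays within a couple of percent of the raw cumulative series.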
Table 3. LTDVM Model Simulation Results for a Type of Torpedo Cumulative Development Cost (unit: million)

                      Traditional Verhulst model [8]    Discrete Verhulst model based on linear time-varying (LTDVM model)
years  data           Analog value   Relative error     Analog value   Relative error
1995   496            —              —                  —              —
1996   1275           1119.1         0.123              1274.8908      0.0086
1997   2462           2116.0         0.1405             2465.0106      0.1223
1998   3487           3177.5         0.0888             3473.1565      0.3970
1999   3975           3913.7         0.0154             3983.2205      0.2068
2000   4230           4286.2         0.0133             4241.6250      0.2748
2001   4387           4444.8         0.0132             4387.4620      0.0105
2002   4497           4507.4         0.0023             4487.3902      0.2137
2003   4584           4531.3         0.0115             4575.9492      0.1756
2004   4663           4540.3         0.0263             4671.3441      0.1789
Average relative error               0.0482                            0.1765
Note: the relative errors in the table take the absolute value.
On this basis, we use the improved BP network model to simulate and predict. The specific approach is the establishment of a three-layer BP neural network, taking the fitted part of the discrete Verhulst model based on linear time-varying as the input of the neural network and the original data as the mentor for training the BP neural network.
The first step: network parameter setting
The network structure is selected as 9-8-1, the transfer function of the input layer and hidden layer neurons is chosen as tansig, the transfer function of the output layer neurons is purelin, and the training function is the adaptive learning-rate algorithm traingda. This kind of approach automatically modifies the learning rate during training, letting it always change within a suitable range to ensure system stability and the speed of network training (if the learning rate is too large, the stability of the network is reduced; on the contrary, the training time becomes longer). Set the maximum number of training epochs as 5000 steps and the expected error goal as 0.00001; the initial learning rate is generally selected between 0.01 and 0.1, and this paper selects lr = 0.05, with the learning rate increment lr_inc as 1.05.
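The training setup above can be sketched as a minimal three-layer BP network with a traingda-style adaptive learning rate (a rough analogue we wrote for illustration, not the paper's MATLAB code; the lr cap and lr_dec value are our assumptions):

```python
import numpy as np

def train_bp(P, T, hidden=8, epochs=5000, goal=1e-5,
             lr=0.05, lr_inc=1.05, lr_dec=0.7):
    """Minimal BP sketch: tanh hidden layer, linear output, gradient descent
    whose rate grows by lr_inc while the error falls and shrinks by lr_dec
    otherwise (loosely mimicking traingda's adaptive learning rate)."""
    rng = np.random.default_rng(0)
    P, T = np.atleast_2d(P), np.atleast_2d(T)   # shapes (n_in, m), (n_out, m)
    W1 = rng.normal(0, 0.5, (hidden, P.shape[0])); b1 = np.zeros((hidden, 1))
    W2 = rng.normal(0, 0.5, (T.shape[0], hidden)); b2 = np.zeros((T.shape[0], 1))
    prev_err, err, m = np.inf, np.inf, P.shape[1]
    for _ in range(epochs):
        H = np.tanh(W1 @ P + b1)                # hidden activations (tansig-like)
        Y = W2 @ H + b2                         # linear (purelin-like) output
        E = Y - T
        err = np.mean(E ** 2)
        if err < goal:
            break
        # adaptive learning rate, capped for stability (cap is our assumption)
        lr = min(lr * lr_inc, 0.5) if err < prev_err else lr * lr_dec
        prev_err = err
        # backpropagation of the mean-squared error
        dW2 = E @ H.T / m; db2 = E.mean(axis=1, keepdims=True)
        dH = (W2.T @ E) * (1 - H ** 2)
        dW1 = dH @ P.T / m; db1 = dH.mean(axis=1, keepdims=True)
        W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1
    predict = lambda X: W2 @ np.tanh(W1 @ np.atleast_2d(X) + b1) + b2
    return predict, err
```

In the paper's pipeline, P would be the normalized LTDVM fitted values and T the normalized original data; here the helper works for any small regression sample.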
The second step: data normalization
As described in Section 2.2, since the input layer and hidden layer apply the "S"-type tansig transfer function, this article first normalizes the input data and mentor training values into [0, 1] by formula (3), brings the processed data into the BP neural network to train, and finally anti-normalizes the estimated results by formula (4) to get the required data.
The third step: using the combination model to simulate and predict
Bring the normalized input data P and the mentor training value T into the BP network to train, and then obtain a predictive network. Using the trained neural network to predict on the input data for simulation, we obtain the desired predictive values and calculate the relative errors (see Table 4). Compared with the average relative error of 0.0918 for the model in the literature [8], the accuracy of this combination model is much higher.
Table 4. The Comparison Table of Model Calculation Results (unit: million)

                      Gray Verhulst-BP network          Based on linear time-varying discrete Verhulst-BP
                      combination model [8]             neural network combination prediction model
years  data           Analog value   Relative error     Analog value   Relative error
1995   496            496.3691       0.0007             496            0
1996   779            777.5959       0.0018             779.3          0.0004
1997   1187           1190.0080      0.0025             1187.2         0.0002
1998   1025           1018.1120      0.0067             1025           0
1999   488            505.1010       0.035              488.2          0.0004
2000   255            225.9330       0.114              255.1          0.0004
2001   157            173.0380       0.1022             157.7          0.0045
2002   110            128.7950       0.1709             100.8          0.0836
2003   87             81.1100        0.0677             83.5           0.0402
2004   79             46.0700        0.4168             91.9           0.1633
Average relative error               0.0918                            0.0293
Note: the relative errors in the table take the absolute value.
4. Conclusion
The gray Verhulst model and the BP neural network model each have their own shortcomings, and the establishment of a combination prediction model effectively plays to their respective advantages. Moreover, the respective improvements of the gray prediction model and the BP neural network enable this combination model to simulate and predict "S"-type data with higher accuracy.
Acknowledgements
This work was supported by the Basic Application Research Program of Sichuan Province (No. 2008JY0112), the Higher Education Personnel Training Quality and Teaching Reform Subject of Sichuan Province (No. P09264), the Department of Education Key Scientific Research Program of Sichuan Province (No. 2006A077), and the Foundation of Sichuan Province for Returned Scholars (Sichuan People Social Letter No. 32 in 2010).
References
[1] Liu Tianshu. The improvement research and application of BP neural network. Harbin: Northeast Agricultural University. 2011.
[2] Li Huanrong, Wang Shuming. An improved BP neural network prediction method and its application. Systems Engineering. 2000; 18(5): 75-78.
[3] Cao Jianhua, Liu Yuan, Dai Yue. Network traffic prediction based on grey neural network integrated model. Computer Engineering and Applications. 2008; 44(5): 155-157.
[4] Dai Yu. Optimal Combination Forecasting Model and Its Application. Economic Mathematics. 2010; 27(1): 92-98.
[5] Li Weiguo, Zhang Aiqing. The modeling method of combination forecast based on gray system. Statistics and Decision. 2007; (21): 11-12.
[6] Shi Biao, Li Yuxia, Yu Xinhua, Yan Wang. Short-term load forecasting of improved PSO-BP neural network model. Computer Applications. 2009; 29(4): 1036-1039.
[7] Liu Rentao, Fu Qiang, Feng Yan, Gai Zhaomei, Li Guoliang, Li Weiye. Gray BP neural network prediction model based on RAGA and its impact on the Sanjiang Plain groundwater dynamic forecasting. System Engineering Theory and Practice. 2008; 28(5): 171-176.
[8] Tong Xinan, Wei Wei. Gray Verhulst-BP Network Combined Model in Forecasting Research. Computer Engineering and Applications. 2011; 47(23): 245-248.
[9] Liu Sifeng, Dang Yaoguo, Fang Zhigeng, etc. Gray system theory and its application. Beijing: Science Press. 2010: 176-179.
[10] Wang Yingying. Coal logistics demand prediction research based on gray neural network model. Beijing: Beijing Jiaotong University. 2012.
[11] Xia Long, Yong Wei, Ping Liao, Yan Liu. Linear Time-varying Parameters Discrete Gray Model Based on Oscillating Sequence. Journal of Systems Science and Information. 2012; 10(4): 313-318.
[12] Xia Long, Yong Wei, Zhao Long. The Combined Forecasting Model of Gray Model Based On Linear Time-variant and ARIMA Model. Academic Research Publishing Agency. 2013; 16(3).
[13] Guangyou Yang, Zhijian Ye, etc. The Implementation of S-curve Acceleration and Deceleration Using FPGA. TELKOMNIKA Indonesian Journal of Electrical Engineering. 2013; 11(1): 279-286.
[14] TingZhong Wang, GangLong Fan. The Research of Building Fuzzy C-Means Clustering Model Based on Particle Swarm Optimization. TELKOMNIKA Indonesian Journal of Electrical Engineering. 2013; 11(12): 7589-7598.