TELKOMNIKA Indonesian Journal of Electrical Engineering
Vol. 12, No. 6, June 2014, pp. 4451 ~ 4456
ISSN: 2302-4046, DOI: 10.11591/telkomnika.v12i6.5481
Received December 23, 2013; Revised February 16, 2014; Accepted March 3, 2014
The Prediction of Granulating Effect Based on BP Neural Network

Fang Li*1, Kaigui Wu2, Guanyin Zhao3
College of Computer Science, Chongqing University, Chongqing, China
*Corresponding author, e-mail: hnzy1988@163.com (1), kaiguiwu@cqu.edu.cn (2), 563799972@qq.com (3)
Abstract
During the granulation process of iron ore sinter mixture, many factors affect the granulating effect, such as chemical composition, size distribution, and the surface features of the particles. Some researchers use traditional fitting calculation methods, such as the least squares method and regression analysis, to predict granulation effects, but these methods exhibit large errors. To predict the effect better, we build an improved BP (Back Propagation) neural network model to carry out data analysis and processing, and obtain better results than traditional fitting calculation methods.
Keywords: iron ore sinter mixture, size distribution, granulation effects, BP, neural network, fitting calculation
Copyright © 2014 Institute of Advanced Engineering and Science. All rights reserved.
1. Introduction
Based on the granulating mechanism of sinter mixture, many factors influence the granulating effect. Under given granulation equipment and operating conditions, the main factor is the sinter mixture's own nature, including the chemical composition of the material, its size distribution, moisture capacity, microscopic structure, and other properties. The main chemical constituents of sinter mixture are TFe, FeO, SiO2, CaO, Al2O3, MgO, MnO, TiO2, K2O, Na2O, S, and P. Among them, CaO, Al2O3, and MgO are conducive to granulation, while SiO2 has an adverse effect on granulation. The contents of these chemical constituents should therefore be used as input parameters of the model. The remaining chemical ingredients, such as MnO, TiO2, K2O, Na2O, S, and P, have low contents and are not considered, in order to reduce the complexity of the model. Similarly, we select the <0.2 mm, 0.2-0.7 mm, and 0.7-3 mm fractions as the size distribution inputs, and two further parameters, moisture capacity and moisture content, are also considered. So there are nine parameters as the granulating effect prediction model's inputs: CaO, Al2O3, MgO, SiO2, <0.2 mm, 0.2-0.7 mm, 0.7-3 mm, moisture capacity, and moisture content.
The quality of iron ore sinter mixture granulation is determined by permeability; however, permeability is not measured in actual production, only under experimental conditions. We therefore use the content of 3-8 mm granules to evaluate permeability in actual production. Thus there are two output parameters in the granulating effect prediction model: permeability and 3-8 mm granularity content.
The BPNN is a forward multi-layer network based on the BP algorithm. Its topological structure, a layered feed-forward network, is composed of an input layer, a hidden layer, and an output layer. In essence, the BPNN algorithm turns the input and output of a set of samples into a nonlinear optimization problem, which is solved with the gradient descent optimization technique, using an iterative solution to obtain the weights [1].
2. Granulating Neural Model
In this paper, we build a three-layer BP neural network model, shown in Figure 1. Its three layers are denoted the input layer, hidden layer, and output layer.
Figure 1. Granulating Neural Model
From Figure 1, we can see there are nine input nodes in the network: moisture capacity, moisture content, CaO, Al2O3, MgO, SiO2, <0.2 mm, 0.2-0.7 mm, and 0.7-3 mm. These parameters serve as the granulating effect prediction inputs. There are also two output nodes in the network, permeability and 3-8 mm granularity content, which serve as the granulating effect prediction outputs.
3. BP Algorithm
The Back Propagation neural network is one of the most widely applied kinds of neural network. It is based on the gradient descent method, which minimizes the sum of the squared errors between the actual and the desired output values [2, 3].
Suppose p is the input of the network, a is the output of the neurons in the hidden layer, o is the output of the neurons in the output layer, r is the number of input nodes, s is the number of neurons in the hidden layer, t is the number of neurons in the output layer, w1 is the connection weight of the hidden layer, and w2 is the connection weight of the output layer [4, 5].
The output of neuron i in the hidden layer:

$a_i = f_1\left(\sum_{j=1}^{r} w1_{ij}\, p_j + b1_i\right)$  (1)
The output of neuron k in the output layer:

$o_k = f_2\left(\sum_{i=1}^{s} w2_{ki}\, a_i + b2_k\right)$  (2)
In (1) and (2), f1 and f2 are the excitation functions of the hidden layer and output layer respectively, and b1 and b2 are the threshold values of the hidden layer and output layer respectively, where i = 1, 2, …, s and k = 1, 2, …, t.
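Equations (1) and (2) can be sketched as a plain forward pass. The sigmoid excitation function and the layer sizes used below are illustrative assumptions; the paper does not state which f1 and f2 it uses.

```python
import math

def sigmoid(x):
    # an assumed choice for the excitation functions f1 and f2
    return 1.0 / (1.0 + math.exp(-x))

def forward(p, w1, b1, w2, b2):
    """Eq (1): a_i = f1(sum_j w1[i][j]*p[j] + b1[i]);
    Eq (2): o_k = f2(sum_i w2[k][i]*a[i] + b2[k])."""
    a = [sigmoid(sum(w1[i][j] * p[j] for j in range(len(p))) + b1[i])
         for i in range(len(b1))]
    o = [sigmoid(sum(w2[k][i] * a[i] for i in range(len(a))) + b2[k])
         for k in range(len(b2))]
    return a, o
```

With all weights and thresholds zero, every unit receives input 0 and outputs sigmoid(0) = 0.5, which gives a quick sanity check.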
The training of the BP network is realized by updating the connection weights according to the error between the real data and the expected values [6]. We now define the error function:
$E_p = \frac{1}{2}\sum_{k=1}^{t}(t_k - o_k)^2$  (3)
In (3), t_k and o_k are the desired output and the network output, respectively [7]. The total error function:
$E = \sum_{p} E_p$  (4)

where the sum runs over all training samples p.
Then we compute the adjustment of the connection weights [8]:
$\Delta w2(k+1) = -u(k)\,\frac{\partial E}{\partial w2(k)}$  (5)

$\Delta w1(k+1) = -u(k)\,\frac{\partial E}{\partial w1(k)}$  (6)

where u(k) is the learning rate at training step k.
We can then adjust the connection weights [9]:
$w1(k+1) = w1(k) + \Delta w1(k+1)$  (7)

$w2(k+1) = w2(k) + \Delta w2(k+1)$  (8)
4. Improved Model
The BP algorithm is simple, requires only a small amount of calculation, and parallelizes well, so it is currently one of the most widely used and most mature training algorithms for networks. The essence of the algorithm is to find the minimum value of the error function [6]. Because it uses the method of steepest descent from nonlinear programming, the following problems exist [10, 11]:
(1) Slow convergence and low learning efficiency;
(2) Easily falling into local minima.
In order to make the model more accurate, we use a momentum-based adaptive learning rate adjustment algorithm. The weight and threshold adjustment formulas with an additional momentum factor are [12]:
$\Delta w_{ij}(k+1) = (1 - mc)\,u\,\delta_i\,p_j + mc\,\Delta w_{ij}(k)$  (9)

$\Delta b_i(k+1) = (1 - mc)\,u\,\delta_i + mc\,\Delta b_i(k)$  (10)
In which, k is the number of training iterations (we take 10000); mc is the momentum factor (we take 0.9); u is the learning rate; δ_i is the error term of node i; w_ij(k) is the weight between node i in the hidden layer and node j in the input layer; Δw_ij is the adjustment of the hidden-layer weight, and Δb_i is the adjustment of the hidden-layer threshold.
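Equation (9) can be transcribed directly. The function below is a minimal sketch; the name, the delta argument (the back-propagated error terms), and the shapes are illustrative assumptions:

```python
def momentum_update(w, dw_prev, delta, p, u, mc=0.9):
    """Eq (9): dw_ij(k+1) = (1 - mc)*u*delta_i*p_j + mc*dw_ij(k);
    returns the updated weights and the new adjustments."""
    dw = [[(1 - mc) * u * delta[i] * p[j] + mc * dw_prev[i][j]
           for j in range(len(p))] for i in range(len(delta))]
    w_new = [[w[i][j] + dw[i][j] for j in range(len(p))]
             for i in range(len(delta))]
    return w_new, dw
```

The momentum term mc*dw_prev keeps a fraction of the previous adjustment, smoothing successive steps; eq (10) is the same update without the p_j factor.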
At the same time, it is not easy to select an appropriate learning rate for a particular problem. To solve this problem, it is natural to adjust the learning rate automatically during the training process. The adaptive learning rate adjustment formula is [13, 14]:
$u(k+1) = \begin{cases} 1.05\,u(k), & E(k+1) < E(k) \\ 0.7\,u(k), & E(k+1) > 1.04\,E(k) \\ u(k), & \text{otherwise} \end{cases}$  (11)
E(k) is the sum of squared errors at step k. The selection of the initial learning rate is optional; we take 1.0 [15].
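Equation (11) is a three-branch rule that can be transcribed directly; only the function and argument names below are ours:

```python
def adapt_rate(u, err_new, err_old):
    """Eq (11): raise the learning rate by 5% when the error fell,
    cut it to 70% when the error grew by more than 4%, else keep it."""
    if err_new < err_old:
        return 1.05 * u
    if err_new > 1.04 * err_old:
        return 0.7 * u
    return u
```

A run of improving epochs therefore grows the rate geometrically, while a clear regression shrinks it sharply.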
This method ensures that the network trains the samples with a learning rate that is always acceptable to the network. The system settings, including accuracy, learning rate, training time, and momentum factor, are shown in Table 1.
Table 1. System Settings
Setting            Value
System accuracy    0.001
Learning rate      1.0
Training time      10000
Momentum factor    0.9
5. Model Solution
In order to get the relation between moisture capacity and moisture content, we collected and measured different ores from different factories. In this paper, we use 40 groups of data as samples: 32 groups, selected randomly, are taken as training samples, as shown in Table 2, and the remaining 8 groups are used as forecast samples, as shown in Table 3.
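The random 32/8 split described above can be sketched as follows (the function name and seed parameter are ours, for reproducibility):

```python
import random

def split_samples(samples, n_train=32, seed=None):
    """Randomly select n_train groups as training samples;
    the remaining groups become forecast samples."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    train = [samples[i] for i in idx[:n_train]]
    forecast = [samples[i] for i in idx[n_train:]]
    return train, forecast
```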
Table 2. Training Samples
No   Moisture content   3-8 mm/%   Permeability/mm H2O
1    5.1     29.85   288.00
2    6.86    53.80   216.00
3    8.12    61.33   230.00
4    5.73    63.48   220.00
5    6.68    61.20   228.00
6    6.58    59.90   228.00
7    6.27    31.96   286.00
8    5.21    27.37   652.00
9    5.00    33.69   674.00
10   6.24    35.04   570.00
11   8.38    51.68   314.00
12   7.99    59.16   196.00
13   7.04    40.96   256.00
14   9.32    60.27   208.00
15   7.61    43.21   286.00
16   6.33    31.35   588.00
17   5.585   25.92   550.00
18   7.85    55.41   404.00
19   6.65    66.02   196.00
20   5.42    54.79   288.00
21   5.68    40.24   596.00
22   6.34    58.23   413.00
23   6.27    47.94   296.00
24   7.10    42.24   248.00
25   7.71    42.86   232.00
26   8.99    60.67   200.00
27   8.53    54.56   196.00
28   8.50    61.08   206.00
29   7.88    66.11   224.00
30   7.24    49.35   246.00
31   5.52    32.16   566.00
32   7.80    49.08   566.00
Table 3. Forecast Samples
No   Moisture content   3-8 mm/%   Permeability/mm H2O
1    7.2     49.22   260.00
2    5.78    32.21   442.00
3    6.82    36.87   286.00
4    4.98    63.45   250.00
5    6.90    25.16   820.00
6    5.42    70.23   210.00
7    6.58    42.08   236.00
8    8.61    63.33   248.00
Then we use C++ to implement the BP algorithm and build a small piece of software, shown in Figure 2, to train the samples and obtain the forecast values. The BP software first reads the training data as training samples, together with the maximum training time and the numbers of nodes in the input layer, hidden layer, and output layer. When the training stops, the forecast results are stored.
From Figure 2, we can see that before training, the system accuracy is set to 0.001, the training time is set to 10000, and the learning rate is set to 0.8. The input num represents the number of nodes in the input layer, the hidden num represents the number of nodes in the hidden layer, and the output num represents the number of nodes in the output layer. When the training time reaches 8000, the total error is 0.0009992 (0.0009992 < 0.001), and the training stops.
Figure 2. Main Interface of the Software
We train the network to solve for the weights from the input layer to the hidden layer and from the hidden layer to the output layer, and then take the remaining samples as forecast samples, analyzing the difference between the forecast values and the actual data, as shown in Figure 3 and Figure 4:
Figure 3. Prediction Results for 3-8 mm Granules Percentage
Figure 4. Prediction Results for Permeability
Comparing the relative errors sequentially, they stay between 6% and 8%, and the accuracy of the model reaches 92%. So we can draw the following conclusions:
(1) It is feasible to predict the granulating effect using a BPNN model, and the model achieves a very good effect.
(2) The BP network has a strong nonlinear approximation ability; the fitting precision between the output and the samples is good.
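The 6%~8% band quoted above corresponds to a per-sample relative-error check, sketched here (the helper name is ours; how the per-sample errors are aggregated into the 92% accuracy figure is not stated in the paper):

```python
def relative_errors(forecast, actual):
    """Per-sample relative error |forecast - actual| / actual."""
    return [abs(f - a) / a for f, a in zip(forecast, actual)]
```

For example, a forecast of 94.0 against an actual value of 100.0 gives a relative error of 0.06.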
In this paper, a neural network is applied to the modeling process of granulation, which is complex, nonlinear, dynamic, multivariable, and difficult to model. We obtain better results than traditional fitting calculation methods. In the future, the model will play a certain role in granulating production.
Acknowledgements
This work is supported by the Major Research Project of the National Natural Science Foundation of China under Grant No. 90818028, and the Natural Science Foundation Project of CQ CSTC: 2011BB2064.
References
[1] Jiang Hui Yu, Dong Min, Yang Feng. Application of BP neural network into prediction of nitrobenzene compound in toxicity. International Conference on Genetic and Evolutionary Computing, 2nd ed. 2008; 170-173.
[2] BHM Sadeghi. A BP-neural network predictor model for plastic injection molding process. Journal of Materials Processing Technology. 2000; 103(3): 411-416.
[3] Tie Wang, Chao Wang. The Fault Diagnosis of Bora Engine CH Emissions based on Neural network. TELKOMNIKA Indonesian Journal of Electrical Engineering. 2012; 10(8): 2343-2350.
[4] Yang Shanxiao, Yang Guangying. Emotion Recognition of EMG Based on Improved L-M BP Neural Network and SVM. Journal of Software. 2011; 6(8): 1529-1536.
[5] Luo Yaozhi, Tong Ruofei. Study of BP network for a cylinder shell's support identification. Advanced Materials Research, Advances in Civil Engineering and Architecture Innovation. 2012; 2050-2055.
[6] Li Jing-Rui, Wang Gang, Zhou Yun-Jin, Gong Yun-Fei. Workpiece pattern recognition based on ART2 neural network. Harbin Gongye Daxue Xuebao/Journal of Harbin Institute of Technology. 2009; 41(3): 117-120.
[7] Shuiping Zeng, Lin Cui, Jinhong Li. Diagnosis System for Alumina Reduction Based on BP Neural Network. Journal of Computers. 2012; 7(4): 929-933.
[8] Shah Manthan, Gaikwad Vijay, Lokhande Shashikant, Borhade Sanket. Fault identification for I.C. engines using artificial neural network. International Conference on Process Automation, Control and Computing, PACC. 2011.
[9] Y Alginahi, MA Sid-Ahmed, M Ahmadi. Local thresholding of composite documents using multi-layer perceptron neural network. Midwest Symposium on Circuits and Systems. 2004; I209-I212.
[10] Nabil EL Kadhi, Karim Hadjar, Nahla EL Zant. A Mobile Agents and Artificial Neural Networks for Intrusion Detection. Journal of Software. 2012; 7(1): 156-160.
[11] Saravanan P, Nagarajan S. An adaptive learning approach for tracking data using visual and textual features. International Conference on Trendz in Information Sciences and Computing, 2nd ed. TISC-2010. 2010; 192-196.
[12] Dongchao Ma, Zhibo Zhang, Yuanyuan Jia. A Software Components Automated Testing System on Reconfigurable Routing Platform. Journal of Convergence Information Technology. 2013; 8(1): 69-76.
[13] Li Xinwu, Guan Pengcheng. A Novel Algorithm of Network Trade Customer Classification Based on Fourier Basis Functions. TELKOMNIKA Indonesian Journal of Electrical Engineering. 2013; 11(11): 6434-6440.
[14] Zhu Lei, Yang Dan, Wu Ying-bo. Selection of software reliability model based on BP neural network. Computer Engineering and Design. 2007; 28(17).