TELKOMNIKA Indonesian Journal of Electrical Engineering
Vol. 12, No. 11, November 2014, pp. 7728 ~ 7737
DOI: 10.11591/telkomnika.v12i11.6278        ISSN: 2302-4046
Received May 15, 2014; Revised June 22, 2014; Accepted July 10, 2014
Research on Application of Sintering Basicity Based on Various Intelligent Algorithms

Song Qiang*, Zhang Hai-Feng
Mechanical Engineering Department, Anyang Institute of Technology, Anyang 455000, Henan
*Corresponding author, e-mail: songqiang01@126.com
Abstract
Prediction of alkalinity in the sintering process is difficult, and whether the alkalinity level of the sintering process is acceptable directly determines the quality of the sinter. No good prediction method has existed, because the process is highly complex, nonlinear, strongly coupled, and subject to long time delays; a recent technique, the grey least squares support vector machine, is therefore introduced. Traditional grey relational analysis does not give the weights of the evaluation objectives corresponding consideration when solving the correlation degree, and it involves many subjective factors that easily lead to mistakes in decision-making. Moreover, a grey support vector machine model of the alkalinity enables us to develop new formulations and algorithms to predict the alkalinity. In this model the fluctuation of the data sequence is weakened by grey theory, the support vector machine can handle nonlinear adaptive information, and their combination, the grey support vector machine, inherits both advantages. The results show that the basicity of sinter can be predicted accurately from small samples and reference information using the model. The experimental results show that the grey support vector machine model is effective and offers the practical advantages of high precision, few samples, and simple calculation.

Keywords: basicity in sintering process, grey relational analysis, grey least squares support vector machine, prediction, grey model

Copyright © 2014 Institute of Advanced Engineering and Science. All rights reserved.
1. Introduction
In modern steel enterprises, the sintering process for blast furnace materials is one of the most important production processes. The sintering alkalinity has a direct effect on the production and economic benefits of the whole steel enterprise [1]. Therefore, almost every steel factory equips its sintering plant with many instruments and automatic control systems for production process control. However, the complexity of the sintering process makes it difficult to describe with a set of mathematical models. Since the process often has large time delays and dynamic time variability, it is hard to perform control tasks for the whole sintering process using conventional control models.
The sintering process is a complex physical-chemical process characterized by a complicated mechanism, high nonlinearity, strong coupling, and long time delays [1]. A mathematical model of the whole sintering process cannot be established; we can only construct mathematical models for individual performance indicators, and the performance index of the sintering process determines the policy of the blending process. Because of the limitations of the detection means, the chemical examination of sinter alkalinity generally needs 40 min, and over the whole process it can sometimes even exceed 1 hour. Obviously, such a long time delay cannot meet the needs of actual production; the sinter alkalinity must therefore be detected, and a model for predicting the alkalinity should be established [1].
2. Grey Theory
2.1. Grey GM(1,1) Model
Grey theory is a method for studying problems of small samples, poor information, and uncertainty, in which part of the information is known and part is unknown. Taking such small-sample, poor-information uncertain problems as the research object, it extracts valuable information from the known information through data mining, gives a correct description of system behavior and evolution, and performs effective monitoring and prediction of the partially known information system. Compared with traditional statistical methods, grey prediction has many
advantages: it does not need to determine whether the forecast variables are subject to a normal distribution, and it does not need large-sample statistics; that is to say, it is designed specifically for small-sample, poor-information uncertain problems, and the forecast model does not need to change with changes in the input variables. Grey system theory, produced from the study of grey sequences, holds that although the data representation of an objective complex system may be scattered, the system always functions as a whole and must contain some inherent law; the key is how to choose an appropriate way to tap it and use it. Any grey sequence can be generated in a way that weakens its randomness and reveals its regularity. A differential equation is a unified model, and a differential equation model has higher prediction accuracy. The establishment of the GM(1,1) model is basically an accumulative generation of the original data, so that the generated sequence has a certain regularity; by establishing the differential equation model and obtaining the fitting curve, the unknown parts of the system can be predicted. The GM model first applies an accumulation to the original data, generating the 1-AGO sequence. The accumulated data, through data mining, exhibit a certain regularity, whereas the original data show no obvious regularity and a swinging development trend. If the original data are accumulated generatively, their regularity becomes obvious.
Assume that there is a time response sequence (which is called the original time series),

$$x^{(0)} = \left(x^{(0)}(1), x^{(0)}(2), \ldots, x^{(0)}(n)\right) \quad (1)$$
Where $x^{(0)}(i)$ stands for the monitoring data at time $i$, $i = 1, 2, \ldots, n$. The forecast values $\hat{x}^{(0)}(n+T)$, $T = 1, 2, \ldots$, can be derived by the following three steps.
(1) Build up the first-order accumulating generator operator (AGO)
This step is to weaken the indeterminacy in the original time series and get a more regular time series. Let $x^{(1)}$ be the generated time series:

$$x^{(1)}(k) = \sum_{i=1}^{k} x^{(0)}(i), \quad k = 1, 2, \ldots, n \quad (2)$$
Where $x^{(1)}$ is the once Accumulated Generating Operation (1-AGO) sequence.
(2) Construct the first-order linear differential equation; the whitenization differential equation can be obtained:

$$\frac{\mathrm{d}x^{(1)}}{\mathrm{d}t} + a x^{(1)} = u \quad (3)$$
Where $a$ is the developing coefficient, whose value reflects the variation relation of the data, and $u$ is the grey action quantity, which is the most important difference between grey and common models. Using least squares estimation, $a$ and $u$ can be obtained:

$$\begin{bmatrix} a \\ u \end{bmatrix} = \left(B^{\mathrm{T}} B\right)^{-1} B^{\mathrm{T}} Y_N \quad (4)$$
Where,

$$B = \begin{bmatrix} -0.5\left(x^{(1)}(1)+x^{(1)}(2)\right) & 1 \\ -0.5\left(x^{(1)}(2)+x^{(1)}(3)\right) & 1 \\ \vdots & \vdots \\ -0.5\left(x^{(1)}(n-1)+x^{(1)}(n)\right) & 1 \end{bmatrix}, \qquad Y_N = \begin{bmatrix} x^{(0)}(2) \\ x^{(0)}(3) \\ \vdots \\ x^{(0)}(n) \end{bmatrix}$$
According to the linear first-order differential equation, Equation (5) can be derived:

$$\hat{x}^{(1)}(k+1) = \left(x^{(0)}(1) - \frac{u}{a}\right) e^{-ak} + \frac{u}{a} \quad (5)$$
(3) Inverse the accumulation generation
Let $\hat{x}^{(0)}$ be the fitted and forecasted series,

$$\hat{x}^{(0)} = \left(\hat{x}^{(0)}(1), \hat{x}^{(0)}(2), \ldots, \hat{x}^{(0)}(n), \ldots\right) \quad (6)$$
Then, the predicted value can be calculated by Equation (7),

$$\hat{x}^{(0)}(i) = \hat{x}^{(1)}(i) - \hat{x}^{(1)}(i-1), \quad i = 2, 3, \ldots, n, \ldots; \qquad \hat{x}^{(0)}(1) = x^{(0)}(1) \quad (7)$$
Where $\hat{x}^{(0)}(1), \hat{x}^{(0)}(2), \ldots, \hat{x}^{(0)}(n)$ are the fitted values of the original series, and $\hat{x}^{(0)}(n+1), \hat{x}^{(0)}(n+2), \ldots$ are the forecast values.
(4) Error examination
The relative error can be calculated by Equation (8):

$$e(k) = \frac{x^{(0)}(k) - \hat{x}^{(0)}(k)}{x^{(0)}(k)} \times 100\% \quad (8)$$

Where $e$ is the error percentage.
2.2. Residual Forecasting Model
To evaluate modeling performance, we should do a synthetic test of goodness:

$$C = \frac{S_2}{S_1} \quad (9)$$

Where $S_1^2 = \frac{1}{n}\sum_{k=1}^{n}\left(x^{(0)}(k) - \bar{x}^{(0)}\right)^2$ and $S_2^2 = \frac{1}{n}\sum_{k=1}^{n}\left(\Delta(k) - \bar{\Delta}\right)^2$.
The deviation between the original data and the estimated data is:

$$\Delta^{(0)} = \{\Delta(1), \Delta(2), \ldots, \Delta(n)\} = \left\{x^{(0)}(k) - \hat{x}^{(0)}(k)\right\}_{k=1}^{n}$$

$$P = P\left\{\left|\Delta(k) - \bar{\Delta}\right| < 0.6745\, S_1\right\} \quad (10)$$
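A minimal helper for the posterior test of Equations (9) and (10) (our naming; population standard deviations are assumed):

```python
import numpy as np

def posterior_test(x0, x0_hat):
    """Posterior-variance test: ratio C = S2/S1 (Eq. 9) and small-error
    probability P (Eq. 10). Smaller C and larger P mean a better model."""
    x0 = np.asarray(x0, float)
    resid = x0 - np.asarray(x0_hat, float)[:len(x0)]   # Delta(k)
    s1 = x0.std()          # std of the original series
    s2 = resid.std()       # std of the residuals
    C = s2 / s1
    P = np.mean(np.abs(resid - resid.mean()) < 0.6745 * s1)
    return C, P
```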
The precision grade of the forecasting model can be seen in Table 1. Finally, applying the inverse accumulated generation operation, we then have the prediction values:

$$\hat{x}^{(0)}(k+1) = \hat{x}^{(1)}(k+1) - \hat{x}^{(1)}(k)$$
Table 1. Precision Grade of Forecasting Model

  Precision grade    P                  C
  Good               0.95 ≤ P           C ≤ 0.35
  Qualified          0.80 ≤ P < 0.95    0.35 < C ≤ 0.5
  Just               0.70 ≤ P < 0.80    0.5 < C ≤ 0.65
  Unqualified        P < 0.70           0.65 < C
2.3. Grey Relation Analysis
Grey relational analysis deals with the uncertain correlation between things, or the uncertain correlation between system factors and between those factors and the principal behavior. Its fundamental mission is a geometric approach to the micro or macro behavior based on the sequences of behavioral factors, in order to analyze and determine the degree of influence between factors, or the contribution measure of each factor to the main behavior. Its fundamental idea is to judge whether the geometry of the sequence curves is closely linked according to their level of similarity: the closer the curves, the greater the correlation of the corresponding sequences, and conversely, the smaller [6]. The computational procedure of grey relational analysis is expressed below:
Assume the sequences are taken as their initial-value images,

$$X_i = \left(\frac{x_i^{(0)}(1)}{x_i^{(0)}(1)}, \frac{x_i^{(0)}(2)}{x_i^{(0)}(1)}, \frac{x_i^{(0)}(3)}{x_i^{(0)}(1)}, \frac{x_i^{(0)}(4)}{x_i^{(0)}(1)}\right), \quad i = 0, 1, \ldots, 9,$$

then:
the zero-starting-point images $x_i^{0}(k) = X_i(k) - X_i(1)$, $k = 1, \ldots, 4$, are computed for the reference sequence $X_0$ and the comparison sequences $X_1, \ldots, X_9$.
$$s_i = \left|\sum_{k=2}^{3} x_i^{0}(k) + \frac{1}{2}\, x_i^{0}(4)\right|, \quad i = 0, 1, 2, \ldots, 9,$$

which gives: $s_0=0.1$; $s_1=0.735$; $s_2=0.1$; $s_3=0.19$; $s_4=0.09$; $s_5=0.195$; $s_6=0.6$; $s_7=0.0325$; $s_8=0.12$; $s_9=0.805$; and

$$\left|s_i - s_0\right| = \left|\sum_{k=2}^{3}\left(x_i^{0}(k) - x_0^{0}(k)\right) + \frac{1}{2}\left(x_i^{0}(4) - x_0^{0}(4)\right)\right|, \quad i = 1, 2, \ldots, 9.$$

Then we obtain:
$|s_1-s_0|=0.635$; $|s_2-s_0|=0.2$; $|s_3-s_0|=0.09$; $|s_4-s_0|=0.19$; $|s_5-s_0|=0.471$; $|s_6-s_0|=0.7$; $|s_7-s_0|=0.1425$; $|s_8-s_0|=0.02$; $|s_9-s_0|=0.105$; and

$$\varepsilon_{0i} = \frac{1 + \left|s_0\right| + \left|s_i\right|}{1 + \left|s_0\right| + \left|s_i\right| + \left|s_i - s_0\right|}, \quad i = 1, 2, \ldots, 9.$$
So we will obtain: $\varepsilon_{01}=0.7429$; $\varepsilon_{02}=0.8571$; $\varepsilon_{03}=0.9348$; $\varepsilon_{04}=0.8633$; $\varepsilon_{05}=0.7333$; $\varepsilon_{06}=0.7081$; $\varepsilon_{07}=0.7717$; $\varepsilon_{08}=0.9839$; $\varepsilon_{09}=0.9478$.
Then we may find $\varepsilon_{08} > \varepsilon_{09} > \varepsilon_{03} > \varepsilon_{04} > \varepsilon_{02} > \varepsilon_{07} > \varepsilon_{01} > \varepsilon_{05} > \varepsilon_{06}$, i.e., $X_8 > X_9 > X_3 > X_4 > X_2 > X_7 > X_1 > X_5 > X_6$.
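The $s$ and $\varepsilon$ computations above can be sketched as follows (a minimal implementation of the absolute degree of grey incidence used here; the function name is ours):

```python
import numpy as np

def absolute_grey_incidence(x0, xi):
    """Absolute degree of grey incidence between a reference sequence x0
    and a comparison sequence xi (both 1-D, same length n >= 3)."""
    x0 = np.asarray(x0, float)
    xi = np.asarray(xi, float)
    z0 = x0 - x0[0]                  # zero-starting-point images
    zi = xi - xi[0]
    def s(z):                        # |sum_{k=2}^{n-1} z(k) + z(n)/2|
        return abs(z[1:-1].sum() + 0.5 * z[-1])
    s0, si = s(z0), s(zi)
    si0 = s(zi - z0)                 # the |s_i - s_0| term
    return (1 + s0 + si) / (1 + s0 + si + si0)
```

Identical sequences give an incidence of exactly 1, and the value decreases toward 1/2 as the curves diverge.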
By the definition of relational analysis, the allocation is sequenced by the relational coefficient, and the order of the relational coefficients by size is the ranking order of the influence factors. Based on the above result, it is clear that $\varepsilon_{08} = 0.9839$ is the maximum, which represents that the relational degree is biggest between the first allocation center and the ideal allocation center; therefore, the first allocation center is the most optimal choice.
$X_8$ is the optimal factor, $X_9$ ranks second, and $X_6$ is the worst of all factors. That is to say, the CaO ratio and the pulverized coal ratio have a relatively large influence on sinter basicity, while the thickness of the material layer and the FeO content in the mixed ore have very little effect on the basicity of sinter; the latter two operating variables might as well be left out.
3. Least-Squares Support Vector Machines Algorithm Modeling
3.1. Least-Squares Support Vector Machine Algorithm
Recently, the least squares support vector machine (LS-SVM) has been applied successfully in the machine learning domain. It is a promising technique owing to its successful application in classification and regression tasks. It is established on the structural risk minimization principle rather than the minimized empirical error commonly implemented in neural networks, and it achieves higher generalization performance than neural networks in solving these machine learning problems. Another key property is that, unlike the training of neural networks, which requires nonlinear optimization with the danger of getting stuck in local minima, training an LS-SVM is equivalent to solving a set of linear equations. Consequently, the solution of an LS-SVM is always unique and globally optimal. In this study, the application of the LS-SVM to the prediction of the alkalinity in the sintering process is discussed [12-14].
Given a training set $\{(x_t, y_t)\}_{t=1}^{N}$ with $x_t \in \mathbb{R}^n$ and $y_t \in \mathbb{R}$, where $x_t$ is the input vector of the $t$-th sample, $y_t$ is the desired output value of the $t$-th sample, and $N$ is the number of samples, the problem of linear regression is to find a linear function $y(x)$, which is equivalent to applying a fixed nonlinear mapping of the initial data to a feature space. In the feature space, SVM models take the form:
$$y(x) = w^{\mathrm{T}} \varphi(x) + b \quad (11)$$
Where the nonlinear mapping $\varphi(\cdot): \mathbb{R}^n \rightarrow \mathbb{R}^{n_h}$ maps the input space into a high-dimensional feature space, $w$ is a weight vector of unspecified, possibly infinite, dimension, and $b$ is a real constant.
The least squares approach prescribes choosing the parameters $(w, b)$ to minimize the sum of the squared deviations of the data, and the squared loss function is described as:

$$\min J(w, e) = \frac{1}{2} w^{\mathrm{T}} w + \frac{\gamma}{2} \sum_{t=1}^{N} e_t^2 \quad (12)$$
Where $\gamma$ is the trade-off parameter between a smoother solution and training errors, subject to the constraints

$$y_t = w^{\mathrm{T}} \varphi(x_t) + b + e_t, \quad t = 1, \ldots, N.$$
Important differences from the standard SVM are the equality constraints and the squared error term, which greatly simplify the problem: with only equality constraints and an error-loss objective, the optimization becomes much easier to solve.
To solve this optimization problem, one defines the following Lagrange function:

$$L(w, b, e, \alpha) = J(w, e) - \sum_{t=1}^{N} \alpha_t \left\{ w^{\mathrm{T}} \varphi(x_t) + b + e_t - y_t \right\} \quad (13)$$
Where the $\alpha_t$ are Lagrange multipliers. By the Karush-Kuhn-Tucker (KKT) conditions, the conditions for optimality are:

$$\frac{\partial L}{\partial w} = 0 \;\Rightarrow\; w = \sum_{t=1}^{N} \alpha_t \varphi(x_t); \qquad \frac{\partial L}{\partial b} = 0 \;\Rightarrow\; \sum_{t=1}^{N} \alpha_t = 0;$$

$$\frac{\partial L}{\partial e_t} = 0 \;\Rightarrow\; \alpha_t = \gamma e_t; \qquad \frac{\partial L}{\partial \alpha_t} = 0 \;\Rightarrow\; w^{\mathrm{T}} \varphi(x_t) + b + e_t - y_t = 0 \quad (14)$$

Where $t = 1, \ldots, N$.
After elimination of $e_t$ and $w$, the solution is given by the following set of linear equations:

$$\begin{bmatrix} 0 & \mathbf{1}^{\mathrm{T}} \\ \mathbf{1} & \Omega + \gamma^{-1} I \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix} \quad (15)$$

Where $y = [y_1, \ldots, y_N]^{\mathrm{T}}$, $\mathbf{1} = [1, \ldots, 1]^{\mathrm{T}}$, $\alpha = [\alpha_1, \ldots, \alpha_N]^{\mathrm{T}}$, and $\Omega_{tk} = \varphi(x_t)^{\mathrm{T}} \varphi(x_k)$; $\gamma > 0$ is selected to guarantee that the coefficient matrix is invertible.
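The linear system (15), with the RBF kernel introduced in Section 3.2, can be solved directly with a dense solver; a minimal NumPy sketch (the function names and default hyperparameters are ours, not from the paper):

```python
import numpy as np

def lssvm_fit(X, y, gamma=1e4, sigma=0.5):
    """Solve the LS-SVM linear system, Eq. (15), with an RBF kernel.
    Returns (alpha, b)."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    N = len(y)
    # Gram matrix, Omega_tk = K(x_t, x_k)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Omega = np.exp(-d2 / (2 * sigma ** 2))
    # Assemble the (N+1) x (N+1) KKT system of Eq. (15)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = Omega + np.eye(N) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]           # alpha, b

def lssvm_predict(X_train, alpha, b, X_new, sigma=0.5):
    """Evaluate y(x) = sum_t alpha_t K(x, x_t) + b, Eq. (16)."""
    X_train = np.asarray(X_train, float)
    X_new = np.asarray(X_new, float)
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ alpha + b
```

Note that the first row of the system enforces the constraint $\sum_t \alpha_t = 0$ from Equation (14).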
3.2. Selection of the Kernel Function
By the KKT optimality conditions, $w$ is obtained, and thus the nonlinear approximation on the training set is obtained too:

$$y(x) = \sum_{t=1}^{N} \alpha_t K(x, x_t) + b \quad (16)$$
Where $x$ and $x_t$ denote a test point and a support vector respectively, $y$ is the output of the network, and $\alpha_t$ and $b$ are the solutions of the linear system (15); the kernel satisfies

$$K(x_t, x_k) = \varphi(x_t)^{\mathrm{T}} \varphi(x_k), \quad t, k = 1, \ldots, N \quad (17)$$
The selection of the kernel function $\varphi(\cdot): \mathbb{R}^n \rightarrow \mathbb{R}^{n_h}$ has several possibilities: any symmetric function that satisfies the Mercer theorem can be used. In this paper, the radial basis
function (RBF) is used as the kernel function of the LS-SVM, because RBF kernels tend to give good performance under general smoothness assumptions. The Gaussian RBF function is usually used as a kernel function [7]:

$$K(x, x_t) = \exp\left\{ -\left\| x - x_t \right\|^2 / 2\sigma^2 \right\} \quad (18)$$
(18)
Whe
r
e,
is a positive re
al consta
nt.
By Equation (8) to Equatio
n (10
)
, the ob
ject nonli
nea
r model is a
s
follows:
22
2
1
(
)
e
x
p
{
||
||
/
2
}
N
tt
t
yx
x
x
b
(19)
The LS-SVM
predi
ction i
n
volves two
p
a
ram
e
ters to
be optimi
z
e
d
, whi
c
h a
r
e
(the
width of the Gau
ssi
an ke
rnels wh
ich cover the inpu
t space) an
d
is viewed a
s
regul
ari
z
ation
para
m
eters,which
controls
the tradeoff b
e
wtee
n co
m
p
lexity of
the machi
ne an
d
the numbe
r of
non-se
pa
rabl
e points [8].
LS-SVM is an efficie
n
t versio
n of
these im
proved SVM,Inste
s
d of a
quad
ratic
prog
ram
m
ing
proble
m
in stand
ard SV
M,a set of
linear e
quatio
ns ba
se
d on
KKT optimization
con
d
ition a
r
e
solved in
L
S
-SVM,which
can
red
u
ce
the com
putat
ional
compl
e
xity and time fo
r
training to a certain extent [9, 11].
4. Grey Least Squares Support Vector Machines
The object of both grey forecasting and SVM forecasting is small-sample prediction. Although they are built on different theories, there are some similarities between GM(1,1) and SVM: identifying the parameters of the grey GM(1,1) model is actually based on least squares linear regression, whereas the support vector machine evolved from the linear optimal separating surface. Both models have their own advantages and weaknesses. Grey GM(1,1) is a differential equation model which strengthens the regularity of the raw data by accumulative generation; moreover, it is a fitting of an exponential curve, and, being based on empirical risk minimization, it is prone to over-fitting. The support vector machine is a theory based on structural risk minimization, which has very good generalization ability. If the two are combined, enhancing the regularity of the raw data by accumulative generation, identifying the model parameters, and at the same time adopting structural risk minimization, the combination consolidates the advantages of the two models and can obtain better forecast accuracy.
Based on the above analysis, a new idea, or algorithm, is put forward: a grey support vector machine model is proposed to overcome these limitations on the basis of the forecasting models. The fluctuation of the data sequence is weakened by the grey theory, the support vector machine is capable of processing nonlinear adaptable information, and the grey support vector machine is a combination of those advantages. Above all, grey theory is used to construct a cumulative sequence from the raw data, and the least squares support vector machine is adopted for the processing and prediction.
The new algorithm design steps are as follows:
(1) Firstly, take the original sequence $X^{(0)} = \left(x^{(0)}(1), x^{(0)}(2), \ldots, x^{(0)}(n)\right)$ and generate a sequence by cumulative production, $X^{(1)} = \left(x^{(1)}(1), x^{(1)}(2), \ldots, x^{(1)}(n)\right)$, where

$$x^{(1)}(k) = \sum_{i=1}^{k} x^{(0)}(i), \quad k = 1, 2, \ldots, n.$$
(2) Secondly, select a kernel function $K(x_i, x)$;
(3) Solve the optimization problem of Equation (12) using the support vector machine method.
(4) Build up the regression function $y(x) = \sum_{t=1}^{N} \alpha_t K(x, x_t) + b$;
(5) Construct the cumulative sequence and get $\hat{X}^{(1)}$, where $\hat{X}^{(1)}$ is the predictive value;
(6) Restore $\hat{X}^{(1)}$ by cumulative reduction to obtain the forecast model of the original data sequence $X^{(0)}$:

$$\hat{X}^{(0)}(k+1) = \hat{X}^{(1)}(k+1) - \hat{X}^{(1)}(k), \quad k = 1, 2, \ldots, n.$$
(7) Finally, test the model.
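Steps (1)-(7) can be sketched end-to-end as follows. This is a self-contained illustration that regresses the accumulated series on the time index with an RBF LS-SVM; the time-index formulation and the hyperparameter values are our assumptions, not specified by the paper:

```python
import numpy as np

def grey_lssvm_forecast(x0, gamma=1e4, sigma=1.5):
    """Grey LS-SVM sketch: 1-AGO the raw series, fit an RBF LS-SVM
    regression of x1(k) on the time index k, forecast one step ahead,
    then restore the series by inverse accumulation."""
    x0 = np.asarray(x0, float)
    n = len(x0)
    x1 = np.cumsum(x0)                                   # step (1): 1-AGO
    t = np.arange(1, n + 1, dtype=float)
    # step (2): RBF kernel matrix on the time indices
    K = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * sigma ** 2))
    # step (3): solve the LS-SVM KKT system for (b, alpha)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], x1)))
    b, alpha = sol[0], sol[1:]
    # steps (4)-(5): regression function evaluated on k = 1 .. n+1
    t_all = np.arange(1, n + 2, dtype=float)
    K_all = np.exp(-(t_all[:, None] - t[None, :]) ** 2 / (2 * sigma ** 2))
    x1_hat = K_all @ alpha + b
    # step (6): inverse accumulation, X0(k+1) = X1(k+1) - X1(k)
    return np.concatenate(([x1_hat[0]], np.diff(x1_hat)))
```

The returned array holds the fitted values for the observed points followed by the one-step-ahead forecast; step (7), the model test, would compare the fitted part against the raw data.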
5. Sinter Alkalinity Forecasts and Simulation Based on Grey Least Squares Support Vector Machines
5.1. System Input Parameters Selection
The grey least squares support vector machine is used to predict the alkalinity, aiming at this important output index. From the whole process, the variables related to the alkalinity were synthesized, and ten important input variables were chosen as the inputs of the grey model: the layer thickness, the trolley speed, the water addition rate of the first mixture, the mixing temperature, the content of SiO2 in the mineral, the content of CaO in the mineral, the content of FeO in the mineral, the water addition rate of the second mixture, the proportion of CaO, and the proportion of coal.
5.2. Sample Data Processing
Because the collected data are often not of the same order of magnitude, they are normalized to a common interval, which improves the training speed of the network. We often use the following formula to cope with the initial data:
$$x'_{ij} = 0.8 \times \frac{x_{ij} - x_j^{\min}}{x_j^{\max} - x_j^{\min}} + 0.1 \quad (20)$$
Where $x'_{ij}$ and $x_{ij}$ are the new and old values of the variable at a sampling point respectively, and $x_j^{\min}$ and $x_j^{\max}$ are the minimum and maximum values of the variable in the original data set.
After computing with the network, anti-normalization processing is done to obtain the actual value of the forecast output. The anti-normalization follows the formula:

$$x_{ij} = \frac{\left(x'_{ij} - 0.1\right)\left(x_j^{\max} - x_j^{\min}\right)}{0.8} + x_j^{\min} \quad (21)$$
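Equations (20) and (21) can be sketched as a matching pair (note that Equation (20) maps each variable into [0.1, 0.9]; the function names are ours):

```python
import numpy as np

def normalize(X):
    """Scale each column of X into [0.1, 0.9], Eq. (20); returns the scaled
    data plus the per-column min/max needed to invert the mapping."""
    X = np.asarray(X, float)
    xmin, xmax = X.min(axis=0), X.max(axis=0)
    Xn = 0.8 * (X - xmin) / (xmax - xmin) + 0.1
    return Xn, xmin, xmax

def denormalize(Xn, xmin, xmax):
    """Inverse mapping, Eq. (21): recover the original physical values."""
    return (np.asarray(Xn, float) - 0.1) * (xmax - xmin) / 0.8 + xmin
```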
In the sampling data set there were inevitably some anomalies, and such data would affect a given model and could even be misleading. Therefore, the data used in the training sample set and the test sample set were carefully selected.
.
文中
Figure 1.
Th
e Predi
ction o
f
the Alkalinity Based on G
r
ey GM(1,1)
0
5
10
1.
98
1.
99
2
2.
01
2.
02
2.
03
2.
04
2.
05
ti
m
e
s
e
r
i
e
s
O
r
i
g
i
nal
v
a
l
ue an
d f
o
r
e
c
a
s
t
v
a
l
u
e
e
rro
r
G
M
(
1
,
1
) m
o
d
e
l
-2
0
2
-0
.
0
3
-0
.
0
2
-0
.
0
1
0
0.
01
0.
02
0.
03
f
o
rec
a
s
t
e
rror
ti
m
e
s
e
r
i
e
s
0
5
10
-0
.
0
15
-0
.
0
1
-0
.
0
05
0
0.
00
5
0.
0
1
0.
01
5
re
l
a
t
i
v
e
f
o
re
c
a
s
t
e
r
ro
r
ti
m
e
s
e
r
i
e
s
Figure 2. The Prediction of the Alkalinity Based on Grey Least Squares Support Vector Machines

Figure 3. The Error Curve of the Alkalinity Based on Grey Least Support Vector Machine
Figure 1 illustrates the prediction error and relative prediction error of the GM(1,1) model of the alkalinity. Figure 2 presents the fitting curve based on the grey least squares support vector machine; it can clearly be seen that the predicted values are in good agreement with the desired ones over the whole range of time steps, while Figure 3 shows the prediction error based on the grey least squares support vector machine.
From Table 2, the average accuracy of the grey support vector machine reaches 0.273%, whereas the accuracy of the GM(1,1) approach is only around -3.206%. It is not difficult to see that the forecast accuracy of the grey least squares support vector machine is higher than that of a single GM(1,1) model or the SVM model, and it has better robustness. Therefore, we can conclude that the grey least squares support vector machine exhibits excellent learning ability with fewer training data, and the generalization capability of the LS-SVM is greatly improved.
Table 2. The Comparison of the Alkalinity Based on Grey GM(1,1) and Grey Least Support Vector Machine

  Number   Original   GM(1,1) model              Grey least support vector machine
           data       Model data   Rel. error/%  Model data   Rel. error/%
  1        2.02       2.02         0             2.0202       -0.01
  2        2.01       1.9878       +1.1          2.009        -0.05
  3        1.99       1.9942       -2.11         1.9887       +0.065
  4        1.98       2            -1.01         1.9791       +0.015
  5        2          2.0070       -0.35         1.9971       +0.15
  6        2.01       2.0135       -0.174        2.0098       +0.01
  7        2.02       2.02         0             2.0188       +0.059
  8        2.04       2.0265       -0.662        2.0393       +0.034
  Average relative error/%          -3.206                     0.273
6. Conclusion
This paper has proposed a mathematical model of the alkalinity, realized via the grey least squares support vector machine. This algorithm combines the advantages of GM(1,1) and LS-SVM. The new model makes full use of the accumulation generation of the GM(1,1) method, weakens the effect of stochastic disturbing factors in the original data series, strengthens the regularity of the raw data, and avoids the theoretical defects existing in the grey forecasting model. Besides, the SVM's ability to handle high-dimensional and incomplete data allows successful extraction of information even when part of the data records is missing or unreasonable owing to instrument malfunction or maintenance, calibration, and climate influences, so the LS-SVM method is suitable for simulating the alkalinity in an efficient and stable way. These results fully demonstrate that the prediction accuracy of the new model is superior to that of a single model, and the theoretical analysis and simulation results fully establish the validity of the forecast model. This shows that the grey least squares support vector machine is available for the modeling of the alkalinity and can achieve better performance.

Although the proposed grey LS-SVM-based model may be superior to other modeling methods in some aspects, it has some potential drawbacks, such as the underlying Gaussian assumptions related to a least squares cost function. Some researchers have made efforts to overcome these by applying an adapted form called weighted LS-SVM, so we intend to continue the studies on the application of the alkalinity in the sintering process.
References
[1] FAN Xiao-hui, WANG Hai-dong. Mathematical Model and Artificial Intelligence of Sintering Process. Central South University. 2002.
[2] LIU Si-feng, DANG Yao-guo, FANG Zhi-geng. Grey Systems Theory and Application. China Science Press. 2004.
[3] LI Guo-zheng, WANG Meng, ZENG Hua-jun. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Publishing House of Electronics Industry. 2006.
[4] Evelio H, Yaman A. Control of Nonlinear Systems Using Polynomial ARMA Models. AIChE Journal. 1993; 39(3): 446.
[5] Narendra KS, Parthasarathy K. Identification and Control of Dynamical Systems Using Neural Networks. IEEE Trans Neural Networks. 1990; 1(1): 4-27.
[6] WANG Xu-dong, SHAO Hui-he. Neural Network Modeling and Soft-Chemical Measurement Technology. Automation and Instrumentation. 1996; 23(2): 28 (in Chinese).
[7] Cortes C, Vapnik V. Support Vector Machine. Machine Learning. 1995; 20(3): 273.
[8] WANG Yong, LIU Ji-zhen, LIU Xiang-jie, et al. Modelling and Application of Soft-sensor Based on Least Squares Support Vector Machines of Oxygen-content in Flue Gases of Plant. Micro-computer Information. 2006; 10: 110-115.
[9] Kecman V. Learning and Soft Computing. Cambridge: The MIT Press. 2001.
[10] CHEN Xiao-fang, GUI Wei-hua, WANG Ya-lin, et al. Soft-sensing Model of Sulfur Content in Agglomerate Based on Intelligent Integrated Strategy. Control Theory & Applications. 2004; 21(1): 75-80.
[11] ZHANG Xue-gong. On the Statistical Learning Theory and Support Vector Machines. Automation Journal. 2000; 26(1): 1 (in Chinese).
[12] LI Guo-zheng, WANG Meng, ZENG Hua-jun. Support Vector Machine Introduction. Beijing: Electronics Industry Publish Press. 2004 (in Chinese).
[13] Mozer MC, Jordan MI, Petsche T, et al. Advances in Neural Information Processing Systems. 1997; 9(8): 134.
[14] Nello Cristianini, John Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Beijing: China Machine Press. 2005.