TELKOMNIKA Indonesian Journal of Electrical Engineering
Vol. 12, No. 7, July 2014, pp. 5235 ~ 5243
DOI: 10.11591/telkomnika.v12i7.5787
Received February 13, 2014; Revised March 16, 2014; Accepted April 2, 2014
Wavelet Kernel Based on Identification for Nonlinear Hybrid Systems
Hamid Nourisola
Department of Electrical Control Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, IRAN
email: hamidnourisola@yahoo.com
Abstract
This paper presents a new wavelet-based method for the identification of a class of nonlinear hybrid systems. Hybrid system identification is composed of two problems: estimating the discrete modes, i.e. the switches among the system modes, and estimating the continuous submodels. In this paper, we assume that no prior knowledge about data classification and submodel identification is available. Also, the combination of a feature vector selection algorithm and wavelets is used in subspace learning, with a support vector machine as the classifier. The results indicate that the error of using the wavelet in the subspace learning process becomes low. In addition, the proposed method is convergent and has an acceptable response in the presence of high-power noise.
Keywords: hybrid system identification, wavelet kernel function, feature vector selection, support vector machine classifier
Copyright © 2014 Institute of Advanced Engineering and Science. All rights reserved.
1. Introduction
Hybrid systems switch among several continuous modes and are described as systems which include both continuous and discrete states. In many applications an accurate model of the system is not available; thus it is necessary to identify the system parameters and their dynamics. In this paper, a class of nonlinear hybrid system identification in nonlinear autoregressive with external input (NARX) form is considered as follows:

y_i = f_{\lambda_i}(x_i) + e_i    (1)
where e_i is an additive Gaussian noise term and x_i = [y_{i-1}, ..., y_{i-n_a}, u_{i-1}, ..., u_{i-n_c}]^T is the continuous state regressor; n_a and n_c are the lags of the outputs y_{i-k} and the inputs u_{i-k}, respectively. The discrete modes are determined by \lambda_i \in \{1, 2, ..., n\}, in which one of the n submodels \{f_j\}_{j=1}^{n} is active at time step i. Also, the number of modes is known and no information about their regressors is available.
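As an illustration of the NARX setup in (1), the regressor x_i can be assembled from lagged outputs and inputs. The following is a minimal sketch; the signals and lag orders used here are hypothetical examples, not data from the paper:

```python
import numpy as np

def build_narx_regressors(y, u, na, nc):
    """Build NARX regressors x_i = [y_{i-1},...,y_{i-na}, u_{i-1},...,u_{i-nc}]^T."""
    start = max(na, nc)
    X, targets = [], []
    for i in range(start, len(y)):
        # most recent samples first, as in the regressor definition
        x_i = np.concatenate([y[i - na:i][::-1], u[i - nc:i][::-1]])
        X.append(x_i)
        targets.append(y[i])
    return np.array(X), np.array(targets)

# Hypothetical signals: 10 samples, lags na=2, nc=1
y = np.arange(10.0)
u = np.ones(10)
X, t = build_narx_regressors(y, u, na=2, nc=1)
# each row of X is [y_{i-1}, y_{i-2}, u_{i-1}]
```

Each row of X then plays the role of x_i in (1), with t_i = y_i as the corresponding target.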
In [1], five methods of hybrid systems identification have been studied; these methods are non-convergent, and in addition the optimization problem is enormously dependent on the initial condition. Mixed-integer programming is one of the mentioned methods, whose responses are limited by the number of data and variables [2, 3]. In [4], the number of modes is known, and an identification algorithm is proposed which combines clustering, regression and classification techniques. In the Bayesian approach presented in [5], the unknown parameters are considered as random variables. This method has three steps: parameter estimation, data classification, and region estimation; Bayesian law is applied to estimate the parameters. In the algebraic geometric approach, the applied system is assumed to be noise-free [6]. The mentioned approach obviously has considerable error on experimental systems. The bounded-error approach identifies hybrid systems through imposing an error constraint [7]. In [8], the problem is formulated as a least-squares problem with sum-of-norms regularization over regressor parameter differences. The automatic tuning approach applies the bounded-error approach and support vector regression (SVR) for extension of the algebraic method [9]. [10] uses the algebraic and SVR approaches to establish a framework based on minimizing the product of loss functions along with a regularization term. Lauer presented another method for hybrid systems identification based on the support vector classifier and kernel functions [11, 12]. The kernel function is used as a nonlinear transformation. In [13], Luong proposes four methods for feature extraction and uses them in the SVM formulation to identify nonlinear hybrid systems. In addition, each mode has a different radial basis function covariance, in which the modes and the train and test data are known in the subspace mapping step. In [14] a new learning approach for piecewise smooth functions by regularized kernel regression is proposed. This is done by defining a new regularization term. In [15] the identification of hybrid systems involving arbitrary and unknown nonlinearities in the submodels is investigated. In this approach, the submodels are estimated one by one by maximizing the sparsity of the corresponding error vector.
In this paper, the proposed method improves the work of [13] with a wavelet function. The main contribution of this paper is the change in the subspace learning of the train and test data using the wavelet kernel function in nonlinear hybrid systems identification. Finally, the effects of the kernel function coefficient and the wavelet kernel function coefficient are investigated. In this paper, we assume that the train and test data are unknown in data classification and that the data have a single RBF covariance. Only the number of modes is known.
This paper is organized as follows: In Section 2, a framework of nonlinear hybrid systems identification is introduced. Section 3 presents kernel principal component regression and wavelet kernel principal component regression. In Section 4, the proposed method with numerical results is investigated. Finally, the conclusions are drawn in Section 5.
2. Framework of Nonlinear Hybrid Systems Identification
First, this section presents the structure of the kernel function for submodel estimation [12, 13]. The submodels of nonlinear hybrid systems can be introduced as:

f_j(x) = \sum_{k=1}^{N} \alpha_{kj} \, k_j(x, x_k) + b_j    (2)
where \alpha_j = [\alpha_{1j}, ..., \alpha_{Nj}]^T, b_j is the bias term for f_j, and k_j is a kernel function that satisfies the Mercer conditions [16]. Typical kernel functions are the linear kernel function, the RBF kernel function and the polynomial kernel function. In this paper, the RBF kernel function k(x, x_k) = \exp(-\|x - x_k\|^2 / 2\sigma^2) is used.
The method mentioned in [17-19] for identification and data classification is (3).

R_{reg}(w) = T(w) + c \, R_{emp}(w)    (3)
where R_{emp}(w) is the empirical risk function. T(w) is a term which prevents overtraining and acts as a regularizer while the empirical risk function is minimized. c is the adjustment coefficient. According to (4), the aim of the empirical risk function is minimizing the number and magnitude of classification errors.

R_{emp}(w) = \frac{1}{N} \sum_{i=1}^{N} q(y_i \, f(x_i, w))    (4)

where y_i, f(x, w) and q are the class labels, the output of the classifier and the weighting function, respectively, in support vector machine classification. According to (4), (3) is rewritten as follows:
w_{primal} = \min_{w, b, \xi} \; \frac{1}{2} w^T w + c \sum_{i=1}^{N} \xi_i \quad s.t. \; y_i (w^T x_i + b) \ge 1 - \xi_i    (5)

where w = \sum_{i=1}^{N} \alpha_i y_i x_i and \|\cdot\| is the 1- or 2-norm. To obtain the solution, the dual optimization problem must be solved, with the boundary conditions expressed by:
w_{dual} = \max_{\alpha} \; \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j y_i y_j x_i^T x_j \quad s.t. \; 0 \le \alpha_i \le c, \; i = 1, ..., N, \quad \sum_{i=1}^{N} \alpha_i y_i = 0    (6)
In (6), the input data can be in another space. This means that the data have been mapped to another space. When the data have nonlinear behavior and cannot be distinguished from one another, data mapping is used. The mode estimate for each data point is obtained through (7).

\lambda_i = \arg\max_{j} f_j(x_i)    (7)

where f_j(x) = \sum_{i=1}^{N} \alpha_i y_i x_i^T x + b_j.
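The mode assignment rule (7) amounts to evaluating each submodel's decision function and taking the argmax. A minimal sketch follows, under the assumption that the per-mode dual coefficients, labels, support vectors and biases have already been obtained from (6); all concrete values here are hypothetical:

```python
import numpy as np

def decision_function(x, alpha, y, X_sv, b):
    # f_j(x) = sum_i alpha_i * y_i * <x_i, x> + b_j, as in (7)
    return np.sum(alpha * y * (X_sv @ x)) + b

def estimate_mode(x, models):
    # models: list of (alpha, y, X_sv, b) tuples, one per mode
    scores = [decision_function(x, *m) for m in models]
    return int(np.argmax(scores)) + 1  # modes are numbered 1..n

# Hypothetical two-mode example with hand-picked coefficients
X_sv = np.array([[1.0, 0.0], [0.0, 1.0]])
mode1 = (np.array([1.0, 1.0]), np.array([1.0, -1.0]), X_sv, 0.0)   # favors axis 0
mode2 = (np.array([1.0, 1.0]), np.array([-1.0, 1.0]), X_sv, 0.0)   # favors axis 1
print(estimate_mode(np.array([2.0, 0.5]), [mode1, mode2]))  # prints 1
```

Replacing the inner product X_sv @ x with a kernel evaluation gives the nonlinear version used later in the paper.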
3. Subspace Learning and Data Dimension Reduction
The purpose of this section is to reduce the number of added data which describe the system with extra features. Subspace learning is used through selecting the eigenvalues and eigenvectors of the training data matrix and evaluating their effect on the test data. Then the support vector machine Lagrangian multipliers \{\alpha_{ij}\}_{j=1,...,n} are obtained for the test vectors in each mode. One of the support vector machine properties is reducing the number of test data. In fact, the number of data decreases in two steps (subspace learning and classification). The operations of the two steps are explained in the following.
3.1. Kernel Principal Component Regression
If the data distribution has nonlinear behavior in the original space, it cannot be changed by a linear mapping. So it is necessary to use a nonlinear mapping to reduce the nonlinear relation between the data. As mentioned in [13], [20-21], kernel principal component regression (KPCR) can reduce the training data dimension in an optimal way. Suppose that a set of training feature vectors in the original space is \{z_1, z_2, ..., z_N\}, where z_i \in R^n (1 \le i \le N) is the feature extraction from mode i. Also suppose that \varphi: R^n \to g is a nonlinear transformation which transforms data from the original space of dimension n to a feature space of dimension l. In this space, the scatter matrix is obtained according to (8).

S_t = \frac{1}{N} \sum_{i=1}^{N} (\varphi(z_i) - e)(\varphi(z_i) - e)^T    (8)
where \varphi(z_i) (1 \le i \le N) is vector i in the feature space and e is the average of all vectors in the feature space. If the mean of the vectors is not zero, we can center the kernel function in the feature space through (9).
\hat{k} = \left(I_N - \frac{1}{N} 1_{N \times N}\right) k \left(I_N - \frac{1}{N} 1_{N \times N}\right)    (9)

where I_N and 1_{N \times N} are the identity matrix and the all-ones matrix, respectively.
The eigenvalues and eigenvector matrix of S_t lying outside the null space of \varphi can be calculated through the PCR algorithm.

S_t v = \lambda v    (10)

where v = \sum_{i=1}^{N} \alpha_i \varphi(z_i). According to [21], (10) can be written as (11).
k W = N \lambda W    (11)
where W = [\alpha_1, \alpha_2, ..., \alpha_N]^T and k is the kernel function matrix. If operation (11) is performed for the m largest eigenvalues of S_t, m vectors w_1, w_2, ..., w_m will be obtained. It is obvious that to obtain the m vectors, we must compute the eigenvalues and eigenvectors of the kernel matrix k. Finally, the mapping of each test feature vector such as z \in R^n from the original space with dimension n to the m-dimensional subspace (m \le n) is done by (12).

x = W^T k_z    (12)
where x \in R^m is the mapping of the test feature vector in the subspace and k_z = [k(z, z_1), k(z, z_2), ..., k(z, z_N)]^T.
To obtain the kernel principal component regression and the identification of the system modes:
1. Compute the kernel matrix for a set of training data.
2. Compute the eigenvalues and eigenvectors of the kernel matrix, determine its dimension by (13) and calculate the transfer matrix.

\frac{\sum_{i=1}^{m} s_i}{\sum_{i=1}^{N} s_i} \ge \theta    (13)

where m is the matrix dimension, s_i is an eigenvalue of the kernel matrix and \theta \in [0, 1] relates the system error to its dimension. The value of this parameter can be decreased until the system error is low.
3. Transfer the test data to the feature space using the mapping matrix.
4. Place the transferred test data in (6).
5. Calculate \hat{f} for each data set using (14) and then identify and classify them by (6) and (7).
\hat{f}_j(x) = \beta_j^T W_j^T k(\cdot, x) + b_j    (14)

where \beta_j is the Lagrangian coefficient in the feature space.
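Steps 1–3 of the KPCR procedure, i.e. the kernel matrix, the centering in (9), the eigendecomposition in (11), the dimension selection in (13) and the subspace mapping in (12), can be sketched as follows. This is a minimal NumPy sketch; the RBF width sigma, the threshold theta and the random data are hypothetical choices:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # k(x, x') = exp(-||x - x'||^2 / (2 sigma^2))
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

def kpcr_subspace(Z_train, Z_test, theta=0.95, sigma=1.0):
    N = Z_train.shape[0]
    K = rbf_kernel(Z_train, Z_train, sigma)
    # centering, eq (9): K_hat = (I - 1/N) K (I - 1/N)
    C = np.eye(N) - np.ones((N, N)) / N
    K_hat = C @ K @ C
    # eigendecomposition, eq (11); eigh returns ascending eigenvalues
    s, W = np.linalg.eigh(K_hat)
    s, W = s[::-1], W[:, ::-1]
    # dimension selection, eq (13): smallest m with cumulative ratio >= theta
    ratios = np.cumsum(s) / np.sum(s)
    m = int(np.searchsorted(ratios, theta) + 1)
    # subspace mapping, eq (12): x = W^T k_z for each test vector
    K_test = rbf_kernel(Z_train, Z_test, sigma)
    return W[:, :m].T @ K_test, m

rng = np.random.default_rng(0)
Z_train, Z_test = rng.normal(size=(20, 3)), rng.normal(size=(5, 3))
X_sub, m = kpcr_subspace(Z_train, Z_test)
# X_sub has shape (m, n_test): each test vector mapped to the m-dimensional subspace
```

The mapped test vectors X_sub are what then enter the SVM dual (6) in steps 4–5.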
3.2. Wavelet Kernel Principal Component Regression
The wavelet kernel principal component regression is the extension of the KPCR method. This method can be used for nonlinear systems. In this paper, the wavelet transform is used for subspace learning. The main idea of wavelet analysis is to approximate functions by dilations and translations of a function h(x) called the mother wavelet.

h_{a,c}(x) = |a|^{-1/2} \, h\!\left(\frac{x - c}{a}\right)    (15)
where x, a, c \in R; a is a dilation factor and c is a translation factor. If the wavelet function is multidimensional, it can be written as (16) [22].

h(x) = \prod_{i=1}^{N} h(x_i)    (16)
where x = (x_1, x_2, ..., x_N) \in R^N. If x, x' \in R^N are two vectors in a space, the wavelet kernel can be obtained from (17) [23].

k(x, x') = \prod_{i=1}^{N} h\!\left(\frac{x_i - c_i}{a}\right) h\!\left(\frac{x'_i - c'_i}{a}\right)    (17)
For a translation-invariant kernel k(x, x') = k(x - x'), (17) can be rewritten as (18).

k(x, x') = \prod_{i=1}^{N} h\!\left(\frac{x_i - x'_i}{a}\right)    (18)
If the mother wavelet function is assumed to be of the form h(x) = [\cos(1.75 x) \exp(-x^2/2)]^P, the kernel function is as (19).

k(x, x') = \prod_{i=1}^{N} \left[\cos\!\left(1.75 \, \frac{x_i - x'_i}{a}\right) \exp\!\left(-\frac{(x_i - x'_i)^2}{2 a^2}\right)\right]^P    (19)
The wavelet kernel is an orthonormal function [24], while the Gaussian kernel function does not have this feature. In other words, due to the dependencies and correlations between data in the Gaussian kernel function, the training speed will be lower than with the wavelet kernel. The WKPCR algorithm is applied in the same way as the KPCR algorithm, except that their kernel functions are different. In feature extraction and subspace learning for nonlinear systems, the wavelet kernel function is used. The process of the algorithm is as follows:
1. Compute the wavelet kernel matrix for a set of training data.
2. Check the mean of the train data in the mapping subspace using \sum_{i=1}^{N} \varphi_j(x_i), where \varphi_j(x_i) is the i-th column of the mapped train data matrix. If the mean of the train data is not zero, the wavelet kernel function must be centered by (9).
3. Compute the mapping matrix using the eigenvectors of the wavelet kernel matrix.
4. Transfer the test data and place them in (6).
5. Calculate \hat{f} for each data set using (14) and then identify and classify them by (6) and (7).
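The wavelet kernel of (18)–(19) is straightforward to evaluate directly. The following is a minimal sketch; the dilation a and power P here are hypothetical settings, not the values used in the experiments:

```python
import numpy as np

def wavelet_kernel(x, xp, a=1.0, P=1):
    # eq (19): prod_i [cos(1.75 (x_i - x'_i)/a) * exp(-(x_i - x'_i)^2 / (2 a^2))]^P
    d = (x - xp) / a
    return np.prod((np.cos(1.75 * d) * np.exp(-d**2 / 2)) ** P)

def wavelet_kernel_matrix(X, a=1.0, P=1):
    # Gram matrix over the training set, as in step 1 of the WKPCR algorithm
    N = X.shape[0]
    K = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            K[i, j] = wavelet_kernel(X[i], X[j], a, P)
    return K

x = np.array([0.5, -0.2])
print(wavelet_kernel(x, x))  # prints 1.0, since d = 0 for identical vectors
```

Substituting this matrix for the RBF Gram matrix in the KPCR sketch gives the WKPCR variant described above.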
4. Simulation Results
This section involves the estimation of a function which switches among four unknown nonlinear systems. Consider the function that arbitrarily switches among four nonlinear behaviors as in (20). The estimation of the system is given in Figure 1.
f(x) = \begin{cases} \sin(3x) + 2, & \lambda = 1 \\ x^2, & \lambda = 2 \\ \cos(2x) - 3, & \lambda = 3 \\ x^3 + 1, & \lambda = 4 \end{cases}    (20)
The results of the KPCR [12, 13] and the WKPCR for different values of their parameters are expressed in Tables 1 and 2.
Figure 1. Estimation of a Switched Nonlinear Function from 2000 Noisy Data Points
A training set of N = 2000 points is generated by (20) with additive zero-mean Gaussian noise (standard deviation \sigma \in [0, 0.7]) for uniformly distributed random x \in [-3, 3] and uniformly distributed random \lambda_i \in \{1, 2, 3, 4\}. The numbers of train and test data are 100 and 300, respectively. This system is identified by the KPCR method and its results are given in Table 1.
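The data-generation protocol above can be sketched as follows. The four submodels used here are illustrative placeholders standing in for the branches of (20), while the input range, mode distribution and noise model follow the text:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative stand-ins for the four nonlinear submodels switched by (20)
submodels = [
    lambda x: np.sin(3 * x) + 2,
    lambda x: x**2,
    lambda x: np.cos(2 * x) - 3,
    lambda x: x**3 + 1,
]

def generate_dataset(N=2000, sigma=0.5):
    x = rng.uniform(-3, 3, size=N)        # uniformly distributed inputs in [-3, 3]
    modes = rng.integers(1, 5, size=N)    # uniformly distributed modes in {1, 2, 3, 4}
    y = np.array([submodels[m - 1](xi) for m, xi in zip(modes, x)])
    y += rng.normal(0, sigma, size=N)     # additive zero-mean Gaussian noise
    return x, y, modes

x, y, modes = generate_dataset()
# split into 100 training and 300 test points, as in the experiment
x_train, y_train, m_train = x[:100], y[:100], modes[:100]
x_test, y_test, m_test = x[100:400], y[100:400], modes[100:400]
```

The mode labels serve as ground truth when computing the classification errors reported in Tables 1 and 2.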
The data classification error obtained from the multi-class support vector machine classification method and the confusion matrix are shown in Table 1. The confusion matrix shows in which modes the wrongly classified data are placed; the share of incorrectly classified data becomes high as the similarity of the shape and type of the modes increases. This case occurs between modes 2 and 3.
Table 1. Result of KPCR Method on Switched Nonlinear System
(Columns: RBF kernel parameter, test classification error (%), and the corresponding 4x4 confusion matrix.)
[The numeric entries of Table 1 could not be recovered from the source.]
Table 2. Result of WKPCR Method on Switched Nonlinear System
(Columns: wavelet kernel parameter, test classification error (%), and the corresponding 4x4 confusion matrix.)
[The numeric entries of Table 2 could not be recovered from the source.]
The data of Table 1 show that the data classification error decreases with the increase of \sigma. This error is reduced up to a specified value of \sigma; after that, the error increases. In fact, the system error has a minimum point with respect to the RBF parameter.
In the WKPCR method, the numbers of train and test data are 50 and 150, and the values of c and P are 1000 and 1, respectively. The data classification error and confusion matrix results are shown in Table 2. The classification error for small values of the wavelet parameter a is high, and the classification error decreases considerably as a increases. This method has a minimum point of the classification error in the wavelet parameter. If the value of P is not fixed, then for a fixed value of the wavelet kernel parameter such as a = 10^3, the error changes for different values of P are shown in Figure 2.
Figure 2. Error Changes for Different Values of P
The classification error for the two methods is as low as 15%, as shown in Tables 1 and 2. When the wavelet is used, the classification error is low in comparison with the case where the wavelet isn't applied. Figure 3 shows the classification error under the same conditions for the two methods.
Figure 3. Comparison of KPCR [12, 13] and WKPCR Classification Error under the Same Conditions
5. Conclusion
In this paper, a new wavelet-based method for the identification and submodel estimation of nonlinear hybrid systems is proposed. The selected wavelet kernel function is multidimensional. This method can approximate a nonlinear hybrid system and can be implemented on hybrid systems which switch among unknown modes. Estimating the number of submodels and data classification for linear and nonlinear hybrid systems are important issues presented in this paper. Dependence among the kernel function training data causes a reduction of learning speed; the proposed method eliminates this dependency and improves the learning speed. Further investigation will focus on optimizing and selecting the regularization parameter c, choosing an appropriate kernel function for subspace learning, and using nonlinear support vector classification for mixed data.
References
[1] Paoletti S, Juloski A, Ferrari-Trecate G, Vidal R. Identification of hybrid systems: A tutorial. European Journal of Control. 2007; 13(2-3): 242-262.
[2] Bemporad A, Roll J, Ljung L. Identification of hybrid systems via mixed-integer programming. Proceedings of the 40th IEEE Conference on Decision and Control. 2001; 1: 786-792.
[3] Roll J, Bemporad A, Ljung L. Identification of piecewise affine systems via mixed-integer programming. Automatica. 2004; 40(1): 37-50.
[4] Ferrari-Trecate G, Muselli M, Liberati D, Morari M. A clustering technique for the identification of piecewise affine systems. Automatica. 2003; 39(2): 205-217.
[5] Juloski AL, Weiland S, Heemels WPMH. A Bayesian approach to identification of hybrid systems. IEEE Transactions on Automatic Control. 2005; 50(10): 1520-1533.
[6] Vidal R, Soatto S, Ma Y, Sastry S. An algebraic geometric approach to the identification of a class of linear hybrid systems. 42nd IEEE Conference on Decision and Control Proceedings. Berkeley. 2003; 1: 167-172.
[7] Bemporad A, Garulli A, Paoletti S, Vicino A. A bounded-error approach to piecewise affine system identification. IEEE Transactions on Automatic Control. 2005; 50(10): 1567-1580.
[8] Ohlsson H, Ljung L. Identification of piecewise affine systems using sum-of-norms regularization. 18th IFAC World Congress. Milano. 2011; 6640-6645.
[9] Lauer F, Bloch G. A new hybrid system identification algorithm with automatic tuning. 17th IFAC World Congress. Seoul. 2008; 10207-10212.
[10] Lauer F, Vidal R, Bloch G. A product-of-errors framework for linear hybrid system identification. Proceedings of the 15th IFAC Symposium on System Identification. Saint-Malo. 2009.
[11] Lauer F, Bloch G. Switched and piecewise nonlinear hybrid system identification. In Hybrid Systems: Computation and Control. Springer Berlin Heidelberg. 2008; 4981: 330-343.
[12] Lauer F, Bloch G, Vidal R. Nonlinear hybrid system identification with kernel models. 49th IEEE Conference on Decision and Control (CDC). Atlanta. 2010; 696-701.
[13] Luong Le V, Bloch G, Lauer F. Reduced-size kernel models for nonlinear hybrid system identification. IEEE Transactions on Neural Networks. 2011; 22(12): 2398-2405.
[14] Lauer F, Le VL, Bloch G. Learning smooth models of nonsmooth functions via convex optimization. IEEE International Workshop on Machine Learning for Signal Processing (MLSP). Santander. 2012; 1-6.
[15] Le VL, Lauer F, Bako L, Bloch G. Learning nonlinear hybrid systems: from sparse optimization to support vector regression. Proceedings of the 16th International Conference on Hybrid Systems: Computation and Control (ACM). Philadelphia. 2013; 33-42.
[16] Vapnik Vladimir N. An overview of statistical learning theory. IEEE Transactions on Neural Networks. 1999; 10(5): 988-999.
[17] Meshgini S. Automatic Face Recognition using Support Vector Machines. Ph.D. Thesis. University of Tabriz. 2013.
[18] Abe S. Support vector machines for pattern classification. Springer. 2010.
[19] Burges CJ. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery. 1998; 2: 121-167.
[20] Scholkopf B, Smola A, Muller KR. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation. 1998; 10(5): 1299-1319.
[21] Rosipal R, Girolami M, Trejo L, Cichocki A. Kernel PCA for feature extraction and de-noising in non-linear regression. Neural Computing & Applications. 2001; 10(3): 231-243.
[22] Zhang QH, Benveniste A. Wavelet networks. IEEE Transactions on Neural Networks. 1992; 3: 889-898.
[23] Zhang L, Zhou W, Jiao W. Wavelet support vector machine. IEEE Transactions on Systems, Man and Cybernetics. 2004; 34(1).
[24] Daubechies I. Orthonormal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics. 1988; 41(7): 909-996.