TELKOMNIKA Indonesian Journal of Electrical Engineering
Vol. 12, No. 4, April 2014, pp. 2762 ~ 2768
DOI: http://dx.doi.org/10.11591/telkomnika.v12i4.4204
Received August 9, 2013; Revised October 31, 2013; Accepted November 16, 2013
Optimal Support Vector Regression Algorithms for
Multifunctional Sensor Signal Reconstruction
Xin Liu1, Dan Liu*2, Yan Zhang3, Qisong Wang4, Shen Zhang5, Hua Wang6
1,5,6 School of Transportation Science and Engineering, Harbin Institute of Technology, Harbin, China
2,3,4 School of Electrical Engineering and Automation, Harbin Institute of Technology, Harbin, China
*Corresponding author, e-mail: xinliu@hit.edu.cn1, liudan@hit.edu.cn2, zyhit@hit.edu.cn3, wangqisong@hit.edu.cn4, shenzhang@hit.edu.cn5, wanghua@hit.edu.cn6
Abstract
Empirical risk minimization methods are often used to estimate the multifunctional sensor regression function in signal reconstruction. A small sample data set, however, leads to poor generalization capability and overfitting. The support vector machine (SVM) is a machine learning method based on structural risk minimization, which improves generalization capability and restrains overfitting. In this paper, an optimal ν-Support Vector Regression (ν-SVR) algorithm is proposed for multifunctional sensor reconstruction; it combines ν-SVR with particle swarm optimization (PSO), achieving accurate estimation of both the hyperparameters and the reconstruction function. The results of simulation and theoretical analysis indicate that the proposed algorithm is more accurate and reliable for signal reconstruction.

Keywords: ν-SVR, PSO, hyperparameters, multifunctional sensor, signal reconstruction
Copyright © 2014 Institute of Advanced Engineering and Science. All rights reserved.
1. Introduction
In recent decades, multifunctional sensors have received increasing attention due to the development of microelectronics, micromachining and other related technologies. Such sensors can simultaneously detect several different electric or non-electric signals and greatly reduce the size and power consumption of the measurement system. They have been applied in environmental perception and industrial measurement [1, 2], and naturally also in aeronautics, astronautics and micro-mechanical technology. In general, the multifunctional sensing technique can be studied from two related aspects [3]: the physical structure design of the multifunctional sensor for sensing multiple variables, usually by exploiting the cross-sensitivity of sensitive components, and the development of corresponding algorithms for reconstructing the measured variables. The schematic structure of the multifunctional sensing technique is shown in Figure 1, where x_1, …, x_n are the physical quantities under measurement, y_1, …, y_n are the sensor output signals, and x̂_1, …, x̂_n are the estimates of the measured quantities obtained through the signal reconstruction algorithm; this process is also called multifunctional sensor signal reconstruction.
Figure 1. Schematic Structure of Multifunctional Sensing Technique
By now, the study of reconstruction algorithms has become an active topic, and many signal reconstruction algorithms have been proposed [4-6]. These methods are based on the empirical risk minimization (ERM) principle, which ensures that the actual risk is close to the empirical risk only when the sample data set is large. Signal reconstruction is usually a high-dimensional signal processing problem; however, the sample data set obtained from the experiment is small compared to the whole measurement range of the multifunctional sensor. In this case, minimizing the empirical risk cannot guarantee a small actual risk, and thus leads to overfitting and poor generalization capabilities [7-8]. Support vector machines (SVM) and their modified algorithms provide powerful and efficient tools that are capable of dealing with the small-sample-size problem and give theoretical bounds on the generalization error by replacing the ERM principle with the structural risk minimization (SRM) principle, which, motivated by statistical learning theory, defines a trade-off between the quality of the approximation of the given data set and the complexity of the approximating function. In recent years, SVM and its modified algorithms have been widely used in many research fields and have achieved satisfactory results [9-11].
In this paper, we propose to use a new class of SVR algorithms [12], called ν-SVR, for sensor signal reconstruction. This algorithm automatically computes the width of the so-called ε-insensitive tube, which must be specified a priori in standard ε-SVM methods, and thus adjusts the generalization accuracy level to the sample data set. Moreover, the parameter ν is asymptotically related to the noise model; therefore, to obtain better generalization accuracy, the asymptotically optimal value of ν can be chosen in accordance with the noise model of the data set. This makes ν-SVR more suitable for sensor signal reconstruction under the real-world condition that the data set is often contaminated by noise.
The main problem in ν-SVR or ε-SVM methods, however, is that of tuning the parameters, because the generalization ability of these algorithms depends on the choice of the kernel parameter, the regularization parameter C and the parameter ε (or ν). We therefore present a simple and efficient PSO procedure aimed at determining the optimal hyperparameters and the sensor reconstruction function. In Section 2, we briefly review the ε-SVM, ν-SVR and PSO algorithms, and then describe the reconstruction algorithm based on the optimal ν-SVR procedure. In Section 3, we build a simulation model of a multifunctional sensor and analyze the experimental results obtained by the proposed approach.
2. Theory and Algorithm
2.1. ε-SVR and ν-SVR
SVM was originally developed for the binary classification problem, and then V. Vapnik generalized the results obtained for the pattern recognition problem to the regression problem by introducing a novel loss function, the ε-insensitive loss function, which is defined as follows:
$$L_\varepsilon(y, f(\mathbf{x})) = \begin{cases} 0 & \text{if } |y - f(\mathbf{x})| \le \varepsilon \\ |y - f(\mathbf{x})| - \varepsilon & \text{otherwise} \end{cases} \qquad (1)$$
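As a concrete illustration of Equation (1), the loss can be written in a few lines of NumPy. This is an illustrative sketch, not part of the original derivation:

```python
import numpy as np

def eps_insensitive_loss(y, f, eps):
    """Vapnik's epsilon-insensitive loss of Eq. (1): zero inside the
    tube of half-width eps, linear (|y - f| - eps) outside it."""
    return np.maximum(np.abs(np.asarray(y) - np.asarray(f)) - eps, 0.0)
```

Residuals smaller than ε incur no penalty at all, which is what lets the fitted function ignore noise below that scale.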
For a given independent and identically distributed (i.i.d.) data set $\{(\mathbf{x}_i, y_i)\}_{i=1}^{l}$ with input data $\mathbf{x}_i \in \mathbb{R}^n$ and output data $y_i \in \mathbb{R}$, the ε-SVM is intended to estimate the following function:
$$f(\mathbf{x}) = \mathbf{w}^T \mathbf{x} + b, \qquad \mathbf{w} \in \mathbb{R}^n,\ b \in \mathbb{R} \qquad (2)$$
by minimizing the regularized risk functional:

$$\frac{1}{2}\|\mathbf{w}\|^2 + C\,\frac{1}{l}\sum_{i=1}^{l} |y_i - f(\mathbf{x}_i)|_\varepsilon \qquad (3)$$
where C is the regularization parameter that controls the trade-off between minimizing the model complexity (the former term) and the empirical risk (the latter term). Minimizing Equation (3) is equivalent to the following constrained optimization problem, obtained by introducing two sets of nonnegative slack variables $\{\xi_i\}_{i=1}^{l}$ and $\{\xi_i^*\}_{i=1}^{l}$ [13]:
$$\min_{\mathbf{w},\,\xi,\,\xi^*}\ J(\mathbf{w}, \xi, \xi^*) = \frac{1}{2}\|\mathbf{w}\|^2 + C\,\frac{1}{l}\sum_{i=1}^{l}(\xi_i + \xi_i^*) \qquad (4)$$
subject to:

$$\begin{aligned} (\mathbf{w}^T \mathbf{x}_i + b) - y_i &\le \varepsilon + \xi_i \\ y_i - (\mathbf{w}^T \mathbf{x}_i + b) &\le \varepsilon + \xi_i^* \\ \xi_i,\ \xi_i^* &\ge 0 \end{aligned} \qquad (5)$$
The slack variables measure the deviation of data outside the ε-insensitive tube, and they are penalized in Equation (4). Although the parameter ε affects the desired accuracy of the approximation and the sparseness of the solution, it is difficult to find its optimal value because a priori information about the data set is lacking. Therefore, it is advisable to compute ε automatically from the data set, which is the idea of the ν-SVR.
In the ν-SVR formulation, the value of ε is also a variable, which is traded off against the model complexity and the slack variables via a constant $\nu \in (0, 1]$:

$$\min_{\mathbf{w},\,\xi,\,\xi^*,\,\varepsilon}\ Q(\mathbf{w}, \xi, \xi^*, \varepsilon) = \frac{1}{2}\|\mathbf{w}\|^2 + C\left(\nu\varepsilon + \frac{1}{l}\sum_{i=1}^{l}(\xi_i + \xi_i^*)\right) \qquad (6)$$
Subject to the constraints (5), the constrained optimization problem of Equation (6) results in a convex optimization problem with a global minimum, which can be solved by Lagrange multiplier techniques and the dual theorem, similarly to Vapnik's ε-SVM. The dual form of the ν-SVR optimization problem can therefore be stated as follows:
$$\max_{\alpha,\,\alpha^*}\ W(\alpha, \alpha^*) = \sum_{i=1}^{l} (\alpha_i^* - \alpha_i)\, y_i - \frac{1}{2}\sum_{i,j=1}^{l} (\alpha_i^* - \alpha_i)(\alpha_j^* - \alpha_j)\, \mathbf{x}_i^T \mathbf{x}_j \qquad (7)$$
subject to:

$$\sum_{i=1}^{l} (\alpha_i - \alpha_i^*) = 0, \qquad 0 \le \alpha_i,\ \alpha_i^* \le \frac{C}{l}, \qquad \sum_{i=1}^{l} (\alpha_i + \alpha_i^*) \le C\nu \qquad (8)$$
And the approximating function can be expressed as:

$$f(\mathbf{x}) = \sum_{i=1}^{l} (\alpha_i^* - \alpha_i)\, \mathbf{x}_i^T \mathbf{x} + b \qquad (9)$$
Usually, only some of the coefficients $(\alpha_i^* - \alpha_i)$ are nonzero, and the corresponding data vectors are called support vectors (SVs). Furthermore, to make the ν-SVR algorithm nonlinear, the input data vector $\mathbf{x}_i$ can be mapped into a high-dimensional feature space through some nonlinear mapping $\Phi(\cdot)$; the optimization problem (7) is then solved in the feature space, which means the inner product $\mathbf{x}_i^T \mathbf{x}_j$ in (7) is replaced by the inner product of the input vectors induced in the feature space, $\Phi(\mathbf{x}_i)^T \Phi(\mathbf{x}_j)$. According to Mercer's theorem, these expensive calculations of inner products in the high-dimensional feature space can be significantly reduced, and no explicit form of the nonlinear mapping is needed, by choosing a suitable kernel function such that:
$$k(\mathbf{x}_i, \mathbf{x}_j) = \Phi(\mathbf{x}_i)^T \Phi(\mathbf{x}_j) \qquad (10)$$
Then we can get the nonlinear form of Equation (9):

$$f(\mathbf{x}) = \sum_{i=1}^{l} (\alpha_i^* - \alpha_i)\, k(\mathbf{x}_i, \mathbf{x}) + b \qquad (11)$$
Typical choices of kernel function include polynomial kernels, sigmoid kernels and Gaussian kernels.
For the ν-SVR algorithm, the theoretical significance of the parameter ν is that ν is an upper bound on the fraction of errors and a lower bound on the fraction of SVs, where errors refer to the training data vectors lying outside the tube and fractions refer to the relative numbers divided by the total number of training data points; thus ν can control the number of SVs and the number of data points lying outside the tube. Moreover, it has been theoretically proven that the parameter ν can be chosen asymptotically optimally for a given class of noise models; for example, ν can be set to 1 or 0.54 for the Laplacian or Gaussian noise model, respectively [14].
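These properties of ν can be observed directly with an off-the-shelf implementation. The sketch below uses scikit-learn's NuSVR on synthetic data (the library, the data and the parameter values are assumptions of this illustration, not part of the paper) to check that the fraction of support vectors is bounded below by ν:

```python
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
X = rng.uniform(0.1, 0.9, size=(144, 2))            # 144 two-dimensional samples
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1]           # smooth synthetic target
y += rng.normal(0.0, 0.05, size=y.shape)            # additive Gaussian noise

# nu upper-bounds the fraction of errors and lower-bounds the fraction of SVs
model = NuSVR(nu=0.5, C=10.0, kernel="rbf", gamma=1.0).fit(X, y)
sv_fraction = len(model.support_) / len(X)          # should be at least about nu
```

Raising ν widens the set of support vectors and tightens the tube; lowering it gives a sparser model that tolerates more points outside the tube.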
2.2. PSO for Hyperparameter Selection
It can be seen from the above that the generalization performance of ν-SVR depends on a good setting of C, ν and the kernel parameter. However, a principled approach to the selection of hyperparameters is still an open and complicated research area, and they are usually treated as user-defined inputs based on a priori knowledge or expertise [15]. Actually, optimal parameter selection can be regarded as an optimal search process: the estimation accuracy is computed as a function of the hyperparameters, so the optimal hyperparameters can be found automatically by optimization techniques.
PSO is an evolutionary computation technique motivated by the social behaviors of flocking birds or swarming insects. It is a population-based stochastic optimization technique that can be used for both discrete and continuous optimization problems, and the cooperation and information sharing of an entire flock underlie the intelligence and efficiency of the algorithm [16]. Each particle is a moving point in the solution space, and the particle's traversal of the search space is influenced by the best solution that it has found, pbest, and the best solution found by the swarm of particles, gbest, respectively. The common PSO algorithm consists of the following velocity and position equations:
$$v_i(k+1) = w\,v_i(k) + c_1\,\mathrm{rand}_1(k)\,\big(pbest_i(k) - x_i(k)\big) + c_2\,\mathrm{rand}_2(k)\,\big(gbest(k) - x_i(k)\big) \qquad (12)$$

$$x_i(k+1) = x_i(k) + v_i(k+1) \qquad (13)$$
where x is the position information that reflects the values of the hyperparameters, v is the velocity information, which is dynamically adjusted according to the flying experience of both the particle and the swarm, and w is the inertia weight that controls the trade-off between the global exploration and local exploitation abilities of the swarm. c_1 and c_2 are acceleration constants, and rand_1, rand_2 are random numbers in (0, 1). In this paper, PSO is applied to the ν-SVR algorithm to estimate the optimal values of the hyperparameters.
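The update rules (12)-(13) are straightforward to implement. The following minimal sketch (the function name, default coefficients and bounds handling are illustrative choices, not the paper's exact settings) minimizes a fitness function over a bounded search space:

```python
import numpy as np

def pso(fitness, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO following Eqs. (12)-(13): the velocity blends inertia,
    attraction to each particle's pbest and attraction to the swarm's gbest."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # initial positions
    v = np.zeros_like(x)                               # initial velocities
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (12)
        x = np.clip(x + v, lo, hi)                                  # Eq. (13)
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(pbest_f.min())
```

In the paper's setting, the fitness would be the reconstruction error of a ν-SVR model trained with the hyperparameters encoded in each particle's position.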
2.3. Algorithm
For a multifunctional sensor, any output signal should represent a unique input signal, which can be called one-to-one; otherwise it is impossible to distinguish one input signal from another. Thus, the inverse mapping of the multifunctional transfer function is unique based on the inverse mapping theorem, and the system equation, according to Figure 1, can be written as:
$$\begin{aligned} x_1(t) &= g_1(y_1(t), \ldots, y_n(t)) \\ &\ \vdots \\ x_n(t) &= g_n(y_1(t), \ldots, y_n(t)) \end{aligned} \qquad (14)$$
Then the multifunctional sensor reconstructed signal can be obtained through the estimation of Equation (14):

$$\begin{aligned} \hat{x}_1(t) &= f_1^{svr}(y_1(t), \ldots, y_n(t), \theta_1) \\ &\ \vdots \\ \hat{x}_n(t) &= f_n^{svr}(y_1(t), \ldots, y_n(t), \theta_n) \end{aligned} \qquad (15)$$
where $f_i^{svr}(y_1(t), \ldots, y_n(t), \theta_i)$ is the ν-SVR regression estimate of the measured quantity $x_i$, and $\theta_i$ is the set of optimal hyperparameters calculated by PSO.
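Putting Equation (15) into code, one ν-SVR model is fitted per measured quantity, mapping the sensor outputs back to each input. The sketch below uses scikit-learn's NuSVR with fixed hyperparameters on an invented invertible two-channel mapping; both the mapping and the parameter values are illustrative assumptions, where the paper instead tunes each θ_i with PSO:

```python
import numpy as np
from sklearn.svm import NuSVR

# Hypothetical two-input sensor: outputs (u, v) produced from inputs (x, y).
rng = np.random.default_rng(1)
xy = rng.uniform(0.1, 0.9, size=(144, 2))
uv = np.column_stack([xy[:, 0] + 0.5 * xy[:, 1] ** 2,    # u = x + 0.5 y^2
                      xy[:, 1] + 0.5 * xy[:, 0] ** 2])   # v = y + 0.5 x^2

# One nu-SVR per measured quantity, as in Eq. (15); hyperparameters are
# fixed here for brevity where the paper determines them with PSO.
models = [NuSVR(nu=0.5, C=100.0, kernel="rbf", gamma=2.0).fit(uv, xy[:, i])
          for i in range(2)]
xy_hat = np.column_stack([m.predict(uv) for m in models])
rmse = float(np.sqrt(np.mean((xy_hat - xy) ** 2)))
```

Each regressor plays the role of one $f_i^{svr}$, so the reconstruction of an n-input sensor needs n such models.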
3. Results and Analysis
To verify the feasibility of the proposed method, a physical model of the two-input/two-output multifunctional sensor used in the experiment has been built and is shown in Figure 2, where x and y are the input signals, which represent the ratios of the slide resistors' lower-side resistances to the entire resistances, and u and v (voltages) are the output signals. To test the ability of the algorithm to match the noise, two independent and identically distributed (iid) noise terms, ξ_1 and ξ_2, are added to the input signals. According to KCL, the system transfer function can be described as follows:
$$u = \frac{5\,[2\tilde{x} + 5\tilde{y}(2 - \tilde{x} - \tilde{y})]}{35 - [\tilde{y}(1 - \tilde{x}) + \tilde{x}(1 - \tilde{y})]}, \qquad v = \frac{5\,[2\tilde{y} + 5\tilde{x}(2 - \tilde{x} - \tilde{y})]}{35 - [\tilde{y}(1 - \tilde{x}) + \tilde{x}(1 - \tilde{y})]} \qquad (16)$$

where $\tilde{x} = x + \xi_1$ and $\tilde{y} = y + \xi_2$.
As a training set, we use 144 samples $(x_i, y_i, u_i, v_i)$ generated by the above function. Here, the training input set $(x_i, y_i)$ is a Cartesian product of two input signal sets, which are both composed of 12 equally spaced data points over the interval (0.1, 0.9), and ξ_1, ξ_2 are iid additive noise. The risk, i.e. the generalization error, is computed with respect to Equation (16) without noise; thus the test set consists of 196 samples $(x_i, y_i, u_i, v_i)$ generated from the noiseless Equation (16), where the test input set is also a Cartesian product, composed of 14 equally spaced data points in the interval (0.1, 0.9).
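The sampling scheme above can be reproduced as follows; the noise standard deviation of 0.1 is one of the levels used later and is treated here as an example value:

```python
import numpy as np

# Cartesian product of 12 equally spaced points on (0.1, 0.9): 144 training pairs.
pts = np.linspace(0.1, 0.9, 12)
gx, gy = np.meshgrid(pts, pts)
train_xy = np.column_stack([gx.ravel(), gy.ravel()])

# iid additive Gaussian noise on the training inputs (example std = 0.1).
rng = np.random.default_rng(0)
noisy_xy = train_xy + rng.normal(0.0, 0.1, size=train_xy.shape)

# Cartesian product of 14 equally spaced points: 196 noiseless test pairs.
test_pts = np.linspace(0.1, 0.9, 14)
tx, ty = np.meshgrid(test_pts, test_pts)
test_xy = np.column_stack([tx.ravel(), ty.ravel()])
```

The noiseless test grid is what the generalization error is scored against, while training always sees the noisy inputs.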
Figure 2. Circuit Model of the Two-Input/Output Sensor

In this experiment, we add Gaussian noise with zero mean and a given standard deviation to the data, which is the common assumption. The aim is to observe whether the proposed method, with the theoretically predicted value of ν, can lead to good generalization performance in practice for different noise levels, and whether the noise level has any influence on the PSO-based hyperparameter selection procedure. Therefore, we first compute the optimal hyperparameters for each noise level (different standard deviations), where the fitness is the root mean squared relative error (RMSRE) between the estimates and the true values of the noiseless test input signals, and then we calculate the relative error of the estimates with the optimal parameters for each noise level.
Figure 3 and Figure 4 illustrate the performance of the PSO approach on the test set with different noise settings in finding the global optimum, by plotting the best fitness versus the number of iterations. It is clear from the figures that, for all noise levels, the value of RMSRE decreases very quickly to high-quality solutions in the early iterations (about 100 iterations) and then the curves become very flat, which implies that PSO converges to the global optimum very quickly. It can also be seen that the best RMSRE of the signals increases with the noise level; however, the largest value is still less than 1%. This demonstrates that the proposed PSO procedure can effectively prevent premature convergence and significantly enhance the convergence rate and accuracy in the evolutionary process, independently of the noise levels.
Figure 3. RMSRE of x versus Iteration for Different Noise Levels
Figure 4. RMSRE of y versus Iteration for Different Noise Levels
Figure 5. Relative Error of x for Different Noise Levels
Figure 6. Relative Error of y for Different Noise Levels
To evaluate the reconstruction performance of the ν-SVR algorithm with the optimal parameter settings, box plots of the relative error of x and y for different noise levels are shown in Figure 5 and Figure 6. Each box plot is based on the results of the test data set with varying added noise: from left to right, noise standard deviations of 0.1, 0.2, 0.3, 0.4 and 0.5. As shown in the figures, the relative errors of both reconstructed signals x and y are approximately zero-mean, or very close to zero-mean, for all noise levels. This is confirmed by the mean values, given at the bottom of each plot, which implies that the proposed algorithm is unbiased. Note from the figures that the range and standard deviation of the relative error increase with the growing noise level; however, most of the relative error in each box plot stays within a reasonable range, e.g. all of the whisker ranges lie within −0.5% to 0.5%. This proves that the proposed method is stable and accurate with respect to the added noise in the data set.
4. Conclusion
This paper presents an optimal SVR algorithm that profits from the combination of ν-SVR and PSO for multifunctional sensor signal reconstruction. The ν-SVR method is able to cope with high-dimensional signal processing in small-sample-size situations. Moreover, higher generalization accuracy can be achieved, since the parameter ν is chosen in accordance with the noise model of the data set. The PSO-based parameter optimization procedure is simple, efficient, and easy to implement. The experimental results suggest that the proposed method is suitable for the multivariable condition and enhances the generalization performance and stability of signal reconstruction under different noise levels. Hence, the proposed approach can be immediately used by practitioners interested in applying SVM to various application domains.
Acknowledgements
This work was sponsored by the Natural Science Foundation of Heilongjiang Province, China (No. QC2011C097), the National Natural Science Foundation of China (No. 61201017, No. 51138003 and No. 61301012) and the Fundamental Research Funds for the Central Universities (Grant No. HIT.NSRIF.2014082 and 201146).
References
[1] Yuji J, Shida K. New Multifunctional Tactile Sensing Technique by Selective Data Processing. IEEE Transactions on Instrumentation and Measurement. 2000; 49(5): 1091-1094.
[2] Sun JW, Shida K. Multilayer Sensing and Aggregation Approach to Environmental Perception with One Multifunctional Sensor. IEEE Sensors Journal. 2002; 2(2): 62-72.
[3] Wei G, Shida K. Estimation of Concentrations of Ternary Solution with NaCl and Sucrose based on Multifunctional Sensing Technique. IEEE Transactions on Instrumentation and Measurement. 2006; 55(2): 675-681.
[4] Sun JW, Liu X, Sun SH. TLS Algorithm-based Study on Multifunctional Sensor Data Reconstruction. Acta Electronica Sinica. 2004; 32(3): 391-394.
[5] Wang X, Zhang XP. Optimal Look-up Table-based Data Hiding. IET Signal Processing. 2011; 5(2): 171-179.
[6] Liu X, Sun JW, Liu D. Nonlinear Multifunctional Sensor Signal Reconstruction based on Total Least Squares. Journal of Physics: Conference Series. 2006; 48(1): 281-286.
[7] Bot RI, Lorenz N. Optimization Problems in Statistical Learning: Duality and Optimality Conditions. European Journal of Operational Research. 2011; 213(2): 395-404.
[8] Vapnik V. The Nature of Statistical Learning Theory. 2nd edition. New York: Springer. 2000: 147-150.
[9] Yu Y, L Zhou. Acoustic Emission Signal Classification Based On Support Vector Machine. TELKOMNIKA Indonesian Journal of Electrical Engineering. 2012; 10(5): 1027-1032.
[10] Charrada A. Support Vector Machines Regression for MIMO-OFDM Channel Estimation. IAES International Journal of Artificial Intelligence. 2012; 1(4): 214-224.
[11] Bo Y, L Liang, W Xue. Color Calibration Model in Imaging Device Control using Support Vector Regression. TELKOMNIKA Indonesian Journal of Electrical Engineering. 2013; 11(10): 5530-5538.
[12] Scholkopf B, Smola AJ, Williamson RC, Bartlett PL. New Support Vector Algorithms. Neural Computation. 2000; 12(5): 1207-1245.
[13] Smola A, Scholkopf B. A Tutorial on Support Vector Regression. Statistics and Computing. 2004; 14(3): 199-222.
[14] Chalimourda A, Scholkopf B, Smola A. Experimentally Optimal ν in Support Vector Regression for Different Noise Models and Parameter Settings. Neural Networks. 2004; 17(1): 127-141.
[15] Cherkassky V, Ma Y. Practical Selection of SVM Parameters and Noise Estimation for SVM Regression. Neural Networks. 2004; 17(1): 113-126.
[16] Parpinelli RS, Teodoro FR, Lopes HS. A Comparison of Swarm Intelligence Algorithms for Structural Engineering Optimization. International Journal for Numerical Methods in Engineering. 2012; 91(6): 666-684.