TELKOMNIKA, Vol. 13, No. 2, June 2015, pp. 578 ~ 586
ISSN: 1693-6930, accredited A by DIKTI, Decree No: 58/DIKTI/Kep/2013
DOI: 10.12928/TELKOMNIKA.v13i2.1473
Received January 26, 2015; Revised March 29, 2015; Accepted April 17, 2015
Test Generation Algorithm Based on SVM with Compressing Sample Space Methods
Ting Long*1, Jiang Shiqi1, Hang Luo2
1 School of Control Engineering, Chengdu University of Information Technology, Chengdu, 610225, China
2 School of Manufacturing Science and Engineering, University of Sichuan, Chengdu, 610065, China
*Corresponding author, e-mail: leamonlong@hotmail.com1, luohuang2002@163.com2
Abstract
A test generation algorithm based on the SVM (support vector machine) generates test signals derived from the sample space of the output responses of the analog DUT. When the responses of the normal circuits are similar to those of the faulty circuits (i.e., the latter have only small parametric faults), the sample space is mixed and traditional algorithms have difficulty distinguishing the two groups. However, the SVM provides an effective result. The sample space contains redundant data, because successive impulse-response samples may be quite close to one another, and this redundancy wastes computational effort. We therefore propose three different methods to compress the sample space: the equidistant compressional method, the k-nearest neighbors method, and the maximal difference method. Numerical experiments show that the maximal difference method preserves the precision of the test generation.
Keywords: Test Generation, SVM (Support Vector Machine), k-nearest Neighbors, Maximal Difference
1. Introduc
tion
A rece
nt tre
nd in anal
og
test strategi
es is
calle
d
test gene
rati
on. It can e
s
tabli
s
h
conve
n
ient
si
gnal
s to excit
e
the
inp
u
t of the device u
nder te
st (
CUT), ob
se
rve the re
sp
on
se
[1],
and de
cid
e
wheth
e
r the
CUT i
s
fau
l
ty based o
n
the re
spo
n
se [2]-[12].
The an
alog
test
generation i
s
different from
the
digital test generation
[13],[14].
Fault dete
c
tio
n
an
d
classifi
cation
in thi
s
pape
r a
r
e
ba
sed
on
a Su
p
port Ve
ctor M
a
chi
ne
(SVM), so th
at the re
spo
n
s
e ve
ctors of
norm
a
l
and
faulty circuits can
b
e
di
stingui
she
d
on
the
basi
s
of nonlinear
classification. SVM [15],[16] is
to classify sm
all
sampl
e
s based on
statisti
cal
learni
ng th
eo
ry. This meth
od h
a
s p
r
ove
n
ad
ept at
d
ealing
with
hi
ghly no
nlinea
r
cla
ssifi
catio
n
probl
em
s. The rule
-less resp
on
se dat
a sampl
ed from elect
r
oni
c system
s a
r
e an excell
ent
example.
Wh
en the b
and
width of the
DUT
is m
u
ch
smalle
r tha
n
the sa
mplin
g
frequ
en
cy of the
DAC/ADC, th
e sa
mple ve
ctors
of the im
pulse-re
s
pon
se b
e
come
q
u
ite large. Th
e re
sp
on
se d
a
ta
of the sampl
e
spa
c
e cont
ains redu
nda
nt data,
beca
u
se succe
s
si
ve impulse
-re
s
po
nse sam
p
les
may get quite
close. The
redun
dan
cy will waste
th
e needl
ess co
mputational l
oad. In this p
aper
we p
r
o
p
o
s
e
a maximal
differen
c
e
method to
compress the
sam
p
le
sp
ace,
and
re
duce
comp
utationa
l load.
2. Test generation algorithm
An analog DUT can be treated as a discrete-time digital system by placing it between a digital-to-analog converter (DAC) and an analog-to-digital converter (ADC) [12]. Many circuit instances, which are either normal or have given parametric faults, must be simulated for the test generation. Each instance is labeled as 'passed' or 'failed'. A passed instance means that the simulated parameters match their specifications; a failed instance means that they do not. A response vector is constructed for each circuit instance by sampling the analog output signal [12]. So we can sample the output response of a passed or failed instance to represent the circuit. The sample space can then be obtained from many circuit instances. This space will be used as the training set or testing set [17]-[19] for classification in the test generation. The process of the test is illustrated in Figure 1.
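For illustration, the following minimal sketch shows how such a labeled sample space can be assembled. It simulates instances of a first-order RC low-pass as a stand-in for the DUT; the component values, tolerance band, and labeling rule are assumptions for illustration, not the circuits of this paper.

```python
# Sketch (illustrative assumptions, not the paper's circuits): build a labeled
# impulse-response sample space from simulated circuit instances.
import numpy as np

rng = np.random.default_rng(0)
Fs = 1e6                      # assumed DAC/ADC sampling frequency
n_taps = 30                   # length of each sample vector (as in Section 3.1)
R_nom, C_nom = 1.5e3, 0.1e-6  # assumed nominal component values
tol = 0.05                    # assumed 5% tolerance band defines 'passed'

def impulse_response(R, C):
    """Sampled impulse response of an RC low-pass: h[n] = (1/RC)*exp(-n/(Fs*R*C))."""
    n = np.arange(n_taps)
    return (1.0 / (R * C)) * np.exp(-n / (Fs * R * C))

def make_instance(drift):
    """Simulate one instance; label 0 ('passed') if parameters stay in tolerance."""
    R = R_nom * (1 + rng.uniform(-drift, drift))
    C = C_nom * (1 + rng.uniform(-drift, drift))
    passed = abs(R - R_nom) <= tol * R_nom and abs(C - C_nom) <= tol * C_nom
    return impulse_response(R, C), 0 if passed else 1

# half nominal instances, half instances with small parametric faults
pairs = [make_instance(drift=tol if i % 2 == 0 else 4 * tol) for i in range(200)]
X = np.vstack([p[0] for p in pairs])   # the sample space
y = np.array([p[1] for p in pairs])    # 'passed' (0) / 'failed' (1) labels
```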
Figure 1. Process of the test algorithm based on SVM
The classification hyperplane determined for the output response space is the same as that determined for the impulse-response space [12]. Thus, we can also obtain the simulated parameter sets by sampling the impulse responses of the circuit instances. The simulated parameter sets can themselves be considered output vectors and used to build the sample space. Sv = (S[0], S[1], S[2], ···) is a sample vector for one impulse response, and many such vectors construct the sample space (Sv1, Sv2, Sv3, ···). However, the output response mainly comes from the range S = (s[0], s[1], ···, s[d−1]), where d = Fs/BW and BW is the bandwidth of the LTI. So the impulse-response sample space (S1, S2, ···, Si, ···) is constructed by S-vectors.
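A minimal sketch of this truncation step, under the assumption that the useful response of the band-limited DUT is concentrated in its first d = Fs/BW samples (the Fs and BW values in the comment are illustrative):

```python
# Sketch: keep only the first d = Fs/BW samples of every impulse-response vector.
import numpy as np

def truncate_to_bandwidth(X, Fs, BW):
    """Return the leading d = Fs/BW entries of every sample vector in X."""
    d = min(int(Fs // BW), X.shape[1])   # never exceed the recorded length
    return X[:, :d]

# e.g. with Fs = 1 MHz and an assumed DUT bandwidth of 50 kHz, d = 20:
# X_band = truncate_to_bandwidth(X, Fs=1e6, BW=50e3)
```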
In the analysis of some analog systems the responses of normal circuits are similar to those of circuits with small parametric faults, so the response vectors are mixed together. It is difficult to classify the sample space constructed by these response vectors with any existing test generation algorithm. An SVM can deal with this sample space by mapping it to a higher-dimensional feature space and separating the groups with a hyperplane. We can use the SVM for the test generation to execute the classification process, and obtain the test signals from the classification hyperplane.
The optimal hyperplane algorithm proposed by Vladimir Vapnik was a linear classifier. Reference [20] proposed a way to create nonlinear classifiers by applying kernel functions to maximum-margin hyperplanes. We obtain support vectors from the training set with the SVM algorithms [21], and the hyperplane can be built from these support vectors. S^T denotes the transpose of S. The test signals can then be calculated from the optimal hyperplane as the test sequence.
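The following sketch shows this step with scikit-learn's SVC used as a stand-in for the SVM training of [21] (an assumption, not the authors' exact implementation): it fits the classifier on the labeled sample space and reads back the support vectors and, for a linear kernel, the hyperplane coefficients from which a candidate test sequence can be derived.

```python
# Sketch (assumes scikit-learn; a stand-in for the SVM step of [21]).
import numpy as np
from sklearn.svm import SVC

clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)                            # X, y from the sample-space sketch above

support_vectors = clf.support_vectors_   # vectors that define the hyperplane
w, b = clf.coef_[0], clf.intercept_[0]   # hyperplane: w @ s + b = 0 (linear kernel)

# One simple way to turn the hyperplane into an excitation is to use the
# (suitably scaled) coefficient vector w as the candidate test sequence.
test_sequence = w / np.linalg.norm(w)
```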
3. Compression of the sample space
3.1. Equidistant compressional method
When the bandwidth BW of the DUT is much smaller than the sampling frequency Fs of the DAC/ADC, the sample vectors of the impulse response become quite large, considering d = Fs/BW. Then the sample space of the impulse response becomes quite large too. This sample space contains redundant data, because successive impulse-response samples may be quite close to one another, and this redundancy wastes computational effort. To reduce the computational load, a small number of impulse-response samples can be extracted from the vector H to build the new sample space. So the compression of the sample space means decreasing the length of every sample vector.
Reducing the length of a sample vector always decreases the cost of the calculation, but may imply a loss of information. So before compressing the sample space, we must ensure that the remaining information suffices for the classification. In this paper, we require that the efficiency of the test remains satisfactory, meaning that the parameters generated by the test algorithm are effective. The efficacy of the test depends on the precision of the classification.
One method to compress the sample vectors of the impulse response is the equidistant compressional method [12]. For example, (s[3], s[7], s[11], s[15], ···) would be a new sample vector after extracting samples from H = (s[0], s[1], ···, s[d−1]) with distance 4. This method is easy to execute, but it cannot ensure that the remaining information suffices for the classification when the length of every new sample vector is very small, because the s[i] extracted for the new sample vectors are not always the most effective for classification.
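A minimal sketch of this equidistant extraction (the step and offset reproduce the paper's example; the function name is ours):

```python
# Sketch: equidistant compression keeps every step-th sample of each vector;
# step=4 with offset=3 reproduces the (s[3], s[7], s[11], ...) example above.
import numpy as np

def equidistant_compress(X, step, offset=None):
    """Keep samples offset, offset+step, offset+2*step, ... of every vector in X."""
    if offset is None:
        offset = step - 1
    return X[:, offset::step]

# X_eq = equidistant_compress(X_band, step=4)
```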
We use the circuits in Figure 2 to show the results of the equidistant compressional method, and assign normal and faulty parameters to the components in Figure 2 to build normal and faulty circuit instances. In a normal circuit, all the parameters fall inside their respective tolerance ranges.
We can construct a sample space with sample vectors from the circuits shown in Figure 2. A sample vector, whose length is set to 30, is obtained by sampling the impulse response of a circuit instance. It can be written as (s[0], s[1], ···, s[28], s[29]). In the training set, each sample vector is labeled as 'passed' or 'failed' according to the circuit specifications. The testing set classifications are derived by comparing the output response to a threshold derived from the hyperplane coefficients, following the test generation method based on SVM.
Figure 2. Example circuits: (a) three-pole active filter; (b) five-pole active filter
Table 1 shows the misclassification rates for the circuits in Figure 2 obtained by the test generation algorithm based on SVM. For the passed (failed) population, the misclassification rate is defined as the ratio of the number of incorrectly classified passed (failed) instances to the number of instances labeled as passed (failed). The test generation method based on SVM implemented in this paper achieves low misclassification rates.
Table 1. Misclassification Rates of Test Generation based on SVM for Figure 2

Misclassification rate (%)                  | Figure 2(a)                 | Figure 2(b)
                                            | Training set | Testing set  | Training set | Testing set
misclassification for passed population     | 0.67%        | 0.875%       | 1.6%         | 0
misclassification for failed population     | 0.67%        | 1.5%         | 1.6%         | 1%
misclassification for total population      | 1.34%        | 2.38%        | 3.2%         | 1%
We can use the equidistant compressional method to extract samples and reduce the length of every impulse-response sample vector. We use k to represent the length of the new sample vector after extracting samples. If k = 15, the sample vector would be (s[1], s[3], ···, s[27], s[29]). We could also construct the sample vectors (s[2], s[5], ···, s[26], s[29]) and (s[4], s[9], s[14], s[19], s[24], s[29]) with k = 10 and k = 5 samples respectively. Figure 3 shows the misclassification rates for the circuits of Figure 2 with different values of k. The misclassification rates for the passed and failed populations are denoted in Figure 3 by different bars; their sum gives the misclassification rate for the total population.
Figure 3(b) shows the misclassification rates for the total training and testing sets of Figure 2(b) for k = 15, k = 10 and k = 5 respectively. Compared with the misclassification rates in Table 1, the effect of reducing the length of every sample vector is acceptably small, and the equidistant compressional method is useful even for sample vectors of low dimension. But for the circuit of Figure 2(a), Figure 3(a) shows that the misclassification rates for the total training and testing sets can reach 80% and 44% when we reduce the length of every sample vector. These misclassification rates are far too high to be accepted. So for Figure 2(a) the equidistant compressional method fails, because the new sample vectors are not the most effective for classification.
Figure 3. Misclassification rates for different values of k by the equidistant compressional method: (a) misclassification rates of Figure 2(a); (b) misclassification rates of Figure 2(b)
In the following sections we propose two other methods to compress the sample space and contrast the three methods. We will identify the best method for compressing the sample space through extensive experimental results.
3.2. k-nearest neighbors method
In this section we propose a method to compress the impulse-response sample space based on the k-nearest neighbors algorithm. Suppose that an impulse-response sample space is (S1, ···, Sn, Sn+1, ···, Sm). In this sample space the number of impulse-response sample vectors labeled as 'passed' is n, and the number of impulse-response sample vectors labeled as 'failed' is m − n.
Si = (si[0], si[1], ···, si[d−1]) (i = 1, ···, n, n+1, ···, m) denotes one impulse-response sample vector; its original length is d. The compression of the sample space reduces the value of d to construct a new sample space, so that after the reduction the new length of Si is k. We set Pj = (s1[j], ···, sn[j], sn+1[j], ···, sm[j]) (j = 0, ···, d−1). The compression selects some Pj from {P0, P1, ···, Pd−1}; the number of Pj selected for the new sample space is k.
If we execute the classification for every Pj, the Pj with low misclassification rates should be selected, since these Pj make a greater contribution to the classification of the sample space and reduce the misclassification rate of the sample space for the test generation. After the compression, the new sample space retains the crucial characteristics for the test generation.
The k-nearest neighbors algorithm is a useful method for pattern classification [22]. The testing set here is the target set for classification. For one sample in the testing set, the method runs in the following steps:
1) For this sample, locate the t nearest samples of the training set, where t is the number of nearest samples.
2) Determine to which category of the training set most of the t nearest samples belong.
3) Assign this category to this sample in the target dataset.
A Euclidean distance measure is used to calculate how close each sample of the testing set is to the training set. Given sx = (sx1, sx2, ..., sxn), a sample in the testing set, and sy = (sy1, sy2, ..., syn), a sample in the training set, treated as two points, the Euclidean distance from sx to sy is given by:
d(sx, sy) = \sqrt{\sum_{i=1}^{n} (sx_i - sy_i)^2}    (1)
So the k-nearest neighbors method classifies the testing set based on the class of the nearest neighbors. It can be executed quickly in a low-dimensional space, so it can be used to execute the classification for each Pj. The compression of the impulse-response sample space based on the k-nearest neighbors algorithm runs in the following steps:
1) Calculate the misclassification rate for each Pj with the k-nearest neighbors algorithm.
2) Based on the misclassification rates obtained in step 1), select the k Pj with the lowest misclassification rates to construct the new sample space.
The new sample space can then be used for the test generation.
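A minimal sketch of these two steps follows. It is an assumption of this sketch that each P_j is scored by cross-validated kNN accuracy with scikit-learn rather than by the paper's training/testing split; t is the neighbor count of the kNN step.

```python
# Sketch: rank every per-index feature P_j by the misclassification rate of a
# kNN classifier that sees only that feature, then keep the k best indices.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def knn_compress(X, y, k, t=5):
    """Return the k column indices of X whose single-feature kNN error is lowest."""
    errors = []
    for j in range(X.shape[1]):
        knn = KNeighborsClassifier(n_neighbors=t)
        acc = cross_val_score(knn, X[:, [j]], y, cv=5).mean()
        errors.append(1.0 - acc)          # misclassification rate for P_j
    best = np.argsort(errors)[:k]         # k indices with the lowest error
    return np.sort(best)

# cols = knn_compress(X, y, k=10)
# X_knn = X[:, cols]                      # the compressed sample space
```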
Here the k-nearest neighbors method acts as a classification method, and the crucial problem for every classification method is to reduce its misclassification rates. When the misclassification rates are too high, we must find other means to reduce them. This makes the k-nearest neighbors method complex to apply for compressing the sample space. The next section presents another method to compress the sample space without classification.
3.3. Maximal difference method
In this section we propose another method to compress the impulse-response sample space, based on the maximal difference of the categories; it is called the maximal difference method. We use the impulse-response sample space (S1, ···, Sn, Sn+1, ···, Sm) introduced in the last section to describe it. The method uses only the training set for the compression. It runs in the steps of Figure 4.
Figure 4. Process of the maximal difference method
The features of the 'passed' and 'failed' categories are their average values in the training set. We use the vector Fp = (Fp1, Fp2, Fp3, ···) to represent the feature of the 'passed' category, and the vector Ff = (Ff1, Ff2, Ff3, ···) to represent the feature of the 'failed' category. The difference features of the sample points are obtained by measuring the distances between Fp and Ff. The maximal difference method selects the features with the maximal differences to construct the new sample space.
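A minimal sketch of this selection, under the assumption that the per-index difference is the absolute gap between the class means (label 0 = 'passed', 1 = 'failed', as in the earlier sketches):

```python
# Sketch: average the 'passed' and 'failed' training vectors per index, take
# the per-index gap |Fp_j - Ff_j|, and keep the k indices with the largest gap.
import numpy as np

def maximal_difference_compress(X, y, k):
    """Return the k column indices with the largest gap between class means."""
    Fp = X[y == 0].mean(axis=0)      # feature of the 'passed' category
    Ff = X[y == 1].mean(axis=0)      # feature of the 'failed' category
    gap = np.abs(Fp - Ff)            # difference feature of every sample point
    return np.sort(np.argsort(gap)[::-1][:k])

# cols = maximal_difference_compress(X, y, k=10)
# X_md = X[:, cols]                  # compressed sample space (training set only)
```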
3.4. Comparison
We use the different methods to compress the sample space of the impulse response: the equidistant compressional method, the k-nearest neighbors method and the maximal difference method. Figure 5 shows the sample points of the compressed sample spaces of the impulse response. The equidistant compressional method extracts samples with a fixed spacing, so the new sample vectors of the compressed sample space for the circuits of Figure 2 are alike.
Figure 5. Sample points of the compressed sample space of impulse response from different methods: (a) sample points of Figure 2(a) when k=15; (b) sample points of Figure 2(a) when k=10; (c) sample points of Figure 2(a) when k=5; (d) sample points of Figure 2(b) when k=15; (e) sample points of Figure 2(b) when k=10; (f) sample points of Figure 2(b) when k=5
Comparing the k-nearest neighbors method with the maximal difference method, the k-nearest neighbors method is a classification method and is more complex than the maximal difference method. The maximal difference method only needs to calculate the average values of each category and does not need to consider the testing set. It is not a classification method and does not need to recognize the two categories, so it avoids a complex classification process; its computational process is simpler than that of the k-nearest neighbors method. We can therefore choose it for the compression of the sample space if its misclassification rates are low enough for the test.
The misclassification rates for the different compressional methods are illustrated in Figure 6, which we use to compare the three methods. For example, Figure 6(a) shows the misclassification rates for Figure 2(a) when k = 15, 10 and 5. According to the misclassification rates in Figure 6, the k-nearest neighbors method is more effective than the equidistant compressional method, and the maximal difference method is more effective than the k-nearest neighbors method. So, considering both the complexity of the computational process and the misclassification rates, the maximal difference method is the best choice for the compression of the sample space.
Figure 6. Misclassification rates for different compressional methods: (a) misclassification rates for Figure 2(a); (b) misclassification rates for Figure 2(b)
4. Conclusions
In this paper we have proposed an effective test generation algorithm for analog circuits with compressing sample space methods. The algorithm uses an SVM to obtain the test signals. For the compression of the sample space we contrast three compressional methods: the equidistant compressional method, the k-nearest neighbors method and the maximal difference method. Considering the complexity of the computational process and the misclassification rates, we choose the maximal difference method as the compressional method. The experiments prove the efficiency of the maximal difference method.
References
[1] Ke H, Stratigopoulos HG, Mir S, Hora C, Yizi X, Kruseman B. Diagnosis of Local Spot Defects in Analog Circuits. IEEE Transactions on Instrumentation and Measurement. 2012; 61(10): 2701-2712.
[2] Cheng LY, Jing Y, Zhen L, Shulin T. Complex Field Fault Modeling-Based Optimal Frequency Selection in Linear Analog Circuit Fault Diagnosis. IEEE Transactions on Instrumentation and Measurement. 2014; 63(4): 813-825.
[3] Ahmadyan SN, Kumar JA, Vasudevan S. Goal-oriented stimulus generation for analog circuits. 49th Design Automation Conference. 2012; 6: 1018-1023.
[4] Ozev S, Orailoglu A. An integrated tool for analog test generation and fault simulation. International Symposium on Quality Electronic Design. 2002: 267-272.
[5] Voorakaranam R, Chatterjee A. Test generation for accurate prediction of analog specifications. 18th IEEE VLSI Test Symposium. 2000; 5: 137-142.
[6] Huynh SD, Seongwon K, Soma M, Jinyan Z. Automatic analog test signal generation using multi-frequency analysis. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing. 1999: 565-576.
[7] Qi ZZ, Yong LX, Xi FL, Dong JB, Xuan X, San SX. Methodology and Equipments for Analog Circuit Parametric Faults Diagnosis Based on Matrix Eigenvalues. IEEE Transactions on Applied Superconductivity. 2014; 24(5): 0601106.
[8] Cauwenberghs G. Delta-sigma cellular automata for analog VLSI random vector generation. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing. 1999; 46(3): 240-250.
[9] Zheng HH, Balivada A, Abraham JA. A novel test generation approach for parametric faults in linear analog circuits. 14th Proceedings of VLSI Test. 1996; 5: 470-475.
[10] Soma M. Automatic test generation algorithms for analogue circuits. IEE Proceedings Circuits, Devices and Systems. 1996: 366-373.
[11] Fuh LW, Schreiber H. A pragmatic approach to automatic test generation and failure isolation of analog systems. IEEE Transactions on Circuits and Systems. 1979; 26(7): 584-585.
[12] Chen YP, Kwang TC. Test generation for linear time-invariant analog circuit. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing. 1999; 46(5): 554-564.
[13] Yu W, Hao W, Zhenyu S. A Prioritized Test Generation Method for Pairwise Testing. TELKOMNIKA Indonesian Journal of Electrical Engineering. 2013; 11(1): 136-143.
[14] Liu X. Learning Techniques for Automatic Test Pattern Generation using Boolean Satisfiability. TELKOMNIKA Indonesian Journal of Electrical Engineering. 2013; 11(7): 4077-4085.
[15] Vapnik VN. Overview of statistical learning theory. IEEE Transactions on Neural Networks. 1999; 10(5): 988-999.
[16] Vapnik V. SVM method of estimating density, conditional probability, and conditional density. IEEE International Symposium on Circuits and Systems. 2000; 2: 749-752.
[17] Richard OD, Peter EH, David GS. Pattern classification. Wiley-Interscience. 2000: 11.
[18] Yan HF, Wang WF, Liu J. Optimization and application of support vector machine based on SVM algorithm parameters. Journal of Digital Information Management. 2013; 11(2): 165-169.
[19] Demetgul M, Yazicioglu O, Kentli A. Radial basis and LVQ neural network algorithm for real time fault diagnosis of bottle filling plant. Tehnicki Vjesnik. 2014; 21(4): 689-695.
[20] Huang TM, Kecman V, Kopriva I. Kernel Based Algorithms for Mining Huge Data Sets, Supervised, Semi-supervised, and Unsupervised Learning. Springer-Verlag, Berlin, Heidelberg. 2006: 96-97.
[21] Ting L, Houjun W, Bing L. Test generation algorithm for analog systems based on support vector machine. Signal, Image and Video Processing. 2011; 5(4): 527-533.
[22] Cover T, Hart P. Nearest neighbor pattern classification. IEEE Transactions on Information Theory. 1967; 13(1): 21-27.