TELKOMNIKA Indonesian Journal of Electrical Engineering
Vol. 13, No. 1, January 2015, pp. 124 ~ 136
DOI: 10.11591/telkomnika.v13i1.6925
ISSN: 2302-4046
Received September 30, 2014; Revised November 4, 2014; Accepted December 2, 2014
Adaptive System Identification using Markov Chain Monte Carlo
Muhammad Ali Raza Anjum
Department of Engineering Sciences, Army Public College of Management and Sciences, Rawalpindi, PAKISTAN
email: ali.raza.anjum@apcoms.edu.pk
Abstract
One of the major problems in adaptive filtering is the problem of system identification. It has been studied extensively due to its immense practical importance in a variety of fields. The underlying goal is to identify the impulse response of an unknown system. This is accomplished by placing a known system in parallel and feeding both systems with the same input. Due to the initial disparity in their impulse responses, an error is generated between their outputs. This error is used to tune the impulse response of the known system in such a way that every change in the impulse response reduces the magnitude of the prospective error. This process is repeated until the error becomes negligible and the responses of both systems match. To minimize the error, numerous adaptive algorithms are available. They are noteworthy either for their low computational complexity or for their high convergence speed. Recently, a method known as Markov Chain Monte Carlo (MCMC) has gained much attention due to its remarkably low computational complexity. But despite this colossal advantage, the properties of the MCMC method have not been investigated for the adaptive system identification problem. This article bridges this gap by providing a complete treatment of the MCMC method in the aforementioned context.
Keywords: Markov chain, Monte Carlo, system identification, Wiener-Hopf, adaptive filter
Copyright © 2015 Institute of Advanced Engineering and Science. All rights reserved.
1. Introduction
System identification is one of the most important problems in adaptive filtering. Over the years, it has been studied extensively in many areas of science and engineering, both in theoretical and applied contexts. For example, volumes of results in this area have been reported for echo cancellation in acoustics [1], for channel estimation in communications [2], for earth-sounding in geophysics [3], for robotics and control [4], etc. More and more sophisticated techniques are perpetually being reported in this area [5-11]. Therefore, its importance can hardly be over-emphasized.
In the system identification problem, the impulse response of an unknown system is identified by placing a discrete-time Finite Impulse Response (FIR) system in parallel with the unknown system and feeding both systems with the same input. The difference between the outputs of the two systems is regarded as the error, and the response of the FIR system is adjusted to minimize the error. This process is repeated until the error in the outputs of both systems becomes negligible. When this happens, the impulse responses of both systems match. This, in very general terms, is the principle behind the system identification problem [12].
The principal task associated with the system identification problem is minimization of the indicated error. One way to do it is to reduce the average power of the error by forming its cost function. This is regarded as the principle of Minimum Mean Square Error (MMSE). The average power of the error, also known as the mean squared error, is taken as the cost of selecting a particular impulse response for the FIR system. The response is selected in such a way that this cost is minimized over the set of all possible selections, provided further reduction in cost is not possible. Hence, the system identification problem is transformed into an optimization problem [13].
There are many iterative algorithms that can search the cost function for such an optimal impulse response. The Least Mean Squares (LMS) algorithm, the Normalized LMS (NLMS) algorithm, and the Recursive Least Squares (RLS) algorithm are very famous in this regard. The distinguishing points for any such algorithm are its computational complexity and speed of convergence. Both
are generally considered as tradeoffs. For example, the LMS algorithm has very low computational complexity but is very slow to converge. The RLS algorithm, on the contrary, has rapid convergence properties but is computationally intensive [12].
Recently, a very interesting method, known by the name of the Markov Chain Monte Carlo (MCMC) method, has emerged in the computing literature for the solution of linear algebraic equations [14]. Though the history of the MCMC method can be traced as far back as the 1950s [15], it has gained much attention over recent years due to the availability of powerful computing devices. This method employs random number generators for the solution of practical problems where a closed-form or analytical solution is not possible. However, this method has also been studied lately for conventional problems due to its very low computational complexity [16-19]. This feature also makes it an excellent choice for the study of the system identification problem in adaptive filtering. So far as can be ascertained, the MCMC method and its properties have not been investigated in this context. This article aims to fill this gap and, hence, provides a complete treatment of the MCMC method for the adaptive system identification problem.
2. Nomenclature
Consider the system identification problem depicted in Figure 1.

Figure 1. Nomenclature for system identification problem
Let there be a discrete-time Linear Time Invariant (LTI) system with a finite impulse response $h$ of length $N$:

$$h = [h_0, h_1, \dots, h_{N-1}]^T \quad (1)$$

$h$ is unknown. To identify it, a Finite Impulse Response (FIR) filter $w$ of equal length is placed parallel to the unknown system:

$$w = [w_0, w_1, \dots, w_{N-1}]^T \quad (2)$$

A Wide Sense Stationary (WSS) random sequence $x(n)$ is applied to the input of both systems:

$$x(n), \quad n = 0, 1, 2, \dots \quad (3)$$

$x(n)$ has the following properties:

$$E[x(n)] = 0 \quad (4)$$

And,

$$E[x^2(n)] = \sigma_x^2 \quad (5)$$

$\sigma_x^2$ is the power of the input sequence. It is generally normalized to unity:

$$\sigma_x^2 = 1 \quad (6)$$
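As a concrete illustration of this nomenclature, the setup above can be sketched in a few lines of NumPy. The particular values of $h$, the filter length, and the sample count are arbitrary choices for the sketch, not values from the paper.

```python
import numpy as np

# Sketch of the Section 2 nomenclature: unknown system h, FIR filter w,
# WSS input x(n), desired output d(n), and error e(n).
rng = np.random.default_rng(0)

N = 4                                   # filter length
h = np.array([0.8, -0.4, 0.2, 0.1])     # unknown impulse response, Eq. (1)
w = np.zeros(N)                         # FIR filter to be tuned, Eq. (2)

M = 100_000
x = rng.standard_normal(M)              # WSS input with zero mean, Eq. (4)
x /= x.std()                            # power normalized to unity, Eq. (6)

d = np.convolve(x, h)[:M]               # output of the unknown system
y = np.convolve(x, w)[:M]               # output of the FIR filter
e = d - y                               # error between the two outputs
print(round(x.mean(), 2), round((x ** 2).mean(), 2))
```

With a large sample count, the printed sample mean and power should be close to 0 and 1 respectively.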
For the given input $x(n)$, the unknown system generates an output and the FIR filter generates an output $y(n)$. The former is termed as the desired output $d(n)$:

$$d(n) = \sum_{k=0}^{N-1} h_k\, x(n-k) \quad (7)$$

The difference between the desired output and the filter output is termed as the error $e(n)$:

$$e(n) = d(n) - y(n) \quad (8)$$
3. Derivation of Wiener-Hopf Equation
Derivation of the Wiener-Hopf Equation can be a two-step process. In the first step, a cost function of the error is formed by squaring it and then taking its expected value:

$$J = E[e^2(n)] \quad (9)$$

Or,

$$J = E[d^2(n)] - 2\, w^T p + w^T R\, w \quad (10)$$

$J$ is the cost function. It represents the Mean Square Error (MSE). $p$ represents the cross-correlation vector:

$$p = E[d(n)\, x(n)] = [p_0, p_1, \dots, p_{N-1}]^T \quad (11)$$

$R$ is the auto-correlation matrix:

$$R =
\begin{bmatrix}
r_{00} & r_{01} & \dots & r_{0,N-1} \\
r_{10} & r_{11} & \dots & r_{1,N-1} \\
\vdots & \vdots & \ddots & \vdots \\
r_{N-1,0} & r_{N-1,1} & \dots & r_{N-1,N-1}
\end{bmatrix} \quad (12)$$

Since $x(n)$ is WSS, $R$ is a symmetric Toeplitz matrix [13]:

$$r_{ij} = r(i-j) \quad (13)$$

And,

$$r(i-j) = r(j-i) \quad (14)$$

So,

$$R =
\begin{bmatrix}
r(0) & r(1) & \dots & r(N-1) \\
r(1) & r(0) & \dots & r(N-2) \\
\vdots & \vdots & \ddots & \vdots \\
r(N-1) & r(N-2) & \dots & r(0)
\end{bmatrix} \quad (15)$$

In the second step, the cost function in Eq. (10) is minimized by taking its gradient with respect to $w$ and then setting the gradient to zero:

$$\nabla_w J = 0 \implies -2\, p + 2\, R\, w = 0 \quad (16)$$

Or,

$$R\, w = p \quad (17)$$
Equation (17) represents the famous Wiener-Hopf Equation. $w$ is known as the Minimum MSE (MMSE) Wiener Filter.
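Equation (17) can be verified numerically by estimating $R$ and $p$ from data and solving the linear system directly. The sketch below is this direct baseline (the unknown system and the sample size are illustrative choices); the MCMC alternative to the direct solve is developed in the next section.

```python
import numpy as np

# Sketch: solving the Wiener-Hopf equation Rw = p (Eq. 17) with sample
# estimates of R and p.
rng = np.random.default_rng(1)

N = 4
h = np.array([0.8, -0.4, 0.2, 0.1])     # unknown system (illustrative)
M = 200_000
x = rng.standard_normal(M)
d = np.convolve(x, h)[:M]

# Input vectors x(n) = [x(n), x(n-1), ..., x(n-N+1)]; the first N rows
# are dropped to avoid the wrap-around introduced by np.roll.
X = np.column_stack([np.roll(x, k) for k in range(N)])[N:]
dn = d[N:]

R = (X.T @ X) / len(X)                  # sample auto-correlation matrix, Eq. (15)
p = (X.T @ dn) / len(X)                 # sample cross-correlation vector, Eq. (11)
w = np.linalg.solve(R, p)               # Wiener filter, Eq. (17)

print(np.round(w, 2))                   # should approximate h
```

Since the input is white and the data record is long, the solved filter should approximate the unknown impulse response closely.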
4. Markov Chain Monte Carlo Solution of the Wiener-Hopf Equation
In Equation (16), the $R$ matrix can be split as:

$$R = I - B, \qquad B = I - R \quad (18)$$

So that,

$$(I - B)\, w = p \quad (19)$$

Or,

$$w = p + B\, w \quad (20)$$

But,

$$(I - B)^{-1} = I + B + B^2 + B^3 + \cdots \quad (21)$$

Substituting Equation (21) in Equation (20),

$$w = (I + B + B^2 + B^3 + \cdots)\, p \quad (22)$$

Expanding Equation (22),

$$w = p + B\, p + B^2 p + B^3 p + \cdots \quad (23)$$

Further expanding the $B$ matrix in Equation (22),

$$B =
\begin{bmatrix}
b_{00} & b_{01} & \dots & b_{0,N-1} \\
b_{10} & b_{11} & \dots & b_{1,N-1} \\
\vdots & \vdots & \ddots & \vdots \\
b_{N-1,0} & b_{N-1,1} & \dots & b_{N-1,N-1}
\end{bmatrix} \quad (24)$$

Using Equations (24), (2) and (11) to expand Equation (22),

$$\begin{bmatrix} w_0 \\ \vdots \\ w_{N-1} \end{bmatrix}
=
\begin{bmatrix} p_0 \\ \vdots \\ p_{N-1} \end{bmatrix}
+
\begin{bmatrix}
b_{00} & \dots & b_{0,N-1} \\
\vdots & \ddots & \vdots \\
b_{N-1,0} & \dots & b_{N-1,N-1}
\end{bmatrix}
\begin{bmatrix} p_0 \\ \vdots \\ p_{N-1} \end{bmatrix}
+ \cdots \quad (25)$$
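Before randomizing it, the series in Equation (23) can be evaluated deterministically by accumulating the terms $B^k p$. A minimal sketch, with an illustrative well-conditioned $R$:

```python
import numpy as np

# Sketch: truncated Neumann series of Eq. (23), w ≈ p + Bp + B^2 p + ...,
# with B = I - R. The random walks of the following sections estimate
# exactly this series term by term.
def neumann_solve(R, p, m=50):
    B = np.eye(R.shape[0]) - R
    w, term = p.copy(), p.copy()
    for _ in range(m):
        term = B @ term                 # next term B^k p of the series
        w += term
    return w

R = np.array([[1.0, 0.3, 0.1],
              [0.3, 1.0, 0.3],
              [0.1, 0.3, 1.0]])         # symmetric Toeplitz (illustrative)
p = np.array([0.5, 0.2, -0.1])
w_series = neumann_solve(R, p)
print(np.round(w_series, 4))
print(np.round(np.linalg.solve(R, p), 4))   # matches: the series converged
```

Here the spectral radius of $B$ is well below one, so 50 terms are more than enough for the two printed vectors to agree.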
Picking the $i$-th unknown in Equation (25),

$$w_i = p_i + \sum_j b_{ij}\, p_j + \sum_j \sum_k b_{ij}\, b_{jk}\, p_k + \cdots \quad (26)$$

Splitting the $b_{ij}$'s in Equation (26),

$$b_{ij} = v_{ij}\, P_{ij} \quad (27)$$

Where $P_{ij} \in [0,1]$ and,
$$0 \le P_{ij} \le 1 \quad (28)$$

The $P_{ij}$'s represent probabilities. Substituting Equation (27) in Equation (26),

$$w_i = p_i + \sum_j v_{ij} P_{ij}\, p_j + \sum_j \sum_k v_{ij} P_{ij}\, v_{jk} P_{jk}\, p_k + \cdots \quad (29)$$

Or,

$$w_i = \lim_{m \to \infty} S_m \quad (30)$$

$S_m$ represents the sum of the first $m$ terms. The first term comes from the $i$-th row of $p$, the second comes from the $i$-th rows of $B$ and $p$, the third comes from the $i$-th rows of $B$, $B^2$, and $p$, and so on.
4.1. Analysis of the Infinite Sum
Decomposing the $S_1$ term,

$$S_1 = p_i \quad (31)$$

$S_1$ is equal to the $i$-th value of the right-hand side in Eq. (17). We remember it as the starting position. Decomposing the $S_2$ term,

$$S_2 = p_i + v_{i0} P_{i0}\, p_0 + v_{i1} P_{i1}\, p_1 + \cdots + v_{i,N-1} P_{i,N-1}\, p_{N-1} \quad (32)$$

Let,

$$P_{ij} = \frac{1}{N} \quad (33)$$

With $j = 0, 1, \dots, N-1$. Substituting Equation (33) in Equation (32),

$$S_2 = p_i + \frac{1}{N}\left(v_{i0}\, p_0 + v_{i1}\, p_1 + \cdots + v_{i,N-1}\, p_{N-1}\right) \quad (34)$$

Re-arranging Equation (34),

$$S_2 = \frac{(p_i + v_{i0}\, p_0) + (p_i + v_{i1}\, p_1) + \cdots + (p_i + v_{i,N-1}\, p_{N-1})}{N} \quad (35)$$

The $S_2$ term appears to be an average. To analyze it further, we move the $N$ in the denominator of Equation (35) to the left:

$$N\, S_2 = (p_i + v_{i0}\, p_0) + (p_i + v_{i1}\, p_1) + \cdots + (p_i + v_{i,N-1}\, p_{N-1}) \quad (36)$$

Decomposing Equation (36) into $N$ equations,

$$\begin{aligned}
S_2^{(0)} &= p_i + v_{i0}\, p_0 \\
S_2^{(1)} &= p_i + v_{i1}\, p_1 \\
&\;\vdots \\
S_2^{(N-1)} &= p_i + v_{i,N-1}\, p_{N-1}
\end{aligned} \quad (37)$$

These $N$ equations represent one-step random walks taken by the $i$-th particle. In the first equation for example, the particle starts from state-$i$ (characterized by $p_i$), moves to state-$0$
(characterized by $v_{i0}\, p_0$), and then comes to a halt. There are $N$ one-step terminating points. Hence, $S_2$ is an average of $N$ one-step random walks of the $i$-th particle to $N$ one-step terminating points when it starts from state-$i$.

In the light of this discussion, we analyze the $S_3$ term:

$$S_3 = S_2 + \sum_j \sum_k v_{ij} P_{ij}\, v_{jk} P_{jk}\, p_k \quad (38)$$

Let,

$$P_{jk} = \frac{1}{N} \quad (39)$$

For $\forall\, j, k$. Substituting Equation (39) in Equation (38),

$$S_3 = S_2 + \frac{1}{N^2} \sum_j \sum_k v_{ij}\, v_{jk}\, p_k \quad (40)$$

$S_3$ contains an additional double-summation term when compared to $S_2$. This extra term represents two-step random walks of the $i$-th particle to two-step terminating points before it comes to a halt. There will be $N^2$ such random walks. Hence, $S_3$ is an average of $N^2$ two-step random walks of the $i$-th particle to two-step terminating points when it starts from state-$i$. We can show in a similar way that $S_4$ is an average of three-step random walks to three-step terminating points. Continuing in a similar manner, $S_m$ is an average of $(m-1)$-step random walks to terminating points.
4.2. Random Walks with Arbitrary Probabilities
In Equation (32), we assumed,

$$P_{ij} = \frac{1}{N} \quad \text{(Repeat)}$$

With $j = 0, 1, \dots, N-1$. This means that when a particle is in state-$i$, it is equally likely to transit to any of the $N$ one-step terminating points with a probability $1/N$. This implies that $N$ one-step terminating points will be covered in at least $N$ random walks, $N^2$ two-step terminating points will be covered in at least $N^2$ random walks, $N^3$ three-step terminating points will be covered in at least $N^3$ random walks, and so on. But if the probabilities are not equally likely, this will not be the case. We will have to define another variable $W_m$ to indicate the minimum number of random walks that will at least be required to cover the $m$-step terminating points. For one-step terminating points, $W_1$ will be inversely proportional to the smallest transition probability among all the one-step transition probabilities from state-$i$ to all one-step terminating points. This quantity is then rounded off upwards to the nearest possible integer:

$$W_1 = \left\lceil \frac{1}{\min_j P_{ij}} \right\rceil \quad \forall j \quad (41)$$

With $j = 0, 1, \dots, N-1$. Equation (41) means that the $j$-th one-step terminating point with the smallest transition probability will be covered in at least $1/P_{ij}$ random walks. Similarly, $W_2$ will be inversely proportional to the smallest transition probability among all the two-step transition probabilities from state-$i$ to all two-step terminating points, rounded upwards to the nearest possible integer:

$$W_2 = \left\lceil \frac{1}{\min_{j,k} P_{ij}\, P_{jk}} \right\rceil \quad \forall j, k \quad (42)$$
With $j, k = 0, 1, \dots, N-1$. Similarly, $W_m$ can be written as,

$$W_m = \left\lceil \frac{1}{\min_{j,k,\dots,l} P_{ij}\, P_{jk} \cdots P_{ql}} \right\rceil \quad \forall j, k, \dots, l \quad (43)$$

With $j, k, \dots, l = 0, 1, \dots, N-1$.
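One common way to turn the random-walk picture above into an estimator is to simulate absorbing walks and accumulate the weighted values $v\,p$ along each path. The sketch below assumes uniform transition probabilities with a fixed absorption probability; both are illustrative choices made here for simplicity, and the paper's own construction of the transition matrix follows in Section 8.

```python
import numpy as np

# Sketch of a random-walk estimator for w = p + Bp + B^2 p + ... (Eq. 23),
# with B = I - R, b_ij split as v_ij * P_ij (Eq. 27).
def mcmc_solve(R, p, walks=20_000, p_abs=0.3, seed=0):
    rng = np.random.default_rng(seed)
    N = len(p)
    B = np.eye(N) - R
    P = (1.0 - p_abs) / N                 # uniform transition probability
    w = np.zeros(N)
    for i in range(N):                    # estimate each unknown w_i separately
        total = 0.0
        for _ in range(walks):
            state, weight, score = i, 1.0, p[i]   # start at state-i (Eq. 31)
            while rng.random() >= p_abs:          # walk until absorption
                nxt = rng.integers(N)
                weight *= B[state, nxt] / P       # importance weight v = b / P
                state = nxt
                score += weight * p[state]
            total += score
        w[i] = total / walks
    return w

R = np.array([[1.0, 0.3, 0.1],
              [0.3, 1.0, 0.3],
              [0.1, 0.3, 1.0]])           # eigenvalues inside (0, 2)
p = np.array([0.5, 0.2, -0.1])
w_mc = mcmc_solve(R, p)
print(np.round(w_mc, 2), np.round(np.linalg.solve(R, p), 2))  # should be close
```

Each walk contributes an unbiased sample of the series for $w_i$; averaging over many walks drives the estimate toward the direct solution, in line with the error analysis of Section 7.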
5. Analysis of Convergence
Our entire analysis started from Equation (21), which represents an infinite matrix series:

$$I + B + B^2 + \cdots = \sum_{k=0}^{\infty} B^k = \lim_{m \to \infty} \sum_{k=0}^{m} B^k \quad (44)$$

Where,

$$T_m = \sum_{k=0}^{m} B^k \quad (45)$$

Let there be a solution $w$ to which $T_m\, p$ will converge in the limit:

$$\lim_{m \to \infty} T_m\, p = w \quad (46)$$

But,

$$T_{m+1} = \sum_{k=0}^{m+1} B^k \quad (47)$$

Or,

$$T_{m+1} = T_m + B^{m+1} \quad (48)$$

Applying the limit,

$$\lim_{m \to \infty} T_{m+1}\, p = \lim_{m \to \infty} T_m\, p + \lim_{m \to \infty} B^{m+1}\, p \quad (49)$$

Or,

$$\lim_{m \to \infty} B^{m+1} = 0 \quad (50)$$

Equation (50) implies that,

$$\| B \| < 1 \quad (51)$$

Therefore, a canonical norm of the $B = I - R$ matrix should be less than unity for the convergence of the MCMC method to a unique solution. Presence of a unique solution will in turn guarantee the stability of the method by Lax's Theorem [20].
6. Limitations of the Method
Equation (51) imposes a restriction on the $B$ matrix. One way to analyze this restriction is to look at the eigenvalues of the $B$ matrix. They should lie inside the following interval to meet this restriction:

$$-1 < \lambda(B) < 1 \quad (52)$$
Complex eigenvalues can equally meet the condition described in Equation (52). But we will focus our attention on real eigenvalues because $B$ has real eigenvalues. It is related to the $R$ matrix by Equation (18). As $R$ is a symmetric matrix [13], it will have real eigenvalues by the fundamental theorem of linear algebra [21]. Hence, $B$ will also be a symmetric matrix with real eigenvalues. By the relation described by Equation (18), the condition on the eigenvalues of the $B$ matrix translates directly to the following condition on the eigenvalues of the $R$ matrix:

$$0 < \lambda(R) < 2 \quad (53)$$
Equation (53) means that all eigenvalues of $R$ should be real, positive, and inside the interval $(0, 2)$. For the $R$ matrix, the first two conditions hold automatically. $R$ is a symmetric matrix, so its eigenvalues are real. $R$ is positive definite, so all its eigenvalues are positive [13]. Finally, to prove that $R$ meets the third constraint as well, we employ Gershgorin's theorem [22]. This theorem states that the $i$-th eigenvalue of a matrix lies inside a circle,

$$C_i\!\left(r_{ii},\; \sum_{j \ne i} |r_{ij}|\right) \quad (54)$$

Such that $r_{ii}$ is the centre and $\sum_{j \ne i} |r_{ij}|$ is the radius of the corresponding circle. There will be $N$ such circles for an $N \times N$ matrix. Since $R$ is a finite symmetric Toeplitz matrix, all the Gershgorin's circles will overlap such that,

$$C\!\left(r(0),\; \sum_{k=1}^{N-1} |r(k)|\right) \quad (55)$$
$r(k)$ represents the autocorrelation sequence of the input random process. $r(0)$ is its value at zero delay. This is equivalent to the energy of the random process, which can be normalized to unity. So the Gershgorin's circles for $R$ will be centered at unity. As to the remaining values of $r(k)$, their monotonically decreasing nature will ensure that the remaining values lie inside a certain region [23]. For convergence, this region must be bounded by Eq. (55). For example, if the samples of $x(n)$ are independent and identically distributed (IID) with zero mean and unity variance, then,

$$r(k) = \begin{cases} 1, & k = 0 \\ 0, & k \ne 0 \end{cases} \quad (56)$$
There will be $N$ overlapping Gershgorin's circles $C(1, 0)$. These will essentially be points centered at 1. So, there will be $N$ eigenvalues equal to 1. This is true because, by Equation (56), $R$ will be equivalent to an identity matrix, which has all eigenvalues equal to 1. In this case, the starting positions of the $N$ particles, which are represented by $p$, will be the solution to the system identification problem:

$$w = p \quad (57)$$

This once again is true because when $R$ is an identity matrix, $w = R^{-1} p = p$.
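This special case can be checked numerically: for an IID, zero-mean, unit-variance input, the sample autocorrelation matrix approaches the identity and the cross-correlation vector itself approximates the solution. The system $h$ and the sample count below are illustrative choices.

```python
import numpy as np

# Sketch: with an IID unit-variance input, R -> I (Eq. 56) and the
# cross-correlation vector p is itself the solution w = p (Eq. 57).
rng = np.random.default_rng(2)

N = 4
h = np.array([0.8, -0.4, 0.2, 0.1])
M = 200_000
x = rng.standard_normal(M)
d = np.convolve(x, h)[:M]

X = np.column_stack([np.roll(x, k) for k in range(N)])[N:]
R = (X.T @ X) / len(X)                 # approximately the 4x4 identity
p = (X.T @ d[N:]) / len(X)             # approximately equal to h

print(np.round(R, 1))
print(np.round(p, 1), np.round(np.linalg.solve(R, p), 1))
```

The printed $p$ and the solved $w$ should agree, confirming that no walks are needed at all in the whitened case.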
7. Analysis of Error
We begin with the infinite series described by Equation (30):

$$w_i = \lim_{m \to \infty} S_m \quad \text{(Repeat)}$$

For $m = 1$, $S_1$ will be equal to the first term in the infinite series:

$$S_1 = p_i \quad \text{(Repeat)}$$
This will be the starting value of the particle. So the error $e_0$ in the estimate at the beginning of the random walk will be equal to,

$$e_0 = \left| \lim_{m \to \infty} S_m - S_1 \right| \quad (58)$$

The subscript $0$ indicates that no random walks have been initiated yet. For $m = 2$, $S_2$ will be equal to the sum of the first two terms in the infinite series:

$$S_2 = p_i + \sum_j v_{ij} P_{ij}\, p_j \quad \text{(Repeat)}$$

This represents all one-step terminating points. A minimum of $W_1$ random walks will be required to cover all these points. So the error after $W_1$ random walks will be,

$$e_{W_1} = \left| \lim_{m \to \infty} S_m - S_2 \right| \quad (59)$$

For $m = 3$,

$$S_3 = S_2 + \sum_j \sum_k v_{ij} P_{ij}\, v_{jk} P_{jk}\, p_k \quad \text{(Repeat)}$$

This represents all two-step terminating points. A minimum of $W_2$ random walks will be required to cover all these points. So the error after $W_2$ random walks will be,

$$e_{W_2} = \left| \lim_{m \to \infty} S_m - S_3 \right| \quad (60)$$

Similarly, the error after $W_m$ random walks will be,

$$e_{W_m} = \left| \lim_{n \to \infty} S_n - S_{m+1} \right| \quad (61)$$

$e_{W_m}$ represents the lower bound on the error. In general, the error in the estimate after $W_m$ random walks will be greater than $e_{W_m}$. This is due to the law of large numbers, which dictates that the number of walks must tend to infinity for the equality to hold in Equation (61). Also,

$$W_1 < W_2 < \cdots < W_m \quad (62)$$

When $m \to \infty$, $W_m \to \infty$:

$$\lim_{m \to \infty} e_{W_m} = \lim_{m \to \infty} \left| \lim_{n \to \infty} S_n - S_{m+1} \right| = 0 \quad (63)$$

Hence, infinite random walks will be required to take the error in the estimate equal to zero.
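The law-of-large-numbers behaviour invoked above can be illustrated with a plain Monte Carlo average: the error shrinks as the number of walks grows, but only vanishes in the limit. The averaged quantity below is an arbitrary stand-in, not a quantity from the paper.

```python
import numpy as np

# Sketch: error of a Monte Carlo average versus the number of "walks".
# The mean of U(0,1) is 0.5; the estimation error decays roughly as
# 1/sqrt(walks) and reaches zero only as walks -> infinity.
rng = np.random.default_rng(3)

true_value = 0.5
for walks in (10, 1_000, 100_000):
    estimate = rng.uniform(0, 1, size=walks).mean()
    print(walks, round(abs(estimate - true_value), 4))
```

The printed error column shrinks by roughly an order of magnitude for every hundredfold increase in walks, mirroring the bound in Equation (61).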
8. Construction of Algorithm
In this section, we focus our attention on the construction of an algorithm to compute an iterative MCMC solution to the Wiener-Hopf Equation.

8.1. The Matter of Absorbing State
From an algorithmic viewpoint, there is one essential point to consider. For that purpose, let us analyze the second random walk in Equation (37). The particle starts from state-$i$, moves to state-$1$ in the next step, and then the random walk suddenly comes to an end. So in addition to the $N$ existing states, there must be another state, a no-return state. We term this extra state as the absorbing state. A particle cannot return from the absorbing state.
$$\sum_{j=0}^{N-1} P_{Nj} = 0 \quad (64)$$

And once the particle encounters the absorbing state, it stays there:

$$P_{NN} = 1 \quad (65)$$

Hence, for a single unknown, there are a total of $N+1$ states. Associated with these states are $N+1$ state transition probabilities.
8.2. Formation of the State Transition Matrix
Each unknown has $N+1$ state transition probabilities associated with it. As there are $N$ unknowns in the system, we can construct an $N \times (N+1)$ state transition matrix:

$$P = \begin{bmatrix}
P_{00} & \dots & P_{0N} \\
\vdots & \ddots & \vdots \\
P_{N-1,0} & \dots & P_{N-1,N}
\end{bmatrix} \quad (66)$$

Probabilities associated with the absorbing state are called absorption probabilities, i.e., $P_{iN}$, $\forall i = 0, 1, \dots, N-1$. We add an extra row to $P$ in order to meet the conditions laid down in Equations (64) and (65) for the absorbing state:

$$P = \begin{bmatrix}
P_{00} & \dots & P_{0,N-1} & P_{0N} \\
\vdots & \ddots & \vdots & \vdots \\
P_{N-1,0} & \dots & P_{N-1,N-1} & P_{N-1,N} \\
0 & \dots & 0 & 1
\end{bmatrix} \quad (67)$$

The dimensions of the $P$ matrix now change to $(N+1) \times (N+1)$. The absorption probabilities will not be available directly. In order to obtain them, we should ensure during the splitting process in Equation (27) that,

$$\sum_{j=0}^{N-1} P_{ij} \le 1 \quad (68)$$

$\forall i = 0, 1, \dots, N-1$. By the fundamental theorem of probability [24],

$$\sum_{j=0}^{N} P_{ij} = 1 \quad (69)$$

From Equation (69), the absorption probabilities can be computed in the following manner:

$$P_{iN} = 1 - \sum_{j=0}^{N-1} P_{ij} \quad (70)$$

$\forall i = 0, 1, \dots, N-1$.
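The construction of Equations (66)-(70) can be sketched as follows. The splitting used here makes $P_{ij}$ proportional to $|b_{ij}|$, scaled so that each row sum stays below one (Eq. 68); this particular splitting is an illustrative choice, since the text leaves it open at this point.

```python
import numpy as np

# Sketch: building the (N+1)x(N+1) state transition matrix of Eq. (67)
# from the splitting b_ij = v_ij * P_ij of Eq. (27).
def transition_matrix(R, scale=0.9):
    N = R.shape[0]
    B = np.eye(N) - R
    P = np.zeros((N + 1, N + 1))
    for i in range(N):
        row = np.abs(B[i])
        if row.sum() > 0:
            P[i, :N] = scale * row / row.sum()   # Eq. (68): row sum <= 1
        P[i, N] = 1.0 - P[i, :N].sum()           # absorption prob., Eq. (70)
    P[N, N] = 1.0                                # absorbing state, Eq. (65)
    return P

R = np.array([[1.0, 0.3, 0.1],
              [0.3, 1.0, 0.3],
              [0.1, 0.3, 1.0]])
P = transition_matrix(R)
print(np.round(P, 2))
print(P.sum(axis=1))                             # every row sums to 1
```

Each of the first $N$ rows ends with its absorption probability, and the last row is the no-return absorbing state required by Equations (64) and (65).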
8.3. Formation of State Transition Rules
Once the state transition matrix is complete, state transition rules for the $i$-th unknown can be defined according to Table 1.