TELKOMNIKA Indonesian Journal of Electrical Engineering
Vol. 12, No. 10, October 2014, pp. 7381 ~ 7388
DOI: 10.11591/telkomnika.v12i8.5342

Received December 11, 2013; Revised July 18, 2014; Accepted August 2, 2014
Anti-Occlusion Algorithm for Object Tracking Based on
Multi-Feature Fusion
Jie Cao, Liang Xuan*
School of Computer and Communication, Lanzhou University of Technology, Lanzhou 730050, P.R. China
*Corresponding author, email: xy2771753@163.com
Abstract
A complex background, especially when the object is similar to the background in color or the target gets blocked, can easily lead to tracking failure. Therefore, a fusion algorithm based on feature confidence and similarity is proposed; it can adaptively adjust the fusion strategy when occlusion occurs. The confidence is also used in occlusion detection, to overcome the problem of inaccurate occlusion determination when the target is blocked by an analogue. The experimental results show that the proposed algorithm is not only more robust under occlusion, but also performs well in other complex scenes.
Keywords: computer application, object tracking, anti-occlusion, particle filter, multi-feature, template update
Copyright © 2014 Institute of Advanced Engineering and Science. All rights reserved.
1. Introduction
Video object tracking is an important research direction of computer vision that is widely used in video surveillance, human-computer interaction, robot visual navigation, medical diagnostics and other areas. In a complex background, the target is often blocked by non-target obstacles and by other targets. How to deal with occlusion is a problem that urgently needs to be solved in current visual tracking.
References [1-2] proposed a Kalman filter algorithm to predict the target position, but it is likely to fail when the target moves erratically or is blocked for a long time. Reference [3] used the difference between predicted pixel gray values and measurements to determine whether an occlusion occurs, but it does not apply to situations in which the target trajectory changes greatly. References [4-5] proposed a tracking algorithm based on a probabilistic appearance model, which updates the target color model in time to keep the spatial distribution information of the target color, but it cannot track the target when the target is similar to the shelter. Reference [6] represented the target with multiple different windows, then used the position relations and similarities of the windows to estimate the true position of the target; it can handle partial occlusion, but is disturbed by background noise. References [7-9] handled occlusion by partitioning the target into blocks, but lacked stability and were inefficient in complex backgrounds. Reference [10] proposed an occlusion detection method based on adaptive fusion coefficients, but it has a large error when the weight distribution is relatively uniform.
Since fixed fusion methods reduce the tracking performance when the object is occluded, an adaptive fusion strategy is needed to reduce the influence of occlusion on the tracking results. In this paper, within the framework of particle filtering, the confidence dynamically adjusts the fusion weight of each feature and is also used in occlusion detection. According to the change of the total similarity, the fusion method and the template update strategy are dynamically adjusted, which reduces the effect of noise. The experiments illustrate the effectiveness and superiority of the algorithm.
2. Algorithm Principles
2.1. Particle Filter
The particle filter is an effective technique for solving non-Gaussian, non-linear system state tracking problems. It computes recursively, mainly through prediction and update steps:
1) The prediction process:

$p(x_t \mid z_{1:t-1}) = \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid z_{1:t-1})\, dx_{t-1}$   (1)
2) The update process:

$p(x_t \mid z_{1:t}) = \dfrac{p(z_t \mid x_t)\, p(x_t \mid z_{1:t-1})}{\int p(z_t \mid x_t)\, p(x_t \mid z_{1:t-1})\, dx_t}$   (2)
For a non-linear, non-Gaussian system, the integrals in formulas (1) and (2) cannot be calculated analytically. The particle filter uses a set of weighted random sampling points to approximate the posterior probability distribution of the target state; as the number of sampling points approaches infinity, it obtains the optimal Bayesian estimation of the target states. The posterior probability distribution can be approximately expressed as:

$p(x_t \mid z_{1:t}) \approx \sum_{j=1}^{N} w_t^j\, \delta(x_t - x_t^j)$   (3)
Where $N$ is the number of particles; $w_t^j$ is the weight of the $j$-th particle at time $t$, which satisfies the normalization condition $\sum_{j=1}^{N} w_t^j = 1$; $\delta(\cdot)$ is the Dirac delta function. After introducing the importance density function $q$ in the particle sampling, the weight recursion equation is:

$w_t^j \propto w_{t-1}^j\, \dfrac{p(z_t \mid x_t^j)\, p(x_t^j \mid x_{t-1}^j)}{q(x_t^j \mid x_{t-1}^j, z_{1:t})}$   (4)
Choosing the prior distribution $p(x_t \mid x_{t-1})$ as the importance sampling function, the weight updating process (4) can be simplified as:

$w_t^j \propto w_{t-1}^j\, p(z_t \mid x_t^j)$   (5)
The target state estimate is the weighted average of the particle set at time $t$:

$\hat{x}_t = \sum_{j=1}^{N} w_t^j x_t^j \Big/ \sum_{j=1}^{N} w_t^j$   (6)
Because the motion characteristics of an arbitrary target are difficult to obtain, this paper chooses the most commonly used first-order linear system as the particle filter state transfer model:

$x_t = A x_{t-1} + W$   (7)
Where $A$ is the state transition matrix, which here is the unit matrix, and $W$ is Gaussian noise with zero mean.
2.2. Feature Model
The basic idea of multi-feature tracking in the particle filter framework is as follows. First, describe the multi-feature model of the target area $Q = \{q_i\},\ i = 1, 2, \ldots, n$, where $i$ indexes a feature space and $q_i = \{\hat{q}_i^u(\hat{y})\},\ u = 1, 2, \ldots, m_i$ are the feature sub-models, described with a weighted kernel-function histogram. This is usually expressed as:

$\hat{q}_i^u(\hat{y}) = C \sum_{l=1}^{B} K\!\left(\left\| \dfrac{\hat{y} - x_l}{a} \right\|^2\right) \delta\!\left[b_i(x_l) - u\right]$   (8)
Where $\hat{y}$ is the center position of the target area; $B$ and $a$ represent the pixel number and the scale of the target area; $C$ is a normalization constant; $\delta$ is the delta function; $b_i(x_l)$ denotes the histogram index value corresponding to the pixel at image position $x_l$; $u$ indexes the sub-intervals of each feature space; $K(r) = 1 - r^2$ is a kernel function that assigns smaller weights to the pixels away from the target template center, because these pixels are more susceptible to interference from other targets or background pixels.
Then take each particle area as a target candidate and establish the corresponding candidate models $P = \{p_j\},\ j = 1, 2, \ldots, N$, where $j$ indexes the particles and $p_j^i = \{p_i^u(y_j)\},\ u = 1, 2, \ldots, m_i$ is the $i$-th feature sub-model of candidate model $j$, with $y_j$ the central position of a particle. The Bhattacharyya coefficient is used to measure the similarity between the particle area and each sub-model:

$\rho_i^j = \rho\!\left[p_i(y_j), q_i\right] = \sum_{u=1}^{m_i} \sqrt{p_i^u(y_j)\, \hat{q}_i^u}$   (9)
On this basis, calculate the particle observation probability of each feature at time $t$:

$p_i(z_t \mid x_t^j) = \dfrac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\dfrac{(d_i^j)^2}{2\sigma^2}\right)$   (10)

Where $\sigma$ is the variance of the Gaussian distribution and $d_i^j = 1 - \rho_i^j$.
3. Fusion and Occlusion
In the actual tracking process, because of the influence of noise, sensor instability or other interference factors, using a fixed combination method reduces the tracking performance, so an adaptive fusion strategy is needed to reduce their influence on the tracking results. This paper analyzes the spatial distribution and the weight distribution of the particles to measure the confidence of each feature during tracking, and uses it as a fusion factor for the observation probability and for the Bhattacharyya coefficient in occlusion determination.
3.1. Feature Fusion
Firstly, we use variances to measure the spatial distribution of the particles, and after resampling we select a certain proportion of each feature's particles as the reference particles, in order to prevent individual particles that migrate far away from causing an excessive variance. The smaller the variance of the position distribution is, the more concentrated the feature's particles are and the smaller the uncertainty is.

Definition 1: The variance of the position distribution can be defined as:
$\Sigma_i = \dfrac{1}{m} \sum_{j=1}^{m} (x_i^j - \bar{x}_i)(x_i^j - \bar{x}_i)^{\mathrm{T}}$   (11)
Where $\bar{x}_i$ is the mean of the sample positions, $m$ is the number of particles, and $x_i^j$ is the $j$-th particle of the $i$-th feature.
Then we use the observation entropy to measure the weight distribution: the greater the entropy is, the more uniform the weight distribution is, and the weaker the discrimination ability is.
Definition 2: The weight distribution can be defined as:

$H_i = -\sum_{j=1}^{N} p_i(z_t \mid x_t^j)\, \log p_i(z_t \mid x_t^j)$   (12)
The weight of a feature whose spatial distribution is more concentrated and which distinguishes the target well should be larger, so we define each feature's tracking performance evaluation as $h_i = 1/(\Sigma_i H_i)$, with $\Sigma_i$ taken as the total variance of the position distribution in (11).
Definition 3: The fusion weight of the $i$-th feature can be defined as:

$\lambda_i = h_i \Big/ \sum_{i=1}^{n} h_i$   (13)
Definition 4: The new fusion strategy can be defined as:

$p(z_t \mid x_t^j) = \alpha_{t-1} \prod_{i=1}^{n} p_i(z_t \mid x_t^j) + (1 - \alpha_{t-1}) \sum_{i=1}^{n} \lambda_i\, p_i(z_t \mid x_t^j)$   (14)
Where $\alpha_{t-1}$ is the total similarity of the target template at the previous moment.
The fusion strategy in this paper essentially unifies multiplicative fusion and weighted additive fusion into one adaptive framework, and dynamically adjusts the proportion of each feature in the observation probabilities. The method can adapt to environmental changes and achieve more robust tracking.
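Definitions 1-4 can be sketched as below. Using the scalar total variance in place of the covariance matrix of Eq. (11), and the small epsilon guards, are assumptions made for a compact illustration:

```python
import numpy as np

def fusion_weights(particles_per_feature, likelihoods_per_feature):
    """Feature confidences λ_i of Eqs. (11)-(13).

    For each feature i, the spatial spread of its particles (Eq. (11),
    reduced to the scalar total variance here) and the entropy of its
    normalized observation weights (Eq. (12)) give h_i = 1 / (Σ_i H_i);
    the λ_i are the normalized h_i of Eq. (13).
    """
    hs = []
    for pts, lik in zip(particles_per_feature, likelihoods_per_feature):
        var = np.mean(np.sum((pts - pts.mean(axis=0)) ** 2, axis=1))  # Eq. (11)
        w = lik / lik.sum()
        entropy = -np.sum(w * np.log(w + 1e-12))                      # Eq. (12)
        hs.append(1.0 / (var * entropy + 1e-12))                      # h_i
    hs = np.asarray(hs)
    return hs / hs.sum()                                              # Eq. (13)

def fused_likelihood(lam, liks, alpha_prev):
    """Adaptive fusion of Eq. (14): the total similarity alpha_{t-1}
    trades multiplicative fusion off against λ-weighted additive fusion."""
    lam, liks = np.asarray(lam), np.asarray(liks)
    product = np.prod(liks, axis=0)
    weighted_sum = (lam[:, None] * liks).sum(axis=0)
    return alpha_prev * product + (1 - alpha_prev) * weighted_sum
```

A feature whose particles are tightly clustered and whose weights are sharply peaked receives a larger λ, exactly the behavior the definitions above describe.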
3.2. Occlusion Detection and Model Update
The target or the scene is likely to change during tracking, so a single fixed target model cannot sustain long and stable tracking; moreover, the sensitivities of different features to scene changes differ, and a fixed-weight updating method can hardly meet the requirements. Therefore this paper combines the feature confidences with the Bhattacharyya coefficients to calculate the total similarity of the target template, and dynamically adjusts the update strategy according to its change.
When the target is blocked by an analogue, the similar characteristics still maintain a high similarity, so the occlusion is not easy to detect. But at this moment the particles of that feature are more dispersed, the weight distribution is relatively uniform, and the confidence is lower. So this algorithm uses the confidence to weaken the interference to the occlusion judgment caused by the similar feature.
Definition 5: Combining formulas (9) and (13), the total similarity of the target template can be defined as:

$\alpha = \sum_{i=1}^{n} \lambda_i\, \rho_i$   (15)
When the total similarity decreases greatly, occlusion occurs; if the model is still updated at this moment, it will incorporate more background noise, causing tracking errors after the occlusion ends. So at this point we should keep using the target model from the previous moment.
Definition 6: The new template update strategy can be defined as:

$q_t = \begin{cases} (1 - \alpha_t)\, q_{t-1} + \alpha_t\, p_c, & \alpha_t \geq T \\ q_{t-1}, & \alpha_t < T \end{cases}$   (16)
Where $p_c = (1 - \beta)\, q_{init} + \beta\, p_{cur}$; $\beta$ is the similarity between the initial template and the current template; $q_{init}$ is the initial template; $p_{cur}$ is the current template; $T$ is the threshold used to determine occlusion, generally 0.6. When occlusion has not occurred, the template update combines the information of the initial template, the previous template and the current template, which adapts better to changes in other complex environments.
The algorithm uses the feature confidences to weaken the interference to the occlusion judgment caused by similar characteristics, and updates the target template according to the change of the total similarity, in order to ensure the accuracy of tracking.
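The occlusion test of Eq. (15) and the gated template update of Eq. (16) can be sketched as below. Since the extracted formula is partly garbled, the exact blend of $q_{t-1}$, $q_{init}$ and $p_{cur}$ here is a reading of the surrounding description rather than a verbatim reproduction:

```python
import numpy as np

def update_template(q_prev, q_init, p_cur, lam, rhos, beta, T=0.6):
    """Occlusion-aware template update, Eqs. (15)-(16).

    alpha = sum_i lam_i * rho_i is the total similarity (Eq. (15)).
    When alpha >= T, the template blends the previous template with
    p_c = (1 - beta) q_init + beta p_cur; when alpha < T, occlusion is
    assumed and updating stops (the previous template is kept).
    """
    alpha = float(np.dot(lam, rhos))             # Eq. (15)
    if alpha < T:                                # occlusion detected
        return q_prev, alpha
    p_c = (1 - beta) * q_init + beta * p_cur     # initial + current information
    q_new = (1 - alpha) * q_prev + alpha * p_c   # Eq. (16), alpha >= T branch
    return q_new / q_new.sum(), alpha
```

Freezing the template during occlusion is what prevents background noise from leaking into the model, as the paragraph above explains.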
3.3. Algorithm Implementation
We select color and edge features to track the target: the color feature uses a 16×16×16 RGB space, and the edge feature is described by a weighted gradient direction histogram. The specific steps are as follows:
(1) Initialization. Manually select the tracking target and extract the histogram sub-models of each feature based on formula (8); set $w_0^j = 1/N$, $\alpha_0 = 0.8$;
(2) Forecast. Calculate the current frame state based on formula (7) and the previous frame state;
(3) Calculation of confidence. Calculate the feature confidences $\lambda_i$ by formula (13);
(4) Feature fusion. Calculate the particle weights on the basis of formula (14), then estimate the target state $\hat{x}_t$ by formula (6);
(5) Template update. Update the target template on the basis of formula (16);
(6) Resampling; return to step (2).
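Steps (1)-(6) can be tied together in a skeleton tracking loop. The single stand-in Gaussian likelihood and the noise level below are placeholders for the paper's color/HOE fusion, assumed only for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

def track(frames, init_state, n_particles=100):
    """Skeleton of steps (1)-(6): per-frame prediction, weighting,
    state estimation and resampling. `frames` is a list of 2-D observed
    target positions standing in for real image frames."""
    # (1) Initialization: particles around the manual selection, uniform weights.
    particles = np.tile(np.asarray(init_state, float), (n_particles, 1))
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for z in frames:
        # (2) Forecast with the first-order linear model, Eq. (7).
        particles = particles + rng.normal(0.0, 2.0, particles.shape)
        # (3)+(4) Confidence and fusion collapse here to one placeholder
        # Gaussian likelihood around the observed position z.
        weights = weights * np.exp(-np.sum((particles - z) ** 2, axis=1) / 50.0)
        weights = weights / weights.sum()
        estimates.append(weights @ particles)          # Eq. (6)
        # (5) The template update, gated by the total similarity, would go here.
        # (6) Resample and return to step (2).
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)
```

Replacing the placeholder likelihood with the fused observation probability of Eq. (14), and inserting the Eq. (16) update at step (5), recovers the full procedure above.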
4. Experimental Results and Analysis
To test the performance of the algorithm, we selected relevant video sequences. All algorithms were implemented in Matlab on a PC with an Intel 2.3 GHz processor and 1 GB of memory; the target position and initial model were set manually, and the video sequences were all taken from the standard video library [11]. The experimental data and the algorithm parameters are given in Table 1 and Table 2, where HOE is the edge direction histogram.
Table 1. Video Sequence Properties

Video   Image size   Target size   Sequence length
1       384×288      32×72         100
2       384×288      32×48         140

Table 2. Algorithm Parameters

The number of particles N   100
Features                    Color, HOE
Feature quantization        16
Resampling                  60%N
Occlusion threshold T       0.6
In Experiment 1 the video sequence, captured in the corridor in front of a shopping mall, shows the target affected by illumination changes and occlusion. From Figure 1, the method of reference [12] uses fixed fusion; although its color characteristics cannot accurately identify the target under the intense illumination from frame 10 to frame 24, it can still track the target effectively by fusing the gradient-direction edge feature. After occlusion (frame 26), because it keeps the original fusion method, the background noise is amplified and most particles move to the background target with similar color, so the track fails. With the proposed method, the two feature confidences are adaptively adjusted under the intense illumination, so the fusion result is more accurate; the confidence curves are shown in Figure 2. When occlusion occurs (frame 26), the method adaptively reduces the multiplicative fusion weight according to the change of the total similarity, so noise is not amplified much and the fusion result can effectively separate the target from the pedestrians. After the occlusion ends (frame 64), the fusion strategy adaptively reduces the additive fusion weight, which improves the tracking reliability. To further illustrate the effectiveness of the proposed algorithm, we make a quantitative comparison of the errors of the two algorithms. From Figure 3, we can find that the proposed method is more effective in the illumination and occlusion environment.
Figure 1. Some Results on Experiment 1 by using the Literature [12] Method (the first row) and the Proposed Method (the second row) (Frames: 10, 26, 45, 64, 86)

Figure 2. The Update Process of Feature Weights

Figure 3. The Error Curve on Experiment 1
Experiment 2 mainly verifies the tracking performance of the algorithm under shadows and analogue occlusion. From Figure 4, when the target moves into the shadow at frame 85, the trackings of the sum rule and the product rule drift, whereas the proposed algorithm tracks accurately. This is because the particles diverge: the accuracy of the sum rule becomes lower, while the product rule enlarges the background information. The proposed algorithm combines the advantages of both and reduces the color weight, so it reduces the effect of shadows on the results and suppresses background noise better. When the target is blocked by a similar object, the posterior probability of the sum rule cannot converge and its tracking error increases distinctly, while the product rule integrates more background noise and finally loses the target. However, the proposed method uses the total similarity to regulate the proportions of the sum rule and the product rule adaptively, and uses the confidence to allocate the feature weights within the sum rule reasonably, which ultimately makes the result more accurate and effective. The similarity curve is shown in Figure 5: when the target is blocked by a similar object, since the gray information of the target and the pedestrian is very similar, the color feature cannot detect the occlusion, but the total similarity can still detect the occlusion accurately and stop updating the template. From Figure 6, we can find that the proposed algorithm is more accurate and stable.
Figure 4. Some Results on Experiment 2 by using the Sum Rule (the first row), the Product Rule (the second row) and the Proposed Method (the third row) (Frames: 68, 85, 98, 104, 117)

Figure 5. The Bhattacharyya Coefficient Curve

Figure 6. The Error Curve on Experiment 2
In terms of real-time performance, the time consumed by our algorithm is mainly spent on feature extraction and fusion, and is also related to the target size and the number of particles. Table 3 shows the different tracking speeds (fps). The complexity of the proposed algorithm is similar to that of the sum rule and the product rule.
Table 3. The Computational Cost of Four Algorithms (fps)

Video   reference[12]   Sum rule   Product rule   Proposed method
1       9.9             10.1       10.2           9.7
2       10.9            11.2       11.4           10.6
5. Conclusion
Video target tracking is widely used in robot navigation, medical diagnosis, video surveillance, etc. However, in a complex background the target is easily blocked by analogues, and a fixed fusion method may reduce the tracking performance. Therefore, this paper dynamically adjusts the contributions of the different features and the different fusion methods to the result by using the feature confidences and the similarity, and applies the confidence to occlusion detection, ensuring that the determination result is accurate and effective. When occlusion occurs, we change the template update strategy in time, making the template information more accurate.
Experiments show that the algorithm not only has good tracking performance under analogue occlusion, but also can track the target effectively under illumination changes, shadows and other complex environments. This paper still has some limitations for long-time occlusion; the next step is to research long-time full occlusion.
Acknowledgements
This work was supported by the National Natural Science Foundation of China (61263031) and the Finance Department Foundation of Gansu Province, China (0914ZTB148).
References
[1] Gan MG, Chen J, Wang YN, et al. A target tracking algorithm based on mean shift and normalized moment of inertia feature. Acta Automatica Sinica. 2010; 36(9): 1332-1336.
[2] Karavasilis V, Nikou C, Likas C. Visual tracking by adaptive Kalman filtering and mean shift. Lecture Notes in Computer Science, Artificial Intelligence: Theories, Models and Applications. 2010; 6040: 153-162.
[3] Senior A, Hampapur A, Tian Yingli, et al. Appearance models for occlusion handling. Image and Vision Computing. 2006; 24(11): 1233-1243.
[4] Gross R, Matthews I, Baker S. Active appearance models with occlusion. Image and Vision Computing. 2006; 24(6): 593-604.
[5] Khan MS, Shah M. Tracking multiple occluding people by localizing on multiple scene planes. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2009; 31(3): 505-518.
[6] An GC, Zhang FJ, Wang HA, et al. Multi-window target tracking. Journal of Computer Research and Development. 2011; 48(11): 2023-2030.
[7] Phadke G. Robust multiple target tracking under occlusion using fragmented mean shift and Kalman filter. International Conference on Communications and Signal Processing. 2011; 2: 517-521.
[8] Yan J, Wu MY, Chen SZ, et al. Anti-occlusion tracking algorithm based on mean shift and fragments. Optics and Precision Engineering. 2010; 18(6): 1413-1419.
[9] Qi MB, Zhang L, Jiang JG, et al. Target template update method in fragment tracking. Journal of Image and Graphics. 2011; 16(6): 976-982.
[10] Li YZ, Lu ZY, Li J. Robust video object tracking algorithm based on multi-feature fusion. Journal of Xidian University. 2012; 39(4): 1-6.
[11] Fisher R. Caviar Test Case Scenarios [DB/OL]. [2011-12-05]. groups.inf.ed.ac.uk/vision/CAVIAR/CAVIARDATA1/.
[12] Birchfield S. Elliptical head tracking using intensity gradients and color histograms. Proc of IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Santa Barbara. IEEE. 1998: 232-237.