TELKOMNIKA Indonesian Journal of Electrical Engineering
Vol. 12, No. 5, May 2014, pp. 3719 ~ 3727
DOI: http://dx.doi.org/10.11591/telkomnika.v12i5.5094
Received November 10, 2013; Revised December 16, 2013; Accepted January 7, 2014
A Moving Objects Detection Method with Resistance to Illumination Change

Xiaoling Wang, Tao Zhang*, Changhong Chang
College of Information and Electronic Engineering, Zhejiang Gongshang University, Hangzhou, Zhejiang, 310018, P.R. China
*Corresponding author, e-mail: wangxl@189.cn, zhangtaollw@163.com*, yuch_2007@163.com
Abstract
Moving objects detection is conducted on the sequential images of moving objects, which is favorable to detecting, identifying and analyzing the moving objects. It has been applied in video surveillance, virtual reality, and advanced user interfaces. Based on existing research on the Frame Difference Method (FDM) and the Background Subtraction Method (BSM), and considering the short time interval between the adjacent images used for the difference, FDM is adopted for the smaller impact it suffers from scene illumination variation, which is complementary to the drawback that BSM is sensitive to environmental variation; meanwhile, BSM can detect the integral moving objects, which makes up for the disadvantage of FDM in failing to detect the integral moving objects. In this paper, we propose a moving objects detection method with resistance to illumination change. We conclude from the experiments that this method is noise-proof and can adapt to abrupt changes in illumination to ensure the accuracy of the detection.
Keywords: moving object detection, video surveillance, FDM, BSM, illumination change
Copyright © 2014 Institute of Advanced Engineering and Science. All rights reserved.
1. Introduction
With the development of society and people's increasing awareness of security, video surveillance has been widely used in many fields such as transportation monitoring, community management and campus management. The main objectives of video surveillance include moving objects detection and tracking, which are closely related to each other. Moving objects detection, as the primary step in video surveillance, directly influences the tracking, identification and analysis of moving objects. There is a variety of factors influencing the tracking of moving objects, such as background variation, sudden change of illumination in the monitoring environment, shadow and noise, which makes it more difficult to detect the moving objects [1]. Over recent years, scholars both at home and abroad have been looking for the right approach to detect moving objects in video sequences and have already made some progress. Methods already put forward include the optical flow method, which usually uses the characteristics of flow vectors over time to indicate moving regions in a video sequence [2]; the Frame Difference Method (FDM), which finds objects by using the difference between the images of the current frame and the previous or next frame within the successive frames [3]; and the Background Subtraction Method (BSM), which uses the difference between the initial background image, when no objects are tracked, and the frame image when objects are moving [4].

A. Doshi and A. G. Bors [2] introduced the optical flow method to detect moving objects. However, the calculation required by the optical flow method is too complicated and costs too much time and memory. Without the support of special hardware, this method cannot be applied to real-time systems. FDM has three advantages: simple calculation, a small amount of computation and easy implementation. But it is inaccurate in the detection zone, as it uses the previous or next frame relative to the current frame to represent the background image of the current frame [5]. Out of these three categories, BSM has received the most attention due to its computationally affordable implementation and its accurate detection of moving entities. However, BSM is highly dependent on a good background model to reduce the influence of changes in the surveillance environment due to noise, lighting, etc. [6].

In addition, there are also some other methods for moving target detection. These methods may be a single method, such as the space-time model, the hybrid graph method and feature weighting, or a combination of two methods. A method based on Horizontal Edges with Local Auto-Correlation (LAC) was used to detect vehicles [7]; it does not use the vertical edges. Mengxin Li and Jingjing Fan proposed a method which combines the inter-frame difference method with an improved background subtraction method [8]. The improved background subtraction method makes use of LBP to build the background, but the background cannot be updated in real time.

Given the above analysis, this paper proposes a method that is not only accurate in the detection zone, but also robust to noise and sudden illumination changes. The remainder of the paper is organized as follows: Section 2 describes our approach in detail. In Section 3, we present the experimental results and discussion. Finally, our conclusion and acknowledgment are provided in Section 4.
2. Proposed Method
In order to solve the problem that results from scene illumination variation, we propose a novel approach which combines the asymmetric frame difference method (AFDM) with the adaptive mixture of Gaussians method (AMoGM). First, we give the detection results that use only the AFDM; then we show the results using only the AMoGM; finally we give the detection results obtained from the proposed method. The algorithm flow chart is shown in Figure 1.
Figure 1. The Flow Chart of the Proposed Method
2.1. Image Preprocessing
To avoid visible strips with distortion, to keep the processing easily programmed and to save memory consumption, the input color image should be converted into a grayscale image in accordance with formula (1). Each pixel is saved with eight bits, giving a total of 256 gray levels [9].
Gray = 0.299 R + 0.587 G + 0.114 B      (1)
The processes by which the images are generated and transmitted are often interfered with by noise, which is mainly due to camera shake, image digitization and light jitter, etc. At the same
time, FDM is sensitive to noise, which affects the accuracy of target detection, so it is necessary to denoise the images. The median filter, which selects an active window of a certain shape, such as rectangular, linear, approximately circular or cruciform, etc., containing an odd number of pixels, is an effective image noise suppression technique. Supposing W represents a sample window, the pixel value is obtained by formula (2):
I(x, y) = med[ I(x - k, y - l), (k, l) ∈ W ]      (2)
To improve the visual effect of the image, highlight the regions of interest and facilitate subsequent analysis and processing, an image enhancement process is applied to the denoised video image, whose quality has been degraded. This paper selects histogram equalization to enhance the image. The gray histogram is evenly spread over the entire gradation range from a relatively concentrated gradation interval. By increasing the dynamic range of the pixel gray levels we improve the overall image contrast.
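As an illustration of this preprocessing chain (grayscale conversion as in formula (1), median filtering as in formula (2), and histogram equalization), a minimal Python/OpenCV sketch is given below. The function name, the 3x3 window size and the use of OpenCV are illustrative choices and are not prescribed by the method.

```python
import cv2

def preprocess(frame_bgr, ksize=3):
    """Grayscale conversion, median filtering and histogram equalization.

    The 3x3 median window and the use of OpenCV are illustrative choices;
    the method only requires an odd-sized window and 8-bit gray levels.
    """
    # Formula (1): Gray = 0.299 R + 0.587 G + 0.114 B (cv2 uses the same weights).
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Formula (2): replace each pixel by the median of its local window W.
    denoised = cv2.medianBlur(gray, ksize)
    # Histogram equalization spreads the gray levels over the full 0-255 range.
    return cv2.equalizeHist(denoised)
```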
2.2. AFDM
FDM is a method that compares the pixel values of corresponding points between adjacent frames to find moving targets: in a scene without a moving target, the change of the corresponding pixel values between adjacent frames is very small; conversely, there will be more obvious changes. Compared with the symmetrical frame difference method [10], AFDM can avoid reusing the current frame, which may have been corrupted or polluted and would bring about error detection.
I(x, y, i) is the pixel value of the ith frame at the coordinate (x, y); the corresponding pixel values in the previous frame and the next frame are expressed as I(x, y, i-1) and I(x, y, i+1) respectively. bidf(x, y, i-1, i) is a binary difference image between I(x, y, i-1) and I(x, y, i), and bidf(x, y, i-1, i+1) is a binary difference image between I(x, y, i-1) and I(x, y, i+1). Therefore the dissymmetric difference operation of the ith frame is expressed as follows:
bidf(x, y, i-1, i) = 1 if |I(x, y, i) - I(x, y, i-1)| > T, otherwise 0      (3)

bidf(x, y, i-1, i+1) = 1 if |I(x, y, i+1) - I(x, y, i-1)| > T, otherwise 0      (4)

sbidf(x, y, i) = bidf(x, y, i-1, i) ∧ bidf(x, y, i-1, i+1)      (5)

where T is the binarization threshold.
Formula (5) shows that sbidf(x, y, i) = 1 only when bidf(x, y, i-1, i) = 1 and bidf(x, y, i-1, i+1) = 1 at the same time. It can eliminate the revealed background image and give access to the moving objects region of the ith frame.
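A minimal NumPy sketch of the asymmetric difference defined by formulas (3)-(5) is given below; the binarization threshold thr is an assumed parameter, since its value is not reported here.

```python
import numpy as np

def afdm_mask(prev_gray, cur_gray, next_gray, thr=25):
    """Asymmetric frame difference, formulas (3)-(5).

    prev_gray, cur_gray, next_gray: uint8 gray images of frames i-1, i, i+1.
    thr is an assumed binarization threshold (its value is not given in the paper).
    Returns sbidf(x, y, i) as a 0/1 array.
    """
    f_prev = prev_gray.astype(np.int16)
    f_cur = cur_gray.astype(np.int16)
    f_next = next_gray.astype(np.int16)
    # Formula (3): binary difference between frames i-1 and i.
    bidf_prev_cur = (np.abs(f_cur - f_prev) > thr).astype(np.uint8)
    # Formula (4): binary difference between frames i-1 and i+1.
    bidf_prev_next = (np.abs(f_next - f_prev) > thr).astype(np.uint8)
    # Formula (5): a pixel is flagged as moving only if both differences flag it.
    return bidf_prev_cur & bidf_prev_next
```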
2.3. AMoGM
BSM is a basic method of object detection and tracking, which uses a reference image as a background model, calculates the difference image between the current frame and the reference image, and then uses a threshold to separate out the moving targets. The reconstruction and updating of the background model directly determine the detection results. Based on the multi-modal mixture of Gaussians background difference method [11, 12], K uncorrelated Gaussian distributions are used simultaneously to describe the state of a pixel, K ∈ [3, 7]; K is 4 in this article. Each Gaussian distribution has its own mean, variance and weight. In the detection process, as long as the pixel value is in accordance with any one of the K Gaussian distributions which represent the background, the pixel has the background characteristics and is considered a background pixel; otherwise, the pixel is determined to be an object pixel. The probability distribution of the estimate at the coordinate (x, y) in the ith frame is expressed as formula (6):
P(I(x, y, i)) = Σ_{k=1}^{K} ω_{i,k,x,y} · η(I(x, y, i), μ_{i,k,x,y}, σ_{i,k,x,y})      (6)
where η(I(x, y, i), μ_{i,k,x,y}, σ_{i,k,x,y}) is the kth Gaussian distribution at the coordinate (x, y) in the ith frame, which is defined as formula (7):
η(I(x, y, i), μ_{i,k,x,y}, σ_{i,k,x,y}) = (1 / ((2π)^(n/2) |Σ_{i,k,x,y}|^(1/2))) · exp( -(1/2) (I(x, y, i) - μ_{i,k,x,y})^T Σ_{i,k,x,y}^(-1) (I(x, y, i) - μ_{i,k,x,y}) )      (7)
In formula (7), n is the dimensionality of I(x, y, i); μ_{i,k,x,y}, σ_{i,k,x,y} and ω_{i,k,x,y} respectively represent the mean, variance and weight of the kth Gaussian distribution in the ith frame, and Σ_{i,k,x,y} is the corresponding covariance, which reduces to σ²_{i,k,x,y} for a single-channel image. Furthermore, Σ_{k=1}^{K} ω_{i,k,x,y} = 1. Because the gray image is single-channel, n is one when a mixture of Gaussians model is used to build the background for the gray image. When initialized, μ_init is equal to each pixel value of the first frame, σ²_init = 900 and ω_init = 0.005.
According to the established background model, for I(x, y, i), if it satisfies formula (8) with one of its K Gaussian distributions, I(x, y, i) is considered to match the background model. D is set based on experience in order to determine the similarity, D ∈ [2.5, 3.5].
|I(x, y, i) - μ_{i-1,k,x,y}| ≤ D · σ_{i-1,k,x,y}      (8)
If I(x, y, i) matches the background, then μ_{i,k,x,y}, σ_{i,k,x,y} and ω_{i,k,x,y} of the matched distribution are updated in accordance with formulas (9), (10) and (11), which means the background model is also updated. While if I(x, y, i) does not match any one of its K Gaussian distributions, we consider that I(x, y, i) has no effect on any single model, and the parameters of each Gaussian distribution of the model remain unchanged.
μ_{i,k,x,y} = (1 - ρ) μ_{i-1,k,x,y} + ρ I(x, y, i)      (9)

σ²_{i,k,x,y} = (1 - ρ) σ²_{i-1,k,x,y} + ρ (I(x, y, i) - μ_{i,k,x,y})²      (10)

ω_{i,k,x,y} = (1 - α) ω_{i-1,k,x,y} + α      (11)
In formulas (9), (10) and (11), α is a model learning factor, α ∈ [0, 1], and the greater the value of α, the faster the background is updated. In this paper, α = 1/numFrames when numFrames ≤ 200, and α = 1/200 when numFrames > 200. ρ is the parameter update rate, ρ = α / ω_{i,k,x,y}.
If I(x, y, i) does not match any one of its K Gaussian distributions, it is considered to follow a new distribution, which needs to be added to its original model. After it is added, if the number of Gaussian distributions in the model is greater than K, the K Gaussian distributions in the model are first sorted in descending order of ω_{i,k,x,y} / σ²_{i,k,x,y}; then the distribution with the minimum ω_{i,k,x,y} / σ²_{i,k,x,y} is replaced by the new Gaussian distribution, with μ_new = I(x, y, i), σ²_new = 900 and ω_new = 0.005.
After the background model has been updated, Σ_{k=1}^{K} ω_{i,k,x,y} may no longer be one, so ω_{i,k,x,y} is normalized in accordance with Equation (12):
ω'_{i,k,x,y} = ω_{i,k,x,y} / Σ_{k=1}^{K} ω_{i,k,x,y},   k = 1, 2, ..., K      (12)
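The matching test (8), the parameter updates (9)-(11) and the weight normalization (12) can be illustrated for a single pixel by the following NumPy sketch. It is a simplified illustration that uses the initial values given above (variance 900, weight 0.005); the learning factor alpha and the threshold D are passed in as parameters, and this is not the implementation used for the experiments.

```python
import numpy as np

def update_pixel_model(x, mu, var, w, alpha, D=2.5):
    """Match-and-update step for one pixel's K-component Gaussian mixture.

    x: current gray value; mu, var, w: length-K float arrays (means, variances, weights).
    Returns (mu, var, w, matched); matched tells whether formula (8) held for some component.
    Simplified sketch: an unmatched pixel directly replaces the weakest component.
    """
    matched = np.abs(x - mu) <= D * np.sqrt(var)      # formula (8)
    if matched.any():
        k = int(np.argmax(matched))                   # update the first matching component
        rho = min(1.0, alpha / w[k])                  # rho = alpha / omega, clipped to 1 (our choice)
        mu[k] = (1.0 - rho) * mu[k] + rho * x                     # formula (9)
        var[k] = (1.0 - rho) * var[k] + rho * (x - mu[k]) ** 2    # formula (10)
        w[k] = (1.0 - alpha) * w[k] + alpha                       # formula (11)
    else:
        # No match: replace the component with the smallest w / var by a new
        # Gaussian centred on x (variance 900, weight 0.005), as described above.
        k = int(np.argmin(w / var))
        mu[k], var[k], w[k] = float(x), 900.0, 0.005
    w /= w.sum()                                      # formula (12): renormalize the weights
    return mu, var, w, bool(matched.any())
```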
After the Gaussian distributions of each pixel model have been sorted, if the cumulative probability of the first b states is greater than T0 and b is the smallest such number, those b states belong to the background; the rest of the states are determined to be foreground, as in formula (13). Thus we obtain the detection result G(x, y, i). But if the ratio of the number of target pixels to the total number of pixels is greater than 0.85, we consider that the surrounding illumination intensity has changed, and the Gaussian distribution with max(ω_{i,k,x,y}) among the K Gaussian distributions in each pixel model is replaced by the corresponding pixel's distribution in the current frame: its mean is equal to the pixel value, its variance is 900 and its weight is max(ω_{i,k,x,y}). This process ensures that the new distribution is a determined background and that the next frame image is detected accurately.
B = argmin_b ( Σ_{k=1}^{b} ω_k > T0 )      (13)
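The background selection of formula (13) and the handling of sudden illumination change described above can be sketched as follows. The value T0 = 0.7 is an assumed example (the paper does not report T0), and the per-pixel array layout is an illustrative choice.

```python
import numpy as np

def classify_pixel(x, mu, var, w, T0=0.7, D=2.5):
    """Formula (13): the first b components (sorted by w / var) whose cumulative
    weight exceeds T0 are background; the pixel is foreground (1) if it matches
    none of them. T0 = 0.7 is an assumed value."""
    order = np.argsort(-(w / var))          # most reliable components first
    cum = np.cumsum(w[order])
    b = int(np.searchsorted(cum, T0)) + 1   # smallest b whose cumulative weight exceeds T0
    bg = order[:b]
    return 0 if np.any(np.abs(x - mu[bg]) <= D * np.sqrt(var[bg])) else 1

def handle_sudden_illumination(gray, fg_mask, mu, var, w):
    """If more than 85% of the pixels are flagged as foreground, the illumination
    is assumed to have changed: the max-weight component of every pixel model is
    re-centred on the current frame (variance 900, weight left at its maximum value).

    gray: current gray frame; fg_mask: 0/1 foreground mask; mu, var, w: (K, H, W) arrays.
    """
    if fg_mask.mean() > 0.85:
        k_max = np.argmax(w, axis=0)        # index of the max-weight component per pixel
        rows, cols = np.indices(gray.shape)
        mu[k_max, rows, cols] = gray.astype(np.float64)
        var[k_max, rows, cols] = 900.0
        return True
    return False
```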
2.4. Integrated Extraction of the Moving Object
For the ith frame image, Sbidf(x, y, i) is the result from the process of Section 2.2 and G(x, y, i) is from the process of Section 2.3; then the integrated extraction of the moving object, obj(x, y, i), is obtained by formula (14):
obj(x, y, i) = Sbidf(x, y, i) ∨ G(x, y, i)      (14)
Formula (14) shows that obj(x, y, i) = 0 only when Sbidf(x, y, i) = 0 and G(x, y, i) = 0 at the same time. The value of Sbidf(x, y, i) depends on the following two cases: if the ratio of the number of target pixels to the total number of pixels is greater than 0.85, we consider that the surrounding illumination intensity has changed and then Sbidf(x, y, i) = bidf(x, y, i-1, i); otherwise Sbidf(x, y, i) = sbidf(x, y, i).
After the above detection, small voids may be present within the object area, discrete noise points may exist outside it, and shadows may also be present. An image post-processing operation on the integrated extraction of the moving object is therefore necessary to improve the final detection results. Post-processing of the image mainly refers to mathematical morphology processing of the obtained binary image [13], whose basic idea is to use structural elements with a certain shape to measure and extract the corresponding shapes in the image, in order to achieve the purposes of image analysis and recognition. The basic operations of mathematical morphology include dilation, erosion, opening (erosion followed by dilation) and closing (dilation followed by erosion). The erosion operation can contract a moving object boundary inward and eliminate small and insignificant objects; selecting large structural elements can etch away objects with small connectivity. The closing operation can fill holes in the area, narrow fractures, fine gullies and gaps in the contour. So in this paper, we first use a closing operation to fill tiny holes in the body and smooth the object boundary, and then an erosion operation to eliminate some small objects caused by noise and illumination changes, and thus obtain the final results of the object detection.
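The integrated extraction of formula (14) followed by the closing-then-erosion post-processing can be sketched as below; the 5x5 elliptical structuring element is an assumed choice, since the size and shape of the structural element are not specified here.

```python
import cv2
import numpy as np

def extract_object(sbidf, g, kernel_size=5):
    """Integrated extraction (formula (14)) plus morphological post-processing.

    sbidf: binary AFDM mask (Section 2.2); g: binary AMoGM mask (Section 2.3).
    A pixel is kept whenever either mask flags it, then a closing fills tiny
    holes and smooths the boundary and an erosion removes small noise blobs.
    The 5x5 elliptical structuring element is an illustrative choice.
    """
    obj = ((sbidf > 0) | (g > 0)).astype(np.uint8) * 255   # formula (14): union of the two masks
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    closed = cv2.morphologyEx(obj, cv2.MORPH_CLOSE, kernel)  # closing: dilation then erosion
    return cv2.erode(closed, kernel)                          # erosion removes small residual objects
```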
3. Experimental Results and Analysis
The above is the description of the algorithm proposed in this paper. Next, the algorithm is simulated in MATLAB, especially in a complex environment where the illumination changes suddenly, and we have carried out both qualitative and quantitative evaluation of the experimental results. Figure 2 shows the detection results before the illumination changes in a complex environment, of which the first image is the original image. Figure 2(a) is the result obtained with the method of Section 2.2, Figure 2(b) is the binary image of the moving object detection using only the method of Section 2.3, and Figure 2(c) is the test result obtained with the algorithm proposed in this paper. Figure 3 shows the corresponding test results after the illumination change, wherein the first image is the original image, Figure 3(a) is the result obtained using only the method of Section 2.2, Figure 3(b) is the binary image of the moving target detection using only the method of Section 2.3, and Figure 3(c) is the test result of the proposed algorithm.

Figure 2. The Simulation Results before the Illumination Changes
Figure 3. The Simulation Results after the Illumination Changes
From the results in Figure 2 and Figure 3 and their contrast, it can be found that, using only the method of Section 2.2, the extracted moving object boundary is not complete and voids exist inside the target, but the robustness to environment change is relatively good. Using only the method described in Section 2.3, although the outline of the extracted moving target is more complete, the detection is more sensitive to illumination change in the environment and the results are not ideal in the case of illumination mutation. At the moment of illumination change, the probability of error detection increases and reconstructing each pixel's mixture of Gaussians model takes some time. When using the algorithm proposed in this paper, the moving object boundaries are intact, the inner cavities are relatively small, and the robustness to illumination change is good; even in the case of illumination change it is still able to detect the moving targets accurately. Due to the change in illumination there are also detection errors, but the errors are in a relatively small proportion, so the proposed method is still able to accurately detect moving objects. As time goes on, the impact of the illumination variation becomes smaller and smaller, the false detection rate quickly returns to the allowed range, and the accuracy rate rebounds to the desired range.
According to the quantitative evaluation of the proposed method in [14], Recall and Precision are used to analyze the proposed algorithm:
Recall = tp / (tp + fn)      (15)

Precision = tp / (tp + fp)      (16)
In formula (15), tp represents the total number of true positive pixels, fn represents the total number of false negative pixels, and (tp + fn) represents the total number of true positive pixels in the ground truth. In formula (16), fp is the total number of false positive pixels, and (tp + fp) indicates the total number of positive pixels in the detected binary objects mask.
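Formulas (15) and (16) amount to counting pixels of the detected binary mask against a ground-truth mask; a small NumPy helper that performs this counting is sketched below (the function name and mask conventions are illustrative).

```python
import numpy as np

def recall_precision(detected, ground_truth):
    """Pixel-level Recall (15) and Precision (16) for binary masks (non-zero = object)."""
    det = detected > 0
    gt = ground_truth > 0
    tp = np.count_nonzero(det & gt)      # true positive pixels
    fn = np.count_nonzero(~det & gt)     # false negative pixels
    fp = np.count_nonzero(det & ~gt)     # false positive pixels
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return recall, precision
```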
Table 1 shows the Precision and Recall obtained by the methods of Sections 2.2 and 2.3 and by the proposed algorithm, from which we can see that, either before or after the light changes, the proposed algorithm improves on the methods of Sections 2.2 and 2.3. Although Recall and Precision decrease due to the illumination variation, the proposed algorithm is more reliable than the methods of Sections 2.2 and 2.3, and is able to meet the requirements of accurate detection of moving objects.
Table 1. Precision and Recall of the Algorithms

Algorithm   Illumination change   FrameNum   Precision   Recall
AFDM        Before                400        85%         82%
AFDM        After                 400        83%         80%
AMoGM       Before                400        90%         87%
AMoGM       After                 400        80%         87%
Proposed    Before                400        94%         95%
Proposed    After                 400        93%         90%
4. Conclusion
After a full analysis of FDM and BSM, this paper proposes to combine AFDM and AMoGM to detect moving objects. The proposed method not only compensates for the shortcoming that AFDM cannot extract the complete moving object boundary, but also improves the robustness of AMoGM against illumination changes. From the experimental results it can be seen that the algorithm can extract a complete moving target, and its robustness to noise interference is also very good, even in the case of ambient illumination changes. However, due to changes in illumination, moving objects are always accompanied by shadows, which bring interference to moving target detection. Therefore, in the target detection process of the algorithm, it
remains necessary to add shadow processing, so as to improve the accuracy of the moving target detection.
Acknowledgements
The authors wish to thank the anonymous reviewers for their valuable comments to improve the quality of the paper. This paper was supported in part by a grant from the Project of "The research on drug safety traceability and warehouse management system in 2012 the Ministry of IOT Special". This material is based upon work funded by the Zhejiang Provincial Natural Science Foundation of China under Grant No. LQ12F01005.
References
[1] Yi Tang, Wei-Ming Liu, Liang Xiong. Improving Robustness and Accuracy in Moving Object Detection Using Section-Distribution Background Model. International Conference on Natural Computation (ICNC). 2008; 8: 167-174.
[2] A Doshi, AG Bors. Smoothing of optical flow using robustified diffusion kernels. Image and Vision Computing. 2010; 28(12): 1575-1589.
[3] KC Hui, WC Siu. Extended Analysis of Motion Compensated Frame Difference for Block Based Motion Prediction Error. IEEE Trans. Image Processing. 2007; 16(5): 1232-1245.
[4] Jiwoong Bang, Daewon Kim, Hyeonsang Eom. Motion Object and Regional Detection Method Using Block-based Background Difference Video Frame. Embedded and Real-Time Computing Systems and Application (RTCSA). 2012; 58(10): 350-357.
[5] B Ristic, S Arulampalam, N Gordon. Beyond the Kalman Filter: Particle Filters for Tracking Applications. IEEE International Conference Image Processing. 2004; 15(10): 485-492.
[6] Fan-Chieh Cheng, Shih-Chia Huang, Shanq-Jang Ruan. Illumination-Sensitive Background Modeling Approach for Accurate Moving Object Detection. IEEE Transactions on Broadcasting. 2011; 57(4): 794-801.
[7] Hongjin Zhu, Honghui Fan, Shuqiang Guo. Moving Vehicle Detection and Tracking in Traffic Image based on Horizontal Edges. TELKOMNIKA Indonesian Journal of Electrical Engineering. 2013; 11(11): 6477-6483.
[8] Mengxin Li, Jingjing Fan, Ying Zhang, Rui Zhang, Weijing Xu, Dingding Hou. Moving Object Detection and Tracking Algorithm. TELKOMNIKA Indonesian Journal of Electrical Engineering. 2013; 11(10): 5539-5544.
[9] Ji He. Moving object detection and motion trajectory analysis. Dalian: Institute of Signal and Information Processing, Dalian University of Technology. 2009.
[10] Shisong Zhu, Min Gu, Jing Liu. Moving Vehicle Detection and Tracking Algorithm in Traffic Video. TELKOMNIKA Indonesian Journal of Electrical Engineering. 2013; 11(6): 3053-3059.
[11] C Stauffer, E Grimson. Adaptive background mixture models for real-time tracking. IEEE Int. Conf. Computer Vision and Pattern Recognition. 1999; 2: 246-252.
[12] B Lei, L Xu. Real-time outdoor video surveillance with robust foreground extraction and object tracking via multi-state transition management. Pattern Recognition Letters. 2006; 27(15): 1816-1825.
[13] Bai Jintao. Research on Algorithm of moving object tracking in video. Tianjin: Institute of Signal and Information Processing, Tianjin University. 2009.
[14] Fan-Chieh Cheng, Shih-Chia Huang, Shanq-Jang Ruan. Scene Analysis for Object Detection in Advanced Surveillance Systems Using Laplacian Distribution Model. Systems, Man, and Cybernetics, Part C: Applications and Reviews. 2011; 41(5): 589-598.