TELKOMNIKA, Vol. 11, No. 4, April 2013, pp. 2079~2083
ISSN: 2302-4046
Received January 11, 2013; Revised February 23, 2013; Accepted March 5, 2013
Background Modeling Method based on 3D Shape
Reconstruction Technology
Xue Yuan*1, Xiaoli Hao2, Houjin Chen3, Xueye Wei4
1,2,3,4 School of Electronic and Information Engineering, Beijing Jiaotong University,
No.3 Shang Yuan Cun, Hai Dian District, Beijing, China
*Corresponding author, e-mail: xyuan@bjtu.edu.cn
Abstract
In this research, we present a novel dynamic background modeling method based on reconstructed 3D shapes, which can solve the background modeling problems of multiple cameras in real time. While 3D shape reconstruction is a popular technology widely used for detecting, tracking, or identifying various objects, little effort has been made to apply this useful method to background subtraction. In this work, we propose an approach that uses 3D shape reconstruction technology to develop a novel decision-making mechanism for background image updating. This 3D shape reconstruction based background subtraction method is adaptive to changes in illumination and is capable of handling sudden illumination changes as well as complex dynamic scenes efficiently.
Keywords: background subtraction, intruder detection, 3D shape reconstruction, multi-camera
Copyright © 2013 Universitas Ahmad Dahlan. All rights reserved.
1. Introduction
Moving object detection and segmentation from a video sequence is one of the most essential tasks in object tracking and video surveillance [1-8], [13-15]. A common approach is to use background subtraction, which first builds a statistical background model and then labels as foreground the pixels that are unlikely to be generated by this model. Although a large number of background subtraction methods have been reported in the literature over the past few decades, challenges remain when the scenes to be modeled contain dynamic backgrounds such as waving trees, illumination changes, etc.
In [4], the Gaussian-based methods assume that the pixel color values over time can be modeled by one or multiple Gaussian distributions. In [5], the Local Dependency Histogram (LDH) was proposed, which is computed over the region centered on a pixel; the LDH effectively extracts the spatial dependency statistics of the center pixel, which contain substantial evidence for labeling the pixel in dynamic scenes. A technique used widely for background subtraction is the adaptive Gaussian mixtures method of [7]. These methods classify each pixel independently, and morphology is used later to create homogeneous regions in the segmented image. [8-9] present the Bayesian approach, which is an alternative segmentation scheme: the background, shadow, and foreground classes are considered to be stochastic processes which generate the observed pixel values according to locally specified distributions. These methods can adapt to slow changes in illumination, but they perform poorly in complex dynamic scenes and in handling sudden illumination changes. Their performance deteriorates notably in the presence of dynamic backgrounds such as waving trees, illumination changes, etc.
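The per-pixel Gaussian idea behind [4] can be sketched with a minimal single-Gaussian model (real implementations keep a mixture per pixel; the learning rate `alpha`, the initial variance, and the deviation threshold `k` below are illustrative assumptions, not values from [4]):

```python
# Minimal single-Gaussian-per-pixel background model (sketch of the
# idea behind the Gaussian-based methods; a mixture would keep several
# of these per pixel and pick the best-matching component).

class PixelGaussian:
    def __init__(self, init_value, alpha=0.05, k=2.5):
        self.mean = float(init_value)   # running mean of the pixel value
        self.var = 25.0                 # running variance (assumed initial guess)
        self.alpha = alpha              # learning rate (assumed value)
        self.k = k                      # deviation threshold in standard deviations

    def is_foreground(self, value):
        # A pixel is labeled foreground if it deviates too far from the model.
        return abs(value - self.mean) > self.k * (self.var ** 0.5)

    def update(self, value):
        # Blend the new observation into the model; this slow adaptation is
        # why sudden illumination changes produce large false-alarm regions.
        d = value - self.mean
        self.mean += self.alpha * d
        self.var = (1 - self.alpha) * self.var + self.alpha * d * d

p = PixelGaussian(100)
print(p.is_foreground(103))   # small deviation -> background: False
print(p.is_foreground(180))   # large deviation -> foreground: True
```

After many `update` calls with a new stable value, the model converges to it, which is the slow illumination adaptation described above.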
It is noted that 3D shape reconstruction technology has been used successfully and widely for detecting, tracking, or identifying objects; however, to the best of our knowledge, there has been no public report of using 3D shape reconstruction technology for background subtraction. In this work, we propose a method to build a decision-making unit that is able to judge, based on 3D shape reconstruction technology, which part of the background image should be updated immediately and which part should remain unchanged. It turns out that the proposed background subtraction method is able to adapt to changes in illumination, handle sudden illumination changes, and cope with complex dynamic scenes efficiently.
2. Background subtraction based on 3D shape reconstruction
The 3D shapes are reconstructed momentarily using the shape from silhouette (SFS) technique introduced in [10-12]. A 2D example of the visual cone is illustrated in Figure 1. Figure 1 shows different viewpoints C1, C2, which each have a different view of the intruder I, and silhouettes S1, S2, which are obtained using conventional background subtraction. The intersection of the projected silhouettes forms H, the visual hull.
Figure 1. 3D Reconstruction using the Shape from Silhouette Method
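The visual-hull construction depicted in Figure 1 can be sketched as a simple carving loop. This is a 2D toy version under a strong simplifying assumption (two orthographic views along the grid axes, silhouettes given as intervals); real SFS implementations such as [10] use calibrated perspective projection matrices:

```python
# Toy 2D "visual hull": carve a grid using two views.  Camera C1 is
# assumed to look along the y-axis (its silhouette S1 is an x-interval)
# and C2 along the x-axis (silhouette S2 is a y-interval).  The names
# S1, S2, H follow Figure 1; the orthographic set-up is an assumption
# made only for illustration.

def visual_hull(grid_size, s1_x, s2_y):
    """Return the set of cells whose projections lie inside both silhouettes."""
    hull = set()
    for x in range(grid_size):
        for y in range(grid_size):
            # A cell survives carving only if every view sees it as occupied.
            if s1_x[0] <= x <= s1_x[1] and s2_y[0] <= y <= s2_y[1]:
                hull.add((x, y))
    return hull

H = visual_hull(10, s1_x=(2, 4), s2_y=(3, 5))
print(len(H))  # 3 x 3 = 9 cells survive the carving
```

The hull H is an over-approximation of the true object I, which is exactly the property the method exploits: a tall hull can only come from a tall object, while a shadow on the ground produces a flat hull.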
The key steps proposed in this work are described in what follows. The performance of background subtraction depends mainly on the modeled background images. The adaptive background images are updated in a timely manner when dynamic changes occur in the background, while the regions of intruders entering the surveillance area are held unchanged.
The adaptive algorithms for background image processing are:

B_{k+1}(x, y) = B_k(x, y),  if (x, y) ∈ JudgeArea and (x, y) ∈ I_k
B_{k+1}(x, y) = F_k(x, y),  if (x, y) ∈ JudgeArea and (x, y) ∉ I_k
Figure 2. The Flow of Judging the Intruder Segment

Figure 3. The Flow of Updating the Background Images

Figure 4. 3D Shape Reconstruction for the Intruder and the Shadow
[Figure content: in Figure 2, the height H(I) of the reconstructed shape I is computed; if H(I) > threshold, I is judged as the intruder segment, otherwise as a dynamic background changes segment. In Figure 3, each pixel (x, y) with (x, y) ∈ I_k keeps B_{k+1}(x, y) = B_k(x, y), otherwise B_{k+1}(x, y) = F_k(x, y). The figures are labeled with cameras C1, C2, silhouettes S1-S8, hulls H, H1, H2, and intruder I.]
Here, JudgeArea is the common field of view of the two cameras, k is the frame number, B is the background image, I is the intruder regions, and F is the foreground image.
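The update rule can be sketched per pixel as follows (a minimal version; the 2D-list image representation and the boolean masks for JudgeArea and I_k are illustrative assumptions):

```python
# Selective background update: pixels inside JudgeArea that belong to an
# intruder region I_k keep the old background value, while all other
# JudgeArea pixels are refreshed from the current foreground image F_k.

def update_background(B_k, F_k, judge_area, intruder_mask):
    """All inputs are equally sized 2D lists; the two masks hold booleans."""
    rows, cols = len(B_k), len(B_k[0])
    B_next = [row[:] for row in B_k]          # start from the old background
    for y in range(rows):
        for x in range(cols):
            if judge_area[y][x] and not intruder_mask[y][x]:
                B_next[y][x] = F_k[y][x]      # dynamic change: update immediately
            # intruder pixels (and pixels outside JudgeArea) stay unchanged
    return B_next

B = [[10, 10], [10, 10]]
F = [[99, 99], [99, 99]]
judge = [[True, True], [True, True]]
intruder = [[False, True], [False, False]]
print(update_background(B, F, judge, intruder))  # [[99, 10], [99, 99]]
```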
Figure 2 illustrates the flow of judging the intruder segment; the threshold is chosen based on experience. Figure 3 illustrates the flow of updating the background images. To illustrate this method, consider 3D shape reconstruction using the SFS technique as depicted in Figure 4. I is an intruder entering the surveillance area; its silhouettes in the different viewpoints C1, C2 are S5, S7, and the reconstructed 3D shape of the intruder I is H1. Because the height of H1 is higher than the threshold, the silhouettes S5, S7 are judged as silhouettes of intruders. On the other hand, H2 is the reconstructed 3D shape of the shadow or illumination changes appearing on the ground; because the height of H2 is lower than the threshold, the silhouettes S6, S8 are judged as dynamic background changes. We separate the background image into the following segments: 1) the segments containing the dynamic background changes (such as S6, S8); 2) the segments that show no changes with respect to the existing background image; 3) the segments containing intruders (such as S5, S7). The background images are modeled based on the following rules: the segments containing the dynamic background changes should be updated immediately, and the segments containing intruders should not be updated. The areas which do not belong to the JudgeArea are updated using a conventional method such as the Gaussian-based method.
Figure 5. Selecting the Threshold
The threshold is dynamic in our research and is selected using the method illustrated in Figure 5. In order to select the threshold for silhouette S3, we assume silhouette S3 is the projection of a shadow appearing on the ground in the real world; ignoring the projection on camera C2, the 3D shape H2 can be reconstructed. The height of H2 (height1) can then be selected as the threshold.
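With the shadow-derived threshold in hand, the height test of Figure 2 reduces to a one-line decision (the hull heights and the threshold value below are hypothetical numbers used only to exercise the rule):

```python
# Height-based decision (sketch of the Figure 2 flow): a reconstructed
# shape is an intruder segment only if it rises above the threshold,
# which is derived from the height of a shadow hull (Figure 5).

def classify_segment(hull_height, threshold):
    """Return the label for a reconstructed shape of height H(I)."""
    return "intruder" if hull_height > threshold else "dynamic background change"

height1 = 0.15  # threshold from a shadow hull H2 (assumed value)
print(classify_segment(1.70, height1))   # a person-sized hull -> intruder
print(classify_segment(0.05, height1))   # a flat ground shadow -> dynamic background change
```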
3. Experiment

Figure 6. The Examples of the Experiment Results: (a) the background image (waving tree); (b) the input image; (c) the modeled background image
The test images were manually captured. We selected video sequences of scenes for testing, and a total of 300 images were used for this experiment. These include 220 images of indoor scenes and 80 images of outdoor scenes; all the test images contain an intruder or dynamic background changes simultaneously. There are two types of dynamic background changes in the test images, i.e., sudden illumination changes (Figure 7) and waving trees (Figure 6). Examples of the test images and the result images are illustrated in Figure 6 and Figure 7. We compared against the Gaussian-based methods to demonstrate the efficiency of our proposed method. In this experiment, we define the case in which intruder regions are mistakenly embedded in the background image as false updating, and the case in which dynamic background changes are not embedded in the background in a timely manner as miss updating. The experimental results show that with the proposed method the false updating rate is 0% and the miss updating rate is 0.67% (2 miss updating images out of 300 images); the reason for the miss updating is that the labels of the dynamic background changes and the labels of the intruder are conjoint. For example, as shown in Figure 7(a), since the labels of the dynamic background changes and the labels of the intruder are conjoint, the reconstructed 3D shape contains both the part of the sudden illumination changes and the part of the intruder; the resulting modeled background image is illustrated in Figure 7(c), where the region of sudden illumination changes is not embedded in the background in a timely manner.
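The two error rates follow from simple counting over the 300-image test set (the counts are those reported above):

```python
# False/miss updating rates from the experiment counts reported in the text.

def rate(errors, total):
    """Error rate as a percentage of the test set."""
    return 100.0 * errors / total

total_images = 300
print(round(rate(2, total_images), 2))    # miss updating (proposed): 0.67 (%)
print(round(rate(0, total_images), 2))    # false updating (proposed): 0.0 (%)
print(round(rate(10, total_images), 2))   # miss updating (Gaussian-based): 3.33 (%)
```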
Figure 7. The Examples of the Experiment Results (miss updating): (a) the background image (sudden illumination changes); (b) the input image; (c) the modeled background image
In order to validate the efficiency of the proposed method, we compared it experimentally with the Gaussian-based methods introduced in [7]. Using the Gaussian-based methods, the false updating rate is 5% (15 false updating images out of 300 images) and the miss updating rate is 3.3% (10 miss updating images). As the experimental results show, the efficiency of the proposed method is observably better than that of the conventional method, because the proposed method can handle dynamic background changes such as sudden illumination changes in a timely and correct manner.
4. Conclusion
In this work we presented a novel dynamic background subtraction method, in which the 3D shapes are reconstructed momentarily using the shape from silhouette technique. The experimental results demonstrated the efficiency of the proposed algorithms.
Acknowledgment
The work was supported by the Specialized Research Fund for the Doctoral Program of Higher Education under Grant No. 20110009120003 and Grant No. 20110009110001; the National Natural Science Foundation of China under Grant No. 61271305 and No. 60972093; and the School Foundation of Beijing Jiaotong University under Grant No. W11JB00460 and No. 2011JBZ010.
References
[1] X Yuan, Y Song, X Wei. An Automatic Surveillance System Using Fish-eye Lens Camera. Chinese Optics Letters. 2011; 9(2).
[2] X Yuan, Y Song, X Wei. Parallel sub-neural network system for hand vein pattern recognition. Chinese Optics Letters. 2011; 9(5).
[3] X Yuan, X Wei, Y Song. Pedestrian Detection for Counting Applications Using a Top-View Camera. IEICE Trans. Inf. & Syst. 2011; E94(6).
[4] C Stauffer, WEL Grimson. Learning patterns of activity using real-time tracking. IEEE Trans. PAMI. 2000; 22(8).
[5] S Zhang, H Yao, S Liu. Dynamic Background Subtraction Based on Local Dependency Histogram. IJPRAI. 2009; 23(7).
[6] S Lee, H Woo, Moon Gi Kang. Global Illumination Invariant Object Detection With Level Set Based Bimodal Segmentation. IEEE Trans. Circuits and Systems for Video Technology. 2000; 20(4).
[7] DS Lee. Effective Gaussian mixture learning for video background subtraction. IEEE Trans. PAMI. 2005; 27(5).
[8] Y Wang, KF Loe, JK Wu. A dynamic conditional random field model for foreground and shadow segmentation. IEEE Trans. PAMI. 2006; 28(2).
[9] C Benedek, T Sziranyi. Bayesian Foreground and Shadow Detection in Uncertain Frame Rate Surveillance Videos. IEEE Trans. Image Processing. 2008; 17(4).
[10] JL Landabaso, M Pardàs, JR Casas. Reconstruction of 3D Shapes Considering Inconsistent 2D Silhouettes. IEEE International Conference on Image Processing. 2006.
[11] A Laurentini. The Visual Hull Concept for Silhouette-Based Image Understanding. IEEE Trans. PAMI. 1994; 16(2).
[12] A Bottino, A Laurentini. Introducing a new problem: shape-from-silhouette when the relative positions of the view-points is unknown. IEEE Trans. PAMI. 2003; 21(11).
[13] Feng Wei, Bao Wenxing. A new technology of remote sensing image fusion. Telkomnika. 2012; 10: 551-556.
[14] Sun Jun, Wang Yan, Wu Xiaohong, Zhang Xiaodong, Gao Hongyan. A new image segmentation algorithm and its application in lettuce object segmentation. Telkomnika. 2012; 10: 557-563.
[15] Usman Akram. Retinal Image Preprocessing: Background and Noise Segmentation. TELKOMNIKA: Indonesian Journal of Electrical Engineering. 2012; 10(3): 537-44.