TELKOMNIKA, Vol. 11, No. 5, May 2013, pp. 2716 ~ 2722
ISSN: 2302-4046
Received January 20, 2013; Revised March 18, 2013; Accepted March 25, 2013
Features Extraction for Object Detection Based on Interest Point
Amin Mohamed Ahsan*, Dzulkifli Bin Mohamad
Faculty of Computing, Universiti Teknologi Malaysia
81310, Skudai, Johor, Malaysia
Telp: +(6)07-553 3333, fax: +(6)07-556 5044 / 557 4908
*Corresponding author, e-mail: am2002as@gmail.com, dzulkifli57@gmail.com
Abstract
In computer vision, object detection is an essential process for further processes such as object tracking, analysis, and so on. In the same context, feature extraction plays an important role in detecting the object correctly. In this paper we present a method to extract local features based on interest points: key points are detected within an image, and then the histogram of gradient (HOG) is computed for the region surrounding each point. The proposed method uses the speeded-up robust features (SURF) method as the interest point detector and excludes its descriptor; the new descriptor is computed using the HOG method. The proposed method thus combines the advantages of both methods. To evaluate the proposed method, we used the well-known Caltech101 dataset. The initial result is encouraging in spite of using a small amount of training data.
Keywords: Object Detection, SURF, HOG, k-NN
Copyright © 2013 Universitas Ahmad Dahlan. All rights reserved.
1. Introduction
Nowadays, applications of object detection and classification have become some of the leading uses of computer vision in many fields such as industry, robotics, security, mobile, and internet services. In robotics, object classification and localization are commonly used to recognize a certain object within a scene; moreover, facial recognition plays an important role in security issues.

Object detection techniques or methods are essential for further tasks (i.e., classification, categorization, analysis, etc.). Yilmaz [1] categorized object detection methods into four categories: point-based, segmentation-based, background-based, and supervised-based.
Mean-shift [2], graph-cut [3], and active contour [4] are examples of segmentation-based methods to detect the object. The background modeling approaches used to detect the object within a scene vary; mixture of Gaussians [5], Eigenbackground [6], and dynamic texture background [7] are the common models based on modeling the background. On the other hand, Support Vector Machines [8], neural networks [9], and adaptive boosting [10] are used to detect the object as supervised techniques.
A point-based detector is used to search for points that demonstrate quick changes in both the horizontal and vertical orientation of their intensity. Such points are called keypoints or interest points, and they are invariant to changes in transformation and illumination. Commonly used interest point detectors include the Harris interest point detector [11], the Scale Invariant Feature Transform (SIFT) [12], and Speeded-Up Robust Features (SURF) [13]. While SIFT and SURF are invariant to illumination, rotation, and scale, the Harris interest point detector is not invariant to scale.
At the same time, the Harris detector is faster than both SIFT and SURF, but less accurate. Bauer et al. [14] performed a comparison study on both SIFT and SURF regarding invariance against rotation, scale change, image noise, change in lighting conditions, and change of viewpoint. Throughout their tests, SIFT performed slightly better than SURF, but it is slower and more computationally complex than SURF.
Although SURF is optimal in terms of detecting interest points and has a reasonable feature dimension (descriptor), it still has some drawbacks with respect to rotation transformation and illumination (e.g., shadow).
Relative to the aforementioned issues, Dalal and Triggs [15] presented a method based on the histogram of gradient (HOG) as a descriptor or feature extractor, as explained in Section 2.2. HOG performs well in terms of invariance against rotation and illumination, especially shadow, but it is not invariant to scale transformation.
Based on the above, we present a method that takes advantage of SURF only to detect the interest points. Our descriptor uses the HOG method instead of the SURF descriptor: it computes the HOG of the region about each interest point, where the points are detected using OpenSURF from [19].
2. Research Method
2.1. SURF Detector and Descriptor
SURF [13] involves two stages: the first is to detect the interest points, the second is to construct the descriptor. To detect the key points within an image, as shown in Figure 1, four steps are involved: 1) calculate the integral image; 2) compute the Hessian matrix; 3) construct the scale space; 4) localize the interest points. Once the interest points are detected, the descriptor is built in two steps: first, orientation assignment; second, computing the sums of Haar-wavelet responses.
Figure 1. SURF steps
To increase the performance of SURF, an intermediate image representation called the "integral image" [16] is used to speed up the calculation of any rectangular area by (1).

I_\Sigma(X) = \sum_{i=0}^{i \le x} \sum_{j=0}^{j \le y} I(i,j)    (1)
Where (x,y) is a point in the original image I and I_\Sigma(X) is the integral image at location X = (x,y)^T, which represents the summation of all pixels in image I within the rectangular region formed by the origin and X.
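As an illustration (a sketch, not the authors' code), the integral image of (1) and a constant-time rectangle sum can be written with NumPy cumulative sums:

```python
import numpy as np

def integral_image(img):
    """Each entry holds the sum of all pixels above and to the
    left of it, inclusive (Eq. 1)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] from four integral-image lookups."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```

Once the integral image is built, any box-filter response costs only four array lookups, regardless of the box size.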
To detect blob-like structures at locations, the Hessian matrix is used because of its good performance [13], as in (2).

H(X, \sigma) = \begin{bmatrix} L_{xx}(X,\sigma) & L_{xy}(X,\sigma) \\ L_{xy}(X,\sigma) & L_{yy}(X,\sigma) \end{bmatrix}    (2)
Where H is the Hessian matrix for point X = (x,y) at scale σ in image I, and L_{xx}(X,σ) is the convolution of the Gaussian second-order derivative with the image at point X; similarly for L_{yy} and L_{xy}.
To get an accurate approximation of the Hessian determinant, Bay [13] proposed a formula using the approximated Gaussian, as in (3).

\det(H_{approx}) = D_{xx} D_{yy} - (0.9\, D_{xy})^2    (3)
Where D_{xx}, D_{yy}, and D_{xy} are the approximations of the Gaussian second-order derivatives.
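Equation (3) is a direct arithmetic expression; a one-line transcription, assuming the box-filter responses Dxx, Dyy, and Dxy are already available:

```python
def hessian_det_approx(dxx, dyy, dxy, w=0.9):
    """Approximate Hessian determinant of Eq. (3); the weight w ~ 0.9
    balances the box-filter approximation of the Gaussian derivatives [13]."""
    return dxx * dyy - (w * dxy) ** 2
```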
In contrast to Lowe [12], Bay [13] used increasing filter sizes to build the pyramid representing the scale space. Instead of building different scales of the original image, Bay built filters of different sizes to apply to the original image, as shown in Figure 2. Thus, SURF is computationally efficient and scale invariant.
Figure 2. Scale-space: SIFT (left), SURF (right).
Interest points are localized over all scales in a 3x3x3 neighborhood by applying non-maximum suppression as in [17]. For orientation determination, the Haar-wavelet responses in the x and y directions are calculated with size 4s (s: scale) within a radius of 6s around each detected point.
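The 3x3x3 non-maximum suppression amounts to keeping only responses that exceed all 26 neighbors across position and adjacent scales. A brute-force sketch (the variant in [17] is faster, avoiding redundant comparisons):

```python
import numpy as np

def local_maxima_3x3x3(responses, threshold=0.0):
    """responses: array of shape (scales, rows, cols).
    Return (scale, row, col) triples whose response strictly exceeds
    all 26 neighbors in the 3x3x3 scale-space neighborhood."""
    n_scales, n_rows, n_cols = responses.shape
    maxima = []
    for i in range(1, n_scales - 1):
        for j in range(1, n_rows - 1):
            for k in range(1, n_cols - 1):
                v = responses[i, j, k]
                if v <= threshold:
                    continue
                block = responses[i-1:i+2, j-1:j+2, k-1:k+2]
                # the center is the only entry >= v iff v is a strict maximum
                if (block >= v).sum() == 1:
                    maxima.append((i, j, k))
    return maxima
```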
To get the dominant orientation, the sums of all responses within a sliding orientation window of size π/3 are calculated. The orientation of the interest point is the longest such vector over all windows. Finally, the components of the descriptor are calculated by dividing each window into 4x4 sub-regions, then applying the Haar-wavelet again on each sub-region to get the final vector as follows:

v = \left( \sum d_x, \sum d_y, \sum |d_x|, \sum |d_y| \right)
Where each sub-region gives four values, which means 4x4x4 = 64 values for each interest point.
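For one sub-region, the four descriptor entries are the sums of the Haar responses dx, dy and of their absolute values; a small sketch assuming the responses are given as arrays:

```python
import numpy as np

def subregion_vector(dx, dy):
    """Four descriptor entries for one sub-region:
    (sum dx, sum dy, sum |dx|, sum |dy|)."""
    return np.array([dx.sum(), dy.sum(),
                     np.abs(dx).sum(), np.abs(dy).sum()])
```

Concatenating this vector over the 4x4 sub-regions yields the 64-dimensional SURF descriptor.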
2.2. Histogram of Gradient
Dalal and Triggs [15] presented a method based on a grid of histograms of orientation gradients (HOG) as descriptors; those descriptors represent the feature set for the object. This method involves five steps, as shown in Figure 3.
Figure 3. An overview of static HOG feature extraction, Dalal [15].
The first step is applying normalization equalization on an image in order to reduce the effects of illumination variance and local shadowing. The next step is to compute the first-order gradients for further resistance to illumination variations. The third step involves dividing the image into small sub-regions called "cells"; a histogram of gradients is accumulated over all pixels within each cell. The fourth step is to normalize the cells across larger regions, each containing a group of cells and called a "block", to get better illumination invariance. The last step is collecting the HOG over all overlapped blocks, which together are considered the descriptor.
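The heart of the second and third steps, first-order gradients and a per-cell orientation histogram, can be sketched in a few lines. This simplification skips the normalization stages and uses hard binning instead of the interpolation of [15]:

```python
import numpy as np

def cell_hog(cell, n_bins=9):
    """Unsigned-orientation (0..180 deg) histogram of gradient
    magnitudes for one cell: HOG steps 2-3 in miniature."""
    gy, gx = np.gradient(cell.astype(float))   # first-order gradients
    mag = np.hypot(gx, gy)                     # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m                           # accumulate magnitude per bin
    return hist
```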
2.3. Proposed Method
In this paper we present a new method to extract features used to detect an object; this method is based on the SURF detector and HOG. The k-nearest neighbor algorithm (k-NN) is used as a temporary classifier to examine the feature extraction method; Figure 4 shows the proposed method.
Figure 4. Proposed method: input images → interest point detector (SURF) → descriptor (HOG) → classifier (k-NN)
First, the input images were divided into two groups: a positive group, which represents the object, and a negative group, which represents non-objects. The second step is getting the interest points within the images of each group; only the points that are corners are taken. This can be done by choosing the interest points whose Laplacian value is greater than 1, as shown in Figure 5.
Figure 5. Interest points: edge and corner (left), corner only (right)
The fourth step is to compute the HOG for each interest point. To do that, we compute the HOG for a square area that surrounds the interest point, with the interest point at its center, as shown in Figure 6.
Figure 6. Region about an interest point
We did not have to apply all the steps mentioned in [15]; we compute the HOG with an overlapped sliding window on the region that has been taken, which yields 81 features for each interest point. Before applying the k-NN classifier, all positive features are labeled as object and get 1 for their group; the negative features take 0 for their group. Then all features are combined into two matrices: one for the features and the second for the groups.
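The labeling and stacking just described can be sketched as follows (hypothetical array shapes; 81 HOG features per interest point as stated above):

```python
import numpy as np

def build_training_matrices(pos_feats, neg_feats):
    """Combine per-point feature vectors into one feature matrix and
    one group vector: 1 = object (positive), 0 = non-object (negative)."""
    features = np.vstack([pos_feats, neg_feats])
    groups = np.concatenate([np.ones(len(pos_feats)),
                             np.zeros(len(neg_feats))])
    return features, groups
```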
The last step is applying the k-NN classifier to examine the features. The k-NN classifier does not require much setup; therefore, we used it as a temporary classifier. Figure 7 shows some examples.
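A minimal k-NN classifier of the kind used here can be sketched with Euclidean distance and a majority vote (a simplified stand-in, not the exact implementation used in the paper):

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Label a query vector by majority vote of its k nearest
    training vectors (Euclidean distance)."""
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    # majority vote over the k nearest labels
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]
```

Its only setting is k, which is why it is convenient as a temporary classifier for probing a feature extractor.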
Figure 7. k-NN example result: object and non-object (left), object only (right).
Only three regions have been taken to represent the object (i.e., a human face), which are the regions surrounding the eyes, nose, and mouth.
3. Results
To evaluate the performance of the proposed method, we used a number of images from the Caltech101 dataset, as shown in Table 1.
Table 1. Dataset used

Dataset      Phase   Images     No. of images   Total
Caltech101   Train   Positive   28              52
                     Negative   24
             Test    Both       10              10
The measurements used to evaluate the performance of the proposed method are the detection rate (sometimes called sensitivity), specificity, and precision, as described in [18]. The three measures are computed by (4), (5), and (6) respectively.

\text{Detection rate} = \frac{TP}{TP + FN}    (4)

\text{Specificity} = \frac{TN}{TN + FP}    (5)

\text{Precision} = \frac{TP}{TP + FP}    (6)
Where TP, TN, FP, and FN are true positives, true negatives, false positives, and false negatives respectively.
Table 2 shows the TP, TN, FP, FN, all positives, and all negatives obtained by the proposed method for each test image used in the test phase. The summations are also computed and used to compute the three measures mentioned above.
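As a check, plugging the summed counts from Table 2 (TP = 365, TN = 1761, FP = 38, FN = 63) into (4)–(6) reproduces the reported rates up to rounding:

```python
def detection_rate(tp, fn):
    # Eq. (4): sensitivity / detection rate
    return tp / (tp + fn)

def specificity(tn, fp):
    # Eq. (5)
    return tn / (tn + fp)

def precision(tp, fp):
    # Eq. (6)
    return tp / (tp + fp)
```

With the Table 2 sums: detection_rate(365, 63) ≈ 0.853, specificity(1761, 38) ≈ 0.979, precision(365, 38) ≈ 0.906.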
The detection rate, specificity, and precision obtained using our method are 85.3%, 97.8%, and 90.5% respectively.
The initial result shows good performance although only a few images have been used for training. Moreover, the k-NN classifier used in our work serves only to examine the proposed extractor, since this is a feature extraction issue rather than a classification one.
Table 2. Result

Image No.   TP    TN     FP   FN   All_P(a)   All_N(b)
1           41    357    2    7    43         364
2           41    228    7    8    48         236
3           34    117    6    4    40         121
4           27    131    7    2    34         133
5           34    72     1    11   35         83
6           41    161    1    6    42         167
7           39    194    5    7    44         201
8           35    133    4    4    39         137
9           37    201    3    6    40         207
10          36    167    2    8    38         175
Sum         365   1761   38   63   403        1824

a. all positive points, b. all negative points
4. Conclusion
In this paper, we have introduced a method to extract new features for object detection based on the SURF detector and the HOG descriptor, with some modification of the aforementioned methods. Initial results are encouraging. Currently, we are working on other classifiers such as SVM and ANN, in line with continued enhancement of the feature extractor. Other interest point detectors such as SIFT will also be taken into consideration.
References
[1] Yilmaz A, Javed O, Shah M. Object tracking: A survey. ACM Computing Surveys (CSUR). 2006; 38(4): 13.
[2] Comaniciu D, Meer P. Mean shift analysis and applications. In IEEE International Conference on Computer Vision (ICCV). 1999; 2: 1197–1203.
[3] Shi J, Malik J. Normalized cuts and image segmentation. IEEE Trans. Patt. Analy. Mach. Intell. 2000; 22(8): 888–905.
[4] Caselles V, Kimmel R, Sapiro G. Geodesic active contours. In IEEE International Conference on Computer Vision (ICCV). 1995: 694–699.
[5] Stauffer C, Grimson W. Learning patterns of activity using real time tracking. IEEE Trans. Patt. Analy. Mach. Intell. 2000; 22(8): 747–767.
[6] Oliver NM, Rosario B, Pentland AP. A Bayesian computer vision system for modeling human interactions. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2000; 22(8): 831–843.
[7] Monnet A, Mittal A, Paragios N, Ramesh V. Background modeling and subtraction of dynamic scenes. In IEEE International Conference on Computer Vision (ICCV). 2003: 1305–1312.
[8] Papageorgiou C, Oren M, Poggio T. A general framework for object detection. In IEEE International Conference on Computer Vision (ICCV). 1998: 555–562.
[9] Rowley H, Baluja S, Kanade T. Neural network-based face detection. IEEE Trans. Patt. Analy. Mach. Intell. 1998; 20(1): 23–38.
[10] Viola P, Jones M, Snow D. Detecting pedestrians using patterns of motion and appearance. In IEEE International Conference on Computer Vision (ICCV). 2003: 734–741.
[11] Harris C, Stephens M. A combined corner and edge detector. Manchester, UK. 1988.
[12] Lowe DG. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision. 2004; 60(2): 91–110.
[13] Bay H, et al. Speeded-up robust features (SURF). Computer Vision and Image Understanding. 2008; 110(3): 346–359.
[14] Bauer J, Sunderhauf N, et al. Comparing several implementations of two recently published feature detectors. 2007, unpublished.
[15] Dalal N, Triggs B. Histograms of oriented gradients for human detection. In CVPR. 2005. San Diego, CA, USA: IEEE.
[16] Viola P, Jones M. Rapid object detection using a boosted cascade of simple features. CVPR. 2001; 1: 511.
[17] Neubeck A, Van Gool L. Efficient non-maximum suppression. In ICPR. 2006.
[18] Fawcett T. An introduction to ROC analysis. Pattern Recognition Letters. 2006; 27: 861–874.
[19] www.chrisevansdev/opensurf.