TELKOMNIKA, Vol. 16, No. 2, April 2018, pp. 739-746
ISSN: 1693-6930, accredited A by DIKTI, Decree No: 58/DIKTI/Kep/2013
DOI: 10.12928/telkomnika.v16.i2.7418
Real Time Face Recognition Based on Face Descriptor and Its Application

I Gede Pasek Suta Wijaya*, Ario Yudo Husodo, and I Wayan Agus Arimbawa
Department of Informatics Engineering, Engineering Faculty, Mataram University
Jl. Majapahit 62 Mataram, Lombok, Indonesia
*Corresponding Author, email: gpsutawijaya@unram.ac.id, ario@ti.ftunram.ac.id, arimbawa@unram.ac.id
Abstract
This paper presents a real time face recognition system based on a face descriptor and its application to door locking. The face descriptor is represented by both local and global information. The local information, which is the dominant frequency content of each sub-face, is extracted by zoned discrete cosine transforms (DCT), while the global information, which is the dominant frequency content and shape information of the whole face, is extracted by non-zoned DCT and by Hu-moments. Therefore, the face descriptor carries rich information about a face image, which tends to provide good performance for real time face recognition. To decrease the dimensionality of the face descriptor, predictive linear discriminant analysis (PDLDA) is employed, and face classification is done by kNN. The experimental results show that the proposed real time face recognition provides good performance, indicated by 98.30%, 21.99%, and 1.8% of accuracy, FPR, and FNR, respectively. In addition, it needs only a short computational time (about 1 second).
Keywords: face recognition, real time, LDA, face descriptor, face classification
Copyright © 2018 Universitas Ahmad Dahlan. All rights reserved.
Received September 28, 2017; Revised December 21, 2017; Accepted January 18, 2018
1. Introduction
This paper presents an application of real time face recognition based on a face descriptor for a door locking system. The face descriptor consists of the dominant frequency content of sub-faces (local) and of the whole face (global), extracted by zoned DCT, non-zoned DCT, and Hu-moments. The main aim of the DCT-coefficients-based face descriptor is to obtain rich information about the face image, which can give better results than the compact features (CF) based method [1] for real time face recognition. Predictive linear discriminant analysis (PDLDA) is employed to reduce the dimensionality of the descriptor, and the k nearest neighbor (kNN) rule is used for verification. The main aim of this work is to obtain face recognition that is robust against lighting variation and can be applied to a security system, i.e. a door locking system; this is an extended version of our previous work [2].
Face recognition has been widely developed by many researchers [3], with algorithms that are statistical-based (ICA, PCA, and naive Bayesian), global-features-based, artificial-intelligence-based (e.g., genetic algorithms, artificial neural networks, SVM, etc.), and based on their variations [1, 2]. The most popular algorithms are based on subspace projection: LDA, eigenface (PCA), and their variations [4, 5]. LDA and its variations have become popular due to their simple implementation and low computational complexity. In addition, their discrimination power is higher than that of PCA, which makes the performance of LDA and its variations better than that of PCA.
Discrete cosine transform (DCT) based face recognition [6] has been reported to provide good performance compared to other approaches. Both PCA and LDA can be executed directly on images in the JPEG standard format without performing an inverse DCT transform, because they can work in the DCT domain. The DCT-based system requires certain normalization techniques to overcome variations in facial geometry and illumination. However, both approaches extracted the face features using only block-based DCT. A face recognition method that selects from 75% to 100% of the DCT coefficients and sets the high frequencies to zero has been proposed to handle the illumination problem [6]. However, it needs high computational time, because inverse DCT transforms and Contrast Limited Adaptive Histogram Equalization (CLAHE) are mandatory to obtain an illumination invariant face image.
Regarding real time face recognition algorithms [7, 8], mostly the eigenface (PCA) approach has been successfully implemented. However, PCA lacks discriminant power, which makes the system less accurate. In addition, the combination of compact features (CF) and LDA projection has been applied for real time face recognition [1]. The CF vector was extracted by LBP and zoned DCT, while the classification was performed by nearest neighbor rules. The LDA was employed for dimensional reduction of the CF vector.
Therefore, this paper proposes an alternative real time face recognition using a DCT-coefficients-based face descriptor, which consists of dominant frequency content extracted by discrete cosine transforms (DCT), local features extracted by zoned DCT (block-based DCT), and shape information extracted by Hu-moments. The DCT-coefficients-based face descriptor tends to improve on the performance of CF based face recognition because it carries richer information.
2. Proposed Method
In this research, there are two main modules: a face recognition engine and its implementation for a door-locking system. The face recognition engine principally has three subsystems: face detection, feature extraction, and recognition and verification rules, as shown in Fig. 1(a). The door-locking system consists of a face recognition engine and a solenoid control circuit, as presented in Fig. 1(b).

Figure 1. Block diagrams: (a) face recognition engine [2] and (b) door-locking based on face image.
2.1. Proposed Face Recognition Engine
The mechanism of face recognition and verification can be described as follows:
1. Suppose the training set is given to the recognition engine for finding out the machine parameters and guiding the engine to be intelligent. Furthermore, the face image descriptors that are
extracted during the training process are stored in the database as registered face signatures. The face image descriptor is extracted using fast zoned and non-zoned DCT and Hu-moments; then a small part of the transformation coefficients, those having the greatest magnitude, is selected. The chosen coefficients are then quantized to sharpen the key features, forming what is called the face signature.
2. In the recognition process, the query face signature is extracted using a technique similar to the training process. Next, the similarity score is determined by matching the query face signature against the registered face signatures. In this case, the smallest score is taken as the best likeness.
3. In the verification process, the kNN rule is employed to find the registered class with the highest probability of being close to the query face signature. If the query face signature is closest to the registered face signatures of class B, the input query is verified as class B. The kNN is chosen because it gives good performance (91.5% recognition rate and 2.66 seconds of computational time) for face recognition on small and compact devices (ARM processors) [9]. A minimal sketch of this verification step is given below.
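The following sketch illustrates the kNN verification rule described above; it is only an illustration, and the number of neighbors k, the acceptance threshold, and the Euclidean metric are assumptions rather than values taken from the paper.

```python
import numpy as np

def knn_verify(query, registered, labels, k=3, threshold=0.6):
    """Verify a query face signature against registered signatures with a kNN rule.

    query      : 1-D array, projected face descriptor of the query image
    registered : 2-D array (n_samples x n_features) of registered descriptors
    labels     : list of class IDs, one per registered descriptor
    k, threshold : illustrative parameters (not taken from the paper)
    """
    # Distance between the query and every registered signature
    dists = np.linalg.norm(registered - query, axis=1)

    # Take the k nearest registered signatures (smallest score = best likeness)
    nearest = np.argsort(dists)[:k]
    votes = [labels[i] for i in nearest]

    # The class with the largest share of votes among the k neighbors wins
    best = max(set(votes), key=votes.count)
    prob = votes.count(best) / k

    # Accept the query only if the vote share exceeds the acceptance threshold
    return (best, prob) if prob >= threshold else (None, prob)
```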
2.1.1. Face Acquisition
Face image acquisition is done by using a standard USB camera. Next, histogram equalization is utilized to decrease the effect of the lighting conditions during face acquisition. Finally, the Haar-like based face detection [10], which has been widely examined and provides robust performance among the other algorithms, is employed for face detection. Briefly, the face detection algorithm starts with face localization to define a region of interest (ROI) of the face; the detected face ROI is then confirmed by detecting the two eyes inside it; finally, the confirmed face ROI is cropped and passed to the face recognition engine for further processing in real time face recognition. The illustration of face detection is presented in Fig. 2.
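A minimal sketch of this acquisition pipeline using the Haar cascades shipped with OpenCV is shown below; the cascade file names, camera index, and output size are assumptions for illustration, not details given in the paper.

```python
import cv2

# Standard cascades bundled with opencv-python (assumed paths)
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def acquire_face(frame, size=(128, 128)):
    """Detect a face, confirm it with two eyes, and return the cropped ROI."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                      # reduce lighting effects

    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi)
        if len(eyes) >= 2:                             # confirm the face ROI by its two eyes
            return cv2.resize(roi, size)               # cropped face passed to the engine
    return None

cap = cv2.VideoCapture(0)                              # standard USB camera
ok, frame = cap.read()
face = acquire_face(frame) if ok else None
```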
Figure 2. Face detection algorithm: (a) face localization, (b) eyes detection, (c) cropping face.
2.1.2. Face Descriptor Extraction
In this paper, the face descriptor extraction process is shown by the block diagram in Fig. 3. Filtering and contrast stretching are also employed to eliminate the lighting variation effect during face capturing.

Figure 3. Face descriptor extraction processes

In detail, the face descriptor extraction is done through the following steps:
1. Performing the local binary pattern (LBP), followed by non-zoned DCT (on the entire image), to obtain the global information of the face image. LBP and its variations have been successfully implemented for face recognition [11]. In this case, a small number (fewer than 64) of coefficients is selected as the global information. The LBP is implemented to obtain global information of the face image that is robust against illumination.
.
2.
P
erf
or
ming
z
one
DCT
(as
perf
or
med
on
JPEG
compression)
to
obtain
local
f
eatures
of
the
f
ace
image
,
as
sho
wn
in
Fig.
4.
In
this
case
,
less
than
f
our
coefficients
are
selected
from
Real
Time
F
ace
Recognition
Based
on
F
ace
Descr
iptor
...
(I
Gede
P
asek
Suta
Wija
y
a)
Evaluation Warning : The document was created with Spire.PDF for Python.
742
ISSN:
1693-6930
Featur
e
E
xt
rac
tio
n
N
o
rmal
i
z
a
ti
o
n
L
BP
Zo
n
e
DCT
S
h
ap
e
A
n
alys
i
s
U
N
on
-
Zo
n
e
DCT
Figure
3.
F
ace
descr
iptor
e
xtr
action
processes
Featur
e
E
xt
rac
tio
n
N
o
rmal
i
z
a
ti
o
n
L
BP
Zo
n
e
DCT
S
h
ap
e
A
n
alys
i
s
U
N
on
-
Zo
n
e
DCT
0
50
100
150
200
250
-2
0
2
4
6
8
10
12
N
o
rmal
i
z
e
d
F
ac
e
Zo
n
e
DCT
S
e
l
e
c
te
d
c
o
e
ff
i
c
i
e
n
ts
Figure
4.
Local
f
eatures
e
xtr
action
processes
each
z
one
as
local
f
eatures
.
The
local
f
eatures
represent
specific
inf
or
mation
of
sub
f
ace
image
which
is
a
v
ailab
le
in
some
lo
w
frequency
components
.
3. Performing shape analysis using Hu-moments to obtain shape information of the face image. In this case, only four moments (the first to the fourth) are considered, because the fifth to seventh moments have values close to zero; this means that little shape information is available in the fifth to seventh moments.
4. Finally, combining the global information, local features, and shape information to obtain a rich face descriptor.
.
In
this
w
or
k,
the
f
ace
descr
iptor
is
represented
b
y
the
dominant
frequency
content
of
whole
and
sub
f
ace
images
.
The
local
f
eatures
represent
the
most
inf
or
mation
of
k
e
y
point.
Similar
to
SIFT
f
eatures
,
this
f
ace
descr
iptor
is
r
ich
of
inf
or
mation
which
tends
lighting
in
v
ar
iant
because
the
lighting
v
ar
iation
has
been
decreased
b
y
filter
ing
and
contr
ast
stretching.
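The sketch below illustrates steps 1-4 above using OpenCV and scikit-image; the zone size, the number of coefficients kept per zone, and the length of the global part are illustrative assumptions, since the paper only bounds them (fewer than 64 global coefficients, fewer than four per zone).

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def face_descriptor(face, n_global=32, zone=8, per_zone=3):
    """Build a descriptor from LBP+DCT (global), zoned DCT (local), and Hu moments (shape).

    face : 2-D uint8 grayscale face image (e.g. 128x128). Parameter values are assumptions.
    """
    f32 = np.float32(face)                              # DCT needs a float image

    # Step 1: global part -- LBP of the whole face, then non-zoned DCT, keep low frequencies
    lbp = np.float32(local_binary_pattern(face, P=8, R=1))
    global_part = cv2.dct(lbp)[:8, :8].flatten()[:n_global]

    # Step 2: local part -- zoned (block-based) DCT, a few low-frequency coefficients per zone
    local_part = []
    for i in range(0, f32.shape[0], zone):
        for j in range(0, f32.shape[1], zone):
            block = cv2.dct(f32[i:i + zone, j:j + zone])
            local_part.extend(block.flatten()[:per_zone])

    # Step 3: shape part -- first four Hu moments of the face
    hu = cv2.HuMoments(cv2.moments(f32)).flatten()[:4]

    # Step 4: concatenate global, local, and shape information into one rich descriptor
    return np.concatenate([global_part, np.float32(local_part), hu])
```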
2.1.3. Dimensional Reduction
In this paper, the predictive LDA (PDLDA [1]) algorithm is employed to reduce the face descriptor size. The PDLDA is similar to LDA, which defines the optimum projection matrix, W, by eigen analysis of the between-class scatter, S_b, and the within-class scatter, S_w [1, 4]. The W has to satisfy Eq. 1.
\( J_{LDA}(W) = \arg\max_{W} \dfrac{|W^T S_b W|}{|W^T S_w W|} \)   (1)
This algorithm has been established to avoid the retraining problem of LDA. This is done by redefining the S_b and the S_w using a global mean, a, where a is estimated from l sub-sample data points randomly selected from a given data set. Finally, the dimensional reduction is done by Eq. 2.
\( y_i^k = W^T x_i^k \)   (2)

where \( y_i^k \) is the projected face descriptor and \( x_i^k \) is an input face descriptor.
By using this concept, the input face descriptor can be reduced by more than 50% of its original size.
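As a concrete illustration of Eqs. 1 and 2, the sketch below computes a Fisher-LDA projection matrix by eigen analysis of S_w^{-1} S_b and projects a descriptor with it. Note that this is plain LDA, not the incremental PDLDA of [1], and the regularization term is an assumption added for numerical stability.

```python
import numpy as np

def lda_projection(X, y, n_components, reg=1e-6):
    """Return W maximizing |W^T Sb W| / |W^T Sw W| (Eq. 1) via eigen analysis."""
    mean = X.mean(axis=0)                        # global mean
    d = X.shape[1]
    Sw = np.zeros((d, d))                        # within-class scatter
    Sb = np.zeros((d, d))                        # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)

    # Eigen analysis of Sw^-1 Sb (regularized); keep the leading eigenvectors as columns of W
    vals, vecs = np.linalg.eig(np.linalg.inv(Sw + reg * np.eye(d)) @ Sb)
    order = np.argsort(-vals.real)
    return vecs[:, order[:n_components]].real

# Dimensional reduction of a descriptor x (Eq. 2): y = W^T x
# W = lda_projection(X_train, labels, 32); y = W.T @ x
```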
2.2. Door-Locking System
The door locking hardware system consists of five subsystems: a Raspberry Pi module, a set of output power gain circuits, a server, a network switch, and a door solenoid system. The Raspberry Pi module is used to control the door solenoid, locking or unlocking it depending on the output status given by the recognition software on the server.
The server provides a logic condition of 1 (which refers to unlocked) or 0 (which refers to locked). This logic condition is written to a file which can be accessed through the network; a web server is installed on the server to provide this feature. A network switch is used to connect the server and the Raspberry Pi through the computer network.
The Raspberry Pi initial mode is first set to 0, which locks the door solenoid. The Raspberry Pi continuously checks the server output condition through the network. If the server status differs from the initial status, the Raspberry Pi runs the program to command the solenoid; the server status drives the program to either lock or unlock the solenoid. The latest status is then used as the initial status, and the checking process continues.
The Raspberry Pi uses the server's logic output condition as an input and a Raspberry Pi GPIO (general purpose input output) pin as an output. This GPIO output status is used as an input by the door solenoid as a command to lock or unlock the door. Since the solenoid requires a 12 VDC input and the Raspberry Pi GPIO output voltage is 3.3 VDC, a relay is needed to drive the solenoid. The relay needs a minimum 5 V input, which is higher than the Raspberry Pi GPIO output (3.3 V), as is the required current. To drive the relay, the Raspberry Pi therefore needs a simple power gain circuit, which can be built using a transistor and some resistors, as shown in Fig. 5.
Figure 5. Circuits of the door locking hardware system
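A minimal sketch of the Raspberry Pi control loop described above is given below, assuming the RPi.GPIO library, a relay driven from BCM pin 18, and a plain-text status file served over HTTP; the pin number, URL, and polling interval are illustrative assumptions, and the transistor power gain stage sits between the pin and the relay in hardware.

```python
import time
import urllib.request
import RPi.GPIO as GPIO

RELAY_PIN = 18                                    # assumed GPIO pin driving the relay
STATUS_URL = "http://server.local/door_status"    # assumed web-server status file (0 or 1)

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)  # initial mode 0: door locked

last_status = 0
while True:
    # Poll the logic condition written by the recognition software on the server
    status = int(urllib.request.urlopen(STATUS_URL).read().strip())
    if status != last_status:
        # 1 -> unlock the solenoid through the relay, 0 -> lock it again
        GPIO.output(RELAY_PIN, GPIO.HIGH if status == 1 else GPIO.LOW)
        last_status = status                      # latest status becomes the initial status
    time.sleep(0.5)                               # assumed polling interval
```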
3. Experiments and Result Discussions
Both off-line and real time experiments were carried out to assess the performance of the face recognition engine based on the face descriptor (FD). Four well-known face datasets: ORL [1, 12], Image Media Laboratory Kumamoto University (ITS) [1, 4], India (IND) [4], and Yale B [13], were chosen for the off-line experiments.
The ORL dataset has 400 grayscale faces taken from 40 persons. Examples of face variations in the ORL dataset are presented in Fig. 6(a) [1]. The ITS face database belongs to the Image Media Laboratory, Kumamoto University, and contains East Asian faces, especially Japanese and Chinese. ITS has 90 samples and each sample has
10 to 15 face variations. Examples of the face variations in the ITS face database are presented in Fig. 6(b) [1]. Thirdly, the India dataset is a color face image dataset which has 61 persons (22 female and 39 male). There are eleven pose variations, as presented in Fig. 6(c). Some facial expressions are also included in this dataset, such as smile, disgust, neutral, and laugh [4]. The Yale B dataset is divided into four sub-sets, as shown in Fig. 6(d). In this case, sub-set 1 was chosen for training and the remaining sub-sets were selected for testing.

Figure 6. Examples of face variations of the tested datasets.
In addition, the off-line experiments were carried out under the following conditions: firstly, 50% of the faces of each dataset were arbitrarily selected as training data and the leftover part was chosen as querying images; secondly, 10-fold cross-validation was enforced for the performance evaluation; finally, recognition rate and computational time were utilized as performance indicators.
.
9
8
.
3
1
9
2
.
7
5
8
2
.
1
1
9
9
.
1
4
9
8
.
3
5
9
2
.
6
6
80
85
90
95
100
I
T
S
OR
L
I
N
D
R
e
c
ogni
t
i
on
R
a
t
e
(
%
)
Fac
e
D
a
t
a
bas
e
s
CF
FD
100
7
5
.
6
0
1
2
.
5
5
100
8
9
.
8
9
1
6
.
5
4
10
20
30
40
50
60
70
80
90
100
S
1
v
s
.
S
2
S
1
v
s
.
S
3
S
1
v
s
.
S
4
R
e
c
ogni
t
i
on
R
a
t
e
(
%
)
Y
a
l
e
G
D
a
t
a
bas
e
CF
FD
(a)
On
ORL,
ITS
,
and
IND
datasets
9
8
.
3
1
9
2
.
7
5
8
2
.
1
1
9
9
.
1
4
9
8
.
3
5
9
2
.
6
6
80
85
90
95
100
I
T
S
OR
L
I
N
D
R
e
c
ogni
t
i
on
R
a
t
e
(
%
)
Fac
e
D
a
t
a
bas
e
s
CF
FD
100
7
5
.
6
0
1
2
.
5
5
100
8
9
.
8
9
1
6
.
5
4
10
20
30
40
50
60
70
80
90
100
S
1
v
s
.
S
2
S
1
v
s
.
S
3
S
1
v
s
.
S
4
R
e
c
ogni
t
i
on
R
a
t
e
(
%
)
Y
a
l
e
G
D
a
t
a
bas
e
CF
FD
(b)
Y
ale
B
dataset
Figure 7. Off-line performances of our face recognition compared to the baseline (CF based method [1]) on the tested datasets.
The experimental results (see Fig. 7) show that the face recognition engine based on FD gives better performance than the baseline method (compact features (CF) based face recognition [1]). On average, the FD based face recognition engine provides about a 96.72% recognition rate on the ORL, ITS, and IND datasets (see Fig. 7(a)). In other words, the FD based face recognition engine improves the performance of CF based face recognition [1] by about 5.66%. This is achieved because our face descriptor has rich information, formed by global information, local features, and shape information. The global and local information is represented by some low frequency components of the whole and sub-face images. This achievement is in line with the basic theory of signal processing: most of the signal information is located in the low-frequency elements.
In terms of the robustness of FD to variations of lighting condition compared with the CF method, the FD based method gives better performance than CF (see Fig. 7(b)). This proves that the FD has rich information that is robust to lighting variation, owing to the filtering and contrast stretching applied before the extraction.
action.
Regarding
e
x
ecution
time
,
the
FD
based
f
ace
recognition
engine
tak
es
less
than
1
second
in
a
v
er
age
f
or
perf
or
ming
the
matching
betw
een
quer
ying
f
ace
descr
iptor
among
the
registered
f
ace
descr
iptors
of
all
tested
datasets
.
It
can
be
achie
v
ed
because
the
f
ace
descr
iptor
is
repre-
sented
b
y
32
elements
of
or
iginal
siz
e
f
ace
images
(128x128
pix
els).
F
rom
off-line
e
xper
imental
data,
the
proposed
f
ace
recognition
engine
is
potential
to
be
used
f
or
electronic
k
e
ys
f
or
door
loc
king
system.
In the real time experiments, the system was tested using face images with large variability in terms of pose and capturing time. In this case, 1002 face images were collected using a Logitech C300 web camera (1.3 MP, 1280 x 1024) from 13 staff members of the Informatics Engineering Dept., Engineering Faculty, Mataram University, over five days. From this dataset, 159 face images captured on the first day (almost 11 images per person) were used for training and 843 faces were prepared for testing. Examples of the face variations are shown in Fig. 8.
Fig.
8.
The
par
ameters
f
or
real
time
e
v
aluation
of
f
ace
recognition
engine
w
ere
accur
acy
,
F
alse
P
ositiv
e
Rate
(FPR),
F
alse
Negativ
e
Rate
(FNR),
and
computational
time
.
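For clarity, the sketch below shows how these rates (together with the sensitivity and precision reported in Fig. 9(b)) can be computed from verification counts; the function and variable names are illustrative only.

```python
def verification_metrics(tp, tn, fp, fn):
    """Compute the evaluation parameters from true/false positive/negative counts."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    fpr         = fp / (fp + tn)       # False Positive Rate
    fnr         = fn / (fn + tp)       # False Negative Rate
    sensitivity = tp / (tp + fn)       # = 1 - FNR
    precision   = tp / (tp + fp)
    return {"accuracy": accuracy, "FPR": fpr, "FNR": fnr,
            "sensitivity": sensitivity, "precision": precision}
```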
Figure 8. Examples of face image variations for the real time evaluation.

(a) Accuracy, FNR, and FPR (CF: 97.96%, 25.03%, 2.20%; FD: 98.46%, 17.66%, 1.67%)
(b) Sensitivity and precision (CF: 74.97%, 73.50%; FD: 82.34%, 81.45%)

Figure 9. Performance of the real time experiments.

The evaluation results affirm that the proposed face recognition engine using the face descriptor has performed properly,
which is indicated by more than 98% accuracy and by less than 2% FPR and less than 18% FNR, respectively (see Fig. 9(a)). These performances can be achieved because the developed recognition engine, using the face descriptor with the DCT and sub-space analysis, provides good data separation. Compared to the performance of CF based face recognition, our proposed method significantly decreases the FNR, by about 7.37%, while it does not change the accuracy and FPR much, as presented in Fig. 9(a). The FD based face recognition engine also significantly improves the sensitivity and precision over CF based face recognition, by about 7.37% and 7.95% respectively, as shown in Fig. 9(b). This also affirms that our proposed method can handle the false negative problem of the baseline method (the correct person being falsely recognized as someone else). Overall, the real time performances confirm the off-line achievements, which improve on the baseline performances.
The last experiment was carried out to assess the performance of the FD based face recognition engine for the door locking system, which was operated by the staff of the Informatics Engineering Dept., Engineering Faculty, Mataram University, for one week. The door locking system works properly, which is shown by accuracy, FPR, and FNR of about 98.30%, 21.99%, and 1.8%, respectively. This last result also re-affirms that the FD is powerful for a real time face recognition engine.
4. Conclusion and Future Work
The real time FD based face recognition engine gives better performances than the baseline (CF). From the off-line evaluations, it provides a high recognition rate (on average more than 96%) for all tested datasets, while the real time experimental data show high accuracy (more than 98%) and low false verification rates (about 17.66% false negative and 1.67% false positive rate). Regarding the computational time, the proposed electronic key simulator needs less than 1 second for the matching process.
In addition, the application of the FD face recognition engine for the door-locking system also works properly, which is indicated by 98.30%, 21.99%, and 1.8% of accuracy, FPR, and FNR, respectively. The door-locking system based on face images still has to be evaluated on a large dataset
to determine its robustness against large variability of face images in pose, lighting, and accessories. In addition, the proposed system still needs to be improved by adding some illumination compensation, such as Contrast Limited Adaptive Histogram Equalization (CLAHE), to decrease the false negative recognitions.
Acknowledgment
We would like to send our great appreciation to the staff of the Informatics Engineering Dept. for their participation in the evaluation of this system. In addition, our great honor also goes to the Minister of Research and Higher Education of the Republic of Indonesia for research funding under the competitive research grant scheme 2015-2016.
References
[1] I. G. P. S. Wijaya, A. Y. Husodo, and A. H. Jatmika, "Real time face recognition engine using compact features for electronics key," in International Seminar on Intelligent Technology and Its Applications (ISITIA), Lombok, Indonesia, July 2016.
[2] I. G. P. S. Wijaya, A. Y. Husodo, and I. W. A. Arimbawa, "Real time face recognition using DCT coefficients based face descriptor," in International Conference on Informatics and Computing (ICIC 2016), Lombok, Indonesia, October 2016.
[3] R. Chellappa, C. L. Wilson, and S. Sirohey, "Human and machine recognition of faces: a survey," Proceedings of the IEEE, vol. 83, no. 5, pp. 705-741, May 1995.
[4] I. G. P. S. Wijaya, K. Uchimura, and G. Koutaki, "Face recognition based on incremental predictive linear discriminant analysis," IEEJ Transactions on Electronics, Information and Systems, vol. 133, no. 1, pp. 74-83, 2013.
[5] J. Zhang and D. Scholten, "A face recognition algorithm based on improved contourlet transform and principle component analysis," TELKOMNIKA (Telecommunication Computing Electronics and Control), vol. 14, no. 2A, pp. 114-119, 2016.
[6] A. Thamizharasi and J. Jayasudha, "An illumination invariant face recognition by selection of DCT coefficients," International Journal of Image Processing (IJIP), vol. 10, no. 1, p. 14, 2016.
[7] H. H. Lwin, A. S. Khaing, and H. M. Tun, "Automatic door access system using face recognition," International Journal of Scientific & Technology Research, vol. 4, no. 6, pp. 210-221, Jun 2016.
[8] M. Baykara and R. Das, "Real time face recognition and tracking system," in Electronics, Computer and Computation (ICECCO), 2013 International Conference on, Nov 2013, pp. 159-163.
[9] E. Setiawan and A. Muttaqin, "Implementation of k-nearest neighbors face recognition on low-power processor," TELKOMNIKA (Telecommunication Computing Electronics and Control), vol. 13, no. 3, pp. 949-954, 2015.
[10] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proceedings of the Conference on Computer Vision and Pattern Recognition, 2001, pp. 511-518.
[11] S. R. Konda, V. Kumar, and V. Krishna, "Face recognition using multi region prominent LBP representation," International Journal of Electrical and Computer Engineering, vol. 6, no. 6, p. 2781, 2016.
[12] F. S. Samaria and A. C. Harter, "Parameterisation of a stochastic model for human face identification," in Applications of Computer Vision, 1994, Proceedings of the Second IEEE Workshop on, IEEE, 1994, pp. 138-142.
[13] Yale, "The extended Yale face database B," 2001. [Online]. Available: http://vision.ucsd.edu/~iskwak/ExtYaleDatabase/ExtYaleB.html