Indonesian Journal of Electrical Engineering and Computer Science
Vol. 14, No. 1, April 2019, pp. 503~512
ISSN: 2502-4752, DOI: 10.11591/ijeecs.v14.i1.pp503-512
Journal homepage: http://iaescore.com/journals/index.php/ijeecs
A computer vision based image processing system for depression detection among students for counseling

Namboodiri Sandhya Parameswaran, D. Venkataraman
Department of Computer Science and Engineering, Amrita School of Engineering, Coimbatore, Amrita Vishwa Vidyapeetham, India
Article Info

Article history:
Received Apr 11, 2018
Revised Jun 12, 2018
Accepted Jan 7, 2019

ABSTRACT
Psychological problems in college students, like depression, pessimism, eccentricity and anxiety, are caused principally by the neglect of continuous monitoring of students' psychological well-being. Identification of depression at college level is desirable so that it can be controlled by giving better counseling at the starting stage itself. If a counselor identifies depression in a student in the initial stages, he can effectively help that student to overcome it. But among a large number of students, it becomes a difficult task for the counselor to keep track of the significant changes that occur in students as a result of depression. Advances in the image processing field, however, have led to the development of effective systems capable of detecting emotions from facial images in a much simpler way. Thus, we need an automated system that captures facial images of students and analyzes them for effective detection of depression. In the proposed system, an attempt is made to use image processing techniques to study the frontal face features of college students and predict depression. This system is trained with facial features of positive and negative facial emotions. To predict depression, a video of the student is captured, from which the face of the student is extracted. Then, using Gabor filters, the facial features are extracted. Classification of these facial features is done using an SVM classifier. The level of depression is identified by calculating the amount of negative emotions present in the entire video. Based on the level of depression, a notification is sent to the class advisor, department counselor or university counselor, indicating the student's disturbed mental state. The present system works with an accuracy of 64.38%. The paper concludes with the description of an extended architecture using other inputs like academic scores, social content, peer opinions and hostel activities to build a hybrid system for depression detection as future work.
Keywords:
Computer vision
Depression detection
Facial features
Feature extraction
Image processing

Copyright © 2019 Institute of Advanced Engineering and Science. All rights reserved.
Corresponding Author:
Namboodiri Sandhya Parameswaran,
Department of Computer Science and Engineering, Amrita School of Engineering,
Coimbatore, Amrita Vishwa Vidyapeetham, India.
Email: cb.en.p2cvi16004@cb.students.amrita.edu
1. INTRODUCTION
In college students, depression is the result of the social change due to the emergence of the internet, smartphones and different social media sites. The majority of students tend to conceal their psychological problems due to the social stigmas related to depression and also due to peer pressure. Some students remain totally unaware of their psychological problems and thus remain deprived of any help that may prove vital to their mental health. It becomes a difficult task for the counselor to keep track of the significant changes that occur in students as a result of depression when the number of students is large. Thus we need an automated system that captures images of students and analyzes them for effective depression detection.
Facial expressions are the most important form of non-verbal communication to express a person's emotional or mental state. A large number of studies are currently under way on facial feature analysis for emotion recognition from images, which effectively helps in prediction of the mental health condition of human beings. This study proposes an automated system that detects depression levels in students by analyzing frontal face images of college students.
To predict depression, a video of the student is captured, from which the face of the student is extracted. Then, using Gabor filters, the facial features are extracted. Classification of these facial features is done using an SVM classifier. The level of depression is identified by calculating the amount of negative emotions present in the entire video. A comparison of manual FACS coding and automated FACS coding for finding out the facial expressions of the depressed showed high similarity in the results of both methods [1].
Highly depressed patients were found to exhibit low presence of smile (AU12) or sadness (AU15). They showed high presence of contempt (AU14) and disgust (AU10) along with smile. Figure 1 shows the action units found to be present in depression videos [1]: (a) AU 10 – Disgust, (b) AU 12 – Happy, (c) AU 14 – Contempt, (d) AU 15 – Sad.

Figure 1. Action Units found to be present in depression videos [1] – (a) AU 10 – Disgust, (b) AU 12 – Happy, (c) AU 14 – Contempt, (d) AU 15 – Sad

The results pointed out that the most accurate action unit for depression detection was AU14 (the action unit related to contempt).
In [2], the identification of depression was done by analysing facial landmark points. The distances between them were found using euclidean and city block distance methods. Here both video and audio features are extracted, fused together and then classified.
In [3] a cross-database analysis of three main datasets, namely the Black Dog Institute depression dataset (BlackDog), the University of Pittsburgh depression dataset (Pitt), and the Audio/Visual Emotion Challenge depression dataset (AVEC), has been done, which analyses the three datasets individually as well as by combining them for detection of depression features. The data was generalized into eye activity data, head pose data, feature fusion data and hybrid data. Of all these, the eye activity modality showed better performance. The results indicated that the greater the variability in the training data, the better the testing results will be.
In [4], three different methods are discussed for emotion recognition. One is the use of AU rather than AAM features for classification, where AU14 proved to be the most accurate AU for depression identification. The second method uses the appearance features from the AAM for classification using SVM, and the third is multimodal fusion of vocal and video features. This study claims that during clinical interviews of the depressed, the depression symptoms are communicated nonverbally and can be detected automatically.
Another study for finding out depression from facial features has been done by measuring Multi-Scale Entropy (MSE) over a time period on the patient interview video [5]. MSE captures the variations that occur in the video across a single pixel. The videos of patients who had lower depression levels were highly expressive of their emotions, and such videos showed high entropy levels; otherwise the entropy level was low.
In [6] patients were asked to wear devices to observe their heart rate, sleep pattern, their reduction in social interaction, and their GPS location to check if they are skipping work, etc., for depression analysis. Data collection of depressed patients has also been done in [7] by showing them film clips to catch the outward appearances of feelings, and furthermore by giving an assignment of perceiving negative and positive feelings from various facial pictures.
In [8], for a video, the face region is first manually initialized and then the KLT (Kanade-Lucas-Tomasi) tracker is utilized to extract curvature information from the picture. The video based approach indicated more precision, as it sums up the face area all the more precisely.
A technique for face recognition with the assistance of the Gabor Wavelet has likewise been proposed [9]. Here recognition of faces invariant to pose and orientation is done. The features extracted are classified with the help of an SVM classifier. This framework claims to outperform other face recognition techniques. The work in [10] proposes an improved face recognition system which uses the Stationary Wavelet Transform for feature extraction and Conservative Binary Particle Swarm Optimization for feature selection. The proposed method claims to give good performance under cluttered backgrounds and is effective and robust to changes due to illumination, occlusion, and expression.
Utilization of landmark points [11] to compute the LBPH of facial features reduces the LBP histogram's dimension, which is used for face detection. Too few landmark points result in loss of features; therefore more landmark points need to be extracted to improve the true positive rate of the recognition process. In [12] the eye and eyebrow features are detected with 4 and 3 feature points for each eye and eyebrow respectively. One can also divide the eyebrow into three equal parts, the inner, the centre and the outer part, as in [13]. Feature points can also be detected using different template matching techniques [14].
Facial expression recognition can also be done in two phases: manually locating fourteen points in the face region and creating a graph with edges [15] that connect such points, and then training artificial neural networks to recognize the six basic emotions.
The process of facial feature extraction can also be done using an Artificial Neural Network Multilayer Perceptron (MLP) with the back-propagation algorithm, training the ANN with a number of examples, called the learning set [16], and then assigning weights to make the network capable of classifying facial expressions.
Features of video and audio information are separated from the video utilizing a Movement History Histogram (MHH), which represents the qualities of minute changes that occur in the face and vocal appearances of the depressed [17].
Emotion recognition from faces can also be done using Radon and Wavelet transforms. The Radon process projects the 2D image into Radon space, and the DWT framework extracts the coefficients at the second level of decomposition [18].
The fundamental facial features chosen are the eye, nose and mouth locales, which can be separated by applying the Haar feature based Adaboost algorithm. This strategy diminishes the face preprocessing time for large databases. Facial Activity Units are additionally being recognized, where a combination of various facial activity units can form distinct complex facial expressions for better investigation [19].
If the students' depressed feelings are mapped to their actions in the classroom, it can be seen from their enthusiastic state whether they are discouraged or not, and in light of this the instructor can help by giving careful consideration to that specific student, as in [21].
In the event that diverse faces in the same scene demonstrate a similar positive or negative emotion, the entire circumstance of the scene can be comprehended, regardless of whether the subjects in the scene are upbeat or whether something incorrect is going on in the scene, as in [24].
The work in [25] proposes a system that identifies depression in college students by finding out the presence of a low level of happy features in frontal face videos of students. If the happy features are low in the video, the student is predicted as having depression.
In [26] the process of emotion recognition is done based on speech signal processing and emotion training recognition. The prosodic parameters from the speech signals and the facial features from the video signals are extracted and classified in parallel. Both classifier results are combined using 'Bimodal' integration for the final expression recognition result.
A face recognition system which represents a face using Gabor-HOG features is proposed in [27]. The face image is filtered using a Gabor filter bank. The Gabor magnitude images are obtained and the Histogram of Oriented Gradients is computed on these magnitude images. The results show that the fusion of both methods outperforms the performance of both processes when performed individually.
A feature selection algorithm is proposed in [28], which uses the 2D Gabor wavelet transformation to process only the eye and nose regions of face images, and shows higher accuracy in detection of multi-pose and multi-expression faces.
Table 1 depicts the analysis of the main five papers taken for reference, which includes the depression features extracted in each paper, the limitations of each paper and the possible future work that can be undertaken for each particular research paper.
Table 1. Analysis Tabulated

Papers | Depression features extracted | Limitations | Future scope
"Social Risk ..." | Action Units | Interviews in general are less structured. | Depression related questionnaires may capture depressive facial expressions
"Cross-cultural ..." | Eye movement and head pose movement | Training on specific datasets prevents generalizing to different observations. | More varied datasets can be created
"Discriminating clinical ..." | Unsupervised features: Multi-Scale Entropy, dynamical analysis, observability features | Unsupervised features are used in an exploratory setting. | Features can be classified according to their discriminatory power
"Facial geometry ..." | Facial landmarks (video) & statistical descriptors (audio) are fused | Non-depressed individuals not classified properly | Optimizing of features for detection of non-depressive features
"Video-based ..." | Face region was manually initialized & then tracked with KLT | Reinitialization of face region required if the tracked points are below a threshold. | Can consider the face as a whole for the entire video
2. RESEARCH METHODOLOGY
In [1], the most accurate action unit for depression detection was depicted as AU 14. Based on this theory, the current study proposes a system that is trained with features of happy, neutral, contempt and disgust faces. Then, in the testing phase, videos of college students are collected while they are answering different questionnaires. The students' facial features are extracted and classified by an SVM classifier for depression detection. Depression detection is done from the overall presence of happy, neutral, contempt and disgust features throughout the video frames, and the student is classified as having low, moderate or high depression. The architectural diagram of the proposed automated system can be modeled in the following way.
2.1. Proposed Architectural Diagram
Figure 2 shows the architectural diagram for the proposed 'Depression Detection' system.

Figure 2. Architectural diagram for the proposed 'Depression Detection' system
2.2. Description of Proposed Architectural Diagram
2.2.1. Training Dataset Creation
In addition to the happy, contempt and disgust emotions, the 'Neutral' face is included, since it implies lack of interest, or an emotionless face, which may be put forth by the depressed. The input is consequently a dataset of happy, neutral, contempt and disgust faces. For collecting the input dataset a GUI is created that captures images (one for each of the 4 emotions) of the student, as in Figure 3 below:

Figure 3. Faces of student captured for training set
The training dataset created contains 40 images each of Happy, Neutral, Contempt and Disgust faces. Finally we have a total of 160 images in the input training dataset.
2.2.2. Face Detection and Feature Extraction
Once the training set is created, the face in each image is detected using the Viola-Jones face detection algorithm. This algorithm makes use of Haar features, which, when convolved throughout the image, give high output values only at those regions that match the pattern of the Haar features; then, using the Adaboost algorithm and cascading classifiers, it detects a face as in Figure 4(b). Facial features from each face image are extracted using Gabor filters. A Gabor filter bank of 40 filters is created using 5 scales (2, 3, 3.5, 4 and 5) and 8 orientations (0, 23, 45, 68, 90, 113, 135 and 158) as in [20]. The Gabor filter bank of 40 filters is shown in Figure 4(c). For a detected face, the Gabor features extracted are shown in Figure 4(d).

Figure 4. (a) Input image, (b) Face detected, (c) Gabor Filter Bank, (d) Gabor Features
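The 5-scale, 8-orientation bank described above can be sketched with plain NumPy. This is a minimal illustration, not the authors' implementation: the mapping of the paper's scale values to the Gaussian width sigma, the wavelength choice lam = 4 * sigma, and the kernel size are assumptions made for the sketch.

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor kernel (Gaussian envelope times cosine carrier)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier

# 5 scales x 8 orientations = 40 filters, mirroring the bank in the paper.
scales = [2, 3, 3.5, 4, 5]                            # treated as sigma (assumed mapping)
orientations = np.deg2rad([0, 23, 45, 68, 90, 113, 135, 158])
bank = [gabor_kernel(31, s, t, lam=4 * s) for s in scales for t in orientations]
print(len(bank))  # 40
```

Convolving a face crop with each of the 40 kernels and stacking the responses yields the raw Gabor feature vector used in the next step.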
For every image a Gabor feature vector is formed as an 'n x 1' column vector, as in Figure 5(a). The feature vectors of all the input images are found and combined to form an 'n x 160' feature set, as in Figure 5(b). The dimension of this feature set is very high, and so Principal Component Analysis (PCA) is applied to it for dimensionality reduction. Thus we get a '160 x 160' reduced-dimension feature set after applying PCA, as in Figure 5(c). This 'Gabor Feature Set' is the input feature set for training. Classes are assigned to each feature vector. The Happy and Neutral images are considered as the positive class and hence assigned the value '+1', and the Contempt and Disgust images are considered as the negative class and hence assigned the value '-1'. Finally we get a Gabor Feature Set for training with '160 x 161' dimension, with the 161st column as the class value, as in Figure 5(d).
Figure 5. (a) Feature vector for one image; (b) Feature vector set for 160 images; (c) PCA applied feature set; (d) Feature set assigned with classes
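The dimensionality reduction step can be sketched as follows. A toy random matrix stands in for the real Gabor feature set (transposed here to the usual samples-by-features convention), and SVD-based PCA is one common way to realize the reduction; the paper does not specify its PCA implementation, so this is an assumed sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((160, 10000))   # 160 images x n Gabor features (toy n)

# Centre the data, then project onto the leading principal directions via SVD.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 160                                  # with 160 samples, at most 160 components exist
X_reduced = Xc @ Vt[:k].T                # shape (160, 160): the reduced feature set
print(X_reduced.shape)
```

Appending the +1/-1 class column to `X_reduced` then gives the 160 x 161 training set described above.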
2.2.3. Dataset Creation for Testing
For testing, a GUI is created, where the student is given a link to answer a simple online 'Depression Analysis Test', as shown in Figure 6(a). The system captures the frontal face video of the student using the system webcam. This video is converted into frames and, from each frame, the face is cropped and the Gabor features are extracted in the same way as in the training phase. The Gabor feature vectors for all the frames are concatenated to form a test feature set. For a sample video of 160 frames the test feature set is as shown in Figure 6(b).
Figure 6. (a) GUI for capturing student's video for testing; (b) Test Feature Set for 160 frames
2.3. Classification with SVM
The input feature set is given to a Support Vector Machine classifier for training. The Support Vector Machine is a model that splits the two sets in the best possible way: the best split is the one with the widest margin separating the two groups. This separating line is called the hyperplane, and the nearest points are called the support vectors.
w \cdot x + b = 0 (equation of the hyperplane) (1)

f(x) = \sum_i \alpha_i y_i (x_i^T x) + b (equation of the decision function) (2)

where the x_i in the sum are the support vectors. Since w \cdot x + b = 0 and c(w \cdot x + b) = 0 define the same plane, for positive support vectors w \cdot x_+ + b = +1 and for negative support vectors w \cdot x_- + b = -1. Then the margin is given by:

\frac{w}{\|w\|} \cdot (x_+ - x_-) = \frac{(1 - b) + (1 + b)}{\|w\|} = \frac{2}{\|w\|} (3)

To obtain the optimal hyperplane we need to maximize the margin 2/\|w\|, or equivalently minimize the weight vector term \frac{1}{2}(w \cdot w). Since this is a constrained optimization problem, it can be converted to an unconstrained optimization problem by using Lagrange multipliers:

L(w, b) = \frac{1}{2}(w \cdot w) - \sum_i \alpha_i y_i (w \cdot x_i) - \sum_i \alpha_i y_i b + \sum_i \alpha_i (4)

Here 'w' has to be minimized and the bias term 'b' has to be maximized. First, we take the derivative of the Lagrangian with respect to 'b' to get:

\frac{\partial L}{\partial b} = \sum_{i=1}^{m} \alpha_i y_i = 0, where m is the number of feature vectors (5)

This is one of the constraints we now have. Then we take the derivative of the Lagrangian with respect to w to get:

w = \sum_{i=1}^{m} \alpha_i y_i x_i, where m is the number of training samples (6)

When we substitute this weight expression back into the original expression of the Lagrangian:

L = \sum_{i=1}^{m} \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j (x_i \cdot x_j) (7)

Thus the decision rule depends mainly on the dot product of the unknown sample with the support vectors, (x_i \cdot z). Given a point 'z', the decision whether the point belongs to class 1 or class 2 is:

f(z) = \operatorname{sign}\left( \sum_{i=1}^{m} \alpha_i y_i (x_i \cdot z) + b \right) (8)

If the sign is positive, 'z' is classified to class '+1'; if negative, 'z' is classified to class '-1'. The SVM classifier classifies the test data and gives the predicted classes. As in Figure 7, the first image is classified as 1, so a positive image; image 2 as -1, so a negative image; image 3 as 1, so a positive image; and so on, until all 160 images are classified, giving a 160 x 1 matrix of predicted classes.
Figure 7. Predicted classes for the 160 test frames
2.3.1. Depression Level Identification
For identifying the level of depression from the video, we need to find the total amount of negative emotions in the video frames. The student's emotion level may change within the time duration of the video, so the video is divided into three parts of equal time duration, as depicted in Table 2. If all three parts of the video have more positive emotions, the student can be classified as having 'No Depression'. If the first two parts of the video show positive emotion and the third part shows negative emotion, the video is classified as 'Low Depression', since only the end part of the video shows negative emotion. If two parts of the video show positive emotion, then the student may be suffering from 'Mild Depression', as most parts of the video show positive emotion. If, out of the three parts, two show negative emotions, then the student is mostly showing negative expressions and so is predicted as having 'High Depression'.
Table 2. Depression Level Identification Table
(Features present: Happy, Neutral – positive class, 'Positive'; Contempt and Disgust – negative class, 'Negative')

First Part of Video | Middle Part of Video | Last Part of Video | Depression Level
Positive | Positive | Positive | No Depression
Positive | Positive | Negative | Low Depression
Positive | Negative | Positive | Mild Depression
Positive | Negative | Negative | High Depression
Negative | Positive | Positive | Mild Depression
Negative | Positive | Negative | High Depression
Negative | Negative | Positive | High Depression
Negative | Negative | Negative | High Depression
3. EXPERIMENTAL RESULTS AND ANALYSIS
Here, videos of five different students were taken for experimental analysis. For a single video, each frame of the video was analysed manually and, based on the emotion present, was assigned as having positive '+1' or negative '-1' emotion. These are thus the actual classes of the test video frames. The classifier predicted each frame to belong to either the positive or the negative class.
Table 3. Confusion Matrix for video I

Emotion | Negative (Actual) | Positive (Actual) | Total
Negative (Predicted) | 65 | 16 | 81
Positive (Predicted) | 41 | 38 | 79
Total | 106 | 54 | 160
Table 3 represents the confusion matrix of actual and predicted classes for the test video frames. Overall, 160 images of the test video were considered. 65 video frames were correctly classified as having negative emotion, and the remaining 16 frames were incorrectly classified into the positive class. Of the 79 frames predicted as positive, 38 were correctly classified as positive and the remaining 41 were wrongly classified. For this particular video the classifier worked with an accuracy of 64.38%, as shown in Table 4 below, which depicts the performance metrics of the system for a sample video. The error percentage is 35.62%. Sensitivity, which is the ability of the system to correctly classify a frame as having negative emotion (the true positive rate), is around 61.32%, whereas specificity, the ability of the system to correctly identify positive emotion (the true negative rate), is 70.37%. Precision, which depicts how close different samples are to each other, is 80.25%.
Table 4. Performance metrics of the system for a sample video

Performance | Value (%)
Accuracy | 64.38
Error | 35.62
Sensitivity | 61.32
Specificity | 70.37
Precision | 80.25
False Positive Rate | 29.63
F1 score | 69.52
As shown in Table 5, five videos were considered for testing. For all five videos, the first 160 frames were considered. Each video was then divided into three equal parts, and the sum of the positive and the negative emotions for each part was found. If positive emotions outnumbered negative emotions, that part of the video was labeled as 'Positive'. The 'Actual Emotion State' of the videos was found by calculating the amount of positive and negative emotions present in the actual classes. Similarly, the 'Predicted Emotion State' of the videos was found by calculating the amount of positive and negative emotions present in the predicted classes.
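The per-part labeling described above can be sketched as below. The function name is ours, and the tie-breaking choice (a zero sum counts as 'Negative') is an assumption the paper does not spell out.

```python
import numpy as np

def part_labels(pred, n_parts=3):
    """Split the 160 x 1 predicted-class vector (+1/-1) into equal parts and
    label each part by its majority emotion (sum > 0 means mostly positive)."""
    parts = np.array_split(np.asarray(pred), n_parts)
    return ["Positive" if p.sum() > 0 else "Negative" for p in parts]

# Toy prediction vector: mostly positive early frames, then mostly negative.
pred = [1] * 60 + [-1] * 100
print(part_labels(pred))  # ['Positive', 'Negative', 'Negative']
```

Feeding the three labels into the Table 2 rule then yields the predicted depression level for the video.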
Table 5. Identifying Level of Depression
Out of the five videos, Videos I and II showed the same actual and predicted emotional state. One of the videos, Video IV, with Mild Depression, was predicted as 'High Depression'. For the remaining two videos, Videos III and V, the actual and the predicted emotional state were contradicting. The system works with a maximum accuracy of 64.38%. If the student is predicted as having 'Low Depression', the Class Advisor is sent a notification mail indicating the mental state of the student. If the student has 'Mild Depression', the Class Advisor and the Department Counsellor are notified. If the student has 'High Depression', then along with the Class Advisor, the Department Counsellor and the University Counsellor are also informed about the student's disturbed mental state.
The analysis of this work depicts that, using the algorithm proposed in the current study, the presence of depression features can be effectively found even for a small duration of video. This process can in turn be applied to a video of any large duration, and depression features can be identified effectively. This work proves that if the system is trained effectively with images of depression features alone, the identification of depression in videos can be successfully done with the video modality alone.
Many of the previous works dealt with the identification of all six basic human emotions, but here only the identification of four main emotions - happy, contempt, disgust and neutral - is considered, as these are mainly found in depressed individuals, as in [1]. This in turn reduces the training and testing overload and improves the classifier performance.
In this work, the main focus was to detect depression in students who are not formally diagnosed with depression. This system does not make use of any standard emotion recognition databases for training. Instead, it captures
Video  Actual -ve  Actual +ve  Actual Emotion State  Predicted -ve  Predicted +ve  Accuracy (%)  First/Second/Third Part Emotion  Predicted Emotion State
I      106         54          High Depression       81             79             64.38         Negative / Positive / Negative   High Depression
II     83          77          Mild Depression       72             88             51.88         Positive / Negative / Positive   Mild Depression
III    41          119         Not Depressed         84             76             55.63         Negative / Negative / Positive   High Depression
IV     88          72          Mild Depression       91             69             54.38         Positive / Negative / Negative   High Depression
V      101         59          High Depression       79             81             42.50         Positive / Positive / Negative   Low Depression
the students' facial emotions themselves for training the classifier. The testing video is captured at the same time, with the same camera, under the same background conditions. This helps the classifier to efficiently identify emotions from the video of the same person whose images were taken earlier for training the classifier.
4. CONCLUSION AND FUTURE WORK
This study was undertaken to find out the level of depression in five different videos of college students. The presence of 'Happy' and 'Neutral' (positive emotion) and 'Contempt' and 'Disgust' (negative emotion) facial features, which are found to be prominent in depression videos, was determined and analysed. The datasets for training and testing were captured separately, and the facial features of the same were classified using a Support Vector Machine classifier.
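A minimal sketch of this classification step is shown below, assuming scikit-learn's `SVC`. The random vectors stand in for the extracted facial features; in the paper the real features come from the students' own captured images, so everything here except the use of an SVM is an illustrative assumption.

```python
import numpy as np
from sklearn.svm import SVC

# Random vectors stand in for the extracted facial features; in the
# paper the features come from the students' own captured images.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 8))                 # 40 feature vectors
y_train = np.repeat(["positive", "negative"], 20)  # polarity labels

clf = SVC(kernel="linear")        # Support Vector Machine classifier
clf.fit(X_train, y_train)

X_test = rng.normal(size=(5, 8))  # features from 5 unseen test frames
predictions = clf.predict(X_test)
```

Per-frame predictions of this form are what get tallied into the positive/negative counts of Table 5.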
The amount of positive and negative emotions in each video was analysed, and the videos were predicted as videos with 'High Depression', 'Mild Depression' or 'Low Depression'. The classifier predicted the outcomes with a maximum accuracy of 64.38%.
The greater the number of training samples, the more accurate the classifier prediction will be.
The testing videos captured contain more than a thousand frames, out of which only the first 160 frames were considered here for testing purposes. In future work, this process can be extended to the entire video by finding its key frames using a key frame extraction technique.
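One plausible key-frame extraction scheme is histogram differencing: keep a frame only when its grey-level histogram differs enough from the last kept frame. The sketch below is an assumption, not the paper's method; the threshold and bin count are illustrative.

```python
import numpy as np

def key_frame_indices(frames, threshold=0.25, bins=16):
    """Keep a frame when its grey-level histogram differs enough
    (per pixel) from the last kept frame. Illustrative sketch only;
    not the technique used in the paper."""
    kept = [0]
    last = np.histogram(frames[0], bins=bins, range=(0, 255))[0]
    for i in range(1, len(frames)):
        hist = np.histogram(frames[i], bins=bins, range=(0, 255))[0]
        if np.abs(hist - last).sum() / frames[i].size > threshold:
            kept.append(i)
            last = hist
    return kept
```

On a clip of ten identical dark frames followed by ten identical bright frames, only frame 0 and frame 10 would be kept, collapsing a long video into its visually distinct moments.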
The current study deals only with recent videos of the student. However, for more accurate depression detection, the history of the student should also be taken into consideration. Therefore, in future work, more videos of the same student, taken at different times, can be considered.
This may help to analyse and compare the past and present mental states of the student and provide more information to the process of depression level identification.
Depression detection from videos alone forms only a part of the whole process of identifying depression. Students who are classified as not depressed may still fall victim to depression in the future. For this reason, their other activities have to be continuously monitored.
This includes the continuous monitoring of their academic activities, their extracurricular activities and also their social activities.
Monitoring academic activities includes monitoring the student's grades and attendance. A decrease in grades or attendance may also be due to a student's extracurricular activities, like engaging in sports or arts.
If a student's grades or attendance are poor and they are also not active in other mediums like arts or sports, then they may be at a high risk of falling into depression.
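The rule in the preceding sentence can be written down as a tiny predicate. This is a hypothetical sketch: the boolean inputs and their names are assumptions for illustration, not part of the paper's model.

```python
def at_high_risk(grades_ok: bool, attendance_ok: bool,
                 active_in_arts_or_sports: bool) -> bool:
    """High risk only when poor grades AND poor attendance are not
    explained by activity in sports or arts (rule as stated above).
    Inputs are hypothetical booleans for illustration."""
    return (not grades_ok and not attendance_ok
            and not active_in_arts_or_sports)
```

So a student with falling grades and attendance who is active in sports would not be flagged, while one with no such explanation would be.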
Hence, students' extracurricular activities also have to be continuously monitored for the identification of depression.
In addition to this, there should also be a way of monitoring a student's social media content, because if the student's social media content shows a negative attitude towards life, then such a student may be a victim of stress and depression.
Furthermore, for students residing in hostels, input from the hostel authorities regarding the activities of a student within the hostel should also be considered for monitoring the student's day-to-day activities.
If the student leaves the hostel premises for college but in turn skips classes by indulging in other negative activities, then there is a risk of the student falling into a negative state of mind, which may eventually lead to depression.
The future work of this study is to form an elaborate model of the depression identification process, by taking all the above-mentioned factors into consideration and combining them with the current work of identifying depression from images.
ACKNOWLEDGMENT
The authors would like to extend their heartfelt gratitude to the faculty-in-charge of the Amrita-Cognizant Innovation Lab, Department of Computer Science and Engineering, Amrita School of Engineering, Coimbatore, for the support extended in carrying out this work.
REFERENCES
[1] Girard, Jeffrey M., Jeffrey F. Cohn, Mohammad H. Mahoor, Seyedmohammad Mavadati, and Dean P. Rosenwald. "Social risk and depression: Evidence from manual and automatic facial expression analysis." In Automatic Face and Gesture Recognition (FG), 10th IEEE International Conference and Workshops on, pp. 1-8. IEEE, 2013.
[2] Pampouchidou, A., O. Simantiraki, C-M. Vazakopoulou, C. Chatzaki, M. Pediaditis, A. Maridaki, K. Marias et al. "Facial geometry and speech analysis for depression detection." In Engineering in Medicine and Biology Society (EMBC), 39th Annual International Conference of the IEEE, pp. 1433-1436. IEEE, 2017.
[3] Guillemin, F., C. Bombardier, and D. Beaton. "Cross-cultural adaptation of health-related quality of life measures: literature review and proposed guidelines." Journal of Clinical Epidemiology 46(12):1417-32. 1993.
[4] Cohn, Jeffrey F., Tomas Simon Kruez, Iain Matthews, Ying Yang, Minh Hoai Nguyen, Margara Tejera Padilla, Feng Zhou, and Fernando De la Torre. "Detecting depression from facial actions and vocal prosody." In Affective Computing and Intelligent Interaction and Workshops (ACII 2009), 3rd International Conference on, pp. 1-7. IEEE, 2009.
[5] Harati, Sahar, Andrea Crowell, Helen Mayberg, Jun Kong, and Shamim Nemati. "Discriminating clinical phases of recovery from major depressive disorder using the dynamics of facial expression." In Engineering in Medicine and Biology Society (EMBC), 38th Annual International Conference of the, pp. 2254-2257. IEEE, 2016.
[6] Tasnim, Mashrura, Rifat Shahriyar, Nowshin Nahar, and Hossain Mahmud. "Intelligent depression detection and support system: Statistical analysis, psychological review and design implication." In e-Health Networking, Applications and Services (Healthcom), 18th International Conference on, pp. 1-6. IEEE, 2016.
[7] Pampouchidou, Anastasia, Kostas Marias, Manolis Tsiknakis, P. Simos, Fan Yang, and Fabrice Meriaudeau. "Designing a framework for assisting depression severity assessment from facial image analysis." In Signal and Image Processing Applications (ICSIPA), International Conference on, pp. 578-583. IEEE, 2015.
[8] Maddage, Namunu C., Rajinda Senaratne, Lu-Shih Alex Low, Margaret Lech, and Nicholas Allen. "Video-based detection of the clinical depression in adolescents." In Engineering in Medicine and Biology Society (EMBC), Annual International Conference of the IEEE, pp. 3723-3726. IEEE, 2009.
[9] Karthika, R., and L. Parameswaran. "Study of Gabor wavelet for face recognition invariant to pose and orientation." In Proceedings of the International Conference on Soft Computing Systems, pp. 501-509. Springer, New Delhi, 2016.
[10] Babu, S. Hitesh, Sachin A. Birajdhar, and Samarth Tambad. "Face Recognition using Entropy based Face Segregation as a Pre-processing Technique and Conservative BPSO based Feature Selection." In Indian Conference on Computer Vision Graphics and Image Processing, p. 46. ACM, 2014.
[11] Xiang, Gao, Zhu Qiuyu, Wang Hui, and Chen Yan. "Face recognition based on LBPH and regression of Local Binary features." In Audio, Language and Image Processing (ICALIP), International Conference on, pp. 414-417. IEEE, 2016.
[12] Moreira, Juliano L., Adriana Braun, and Soraia R. Musse. "Eyes and eyebrows detection for performance driven animation." In Graphics, Patterns and Images (SIBGRAPI), 23rd SIBGRAPI Conference on, pp. 17-24. IEEE, 2010.
[13] Florea, Laura, and Raluca Boia. "Eyebrows localization for expression analysis." In Intelligent Computer Communication and Processing (ICCP), International Conference on, pp. 281-284. IEEE, 2011.
[14] Phuong, Hoang Minh, Le Dung, Tony de Souza-Daw, Nguyen Tien Dzung, and Thang Manh Hoang. "Extraction of human facial features based on Haar feature with Adaboost and image recognition techniques." In Communications and Electronics (ICCE), Fourth International Conference on, pp. 302-305. IEEE, 2012.
[15] Tanchotsrinon, Chaiyasit, Suphakant Phimoltares, and Saranya Maneeroj. "Facial expression recognition using graph-based features and artificial neural networks." In Imaging Systems and Techniques (IST), pp. 331-334. IEEE, 2011.
[16] Owayjan, Michel, Roger Achkar, and Moussa Iskandar. "Face Detection with Expression Recognition using Artificial Neural Networks." In Biomedical Engineering (MECBME), 3rd Middle East Conference on, pp. 115-119. IEEE, 2016.
[17] Meng, Hongying, Di Huang, Heng Wang, Hongyu Yang, Mohammed Al-Shuraifi, and Yunhong Wang. "Depression recognition based on dynamic facial and vocal expression features using partial least square regression." In Proceedings of the 3rd ACM International Workshop on Audio/Visual Emotion Challenge, pp. 21-30. ACM, 2013.
[18] Ali, H., V. Sritharan, M. Hariharan, S. K. Zaaba, and M. Elshaikh. "Feature extraction using Radon transform and Discrete Wavelet Transform for facial emotion recognition." In Robotics and Manufacturing Automation (ROMA), 2016 2nd IEEE International Symposium, pp. 1-5. IEEE, 2016.
[19] Tian, Y. I., T. Kanade, and J. F. Cohn. "Recognizing action units for facial expression analysis." IEEE Transactions on Pattern Analysis and Machine Intelligence 23(2):97-115. 2001.
[20] Haghighat, M., S. Zonouz, and M. Abdel-Mottaleb. "CloudID: Trustworthy cloud-based and cross-enterprise biometric identification." Expert Systems with Applications 42(21):7905-16. 2015.
[21] Sahla, K. S., and T. S. Kumar. "Classroom Teaching Assessment Based on Student Emotions." In The International Symposium on Intelligent Systems Technologies and Applications, pp. 475-486. Springer, Cham, 2016.
[22] Nehru, Mangayarkarasi, and S. Padmavathi. "Illumination invariant face detection using Viola Jones algorithm." In Advanced Computing and Communication Systems (ICACCS), 4th International Conference on, pp. 1-4. IEEE, 2017.
[23] Vikram, K., and S. Padmavathi. "Facial parts detection using Viola Jones algorithm." In Advanced Computing and Communication Systems (ICACCS), 4th International Conference on, pp. 1-4. IEEE, 2017.
[24] Athira, S., R. Manjusha, and Latha Parameswaran. "Scene Understanding in Images." In The International Symposium on Intelligent Systems Technologies and Applications, pp. 261-271. Springer International Publishing, 2016.
[25] Venkataraman, D., and N. S. Parameswaran. "Extraction of Facial Features for Depression Detection among Students." International Journal of Pure and Applied Mathematics, International Conference on Advances in Computer Science, Engineering and Technology, pp. 455-462. 2018.
[26] Wang, Yutai, Xinghai Yang, and Jing Zou. "Research of emotion recognition based on speech and facial expression." Indonesian Journal of Electrical Engineering and Computer Science 11, no. 1: 83-90. 2013.
[27] Ouanan, Hamid, Mohammed Ouanan, and Brahim Aksasse. "Gabor-HOG Features based Face Recognition Scheme." Indonesian Journal of Electrical Engineering and Computer Science 15, no. 2: 331-335. 2015.
[28] Lin, Chuan, Xi Qin, Guo-liang Zhu, Jiang-hua Wei, and Cong Lin. "Face detection algorithm based on multi-orientation Gabor filters and feature fusion." Indonesian Journal of Electrical Engineering and Computer Science 11, no. 10: 5986-5994. 2013.