TELKOMNIKA, Vol. 11, No. 3, March 2013, pp. 1697~1706
ISSN: 2302-4046
Received December 28, 2012; Revised January 27, 2013; Accepted February 10, 2013
Facial Animation Based on Feature Points

Beibei Li, Qiang Zhang*, Dongsheng Zhou, and Xiaopeng Wei
Key Laboratory of Advanced Design and Intelligent Computing, Ministry of Education, Dalian University, Liaoning Dalian, Tel: +86-411-87402045, Fax: +86-411-87403733
*Corresponding author, e-mail: zhangq@dlu.edu.cn
Abstract
This paper presents a hybrid method for synthesizing natural animation of facial expression with data from motion capture. The captured expression was transferred from the space of the source performance to that of a 3D target face using an accurate mapping process in order to realize the reuse of motion data. The transferred animation was then applied to synthesize the expression of the target model through a framework of two-stage deformation. A local deformation technique first considered a set of neighbor feature points for every vertex and their impact on the vertex. Furthermore, the global deformation was exploited to ensure the smoothness of the whole facial mesh. The experimental results show that our hybrid mesh deformation strategy was effective: it could animate different target faces without the complicated manual effort required by most facial animation approaches.
Keywords: facial animation, mesh deformation, feature points
Copyright © 2013 Universitas Ahmad Dahlan. All rights reserved.
1. Introduction
Synthesizing realistic human facial expression on 3D facial models is one of the most challenging problems in computer graphics. Although more and more progress is made in face modeling, expression capture and animation techniques, sophisticated manipulation always frustrates newcomers and even costs professional animators much time to grasp the essentials. Therefore, an intuitive, easy and effective system for synthesizing facial expression would be useful in a variety of applications such as the movie industry, video games and teleconferencing.

Performance-driven facial animation has been one of the foremost approaches for capturing the expression motion of a human actor. The captured data of a facial expression is only the motion of a few sparse marked points on the face. However, facial expression animation aims to drive the whole target face of the computer-generated model to perform natural expression similar to the source actor. So the reuse of motion data to animate the facial meshes is a crucial problem.

The motion space of the target model is so different from that of the performer that a mapping process must first complete the transformation task. The next step is to drive the target face with the motion data of the limited sparse feature points computed by the above-mentioned process. Considering that the target face is composed of massive numbers of points, a concise strategy is that once the motion of every point can be calculated according to the motion of the feature points, it is straightforward to deform the whole facial mesh. Parke [1] built a muscle model to simulate the expression with the muscle vector. The positions of facial points are updated relying on special cosine functions. Subsequently, many researchers [2-4] devoted themselves to the muscle model for the production of facial animation. The difficulty of this method is that the model is complicated and the expression cannot be retargeted onto another face.

Radial basis functions (RBFs) [5] are often deployed to acquire the motion of the points on the target model. From the view of interpolation, the RBFs, which take advantage of the 3D positions of the mesh vertices, provide a smooth mesh. However, human expressional motion is regional, and the RBFs as a sort of global method ignore the geometric structure of the facial mesh. Although interpolation with the RBFs is easy to implement, it has to be extended with some partial approaches to deal with the discontinuity problem.
A great many research efforts have been directed toward realistic facial modeling and facial expression animation. To animate the target model, facial expressions are analyzed either by solving the weights for blending elaborate shapes or by updating the positions of feature points. Blend shape [6-9] is a widely used method, which interpolates a set of selected shapes to obtain the desired sculpted shapes for the target model. Much commercial 3D animation software provides special toolkits for blend shape animation. Facial animation with this method is made up of two vital points: the construction of the blend shapes and the calculation of the weights, which also have a particularly strong influence on the ultimate animation. Chuang et al. [10] established a system which automatically found the key shapes and the corresponding weights used to drive the target model. Joshi et al. [11] put forward a segmentation idea for the blend shapes in order to unfold the peculiarity of the captured expression. Lewis et al. [12] presented an approach of direct manipulation of blend shapes using inverse kinematics, which made the editing of blend shapes efficient and intuitive. Liu et al. [13] raised an optimization scheme that automatically discovered the non-linear relationship of blend shapes in facial animation. Wilson et al. [14] proposed to construct correspondences between detailed blend shapes to acquire more realistic digital animation. Given the foundational role of the blend shapes, the tedious work of discovering the proper blend shapes is time-consuming, and a portion of the effort has even gone into the compression of complex blend shape models [15]. A major problem of blend shape is that it employs linear blend shapes to synthesize highly non-linear expression.
The geometric deformations are dominated by pre-designed muscle or surface tissues which are used to imitate the action of facial tissue under different expressions, or by the feature points whose motion can make a great difference on other vertices. Yano et al. [2] acquired a set of expressional parameters from the muscle-based system and applied them on the target models to generate similar expression. The parameters were learned from the analysis of an elastic facial skin model. You et al. [16] constructed a mathematical model according to the physical properties of skin deformation and used the synthesized new facial shapes on the basis of the forces at the points. Bickel et al. [17, 18] obtained large-scale deformations by a fast linear shell model which was controlled through a sparse set of user-defined feature points. Although the deformation with these methods seems effective, the models are much too complex for animation.
In blend shape interpolation, to solve for the vital shapes, a scheme of segmentation accompanied by principal component analysis (PCA) usually divides the face into several separate regions. However, segmentation decouples the natural correlation between different parts of a face. Therefore, we describe a hybrid method that avoids inappropriate segmentation by adaptively segmenting the face into different regions. A way of local deformation, proximity-based weighting (PBW), is introduced to model the regions. Our PBW scheme differs from the one of [19], which is based on the blend shape. Instead, the hinge weighting policy we use is motivated by the work of [20]. We assume that the vertices on the facial mesh are influenced by several proximal feature points. Once the weights of the proximal feature points are acquired, the motion of the vertices can be computed. In [20], the surface distance was exploited, which actually was the sum of the lengths of the edges between two vertices. Our method uses the more exact geodesic distance to indicate the distance between two vertices along the facial mesh, which is discontinuous with holes. Moreover, we adopt the sine functions for the weighting of feature points, which is more coincident with the motion of facial muscle [21].

While the local deformation is conducted, the global deformation is also considered to make the facial mesh smooth. The conventional RBFs are used to implement the global deformation [22]. The final animation with the proposed approach depends on the blending of the local and the global deformation.
2. Research Method
This paper proposes a program of facial animation based on feature points, using motion capture data from the performance of an actor. The system handles the practical reuse problem of motion capture data. The transformation of the motion space, based on RBFs with geodesic distance, is first conducted to obtain the expressional motion of the feature points for the target model. Afterwards, a two-stage deformation is employed to synthesize the facial expression for the target model. The RBFs realize the global deformation
and the local deformation is dominated by the influence of the feature points in the neighbor area of every vertex.
2.1 The transformation of expressional space
The original expression in our system is extracted from the sequences of motion capture, which belongs to the space of the performers. However, the target face is a computer-generated model which is in another space. In order to achieve synchronized facial animation between the different spaces, we present a method for the transformation of expressional space, which takes the geometric structure of the human face into account. RBFs are widely used to retarget the source animation to the target face, realizing the space transformation between the two models.
The conventional RBFs for this problem [23-25] are often based on the Euclidean distance, which obviously ignores the discontinuous areas of the human face and leads to artificial motion information being transferred to the target face. In this paper, we use the geodesic distance in the RBFs to estimate the motion information embedded in the first frame of the source sequence, implementing the space transformation from the source model to the target model.
The RBFs used in our system are [5]:
$$f(F_j^k) = \sum_{i=1}^{n} w_i^k\, \varphi\!\left(\left\|F_i^k - F_j^k\right\|\right) + q(F_j^k) \qquad (1)$$
where $F_i^k$ is the $i$th feature point in source motion capture frame $k$, $\|F_i^k - F_j^k\|$ denotes the distance between $F_i^k$ and $F_j^k$, $q(F_j^k)$ is a polynomial regarded as a radiation transform, and $n$ is the number of feature points. The basis function $\varphi(\|F_i^k - F_j^k\|)$ we use here is the inverse multi-quadric function $\varphi(\|F_i^k - F_j^k\|) = 1\big/\sqrt{\|F_i^k - F_j^k\|^2 + r_i^2}$ with $r_i = \min_{j \ne i}\|F_i^k - F_j^k\|$.
The RBFs are trained between the source feature points at the first frame and the corresponding feature points on the target face. This is equivalent to solving linear systems of $n$ equations (one per dimension in the three dimensional case).
When $k = 0$ is substituted into equation (1), that just means at the first frame:
$$m_i^0 = f(F_i^0) \qquad (2)$$
Let $\Phi$ be the matrix with entries $\varphi(\|F_i^k - F_j^k\|)$ and $M \in \mathbb{R}^{n \times 3}$ the matrix of positions of the feature points at the current frame on the target face. Combining equations (1) and (2), the system can be defined by
$$M = \Phi W \qquad (3)$$
In order to obtain the movement of the feature points at every frame on the target face, we first have to compute the weight matrix $W$. The relationship between the locations of the feature points at the first frame and those on the target face can be easily achieved by
$$W = \Phi^{-1} M \qquad (4)$$
The transformation from the source face space to the target face space can be computed by (4), in which the geodesic distance instead of the Euclidean distance is applied in the basis functions.
Once the mapping relationship is constructed, which means the weight matrix $W$ is given, the locations of the feature points on the target face at each frame can be extracted from
$M$ with equation (3). In this way, not only are the coordinates of the source feature points adapted to the target face, but also the special morphology of the face is taken into account.
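For concreteness, the following Python sketch implements the training step (4) and the per-frame evaluation (3) with NumPy. It is a minimal illustration under stated assumptions: the polynomial term $q$ of equation (1) is dropped, a Euclidean pairwise distance stands in for the geodesic distance used in the paper, and the function names are ours rather than the paper's.

```python
import numpy as np

def pairwise_dist(A, B):
    # Euclidean stand-in; the paper uses the geodesic distance here.
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)

def inv_multiquadric(d, r):
    # Inverse multi-quadric basis named below equation (1):
    # phi(x) = 1 / sqrt(x^2 + r_i^2)
    return 1.0 / np.sqrt(d ** 2 + r ** 2)

def train_retarget(src0, tgt0):
    """Equation (4): W = Phi^{-1} M from the first-frame correspondence.
    src0, tgt0: (n, 3) source/target feature points at the first frame."""
    D = pairwise_dist(src0, src0)
    # r_i = min_{j != i} ||F_i - F_j||, one value per RBF center
    r = np.where(np.eye(len(src0), dtype=bool), np.inf, D).min(axis=1)
    Phi = inv_multiquadric(D, r[None, :])
    W = np.linalg.solve(Phi, tgt0)          # (n, 3), one column per coordinate
    return W, r

def retarget_frame(src_k, src0, W, r):
    """Equation (3): evaluate the trained RBFs at the frame-k source features,
    yielding the feature point positions on the target face."""
    Phi_k = inv_multiquadric(pairwise_dist(src_k, src0), r[None, :])
    return Phi_k @ W
```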
2.2 Hybrid Deformation of Face Mesh
To synthesize the animation for the target model according to the motion of the feature points, a number of methods employ a partition principle to handle the discontinuous motion of the expression. In this situation, artificial expression is likely to result between adjacent regions. We present the plan of PBW, in which the motion of a vertex depends on its proximal feature points. The PBW-based deformation makes full use of the local regions around the vertex, and from an overall point of view we regard the interpolation of RBFs as the global deformation in order to optimize the smoothness of the facial mesh. We exploit geodesic distance measures for the distance between two vertices along the facial mesh and the cosine functions as the weighting functions.
2.2.1 Local Deformation using PBW
For each feature point on the target face, there is a local region in which the vertices are intensively influenced by that feature point. On the other hand, every vertex on the facial mesh is controlled by the proximal feature point and the neighbor feature points of the proximal one. Therefore, the distribution of the feature points on the target face should guarantee the similarity of the configuration with the actor, whose feature points are defined according to the properties of expressional motion.
2.2.1.1 Proximal area
Given the mesh of the target face and the configuration of feature points, we first compute a set of geodesic distances from the vertex to every feature point. The geodesic distance exactly describes the surface distance between two points on the facial mesh. The feature point nearest to the vertex is defined as the dominant controller for that vertex. Therefore, there is a local area for every feature point, which is comprised of the vertices that share one dominant controller. At the same time, we consider the neighbor feature points of the dominant controller. From another perspective, the dominant controller and its neighbors form a proximal region which is supposed to have significant influence on the vertex.
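As an illustration of this construction, the sketch below computes the surface distance by Dijkstra's algorithm on the mesh edge graph and then picks the dominant controller. Note that this edge-walk distance is the approximation attributed to [20] above, so it serves only as a stand-in for the exact geodesic distance our method assumes; the helper names are ours.

```python
import heapq
import numpy as np

def surface_distances(verts, edges, src):
    """Single-source Dijkstra over the mesh edge graph, a practical stand-in
    for the exact geodesic distance used in the paper."""
    adj = [[] for _ in range(len(verts))]
    for a, b in edges:
        w = float(np.linalg.norm(verts[a] - verts[b]))
        adj[a].append((b, w))
        adj[b].append((a, w))
    dist = np.full(len(verts), np.inf)
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale heap entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

def dominant_controller(verts, edges, feature_idx, v):
    """The feature point with the smallest surface distance to vertex v."""
    d = surface_distances(verts, edges, v)
    return feature_idx[int(np.argmin(d[feature_idx]))]
```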
Figure 1. Proximal-based weighting
2.2.1.2 Proximal-based weighting
The purpose of PBW is to calculate the weights of the feature points in the adjacent area of the vertex. Given a portion of the facial mesh as shown in Figure 1, the weight can be computed with the following steps:
Step 1: For the vertex $P$ on the target face, the dominant feature point $F_1$ can be acquired with the aforementioned method. Furthermore, the proximal region of $F_1$ contains the neighbor feature points of $F_1$, such as $F_2$ and $F_3$.
Step 2: In the proximal region of the feature point $F_1$, the line $F_1P$ connects the vertex $P$ to the dominant feature point $F_1$. Join the dominant point $F_1$ with each neighbor feature point $F_i$, such as $F_1F_2$. The smallest two angles between $F_1P$ and $F_1F_i$ are selected for the calculation of the weight. If $\theta_i$ is the angle between $F_1P$ and $F_1F_i$ and the smallest two angles are $\theta_2$ and $\theta_3$ (as in Figure 1), they have to guarantee the following condition:
$$\theta_2 \le \frac{\pi}{2}, \qquad \theta_3 \le \frac{\pi}{2}$$
If there is only one $\theta_i$ satisfying that requirement, it will be retained for the subsequent step.
Step 3: To prepare for computing the weight of the feature points, a weighted distance $d$ can be obtained:
$$d = \begin{cases} \dfrac{d_{12}\cos\theta_2 + d_{13}\cos\theta_3}{\cos\theta_2 + \cos\theta_3}, & \theta_2 \le \dfrac{\pi}{2}\ \text{and}\ \theta_3 \le \dfrac{\pi}{2} \\[1ex] d_{12}\cos\theta_2, & \text{only}\ \theta_2 \le \dfrac{\pi}{2} \end{cases} \qquad (5)$$
The distance $d_{ij}$ in the formula indicates the Euclidean distance between the feature points $F_i$ and $F_j$.
Step 4: The weight of the feature point $F_1$ can be obtained using the equation:
$$w_{1p} = \cos\!\left(\left(1 - \frac{d_{1p}}{d}\right)\frac{\pi}{2}\right) \qquad (6)$$
For the other feature points in the proximal region of $F_1$, the weight is:
$$w_{ip} = \cos\!\left(\left(1 - \frac{d_{ip}}{d}\right)\frac{\pi}{2}\right) \qquad (7)$$
The distance $d_{ip}$ is the geodesic distance from the vertex $P$ to the feature point $F_i$. From equation (7), it can be found that the feature points in the proximal region of the dominant controller $F_1$ have less effect on the vertex the nearer they lie to $F_1$. This just reflects the prominent role of the feature point $F_1$ in the proximal area of the vertex.
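The four steps can be condensed into one routine. The following sketch follows our reconstruction of equations (5)-(7); the neighbor table and the guards for degenerate cases (a vertex coinciding with $F_1$, or no angle qualifying) are our own scaffolding, not part of the paper.

```python
import numpy as np

def pbw_weights(P, feats, geo_d, neighbors):
    """Steps 1-4: proximity-based weights of the feature points for one vertex.

    P:         (3,) vertex position.
    feats:     (n, 3) feature point positions on the target face.
    geo_d:     (n,) geodesic distances from the vertex to every feature point.
    neighbors: dict, feature index -> indices of its neighbor feature points.
    Returns (indices, weights) of the proximal feature points.
    """
    f1 = int(np.argmin(geo_d))                  # Step 1: dominant controller
    v = P - feats[f1]                           # direction F1 -> P
    nv = np.linalg.norm(v)
    if nv == 0.0:                               # degenerate: P coincides with F1
        return [f1], np.array([1.0])
    ang = {}
    for i in neighbors[f1]:                     # Step 2: angles between F1P, F1Fi
        u = feats[i] - feats[f1]
        c = v @ u / (nv * np.linalg.norm(u))
        ang[i] = np.arccos(np.clip(c, -1.0, 1.0))
    kept = sorted((i for i in ang if ang[i] <= np.pi / 2),
                  key=ang.get)[:2]              # at most the two smallest angles
    if not kept:                                # degenerate case, not covered
        return [f1], np.array([1.0])            # in the paper
    if len(kept) == 2:                          # Step 3: weighted distance d, eq. (5)
        i2, i3 = kept
        c2, c3 = np.cos(ang[i2]), np.cos(ang[i3])
        d12 = np.linalg.norm(feats[i2] - feats[f1])
        d13 = np.linalg.norm(feats[i3] - feats[f1])
        d = (d12 * c2 + d13 * c3) / (c2 + c3)
    else:
        i2 = kept[0]
        d = np.linalg.norm(feats[i2] - feats[f1]) * np.cos(ang[i2])
    idx = [f1] + kept                           # Step 4: equations (6) and (7)
    w = np.cos((1.0 - geo_d[idx] / d) * np.pi / 2)
    return idx, w
```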
2.2.1.3 The Local Deformation
The principle of the local deformation is the fact that the motion of the vertex is determined by that of its proximal feature points. When the weights of the feature points in the proximal region of the vertex $P$ are computed, the displacement $s_p$ of the vertex $P$ can be calculated in terms of the following formula:
$$s_{p\_local} = \frac{\displaystyle\sum_{i=0}^{n} \frac{w_{ip}\, s_i}{d_{ip}^2}}{\displaystyle\sum_{i=0}^{n} \frac{w_{ip}}{d_{ip}^2}} \qquad (8)$$
In each frame of the animation sequence, the displacement $s_i$ of the proximal feature point $F_i$ comes from the result of the transformation of the motion space. The weight of the feature point $F_i$ is $w_{ip}$, and $n$ is the number of the feature points which make a difference on the motion of the vertex in the proximal region. The distance $d_{ip}$ between the feature point $F_i$ and the vertex $P$ is the Euclidean distance in the current frame, which is different from that in the stage of PBW. Actually, it could be measured more exactly with the geodesic distance than with the Euclidean distance (Figure 2). However, considering the efficiency of the animation and the complication of computing the geodesic distance at running time, we apply the Euclidean distance to roughly measure the distance between the feature point and the vertex.
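A direct transcription of equation (8) is given below; the epsilon guard for a vertex lying exactly on a feature point is our addition.

```python
import numpy as np

def local_displacement(P_now, feats_now, s, idx, w, eps=1e-9):
    """Equation (8): normalized inverse-square blend of the displacements s
    of the proximal feature points, modulated by the PBW weights w.

    P_now:     (3,) current vertex position.
    feats_now: (n, 3) current feature point positions.
    s:         (n, 3) feature displacements from the space transformation.
    idx, w:    proximal feature indices and PBW weights (Section 2.2.1.2).
    """
    d = np.linalg.norm(feats_now[idx] - P_now, axis=1)   # Euclidean, at run time
    a = w / np.maximum(d, eps) ** 2                      # w_ip / d_ip^2
    return (a[:, None] * s[idx]).sum(axis=0) / a.sum()
```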
Figure 2. The Euclidean distance (the dashed line segment on the right figure) and the geodesic distance (the solid curve segment on the right figure). (a) facial mesh; (b) the left eye; (c) the mouth.
2.2.2 Global Deformation using RBFs
There could be certain relations in the expressional motion of the facial mesh across different regions. The local deformation is likely to segment this abstract relevance, so the global deformation follows to tune the motion as a whole. The RBFs [5] are well known for their power to approximate high dimensional smooth surfaces and are used for model fitting. Distinct from the retargeting process, here we construct a deformation model:
$$s^k_{p\_global} = \sum_{j=1}^{n} w^k_j\, \varphi\!\left(\left\|P^0_i - F^0_j\right\|\right) \qquad (9)$$
where $P^0_i$ denotes the $i$th vertex on the target face, $P^k_i$ is the motion offset of the $i$th vertex at frame $k$, $F^0_j$ represents the $j$th feature point on the target face, $\|P^0_i - F^0_j\|$ is the
Euclidean distance between $P^0_i$ and $F^0_j$, and $n$ is the number of feature points. The radially symmetric basis function $\varphi(\|P^0_i - F^0_j\|)$ here is the multi-quadric:
$$\varphi\!\left(\left\|P^0_i - F^0_j\right\|\right) = \sqrt{\left\|P^0_i - F^0_j\right\|^2 + r^2} \qquad (10)$$
At each frame, the RBFs are trained between the feature points on the target face and their motion offsets at the current frame, and in this way we acquire different coefficients for the interpolation. Then, the coefficients are used to calculate the motion offsets of the vertices at the current frame.
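In code, the per-frame training and evaluation of equations (9)-(10) could look as follows; this is a sketch, and the constant $r$, whose value the paper does not report, is left to the caller.

```python
import numpy as np

def multiquadric(d, r):
    # phi(x) = sqrt(x^2 + r^2), the basis of equation (10)
    return np.sqrt(d ** 2 + r ** 2)

def global_displacements(verts0, feats0, feat_offsets_k, r=1.0):
    """Equations (9)-(10): train RBFs on the feature offsets of frame k,
    then interpolate a motion offset for every mesh vertex.

    verts0:         (m, 3) rest positions of the mesh vertices.
    feats0:         (n, 3) feature points on the target face.
    feat_offsets_k: (n, 3) feature motion offsets at frame k.
    """
    D_ff = np.linalg.norm(feats0[:, None] - feats0[None, :], axis=-1)
    Wk = np.linalg.solve(multiquadric(D_ff, r), feat_offsets_k)  # train at frame k
    D_vf = np.linalg.norm(verts0[:, None] - feats0[None, :], axis=-1)
    return multiquadric(D_vf, r) @ Wk           # (m, 3) per-vertex offsets
```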
2.2.3 Blending
When both the local and the global deformation are obtained, we use a parameter $\alpha$ to blend them. In each frame of the animation, the total displacement of the vertex $P$ consists of the two parts shown in formula (11). The position of the vertex $P$ in the current frame is therefore its static position (at the first frame) combined with its current displacement.
$$s_p = \alpha\, s_{p\_local} + (1 - \alpha)\, s_{p\_global} \qquad (11)$$
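The final composition is then one line per vertex; the value of $\alpha$ is not reported in the paper, so the default below is a placeholder only.

```python
def deform_frame(verts0, s_local, s_global, alpha=0.5):
    """Equation (11): blend the two displacement fields and move the mesh.
    verts0, s_local, s_global: (m, 3) arrays; alpha: blending parameter."""
    return verts0 + alpha * s_local + (1.0 - alpha) * s_global
```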
3. Results and Discussion
3.1 Experiment setting
Figure 3. Facial Mo-Cap environment (a) and facial marker setup (b)
The source motion data we use is captured from the passive optical Mo-Cap system DVMC-8820, which is composed of eight infra-red (IR) cameras with four million pixels each and a 60 Hz capture rate. After simple processing, the motion capture data can be used in our system. In the experiment, 60 infra-red sensor markers are pasted on the face of the performers. During performance, the movement of the head is limited to a small range: basically, a rotation angle of less than 5 degrees and global shifting of less than 1/20 of the length of the head, as shown in Figure 3.
3.2 Experiment results and analysis
In order to validate the effect of animation with our system, we select a female face as the target model, which is different from the actor. The motion capture data from one performer can be reused after the denoising processing. Our experimental platform is based on the VC++ platform and the OpenGL graphic library, with the Matrix<lib> embedded to accomplish the matrix manipulation.
Figure 4 demonstrates four different animation sequences of the target face with the method of our hybrid deformation. After the expressional motion from the actor is transformed to
the space of the target model, the two-stage deformation is conducted to drive the target face and to generate expressional animation similar to the source performance.
To verify the effectiveness of our method, we have also implemented the method of deformation with GRBF [26] as a comparison. Figure 5 shows several experimental results with the two methods. Each column in Figure 5 corresponds to the same frame from one animation sequence. Generally speaking, the range of expression with GRBF changes more widely than that with our method using the identical source expression. Consequently, with the method of GRBF the shape of the mouth alters sharply, such as in Columns 2 and 3, and it appears that overfitting impairs the natural expression in Column 4. Another distinct problem is the shape of the eyes in Column 5. It seems that one or more feature points have too much effect on some vertices, which leads to the distortion of the eyes. The bottom row is the result with our method, and both the motion of the mouth and that of the eyes appear plausible. In addition, our PBW strategy computes the weights in advance, so the efficiency of the animation is guaranteed.
Figure 4. Four different animation sequences (Seq1-Seq4) with our method
Figure 5. The comparison of deformation with GRBF (top row) and our method (bottom row)
Figure 6 describes the details of the mouth corresponding to the expressions in Figure 5. The top row shows the results with GRBF, where the external outlines of the mouth are apparently blurred due to the incorrect motion in those zones. The shape of the mouth with our method is presented in the bottom row. Although it is slightly unsmooth in the inner outlines of the mouth, the entire effect does not bring much trouble for users in recognizing different expressions. On the whole, our method can obtain a natural animation sequence.
Figure 6. The local change of the mouth in different expressions with the two methods: GRBF (top row) and our method (bottom row)
4. Conclusion
We have presented a framework for synthesizing realistic facial animation using motion capture data. The source animation from the performer undergoes the transformation of the motion space in order to obtain the motion of the feature points for the target face. Afterwards, the two-stage deformation is employed, which considers both the local influence of the features and the global smooth deformation. The animation is ultimately synthesized by blending the local and the global deformation.

From the experimental results, our method basically meets the demands of animation. In the future, we plan to work on computing the proximal regions and the corresponding weights of the feature points in real time. According to this notion, the motion information in adjacent frames (the previous and following frames of the current frame) can be extracted and used for the calculation of the weights. Another improvement of our approach would be to capture and transfer fine details such as wrinkles and small deformations of the skin.
Acknowledgements
This work is supported by the Program for Changjiang Scholars and Innovative Research Team in University (No.IRT1109), the Program for Liaoning Science and Technology Research in University (No.LS2010008), the Program for Liaoning Innovative Research Team in University (No.LT2011018), the Natural Science Foundation of Liaoning Province (201102008), the Program for Liaoning Key Lab of Intelligent Information Processing and Network Technology in University, and the "Liaoning BaiQianWan Talents Program (2010921010, 2011921009)".
References
[1] Parke FI. Computer generated animation of faces. Proceedings of the ACM Annual Conference (ACM '72). Boston. 1972: 451-457.
[2] Yano K, Harada K. A facial expression parameterization by elastic surface model. International Journal of Computer Games Technology. 2009; 2009(1): 1-11.
[3] Aina OO, Zhang JJ. Automatic muscle generation for physically-based facial animation. Proceedings of the ACM SIGGRAPH 2010 Posters. Los Angeles. 2010: 105-105.
[4] Fratarcangeli M. Position-based facial animation synthesis. Computer Animation and Virtual Worlds. 2012; 23(3-4): 457-466.
[5] Buhmann MD. Radial basis functions: theory and implementations. Cambridge: Cambridge University Press. 2003.
[6] Bergeron P, Lachapelle P. Controlling facial expressions and body movements in the computer-generated animated short 'Tony De Peltrie'. Proceedings of SIGGRAPH '85: ACM SIGGRAPH 1985 Tutorial Notes. 1985.
[7] Huang H, Chai J, Tong X, Wu HT. Leveraging motion capture and 3D scanning for high-fidelity facial performance acquisition. ACM Transactions on Graphics (TOG). 2011; 30(4): 1-10.
[8] Seol Y, Seo J, Kim PH, Lewis JP, Noh J. Weighted pose space editing for facial animation. The Visual Computer. 2012; 28(3): 319-327.
[9] Seo Y, Lewis J, Seo J, Anjyo K, Noh J. Spacetime expression cloning for blendshapes. ACM Transactions on Graphics (TOG). 2012; 31(2): 1-12.
[10] Chuang E, Bregler C. Performance driven facial animation using blendshape interpolation. Stanford University. 2002.
[11] Joshi P, Tien WC, Desbrun M, Pighin F. Learning controls for blend shape based realistic facial animation. Proceedings of the Eurographics/SIGGRAPH Symposium on Computer Animation. 2003: 187-192.
[12] Lewis JP, Anjyo K. Direct manipulation blendshapes. Computer Graphics and Applications. 2010; 30(4): 42-50.
[13] Liu X, Xia S, Fan Y, Wang Z. Exploring non-linear relationship of blendshape facial animation. Computer Graphics Forum. 2011; 30(6): 1655-1666.
[14] Wilson CA, Alexander O, Tunwattanapong B, Peers P, Ghosh A, Busch J, Hartholt A, Debevec P. Facial cartography: interactive high-resolution scan correspondence. Proceedings of the ACM SIGGRAPH 2011 Talks (SIGGRAPH '11). New York. 2011.
[15] Seo J, Irving G, Lewis JP, Noh J. Compression and direct manipulation of complex blendshape models. ACM Transactions on Graphics (TOG). 2011; 30(6): 1-10.
[16] You L, Southern R, Zhang J. Adaptive physics-inspired facial animation. Motion in Games. 2009; 5884: 207-218.
[17] Bickel B, Lang M, Botsch M, Otaduy MA, Gross M. Pose-space animation and transfer of facial details. Proceedings of the 2008 ACM SIGGRAPH/Eurographics Symposium on Computer Animation. Dublin. 2008: 57-66.
[18] Bickel B, Botsch M, Angst R, Matusik W, Otaduy M, Pfister H, Gross M. Multi-scale capture of facial geometry and motion. ACM Transactions on Graphics (TOG). 2007; 26(3): 33.
[19] Zhang L, Snavely N, Curless B, Seitz S. Spacetime faces: high-resolution capture for modeling and animation. Data-Driven 3D Facial Animation. 2007: 248-276.
[20] Kahler K, Haber J, Seidel HP. Feature point based mesh deformation applied to MPEG-4 facial animation. Proceedings of the IFIP TC5/WG5.10 DEFORM'2000 Workshop and AVATARS'2000 Workshop on Deformable Avatars. 2001: 24-34.
[21] Waters K. A muscle model for animating three-dimensional facial expression. ACM SIGGRAPH Computer Graphics. 1987; 21(4): 17-24.
[22] Rendall TCS, Allen CB. Reduced surface point selection options for efficient mesh deformation using radial basis functions. Journal of Computational Physics. 2010; 229(8): 2810-2820.
[23] Dutreve L, Meyer A, Bouakaz S. Feature points based facial animation retargeting. Proceedings of the 2008 ACM Symposium on Virtual Reality Software and Technology. Bordeaux. 2008: 197-200.
[24] Fang XY, Wei XP, Zhang Q, Zhou CJ. On the simulation of expressional animation based on facial MoCap. SCIENCE CHINA Information Sciences. 2012: 1-12.
[25] Edge JD, King SA, Maddock S. Use and re-use of facial motion capture data. Proceedings of the Vision, Video, and Graphics. 2003: 1-8.
[26] Rhee T, Hwang Y, Kim JD, Kim C. Real-time facial animation from live video tracking. Proceedings of the 2011 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA '11). 2011: 215-224.