International Journal of Reconfigurable and Embedded Systems (IJRES)
Vol. 4, No. 2, July 2015, pp. 99~121
ISSN: 2089-4864
Journal homepage: http://iaesjournal.com/online/index.php/IJRES
An Efficient Framework for Floor-plan Prediction of Dynamic Runtime Reconfigurable Systems

A. Al-Wattar, S. Areibi, G. Grewal
Faculty of Engineering and Computer Science, University of Guelph, Guelph, Canada
Article Info

Article history:
Received Dec 16, 2014
Revised Mar 23, 2015
Accepted Apr 20, 2015

ABSTRACT
Several embedded application domains for reconfigurable systems tend to combine frequent changes with high performance demands of their workloads, such as image processing, wearable computing and network processors. Time multiplexing of reconfigurable hardware resources raises a number of new issues, ranging from run-time systems to complex programming models that usually form a Reconfigurable hardware Operating System (ROS). The Operating System performs online task scheduling and handles resource management. There are many challenges in adaptive computing and dynamic reconfigurable systems. One of the major understudied challenges is estimating the required resources in terms of soft cores, Programmable Reconfigurable Regions (PRRs), the appropriate communication infrastructure, and predicting a near optimal layout and floor-plan of the reconfigurable logic fabric. Some of these issues are specific to the application being designed, while others are more general and relate to the underlying run-time environment. Static resource allocation for Run-Time Reconfiguration (RTR) often leads to inferior and unacceptable results. In this paper, we present a novel adaptive and dynamic methodology, based on a Machine Learning approach, for predicting and estimating the necessary resources for an application based on past historical information. An important feature of the proposed methodology is that the system is able to learn and generalize and, therefore, is expected to improve its accuracy over time. The goal of the entire process is to extract useful hidden knowledge from the data. This knowledge is the prediction and estimation of the necessary resources for an unknown or not previously seen application.
Keywords:
Dynamic run time systems
Floor-plan prediction
Machine learning
Reconfigurable OS
Copyright © 2015 Institute of Advanced Engineering and Science. All rights reserved.
Corresponding Author:
Shawki Areibi
University of Guelph
50 Stone Road, Guelph, Ontario, Canada, 1-519-8244120
Email: sareibi@uoguelph.ca
1. INTRODUCTION
In the area of computer architecture, choices span a wide spectrum, with Application Specific Integrated Circuits (ASICs) and General Purpose Processors (GPPs) being at opposite ends. General purpose processors are flexible, but unlike ASICs are not optimized to specific applications. Reconfigurable Architectures, in general, and Field Programmable Gate Arrays (FPGAs), in particular, fill the gap between these two extremes by achieving both the high performance of ASICs and the flexibility of GPPs. However, FPGAs are still not a match for the lower power consumed by ASICs nor the performance achieved by the latter. One important feature of FPGAs is their capability to adapt during the run-time of an application. The run-time reconfiguration capability of FPGAs provides common benefits: adapting hardware algorithms during system run-time, sharing hardware resources to reduce device count and power consumption, and shortening reconfiguration time [1].
Several embedded application domains for reconfigurable systems tend to combine frequent changes with high performance demands of their workloads, such as image processing, wearable computing, and network processors. In many embedded systems used in different applications, several wireless standards and technologies, such as WiMax, WLAN, GSM, and WCDMA, have to be utilized and supported. However, it is unlikely that these protocols will be used simultaneously. Accordingly, it is possible to dynamically load only the one that is needed. Another example is employing different machine vision algorithms on an Unmanned Aerial Vehicle (UAV) and utilizing the most appropriate one based on the environment or perhaps the need to lower power consumption. Time multiplexing of reconfigurable hardware resources raises a number of new issues, ranging from runtime systems to complex programming models that usually form a Reconfigurable hardware Operating System (ROS). The Operating System performs online task scheduling and handles resource management. The main objective of an Operating System for reconfigurable platforms is to reduce the complexity of application development by giving the developer a higher level of abstraction with which to work.
1.1. Problem Definition
Performance is one of the fundamental reasons for using Reconfigurable Computing Systems (RCS). By mapping algorithms and applications to hardware, designers can tailor not only the computation components, but also perform data-flow optimization to match the algorithm. One of the main problems encountered in Run Time Reconfiguration (RTR) is identifying the most appropriate framework or infrastructure that suits an application. Typically, a designer would manually divide the FPGA fabric into static and dynamic regions [2]. The static regions would accommodate modules that do not change in time, such as the task manager and necessary buses used for communication. The dynamic region is partitioned into a set of uniform or non-uniform modules with a certain size (we refer to these as "Programmable Reconfigurable Regions" (PRRs)) that act as application specific hardware accelerators for the incoming tasks that need to be executed. Every application (e.g., Machine Vision, Wireless-Sensor Network) requires specific types of resources that optimize certain objectives, such as reducing power consumption, improving the execution time, reducing the cost, or a combination of these objectives.
In current RTR systems, designers tend to perform resource allocation and floor-planning of the FPGA fabric a priori. These allocated resources, however, might not be appropriate for a new and different incoming application (e.g., streaming, non-streaming, hybrid). Instead of tailoring the FPGA fabric for an application, the latter has to suffer if the floor-plan is a mismatch for the application itself. A one-size-fits-all approach in this case would not only hinder the amount of performance sought by using RTR, but might even deteriorate speed, power, or the combination of both. As a result of this pre-specified, finite number of resource types and quantities allocated, a hefty price in terms of performance is often paid, since the floor-plan and layout would not be suitable for the application in which it is being used. This performance penalty may occur in the form of increased power consumption, since meeting performance requirements might entail the usage of multiple PRRs. Accordingly, an adaptive and dynamic approach is necessary for resource estimation and floorplanning.
1.2. Motivation
There are many challenges in adaptive computing and dynamic reconfigurable systems. One of the major understudied challenges is estimating and predicting the required resources in terms of soft cores, PRRs, and the appropriate communication infrastructure, to name just a few. Some of these issues are specific to the application being designed, while others are more general and relate to the underlying run-time environment. Static resource allocation for RTR might lead to inferior results. The number of PRRs for one application might be different than the number required by another. The type of PRRs (uniform, non-uniform, hybrid) also plays a crucial role in determining both performance and power consumption, as seen in Figure 1. The type of scheduler used to determine when/where a task is executed is also important for specific real-time operations. The type of communication infrastructure that connects PRRs with the Task Manager plays an important role in speeding up a certain application. Accordingly, in this work we seek to overcome the limitation of static resource allocation with a more appealing approach that can force the infrastructure of the reconfigurable computing platform to accommodate and match the application rather than the reverse.
1.3. Proposed Approach
In this paper, we present a novel adaptive and dynamic methodology based on an intelligent Machine Learning approach that is used to predict and estimate the necessary resources for an application based on past historical information. An important feature of the proposed methodology is that the system is
able to learn as it gains more knowledge and, therefore, is expected to generalize and improve its accuracy over time. Even though the approach is general enough to predict most if not all types of resources, from the number of PRRs, type of PRRs, and type of scheduler to the communication infrastructure, we limit our results to the former three required for an application. We plan to accomplish this task by first extracting certain features from the applications that are executed on the reconfigurable platform. The features compiled will be used to train and build a classification model that is capable of predicting the floorplan appropriate for an application. The classification model is a supervised learning approach that can generalize and accurately predict the class of an incoming application from previously seen patterns. Our proposed approach will be based on several modules including benchmark generation, data collection, pre-processing of data, data classification, and post processing. The goal of the entire process is to extract useful, hidden knowledge from the data; this knowledge is then used to predict and estimate the necessary resources and appropriate floorplan for an unknown or not previously seen application.
Figure 1. Floorplan decisions: (A) Predicting the layout, (B) Predicting the number of PRRs and GPPs
1.4. Contribution
The main contributions of this paper can be clearly stated as follows:
1. The majority of work on Reconfigurable Computing Systems relies on a static approach to estimate and decide upon the resources required to solve a specific problem. In this work, we instead propose a novel dynamic and adaptive approach.
2. To the best of our knowledge, the use of data-mining and machine-learning techniques has not been proposed by any research group to exploit this specific type of Design Exploration for Reconfigurable Systems in terms of predicting the appropriate floorplan of an application.
1.5. Paper Organization
The remainder of this paper is organized into five sections. Section (2) provides an overview of Reconfigurable Computing and Machine Learning along with necessary background. Section (3) discusses the most significant work published in the field of resource estimation for Reconfigurable Systems. In Section (4) the proposed technique for estimating and predicting the necessary resources is described. Section (5) provides our results based on the proposed implementation. Conclusions and future directions are finally presented in Section (6).
2. BACKGROUND
Reconfigurable computing is capable of mapping hardware tasks onto a finite FPGA fabric while taking into account the dependency among tasks and timing criteria. Therefore, managing the resources becomes essential for any reconfigurable computing platform. In this section, we provide the necessary background for the proposed work in this paper. Accordingly, the concept of run-time reconfiguration and
the Xilinx flow [2] used in this work will be discussed, followed by resource management in the form of an operating system. The concept of machine learning, data mining and classification will then be introduced as a means to predict the necessary resources required by the operating system in the proposed framework.
2.1. Partial Reconfiguration Flow
Dynamic partial reconfiguration allows several independent bitstreams of configurations to be mapped into the FPGA fabric independently, as one architecture can be selectively swapped out of the chip while another is left executing. Partial reconfiguration provides the capability of keeping certain parts intact in the FPGA while other parts are replaced, similar to memory management in traditional CPUs. A Reconfigurable Computing system traditionally consists of a General Purpose Processor (GPP) and several Reconfigurable Modules that execute dedicated hardware tasks in parallel [3]. A fundamental feature of a Partially Reconfigurable FPGA is that the logic and interconnects are time multiplexed. Therefore, in order for an application to be mapped onto an FPGA, it needs to be divided into independent tasks such that each sub-task can be executed at a different time.
e.
Fi
gu
re
2.
1
D
v
e
rsus
2
D
t
a
s
k
p
l
acem
e
nt
area
m
odel
s
fo
r rec
o
n
f
i
g
ura
b
l
e
de
vi
ces
Xi
l
i
nx
Part
i
a
l
R
econ
f
i
g
urat
i
o
n
desi
gn
fl
o
w
[
2
]
u
s
es a
b
o
t
t
o
m
up s
y
nt
hesi
s a
p
p
r
oach
,
whe
r
e
R
econ
f
i
g
ura
b
l
e
M
o
d
u
l
e
s ha
v
e
t
o
be sy
nt
he
s
i
zed sepa
rat
e
ly. Each
Reconfig
urab
le Mo
du
l
e
is co
n
s
id
ered as
a
separat
e
pr
o
j
e
c
t
whe
r
e i
t
i
s
veri
fi
ed a
n
d sy
nt
hesi
zed
separat
e
l
y
.
The t
o
p
desi
gn t
r
eat
s eac
h Pa
rt
i
a
l
R
econ
f
i
g
ura
b
l
e
R
e
gi
on as a bl
ack b
o
x
.
Aft
e
r
gene
rat
i
ng al
l
net
-
l
i
s
t
s
(t
op de
si
g
n
and R
eco
nfi
g
u
r
a
b
l
e
Modules), each Partial Reconfi
g
urab
le Region m
u
st be
manually floor-
planne
d usi
ng t
h
e Xilinx PlanAhe
a
d
design t
ool. The PRR can be
rectangula
r
or L sh
ap
ed
with
so
m
e
restrictio
ns. More
d
e
tails can
b
e
fo
und
i
n
[2
].
The task placement model in a reconfigurable device can be abstracted as a 1D or 2D model. The 1D model divides the reconfigurable device into columns that can be reconfigured separately, and where a task size is assigned based on width only. In the 2D based approach, a task can have any width and height and can be placed anywhere on the FPGA fabric. The 1D model simplifies the placement mechanism and trades this simplification for a suboptimal device utilization [4]. Both models are shown in Figure 2.
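The difference between the two placement models can be made concrete with a small data-structure sketch. The snippet below is illustrative only and is not taken from any particular implementation; the class and field names are assumptions chosen for clarity.

    from dataclasses import dataclass

    @dataclass
    class Placement1D:
        """1D model: a task occupies a contiguous range of full-height columns."""
        start_col: int
        width_cols: int   # task size is expressed by width only

    @dataclass
    class Placement2D:
        """2D model: a task is an arbitrary rectangle placed anywhere on the fabric."""
        x: int
        y: int
        width: int
        height: int

    def area_1d(p: Placement1D, fabric_height: int) -> int:
        # A 1D placement implicitly consumes the whole column height,
        # which is the source of the sub-optimal utilization noted above.
        return p.width_cols * fabric_height

    def area_2d(p: Placement2D) -> int:
        return p.width * p.height

A 1D placer only has to search over starting columns, while a 2D placer must manage free space in both dimensions, which is why the 1D model trades device utilization for a simpler placement mechanism.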
Partial reconfiguration is appealing and attractive since it provides flexibility. However, multitasking reconfigurable hardware is complex and requires some overhead in terms of management. In order for users to benefit from the flexibility of such systems, an operating system must be developed to further reduce the complexity of application development by giving the developer a higher level of abstraction. In subsection (2.2.) we will briefly discuss the main components of an operating system for run time reconfiguration platforms.
2.2. Operating Systems for Reconfigurable Computing
The use of an operating system for reconfigurable computing should ease application development, help tackle the underlying architecture, and most importantly verify and maintain applications. There are several essential modules that need to exist in any reconfigurable operating system implementation, as shown in Figure 3. The main components of any hardware operating system are the bit-stream manager, scheduler, placer, and communications network.
Figure 3. Reconfigurable OS: essential components
1. The Scheduler: In any type of operating system a scheduler usually decides when tasks will be executed. Efficient task scheduling algorithms have to take data communication, task dependencies, and resource utilization of tasks into consideration to optimally exploit the performance of a dynamic reconfigurable system. Schedulers in reconfigurable operating systems differ from conventional software approaches such as Linux or Windows in several ways:
(a) Reconfigurable operating system schedulers are heavily dependent on the placement of hardware tasks, while in conventional systems the scheduler can be independent from memory allocation.
(b) Computation resources are available as soon as a task is placed in the reconfigurable fabric, while in conventional systems the task may wait in a ready queue for free processing resources.
(c) Reconfigurable computing schedulers have to take into account reconfiguration time and should minimize this time by taking into account task pre-fetching and reuse. This is not an issue in conventional operating systems.
s.
I
n
ou
r pr
ev
ious wo
rk
sev
e
r
a
l sch
e
d
u
ling
alg
o
r
ith
m
s
w
e
re pr
opo
sed for
r
e
con
f
i
g
ur
able syste
m
s to
o
v
e
rco
m
e
th
e li
mita
tio
n
o
f
co
nv
en
tion
a
l
sch
e
du
lers [5]. Sch
e
du
lin
g tech
n
i
qu
es propo
sed
in
the
literatu
re h
a
v
e
d
i
fferen
t
goals, su
ch
as
i
m
p
r
ov
ing
h
a
rd
ware resou
r
ce u
tilizatio
n
,
redu
ction
of
sche
dul
i
n
g t
i
m
e and/
or
rec
o
n
f
i
g
urat
i
o
n o
v
e
r
head
. Ot
her
re
con
f
i
g
ura
b
l
e
c
o
m
put
i
ng sc
he
dul
e
r
s at
t
e
m
p
t
t
o
re
d
u
ce f
r
a
g
m
e
nt
at
i
on a
n
d
po
we
r c
ons
um
pt
i
o
n
[
6
]
.
2. Bit-stream Manager: This module manages the loading/unloading of partial bit-streams from storage into Programmable Reconfigurable Regions (PRRs). The bit-stream manager requires fast and fairly large storage media; therefore, it is preferable to use a dedicated hardware module for such a task. A bit-stream manager is further decomposed into a storage manager and a reconfigurable manager. The latter attempts to read bit-streams from the storage manager and downloads them onto the FPGA fabric.
3. The Placer: In conventional reconfigurable devices a placer decides where to assign new tasks in the reconfigurable area. In RTR systems the scheduler and placer are inter-dependent. Therefore, they are implemented as a single entity which makes it challenging to identify clear boundaries between them.
4. Communication Manager: This module defines how hardware and software tasks communicate and interact with each other. The communication network proposed by the manager depends on the type of applications used in the system. For example, streaming applications such as vision need a different topology than that of a centralized computational application.
In current run-time reconfigurable systems, the floorplan and resources are fixed and allocated a priori before the system is deployed. The allocated resources, in the form of the number, size, and shape of reconfigurable regions and the scheduler type to be used, might not be suitable for all types of tasks and applications to be executed on the FPGA fabric. Accordingly, a performance penalty with significant costs could be incurred. Hence, a more suitable dynamic approach is needed for resource estimation and allocation. In subsection (2.3.), we will introduce the main concept of Machine Learning and Data Mining to predict and estimate the necessary resources for an application based on past historical information.
2.3. Data Mining, Machine Learning and Classification
Data Mining is the core process of the knowledge discovery procedure. A data-mining flow includes different stages, such as Pre-processing, Classification, Clustering and Post-processing. The main objective of the entire process is to extract useful hidden knowledge from the data. Each set of data can be mined using different data-mining techniques depending on the data and the goal of the mining procedure, as shown in Figure 4.
Figure 4. Data mining flow
Classification is one of the main phases of data-mining. It is a key function that categorizes items and records in a database into specific classes or categories. The main objective of classification is to accurately predict the target class for each item in the data. Classification is considered to be a form of supervised learning technique that infers a function from labeled data, as shown in Figure 5. The training data usually consists of a set of records. Each record is a pair consisting of an input object along with a desired output target. The training data is analyzed by the supervised learning algorithm, which produces a model or function that can be used for mapping new examples. The main objective is to produce a model that correctly determines the class label for unseen instances. In other words, the learning algorithm will have the capability to generalize from the provided training data to unseen situations in some reasonably accurate fashion.
For the work proposed in this paper, a classification model is used to predict the appropriate type of resources and layout for a reconfigurable computing platform given a specific application. Classification in our current work begins with a data set in which the class assignments are known. For example, a classification model that predicts the number of PRRs could be developed based on observed data for many data flow graphs over a certain period of time. Binary classification is considered to be the simplest type of classification problem, where the target attribute has only two possible values. For example, in our work the two possible values could be either uniform or non-uniform PRRs. On the other hand, multi-class targets could have multiple values; for example, a small, medium, or large number of soft cores that can be used in a reconfigurable computing platform.
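As an illustration of how such a classification model could be realized, the sketch below fits a supervised classifier on DFG feature vectors with a binary target (uniform vs. non-uniform PRRs) and a multi-class target (a bucketed number of soft cores). It is a minimal example using scikit-learn; the feature columns, label encodings, and the choice of a random forest are assumptions made for illustration, not the paper's exact implementation.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical training records: one row of DFG features per benchmark,
    # e.g. [num_nodes, num_edges, critical_path, avg_fanout, num_subgraphs].
    # In practice these rows come from the database described in Section 4.1.
    X = np.array([
        [120, 180, 14, 1.5, 3],
        [40,   55,  9, 1.3, 1],
        [600, 950, 35, 1.6, 12],
        [15,   20,  5, 1.2, 1],
    ])

    # Binary target: PRR type (assumed label encoding).
    y_prr_type = np.array(["uniform", "non-uniform", "non-uniform", "uniform"])

    # Multi-class target: bucketed number of soft cores.
    y_soft_cores = np.array(["medium", "small", "large", "small"])

    prr_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y_prr_type)
    core_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y_soft_cores)

    new_app = [[200, 310, 18, 1.4, 5]]   # feature vector of an unseen application
    print(prr_model.predict(new_app), core_model.predict(new_app))

With many more training rows, the same pattern generalizes to held-out applications, which is exactly the behavior required of the classification model described above.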
3. RELATED WORK
The use of both machine-learning and data-mining methods, as proposed in this work, represents a new direction for reconfigurable-computing research. In contrast, it has already become a fast-growing research area in physical design. Applications include predicting defects in silicon wafers [7], identifying speed paths in processors as guides for performance improvement [8], and design exploration for high-level synthesis [9]. A notable effort in the area of CAD for ASICs is PADE [10], a new ASIC placement flow which employs machine-learning and data mining methods to predict and evaluate potential data-paths using high-dimensional data from the original design's net-list. PADE achieves 7% to 12% improvements in solution quality compared to the state-of-the-art. A summary of other successful applications of data-mining driven prediction to problems in the area of physical design can be found in [11].
Figure 5. Supervised learning steps
There seems to be an abundance of research work in the literature that covers the concept of management of reconfigurable computing systems. Most of the previous work mainly concentrates on the development of Operating Systems and managers along with the necessary modules such as schedulers and placers. Only very few articles discuss the concept of resource estimation and the utilization of machine learning techniques for predicting the necessary resources for dynamic reconfigurable computing systems. [1] presented an automated tool to support dynamic reconfiguration for high performance regular expression searching. The author presented a method to quickly and accurately estimate the resource requirements of a given set of regular expressions. However, this work is limited since it applies only to regular expression searching. Also, no prediction nor learning from past history is applicable in this approach.
In [12] the authors investigate the use of advanced meta-heuristic techniques along with machine learning to automate the optimization of a reconfigurable application parameter set. The approach proposed in [12] is called the Machine Learning Optimizer (MLO), and involves a Particle Swarm Optimization (PSO) methodology along with an underlying surrogate fitness function model based on Support Vector Machines (SVM) and a Gaussian Process (GP). Their approach is mainly used to save time on analysis and application specific tool development. Our work is completely different than MLO in the sense that our framework predicts the necessary resources and floor-plan in Dynamic Reconfigurable Systems to optimize the execution of benchmarks as they arrive for processing.
The authors in [13] propose a fast, a priori estimation of resources during the system level design for FPGAs and ASICs targeting FIR/IFR filters. The prediction was based on Neural Networks. The type of resources they targeted included area, maximum frequency and dynamic power consumption. However, the work is very limited and is not applicable to Dynamic Run Time Reconfiguration applications.
An on-line predictor for a dynamic reconfigurable system is proposed in [14] to reduce reconfiguration overhead by pre-fetching hardware modules. The proposed algorithm uses a piecewise linear predictor to find correlation and load hardware modules a priori. This work tries to optimize the use of fixed resources, while our work plays a role at a much higher level, and seeks to predict the necessary resources.
The work in [15] proposed a dynamic learning data mining technique for failure prediction of high performance computing. The main contribution is to dynamically increase the training set during the system operation, which helps in predicting failures at early deployment. The work in [15] is fundamentally different from our work, since it does not predict resources based on an intelligent machine learning approach.
A multi-objective design space exploration tool is proposed in [16] to enable resource management for the Molen reconfigurable architecture. The proposed approach analyzes an application source code and, using heuristic techniques, determines a set of hardware/software candidate configurations (sub-tasks). The resource manager then uses these candidates to exploit the available system resources more efficiently. Their work tends to optimize the subtasks of the application to fit a fixed pre-determined platform, while our work predicts a suitable platform for a given application. Their work targeted a specific platform (Molen) and does not take partial reconfiguration into account.
In [17] the authors proposed an algorithm for Programmable Reconfigurable (PR) module generation. The proposed technique can be integrated in a manual design flow, to automate the generation of PR partitions and modules. The authors formulated the PR module generation problem as a standard Maximum-Weight Independent Set Problem [18]. Their design supports multiple objectives such as
reconfiguration overhead and area with different constraints. This work is different than our work in many aspects. For example, the techniques proposed in their paper do not use machine learning nor do they learn from previous results; moreover, it is limited to the generation of PR partitions.
The closest published research to our work can be found in [19], [20], [21] and [22]. In [19], the authors present a high level prediction modeling technique that produces prediction models for miscellaneous platforms, tool chains and application domains. The framework proposed in that paper uses linear regression and neural networks to accurately capture the relation between hardware and software metrics.
The framework takes an ANSI-C description as input and estimates various FPGA-related measures, such as area, frequency, or latency. However, our approach is totally different in the following aspects: (a) it does not predict hardware resource consumption but predicts the most optimal layout (PRRs, Soft Cores) that would best execute a certain application such that power is reduced and performance is enhanced, (b) our framework is associated with an Operating System for Dynamic Reconfigurable systems, (c) our framework targets dynamic reconfigurable designs and not static designs as in Quipu.
The work in [20] presents a framework consisting of two layers for resource management of dynamic reconfigurable platforms. The proposed system is capable of evaluating the performance of a reconfigurable computing platform based on a prediction model. The framework is applied to an artificial vision case study. However, that paper does not seek to predict either the layout or the suitable resources required for the application.
.
Th
e re
source
s are in fact fixed and the
ma
in
task
o
f
th
e run-ti
m
e
reso
urce
manager (RR
M
) is to all
o
cate the be
st com
putati
o
n
a
l r
e
sour
ce (sof
tw
ar
e or
h
a
rd
w
a
r
e
) b
a
sed on
the
appl
i
cat
i
o
n. T
h
e a
p
p
r
oac
h
i
n
[
?
,
R
u
nTi
m
eOpt
-FPL
2
0
1
3
]
s di
ffe
rent
si
nce t
h
e
ap
pl
i
cat
i
on l
e
vel
de
ci
si
o
n
m
a
ki
ng
ru
ns a
gree
dy
o
p
t
i
m
i
z
at
i
on
whi
c
h i
s
com
put
at
i
onal
l
y
expe
nsi
v
e
t
o
fi
n
d
t
h
e
best
m
a
ppi
n
g
t
h
at
r
e
t
u
r
n
s
t
h
e m
a
xim
u
m
pe
rf
orm
a
nce.
In our approach we use a smart supervised learning approach to efficiently predict the best layout that would maximize the performance and reduce power consumption.
In [21] the authors propose an online adaptive algorithm that decides the best implementation to be used for the execution of an instance based on features of the process and history of execution. The work tends to improve the hardware/software partitioning task by avoiding predetermined execution times and concentrating on run-time decisions based on system execution history. This work does not use any statistical or machine learning technique to predict resources or the floor-plan of the reconfigurable system.
The work in [22] proposes a decision making support framework called DRuid which utilizes machine learning and a meta-heuristic (combining Genetic Algorithms and Random Forest) to extract and learn characteristics that make certain functionality of applications more suitable for a certain computing technology. Starting from a 'C' implementation, the framework either selects the best computational element that can accelerate the functionality or offers suggestions on code transformations that can be applied. The expert system identifies 88.9% of the time the functionalities that are efficiently accelerated by using the FPGA. This work, however, is different from our proposed work since it does not predict the most suitable floor-plan nor layout of the reconfigurable computing platform for a specific application, but predicts the functionality of the application that can be accelerated efficiently by an FPGA.
4. METHODOLOGY
The ability to determine and predict the hardware resources required by an application in a dynamic, run time reconfigurable environment can significantly improve the system's overall performance in different ways. However, optimizing the necessary hardware resources for a given task graph in real time is an NP-hard problem [23]. Therefore we are proposing a machine learning based technique to estimate and predict the needed resources. Given a new application, our proposed framework employs a database, a simulator and machine learning algorithms to construct a model capable of intelligently estimating the resources required to optimize various objectives such as run-time, area, power consumption and cost. The proposed approach uses previous knowledge and features extracted from benchmarks to create a supervised machine-learning model that is capable of predicting the estimated necessary resources. Figure 6 illustrates the proposed framework and flow used in this work. The framework consists of three main phases: data preparation, training, and testing (classification/prediction). The data preparation phase uses both synthetic and real-life benchmarks, a simulator and a database, as shown in Figure 6-A. In this phase, benchmarks are generated and evaluated in terms of the power consumed and execution time using a simulator, as will be explained in Section (4.1.).
Figure 6. Overall methodology and flow
All necessary features that will be used to train the machine learning algorithms are extracted from both the generated data flow graphs and the metrics provided by the simulator. In the training phase (shown in Figure 6-B) the framework utilizes the features extracted from the previous stage to train and create a model that learns from previous historic information. This model extracts useful hidden knowledge (i.e., generalizes) from the data and estimates/predicts resources of the reconfigurable computing system. The third and final testing/prediction phase utilizes the developed model, given new unseen task graphs, to predict the necessary resources as seen in Figure 6-C. Each of these phases is explained in more detail in the following section.
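The three phases can be thought of as a simple pipeline. The skeleton below is a hypothetical outline of that flow (function names and signatures are assumptions, not the actual framework code): data preparation produces a table of records, training labels each DFG with its best-performing layout and fits a classifier, and the prediction phase applies the trained model to an unseen task graph.

    def prepare_data(dfgs, layouts, schedulers, simulate, extract_features):
        """Phase A: evaluate each DFG under each hardware setting and collect records."""
        records = []
        for i, dfg in enumerate(dfgs):
            feats = extract_features(dfg)             # nodes, edges, critical path, ...
            for layout in layouts:
                for sched in schedulers:
                    power, exec_time = simulate(dfg, layout, sched)
                    records.append({"dfg_id": i, "features": feats, "layout": layout,
                                    "scheduler": sched, "power": power,
                                    "exec_time": exec_time})
        return records

    def train_model(records, fit_classifier):
        """Phase B: label each DFG with its best-performing layout and fit a classifier."""
        best = {}
        for r in records:                             # pick minimum execution time per DFG;
            k = r["dfg_id"]                           # power could be optimized instead
            if k not in best or r["exec_time"] < best[k]["exec_time"]:
                best[k] = r
        X = [list(r["features"].values()) for r in best.values()]
        y = [r["layout"] for r in best.values()]
        return fit_classifier(X, y)

    def predict_layout(model, new_dfg, extract_features):
        """Phase C: predict a suitable floor-plan for an unseen task graph."""
        return model.predict([list(extract_features(new_dfg).values())])[0]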
4.1. Data Preparation Stage
In this stage a tabular database suitable for the data-mining engine is constructed. Each database record corresponds to a Data-Flow Graph (DFG) along with its features or attributes that are necessary for the training stage. The DFGs can either be synthesized or taken from real-life applications. The synthesized DFGs are evaluated under different hardware set-ups. The evaluation can either be performed using a dedicated hardware platform or a simulator. The latter allows for faster evaluation and flexibility. Each DFG is evaluated using different hardware scenarios, by varying the number of processing elements (PRRs, GPPs), the size/shape of PRRs, and the schedulers. The system performance metrics (power consumption, execution time) for each case are then recorded accordingly. Several features from each DFG, such as number of nodes, dependencies, fan-out, critical path, slack, sharable resources and many more, are extracted (see Table 2). These features are treated as individual measurable attributes that are effectively exploited in a supervised learning task such as classification.
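To make the feature-extraction step concrete, the following sketch computes a few of the attributes named above (number of nodes, dependencies, fan-out, critical path, subgraphs) from a DFG represented as a directed acyclic graph. It uses the networkx library and is an illustrative approximation only; the exact feature set used by the framework is the one defined in Table 2.

    import networkx as nx

    def extract_dfg_features(dfg: nx.DiGraph) -> dict:
        """Compute a feature vector for one data-flow graph (assumed acyclic)."""
        num_nodes = dfg.number_of_nodes()
        num_edges = dfg.number_of_edges()          # dependencies between operations
        fanouts = [d for _, d in dfg.out_degree()]
        # Critical path measured in operations along the longest chain; a weighted
        # variant could use per-node execution time instead.
        critical_path = nx.dag_longest_path_length(dfg) + 1 if num_nodes else 0
        num_subgraphs = nx.number_weakly_connected_components(dfg)
        return {
            "nodes": num_nodes,
            "edges": num_edges,
            "edges_per_node": num_edges / num_nodes if num_nodes else 0.0,
            "max_fanout": max(fanouts, default=0),
            "avg_fanout": sum(fanouts) / num_nodes if num_nodes else 0.0,
            "critical_path": critical_path,
            "subgraphs": num_subgraphs,
        }

    # Example: a 4-node diamond-shaped DFG
    g = nx.DiGraph([(0, 1), (0, 2), (1, 3), (2, 3)])
    print(extract_dfg_features(g))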
The necessary modules to construct the database are:
1. Task Graph Generator: The first step in our methodology involves using a task-graph generator to automatically synthesize a large number of input tasks. These input tasks are used later to train and test the proposed predictors. Each input task is represented as a DFG, where the nodes in the DFG represent particular operations to be scheduled and assigned either to hardware or software. The edges, on the other hand, represent the normal dependencies and flow between operations. The data-flow graphs are generated using a similar approach to that proposed in [24]. Each DFG has randomly generated parameters, as shown in Table 1. The probability distribution of DFG parameters is currently based on a uniform distribution, but any other distribution, including a distribution based on real-world applications, can be employed. A total of 258 data-flow graphs are generated at this step in the flow.
Table 1. Data Flow Graphs: Statistics
Parameter        Range      Note
Nodes            5 - 1000   # of nodes in a DFG
Edges            0 - 1863   # of edges (dependencies)
Edges per Node   0 - 2      Average # of edges per node
Subgraphs        1 - 968    # of subgraphs in the task
Task types       3 - 16     # of task types
HW tasks area    –          Avg/Min/Max/..
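A random generator in the spirit of the description above and the ranges in Table 1 can be sketched as follows. The routine draws the number of nodes and a per-node edge budget uniformly and wires edges only from earlier to later nodes so the result stays acyclic; the generation logic itself is an assumption made for illustration and is not the generator of [24].

    import random
    import networkx as nx

    def generate_random_dfg(seed=None) -> nx.DiGraph:
        """Synthesize one acyclic data-flow graph with parameters drawn from Table 1 ranges."""
        rng = random.Random(seed)
        num_nodes = rng.randint(5, 1000)
        max_edges_per_node = rng.randint(0, 2)      # controls dependency density
        num_task_types = rng.randint(3, 16)

        g = nx.DiGraph()
        for n in range(num_nodes):
            g.add_node(n, task_type=rng.randrange(num_task_types))
        for n in range(1, num_nodes):
            # Connect each node only to earlier nodes, which guarantees a DAG.
            for _ in range(rng.randint(0, max_edges_per_node)):
                g.add_edge(rng.randrange(n), n)
        return g

    dfgs = [generate_random_dfg(seed=i) for i in range(258)]   # 258 graphs, as in the paper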
2. Simulator (Evaluation): The evaluation of the performance of all DFGs based on different hardware configurations is determined before the features and attributes of each DFG are stored in the database. In our previous work [5] a complete hardware reconfigurable system was designed, mapped and evaluated based on a Xilinx Virtex-6 FPGA board. This platform was initially used for evaluating DFGs. However, using such a platform to evaluate hundreds of DFGs based on different hardware configurations is both complex and tedious. The FPGA platform to be used in this work requires a different floor-plan and bit-stream for each new configuration, which limited the scope of testing and evaluation.
Accordingly, a reconfigurable architecture simulator was developed to simulate the hardware platform discussed in [5], while running the developed reconfigurable operating system. This simulator utilizes three schedulers and supports any number of PRRs and/or GPPs. The simulator is available under the open source GPL license on GitHub [25]. The simulator is developed to emulate the hardware platform and expects the following configuration files as input:
• A Task Library file which stores task information used by the simulator. Task information includes the mode of operation (software, hardware or hybrid), execution time, area, reconfiguration time, reconfiguration power and dynamic power consumption (hybrid tasks can migrate between hardware and software). Some of these values are based on analytic models found in [26], [27], while others are measured manually from actual implementations on a Xilinx Virtex-6 platform.
• A Layout file which specifies the FPGA floor-plan. The layout includes the size, shape, and number of PRRs along with types and number of GPPs. The FPGA fabric is first partitioned uniformly, then partitioned with a 50% increase in size, while varying the number of PRRs. This results in different layouts for the same number of PRRs.
• A DFG file which stores the data flow graphs to be scheduled and executed.
• A Configuration file which stores the general settings for the simulator such as the scheduler to be used, resource locations and other parameters.
In order to have a diverse database we evaluated every DFG with various hardware settings and then recorded the evaluation (performance) metrics. The two most important evaluation metrics used were total execution time and total power consumption for a given DFG. Each DFG was evaluated with different PRR numbers and PRR sizes, various numbers of GPPs, and schedulers. This resulted in a database containing, on average, several hundred records per DFG.
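Putting the pieces together, each simulator run contributes one row to the tabular database: the DFG features plus the hardware setting and the two recorded metrics. A minimal sketch of how such rows might be accumulated is shown below; the column names and the `simulate` call are placeholders, not the simulator's real interface.

    import csv
    import itertools

    def build_database(dfgs, prr_counts, prr_sizes, gpp_counts, schedulers,
                       extract_features, simulate, out_path="dfg_database.csv"):
        """Evaluate every DFG under every hardware setting and record one row per run."""
        with open(out_path, "w", newline="") as f:
            writer = None
            for i, dfg in enumerate(dfgs):
                feats = extract_features(dfg)
                for prrs, size, gpps, sched in itertools.product(
                        prr_counts, prr_sizes, gpp_counts, schedulers):
                    power, exec_time = simulate(dfg, prrs, size, gpps, sched)
                    row = {"dfg_id": i, **feats, "num_prrs": prrs, "prr_size": size,
                           "num_gpps": gpps, "scheduler": sched,
                           "power": power, "exec_time": exec_time}
                    if writer is None:
                        writer = csv.DictWriter(f, fieldnames=list(row))
                        writer.writeheader()
                    writer.writerow(row)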
3. Schedulers: Three on-line scheduling algorithms were developed that can handle both hardware and software tasks. The algorithms have different task reuse capabilities to minimize reconfiguration overhead. Tasks are capable of migrating between software (GPP) and hardware (PRRs) to minimize total execution time and mitigate resource scarcity issues. The schedulers record system metrics, learn, and accommodate future tasks. The schedulers were first developed and implemented on a hardware platform, and then incorporated into the simulator that is shown in Figure 6. The schedulers are described in more detail in [5].