International Journal of Electrical and Computer Engineering (IJECE)
Vol. 10, No. 3, June 2020, pp. 3216~3226
ISSN: 2088-8708, DOI: 10.11591/ijece.v10i3.pp3216-3226
Journal homepage: http://ijece.iaescore.com/index.php/IJECE
An efficient resource sharing technique for multi-tenant databases

Pallavi G. B.1, P. Jayarekha2
1Department of Computer Science and Engineering, BMS College of Engineering, India
2Department of Information Science and Engineering, BMS College of Engineering, India
Article Info

Article history:
Received Mar 27, 2019
Revised Nov 9, 2019
Accepted Nov 23, 2019

ABSTRACT
Multi-tenancy is a key component of the Software as a Service (SaaS) paradigm. Multi-tenant software has gained a lot of attention in academics, research and the business arena. It provides scalability and economic benefits for both cloud service providers and tenants by sharing the same resources and infrastructure, in isolation, across shared databases, network and computing resources, with Service Level Agreement (SLA) compliance. In a multitenant scenario, active tenants compete for resources in order to access the database. If one tenant blocks up the resources, the performance of all the other tenants may be restricted and fair sharing of the resources may be compromised. The performance of tenants must not be affected by the resource-intensive activities and volatile workloads of other tenants. Moreover, the prime goal of providers is to accomplish a low cost of operation while satisfying the specific schemas/SLAs of each tenant. Consequently, there is a need to design and develop effective and dynamic resource sharing algorithms which can handle the above-mentioned issues. This work presents a model referred to as the Multi-Tenant Dynamic Resource Scheduling Model (MTDRSM), embracing a query classification and worker sorting technique enabling efficient and dynamic resource sharing among tenants. The experiments show significant performance improvement over the existing model.
Keywords:
Cloud computing
Multi-tenancy
Resource management
SLA
TPC-C
Copyright © 2020 Institute of Advanced Engineering and Science.
All rights reserved.
Corresponding Author:
Pallavi G. B.,
Department of Computer Science and Engineering,
BMS College of Engineering, VTU, Bangalore, India.
Email: pallavi.cse@bmsce.ac.in
1. INTRODUCTION
Cloud computing is currently an emerging and most promising technology, on which varied research has been carried out by various communities [1]. It has been adopted by various organizations and IT industries to build and deploy custom-made applications in various fields such as genetic science, healthcare and so on. Cloud technology is driven by economies of scale, providing a large-scale distributed computing infrastructure in which resources such as computing power, storage, platforms etc. and services are provided on demand through the internet [2]. The services offered by cloud technologies are broadly classified into three categories: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). While IaaS providers offer various hardware computational needs, PaaS providers offer the frameworks and programming languages required to develop software/applications, and SaaS providers offer a full-fledged, ready-to-use application as a service. SaaS is an attractive offer for software companies, as they can use various applications without the need to purchase and maintain them on their own infrastructure. Also, the service provider achieves full economy of scale by hosting such SaaS applications using a multitenant model, where a tenant refers to an organization/company.
Multitenancy is one of the key concerns in SaaS. It refers to a principle in software architecture, which is the ability of a SaaS application to serve multiple tenants using a single service instance. Multitenancy invariably occurs at the database layer of the SaaS application [3], referred to as a Multi-tenant Data Management System (MTDBMS), where multiple tenants are consolidated onto the data-tier resource while at the same time being isolated from one another as if they were running on physically segregated resources. Many organizations export their data to third-party MTDBMSs in order to provision data management services. An MTDBMS may isolate tenants in a shared database system by dedicated databases (shared machine approach), by shared databases and separate tables or schemas (shared process approach), or by an association of each data set in a shared table with the appropriate tenant (shared table approach) [4]. Identification of the records for a particular tenant is done based on a unique tenant id [5].
However, one of the major challenges posed by multitenant applications is effective utilization of resources [6]. Each tenant is statically assigned an equal amount of resources. This may lead to inefficient utilization of resources when there are fewer or more query loads on the databases than expected, and is therefore undesirable in a multitenant system. Moreover, service providers must also meet the criteria of the Service Level Agreement (SLA) [7] of tenants. There are several dire consequences for both tenant and service provider, such as inefficiency in data centre operation and revenue, limited cloud applicability and unpredictable application performance [8]. However, these issues are beyond the scope of this paper.
In a state-of-the-art single-tenant database system, the two aspects of performance analysis are the server hardware for operating the database and the workload. However, with multi-tenancy, since different tenants access the same database at different rates, workloads and complexities, vendors need to keep a check on the performance attainment of each tenant. As a result, optimal resource utilization becomes a key requirement for the service providers. This paper explores a resource management architecture composed of an architecture and scheduling strategies to address multitenancy issues, particularly sharing of resources among tenants in order to compute intensive queries and scalability for workflow execution.
To provide scalability, the MT-DBMS should run on low-cost commodity hardware and scale out to many servers to provide service to large numbers of consumers.
Workflow scheduling is a process of identifying and managing the execution of certain tasks on a distributed network. It allocates a certain amount of appropriate resources to a task and completes the task within the user's defined deadline or objective time. Developing an efficient scheduling model will aid in improving the overall system performance. Scheduling distributed tasks is considered to be an NP-hard problem [9]; as a result, no optimal solution can be found within polynomial time.
To achieve near-optimal scheduling, many heuristic scheduling techniques have been presented. However, these techniques are not suitable for scheduling workflows in multi-tenant cloud computing environments.
To address this issue, the authors of [10] presented an efficient workflow scheduling where a proof-of-concept experiment with real-world scientific workflow applications was performed to demonstrate the scalability of the scheduling algorithm, which verifies the effectiveness of the proposed solution. However, they did not consider the impact of resource failure and the dynamic SLA requirements of tenants.
Moreover, an efficient resource allocation and load balancing technique is required, because there is uncertainty in resources and load, which change over time. Requests for resources change over time, and the resources themselves undergo several changes (i.e. a resource can join or leave a network). These dynamic uncertainties might lead to performance bottlenecks.
This work presents a dynamic scheduling technique for a Multi-Tenant SaaS cloud environment overcoming the above challenges. Firstly, the architecture of the proposed Multi-Tenant Database System is presented. Secondly, for dynamic scheduling, the query (load) and resource information is collected according to Memory, I/O and CPU. Thirdly, the queries and resources are divided into three queues according to Memory, I/O and CPU intensiveness. Lastly, the scheduler utilizes the overall resources available and schedules to the resources with lighter loads. This aids in balancing the load and making full use of idle instances.
The paper is organized as follows: In section 2, a study of related work is carried out. A simple multitenant database architecture and the related algorithms and flowcharts are discussed in section 3. The experimental setup and results are discussed in section 4. Finally, section 5 concludes the paper.
2. RESEARCH METHODOLOGY
The issues pertaining to scheduling tasks on multiple workers have been widely studied in distributed, parallel, grid and cluster computing, and in recent years the same kind of study has been carried out considering virtual workers in cloud environments. The techniques adopted by these models differ in the characteristics of workload, resources, performance metrics and scheduling in multi-agent architectures [11]. All these methodologies are designed based on Heuristic Algorithms, Meta-Heuristic Algorithms, Scientific Workflows Execution, Deadline-aware Scheduling and Multi-tenant SaaS Applications, which are extensively researched in the presented work.
Heuristic Algorithm: Many existing approaches have considered heuristic methods for clustering, task duplication and scheduling. A few examples are: In [12] Jing-Chiou Liou et al. presented a task clustering algorithm with no duplication, namely CASS-II. They compared their algorithm with DSC in terms of both speed and solution quality. In [13] R. Bajaj and D. P. Agrawal presented a task duplication based scheduling mechanism for heterogeneous networks (TANH). In [14], a Heterogeneous Earliest Finish Time (HEFT) scheduling technique for single workflows was presented by H. Topcuoglu, S. Hariri, and M. Y. Wu, and in [15] H. M. Fard et al. presented a multi-objective heuristic scheduling for grid and cloud environments. However, these models are not suitable for the multi-tenant cloud environment, due to unpredictable performance (throughput), since some tenants may opt for best-effort behavior [16] and some may prefer performance isolation.
Meta-Heuristic Algorithm: To minimize workflow execution cost in a cloud environment, the authors in [17, 18] adopted a particle swarm optimization (PSO) based scheduling technique, and in [19] an optimization of genetic algorithm (GA), Ant colony optimization (ACO) and PSO was implemented. In [20] H. M. Fard et al. implemented a dynamic scheduling and pricing model for single queries on multi-cloud platforms and compared it with traditional multi-objective evolutionary algorithms, i.e., NSGA-II and SPEA2. These entire models are designed to optimize in grid environments and induce computing overhead. Hence these models are not suitable for large workflow applications.
Scientific Workflows Execution: In [21] the authors studied the performance and cost involved in computing in a public cloud environment. They showed that Amazon EC2 is not suitable for I/O intensive applications (NASA HPC cluster) due to the lack of a parallel heterogeneous computing platform. To improve system performance, the authors of [22] presented locality-aware scheduling. However, evaluation on dynamic real-world workloads was not carried out. Similarly, D. Yuan et al. in [23] presented a data placement strategy in scientific cloud workflows by adopting k-means clustering.
Deadline-aware Scheduling: The authors of [24] studied dynamic resource allocation for adaptive applications on cloud platforms. They adopted a Q-learning based learning model to meet the user-defined deadline for a particular application requirement. A grid based scheduling model for a deadline-constrained weather forecasting system and a heuristic model to meet deadlines for scientific application workflows have been presented in [25, 26] respectively. In [27] S. Abrishami et al. presented scheduling strategies for single workflow instances on the IaaS cloud platform. However, none of these models considered a multi-tenant cloud environment.
Multi-tenant SaaS Applications: Many approaches have been presented for multi-tenant SaaS applications. A two-tier multitenant architecture has been presented in [28]. A model to determine the optimal allocation policy and a resource allocation model for SaaS applications have been presented in [29, 30] respectively. In [31] S. Walraven et al. presented an adaptive middleware design for efficient multi-tenant SaaS applications. The authors in [32] highlighted the problem of the traditional CPU sharing approach for the Database as a Service (DAAS) scenario and proposed an effective and efficient CPU sharing technique. They focused on fine-grained reservation of CPU without static allocation. The work also supports on-demand resource availability. However, sharing of the CPU reduces the system cost, but at the same time it reduces the system performance as well.
In [33] Vivek Narasayya et al. proposed a reservation technique called SQLVM for key resources in a database system such as CPU, I/O, and memory. The authors claim that, unlike a traditional VM, a SQLVM is much more lightweight, as its only goal is to provide resource isolation across tenants. In [34], Ying Hua Zhou et al. introduced a DB2 MMT (massive multi-tenant database platform) high-level architecture. The authors addressed key technical challenges, including resource, tenant and offering management, monitoring, scalability and security. They compared the economics of DB2 MMT and the traditional solution with precise data showing acceptable performance.
To conclude, the extensive survey and study of related work showcases that scheduling and load balancing play an important role in improving the performance of multi-tenant cloud architectures. Many approaches adopt various heuristic, meta-heuristic, clustering and optimization techniques for user query and resource classification. All these approaches are time-consuming processes, induce computation overhead and may not be applicable for dynamic workflow provisioning. To overcome these challenges, we present an efficient scheduling technique for multi-tenant cloud architecture that fully utilizes the system resources with SLA guarantees.
3. ARCHITECTURE OF MULTI-TENANT DATABASE SYSTEM
3.1. Modelling of multi-tenant system
An overall architecture of the Multi-tenant database system is presented in Figure 1. The Tenant Manager maintains the service level agreements received from the tenants. These SLA-based tenant requirements are considered for designing a multitenant system and maintaining the system QoS (Latency).
The other input to the Tenant Manager is the tenant configuration file, where tenant-specific settings are established. Tenants request task execution or database access. The Tenant Manager checks the load and schedules the tenant as per the availability of the workers, based on the SLA constraint of the corresponding tenant. Workers execute the task. The DB connector is used for establishing the connection between the database server and the Tenant Manager. The type of database sharing approach used is the schema-based multi-tenancy approach. A dynamic resource scheduling system for assigning jobs is introduced in the next section.
Figure 1. Architecture of multi-tenant database system
3.2. Multi-tenant dynamic resource scheduling model
The objective of the proposed dynamic resource scheduling system is that the Memory, I/O and CPU usage do not conflict with each other, in order to improve scheduling performance and utilize resources efficiently. Let's consider a case where some query execution requires less I/O or Memory resources, but might require higher CPU resources to complete the task. This scenario can be effectively solved by the proposed dynamic scheduling system, and moreover an effective load balancing approach aids in better utilization of idle instances. The scheduling system comprises three modules:
- Tenant Task Manager (TTM)
- Global Tenant Manager (GTM)
- Dynamic Scheduler
3.2.1. System model
The architecture of the system framework is presented in Figure 2. The Tenant Task Manager (TTM) manages the tasks/queries requested by the tenants. Simultaneously, it also processes these requests. The processed requests are further divided into separate queues based on the tenant requirement of Memory, I/O and CPU for computation or searching of data. Meanwhile, the Local Worker Manager (LWM) monitors the worker load and updates the information to the Global Tenant Manager (GTM). The GTM sorts the available workers based on CPU, I/O and Memory for processing tasks. The Dynamic Scheduler works between the Tenant Task Manager and the Global Tenant Manager. The scheduler takes the requested task queue from the Tenant Task Manager and the information from the Global Tenant Manager, and schedules the task based on the best compatible value for both the Global Tenant Manager and the Tenant Task Manager.
3.2.2. System parameters
Let us consider tenants, workers and a number of query requests. Workers are represented as W = {W1, W2, W3, …, Wx, …, WH} and queries are represented as Q = {Q1, Q2, Q3, …, Qy, …, QM}. The workers in the cloud environment represent a set of virtual machines, which are threads in our experiments. Each thread's computing capability is defined by its parameters such as Memory, I/O and CPU (i.e. Lx = (Rx, Dx, Vx), where Rx defines Memory usage, Dx defines I/O waiting time and Vx defines CPU utilization respectively). The GTM periodically collects and updates this information from the LWM.
Figure 2. Architecture of multi-tenant dynamic resource scheduling model
3.2.3. Query classifier
Initially, the tenant task manager collects the tenant-submitted query along with the resource information required to process the query and the SLA requirement. The query specifies the query size Sy, the required CPU Vy, memory Ry, and the time required for the execution By. [This information is obtained from the config file for each tenant, shown in Figure 1.] The information is gathered in order to cater for queries demanding diversified resources. Henceforth, a query requested by tenant y is represented as Qy = (Ry, Vy, Sy, By). The TTM further determines the I/O required as:

Dy = Sy / Vy (1)
The I/O usage is directly dependent on the query size and CPU capability and is therefore computed by the ratio of Sy and Vy. Further, the received queries are classified and queued up. In order to classify a received query, the cloud resource parameters (system parameters in our case) Rk, Dk and Vk of Memory, I/O and CPU are defined. Then, for each query Qy, the weights of R, D and V are computed from its values Ry, Dy and Vy and from Rk, Dk and Vk.
The maximum of these three weights is considered as the query group Qyg. If Qyg = D, the query is portioned into the queue of I/O intensive; if Qyg = V, the query is portioned into the queue of CPU intensive, and so on. In the proposed model the queries in these three queues sum to the total number of queries (i.e. each one of the three queues makes up only one part of all queries).

Qyg = max(R, D, V) = max(Ry/Rk, Dy/Dk, Vy/Vk) (2)
Finally, the total M queries, which are partitioned into three queues, are represented as LQD, LQV and LQR of I/O intensive, CPU intensive and Memory intensive respectively, by query group.
LQD = {Qj+1, Qj+2, Qj+3, …, QyD, …, Qj+i} (3)

LQV = {Q1, Q2, Q3, …, QyV, …, Qj} (4)

LQR = {Qj+i+1, Qj+i+2, Qj+i+3, …, QyR, …, QM−j−i} (5)
A detailed diagram is shown in Figure 3.
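The classification rule in (1)-(2) can be sketched as follows; this is a minimal illustration, not the paper's implementation. The reference values Rk, Dk, Vk and the sample queries are illustrative assumptions.

```python
# Sketch of the query classifier: compute Dy from (1), weight each resource
# against the system parameters as in (2), and route the query to the queue
# of its dominant resource ('R' memory, 'D' I/O, 'V' CPU).

def classify_query(Ry, Vy, Sy, Rk, Dk, Vk):
    """Return the group 'R', 'D' or 'V' for query Qy = (Ry, Vy, Sy, By)."""
    Dy = Sy / Vy                                          # equation (1)
    weights = {"R": Ry / Rk, "D": Dy / Dk, "V": Vy / Vk}  # equation (2)
    return max(weights, key=weights.get)                  # Qyg = largest weight

def build_queues(queries, Rk, Dk, Vk):
    """Partition queries {name: (Ry, Vy, Sy)} into the three intensive queues."""
    queues = {"R": [], "D": [], "V": []}
    for name, (Ry, Vy, Sy) in queries.items():
        queues[classify_query(Ry, Vy, Sy, Rk, Dk, Vk)].append(name)
    return queues
```

With Rk=100, Dk=10, Vk=4, a memory-heavy query (Ry=90) lands in the 'R' queue, a large query on a small CPU (Sy=30, Vy=1, so Dy=30) in the 'D' queue, and a CPU-bound query (Vy=4) in the 'V' queue.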
Figure 3. Query classification technique
3.2.4. Worker sorting technique
The workers in the cloud environment consist of a set of virtual machines (threads). Each virtual machine's computing capability is defined by its parameters such as Memory, I/O and CPU (i.e. Lx = (Rx, Dx, Vx)). These parameters define Memory usage, I/O waiting time and CPU utilization. The GTM periodically collects and updates this information from the LWM. The LWM gathers Memory usage, I/O waiting period and CPU utilization information from the local workers either periodically, as defined by the user, or when 50% of the task is completed in a particular thread. The LWM transmits this information to the GTM. Next, the GTM sorts these workers from small to large considering Memory, I/O and CPU resources, and forms the queues LR, LD and LV respectively, i.e., LR holds the workers in the increasing order of their memory capacity, LD holds the workers in the increasing order of their I/O capacity and LV holds the same workers in the increasing order of CPU available, respectively.
LR = {W1, W2, W3, …, WyR, …, WH} (6)

LD = {W1, W2, W3, …, WyD, …, WH} (7)

LV = {W1, W2, W3, …, WyV, …, WH} (8)
All the workers are sorted rather than classified, due to size and resource dynamics. As a result, these three queues are composed of workers with all the resources, unlike the query queues. Consequently, the proposed model comprises two types of queues: the query queues representing Memory, I/O and CPU intensive queries, and the worker queues, which are formed by sorting Memory, I/O and CPU load from small to big.
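The GTM sorting step of (6)-(8) can be sketched as below; the same workers appear in all three queues, ordered by one resource each. The load tuples are illustrative assumptions, not measurements from the paper.

```python
# Sketch of worker sorting: given per-worker load Lx = (Rx, Dx, Vx), build
# three queues ordered small-to-large by memory, I/O and CPU respectively.

def sort_workers(workers):
    """workers: {name: (Rx, Dx, Vx)} -> queues LR, LD, LV (equations (6)-(8))."""
    LR = sorted(workers, key=lambda w: workers[w][0])  # by memory usage Rx
    LD = sorted(workers, key=lambda w: workers[w][1])  # by I/O waiting time Dx
    LV = sorted(workers, key=lambda w: workers[w][2])  # by CPU utilization Vx
    return LR, LD, LV
```

The front of each queue is the worker with the most headroom for that resource, which is what the scheduler draws from first.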
3.2.5. Dynamic scheduling approach
Lastly, the scheduler assigns queries (based on their type, weight and SLA) from the queues of the Tenant Manager to the workers sorted by the GTM. I.e., based on the weight (CPU) assigned to a query, say q1, a high or low CPU utilization worker is allocated. If a query has less weight, then it is assigned a worker with less processing power, and for a higher weight query a worker with high processing power is assigned. A query q2 (memory or I/O intensive), in accordance with its weight, can be assigned to a worker which is already executing another query if it has enough resources to handle the query and the SLA of the query is also met. If either condition fails, a new worker is assigned to query q2. Besides, for maximizing resource utilization, the query is assigned to a worker with less load (i.e. assigning the query corresponding to its type and load). For example, the Memory, I/O and CPU intensive queries are assigned to workers with low Memory, I/O and CPU usage respectively. Moreover, the scheduler will assign each query from each queue to a different available worker for simultaneous execution. This aids in reducing the load and enhances system efficiency.
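The matching step can be sketched as below: each query is drawn from its type queue and handed to the least-loaded worker for that resource (the front of the corresponding sorted worker queue). This is a simplified illustration of the described approach, without the SLA and co-location checks; queue contents are illustrative assumptions.

```python
# Minimal sketch of the scheduler's type-and-load matching: for each resource
# group, pop the lightest-loaded worker for each waiting query so that queries
# from the three queues run on different workers simultaneously.

def schedule(query_queues, worker_queues):
    """query_queues / worker_queues: dicts keyed by 'R', 'D', 'V'."""
    assignment = {}
    for group in ("R", "D", "V"):
        free = list(worker_queues[group])     # workers sorted small-to-large load
        for query in query_queues[group]:
            if not free:                      # no idle worker left for this group
                break
            assignment[query] = free.pop(0)   # lightest-loaded worker first
    return assignment
```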
3.2.6. Dynamic scheduling adaptivity method
If the number of workers is more than the requested number of queries, then based on the requirement the scheduler will assign the queries to the workers while maintaining the load. However, if the requested number of queries is more than the available workers, then the queries need to be assigned in groups, as shown in Figure 4. It makes one batch of queries from the sub-queries and queues it as g = M/G, where G represents the number of queues created. The remaining M − g queries will be considered in the next group. If M − g > H then the process of grouping the queries is continued; otherwise workers are assigned to queries on a regular basis. This process is repeated until the execution of the last query. In this approach each worker is assigned one task, and the usages of CPU, memory and IO are all maintained. Tenant query execution is also faster, yielding high system performance and throughput.
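The grouping rule can be sketched as follows: while the backlog of M queries exceeds the H available workers, a batch of size g = M/G is peeled off (G being the number of queues, three here), and the remainder is scheduled normally. This is an illustrative reading of the rule, with a guard added so the batch size never drops to zero.

```python
# Sketch of the adaptivity method's grouping loop: batch queries while the
# backlog exceeds the worker count H, then hand the rest over for regular
# assignment. G = number of queues (three in the proposed model).

def make_batches(queries, H, G=3):
    batches = []
    remaining = list(queries)
    while len(remaining) > H:                # more queries than workers: group
        g = max(1, len(remaining) // G)      # batch size g = M / G (floor, >= 1)
        batches.append(remaining[:g])
        remaining = remaining[g:]            # M - g queries go to the next round
    return batches, remaining                # remainder assigned on a regular basis
```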
Figure 4. Flow chart of dynamic scheduling adaptivity method
4. EXPERIMENTAL RESULT AND ANALYSIS
We have conducted several experiments to evaluate the performance of the proposed model over the existing MuTeBench approach [4] in terms of latency and throughput (transactions per second). For experimental evaluation the OLTP and YCSB benchmarks are used. The MuTeBench model is designed using a Java framework in which the authors attempted to upgrade OLTP-Bench into a Multi-Tenant Database Benchmark Framework. In the presented work, we have incorporated the proposed Multi-Tenant Dynamic Resource Scheduler Model (MTDRSM) into [4]. We further extended the model of [4] to support workload execution for different benchmarks and multi-tenant workload execution on different databases such as MySQL, Oracle, and H2DB etc. by using the Hibernate framework. The MTDRSM is developed using the JAVA programming language on the Eclipse Neon framework. The system environment used for workload execution is an i5, 3.2 GHz, quad core Intel class processor with 16 GB RAM. We have considered workload execution of the TPCC and YCSB benchmarks on the H2 database. The workload execution is carried out both with and without SLA compliances. Each tenant is given a set
of workers (threads) for workload execution. The number of workers is varied as 10, 20 and 50. The tenant ID is incremented by 3 (i.e., for 10, 20 and 50 workers there are 4, 7 and 17 tenants, respectively), and 6 tenants per execution are considered. Each tenant executes its workload at an unlimited data rate. The OLTP and YCSB workload mix is composed of 25% read-record transactions and 15% for each of the other transaction types.
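One reading of this setup is that every third worker receives a new tenant ID, which reproduces the reported 4, 7 and 17 tenants for 10, 20 and 50 workers. The sketch below (class and method names are ours, not from the paper) illustrates this interpretation:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Illustrative sketch, assuming the tenant ID is advanced every third
// worker: worker i (0-based) belongs to tenant floor(i / 3), so 10, 20
// and 50 workers yield 4, 7 and 17 distinct tenants respectively.
public class TenantAssignment {

    // Tenant ID for a given worker index.
    static int tenantOf(int workerIndex) {
        return workerIndex / 3;
    }

    // Count distinct tenant IDs across all workers.
    static int distinctTenants(int workers) {
        Set<Integer> tenants = new LinkedHashSet<>();
        for (int i = 0; i < workers; i++) {
            tenants.add(tenantOf(i));
        }
        return tenants.size();
    }

    public static void main(String[] args) {
        for (int workers : new int[] {10, 20, 50}) {
            System.out.println(workers + " workers -> "
                    + distinctTenants(workers) + " tenants");
        }
    }
}
```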
4.1. SLA and SLA breach
In the query Q_y = (K_y, V_y, R_y, T_y), the first three parameters represent the query size, the CPU utilization and the memory that the tenant applies to use, and T_y is the SLA breach threshold of the query. These parameters come from the tenant task manager and are submitted by the tenants. If the query Q_y fails to meet the T_y defined by the tenant with its service provider, then the SLA is considered to be breached. The SLA is measured as follows:
The query retrieval time is calculated as

qRetrieval = Σ(q_{y-w} + q_{y-processed}) / H        (9)
where q_{y-w} is the waiting time, q_{y-processed} is the processing or query completion time, and H is the total number of queries. If qRetrieval > T_y, the query's SLA is breached.
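The breach check of equation (9) can be sketched as follows; names are illustrative and the waiting/processing times are made-up sample values:

```java
// Sketch of the SLA breach check of equation (9): the mean query retrieval
// time (waiting time plus processing time, averaged over H queries) is
// compared against the tenant-defined threshold Ty.
public class SlaBreachCheck {

    // qRetrieval = sum(q_{y-w} + q_{y-processed}) / H
    static double queryRetrieval(double[] waitingMs, double[] processedMs) {
        double sum = 0.0;
        int h = waitingMs.length;               // H: total number of queries
        for (int i = 0; i < h; i++) {
            sum += waitingMs[i] + processedMs[i];
        }
        return sum / h;
    }

    // The SLA is breached when qRetrieval exceeds Ty.
    static boolean isBreached(double qRetrieval, double ty) {
        return qRetrieval > ty;
    }

    public static void main(String[] args) {
        double[] wait = {5.0, 10.0, 15.0};      // per-query waiting times (ms)
        double[] proc = {20.0, 25.0, 30.0};     // per-query completion times (ms)
        double q = queryRetrieval(wait, proc);  // (25 + 35 + 45) / 3 = 35.0
        System.out.println("qRetrieval = " + q
                + ", breached(Ty=30): " + isBreached(q, 30.0));
    }
}
```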
4.2. Latency performance evaluation
In Figure 5, the latency performance considering different numbers of workers without SLA compliance is shown. It is seen from the graph that the MTDRSM performs better than MuTeBench in terms of latency performance considering varied workers. The MTDRSM reduces latency by 23.87%, 11.82% and 46.63% considering 10, 20 and 50 workers respectively, over MuTeBench. An average latency reduction of 27.44% is achieved by the MTDRSM over MuTeBench.
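The average figure here is simply the arithmetic mean of the three per-worker reductions, as a quick check confirms:

```java
// Quick arithmetic check: the reported average latency reduction (27.44%)
// is the mean of the per-worker reductions for 10, 20 and 50 workers.
public class AverageReduction {

    static double mean(double... values) {
        double sum = 0.0;
        for (double v : values) {
            sum += v;
        }
        return sum / values.length;
    }

    public static void main(String[] args) {
        double avg = mean(23.87, 11.82, 46.63);
        // (23.87 + 11.82 + 46.63) / 3 = 27.44
        System.out.printf("average latency reduction = %.2f%%%n", avg);
    }
}
```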
Similarly, in Figure 6 the latency performance considering different numbers of workers with SLA compliance is shown. It is seen from the graph that the MTDRSM performs better than MuTeBench in terms of latency performance considering varied workers. The MTDRSM reduces latency by 23.08%, 11.7% and 45.83% considering 10, 20 and 50 workers respectively, over MuTeBench. An average latency reduction of 28.2% is achieved by the MTDRSM over MuTeBench. It is seen from Figure 5 and Figure 6 that provisioning SLA to tenants induces a slight overhead in latency performance.
Figure 5. Average latency achieved for varied workers without SLA

Figure 6. Average latency achieved for varied workers with SLA
4.3. Throughput (transactions per second) performance evaluation
Tables 1 and 2 describe the transaction status without and with SLA, respectively. The transaction status is composed of the following types:
- Completed transaction: shows the transaction completed successfully,
- Aborted transaction: shows the transaction was aborted by the user/system,
- Rejected transaction: shows the transaction was rejected due to wrong information entered during the transaction (i.e., a non-existent account number), and
- Unexpected error: this is due to an unexpected scenario such as the server/network being down.
It is seen from Tables 1 and 2 that the MTDRSM achieves a higher number of transactions per second (TPS) compared to MuTeBench. In Figure 7, the throughput performance considering different numbers of workers without SLA compliance is shown. It is seen from the graph that the MTDRSM performs better than MuTeBench. The MTDRSM improves throughput by 5.87%, 3.03% and 2.63% considering 10, 20 and 50 workers respectively, over MuTeBench. An average throughput improvement of 3.84% is achieved by the MTDRSM over MuTeBench.
Similarly, in Figure 8 the throughput performance considering different numbers of workers with SLA compliance is shown. It is seen from the graph that the MTDRSM performs better than MuTeBench in terms of throughput performance considering varied workers. The MTDRSM improves throughput by 7.24%, 7.4% and 7.1% considering 10, 20 and 50 workers respectively, over MuTeBench. An average throughput improvement of 7.25% is achieved by the MTDRSM over MuTeBench. It is seen from Figure 7 and Figure 8 that provisioning SLA to tenants induces an overhead in the throughput performance of MuTeBench, whereas the MTDRSM remains efficient when provisioning SLA to tenants.
Table 1. Transaction status without SLA

Number of workers | Completed transaction (MuTeBench / MTDRSM) | Aborted transaction (MuTeBench / MTDRSM) | Rejected transaction (MuTeBench / MTDRSM) | Unexpected error (MuTeBench / MTDRSM)
10 | 6711 / 6849 | 10 / 7 | 74911 / 79901 | 31 / 5
20 | 13203 / 13288 | 11 / 8 | 149122 / 154134 | 59 / 45
50 | 10507 / 12873 | 9 / 7 | 148255 / 151225 | 116 / 102
Table 2. Transaction status with SLA

Number of workers | Completed transaction (MuTeBench / MTDRSM) | Aborted transaction (MuTeBench / MTDRSM) | Rejected transaction (MuTeBench / MTDRSM) | Unexpected error (MuTeBench / MTDRSM)
10 | 10799 / 12894 | 12 / 8 | 144222 / 154264 | 37 / 21
20 | 10906 / 13108 | 10 / 8 | 142508 / 152592 | 64 / 50
50 | 11062 / 12960 | 8 / 8 | 140554 / 150260 | 39 / 27
Figure 7. Throughput achieved for varied workers without SLA

Figure 8. Throughput achieved for varied workers with SLA
5. CONCLUSION
Multi-tenant database management on cloud environments has attained huge interest among various organizations due to its scalability and cost benefits. The wide survey carried out shows that the existing scheduling techniques suffer because the underlying scheduling problem is NP-hard. Therefore, an efficient scheduling and load balancing mechanism is required for dynamic resource allocation. Here we presented a query classification and worker sorting technique for dynamic resource allocation and for handling idle instances efficiently. Experiments are conducted to evaluate the performance of the MTDRSM in terms of latency and throughput with and without SLA compliance. The experiments are conducted considering varied tenants, workers and workloads such as the TPCC and YCSB benchmarks. The experimental outcome shows the MTDRSM reduces average latency by 27.44%
and 28.2% over MuTeBench without and with SLA compliance, respectively. The MTDRSM improves average throughput by 3.84% and 7.25% over MuTeBench without and with SLA compliance, respectively. The overall results show that when an SLA is given to a tenant, the MuTeBench model incurs an overhead, which affects its throughput performance and induces latency for the tenant. This shows the efficiency of the MTDRSM model's handling of idle instances. The overall results also show that the MTDRSM can provision SLAs without incurring latency to tenants and performs significantly better than MuTeBench. Provisioning security for database access in a multi-tenant cloud environment is a critical factor in increasing wide adoption. Future work would consider provisioning security for the multi-tenant cloud SaaS environment.
ACKNOWLEDGEMENTS
The authors would like to acknowledge and thank the Technical Education Quality Improvement Program [TEQIP] Phase 3, BMS College of Engineering, Basavanagudi, Bangalore.
REFERENCES
[1] B. P. Rimal and E. Choi, "A service-oriented taxonomical spectrum, cloudy challenges and opportunities of cloud computing," International Journal of Communication Systems, vol. 25, no. 6, pp. 796-819, Jun. 2012.
[2] I. Foster, Y. Zhao, I. Raicu, and S. Lu, "Cloud computing and grid computing 360-degree compared," 2008 Grid Computing Environments Workshop, Austin, TX, pp. 1-10, 2008.
[3] Stefan Aulbach, Torsten Grust, Dean Jacobs, Alfons Kemper, and Jan Rittinger, "Multi-tenant databases for software as a service: Schema-mapping techniques," in SIGMOD '08: Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pp. 1195-1206, Jun. 2008.
[4] Andreas Gobel, "MuTeBench: Turning OLTP-Bench into a Multi-Tenancy Database Benchmark Framework," The Fifth International Conference on Cloud Computing, GRIDs and Virtualization, pp. 84-47, 2014.
[5] Li Heng, Yang Dan, and Zhang Xiaohong, "Survey on multi-tenant data architecture for SaaS," IJCSI International Journal of Computer Science Issues, vol. 9, issue 6, no. 3, Nov. 2012.
[6] Cor-Paul Bezemer and Andy Zaidman, "Multi-Tenant SaaS Applications: Maintenance Dream or Nightmare?," IWPSE-EVOL '10: Proceedings of the Joint ERCIM Workshop on Software Evolution (EVOL) and International Workshop on Principles of Software Evolution (IWPSE), pp. 88-92, Sep. 2010.
[7] Archana Bhaskar and Rajeev Ranjan, "Optimized memory model for hadoop mapreduce framework," International Journal of Electrical and Computer Engineering (IJECE), vol. 9, no. 5, pp. 4396-4407, Oct. 2019.
[8] H. Ballani, P. Costa, T. Karagiannis, and A. Rowstron, "Towards predictable datacenter networks," ACM SIGCOMM Computer Communication Review, pp. 242-253, Aug. 2011.
[9] Toan Phan Thanh, Loc Nguyen The, Said Elnaffar, Cuong Nguyen Doan, and Huu Dang Quoc, "An Effective PSO-inspired Algorithm for Workflow Scheduling," International Journal of Electrical and Computer Engineering (IJECE), vol. 8, no. 5, pp. 3852-3859, Oct. 2018.
[10] B. P. Rimal and M. Maier, "Workflow Scheduling in Multi-Tenant Cloud Computing Environments," IEEE Transactions on Parallel and Distributed Systems, vol. 28, no. 1, pp. 290-304, Jan. 2017.
[11] F. S. Hsieh and J. B. Lin, "A dynamic scheme for scheduling complex tasks in manufacturing systems based on collaboration of agents," Applied Intelligence, vol. 41, no. 2, pp. 366-382, Sep. 2014.
[12] J.-C. Liou and M. A. Palis, "An efficient task clustering heuristic for scheduling DAGs on multiprocessors," in Proc., Resource Management, Symp. of Parallel and Distrib. Processing, pp. 152-156, 1996.
[13] R. Bajaj and D. P. Agrawal, "Improving scheduling of tasks in a heterogeneous environment," IEEE Transactions on Parallel and Distributed Systems, vol. 15, no. 2, pp. 107-118, Feb. 2004.
[14] H. Topcuoglu, S. Hariri, and M. Y. Wu, "Performance-effective and low-complexity task scheduling for heterogeneous computing," IEEE Transactions on Parallel and Distributed Systems, vol. 13, no. 3, pp. 260-274, Mar. 2002.
[15] H. M. Fard, R. Prodan, J. J. D. Barrionuevo, and T. Fahringer, "A multi-objective approach for workflow scheduling in heterogeneous environments," 2012 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (ccgrid 2012), Ottawa, ON, pp. 300-309, 2012.
[16] D. Shue, M. J. Freedman, and A. Shaikh, "Performance isolation and fairness for multi-tenant cloud storage," OSDI '12: Proceedings of the 10th USENIX conference on Operating Systems Design and Implementation, pp. 349-362, Oct. 2012.
[17] S. Pandey, L. Wu, S. Guru, and R. Buyya, "A particle swarm optimization-based heuristic for scheduling workflow applications in cloud computing environments," 2010 24th IEEE International Conference on Advanced Information Networking and Applications, Perth, WA, pp. 400-407, 2010.
[18] M. A. Rodriguez and R. Buyya, "Deadline based resource provisioning and scheduling algorithm for scientific workflows on clouds," IEEE Transactions on Cloud Computing, vol. 2, no. 2, pp. 222-235, 2014.
[19] Z. Wu, X. Liu, Z. Ni, D. Yuan, and Y. Yang, "A market-oriented hierarchical scheduling strategy in cloud workflow systems," The Journal of Supercomputing, vol. 63, no. 1, pp. 256-293, Jan. 2013.
[20] H. M. Fard, R. Prodan, and T. Fahringer, "A truthful dynamic workflow scheduling mechanism for commercial multicloud environments," IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 6, pp. 1203-1212, Jun. 2013.