International Journal of Electrical and Computer Engineering (IJECE)
Vol. 6, No. 3, June 2016, pp. 963~973
ISSN: 2088-8708, DOI: 10.11591/ijece.v6i3.7943
Journal homepage: http://iaesjournal.com/online/index.php/IJECE
Load Balancing Techniques for Efficient Traffic Management in Cloud Environment

Talasila Sasidhar, Vani Havisha, Sai Koushik, Mani Deep, V Krishna Reddy
Department of Computer Science and Engineering, K L University, India
Article Info

Article history: Received Apr 29, 2015; Revised Jan 19, 2016; Accepted Feb 2, 2016

ABSTRACT

Cloud computing is an internet-based computing paradigm that has enhanced the use of the network, where the capability of one node can be utilized by another node. The cloud service provides on-demand access to distributed resources such as databases, servers, software, infrastructure, etc., on a pay-as-you-go basis. Load balancing is one of the vexing issues in a distributed environment: the resources of the service provider need to balance the load of client requests. Load balancing is adopted in order to increase resource utilization in data centers, which enhances the overall performance of the system and achieves client satisfaction.
Keyword:
Cloud data centers
Load balancing
Traffic management
Virtualization
Copyright © 2016 Institute of Advanced Engineering and Science. All rights reserved.
Corresponding Author:
Talasila Sasidhar,
Department of Computer Science and Engineering, K L University,
Vaddeswaram 522502, Guntur District, Andhra Pradesh, India.
1. INTRODUCTION
Traffic engineering in cloud data centers has become a major challenge, particularly when legacy protocols are employed in the data centers. Data centers offer limited and un-scalable traffic management, although the use of VLANs is a way to provide scalable traffic management. Generally, broadcast domains are created by routers, but with the virtualization of LANs, a switch creates the broadcast domain. One needs a VLAN when there are more than 200 devices on the LAN, when there exists a lot of broadcast traffic on the LAN, or when splitting a single switch into multiple virtual switches. With virtualization of a LAN, a device can be connected to one switch, another device can be connected to a different switch, and those devices can still be on the same broadcast domain. Devices on different VLANs communicate through a router, which is used to route between the subnets. Configuring VLANs can vary even between different models of switches. VLANs offer higher performance for medium and large LANs on account of the fact that they limit broadcasts. As the amount of traffic and the number of devices rise, so does the number of broadcast packets. VLANs may even be considered for providing security, because a user essentially puts one group of devices, in one VLAN, on their own network. A trunk port is a special port that runs ISL to carry traffic from more than one VLAN. But VLANs have a few disadvantages: it is more difficult to manage a VLAN than to manage only a LAN; traffic between VLANs must go through a router, i.e., one needs a router and then has to set up the routing protocol and trunk; and there is a high risk of virus attacks, because if one system of a VLAN is infected by a virus it may infect all the systems of that VLAN, so the administrator needs to add an additional layer of security. On the other hand, a VLAN allows one to implement the logical grouping of devices by function instead of location. An existing paper introduced a novel decomposition approach to solve the VLAN mapping problem in cloud data centers
through column generation, which is an effective technique that is proven to reach optimality by exploring only a small subset of the search space. Different load balancing algorithms have been proposed in order to manage the resources of the service provider efficiently and effectively. This project presents the performance analysis of an efficient method for load balancing that addresses some of the key features like overload rejection, process migration, and fault tolerance in the cloud.
2. RELATED WORK
2.1. Load Balancing
Load balancing is one of the crucial issues of cloud computing: it divides the workload dynamically among the processors, thereby improving the performance of the system. The total processing time a machine requires to execute all the tasks assigned to it is termed as workload. Load balancing is done so that every virtual machine in the cloud system does the same amount of work throughout, resulting in increased throughput and minimized response time. Balancing the load of virtual machines uniformly means that no machine is either idle or partially loaded; instead, the machines are loaded equally.
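To make this definition concrete (our notation, not the paper's), the workload of a machine m can be written as the sum of the processing times of the tasks assigned to it:

W_m = \sum_{i \in T_m} t_i

where T_m is the set of tasks assigned to machine m and t_i is the processing time of task i; a balanced system then keeps W_m approximately equal across all machines.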
2.2. Benefits
Distributing the workload among the processors results in utilizing the available resources optimally, reducing the response time and enhancing the overall performance by achieving maximum client satisfaction. It also helps in implementing fail-over and enabling scalability, thereby avoiding bottlenecks and over-provisioning. Load balancing is needed for achieving green computing in clouds, as only limited energy is consumed and a smaller amount of carbon is emitted. Finally, the goal of load balancing is to improve the performance substantially. With the help of load balancing, a backup plan is maintained even when a system fails partially. Load balancing helps in continuing the service by provisioning and de-provisioning the instances of applications without fail. It maintains system stability, and it accommodates future modification of the system.
2.3. Categories of load balancing algorithms
Broadly, load balancing algorithms are categorized into three sets: Symmetric, Sender Initiated and Receiver Initiated. Symmetric load balancing is a combination of the receiver-initiated and sender-initiated approaches. Based on the current state of the system, load balancing is split into two categories: a) Static Algorithm, b) Dynamic Algorithm.
a) Static Algorithm – In this algorithm each server is assigned a weight and, accordingly, the highest-weighted server receives more connections. When all weights are equivalent, the servers receive balanced traffic [1].
b) Dynamic Algorithm – Allocates the accurate weights on servers by searching the entire network, and the lightest-weighted server is preferred to balance the traffic.
The main difference is that the static approach, although based on a simple rule, may place more load upon some servers and result in imbalanced traffic, whereas dynamic load balancing is predicated on a query that can be made frequently on the servers; however, existing traffic sometimes prevents these queries from being answered, which correspondingly adds more overhead. Figure 1 shows the interaction among the components of a dynamic load balancing algorithm.
Figure 1. Interaction among the components of a dynamic load balancing algorithm
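To make the contrast between the two categories concrete, the sketch below is an illustration of ours, not code from the paper; the class and method names are hypothetical. It places a static pick driven purely by fixed weights next to a dynamic pick driven by the servers' current load.

```java
import java.util.List;

// Illustrative sketch only: contrasts a static (fixed-weight) pick with a
// dynamic (current-load) pick over the same set of servers.
class Server {
    final String id;
    final int weight;      // fixed weight consulted by the static policy
    int activeConnections; // live load consulted by the dynamic policy

    Server(String id, int weight) {
        this.id = id;
        this.weight = weight;
    }
}

class SelectionPolicies {
    // Static policy: the highest-weighted server receives the connection,
    // regardless of how busy it currently is.
    static Server pickStatic(List<Server> servers) {
        Server best = servers.get(0);
        for (Server s : servers) {
            if (s.weight > best.weight) best = s;
        }
        return best;
    }

    // Dynamic policy: the whole pool is searched and the lightest-loaded
    // server at this moment is preferred.
    static Server pickDynamic(List<Server> servers) {
        Server best = servers.get(0);
        for (Server s : servers) {
            if (s.activeConnections < best.activeConnections) best = s;
        }
        return best;
    }
}
```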
2.4. Load balancing Algorithms
To achieve the maximum load by distributing the workload among the multiple network links, we employ the following algorithms to distribute the load and also to check the performance and cost.
2.4.1. Round Robin Algorithm
Round Robin is one of the existing load balancing techniques that distributes the load over multiple network links to achieve maximum throughput [2] and minimum response time and to avoid overloading. Here the scheduling time quantum plays an important role.
Figure 2. Round Robin Algorithm
Round Robin uses the time quantum concept, where time is divided into multiple segments and each node is given a particular time interval; a node has to perform its actions within this allocated time interval only. The resources are provided to the client based on the time quantum. If the time quantum is large, the round robin algorithm behaves the same as FCFS. If the time quantum is extremely small, then Round Robin scheduling is called processor sharing. Here the selection of load on context switches and the sharing of the algorithm is random, and this leads to a situation where some nodes are heavily loaded and some are lightly loaded. Though the algorithm is very simple, the additional load placed on the scheduler to decide the size of the quantum means that it has a longer average waiting time, a high number of context switches, a higher turnaround time [2] and low throughput.
Figure 3. Execution of processes within time quantum in circular queue
Step-by-Step:
1. The Round Robin VM Load Balancer (RR VM Load Balancer) maintains an index of VMs and the state of each VM (busy/available). Initially, all VMs have zero allocations.
a. The datacenter controller receives the cloud requests/cloudlets.
b. It stores the arrival/burst time of the user requests.
c. The requests are allocated to VMs based on the states known from the VM queue. The RR VM Load Balancer allocates the time quantum for user request execution.
2. a. The RR VM Load Balancer calculates the turn-around time for each process.
b. It also calculates the response time and average waiting time of the user requests.
c. It decides the scheduling order.
3. After the execution of the cloudlets, the VMs are de-allocated by the RR VM Load Balancer.
4. The datacenter controller checks for new/pending/waiting requests in the queue and continues from step 2. (A simplified sketch of this allocation loop is given below.)
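The numbered procedure above can be condensed into a minimal sketch. This is our simplified illustration in Java, not the CloudAnalyst implementation; names such as RoundRobinVmLoadBalancer are hypothetical. The balancer cycles through its VM index table, and the controller hands each queued cloudlet to the next VM in turn.

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

// Simplified illustration of round-robin VM allocation: cloudlets are
// assigned to VMs in circular order over the index table.
class RoundRobinVmLoadBalancer {
    private final List<String> vmIds; // index table of VMs
    private int next = 0;             // position of the next VM to use

    RoundRobinVmLoadBalancer(List<String> vmIds) {
        this.vmIds = vmIds;
    }

    // Returns the id of the VM that should receive the next request.
    String allocate() {
        String vm = vmIds.get(next);
        next = (next + 1) % vmIds.size(); // wrap around the index table
        return vm;
    }
}

class RoundRobinDemo {
    public static void main(String[] args) {
        RoundRobinVmLoadBalancer balancer =
                new RoundRobinVmLoadBalancer(List.of("VM-0", "VM-1", "VM-2"));
        Queue<String> cloudlets =
                new ArrayDeque<>(List.of("req-1", "req-2", "req-3", "req-4"));

        // The datacenter controller drains its request queue, asking the
        // balancer which VM handles each cloudlet (step 4 loops back here).
        while (!cloudlets.isEmpty()) {
            System.out.println(cloudlets.poll() + " -> " + balancer.allocate());
        }
    }
}
```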
2.4.2. Throttled Load Balancing Algorithm (TLB)
The total execution time in this algorithm is estimated in three stages. In the first stage, the virtual machines that have been formed are idle, waiting for the scheduler to schedule the jobs in the queue; once the jobs are allocated, the virtual machines in the cloud start processing, which is the second stage; and finally, in the third stage, the cleanup or destruction of the virtual machines occurs.
The proposed algorithm will improve the performance by providing the resources on demand, resulting in an increased number of job executions and thus reducing the rejections among the jobs submitted. The throughput of the computing model can be estimated as the total number of jobs executed within a time span, without considering the virtual machine formation time and destruction time.
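Under the stated assumption that VM formation and destruction times are excluded, one way to write this estimate (our notation, not the paper's) is:

\text{Throughput} \approx \frac{N_{\text{jobs}}}{T_{\text{end}} - T_{\text{start}}}

where N_jobs is the number of jobs completed within the observation window [T_start, T_end], and the window excludes the time spent forming and destroying virtual machines.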
Figure 4. Throttled scheduling process
Step-by-Step:
1. The Throttled VM Load Balancer (TM Load Balancer) maintains an index table of VMs and the state of each VM (BUSY/AVAILABLE).
2. When a VM is started, it is said to be available. The DataCenterController receives a new request.
3. The DataCenterController queries the TM Load Balancer for the next allocation.
4. The TM Load Balancer parses the allocation table from the top until the first available VM is found or the table has been parsed completely.
5. If found:
a. The TM Load Balancer returns the VM id to the DataCenterController.
b. The DataCenterController sends the request to the VM identified by that id.
c. The DataCenterController notifies the TM Load Balancer of the new allocation.
d. The TM Load Balancer updates the allocation table accordingly.
6. If not found:
a. The TM Load Balancer returns -1.
b. The DataCenterController queues the request.
c. When the VM finishes processing the request and the DataCenterController receives the response cloudlet, it notifies the TM Load Balancer for de-allocation.
d. The DataCenterController checks for the leftover waiting requests in the queue. If any exist, it continues from step 3.
Continue from step 2. (A simplified sketch of this allocation table follows below.)
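Steps 1-6 can be sketched as follows. This is our simplified Java illustration, not the CloudAnalyst source, and the class names are hypothetical: the balancer keeps a BUSY/AVAILABLE index table, hands out the first available VM id, and returns -1 so the controller queues the request when every VM is busy.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified illustration of the throttled policy: an index table of VM
// states is scanned from the top; the first AVAILABLE VM is allocated,
// otherwise -1 is returned and the controller queues the request.
class ThrottledVmLoadBalancer {
    enum State { AVAILABLE, BUSY }

    private final Map<Integer, State> vmStates = new LinkedHashMap<>();

    ThrottledVmLoadBalancer(int vmCount) {
        for (int id = 0; id < vmCount; id++) {
            vmStates.put(id, State.AVAILABLE); // step 2: a started VM is available
        }
    }

    // Steps 3-5: parse the table from the top and return the first available
    // VM id, or -1 if every VM is busy (step 6a).
    int allocate() {
        for (Map.Entry<Integer, State> entry : vmStates.entrySet()) {
            if (entry.getValue() == State.AVAILABLE) {
                entry.setValue(State.BUSY); // table updated for the new allocation
                return entry.getKey();
            }
        }
        return -1;
    }

    // Step 6c: called when the response cloudlet comes back, freeing the VM.
    void deallocate(int vmId) {
        vmStates.put(vmId, State.AVAILABLE);
    }
}
```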
2.4.3. Equally Spread Current Execution Algorithm (ESCE)
Here the load balancer makes an effort to allocate equal load to all the virtual machines connected with the data centre. The load balancer maintains an index table of VMs along with the number of requests currently assigned to each Virtual Machine (VM). When a request originates from the data centre to allocate a new VM, the Load Balancer scans the entire index table for the least loaded VM. If more than one VM is found, the load balancer selects the first identified VM for handling the client/node's request and also returns the VM id to the data centre controller. The data centre identifies the VM by its id and communicates the request to it. The data centre revises the index table by increasing the allocation count of the identified VM. When the VM has executed the assigned task, a request is communicated to the data centre, which further notifies the load balancer; the balancer again revises the index table by decreasing the identified VM's allocation count by one, even though there remains an additional computation overhead for scanning the queue again and again.
Figure 5. ESCE Process
Step-by-Step:
1. Find the next available VM.
2. Check the current allocation count of every VM; if it is less than the max length of the VM, allocate the VM.
3. If an available VM is not allocated, create a new one. Count the active load on each VM.
4. Return the id of the VM which has the least load.
5. The VM Load Balancer will allocate the request to one of the VMs.
6. If a VM is overloaded, then the VM Load Balancer will distribute some of its work to the VM having the least work so that every VM is equally loaded.
7. The datacenter controller receives the response to the request sent and then allocates the waiting requests from the job pool/queue to the available VM, and so on.
8. Continue from step 2. (A simplified sketch of this balancer follows below.)
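The procedure above can be sketched as follows. This is our simplified Java illustration, not the CloudAnalyst source, and the names are hypothetical: the balancer keeps a per-VM allocation count, always hands a new request to the least-loaded VM, and decrements the count when the data centre reports completion.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified illustration of Equally Spread Current Execution (ESCE):
// the balancer tracks how many requests each VM is currently serving
// and always picks the VM with the smallest count.
class EsceVmLoadBalancer {
    private final Map<Integer, Integer> allocationCount = new HashMap<>();

    EsceVmLoadBalancer(int vmCount) {
        for (int id = 0; id < vmCount; id++) {
            allocationCount.put(id, 0);
        }
    }

    // Scan the whole index table and return the least-loaded VM
    // (the first one found on a tie), incrementing its allocation count.
    int allocate() {
        int bestVm = -1;
        int bestCount = Integer.MAX_VALUE;
        for (Map.Entry<Integer, Integer> entry : allocationCount.entrySet()) {
            if (entry.getValue() < bestCount) {
                bestVm = entry.getKey();
                bestCount = entry.getValue();
            }
        }
        allocationCount.put(bestVm, bestCount + 1);
        return bestVm;
    }

    // Called when the data centre reports completion: the identified VM's
    // allocation count is decreased by one.
    void taskCompleted(int vmId) {
        allocationCount.merge(vmId, -1, Integer::sum);
    }
}
```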
2.5. Deploying Algorithms of load balancing
2.5.1. Cloud analyst – simulation tool
Cloud analyst is actually a toolkit for simulation of cloud scenarios to support evaluation of social network tools according to the geographic distribution of users and data centers [3]. Cloud analyst features are shown in Table 1. In this simulation tool, communities of users and the data centers supporting the social networks are characterized and, based on their location, parameters such as the user experience while using the social network application and the load on the data center are logged/obtained. Cloud Analyst is able to display the output in graphical form [4].
Table 1. Cloud Analyst Features
Parameters                  Cloud Analyst
Communication on Network    Limited
Graphical Reports           Capable to display
Availability                Open Source
Platform                    SimJava
Simulation time             Seconds
Language/Script             Java
Physical Models             None
Energy Models               None
Power Saver Modes           None
All components in Cloud Analyst communicate through the process of message passing. The lowermost layer is responsible for managing the communication between the various components. The second layer has all the sub-layers in it that contain the main cloud components [5].
Cloud Analyst [6] is a GUI-based tool which was developed on the CloudSim [7] architecture [8]. CloudSim is a toolkit which permits a user to perform modeling and simulation. The cloud analyst tool, as shown in Figure 6, removes all the complexities by providing a GUI so that the focus can be on simulation rather than on programming. A user is able to perform simulations repeatedly with slight changes in the parameters very easily and quickly. The cloud analyst allows users to set the location of data centers for generating the application. In this tool, various configuration parameters can be set, such as the number of users, the number of requests generated per user per hour, the number of virtual machines, the number of processors, the amount of storage, the network bandwidth and other necessary parameters. Taking the parameters into account, the tool computes the simulation result, and the result is displayed in graphical form.
Figure 6. The Cloud Analyst Architecture
The outcome comprises response time, processing time, cost, etc. By performing various simulation operations, the cloud provider can focus on the most ideal approach for allocating the resources, choosing the data center, and optimizing cost based on the request. The various activities performed in the cloud analyst tool are summarized in Figure 7.
Figure 7. Tasks of Cloud analyst
The main components of the cloud analyst tool are:
Simulation: By considering the various parameters, this tool executes the simulation and produces the required results.
User Base: Here the user base is modeled to represent the users who deploy the application.
Data Center Controller: Plays a crucial role in controlling the various data center activities.
GUI Package: A graphical interface is displayed for various user interfaces to configure the diverse simulation parameters in an efficient way. The GUI of the cloud analyst is shown in the figure below.
Internet Characteristics: Various internet characteristics are modeled for the simulation, which incorporates the measure of latency and bandwidth and the current performance level of the data centers for assigning between the regions.
Vm Load Balancer: Responsible for allocating the load on the various data centers based on the requests generated by users. One of the policies has to be selected from the round robin algorithm, equally spread current execution load, and throttled.
Cloud App Service Broker: Handles the traffic routing between user bases and data centers by modelling the service broker. The service broker can use one of the three given routing policies, with the option of choosing either the closest data center, optimize response time, or reconfiguring dynamically with load. The closest data center policy routes the traffic from the source user base to the closest data center in terms of network latency. The reconfigure-dynamically-with-load routing policy works on the principle that whenever the performance of a particular data center degrades below a given threshold value, the load of that data center is equally distributed among the other data centers.
Simulation Configuration: The various components of the cloud analyst tool need to be configured for analyzing the load balancing policies. In Figure 10, Figure 11 and Figure 12, the parameters for the user base configuration, application deployment and data center configuration are shown. From the figure we can infer that the location of the user bases has been defined in six different regions of the world. In the table below, we have two data centers in use to handle the requests of the clients/users. Table 2 represents the user base configuration and application deployment configuration. Simulation duration: 180 mins.
Table 2. User Base Configuration
Name  Region  Requests per user  Data size per request  Peak hours start (GMT)  Peak hours end (GMT)  Avg. peak users  Avg. off-peak users
UB1   0       3                  1000                   13                      15                    400000           40000
UB2   1       12                 1000                   15                      17                    100000           10000
UB3   2       8                  1000                   20                      22                    300000           30000
UB4   3       9                  1000                   1                       3                     150000           15000
UB5   4       7                  1000                   21                      23                    50000            100
Service Broker Policy: Closest Data Center
Application Deployment:
Table 3. Application Deployment Configuration
Data Center  #VM's  Image Size  Memory  BW
DC1          20     100         1024    10
DC2          20     100         1024    1000
Data Center Configuration:
Data Centers:
Table 4. Data Center Configuration
Name  Region  Arch  OS     VMM  Cost $/Hr  Memory Cost  Storage Cost $/Hr  Data Transfer Cost per $/Gb  Physical HW Units
DC1   0       x86   Linux  Xen  01         035          01                 01                           2
DC2   2       x86   Linux  Xen  01         035          01                 01                           1
3. RESULTS AND ANALYSIS
After simulating, the results computed by the cloud analyst are as shown in the following figures. For each load balancing policy configuration, the results calculated for metrics like response time, request processing time and the cost of fulfilling the requests are shown in Figures 9, 10 and 11.
3.1. Response Time
The response time for each user base and the overall response time are calculated by the cloud analyst for each loading policy, and the results are tabulated in Tables 5, 6 and 7 respectively. We can infer from these results that the overall response time of the Round Robin policy and the ESCE policy is almost the same, while that of the Throttled policy is very much lower as compared to the other two policies.
Table 5. Overall Response Time of Round Robin Algorithm
Overall Response Time Using Round Robin Policy
                             Average (ms)  Minimum (ms)  Maximum (ms)
Overall Response time        754.81        67.97         1589.10
Data Center Processing time  472.77        0.40          1064.89
Response time by region
User Base  Average (ms)  Minimum (ms)  Maximum (ms)
UB1        172.608       67.974        244.91
UB2        229.71        177.144       340.717
UB3        243.605       162.125       340.563
UB4        1,173.705     303.74        1589.994
UB5        317.408       278.357       356.425
UB6        212.468       169.318       327.106
Table 6. Overall Response Time of ESCE Algorithm
Overall Response Time Using ESCE
                             Average (ms)  Minimum (ms)  Maximum (ms)
Overall Response time        757.45        67.97         1580.08
Data Center Processing time  475.50        0.40          1053.09
Response time by region
User Base  Average (ms)  Minimum (ms)  Maximum (ms)
UB1        172.897       65.767        244.378
UB2        229.507       177.144       356.597
UB3        243.925       162.125       340.241
UB4        1,173.218     303.34        1580.882
UB5        318.247       278.357       375.242
UB6        212.526       169.318       327.052
Table 7. Overall Response Time of Throttled Algorithm
Overall Response Time Using Throttled
                             Average (ms)  Minimum (ms)  Maximum (ms)
Overall Response time        511.33        63.30         1456.36
Data Center Processing time  246.06        0.40          935.23
Response time by region
User Base  Average (ms)  Minimum (ms)  Maximum (ms)
UB1        117.943       63.3          194.085
UB2        225.713       177.144       328.153
UB3        160.77        72.256        303.224
UB4        781.09        299.547       1456.362
UB5        318.209       278.357       374.415
UB6        210.389       169.318       336.314
3.2. Data Center Request Servicing Time
The Data Center Request Servicing Time for each data center, calculated by the cloud analyst for each loading policy, is shown in Table 8. The tabulation shows that the servicing time of the Round Robin policy and the ESCE algorithm is almost the same, while that of the Throttled policy is very much lower as compared to the other two policies.
Table 8. Overall Data Center Request Servicing Time of the Algorithms
Data Center Request Servicing Time
For Round Robin Algorithm
Data Center  Average (ms)  Minimum (ms)  Maximum (ms)
DC1          68.673        1.911         173.345
DC2          646.238       0.404         1064.888
For ESCE Algorithm
DC1          68.876        1.911         171.756
DC2          649.911       0.404         1053.088
For Throttled Algorithm
DC1          37.348        1.911         120.454
DC2          334.222       0.404         935.23
3.2.1. Load Balancing Challenges – Cloud Computing
In cloud computing, load balancing is required to distribute the dynamic local workload evenly across all the nodes. It assists in achieving high user satisfaction and a high resource utilization ratio by guaranteeing a proficient, reasonable distribution of each processing resource. Appropriate load balancing supports resource utilization, actualizing fail-over, enabling scalability and elasticity, and keeping away from bottlenecks, etc. [9],[10]. Despite the fact that cloud computing is on pace, research in cloud computing is still in its initial stages, and some experimental difficulties stay unsolved by established researchers, especially load adjusting difficulties [11]. Elasticity is a key feature in the cloud, where resources can be allocated or released automatically. A user can use or release the resources of the cloud while keeping the same performance as traditional systems, by making use of the best possible resources.
3.3. Virtual Machines Migration
With virtualization, an entire machine can be seen as a file or set of files; to unload a heavily loaded machine, it is possible to shift a virtual machine between physical machines. The main objective is to distribute the load in a datacenter or set of datacenters. However, how to dynamically distribute the load by moving virtual machines without involving the users is still an open question, even though such migration keeps the cloud computing framework away from bottlenecks.
Figure 8. Data Center Loading Time
Figure 9. User Hourly Response Time
Figure 10. Data Center Hourly Processing Times
3.4. Energy Management
Economy of scale is a beneficiary factor that supports the cloud. Energy saving is a crucial consideration that allows a set of global resources to be supported by condensed providers. If so, then how a user can use a part of a datacenter while maintaining standard performance remains unsolved.
3.5. Emergence of Small Data Centers for Cloud Computing
Small datacenters are beneficial as they are less expensive. Small providers deliver cloud computing services, leading to geo-diversity computing. Yet at the same time, load balancing will become a problem on a global scale, in order to ensure an adequate response time with an optimal circulation of resources.
3.6. Stored Data Management
Over the past years, data stored across the network has seen an exponential rise, both for organizations outsourcing their data storage and for individuals; the management of data storage has thus become a major challenge for cloud computing. The distribution of the data for optimum storage in the cloud with quick access is the present day's challenge.
4. CONCLUSION
Load balancing distributes the dynamic local workload evenly across all the nodes in the cloud. Load balancing strives to achieve high user satisfaction and a high resource utilization ratio by avoiding the situation where leftover nodes are either heavily loaded or idle; thereby, the overall performance and resource utility of the system increase. With proper balancing, the resource utility ratio is maintained at a minimum, which will further reduce energy consumption. In this paper, existing load balancing techniques have been discussed which focus on reducing the associated overhead and the service response time and on improving performance, etc., but none of the techniques has considered the energy consumption and carbon emission factors. Yet at the same time there are numerous existing issues which have not been fully addressed, like load balancing, virtual machine migration, server consolidation, and energy management. Key to these issues is the issue of load balancing, which is obliged to distribute the excess dynamic local workload evenly to all the nodes in the cloud in order to attain high client fulfillment and a high resource utilization ratio.
REFERENCES
[1] R. X. T. and X. F. Z., "A Load Balancing Strategy Based on Combination of Static and Dynamic," in Database Technology and Applications (DBTA), 2010 2nd International Workshop (2010), pp. 1-4.
[2] S. Hiranwal and K. C. Roy, "Adaptive Round Robin Scheduling Using Shortest Burst Approach Based On Smart Time Slice," International Journal of Computer Science and Communication, vol/issue: 2(2), pp. 319-323, 2011.
[3] M. Rahul and J. Prince, "Study and Comparison of CloudSim Simulators in the Cloud Computing," The SIJ Transactions on Computer Science Engineering & its Applications (CSEA), vol/issue: 1(4), pp. 111-115, 2013.