TELKOMNIKA Indonesian Journal of Electrical Engineering
Vol. 14, No. 3, June 2015, pp. 534 ~ 542
DOI: 10.11591/telkomnika.v14i3.7798
Received February 6, 2015; Revised April 13, 2015; Accepted May 7, 2015
High Performance Computing Clusters Design and
Analysis Using Red Hat Enterprise Linux
Atiqur Rahman
Department of Computer Science & Engineering, University of Chittagong, Chittagong, Bangladesh
E-mail: atiqcse09@cu.ac.bd
Abstract
The purpose of this paper is to configure a cluster computing system that improves performance over that of a single computer, while typically being much more cost-effective than single computers of comparable speed. High performance computing is used in more and more fields, but for the common user several obstacles stand in the way of running work on specialized clusters: high expense, difficult management, complex operation and so on. To overcome these problems, this article designs and realizes a high performance computing environment based on a Linux cluster, after studying the key technologies of clustering and parallel computing. Finally, the performance of the system environment is tested with the iterative version of the cpi algorithm. Before discussing the HPC cluster computing system and its classification, the advantages and applications of clustering systems are also discussed here.
Keywords: HPCC - high performance computing cluster, NTP - network time protocol, SSH - secure shell, NFS - network file system, PDSH - public domain super hero
Copyright © 2015 Institute of Advanced Engineering and Science. All rights reserved.
1. Cluster Computing & Linux
A supercomputer is one of the biggest, fastest computers available at this moment, so the definition of supercomputing is constantly changing. Supercomputing is also called High Performance Computing (HPC). High-Performance Computing is a branch of computer science that focuses on developing supercomputers, parallel processing algorithms, and related software [6]. Demand for services has grown in step with the advancement of high-speed networks, and a server can become overloaded in a very short time as visiting demand increases; cluster technology has therefore emerged as the times require. At present, cluster systems are applied in many fields, such as scientific research and calculation, petroleum exploration, weather forecasting, biological information, signal handling and so on. We can foresee that, as symmetric multiprocessing machines come into wide use, high performance network products mature, and the surrounding system and application software emerge, the new generation of high performance cluster systems will become a popular platform in the computing field.
1.1. Why Linux?
Although clustering can be performed on various operating systems like Windows, Macintosh, Solaris etc., Linux has its own advantages, which are as follows:
a) Linux runs on a wide range of hardware.
b) Linux is exceptionally stable.
c) Linux source code is freely distributed.
d) Linux is relatively virus free.
e) A wide variety of tools and applications are available for free.
f) It is a good environment for developing cluster infrastructure.
The next section gives an overview of the HPC cluster along with its features, uses and benefits. The rest of the paper is organized as follows: Section 2 illustrates the HPC cluster's framework along with its features, benefits and uses; Section 3 describes the different steps of implementing an HPC cluster; Section 4 presents the benchmarking, results and performance analysis of the HPC cluster; and Section 5 concludes the paper.
2. Overview of High Performance Computing Cluster [6]
2.1. What is a High Performance Computing Cluster?
High-Performance Computing (HPC) describes computing environments which utilize supercomputers and computer clusters to address complex computational requirements, support applications with significant processing time requirements, or require processing of significant amounts of data. A high-performance computing cluster uses its cluster nodes to perform concurrent calculations, allowing applications to work in parallel and therefore enhancing the performance of the application. High performance clusters are also referred to as computational clusters, grid clusters or HPC clusters.
Figure 1. Graphical view of a High Performance cluster
2.2. Cluster Framework
Figure 2 shows the structure of a cluster.
Figure 2. Cluster Framework
A cluster contains several servers (at least two) that share a common data storage space. When any server runs an application, the application data are saved in this shared data space, while each server's operating system and application program files are stored in its own local storage. The nodes of the cluster communicate with each other through an internal network. When one server node fails, the application programs running on it are automatically taken over by another node.
The cluster architecture brings many advantages, such as easy expansion, high manageability, high usability and a high performance-to-price ratio; it also addresses cross-platform application deployment, operating system management, and monitoring of the running state of system software and hardware. It is an ideal, robust platform for undertaking large-scale scientific computation.
2.3. Unique Benefits of HPC Clusters
The significance of high-performance computing (HPC) clustering is growing ever faster today, because more and more technical and scientific problems are analyzed through computer simulation. High performance computing uses several processors in parallel to carry out a task, which raises the efficiency of the calculation. HPC clusters offer engineers, scientists and technology analysts the computing resources required for making crucial decisions, which in turn promotes product innovation, accelerates development and research, and minimizes time to market. Leading service providers in this domain support the R&D community by offering validated, integrated solutions with cluster configurations optimized for particular applications.
3. Implementation of HPC Cluster [18]
3.1. Different Steps of HPC Cluster Implementation
Linux must first be installed (preferably the same version) on all the nodes that are to be clustered. To realize the parallel computing environment we adopt four common PCs: one of them is selected as the main supervision node, named hpcnd1 with IP address 192.168.1.10, and the other three PCs are regarded as subordinate nodes, named hpcnd2, hpcnd3 and hpcnd4 with IP addresses 192.168.1.20, 192.168.1.30 and 192.168.1.40 respectively. All of them are connected through a NETGEAR FS-608 version 2 switch, forming a star-shaped LAN. The Linux operating system and the necessary tool packages are installed on each of the four computer nodes (note that the firewall is turned off).
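Before configuring the services below, it helps if every node can resolve the other nodes by name. The following is only a minimal sketch, assuming name resolution is done through /etc/hosts rather than DNS; the node names and addresses come from the setup above, everything else is illustrative.

# /etc/hosts on every node (sketch; adjust to the actual network)
127.0.0.1      localhost
192.168.1.10   hpcnd1    # main supervision node
192.168.1.20   hpcnd2    # subordinate node
192.168.1.30   hpcnd3    # subordinate node
192.168.1.40   hpcnd4    # subordinate node

With this in place, the nodes can be addressed by name in the SSH, PDSH, NFS and MPICH2 steps that follow.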
3.1.1. SSH (Secure Shell) Configuration
SSH is a packet-based binary protocol that provides encrypted connections to remote hosts or servers. Secure Shell is a program for logging into another computer over a network, executing commands on a remote machine, and moving files from one machine to another. It provides strong authentication and secure communications over insecure channels, and it is a replacement for rlogin, rsh, rcp, rdist, telnet and ftp. After the key-based login from hpcnd1 to hpcnd2 has been configured and tested, the test session ends with:
Connection to hpcnd2 closed.
The same procedure also has to be applied to the pairs hpcnd1-hpcnd3 and hpcnd1-hpcnd4, so that the server (hpcnd1) can log into the other clients securely without a password prompt, using a public key (DSA algorithm).
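A minimal sketch of this key-based login setup, assuming the cluster user is hpower and that ssh-keygen and ssh-copy-id are available on the nodes (the exact steps used in the paper may differ):

# run on hpcnd1 as the cluster user (e.g. hpower)
ssh-keygen -t dsa                          # generate a DSA key pair; accept the default path, empty passphrase
ssh-copy-id -i ~/.ssh/id_dsa.pub hpcnd2    # append the public key to hpcnd2's ~/.ssh/authorized_keys
ssh hpcnd2 hostname                        # should now run without asking for a password
# repeat the ssh-copy-id and test steps for hpcnd3 and hpcnd4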
3.1.2. NTP (Network Time Protocol) Configuration
NTP stands for Network Time Protocol, an Internet protocol used to synchronize the clocks of computers to some time reference. NTP is an Internet standard protocol originally developed by Professor David L. Mills at the University of Delaware. Time usually just advances: if you have communicating programs running on different computers, time should still advance when you switch from one computer to another. Obviously, if one system is ahead of the others, the others are behind that particular one, and from the perspective of an external observer switching between these systems would cause time to jump forward and back, an undesirable effect.
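As a minimal sketch of the configuration, assuming hpcnd1 is used as the common time reference for the other nodes (the actual time server used in the paper is not specified):

# on hpcnd2, hpcnd3 and hpcnd4: point /etc/ntp.conf at the head node, e.g.
#   server 192.168.1.10
ntpdate 192.168.1.10     # one-shot synchronization before the daemon starts
service ntpd start       # start the NTP daemon
chkconfig ntpd on        # have it start automatically at boot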
3.1.3. PDSH (Public Domain Super Heroes)
PDSH is an efficient, multithreaded remote shell client which executes commands on multiple remote hosts in parallel. Unlike rsh (remote shell), which runs commands on a single remote host, PDSH can run multiple remote commands in parallel. It is a threaded application that uses a sliding window (or fan-out) of threads to conserve resources on the initiating host and to allow some connections to time out while all other connections continue.
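For example, once pdsh is installed and the passwordless SSH above is working, a single command can be fanned out to all four nodes. A sketch, assuming pdsh's hostlist syntax for the -w option:

pdsh -w hpcnd[1-4] date      # print the clock on every node (useful for checking NTP)
pdsh -w hpcnd[1-4] uptime    # check the load on every node in parallel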
3.1.4. NFS (Network File System) Configuration
NFS stands for Network File System, a file system developed by Sun Microsystems, Inc. It is a client/server system that allows users to access files across a network and treats them as if they resided in a local directory. For example, if you were using a computer linked to a second computer via NFS, you could access files on the second computer as if they resided in a directory on the first computer. This is accomplished through the processes of
Evaluation Warning : The document was created with Spire.PDF for Python.
TELKOM
NIKA
ISSN:
2302-4
046
High Pe
rform
ance Com
put
ing Clu
s
ters
De
sign a
nd Analysi
s
Using
Red Hat… (Atiqur Ra
hm
an)
537
exporting (the process by which an NFS server provides remote clients with access to its files) and mounting (the process by which file systems are made available to the operating system and the user). Simply put, NFS is a file system used for sharing files over a network; other resources like printers and storage devices can also be shared. This means that, using NFS, files can be accessed remotely.
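A minimal sketch of the export and mount steps, assuming the shared directory is /cluster on hpcnd1 (this path is inferred from the mpiexec paths used later and is not stated explicitly in the setup):

# on hpcnd1, in /etc/exports:
#   /cluster 192.168.1.0/255.255.255.0(rw,sync,no_root_squash)
exportfs -a                                   # export everything listed in /etc/exports
service nfs start                             # start the NFS server
# on hpcnd2, hpcnd3 and hpcnd4:
mount -t nfs 192.168.1.10:/cluster /cluster   # mount the shared directory at the same path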
3.1.5. MPICH2 Installation & Configuration
MPICH2 is a portable implementation of MPI (Message Passing Interface), a standard for message passing in distributed memory applications used in parallel computing. It provides an MPI implementation that efficiently supports different computation and communication platforms, including commodity clusters, high-speed networks and proprietary high-end computing systems. MPICH2 is free software and is available for most flavors of UNIX and Microsoft Windows. MPICH2 separates process management from communication. The default runtime environment consists of a set of daemons, called mpd (multipurpose daemon), that establish communication among the machines to be used before the application processes start up, thus providing a clearer picture of what is wrong when communication cannot be established, and providing a fast and scalable startup mechanism when parallel jobs are started.
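A minimal sketch of bringing up the mpd ring before running MPI jobs, assuming MPICH2 is installed under /cluster/mpich2 as in the benchmarking commands below, and that each node has a ~/.mpd.conf file containing a shared secret word with permissions 600:

# ~/mpd.hosts lists the node names, one per line (hpcnd1, hpcnd2, hpcnd3, hpcnd4)
export PATH=/cluster/mpich2/bin:$PATH
mpdboot -n 4 -f ~/mpd.hosts    # start one mpd daemon per node over SSH
mpdtrace                       # verify that all four nodes have joined the ring
# ... run mpiexec jobs ...
mpdallexit                     # shut the ring down when finished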
4. Benchmarking, Results & Performance Analysis
4.1. Benchmarking [19]
Benchmarking is a systematic process for identifying and implementing best or better practices. Dimensions typically measured are quality, time and cost. Now let's start some benchmarking:
[hpower@hpcc1 cluster]$ cd mpich2-1.0.8p1/examples/
The cpi file is already compiled and executable:
-rw-r--r-- 1 hpower power 678 Nov 3 2007 child.c
-rwxr-xr-x 1 hpower power 576450 Jun 6 14:34 cpi
-rw-r--r-- 1 hpower power 1515 Nov 3 2007 cpi.c
-rw-r--r-- 1 hpower power 1964 Jun 6 14:34 cpi.o
-rw-r--r-- 1 hpower power 4469 Nov 3 2007 cpi.vcproj
drwxr-xr-x 2 hpower power 4096 Jun 6 14:31 cxx
drwxr-xr-x 2 hpower power 4096 Mar 27 01:40 developers
-rw-r--r-- 1 hpower power 10446 Nov 3 2007 examples.sln
drwxr-xr-x 2 hpower power 4096 Jun 6 14:31 f77
drwxr-xr-x 2 hpower power 4096 Jun 6 14:31 f90
-rw-r--r-- 1 hpower power 455 Nov 3 2007 hellow.c
-rw-r--r-- 1 hpower power 1892 Nov 3 2007 icpi.c
-rw-r--r-- 1 hpower power 6802 Jun 6 14:31 Makefile
-rw-r--r-- 1 hpower power 6767 Mar 27 01:40 Makefile.in
-rw-r--r-- 1 hpower power 1490 Mar 12 2008 Makefile.sm
drwxr-xr-x 2 hpower power 4096 Mar 27 01:39 mpiexec
-rw-r--r-- 1 hpower power 1049 Nov 3 2007 parent.c
-rw-r--r-- 1 hpower power 46399 Nov 3 2007 pmandel.c
-rw-r--r-- 1 hpower power 47798 Nov 3 2007 pmandel_fence.c
-rw-r--r-- 1 hpower power 4522 Nov 3 2007 pmandel_fence.vcproj
-rw-r--r-- 1 hpower power 45576 Nov 3 2007 pmandel_service.c
-rw-r--r-- 1 hpower power 4532 Nov 3 2007 pmandel_service.vcproj
-rw-r--r-- 1 hpower power 47510 Nov 3 2007 pmandel_spaserv.c
-rw-r--r-- 1 hpower power 4510 Nov 3 2007 pmandel_spaserv.vcproj
From these examples we will use icpi.c, which is the iterative version of the cpi example (it computes an approximation of π).
4.2. Results
To obtain the results and test the performance of the HPC cluster, we first have to compile and run the icpi.c program on the cluster. To compile the program we write the following commands:
[hpower@hpcnd1 ~]$ which mpiexec
/cluster/mpich2/bin/mpiexec
[hpower@hpcnd1 examples]$ mpicc -o icpi icpi.c
Now execute it on different numbers of nodes, and also open top -c to check the processes. First we will run with a single node:
[hpower@hpcnd1 ~]$ /cluster/mpich2/bin/mpiexec -n 1 /cluster/mpich2-1.0.8p1/examples/icpi
Or
[hpower@hpcnd1 examples]$ mpiexec -n 1 ./icpi
Enter the number of intervals: (0 quits) 1000000000
pi is approximately 3.1415926535921401, Error is 0.0000000000023470
wall clock time = 19.196214
Enter the number of intervals: (0 quits)
top -c output:
[root@hpcnd1 ~]# top -c
top - 15:53:08 up 1:57, 3 users, load average: 0.21, 0.11, 0.03
Tasks: 188 total, 2 running, 186 sleeping, 0 stopped, 0 zombie
Cpu(s): 12.5%us, 0.0%sy, 0.0%ni, 87.4%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 3368196k total, 936760k used, 2431436k free, 140504k buffers
Swap: 12289716k total, 0k used, 12289716k free, 634932k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
14289 hpower 25 0 2332 812 676 R 100 0.0 0:07.80 ./icpi
1 root 15 0 2032 640 552 S 0 0.0 0:01.82 init [5]
Let's run with two nodes:
[hpower@hpcnd1 examples]$ mpiexec -n 2 ./icpi
Enter the number of intervals: (0 quits) 1000000000
pi is approximately 3.1415926535905170, Error is 0.0000000000007239
wall clock time = 10.303831
Enter the number of intervals: (0 quits)
hpcnd1 top -c output:
[root@hpcnd1 ~]# top -c
top - 15:56:46 up 2:01, 3 users, load average: 0.14, 0.10, 0.04
Tasks: 188 total, 2 running, 186 sleeping, 0 stopped, 0 zombie
Cpu(s): 12.5%us, 0.0%sy, 0.0%ni, 87.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 3368196k total, 941472k used, 2426724k free, 140656k buffers
Swap: 12289716k total, 0k used, 12289716k free, 634876k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
14312 hpower 25 0 2328 848 712 R 100 0.0 0:08.84 ./icpi
1 root 15 0 2032 640 552 S 0 0.0 0:01.82 init [5]
Let's run with three nodes:
[hpower@hpcnd1 examples]$ mpiexec -n 3 ./icpi
Enter the number of intervals: (0 quits) 1000000000
pi is approximately 3.1415926535905170, Error is 0.0000000000007239
wall clock time = 6.869343
Enter the number of intervals: (0 quits)
Let's run with four nodes:
[hpower@hpcnd1 examples]$ mpiexec -n 4 ./icpi
Enter the number of intervals: (0 quits) 1000000000
pi is approximately 3.1415926535905170, Error is 0.0000000000007239
wall clock time = 4.211989
Enter the number of intervals: (0 quits)
The following screenshots show the results of the icpi.c runs, i.e. the wall clock time using single and multiple nodes, together with the top -c output of each run:
1. Using single node (hpcnd1)
2. Using double nodes (hpcnd1+hpcnd3)
3. Using three nodes (hpcnd1+hpcnd2+hpcnd3)
4. Using four nodes (hpcnd1+hpcnd2+hpcnd3+hpcnd4)
4.3. Performance Analysis [20]
The following table shows the wall clock time after running icpi.c on single or multiple nodes with different numbers of intervals.
Table 1. Wall clock time (seconds) for different numbers of intervals (value of n in the algorithm)

Nodes (wall clock time)          n=100      n=10000    n=1000000  n=100000000  n=1000000000
hpcnd1                           0.000036   0.000228   0.019338   1.920956     19.196214
hpcnd1+hpcnd2                    0.002732   0.000927   0.010423   0.962485     10.303831
hpcnd1+hpcnd2+hpcnd3             0.005705   0.000766   0.007697   0.685562     6.869343
hpcnd1+hpcnd2+hpcnd3+hpcnd4      0.006224   0.000425   0.000259   0.425128     4.211989
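From Table 1 the speedup for the largest problem size can be computed directly as S(p) = T(1)/T(p): with n = 1000000000, S(2) = 19.196214/10.303831 ≈ 1.86, S(3) = 19.196214/6.869343 ≈ 2.79 and S(4) = 19.196214/4.211989 ≈ 4.56, whereas for the smallest interval counts the added communication cost outweighs the benefit of extra nodes.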
The information in Table 1 can also be represented as a graph, so that the performance trend is easier to understand through graphical representation.
Figure 3. Graphical representation of number of intervals vs. wall clock time
5. Conclusion
With the growing popularity of high performance computation and network technology, the cluster system, a hot research topic and the mainstream of parallel computing in the world, offers advantages that cannot be substituted. This paper presents a method of setting up a parallel computing environment based on Linux that meets the needs of large-scale scientific and engineering computing in an experimental setting. Here we have discussed the performance analysis of the HPC cluster using only one algorithm, the iterative cpi algorithm; many other algorithms are available for performance analysis, and in future the performance can be analyzed using them. The number of nodes can also be increased to 8, 16 or more to improve the strength of the HPC cluster. The HPC cluster can also be implemented using other Linux distributions such as CentOS or Fedora, or other operating systems such as UNIX or Mac.
References
[1] Yang Shin-Jer, Chung-Chih Tu, Jyhjong Lin. Design Issue and Performance Analysis of Data Migration Tool in a Cloud-Based Environment. Proceedings of the 4th International Conference on Computer Engineering and Networks. Springer International Publishing. 2015.
[2] Hadjidoukas PE, et al. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models. Journal of Computational Physics. 2015; 284: 1-21.
[3] Yao Yushu, et al. SciDB for High Performance Array-structured Science Data at NERSC. Computing in Science & Engineering. 2015; 1: 1-1.
[4] Younge Andrew J, John Paul Walters, Geoffrey C Fox. Supporting High Performance Molecular Dynamics in Virtualized Clusters using IOMMU, SR-IOV, and GPUDirect. 2015.
[5] Visser Marco D, et al. S1 Text: Speeding up ecological and evolutionary computations in R; essentials of high performance computing for biologists. 2015.
[6] Kaur Arvinder, Shraddha Verma. Performance Measurement and Analysis of High-Availability Clusters. ACM SIGSOFT Software Engineering Notes. 2015; 40(2): 1-7.
[7] Goudey Benjamin, et al. High performance computing enabling exhaustive analysis of higher order single nucleotide polymorphism interaction in Genome Wide Association Studies. Health Information Science and Systems. 2015; 1: 3.
[8] Yao Yushu, et al. SciDB for High Performance Array-structured Science Data at NERSC. Computing in Science & Engineering. 2015; 1: 1-1.
[9] Belgacem Mohamed Ben, Bastien Chopard. A hybrid HPC/cloud distributed infrastructure: Coupling EC2 cloud resources with HPC clusters to run large tightly coupled multiscale applications. Future Generation Computer Systems. 2015; 42: 11-21.
[10] Dumitrel Loghin, Bogdan Marius Tudor, et al. A Performance Study of Big Data on Small Nodes. Proceedings of the VLDB Endowment. 2015; 8(7).
[11] Hartog Jessica, et al. Performance Analysis of Adapting a MapReduce Framework to Dynamically Accommodate Heterogeneity. Transactions on Large-Scale Data- and Knowledge-Centered Systems XX. Springer Berlin Heidelberg. 2015: 108-130.
[12] Du Jun, et al. Research on Linux-Based PC Cluster System and its Application in Numerical Simulation for Shallow Buried Soft Soil Tunnel. Applied Mechanics and Materials. 2015; 730.
[13] Kobayashi Hiroaki. Feasibility study of a future HPC system for memory-intensive applications: final report. Sustained Simulation Performance 2014. Springer International Publishing. 2015: 3-16.
[14] Sharifi Hadi, Omar Aaziz, Jonathan Cook. Monitoring HPC applications in the production environment. Proceedings of the 2nd Workshop on Parallel Programming for Analytics Applications. ACM. 2015.
[15] Gudivada Venkat N, Jagadeesh Nandigam, Jordan Paris. Programming Paradigms in High Performance Computing. Research and Applications in Global Supercomputing. 2015: 303.
[16] El-Moursy Ali A, et al. Parallel PPI Prediction Performance Study on HPC Platforms. Journal of Circuits, Systems and Computers. 2015.
[17] Ahmed Munib, Ishfaq Ahmad, Mohammad Saad Ahmad. A survey of genome sequence assembly techniques and algorithms using high-performance computing. The Journal of Supercomputing. 2015; 71(1): 293-339.
[18] Islam Nusrat S, et al. High performance RDMA-based design of HDFS over InfiniBand. Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis. IEEE Computer Society Press. 2012.
[19] Danalis Anthony, et al. The scalable heterogeneous computing (SHOC) benchmark suite. Proceedings of the 3rd Workshop on General-Purpose Computation on Graphics Processing Units. ACM. 2010.
[20] Nichols J, et al. HPC-EPIC for high resolution simulations of environmental and sustainability assessment. Computers and Electronics in Agriculture. 2011; 79(2): 112-115.
[21] Kopper Karl. The Linux Enterprise Cluster: build a highly available cluster with commodity hardware and free software. No Starch Press. 2005.
[22] Bookman Charles. Linux clustering: building and maintaining Linux clusters. Sams Publishing. 2003.
[23] Baker Mark. Cluster computing white paper. 2000.
[24] Yang Chao-Tung, Yu-Lun Luo, Chuan-Lin Lai. Designing computing platform for BioGrid. International journal of computer applications in technology. 2005; 22(1): 3-13.