TELKOMNIKA, Vol.14, No.4, December 2016, pp. 1586~1597
ISSN: 1693-6930, accredited A by DIKTI, Decree No: 58/DIKTI/Kep/2013
DOI: 10.12928/TELKOMNIKA.v14i4.4238
Received June 29, 2016; Revised October 14, 2016; Accepted October 29, 2016
Iteration Methods for Linear Systems with Positive Definite Matrix
Longquan Yong*1, Shouheng Tuo2, Jiarong Shi3
1,2School of Mathematics and Computer Science, Shaanxi Sci-Tech University, Hanzhong 723001, China;
3School of Science, Xi'an University of Architecture and Technology, Xi'an 710055, China
*Corresponding author, e-mail: yonglongquan@126.com
Abstract
Two classical first order iteration methods, Richardson iteration and HSS iteration, for linear systems with positive definite matrix are demonstrated. Theoretical analyses and computational results show that the HSS iteration has the advantages of fast convergence speed, high computation efficiency, and no requirement of symmetry.
Keywords: iteration method, Richardson iteration, HSS iteration, symmetric positive definite, generalized positive definite
Copyright © 2016 Universitas Ahmad Dahlan. All rights reserved.
1. Introduction
Consider a linear system:

$Ax = b$  (1)

where $A \in R^{n \times n}$ is nonsingular. Instead of solving system (1) by a direct method, e.g., by Gaussian elimination, in many cases it may be advantageous to use an iterative method of solution. This is particularly true when the dimension $n$ of system (1) is very large. This paper presents some classical iterative methods.
The general scheme of what is known as the first order iteration process consists of successively computing the terms of the sequence:

$x_{k+1} = Bx_k + \varphi, \quad k = 0, 1, 2, \ldots$  (2)

where the initial guess $x_0 \in R^n$ is specified arbitrarily [1]. The matrix $B$ is known as the iteration matrix. Clearly, if the sequence $\{x_k\}$ converges, i.e., if there is a limit $x^* = \lim_{k \to \infty} x_k$, then $x^*$ is the solution of system (1).

Iterative method (2) is first order because the next iterate $x_{k+1}$ depends only on one previous iterate, $x_k$. In the following we give two classical first order iterations, Richardson iteration and HSS iteration.
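As a small illustration of scheme (2), the sketch below iterates $x_{k+1} = Bx_k + \varphi$ until successive iterates stop moving; the Jacobi-style splitting and the 2×2 test matrix are illustrative choices, not taken from the paper.

```python
import numpy as np

def first_order_iteration(B, phi, x0, tol=1e-10, max_iter=10_000):
    # Iterate x_{k+1} = B x_k + phi until successive iterates stop moving.
    x = x0
    for k in range(max_iter):
        x_new = B @ x + phi
        if np.linalg.norm(x_new - x) <= tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Illustrative 2x2 system; a Jacobi-style splitting gives B = I - D^{-1} A.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
D = np.diag(np.diag(A))
B = np.eye(2) - np.linalg.solve(D, A)
phi = np.linalg.solve(D, b)
x, k = first_order_iteration(B, phi, np.zeros(2))
```

The iteration converges here because the spectral radius of $B$ is below 1, which is exactly the condition studied in the following sections.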
The basic contribution of the present work is to validate the performance of iteration methods for linear systems with positive definite matrices generated randomly.
In Section 2, the Richardson iteration method and its convergence analysis are demonstrated, and the HSS iteration method and its convergence analysis are developed in Section 3. Optimal convergence speeds of Richardson and HSS iteration are studied in Section 4 for symmetric positive definite matrices. Section 5 gives some numerical examples generated randomly. Section 6 concludes the paper.
We now describe our notation. All vectors will be column vectors. The notation $A \in R^{n \times n}$ will signify a real $n \times n$ matrix, $\rho(A)$ the spectral radius of matrix $A$, and $\|A\|_2$ the 2-norm. We write $I$ for the identity matrix ($I$ is of suitable dimension in context). A vector of zeros in a real space of arbitrary dimension will be denoted by $0$.
Definition 1.1. The matrix $A$ is symmetric positive definite (SPD, [2]) if $d^T A d > 0$ for every $d \in R^n$, $d \neq 0$, and $A^T = A$.

Definition 1.2. The matrix $A$ is generalized positive definite (GPD, [2]) if $d^T A d > 0$ for every $d \in R^n$, $d \neq 0$.
2. Richardson Iteration for Symmetric Positive Definite Matrix
Richardson iteration was proposed by Lewis Richardson [3]. It is similar to the Jacobi and Gauss-Seidel methods. Recast system (1) as follows:

$x = (I - \tau A)x + \tau b$

The Richardson iteration is:

$x_{k+1} = (I - \tau A)x_k + \tau b = x_k + \tau(b - Ax_k), \quad k = 0, 1, 2, \ldots$  (3)

In doing so, the new system is equivalent to the original one for any value of the parameter $\tau > 0$. System (3) is a particular case of (2) with $B = I - \tau A$ and $\varphi = \tau b$.
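A minimal NumPy sketch of iteration (3) follows; the paper's experiments use Matlab, so Python here is only for illustration, and the 2×2 SPD matrix is a made-up example.

```python
import numpy as np

def richardson(A, b, tau, x0, tol=1e-10, max_iter=100_000):
    # Richardson iteration (3): x_{k+1} = x_k + tau * (b - A x_k).
    x = x0
    for k in range(max_iter):
        x_new = x + tau * (b - A @ x)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # SPD example
b = A @ np.ones(2)                        # exact solution: (1, 1)^T
lam = np.linalg.eigvalsh(A)               # eigenvalues in ascending order
tau_opt = 2.0 / (lam[0] + lam[-1])        # optimal parameter from Lemma 2.1 below
x, k = richardson(A, b, tau_opt, np.zeros(2))
```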
In order to prove convergence of Richardson iteration, we first give the following lemma.

Lemma 2.1. Let $0 < \lambda_{\min} = \lambda_1 \leq \lambda_2 \leq \lambda_3 \leq \cdots \leq \lambda_{n-1} \leq \lambda_n = \lambda_{\max}$. For any $\tau > 0$, let $\rho(\tau) = \max_j |1 - \tau\lambda_j|$.
1. If the parameter $\tau$ satisfies the inequalities $0 < \tau < 2/\lambda_{\max}$, then $\rho(\tau) < 1$.
2. $\rho(\tau)$ achieves its minimal value $\rho(\tau_{opt}) = \dfrac{\lambda_{\max} - \lambda_{\min}}{\lambda_{\max} + \lambda_{\min}}$ when $\tau_{opt} = \dfrac{2}{\lambda_{\min} + \lambda_{\max}}$.

Proof.
1) Since $0 < \lambda_{\min} = \lambda_1 \leq \cdots \leq \lambda_n = \lambda_{\max}$, thus:

$\rho(\tau) = \max_j |1 - \tau\lambda_j| = \max\{|1 - \tau\lambda_{\min}|, |1 - \tau\lambda_{\max}|\}$

If the parameter $\tau$ satisfies the inequalities $0 < \tau < 2/\lambda_{\max}$, then:

$-1 < 1 - \tau\lambda_{\max} < 1$, i.e., $|1 - \tau\lambda_{\max}| < 1$

and, since $0 < \tau\lambda_{\min} \leq \tau\lambda_{\max} < 2$:

$-1 < 1 - \tau\lambda_{\max} \leq 1 - \tau\lambda_{\min} < 1$, i.e., $|1 - \tau\lambda_{\min}| < 1$

Thus,

$\rho(\tau) = \max_j |1 - \tau\lambda_j| = \max\{|1 - \tau\lambda_{\min}|, |1 - \tau\lambda_{\max}|\} < 1$
2) If $0 < \tau < 2/\lambda_{\max}$, the value

$\rho(\tau) = \max\{|1 - \tau\lambda_{\min}|, |1 - \tau\lambda_{\max}|\}$

is shown by a bold polygonal line in Figure 1; it coincides with $|1 - \tau\lambda_{\min}|$ before the intersection point, and after this point it coincides with $|1 - \tau\lambda_{\max}|$. Consequently, the minimum value $\rho(\tau_{opt})$ is achieved precisely at the intersection, i.e., at the value of $\tau_{opt}$ obtained from the following condition:

$1 - \tau_{opt}\lambda_{\min} = -(1 - \tau_{opt}\lambda_{\max}) = \tau_{opt}\lambda_{\max} - 1$

which yields:

$\tau_{opt} = \dfrac{2}{\lambda_{\min} + \lambda_{\max}}$

Consequently,

$\rho(\tau_{opt}) = \dfrac{\lambda_{\max} - \lambda_{\min}}{\lambda_{\max} + \lambda_{\min}}$

Figure 1. Image of $\rho(\tau) = \max\{|1 - \tau\lambda_{\min}|, |1 - \tau\lambda_{\max}|\}$
Example 2.1. Let $\lambda_1 = 2$, $\lambda_2 = 3$, $\lambda_3 = 4$, $n = 3$. For any $\tau > 0$, let:

$\rho(\tau) = \max_j |1 - \tau\lambda_j| = \max\{|1 - 2\tau|, |1 - 3\tau|, |1 - 4\tau|\}$
Figure 2. Image of functions $|1 - 2\tau|$, $|1 - 3\tau|$, $|1 - 4\tau|$; the bold polygonal line is the image of $\rho(\tau) = \max\{|1 - 2\tau|, |1 - 3\tau|, |1 - 4\tau|\} = \max\{|1 - 2\tau|, |1 - 4\tau|\}$

From Figure 2, if $0 < \tau < 2/\lambda_{\max} = 0.5$, then:

$\rho(\tau) = \max\{|1 - 2\tau|, |1 - 3\tau|, |1 - 4\tau|\} = \max\{|1 - 2\tau|, |1 - 4\tau|\} < 1$

Moreover, if $\tau_{opt} = \dfrac{2}{\lambda_{\min} + \lambda_{\max}} = \dfrac{1}{3}$, then $\rho(\tau) = \max_j |1 - \tau\lambda_j|$ achieves its minimal value

$\rho(\tau_{opt}) = \dfrac{\lambda_{\max} - \lambda_{\min}}{\lambda_{\max} + \lambda_{\min}} = \dfrac{1}{3}$
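The numbers in Example 2.1 are easy to check numerically; the quick verification script below is not part of the original paper.

```python
import numpy as np

lams = np.array([2.0, 3.0, 4.0])

def rho(tau):
    # rho(tau) = max_j |1 - tau * lambda_j|
    return np.max(np.abs(1.0 - tau * lams))

tau_opt = 2.0 / (lams.min() + lams.max())                        # = 1/3
rho_opt = (lams.max() - lams.min()) / (lams.max() + lams.min())  # = 1/3

# A grid search over (0, 0.5) should locate the same minimizer.
grid = np.linspace(1e-3, 0.5, 5001)
tau_star = grid[np.argmin([rho(t) for t in grid])]
```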
Theorem 2.1. Suppose $A \in R^{n \times n}$ is a symmetric positive definite (SPD) matrix. In Richardson iteration (3), if the parameter $\tau$ satisfies the inequalities $0 < \tau < 2/\lambda_{\max}$, then the sequence $\{x_k\}$ of iteration (3) converges to the solution of linear system (1) for an arbitrary initial point $x_0 \in R^n$. Moreover, the best performance of convergence occurs at $\rho(\tau_{opt}) = \dfrac{\lambda_{\max} - \lambda_{\min}}{\lambda_{\max} + \lambda_{\min}}$ with $\tau_{opt} = \dfrac{2}{\lambda_{\min} + \lambda_{\max}}$, where $\lambda_{\min}$ and $\lambda_{\max}$ are the minimum and the maximum eigenvalues of the matrix $A$.
Proof. Let $\rho(B)$ be the spectral radius of the matrix $B = I - \tau A$. The necessary and sufficient condition for convergence of Richardson iteration is $\rho(B) < 1$. Suppose that $\lambda_j$, $j = 1, 2, \ldots, n$, are the eigenvalues of $A$ arranged in ascending order:

$0 < \lambda_{\min} = \lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n = \lambda_{\max}$

Suppose $v_j$, $j = 1, 2, \ldots, n$, are the eigenvalues of $B = I - \tau A$. Then $v_j = 1 - \tau\lambda_j$, $j = 1, 2, \ldots, n$. Since $0 < \lambda_{\min} = \lambda_1 \leq \cdots \leq \lambda_n = \lambda_{\max}$, then:
$1 - \tau\lambda_{\max} = v_n \leq v_{n-1} \leq \cdots \leq v_2 \leq v_1 = 1 - \tau\lambda_{\min}$

Thus,

$\rho(B) = \max_j |v_j| = \max_j |1 - \tau\lambda_j| = \max\{|1 - \tau\lambda_{\min}|, |1 - \tau\lambda_{\max}|\}$

According to Lemma 2.1, if $0 < \tau < 2/\lambda_{\max}$, then:

$\rho(B) = \max_j |v_j| = \max\{|1 - \tau\lambda_{\min}|, |1 - \tau\lambda_{\max}|\} < 1$

Thus Richardson iteration is convergent. Moreover, when $\rho(\tau)$ achieves its minimal value $\rho(\tau_{opt}) = \dfrac{\lambda_{\max} - \lambda_{\min}}{\lambda_{\max} + \lambda_{\min}}$, Richardson iteration has the best performance of convergence in theory.
3. HSS Iteration for Generalized Positive Definite Matrix
Consider a system of linear equations with generalized positive definite matrix $A \in R^{n \times n}$ (or non-Hermitian positive definite). Split the coefficient matrix as:

$A = H + S$

where $H = \frac{1}{2}(A + A^T)$ and $S = \frac{1}{2}(A - A^T)$. Since $H^T = H$ and $S^T = -S$, we call $A = H + S$ the Hermitian/skew-Hermitian splitting (HSS, [4-5]).

Lemma 3.1 [6]. Let $A \in R^{n \times n}$, $H = \frac{1}{2}(A + A^T)$, $S = \frac{1}{2}(A - A^T)$.
1) $A$ is generalized positive definite (GPD) if and only if $H$ is symmetric positive definite (SPD).
2) If $A$ is generalized positive definite (GPD), then the determinant $|A| > 0$, and $A^{-1}$ is also generalized positive definite.
3) If $A$ is generalized positive definite (GPD), then for any $\alpha > 0$, $(\alpha I + H)$, $(\alpha I + S)$, $(\alpha I + H)^{-1}$, and $(\alpha I + S)^{-1}$ are also generalized positive definite.
4) If $A$ is generalized positive definite (GPD), then for any $\alpha > 0$, $(\alpha I - S)(\alpha I + S)^{-1}$ is an orthogonal matrix, thus $\|(\alpha I - S)(\alpha I + S)^{-1}\|_2 = 1$.
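Properties 1) and 4) can be checked numerically; the sketch below uses a diagonally shifted random matrix as an illustrative GPD example (the shift is an assumption chosen to make the symmetric part positive definite).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + 2 * n * np.eye(n)  # shifted so H is SPD
H = 0.5 * (A + A.T)   # symmetric part
S = 0.5 * (A - A.T)   # skew-symmetric part

alpha = 1.0
I = np.eye(n)
Q = (alpha * I - S) @ np.linalg.inv(alpha * I + S)   # Cayley-type transform of S
```

The orthogonality of $Q$ holds for any skew-symmetric $S$, which is what makes the 2-norm bound in the convergence proof of Section 3 work.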
The HSS iteration method for system (1) is as follows:

$x_{k+1} = T(\alpha)x_k + G(\alpha)b, \quad k = 0, 1, 2, \ldots$  (4)

where

$T(\alpha) = (\alpha I + S)^{-1}(\alpha I - H)(\alpha I + H)^{-1}(\alpha I - S), \quad G(\alpha) = 2\alpha(\alpha I + S)^{-1}(\alpha I + H)^{-1}$

and the parameter $\alpha > 0$. For the convergence property of the HSS iteration, we first give the following lemma.
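A direct NumPy sketch of iteration (4) follows; the nonsymmetric 2×2 matrix is an illustrative example whose symmetric part is SPD, not a matrix from the paper.

```python
import numpy as np

def hss(A, b, alpha, x0, tol=1e-12, max_iter=10_000):
    # HSS iteration (4): x_{k+1} = T(alpha) x_k + G(alpha) b.
    n = A.shape[0]
    I = np.eye(n)
    H = 0.5 * (A + A.T)
    S = 0.5 * (A - A.T)
    Sinv = np.linalg.inv(alpha * I + S)
    Hinv = np.linalg.inv(alpha * I + H)
    T = Sinv @ (alpha * I - H) @ Hinv @ (alpha * I - S)
    G = 2.0 * alpha * Sinv @ Hinv
    x = x0
    for k in range(max_iter):
        x_new = T @ x + G @ b
        if np.linalg.norm(x_new - x) <= tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

A = np.array([[4.0, 1.0], [-1.0, 3.0]])    # GPD but not symmetric
b = A @ np.ones(2)                          # exact solution: (1, 1)^T
lam = np.linalg.eigvalsh(0.5 * (A + A.T))   # eigenvalues of H
alpha_opt = np.sqrt(lam[0] * lam[-1])       # alpha_opt from Lemma 3.2 below
x, k = hss(A, b, alpha_opt, np.zeros(2))
```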
Lemma 3.2. Let $0 < \lambda_{\min} = \lambda_1 \leq \lambda_2 \leq \lambda_3 \leq \cdots \leq \lambda_{n-1} \leq \lambda_n = \lambda_{\max}$. For any $\alpha > 0$, let:
$\rho(\alpha) = \max_j \left|\dfrac{\alpha - \lambda_j}{\alpha + \lambda_j}\right|$

1) For any $\alpha > 0$, $\rho(\alpha) < 1$.
2) $\rho(\alpha)$ achieves its minimal value $\rho(\alpha_{opt}) = \dfrac{\sqrt{\lambda_{\max}} - \sqrt{\lambda_{\min}}}{\sqrt{\lambda_{\max}} + \sqrt{\lambda_{\min}}}$ when $\alpha_{opt} = \sqrt{\lambda_{\min}\lambda_{\max}}$.

Proof.
1) Since $\lambda_j > 0$, $j = 1, 2, \ldots, n$, for any $\alpha > 0$ we have $|\alpha - \lambda_j| < \alpha + \lambda_j$, $j = 1, 2, \ldots, n$. That is:

$\left|\dfrac{\alpha - \lambda_j}{\alpha + \lambda_j}\right| < 1, \quad j = 1, 2, \ldots, n$

Thus:

$\rho(\alpha) = \max_j \left|\dfrac{\alpha - \lambda_j}{\alpha + \lambda_j}\right| < 1$

2) Since $0 < \lambda_{\min} = \lambda_1 \leq \cdots \leq \lambda_n = \lambda_{\max}$, thus:

$\rho(\alpha) = \max_j \left|\dfrac{\alpha - \lambda_j}{\alpha + \lambda_j}\right| = \max\left\{\left|\dfrac{\alpha - \lambda_{\min}}{\alpha + \lambda_{\min}}\right|, \left|\dfrac{\alpha - \lambda_{\max}}{\alpha + \lambda_{\max}}\right|\right\}$

Consequently, the minimum value $\rho(\alpha_{opt})$ is achieved precisely at the intersection, i.e., at the value of $\alpha_{opt}$ obtained from the following condition:

$\dfrac{\alpha_{opt} - \lambda_{\min}}{\alpha_{opt} + \lambda_{\min}} = \dfrac{\lambda_{\max} - \alpha_{opt}}{\lambda_{\max} + \alpha_{opt}}$

which yields:

$\alpha_{opt} = \sqrt{\lambda_{\min}\lambda_{\max}}$

So,

$\rho(\alpha_{opt}) = \dfrac{\sqrt{\lambda_{\max}} - \sqrt{\lambda_{\min}}}{\sqrt{\lambda_{\max}} + \sqrt{\lambda_{\min}}}$
Example 3.1. Let $\lambda_1 = 2$, $\lambda_2 = 3$, $\lambda_3 = 4$, $n = 3$. For any $\alpha > 0$, let:
$\rho(\alpha) = \max_j \left|\dfrac{\alpha - \lambda_j}{\alpha + \lambda_j}\right| = \max\left\{\left|\dfrac{\alpha - 2}{\alpha + 2}\right|, \left|\dfrac{\alpha - 3}{\alpha + 3}\right|, \left|\dfrac{\alpha - 4}{\alpha + 4}\right|\right\}$

Figure 3. Image of functions $\left|\dfrac{\alpha - 2}{\alpha + 2}\right|$, $\left|\dfrac{\alpha - 3}{\alpha + 3}\right|$, $\left|\dfrac{\alpha - 4}{\alpha + 4}\right|$; the bold polygonal line is the image of $\rho(\alpha) = \max\left\{\left|\dfrac{\alpha - 2}{\alpha + 2}\right|, \left|\dfrac{\alpha - 4}{\alpha + 4}\right|\right\}$

From Figure 3, for any $\alpha > 0$:

$\rho(\alpha) = \max\left\{\left|\dfrac{\alpha - 2}{\alpha + 2}\right|, \left|\dfrac{\alpha - 4}{\alpha + 4}\right|\right\} < 1$

Moreover, if $\alpha_{opt} = \sqrt{\lambda_{\min}\lambda_{\max}} = \sqrt{8} = 2\sqrt{2}$, then $\rho(\alpha)$ achieves its minimal value:

$\rho(\alpha_{opt}) = \dfrac{\sqrt{\lambda_{\max}} - \sqrt{\lambda_{\min}}}{\sqrt{\lambda_{\max}} + \sqrt{\lambda_{\min}}} = \dfrac{2 - \sqrt{2}}{2 + \sqrt{2}} \approx 0.1716$
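As with Example 2.1, the values in Example 3.1 can be confirmed with a short verification script (not part of the original paper):

```python
import numpy as np

lams = np.array([2.0, 3.0, 4.0])

def rho(alpha):
    # rho(alpha) = max_j |alpha - lambda_j| / (alpha + lambda_j)
    return np.max(np.abs(alpha - lams) / (alpha + lams))

alpha_opt = np.sqrt(lams.min() * lams.max())   # = sqrt(8) = 2*sqrt(2)
rho_opt = (np.sqrt(lams.max()) - np.sqrt(lams.min())) / (np.sqrt(lams.max()) + np.sqrt(lams.min()))
```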
In the following we apply the above lemmas to obtain the convergence of HSS iteration.
Theorem 3.1. Suppose $A \in R^{n \times n}$ is a generalized positive definite (GPD) matrix. In HSS iteration (4), for any $\alpha > 0$, $\rho(T(\alpha)) \leq \rho(\alpha) < 1$, so the sequence $\{x_k\}$ of iteration (4) converges to the solution of linear system (1) for an arbitrary initial point $x_0 \in R^n$. Moreover, the best performance of convergence occurs at $\rho(\alpha_{opt}) = \dfrac{\sqrt{\lambda_{\max}} - \sqrt{\lambda_{\min}}}{\sqrt{\lambda_{\max}} + \sqrt{\lambda_{\min}}}$ with $\alpha_{opt} = \sqrt{\lambda_{\min}\lambda_{\max}}$, where $\lambda_{\min}$ and $\lambda_{\max}$ are the minimum and the maximum eigenvalues of the matrix $H$.
Proof.

$\rho(T(\alpha)) = \rho\big((\alpha I + S)^{-1}(\alpha I - H)(\alpha I + H)^{-1}(\alpha I - S)\big) = \rho\big((\alpha I - H)(\alpha I + H)^{-1}(\alpha I - S)(\alpha I + S)^{-1}\big)$

$\leq \big\|(\alpha I - H)(\alpha I + H)^{-1}(\alpha I - S)(\alpha I + S)^{-1}\big\|_2 \leq \big\|(\alpha I - H)(\alpha I + H)^{-1}\big\|_2 \, \big\|(\alpha I - S)(\alpha I + S)^{-1}\big\|_2$

$= \big\|(\alpha I - H)(\alpha I + H)^{-1}\big\|_2 = \max_j \left|\dfrac{\alpha - \lambda_j}{\alpha + \lambda_j}\right| = \rho(\alpha)$

According to Lemma 3.2, for any $\alpha > 0$, $\rho(T(\alpha)) \leq \rho(\alpha) < 1$. Thus the HSS iteration is convergent.
Moreover, when $\rho(\alpha)$ achieves its minimal value $\rho(\alpha_{opt}) = \dfrac{\sqrt{\lambda_{\max}} - \sqrt{\lambda_{\min}}}{\sqrt{\lambda_{\max}} + \sqrt{\lambda_{\min}}}$, the HSS iteration has the best performance of convergence in theory.
Remark. In HSS iteration, if $A$ is symmetric positive definite (SPD), then:

$H = \frac{1}{2}(A + A^T) = A, \quad S = \frac{1}{2}(A - A^T) = O$

Thus,

$T(\alpha) = (\alpha I - A)(\alpha I + A)^{-1}, \quad G(\alpha) = 2(\alpha I + A)^{-1}$

Compared with Richardson iteration, HSS iteration removes the requirement of symmetry. Hence absolute value equations [7] and saddle point problems [8] are also solved by the HSS iterative method.
4. Optimal Convergence Speed of Richardson and HSS Iteration
For symmetric positive definite matrices, the optimal convergence speeds of Richardson and HSS iteration are contrasted in this section.

Lemma 4.1. Let $0 < a < b$. Then:

$\dfrac{\sqrt{b} - \sqrt{a}}{\sqrt{b} + \sqrt{a}} < \dfrac{b - a}{b + a}$
Proof. Since $0 < a < b$, we have $(\sqrt{b} + \sqrt{a})^2 = b + a + 2\sqrt{ab} > b + a$. Using $b - a = (\sqrt{b} - \sqrt{a})(\sqrt{b} + \sqrt{a})$, it follows that:

$\dfrac{\sqrt{b} - \sqrt{a}}{\sqrt{b} + \sqrt{a}} = \dfrac{b - a}{(\sqrt{b} + \sqrt{a})^2} < \dfrac{b - a}{b + a}$
Theorem 4.1. If $A$ is a symmetric positive definite (SPD) matrix, then the optimal convergence speed of HSS iteration is superior to that of Richardson iteration.
Proof. Suppose that $\lambda_j$, $j = 1, 2, \ldots, n$, are the eigenvalues of $A$ arranged in ascending order:

$0 < \lambda_{\min} = \lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n = \lambda_{\max}$

Then, by Theorem 3.1, the optimal spectral radius of HSS iteration is $\rho(\alpha_{opt}) = \dfrac{\sqrt{\lambda_{\max}} - \sqrt{\lambda_{\min}}}{\sqrt{\lambda_{\max}} + \sqrt{\lambda_{\min}}}$ with $\alpha_{opt} = \sqrt{\lambda_{\min}\lambda_{\max}}$, while by Theorem 2.1, the optimal spectral radius of Richardson iteration is $\rho(\tau_{opt}) = \dfrac{\lambda_{\max} - \lambda_{\min}}{\lambda_{\max} + \lambda_{\min}}$ with $\tau_{opt} = \dfrac{2}{\lambda_{\min} + \lambda_{\max}}$.

According to Lemma 4.1, if $\lambda_{\min} < \lambda_{\max}$, then:

$\dfrac{\sqrt{\lambda_{\max}} - \sqrt{\lambda_{\min}}}{\sqrt{\lambda_{\max}} + \sqrt{\lambda_{\min}}} < \dfrac{\lambda_{\max} - \lambda_{\min}}{\lambda_{\max} + \lambda_{\min}}$

Thus $\rho(\alpha_{opt}) < \rho(\tau_{opt})$, so the corresponding optimal convergence speed of HSS iteration is superior to that of Richardson iteration.
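A quick numerical confirmation of Theorem 4.1 on a random SPD matrix (the generator below is illustrative, built in the same spirit as the Section 5 data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
R = rng.random((n, n))
A = R.T @ R + n * np.eye(n)        # SPD test matrix
lam = np.linalg.eigvalsh(A)
lmin, lmax = lam[0], lam[-1]

# Optimal spectral radii from Theorem 2.1 and Theorem 3.1.
rho_richardson = (lmax - lmin) / (lmax + lmin)
rho_hss = (np.sqrt(lmax) - np.sqrt(lmin)) / (np.sqrt(lmax) + np.sqrt(lmin))
```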
Remark. The power method is a conventional way of finding the greatest eigenvalue of a matrix. Since $A \in R^{n \times n}$ is a symmetric positive definite matrix, the maximum eigenvalue $\lambda_{\max}$ of matrix $A$ can be obtained by the power method. Meanwhile, the minimum eigenvalue $\lambda_{\min}$ of matrix $A$ can be obtained by calculating the maximum eigenvalue of matrix $A^{-1}$.
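A minimal power-method sketch along the lines of this remark (the 2×2 matrix and iteration count are illustrative choices):

```python
import numpy as np

def power_method(A, iters=1000, seed=0):
    # Estimate the dominant eigenvalue of A via the power method.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    return v @ (A @ v)   # Rayleigh quotient

A = np.array([[4.0, 1.0], [1.0, 3.0]])
lam_max = power_method(A)
lam_min = 1.0 / power_method(np.linalg.inv(A))  # lambda_min(A) = 1 / lambda_max(A^{-1})
```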
5. Computational Results
In order to illustrate the performance of the Richardson iteration and HSS iteration methods, we solve linear systems with symmetric positive definite matrices generated randomly, where the data ($A$, $b$) are generated by the Matlab scripts:

rand('state',0);R=rand(n,n);A=R'*R+n*eye(n);b=A*ones(n,1);

such that $x^* = (1, 1, \ldots, 1)^T$ is the unique solution. We set the random-number generator to the state of 0 so that the same data can be regenerated. Let $x_0 = 0$ and $\varepsilon = 1 \times 10^{-4}$. We use $\|x_{k+1} - x_k\| \leq \varepsilon$ as the stopping rule. All experiments were performed on Matlab R2009a with an Intel(R) Core(TM) 4×3.3GHz CPU and 2GB RAM.
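For readers without Matlab, an equivalent NumPy setup is sketched below. Note that Matlab's rand('state',0) stream cannot be reproduced in NumPy, so a NumPy seed is used instead; the generated data differ from the paper's, while the structure (SPD matrix, all-ones solution, the same stopping rule) is preserved.

```python
import numpy as np

def make_system(n, seed=0):
    # NumPy analogue of: rand('state',0); R=rand(n,n); A=R'*R+n*eye(n); b=A*ones(n,1);
    rng = np.random.default_rng(seed)
    R = rng.random((n, n))
    A = R.T @ R + n * np.eye(n)   # SPD by construction
    b = A @ np.ones(n)            # so the unique solution is the all-ones vector
    return A, b

n = 50
A, b = make_system(n)
lam = np.linalg.eigvalsh(A)
tau = 2.0 / (lam[0] + lam[-1])    # optimal Richardson parameter (Theorem 2.1)
x = np.zeros(n)
for k in range(1, 100_001):
    x_new = x + tau * (b - A @ x)
    done = np.linalg.norm(x_new - x) <= 1e-4
    x = x_new
    if done:
        break
```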
Simulations were carried out to compare the performance of the Richardson iteration, HSS iteration, and Gauss-Seidel iteration. The spectral radius ($\rho$), iterations ($k$), and elapsed time ($t$) of the three iteration methods are listed in Table 1 for different dimensions $n$.
Table 1. Spectral radius, iterations, and elapsed time for different dimension n

n    | Gauss-Seidel (ρ / k / t(s))   | Richardson (ρ / k / t(s))  | HSS (ρ / k / t(s))
50   | 0.8822 / 87 / 0.0049          | 0.8659 / 84 / 0.0051       | 0.5771 / 24 / 0.0064
100  | 0.9631 / 280 / 0.0108         | 0.9277 / 165 / 0.0047      | 0.6756 / 33 / 0.0075
150  | 0.9823 / 577 / 0.0614         | 0.9501 / 244 / 0.0140      | 0.7242 / 40 / 0.0201
200  | 0.9897 / 986 / 0.2894         | 0.9620 / 326 / 0.0294      | 0.7558 / 47 / 0.0358
250  | 0.9932 / 1482 / 0.9227        | 0.9692 / 407 / 0.0707      | 0.7777 / 52 / 0.0580
300  | 0.9952 / 2069 / 2.1918        | 0.9714 / 489 / 0.1200      | 0.7946 / 57 / 0.0962
350  | 0.9965 / 2814 / 4.9141        | 0.9778 / 574 / 0.2017      | 0.8084 / 62 / 0.1328
400  | 0.9973 / 3619 / 9.5858        | 0.9805 / 657 / 0.3437      | 0.8194 / 67 / 0.1909
450  | 0.9978 / 4511 / 16.9223       | 0.9826 / 741 / 0.4601      | 0.8287 / 71 / 0.2457
500  | 0.9982 / 5521 / 28.3678       | 0.9843 / 825 / 0.6667      | 0.8367 / 75 / 0.3223
550  | 0.9985 / 6605 / 44.8219       | 0.9857 / 909 / 0.9678      | 0.8436 / 79 / 0.4497
600  | 0.9988 / 7847 / 68.9272       | 0.9869 / 994 / 1.2830      | 0.8497 / 82 / 0.5649
650  | 0.9989 / 9101 / 100.3113      | 0.9879 / 1079 / 1.6678     | 0.8551 / 86 / 0.6657
700  | 0.9991 / 10450 / 142.4005     | 0.9887 / 1164 / 2.1944     | 0.8599 / 89 / 0.7852
750  | 0.9992 / 11996 / 200.6673     | 0.9895 / 1249 / 2.6713     | 0.8643 / 93 / 0.9353
800  | 0.9993 / 13526 / 272.2897     | 0.9901 / 1334 / 3.3175     | 0.8683 / 96 / 1.0879
850  | 0.9994 / 15337 / 368.2543     | 0.9907 / 1421 / 4.0451     | 0.8720 / 99 / 1.2743
900  | 0.9994 / 17052 / 475.4076     | 0.9912 / 1509 / 4.8815     | 0.8754 / 102 / 1.4799
950  | 0.9995 / 18962 / 621.7682     | 0.9917 / 1595 / 5.7889     | 0.8785 / 105 / 1.7050
Figure 4(a) shows the spectral radius of the three methods for different dimension n, and Figure 4(b) shows the spectral radius of the two methods (Richardson and HSS) for different dimension n. Figure 5(a) shows the iterations of the three methods for different dimension n, and Figure 5(b) shows the iterations of the two methods for different dimension n. Figure 6(a) shows the elapsed time of the three methods for different dimension n, and Figure 6(b) shows the elapsed time of the two methods for different dimension n.
(a) (b)
Figure 4. Spectral radius versus dimension n (legends: Gauss-Seidel iteration, Richardson iteration, HSS iteration)

(a) (b)
Figure 5. Iterations versus dimension n (legends: Gauss-Seidel iteration, Richardson iteration, HSS iteration)