International Journal of Electrical and Computer Engineering (IJECE)
Vol. 4, No. 6, December 2014, pp. 962~973
ISSN: 2088-8708
Journal homepage: http://iaesjournal.com/online/index.php/IJECE
Adaptive PID Type Iterative Learning Control
Sara Zamiri*, Ali Madady, Hamid-Reza Reza-Alikhani
*Department of Control Engineering, Science and Research Branch, Islamic Azad University, Boroujerd, Iran

Article Info
Article history:
Received Jun 27, 2014
Revised Sep 10, 2014
Accepted Sep 30, 2014

ABSTRACT
In this paper, an adaptive PID-type iterative learning control scheme is proposed for the tracking problem in repetitive systems with unknown parameters. In this scheme, we use a combination of an optimal PID-type iterative learning controller and a projection-like adjusting algorithm based on the tracking error, which decreases as the iterations increase. The Lyapunov method is used for the convergence analysis of the presented scheme, and a convergence condition is obtained in terms of the algorithm step-size range. The effectiveness of the proposed technique is illustrated by simulation results.
Keyword:
Adaptive control
Iterative learning control
Monotonic convergence
PID type ILC

Copyright © 2014 Institute of Advanced Engineering and Science. All rights reserved.

Corresponding Author:
Sara Zamiri
Department of Control Engineering, Science and Research Branch
Islamic Azad University, Boroujerd, Iran.
Email: s.zamiri1365@yahoo.com
1. INTRODUCTION
There are many industrial applications in which the system must periodically perform a certain task over a finite trial length, such as machine assembly by robot manipulators, chemical batch processes, and many other similar examples. If human operators perform such a task repeatedly, they will learn to do their job better and better, because of the human ability to learn and adapt. This kind of learning is called iterative learning control (ILC) [1-3], which was first introduced by Arimoto et al. in 1984 [1].
The important characteristic of ILC is that it uses information recorded at each iteration to adjust the control signal in an attempt to reduce the tracking error obtained during the next iteration, whereby, as the number of iterations increases, the tracking error converges to zero [4].
The operation of ILC in controlling repetitive systems with unknown parameters leads to adaptive ILC algorithms. In [5], some adaptive iterative learning control schemes for trajectory tracking of robot manipulators with unknown parameters are proposed. Note that many of the proposed adaptive ILC algorithms are combinations of adaptive controllers and non-adaptive ILC algorithms. Accordingly, in [6], by means of an ILC algorithm, a standard model reference scheme is extended to continuous-time SISO linear time-invariant systems which perform repetitive tasks. In [7], a new adaptive switching learning control approach, called the adaptive switching learning PD control law, was proposed, which has the abilities of both learning and adaptation. A self-tuning iterative learning control approach was proposed in [8] for linear time-varying unknown systems. In [9], an adaptive PID learning controller was presented which is composed of an adaptive PID feedback control scheme and a feedforward input learning scheme. A scheme combining both the concept of model reference adaptive control and ILC was proposed in [10] for unknown linear repeatable systems. An adaptive PI-type ILC scheme was presented in [11], without any prior knowledge of the system parameters. An adaptive Iterative Learning Control (ILC), based on an estimation procedure using a Kalman filter and the optimization of a quadratic criterion, is presented in [12]. A recent research [13] studied the optimal design of PID-type ILC for a discrete-time linear repetitive system.
By expanding the results of [13] to unknown systems, a new control algorithm called adaptive PID-type iterative learning control is obtained, which is the main subject of this paper.
The outline of the paper is as follows. In Section 2, some necessary definitions of the problem are given. A summary of the structure of PID-type ILC and its parameter optimal design is presented in Section 3. In Section 4, an adaptive PID-type ILC and its convergence analysis are given. In Section 5, simulation results are presented to illustrate the effectiveness of the proposed method. The last section concludes the paper.
2. PROBLEM FORMULATION AND PRELIMINARIES
Let us introduce the subscripts 'j' and 'i' as the repetition (or operation, or iteration) and the time during a given repetition of the system, respectively, where both j and i are integers and i ∈ [0, M]. In this paper, we consider that the plant to be controlled is a discrete-time, linear, time-invariant, single-input single-output system described as follows:

x_j(i+1) = A x_j(i) + B u_j(i), x_j(0) = x_0, i = 0, 1, …, M
y_j(i) = C x_j(i), j = 0, 1, … (1)
where x_j(i) ∈ R^n is the state vector, and u_j(i) ∈ R and y_j(i) ∈ R are the input and output of the system, respectively. A, B, and C are real-valued matrices with appropriate dimensions. Also, x_0 is the system initial condition. In this part, consider (1) and make the following reasonable assumptions:
(A1) The matrices A, B and C are known.
(A2) The scalar CB is nonzero.
(A3) The system initial condition x_0 is unknown.
Under an iterative learning control strategy, the error between the given desired output trajectory y_d(i) and the system actual output y_j(i) becomes smaller as the number of repetitions increases, so that the following tracking can be established:

lim_{j→∞} y_j(i) = y_d(i) for 1 ≤ i ≤ M (2)
Because only finite time intervals (M < ∞ samples) are considered, the signals u_j(i) and y_j(i) are formed into super-vectors¹ U(j) and Y(j) as follows:

U(j) = [u_j(0) u_j(1) u_j(2) … u_j(M−1)]^T (3)
Y(j) = [y_j(1) y_j(2) y_j(3) … y_j(M)]^T

where T denotes the transpose.
From (1) the following relation is obtained easily:

Y(j) = H_p U(j) + H_x x_0 (4)

where H_p and H_x are the following matrices:

H_p = [h_1 0 ⋯ 0; h_2 h_1 ⋯ 0; ⋮ ⋱ ⋮; h_M h_{M−1} ⋯ h_1], H_x = [CA; CA^2; ⋮; CA^M] (5)

where h_k denotes the standard Markov parameters of the system (1), that is:

h_k = CA^(k−1) B for k = 1, 2, …, M (6)
¹ The super-vectors are marked by the elimination of the time argument.
Let the operator T map a vector h to a lower triangular Toeplitz matrix H_p, H_p = T(h), where the vector h is as follows:

h = [h_1 h_2 h_3 … h_M]^T (7)
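As a quick numerical check of the lifted representation (4)–(7), the sketch below builds H_p = T(h) from the Markov parameters h_k = CA^(k−1)B of a small illustrative system and verifies that H_p U(j) + H_x x_0 reproduces a direct simulation of (1). The system matrices here are arbitrary example values, not taken from the paper.

```python
import numpy as np

def T(h):
    # the operator T of (7): vector h -> lower triangular Toeplitz matrix
    M = len(h)
    return np.array([[h[r - c] if r >= c else 0.0 for c in range(M)] for r in range(M)])

# a small illustrative SISO system (1); arbitrary values with CB != 0
A = np.array([[0.5, 1.0], [0.0, 0.8]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

M = 5
h = np.array([(C @ np.linalg.matrix_power(A, k - 1) @ B).item() for k in range(1, M + 1)])  # (6)
Hp = T(h)                                                                    # H_p of (5)
Hx = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(1, M + 1)])  # H_x of (5)

# simulate one trial of (1) and compare with the lifted relation (4)
rng = np.random.default_rng(0)
U = rng.standard_normal(M)
x0 = np.array([[1.0], [-1.0]])
x, Y_sim = x0.copy(), []
for i in range(M):
    x = A @ x + B * U[i]
    Y_sim.append((C @ x).item())
Y_sim = np.array(Y_sim)
Y_lift = Hp @ U + (Hx @ x0).ravel()
print(np.allclose(Y_sim, Y_lift))  # True
```

The lower triangular Toeplitz structure of H_p reflects causality: the output at time i depends only on inputs up to time i−1.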
Comment 1. We consider assumption (A2) a standard assumption in ILC design which guarantees the existence of the learning gains; that is, h_1 = CB ≠ 0. This is not really a restriction, because it can be satisfied by choosing a proper sampling period in discretizing the continuous-time system.
Using (4) one can write:

Y(j+1) = Y(j) + H_p V(j), j = 0, 1, … (8)

where:

V(j) = U(j+1) − U(j) (9)
From (8) we can get:

Y_d − Y(j+1) = Y_d − Y(j) − H_p V(j) (10)
The desired output trajectory y_d and the error e_j(i) = y_d(i) − y_j(i) can also be written as the following vectors:

Y_d = [y_d(1) y_d(2) y_d(3) … y_d(M)]^T (11)
E(j) = [e_j(1) e_j(2) e_j(3) … e_j(M)]^T
Therefore, relation (10) can be rewritten as follows:

E(j+1) = E(j) − H_p V(j), j = 0, 1, … (12)

The above relation is the dynamics of the error vector in the repetition domain.
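The repetition-domain recursion (12) can be checked directly by applying two input trials to the same lifted plant; the H_x x_0 term of (4) is identical in both trials and cancels in the error difference. A minimal sketch with illustrative Markov parameters:

```python
import numpy as np

def T(h):
    # lower triangular Toeplitz matrix built from a vector, as in (7)
    M = len(h)
    return np.array([[h[r - c] if r >= c else 0.0 for c in range(M)] for r in range(M)])

M = 6
h = np.array([1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125])  # illustrative Markov parameters
Hp = T(h)

rng = np.random.default_rng(1)
Yd = rng.standard_normal(M)      # desired trajectory in super-vector form, cf. (11)
U0 = np.zeros(M)                 # input of trial j
U1 = rng.standard_normal(M)      # input of trial j+1
V = U1 - U0                      # (9)

E0 = Yd - Hp @ U0                # errors computed from the lifted plant (4);
E1 = Yd - Hp @ U1                # the initial-condition term cancels in E1 - E0
print(np.allclose(E1, E0 - Hp @ V))  # True, i.e. (12) holds
```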
3. PID TYPE ILC AND ITS PARAMETER OPTIMAL DESIGN
3.1. PID Type Iterative Learning Control
According to [13], PID-type ILC is defined as follows:

u_{j+1}(i) = u_j(i) + k_p e_j(i+1) + k_i Σ_{k=1}^{i+1} e_j(k) + k_d [e_j(i+1) − e_j(i)],
i = 0, 1, …, M−1, j = 0, 1, … (13)
where k_p, k_i and k_d are the PID learning gains, called the proportional, integral and derivative learning gains, respectively.
Using the vector representations (9) and (11), we can rewrite the above relation in the compact form of the following formula:

V(j) = k_p E(j) + k_i T_i E(j) + k_d T_d E(j) (14)

where:

T_i = [1 0 0 … 0; 1 1 0 … 0; 1 1 1 … 0; ⋮ ⋱; 1 1 1 … 1],
T_d = [1 0 0 … 0; −1 1 0 … 0; 0 −1 1 … 0; ⋮ ⋱; 0 0 0 … −1 1] (15)
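The update (13)–(15) can be sketched numerically: build T_i (lower triangular of ones) and T_d (ones on the diagonal, −1 on the subdiagonal), form V(j) as in (14), and iterate the error dynamics (12). The plant and gains below are arbitrary illustrative choices satisfying (17), not values from the paper.

```python
import numpy as np

M = 8
Ti = np.tril(np.ones((M, M)))             # integral operator of (15)
Td = np.eye(M) - np.eye(M, k=-1)          # derivative operator of (15)

h = 0.8 ** np.arange(M)                   # illustrative Markov parameters, h_1 = 1
Hp = np.array([[h[r - c] if r >= c else 0.0 for c in range(M)] for r in range(M)])

kp, ki, kd = 0.4, 0.05, 0.1               # |1 - (kp+ki+kd) h_1| = 0.45 < 1, cf. (17)
E = np.ones(M)                            # initial tracking error E(0)
norms = [np.linalg.norm(E)]
for j in range(100):
    V = kp * E + ki * Ti @ E + kd * Td @ E    # (14)
    E = E - Hp @ V                            # (12)
    norms.append(np.linalg.norm(E))
print(norms[-1] < 1e-6 * norms[0])  # True: the tracking error contracts over iterations
```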
3.2. Convergence Analysis
The proposed iterative learning control scheme (ILCS) is said to be convergent if the learning error approaches an infinitesimal value after sufficiently many learning iterations. Mathematically, the following definitions and theorems are given.
Definition 1. The proposed ILCS is said to converge in the sense that, as j → ∞, we have y_j(i) → y_d(i) for all i ∈ [0, M] and for arbitrary initial conditions, such that (2) holds, meaning:

lim_{j→∞} E(j) = 0 (16)
Theorem 1. The ILCS is convergent if and only if the learning gains k_p, k_i and k_d satisfy the following inequality:

|1 − (k_p + k_i + k_d) h_1| < 1 (17)

Proof: see [13].
Comment 2. According to Comment 1, since the scalar h_1 ≜ CB is nonzero, numerous real numbers can be found for the learning gains which satisfy inequality (17).
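Because H_p, T_i and T_d are all lower triangular Toeplitz, the error-transition matrix I − H_p(k_p I + k_i T_i + k_d T_d) obtained from (12) and (14) is lower triangular with every diagonal entry equal to 1 − (k_p + k_i + k_d)h_1, which is why condition (17) involves only h_1 = CB. A numerical illustration with arbitrary values:

```python
import numpy as np

M = 6
Ti = np.tril(np.ones((M, M)))
Td = np.eye(M) - np.eye(M, k=-1)
h = np.array([2.0, -0.3, 0.7, 0.1, -0.2, 0.05])   # illustrative, h_1 = CB = 2
Hp = np.array([[h[r - c] if r >= c else 0.0 for c in range(M)] for r in range(M)])

kp, ki, kd = 0.2, 0.03, 0.07
X = np.eye(M) - Hp @ (kp * np.eye(M) + ki * Ti + kd * Td)   # error-transition matrix

# X is lower triangular, so its eigenvalues are its diagonal entries,
# and every diagonal entry equals the scalar bounded by (17)
expected = 1.0 - (kp + ki + kd) * h[0]
print(np.allclose(X, np.tril(X)), np.allclose(np.diag(X), expected), abs(expected) < 1)
```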
Definition 2. The proposed ILCS is called monotonically convergent if for any E(0) the following condition holds:

‖E(j+1)‖_p ≤ ‖E(j)‖_p (18)

for p = 1, 2 and j = 0, 1, 2, …. In particular, ‖E(j+1)‖_p = ‖E(j)‖_p if and only if E(j) = 0, where ‖·‖_p denotes the p-norm.
Theorem 1 gives us a necessary and sufficient condition for the presented learning process. Note that this condition does not guarantee that the convergence is monotonic. Thus, Theorem 2 is presented for monotonic convergence. In this theorem, an optimal method is used for choosing k_p, k_i and k_d.
Theorem 2. The presented ILCS has monotonic convergence, with maximum desired convergence rate, if the learning gains k_p, k_i and k_d are chosen as follows:

[k_p k_i k_d]^T = (D^T D)^{−1} D^T e_1 (19)

where e_1 = [1 0 … 0]^T ∈ R^M and D ≜ [h h_i h_d] ∈ R^{M×3}, with h_i and h_d defined as follows:

h_i = [h_1, h_1 + h_2, …, Σ_{k=1}^{M} h_k]^T, h_d = [h_1, h_2 − h_1, …, h_M − h_{M−1}]^T (20)

Proof: see [13].
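A sketch of one way to realize the optimal design of Theorem 2: with D built from the Markov parameter vector h (columns h, its running sums h_i and its first differences h_d, cf. (20)), choosing the gains as the least-squares solution of D k ≈ e_1 makes H_p(k_p I + k_i T_i + k_d T_d) as close to the identity as three gains allow. The Markov parameters below are illustrative, and the formal optimality statement is in [13]; this is a numerical reading, not a verified reproduction of the paper's formula.

```python
import numpy as np

M = 10
h = np.array([1.0, 0.6, 0.3, 0.15, 0.08, 0.04, 0.02, 0.01, 0.005, 0.002])  # illustrative
hi = np.cumsum(h)                                  # running sums of h, cf. (20)
hd = np.concatenate(([h[0]], np.diff(h)))          # first differences of h, cf. (20)
D = np.column_stack([h, hi, hd])                   # D = [h h_i h_d]
e1 = np.zeros(M); e1[0] = 1.0

k, *_ = np.linalg.lstsq(D, e1, rcond=None)         # k = (D^T D)^{-1} D^T e1, cf. (19)
kp, ki, kd = k

# the diagonal of the resulting error-transition matrix is 1 - (kp+ki+kd)h_1;
# the least-squares fit keeps it inside the unit circle, so (17) holds here
print(abs(1.0 - (kp + ki + kd) * h[0]) < 1.0)
```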
4. ADAPTIVE PID TYPE ILC
In this part, we need to consider these conditions:
(B1) All the system parameters, namely the matrices A, B and C, are unknown.
(B2) The scalar CB is nonzero.
Here, according to (B1), the Markov parameters of the system (1), that is h = [h_1 h_2 h_3 … h_M]^T, are unknown and relation (19) is useless. So, in this case, at first the vector h should be estimated, and then, in order to determine the learning gains, we use the following relation, in which D̂(j) ≜ [ĥ(j) ĥ_i(j) ĥ_d(j)] is built from the estimate ĥ(j) as D is built from h in (19)–(20):

[k_p(j) k_i(j) k_d(j)]^T = (D̂^T(j) D̂(j))^{−1} D̂^T(j) e_1 (21)

Hence, the control law (13) changes to:

u_{j+1}(i) = u_j(i) + k_p(j) e_j(i+1) + k_i(j) Σ_{k=1}^{i+1} e_j(k) + k_d(j) [e_j(i+1) − e_j(i)],
i = 0, 1, …, M−1, j = 0, 1, … (22)
or:

V(j) = k_p(j) E(j) + k_i(j) T_i E(j) + k_d(j) T_d E(j)

where ĥ(j), ĥ_i(j) and ĥ_d(j) are, respectively, the estimations of h, h_i and h_d in the jth iteration, that is:

ĥ(j) = [ĥ_1(j) ĥ_2(j) … ĥ_M(j)]^T (23)
and ĥ_i(j) and ĥ_d(j) are obtained from ĥ(j) as in (20). The estimate ĥ(j) is determined by a suitable method so that, according to assumption (B2), the following condition holds for all j ∈ {0, 1, …}:

ĥ_1(j) ≠ 0 (24)

so that the learning gains k_p(j), k_i(j) and k_d(j) always exist.
The next step is to establish an online adaptive algorithm for estimating h so that (24) holds. For this purpose, let us consider:

ĥ(j+1) = ĥ(j) + Δĥ(j) (25)

where Δĥ(j) is a modifier term, which must be determined in a suitable manner.
In order to determine the modifier term, (12) is rewritten in the following form:

E(j+1) = E(j) − W(j) h (26)

where:

W(j) = T([v_j(0) v_j(1) v_j(2) … v_j(M−1)]^T) (27)

Using the estimate ĥ(j), the estimated E(j+1) is given as follows:

Ê(j+1) = E(j) − W(j) ĥ(j) (28)
From the difference of relations (26) and (28), we have:

Ẽ(j) = W(j) (ĥ(j) − h) (29)
where Ẽ(j) ≜ E(j+1) − Ê(j+1). Now, the purpose is the determination of the modifier term Δĥ(j) in (25) so that the value of the vector Ẽ(j) decreases as the number of iterations increases; therefore, we define a quadratic cost function on Ẽ(j) as follows:

g(j) = (1/2) Ẽ^T(j) P Ẽ(j) (30)
where P ∈ R^{M×M} is a symmetric positive definite matrix. Therefore, we rewrite (25) as the following:

ĥ(j+1) = ĥ(j) − γ(j) ∂g(j)/∂ĥ(j) (31)

where γ(j) is a positive scalar called the algorithm step size, and ∂g(j)/∂ĥ(j) denotes the gradient of g(j) with respect to ĥ(j). Using (26) and (28), it is easy to derive that:

∂g(j)/∂ĥ(j) = W^T(j) P Ẽ(j) (32)
So, from (31) and (32), we can write the modifier term Δĥ(j) as follows:

Δĥ(j) = −γ(j) Q(j) Ẽ(j) (33)

where:

Q(j) = W^T(j) P (34)
Finally, considering the previous relations, the adjusting algorithm (25) becomes as follows:

ĥ(j+1) = ĥ(j) − γ(j) Q(j) Ẽ(j) (35)
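One pass of the adjusting algorithm (25)–(35) can be sketched as follows: form W(j) from the input increments of trial j, compute the residual Ẽ(j) = W(j)(ĥ(j) − h), and take one gradient step. All numbers are illustrative, and P is taken as the identity for simplicity.

```python
import numpy as np

def T(v):
    # lower triangular Toeplitz matrix with first column v, as in (27)
    M = len(v)
    return np.array([[v[r - c] if r >= c else 0.0 for c in range(M)] for r in range(M)])

M = 6
rng = np.random.default_rng(2)
h_true = np.array([1.0, 0.5, 0.2, 0.1, 0.05, 0.02])   # unknown to the controller
h_hat = np.array([2.0, 0.0, 0.0, 0.0, 0.0, 0.0])      # initial estimate, h_hat_1 != 0 (S1)
P = np.eye(M)                                          # weighting matrix of (30)

V = rng.standard_normal(M)                             # input increments of trial j
W = T(V)                                               # (27)
E_tilde = W @ (h_hat - h_true)                         # residual, cf. (29)

Q = W.T @ P                                            # (34)
gamma = 1.0 / np.linalg.eigvalsh(W.T @ P @ W).max()    # inside the interval (37)
h_next = h_hat - gamma * Q @ E_tilde                   # one step of (35)

# the Lyapunov function (38) does not increase along the update
F = lambda e: float(e @ e)
print(F(h_next - h_true) <= F(h_hat - h_true))  # True
```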
In order to analyze the convergence of the presented adaptive scheme, at first we examine the establishment of the important condition (24); for this purpose the following steps are considered:
S1. In choosing the initial conditions for the adjusting algorithm (35), we select ĥ_1(0) ≠ 0.
S2. We provide some conditions so that from the assumption ĥ_1(j) ≠ 0 the following result can be obtained: ĥ_1(j+1) ≠ 0.
In order to provide the necessary conditions for step S2, we choose the step size γ(j) of algorithm (35) with the following constraint:

γ(j) q_1(j) Ẽ(j) ≠ ĥ_1(j) (36)

where q_1(j) is the first row of the matrix Q(j).
Therefore, by using both previous steps and mathematical induction, condition (24) will be guaranteed for all j ∈ {0, 1, …}. The algebraic equations (21), the control law (22), and the adjusting algorithm (35) are the main parts of the presented adaptive PID-type ILC.
The convergence condition of the proposed adaptive PID-type ILC is introduced in the following theorem:
Theorem 3. The presented adaptive PID-type ILC is convergent if the step size γ(j) in the algorithm (35) is chosen in the following interval:
0 < γ(j) < 2 / λ_max(W^T(j) P W(j)) (37)

where λ_max denotes the largest eigenvalue.
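The role of the bound (37) can be checked numerically: for γ(j) inside the interval, R(j) of (41) is positive definite, so ΔF(j) in (40) is non-positive; just outside it, positive definiteness is lost. Illustrative values, with P the identity:

```python
import numpy as np

M = 5
rng = np.random.default_rng(3)
W = np.tril(rng.standard_normal((M, M)))            # stand-in for W(j)
P = np.eye(M)                                       # symmetric positive definite weight

lam_max = np.linalg.eigvalsh(W.T @ P @ W).max()     # the eigenvalue appearing in (37)
R = lambda g: 2 * g * P - g**2 * (P @ W @ W.T @ P)  # (41)

inside = 1.9 / lam_max                              # satisfies (37)
outside = 2.1 / lam_max                             # violates (37)
eig_in = np.linalg.eigvalsh(R(inside)).min()
eig_out = np.linalg.eigvalsh(R(outside)).min()
print(eig_in > 0, eig_out < 0)  # True True
```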
Proof of Theorem: Let us consider the following Lyapunov function candidate:

F(j) = h̃^T(j) h̃(j) (38)

where:

h̃(j) ≜ ĥ(j) − h (39)
Now, the difference of the Lyapunov function (38) is given by:

ΔF(j) = F(j+1) − F(j) = −Ẽ^T(j) R(j) Ẽ(j) (40)

where R(j) is the following symmetric matrix:

R(j) = 2γ(j) P − γ^2(j) P W(j) W^T(j) P (41)
It is easy to show that if γ(j) is in the interval (37), then the matrix R(j) will be positive definite, and it can be ensured that:

ΔF(j) ≤ 0 (42)

That is, F(j) is a non-increasing function along the j direction and hence will be bounded. Also, since F(j) is a nonnegative sequence, from (42) we can obtain:

lim_{j→∞} ΔF(j) = 0 (43)
Since R(j) is a symmetric and positive definite matrix, the equation ΔF(j) = 0 implies Ẽ(j) = 0; then, from (43), we can show that:

lim_{j→∞} Ẽ(j) = 0 (44)

For sufficiently large iterations, from (44), we have:

Ẽ(j) = 0 (45)
So, from algorithm (35), constant values relative to iteration are obtained for ĥ(j), denoted h*, that is:

ĥ(j) = h* for sufficiently large j (46)
On the basis of relation (21), constant values are calculated for the elements of the gain vector K(j) = [k_p(j) k_i(j) k_d(j)]^T, denoted k_p*, k_i* and k_d*, as follows:

[k_p* k_i* k_d*]^T = (D*^T D*)^{−1} D*^T e_1 for sufficiently large j (47)
where D* ≜ [h* h_i* h_d*] ∈ R^{M×3}, with:

h_i* = [h_1*, h_1* + h_2*, …, Σ_{k=1}^{M} h_k*]^T, h_d* = [h_1*, h_2* − h_1*, …, h_M* − h_{M−1}*]^T (48)
From (29), (45) and (46), we have:

W(j) h̃* = 0 (49)

where:

h̃* ≜ h* − h = [h_1* − h_1, h_2* − h_2, …, h_M* − h_M]^T (50)
We consider two different cases:
Case 1. The scalar h_1* − h_1 is nonzero.
In this case, since W(j) h̃* = T(h̃*) V(j), where h̃* ≜ h* − h, and the lower triangular Toeplitz matrix T(h̃*) is invertible when its first element h_1* − h_1 is nonzero, from (27) and (49) the following conclusion holds:

v_j(i) = 0 for i = 0, 1, …, M−1 and sufficiently large j (51)
By substituting for k_p(j), k_i(j) and k_d(j) from (47), and for v_j(i) = u_{j+1}(i) − u_j(i) from (51), into (22), we can obtain:

(k_p* + k_i* + k_d*) e_j(1) = 0
(k_p* + k_i* + k_d*) e_j(2) + (k_i* − k_d*) e_j(1) = 0
⋮
(k_p* + k_i* + k_d*) e_j(M) + (k_i* − k_d*) e_j(M−1) + k_i* Σ_{k=1}^{M−2} e_j(k) = 0 (52)
which can be written, in view of (47), with the gain combinations expressed as ratios of determinants built from the entries of h*, all sharing the common denominator det(D*^T D*) (53), (54)
Since h* is the final value of ĥ(j), and according to condition (24) the values of ĥ_1(j) are nonzero for all j ∈ {0, 1, …}, one can conclude that h_1* ≠ 0. Also, from (48), we have:

h_i1* = h_d1* = h_1* (55)
Then, from (53), (54) and (55), we can conclude that:

k_p* + k_i* + k_d* ≠ 0 (56)

From (56), based on (52), it can be ensured that e_j(i) = 0. Therefore:

lim_{j→∞} E(j) = 0 (57)
Then, we can say that in this case the proposed adaptive scheme is convergent.
Case 2. The scalar h_1* − h_1 is zero.
From (22) and (47) we will have:

V(j) = (k_p* I + k_i* T_i + k_d* T_d) E(j) for sufficiently large j (58)

By substituting for V(j) from (58) into (12), we can get:

E(j+1) = [I − H_p (k_p* I + k_i* T_i + k_d* T_d)] E(j) ≜ H_e E(j) for sufficiently large j (59)
where I ∈ R^{M×M} is the identity matrix and H_e ≜ I − H_p (k_p* I + k_i* T_i + k_d* T_d). Since H_e is a lower triangular Toeplitz matrix, we have H_e = T(h_e).
By considering the gain vector from (47) and the matrix H_e above, and by defining the vector e_1 = [1 0 0 … 0]^T ∈ R^M, we can write:

h_e = e_1 − (k_p* h + k_i* h_i + k_d* h_d) (60)

where:

h_e1 = 1 − (k_p* + k_i* + k_d*) h_1
h_ek = −[k_p* h_k + k_i* Σ_{l=1}^{k} h_l + k_d* (h_k − h_{k−1})], k = 2, 3, …, M (61)
Considering the lower triangular form of H_e leads to the following characteristic polynomial for it:

Δ(λ) = det(λI − H_e) = (λ − h_e1)^M (62)

By using (53), and considering that in this case h_1* = h_1, after some manipulation we obtain:

|λ| = |1 − (k_p* + k_i* + k_d*) h_1| < 1 (63)
Clearly, all eigenvalues of H_e are absolutely less than one, so we can say that H_e is a stable matrix and the learning process will converge; that means:

lim_{j→∞} E(j) = 0 (64)

Here the proof of the theorem is completed.
Comment 3. For choosing γ(j), we should consider both conditions (36) and (37); then, if the value ĥ_1(j) / (q_1(j) Ẽ(j)) lies in the interval (37), we should choose γ(j) not equal to it.
5. SIMULATION RESULTS
In this section, an illustrative numerical example is given to demonstrate the effectiveness of the presented ILC algorithm.
Let us consider a DC motor which rotates a mechanical load, as in Figure 1, where its field winding current is constant but its armature supply is variable.

Figure 1. DC motor with constant field current
In this situation, the block-diagram of the motor is as in Figure 2 [14].

Figure 2. The motor block-diagram
where R_a and L_a are the armature winding resistance and inductance, respectively, k_m is the motor torque constant, J and b are the mechanical load inertia momentum and friction ratio, respectively, and k_b is the back-EMF constant. Also, v_a(t) and i_a(t) are, respectively, the armature source voltage and current, and ω(t) and θ(t) are the motor shaft rotational speed and angle, respectively.
Let us define the state variables and the output of the motor as follows:

State variables: x(t) = [θ(t) ω(t) i_a(t)]^T
Output: y(t) = θ(t)
Now, by considering Figure 2, it is easy to obtain the state space equations of the motor as follows:

ẋ(t) = A_c x(t) + B_c v_a(t), y(t) = C x(t)

where ẋ ≜ dx/dt, and:

A_c = [0 1 0; 0 −b/J k_m/J; 0 −k_b/L_a −R_a/L_a], B_c = [0 0 1/L_a]^T, C = [1 0 0]
It is desired to determine v_a(t) so that y(t) periodically tracks a given command signal y_d(t) in the time interval [0, t_f], such that as the iteration number increases, the error between y(t) and y_d(t) vanishes. The state equations of the motor should be discretized
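The discretization step just mentioned can be sketched as follows (zero-order hold via a truncated matrix-exponential series; the motor parameters R_a, L_a, k_m, k_b, J, b and the sampling period T_s below are hypothetical illustrative values, not the paper's). Note that the sampled model has CB ≠ 0 in floating point even though the continuous model has relative degree three, in line with Comment 1 and assumption (B2).

```python
import numpy as np
from math import factorial

# hypothetical motor parameters (illustrative values only, not the paper's)
Ra, La = 1.0, 0.5
km, kb = 0.01, 0.01
J, b = 0.01, 0.1

Ac = np.array([[0.0, 1.0, 0.0],
               [0.0, -b / J, km / J],
               [0.0, -kb / La, -Ra / La]])
Bc = np.array([[0.0], [0.0], [1.0 / La]])
C = np.array([[1.0, 0.0, 0.0]])

# zero-order-hold discretization via truncated series:
#   A = exp(Ac*Ts),  B = (integral_0^Ts exp(Ac*t) dt) Bc
Ts = 0.01
A = sum(np.linalg.matrix_power(Ac * Ts, k) / factorial(k) for k in range(20))
B = sum(np.linalg.matrix_power(Ac, k) * Ts ** (k + 1) / factorial(k + 1) for k in range(20)) @ Bc

h1 = (C @ B).item()      # = CB of the sampled model
print(h1 != 0.0)         # True: sampling makes CB nonzero, cf. Comment 1 and (B2)
```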