TELKOMNIKA, Vol. 12, No. 3, September 2014, pp. 613~622
ISSN: 1693-6930, accredited A by DIKTI, Decree No: 58/DIKTI/Kep/2013
DOI: 10.12928/TELKOMNIKA.v12i3.95
Received March 22, 2014; Revised July 10, 2014; Accepted July 28, 2014
The New Complex-Valued Wavelet Neural Network
Sufang Li, Mingyan Jiang
School of Information Science and Engineering, Shandong University, Jinan, 250100, P.R. China
e-mail: sufangli@mail.sdu.edu.cn; corresponding author: jiangmingyan@sdu.edu.cn
Abstract
A new complex-valued wavelet neural network is proposed in this paper by introducing a modified complex-valued backpropagation algorithm, in which a new error function is minimized by the algorithm. The performance improvement is confirmed by the simulation results, which show that the modified algorithm is simpler than the conventional algorithm, and has better convergence, better stability and faster running speed.

Keywords: complex-valued wavelet neural network (CVWNN); complex-valued backpropagation (CVBP) algorithm; XOR
1. Introduction
Wavelet analysis theory is considered to be a breakthrough in Fourier analysis and has been applied in many research areas. The wavelet transform can effectively extract the local information of a signal by analyzing the signal through scaling and translation [1]. Combining wavelets with the artificial neural network (ANN), the wavelet neural network (WNN) has been developed [2]-[4]. The ANN has many important properties such as learning, generalization, and parallel computation, although it needs a large number of neurons in the hidden layer and cannot converge quickly. The WNN has inherited the good properties of the ANN. Moreover, it can converge quickly and give high precision with a reduced network size because of the time-frequency localization properties of wavelets [5].
There are two types of WNN structure. The first is the pre-wavelet neural network, whose architecture is shown in Figure 1. This network first processes the input signal using the orthogonal wavelet matrix, and then performs learning and discrimination. The second type is called the embedded wavelet neural network, whose architecture is shown in Figure 2; here the wavelet transform algorithm is integrated into the feed-forward neural network. In the embedded wavelet neural network, wavelet functions are used in the hidden layer of the network as activation functions instead of functions that are local in time, such as Gaussian and sigmoid functions.
Li et al. [6] proposed a complex-valued wavelet artificial neural network (CVWNN) using the Haar wavelet as the hidden layer activation function (AF) in a complex-valued artificial neural network (CVANN). The complex-valued wavelet neural network is the complex version of the real-valued wavelet neural network: it has complex inputs, outputs, connection weights, and dilation and translation parameters, but the nonlinearity of the hidden nodes remains a real-valued function (a real-valued wavelet function). The CVWNN has expanded its applications in fields dealing with complex numbers, such as biomedical image processing [7], telecommunications [8],[9], classification of carotid arterial Doppler ultrasound signals [5], speech recognition [10], and signal and image processing with the Fourier transformation [11].
The core algorithm of the CVWNN is the complex-valued BP algorithm, which is based on gradient descent and therefore often suffers from the local minima problem and slow convergence. Many methods [12],[13] have been proposed to improve the performance, such as the convergence and the local stability. These methods usually apply an adaptive activation function and add a term to the conventional error function to speed up the convergence and prevent the learning from getting stuck in local minima. Unfortunately, the local minima problem and some errors are closely related to the saturation of the activation function. When the actual output approaches an extreme value, the neurons in the output layer and the hidden layer become insensitive to input signals and the propagation chain is almost blocked.
Figure 1. The architecture of the pre-wavelet neural network
[Diagram: the input signal x_p(n) is first processed by a wavelet transform block, then fed through delay elements z^-1 into a hidden layer with activations F_h(.) and an output layer with activation F_o(.) producing o(n)]

Figure 2. The architecture of the embedded wavelet neural network
[Diagram: the input signal x_p(n) is fed through delay elements z^-1 directly into a hidden layer with activations F_h(.) and an output layer with activation F_o(.) producing o(n)]
In this paper, a modified CVWNN is proposed to resolve the XOR problem. The new CVBP algorithm and the wavelet activation function in the hidden layer can improve the performance of the network, avoiding the effect of the saturation of the activation function and providing excellent function approximation and generalization abilities.
2. WNN
Wavelet is a new powerful tool for representing nonlinearity. A function $f(x)$ can be represented by the superposition of daughters $\psi_{a,b}(x)$ of a mother wavelet $\psi(x)$, where $\psi_{a,b}(x)$ can be expressed as

$$\psi_{a,b}(x) = \frac{1}{\sqrt{a}}\,\psi\!\left(\frac{x-b}{a}\right) \qquad (1)$$

$a \in R^{+}$ and $b \in R$ are, respectively, called dilation and translation parameters. The continuous wavelet transform of $f(x)$ is defined as

$$w(a,b) = \int_{-\infty}^{\infty} f(x)\,\psi_{a,b}(x)\,dx \qquad (2)$$

And the function $f(x)$ can be reconstructed by the inverse wavelet transform

$$f(x) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} w(a,b)\,\psi_{a,b}(x)\,\frac{da\,db}{a^{2}} \qquad (3)$$
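As an illustration (not part of the original paper), the daughter wavelets of Eq. (1) and the transform of Eq. (2) can be sketched numerically; the Morlet-type mother wavelet and the test signal below are hypothetical choices, and the integral is approximated by a Riemann sum on a finite grid.

```python
# A minimal numerical sketch of Eqs. (1)-(2), under assumed choices of
# mother wavelet and signal.
import numpy as np

def psi(x):
    """Hypothetical mother wavelet (Morlet type, cf. Eq. (32))."""
    return np.cos(5.0 * x) * np.exp(-2.0 * x**2)

def psi_ab(x, a, b):
    """Daughter wavelet of Eq. (1): psi((x - b) / a) / sqrt(a)."""
    return psi((x - b) / a) / np.sqrt(a)

def cwt(f, a, b):
    """Wavelet transform w(a, b) of Eq. (2), discretized on a finite grid."""
    x = np.linspace(-10.0, 10.0, 4001)
    return np.sum(f(x) * psi_ab(x, a, b)) * (x[1] - x[0])

signal = lambda x: np.sin(3.0 * x) * np.exp(-x**2)
print(cwt(signal, a=0.5, b=0.0))  # response at scale 0.5, translation 0
```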
The continuous wavelet transform and its inverse transform are not directly implementable on digital computers. When the inverse wavelet transform (3) is discretized, $f(x)$ has the following approximate wavelet-based representation form:

$$f(x) \approx \sum_{k=1}^{K} w_k\,\psi\!\left(\frac{x - b_k}{a_k}\right) \qquad (4)$$

where $w_k$, $b_k$ and $a_k$ are the weight coefficients, translations and dilations for each daughter wavelet. This approximation can be expressed as the neural network of Figure 2, which contains wavelet nonlinearities in the artificial neurons rather than the standard sigmoidal nonlinearities.
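For concreteness, the finite representation of Eq. (4) can be sketched directly; the parameters $w_k$, $b_k$, $a_k$ below are random placeholders for what the WNN would learn by training.

```python
# Sketch of Eq. (4): f(x) ~ sum_k w_k * psi((x - b_k) / a_k).
# w, b, a are hypothetical random initializations; training would tune them.
import numpy as np

rng = np.random.default_rng(0)
K = 8                                # number of daughter wavelets
w = rng.normal(size=K)               # weight coefficients w_k
b = rng.uniform(-2.0, 2.0, size=K)   # translations b_k
a = rng.uniform(0.5, 2.0, size=K)    # dilations a_k

def psi(x):
    """Hypothetical Morlet-type mother wavelet (cf. Eq. (32))."""
    return np.cos(5.0 * x) * np.exp(-2.0 * x**2)

def wnn_output(x):
    """Wavelet-based representation of Eq. (4) for a scalar input x."""
    return np.sum(w * psi((x - b) / a))

print(wnn_output(0.3))
```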
3. Traditional complex-valued BP neural network
3.1. The forward propagation process
In this paper, the classical three-layer network [14] is introduced, the architecture of which is shown in Figure 2. The input vector is $X_p = (x_{p1}, x_{p2}, \cdots, x_{pN})^T$, which is applied to the input layer of the network. Then the input units distribute the values to the hidden layer units. The net input to the $j$th hidden unit is

$$net^h_{pj} = net^h_{pj,R} + j\,net^h_{pj,I} = \sum_{i=1}^{N} w^h_{ji}x_{pi} + \theta^h_j = \sum_{i=1}^{N}\left(w^h_{ji,R}x_{pi,R} - w^h_{ji,I}x_{pi,I}\right) + \theta^h_{j,R} + j\left[\sum_{i=1}^{N}\left(w^h_{ji,R}x_{pi,I} + w^h_{ji,I}x_{pi,R}\right) + \theta^h_{j,I}\right] \qquad (5)$$

where $w^h_{ji}$ is the complex-valued connection weight from the $i$th input unit to the $j$th unit, and $\theta^h_j$ is the bias term of the $j$th unit. The "$h$" superscript marks quantities on the hidden layer. The output of the hidden neuron is

$$i_{pj} = i_{pj,R} + j\,i_{pj,I} = F^h_j(net^h_{pj}) = f^h_j(net^h_{pj,R}) + j\,f^h_j(net^h_{pj,I}) \qquad (6)$$

where the "$R$" superscript and the "$I$" superscript mark the quantities on the real part and the imaginary part of the values, respectively. "$F$" is the complex-valued activation function, which is

$$F^h(x) = f^h(x_R) + j\,f^h(x_I) \qquad (7)$$

where $f^h(x)$ refers to the formula (4). The net input and the output of the $k$th output unit are

$$net^o_{pk} = net^o_{pk,R} + j\,net^o_{pk,I} = \sum_{j=1}^{L} w^o_{kj}i_{pj} + q^o_k = \sum_{j=1}^{L}\left(w^o_{kj,R}i_{pj,R} - w^o_{kj,I}i_{pj,I}\right) + q^o_{k,R} + j\left[\sum_{j=1}^{L}\left(w^o_{kj,R}i_{pj,I} + w^o_{kj,I}i_{pj,R}\right) + q^o_{k,I}\right] \qquad (8)$$

$$O_{pk} = O_{pk,R} + j\,O_{pk,I} = F^o_k(net^o_{pk}) = f^o_k(net^o_{pk,R}) + j\,f^o_k(net^o_{pk,I}) \qquad (9)$$

where the "$o$" superscript marks the quantities on the output layer.
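To make the forward pass concrete, here is a small NumPy sketch (an illustration, not the authors' code) of Eqs. (5)-(9). It assumes, for concreteness, the sigmoid of Eq. (17) as the real component function $f$; NumPy's complex arithmetic reproduces the split real/imaginary sums of Eqs. (5) and (8) automatically. All dimensions and initial weights are hypothetical.

```python
# Forward pass of the three-layer complex-valued network, Eqs. (5)-(9).
import numpy as np

def f(x):
    """Real-valued sigmoid, Eq. (17)."""
    return 1.0 / (1.0 + np.exp(-x))

def F(z):
    """Split complex activation: f on real and imaginary parts, Eq. (7)."""
    return f(z.real) + 1j * f(z.imag)

rng = np.random.default_rng(1)
N, L, M = 2, 3, 1                      # input, hidden, output layer sizes
Wh = rng.normal(size=(L, N)) + 1j * rng.normal(size=(L, N))  # w^h_ji
th = rng.normal(size=L) + 1j * rng.normal(size=L)            # theta^h_j
Wo = rng.normal(size=(M, L)) + 1j * rng.normal(size=(M, L))  # w^o_kj
qo = rng.normal(size=M) + 1j * rng.normal(size=M)            # q^o_k

def forward(x):
    net_h = Wh @ x + th                # Eq. (5)
    i_h = F(net_h)                     # Eq. (6)
    net_o = Wo @ i_h + qo              # Eq. (8)
    return F(net_o), i_h, net_h        # output O_pk, Eq. (9)

O, _, _ = forward(np.array([1.0 + 1.0j, 0.5 - 0.2j]))
print(O)
```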
3.2. The backward propagation process
The backward propagation refers to the backward propagation of the error signal. $\delta_{pk} = D_{pk} - O_{pk}$ is defined as the error at a single output unit, where "$p$" refers to the $p$th training vector and "$k$" refers to the $k$th unit. The error is minimized by the complex algorithm. Since the sizes of complex-valued numbers cannot be compared, the error energy function is as follows, the sum of the squares of the errors of all output units,

$$E_p = \frac{1}{2}\sum_{k=1}^{M}\delta_{pk}\delta^{*}_{pk} = \frac{1}{2}\sum_{k=1}^{M}\left[D_{pk} - f^o_k(net^o_{pk,R}) - j\,f^o_k(net^o_{pk,I})\right]\left[D^{*}_{pk} - f^o_k(net^o_{pk,R}) + j\,f^o_k(net^o_{pk,I})\right] = \frac{1}{2}\sum_{k=1}^{M}\left[\left(D_{pk,R} - f^o_k(net^o_{pk,R})\right)^2 + \left(D_{pk,I} - f^o_k(net^o_{pk,I})\right)^2\right] \qquad (10)$$

where "$*$" means complex conjugation and $M$ is the number of nodes of the output layer. In order to determine the weight-changing direction, it is necessary to calculate the negative of the gradient of $E_p$ according to the real and imaginary parts of the coefficients. The weights can be written as

$$w^o_{kj}(t) = w^o_{kj,R}(t) + j\,w^o_{kj,I}(t) \qquad (11)$$

Firstly, the adaption rule of the output layer is considered. According to the steepest descent rule, the weights can be updated as

$$w^o_{kj,R}(t+1) = w^o_{kj,R}(t) - \eta\,\frac{\partial E_p}{\partial w^o_{kj,R}(t)} \qquad (12)$$

$$w^o_{kj,I}(t+1) = w^o_{kj,I}(t) - \eta\,\frac{\partial E_p}{\partial w^o_{kj,I}(t)} \qquad (13)$$

where $\eta$ is the learning step, a positive constant. Combining (12) and (13), we can have

$$w^o_{kj}(t+1) = w^o_{kj}(t) - \eta\left(\frac{\partial E_p}{\partial w^o_{kj,R}(t)} + j\,\frac{\partial E_p}{\partial w^o_{kj,I}(t)}\right) \qquad (14)$$

Finally, according to the error function formula, we can get

$$\frac{\partial E_p}{\partial w^o_{kj,R}} + j\,\frac{\partial E_p}{\partial w^o_{kj,I}} = -\left[(D_{pk,R} - O_{pk,R})\,f'^o_k(net^o_{pk,R}) + j\,(D_{pk,I} - O_{pk,I})\,f'^o_k(net^o_{pk,I})\right] i^{*}_{pj} \qquad (15)$$

The weight update equations can be summarized by defining a quantity

$$\delta^o_{pk} = f'^o_k(net^o_{pk,R})\,\mathrm{Re}(D_{pk} - O_{pk}) + j\,f'^o_k(net^o_{pk,I})\,\mathrm{Im}(D_{pk} - O_{pk}) \qquad (16)$$

When the activation function (AF) is a sigmoid function such as

$$f_o(x) = \frac{1}{1 + e^{-x}} \qquad (17)$$

which is one of the most widely used AFs for artificial neural networks, the first-order derivative of the AF is

$$f'_o(x) = \frac{e^{-x}}{\left(1 + e^{-x}\right)^2} = f_o(x)\left(1 - f_o(x)\right) \qquad (18)$$

According to (18), we can get

$$\delta^o_{pk} = O_{pk,R}\left(1 - O_{pk,R}\right)\mathrm{Re}(D_{pk} - O_{pk}) + j\,O_{pk,I}\left(1 - O_{pk,I}\right)\mathrm{Im}(D_{pk} - O_{pk}) \qquad (19)$$

Whatever the form of the output layer activation function, the weight-update equation of the output layer can be written as

$$w^o_{kj}(t+1) = w^o_{kj}(t) + \eta\,\delta^o_{pk}\,i^{*}_{pj} \qquad (20)$$

Similarly, we can get the adaption rule of the hidden layer,

$$w^h_{ji}(t+1) = w^h_{ji}(t) + \eta\,\delta^h_{pj}\,x^{*}_{pi} \qquad (21)$$

where $\delta^h_{pj} = f'^h_j(net^h_{pj,R})\,\mathrm{Re}\left(\sum_{k=1}^{M}\delta^o_{pk}w^{o*}_{kj}\right) + j\,f'^h_j(net^h_{pj,I})\,\mathrm{Im}\left(\sum_{k=1}^{M}\delta^o_{pk}w^{o*}_{kj}\right)$.
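Continuing the forward-pass sketch above (same session, same variables), one backward step of Eqs. (16)-(21) under the sigmoid AF can be written as follows; this is an illustrative reading of the update rules, not the authors' implementation.

```python
# One backward step of the traditional CVBP, Eqs. (16)-(21), for the
# sigmoid AF, where f'(net) = O(1 - O) applies separately per component.
def backward_step(x, D, eta=0.1):
    global Wh, Wo
    O, i_h, net_h = forward(x)
    e = D - O
    # Eq. (19): output delta, scaled by the derivative factors O(1 - O).
    delta_o = (O.real * (1 - O.real) * e.real
               + 1j * O.imag * (1 - O.imag) * e.imag)
    # Hidden delta (where-clause of Eq. (21)): conjugate-weighted backprop.
    s = Wo.conj().T @ delta_o
    hr, hi = f(net_h.real), f(net_h.imag)   # i_h components f(net^h)
    delta_h = hr * (1 - hr) * s.real + 1j * hi * (1 - hi) * s.imag
    # Eqs. (20) and (21): updates use the conjugates of the layer inputs.
    Wo = Wo + eta * np.outer(delta_o, i_h.conj())
    Wh = Wh + eta * np.outer(delta_h, x.conj())
```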
4. New complex-valued wavelet neural networks
The complex-valued WNN algorithm as described above has the following problem: when the actual output value approaches an extreme value, i.e., 0 or 1, the derivative factors $O_{pk,R}(1 - O_{pk,R})$ and $O_{pk,I}(1 - O_{pk,I})$ in (19) make the error signal very small. This means that an output unit can be maximally wrong without producing a strong error signal with which the synaptic weights should be significantly adjusted, and the search for a minimum in the error will be retarded.
M. Jiang et al. [16] have introduced a modified error function for the complex-valued BP neural network to overcome the above shortcomings and avoid delaying the convergence of the network. Instead of minimizing the squares of the differences between the actual outputs and the desired outputs, the error function to be minimized is as follows,

$$E_p = -\sum_{k=1}^{M}\left[D_{pk,R}\ln O_{pk,R} + \left(1 - D_{pk,R}\right)\ln\left(1 - O_{pk,R}\right) + D_{pk,I}\ln O_{pk,I} + \left(1 - D_{pk,I}\right)\ln\left(1 - O_{pk,I}\right)\right] \qquad (22)$$

where $M$ is the total number of output neurons, and $O_{pk,R}$ and $O_{pk,I}$ are the real part and the imaginary part of the actual outputs of the $k$th output neuron, respectively. $D_{pk,R}$ and $D_{pk,I}$ are
kj
(
t
+1
)
=
w
o
kj
(
t
)+
´±
O
pk
i
¤
pj
w
h
ji
;
I
(
t
+1
)
=
w
h
ji
;
I
(
t
)+
´
¡
±
h
p
j
;I
x
pi;
R
¡
±
h
p
j
;R
x
pi;
I
¢
±
h
pj
=
f
0
h
j
(
ne
t
h
pj
;
R
)R
e
³
P
M
k=
1
±
o
pk
w
o
¤
kj
´
+j
f
0
h
j
(n
e
t
h
pj
;
R
)Im
³
P
M
k=1
±
o
pk
w
o
¤
kj
´
Evaluation Warning : The document was created with Spire.PDF for Python.
ISSN: 16
93-6
930
TELKOM
NIKA
Vol. 12, No. 3, September 20
14: 61
3 – 622
618
the real part and the imaginary part of the desired outputs of the $k$th output neuron. Meanwhile, the backward propagation will be changed, while the forward propagation remains unchanged.
4.1. The adaptation rule of the output layer
The gradients of the modified error function with respect to $w^o_{kj,R}$ and $w^o_{kj,I}$ are as follows,

$$\frac{\partial E_p}{\partial w^o_{kj,R}} = -\left[(D_{pk,R} - O_{pk,R})\,i_{pj,R} + (D_{pk,I} - O_{pk,I})\,i_{pj,I}\right] \qquad (23)$$

$$\frac{\partial E_p}{\partial w^o_{kj,I}} = \left[(D_{pk,R} - O_{pk,R})\,i_{pj,I} - (D_{pk,I} - O_{pk,I})\,i_{pj,R}\right] \qquad (24)$$

Finally, we can get

$$w^o_{kj,R}(t+1) = w^o_{kj,R}(t) + \eta\left[(D_{pk,R} - O_{pk,R})\,i_{pj,R} + (D_{pk,I} - O_{pk,I})\,i_{pj,I}\right] \qquad (25)$$

And

$$w^o_{kj,I}(t+1) = w^o_{kj,I}(t) + \eta\left[(D_{pk,I} - O_{pk,I})\,i_{pj,R} - (D_{pk,R} - O_{pk,R})\,i_{pj,I}\right] \qquad (26)$$

In order to simplify the update equations, we also introduce the error term

$$\delta^o_{pk} = (D_{pk,R} - O_{pk,R}) + j\,(D_{pk,I} - O_{pk,I}) = D_{pk} - O_{pk} \qquad (27)$$

Thus, the factors $f'^o_k(net^o_{pk,R})\,\mathrm{Re}(D_{pk} - O_{pk})$ and $f'^o_k(net^o_{pk,I})\,\mathrm{Im}(D_{pk} - O_{pk})$ of (16) are replaced by $\mathrm{Re}(D_{pk} - O_{pk})$ and $\mathrm{Im}(D_{pk} - O_{pk})$. Therefore, the back propagation is now driven directly by the difference between the desired value and the actual value. Formula (27) lacks the derivative factors $f'^o_k(net^o_{pk,R})$ and $f'^o_k(net^o_{pk,I})$, so the "true" error is measured.
4.2. The adaptation rule of the hidden layer
The adaptation rule of the hidden layer is still

$$w^h_{ji}(t+1) = w^h_{ji}(t) - \eta\left(\frac{\partial E_p}{\partial w^h_{ji,R}(t)} + j\,\frac{\partial E_p}{\partial w^h_{ji,I}(t)}\right) \qquad (28)$$

Similarly, we can have

$$w^h_{ji,R}(t+1) = w^h_{ji,R}(t) + \eta\left(\delta^h_{pj,R}\,x_{pi,R} + \delta^h_{pj,I}\,x_{pi,I}\right) \qquad (29)$$

And,

$$w^h_{ji,I}(t+1) = w^h_{ji,I}(t) + \eta\left(\delta^h_{pj,I}\,x_{pi,R} - \delta^h_{pj,R}\,x_{pi,I}\right) \qquad (30)$$

Finally, we can get

$$w^h_{ji}(t+1) = w^h_{ji}(t) + \eta\,\delta^h_{pj}\,x^{*}_{pi} \qquad (31)$$

where $\delta^h_{pj} = f'^h_j(net^h_{pj,R})\,\mathrm{Re}\left(\sum_{k=1}^{M}\delta^o_{pk}w^{o*}_{kj}\right) + j\,f'^h_j(net^h_{pj,I})\,\mathrm{Im}\left(\sum_{k=1}^{M}\delta^o_{pk}w^{o*}_{kj}\right)$ and $\delta^o_{pk}$ is now given by (27). Obviously, the above formula has the same form as in the traditional CVBP algorithm, but since $\delta^o_{pk}$ no longer contains the factors $f'^o_k(net^o_{pk,R})$ and $f'^o_k(net^o_{pk,I})$, the "true" error can be measured.
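The practical difference is easy to see numerically. In the sketch below (illustrative values only), a nearly saturated but maximally wrong output produces an almost zero conventional delta, Eq. (19), while the modified delta of Eq. (27) reports the full error:

```python
# Saturation effect: conventional delta, Eq. (19), versus the modified
# delta, Eq. (27), for a hypothetical saturated output.
import numpy as np

O = 0.999 + 0.001j   # an almost saturated actual output
D = 0.0 + 1.0j       # desired output: maximally wrong

# Conventional CVBP delta, Eq. (19): scaled by O(1 - O) in each part.
delta_conv = (O.real * (1 - O.real) * (D.real - O.real)
              + 1j * O.imag * (1 - O.imag) * (D.imag - O.imag))
# Modified delta, Eq. (27): the "true" error D - O itself.
delta_mod = (D.real - O.real) + 1j * (D.imag - O.imag)

print(abs(delta_conv), abs(delta_mod))  # ~0.0014 vs ~1.41
```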
5. The new complex-valued WNN
The complex-valued wavelet artificial neural network used the Mexican hat wavelet and the Haar wavelet function as the hidden layer AF instead of the logarithmic sigmoid activation function. In this paper, the Morlet wavelet (or Gabor wavelet) function is chosen as the hidden layer AF, which is defined by

$$\psi_{Morlet}(x) = \cos(ax)\,e^{-bx^{2}} \qquad (32)$$

in which the parameters $a$ and $b$ are determined via experimentation. As in the new complex-valued neural network, the activation function of the output layer is chosen as the logarithmic sigmoid in the proposed CVWNN structures. The error function is chosen as the modified error function in Eq. (22). Mathematical formulations of the proposed CVWNN structures are obtained by using the wavelet function instead of the logarithmic sigmoid function. The CVWNN architecture used in this paper is shown in Figure 2.
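A sketch of this hidden-layer activation follows; the parameter values $a$ and $b$ below are hypothetical, since the paper tunes them experimentally and does not report the final values.

```python
# Morlet wavelet AF of Eq. (32), applied separately to the real and
# imaginary parts as in Eq. (7). A and B are assumed parameter values.
import numpy as np

A, B = 5.0, 2.0  # hypothetical Morlet parameters a, b

def morlet(x):
    """Eq. (32): psi_Morlet(x) = cos(a*x) * exp(-b*x^2)."""
    return np.cos(A * x) * np.exp(-B * x**2)

def morlet_deriv(x):
    """First derivative, needed by the BP rules of Section 4."""
    return -A * np.sin(A * x) * np.exp(-B * x**2) - 2.0 * B * x * morlet(x)

def F_hidden(z):
    """Split complex activation with the wavelet nonlinearity."""
    return morlet(z.real) + 1j * morlet(z.imag)

print(F_hidden(0.2 + 0.7j))
```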
6. Simulation results
In order to verify the validity and practicability of the proposed method, the paper carries out the processing of the XOR problem. The learning pattern is called the similar XOR problem, which is shown in Table 1. The real part of the output can be seen as the XOR of the input's real and imaginary parts, and the imaginary part of the output is equal to the real part of the input. This problem has been simulated with a 1-3-1 complex-valued network in [12],[15]. In this paper, the new complex-valued WNN and the conventional CVBPNN and CVWNN are applied to resolve the similar XOR problem. For the above methods, the learning rate and the maximum iteration number are chosen as 0.1 and 5,000, respectively. The architecture of the complex-valued neural network is 1-2-1. When the minimal error is 0.1, 0.01 and 0.001, the learning curves for the similar XOR problem are shown in Figure 3, Figure 4, and Figure 5, respectively. The success rates and average learning epochs are shown in Table 2. When the error criterion is $E_p = 0.1$, the new CVWNN has a 100% success rate, and its average learning epoch count is only 153, well under half of the conventional CVWNN's 394 epochs (Table 2). When the error criterion is $E_p = 0.01$ or $E_p = 0.001$, the improved algorithm is also faster than the conventional one, as can be seen from Figure 4, Figure 5 and Table 2.
Table 1. Learning pattern for the similar XOR problem

Input pattern    Output pattern
0                0
i                1
1                1+i
1+i              i
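For reference, the learning pattern of Table 1 can be encoded as a complex-valued dataset; the assertion checks the stated rule (output real part = XOR of the input's real and imaginary parts, output imaginary part = input real part):

```python
# The similar-XOR learning pattern of Table 1 as complex-valued arrays.
import numpy as np

inputs  = np.array([0 + 0j, 0 + 1j, 1 + 0j, 1 + 1j])
outputs = np.array([0 + 0j, 1 + 0j, 1 + 1j, 0 + 1j])

for z, t in zip(inputs, outputs):
    assert t.real == (int(z.real) ^ int(z.imag)) and t.imag == z.real
```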
Figure 3. The comparison of the learning curve between the proposed CVWNN and the conventional CVWNN, when the minimal error is 0.1
[Plot: MSE versus iteration number]
Table 2. Simulation results for the similar XOR problem

                         Success rate                          Average iterations
                         Proposed CVWNN   Conventional CVWNN   Proposed CVWNN   Conventional CVWNN
Minimal error = 0.1      100%             100%                 153              394
Minimal error = 0.01     100%             98%                  380              608
Minimal error = 0.001    99%              95%                  2189             3743
Figure 4. The comparison of the learning curve between the proposed CVWNN and the conventional CVWNN, when the minimal error is 0.01
[Plot: MSE versus iteration number]
Figure 5. The comparison of the learning curve between the proposed CVWNN and the conventional CVWNN, when the minimal error is 0.001
[Plot: MSE versus iteration number]
A comparison of the proposed method and the conventional CVWNN is illustrated in Figures 3, 4, and 5, from which we can see that the proposed method has better stability, better convergence performance, and faster running speed than the conventional CVWNN.
7. Conclusion
In this paper, a new CVWNN is proposed, whose inputs, outputs and weights are all complex-valued, and whose nonlinear activation function remains real-valued. The backpropagation learning algorithm for training the complex-valued wavelet network is modified by introducing a new error function. The performance of the proposed CVWNN is illustrated with application to the XOR problem. The simulation results demonstrate that the CVWNN has better stability, better convergence performance, and faster running speed than the conventional CVWNN. Moreover, in signal processing and communication areas, there are a large number of complex-valued signals to be dealt with; thus, the proposed CVWNN provides a powerful tool for such cases.
References
[1] P. Hong, X. Liang-Zheng. Efficient Object Recognition Using Boundary Representation and Wavelet Neural Network. IEEE Transactions on Neural Networks. 2008; 19: 2132-2149.
[2] Z. Qinghua. Using wavelet network in nonparametric estimation. IEEE Transactions on Neural Networks. 1997; 8: 227-236.
[3] RH. Abiyev, O. Kaynak. Fuzzy Wavelet Neural Networks for Identification and Control of Dynamic Plants: A Novel Structure and a Comparative Study. IEEE Transactions on Industrial Electronics. 2008; 55: 3133-3140.
[4] S. Yilmaz, Y. Oysal. Fuzzy Wavelet Neural Network Models for Prediction and Identification of Dynamical Systems. IEEE Transactions on Neural Networks. 2010; 21: 1599-1609.
[5] Y. Ozbay, S. Kara, F. Latifoglu, R. Ceylan, M. Ceylan. Complex-valued wavelet artificial neural network for Doppler signals classifying. Artificial Intelligence in Medicine. 2007; 40: 143-156.
[6] C. Li, X. Liao, J. Yu. Complex-valued wavelet network. Journal of Computer and System Sciences. 2003; 67: 623-632.
[7] M. Ceylan, H. Yacar. Blood vessel extraction from retinal images using Complex Wavelet Transform and Complex-Valued Artificial Neural Network. In: 2013 36th International Conference on Telecommunications and Signal Processing (TSP). 2013: 822-825.
[8] J. Zhe, S. Zhihuan, H. Jiaming. Behavioral Modeling of Wideband RF Power Amplifiers Using Complex-valued Wavelet Networks. In: 2006 International Conference on Communications, Circuits and Systems Proceedings. 2006: 820-824.
[9] G. Meijuan, T. Jingwen, Z. Shiru. Modeling for mobile communication fading channel based on wavelet neural network. In: International Conference on Information and Automation (ICIA 2008). 2008: 1566-1570.
[10] A. Shukla, HK. Meena, R. Kala. Speaker Identification using Wavelet Analysis and Modular Neural Networks. Journal of Acoustic Society of India (JASI). 2009.
[11] MK. Sarkaleh, A. Shahbahrami. Classification of ECG Arrhythmias Using Discrete Wavelet Transform and Neural Networks. International Journal of Computer Science, Engineering and Application (IJCSEA). 2012; 12: 1-13.
[12] X. Chen, Z. Tang, C. Vairappan, S. Li, T. Okada. A Modified Error Backpropagation Algorithm for Complex-Valued Neural Networks. International Journal of Neural Systems. 2005; 15: 435-443.
[13] AS. Shafie, IA. Mohtar, S. Masrom, N. Ahmad. Backpropagation neural network with new improved error function and activation function for classification problem. In: 2012 IEEE Symposium on Humanities, Science and Engineering Research (SHUSER). 2012: 1359-1364.
[14] T. Nitta. Complex-Valued Neural Networks. IGI Global. 2009.
[15] T. Nitta. An Extension of the Back-Propagation Algorithm to Complex Numbers. Neural Networks. 1997; 10: 1391-1415.
[16] S. Li, M. Jiang. A modified complex-valued BP neural network. Journal of Computational Information Systems. 2014; 10: 1-13.