TELKOMNIKA, Vol.13, No.2, June 2015, pp. 587 ~ 596
ISSN: 1693-6930, accredited A by DIKTI, Decree No: 58/DIKTI/Kep/2013
DOI: 10.12928/TELKOMNIKA.v13i2.1430
Received January 17, 2015; Revised March 29, 2015; Accepted April 20, 2015
An Image Compression Method Based on Wavelet
Transform and Neural Network
Suqing Zhang, Aiqiang Wang*
Information Engineering Department, Henan Vocational and Technical Institute, Zhengzhou 450046, Henan, China
*Corresponding author, e-mail: 929795121@qq.com
Abstract
Image compression reduces the redundancy between pixels as much as possible by exploiting the correlation between neighborhood pixels, so as to reduce the transmission bandwidth and the storage space. This paper applies the integration of wavelet analysis and artificial neural network to image compression, discusses its performance in image compression theoretically, analyzes the multi-resolution analysis thought, constructs a wavelet neural network model used in the improved image compression and gives the corresponding algorithm. Only the weight of the output layer of the wavelet neural network needs training, while the weight of the input layer can be determined according to the relationship between the interval of the sampling points and the compactly-supported interval of the wavelet. Once determined, no further training is necessary; this accelerates the training of the wavelet neural network and solves the problem that the nodes of the hidden layer are difficult to determine in the traditional neural network. The computer simulation experiment shows that the algorithm of this paper has a better compression effect than the traditional neural network method.
Keywords: Image Compression, Wavelet Analysis, Artificial Neural Network
1. Introduction
Image compression is a technology which uses the minimum bit number to represent the image information with no or little distortion while ensuring the image quality. The process of image compression is to look for an appropriate encoding or transform method to reduce the data volume which can represent the image [1]. The starting point of compressing the image data volume is to reduce the redundant data used to represent the image. The image is stored in the machine in the form of a data matrix; therefore, a series of transforms is conducted on that data matrix to reduce the redundant part, and effective coding is then applied to the processed data to reduce the coding space. When reading the image in the follow-up phase, the original image is obtained after inverse transformation processing [2]. As a basic technology of image processing, image compression is involved in every link of image processing. At present, image compression plays an important role in satellite imagery, space exploration, teleconferencing and medical imaging, since the image has a large data volume and higher real-time requirements [3].
The research of image compression started from pulse code modulation (PCM), which was proposed for the transmission of television images in 1948. The research in the 1950s and 1960s was limited to the intraframe coding of the image. Starting from the late 1960s, orthogonal transform and other methods were brought forth and preliminary exploration was made on the interframe coding of the image (namely moving image coding). The year 1988 was a greatly significant year in the development of image compression coding, when the video compression standard H.261 and the framework principle of the still image compression standard JPEG were basically determined, and progress was made on the fractal and the neural network in image compression coding [4].
With an increasing demand for applications, the traditional compression methods have failed to meet the requirements of image processing in compression efficiency and effect; therefore, high-quality and high-efficiency smart image compression algorithms have become an emphasis and objective of international research. People have begun to break through the original coding theory and search for some new coding approaches to obtain a higher compression ratio and a better compression quality. There are mainly two research thoughts: one is to realize the existing compression algorithms with new technology with higher precision, and the other is to look for brand-new image compression theory, algorithms and corresponding realization technology [5].
This paper integrates wavelet theory and the artificial neural network (ANN), replaces the excitation function in the neural network with a wavelet function, and applies the advantages of multi-resolution analysis to the neural network so as to obtain a more flexible network design and better network performance. The resulting network has advantages such as large-scale parallel processing and distributed information storage, as well as excellent adaptivity, self-organization, fault tolerance, learning function and associative memory function. This paper firstly introduces the basic principle of image compression. Then it elaborates and integrates wavelet analysis and the ANN. In the wavelet neural network, training is only needed for the weight of the output layer, while that of the input layer can be determined according to the relationship between the interval of the sampling points and the compactly-supported interval of the wavelet. Once determined, no training is necessary; thus, it greatly accelerates the training speed of the wavelet neural network and solves the problem that the nodes in the hidden layer are difficult to determine in the traditional neural network. The final part is the experimental simulation and analysis.
2. Image Compression Mechanism
The purpose of digital image compression is to reduce the bit number necessary to represent the image and to represent the image more effectively, so as to facilitate image processing, storage and transmission. The compression of the time domain can accelerate the transmission of various information sources; more parallel operations can be opened in the existing main lines of communication through the compression of the frequency domain; the compression of the energy domain can reduce the transmitter efficiency; and the compression of the space domain can compress the data storage space. In the image data, there are plenty of redundancies, including space redundancy, structural redundancy, knowledge redundancy, information entropy redundancy and visual redundancy, which makes it possible to transform a large digital image file into a small digital image file, achieving the purpose of image compression through the reduction of redundant data. Reducing the image information redundancy by fully utilizing the visual characteristics of human eyes and the statistical characteristics of the image can ensure the image quality [6].
After processing an image with the process of Figure 1, the restored image is a lossy image with a certain compression. In the entire processing chain, compression is generated by the quantization process and the coding process, and the selection of quantization method and the quantization effect directly affect the final image compression result. The commonly-used quantization methods include scalar quantization, linear quantization, vector quantization, and the mixed quantization coding method in which different quantization methods are adopted by the low-frequency and high-frequency sub-bands. The frequently-used coding methods include Huffman coding, run-length coding, arithmetic coding and predictive coding suitable for still images [7].
[Figure 1 block diagram: Image → Transform → Quantization → Coding → Storage; then Storage → Decoding → Inverse quantization → Inverse transform → Image restoration]

Figure 1. Image encoding and decoding process
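The quantization stage described above can be illustrated with a minimal sketch. This is not the paper's own quantizer; it is a generic uniform scalar quantizer with an assumed step size, shown only to make the quantize/dequantize round trip of Figure 1 concrete:

```python
import numpy as np

def quantize(coeffs, step):
    """Uniform scalar quantization: map each transform coefficient
    to an integer index (the part that is then entropy-coded)."""
    return np.round(coeffs / step).astype(int)

def dequantize(indices, step):
    """Inverse quantization: reconstruct approximate coefficients."""
    return indices * step

coeffs = np.array([0.12, -3.47, 8.01, 0.49])   # e.g. transform coefficients
idx = quantize(coeffs, step=0.5)               # small integers, cheap to code
rec = dequantize(idx, step=0.5)                # lossy reconstruction
err = np.max(np.abs(coeffs - rec))             # bounded by step / 2
```

A smaller step gives better quality but a larger coding space, which is exactly the trade-off the quantization method selection controls.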
3. Wavelet Neural Network
3.1. Wavelet Analysis
A wavelet, namely a wave in a small region, is a special waveform with limited length and an average value of 0. It has two features. One is that it is small; in other words, it has compact support or approximate compact support in the time domain. The other is its alternately positive and negative volatility, namely that its direct-current component is 0.
3.1.1. Continuous Wavelet Transform (CWT)
Expand any function f(t) in the space L^2(R) in the wavelet basis; this is called the CWT of the function f(t), and the transform formula is:

$$ WT_f(a,b) = \langle f, \psi_{a,b} \rangle = \frac{1}{\sqrt{a}} \int_R f(t)\, \psi\!\left(\frac{t-b}{a}\right) dt \qquad (1) $$
If the admissibility condition of the wavelet is satisfied, its inverse transformation is:

$$ f(t) = \frac{1}{C_\psi} \int_R \int_R \frac{1}{a^2}\, WT_f(a,b)\, \psi_{a,b}(t)\, da\, db \qquad (2) $$
In this formula,

$$ C_\psi = \int_R \frac{|\hat{\psi}(w)|^2}{|w|}\, dw < \infty $$

is the admissibility condition of ψ(t).
We can see it this way: Fourier analysis decomposes signals into the overlapping of a series of sine waves with different frequencies; likewise, wavelet analysis decomposes signals into the overlapping of a series of wavelet functions. These wavelet functions are all obtained from one mother wavelet function after translation and scaling. Wavelet analysis is better than Fourier analysis in that it has excellent localization in both the time domain and the frequency domain. Besides, since a gradually-refined time-domain or frequency-domain sampling step-length is adopted in the high-frequency component, any detail of the object can be focused on [8].
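Equation (1) can be approximated numerically. The sketch below is an illustration, not the paper's code: it assumes the real Morlet-style wavelet cos(1.75x)e^(-x²/2) (the basis the paper later adopts in Eq. (7)) and replaces the integral with a Riemann sum, showing that the CWT responds strongly when the analyzed signal matches the wavelet's scale and position:

```python
import numpy as np

def psi(x):
    """Real Morlet-style mother wavelet: cos(1.75 x) * exp(-x^2 / 2)."""
    return np.cos(1.75 * x) * np.exp(-x ** 2 / 2)

def cwt(f_samples, t, a, b):
    """Approximate WT_f(a, b) = (1/sqrt(a)) * integral of f(t) psi((t - b)/a) dt
    with a Riemann sum on the uniform sample grid t."""
    dt = t[1] - t[0]
    return np.sum(f_samples * psi((t - b) / a)) * dt / np.sqrt(a)

# Test signal oscillating at the wavelet's centre frequency, localized at t = 0:
t = np.linspace(-20.0, 20.0, 4001)
f = np.cos(1.75 * t) * np.exp(-t ** 2 / 50)

w_match = cwt(f, t, a=1.0, b=0.0)    # scale and shift aligned with the signal
w_off = cwt(f, t, a=1.0, b=10.0)     # shifted away from the signal energy
```

The aligned coefficient is large while the shifted one is small, which is the time-domain localization property discussed above.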
3.1.2. Discrete Wavelet Transform
The image information is stored in the computer in the form of discrete signals, so the continuous wavelet transform needs to be discretized.
(i) Discretization of Scale and Translation

Discretize the scale factor a and the translation factor b of the continuous wavelet basis function ψ_{a,b}(t) to obtain the discrete wavelet transform WT_f(a,b), which reduces the redundancy of the wavelet transform coefficients. Discretize the scale factor a and the translation factor b in a power series, namely a = a_0^m, b = n b_0 a_0^m (m is an integer, a_0 ≠ 1, but normally it is assumed that a_0 > 1), and get the following discrete wavelet function:

$$ \psi_{m,n}(t) = \frac{1}{\sqrt{a_0^m}}\, \psi\!\left(\frac{t - n b_0 a_0^m}{a_0^m}\right) = \frac{1}{\sqrt{a_0^m}}\, \psi\!\left(a_0^{-m} t - n b_0\right) \qquad (3) $$
Its corresponding coefficient is:

$$ C_{m,n} = \langle f(t), \psi_{m,n}(t) \rangle = \int_R f(t)\, \psi_{m,n}^{*}(t)\, dt \qquad (4) $$
(ii) Binary Wavelet Transform

The binary wavelet transform is a special discrete wavelet transform. Assume a_0 = 2, b_0 = 1, and

$$ \psi_{m,n}(t) = 2^{-m/2}\, \psi\!\left(2^{-m} t - n\right) $$

The discrete wavelet transform is:

$$ WT_f(m,n) = \langle f, \psi_{m,n} \rangle = \int_R f(t)\, \psi_{m,n}^{*}(t)\, dt \qquad (5) $$

The discrete binary wavelet transform is:

$$ WT_f(m,n) = 2^{-m/2} \int_R f(t)\, \psi^{*}\!\left(2^{-m} t - n\right) dt \qquad (6) $$
3.1.3. Multi-resolution Analysis
The concept of multi-resolution analysis was proposed by Mallat when constructing the orthogonal wavelet basis; it explains the multi-resolution property of the wavelet from the concept of space and unifies all the previous construction methods of the orthogonal wavelet basis. The role of the Mallat algorithm in wavelet analysis is equal to the role of the fast Fourier transform in classic Fourier analysis. Multi-resolution analysis can be vividly expressed as a group of nested multi-resolution sub-spaces [9]; see Figure 2.
Figure 2. Nested multi-resolution sub-spaces
Assume that the frequency space of the original signal is V_0. It is decomposed into two sub-spaces, the low-frequency V_1 and the high-frequency W_1, after the first level of decomposition, and V_1 is decomposed into the low-frequency V_2 and the high-frequency W_2 after the second level of decomposition. The decomposition process of such sub-spaces can be recorded as:

$$ V_0 = V_1 \oplus W_1,\quad V_1 = V_2 \oplus W_2,\quad V_2 = V_3 \oplus W_3,\quad \ldots,\quad V_{N-1} = V_N \oplus W_N $$
Here, the symbol ⊕ refers to the orthogonal sum of two sub-spaces; V_j is the multi-resolution analysis sub-space corresponding to resolution 2^j; and the vector space W_j, constituted by the dilation and translation of the wavelet function corresponding to the scaling function, is the orthogonal complementary space of V_j. Every W_j reflects the high-frequency sub-space of the V_{j-1} space (the signal details), and V_j reflects the low-frequency sub-space of the V_{j-1} space (the signal approximation). The following characteristic of the sub-spaces can be obtained from the discrete wavelet frame:

$$ V_0 = V_1 \oplus W_1 = V_2 \oplus W_2 \oplus W_1 = \cdots = V_N \oplus W_N \oplus W_{N-1} \oplus \cdots \oplus W_1 $$
This result demonstrates that a limited number of sub-spaces can approximate the multi-resolution analysis sub-space V_0 with a resolution of 2^0 = 1.
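The nested decomposition V_0 = V_N ⊕ W_N ⊕ … ⊕ W_1 can be made concrete with the simplest orthogonal wavelet. The sketch below is an illustration using Haar filters (chosen for brevity; the paper's network uses the Morlet basis): it performs a two-level Mallat decomposition and verifies perfect reconstruction:

```python
import numpy as np

def haar_step(v):
    """One level of Haar analysis: split V_j into the approximation V_{j+1}
    and the detail W_{j+1} (orthonormal filters)."""
    s = np.sqrt(2.0)
    return (v[0::2] + v[1::2]) / s, (v[0::2] - v[1::2]) / s

def haar_inverse(approx, detail):
    """Invert one Haar step, merging V_{j+1} and W_{j+1} back into V_j."""
    s = np.sqrt(2.0)
    v = np.empty(2 * approx.size)
    v[0::2] = (approx + detail) / s
    v[1::2] = (approx - detail) / s
    return v

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])   # signal in V0
v1, w1 = haar_step(x)      # V0 = V1 (+) W1
v2, w2 = haar_step(v1)     # V1 = V2 (+) W2, so V0 = V2 (+) W2 (+) W1
x_rec = haar_inverse(haar_inverse(v2, w2), w1)   # perfect reconstruction
```

Because the sum is orthogonal, the signal energy is preserved across v2, w2 and w1, mirroring the ⊕ decomposition above.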
3.2. Artificial Neural Network
Artificial neural network (ANN) is a complicated network system extensively interconnected by a large number of simple processing units similar to neurons. It was proposed on the basis of the research results of modern parallel neurology. It reflects some characteristics of the human brain; however, it is not an actual description of the neural network but its simplification, abstraction and simulation. It presents learning, summarization and classification features similar to the human brain through adjustments of the interconnection strength. Therefore, the fundamental objective of neural network research is to explore the mechanism by which the human brain processes, stores and searches information, so as to search for the possibility of applying this principle to various kinds of signal processing [10]. The principle of the artificial neural network is shown in Figure 3.

[Figure 3 diagram: a three-layer network with input layer X, weighted connections, thresholded hidden layer units, and output layer Y]

Figure 3. Principle of artificial neural network
ANN is a non-linear and self-adaptive information processing system interconnected by many processing units. It was raised based on the research results of modern neurology, and it processes information by simulating the way the brain's neural network processes and memorizes information.
The artificial neural network has the following four basic characteristics:
(i) Non-linearity. The non-linear relationship is the general characteristic of the natural world, and the brain's wisdom is a non-linear phenomenon. An artificial neuron is either in the activation or the suppression state, which is a kind of non-linear relationship mathematically. The network formed by neurons with thresholds has better performance, which can enhance the fault tolerance and storage capacity.
(ii) Non-limitation. A neural network is usually extensively interconnected by many neurons. The overall behavior of a system not only depends on the characteristics of a single neuron, but may also be determined by the interaction and interconnection of the main units. ANN simulates the non-limitation of the brain through the various interconnections of the units. Associative memory is a typical example of non-limitation.
(iii) Non-qualitation. ANN has self-adaptive, self-organization and self-learning capacities. A neural network can not only process information with various changes, but the non-linear dynamic system itself also changes continuously during the information processing. The iteration process is frequently adopted to describe the evolution process of the dynamic system.
(iv) Non-convexity. The evolution direction of a system depends on a certain specific state function under a certain condition. For example, the extremum of the energy function corresponds to the stable state of the system. Non-convexity means that such a function has several extremums; therefore, the system has several stable equilibrium states, which results in the diversity of system evolution [11].
3.3. Wavelet Neural Network Mechanism
As a newly-emerging mathematical modeling analysis method, the wavelet neural network is a substitute for the feedforward neural network to approximate any function transform. Its basic thought is to use the waveron to replace the neuron and to build a connection between the wavelet transform and the neural network through consistent and approximate wavelet decomposition. It is formed by integrating the latest-developed wavelet transform, with its excellent time-frequency localization, and the self-learning function of the traditional artificial neural network. The series obtained from translation and scale changes after wavelet decomposition have the property and classification characteristics of the common approximating functions of wavelet decomposition. Additionally, it introduces two new parameters, namely the scale factor and the translation factor, which give it a more flexible and effective function approximation capability, stronger pattern recognition ability and fault tolerance. See the network structure of the wavelet neural network and the excitation functions adopted in the various layers (Figure 4); the excitation function can be the Sigmoid function [12].
Figure 4. Multi-input wavelet network
The network structure and the expression are basically the same as those of the BP network; that is to say, it is formed by three layers: the input layer, the hidden layer and the output layer. The difference is that the excitation function of the neuron in the hidden layer of the BP network is the Sigmoid function f(x) = 1 / (1 + e^{-x}), while the wavelet network uses the wavelet function ψ(t), which meets the admissibility condition, as the excitation function. The specific form of ψ(t) can be chosen according to the actual requirements. The commonly-seen excitation functions in the output layer include the Sigmoid function and the linear Purelin function.
4. Establishment of the Image Compression Algorithm Based on Wavelet Neural Network
This paper initializes the parameters of the neural network with the Morlet wavelet; the other types of wavelet networks follow the same parameter setting steps except for different time-frequency parameters. The expression of the Morlet wavelet basis function is:

$$ \psi(x) = \cos(1.75 x)\, e^{-x^2/2} \qquad (7) $$
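Equation (7) is straightforward to evaluate. The sketch below (illustrative only; the function name is ours) checks the two wavelet features mentioned in Section 3.1: oscillation with sign changes, and the rapid Gaussian decay that gives approximate compact support:

```python
import numpy as np

def morlet(x):
    """Morlet wavelet basis of Eq. (7): psi(x) = cos(1.75 x) * exp(-x^2 / 2)."""
    return np.cos(1.75 * x) * np.exp(-(x ** 2) / 2.0)

peak = morlet(0.0)                     # 1.0 at the origin
tail = abs(morlet(5.0))                # essentially zero a few units away
sign_flip = morlet(0.0) * morlet(1.8)  # psi changes sign near pi/1.75 ~ 1.8
```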
Assume that the number of neurons in the hidden layer of the three-layered neural network is M, the number of nodes in the input layer is L, and the number of neurons in the output layer is N; w_ji is the connective weight from the jth neuron in the hidden layer to the ith neuron in the input layer, w_kj is the connective weight from the kth neuron in the output layer to the jth neuron in the hidden layer, and ψ((x − b_j)/a_j) is the excitation of the net output of the jth neuron in the hidden layer. Firstly, initialize w_ji according to the following steps:
(1) Firstly, take random numbers uniformly distributed in the range [-1, 1] as the initial setting values of w_ji;
(2) Then normalize w_ji by row:

$$ w_{ji} = \frac{w_{ji}}{\sqrt{\sum_{i=1}^{L} w_{ji}^2}} \quad (j = 1, 2, \ldots, M) \qquad (8) $$
(3) Then multiply by a factor corresponding to the number of nodes L in the input layer, the number of neurons in the hidden layer M and the transfer function:

$$ w_{ji} = C\, M^{1/L}\, w_{ji} \quad (j = 1, 2, \ldots, M) \qquad (9) $$
In this formula, C is a constant related to the transfer function in the hidden layer. The valuation of C is very important to the network. After several learning practices, the appropriate value for the Morlet neural network is between 1.9 and 2.1.
(4) Finally, associate with the training samples. Assume that the maximum value and the minimum value of the input sample of the ith neuron in the input layer are x_i^max and x_i^min respectively; then:

$$ w_{ji} = \frac{2 w_{ji}}{x_i^{\max} - x_i^{\min}} \quad (j = 1, 2, \ldots, M) \qquad (10) $$
The w_ji obtained from the above steps is the initial weight from the input layer to the hidden layer.
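Steps (1)–(4), i.e. Eqs. (8)–(10), can be sketched as follows. This is a hedged illustration, not the paper's code: the function name, the seeding and the vectorized layout are our assumptions:

```python
import numpy as np

def init_input_weights(L, M, x_min, x_max, C=2.0, seed=None):
    """Initialize the input-to-hidden weights w_ji per Eqs. (8)-(10).
    L: input nodes, M: hidden waverons, x_min/x_max: per-input sample
    ranges, C: transfer-function constant (1.9-2.1 for Morlet)."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(-1.0, 1.0, size=(M, L))             # step (1): uniform in [-1, 1]
    w /= np.linalg.norm(w, axis=1, keepdims=True)       # step (2), Eq. (8): row-normalize
    w *= C * M ** (1.0 / L)                             # step (3), Eq. (9): scale factor
    w *= 2.0 / (np.asarray(x_max) - np.asarray(x_min))  # step (4), Eq. (10): sample range
    return w

w = init_input_weights(L=4, M=8, x_min=np.zeros(4), x_max=np.ones(4), seed=0)
```

With unit input ranges, every row of the result has norm C · M^(1/L) · 2, which makes the effect of each scaling step easy to verify.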
After initializing w_ji, the initial setting of the scale and translation parameters of the wavelet is also very important to the network convergence. It can usually be divided into two circumstances:
(1) The number of nodes in the input layer is 1. Take the same value for the scale parameter a of every waveron, and set the translation parameter b_j = (j − 1) S / M (j = 1, 2, …, M). In this formula, S is the number of training samples, and M is the number of neurons where the excitation function is located.
(2) The number of nodes in the input layer is greater than 1. It can be known from the basic wavelet theory that if the time-domain center of the mother wavelet is t* and the radius is Δ, then the concentrated time-domain area of the translated and scaled wavelet is:

$$ \left[\, b + a t^{*} - a\Delta,\; b + a t^{*} + a\Delta \,\right] \qquad (11) $$
In order to make the scaled wavelet cover the entire range of the input vector, the initial setting of the scale and translation parameters should satisfy the following formulas:

$$ b_j + a_j t^{*} - a_j \Delta \le \sum_{i=1}^{L} w_{ji} x_i^{\min}, \qquad b_j + a_j t^{*} + a_j \Delta \ge \sum_{i=1}^{L} w_{ji} x_i^{\max} \qquad (12) $$
From the above formulas we can obtain:

$$ a_j = \frac{\sum_{i=1}^{L} w_{ji} x_i^{\max} - \sum_{i=1}^{L} w_{ji} x_i^{\min}}{2\Delta}, \qquad b_j = \frac{\sum_{i=1}^{L} w_{ji} x_i^{\max} + \sum_{i=1}^{L} w_{ji} x_i^{\min}}{2} - a_j t^{*} \qquad (13) $$
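Equation (13) translates directly into code. In this sketch (illustrative; t_star and delta stand for the computed time-domain centre and radius of the chosen mother wavelet, taken here as given inputs):

```python
import numpy as np

def init_scale_translation(w, x_min, x_max, t_star, delta):
    """Initial scale a_j and translation b_j per Eq. (13): w is the M x L
    input-weight matrix, x_min/x_max the per-input sample ranges, and
    t_star/delta the mother wavelet's time-domain centre and radius."""
    s_min = w @ np.asarray(x_min, dtype=float)   # sum_i w_ji x_i^min
    s_max = w @ np.asarray(x_max, dtype=float)   # sum_i w_ji x_i^max
    a = (s_max - s_min) / (2.0 * delta)          # scale spans the input range
    b = (s_max + s_min) / 2.0 - a * t_star       # translation centres the window
    return a, b

w = np.array([[1.0, 1.0], [2.0, 0.0]])
a, b = init_scale_translation(w, x_min=[0.0, 0.0], x_max=[1.0, 1.0],
                              t_star=0.0, delta=1.0)
```

Substituting a_j and b_j back into Eq. (12) shows that the two boundary inequalities hold with equality, i.e. the wavelet window exactly covers the weighted input range.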
The above formula requires the time-domain center and radius of the mother wavelet, which can be obtained through calculation.
The connective weight w_kj from the kth neuron in the output layer to the jth neuron in the hidden layer can be initialized through the following method:
(1) Firstly, take random numbers uniformly distributed in the range [-1, 1] as the initial setting values of w_kj;
(2) Then normalize w_kj:

$$ w_{kj} = \frac{w_{kj}}{\sqrt{\sum_{j=1}^{M} w_{kj}^2}} \quad (k = 1, 2, \ldots, N) \qquad (14) $$
The image compression workflow of the wavelet neural network is shown in Figure 5.
Figure 5. Image compression workflow of vector quantization of the wavelet neural network
[Figure 5 flowchart: Start → divide the image into k × k sub-blocks and generate training vectors → generate the initial code through the random coding method as the initial network weight, and set the cycle index T of the wavelet neural network, the initial neighborhood and the initial learning rate → input a training vector and obtain the corresponding winning neuron → correct the weight vector → update the neighborhood and learning rate → repeat (t = t + 1) until the maximum cycle index T is achieved, and (i = i + 1) until all the image sub-blocks have been trained → after the training is over, get the result code → input the test vector, get the index, look up the code and reconstruct the image.]
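The loop of Figure 5 is essentially competitive-learning vector quantization. The sketch below is our interpretation of the flowchart, with assumed function names and a plain nearest-neighbour winner rule; the paper's network additionally uses waveron excitations and neighborhood updates:

```python
import numpy as np

def train_vq_codebook(blocks, n_codes, epochs=20, lr0=0.5, seed=None):
    """Competitive-learning VQ sketch of the Figure 5 loop: each k x k image
    block is a training vector; the winning code vector moves toward the
    input while the learning rate decays over the training cycles."""
    rng = np.random.default_rng(seed)
    codebook = blocks[rng.choice(len(blocks), n_codes, replace=False)].copy()
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                        # decaying learning rate
        for x in blocks:
            winner = np.argmin(np.sum((codebook - x) ** 2, axis=1))
            codebook[winner] += lr * (x - codebook[winner])  # pull winner toward x
    return codebook

def compress(blocks, codebook):
    """Replace every block by the index of its nearest code vector."""
    return np.array([int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))
                     for x in blocks])

# Two well-separated synthetic "block" clusters stand in for image sub-blocks:
rng = np.random.default_rng(1)
blocks = np.vstack([rng.normal(0.0, 0.1, (50, 4)), rng.normal(5.0, 0.1, (50, 4))])
codebook = train_vq_codebook(blocks, n_codes=2, seed=1)
indices = compress(blocks, codebook)
```

Compression comes from storing only the small index per block plus the codebook, and reconstruction looks each index up in the codebook, matching the last two boxes of the flowchart.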
5. Experimental Simulation and Analysis
With "pout girl" as the original image, the image compression is realized by using the BP neural network and the method of this paper in the MATLAB environment, and the effects of the reconstructed images are indicated in Figure 6.
(a) Original image    (b) BP neural network    (c) Wavelet neural network

Figure 6. Simulation results of image compression with different algorithms
Figure 7. Wavelet neural network training error curve
It can be seen from the above figures that, compared with the original image, the reconstructed image of Figure 6(b) has a bad visual effect and obvious distortion; the image edge in particular shows large distortion and is more blurred. Besides, the BP neural network training time increases, and the higher compression ratio is obtained at the sacrifice of training time, which directly leads to a decrease in real-time performance. However, only the weight of the output layer in the wavelet neural network needs to be trained, while the weight of the input layer can be determined by the relationship between the interval of the sampling points and the compactly-supported interval of the wavelet. Once determined, the training speed of the wavelet neural network can be greatly accelerated. From Figure 6(c), it can be seen that the image adopting the wavelet neural network is very clear, its details are more profound, and it is very close to the original image.
6. Conclusion
With the increases of image pixels and the transmission rate, image compression technology has become one of the bottleneck technologies in image processing. This paper has realized the application of the wavelet neural network in image compression and effectively enhanced the compression ability of image data. The comparison with the traditional neural network training has demonstrated that the algorithm of this paper has better compression efficiency and effects.
References
[1] Ehsan OS. An Algorithm for Real Time Blind Image Quality Comparison and Assessment. International Journal of Electrical and Computer Engineering (IJECE). 2012; 2(1): 120-129.
[2] Wei F, Wenxing B. An Improved Technology of Remote Sensing Image Fusion Based Wavelet Packet and Pulse Coupled Neural Net. TELKOMNIKA Indonesian Journal of Electrical Engineering. 2012; 10(3): 551-556.
[3] Mario A. Rodríguez D, Hermilo SC. Refined Fixed Double Pass Binary Object Classification for Document Image Compression. Digital Signal Processing. 2014; 30(7): 114-130.
[4] Kartik S, Ratan KB, Amitabha C. Image Compression Based on Block Truncation Coding using Clifford Algebra. Procedia Technology. 2013; 10(3): 699-706.
[5] G Rosline N, S Maruthuperumal. Normalized Image Watermarking Scheme using Chaotic System. International Journal of Information and Network Security (IJINS). 2012; 1(4): 255-264.
[6] A Alfalou, C Brosseau, N Abdallah. Simultaneous Compression and Encryption of Color Video Images. Optics Communications. 2015; 338(1): 371-379.
[7] Roman S. New Simple and Efficient Color Space Transformations for Lossless Image Compression. Journal of Visual Communication and Image Representation. 2014; 25(5): 1056-1063.
[8] Hamid T, Aref M. Wavelet Neural Network Applied for Prognostication of Contact Pressure between Soil and Driving Wheel. Information Processing in Agriculture. 2014; 1(1): 51-56.
[9] Bhargav V, Biswarup D, Rudra P, et al. An Improved Scheme for Identifying Fault Zone in a Series Compensated Transmission Line using Undecimated Wavelet Transform and Chebyshev Neural Network. International Journal of Electrical Power & Energy Systems. 2014; 63(12): 760-768.
[10] Yashar F, Narges P, Yuk FH, et al. Estimating Evapotranspiration from Temperature and Wind Speed Data using Artificial and Wavelet Neural Networks (WNNs). Agricultural Water Management. 2014; 140(7): 26-36.
[11] Khaled D, Tarek AT. Speaker Identification using Vowels Features through a Combined Method of Formants, Wavelets, and Neural Network Classifiers. Applied Soft Computing. 2015; 27(2): 231-239.
[12] Majid J, Abul K, AQ Ansari, et al. Generalized Neural Network and Wavelet Transform Based Approach for Fault Location Estimation of a Transmission Line. Applied Soft Computing. 2014; 19(6): 322-332.