TELKOMNIKA, Vol. 13, No. 1, March 2015, pp. 137~145
ISSN: 1693-6930, accredited A by DIKTI, Decree No: 58/DIKTI/Kep/2013
DOI: 10.12928/TELKOMNIKA.v13i1.1270
Received October 26, 2014; Revised December 9, 2014; Accepted January 7, 2015
An Image Compression Scheme Based on Fuzzy Neural Network

Bo Wang*, Yubin Gao
School of Computer and Control Engineering, North University of China, Taiyuan 030051, Shanxi, China
*Corresponding author, e-mail: 22599870@qq.com
Abstract
Image compression technology compresses the redundancy between pixels, exploiting the correlation of image pixels to reduce the transmission bandwidth and storage space. A fuzzy neural network effectively integrates neural network technology and fuzzy technology; it combines learning, self-adaptivity, imagination and recognition, and uses rule-based reasoning and fuzzy information processing in its nodes, thus greatly improving the transparency of the fuzzy neural network. This paper mainly investigates the applications of fuzzy neural networks in image compression and realizes image compression and reconstruction with a fuzzy neural network. The simulation experiment demonstrates that the image compression algorithm based on the fuzzy neural network has significant advantages in training speed, compression quality and robustness.
Keywords: image compression, fuzzy theory, neural network
1. Introduction
Image compression technology refers to methods that use as few bits as possible to represent the image signal from the signal source, reducing the resources consumed by the image data, such as frequency bandwidth, storage space and transmission time, so as to transmit and store the image signals efficiently [1]. The main purpose of image compression is to eliminate the redundant information of images, including encoding redundancy, redundancy between pixels and psychological visual redundancy [2].
In past decades, studies on image compression have developed rapidly; many effective algorithms have appeared, and compression standards such as JPEG, JPEG2000 and MPEG have been formed. In order to further compress images, we can start from two aspects. One is the use of the features of the vision system, with the human eye as the "final consumer" of the image information. Image compression based on human visual characteristics has increasingly become a popular research topic, and because of the complexity of the visual system, there still exist many unknown areas to be explored in this field. The second is the development of new compression tools and more intelligent algorithms. Owing to its excellent performance, there is still much room for the artificial neural network to be applied in the field of image compression [3].
Neural networks are involved in almost every aspect of image processing, such as image segmentation, image enhancement, image pattern recognition, image restoration and image compression. Almost every kind of neural network can be directly or indirectly applied to image compression, and the range of applications includes all sorts of lossy coding methods or some key steps of these methods. What most neural network models realize is a mathematical mapping from the input space to the output space. The isomorphism between image compression coding and neural networks in their mathematical nature determines that the neural network must have extensive application in the image compression coding field [4].
This paper first studies the theoretical basis and algorithm of image compression, then constructs the fuzzy neural network model and the implementation steps of the algorithm, and finally conducts image compression and reconstruction through experimental simulation under the Matlab environment. By comparative analysis, the algorithm in this paper has good convergence speed and high accuracy, effectively avoiding the defects of image compression algorithms based on traditional neural networks and improving the compression performance and the subjective quality of the reconstructed image.
2. Basic Theory of Image Compression
2.1. Image compression mechanism
Because there is usually a large amount of data redundancy in a digital image, it is possible to compress it to reduce the data representing the image, thus making it convenient to store and transmit. A digital image is a two-dimensional function that has been sampled and quantified, and usually a two-dimensional real matrix is used to represent an image. Take the sampling of the image $f(x, y)$ for example: sample $M$ and $N$ times in the horizontal and vertical directions respectively, arrange these data into one matrix according to the relative positions of the sampling points, and quantify each element so as to get a numerical matrix. This matrix can be used to replace the function $f(x, y)$; that is to say, the digital image can be expressed by a matrix. A matrix element is called a digital image pixel or pixel element. The representation form is as shown in formula (1):
$$
f(x,y) \;\xrightarrow{\text{Sampling}}\;
\begin{bmatrix}
f(x_0,y_0) & f(x_0,y_1) & \cdots & f(x_0,y_{N-1}) \\
f(x_1,y_0) & f(x_1,y_1) & \cdots & f(x_1,y_{N-1}) \\
\vdots & \vdots & \ddots & \vdots \\
f(x_{M-1},y_0) & f(x_{M-1},y_1) & \cdots & f(x_{M-1},y_{N-1})
\end{bmatrix}
= [f(i,j)]_{M \times N}
\;\xrightarrow{\text{Quantification}}\;
[f_l(i,j)]_{M \times N} \qquad (1)
$$
In which, $f_l(i,j)$ represents the quantified pixel value. If the sampling points are $M, N$ and the quantified level is $Q = 2^n$, the number of bits required to store a digital image is:

$$B = M \times N \times n \qquad (2)$$
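As a quick check of formula (2), the storage cost of an uncompressed image follows directly; a minimal sketch, where the 512 × 512 size and 256 gray levels are illustrative values, not figures from the paper:

```python
def storage_bits(M, N, n):
    """Bits required to store an M x N image quantified to 2**n levels (formula 2)."""
    return M * N * n

# A 512 x 512 image with 256 = 2**8 gray levels:
bits = storage_bits(512, 512, 8)
print(bits)       # 2097152 bits
print(bits // 8)  # 262144 bytes
```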
Image compression handles this numerical matrix and represents it with fewer data. Different amounts of data can represent images in different ways, because different representations produce data redundancy to different degrees, and the purpose of image compression is to reduce these redundancies as much as possible. As usual, there exist three basic kinds of data redundancy for the digital image: pixel redundancy, psychovisual redundancy and encoding redundancy. When one or more of these three types of redundancy is reduced or eliminated, image data compression is realized [5].
2.2. Image compression problem analysis
For images, if quick or real-time transmission and large storage are required, the image data should be compressed. Under equal communication capacity, if the image data is transmitted after compression, more image information will be transmitted and the communication ability will be enhanced. Image compression research seeks a high compression ratio while ensuring that the compressed image has an appropriate signal-to-noise ratio, that the original signal can be restored after compressed transmission, and, besides, that during the compression, transmission and restoration process the image distortion remains small, so that images can be classified and recognized easily.
The reason why images can be compressed lies in the fact that the data volume of the original image is much greater than the effective information amount it offers. That is to say, the original image data file contains a large amount of redundant and irrelevant information. If $D$ is used to represent the data volume and $d_u$ the redundancy amount, the effective information amount $I$ provided by the image is:

$$I = D - d_u \qquad (3)$$
Usually the original image data volume is fixed, while the redundancy varies according to the purpose for which the image is used. The difference between "redundant information" and "irrelevant information" can be simply understood in this way: redundant information is the
recurring information, and its deletion exerts no loss on the original information; the deletion of irrelevant information, by contrast, exerts some influence on the original information but does not affect the understanding of the information content under limiting conditions. Usually there is no need to accurately distinguish these two concepts. An algorithm deleting only the redundant information is called "lossless compression", and it can fully restore the original file from the compressed file, but the lossless compression algorithm achieves a lower compression ratio. An algorithm deleting irrelevant information is called "lossy compression", and it can only approximately restore the original file from the compressed file, but its compression ratio is much higher. Lossy compression can be adopted for most images according to the "fidelity" rule, and lossless compression is only applied under some special conditions. Therefore, strategies shall be properly selected in line with the goal so as to win the largest benefit [6].
If $n_1$ is used to represent the original data amount of one image and $n_2$ the compressed data amount, the compression ratio $C_r$ is defined as:

$$C_r = n_1 / n_2 \qquad (4)$$
The redundancy amount $R_d$ can be expressed as:

$$R_d = 1 - 1/C_r \qquad (5)$$
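Formulas (4) and (5) can be checked with a small sketch; the bit counts below are illustrative, not measurements from the paper:

```python
def compression_ratio(n1, n2):
    """Compression ratio C_r = n1 / n2 (formula 4)."""
    return n1 / n2

def redundancy(n1, n2):
    """Relative redundancy R_d = 1 - 1/C_r (formula 5)."""
    return 1.0 - 1.0 / compression_ratio(n1, n2)

# e.g. 2097152 bits compressed to 262144 bits:
print(compression_ratio(2097152, 262144))  # 8.0
print(redundancy(2097152, 262144))         # 0.875
```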
From a technical point of view, there are two ways to compress images. One is to reduce the total data amount being transmitted or stored by decreasing the redundancy generated by the correlation of image pixels, such as using the discrete cosine transform (DCT) to reduce the correlation among data and then keeping the main components, thus reducing the amount of data. The other is to determine appropriate coding schemes according to the dynamic range of the data and its occurrence frequency, and to realize the compression by properly arranging the coding bits occupied by different data so as to reduce the overall bits required [7].
3. Fuzzy Neural Network
3.1. Introduction to fuzzy neural network
3.1.1. Fuzzy neural network mechanism
A fuzzy neural network handles samples fuzzily, and the samples should be constantly transformed into regular forms in the handling process. The mapping relation between the input and output amounts is represented by subordinate (membership) functions. The input indicator vector is $x = [x_1, x_2, \ldots, x_n]^T$, where $x_i$ stands for a fuzzy variable. Suppose $T(x_i) = (A_i^1, A_i^2, \ldots, A_i^{m_i})$, $i = 1, 2, \ldots, n$, in which $A_i^j$ $(j = 1, 2, \ldots, m_i)$ is the $j$-th value of $x_i$; suppose $A_i^j$ is a certain fuzzy set of the domain $U_i$, and $\mu_{A_i^j}(x_i)$ $(i = 1, 2, \ldots, n;\ j = 1, 2, \ldots, m_i)$ is the membership function of $x_i$ belonging to $A_i^j$. The output variable is $T(y) = (B^1, B^2, \ldots, B^m)$, in which $y = B^j$, $j = 1, 2, \ldots, m$, is the $j$-th value of $y$; define the fuzzy set of the domain $U_y$ as $B^j$, and $\mu_{B^j}(y)$ is the membership function of $y$ belonging to $B^j$ [8].
3.1.2. Fuzzy neuron
Like an ordinary artificial neural network, a fuzzy neural network is usually composed of a large number of fuzzy or non-fuzzy neurons that connect together according to a certain topological structure. Here are three basic types of fuzzy neurons.
(1) Fuzzified neuron
The fuzzified neuron is a kind of neuron which can quantify or standardize the observation or input values. In simple terms, the role of the fuzzified neuron lies in transforming input values into fuzzy values:
$$y = F(x) \qquad (6)$$
The input value $x$ can be either discrete or continuous, definite or fuzzy. The output value is the value of the membership function $F(\cdot)$ of one certain fuzzy set.
(2) Defuzzification neuron
The defuzzification neuron is the information processing unit that can transmit the output result represented by a "distribution value" in the form of a "fixed value". The input-output relationship represented by the defuzzification is:

$$y = \varphi(x_1, x_2, \ldots, x_n) \qquad (7)$$
In which $\varphi$ is the defuzzification function. Common defuzzification methods include the maximum-taking method and the centroid-taking method. The maximum-taking method extracts the point value of the "distribution value" function at its maximum point as the definite output value. The centroid-taking method extracts the value of the "distribution value" function at the centroid point as the definite value [9].
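Both defuzzification methods are easy to state in code. This is a minimal sketch assuming the "distribution value" is given discretely as point/membership pairs; the sample values are illustrative:

```python
def defuzzify_max(values, memberships):
    """Maximum-taking method: return the point where membership peaks."""
    return max(zip(memberships, values))[1]

def defuzzify_centroid(values, memberships):
    """Centroid-taking method: membership-weighted average of the points."""
    return sum(v * m for v, m in zip(values, memberships)) / sum(memberships)

vals = [1.0, 2.0, 3.0, 4.0]
mems = [0.1, 0.6, 0.9, 0.4]
print(defuzzify_max(vals, mems))       # 3.0 (membership peaks at 0.9)
print(defuzzify_centroid(vals, mems))  # 2.8
```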
(3) Fuzzy logic neuron
The fuzzy logic neuron is the most important and most commonly used fuzzy neuron, and its input-output relationship is:

$$u = I(x, w), \qquad y = f(u - \theta) \qquad (8)$$
In which, $x = (x_1, x_2, \ldots, x_N) \in [0,1]^N$ is the neuron input, $w = (w_1, w_2, \ldots, w_N) \in [0,1]^N$ is the neuron connection weight, $u$ is the neuron inner state, $y$ is the neuron output, $\theta$ is the neuron threshold value, $f$ is the monotonically increasing output function, and $I$ is the fuzzy logic function or fuzzy integrated function, whose specific form is determined by actual conditions and needs. For example, all the following functions can be taken as its specific expression form:
· Weighted sum function: $u = \sum_i w_i x_i$.
· Integration by taking small first and then large: $u = \vee_i (w_i \wedge x_i)$.
· Integration by quadrature first and then taking large: $u = \vee_i (w_i \cdot x_i)$.
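The integration forms listed above can be written out directly; a minimal sketch with illustrative weight and input values:

```python
def weighted_sum(w, x):
    """u = sum_i w_i * x_i"""
    return sum(wi * xi for wi, xi in zip(w, x))

def max_min(w, x):
    """u = OR_i (w_i AND x_i) = max_i min(w_i, x_i)."""
    return max(min(wi, xi) for wi, xi in zip(w, x))

def max_product(w, x):
    """u = max_i (w_i * x_i)."""
    return max(wi * xi for wi, xi in zip(w, x))

w = [0.2, 0.9, 0.5]
x = [0.8, 0.3, 0.6]
print(round(weighted_sum(w, x), 2))  # 0.73
print(max_min(w, x))                 # 0.5
print(round(max_product(w, x), 2))   # 0.3
```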
4. Establishment of the Image Compression Algorithm Based on Fuzzy Neural Network
4.1. Fuzzy neural network structure
The typical fuzzy neural network structure is as shown in Figure 1:
(1) The first layer is the input layer: the input layer nodes in the fuzzy neural network are the entrance of the fuzzy information, transmitting the input information to the next layer. Each node in this layer respectively represents one component of the input information $x_i$ $(i = 1, 2, \ldots, n)$; therefore, the number of nodes in the input layer is determined by the dimension of the input information, that is, $N_1 = n$.
(2) The second layer is the fuzzification layer: the role this layer plays in the entire network is to calculate the membership functions $\mu_i^j$ $(i = 1, 2, \ldots, n;\ j = 1, 2, \ldots, m_i)$, in which $n$ is the dimension of the input amount. Each component has its corresponding nodes; that is to say, the node number $m_i$ corresponds to the number of fuzzy classifications of $x_i$. The commonly used Gaussian membership function is represented as $\mu_i^j = \exp(-((x_i - c_{ij})/\sigma_{ij})^2)$, in which $c_{ij}$ and $\sigma_{ij}$ are the center and width of the membership function respectively. The total node number of this layer is $N_2 = \sum_{i=1}^{n} m_i$.
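The Gaussian membership calculation of this layer is straightforward to sketch; the center and width values below are illustrative assumptions, not trained parameters from the paper:

```python
import math

def gaussian_membership(x, c, sigma):
    """Gaussian membership function exp(-((x - c) / sigma)**2)."""
    return math.exp(-((x - c) / sigma) ** 2)

# Membership is 1 at the center and decays away from it:
print(gaussian_membership(0.5, c=0.5, sigma=0.2))  # 1.0
print(gaussian_membership(0.9, c=0.5, sigma=0.2))  # exp(-4), about 0.018
```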
Figure 1. Fuzzy neural network structure
(3) The third layer is the fuzzy reasoning layer: each node in the fuzzy reasoning layer contains certain fuzzy rules. The fuzzified data are matched against the fuzzy rules of the fuzzy reasoning layer, and at the same time, the fitness of each fuzzy rule is measured:

$$a_j = \mu_1^{i_1} \mu_2^{i_2} \cdots \mu_n^{i_n} \qquad (9)$$
In which, $i_1 \in \{1, 2, \ldots, m_1\}, \ldots, i_n \in \{1, 2, \ldots, m_n\}$; $j = 1, 2, \ldots, m$; and $m = \prod_{i=1}^{n} m_i$. The total number of nodes in this layer is $N_3 = m$. For given input variables, the membership value is significant only when a language variable value is close to the input variable, and the membership value will be very small if the language variable value is far away from the input point. When the membership degree is very small, such as less than 0.03, it can be approximated to 0; therefore, the output result $a_j$ of most nodes is 0.
(4) The fourth layer is the normalized layer: the node number of this layer is $N_4 = N_3 = m$. The function of this layer is to conduct a normalized calculation on the fitness values of the third layer:

$$\bar{a}_j = a_j \Big/ \sum_{i=1}^{m} a_i, \qquad j = 1, 2, \ldots, m \qquad (10)$$
(5) The fifth layer is the output layer: it is also called the anti-fuzzification layer. The defuzzification of the fuzzy neural network is realized in this layer:

$$y_i = \sum_{j=1}^{m} r_{ij} \bar{a}_j, \qquad i = 1, 2, \ldots, r \qquad (11)$$
In which, $y_i$ is the result produced as the fuzzy neural network passes through the output layer. The learning parameters of the fuzzy neural network include two kinds: one is the $c_{ij}$ value of the membership function and its $\sigma_{ij}$ value; the other is the $r_{ij}$ value of the output layer of the fuzzy neural network at the last time [10].
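The forward pass through the five layers, formulas (9)–(11), can be sketched as follows. The two-input, two-partition network size and all parameter values are illustrative assumptions, not values from the paper:

```python
import math
from itertools import product

def forward(x, centers, widths, r):
    """Forward pass of a single-output five-layer fuzzy neural network.
    x       : input vector, length n                          (layer 1)
    centers : per-input lists of Gaussian centers c_ij        (layer 2)
    widths  : per-input lists of Gaussian widths sigma_ij     (layer 2)
    r       : output weights r_j, one per rule                (layer 5)
    """
    # Layer 2: Gaussian membership of each input to each fuzzy partition.
    mu = [[math.exp(-((xi - c) / s) ** 2) for c, s in zip(cs, ss)]
          for xi, cs, ss in zip(x, centers, widths)]
    # Layer 3: rule fitness a_j = product of one membership per input (formula 9).
    a = [math.prod(choice) for choice in product(*mu)]
    # Layer 4: normalization (formula 10).
    total = sum(a)
    a_bar = [aj / total for aj in a]
    # Layer 5: defuzzified output y = sum_j r_j * a_bar_j (formula 11).
    return sum(rj * abj for rj, abj in zip(r, a_bar))

# Two inputs, two partitions each -> m = 4 rules (illustrative parameters).
y = forward(x=[0.3, 0.7],
            centers=[[0.0, 1.0], [0.0, 1.0]],
            widths=[[0.5, 0.5], [0.5, 0.5]],
            r=[0.0, 0.3, 0.6, 1.0])
print(y)  # a value between the smallest and largest rule weight
```

Because layer 4 normalizes the rule fitnesses, the output is always a convex combination of the rule weights $r_j$.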
4.2. Establishment of the image compression algorithm based on fuzzy neural network
4.2.1. Basic idea
The basic idea of using the mode conversion capability of a multi-layer feed-forward network to realize the data transformation (coding) is: map one group of input modes to one group of output modes through the middle layers (i.e. the fuzzification layer, fuzzy reasoning layer and normalized layer), and make the output mode equal to the input mode as closely as possible.
The transformation from the input layer to the middle layer can be taken as the compressing and encoding process, and the transformation from the middle layer to the output layer can be taken as the decoding process. Figure 2 gives a brief explanation of this idea.
Figure 2. The basic idea of image compression based on fuzzy neural network
Assume that the network input layer and output layer are respectively composed of the same number $M$ of neurons, and the neuron number $K$ of the middle layer is smaller than $M$. Provide the same learning mode at the input layer and output layer (that is, the teacher mode is the input mode). After network training, the hidden layer shall be able to give a different encoding expression for each of the $M$ input modes. The basic idea is to make the original data pass through the waist-type network bottleneck and to gain a relatively compact data expression at the bottleneck, in order to achieve the purpose of compression. In the process of network learning, the network weights are adjusted through training so that the reconstructed image is as similar as possible to the training image in the mean-error sense. The trained network can then be used to perform the data compression task, and the weight values between the network input layer and the middle layer are equivalent to the encoder. The original image data transmitted from the input end is processed by the fuzzy neural network to gain the output data of the middle layer; this output data is the compression code of the original image, and the vector of the output layer is the reconstructed image data after the compression [11].
Figure 3. Schematic diagram of image compression based on fuzzy neural network
The network includes the input layer, middle layer and output layer. In the learning process, the image data is not only sent to the input layer but also to the output layer as the teacher signal. When the network is properly trained, the process from the input layer to the hidden layer is the network encoding process, and the process from the hidden layer to the output layer is the network decoding process. Continuous network training and network weight adjustment minimize the mean square error between the network input and output, which eventually compresses the N-dimensional vector into a K-dimensional vector ($K < N$).
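The N-to-K-to-N data flow described above can be illustrated with a deliberately simple stand-in codec. This sketch replaces the learned fuzzy network with plain 2×2 averaging, so it shows only the bottleneck idea (64 values in, 16 stored, 64 reconstructed), not the paper's method:

```python
def encode(block8):
    """Compress an 8x8 block to a 4x4 code by 2x2 averaging (64 -> 16 values)."""
    return [[sum(block8[2 * i + di][2 * j + dj] for di in (0, 1) for dj in (0, 1)) / 4.0
             for j in range(4)] for i in range(4)]

def decode(code4):
    """Reconstruct an 8x8 block by replicating each code value (16 -> 64 values)."""
    return [[code4[i // 2][j // 2] for j in range(8)] for i in range(8)]

block = [[(i * 8 + j) / 63.0 for j in range(8)] for i in range(8)]  # a test gradient
code = encode(block)    # the "middle layer" representation, K = 16
recon = decode(code)    # the "output layer" reconstruction, N = 64
print(len(code), len(code[0]))    # 4 4
print(len(recon), len(recon[0]))  # 8 8
```

In the paper's scheme, the hand-written `encode`/`decode` pair is replaced by the trained input-to-middle and middle-to-output weight transformations of the fuzzy neural network.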
The steps of the algorithm adopting the fuzzy neural network to compress an image, shown in the flow chart of Figure 4, are: initialization; dividing the original image into 8×8 blocks, with the pixel values of each block as the training samples; inputting the samples and calculating the outputs of the middle layer and the output layer; calculating the errors; calculating the error signal of each layer; correcting the weights of each layer; and repeating until the error is less than the expected error or the maximum training iteration number is reached.

Figure 4. Flow chart of image compression based on fuzzy neural network

5. Experimental Simulation and Analysis
In order to demonstrate the effectiveness of this algorithm, we conduct a comparative contrast of the reconstructed images in Figure 5. The selected Lena image size is 512 × 512. Because the image used is large, an 8×8 block is used in this experiment to improve the
compression efficiency. We can see that the image compressed by the method based on the fuzzy neural network is very clear, and it is hard for the human eye to find any trace of distortion. Meanwhile, the image based on the traditional BP artificial neural network has an obvious block effect, and the whole image looks unsmooth, with the subjective feeling that the image is composed of "grids". This shows that image compression based on the fuzzy neural network is stronger in image compression capacity. The algorithm in this paper treats each coefficient as a function of the coordinates and also combines learning, self-adaptiveness, imagination, recognition and fuzzy information processing. The fuzzy reasoning network training achieves the effect of function approximation, and the decoding end can decode directly with only the network weights, which effectively avoids the defect of the traditional neural network in image compression algorithms and improves the compression performance and the subjective quality of the reconstructed image.
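For reference, the blocking step used in the experiment (splitting the 512 × 512 image into 8 × 8 blocks) yields 64 × 64 = 4096 training samples of 64 pixels each; a minimal sketch assuming a plain nested-list image:

```python
def split_into_blocks(image, bs=8):
    """Split an image (list of rows) into bs x bs blocks, each flattened row-major."""
    h, w = len(image), len(image[0])
    blocks = []
    for bi in range(0, h, bs):
        for bj in range(0, w, bs):
            blocks.append([image[bi + i][bj + j]
                           for i in range(bs) for j in range(bs)])
    return blocks

image = [[0] * 512 for _ in range(512)]  # stand-in for the 512 x 512 Lena image
blocks = split_into_blocks(image)
print(len(blocks))     # 4096 blocks
print(len(blocks[0]))  # 64 pixels per block
```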
(a) Original image; (b) image compression method based on fuzzy neural network; (c) image compression method based on BP neural network
Figure 5. Comparison of image compression based on different methods
6. Conclusion
Most neural network models have a strong ability to recognize and classify patterns, and this pattern recognition and classification ability provides a powerful tool for solving the pattern classification problem in image coding schemes. This paper deeply studies the theoretical basis, algorithm, network model and algorithm realization of image compression based on the fuzzy neural network, and also realizes image compression and reconstruction to gain a higher compression ratio. In the face of the current huge amounts of data storage, how to realize the situation that what we do is what we store, how to complete the demand analysis of the human eye in the storage stage with a view to largely compressing the data, and
how to make limited space store more useful data, are what we should concern ourselves with and try to accomplish.
References
[1] Roman S. New Simple and Efficient Color Space Transformations for Lossless Image Compression. Journal of Visual Communication and Image Representation. 2014; 25(5): 1056-1063.
[2] Mario ARD, Hermilo SC. Refined Fixed Double Pass Binary Object Classification for Document Image Compression. Digital Signal Processing. 2014; 30(7): 114-130.
[3] Jin-Yu Z, Wei Z, Zheng-Wei Y, Gan T. A Novel Algorithm for Fast Compression and Reconstruction of Infrared Thermographic Sequence Based on Image Segmentation. Infrared Physics & Technology. 2014; 67(11): 296-305.
[4] Mahmood O. Fully Fuzzy Polynomial Regression with Fuzzy Neural Networks. Neurocomputing. 2014; 142(22): 486-493.
[5] Kartik S, Ratan KB, Amitabha C. Image Compression Based on Block Truncation Coding using Clifford Algebra. Procedia Technology. 2013; 10: 699-706.
[6] Jianji W, Nanning Z, Yuehu L, Gang Z. Parameter Analysis of Fractal Image Compression and Its Applications in Image Sharpening and Smoothing. Signal Processing: Image Communication. 2013; 28(6): 681-687.
[7] Bo M, Yifang B. Generalization of 3D Building Texture using Image Compression and Multiple Representation Data Structure. ISPRS Journal of Photogrammetry and Remote Sensing. 2013; 79(5): 68-79.
[8] S Muralisankar, N Gopalakrishnan. Robust Stability Criteria for Takagi–Sugeno Fuzzy Cohen–Grossberg Neural Networks of Neutral Type. Neurocomputing. 2014; 144(20): 516-525.
[9] Choon KA. Receding Horizon Disturbance Attenuation for Takagi–Sugeno Fuzzy Switched Dynamic Neural Networks. Information Sciences. 2014; 280(10): 53-63.
[10] Fayez FM ES. Adaptive Hybrid Control System using A Recurrent RBFN-based Self-Evolving Fuzzy-Neural-Network for PMSM Servo Drives. Applied Soft Computing. 2014; 21(8): 509-532.
[11] J Yang, H Shi, B Feng, L Zhao, C Ma, X Mei. Applying Neural Network Based on Fuzzy Cluster Pre-processing to Thermal Error Modeling for Coordinate Boring Machine. Procedia CIRP. 2014; 17: 698-703.