TELKOMNIKA Indonesian Journal of Electrical Engineering
Vol. 12, No. 4, April 2014, pp. 2969 ~ 2976
DOI: http://dx.doi.org/10.11591/telkomnika.v12i4.4795

Received September 7, 2013; Revised October 27, 2013; Accepted November 18, 2013
Fuzzy Rough Set Conditional Entropy Attribute Reduction Algorithm
Jinsheng Ren*, Haitao Jia
School of Computer Science and Engineering, University of Electronic Science and Technology of China,
Chengdu, Sichuan, China, 611731
*Corresponding author, e-mail: jsren@uestc.edu.cn
Abstract
Modern science is increasingly data-driven and collaborative in nature. Compared with ordinary data processing, big data processing, in which a great deal of data is missing, must be performed rapidly. Rough set theory was developed to deal with such large data. QuickReduct is a popular algorithm for attribute reduction of big databases, but little effort has been put into fuzzy and vague data. Considering this requirement, this paper proposes an improved attribute reduction based on the condition entropy of fuzzy rough sets (FRCE), which can deal with continuous and fuzzy data. The algorithm rewrites the expression of condition entropy using information theory. Finally, this paper uses the UCI database to evaluate the efficiency of the algorithm.
Keywords: fuzzy, rough set, attribute reduction
Copyright © 2014 Institute of Advanced Engineering and Science. All rights reserved.
1. Introduction
IBM estimates that 2.5 quintillion bytes of data are created every day, so much that 90% of the data in the world today has been created in the last two years [1]. Big data is not only becoming more available but also more understandable to computers. But the big data era brings new challenges: great volumes of unstructured data must be rapidly processed and analyzed. Unstructured data refers to information that either does not have a pre-defined data model or has missing data.
Rough set theory, proposed by Pawlak and Skowron [2], has become a well-established mechanism for uncertainty management in a wide variety of applications related to unstructured data [3-4]. In this framework, an attribute set is viewed as a granular space, which partitions the universe into knowledge granules or elemental concepts. In most information decision supporting problems, the decision is taken based on the known values of the information attributes, expressed by the vector $A = [a_1, a_2, \ldots, a_n]$. The goal of decision supporting is to determine whether the state $x$ belongs to decision $d_j$ or not, $j = 1, 2, \ldots, m$.
One of the most important applications of rough sets is attribute reduction for big data, which finds an attribute subset of the original attributes that contains the same knowledge as the original set. The rough set approach to attribute reduction can be used as a purely structural method for reducing dimensionality, using only the information contained within the dataset and preserving the meaning of the features.
Nowadays most traditional attribute reduction algorithms are designed for symbolic or real-valued databases. Those algorithms cannot manage fuzzy and vague databases. One way to solve this problem is to discretize the attribute values before attribute reduction [5], but this method introduces new errors into the system.
For the above attribute reduction issues of fuzzy rough sets, Richard Jensen and Qiang Shen proposed the QuickReduct algorithm to achieve attribute reduction of fuzzy rough sets [6]. It is an algorithm that exploits a dependency function to reduce the attribute collection, and it is currently the most popular algorithm for attribute reduction of fuzzy rough sets.
The QuickReduct algorithm is based on a purely algebraic point of view, so it is relatively less intuitive to understand. This paper introduces mutual information theory into the reduction algorithm of fuzzy rough sets and proposes an improved attribute reduction based on the condition entropy of fuzzy rough sets (FRCE). It is more intuitive and understandable, while its computation is also reduced, which makes it advantageous for attribute reduction of massive data.
2. Theoretical Background
Central to fuzzy rough set rule generation is the concept of fuzzy indiscernibility. Let $U$ denote a finite and non-empty set called the universe, and let $A$ be a non-empty finite set of attributes such that $a: U \rightarrow V_a$ for every $a \in A$. With every subset of attributes $B \subseteq A$ there is an associated indiscernibility relation:

$IND(B) = \{(x, y) \in U^2 \mid \forall a \in B,\ a(x) = a(y)\}$  (1)
Using $[x]_B$ we denote the equivalence class of $IND(B)$ including $x$. Given an arbitrary set $X \subseteq U$, one can characterize $X$ by a pair of lower and upper approximations. The lower approximation $\underline{R}X$ is the greatest definable set contained in $X$, and the upper approximation $\overline{R}X$ is the least definable set containing $X$. Formally, the lower and upper approximations can be defined as follows [7]:

$\underline{R}X = \{x \mid [x]_R \subseteq X\}$  (2)

$\overline{R}X = \{x \mid [x]_R \cap X \neq \emptyset\}$  (3)
For $P, Q \subseteq A$, with $IND(P)$ the indiscernibility relation, the positive region can be defined as:

$POS_P(Q) = \bigcup_{X \in U/IND(Q)} \underline{P}X$  (4)
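To make the crisp definitions concrete, the following minimal Python sketch (an illustration of Eqs. (2)-(4), not from the paper; all names are ours) computes the approximations and the positive region for partitions represented as lists of sets:

def lower_approx(blocks, X):
    # Eq. (2): union of the equivalence classes wholly contained in X
    return set().union(*(b for b in blocks if b <= X))

def upper_approx(blocks, X):
    # Eq. (3): union of the equivalence classes that intersect X
    return set().union(*(b for b in blocks if b & X))

def positive_region(p_blocks, q_blocks):
    # Eq. (4): union of the P-lower approximations of the classes of U/IND(Q)
    return set().union(*(lower_approx(p_blocks, X) for X in q_blocks))

# Toy universe U = {1,...,5}: P splits U into {1,2},{3},{4,5};
# Q splits U into {1,2,3},{4,5}. Every P-class fits inside a Q-class,
# so the positive region is the whole universe.
U_P = [{1, 2}, {3}, {4, 5}]
U_Q = [{1, 2, 3}, {4, 5}]
print(positive_region(U_P, U_Q))  # {1, 2, 3, 4, 5}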
Fuzzy rough sets were first proposed by Dubois and Prade [8] and then studied in [9]. Suppose $U$ is a non-empty universe (possibly infinite) and $R$ is a binary fuzzy relation on $U$; then the fuzzy approximation operators can be summarized as follows. For every fuzzy set $A \in F(U)$:

$\overline{R}_T A(x) = \sup_{u \in U} T(R(x, u), A(u))$  (5)

$\underline{R}_S A(x) = \inf_{u \in U} S(N(R(x, u)), A(u))$  (6)
Here a triangular norm, or shortly $T$-norm, is a function $T: [0,1] \times [0,1] \rightarrow [0,1]$ that satisfies the following conditions: monotonicity (if $x \leq x'$ and $y \leq y'$, then $T(x, y) \leq T(x', y')$), commutativity ($T(x, y) = T(y, x)$), associativity ($T(T(x, y), z) = T(x, T(y, z))$), and the boundary condition ($T(x, 1) = x$). The most popular continuous $T$-norms include the standard min operator $T_M(x, y) = \min\{x, y\}$ and the bounded intersection $T_L(x, y) = \max\{0, x + y - 1\}$ [10].
A triangular conorm, or shortly $T$-conorm, is an increasing, commutative, and associative function $S: [0,1] \times [0,1] \rightarrow [0,1]$ that satisfies the boundary condition $S(x, 0) = x$ for all $x \in [0,1]$. The most well-known continuous $T$-conorms include the standard max operator $S_M(x, y) = \max\{x, y\}$ and the bounded sum $S_L(x, y) = \min\{1, x + y\}$ [11].
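As a small illustration of Eqs. (5)-(6) with the standard min/max pair, the following numpy sketch (our own names and toy data, assuming the standard fuzzy negation $N(x) = 1 - x$, which the paper does not spell out) evaluates the fuzzy lower and upper approximations of a fuzzy set under a fuzzy relation:

import numpy as np

t_norm = np.minimum        # T_M(x, y) = min{x, y}
s_conorm = np.maximum      # S_M(x, y) = max{x, y}
negate = lambda x: 1.0 - x # assumed standard negation N(x) = 1 - x

def fuzzy_upper(R, A):
    # Eq. (5): (R_T A)(x) = sup_u T(R(x, u), A(u))
    return t_norm(R, A[None, :]).max(axis=1)

def fuzzy_lower(R, A):
    # Eq. (6): (R_S A)(x) = inf_u S(N(R(x, u)), A(u))
    return s_conorm(negate(R), A[None, :]).min(axis=1)

R = np.array([[1.0, 0.7, 0.1],
              [0.7, 1.0, 0.2],
              [0.1, 0.2, 1.0]])  # a reflexive, symmetric fuzzy relation
A = np.array([0.9, 0.6, 0.1])    # a fuzzy set on a 3-element universe
print(fuzzy_lower(R, A), fuzzy_upper(R, A))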
3. QuickReduct Algorithm
The idea of QuickReduct is: first, take an empty set $R$; then repeatedly add to the collection the condition attribute that causes the dependency function $\gamma'_R(D)$, in the fuzzy rough sense, to reach its maximum; when $\gamma'_R(D)$ reaches its maximum value, the algorithm ends. Figure 1 shows the specific steps of the algorithm.
Input: fuzzy decision table $DT = (U, A = C \cup D, V, f)$
Output: a relative reduction
step1: $R \leftarrow \emptyset$, $\gamma'_{best} \leftarrow 0$, $\gamma'_{pre} \leftarrow 0$;
step2: $T \leftarrow R$, $\gamma'_{pre} \leftarrow \gamma'_{best}$;
step3: for each $c_i \in C - R$: if $\gamma'_{R \cup \{c_i\}}(D) > \gamma'_T(D)$, then $T \leftarrow R \cup \{c_i\}$, $\gamma'_{best} \leftarrow \gamma'_T(D)$; repeat until all $c_i$ are traversed;
step4: if $\gamma'_{best} = \gamma'_{pre}$, then go to step5; else let $R \leftarrow T$ and go to step2;
step5: output $R$; end

Figure 1. Flow of QuickReduct
As we can see from Figure 1, if the number of condition attributes of the decision table is n, then in the worst case QuickReduct's time complexity is O(n(n+1)/2), while the time complexity of computing all reductions is O(2^n). So the search algorithm based on QuickReduct not only obtains an optimal solution without calculating all the possible relative reductions, but also has lower computational complexity and faster search speed, so it has been widely applied.
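In code, the loop of Figure 1 can be sketched as follows; here gamma stands in for the fuzzy-rough dependency degree $\gamma'_R(D)$, whose computation is abstracted away, so this is a schematic outline rather than the authors' implementation:

def quickreduct(C, gamma):
    """Greedy forward selection driven by the dependency function gamma."""
    R, best, prev = set(), 0.0, -1.0
    while best > prev:              # step4: stop once gamma stops improving
        prev, T = best, set(R)      # step2
        for c in C - R:             # step3: try every remaining attribute
            g = gamma(R | {c})
            if g > best:
                T, best = R | {c}, g
        R = T
    return R

The caller supplies gamma as a function mapping an attribute subset to its dependency degree in [0, 1]; in the fuzzy-rough case it is computed from the fuzzy positive region.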
4. An Improved Attribute Reduction Based on Condition Entropy of Fuzzy Rough Sets (FRCE)
For newly given fuzzy attributes $P$ and $Q$, the partition results of the domain $U$ are $X = U/P = \{X_1, X_2, \ldots, X_n\}$ and $Y = U/Q = \{Y_1, Y_2, \ldots, Y_m\}$. According to the fuzzy set membership function introduced in fuzzy rough sets, for $x_k \in U$ the membership $\mu_{X_i}(x_k)$ in the fuzzy equivalence class $X_i \in X$ can also be regarded as the probability of belonging to that equivalence class; then the probability $p(X_i)$ of $X_i$ can be determined through each object, namely:

$p(X_i) = \dfrac{\sum_{x_k \in U} \mu_{X_i}(x_k)}{\sum_{X_l \in X} \sum_{x_k \in U} \mu_{X_l}(x_k)}, \quad i = 1, 2, \ldots, n$  (7)
Similarly,

$p(Y_j) = \dfrac{\sum_{x_k \in U} \mu_{Y_j}(x_k)}{\sum_{Y_l \in Y} \sum_{x_k \in U} \mu_{Y_l}(x_k)}, \quad j = 1, 2, \ldots, m$  (8)
Now we deduce the joint probability expression $p(X_i \cap Y_j)$. First, according to the definition of fuzzy rough sets, we get the partition result of domain $U$ using the sets $P, Q$, namely:

$U/A = \{Z_{rt} = X_r \cap Y_t \mid X_r \in U/P,\ Y_t \in U/Q\}$  (9)
Therein, $1 \leq r \leq n$, $1 \leq t \leq m$. Using memberships, the joint probability can be rewritten as:

$p(X_i \cap Y_j) = \dfrac{\sum_{x_k \in U} \mu_{X_i \cap Y_j}(x_k)}{\sum_{Z \in U/\{P,Q\}} \sum_{x_k \in U} \mu_Z(x_k)}$  (10)
Where:

$\mu_{X_r \cap Y_t}(x_k) = \mu_{Z_{rt}}(x_k) = \min(\mu_{X_r}(x_k), \mu_{Y_t}(x_k))$  (11)
Because each object uses only two values, 0 and 1, to denote whether it belongs to an equivalence class of $Q$ (the decision attribute), according to formula (10) the joint probability $p(X_i \cap Y_j)$ can be rewritten as:

$p(X_i \cap Y_j) = \dfrac{\sum_{x_k \in U} \mu_{X_i \cap Y_j}(x_k)}{\sum_{X_l \in X} \sum_{x_k \in U} \mu_{X_l}(x_k)}$  (12)
Then the condition entropy of $Q$ relative to $P$ can be turned into the following form:

$\begin{aligned} H(Q|P) &= -\sum_{i=1}^{n} \sum_{j=1}^{m} p(X_i)\, p(Y_j|X_i) \log p(Y_j|X_i) \\ &= -\sum_{i=1}^{n} \sum_{j=1}^{m} p(X_i \cap Y_j) \log \frac{p(X_i \cap Y_j)}{p(X_i)} \\ &= -\sum_{i=1}^{n} \sum_{j=1}^{m} \frac{\sum_{k=1}^{|U|} \mu_{X_i \cap Y_j}(x_k)}{\sum_{X_l \in X} \sum_{k=1}^{|U|} \mu_{X_l}(x_k)} \log \frac{\sum_{k=1}^{|U|} \mu_{X_i \cap Y_j}(x_k)}{\sum_{k=1}^{|U|} \mu_{X_i}(x_k)} \end{aligned}$  (13)
It should be noted that whether a fuzzy equivalence class is empty can be judged from the memberships denoting the degree to which every object belongs to the corresponding intersection: if the membership of each object is 0, the intersection must be empty, because no object belongs to it. For the $X_i$, $Y_j$ and $X_i \cap Y_j$ appearing in formula (13), these fuzzy equivalence classes are taken to be non-empty.
From formula (13) it can be seen that the value of the condition entropy here is entirely determined by each object's membership, so using the above transformation, information entropy, condition entropy and mutual information can all be applied to fuzzy rough sets.
As with information entropy in rough sets, the smaller $H(Q|P)$ is, the more information of attribute $Q$ is determined by attribute $P$, namely the more important $P$ is to $Q$. So using condition entropy to describe attribute importance and perform reduction is feasible, and its computation is much less than that of mutual information.
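The following numpy sketch (illustrative names, assuming membership matrices with one row per fuzzy class and one column per object, and crisp 0/1 rows for the decision classes) computes $H(Q|P)$ exactly as in Eqs. (7)-(13):

import numpy as np

def fuzzy_cond_entropy(mu_X, mu_Y, eps=1e-12):
    """H(Q|P) of Eq. (13); mu_X is n x |U|, mu_Y is m x |U|."""
    total = mu_X.sum()               # normalizer in Eqs. (7) and (12)
    H = 0.0
    for mx in mu_X:                  # classes X_i of U/P
        px_sum = mx.sum()            # sum_k mu_Xi(x_k)
        if px_sum < eps:
            continue                 # empty fuzzy class: no contribution
        for my in mu_Y:              # classes Y_j of U/Q
            pxy = np.minimum(mx, my).sum() / total   # Eqs. (11)-(12)
            if pxy > eps:
                # p(Xi ∩ Yj) log( p(Xi ∩ Yj) / p(Xi) ), with p(Xi) = px_sum/total
                H -= pxy * np.log(pxy * total / px_sum)
    return H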
Similar to QuickReduct, an attribute reduction algorithm based on condition entropy can also use condition entropy to determine attribute importance and gradually add attributes to the current reduction collection. Because the condition entropy is a monotonically non-increasing function, the idea of the attribute reduction algorithm based on conditional entropy is to add to the reduction collection the condition attribute that reduces $H(D|R)$ the most. Figure 2 shows a flowchart of the algorithm.
In Figure 2 we set a threshold $\varepsilon$, mainly considering that some attributes, though they may be redundant for the reduction set, contain a certain amount of information, which makes it impossible for the conditional entropy of the reduction set to be exactly equal to that of the original attribute set. Since this amount of information is almost negligible, a threshold is needed to terminate the algorithm; generally we let $\varepsilon = 10^{-3}$.
[Figure 2 flowchart: compute $H(D|C)$; for each $A_i \in C - R$, set $T \leftarrow R \cup \{A_i\}$ whenever $H(D|R \cup \{A_i\}) < H(D|T)$; repeat until $H(D|R) - H(D|C) < \varepsilon$.]

Figure 2. Flow Chart of FRCE Algorithm
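Put as schematic Python, the FRCE loop of Figure 2 amounts to the greedy selection below; H stands in for the conditional entropy $H(D|\cdot)$ of Eq. (13) evaluated on the decision table (with H(set()) returning the unconditioned entropy H(D)), so this is a sketch rather than a full implementation:

def frce(C, H, eps=1e-3):
    """Add the attribute that lowers H(D|R) most, until within eps of H(D|C)."""
    HC = H(C)                     # conditional entropy of the full attribute set
    R = set()
    while (C - R) and H(R) - HC >= eps:
        # pick the attribute whose addition reduces the conditional entropy most
        best = min(C - R, key=lambda a: H(R | {a}))
        R |= {best}
    return R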
5. Simulation
To verify the validity of the FRCE algorithm, we choose the instance of frost resistance of concrete to perform the reduction experiment. Let $\varepsilon = 10^{-3}$; first we calculate the conditional entropy $H(D|C)$ and get $H(D|C) = 0.4618$. Figure 3 shows the entire process of attribute reduction.
As can be seen from Figure 3, when $H(D|\{A_1, A_3, A_4, A_5\}) - H(D|C) = 0.0006 < \varepsilon$, the algorithm terminates. The reduction set we get is $\{A_1, A_3, A_4, A_5\}$, and this result is the same as QuickReduct's, which indicates that the proposed reduction algorithm is effective and can obtain a reduction result of a decision table.
[Figure 3: the stepwise candidate evaluation and selection of attributes $A_1, A_3, A_4, A_5$ is shown in the original figure.]

Figure 3. Attribute Reduction of the Instance of Frost Resistance of Concrete Based on FRCE
In order to verify the reduction efficiency of this algorithm, we use QuickReduct and the proposed FRCE algorithm to perform attribute reduction on five data sets, wine, iris, heart, glass and Ionosphere, from the UCI database. Table 1 shows the attribute, class and sample counts of these five data sets.
Table 1. Information of Five Kinds of Data Set from UCI

Data set    | Condition attributes      | Class number | Sample number
Iris        | 4 continuous attributes   | 3            | 150
Wine        | 13 continuous attributes  | 3            | 178
Glass       | 9 continuous attributes   | 7            | 214
Heart       | 13 continuous attributes  | 4            | 270
Ionosphere  | 34 continuous attributes  | 2            | 351
First, we apply a fuzzification method based on Adaptive Fuzzy C-means Clustering to fuzzify each data set; the reduction results are shown in Figure 4.
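As a rough stand-in for this fuzzification step (standard fuzzy c-means memberships for one attribute column, given cluster centers; not the adaptive variant the paper uses, and with illustrative names and toy data), the membership degrees can be computed as:

import numpy as np

def fcm_memberships(x, centers, m=2.0, eps=1e-9):
    """u_i(x_k) = 1 / sum_j (d_ik / d_jk)^(2/(m-1)); each row sums to 1."""
    d = np.abs(x[:, None] - centers[None, :]) + eps       # |x_k - c_i|
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

x = np.array([4.3, 5.0, 5.8, 6.4, 7.9])     # e.g. one continuous attribute
print(fcm_memberships(x, np.array([4.5, 6.0, 7.5])))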
Figure 4. Reduction Result of FRCE and QuickReduct

[Figure 4: bar chart of the number of condition attributes after reduction by quickreduct and FRCE versus the number of original attributes, for the iris, wine, glass, heart and Ionosphere data sets.]
As can be seen from Figure 4, compared to the QuickReduct algorithm, the proposed FRCE algorithm finds a reduction set of smaller or equal size. For the iris and glass data sets the two algorithms get the same reduction result, while for the wine, heart and Ionosphere data sets the FRCE algorithm obtains a smaller reduction set than QuickReduct. Thus, for some high-dimensional data, the proposed reduction algorithm can find a more concise reduction set than the QuickReduct algorithm.
Figure 5 shows the time consumption of the two reduction algorithms. It can be seen from Figure 5 that for iris, wine and glass, which have relatively small sample numbers, the time consumption of the two algorithms is almost the same, while for heart and Ionosphere, which have large sample numbers, the time consumption of FRCE is significantly less than that of QuickReduct.
Figure 5. Time Consumption between FRCE and QuickReduct
The above analysis shows that the proposed FRCE algorithm can get an optimal reduction of a decision table; especially for data sets or decision tables with high dimensions and a large number of objects, the proposed algorithm, compared to QuickReduct, can get a more concise reduction set and consume less time.
6. Conclusion
This paper analyzes the QuickReduct algorithm in rough sets, which is less intuitive and understandable. We then modify the expressions of information entropy and conditional entropy in rough sets and, on this basis, propose an improved attribute reduction algorithm based on the condition entropy of fuzzy rough sets (FRCE). In order to prove the validity of the algorithm, we perform attribute reduction on five kinds of UCI data sets using the QuickReduct and FRCE algorithms respectively. It can be seen from the experiment results that, compared to QuickReduct, the algorithm obtains a smaller or equal reduction set and also consumes less time, so it is clear that the proposed FRCE algorithm is effective.
References
[1] Improving Decision Making in the World of Big Data. http://www.forbes.com/sites/christopherfrank/2012/03/25/improving-decision-making-in-the-world-of-big-data
[2] Z Pawlak. Rough Sets: Theoretical Aspects of Reasoning About Data. System Theory, Knowledge Engineering and Problem Solving. Dordrecht, Netherlands: Kluwer Academic Publishers. 1991.
[3] Yi Zhang. Rough Set and DEA of Strategic Alliance Stable Decision-making Model. TELKOMNIKA. 2013; 11(12): 7295-7301.
[4] Yusu Xiong. A Clustering Algorithm Based on Rough Set and Genetic Algorithm. TELKOMNIKA. 2013; 11(10): 5782-5788.
[5] HS Nguyen, A Skowron. Quantization of real value attributes. Proceeding of Second Joint Annual Conference on Information Science. Wrightsville Beach, North Carolina. 1995: 34-37.
[6] Richard Jensen, Qiang Shen. Fuzzy rough attribute reduction with application to web categorization. Fuzzy Sets and Systems. 2004; 141: 469-485.
[7] M Quafafou. α-RST: A generalization of rough set theory. Information Science. 2000; 124: 301-316.
[8] D Dubois, H Prade. Putting rough sets and fuzzy sets together. In: R Slowinski, ed. Intelligent Decision Support. Handbook of Applications and Advances of the Rough Sets Theory. Norwell, MA: Kluwer. 1992: 203-232.
[9] Suyun Zhao, Eric CC Tsang, Degang Chen. The Model of Fuzzy Variable Precision Rough Sets. IEEE Transactions on Fuzzy Systems. 2009; 17(2): 451-467.
[10] Yan Li, Simon CK Shiu, Sankar K Pal. Combining Feature Reduction and Case Selection in Building CBR Classifiers. IEEE Transactions on Knowledge and Data Engineering. 2006; 18(3): 415-429.
[11] Q Shen, R Jensen. Selecting Informative Features with Fuzzy-rough Sets and Its Application for Complex Systems Monitoring. Pattern Recognition. 2004; 37: 1351-1363.