International Journal of Electrical and Computer Engineering (IJECE)
Vol. 8, No. 6, December 2018, pp. 4705~4712
ISSN: 2088-8708, DOI: 10.11591/ijece.v8i6.pp4705-4712

Journal homepage: http://iaescore.com/journals/index.php/IJECE
Ant System and Weighted Voting Method for Multiple Classifier Systems

Abdullah Husin¹, Ku Ruhana Ku-Mahamud²
¹Department of Information Systems, Universitas Islam Indragiri, Indonesia
²School of Computing, Universiti Utara Malaysia, Malaysia
Article Info

Article history:
Received Jul 11, 2018
Revised Apr 16, 2018
Accepted Apr 30, 2018

ABSTRACT

Combining multiple classifiers is considered as a general solution for classification tasks. However, there are two problems in combining multiple classifiers: constructing a diverse classifier ensemble; and constructing an appropriate combiner. In this study, an improved multiple classifier combination scheme is proposed. A diverse classifier ensemble is constructed by training the classifiers with different feature set partitions. The ant system-based algorithm is used to form the optimal feature set partitions. Weighted voting is used to combine the classifiers' outputs by considering the strength of the classifiers prior to voting. Experiments were carried out using k-NN ensembles on benchmark datasets from the University of California, Irvine, to evaluate the credibility of the proposed method. Experimental results showed that the proposed method has successfully constructed better k-NN ensembles. Furthermore, the proposed method can be used to develop other multiple classifier systems.
Keyword:
Ant system
Classifier ensemble construction
Combiner construction
Feature set partitioning
Multiple classifier system

Copyright © 2018 Institute of Advanced Engineering and Science.
All rights reserved.
Corresponding Author:
Abdullah Husin,
Department of Information System,
Universitas Islam Indragiri,
Jalan Provinsi Parit 1 Tembilahan Indragiri Hilir Riau, Indonesia.
Email: abdialam@yahoo.com
1. INTRODUCTION
Classification is an important function in data mining. One of the main issues in performing classification is to identify the classifier in order to obtain good classification accuracy. The use of a single classifier provides minimal exploitation of complementary information from other classifiers, while the combination of multiple classifiers may provide such additional information [1]. The goal of multiple classifier combination is to obtain a comprehensive result by combining the outputs of several individual classifiers [2]. This consists of a set of classifiers, called the classifier ensemble, and a combination strategy for integrating classifier outputs, called the combiner. Multiple classifier combination has been widely used in many application domains such as: speech recognition [3], human emotion recognition [4], video classification [5], face recognition [6], email classification [7], cancer classification [8], plant leaf identification [9], concept drift identification [10] and sukuk rating prediction [11].

Multiple classifier combination has been very useful in enhancing the performance of classification. However, there are two problems in developing multiple classifier combinations: constructing the classifier ensemble; and constructing the combiner. There are no standard guidelines concerning how to construct a set of diverse and accurate classifiers and how to combine the classifier outputs [12]. Most previous studies focus on classifier ensemble construction and apply a simple fixed combiner to combine the outputs [13]. This study focused on both problems, and reviews were performed on feature set partitioning and the weighted voting combiner.

There are several approaches to construct a classifier ensemble. All such approaches attempt to generate diversity by creating classifiers that make errors on different patterns, thus they can be combined
effectively. The diversity among classifiers in an ensemble is deemed to be a key success factor when constructing a classifier ensemble. Theoretically and empirically, it has been shown that a good ensemble has both accuracy and diversity [14].

One of the approaches used to construct a classifier ensemble is the feature decomposition method, which manipulates input features in constructing a diverse classifier ensemble. This method decomposes the input features while training the classifier ensemble. Therefore, this method is appropriate for high dimensionality data sets [15]. One of the cases of feature decomposition is feature set partitioning. Input features are randomly partitioned into several disjoint subsets. Consequently, each classifier is trained on different subsets. Feature set partitioning is appropriate for classification tasks containing a large number of features [16], [17]. However, it is difficult to determine how to form an optimal feature set partition to train classifiers to produce good performance. Reviews of the set partitioning problem highlight that the ant system, which is a variant of ant colony optimization (ACO), is the most promising technique to be applied [18].

The ACO algorithm was introduced by Marco Dorigo in the early 1990s. This algorithm is inspired by the behavior of ants in finding the shortest path from the colony to the food; in order to find the shortest route they leave a pheromone on their tour paths. The ant-based algorithm has shown better performance than other popular heuristics such as simulated annealing and genetic algorithms [19]. The ant system (AS) algorithm is a variant of the ant-based algorithm. It is the original and most used ant-based algorithm for solving many optimization problems [20]. The ant system has also been used to solve the set partitioning problem. Set partitioning problems are difficult and very complicated combinatorial issues [21]. The use of the ant system for the set partitioning problem has been applied in constructing a classifier ensemble [22].

The most popular, fundamental and straightforward combiner is majority voting [23]. Every individual classifier votes for one class label. The class label that most frequently appears in the outputs of the individual classifiers is the final output. To avoid the draw problem, the number of classifiers used for voting is usually odd. Majority voting is often used to combine multiple classifiers in order to solve classification problems [24].
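As a concrete illustration (not taken from the paper), a minimal majority-voting combiner can be sketched in Python; the list-of-labels input and the two-class example values are assumptions made only for this sketch:

```python
from collections import Counter

def majority_vote(labels):
    """Return the class label that appears most often among the
    individual classifier outputs; with an odd number of voters a
    two-class draw cannot occur."""
    return Counter(labels).most_common(1)[0][0]

# Three (an odd number of) classifier outputs for one test pattern:
print(majority_vote(["spam", "ham", "spam"]))  # prints: spam
```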
Previously, popular ensemble methods such as bagging, boosting and random forest have used majority voting in combining classifier outputs. The advantages of majority voting include simplicity and lower computational cost. Majority voting enables combination of the outputs of classifiers regardless of which classifier is used. It is an optimal combiner in several ensemble methods [25]. However, the disadvantage of this combiner is that it does not consider the strength of the classifiers [26].

Weighted voting is a trainable version of majority voting which, unlike majority voting, gives a weight to each classifier before voting. To make an overall prediction, a weighted vote of the classifier predictions is performed to predict the class. There are several ways to determine the weights of classifiers [27]. The advantages of weighted voting include its flexibility and the potential to produce better performances than majority voting. This combiner has the potential to make multiple classifier combinations more robust to the choice of the number of individual classifiers [28]. In addition, the accuracies of the classifiers can be reliably estimated, after which weighted voting may be considered [29]. Several studies have concentrated on weighted voting, and it has been proven to solve real-world problems such as face and voice recognition [30] and listed companies' financial distress prediction [31]. Therefore, in this study the weighted voting combiner is adapted as a combiner which considers the performance of each classifier.
2. RESEARCH METHOD
There are three steps to the research work: (1) classifier ensemble construction; (2) combiner construction; and (3) evaluation. In developing the multiple classifier system, effective combination must address the first two steps of ensemble construction and combiner construction. The ant system feature set partitioning algorithm is applied to construct the classifier ensemble, while the weighted voting technique is applied as the combiner. Figure 1 shows the architecture of the proposed method, which consists of two components, namely the ant system feature set partitioning and the weighted voting combiner.
Figure 1. Architecture of the proposed method for multiple classifier system
2.1. Classifier Ensemble Construction
The classifier ensemble is built based on the feature set partitioning algorithm. A disjoint feature set partition is carried out based on the input feature set. An algorithm based on the ant system is developed to perform the feature set partitioning. The number of feature partitions is determined by the number of individual classifiers. The required inputs include the feature set and category labels of the original data set. The input feature set is partitioned into different feature subsets and no feature in the training set is removed. Therefore, each individual classifier is trained on a different projection of the training set.
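The disjoint partitioning idea can be sketched as follows. Note that this is a hedged stand-in, not the paper's method: the paper optimizes the partition with the ant system, whereas this sketch simply deals features out at random, and the function name is an illustrative assumption:

```python
import random

def disjoint_partition(n_features, n_subsets, seed=0):
    """Split feature indices 0..n_features-1 into disjoint, non-empty
    subsets; every feature is assigned to exactly one subset, so no
    feature is removed."""
    rng = random.Random(seed)
    idx = list(range(n_features))
    rng.shuffle(idx)
    # Seed each subset with one feature so none is empty, then deal the rest.
    subsets = [[idx[i]] for i in range(n_subsets)]
    for i, f in enumerate(idx[n_subsets:]):
        subsets[i % n_subsets].append(f)
    return [sorted(s) for s in subsets]

parts = disjoint_partition(n_features=6, n_subsets=3)
# Each individual classifier j would then be trained on the projection
# X[:, parts[j]] of the training set.
```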
The flowchart for feature decomposition is depicted in Figure 2.

Figure 2. Flowchart of the ant system-based feature set partitioning algorithm
2.2. Combiner Construction
In this construction stage, the weighted voting method is used as the combiner. A learning process for each classifier on different partitions of features is performed by the ant system algorithm. Weights are given according to the performance of each classifier. The performance of each classifier depends on the feature set partition. Therefore, the voting weights of each classifier are updated dynamically based on the feature set partition. The idea behind this approach is that classifiers trained on different feature set partitions will provide different accuracies, although only one type of classifier is used in the ensemble. Classifiers that provide a high accuracy are more likely to classify patterns correctly.
Let D = {D_1, …, D_L} be a set of individual classifiers (or an ensemble of classifiers), where L is the number of individual classifiers. Let Ω = {ω_1, ω_2, ω_3, …, ω_C} be a set of class labels, where C is the number of classes. Let Z = {x_i, y_i}, i = 1, …, N, be a training set (a labelled dataset), where N is the number of instances, x_i ∈ ℜ^n is the n-dimensional feature vector of the i-th instance and y_i ∈ {ω_1, …, ω_C} is the class label of the i-th instance. Each classifier assigns an input feature vector to one of the predefined class labels, i.e., D_j : ℜ^n → Ω. The output of a classifier ensemble is an L-dimensional class label vector [D_1(x), …, D_L(x)]. The task is to combine the L individual classifier outputs to predict the class label, from the set of possible class labels, that makes the best classification of the unknown pattern.
In formulating the weighted voting combiner, let us assume that only the class labels are available from the classifier outputs, and define the decision of the j-th classifier as d_(j,k) ∈ {0,1}, j = 1, …, L and k = 1, …, C, where L is the number of classifiers and C is the number of classes. If the j-th classifier D_j chooses class ω_k, then d_(j,k) = 1, and 0 otherwise. The ensemble decision for the proposed weighted voting can be described as follows: choose class ω_(k*) if

∑_(j=1)^L w_j d_(j,k*) = max_(k=1,…,C) ∑_(j=1)^L w_j d_(j,k)   (1)

where w_j is the accuracy (or weight) of classifier D_j. The votes are multiplied by a weight before the actual voting. The weight is obtained by estimating the classification accuracy on a validation set.
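Equation (1) can be sketched directly in Python; representing each classifier's decision as a chosen class index (so that d_(j,k) is implicit) is an assumption made for compactness:

```python
def weighted_vote(decisions, weights, n_classes):
    """Equation (1): score[k] = sum_j w_j * d_(j,k), where d_(j,k) = 1
    iff classifier j chose class k; return the highest-scoring class."""
    scores = [0.0] * n_classes
    for chosen, w in zip(decisions, weights):
        scores[chosen] += w
    return max(range(n_classes), key=lambda k: scores[k])

# Three classifiers vote for classes 0, 1, 1; the weights are their
# validation-set accuracies:
print(weighted_vote([0, 1, 1], [0.9, 0.6, 0.7], n_classes=2))  # prints: 1
```

Note that with these weights class 1 wins (0.6 + 0.7 > 0.9) even though a plain majority vote would give the same answer; a single strong classifier (e.g. weight 0.95 against two weights of 0.4) can overturn the majority.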
2.3. Evaluation
In this step, the performance of the multiple classifiers constructed by the proposed ant system and weighted voting (ASWV) method is measured and compared with several other ensemble methods. Experiments were conducted on 9 (nine) benchmark datasets taken from the University of California, Irvine (UCI) repository. The k-Nearest Neighbour (k-NN) ensemble has also been used in the experiments. Table 1 shows a summary of the datasets used in the experiments.
Table 1. Summary of Datasets

No.  Datasets                   Number of Instances  Number of Classes  Number of Features  Features Types
1    Haberman                   306                  2                  3                   Integer
2    Iris                       150                  3                  4                   Real
3    Lenses                     24                   3                  4                   Categorical
4    Liver                      345                  2                  6                   Categorical, Integer, Real
5    Ecoli                      336                  8                  7                   Real
6    Pima Indians Diabetes      768                  2                  8                   Integer, Real
7    Tic-Tac-Toe                958                  2                  9                   Categorical
8    Glass                      214                  6                  9                   Real
9    Breast Cancer (Wisconsin)  699                  2                  9                   Categorical
The k-fold cross-validation method was applied in the process of obtaining the classification accuracy [32]. A set of labeled samples is randomly partitioned into k disjoint folds of equal size. Then, one of the k folds is randomly selected as the testing set and the remaining (k-1) folds are selected as the training set, with the assumption that there is at least one sample per class. The classification accuracy (acc) is the ratio of the number of correctly classified instances to the total number of instances, as shown in Equation 2.

acc = (number of correctly classified instances / total number of instances) * 100%   (2)
Finally, the estimate of the classification accuracy is obtained by dividing the total of all classification accuracies by the total number of folds or rounds, as shown in Equation 3.

acc_cv = (1/k) ∑_(i=1)^k acc_i   (3)

where acc_i is the classification accuracy of round i and k is the number of folds. A common choice for k-fold cross validation is k = 10. Extensive experiments have shown that 10 (ten) is the best choice to get an accurate estimate [33]–[35]. To obtain powerful performance estimation and comparisons, a large number of estimates is always preferred. Therefore, in this research, the experiments were conducted as ten repetitions of the 10-fold cross-validation method.
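The evaluation protocol above (Equations 2 and 3, repeated ten times) can be sketched as follows; `evaluate` is a caller-supplied function, assumed here for illustration, that trains on the given indices and returns the fold accuracy in percent:

```python
import random

def kfold_indices(n, k, rng):
    """Shuffle instance indices 0..n-1 and deal them into k disjoint folds."""
    idx = list(range(n))
    rng.shuffle(idx)
    return [idx[i::k] for i in range(k)]

def repeated_cv_accuracy(evaluate, n, k=10, repeats=10, seed=0):
    """Average the per-run accuracy acc_cv = (1/k) * sum_i acc_i
    (Equation 3) over `repeats` independent k-fold splits."""
    rng = random.Random(seed)
    run_means = []
    for _ in range(repeats):
        folds = kfold_indices(n, k, rng)
        accs = []
        for i in range(k):
            test = folds[i]
            train = [j for f in folds[:i] + folds[i + 1:] for j in f]
            accs.append(evaluate(train, test))  # accuracy in %, Equation 2
        run_means.append(sum(accs) / k)
    return sum(run_means) / repeats
```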
3. RESULTS AND ANALYSIS
The ant system algorithm was used to partition the feature set and weighted voting was used to combine the classifier outputs. Experiments were carried out on nine (9) datasets from the UCI repository. Ten (10) experiments, each consisting of the 10-fold cross validation method, were carried out to validate the accuracy of single k-NN and the constructed k-NN ensembles. Table 2 shows the average and standard deviation of the classification accuracies of single k-NN, constructed k-NN ensembles based on random subspace, and constructed k-NN ensembles with the use of ant system-based feature set partitioning, respectively. It can be seen that a small standard deviation was obtained for all methods, which indicates that the experiments were stable. The average accuracy of the multiple k-NN constructed by the proposed method was compared with the average accuracies of the original single k-NN and the k-NN ensembles constructed by the random subspace method. It can also be seen that the proposed method provides better accuracy than the single approach and the random subspace method in constructing k-NN ensembles. Improvements in accuracy are obtained on all datasets. The comparison of accuracies is as shown in Table 2.
Table 2. The Accuracy of Single k-NN, Random Subspace and Proposed Method

                     Single k-NN          k-NN with Random Subspace  k-NN with Ant System (Proposed Method)
No  Dataset          Average  Std. Dev.   Average  Std. Dev.         Average  Std. Dev.
1   Haberman         68.83    1.37        67.91    1.96              68.53    0.79
2   Iris             95.67    0.47        93.40    0.47              96.34    0.35
3   Lenses           77.92    2.81        62.50    4.17              86.67    1.76
4   Liver            62.32    1.00        60.06    3.48              65.48    1.35
5   Ecoli            81.19    0.61        81.19    1.70              81.91    0.31
6   Pima             67.37    0.81        70.59    1.32              71.22    0.00
7   Tic-Tac-Toe      75.77    0.45        75.70    2.19              78.81    0.39
8   Glass            72.71    0.83        72.71    1.86              73.54    0.43
9   Breast Cancer    95.78    0.28        97.23    0.31              98.09    0.00
The proposed algorithm was successfully applied to form the feature set partitions. Table 3 shows a summary of the results of implementing this proposed algorithm. This table presents the feature set partition and the number of classifiers.
Table 3. Obtained Feature Set Partition and Number of Classifiers

No  Dataset        Partition               Number of Classifiers
1   Haberman       [1 3][2]                2
2   Iris           [1 2 3 4]               1
3   Lenses         [1 2 3 4]               1
4   Liver          [1 4 6][3 5][2]         3
5   Ecoli          [1 2 3 4 5 6 7]         1
6   Pima           [1 3 4 7][5 6 8][2]     3
7   Tic-Tac-Toe    [1 2 3 4 5 6 7 8 9]     1
8   Glass          [1 2 3 4 5 6 7 8 9]     1
9   Breast Cancer  [1 2 4 7 9][3 5][6][8]  4
The accuracy of the proposed method was also compared to other common methods, as shown in Table 4. The accuracy of the proposed method was evaluated by comparing the results to: (1) single
classifier approach, (2) dynamic weighted voting [28], (3) improved k-NN classification using genetic algorithm (GA k-NN) [36], (4) simultaneous metaheuristic feature selection (SMFS) [37], (5) weighted k-NN ensemble method [27], (6) direct boosting algorithm [38], (7) cluster-oriented ensemble classifier (COEC) [39] and (8) evidential neural network [40]. The k-NN classifier was used as the base classifier. Based on the results, it can be seen that the proposed method gives the best classification accuracies as compared to the other methods on the Haberman and Breast Cancer datasets. In general, the proposed method gives good classification results and is comparable with the other methods.
Table 4. Comparison of Accuracies with Common Ensemble Methods

Dataset        1      2      3      4      5      6      7      8      9
Haberman       66.83  -      -      -      71.89  -      -      -      72.75
Iris           95.67  97.33  -      -      95.20  96.70  96.00  94.93  96.34
Ecoli          81.19  -      -      -      82.89  -      -      -      81.91
Glass          72.71  -      -      -      74.23  72.50  -      -      73.54
Pima           67.37  72.68  -      71.90  -      75.70  -      71.79  71.22
Breast Cancer  95.78  96.35  97.92  97.50  -      -      97.72  -      98.09

1. Single k-NN  2. Dynamic weighted voting  3. GA k-NN  4. SMFS  5. Weighted k-NN ensemble method
6. Direct boosting algorithm  7. COEC  8. Evidential neural network  9. Proposed ASWV method
4. CONCLUSION
A new method based on the integration of the ant system and weighted voting for multiple classifier systems has been presented. The ant system was applied to optimize the feature set partitioning activity, while weighted voting was used as the combiner. Experimental results show that the application of this method in combining several k-NNs as base classifiers outperforms single k-NN and is comparable with other ensemble methods. The results indicate that the proposed method can be applied in generating better k-NN ensembles. Furthermore, this method can determine the number of combined classifiers based on the number of formed partitions. Future research is to apply this method to other classifiers such as the Support Vector Machine, Neural Network and Decision Tree. The dynamic feature partition-selection approach can be considered to enhance the performance of this method. The method will, hopefully, be able to partition the feature set into several lower-dimensional feature sets, which would allow a set of classifiers to process low dimensional feature vectors simultaneously. Therefore, testing the ability of this method to overcome the high dimensional data and small training sample problems can be considered.
ACKNOWLEDGEMENTS
This work is supported by the Higher Education Ministry of Malaysia under the Long Term Research Grant Scheme (S/O code: 12490).
REFERENCES
[1] K. Kim, H. Lin, J. Y. Choi, and K. Choi, "A design framework for hierarchical ensemble of multiple feature extractors and multiple classifiers," Pattern Recognition, 2015.
[2] G. Baron, "Analysis of Multiple Classifiers Performance for Discretized Data in Authorship Attribution," in Intelligent Decision Technologies, vol. 73, pp. 33–42, 2017.
[3] S. Hegde, K. Achary, and S. Shetty, "A multiple classifier system for automatic speech recognition," International Journal of Computer Applications, vol. 101, no. 9, pp. 38–43, 2014.
[4] C. H. Wu and W. Bin Liang, "Emotion recognition of affective speech based on multiple classifiers using acoustic-prosodic information and semantic labels," IEEE Transactions on Affective Computing, vol. 2, no. 1, pp. 10–21, 2015.
[5] M. H. Sigari, S. A. Sureshjani, and H. Soltanian-Zadeh, "Sport video classification using an ensemble classifier," in Proceedings of 7th Iranian Conference on Machine Vision and Image Processing, 2011, pp. 2–5.
[6] B. Zhang, "Reliable face recognition by random subspace support vector machine ensemble," in Proceedings of the 2012 International Conference on Machine Learning and Cybernetics, 2012, vol. 1, pp. 415–420.
[7] A. Chharia and R. K. Gupta, "Email classifier: An ensemble using probability and rules," in Proceedings of 6th International Conference on Contemporary Computing, 2013, pp. 130–136.
[8] A. Margoosian and J. Abouei, "Ensemble-based classifiers for cancer classification using human tumor microarray data," in Proceedings of the 21st International Conference on Electrical Engineering (ICEE), 2013, pp. 1–6.
[9] R. P. A. Pramesti, Y. Herdiyeni, and A. S. Nugroho, "Weighted Ensemble Classifier for Plant Leaf Identification," TELKOMNIKA, vol. 16, no. 3, pp. 1386–1393, 2018.
[10] L. Deshpande and M. N. Rao, "Concept Drift Identification using Classifier Ensemble Approach," International Journal of Electrical and Computer Engineering (IJECE), vol. 8, no. 1, pp. 19–25, 2018.
[11] M. Kartiwi, T. S. Gunawan, T. Arundina, and M. A. Omar, "Sukuk Rating Prediction using Voting Ensemble Strategy," International Journal of Electrical and Computer Engineering (IJECE), vol. 8, no. 1, pp. 299–303, 2018.
[12] D. Hernández-Lobato, G. Martínez-Muñoz, and A. Suárez, "How large should ensembles of classifiers be?," Pattern Recognition, vol. 46, no. 5, pp. 1323–1336, 2013.
[13] P. Du, J. Xia, W. Zhang, K. Tan, Y. Liu, and S. Liu, "Multiple classifier system for remote sensing image classification: A review," Sensors, vol. 12, no. 4, pp. 4764–4792, 2012.
[14] S. Li, Z. Zheng, Y. Wang, C. Chang, and Y. Yu, "A new hyperspectral band selection and classification framework based on combining multiple classifier," Pattern Recognition Letters, pp. 1–8, 2016.
[15] L. Rokach, "Ensemble-based classifiers," Artificial Intelligence Review, vol. 33, no. 1–2, pp. 1–39, 2010.
[16] O. Maimon and L. Rokach, Decomposition methodology for knowledge discovery and data mining. Berlin Heidelberg: Springer-Verlag, 2005.
[17] H. Ahn, H. Moon, M. J. Fazzari, N. Lim, J. J. Chen, and R. L. Kodell, "Classification by ensembles from random partitions of high-dimensional data," Computational Statistics and Data Analysis, vol. 51, no. 12, pp. 6166–6179, 2007.
[18] B. Crawford, R. Soto, E. Monfroy, C. Castro, W. Palma, and F. Paredes, "A hybrid soft computing approach for subset problems," Mathematical Problems in Engineering, vol. 2013, 2013.
[19] G. Chicco, "Ant colony system-based applications to electrical distribution system optimization," in Ant Colony Optimization - Methods and Applications, A. Otsfeld, Ed. Rijeka, Croatia: InTech Open, 2011, pp. 237–262.
[20] R. Ribeiro and F. Enembreck, "A sociologically inspired heuristic for optimization algorithms: A case study on ant systems," Expert Systems with Applications, vol. 40, no. 5, pp. 1814–1826, 2013.
[21] E. Cetin, "The solution of crew scheduling problem with set partitioning model," Aviation and Space Technology, vol. 3, no. 4, pp. 47–54, 2008.
[22] Abdullah and K. R. Ku-Mahamud, "Ant system-based feature set partitioning algorithm for classifier ensemble construction," International Journal of Soft Computing, vol. 11, no. 3, pp. 176–184, 2016.
[23] L. Hansen and P. Salamon, "Neural network ensembles," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 10, pp. 993–1001, 1990.
[24] A. Hajdu, L. Hajdu, A. Jonas, L. Kovacs, and H. Toman, "Generalizing the majority voting scheme to spatially constrained voting," IEEE Transactions on Image Processing, vol. 22, no. 11, pp. 4182–4194, 2013.
[25] M. P. Ponti, "Combining classifiers: From the creation of ensembles to the decision fusion," in Proceedings of 24th SIBGRAPI Conference on Graphics, Patterns, and Images Tutorials, SIBGRAPI-T 2011, 2011, pp. 1–10.
[26] W. Wang, Y. Zhu, X. Huang, D. Lopresti, Z. Xue, R. Long, S. Antani, and G. Thoma, "A Classifier Ensemble Based on performance level estimation," in International Symposium on Biomedical Imaging: From Nano to Macro, 2009, pp. 342–345.
[27] S. Hamzeloo, H. Shahparast, and M. Zolghadri Jahromi, "A novel weighted nearest neighbor ensemble classifier," in Proceedings of the 16th International Symposium on Artificial Intelligence and Signal Processing, 2012, pp. 413–416.
[28] R. M. Valdovinos and J. S. Sánchez, "Combining multiple classifiers with dynamic weighted voting," in Proceedings of the 4th International Conference on Hybrid Artificial Intelligence Systems (HAIS'09), 2009, pp. 510–516.
[29] R. Polikar, "Ensemble based systems in decision making," Circuits and Systems Magazine, IEEE, vol. 6, no. 3, pp. 21–45, 2006.
[30] X. Mu, J. Lu, P. Watta, and M. H. Hassoun, "Weighted voting-based ensemble classifier with application to human face recognition and voice recognition," in Proceedings of International Joint Conference on Neural Networks, 2009, pp. 2168–2171.
[31] J. Sun and H. Li, "Listed companies' financial distress prediction based on weighted majority voting combination of multiple classifiers," Expert Systems with Applications, vol. 35, no. 3, pp. 818–827, 2008.
[32] P. M. Jena and S. R. Nayak, "Angular Symmetric Axis Constellation Model for Off-line Odia Handwritten Characters Recognition," International Journal of Advances in Applied Sciences (IJAAS), vol. 7, no. 3, pp. 265–272, 2018.
[33] R. Kohavi, "A study of cross-validation and bootstrap for accuracy estimation and model selection," in Proceedings of the 14th International Joint Conference on Artificial Intelligence, 1995, no. 2, pp. 1137–1145.
[34] M. Wozniak, "Classifier fusion based on weighted voting - Analytical and experimental results," 2008 Eighth International Conference on Intelligent Systems Design and Applications, pp. 687–692, 2008.
[35] M. Wozniak, "Evolutionary approach to produce classifier ensemble based on weighted voting," in Proceedings of World Congress on Nature and Biologically Inspired Computing, NaBIC, 2009, pp. 648–653.
[36] N. Suguna and K. Thanushkodi, "An Improved k-Nearest Neighbor Classification Algorithm Using Shared Nearest Neighbor Similarity," International Journal of Computer Science, vol. 7, no. 4, pp. 18–21, 2010.
[37] M. A. Tahir and J. Smith, "Creating diverse nearest-neighbour ensembles using simultaneous metaheuristic feature selection," Pattern Recognition Letters, vol. 31, no. 11, pp. 1470–1480, 2010.
[38] T. Koon, C. Neo, and D. Ventura, "A direct boosting algorithm for the k-nearest neighbor classifier via local warping of the distance metric," Pattern Recognition Letters, vol. 33, no. 1, pp. 92–102, 2012.
[39] B. Verma and A. Rahman, "Cluster Oriented Ensemble Classifier: Impact of Multicluster Characterization on Ensemble Classifier Learning," IEEE Transactions on Knowledge and Data Engineering, vol. 24, no. 4, pp. 605–618, 2012.
[40] H. He, D. Han, and Y. Yang, "Design of multiple classifier systems based on evidential neural network," in Chinese Control Conference, 2017, pp. 5496–5501.
BIOGRAPHIES OF AUTHORS

Abdullah Husin: He received his Sarjana and Master degrees in Computer Science from Gadjah Mada University, Yogyakarta, Indonesia. Now, he is working in the Bachelor's degree Information System Program Study, Engineering and Computer Science Faculty, Universitas Islam Indragiri, Indonesia. He received his Ph.D degree in Computer Science in 2015 from Universiti Utara Malaysia (UUM). His research interests include Data Mining, Optimization Algorithms, Multimedia Databases, and Pattern Recognition and Classification.

Ku Ruhana Ku-Mahamud: She holds a Bachelor in Mathematical Sciences and a Masters degree in Computing, both from Bradford University, United Kingdom, in 1983 and 1986 respectively. Her PhD in Computer Science was obtained from Universiti Pertanian Malaysia in 1994. As an academic, her research interests include ant colony optimization, pattern classification and the vehicle routing problem.