TELKOMNIKA, Vol. 11, No. 7, July 2013, pp. 3491~3497
e-ISSN: 2087-278X
Received January 13, 2013; Revised March 13, 2013; Accepted March 26, 2013
RBF Neural Networks Optimization Algorithm and Application on Tax Forecasting
YU Zhijun
Hefei University of Technology, Hefei 230009, China
Telp: +0086-15055700208, e-mail: 847672386@qq.com
Abstract
Tax plays a significant role in China's rapid economic growth. Therefore it is of particular importance to improve the predictability and accuracy of the tax plan. Tax data are so highly nonlinear and coupled that it is difficult to represent them accurately with an analytical mathematical model. In this paper, a new optimization algorithm for the RBF neural network based on the support vector machine and the genetic algorithm is presented. First, the genetic algorithm is used to select the parameters of the support vector machine automatically, and then the support vector machine is used to help construct the RBF neural network. The network based on this algorithm can be applied to nonlinear system identification tasks such as tax revenue forecasting. A case study on Chinese tax revenue during the last 30 years demonstrates that the network based on this algorithm is much more accurate than other prediction methods.
Keywords: RBF neural network; SVM; parameter optimization; tax forecasting
Copyright © 2013 Universitas Ahmad Dahlan. All rights reserved.
1. Introduction
Since the implementation of the Reform and Opening Policy, China has achieved high economic growth and historic progress in taxation. Tax revenue has become a major source of government revenue. With further economic development and the continual improvement of the market mechanism, tax policy will play an increasingly important role in the development of China's market economy. On the one hand, the policy not only affects citizens' disposable income and consumption behavior, but also influences enterprises' financial burden and development potential. On the other hand, the formulation of the state budget, the implementation of macro-control, and expenditure on infrastructure construction all depend on the total amount of tax revenue. Furthermore, constructing a stable and accurate tax forecasting model is the premise of, and the only way toward, a more scientific and rational tax plan. Therefore, in order to provide reliable information for tax planning and for budget and economic decisions, tax authorities at all levels should strengthen the analysis of tax forecasting and set up a prediction system, which of course needs to go hand in hand with improvements in the quality and efficiency of tax collection and management.
From different perspectives, scholars at home and abroad have conducted research on tax revenue forecasting and put forward quite a few forecasting methods. Although these methods have played an important guiding role in practical work, there are still some deficiencies. G. Duncan and other researchers developed a Bayesian forecasting model; the results show that the Bayesian forecasting model is superior to the single-variable multi-state Kalman filter method, and its relative accuracy increases as the length of the historical time series decreases [1]. The support vector machine based on principal component analysis can eliminate the redundant information of each index and reduce the input dimensions of the support vector machine, but it loses some valuable information when making the spatial reconstruction of the support vector machine indicator data [2]. The Error Correction Model, which is based on co-integration theory, contains only GDP and tax revenue, without considering other explanatory variables; this weakens the interpretation quality of the model [3-4]. The time series methods also have limitations. For example, the ARIMA model cannot reflect the relationships between tax and economic elements. In addition, simple time series methods do not describe external factors clearly enough [5-8].
The relationship between China's tax revenue and the economy changes a lot. Moreover, the continuing reform of tax rules leads to changes in the tax statistics caliber. For this reason, the parameters or structure of the grey prediction model are no longer applicable [9-10]. The neural network prediction method requires a large number of samples, and the convergence of the learning process is slow. There are some other defects, such as over-fitting, weak generalization capability of the model, and so on. Neural network training and learning are based on the causal relationship between dependent and independent variables that is implied in the samples. What is more, this learning cannot reflect changes in external factors and their effects on the prediction. When the environment of the prediction objects changes, the prediction accuracy will be greatly reduced [11]. Rough Set Theory is a more valid method for dealing with complicated systems. It can seek the relationship between tax revenue and its influencing factors by removing redundant information from the data directly. However, the consistency of sample classification and data characterization is closely related to the calculating speed of the method and the prediction accuracy, so whether the method can guarantee higher predictive accuracy involves a great deal of uncertainty [12].
On the basis of the literature above, and considering the basic characteristics of China's economic operation, a new optimization algorithm for the RBF neural network based on the support vector machine and the genetic algorithm is presented, in which the genetic algorithm is chosen to select the parameters automatically for the support vector machine, and then the support vector machine is used to help construct the RBF neural network. Based on this, a forecasting model of tax revenue is set up. This algorithm avoids not only the shortcoming of traditional algorithms, which easily fall into local minima, but also the large number of experiments or the experience needed to pre-specify the network structure. In the last part of this paper, algorithms such as ARMA, GM and LSSVR are compared with it to verify the rationality and effectiveness of the method. The results show that the forecasting model based on this method achieves better performance in tax forecasting than the other models, so it can be used as a new approach for tax forecasting.
2. Research Method
2.1. SVM Providing the Theoretical Foundation for the Structure and Parameters of RBF
The RBF network, which is a three-layered feed-forward network, maps the input vector directly onto the hidden-layer space by using radial basis functions. The output of the network is the linear weighted sum of the hidden units' outputs. The mapping of the RBF network from input columns to output columns is nonlinear, while the output of the network is linear in terms of the weights, and the output of the $k$-th hidden unit is
$$\phi_k(x)=\exp\!\left(-\frac{\|x-c_k\|^{2}}{2\sigma_k^{2}}\right) \qquad (1)$$
where $\|\cdot\|$ is the Euclidean norm, $\sigma_k$ is the width of the hidden-layer nodes, $c_k$ is the center of the hidden-layer nodes, and $X_i$ is the $i$-th input variable. $N$ denotes the number of hidden units and $w_k$ is the connection weight between the output and the $k$-th hidden-layer node; then the output of the RBF network is
$$f(x)=\sum_{k=1}^{N} w_k \exp\!\left(-\frac{\|x-c_k\|^{2}}{2\sigma_k^{2}}\right) \qquad (2)$$
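As a concrete illustration of Equations (1) and (2), the following minimal Python sketch evaluates a Gaussian RBF hidden layer and its linear output; the centers, widths, and weights are placeholder values chosen for the example, not parameters fitted in this paper.

```python
import numpy as np

# Minimal sketch of Eq. (1)-(2): a Gaussian RBF hidden layer followed by a
# linear output. Centers c_k, widths sigma_k and weights w_k are illustrative
# placeholders, not values fitted in this paper.
def rbf_output(x, centers, sigmas, weights):
    """Return f(x) = sum_k w_k * exp(-||x - c_k||^2 / (2 * sigma_k^2))."""
    x = np.asarray(x, dtype=float)
    d2 = np.sum((centers - x) ** 2, axis=1)      # squared Euclidean distances, Eq. (1)
    phi = np.exp(-d2 / (2.0 * sigmas ** 2))      # hidden-unit outputs
    return float(weights @ phi)                  # linear weighted sum, Eq. (2)

# Example with N = 3 hidden units and 2-dimensional inputs.
centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])
sigmas = np.array([0.8, 1.0, 1.2])
weights = np.array([0.5, -0.3, 1.1])
print(rbf_output([0.5, 0.5], centers, sigmas, weights))
```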
In accordance with Mercer's conditions, a kernel function is used to map a sample in the original space to a vector in a high-dimensional feature space. The Gaussian kernel function used here is as follows:
$$K(x,v_i)=\exp\!\left(-\frac{\|x-v_i\|^{2}}{2\sigma^{2}}\right) \qquad (3)$$
The number of hidden units of the SVM is the number of support vectors, and $g$ represents the number of support vectors. $w_i$ is the $i$-th weight between the hidden units and the output, $v_i$ denotes the $i$-th support vector, and $b$ is the bias. The SVM output is the linear combination of the hidden units; then
$$f(x)=\sum_{i=1}^{g} w_i K(x,v_i)+b=\sum_{i=1}^{g} w_i \exp\!\left(-\frac{\|x-v_i\|^{2}}{2\sigma^{2}}\right)+b \qquad (4)$$
The principles by which the RBF network and the SVM construct the RBF kernel space are different; however, they are comparable: there is a one-to-one correspondence between the network parameters, and the output of both networks is the linear weighted sum of the hidden-layer nodes' outputs.
2.2. GA Providing SVM Model Parameters
The genetic algorithm is a kind of stochastic optimization method that simulates natural selection and genetic variation in the process of biological evolution. It has a strong global search capability, and this ability does not depend on a specific solution model. This algorithm provides an effective way to solve the selection of the support vector machine model parameters.
This algorithm is applied to optimize the support vector machine model parameters, including the kernel parameter, the penalty factor $C$ and the insensitive loss function parameter. The basic steps of the algorithm are as follows:
Step 1: Select the initial population of individuals randomly;
Step 2: Evaluate the fitness function value of each individual;
Step 3: Choose a new generation of the population from the prior generation by using the selection operator;
Step 4: Apply the evaluation, selection, crossover and mutation operations to the new population obtained after the crossover and mutation operations are applied to the current population, and continue;
Step 5: If the fitness function value of the optimal individual is large enough, or the algorithm has run for many generations and the optimal fitness value of the individual cannot be improved perceptibly, then we obtain the insensitive loss function parameter, the penalty factor $C$ and the optimal value of the kernel function parameter, and we can also obtain the optimal classifier by using the training data sets.
The genetic algorithm optimizes the fitness function directly, whereas SVM model selection aims to minimize a generalization performance indicator. Therefore, it is necessary to transform the minimized generalization index into a maximized fitness function:
$$fit=\frac{1}{T+0.05} \qquad (5)$$
where $T=\dfrac{R^{2}\|w\|^{2}}{l}$ is the testing error bound, $\dfrac{1}{\|w\|}$ is the interval (margin) value, and $l$ is the number of samples.
After determining the fitness function, we can follow the above steps of the genetic algorithm to search for better model parameters for the support vector regression machine, and a support vector regression machine is then obtained by learning from the training samples.
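To make the procedure above concrete, the following sketch runs a small genetic algorithm over the SVR hyperparameters (the penalty factor C, the insensitive-loss parameter epsilon and the RBF kernel width gamma). As an assumption for illustration, the fitness is taken as 1/(validation MSE + 0.05), a stand-in for the bound T of Equation (5); the synthetic data, search ranges and GA settings are likewise placeholders rather than the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Placeholder data standing in for the tax indicator samples used in the paper.
X = rng.uniform(0.0, 1.0, size=(60, 7))
y = np.sin(X.sum(axis=1)) + 0.05 * rng.standard_normal(60)
X_tr, y_tr, X_val, y_val = X[:45], y[:45], X[45:], y[45:]

# log10 search ranges for (C, epsilon, gamma); assumed, not from the paper.
LO = np.array([-1.0, -3.0, -2.0])
HI = np.array([3.0, -0.5, 1.0])

def fitness(genes):
    """fit = 1 / (validation MSE + 0.05), a stand-in for Eq. (5)."""
    C, eps, gamma = 10.0 ** genes
    model = SVR(kernel="rbf", C=C, epsilon=eps, gamma=gamma).fit(X_tr, y_tr)
    mse = float(np.mean((model.predict(X_val) - y_val) ** 2))
    return 1.0 / (mse + 0.05)

pop = rng.uniform(LO, HI, size=(20, 3))              # Step 1: random initial population
for _ in range(30):                                  # a fixed number of generations
    fit = np.array([fitness(g) for g in pop])        # Step 2: evaluate fitness
    parents = pop[np.argsort(fit)[::-1][:10]]        # Step 3: truncation selection
    children = [parents[0].copy()]                   # elitism: keep the current best
    while len(children) < len(pop):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(3) < 0.5, a, b)  # Step 4: uniform crossover...
        child += rng.normal(0.0, 0.1, size=3)        # ...and Gaussian mutation
        children.append(np.clip(child, LO, HI))
    pop = np.array(children)

fit = np.array([fitness(g) for g in pop])            # Step 5: report the best parameters
C_best, eps_best, gamma_best = 10.0 ** pop[np.argmax(fit)]
print("best (C, epsilon, gamma):", C_best, eps_best, gamma_best)
```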
2.3. SVM Providing Network Structure and Parameters for RBF
The learning process of the support vector regression machine is a quadratic programming problem with linear constraints, and a well-trained regression machine can be used to determine the structure and parameters of the RBF network.
Considering the linear regression situation, we are given the samples $(X_1,Y_1),(X_2,Y_2),\ldots,(X_n,Y_n)\in \mathbb{R}^{n}\times \mathbb{R}$, and set the linear function to be $f(x)=w\cdot x+b$. The optimization problem is then to minimize the following function:
$$R(w,\xi,\xi^{*})=\frac{1}{2}\,w\cdot w+C\sum_{i=1}^{n}\left(\xi_i+\xi_i^{*}\right) \qquad (6)$$
$$\text{s.t.}\quad
\begin{cases}
f(x_i)-y_i\le \varepsilon+\xi_i, & i=1,\ldots,n\\
y_i-f(x_i)\le \varepsilon+\xi_i^{*}, & i=1,\ldots,n\\
\xi_i,\ \xi_i^{*}\ge 0, & i=1,\ldots,n
\end{cases} \qquad (7)$$
We solve for the parameters by using the Lagrange function and obtain the maximization problem
$$W(\alpha,\alpha^{*})=-\frac{1}{2}\sum_{i,j=1}^{n}(\alpha_i-\alpha_i^{*})(\alpha_j-\alpha_j^{*})(x_i\cdot x_j)-\varepsilon\sum_{i=1}^{n}(\alpha_i+\alpha_i^{*})+\sum_{i=1}^{n}y_i(\alpha_i-\alpha_i^{*}) \qquad (8)$$
$$\text{s.t.}\quad \sum_{i=1}^{n}(\alpha_i-\alpha_i^{*})=0,\qquad 0\le \alpha_i,\ \alpha_i^{*}\le C,\quad i=1,\ldots,n \qquad (9)$$
We solve this quadratic optimization problem to obtain $\alpha_i,\ \alpha_i^{*}$, and then $w=\sum_{i=1}^{n}(\alpha_i-\alpha_i^{*})x_i$. We can also obtain $b$ from

$$b=y_i-w\cdot x_i-\varepsilon \quad\text{or}\quad b=y_i-w\cdot x_i+\varepsilon \qquad (10)$$
Thus, the regression function is
$$f(x)=w\cdot x+b=\sum_{i=1}^{n}(\alpha_i-\alpha_i^{*})(x_i\cdot x)+b \qquad (11)$$
Considering that it is a nonlinear regression, we use the Gaussian function as the kernel function $K(x,y)$; then the regression function becomes
$$f(x)=\sum_{i=1}^{n}(\alpha_i-\alpha_i^{*})K(x_i,x)+b \qquad (12)$$
Assuming that the number of support vectors obtained from SVM training is $g$ ($g\le n$), the support vectors are $v_i,\ i=1,\ldots,g$, the offset coefficient is $b$, and the weight values are $w_i=\alpha_i-\alpha_i^{*},\ i=1,\ldots,g$, we can construct the RBF neural network by using these parameters: the Gaussian function is used as the kernel function, the number of input nodes is the dimension of the input matrices, the number of hidden units is $g$, and the number of output nodes is 1, the same as for the SVM. The radial basis function centers are $v_i,\ i=1,\ldots,g$, the offset coefficient is $b$, and the weight values are $w_i=\alpha_i-\alpha_i^{*},\ i=1,\ldots,g$. Since SVM training solves a quadratic optimization problem and is characterized by high learning efficiency and global optimality, the RBF network constructed based on the SVM can achieve better performance. The flow chart of the proposed method is shown in Figure 1.
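The construction described above can be sketched as follows, assuming scikit-learn's SVR plays the role of the support vector regression machine: after training, the support vectors become the RBF centers, the dual coefficients α_i − α_i* become the hidden-to-output weights w_i, and the intercept becomes the offset b. The data and the kernel width gamma are placeholder assumptions.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(50, 7))     # placeholder input indicators
y = X @ rng.uniform(-1.0, 1.0, size=7)      # placeholder target series

gamma = 0.5                                 # assumed kernel width; gamma = 1/(2*sigma^2)
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01, gamma=gamma).fit(X, y)

centers = svr.support_vectors_              # RBF centers v_i (the g support vectors)
weights = svr.dual_coef_.ravel()            # w_i = alpha_i - alpha_i^*
bias = float(svr.intercept_[0])             # offset coefficient b

def rbf_network(x):
    """RBF network of Eq. (4)/(12) built from the trained regression machine."""
    d2 = np.sum((centers - np.asarray(x)) ** 2, axis=1)
    return float(weights @ np.exp(-gamma * d2) + bias)

# The constructed network reproduces the SVR prediction on any input.
x0 = X[0]
print(rbf_network(x0), float(svr.predict(x0.reshape(1, -1))[0]))
```

By construction the resulting RBF network reproduces the SVR output, which is exactly the one-to-one correspondence between the two models noted in Section 2.1.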
3. Results and Analysis
3.1. Selection of Training Sample Data and Indexes
In this paper, we select the relevant economic data from 1980 to 2011. The data from 1980 to 2008 are used as training samples, and the rest are used as test samples. The tax revenue data are set as the characteristic sequence X0, and then we select the following 7 indexes as the relevant factor sequences for the analysis, according to the size of the influencing factors, the comparability of the information and the requirements of the prediction model.
Figure 1. Proposed method's flow chart
Among these indexes, three are industry indexes, which have a direct impact on the level of tax revenue: the value-added of the primary industry (X1), the value-added of the secondary industry (X2), and the value-added of the tertiary industry (X3). There are also indexes that show the size of tax revenue directly or indirectly, including nationwide fixed asset investment (X4) and the total volume of foreign trade (X5). Some indexes show the people's living standards and have a direct impact on total tax revenue growth, such as the total retail sales of social consumption goods (X6). Finally, there is an index that reflects the relationship between revenue growth and economic development, namely the rural and urban residents' deposit balance (X7).
3.2. Identification Results
The numbers of input layer nodes and output layer nodes are 7 and 1, and the number of hidden units in the RBF network is set to 6. The Gaussian function center vectors are the support vectors, and the width of the Gaussian function is the same as that of the regression machine. According to $w_i=\alpha_i-\alpha_i^{*},\ i=1,\ldots,g$, the corresponding weights are w = [-0.0093, 0.0582, 0.1209, 0.2860, 0.9892, 11.1850]. After standardization of the data, the identification results are shown in Figure 2.
Figure 2. Identification results (normalized identification results and training data, 1980–2005)
3.3. Error Comparison Analysis
The validity of the forecasting method can be evaluated by using the following two error indicators.
(1) Mean absolute percentage error
$$MAPE=\frac{1}{N}\sum_{t=1}^{N}\left|\frac{x_t-\hat{x}_t}{x_t}\right| \qquad (13)$$
(2) Mean square prediction error
$$MSPE=\frac{1}{N}\sum_{t=1}^{N}\left(\frac{x_t-\hat{x}_t}{x_t}\right)^{2} \qquad (14)$$
where $x_t$ is the actual value of $X$ at time $t$, and $\hat{x}_t$ is its predicted value.
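A short sketch of the two indicators, following Equations (13) and (14); the arrays used in the example are placeholder values, not the tax series of the case study.

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, Eq. (13)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs((actual - predicted) / actual)))

def mspe(actual, predicted):
    """Mean square (percentage) prediction error, as in Eq. (14)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(((actual - predicted) / actual) ** 2))

# Placeholder example values (not the paper's data).
x_true = np.array([1.00, 1.12, 1.25, 1.41])
x_hat = np.array([0.98, 1.15, 1.22, 1.45])
print(mape(x_true, x_hat), mspe(x_true, x_hat))
```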
A comparison of the predicted results is shown in Figure 3, and Table 1 compares the forecasting errors according to the above two indicators.
Figure 3. Comparison of the predicted results
Table 1. Comparison of the forecasting errors

Forecasting performance evaluation standard    MAPE      MSPE
The algorithm in this paper                    0.0122    0.0080
GM model                                       0.0557    0.0338
LSSVR model                                    0.0548    0.0332
ARMA model                                     0.0767    0.0473
The results shown in Figure 3 and Table 1 demonstrate that both error indexes of this prediction method are lower than those of the other single forecasting models, which indicates that the prediction method proposed in this paper can effectively improve prediction accuracy.
4. Conclusion
Accurate tax revenue forecasting has become a most important management goal. However, tax revenue in the socio-economic system is subject to quite a few uncertain qualitative and quantitative factors and often presents nonlinear data patterns. For exactly that reason, it is difficult to represent accurately with an analytical mathematical model. Therefore, a forecasting approach with strong general nonlinear mapping capability is essential.
The algorithm proposed in this paper first uses the genetic algorithm to optimize the model parameters of the support vector regression machine, and then this regression machine supplies the RBF neural network with a superior structure and parameters. A forecasting model of tax revenue based on the RBF neural network is put forward, aiming at the problem of tax revenue forecasting. Compared with the traditional methods of tax revenue forecasting, it avoids the disadvantage of traditional algorithms, which are often trapped in local minima; what is more, this method effectively improves generalization and does not need a large number of experiments or empirical experience to pre-specify the network structure. According to the case study, the method has higher precision, good generalization ability and good classification ability.
References
[1] Duncan G, Gorr W, Szczypula J. Bayesian forecasting for seemingly unrelated time series: application to local government revenue forecasting. Management Science. 1993; 39(3): 275-293.
[2] Zhang Yu, Yin Teng-fei. Study on Tax Forecasting Based on Principal Component Analysis and Support Vector Machine (in Chinese). Computer Simulation. 2011; 9(28): 357-360.
[3] Zhang Shao-qiu. Research on tax prediction error correction model based on co-integration theory (in Chinese). Journal of South China Normal University (Natural Science Edition). 2006; (1): 9-14.
[4] Xue Wei, Zhang Man. Granger Cause and Co-integration Test on China's Tax Revenue (in Chinese). Journal of Central University of Finance & Economics. 2005; (11): 6-19.
[5] Zhang Xin-bo. The application of time series model in tax forecasting (in Chinese). Journal of Hunan Tax College. 2010; 8(23): 30-32.
[6] Zhang Meng-yao, Cui Jine-huan. Study on monthly central tax revenue forecasting models based on time series method. J. Sys. Sci & Math. Scis. 28(11): 1383-1390.
[7] Shang Kai, Zhang Zhi-hui. The Economic Factor Analysis of Impacts on Changes of China's Tax Revenue Growth Rate (in Chinese). Economy and Management. 2008; 7(22): 15-19.
[8] Mocan HN, Azad S. Accuracy and rationality of state general fund revenue forecasts: evidence from panel data. International Journal of Forecasting. 1995; (11): 417-427.
[9] Yu Qun, Li Wei-min, Shen Mao-xing. Application of Gray Sequence Prediction in Tax Forecasting (in Chinese). Journal of System Simulation. 2006; 8(18): 971-972.
[10] Sun Zhi-yong. A Study on Tax Forecasting Model Based on Grey Theory (in Chinese). Journal of Chongqing University (Social Science Edition). 2010; (16): 41-45.
[11] Zhang Shao-qiu, Hu Yue-ming. Taxation Forecasting Model Based on BP Neural Network (in Chinese). Journal of South China University of Technology (Natural Science Edition). 2006; 34(6): 55-58.
[12] Liu Yun-zhong, Xuan Hui-yu, Lin Guo-xi. Application Research on Tax Forecasting in China Based on Rough Set Theory (in Chinese). Systems Engineering Theory & Practice. 2004; 10: 98-103.
[13] Gayathri K, Kumarappan N. Accurate fault location on EHV lines using both RBF based support vector machine and SCALCG based neural network. Expert Systems with Applications. 2010; (37): 8822-8830.
[14] Wang Ling-zhi, Wu Jian-sheng. Application of Hybrid RBF Neural Network Ensemble Model Based on Wavelet Support Vector Machine Regression in Rainfall Time Series Forecasting. Proceedings of the 2012 5th International Joint Conference on Computational Sciences and Optimization, CSO 2012: 867-871.
[15] Olej Vladimír, Filipová Jana. Modelling of Web Domain Visits by Radial Basis Function Neural Networks and Support Vector Machine Regression. IFIP Advances in Information and Communication Technology. 2011; 364(2): 229-239.
[16] Olej Vladimír, Filipová Jana. Short time series of website visits prediction by RBF neural networks and support vector machine regression. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 2012; 7267(1): 135-142.
[17] Ren Jin-xia, Yang Sai. RBF Neural Networks Optimization Algorithm Based on Support Vector Machine and Its Application. 2nd International Conference on Information Engineering and Computer Science Proceedings, ICIECS 2010.
[18] Wang Bing, Wang Xiaoli. Perception Neural Networks for Active Noise Control Systems. TELKOMNIKA Indonesian Journal of Electrical Engineering. 2012; 10(7): 1815-1822.
[19] Patricia Melin, Victor Herrera, Danniela Romero, Fevrier Valdez, Oscar Castillo. Genetic Optimization of Neural Networks for Person Recognition Based on the Iris. TELKOMNIKA Indonesian Journal of Electrical Engineering. 2012; 10(2): 309-320.