TELKOMNIKA Indonesian Journal of Electrical Engineering
Vol. 16, No. 3, December 2015, pp. 423~430
DOI: 10.11591/telkomnika.v16i3.9375
Received July 21, 2015; Revised October 19, 2015; Accepted November 9, 2015
Layer Recurrent Neural Network Based Power System Load Forecasting
Nikita Mittal1*, Akash Saxena2
1Department of Electrical Engineering, Yagyavalkya Institute of Technology
2Swami Keshvanand Institute of Technology, Jaipur, India-302017
*Corresponding author, e-mail: er.nikitamittal@gmail.com
Abstract
This paper presents a straightforward application of Layer Recurrent Neural Network (LRNN) to predict the load of a large distribution network. Short term load forecasting provides important information about the system's load pattern, which is a premier requirement in planning periodical operations and facility expansion. Approximation of data patterns for forecasting is not an easy task to perform. In the past, various approaches have been applied for forecasting. In this work the application of LRNN is explored. The results of the proposed architecture are compared with other conventional topologies of neural networks on the basis of Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE) and Mean Absolute Error (MAE). It is observed that the results obtained from LRNN are comparatively more significant.

Keywords: artificial neural network, electricity load forecasting, layer recurrent neural network, linear regression, short term load forecasting
Copyright © 2015 Institute of Advanced Engineering and Science. All rights reserved.
1. Introduction
A load is a device or a set of devices which consume energy from the power system networks. Consumption of energy varies with respect to time because the pattern of usage of electricity by consumers cannot be controlled [1]. Forecasting of load is an important and challenging task due to its non-smooth behavior. Load forecasting helps the power dispatching department to accurately and conveniently generate electricity and helps in cost savings. Short Term Load Forecasting (STLF) is an important tool for cost savings and it helps to maintain the continuity of electricity supply [2]. STLF is important when load patterns are required to be predicted in advance. Customers are given incentives to modify their usage pattern to avoid the usage of energy at peak hours, which can help to reduce the burden on electricity utilities [3].

Load forecasting can be broadly divided into three categories: short-term forecasts, which are usually from one hour to one week; medium-term forecasts, which are usually from a week to a year; and long-term forecasts, which are longer than a year [4]. In order to meet load requirements and cost efficiency, accurate load forecasting is necessary. Underestimation of load may lead to breakdown of the power system network due to stability problems, while overestimation leads to starting of extra generating units and a cost-inefficient system. Also, in smart prepaid meters, units of energy need to be purchased in advance, so it is required to determine the accurate value of daily consumption of units in advance through load forecasting [5]. For accurate forecasting we need to know the features which affect the load pattern. These features may be temperature and other weather conditions like humidity, the price of electricity, the type of area (commercial, industrial and domestic) or the type of consumers. Feature selection is the process of selecting a set of representative features that are relevant and sufficient for building a prediction model. Appropriate feature selection improves the accuracy of prediction (forecasting) models [6]. Many load forecasting techniques have been proposed and applied to forecasting models successfully. Broadly they can be classified into conventional and advanced learning (artificial intelligence) techniques. Conventional methods are based on the relationship between load and the factors affecting load. These methods are simple but erroneous due to the non-linear relationship between load and the factors affecting load [9]. Nowadays vector machine model theory [6], artificial neural networks and support vector machine models are used for short term load forecasting. In [7] the author presented self-similarity theory combined with fractal interpolation theory for short term power load forecasting. Various other techniques are used for load forecasting; they can be classified as traditional techniques, classical techniques or hybrids of both. Artificial neural networks are used in load forecasting due to their capability to model non-linear mapping relations between inputs and outputs, and they can learn from a set of examples [8].

STLF is a potential area of research and various techniques have been employed by researchers to forecast the load [4-9]. A brief survey of the literature is presented below to establish the relevance of the work presented in this paper. In the linear regression method, a functional clustering procedure is used to classify daily load curves, and then a family of functional linear regression models is defined. For forecasting, a new load curve is assigned to clusters by applying a functional discriminant analysis [10]. The Kalman filter is used to estimate the load model parameters. Models proposed in conjunction with Kalman filter estimation consider either the dependence of the load on the weather or on the previous load as a time-series autoregressive model. Hybrid models can also be used to express the load as a combination of both to predict future loads [11]. Fuzzy expert systems can incorporate a set of IF-THEN rules and experts' opinions. Historical data are converted into fuzzy information and then forecasting is performed. Fuzzy expert systems can give results with high accuracy [12, 13]. Support vector machines (SVM) generate a model which will predict unknown output based on known input [21]. SVMs are based on the principle of structural risk minimization, which is used in neural networks [6]. In [14] the authors applied the back-propagation learning algorithm to train an ANN for forecasting time series. In [15] the author demonstrated that ANN can be applied in STLF with accepted accuracy. Dillon et al. [16] used adaptive pattern recognition and self-organizing techniques for STLF. Different techniques, namely regression, multiple regression, exponential smoothing, iterative reweighted least squares, adaptive load forecasting, stochastic time series autoregressive models, ARMA models [17], ARIMA models [18], support vector machines [19], and soft-computing-based models (genetic algorithms, fuzzy logic, neural networks and knowledge-based expert systems), etc., have been applied to load forecasting. It is concluded that demand forecasting techniques based on soft computing methods are gaining major advantages for their effective use [20].

In this paper we present STLF using LRNN. An LRNN contains at least one feedback connection, so the activations can flow round in a loop. This enables the network to do temporal processing and learn sequences. It is observed that LRNN leads to a reduction in forecasting error. The Levenberg-Marquardt algorithm, which is the most widely used optimization algorithm, is used to train the network.

This paper is organized as follows: Section 2 presents an overview of load forecasting using ANN, and the Levenberg-Marquardt algorithm is explained. Section 3 presents the problem formulation and evaluation criteria. Section 4 presents the results and analysis. Section 5 presents the conclusion of the work.
simila
rity theory combin
ed
with fra
c
tal i
n
terpol
at
ion t
heory to
sh
ort term po
we
r
load fo
re
ca
sting.
Variou
s
other tech
niqu
es
are
used fo
r load
fore
ca
sting, they ca
n be
cl
assifi
ed a
s
traditio
nal
techni
que
s, classical tech
nique
s or hy
brid of
both.
Artificial neu
ral netwo
rks
are u
s
ed in l
oad
forecastin
g d
ue to their capability to model no
n-li
near m
appi
n
g
relation
s b
e
twee
n input
s and
outputs a
nd they can le
arn
from a set of example
s
[8].
STLF is a po
tential area o
f
resea
r
ch an
d variou
s techniqu
es have
been emplo
y
ed by
resea
r
chers to forecast the
load [4-9]. A
brief su
rv
ey of literature is
pre
s
ente
d
bel
ow to establi
s
h
relevan
c
e
of
work
pre
s
e
n
ted in thi
s
p
a
per. In
Linea
r regressio
n
method, fun
c
tional cl
uste
ri
ng
pro
c
ed
ure is
use
d
to
cla
s
sify daily load
curve
s
and th
en a family
of function
al lin
ear
reg
r
e
s
sio
n
model
s is
de
fined. For fo
recastin
g, a
new l
oad
cu
rves i
s
a
ssi
g
ned to
clu
s
ters, a
pplying
a
function
al di
scrimin
ant a
nalysi
s
[10]. Kalm
an filter i
s
u
s
ed
to estimate
the load
mo
del
para
m
eters.
Model
s p
r
op
ose
d
in
co
nju
n
ction
with
kalman
filters estimation
co
nsid
er either the
depe
nden
ce
of the lo
ad
o
n
the
we
ather or on th
e p
r
e
v
ious l
oad
a
s
a time
se
rie
s
autoreg
re
ssi
ve
model
s.Hyb
r
i
d
mo
del
s
can
also
be
used
to exp
r
e
s
s th
e loa
d
a
s
a
combinatio
n of
both
to p
r
e
d
ict
f
u
t
u
re loa
d
s [
11]
.
Fuz
z
y
ex
pert
sy
st
em
s ca
n in
co
rp
orate a
set o
f
IF-THEN
ru
les an
d expe
rt’s
opinio
n
. Hi
sto
r
ical
data
are
conve
r
ted i
n
to fuzzy
info
rmation an
d t
hen fo
re
casti
ng is
pe
rform
ed.
Fuzzy expert
system
s can
give resu
lts with high accu
racy [12, 13].
Suppo
rt vector ma
chin
es
(SVM) ge
nerate
a model
whi
c
h will
predict un
kn
own outpu
t
based on kno
w
n input [21]. SVM’s are b
a
se
d on prin
ci
ple of stru
ctu
r
al risk minim
i
zation which is
use
d
in n
eural networks [
6
]. In [14] author h
a
ve ap
p
lied ba
ck-p
ro
pagatio
n lea
r
ning al
gorith
m
to
train ANN for forec
a
s
t
ing time series
.In [15]
autho
r h
a
s
de
mon
s
trat
ed that ANN
can
be a
pplie
d
in STLF
with accepted accuracy.
Dillon et al
. [16] used
adaptive patte
rn recognition
and self-
orga
nizi
ng te
chni
que
s for
STLF.
Different techniqu
es na
m
e
ly; regre
s
sion,
multiple reg
r
e
ssi
on, exponential smoothing
,
iterative reweighted lea
s
t squares,
adaptive loa
d
forecastin
g, stoch
a
sti
c
time seri
es
autore
g
ressiv
e,
ARMA m
odel [17], ARIMA model
[18], suppo
rt vector ma
chin
e [19], soft
computing based
model
s-
genetic al
gorithm
s
, fuzzy logi
c, neural
net
works and knowl
e
dge
based exp
e
rt
system
s et
c.
have b
een
a
pplied to
l
oad
fore
ca
sting.
It is co
ncl
ude
d that dem
an
d
forecastin
g te
chni
que
s b
a
sed on
soft co
mputing m
e
th
ods
are gai
ni
ng maj
o
r
adv
antage
s fo
r th
eir
effective use
[20].
In this pa
per we p
r
e
s
ent
STLF u
s
ing
LRNN.L
RNN contain
s
a
t
least on
e feedb
ack
con
n
e
c
tion, so the a
c
tivations
ca
n flow
roun
d in
a lo
op.This en
abl
es the
net
work to d
o
temp
oral
pr
oc
es
s
i
ng and lear
ning
s
e
quenc
e
s
.It is
obs
e
r
v
ed that LRNN leads
to a r
e
duc
t
ion in a
forecastin
g e
r
ror.
Levenb
erg-Ma
rqu
a
rdt algorith
m
is
u
s
ed
to trai
n the net
wo
rk which i
s
th
e m
o
st
widely u
s
ed o
p
timization al
gorithm.
This
pap
er i
s
org
ani
zed
a
s
follo
ws:
se
ction 2 p
r
e
s
e
n
ts an
overvie
w
of lo
ad fo
reca
sting
usin
g ANN a
nd Leven
be
rg-Ma
r
qu
ardt algorith
m
is
explained.Se
ction 3 p
r
e
s
e
n
ts the Probl
em
formulatio
n a
nd evaluatio
n crite
r
ion. Section
4 p
r
e
s
ent
s the re
sults a
nd an
alysis. Sectio
n 5
pre
s
ent
s the con
c
lu
sio
n
of the work.
2. Artificial Neural Network
The artificial neural network was invented in 1958 by psychologist Frank Rosenblatt [22]. An ANN operates by creating many different processing elements, each analogous to a single neuron in a biological brain. The ANNs are trained by adapting a network and comparing the output obtained with the input training and target data. The training is carried out to match the network output to the target data. The ANN consists of an input layer and an output layer. The layer in between these two layers is the hidden layer. The neural network uses weights for each input variable and a bias that acts as a threshold to produce outputs. A sigmoid function is chosen as the function that calculates the output using weights and biases, as it shows a great similarity to real neurons:

\sigma(z) = \frac{1}{1 + e^{-z}}    (1)
Let the input vector at layer 1 be denoted by x. Let w^l_{j,k} denote the weight for the connection from the k-th neuron in the (l-1)-th layer to the j-th neuron in the l-th layer. Let the total number of neurons in the (l-1)-th and l-th layers be K and J, respectively. Let L be the total number of layers, and let b^l_j denote the bias of the j-th neuron in the l-th layer. The activation is denoted as:

a^l_j = \sigma\left(\sum_{k=1}^{K} w^l_{j,k}\, a^{l-1}_k + b^l_j\right), \quad j = 1, \ldots, J    (2)
Given the last layer, the activation for the j-th neuron is computed as:

a^L_j = \varphi\left(\sum_{k=1}^{K} w^L_{j,k}\, a^{L-1}_k + b^L_j\right), \quad j = 1, \ldots, J    (3)

where \varphi(z) = z is a linear function. The weight w^l_{j,k} is an entry of the weight matrix W^l defined as:
W^l = \begin{bmatrix} w^l_{1,1} & w^l_{1,2} & \cdots & w^l_{1,K} \\ w^l_{2,1} & w^l_{2,2} & \cdots & w^l_{2,K} \\ \vdots & \vdots & & \vdots \\ w^l_{J,1} & w^l_{J,2} & \cdots & w^l_{J,K} \end{bmatrix}    (4)
where a^l_j and b^l_j are entries of the vectors defined as:

a^l = [a^l_1, a^l_2, \ldots, a^l_J], \quad b^l = [b^l_1, b^l_2, \ldots, b^l_J]    (5)
This activation function is the output of the neural network. We choose sigmoid and linear functions. The reason is that a network consisting of two layers, where the first layer is sigmoid and the second layer is linear, can be trained to approximate any function having a finite number of discontinuities. The neural network finds the weights and biases by minimizing the following cost function:

C = \frac{1}{2} \sum_{t} \left\| y_t - a^L_t \right\|^2    (6)
where y_t is the required known output at instant t, and a^L_t is the output from the final L-th layer at instant t.
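The forward pass of equations (1)-(3) and the cost of equation (6) can be sketched as follows. This is an illustrative NumPy sketch, not the authors' MATLAB implementation; the layer sizes are chosen to match the configuration used later in the paper (5 input features, 10 hidden neurons, 1 output).

```python
import numpy as np

def sigmoid(z):
    # Equation (1): sigma(z) = 1 / (1 + exp(-z))
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """Two-layer network: sigmoid hidden layer, linear output layer.

    Equation (2): a1 = sigma(W1 @ x + b1)   (hidden activations)
    Equation (3): aL = W2 @ a1 + b2         (linear output, phi(z) = z)
    """
    a1 = sigmoid(W1 @ x + b1)
    return W2 @ a1 + b2

def cost(targets, outputs):
    # Equation (6): C = 1/2 * sum_t ||y_t - a^L_t||^2
    return 0.5 * np.sum((targets - outputs) ** 2)

# Tiny example with randomly initialized weights
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(10, 5)), np.zeros(10)
W2, b2 = rng.normal(size=(1, 10)), np.zeros(1)
x = rng.uniform(0.1, 0.9, size=5)
y_hat = forward(x, W1, b1, W2, b2)
```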
2.1. Levenberg-Marquardt Algorithm
The Levenberg-Marquardt (LM) algorithm is the most widely used optimization algorithm [23]. It outperforms simple gradient descent and other conjugate gradient methods in a wide variety of problems. In fitting a function y'(t; p) of an independent variable t and a vector of n parameters p to a set of m data points (t_i, y_i), it is customary and convenient to minimize the sum of the weighted squares of the errors between the measured data y(t_i) and the curve-fit function y'(t_i; p). This scalar-valued goodness-of-fit measure is called the chi-squared error criterion:
\chi^2(p) = \sum_{i=1}^{m} \left[ \frac{y(t_i) - y'(t_i; p)}{w_i} \right]^2    (7)

\chi^2(p) = y^T W y - 2\, y^T W y' + y'^T W y'    (8)

The value w_i is a measure of the error in measurement y(t_i).
This algorithm adaptively varies the parameter updates between the gradient descent update and the Gauss-Newton update:

\left[ J^T W J + \lambda I \right] h_{lm} = J^T W (y - y')    (9)
where small values of the algorithmic parameter \lambda result in a Gauss-Newton update and large values of \lambda result in a gradient descent update. The parameter \lambda is initiated to be large so that the first updates are small steps in the steepest-descent direction. If an iteration happens to result in a worse approximation, \lambda is increased. As the solution improves, \lambda is decreased, the Levenberg-Marquardt method approaches the Gauss-Newton method, and the solution typically accelerates to the local minimum. Marquardt's suggested update relationship makes the effects of the particular values of \lambda less problem-specific, and is used in the Levenberg-Marquardt algorithm implemented in the MATLAB function:

\left[ J^T W J + \lambda \, \mathrm{diag}(J^T W J) \right] h_{lm} = J^T W (y - y')    (10)
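A minimal sketch of the LM update of equation (10), assuming a finite-difference Jacobian and the simple increase/decrease rule for lambda described above. MATLAB's built-in implementation is more elaborate; this is only meant to show the mechanics.

```python
import numpy as np

def levenberg_marquardt(f, p0, t, y, w=None, lam=1e2, n_iter=50):
    """Fit y'(t; p) to data (t_i, y_i) by solving equation (10):
    [J^T W J + lam * diag(J^T W J)] h = J^T W (y - y').
    lam is increased on a rejected step (gradient-descent-like) and
    decreased on an accepted step (Gauss-Newton-like)."""
    p = np.asarray(p0, dtype=float)
    W = np.diag(w if w is not None else np.ones(len(y)))
    def chi2(p):
        r = y - f(t, p)
        return r @ W @ r                       # equation (7) with unit weights
    for _ in range(n_iter):
        r = y - f(t, p)
        # Finite-difference Jacobian of y'(t; p) with respect to p
        J = np.empty((len(t), len(p)))
        for j in range(len(p)):
            dp = np.zeros_like(p); dp[j] = 1e-7
            J[:, j] = (f(t, p + dp) - f(t, p)) / 1e-7
        A = J.T @ W @ J
        h = np.linalg.solve(A + lam * np.diag(np.diag(A)), J.T @ W @ r)
        if chi2(p + h) < chi2(p):
            p, lam = p + h, lam * 0.5          # accept: move toward Gauss-Newton
        else:
            lam *= 2.0                         # reject: move toward gradient descent
    return p

# Fit a line y = p0 + p1 * t to noiseless data; recovers (1, 2)
t = np.linspace(0.0, 1.0, 20)
y = 1.0 + 2.0 * t
p = levenberg_marquardt(lambda t, p: p[0] + p[1] * t, [0.0, 0.0], t, y)
```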
3. Preliminaries
3.1. Problem Formulation
In this work, load forecasting in the order of a few minutes is under consideration. The basic prediction can be expressed in a time-series model, in which the future load exclusively relies on the historical data of the load. In the modeling process, the data is selected from a distributed grid. A set of 1000 data points is selected for training and another set of 1000 data points is used for model validation. The number of layers in the LRNN is 2 and the data division is random. The training algorithm used in the LRNN is the Levenberg-Marquardt algorithm. The number of neurons in the first layer is 10. Figure 2 depicts some portions of the training and validation datasets.
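The layer-recurrent topology with the configuration above (a recurrent first layer of 10 neurons whose activations feed back to its own input, followed by a linear output layer) can be sketched as an Elman-style forward pass. This is illustrative only, not the MATLAB network the authors trained.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lrnn_forward(X, Wx, Wr, b1, W2, b2):
    """Layer-recurrent forward pass over a sequence of input vectors X.

    The hidden layer receives the current input and its own previous
    activation through the recurrent weights Wr: this feedback loop is
    what enables the temporal processing described in the introduction.
    """
    h = np.zeros(Wr.shape[0])          # initial hidden state
    outputs = []
    for x in X:
        h = sigmoid(Wx @ x + Wr @ h + b1)
        outputs.append(W2 @ h + b2)    # linear output layer
    return np.array(outputs)

# 10 hidden neurons and 5 input features, matching the configuration above
rng = np.random.default_rng(1)
Wx = rng.normal(scale=0.1, size=(10, 5))
Wr = rng.normal(scale=0.1, size=(10, 10))
b1, W2, b2 = np.zeros(10), rng.normal(scale=0.1, size=(1, 10)), np.zeros(1)
X = rng.uniform(0.1, 0.9, size=(8, 5))  # a short input sequence
y_seq = lrnn_forward(X, Wx, Wr, b1, W2, b2)
```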
(Figure: inputs X1 ... Xn with weights W1j ... Wnj, net input, threshold, and activation function)
Figure 1. Architecture of Layer Recurrent Neural Network
Evaluation Warning : The document was created with Spire.PDF for Python.
TELKOM
NIKA
ISSN:
2302-4
046
Layer Recurrent Neu
r
al Network Ba
sed
Powe
r S
yste
m
Load Fore
ca
sting (Nikit
a Mittal)
427
Figure 2. Load pattern selected for (a) training; (b) validation
3.2. The Evaluation Criteria
For evaluating the performance of the model for load forecasting, define the errors (e), the root mean square error (RMSE), the mean absolute error (MAE), and the mean absolute percentage error (MAPE) as:

e(i) = \frac{y_{act}(i) - y_{pre}(i)}{y_{act}(i)}    (11)

RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( y_{act}(i) - y_{pre}(i) \right)^2}    (12)

MAE = \frac{1}{N} \sum_{i=1}^{N} \left| y_{act}(i) - y_{pre}(i) \right|    (13)

MAPE = \frac{1}{N} \sum_{i=1}^{N} \left| \frac{y_{act}(i) - y_{pre}(i)}{y_{act}(i)} \right| \times 100\%    (14)

where N = 1000 is the number of validation data points, y_{act} is the real output and y_{pre} is the predicted output in this paper. Comparisons were made between the FFNN and the LRNN with the same input-output data shown in Figure 2.
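Equations (12)-(14) translate directly into code; a sketch with a small made-up example:

```python
import numpy as np

def forecast_errors(y_act, y_pre):
    """Compute RMSE, MAE and MAPE as defined in equations (12)-(14)."""
    y_act = np.asarray(y_act, dtype=float)
    y_pre = np.asarray(y_pre, dtype=float)
    n = len(y_act)
    rmse = np.sqrt(np.sum((y_act - y_pre) ** 2) / n)          # equation (12)
    mae = np.sum(np.abs(y_act - y_pre)) / n                   # equation (13)
    mape = np.sum(np.abs((y_act - y_pre) / y_act)) / n * 100  # equation (14), in %
    return rmse, mae, mape

# Hypothetical actual vs. predicted (normalized) load values
rmse, mae, mape = forecast_errors([0.5, 0.4, 0.8], [0.4, 0.5, 0.8])
```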
4. Result and Analysis
Load is forecasted with the help of five input features, i.e. dry bulb, dew point, wet bulb, humidity and electricity price. Rich data patterns of the hourly load of 500 days are taken. All the input and output data is normalized in the range of (0.1-0.9) to avoid convergence problems. Load is forecasted with the help of two topologies of neural networks, i.e. FFNN and LRNN.
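The (0.1-0.9) normalization can be sketched as a min-max rescaling; the exact scaling formula is an assumption, since the paper does not spell it out.

```python
import numpy as np

def normalize(x, lo=0.1, hi=0.9):
    """Min-max scale a feature into the range (0.1, 0.9), as done for all
    input and output data before training to avoid convergence problems."""
    x = np.asarray(x, dtype=float)
    return lo + (hi - lo) * (x - x.min()) / (x.max() - x.min())

# Hypothetical hourly load values (MW): min maps to 0.1, max maps to 0.9
load = np.array([120.0, 180.0, 150.0, 240.0])
scaled = normalize(load)
```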
Figure 3 shows the model output and the plant output. It is observed from Figure 3(a) that the LRNN model gives the best possible match between the actual load data and the predicted load data. Figure 3(b) shows the model output and plant output for the FFNN model, which is comparatively less accurate. Figure 4 shows the distribution of errors obtained from the forecasted results. It can be seen that FFNN has a small error variance, while the errors obtained by using LRNN have the lowest variance, as the plot of the error distribution using LRNN is narrower. Table 1 shows the error indices for the LRNN and FFNN models. It is seen that LRNN has the least value of errors, i.e. MAE is 0.0795, RMSE is 0.1059 and MAPE is 0.2261. For FFNN the values of the errors are slightly higher, i.e. MAE is 0.0871, RMSE is 0.1169 and MAPE is 0.2339. From Table 1 it is clear that the accuracy of the LRNN model is the best, and the accuracy of FFNN is a little lower than that of LRNN.
Figure 3(a). Comparison of Actual Load and Forecasted Load by LRNN Model
Figure 3(b). Comparison of Actual Load and Forecasted Load by FFNN Model
Figure 4. Errors between target data and predicted model output
To compare the efficacy of the networks, various indices are defined in Section 3.2. Table 1 shows the values and Figure 5 shows the pictorial representation of the indices.
Table 1. The comparison of error indices
        MAE     RMSE    MAPE
FFNN    0.0871  0.1169  0.2339
LRNN    0.0795  0.1059  0.2261
Figure 5. Errors with FFNN and LRNN model
5. Conclusion
This paper presents an application of LRNN to the hourly forecast of the load of a large distribution power network. Five significant features have been chosen for training the neural net. The results obtained from LRNN are significant and are tested with three statistical parametric tests: MAE, MAPE and RMSE. The results of the proposed architecture are compared with the FFNN technique. It is observed that the LRNN shows promising results, as the obtained errors lie in a very narrow margin and the trained network possesses higher regression results.
References
[1] AS Khwaja, M Naeem, A Anpalagan, A Venetsanopoulos, B Venkatesh. Improved short-term load forecasting using bagged neural networks. Electric Power Systems Research. 2015; 125: 109-115.
[2] L Hernandez, C Baladron, J Aguiar, B Carro, A Sanchez-Esguevillas, J Lloret, J Massana. A survey on electric power demand forecasting: future trends in smart grids, microgrids and smart buildings. IEEE Commun. Surveys Tutor. 2014; 16(3): 1460-1495.
[3] I Fernandez, C Borges, Y Penya. Efficient building load forecasting. IEEE 16th Conference on Emerging Technologies and Factory Automation. 2011: 1-8.
[4] SC Chan, KM Tsui, HC Wu, Y Hou, YC Wu, FF Wu. Load/price forecasting and managing demand response for smart grids. IEEE Signal Proc. Mag. 2012: 68-85.
[5] P Day, M Fabian, D Noble, G Ruwisch, R Spencer, J Stevenson, R Thoppay. Residential power load forecasting. Proc. Comput. Sci. 2014; 28: 457-464.
[6] SR Gunn. Support Vector Machines for Classification and Regression. Technical Report, Image Speech and Intelligent Systems Research Group. University of Southampton. 1998.
[7] Ming-Yue Zhai. A new method for short-term load forecasting based on fractal interpolation and wavelet analysis. Electrical Power & Energy Systems. 2015; 69: 241-245.
[8] Chin Wang Lou, Ming Chui Dong. A novel random fuzzy neural network for tackling uncertainties of electric load forecasting. Electrical Power and Energy Systems. 2015; 73: 34-44.
[9] T Senjyu, H Takara, K Uezato, T Funabashi. One-hour-ahead load forecasting using neural network. IEEE Trans. Power Syst. 2002; 17(1): 113-118.
[10] Goia A, May C, Fusai G. Functional clustering and linear regression for peak load forecasting. International Journal of Forecasting. 2010; 26(4): 700-711.
[11] Al-Hamadi H, Soliman S. Short-term electric load forecasting based on Kalman filtering algorithm with moving window weather and load model. Electr Pow Syst Res. 2004; 68(1): 47-59.
[12] D Ranaweera, N Hubele, G Karady. Fuzzy logic for short term load forecasting. Int. J. Electr. Power Energy Syst. 1996; 18(4): 215-222.
[13] Kiartzis S, Bakirtzis A, Theocharis J, Tsagas G. A fuzzy expert system for peak load forecasting, application to the Greek power system. 10th Electrotechnical Conference. 2000; 3: 1097-1100.
[14] DE Rumelhart, GE Hinton, RJ Williams. Learning internal representation by error propagation. Parallel Distributed Processing. Cambridge. 1986; 1: 318-362.
[15] KY Lee, YT Cha, JH Park. Artificial neural network methodology for short-term load forecasting. NSF Workshop on Artificial Neural Network Methodology in Power System Engineering. Clemson University. 1990.
[16] Dillon TS, Morsztyn K, Phua K. Short term load forecasting using adaptive pattern recognition and self-organizing techniques. Proceedings Fifth World Power System Computation Conference. Cambridge. 1975; 2(4/3): 1-15.
[17] Pappas S, Ekonomou L, Karamousantas D, Chatzarakis G, Katsikas S, Liatsis P. Electricity demand loads modeling using autoregressive moving average (ARMA) models. Energy. 2008; 33(9): 1353-1360.
[18] Lee CM, Ko CN. Short-term load forecasting using lifting scheme and ARIMA models. Expert Systems with Applications. 2011; 38(5): 5902-5911.
[19] Cheng Ting Lin, Li Der Chou. A novel economy reflecting short-term load forecasting approach. Energy Conservation and Management. 2013; 65: 331-342.
[20] Arunesh Kumar Singh, Ibraheem, S Khatoon, Md Muazzam, DK Chaturvedi. Load forecasting techniques and methodologies: A review. 2nd Power Control and Embedded Systems Conference. 2012: 1-10.
[21] Feng Lv, Fengning Kang, Hao Sun. The Predictive Method of Power Load Based on SVM. TELKOMNIKA Indonesian Journal of Electrical Engineering. 2014; 12(4): 3068-3077.
[22] Rosenblatt F. The Perceptron: A Probabilistic Model For Information Storage And Organization In The Brain. Psychological Review. 1958; 65(6): 386-408.