Indonesian Journal of Electrical Engineering and Computer Science
Vol. 19, No. 2, August 2020, pp. 964-973
ISSN: 2502-4752, DOI: 10.11591/ijeecs.v19i2.pp964-973
Deep learning versus traditional methods for parking lots occupancy classification

Mohamed S. Farag, M. M. Mohie El Din, H. A. El Shenbary
Department of Mathematics, Faculty of Science, Al-Azhar University, Cairo, Egypt
Article Info

Article history:
Received Jan 5, 2020
Revised Mar 6, 2020
Accepted Mar 20, 2020

Keywords:
Alexnet
Deep learning
DWT
Lots classification
PCA
Smart parking
ABSTRACT

Due to the increasing number of cars and slow city development, there is a need for smart parking systems. One of the main issues in smart parking systems is classifying the occupancy status of parking lots, so this paper introduces two methods for parking lot classification. The first method uses the mean of the image after converting the colored image to grayscale and then to black/white: if the mean is greater than a given threshold, the lot is classified as occupied; otherwise it is empty. This method gave a 90% correct classification rate on the cnrall database. It outperformed the alexnet deep learning method trained and tested on the same database, and the mean method has no training time. The second method, which depends on deep learning, is a deep learning neural network consisting of 11 layers, trained and tested on the same database. It gave a 93% correct classification rate when trained on cnrall and tested on the same database. As shown, this method outperforms both the alexnet deep learning method and the mean method on that database. On the Pklot database, alexnet and our deep learning network gave close results (greater than 95%), outperforming the mean method.

Copyright © 2020 Institute of Advanced Engineering and Science. All rights reserved.
Corresponding Author:
Mohamed S. Farag,
Department of Mathematics, Faculty of Science,
Al-Azhar University,
Nasr City, 11884, Cairo, Egypt.
Tel: 0020-1006-574-243.
E-mail: mohamed.s.farag@azhar.edu.eg
1. INTRODUCTION

The industrialization of the world, slow-paced city development, and the increase in the number of cars have resulted in parking problems. There is a need for an intelligent system for allocating free parking lots. A Smart Parking System (SPS) can be seen as a small version of an Intelligent Transportation System (ITS) [1], using the Internet of Things (IoT) to minimize traffic and parking congestion. One of the most challenging problems is how to detect a parking lot state (occupied or free). A smart parking system based on emergency status was proposed using an FPGA to perform many tasks, like automatic parking depending on driving behavior and warning the drivers [2]. An automated parking management system (APMS) for recognizing vehicle plate numbers, based on template matching, was presented in [3]. The work proposed in [4] focused on providing a solution to a vehicle parking management system; it was developed using ultrasonic sensors, an Arduino Mega, Android, a Wi-Fi module and Google Maps. That system is designed to detect vacant parking slots through IoT technology utilizing Google Maps and an Android application; the Wi-Fi module is used to send the information to the server. Authors in [5] presented a multi-camera system for the management of vacant parking places by means of vehicle detection and mapping into the parking spaces of a parking lot; the system achieved a 90% correct classification rate. Authors in [6-10] reviewed many smart parking systems. A crowd of taxis to sense on-street parking space availability was developed in [11]. A supervised learning method was developed in [12] to estimate roadside parking occupancy status using a mobile sensing approach. Multiple road tests were conducted around Oxfordshire and Guildford in the U.K. The advantage of the mobile sensing approach is that it requires a significantly smaller number of sensor units compared with fixed sensing solutions: to cover 8000 parking spaces, 132 mobile sensing units were needed, compared with 12000 fixed sensors. In the case of exact GPS readings, followed by a map matching technique, the classification rate of the system was above 90%. The advantage of the mobile sensing system becomes more pronounced as the number of parking lots to be monitored increases. A mobile AR-based interactive smart parking system was applied in [13]. A city parking system embedded with various features, like automated rotary parking and nearest-parking-slot allotment using IoT and sensor technology, was discussed by the authors in [14]. A literature review over the period 2000-2016 on parking solutions as they were applied to smart parking development and evolution, proposing three macro-themes (information collection, system deployment, and service dissemination), was reported in [15]. In [16, 17] the authors developed a smart parking system based on fog computing, enabling a fog-based architecture for efficient car parking. The video analysis method has the advantages of easy installation, saving hardware cost and extending to other functions, compared with sensing-coil detection and infrared detection. The boundary coordinates and central coordinates of the license plate region are used to classify the occupancy status of the parking space. Authors in [18] reported that the classification rate of their parking space detection system is beyond 90%.

The remainder of this article is organized as follows. Section 2 reviews the standard methods used for lot classification. Sections 3 and 4 present the databases used for training and testing and the traditional methods' results. Section 5 presents the proposed method and its results, compared with the alexnet and mean method results. The conclusion and our future work are shown in Section 6.

Journal homepage: http://ijeecs.iaescore.com
2. STANDARD METHODS
2.1. Principal component analysis

Principal component analysis (PCA) is a statistical model used for feature extraction, and one of the most widely used and successful techniques in image processing. PCA is mainly used for reducing the dimensionality of the raw data space to the smaller dimensionality of the feature space. This reduction is performed by the linear transformation

Z = AY,    (1)

where Z, A and Y are the feature matrix, the transformation matrix and the original image, respectively. PCA can give us data prediction, compression, redundancy removal, and feature extraction. The purpose of using PCA for lot occupancy detection is to project the large one-dimensional pixel vector, constructed from the two-dimensional lot image, into the feature space (principal components). This is known as eigenspace projection. The eigenspace can be computed by calculating the eigenvectors of the covariance matrix of the training images. The PCA method was first proposed by M. Turk and A. Pentland in 1991 [19].
Assume we have a dataset of N slot images Y_1, Y_2, ..., Y_N. Originally, each image is a 2-dimensional matrix of size n by m. Each 2-dimensional image is converted to a 1-dimensional column vector of size nm as follows:

Y_i = (y_1, y_2, \dots, y_{nm})^T.    (2)

The image set will be

Y = [Y_1, Y_2, \dots, Y_N],    (3)

then the mean image Y_m is computed as follows:

Y_m = \frac{1}{N} \sum_{i=1}^{N} Y_i.    (4)
and the covariance matrix of the dataset is given by the formula

C = \frac{1}{N} \sum_{i=1}^{N} (Y_i - Y_m)(Y_i - Y_m)^T.    (5)

Let M_i = (Y_i - Y_m) be the centered image, and M = [M_1, M_2, \dots, M_N]. Now we want to compute the eigenvectors e_i and the eigenvalues \lambda_i of this covariance matrix,

C = M M^T.    (6)

Now, the size of C is nm \times nm, so an image of size 100 \times 100 will give a covariance matrix of size 10000 \times 10000, which is not practical for solving the eigenvectors of C directly. Let d_i, \mu_i be the eigenvectors and eigenvalues of M^T M, respectively. That means

M^T M d_i = \mu_i d_i.    (7)

Multiplying both sides by M (from the left),

(M M^T)(M d_i) = \mu_i (M d_i).    (8)

The first N-1 eigenvalues \lambda_i and eigenvectors e_i of the covariance matrix C = M M^T are therefore given by \mu_i and M d_i, respectively; M d_i needs to be normalized in order to be equal to e_i. The transformation matrix A can be constructed from the k eigenvectors corresponding to the k largest eigenvalues of the covariance matrix.
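As a concrete illustration of the M^T M trick in (7)-(8), the following minimal NumPy sketch computes the eigenspace of a small training set (variable names and the synthetic data are ours, not from the paper):

```python
import numpy as np

def pca_eigenspace(images, k):
    """Top-k eigenspace of flattened training images.

    images: array of shape (nm, N), one flattened image per column,
    as in Y = [Y_1, ..., Y_N]. Eigendecomposes the small N x N matrix
    M^T M instead of the nm x nm covariance matrix, per (7)-(8).
    """
    Y_m = images.mean(axis=1, keepdims=True)   # mean image, eq. (4)
    M = images - Y_m                           # centered images M_i
    small = M.T @ M / images.shape[1]          # N x N instead of nm x nm
    mu, d = np.linalg.eigh(small)              # eigenpairs of M^T M
    order = np.argsort(mu)[::-1][:k]           # k largest eigenvalues
    e = M @ d[:, order]                        # map back: e_i ~ M d_i, eq. (8)
    e /= np.linalg.norm(e, axis=0)             # normalize, as the text requires
    return Y_m, e

# Project one image into the feature space: Z = A Y, eq. (1)
rng = np.random.default_rng(0)
train = rng.random((100 * 100, 20))            # 20 flattened 100x100 images
Y_m, A = pca_eigenspace(train, k=5)
Z = A.T @ (train[:, [0]] - Y_m)
print(Z.shape)                                 # (5, 1)
```

The design point illustrated here is the one the text makes: the decomposed matrix is only N x N (20 x 20 above), even though the covariance matrix C itself would be 10000 x 10000.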
2.2. Euclidean distance

Euclidean distance is used to classify each image in the test image set, deciding which class it belongs to, by comparing the weight matrix (feature vectors) of the images in the training set with the corresponding weight vector of the test image:

\varepsilon_i = \lVert \Omega - \Omega_i \rVert,    (9)

where \Omega is the feature vector of the test image and \Omega_i is the feature vector describing the i-th image in the training set.
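A minimal sketch of this nearest-neighbor rule over feature vectors (the two-dimensional feature vectors here are synthetic, for illustration only):

```python
import numpy as np

def classify(test_vec, train_vecs, train_labels):
    """Assign the label of the training feature vector with the smallest
    Euclidean distance, eps_i = ||Omega - Omega_i|| as in eq. (9)."""
    dists = np.linalg.norm(train_vecs - test_vec, axis=1)
    return train_labels[int(np.argmin(dists))]

# Two clusters of projected lot images ("empty" / "occupied"), synthetic:
train_vecs = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
train_labels = ["empty", "empty", "occupied", "occupied"]
print(classify(np.array([4.8, 5.1]), train_vecs, train_labels))  # occupied
```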
2.3. Discrete wavelet transform

Wavelet transforms are mathematical functions used to convert raw data into frequency components, where each component is treated with a resolution according to its scale. Wavelets were introduced in the fields of electrical engineering, mathematics and quantum physics [20]. In the last decades, many new wavelet applications were introduced, such as earthquake prediction, image compression, human vision and radar. For an image, the wavelet decomposition function is defined as follows:

\Psi_{V,U} = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} g(x,y) \exp\left(\frac{-j(Vx+Uy)}{N}\right),    (10)

where the kernel function is \exp(-j(Vx+Uy)/N), g(x,y) is a 2D image, and N is the number of pixels in the desired image. The wavelet transform is a useful computational tool for signal and image processing applications, and the DWT is widely used in the pattern recognition area [21-23]. The DWT generates 4 coefficient sub-bands at each decomposition level: approximation, horizontal, vertical and diagonal information. The approximation coefficients of the 1st-level decomposition are treated as the original image, because they contain most of the information about the image.
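A minimal one-level 2D DWT sketch in NumPy, producing the four sub-bands described above; the paper does not specify which wavelet it uses, so the simple Haar wavelet is assumed here purely for illustration:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar DWT: returns (approximation, horizontal,
    vertical, diagonal) sub-bands, each half the size of img."""
    img = img.astype(float)
    a, b = img[0::2, :], img[1::2, :]    # pairs of adjacent rows
    lo, hi = (a + b) / 2, (a - b) / 2    # row-wise average / difference
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2  # average again along columns
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)
cA, cH, cV, cD = haar_dwt2(img)
print(cA.shape)  # (2, 2) -- the approximation band kept as the "image"
```

Each level halves the image size, which is why the approximation band can stand in for the original image in the later experiments.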
2.4. RGB to gray conversion

To convert a colored image to grayscale, we use the following equation:

X = 0.2989 R + 0.5870 G + 0.1140 B.    (11)

According to a threshold value \theta, the gray image pixel values are converted to black and white (0 or 1) by

Y_{ij} = \begin{cases} 1 & \text{if } X_{ij} \ge \theta, \\ 0 & \text{otherwise.} \end{cases}    (12)
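Equations (11) and (12), together with the mean test described later in Section 4, can be sketched in NumPy as follows. The 0.575 mean threshold is the one the paper reports; the pixel threshold value of 0.5 and the toy input patches are ours, for illustration only:

```python
import numpy as np

def classify_lot(rgb, pixel_thresh=0.5, mean_thresh=0.575):
    """Classify one lot patch: RGB -> gray (eq. 11) -> black/white
    (eq. 12) -> mean test. rgb: float array (h, w, 3) in [0, 1]."""
    gray = (0.2989 * rgb[..., 0] + 0.5870 * rgb[..., 1]
            + 0.1140 * rgb[..., 2])              # eq. (11)
    bw = (gray >= pixel_thresh).astype(int)      # eq. (12)
    return "occupied" if bw.mean() > mean_thresh else "empty"

bright = np.ones((150, 150, 3)) * 0.9   # mostly-bright patch
dark = np.ones((150, 150, 3)) * 0.1     # mostly-dark patch
print(classify_lot(bright), classify_lot(dark))  # occupied empty
```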
3. DATABASE DESCRIPTION
3.1. PKLot database

The PKLot database contains 12,417 images of parking lots and 695,899 images of parking spaces segmented from them, checked and labeled manually. Images were captured at the parking lots of the Federal University of Parana (UFPR) and the Pontifical Catholic University of Parana (PUCPR), both located in Curitiba, Brazil [24]. Table 1 shows the number of free and busy spaces in the PKLot dataset.
Table 1. PKLot characteristics

Parking lot                  Weather    Days  Images  No occupied       No empty          Total
UFPR04 (28 parking spaces)   Sunny      20    2098    32166 (54.98%)    26334 (45.02%)    58400
                             Overcast   15    1408    11608 (29.47%)    27779 (70.53%)    39387
                             Rainy      14    285     2351 (29.54%)     5607 (70.46%)     7958
                             Subtotal         3791    46125 (43.58%)    59720 (56.42%)    105845
UFPR05 (45 parking spaces)   Sunny      25    2500    57584 (57.65%)    42306 (42.35%)    99890
                             Overcast   19    1426    33764 (59.27%)    23202 (40.73%)    56966
                             Rainy      8     226     6078 (68.07%)     2851 (31.93%)     8929
                             Subtotal         4152    97426 (58.77%)    68359 (41.23%)    165785
PUCPR (100 parking spaces)   Sunny      24    2315    96762 (46.42%)    111672 (53.58%)   208433
                             Overcast   11    1328    42363 (31.90%)    90417 (68.10%)    132780
                             Rainy      8     831     55104 (66.35%)    27951 (33.65%)    83056
                             Subtotal         4474    194229 (45.78%)   230040 (51.46%)   424269
TOTAL                                         12417   337780 (48.54%)   358119 (51.46%)   695899
3.2. CNRPark database

CNRPark is a dataset consisting of 12,000 images captured on different days from November 2015 to February 2016, under different weather conditions, by 9 cameras with various angles of view and perspectives. It covers different light conditions and shadowed cars, and includes obstacles like lampposts, trees, and other cars. The segmented patches (images) of parking lots belonging to the CNRPark subset have size 150*150 pixels. The images of real parking slots on different days, with different light and weather conditions, show high variability related to occlusions, which makes this dataset more compatible with the real state of an outdoor parking slot [25]. Table 2 shows the number of free and busy spaces in the dataset.
Table 2. CNRPark dataset

Dataset       Free Spaces  Busy Spaces  Total
CNRPark       4181         8403         12,584
CNRPark-EXT   65,658       79,307       144,965
4. TRADITIONAL METHODS RESULTS

Figure 1 and Figure 2 show the steps of the training and testing stages using DWT, PCA and Euclidean distance. Table 3 shows the results of the above methods using different training/testing percentages of the images in the PKLot and CNR databases. It is clear from the results that applying a 1-level DWT before PCA increased the correct classification rate; in the case of training on 80% of the dataset, DWT+PCA reached a high correct classification rate of 80%.

Another method was applied without a training stage. This method converts the RGB image to grayscale using (11), then converts it to black and white according to a threshold as in (12), and then computes the mean of the image. If the mean > 0.575, the image is classified as an occupied slot; otherwise it is free. This method gave a 90% correct classification rate on average. This classification rate outperforms the previously mentioned methods (PCA, DWT+PCA) and saves time, since there is no training stage.
Figure 1. Training and testing using PCA

Figure 2. Training and testing using DWT and PCA
Table 3. Classification results using DWT and PCA

Training images %  Testing images %  PCA  DWT+PCA  RGB2Gray+BW+Mean (our method)
10                 90                63   71       89
20                 80                75   75       89
30                 70                40   59       89
40                 60                75   65       89
50                 50                25   76       89
60                 40                34   50       89
70                 30                73   53       89
80                 20                65   80       89
90                 10                40   70       90
5. RESEARCH METHODS

Deep learning is a branch of artificial intelligence that aims at developing techniques allowing computers to learn complex perception tasks, such as seeing and hearing, with a high level of accuracy. It provides near-human-level accuracy in object detection, image classification, speech recognition, vehicle detection, language processing, etc. The traditional approaches to the classification problem use ad-hoc functions to extract from an image specific features that are considered indicative of certain objects. The outputs of these feature extraction functions are then given as input to a classification function, which determines whether or not a particular object was detected. However, this approach leads to low-accuracy and false-alarm-prone detectors. In addition, it presents the following problems: (a) it is hard to think of general, reliable, robust features which map to specific object types; (b) it is a huge task to determine the right combination of features for each type of object to detect; (c) it is difficult to design functions that are robust to rotations, translations and scaling of objects.
All these problems make achieving high object detection and classification accuracy very hard. The deep learning technique exploits a large number of labeled data to learn which features, and which combinations of them, are most descriptive for each class of objects to be classified, and develops a combined feature extraction and classification model. Such a model can be developed not only to classify the objects it was trained on, but also unseen objects similar to them. A deep learning method that has particularly impacted vision tasks uses convolutional neural networks (CNN) [26]. A CNN consists of a large number of hidden layers that perform mathematical computations on the input provided by the previous layer and generate an output, which is given as input to the following layer, as in Figure 3. A CNN differs from ordinary neural networks in the presence of convolutional layers, which can model and discern the correlation of neighboring pixels better than fully connected layers. To classify inputs, the final outputs of the CNN are the labels of the classes the network has been trained on. The training stage is usually extremely costly from a computational point of view, and may take a long time to complete. After the network training stage has been completed and the classifier has been initialized accordingly, the prediction stage is quite fast and efficient.

Figure 3. CNN architecture
5.1. AlexNet vs mean

AlexNet is a convolutional neural network that had a high impact on the machine learning field, especially in the application of deep learning to machine vision. It famously won the 2012 ImageNet LSVRC-2012 competition by a large margin (15.3% vs 26.2% (second place) error rates) [27]. Figure 4 shows the architecture of AlexNet. AlexNet contains 8 hidden layers: 5 convolutional layers followed by 3 fully connected layers. A rectified linear unit (ReLU) is applied after all convolutional and fully connected layers to speed up training. Dropout is applied before both the first and the second fully connected layers. To train the network, the AlexNet images were down-sampled to 256*256 pixels and the mean activity over the training set was subtracted from each pixel. 1.2 million training images were used, with 50000 images for validation and 150000 images for testing. The images were classified into 1000 categories, each category having 1000 images.

Figure 4. AlexNet architecture
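The input preprocessing described above (down-sampling and per-pixel mean subtraction) can be sketched as follows; a nearest-neighbor resize is assumed here for simplicity, which is our choice rather than AlexNet's actual interpolation:

```python
import numpy as np

def resize_nn(img, size=256):
    """Nearest-neighbor resize of an (h, w, 3) image to (size, size, 3)."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def preprocess(images):
    """Down-sample each image, then subtract the per-pixel mean activity
    computed over the training set, as in AlexNet's input pipeline."""
    resized = np.stack([resize_nn(im) for im in images])
    return resized - resized.mean(axis=0)

rng = np.random.default_rng(1)
batch = [rng.random((150, 150, 3)) for _ in range(4)]
out = preprocess(batch)
print(out.shape)  # (4, 256, 256, 3)
```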
Table 4 shows the classification results when training AlexNet on 12000 images from the cnrall database. Testing on pklot sunny gave an 83% correct classification rate.
Table 4. Training on 12000 images of cnrall, alexnet deep learning

Testing Database  Classification rate  Time per image (seconds)
Pklot cloudy      79%                  0.132
Pklot rainy       75%                  0.135
Pklot sunny       83%                  0.135
Cnrall            81%                  0.133
Table 5 shows the classification results when training AlexNet on 12000 images from the pklot rainy database and testing using the other sets (cnrall; pklot cloudy, sunny, rainy). In the testing stage using pklot cloudy and sunny, AlexNet gave high recognition rates (98.5% and 98.8%).
Table 5. Training on 12000 images of pklot rainy, alexnet deep learning

Testing Database  Classification rate  Time per image (seconds)
Pklot cloudy      98.52%               0.133
Pklot rainy       75%                  0.135
Pklot sunny       98.8%                0.132
Cnrall            89%                  0.136
Table 6 shows the classification results when training AlexNet on 12000 images from the pklot sunny database and testing using the other sets (cnrall; pklot cloudy, sunny, rainy). In the testing stage using pklot cloudy and rainy, the AlexNet method gave 97% and 94.6% correct classification rates.
Table 6. Training on 12000 images of pklot sunny, alexnet deep learning

Testing Database  Classification rate  Time per image (seconds)
Pklot cloudy      97%                  0.128
Pklot rainy       94.6%                0.130
Pklot sunny       88%                  0.129
Cnrall            87%                  0.132
Table 7 shows the classification results when training AlexNet on 12000 images from the pklot cloudy database and testing using the other sets (cnrall; pklot cloudy, sunny, rainy). In the testing stage using the pklot sets, the AlexNet method gave a 100% correct classification rate.
Table 7. Training on 12000 images of pklot cloudy, alexnet deep learning

Testing Database  Classification rate  Time per image (seconds)
Pklot cloudy      100%                 0.132
Pklot rainy       100%                 0.134
Pklot sunny       100%                 0.130
Cnrall            87%                  0.14
Table 8 shows the results of classifying images using the mean method, DWT 1 level + mean, and DWT 2 levels + mean. From the previous results, it is clear that AlexNet outperforms the mean method when testing on the pklot database. When classifying the cnrall database, the mean method outperforms AlexNet (90% correct classification rate). Also, the mean method has no training time, unlike the AlexNet method.
Table 8. Classification of Pklot database and cnrall using Mean

Testing Database  RGB2Gray+BW+Mean  Time (s)  DWT 1 level+RGB2Gray+BW+Mean  Time (s)  DWT 2 levels+RGB2Gray+BW+Mean  Time (s)
Pklot cloudy      90%               0.0042    90%                           0.0106    90%                            0.0106
Pklot rainy       79%               0.0040    80%                           0.0076    81%                            0.0099
Pklot sunny       83%               0.0042    85%                           0.0092    85%                            0.0104
Cnrall            90%               0.0049    86%                           0.0135    86%                            0.0154
5.2. Proposed deep learning neural network

Due to the low classification rates shown above using AlexNet and the mean method, we propose a deep learning neural network consisting of 11 layers, as shown in Figure 5. Layer 1 is the input image layer; the input image size is 150*150*3. Layer 2 is a 2D convolutional layer with 11x11 convolutions and stride [1 1], followed by a 3x3 max pooling layer with stride [2 2]. Layer 4 is a 2D convolutional layer with 5x5 convolutions and stride [1 1], followed by a 3x3 max pooling layer with stride [2 2]. Layer 6 is a 2D convolutional layer with 5x5 convolutions and stride [1 1], followed by a rectified linear unit (ReLU) layer in order to speed up training. Then a dropout layer is used, followed by the fully connected layer. Layer 10 is the softmax layer, followed by the classification output layer. In the beginning, 1 convolution layer was used with a learning rate of 0.001, which made the training time very high. But when we used 3 convolution layers and a learning rate of 0.00001, the training time decreased.
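To make the layer dimensions above concrete, the following small sketch traces the spatial size of the 150x150 input through the three convolution/pooling stages. The paper does not give filter counts or padding, so only spatial sizes are traced under a valid-convolution (no padding) assumption of ours:

```python
def out_size(n, k, s):
    """Output size of a valid convolution/pooling: floor((n - k) / s) + 1."""
    return (n - k) // s + 1

size = 150                        # input layer: 150*150*3
stages = [(11, 1, "conv 11x11"), (3, 2, "maxpool 3x3"),
          (5, 1, "conv 5x5"),    (3, 2, "maxpool 3x3"),
          (5, 1, "conv 5x5")]
for kernel, stride, op in stages:
    size = out_size(size, kernel, stride)
    print(f"{op:12s} -> {size}x{size}")
# The ReLU, dropout, fully connected, softmax and classification layers
# that follow do not change the spatial size before flattening.
```

Under these assumptions the feature maps shrink 150 -> 140 -> 69 -> 65 -> 32 -> 28 before the fully connected stage.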
Figure 5. Proposed deep learning neural network design
Table 9 shows the classification results when training the proposed network on 12000 images from the pklot sunny database and testing using the other sets (cnrall; pklot cloudy, sunny, rainy). In the testing stage using pklot sunny, rainy, and cloudy, the proposed network gave over 97% correct classification rates, but the classification rate is low in the case of the cnrall database.
Table 10 shows the classification results when training the proposed network on 12000 images from the pklot cloudy database and testing using the other sets (cnrall; pklot cloudy, sunny, rainy). In the testing stage using pklot sunny, rainy, and cloudy, the proposed network gave over 94% correct classification rates, and the classification rate on the cnrall database increased to 84%.
Table 9. Training on 12000 images of pklot sunny, proposed deep learning network

Testing Database  Classification rate
Pklot cloudy      98%
Pklot rainy       97%
Pklot sunny       99%
Cnrall            79%
Table 10. Training on 12000 images of pklot cloudy, proposed deep learning network

Testing Database  Classification rate
Pklot cloudy      99%
Pklot rainy       95%
Pklot sunny       94%
Cnrall            84%
Table 11 shows the classification results when training the proposed network on 12000 images from the pklot rainy database and testing using the other sets (cnrall; pklot cloudy, sunny, rainy). In the testing stage using pklot rainy and cloudy, the proposed network gave over 95% correct classification rates, but the classification rate is low in the case of the cnrall and pklot sunny databases.
Table 12 shows the classification results when training the proposed network on 12000 images from the cnrall database and testing using the other sets (cnrall; pklot cloudy, sunny, rainy). In the testing stage using the pklot sets, the proposed network gave correct classification rates in the range 80-85%, but the classification rate increased to 93% when testing on the cnrall database.
Compared with alexnet (which gave 81-89%) and the mean method (which gave 90% on average), the proposed deep learning network outperforms those methods in the case of training on the cnrall database (it gave 93%). Also, when training on the pklot database (rainy, cloudy, sunny), it gave classification rates of 94-99%, which is acceptable compared with the alexnet deep learning method. One of the main reasons for the lower rates is the size of the images in the pklot database: the pklot image size is 40*40 on average, while cnrall images are 150*150 on average. Before testing, all pklot images must be resized to 150*150 to match the size of the input layer of the proposed deep learning network.
Table 11. Training on 12000 images of pklot rainy, proposed deep learning network

Testing Database  Classification rate
Pklot cloudy      95%
Pklot rainy       99%
Pklot sunny       89%
Cnrall            80%

Table 12. Training on 12000 images of Cnrall database, proposed deep learning network

Testing Database  Classification rate
Pklot cloudy      85%
Pklot rainy       82%
Pklot sunny       80%
Cnrall            93%
6. CONCLUSION

In this paper, we present two methods for parking lot occupancy detection. The first method, as shown, converts the colored lot image to grayscale, then to black/white, and computes the mean of the resulting image; the image is classified as occupied or empty according to a threshold. This method reported a 90% correct rate on the cnrall database, which outperforms the methods shown (alexnet, traditional methods) and has no training time. The second method depends on deep learning techniques. As presented, it is a deep learning network consisting of 11 layers, 3 of which are convolution layers with different kernel sizes. This method gave a 93% correct classification rate on the cnrall database, outperforming alexnet trained on the same database as well as the mean method. Training and testing on the pklot database, the deep learning methods (alexnet and the proposed deep learning method) have close classification rates and outperform the mean method.
REFERENCES
[1] Faheem, S. A. Mahmud, G. M. Khan, M. Rahman, and H. Zafar, "A survey of intelligent car parking system," Journal of Applied Research and Technology, vol. 11, pp. 714-726, 2013.
[2] K. J. Yong and M. H. Salih, "Design and implementation of embedded auto car parking system using FPGA for emergency conditions," Indonesian Journal of Electrical Engineering and Computer Science (IJEECS), vol. 13, no. 3, pp. 678-883, 2019.
[3] A. Singh and S. P. Vaidya, "Automated parking management system for identifying vehicle number plate," Indonesian Journal of Electrical Engineering and Computer Science (IJEECS), vol. 13, no. 1, pp. 77-84, 2019.
[4] Y. S. A. Waili, S. M. Hussain, K. M. Yusof, S. A. Hussain, R. Asuncion, and A. Frank, "IoT based parking system using android and google maps," International Journal of Applied Engineering Research, vol. 13, no. 20, pp. 14689-14697, 2018.
[5] R. Martín Nieto, Á. García-Martín, A. G. Hauptmann, and J. M. Martínez, "Automatic vacant parking places management system using multicamera vehicle detection," IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 3, pp. 1069-1080, 2019.
[6] A. Somani, S. Periwal, K. Patel, and P. Gaikwad, "Cross platform smart reservation based parking system," 2018 International Conference on Smart City and Emerging Technology (ICSCET), pp. 1-5, 2018.
[7] T. Kiliç and T. Tuncer, "Smart city application: Android based smart parking system," 2017 International Artificial Intelligence and Data Processing Symposium (IDAP), pp. 1-4, 2017.
[8] S. Kazi, S. Khan, U. Ansari, and D. Mane, "Smart parking based system for smarter cities," 2018 International Conference on Smart City and Emerging Technology (ICSCET), pp. 1-5, 2018.
[9] T. O. Olasupo, C. E. Otero, L. D. Otero, K. O. Olasupo, and I. Kostanic, "Path loss models for low-power, low-data rate sensor nodes for smart car parking systems," IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 6, pp. 1774-1783, 2018.
[10] J. Ni, K. Zhang, Y. Yu, X. Lin, and X. Shen, "Privacy-preserving smart parking navigation supporting efficient driving guidance retrieval," IEEE Transactions on Vehicular Technology, vol. 67, no. 7, pp. 6504-6517, 2018.
[11] F. Bock, S. Di Martino, and A. Origlia, "Smart parking: Using a crowd of taxis to sense on-street parking space availability," IEEE Transactions on Intelligent Transportation Systems, vol. 21, no. 2, pp. 496-508, 2020.
[12] C. Roman, R. Liao, P. Ball, S. Ou, and M. de Heaver, "Detecting on-street parking spaces in smart cities: Performance evaluation of fixed and mobile sensing systems," IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 7, pp. 2234-2245, 2018.
[13] M. Al-Jabi and H. Sammaneh, "Toward mobile AR-based interactive smart parking system," 2018 IEEE 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; IEEE 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS), pp. 1243-1247, 2018.
[14] D. Kanteti, D. V. S. Srikar, and T. K. Ramesh, "Intelligent smart parking algorithm," 2017 International Conference On Smart Technologies For Smart Nation (SmartTechCon), pp. 1018-1022, 2017.
[15] T. Lin, H. Rivano, and F. Le Mouël, "A survey of smart parking solutions," IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 12, pp. 3229-3253, 2017.
[16] K. S. Awaisi, A. Abbas, M. Zareei, H. A. Khattak, M. U. Shahid Khan, M. Ali, I. Ud Din, and S. Shah, "Towards a fog enabled efficient car parking architecture," IEEE Access, vol. 7, pp. 159100-159111, 2019.
[17] C. Tang, X. Wei, C. Zhu, W. Chen, and J. J. P. C. Rodrigues, "Towards smart parking based on fog computing," IEEE Access, vol. 6, pp. 70172-70185, 2018.
[18] C. Shi, J. Liu, and C. Miao, "Study on parking spaces analyzing and guiding system based on video," IEEE 23rd International Conference on Automation and Computing (ICAC), pp. 1-5, Sept. 2017.
[19] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[20] R. S. Sabeenian, "Hand written text to digital text conversion using radon transform and back propagation network (RTBPN)," International Journal of Computers, Information Technology and Engineering (IJCITAE), vol. 101, pp. 498-500, 2010.
[21] M. M. MohieEl-din, N. I. Ghali, A. G. Ahmed, and H. A. El-Shenbary, "A study on the impact of wavelet decomposition on face recognition methods," International Journal of Computer Applications (IJCA), vol. 87, no. 3, pp. 14-21, Feb. 2014.
[22] M. M. MohieEl-din, M. Y. El-Nahas, and H. A. El-Shenbary, "Hybrid framework for robust multimodal face recognition," International Journal of Computer Science Issues (IJCSI), vol. 10, issue 2, no. 2, pp. 471-476, Mar. 2013.
[23] M. S. Farag, M. M. M. E. Din, and H. A. E. Shenbary, "Parking entrance control using license plate detection and recognition," Indonesian Journal of Electrical Engineering and Computer Science (IJEECS), vol. 15, no. 1, pp. 476-483, 2019.
[24] P. Almeida, L. S. Oliveira, J. E. Silva, A. Britto, and A. Koerich, "PKLot - a robust dataset for parking lot classification," Expert Systems with Applications, vol. 42, no. 11, pp. 4937-4949, 2015.
[25] G. Amato, F. Carrara, F. Falchi, C. Gennaro, C. Meghini, and C. Vairo, "Deep learning for decentralized parking lot occupancy detection," Expert Systems with Applications, vol. 72, pp. 327-334, 2017.
[26] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580-587, 2014.
[27] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.