International Journal of Power Electronics and Drive Systems (IJPEDS)
Vol. 11, No. 4, December 2020, pp. 2091-2098
ISSN: 2088-8694, DOI: 10.11591/ijpeds.v11.i4.pp2091-2098
A real-time system for vehicle detection with shadow removal and vehicle classification based on vehicle features at urban roads

Issam Atouf, Wahban Al Okaishi, Abdemoghit Zaarane, Ibtissam Slimani, Mohamed Benrabh
LTI Lab., Faculty of Sciences Ben M'sik, Hassan II University of Casablanca, Morocco
Article Info

Article history:
Received Feb 2, 2020
Revised Apr 24, 2020
Accepted May 19, 2020

Keywords:
Background subtraction
Image processing
Shadow removal
Vehicle classification
Vehicle detection

ABSTRACT
Monitoring traffic in urban areas is an important task for intelligent transport applications to alleviate traffic problems such as traffic jams and long trip times. The traffic flow in urban areas is more complicated than on highways, due to the slow movement of vehicles and the crowded traffic flows in urban areas. In this paper, a vehicle detection and classification system at intersections is proposed. The system consists of three main phases: vehicle detection, vehicle tracking, and vehicle classification. In the vehicle detection phase, background subtraction is used to detect the moving vehicles by employing the mixture of Gaussians (MoGs) algorithm, and then a shadow removal algorithm is developed to improve the detection phase and eliminate the undesired detected regions (shadows). After the vehicle detection phase, the vehicles are tracked until they reach the classification line. Then the vehicle dimensions are used to classify the vehicles into three classes (cars, bikes, and trucks). In this system, there are three counters, one for each class. When a vehicle is classified into a specific class, the class counter is incremented by one. The counting results can be used to estimate the traffic density at intersections and to adjust the timing of the traffic light for the next light cycle. The system is applied to videos obtained by stationary cameras. The results obtained demonstrate the robustness and accuracy of the proposed system.

This is an open access article under the CC BY-SA license.
Corresponding Author:
Issam Atouf
Faculty of Sciences Ben M'sik, Hassan II University of Casablanca
Casablanca, Morocco.
Email: issamatouf@yahoo.fr
1. INTRODUCTION
Traffic problems are among the most serious problems encountered by the residents of large cities. Therefore, traffic management companies have paid great attention to solving these problems. The first step in traffic analysis is the collection of traffic information. Several techniques have been developed for traffic data collection; many detectors (such as loop, radar, infrared, and microwave detectors) are used for this task. These detectors help traffic flow management by providing information about the level of traffic density on the roads. However, they have many drawbacks that limit their use: their installation requires pavement cuts, and their detection zone is small. In recent years, vision-based systems have been widely used in traffic management, due to their advantages compared to electronic sensors. Vision-based systems extract useful data by covering a wide detection area, with the ability to determine the shape of the detection zone.
Journal homepage: http://ijpeds.iaescore.com
The first phase in analyzing traffic parameters is vehicle detection. Several methods have been developed for vehicle detection, and these methods can be grouped into two main approaches: the texture-based approach and the motion-based approach. The first one uses vehicle features such as edges, corners, and colors; this approach is well suited to detecting stopped vehicles. The other one depends on the movement of vehicles and is widely used in intelligent transportation systems. There are two main methods to detect moving objects: optical flow and background subtraction. Optical flow is accurate in detecting object motion and gives more information about the motion, such as the velocity and the motion direction. However, it has a high computational cost and is not suitable for real-time applications [1]. Background subtraction is the most common method used in the literature for detecting moving vehicles. In this work, background subtraction is used to separate the moving vehicles from the background model. However, the shadows of moving vehicles are also detected; they are thus considered part of the vehicle dimensions, and this leads to misclassification of the different vehicles. To cope with this issue, we propose an algorithm to remove the shadow region based on the edges of detected regions. This algorithm outperforms other shadow removal algorithms. When the vehicles are detected without shadows, they are tracked until they reach the classification line. Then the classification is performed to categorize them into a number of predefined types. The classification methods can be grouped into two groups: one group classifies the vehicles by measuring the vehicle dimensions [2], while the other uses machine learning techniques [3].
Various systems have been developed to detect and classify vehicles. A real-time system for vehicle detection and classification is described in [4]. The first phase of their work is vehicle detection; they used background subtraction to perform this task by employing an adaptive background update method. Then the vehicles were tracked by using an association graph between consecutive frames. Finally, they used the vehicle dimensions to classify the vehicles into two categories: small vehicles (cars) and big vehicles (vans, trucks, and buses). In [3], the authors developed a system for vehicle classification. The moving objects (vehicles) are separated from the static objects (background objects) by using the GMM method, and then an approach based on two levels of support vector machines (SVMs) is used to classify the detected vehicles; the second level was employed to solve the occlusion problem. The vehicles were classified into four classes (bus, cars, minibus, and trucks). In [5], an image processing system is proposed to detect and classify vehicles from rear-view vehicle images. This system used the temporal median filter to establish the background model, and then the scene frame was subtracted from the background model to detect the moving vehicles. The classification was performed by using the SVM method after extracting vehicle features with a deep convolutional neural network. They classified the vehicles into two types: passenger vehicles and an other-vehicle class. In [2], a real-time traffic surveillance system was proposed to measure traffic flow by detecting and counting vehicles. The background model was built from the temporal information of the mean and standard deviation of the gray-level distribution in consecutive frames for each point. After the segmentation process, each detected object was bounded by a rectangular box, and then the object features (height, width, and aspect ratio) were calculated to achieve a robust and accurate classification. The vehicles were then classified into two classes (cars and bikes).
In this paper, a vehicle detection and classification system at intersections is proposed. The system consists of three main phases: vehicle detection, vehicle tracking, and vehicle classification. In the vehicle detection phase, background subtraction is used to detect the moving vehicles by employing the mixture of Gaussians (MoGs) algorithm, and then a shadow removal algorithm is developed to improve the detection phase and eliminate the undesired detected regions (shadows). After the vehicle detection phase, the vehicles are tracked until they reach the classification line. Then the vehicle dimensions are used to classify the vehicles into three classes (cars, bikes, and trucks). After the classification phase, the vehicles in each class are counted. The counting results can be used to estimate the traffic density at intersections and to adjust the timing of the traffic light for the next light cycle. The system is applied to videos obtained by stationary cameras. The rest of the paper is organized as follows: section 2 describes vehicle detection and the shadow removal algorithm; the vehicle classification method is presented in section 3; experimental results are presented in section 4; finally, the conclusion is given in section 5.
2. VEHICLE DETECTION
Moving object detection methods, including human [6, 7] and vehicle [8, 9] detection, have been developed by several researchers. The most common method used in the literature is the background subtraction method. In the past decade, numerous background subtraction algorithms have been proposed to extract and update the background
model. These algorithms can be non-recursive or recursive; a non-recursive algorithm uses a buffer of video frames to obtain the background, while a recursive algorithm updates the background model recursively based on each input frame. Elgammal et al. [10] proposed a non-recursive method called the non-parametric model. In their method, they used the entire history of pixels to estimate the background by calculating the pixel density function for each pixel. A pixel is considered background if this function is greater than a predefined threshold. The drawbacks of this method are that it is time consuming and requires high memory storage. Karmann and Brandt [11] developed a recursive technique for estimating the background model based on the Kalman filter; they used the intensity and its temporal derivative model. The background is recursively updated by using three matrices: the background dynamics matrix, the measurement matrix, and the Kalman gain matrix. However, this method is affected by the foreground pixels even when it works at a slow adaptation rate. Kim et al. [12] proposed a multimodal background method called the codebook method. In this method, each pixel is summarized by a number of codewords stored in a codebook, and each codeword contains a set of parameters. An input pixel is considered a background pixel if its brightness falls within the brightness range of some codeword and the color distortion of that codeword is smaller than the detection threshold. However, this method cannot correctly detect dark-grey and white moving objects, because it considers dark-grey objects as shadow and white objects as a sudden increase of illumination [13].
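The recursive family of background models described above can be illustrated with its simplest member, a running-average update that blends each new frame into the background instead of keeping a frame buffer. This is only an illustrative sketch; the learning rate `alpha` is an assumed value, not one taken from the cited methods.

```python
def update_background(background, frame, alpha=0.05):
    """Recursively blend each new frame into the background model.

    background and frame are flat lists of gray-level pixel values;
    alpha controls how fast the model adapts (assumed value).
    """
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]
```

A pixel that stays constant leaves the model unchanged, while a changed pixel pulls the model toward its new value slowly, which is the adaptation behavior the recursive schemes rely on.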
In this paper, the MoGs method [14] is used to perform the background subtraction process. It is the method most used in the literature due to its robustness against environmental changes and its capability to handle multimodal background distributions. The idea of this method is as follows: a number of Gaussian distributions (components) represent each pixel. The number of Gaussian components falls between three and five depending on the storage limitation and the possibility of realizing the system in real time; three components are sufficient for our system. A component is considered a matched component if the difference between the component mean and the pixel value is less than a predefined threshold, and then its parameters are updated as follows: the weight increases, the standard deviation decreases, and the mean moves closer to the pixel value. If the component is not matched, the only parameter updated is the weight, which decreases exponentially. If the pixel does not have any matched component, the component with the least weight is replaced by a new component whose mean equals the pixel value, with a large initial variance and a small weight. After that, the components are ranked according to a confidence metric (weight/standard deviation), and then a predefined threshold is applied to the component weights. The background model consists of the first components whose weights are higher than the threshold, while the foreground pixels (moving object pixels) are those that do not have any component in the background model. Figure 1 shows the result of applying the MoGs method to two different traffic scenes.
Figure 1. Vehicle detection in different scenes: (a) Scene 1, frame 1938; (b) Scene 1, frame 2015; (c) Scene 2, frame 1056; (d) Scene 2, frame 1733; (e) MoG result, frame 1938; (f) MoG result, frame 2015; (g) MoG result, frame 1056; (h) MoG result, frame 1733
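The per-pixel MoGs update rules described above can be sketched as follows. This is a simplified, illustrative implementation, not the paper's code: the learning rate, match threshold, and background weight threshold (`ALPHA`, `MATCH_K`, `W_THRESHOLD`) are assumed values, and weight normalization details vary between MoGs variants.

```python
import math

ALPHA = 0.05        # learning rate (assumed value)
MATCH_K = 2.5       # match if |pixel - mean| < MATCH_K * std (assumed)
W_THRESHOLD = 0.3   # weight above which a component counts as background

def update_pixel(components, pixel):
    """One MoGs update for a single gray-level pixel.

    components: list of dicts {'mean', 'var', 'weight'} (three per pixel).
    Returns True when the pixel matches a background component.
    """
    matched = None
    for c in components:
        if abs(pixel - c["mean"]) < MATCH_K * math.sqrt(c["var"]):
            matched = c
            break
    if matched is not None:
        for c in components:
            if c is matched:
                # matched component: weight grows, mean moves toward the
                # pixel value, and the variance shrinks toward d^2
                d = pixel - c["mean"]
                c["weight"] += ALPHA * (1.0 - c["weight"])
                c["mean"] += ALPHA * d
                c["var"] += ALPHA * (d * d - c["var"])
            else:
                # unmatched component: only the weight is updated,
                # decreasing exponentially
                c["weight"] *= 1.0 - ALPHA
    else:
        # no matched component: replace the least-weight component with a
        # new one centered on the pixel (large variance, small weight)
        weakest = min(components, key=lambda c: c["weight"])
        weakest.update(mean=float(pixel), var=900.0, weight=0.05)
    # renormalize, then rank by the confidence metric weight / std
    total = sum(c["weight"] for c in components)
    for c in components:
        c["weight"] /= total
    components.sort(key=lambda c: c["weight"] / math.sqrt(c["var"]),
                    reverse=True)
    return matched is not None and matched["weight"] > W_THRESHOLD
```

Calling `update_pixel` once per frame for each pixel yields the foreground mask: pixels for which it returns False belong to moving objects.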
2.1. Shadow removal
In this section, an algorithm that removes vehicle shadows is developed to improve the performance of the system. The system performance is improved in two respects. First, the algorithm reduces the cases of occlusion between vehicles. Second, if the shadow region is not removed, it is considered part of the vehicle dimensions and the vehicle is classified into the wrong class. Thus, the shadow removal algorithm improves the classification process and increases the system performance.
Many methods have been developed to detect and remove vehicle shadows [15-19]. Some methods use color information to distinguish between vehicle pixels and shadow pixels, exploiting the difference in chromaticity and luminance between the shadow and the vehicle [20, 21]. However, these methods fail to distinguish shadow pixels from dark vehicle pixels. Other methods employ texture information to identify the vehicle region [22]; since the shadow has little texture, it is simple to separate the vehicle region from the shadow region. In this paper, we propose an algorithm to remove the shadow region based on the edges of detected regions. The proposed algorithm involves the following steps:
1. When the background subtraction method is applied, the moving objects (vehicles) are detected with their shadows (detected regions). The shadow removal algorithm is then applied to these detected regions, both in the image resulting from the background subtraction method and in the gray-scale source image. Figures 2 (a) and (b) show the detected region in the gray-scale image and in the foreground image, respectively.
2. The Canny edge detector is applied to these two images. The results of edge detection are shown in Figures 2 (c) and (d), respectively.
3. An XOR operation is applied to the two images resulting from the edge detection operation. Figure 2 (e) shows the image obtained from this operation.
4. Because edge detection is applied to the detected region in the gray-scale image, undesired background edges may remain, such as the edges of white road marks, damaged marks, and tree or building shadows. Therefore, the Canny edge detector is also applied to the detected region in the background model (the image when there are no vehicles). The edges of the background objects are subtracted from the edges of the image obtained from step 3, and what remains are the vehicle edges.
5. A closing operation is then applied to the resulting image to fill the spaces between the edges and preserve the vehicle details. The image obtained from this step is shown in Figure 2 (f).
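Steps 3-5 above (the XOR of the two edge maps, the subtraction of background edges, and the closing operation) can be sketched on binary edge maps as follows. The Canny edge detection of steps 1-2 is assumed to have been run already, and the 3x3 cross-shaped structuring element is an illustrative choice; the paper does not specify a kernel.

```python
import numpy as np

def dilate(img):
    """Binary dilation with a 3x3 cross structuring element."""
    out = img.copy()
    out[1:, :] |= img[:-1, :]
    out[:-1, :] |= img[1:, :]
    out[:, 1:] |= img[:, :-1]
    out[:, :-1] |= img[:, 1:]
    return out

def erode(img):
    """Binary erosion, expressed as the dual of dilation."""
    return 1 - dilate(1 - img)

def remove_shadow_edges(gray_edges, fg_edges, bg_edges):
    """Keep vehicle edges only, given three 0/1 edge maps."""
    combined = gray_edges ^ fg_edges        # step 3: XOR of the edge maps
    vehicle = combined & (1 - bg_edges)     # step 4: drop background edges
    return erode(dilate(vehicle))           # step 5: morphological closing
```

In a full implementation the closing would typically use a larger kernel so that the gaps between vehicle edges are filled into one solid region.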
Figure 2. Shadow removal, (a)-(f)
2.2. Vehicle tracking
There are several methods to track moving objects through image sequences. These methods can be grouped into two categories: feature-based tracking methods and model-based tracking methods. Feature-based methods are the most widely used in the literature because of their robustness [2]. In this work, after the detection of vehicles without shadows, a feature-based method is employed to track the moving vehicles through the image sequences in the detection zone (designated by two blue horizontal lines, as shown in Figure 3). When a vehicle appears in the detection area, the centroid of the vehicle is calculated, and this feature is then used to track the detected vehicle between consecutive frames. First, an empty vector is initialized to maintain the positions of the vehicle centroids. If moving vehicles are detected in the current frame, their centroid positions are recorded in this vector. In adjacent frames, the moving objects that are spatially closest are associated; therefore, measuring the distance between consecutive frames is sufficient to track the moving objects. The Euclidean distance (ED) is used to measure the distance between the position of a vehicle centroid in the current frame and in the previous frame. For each moving vehicle in the current frame, the vehicle with the minimum distance is searched for in the previous frame, and its record is updated to the new position. When a vehicle leaves the detection area, its record is removed from the vector.
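The nearest-centroid tracking described above can be sketched as follows. The data layout and the `max_dist` gate are illustrative assumptions; the paper specifies only nearest-centroid matching by Euclidean distance.

```python
import math

def euclidean(p, q):
    """Euclidean distance between two (x, y) centroids."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def update_tracks(tracks, detections, max_dist=50.0):
    """tracks: dict id -> (x, y); detections: centroids of the new frame."""
    next_id = max(tracks, default=-1) + 1
    unmatched = dict(tracks)
    new_tracks = {}
    for det in detections:
        if unmatched:
            # match to the spatially closest centroid of the previous frame
            tid = min(unmatched, key=lambda t: euclidean(unmatched[t], det))
            if euclidean(unmatched[tid], det) <= max_dist:
                new_tracks[tid] = det          # update record to new position
                del unmatched[tid]
                continue
        new_tracks[next_id] = det              # new vehicle entering the zone
        next_id += 1
    # vehicles left unmatched have exited the detection area: drop them
    return new_tracks
```

Calling `update_tracks` once per frame keeps a stable identifier per vehicle while it stays inside the detection zone, which is what the classification line test relies on.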
Figure 3. The results of the detection and classification process: (a) the result of frame 438, (b) the result of frame 350, (c) the result of frame 1005, (d) the result of frame 1572
3. VEHICLE CLASSIFICATION
After the detection of vehicles without shadows, the tracking process is implemented. When a vehicle reaches the classification line (designated by a red horizontal line, as shown in Figure 3), the classification process is carried out based on the vehicle dimensions. In this work, three parameters of the vehicles (aspect ratio (AR) = height/width, height, and width) are used to classify them into three classes (cars, bikes, and trucks). The aspect ratio is calculated by using the dimensions of the different types of vehicles; vehicle dimensions are taken from [23]. We took into consideration the transformation from 3D to 2D. The aspect ratios of the different vehicle types are as follows: the AR of cars is between [1.17-1.4], the AR of trucks is between [1.3-1.9], and the AR of bikes is between [1.8-2.4].
The classification process involves the following steps:
1. When the vehicle reaches the classification line, its height and width are calculated.
2. If the vehicle width is greater than a specific threshold, there is a horizontal occlusion (when the distance between two adjacent vehicles is very small, they are detected as one object; this is called a horizontal occlusion). The detected region is considered to be two adjacent vehicles, and it is separated into two regions by dividing the width by two.
3. If the vehicle height is greater than a specific threshold, there is a vertical occlusion (when the distance between two consecutive vehicles is very small, they are detected as one object; this is called a vertical occlusion). The detected region is considered to be two consecutive vehicles, and it is separated into two regions by dividing the height by two.
4. The aspect ratio of each vehicle is calculated. If the AR is between 1.17 and 1.3, the vehicle is a car. If the AR is between 1.41 and 1.79, the vehicle is a truck. If the AR is between 1.91 and 2.4, the vehicle is a bike. If the AR falls in the overlapping interval [1.3-1.4], the vehicle height is used to distinguish whether the vehicle is a car or a truck: if the height is greater than a predefined threshold, the vehicle is a truck; otherwise it is considered a car. If the AR falls in the overlapping interval [1.8-1.9], the vehicle width is used to distinguish whether the vehicle is a bike or a truck: if the width is greater than a predefined threshold, the vehicle is a truck; otherwise it is considered a bike.
Three counters are used to count the number of vehicles in each class. These counters are called C-car, C-bike, and C-truck. They are initialized to zero, and when a detected vehicle is classified into one of the three existing classes, the counter of that class is incremented by one.
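The classification steps above can be sketched as follows. The aspect-ratio intervals are the paper's; the occlusion-splitting thresholds and the height/width tie-breakers (`MAX_W`, `MAX_H`, `H_SPLIT`, `W_SPLIT`) are placeholder values, since the paper leaves their exact values unspecified.

```python
MAX_W, MAX_H = 120, 150      # occlusion-splitting thresholds (assumed)
H_SPLIT, W_SPLIT = 90, 70    # car/truck and bike/truck tie-breakers (assumed)

def split_occlusions(w, h):
    """Split a too-wide or too-tall region into two vehicles."""
    if w > MAX_W:                       # horizontal occlusion
        return [(w / 2, h), (w / 2, h)]
    if h > MAX_H:                       # vertical occlusion
        return [(w, h / 2), (w, h / 2)]
    return [(w, h)]

def classify(w, h):
    """Classify one vehicle by aspect ratio AR = height / width."""
    ar = h / w
    if 1.17 <= ar <= 1.3:
        return "car"
    if 1.3 < ar < 1.41:                 # overlap [1.3-1.4]: use the height
        return "truck" if h > H_SPLIT else "car"
    if 1.41 <= ar <= 1.79:
        return "truck"
    if 1.79 < ar < 1.91:                # overlap [1.8-1.9]: use the width
        return "truck" if w > W_SPLIT else "bike"
    if 1.91 <= ar <= 2.4:
        return "bike"
    return "unknown"

counters = {"car": 0, "bike": 0, "truck": 0}   # C-car, C-bike, C-truck

def count_vehicle(w, h):
    """Classify a detected region and increment the class counters."""
    for vw, vh in split_occlusions(w, h):
        cls = classify(vw, vh)
        if cls in counters:
            counters[cls] += 1
```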
4. EXPERIMENTAL RESULTS
In this paper, a traffic surveillance system is proposed to detect and classify the different types of vehicles. In order to confirm that our proposed method performs this task effectively, we used a database taken from stationary traffic cameras in the city of Casablanca. This database contains two different traffic videos; the first contains 3580 frames with a resolution of 240x320, and the other contains 2030 frames with the same resolution. Two different traffic scenes are used: one video is taken in the area just after a traffic light (Scene 1) and the other is taken on an urban road (Scene 2). The results obtained during system operation consist of the vehicle type and the vehicle number; the vehicle number is displayed above the bounding box. Each vehicle is labelled with a black rectangular box until it reaches the classification line, and then the box color changes according to the classification result: if the vehicle type is car, the box color changes to blue; if the vehicle type is bike, the box color changes to yellow; if the vehicle type is truck, the box color changes to red. Figure 3 (a) shows the result of the 438th frame, in which there are 4 cars in the detection area, three of which have crossed the classification line; they are classified as cars and labelled with blue rectangular boxes. Figure 3 (b) shows the result of the 350th frame, in which one truck has been counted and labelled with a red rectangular box. Figure 3 (c) shows the result of the 1005th frame, in which one bike has been counted and labelled with a yellow rectangular box and one car has been counted and labelled with a blue rectangular box. Figure 3 (d) shows the result of the 1572nd frame from video sequence 2, in which two cars have been detected; the first one crossed the classification line, so it has been counted and labelled with a blue rectangular box. According to these results, the different types of vehicles are classified and counted correctly by the proposed system. Table 1 shows the results of vehicle counting for the two different traffic scenes. The results show that the average accuracy over the two scenes is 96.78%.
Table 1. Result of vehicle counting

              Scene 1                             Scene 2
Vehicle Type  Count  Error  Accuracy   Vehicle Type  Count  Error  Accuracy
Car           230    6      97.45%     Car           147    3      97.59%
Bike          26     1      96.2%      Bike          19     1      94.7%
Truck         18     1      94.73%     Truck         10     0      100%
Average                    96.13%      Average                    97.43%
To evaluate the efficiency of the proposed system, a comparative study of different surveillance systems was carried out. It is difficult to make a fair comparative study due to the use of different databases; therefore, we conducted a qualitative comparative study instead of a quantitative one. Table 2 shows
the results of the comparative study. As noted in this table, the counting accuracy of the proposed algorithm, 96.78% on average, indicates that the proposed algorithm is more efficient than the other compared algorithms.
Table 2. Result of the comparative study

Comparative methods     [24]           [2]             [3]                 [25]                Our method
Vehicle types           Cars only      Cars and bikes  Bus, minibus, car,  Cars only           Cars, bikes, and trucks
                                                       and truck
Vehicle detection       Optical flow   Background      Background          Frame differencing  Background subtraction
method                                 subtraction     subtraction
Vehicle classification  X              Features        Hierarchical        X                   Features extraction
method                                 extraction      Multi-SVMs
Vehicle counting        94.04%         96.9%           Scene 1: 93%        96.04%              Scene 1: 96.13%
accuracy                                               Scene 2: 88%                            Scene 2: 97.43%
                                                       Scene 3: 90%
                                                       Scene 4: 95%
5. CONCLUSION
A system for vehicle detection and classification has been introduced in this paper. The system consists of three main phases: vehicle detection, vehicle tracking, and vehicle classification. In the vehicle detection phase, background subtraction is used to detect the moving vehicles by employing the mixture of Gaussians (MoGs) algorithm, and then a shadow removal algorithm is developed to improve the detection phase and eliminate the undesired detected regions. The vehicles are then tracked until they reach the classification line. After that, the vehicle dimensions are used to classify the vehicles into three classes (cars, bikes, and trucks). After the classification phase, the vehicles in each class are counted. The system has been applied to various traffic scenes under different weather and lighting conditions. The experimental results confirm that the proposed system is able to detect and classify vehicles accurately and efficiently in real time.
REFERENCES
[1] I. Kajo, A. S. Malik, and N. Kamel, "Motion estimation of crowd flow using optical flow techniques: A review," 9th International Conference on Signal Processing and Communication Systems (ICSPCS), pp. 1-9, 2015.
[2] D.-Y. Huang, et al., "Feature-based vehicle flow analysis and measurement for a real-time traffic surveillance system," Journal of Information Hiding and Multimedia Signal Processing, vol. 3, pp. 279-294, 2012.
[3] H. Fu, H. Ma, Y. Liu, and D. Lu, "A vehicle classification system based on hierarchical multi-SVMs in crowded traffic scenes," Neurocomputing, pp. 182-190, 2016.
[4] S. Gupte, O. Masoud, R. F. Martin, and N. P. Papanikolopoulos, "Detection and classification of vehicles," IEEE Transactions on Intelligent Transportation Systems, vol. 3, no. 1, pp. 37-47, 2002.
[5] Y. Zhou and C. Ngai-Man, "Vehicle classification using transferable deep neural network features," arXiv preprint arXiv:1601.01145, 2016.
[6] S. Ojha and S. Sachin, "Image processing techniques for object tracking in video surveillance-A survey," IEEE International Conference on Pervasive Computing (ICPC), 2015.
[7] J. Liu, Y. Liu, G. Zhang, P. Zhu, and Y. Q. Chen, "Detecting and tracking people in real time with RGB-D camera," Pattern Recognition Letters, vol. 53, pp. 16-23, 2015.
[8] G. Chavez, O. Ricardo, and A. Olivier, "Multiple sensor fusion and classification for moving object detection and tracking," IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 2, pp. 525-534, 2016.
[9] S. Kamkar and S. Reza, "Vehicle detection, counting and classification in various conditions," IET Intelligent Transport Systems, vol. 10, no. 6, pp. 406-413, 2016.
[10] A. Elgammal, D. Harwood, and L. Davis, "Non-parametric model for background subtraction," European Conference on Computer Vision, Springer, pp. 751-767, 2000.
[11] K.-P. Karmann and A. Brandt, "Moving object recognition using an adaptive background memory," Time-Varying Image Processing and Moving Object Recognition, vol. 2, pp. 289-307, 1990.
[12] K. Kim, et al., "Real-time foreground-background segmentation using codebook model," Real-Time Imaging, vol. 11, no. 3, pp. 172-185, 2005.
[13] Y. Benezeth, et al., "Comparative study of background subtraction algorithms," Journal of Electronic Imaging, vol. 19, no. 3, 2010.
[14] C. Stauffer and W. Grimson, "Learning patterns of activity using real-time tracking," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 22, pp. 747-757, Aug 2000.
[15] S. P. Mohammed, N. Arunkumar, and A. Enas, "Automated multimodal background detection and shadow removal process using robust principal fuzzy gradient partial equation methods in intelligent transportation systems," International Journal of Heavy Vehicle Systems, vol. 25, pp. 271-285, 2018.
[16] S. Karim, Y. Zhang, S. Ali, and M. R. Asif, "An improvement of vehicle detection under shadow regions in satellite imagery," Ninth International Conference on Graphic and Image Processing (ICGIP), vol. 10615, 2018.
[17] C. S. Kumar, et al., "Segmentation on moving shadow detection and removal by symlet transform for vehicle detection," 3rd International Conference on Computing for Sustainable Global Development (INDIACom), pp. 259-264, 2016.
[18] N. Seenouvong, U. Watchareeruetai, C. Nuthong, K. Khongsomboon, and N. Ohnishi, "A computer vision based vehicle detection and counting system," 8th International Conference on Knowledge and Smart Technology (KST), pp. 224-227, 2016.
[19] M. Hanif, F. Hussain, M. H. Yousaf, S. A. Velastin, and Z. Chen, "Shadow detection for vehicle classification in urban environments," International Conference Image Analysis and Recognition, pp. 352-362, 2017.
[20] A. Tiwari, K. S. Pradeep, and A. Sobia, "A survey on shadow detection and removal in images and video sequences," IEEE Sixth International Conference on Cloud System and Big Data Engineering, 2016.
[21] Z. Zhu and X. Lu, "An accurate shadow removal method for vehicle tracking," IEEE International Conference on Artificial Intelligence and Computational Intelligence, vol. 2, pp. 59-62, 2010.
[22] R. P. Avery, et al., "Investigation into shadow removal from traffic images," Transportation Research Record, vol. 1, pp. 70-77, 2007.
[23] Transport for NSW, Vehicle standards information, no. 5, Rev. 5, published 9 November 2012.
[24] H. S. Mohana, M. Ashwathakumar, and G. Shivakumar, "Vehicle detection and counting by using real time traffic flux through differential technique and performance evaluation," IEEE International Conference on Advanced Computer Control, pp. 791-795, 2009.
[25] N. Seenouvong, U. Watchareeruetai, C. Nuthong, K. Khongsomboon, and N. Ohnishi, "A computer vision based vehicle detection and counting system," IEEE 8th International Conference on Knowledge and Smart Technology (KST), pp. 224-227, 2016.