IAES International Journal of Artificial Intelligence (IJ-AI)
Vol. 14, No. 3, June 2025, pp. 2236∼2245
ISSN: 2252-8938, DOI: 10.11591/ijai.v14.i3.pp2236-2245
Journal homepage: http://ijai.iaescore.com
Camera-based advanced driver assistance with integrated YOLOv4 for real-time detection
Keerthi Jayan, Balakrishnan Muruganantham
Department of Computing Technologies, School of Computing, SRM Institute of Science and Technology, Kattankulathur, India
Article Info

Article history:
Received Jun 1, 2024
Revised Dec 10, 2024
Accepted Jan 27, 2025
Keywords:
ADAS
Computational complexity
Correlated outcome
Real-time object detection
Synchronization rate
YOLOv4
ABSTRACT

Testing object detection in adverse weather conditions poses significant challenges. This paper presents a framework for a camera-based advanced driver assistance system (ADAS) using the YOLOv4 model, supported by an electronic control unit (ECU). The ADAS-based ECU identifies object classes from real-time video, with detection efficiency validated against the YOLOv4 model. Performance is analysed using three testing methods: projection, video injection, and real vehicle testing. Each method is evaluated for accuracy in object detection, synchronization rate, correlated outcomes, and computational complexity. Results show that the projection method achieves the highest accuracy with minimal frame deviation (1-2 frames) and up to 90% correlated outcomes, at approximately 30% computational complexity. The video injection method shows moderate accuracy and complexity, with a frame deviation of 3-4 frames and 75% correlated outcomes. The real vehicle testing method, though demanding higher computational resources and showing a lower synchronization rate (>5 frames deviation), provides critical insights under realistic weather conditions despite higher misclassification rates. The study highlights the importance of choosing the appropriate method based on testing conditions and objectives, balancing computational efficiency, synchronization accuracy, and robustness in various weather scenarios. This research significantly advances autonomous vehicle technology, particularly in enhancing ADAS object detection capabilities in diverse environmental conditions.

This is an open access article under the CC BY-SA license.
Corresponding Author:
Keerthi Jayan
Department of Computing Technologies, School of Computing, SRM Institute of Science and Technology
Kattankulathur, Chengalpattu, Tamil Nadu, 603203, India
Email: kj4134@srmist.edu.in
1. INTRODUCTION
In the rapidly evolving landscape of automotive technology, advanced driver assistance systems (ADAS) [1] have emerged as pivotal components in enhancing road safety and driving efficiency. Central to the effectiveness of ADAS is the capability for real-time object detection, a task that demands high accuracy and reliability under diverse and often challenging environmental conditions [2]–[5]. Recent developments in artificial intelligence (AI) are bringing the concept of self-driving automobiles closer to reality, with the potential to revolutionize transportation by enabling vehicles to drive themselves without human intervention [6]. The Society of Automotive Engineers (SAE) defines six levels of driving automation, ranging from level 0 (no driving automation) to level 5 (full automation), reflecting the progressive sophistication of autonomous driving capabilities [7], [8]. Consumers worldwide are eagerly anticipating the introduction of driverless cars,
which promise to navigate complex environments, classify objects, and adhere to traffic laws autonomously [9]–[11]. A notable milestone in this field is the Mercedes-Benz Drive Pilot, the first autonomous driving system to receive complete certification at level 3, marking significant progress towards fully autonomous vehicles. Self-driving cars utilize an array of sensors, including radar, video cameras, light detection and ranging (LIDAR), and ultrasonic sensors, to gather comprehensive data about their surroundings [12], [13]. These sensors enable the vehicle to construct and continuously update a detailed map of its immediate environment. Radar monitors the positions of nearby vehicles; video cameras identify pedestrians, vehicles, and traffic signals; LIDAR measures distances and detects road features; and ultrasonic sensors detect obstacles at close range [14]. The integration of these sensor technologies with advanced computer vision systems is crucial for the performance of ADAS, as these systems must process real-time data to make instantaneous decisions [15].

The demand for ADAS is expected to surge with advancements in computer vision and deep learning (DL). Modern automobiles increasingly rely on camera-based environmental sensors to identify, classify, and localize objects accurately. Consequently, rigorous testing and validation of camera-based ADAS functions are essential to ensure their reliability and effectiveness under various conditions [16], [17]. Current ADAS testing methodologies include vehicle-level field trials and hardware-in-the-loop (HIL) testing [18], [19]. Vehicle testing on proving tracks validates ADAS functions but faces limitations regarding safety and environmental conditions, resulting in reduced test coverage [20]–[22]. Conversely, HIL validation offers a more comprehensive approach [23]. In HIL testing, various scenarios are created using simulation software. These simulated scenarios are then fed to the ADAS camera via a monitor to evaluate the system's performance [24], [25]. This method allows for thorough validation of ADAS functions under a wide range of environmental conditions and safety-critical scenarios, ensuring the system can handle real-world situations effectively.

This research delves into the integration and validation of a camera-based ADAS using the advanced YOLOv4 model [26], [27], a DL algorithm celebrated for its efficiency and accuracy in object detection. The main goal is to assess YOLOv4's performance within an ADAS framework, particularly focusing on its ability to detect and classify objects in real time [28]. Given the complex, variable conditions encountered in real-world driving, such as adverse weather, this study aims to address the critical need for a robust and reliable object detection system. Through a structured approach incorporating various testing and validation scenarios, such as monitor-based scenario projection, camera-based real-time scenario capture, and live drive testing, this research presents an in-depth analysis of the ADAS system's effectiveness [29]. It examines the trade-offs between computational efficiency and detection accuracy, offering valuable insights that can drive further advancements in ADAS technology. These findings contribute to the growing field of autonomous driving, highlighting the importance of accurate, high-performance object detection as a foundational element on the path to fully autonomous driving solutions.
2. METHODOLOGY
This section describes a framework developed for testing and validating real-time object detection using a camera-based ADAS; it is illustrated in Figure 1. The electronic control unit (ECU) is integrated with a well-trained DL network. The framework consists of four important units: in-front vehicle infotainment (including a video camera and ADAS cameras), a central gateway, a pre-trained YOLOv4 with the proposed video frame feeding (VFF) algorithm [30], and an ADAS-ECU based object detection model. The overview of the proposed framework is as follows: both the video camera and ADAS camera are mounted on the vehicle's windshield to continuously monitor the front road environment. Once the vehicle starts, both cameras are activated and instantaneously capture the road environment. This data is then forwarded to the pre-trained YOLOv4 and the ADAS-ECU separately with the help of the central gateway unit. The CarMaker (CM) tool creates real-world scenarios and feeds video to the proposed VFF algorithm, which processes the video frames and generates the object list to be applied to the object detection model. Similarly, the ADAS ECU provides vehicle dynamic information for the videos received from the ADAS camera, which is fed through Ethernet. The partner ECU then starts to identify objects, and the output list is provided in CAN format. The object detection model processes this and provides an output as a list of detected objects. The developed framework cross-checks the outcomes received from the ADAS camera as CAN messages and from the VFF algorithm in real time. It compares the object list from the proposed VFF algorithm and the CAN data against the simulation timestamp to ensure that there is no false positive or false negative identification of objects.
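The cross-check described above can be pictured as a timestamp-keyed comparison between the two object lists. The following Python sketch is illustrative only; the record layout, the 2-frame tolerance, and the helper names (vff_objects, can_objects) are assumptions and not the framework's actual interfaces.

# Minimal sketch of the real-time cross-check between the VFF object list and
# the CAN-decoded ADAS-ECU object list, keyed by simulation frame/timestamp.
# The data layout and the 2-frame tolerance are illustrative assumptions.
def cross_check(vff_objects, can_objects, frame_tolerance=2):
    """vff_objects / can_objects: dicts mapping a frame index (derived from the
    simulation timestamp) to a set of detected class labels."""
    matches, false_positives, false_negatives = [], [], []
    for frame, expected in vff_objects.items():
        # Allow a small frame offset to absorb synchronization deviation.
        candidates = set()
        for offset in range(-frame_tolerance, frame_tolerance + 1):
            candidates |= can_objects.get(frame + offset, set())
        matches.append((frame, expected & candidates))
        false_negatives.extend((frame, c) for c in expected - candidates)
        false_positives.extend((frame, c) for c in candidates - expected)
    return matches, false_positives, false_negatives

# Example: the ADAS-ECU misses a 'danger' sign expected at frame 120.
vff = {120: {"prohibitory", "danger"}}
can = {121: {"prohibitory"}}
print(cross_check(vff, can))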
Figure 1. The real-time object detection testing and validation framework
2.1. In-front vehicle infotainment
The ADAS camera ECU is primarily responsible for processing visual data in real time, which is crucial for detecting and warning about potential hazards such as pedestrians, other vehicles, and road signs. Its ability to swiftly handle large volumes of data from cameras is vital for effective decision-making and action in dynamic driving environments. It is mostly used for automating driving tasks such as parking assistance, lane keeping, and adaptive cruise control, all of which significantly reduce driver workload and enhance driving comfort and experience. Additionally, it adapts to various environmental conditions, including low light and adverse weather, to ensure consistent performance under different external factors. The video camera, having similar properties to the ADAS camera (such as field of view and frames per second), captures the road environment and feeds it to the pre-trained YOLOv4 integrated with the proposed VFF algorithm.
2.2. Central gateway
The central gateway facilitates the flow of data between the different components, in this case the cameras (video and ADAS cameras), the pre-trained YOLOv4 with the VFF algorithm, and the ADAS-ECU. Simply put, it acts as a central hub or intermediary in the system. The high bandwidth of HDMI supports the transfer of uncompressed video data, which is crucial for maintaining the quality and fidelity of the visual information necessary for accurate object detection. On the other hand, the processed data can be transmitted to the ADAS-ECU via an Ethernet connection. This ensures a reliable and fast transfer of crucial object detection information, which the ADAS-ECU can then use to make real-time decisions for driver assistance functionalities.
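As an illustration of the hand-off toward the CAN side of the system, the short sketch below shows how a detection result could be packed into a CAN frame with the python-can package. The arbitration ID, the 8-byte payload layout, and the channel name are hypothetical; the paper does not specify the actual message catalogue used by the ADAS-ECU.

# Hedged sketch: publishing one detected object class over CAN with python-can.
# Arbitration ID 0x123, the payload layout, and the virtual channel are
# illustrative assumptions, not the ADAS-ECU's real message format.
import can

CLASS_IDS = {"prohibitory": 0, "danger": 1, "mandatory": 2, "priority": 3}

def send_detection(bus, frame_index, class_name, confidence):
    payload = bytearray(8)
    payload[0:4] = frame_index.to_bytes(4, "big")   # simulation frame index
    payload[4] = CLASS_IDS[class_name]               # detected class
    payload[5] = int(confidence * 100)               # confidence in percent
    msg = can.Message(arbitration_id=0x123, data=payload, is_extended_id=False)
    bus.send(msg)

if __name__ == "__main__":
    with can.Bus(interface="virtual", channel="vcan0") as bus:
        send_detection(bus, frame_index=120, class_name="danger", confidence=0.87)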
2.3. Pre-trained YOLOv4 with VFF algorithm
The main goal is to accurately detect traffic signboard objects from the German Traffic Sign Recognition Benchmark (GTSRB) [31], [32] dataset, on which YOLOv4 is pre-trained, with the additive support of the proposed VFF algorithm, so that detection and classification can be performed under various environmental conditions. In this process, real-time video frames are fed into the pre-trained YOLOv4 model, enhanced by the VFF algorithm, to identify specific traffic signboards from a selected set. This simplifies the process of object detection in a simulated environment. Its main objectives are: i) setting up the camera model in CM, ii) generating video frames that represent the simulated environment, iii) pre-processing these video frames before they are input into the object detection model, and iv) comparing the detected objects from the model with the data from CM to ensure accuracy. The model's performance is measured by its ability to recognize these signboards consistently and accurately across different scenarios like day, foggy day, cloudy, dusk, foggy night, and night. The effectiveness of the pre-trained YOLOv4 model, coupled with the VFF algorithm, is further demonstrated through occlusion testing, where the model successfully identifies traffic signboards even when partially obscured, such as by trees, with maximum percentage accuracy. The primary outcome of this process is the generation of a reliable and accurate object list (in this case, traffic signboards) under varying environmental conditions and occlusions, ensuring robust performance of the object detection system.
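To make the frame-by-frame feeding concrete, the sketch below loads a pre-trained YOLOv4 network with OpenCV's DNN module and runs it over a scenario clip, mirroring objectives ii) and iii) above. File names, the input resolution, and the confidence threshold are assumptions; the published VFF algorithm [30] defines the actual pre-processing.

# Hedged sketch of frame-by-frame feeding into a pre-trained YOLOv4 network
# using OpenCV's DNN module. Paths, input size, and thresholds are assumptions.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov4-gtsrb.cfg", "yolov4-gtsrb.weights")
out_names = net.getUnconnectedOutLayersNames()

cap = cv2.VideoCapture("cm_scenario.mp4")    # clip exported from the CM tool
detections = []                              # (frame_index, class_id, score)
frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Pre-process: scale to [0,1], resize to the network input, swap BGR->RGB.
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    for output in net.forward(out_names):
        for row in output:                   # row = [cx, cy, w, h, obj, class scores...]
            scores = row[5:]
            class_id = int(scores.argmax())
            if scores[class_id] > 0.5:
                detections.append((frame_index, class_id, float(scores[class_id])))
    frame_index += 1
cap.release()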
2.4. ADAS-ECU based object detection model
The operation of the ADAS-ECU, which is interconnected with both a microprocessor unit (MPU) and HIL and then connected to an object detection model, can be described simply as follows. The ADAS-ECU serves as the central processing unit in this setup. It receives input from the MPU, which handles the initial processing of data, such as signals from various sensors and cameras. This processed data is then sent to the HIL system, where real-time simulations are conducted to emulate driving conditions and scenarios. These simulations are crucial for testing and validating the performance of the ADAS-ECU under different conditions. The output from the HIL, which represents processed and simulated sensor data, is then fed into the object detection model. This model, built around the proposed VFF algorithm, analyzes the data to detect and classify objects in the vehicle's vicinity, contributing to various ADAS functionalities such as collision avoidance, lane keeping, or adaptive cruise control. This interconnected system ensures that the ADAS-ECU operates effectively, accurately processing real and simulated data for enhanced vehicle safety and driver assistance.
3. RESULTS AND DISCUSSION
This section explores the evaluation results of the developed framework used for object detection analysis carried out in a real-time outdoor environment. Its performance is analyzed and compared with experimental methods conducted in the laboratory, namely the projection method and the video injection method. In the projection approach, an ADAS camera is placed in front of a monitor to capture real-world scenarios, and the video data is directly fed to the ADAS domain. This process calibrates real-time vehicle dynamic information to the ADAS ECU for the object detection model. Simultaneously, the same video is processed using the proposed VFF algorithm from scenarios created by the CM environment simulation tool. This tool processes the video and provides an output as a list of detected objects. In the video injection method, Jetson Nano hardware is used instead of a monitor as in the projection method. A CSI camera connected to the Jetson Nano device captures the synthetic video used in the projection method. The detection outputs from the Jetson Nano device are streamed to the host PC as CAN messages. The host PC runs the VFF algorithm, generating an object list from the synthetic video. A comparative analysis is conducted between the laboratory methods and the developed framework in a real vehicle for object detection. This analysis focuses on individual object class detection, synchronization rate, percentage of correlated outcomes, and computational complexity. An overall accuracy of 97% is observed during testing under normal environmental conditions.
3.1. Individual object class detection
The experimental results indicate that accuracy slightly decreases in real vehicle testing compared to laboratory methods. In the laboratory, only 43 traffic signboard images are categorized into four classes: prohibitory, danger, mandatory, and priority. Additionally, about 900 real traffic signboard images are categorized in a separate folder for training and testing. In the projection method, approximately 50 iterations are conducted to assess the performance accuracy of each object class. Out of 250 tested images, on average, 15 are misclassified. Similarly, the video injection method shows comparable results, with an average misclassification of 18 images out of 250 under the same number of iterations. This discrepancy is attributed to the similar appearances of some object classes. For example, the signs "TS16-restriction ends overtaking" and "TS17-restriction ends overtaking trucks" look similar from a distance of 70-100 meters. Additionally, the distance between the traffic signboard and the moving vehicle can vary under different weather conditions like day, foggy day, cloudy, dusk, foggy night, and night. Particularly in cloudy and foggy night conditions, the object detection model deviates slightly from its regular performance, often detecting correctly only when the vehicle is closer to the sign.

Comparative analysis shows significant variations in real vehicle testing compared to laboratory methods, attributed to the natural versus artificially simulated environmental conditions in the lab. Video cameras struggle to capture the nuances of real climatic conditions, affecting the algorithm's ability to accurately synthesize the simulation environment. This leads to a notable drop in accuracy, especially in dark scenarios. Table 1 presents the misclassification results of the projection method across different environmental conditions. Table 2 provides the misclassification results of the video injection method under varying environmental conditions. Table 3 shows the misclassification results from real vehicle testing across diverse environmental conditions. Laboratory methods generally yield more accurate classification for individual object classes, particularly in day, cloudy, and dusk conditions. However, in foggy day, foggy night, and night conditions, some misclassifications are observed, with an overall average misclassification of 20 to 25 images. In real vehicle
testing, although there are fewer errors in individual object class detection, the overall average number of misclassifications is higher compared to laboratory methods, as seen in Figure 2.
Table 1. Misclassification outcomes of the projection method under various conditions
Class         Day   Foggy day   Cloudy   Dusk   Foggy night   Night   Average
Prohibitory    -        9          -       -         -          10       10
Danger        30        -          -      29         8          14       20
Mandatory      -       29          -       -        14          30       24
Priority       -        -          9       -        16           -       12
Average       30       19          9      29        13          18       20
Table 2. Misclassification outcomes of the video injection method under various conditions
Class         Day   Foggy day   Cloudy   Dusk   Foggy night   Night   Average
Prohibitory    -       33          -       -         -          26       30
Danger        12        -         19      36        39           -       27
Mandatory      -        8         29       -        35          25       24
Priority       -       42          -      21         5          17       21
Average       12       28         29      20        25          27       24
Table 3. Misclassification outcomes of real vehicle testing under various conditions
Class         Day   Foggy day   Cloudy   Dusk   Foggy night   Night   Average
Prohibitory    2       33          3       1         4          26       12
Danger        12        3          5      19        16          29       14
Mandatory      4        8         29       1        15          25       14
Priority       3       42          1      21         5          17       15
Average       21       86         38      42        40          97       54
Figure 2. Pie chart of misclassification outcomes under various conditions
3.2. Synchronization rate
The misclassification outcomes primarily occur due to synchronization errors between the synthesized simulation video and the real-time ADAS camera capture. The proposed VFF algorithm processes the entire simulation video through frame-by-frame analysis to accurately detect traffic signboards and generate a list of detected object classes, which is then directly fed to the object detection model. Similarly, the ADAS domain correlates the mapped object list, which is projected onto the actual outcome of the object detection model. The irregular synchronization of the processed VFF algorithm data affects the mapping feature of the ADAS domain concerning the object list sent to the object detection model. Laboratory methods exhibit better synchronization compared to real-vehicle-level testing. The object feature mapping rate is compromised due to deviations in frame-by-frame synchronization, which is off by five frames per second in real-vehicle-level testing, amounting to a deviation of nearly 25% in total synchronization. The three methods, namely the projection method, the video injection method, and real vehicle testing, are compared in terms of their synchronization rates and their impact on object detection accuracy. The projection method, with a high synchronization rate showing only 1-2 frames of deviation, results in lower misclassification rates due to its near real-time
processing capabilities. In contrast, the video injection method has a moderate synchronization rate with a 3-4 frame deviation, leading to moderate misclassification rates, as the slight delay in frame processing can occasionally affect accuracy. The real vehicle testing method, however, has a low synchronization rate with a significant 5-frame deviation, which results in higher misclassification rates. This is because the larger lag in processing and synchronizing video frames leads to a greater chance of inaccuracies in detecting and classifying objects, demonstrating the crucial impact of synchronization rates on the accuracy of object detection in advanced driver-assistance systems.
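One hedged way to quantify this synchronization rate is to compare, per detected object, the simulation-side frame index against the frame index at which the ADAS-ECU reports it, and then express the mean offset as a share of the camera frame rate. The sketch below follows that reading of the text (a 5-frame offset at a nominal 20 fps gives the quoted roughly 25% deviation); the actual measurement procedure is not spelled out in the paper, and the pairing rule and frame rate are assumptions.

# Hedged sketch: average frame deviation between simulation-side detections and
# ADAS-ECU detections, plus the resulting deviation percentage. The pairing by
# class label and the nominal 20 fps figure are illustrative assumptions.
def sync_deviation(sim_events, ecu_events, camera_fps=20):
    """sim_events / ecu_events: dicts mapping class label -> frame index of first detection."""
    offsets = [abs(ecu_events[c] - f) for c, f in sim_events.items() if c in ecu_events]
    mean_frames = sum(offsets) / len(offsets) if offsets else 0.0
    return mean_frames, 100.0 * mean_frames / camera_fps

# Example: the ECU reports each sign 5 frames later than the simulation reference.
sim = {"prohibitory": 100, "danger": 240}
ecu = {"prohibitory": 105, "danger": 245}
print(sync_deviation(sim, ecu))   # -> (5.0, 25.0)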
Table 4 gives numerical representations of the synchronization rates under six different weather conditions. The values are expressed in frames per second (fps) and indicate the synchronization rates for the projection method, video injection method, and real vehicle testing under each weather condition. A lower value indicates better synchronization and potentially higher object detection accuracy.
Table 4. Synchronization rates under six different weather conditions
Weather condition   Projection method (fps)   Video injection method (fps)   Real vehicle testing (fps)
Day                 0.50                      1.00                           2.50
Foggy day           1.00                      1.50                           3.00
Cloudy              0.75                      1.25                           2.75
Dusk                0.80                      1.30                           3.00
Foggy night         1.20                      1.70                           3.50
Night               1.50                      2.00                           4.00
3.2.1. Percentage of correlated outcomes
Based on the synchronization rates of the different methods, Table 5 provides the percentage of correlated outcomes. The higher the synchronization rate (i.e., the closer the alignment with real time), the higher the percentage of correlated outcomes, indicating more accurate object detection. The projection method, with the highest synchronization rate, shows a 90% correlation in outcomes, suggesting a high level of accuracy in object detection. The video injection method, with moderate synchronization, shows a 75% correlation, indicating moderate accuracy. In contrast, real vehicle testing, with the lowest synchronization rate, has only a 60% correlation, reflecting the greatest chance of inaccuracies in detection.
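Read this way, the correlated-outcome percentage is simply the share of reference detections that a method reproduces within its synchronization window. The sketch below illustrates that computation; the matching rule (same class within the allowed frame deviation) is an assumption consistent with the description above, not a formula given in the paper.

# Hedged sketch: percentage of correlated outcomes, i.e. the share of reference
# (simulation-side) detections reproduced by a method within its frame window.
def correlated_outcome(reference, observed, frame_window):
    """reference / observed: lists of (frame_index, class_label) tuples."""
    correlated = 0
    for frame, label in reference:
        if any(lab == label and abs(frm - frame) <= frame_window
               for frm, lab in observed):
            correlated += 1
    return 100.0 * correlated / len(reference) if reference else 0.0

reference    = [(100, "prohibitory"), (240, "danger"), (400, "mandatory")]
projection   = [(101, "prohibitory"), (241, "danger"), (401, "mandatory")]
real_vehicle = [(105, "prohibitory"), (260, "danger")]
print(correlated_outcome(reference, projection, frame_window=2))    # 100.0
print(correlated_outcome(reference, real_vehicle, frame_window=5))  # ~33.3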
Table 6 indicates the percentage of correlated outcomes for each method under different weather conditions. The projection method consistently shows the highest percentage of correlated outcomes, indicating its superior accuracy across all weather conditions. The video injection method demonstrates moderate accuracy, with its effectiveness slightly diminishing in less favorable weather conditions like foggy night and night. The real vehicle testing method has the lowest correlated outcomes, especially in challenging weather conditions, reflecting the impact of environmental factors on object detection accuracy.
Table 5. Comparative analysis of the percentage of correlated outcomes of the three methods
Method                   Synchronization rate               Correlated outcome (%)
Projection method        High (1-2 frames deviation)        90
Video injection method   Moderate (3-4 frames deviation)    75
Real vehicle testing     Low (5 frames deviation)           60
Table 6. Correlated outcomes for object detection accuracy under six different weather conditions using the three methods
Weather condition   Projection method (%)   Video injection method (%)   Real vehicle testing (%)
Day                 92                      80                           70
Foggy day           88                      75                           65
Cloudy              90                      78                           68
Dusk                91                      77                           66
Foggy night         85                      70                           60
Night               83                      68                           58
3.2.2. Computational complexity
In terms of computational complexity for object detection under various weather conditions, the three methods exhibit distinct characteristics. The projection method, typically the least complex, maintains a consistent computational load across different weather conditions, estimated at a complexity level of around 30%. Its straightforward approach of capturing and processing real-world scenarios contributes to this consistency. The video injection method, with added complexity due to the incorporation of synthetic video and environmental simulations, presents a moderate computational burden, averaging about 50% across different weather conditions. This method's complexity escalates slightly in adverse weather conditions like foggy night, where additional processing is required. The real vehicle testing method, however, faces the highest computational challenges, averaging around 70% complexity. Its complexity peaks in challenging weather scenarios such as foggy day and foggy night, where real-time processing of dynamic environmental and vehicular data significantly increases the computational load. In essence, the computational demand of each method varies with the intricacy of the weather conditions, reflecting the data processing depth required for accurate object detection in diverse environmental scenarios.
4. DISCUSSION
In a comparative analysis of the three methods for object detection (the projection method, the video injection method, and real vehicle testing), notable differences emerge in terms of individual object class detection, synchronization rate, correlated outcome percentage, and computational complexity. Figure 3 shows a comparative analysis of the object detection model testing methods. For individual object class detection, the projection method typically shows the highest accuracy with minimal misclassification, while the real vehicle testing method, dealing with dynamic real-world scenarios, registers a higher rate of misclassification. Synchronization rates, indicative of the methods' alignment with real-time processing, are highest for the projection method (1-2 frames deviation), moderate for the video injection method (3-4 frames deviation), and lowest for real vehicle testing (5 frames deviation). These rates directly affect the percentage of correlated outcomes, with the projection method achieving about 90%, the video injection method around 75%, and real vehicle testing approximately 60%. Computational complexity follows a similar trend; the projection method is the least complex at around 30%, the video injection method stands at 50%, and real vehicle testing is the most complex, averaging 70%. This consolidated view highlights the trade-offs between these methods in terms of accuracy, real-time data processing capabilities, and computational demands, underlining the challenges in optimizing object detection models for advanced driver-assistance systems.
Figure 3. Comparative analysis of object detection model testing methods
Table 7 compares three models for traffic sign recognition in terms of their algorithms, dataset, accuracy, computational efficiency, and synchronization rate. The proposed model, which utilizes YOLOv4 with VFF, achieves the highest accuracy at 96.5% on the GTSRB dataset, slightly surpassing the model in [33], which reaches 96% on the same dataset. Additionally, the proposed model demonstrates exceptional computational efficiency, operating at 30 frames per second (fps), which is significantly faster than the Gunasekara et al. [33]
model (4.5 fps) and the Santos et al. [34] model (8 fps). This efficiency makes it more suitable for real-time applications. Furthermore, the proposed model has a lower synchronization rate (5), indicating potentially reduced processing delays compared to the other models, where the Gunasekara et al. [33] model has a rate of 10 and the Santos et al. [34] model has a rate of 8. A graphical representation of this comparison is provided in Figure 4, where our model demonstrates clear superiority across all performance metrics compared to the other two models. Overall, the proposed model outperforms the others in both accuracy and speed, making it an optimal choice for real-time traffic sign recognition tasks.
Table 7. Comparison of the proposed model with baseline models
Model                    Algorithm used    Dataset                             Accuracy (%)   Computational efficiency (fps)   Synchronization rate
Gunasekara et al. [33]   YOLO + Xception   GTSRB                               96             4.5                              10
Santos et al. [34]       CNN               Napier University traffic dataset   92.97          8                                8
Proposed model           YOLOv4 + VFF      GTSRB                               96.5           30                               5
Figure 4. Comparison of the proposed model with baseline models
5. CONCLUSION
This research has demonstrated a comprehensive analysis of object detection in ADAS using three distinct methods: the projection method, the video injection method, and real vehicle testing. Our findings reveal significant variations in performance metrics such as individual object class detection, synchronization rate, percentage of correlated outcomes, and computational complexity across different weather conditions. The projection method, with its high synchronization rate and lower computational complexity, consistently showed the highest accuracy in object class detection, particularly in standard weather conditions. This method proved to be robust in terms of correlated outcomes, achieving the highest percentage of accuracy across various scenarios. In contrast, the video injection method, while moderately complex, exhibited a balanced performance in terms of synchronization and object detection accuracy. This method was particularly effective in moderately challenging weather conditions, offering a viable alternative for environments where real-time data is not critical. The real vehicle testing approach, despite its higher computational demand and lower synchronization rate, provided invaluable insights into the performance of ADAS under realistic and dynamically changing environmental conditions. Although it recorded a higher rate of misclassification, this method's real-world applicability is undeniable, especially for testing in adverse weather conditions. Across all methods, weather conditions like foggy nights and heavy rain posed significant challenges, affecting the accuracy and reliability of object detection. These findings underscore the need for further research and development in ADAS technology, particularly in enhancing object detection algorithms to cope with diverse and challenging environmental factors. Overall, this research contributes significantly to the field of autonomous vehicle technology, offering critical insights into the strengths and limitations of various object detection methods. It lays the groundwork for future advancements in ADAS, paving the way for more robust, reliable, and safe autonomous driving solutions.
FUNDING INFORMATION
Authors state no funding involved.
AUTHOR CONTRIBUTIONS STATEMENT
This journal uses the Contributor Roles Taxonomy (CRediT) to recognize individual author contributions, reduce authorship disputes, and facilitate collaboration.

Name of Author               C   M   So  Va  Fo  I   R   D   O   E   Vi  Su  P   Fu
Keerthi Jayan                ✓   ✓   ✓   ✓   ✓   ✓   ✓   ✓   ✓
Balakrishnan Muruganantham   ✓   ✓   ✓   ✓   ✓   ✓   ✓

C: Conceptualization, M: Methodology, So: Software, Va: Validation, Fo: Formal Analysis, I: Investigation, R: Resources, D: Data Curation, O: Writing - Original Draft, E: Writing - Review & Editing, Vi: Visualization, Su: Supervision, P: Project Administration, Fu: Funding Acquisition
CONFLICT OF INTEREST STATEMENT
Authors state no conflict of interest.
DATA AVAILABILITY
Data availability is not applicable to this paper as no new data were created or analyzed in this study.
REFERENCES
[1] K. Jayan and B. Muruganantham, "Advanced driver assistance system technologies and its challenges toward the development of autonomous vehicle," 5th International Conference on Intelligent Computing and Applications (ICICA 2019), 2021, vol. 1172, pp. 55–72, doi: 10.1007/978-981-15-5566-4_6.
[2] V. W. Saputra, N. Suciati, and C. Fatichah, "Fog and rain augmentation for license plate recognition in tropical country environment," IAES International Journal of Artificial Intelligence, vol. 13, no. 4, pp. 3951-3961, 2024, doi: 10.11591/ijai.v13.i4.pp3951-3961.
[3] E. O. Appiah and S. Mensah, "Object detection in adverse weather condition for autonomous vehicles," Multimedia Tools and Applications, vol. 83, no. 9, pp. 28235–28261, 2024, doi: 10.1007/s11042-023-16453-z.
[4] T. Sharma, B. Debaque, N. Duclos, A. Chehri, B. Kinder, and P. Fortier, "Deep learning-based object detection and scene perception under bad weather conditions," Electronics, vol. 11, no. 4, Feb. 2022, doi: 10.3390/electronics11040563.
[5] H. J. Vishnukumar, B. Butting, C. Müller, and E. Sax, "Machine learning and deep neural network — Artificial intelligence core for lab and real-world test and validation for ADAS and autonomous vehicles: AI for efficient and quality test and validation," 2017 Intelligent Systems Conference (IntelliSys), 2017, pp. 714-721, doi: 10.1109/IntelliSys.2017.8324372.
[6] M. Mostafa and M. Ghantous, "A YOLO based approach for traffic light recognition for ADAS systems," 2022 2nd International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC), 2022, pp. 225-229, doi: 10.1109/MIUCC55081.2022.9781682.
[7] L. Masello, B. Sheehan, F. Murphy, G. Castignani, K. McDonnell, and C. Ryan, "From traditional to autonomous vehicles: A systematic review of data availability," Transportation Research Record: Journal of the Transportation Research Board, vol. 2676, no. 4, pp. 161–193, Dec. 2021, doi: 10.1177/03611981211057532.
[8] "Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles - J3016_202104," SAE International, Apr. 2021. [Online]. Available: https://www.sae.org/standards/content/j3016_202104
[9] D. Tabernik and D. Skočaj, "Deep learning for large-scale traffic-sign detection and recognition," IEEE Transactions on Intelligent Transportation Systems, vol. 21, no. 4, pp. 1427-1440, May 2019, doi: 10.1109/TITS.2019.2913588.
[10] E. Güney, C. Bayilmiş, and B. Çakan, "An implementation of real-time traffic signs and road objects detection based on mobile GPU platforms," IEEE Access, vol. 10, pp. 86191-86203, 2022, doi: 10.1109/ACCESS.2022.3198954.
[11] Q. Wang, X. Li, and M. Lu, "An improved traffic sign detection and recognition deep model based on YOLOv5," IEEE Access, vol. 11, pp. 54679-54691, 2023, doi: 10.1109/ACCESS.2023.3281551.
[12] S. Gautam and A. Kumar, "Image-based automatic traffic lights detection system for autonomous cars: a review," Multimedia Tools and Applications, vol. 82, no. 17, pp. 26135–26182, Jan. 2023, doi: 10.1007/s11042-023-14340-1.
[13] Á. Arcos-García, J. A. Álvarez-García, and L. M. Soria-Morillo, "Evaluation of deep neural networks for traffic sign detection systems," Neurocomputing, vol. 316, pp. 332–344, Aug. 2018, doi: 10.1016/j.neucom.2018.08.009.
[14] T. Tettamanti, M. Szalai, S. Vass, and V. Tihanyi, "Vehicle-in-the-loop test environment for autonomous driving with microscopic traffic simulation," 2018 IEEE International Conference on Vehicular Electronics and Safety (ICVES), 2018, pp. 1-6, doi: 10.1109/ICVES.2018.8519486.
[15] D. Barić, R. Grbić, M. Subotić, and V. Mihić, "Testing environment for ADAS software solutions," 2020 Zooming Innovation in Consumer Technologies Conference (ZINC), 2020, pp. 190-194, doi: 10.1109/ZINC50678.2020.9161772.
[16] K. Jayan and B. Muruganantham, "Improved traffic sign detection in autonomous driving using a simulation-based deep learning approach under adverse conditions," 2024 International Conference on Advances in Data Engineering and Intelligent Computing Systems (ADICS), 2024, pp. 1–6, doi: 10.1109/ADICS58448.2024.10533464.
[17] T. Ponn, T. Kröger, and F. Diermeyer, "Identification and explanation of challenging conditions for camera-based object detection of automated vehicles," Sensors, vol. 20, no. 13, Jul. 2020, doi: 10.3390/s20133699.
[18] M. Peperhowe, M. Friedrich, and P. Schmitz-Valckenberg, "Lab-based testing of ADAS applications for commercial vehicles," SAE International Journal of Commercial Vehicles, vol. 8, no. 2, pp. 529–535, Sep. 2015, doi: 10.4271/2015-01-2840.
[19] M. Feilhauer, J. Haering, and S. Wyatt, "Current approaches in HiL-based ADAS testing," SAE International Journal of Commercial Vehicles, vol. 9, no. 2, pp. 63–69, Sep. 2016, doi: 10.4271/2016-01-8013.
[20] C. Park, S. Chung, and H. Lee, "Vehicle-in-the-loop in global coordinates for advanced driver assistance system," Applied Sciences, vol. 10, no. 8, Apr. 2020, doi: 10.3390/app10082645.
[21] S. Siegl, S. Ratz, T. Düser, and R. Hettel, "Vehicle-in-the-loop at testbeds for ADAS/AD validation," ATZelectronics worldwide, vol. 16, no. 7–8, pp. 62–67, Jul. 2021, doi: 10.1007/s38314-021-0639-2.
[22] M. F. Drechsler, V. Sharma, F. Reway, C. Schütz, and W. Huber, "Dynamic vehicle-in-the-loop: A novel method for testing automated driving functions," SAE International Journal of Connected and Automated Vehicles, vol. 5, no. 4, pp. 367-380, Jun. 2022, doi: 10.4271/12-05-04-0029.
[23] P. Song, R. Fang, B. Gao, and D. Wei, "A HiL test bench for monocular vision sensors and its applications in camera-only AEBs," SAE Technical Paper Series, Apr. 2019, doi: 10.4271/2019-01-0881.
[24] G. Di Mare et al., "An innovative real-time test setup for ADAS's based on vehicle cameras," Transportation Research Part F: Traffic Psychology and Behaviour, vol. 61, pp. 252–258, Jun. 2018, doi: 10.1016/j.trf.2018.05.018.
[25] S. Genser, S. Muckenhuber, S. Solmaz, and J. Reckenzaun, "Development and experimental validation of an intelligent camera model for automated driving," Sensors, vol. 21, no. 22, Nov. 2021, doi: 10.3390/s21227583.
[26] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, "YOLOv4: Optimal speed and accuracy of object detection," arXiv, pp. 1-17, Apr. 2020, doi: 10.48550/arXiv.2004.10934.
[27] W. Penghui, W. Xufei, L. Yifan, and S. Jeongyoung, "Research on road object detection model based on YOLOv4 of autonomous vehicle," IEEE Access, vol. 12, pp. 8198-8206, Jan. 2024, doi: 10.1109/ACCESS.2024.3351771.
[28] T. Mustafa and M. Karabatak, "Real time car model and plate detection system by using deep learning architectures," IEEE Access, vol. 12, pp. 107616-107630, 2024, doi: 10.1109/ACCESS.2024.3430857.
[29] R. Pfeffer and M. Haselhoff, "Video injection methods in a real-world vehicle for increasing test efficiency," Auto Tech Review, vol. 5, no. 8, pp. 26–31, Aug. 2016, doi: 10.1365/s40112-016-1181-0.
[30] K. Jayan and B. Muruganantham, "Video frame feeding approach for validating the performance of an object detection model in real-world conditions," Automatika, vol. 65, no. 2, pp. 627–640, Feb. 2024, doi: 10.1080/00051144.2024.2314928.
[31] S. Houben, J. Stallkamp, J. Salmen, M. Schlipsing, and C. Igel, "Detection of traffic signs in real-world images: The German traffic sign detection benchmark," The 2013 International Joint Conference on Neural Networks (IJCNN), 2013, pp. 1–8, doi: 10.1109/IJCNN.2013.6706807.
[32] C. G. Serna and Y. Ruichek, "Traffic signs detection and classification for European urban environments," IEEE Transactions on Intelligent Transportation Systems, vol. 21, no. 10, pp. 4388–4399, Oct. 2020, doi: 10.1109/tits.2019.2941081.
[33] S. Gunasekara, D. Gunarathna, M. B. Dissanayake, S. Aramith, and W. Muhammad, "Deep learning based autonomous real-time traffic sign recognition system for advanced driver assistance," International Journal of Image, Graphics and Signal Processing, vol. 14, no. 6, pp. 70–83, Dec. 2022, doi: 10.5815/ijigsp.2022.06.06.
[34] A. Santos, P. A. Abu, C. Oppus, and R. Reyes, "Real-time traffic sign detection and recognition system for assistive driving," Advances in Science, Technology and Engineering Systems Journal, vol. 5, no. 4, pp. 600–611, Jan. 2020, doi: 10.25046/aj050471.
BIOGRAPHIES OF AUTHORS

Keerthi Jayan received the B.Tech. degree in computer science and engineering from Amrita Vishwa Vidyapeetham, Amrita School of Engineering, Kerala, India, in 2012 and the M.Tech. degree in computer science and engineering from Amrita Vishwa Vidyapeetham, Amrita School of Engineering, Kerala, India, in 2014. Currently, she is pursuing a Ph.D. in the Department of Computing Technologies, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India. Her research primarily centers on applying deep learning to the development of autonomous vehicles. She can be contacted at email: kj4134@srmist.edu.in.

Muruganantham Balakrishnan received the B.E. degree in computer science and engineering from Manonmaniam Sundaranar University, Tamil Nadu, India, in 1994, the M.Tech. degree in computer science and engineering from SRM Institute of Science and Technology, Tamil Nadu, India, in 2006, and the Ph.D. degree in computer science and engineering from SRM Institute of Science and Technology, Tamil Nadu, India, in 2018. He began his career in 1994 and has worked in various industries. Currently, he is working as an Associate Professor in the Department of Computing Technologies, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India. He can be contacted at email: muruganb@srmist.edu.in.