International Journal of Electrical and Computer Engineering (IJECE)
Vol. 10, No. 2, April 2020, pp. 2164–2172
ISSN: 2088-8708, DOI: 10.11591/ijece.v10i2.pp2164-2172
Obstacle detection for autonomous systems using stereoscopic images and bacterial behaviour

Fredy Martínez, Edwar Jacinto, Fernando Martínez
Facultad Tecnológica, Universidad Distrital Francisco José de Caldas, Colombia
Article Info

Article history:
Received Mar 20, 2019
Revised Oct 24, 2019
Accepted Nov 2, 2019

Keywords:
Autonomous robot
Bacterial behaviour
Motion planning
Obstacle detection
Stereoscopic images
ABSTRACT

This paper presents a low-cost strategy for real-time estimation of the position of obstacles in an unknown environment for autonomous robots. The strategy is intended for use in autonomous service robots, which navigate in unknown and dynamic indoor environments. In addition to human interaction, these environments are characterized by a design created for the human being, which is why our developments seek morphological and functional similarity to the human model. We use a pair of cameras on our robot to achieve a stereoscopic vision of the environment, and we analyze this information to determine the distance to obstacles using an algorithm that mimics bacterial behavior. The algorithm was evaluated on our robotic platform, demonstrating high performance in the location of obstacles and real-time operation.

Copyright © 2020 Institute of Advanced Engineering and Science. All rights reserved.
Corresponding Author:
Fredy Martínez,
Universidad Distrital Francisco José de Caldas,
Carrera 77B No. 64C-74 Villaluz, Bogotá D.C., Colombia.
Tel: (+57) 3005585481
Email: fhmartinezs@udistrital.edu.co
1. INTRODUCTION

Active robotic sensors have today become a high-performance tool with great acceptance at commercial and military level [1, 2]. These are embedded systems equipped with sensors that provide specific primary data, from which a real-time processor produces information relevant to the tasks of the robot [3]. This kind of sensor has promoted research in information-driven strategies for the development of tasks with robots, as well as the implementation of algorithms for digital signal processing and control schemes oriented to these sensors [4]. When faced with the design of motion strategies for autonomous robotic systems, these sensors prove to be very convenient, and even fundamental [5, 6]. When environments are dynamic (a typical problem for service robots), it is necessary for the robot to be able to identify nearby obstacles in real time [7, 8]. Unstructured environments are more complex due to their dynamics and the lack of knowledge of identifiable characteristics. In addition, not all obstacles are the same, which means that the behavior of the robot in front of each of them must be different and appropriate in each case. Among the minimum capacities that a robot must have is the capacity to define its relative size and dimensions in the environment. In other cases, it is also necessary to know its height in order to define interaction strategies (pick up a bottle from a table, for example). Depending on the application it is possible to use different kinds of sensors, but those capable of providing visual information are the ones that provide the most relevant information [9]. In this sense, systems with two cameras turn out to be more advantageous than systems with a single camera [8], since they provide information on the depth and orientation of the obstacle [4, 10-12].
Journal homepage: http://ijece.iaescore.com/index.php/IJECE
Digital cameras, as fundamental elements of optical sensors, have been used extensively in robotic arm motion control. The camera provides the required feedback information about the position of the objects to be manipulated. This strategy is known as Visual Servoing or Vision-Based Robot Control (VS) and is characterized by having the image of a camera as its feedback information [13]. The aim is to support the robot's decision making with eyes that take optical information from their own perspective and in parallel (separated by a certain distance) [11]. The distance between the robot and the obstacle can be determined from the distance between the obstacle positions in the two images and the focal distance of the cameras [14]. The field of vision can be increased considerably by adding a hyperboloid mirror or a conic mirror in front of the camera lenses, which provides an omnidirectional view to the cameras [15].
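The distance computation described above follows the standard pinhole-stereo relation Z = f·b/d, where d is the disparity between the obstacle's horizontal positions in the two images, f the focal length in pixels, and b the baseline. A minimal sketch under that standard model (the function name and the example numbers are ours, not from the paper):

```python
def depth_from_disparity(x_left_px, x_right_px, focal_px, baseline_m):
    """Estimate depth Z from the horizontal disparity between the two
    image positions of the same point (standard pinhole stereo model)."""
    disparity = x_left_px - x_right_px  # pixels; positive for points in front
    if disparity <= 0:
        raise ValueError("point must project with positive disparity")
    return focal_px * baseline_m / disparity

# Example: a 0.28 m baseline (as in the paper's setup), an assumed 500 px
# focal length, and a 70 px disparity place the point at 2.0 m.
z = depth_from_disparity(400.0, 330.0, 500.0, 0.28)
```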
The reconstruction of 3D models from 2D perspectives (stereoscopic vision) is a strategy inspired by animal biology that allows the collection of three-dimensional information from the navigation environment. However, the process of generating 3D models is computationally expensive [16], and requires good camera calibration, making it very difficult to implement in real time on embedded systems [17]. In addition, the generation of 3D models is highly dependent on the quality of the two-dimensional images, which are strongly affected by lighting conditions [18]. The computation of the distance to the obstacle takes into account the angular distance, the distance between cameras and the pixels of the images [7, 11]. However, in many applications it is not necessary to rebuild the entire environment, which considerably reduces the computational requirement [19]. In fact, the human brain does something similar when processing information from the eyes, focusing only on a portion of the entire image that the eye detects. This information can then be processed to find specific shapes [20, 21].
There are two strategies for estimating the distance to the obstacle in stereoscopic vision: the active method and the passive method [10, 22]. In the first case, the sensor system sends signals to the obstacle, such as visible light or laser signals, which are then detected and analyzed [11]. The ability of these sensors to establish distances is superior to human vision, but they are also costly and complex to implement, and they have unresolved problems. For example, the laser delivers the distance of a single point. In fact, these methods do not determine the exact 3D positions of all points of the obstacle. Another negative aspect is their speed: they are very slow for real-time operation [23]. On the other hand, the passive methods estimate the location of the obstacle from the images of the environment captured by cameras [19]. They use digital processing on the images to estimate the distance. This passive strategy has the additional advantage of working with different setups (cameras, light conditions, and embedded hardware). It should be clarified, however, that there are two problems that cannot be solved with this strategy: occlusions and overlapping of objects [24].
For the solutions to be practical, it must be possible to deploy them at scale, and for this a low cost and high performance are essential [18, 23]. In this sense, processing algorithms must have a very low computational cost in order to reduce processing time and hardware cost, while still demonstrably solving the problem. This paper attempts to address some of the critical problems of the strategy while maintaining a low computational cost, in particular reducing the impact of lighting on image quality and improving the matching between 2D image points. The main idea of our strategy is to identify points of obstacles by means of a movement in the images based on bacterial interaction; these points are mapped onto the projection planes of the environment in order to establish the distance to the obstacle, all without the need to make modifications to the environment [12]. The firmware used to control the hardware setup, as well as data acquisition and processing, is written in Python. We detail the methods and algorithms used for image processing and estimation of the distance to obstacles. The results presented are the product of real laboratory tests carried out on our robot. Our proposed bio-inspired algorithm for three-dimensional obstacle reconstruction and the resulting motion control scheme have a number of advantages over other methods that directly control the entire nonlinear system or rely on dynamic programming for planning [25].
2. PROBLEM FORMULATION

We want an autonomous robot with low resource consumption to be able to identify obstacles in an unknown environment. In this sense, we define our robot in a workspace W. Let W ⊂ R³ be the closure of a contractible open set in space that has a connected open interior with obstacles that represent inaccessible volumes. Let O be a finite set of obstacles in which each O ∈ O is closed and pairwise-disjoint. Let E ⊂ W be the free space in the environment, which is the open subset of W with the obstacles removed.
The robot has two cameras that form an optical system of stereoscopic vision. This system is located at r(t) ∈ R³ and has orientation R(t) ∈ SO(3), where SO(3) denotes the special orthogonal group of dimension three with respect to a global frame of reference, for every instant t ≥ 0. To determine the position of the obstacles with respect to the robot, we define a relative frame of reference with respect to the axis of the two cameras, as shown in Figure 1. We denote the two cameras by Left camera (Lc) and Right camera (Rc). The Lc and Rc centers are located at (−0.14, 0, 0) and (0.14, 0, 0) in the relative reference frame. The distance between the cameras is b = 0.14 + 0.14 = 0.28 m.
Figure 1. Dimensions of the prototype with detailed location of the cameras, three-dimensional axes for the location of bacteria, and their limited space in the navigation environment (top view)
The obstacles, indexed by i ∈ H = {1, 2, 3, …, n}, have unknown positions xᵢ(t) and can move in E over time. The position of the obstacle Oᵢ with respect to the global frame of reference can be expressed as (1):

xᵢ(t) = R(t) pᵢ(t) + r(t)   (1)
where pᵢ(t) corresponds to the position of the obstacle with respect to the frame of reference relative to the cameras. The cameras produce two parallel images at instant t with the location information pᵢ(t). However, obstacles are not points; they are volumes whose surface is made up of a large number of points. We do not want to determine the position of all points of the obstacles. Instead, we want to identify the position of a small group of points that will ideally move to the surface of the obstacles.
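Equation (1) is a plain rigid-body transform, so it translates directly into a few lines of NumPy; a minimal sketch (the helper name is ours, for illustration):

```python
import numpy as np

def obstacle_world_position(R_t, p_i, r_t):
    """Equation (1): x_i(t) = R(t) p_i(t) + r(t), mapping an obstacle point
    given in the camera-relative frame into the global frame of reference."""
    return R_t @ p_i + r_t

# Example: the robot translated to (2, 0, 0) and rotated 90 degrees
# about the y-axis; a point 1 m along the relative x-axis.
R = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0]])
x = obstacle_world_position(R, np.array([1.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0]))
```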
We define a population of m bacteria in the space in which the robot may encounter obstacles when moving forward, as shown in Figure 1. The initial position of each bacterium is random but known. From the images of the two cameras, we can establish trigonometric relationships for the three-dimensional position of each bacterium. If the bacteria are on the surface of the obstacle, then we can determine the distance to these points of the obstacle, as depicted in Figure 2. We propose a search algorithm (obstacle search) in which bacteria move three-dimensionally according to local information detected in their 2D projections. In addition, the algorithm is accelerated according to bacterial Quorum Sensing (QS), i.e., large populations of bacteria in a space make the space more attractive to other bacteria.
Figure 2. Layout of elements in the test hardware and images resulting from the two cameras, with details of an obstacle and two bacteria (top view)
The m bacteria (or agents), all identical to each other, move in W searching for areas of great interest to them (for example, in search of food). The value of a given position is determined from local readings (local interaction with the medium) evaluated from its projection on the 2D images. Each bacterium is defined by its position in the environment (2):

V = (p)   (2)

where p is a point in 3-dimensional space, p ∈ R³.
The population density is evaluated using the distance between bacteria (3):

d_ij = d(Vᵢ, Vⱼ)   (3)

which is the distance between bacteria Vᵢ and Vⱼ, calculated with an appropriate norm.
The function used to evaluate the value of the region where the bacterium is found in the left and right projections considers the similarity of the pixels neighboring the bacterium in the two projections, as depicted in Figure 2. The mathematical expression is (4):

F = ( |∇(M_L)| · |∇(M_R)| ) / ( Σ_colors Σ_{(i,j)∈N} ( L(x_L + i, y_L + j) − R(x_R + i, y_R + j) )² ) + f(QS)   (4)
where (x_L, y_L) and (x_R, y_R) are the coordinates of the left and right projections of the current bacterium, L(x_L + i, y_L + j) is the grey value of the left image at pixel (x_L + i, y_L + j) (and similarly for the right image), N is the neighborhood around the projection of each bacterium, and |∇(M)| is the Sobel gradient norm on the left and right projections (to penalize uniform regions).
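The evaluation function (4) can be sketched as follows for grayscale images held in NumPy arrays, with the Sobel gradient-magnitude maps assumed precomputed. The square correlation window and the small epsilon guarding the division are our assumptions, not details given in the paper:

```python
import numpy as np

def region_value(left, right, xl, yl, xr, yr, half_win, grad_l, grad_r, f_qs):
    """Evaluation function (4): the product of the gradient magnitudes at
    the two projections, divided by the summed squared differences of the
    neighboring pixels, plus the quorum-sensing term f(QS)."""
    pl = left[yl - half_win:yl + half_win + 1, xl - half_win:xl + half_win + 1]
    pr = right[yr - half_win:yr + half_win + 1, xr - half_win:xr + half_win + 1]
    ssd = float(np.sum((pl.astype(float) - pr.astype(float)) ** 2))
    eps = 1e-6  # guard against identical (zero-difference) patches
    return float(grad_l[yl, xl] * grad_r[yr, xr]) / (ssd + eps) + f_qs
```

Matching patches drive the denominator toward zero and the value up, while the gradient factors in the numerator penalize uniform regions, exactly as the text describes.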
Bacterial QS is activated if the population density within a space is greater than a threshold value T, called the quorum threshold. It is the parameter that defines whether or not the quorum has been reached. The behaviors of the bacteria (search in the environment) are coordinated by the following rule:
- If the bacterium V_k ∈ W is located near the bacterium V_i ∈ W, i.e. (5):

d_ik < h   (5)

and the number of bacteria within the sphere with radius h/2 and origin in V_k is greater than T, then the value of the region increases for V_i.
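The rule can be sketched as a density check over the population; the function name and the convention of returning the indices of the boosted bacteria are ours, for illustration:

```python
import numpy as np

def quorum_boost(positions, k, h, T):
    """Rule (5): if bacterium k has more than T neighbors inside the sphere
    of radius h/2 centered on it, the region is considered attractive, and
    every bacterium with d_ik < h gets its region value increased.
    Returns the indices of those bacteria."""
    positions = np.asarray(positions, dtype=float)
    d = np.linalg.norm(positions - positions[k], axis=1)
    inside_sphere = np.count_nonzero(d < h / 2) - 1  # exclude k itself
    if inside_sphere <= T:
        return []
    near = np.flatnonzero((d < h) & (np.arange(len(positions)) != k))
    return near.tolist()
```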
3. RESEARCH METHOD

We initialize the bacterial population randomly within the field of action of the robot (red dotted line in the top view of Figure 1: 3 m along the x-axis, 2 m of depth on the z-axis, and 2 m of height above the ground). The coordinates of each bacterium are defined with respect to the frame of reference relative to the cameras. The size of the population was taken as a variable performance parameter with values between 10 and 1000.
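A minimal initialization sketch under these bounds; the exact y-range in the camera frame is our assumption (the cameras sit 0.5 m above the ground, so we span from the floor up to 1.5 m above the camera origin):

```python
import numpy as np

def init_population(m, rng=None):
    """Random initial population in the robot's field of action (Figure 1):
    3 m along x, 2 m of depth along z, and 2 m of height, expressed in the
    camera-relative frame (y bounds are an assumption on our part)."""
    rng = np.random.default_rng(rng)
    x = rng.uniform(-1.5, 1.5, m)   # 3 m wide, centered on the robot
    y = rng.uniform(-0.5, 1.5, m)   # 2 m of height; camera origin at y = 0
    z = rng.uniform(0.0, 2.0, m)    # 2 m of depth in front of the robot
    return np.column_stack([x, y, z])
```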
The cameras are located on the robot at a height of 0.5 m from the ground. The origin of the frame of reference relative to these cameras is at this height, in the middle of the two cameras. The positive x-axis corresponds to the right side of the robot, the positive z-axis corresponds to the direction of advance of the robot, and the positive y-axis grows above the robot. The images of Lc and Rc are scaled to 800 × 600 pixels. The projection of each bacterium i on the images is determined with the following equations (the position (0, 0) of the image is in the upper left corner):
Left image:
x_p = 400 + (x_i + 0.14) · 800 / (2 z_i tan(35°))
y_p = 300 − y_i · 600 / (2 z_i tan(30°))   (6)

Right image:
x_p = 400 + (x_i − 0.14) · 800 / (2 z_i tan(35°))
y_p = 300 − y_i · 600 / (2 z_i tan(30°))   (7)
where (x_i, y_i, z_i) is the three-dimensional coordinate of the bacterium i, and (x_p, y_p) is the two-dimensional coordinate of the bacterium projected in the image.
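Equations (6) and (7) translate directly into code; a short sketch (the function name is ours):

```python
import math

def project(x_i, y_i, z_i, camera):
    """Project a bacterium at (x_i, y_i, z_i) onto one 800x600 image, per
    equations (6) and (7). `camera` is 'left' or 'right'; pixel (0, 0) is
    the upper-left corner and the cameras sit 0.14 m either side of the
    origin of the relative frame."""
    offset = 0.14 if camera == "left" else -0.14
    x_p = 400 + (x_i + offset) * 800 / (2 * z_i * math.tan(math.radians(35)))
    y_p = 300 - y_i * 600 / (2 * z_i * math.tan(math.radians(30)))
    return x_p, y_p
```

A point on the optical axis projects symmetrically about the image center, and the horizontal gap between its two projections shrinks as z_i grows, which is the disparity the algorithm exploits.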
The performance of the area adjacent to the bacteria at each projection is determined by (4). The bacteria move in the limited space according to this function. If the bacterium is on the obstacle surface, then it will have similar neighboring pixels in both projections, as shown in Figure 2 (the illumination affects both cameras equally), and the function will assign a high value to the position of the bacterium. The more the neighboring pixels differ, the lower the value the function assigns. The position of the bacteria is updated with the gradient, looking for the high values (movement of the bacteria).
The QS forces the bacteria that are slow to find the obstacle surface to move towards the large groups of bacteria. A bacterium that does not appear in any of the projections obtains the lowest position value (it is outside the robot's range of vision).
4. RESULTS AND ANALYSIS

We evaluated the performance of the strategy with different configurations, varying the bacteria population, the QS threshold and the correlation window used in the denominator of the evaluation function. A larger number of bacteria allows for reconstructing larger portions of the obstacles without significantly influencing the computational cost of the algorithm. The QS threshold reduces the convergence time when it does not exceed the range of 100; above this value, it does not have a significant effect. The most important effect was observed in the size of the correlation window of the function, which greatly affects the bacteria's ability to locate the obstacle. Large values improve the behavior but considerably increase the computational cost. Figures 3 and 4 show the result of one of the laboratory tests.
Figure 3. Left and right images captured by the parallel cameras, scaled to 800 × 600 and converted to grayscale
Figure 4. Image of the left camera converted to grayscale, scaled to 800 × 600 and with the bacteria overlapped in their final position, most of them on the obstacle
We performed more than 50 laboratory tests with different obstacles and more or less constant lighting conditions for a human indoor environment (during the day with natural lighting, and at night with LED-type lighting). The distances from the objects to the robot were established in a straight line between 0.3 and 2 m. The accuracy of the distance values determined by the optical sensor was established by comparison with the actual value, measured in the setup with a tape measure. These results were related to the distance of the obstacle. Figure 5 shows these percentages of accuracy with respect to the estimated distance.
Figure 5. Percentages of accuracy with respect to the estimated distance
Our intention is to use the strategy to identify obstacles in the environment, and with this information coordinate the movement of the robot. The proposed motion planning strategy based on the detection and stereoscopic identification of obstacles considers three elements: capture and pre-processing of images, determination of obstacles, and application of motion policies according to the information feedback, as shown in Figure 6.
Figure 6. General scheme of the proposed motion planning strategy based on the stereoscopic detection of obstacles
5. CONCLUSION

Considering the problem of motion planning of small autonomous robots in unknown environments, particularly for service robots with direct and continuous interaction with the human being, we propose a computationally low-cost stereoscopic vision strategy that allows autonomous navigation in dynamic environments. Service robots perform their tasks in indoor environments that are unknown, with a high probability of constant change in the location of obstacles and people. Stereoscopic vision systems allow establishing with precision the three-dimensional location of obstacles, and therefore provide complete information for the design of navigation strategies. However, their computational cost is high, making it impossible to use them in real time on moderate-performance platforms. Our strategy proposes a local reconstruction of a finite set of points of obstacles in the environment, which guarantees a low cost and a high performance. We performed the calculation of about 100 points corresponding to the surface of the obstacles. These points are identified using an uninformed search algorithm inspired by bacterial interaction. The bacteria defined in the 2D projections of the cameras move in the three-dimensional space looking for similar neighboring regions in their projections. The algorithm converges with most bacteria on the obstacles. In the experiments carried out, we verified percentages of accuracy of the obstacle distance higher than 95% with low computational consumption, making the scheme useful for embedded implementations. Future development includes improvements in the determination of obstacle surfaces using larger bacterial populations, and reduction of convergence times through the use of the Quorum Sensing (QS) model.
ACKNOWLEDGEMENT

This work was supported by Universidad Distrital Francisco José de Caldas and the Centre for Scientific Research and Development (CIDC) through project 1-72-578-18. The views expressed in this paper are not necessarily endorsed by Universidad Distrital Francisco José de Caldas or the CIDC. The authors thank the research groups ARMOS and SIE and their research seedbeds for the evaluation carried out on prototypes of the ideas and strategies proposed in this paper. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
REFERENCES
[1] H. Himanshu, D. Deepanshu, K. Amit, and G. Aashish, "Autonomous robots for military," National Journal of Multidisciplinary Research and Development, vol. 3, no. 1, pp. 994–997, 2018.
[2] M. Ghute, K. Kamble, and M. Korde, "Design of military surveillance robot," in First International Conference on Secure Cyber Computing and Communication (ICSCCC 2018), pp. 270–272, 2018.
[3] B. Schlotfeldt, V. Tzoumas, D. Thakur, and G. Pappas, "Resilient Active Information Gathering with Mobile Robots," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018), pp. 4309–4316, 2018.
[4] C. Freundlich, Y. Zhang, A. Zhu, P. Mordohai, and M. Zavlanos, "Controlling a robotic stereo camera under image quantization noise," The International Journal of Robotics Research, vol. 36, no. 12, pp. 1268–1285, 2017.
[5] N. Aklil, B. Girard, L. Denoyer, and M. Khamassi, "Sequential action selection and active sensing for budgeted localization in robot navigation," International Journal of Semantic Computing, vol. 12, no. 1, pp. 109–127, 2018.
[6] B. Calli, W. Caarls, M. Wisse, and P. Jonker, "Active Vision via Extremum Seeking for Robots in Unstructured Environments: Applications in Object Recognition and Manipulation," IEEE Transactions on Automation Science and Engineering, vol. 15, no. 4, pp. 1810–1822, 2018.
[7] S. Solak and E. Bolat, "Distance estimation using stereo vision for indoor mobile robot applications," in 9th International Conference on Electrical and Electronics Engineering (ELECO 2015), pp. 685–688, 2015.
[8] Y. Hongshan, Z. Jiang, W. Yaonan, J. Wenyan, S. Mingui, and T. Yandong, "Obstacle Classification and 3D Measurement in Unstructured Environments Based on ToF Cameras," Sensors, vol. 14, no. 1, pp. 10753–10782, 2014.
[9] Z. Yuanshen, G. Liang, H. Yixiang, and L. Chengliang, "A review of key techniques of vision-based control for harvesting robot," Computers and Electronics in Agriculture, vol. 127, no. 1, pp. 311–323, 2016.
[10] Y. Dawood, K. Ruhana, and E. Kamioka, "Distance measurement for self-driving cars using stereo camera," in 6th International Conference on Computing and Informatics (ICOCI 2017), pp. 235–242, 2017.
[11] A. Mohamed, Y. Chenguang, and A. Cangelosi, "Stereo Vision based Object Tracking Control for a Movable Robot Head," in 4th IFAC International Conference on Intelligent Control and Automation Sciences, pp. 161–168, 2016.
[12] M. Ferreira, P. Costa, L. Rocha, and A. Moreira, "Stereo-based real-time 6-DoF work tool tracking for robot programing by demonstration," The International Journal of Advanced Manufacturing Technology, vol. 85, no. 1, pp. 57–69, 2016.
[13] C. Mao-Hsiung, L. Hao-Ting, and H. Chien-Lun, "Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm," Sensors, vol. 11, no. 2, pp. 2257–2281, 2011.
[14] M. Mahammed, A. Melhum, and F. Kochery, "Object distance measurement by stereo vision," International Journal of Science and Applied Information Technology, vol. 2, no. 2, pp. 5–8, 2013.
[15] J. Yamaguchi, Three Dimensional Measurement Using Fisheye Stereo Vision, Intech, 2011.
[16] H. Martins, I. Oakley, and R. Ventura, "Design and evaluation of a head-mounted display for immersive 3D teleoperation of field robots," Robotica, vol. 33, no. 10, pp. 2166–2185, 2015.
[17] S. Dreier, M. Savran, L. Konge, and F. Bjerrum, "Three-dimensional versus two-dimensional vision in laparoscopy: a systematic review," Surgical Endoscopy, vol. 30, no. 1, pp. 11–23, 2015.
[18] K. Panjvani, A. Dinh, and K. Wahid, "LiDARPheno - A Low-Cost LiDAR-based 3D Scanning System for Leaf Morphological Trait Extraction," Frontiers in Plant Science, vol. 10, no. 147, pp. 1–17, 2019.
[19] S. Boonkwang and S. Saiyod, "Distance measurement using 3D stereoscopic technique for robot eyes," in 7th International Conference on Information Technology and Electrical Engineering (ICITEE 2015), pp. 232–236, 2015.
[20] O. Bertel, C. Moreno, and E. Toro, "Aplicación de la transformada Wavelet para el reconocimiento de formas en visión artificial" [Application of the Wavelet transform for shape recognition in artificial vision], Tekhnê, vol. 6, no. 1, pp. 3–8, 2009.
[21] S. Mehta, "Vision-based localization of a wheeled mobile robot for greenhouse applications: A daisy-chaining approach," Computers and Electronics in Agriculture, vol. 63, no. 1, pp. 28–37, 2008.
[22] D. Patel, P. Bachani, and N. Shah, "Distance measurement system using binocular stereo vision approach,"
International Journal of Engineering Research & Technology, vol. 2, no. 12, pp. 2461–2464, 2013.
[23] Y. Si, G. Liu, and J. Feng, "Location of apples in trees using stereoscopic vision," Computers and Electronics in Agriculture, vol. 112, no. 1, pp. 68–74, 2015.
[24] M. Mehrabi, E. Peek, B. Wuensche, and C. Lutteroth, "Making 3D Work: A Classification of Visual Depth Cues, 3D Display Technologies and Their Applications," in Proceedings of the Fourteenth Australasian User Interface Conference (AUIC 13), vol. 139, pp. 91–100, 2013.
[25] R. Lins, S. Givigi, and P. Gardel, "Vision-Based Measurement for Localization of Objects in 3-D for Robotic Applications," IEEE Transactions on Instrumentation and Measurement, vol. 64, no. 11, pp. 2950–2958, 2015.
BIOGRAPHIES OF AUTHORS

Fredy Martínez is a professor at the Facultad Tecnológica, Universidad Distrital Francisco José de Caldas, Bogotá D.C., Colombia. He obtained his Bachelor's Degree in Electrical Engineering and his Ph.D. in Engineering - Systems and Computing from the National University of Colombia in 1997 and 2018, respectively. Since 2000 he has led the ARMOS research group at the Universidad Distrital Francisco José de Caldas (Colombia). His research focuses on electronics, control systems, hybrid architectures, autonomous robotics and intelligent systems. The application of robotic systems in the provision of services to people has recently been addressed.
Edwar Jacinto is a professor at the Facultad Tecnológica, Universidad Distrital Francisco José de Caldas, Bogotá D.C., Colombia. He obtained his Bachelor's Degree in Control Engineering and his Master's Degree in Information and Communications Sciences from the Universidad Distrital Francisco José de Caldas (Colombia) in 2004 and 2015, respectively. His research focuses on the fields of electronics, control systems, embedded systems, communication solutions and custom encryption. The application of hardware-based encryption for decentralized communication of mobile nodes has recently been addressed. He is affiliated with IEEE as a professional member.
Fernando Martínez is a professor at the Facultad Tecnológica, Universidad Distrital Francisco José de Caldas, Bogotá D.C., Colombia. He obtained his Bachelor's Degree in Control Engineering and his Master's Degree in Electronic and Computer Engineering from the Universidad Distrital Francisco José de Caldas (Colombia) in 2004 and 2012, respectively. His research focuses on the fields of electronics, instrumentation systems, real-time image and video processing, and embedded signal processing solutions. Recently, the development of autonomous navigation strategies based on images has been tackled.