International Journal of Electrical and Computer Engineering (IJECE)
Vol. 9, No. 4, August 2019, pp. 2394-2402
ISSN: 2088-8708, DOI: 10.11591/ijece.v9i4.pp2394-2402
Computer vision based 3D reconstruction: A review

Hanry Ham, Julian Wesley, Hendra
Computer Science Department, School of Computer Science, Bina Nusantara University, Indonesia
Article Info

Article history:
Received Jan 15, 2018
Revised Jan 23, 2019
Accepted Mar 4, 2019

Keywords:
3D alignment
3D point clouds
3D reconstruction

ABSTRACT
3D reconstruction is used in many fields, from the reconstruction of objects such as sites and cultural artifacts, both on the ground and under the sea, to medical imaging data and nuclear substances. These tasks benefit scientists who want to study, preserve, and better visualize objects as 3D data. In this paper we differentiate the algorithms by the type of input image: a single still image, an RGB-depth image, multiperspective 2D images, and video sequences. The prior works also explain how 3D reconstruction performs in many fields and with various algorithms.
Copyright © 2019 Institute of Advanced Engineering and Science. All rights reserved.
Corresponding Author:
Hanry Ham,
Computer Science Department, School of Computer Science,
Bina Nusantara University,
Jakarta, 11480 - Indonesia.
Email: hanry.ham@binus.edu
1. INTRODUCTION
3D reconstruction is an interesting task that has already reached maturity. This can be seen from commercial products, such as those from Agisoft and Pix4D, that are capable of producing high-quality, large-scale 3D models. Furthermore, the hardware used in computer vision has been developed and improved since then. Several camera setups have been introduced in the research, such as the stereo camera and the Kinect. Among these vision setups, the Kinect camera has received strongly positive feedback from researchers, as shown by how commonly it appears in the literature. Stereo camera setups can likewise be found throughout the literature. In addition to off-the-shelf stereo cameras, custom stereo rigs are quite popular among researchers, built by combining two identical web cameras positioned a fixed distance apart. The 3D reconstruction algorithms differ between these cameras because the images they produce differ as well: the Kinect produces both an RGB image and a depth map, while a stereo camera has to run a separate depth-map acquisition algorithm that combines two RGB images.
Numerous 3D reconstruction tasks can be found in capturing sites and cultural artifacts, both on the ground and under the sea [1]; the extinction factor is the most prominent issue in these areas. Moreover, 3D imaging data can also help improve the accuracy of anatomical features, allowing surgeons to observe an area before operating on it. To perform 3D reconstruction, multiple approaches can be found in the literature, spanning a broad range of vision setups and various types of input images. This paper describes those approaches in more detail.
The large number of researchers, together with hardware support, allows such algorithms to perform the heavy computation required for reconstruction. These approaches are covered in the sections of part 2.
The benefits of reconstruction are 3D recording, visualization, representation, and reconstruction [2]. Moreover, Tsiafaki and Michailidou explained that there are six benefits to performing reconstruction and visualization: limiting the destructive nature of excavating, placing excavation data into the bigger picture, limiting fragmentation of archaeological remains, classifying archaeological finds, limiting subjectivity and publication delays, and enriching and extending archaeological research.
Journal homepage: http://iaescore.com/journals/index.php/IJECE
Some algorithms in the literature use a single image to perform 3D reconstruction, while others use multiple images. This paper explains the characteristics of the algorithms built specifically for single or multiple images, along with their advantages and drawbacks. The vision setups are described in the following categories:
1. Single Camera
A single camera is simple to calibrate, computationally efficient, and more compact. However, it lacks depth information: it requires prior knowledge from another sensor to determine the depth scale [3].
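A minimal NumPy sketch illustrates why a single camera lacks depth information: scaling a 3D point along its viewing ray leaves the projected pixel unchanged, so depth cannot be recovered from one view alone. The intrinsic values below are illustrative, not taken from any cited work.

```python
import numpy as np

def project(K, X):
    """Pinhole projection of a 3D point X (camera frame) to pixel coordinates."""
    x = K @ X
    return x[:2] / x[2]

# Hypothetical intrinsics: 500 px focal length, principal point (320, 240).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

X = np.array([0.2, -0.1, 2.0])        # a point 2 m in front of the camera
p = project(K, X)
for s in (1.0, 2.0, 5.0):             # slide the point along its viewing ray
    print(project(K, s * X))          # identical pixel every time
```

Every scaled copy of `X` projects to the same pixel, which is exactly the ambiguity that a second sensor (or a second view) must resolve.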
2. Stereo Camera
In a stereo camera setup, the images are captured using either two identical web cameras [4] or any pair of cameras set a defined distance apart. From the two captured images, an algorithm is used to generate a depth map. However, stereo matching has several issues when the scene contains weakly textured areas, repetitive patterns, or occlusions, in both indoor and outdoor environments [5], as shown in Figure 1.
Figure 1. Stereo camera
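The depth-map generation step can be sketched with brute-force block matching: for each left-image patch, search along the same scanline in the right image for the offset (disparity) with the smallest sum of absolute differences. This is an illustrative toy, not the matcher used in any cited paper; depth then follows from depth = f * B / d for focal length f and baseline B.

```python
import numpy as np

def disparity_sad(left, right, max_disp=8, win=3):
    """Brute-force SAD block matching along scanlines (illustrative, unoptimized)."""
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            # cost of shifting the candidate window d pixels to the left
            costs = [np.abs(patch - right[y - r:y + r + 1,
                                          x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic pair: the left image is the right image shifted by 4 pixels.
rng = np.random.default_rng(0)
right = rng.random((20, 40))
left = np.roll(right, 4, axis=1)
d = disparity_sad(left, right)
print(np.median(d[5:15, 15:30]))      # disparity recovered in the interior
```

On weakly textured or repetitive regions the cost curve becomes flat or multi-modal, which is precisely the failure mode [5] describes.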
3. Kinect / Structured Light / Time of Flight
A structured-light sensor is able to perform range detection; an accurate distance measurement is the output [6]. The Kinect camera is a product from Microsoft that has an RGB-D camera. The product comes with a native SDK that allows the user to call its API to perform vision tasks such as skeleton detection.
4. Fusion
Some researchers have also explored a fusion approach, combining the depth maps produced by a stereo camera and a Kinect to achieve higher precision in the depth map. Such a development allows a better 3D reconstruction of the object, rich in feature details. Range cameras are low cost and easy to use for constructing 3D point clouds in real time, but one issue that arises is transparent and reflective surfaces [7]. On the other hand, 3D models produced by stereo vision are mostly incomplete in low-texture regions. Combining both approaches could therefore lead to better depth-map quality. The fusion approach is shown in Figure 2.
Figure 2. Fusion approach
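One simple way to combine two depth maps whose failure modes differ (stereo dropouts in low texture, Kinect dropouts on reflective surfaces) is a confidence-weighted average. This sketch is an assumption about how a fusion could look, not the specific scheme of any cited work; the depth and confidence values are made up.

```python
import numpy as np

def fuse_depth(d_stereo, c_stereo, d_kinect, c_kinect, eps=1e-9):
    """Confidence-weighted average of two depth maps; a pixel observed by
    only one sensor falls back to that sensor's value."""
    w = c_stereo + c_kinect
    fused = (c_stereo * d_stereo + c_kinect * d_kinect) / np.maximum(w, eps)
    return np.where(w > 0, fused, 0.0)

d_stereo = np.array([[2.0, 0.0], [1.5, 3.0]])   # 0.0: no stereo match (low texture)
c_stereo = np.array([[1.0, 0.0], [1.0, 1.0]])
d_kinect = np.array([[2.2, 2.1], [0.0, 3.0]])   # 0.0: reflective-surface dropout
c_kinect = np.array([[1.0, 1.0], [0.0, 1.0]])
fused = fuse_depth(d_stereo, c_stereo, d_kinect, c_kinect)
print(fused)
```

Each sensor fills the other's holes, and jointly observed pixels are averaged, which is the intuition behind the richer fused reconstructions described above.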
The algorithms vary with the characteristics of the input image. Therefore, in this paper we divide the input images into two categories: single and multiple images. For a single image, the input can be described as:
1. Single Still Image
A single still image here is an RGB image, which can be taken by a regular camera.
2. RGB-Depth Image
The RGB image is taken with a camera setup that produces the RGB-D format. Mostly, the setup used is a commercial camera such as the Kinect or the Intel RealSense camera.
On the other hand, the multiple images can be described as:
1. Multiperspective 2D Images [8]
The idea of this approach is to take several images that differ in their perspective on the object, so that the surface of the object is covered properly, using a filter [9]. In addition, Xian-hua and Yuan-qing [10] stated that effective feature matching is the prominent factor for the later stages of 3D reconstruction; they implemented a feature-matching error-elimination method based on collision detection.
2. Video Sequences
Using input video sequences is known as structure from motion. Sepehrinour and Kasaei explained that these methods use the shared information of consecutive frames, in the form of tracked feature points across a sequence of images. Several factors may impact the developed methods: knowledge (or lack of knowledge) of the camera calibration parameters; having multiple cameras with different viewing angles or only one moving camera; and rigid or non-rigid shape reconstruction based on the incoming video stream.
2. TAXONOMY OF 3D RECONSTRUCTION
3D reconstruction plays an important role in several areas such as medical imaging data, and site and cultural artifact reconstruction.
(a) Medical Imaging Data
Common surgical procedures use X-rays as a reference for the doctor to operate on a specific section. However, some important features cannot be visualized well in 2D images [12]. With 2D images, the accuracy may increase depending on several aspects, such as the number of 2D views, the image noise, and the image distortion. Magnetic resonance imaging also plays an important role when planning an operation. The output of MRI is 2D images; however, some literature can be found on transforming those images into 3D space. By implementing such methods, the authors aim to show that the more features are captured, the more accurate the result. A work from Hichem et al. introduced a geometric interpretation of the 3D model reconstruction of the blood vessels of the human retina. Sumijan et al. [14] introduced a method to calculate the volume of a brain hemorrhage on CT-scan images with 3D reconstruction. The idea of this work is to calculate the bleeding area in the brain on each CT-scan image slice. As stated in previous work [15], brain injury is one of the leading causes of human death. In their pipeline, the bleeding area of the brain is extracted using the Otsu algorithm combined with a morphological-features algorithm. Visualizing the brain volume therefore aims at improving the visual information available to the doctor, so as to give the best medical treatment.
(b) Site and Cultural Artifact Reconstruction
Site reconstruction has long been an issue in archaeology: to capture the social and cultural context embodied in a building, archaeologists reconstruct it. A regular camera can only capture in a 2D space format, so not all the details of a building can be captured and closely observed. Using a stereo camera or a Kinect, along with the algorithms developed in current research, makes this task possible. Archaeological sites are not only on the ground but also under the sea. Reconstruction performed under the sea raises further issues for the captured images, such as quality degradation
in underwater images, uneven illumination of light on the surface of objects, and scattering and absorption effects [1].
(c) Nuclear Substance Reconstruction
Monterial et al. [16] used 3D image reconstruction of neutron sources that emit correlated gammas. This work aims at nuclear threat search, safeguards, and non-proliferation. This research is prominent and under the supervision of legal authorities. In addition, nuclear material has been used as a source of energy, yet controversies remain about the impact of its harmful substances.
2.1. Single still image approach
The first part describes the algorithms found in the literature that use a single still image. Compared to multiple images, a single image tends to pose more challenges. Saxena et al. explained that one of the issues is creating a depth map, because local features are insufficient to estimate depth at a point. In addition, the single-still-image approach is relatively less studied in the literature.
Saxena et al. [17] introduced 3D depth reconstruction from a single still image. A supervised learning approach was applied, using a training set of unstructured indoor and outdoor environments and their corresponding ground-truth depth maps. Their proposed algorithm is aware of the global structure of the image: it models depths, and the relationships between depths, at multiple spatial scales using a hierarchical, multiscale Markov Random Field. The ground truth was captured using a 3D scanner.
Yan et al. [8] proposed a system called Perspective Transformer Nets. The model was built ignoring color and texture factors. The experiments show excellent performance of the proposed model in reconstructing objects without ground-truth 3D volumes as supervision. The input was provided by the work of Chang et al. [18]. The proposed input is a single-view 3D volume reconstruction [19] with a perspective transformation [20], run through a defined encoder-decoder network that consists of a 2D convolutional encoder, a 3D up-convolutional decoder, and a perspective transformer network.
Fan et al. [21] applied a region-based growing algorithm for 3D reconstruction from brain MRI images. There are three steps in their proposed pipeline. First, the seed element forms the initial state of the segmentation. Second, the growing process starts from the seed element; there are four directions of growth, with defined threshold values that must be met for the growth pattern. Third, the points that satisfy the growing requirement are used as new seed elements, and growth continues. In their results, the proposed method achieved 90.52%, compared to the work of Nadu [22].
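The three steps above can be sketched with a generic intensity-based region-growing routine. This is an illustrative version of the technique, not Fan et al.'s exact method: the seed's intensity defines the acceptance threshold, and the four growth directions are the 4-connected neighbours.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, thresh):
    """Grow a region from `seed`, accepting 4-connected pixels whose
    intensity is within `thresh` of the seed value."""
    h, w = img.shape
    seed_val = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])                          # step 1: seed initializes the segmentation
    while q:
        y, x = q.popleft()
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):  # step 2: 4 growth directions
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(img[ny, nx]) - seed_val) <= thresh):
                mask[ny, nx] = True
                q.append((ny, nx))             # step 3: accepted points seed further growth
    return mask

img = np.array([[10, 11, 50, 52],
                [12, 10, 51, 50],
                [11, 12, 10, 49]])
mask = region_grow(img, (0, 0), thresh=5)
print(mask.sum())
```

The bright 50-ish block is never entered because every path into it crosses the intensity threshold, mirroring how a segmented structure stays separated from surrounding tissue.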
2.2. RGB-depth image approach
Zhang et al. [23] developed feature-based RGB-D camera pose optimization for real-time 3D reconstruction. Their work avoids corner-based feature detectors such as BRIEF and FAST, because the acquired images contain heavy noise around object contours. The SURF detector was chosen instead for its robustness, stability, and scale and rotation invariance [24]. In addition, SURF can be computed in parallel on the GPU [25]. Mismatched pairs in feature matching are removed using the RANSAC algorithm. The consistency of the global positions of matched features is tracked by the proposed feature correspondence list, with camera pose optimization in both the spatial and temporal dimensions. Subsequently, to evaluate the method, voxel hashing was run on each set of camera poses and compared to the proposed method. Their optimized camera poses were shown to improve the structure of the reconstructed model for real scene data captured by a fast-moving camera.
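The RANSAC step that removes mismatched feature pairs can be sketched on a toy motion model. Here the model is a pure 2D translation, which is a deliberate simplification of the pose models used in the papers above: one correspondence hypothesizes the translation, and matches that agree with it within a tolerance are kept as inliers.

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=0.5, seed=0):
    """Toy RANSAC: hypothesize a 2D translation from one random match,
    keep the hypothesis that explains the most correspondences."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                            # model from a single match
        inliers = np.linalg.norm(dst - (src + t), axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

rng = np.random.default_rng(1)
src = rng.random((30, 2)) * 100
dst = src + np.array([5.0, -3.0])                      # true image motion
dst[:6] += rng.random((6, 2)) * 40 + 10                # 6 corrupted (mismatched) pairs
keep = ransac_translation(src, dst)
print(keep.sum())
```

The six corrupted matches never agree with a translation hypothesized from a clean match, so they are discarded; with a full rigid or projective model the logic is the same, only the minimal sample size and the fit change.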
Group et al. [26] presented a fully convolutional 3D denoising autoencoder neural network. They experimented on an RGB-D dataset and showed that the network can reconstruct a full scene from a single depth image by filling holes and hidden elements. The network is capable of learning object shapes by inferring similarities in geometry. A real-world dataset of tabletop scenes [27] captured using KinectFusion was used. Their steps are as follows: acquire an RGB-D image using the Kinect; denoise and hole-fill the depth channel using the algorithm of [28]; project the pixels into 3D space using preset equations; retrieve the sensor pose from the accelerometer and align the point cloud data; voxelize the point cloud; and train a predefined CNN. In addition, the network is not constrained to a fixed 3D shape and is capable of successfully reconstructing arbitrary scenes.
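The voxelization step in that pipeline has a compact expression: divide each point's coordinates by the cell size and mark the resulting grid cell occupied. This is a generic sketch (the grid size and cell size here are made up), not the exact discretization used in [26].

```python
import numpy as np

def voxelize(points, voxel_size, grid_shape):
    """Mark the occupied cells of a regular grid from a 3D point cloud."""
    idx = np.floor(points / voxel_size).astype(int)
    grid = np.zeros(grid_shape, dtype=bool)
    # drop points that fall outside the grid bounds
    valid = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    grid[tuple(idx[valid].T)] = True
    return grid

pts = np.array([[0.05, 0.05, 0.05],
                [0.05, 0.06, 0.04],    # lands in the same cell as the first point
                [0.35, 0.15, 0.25]])
grid = voxelize(pts, voxel_size=0.1, grid_shape=(4, 4, 4))
print(grid.sum())
```

The boolean grid is exactly the kind of fixed-size occupancy tensor a 3D convolutional network expects as input.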
Jaiswal et al. [29] used the Kinect to assess 3D object modeling. The proposed pipeline is as follows. First, for the 3D point cloud, a green surface was placed behind and under the object to segment the object out of the RGB images via histogram-based segmentation; afterwards, the RANSAC algorithm is used to perform a coarse alignment. Second, the registration uses a SIFT-based method [30] to overcome a lack of structural features or significant changes in the camera view. Third, global alignment is used to eliminate the inaccuracy at each registration step, which could otherwise lead to significant misalignment between the first and last frames. Fourth, 3D point cloud denoising is performed to refine the 3D object model, in this case with the Moving Least Squares (MLS) 3D model denoising method [31]. Fifth, surface reconstruction uses the Delaunay triangulation method [32] to convert the 3D point clouds into meshes. Afterwards, a coloring task is performed for each vertex, simply interpolating the color across each triangle face.
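The final coloring step, interpolating per-vertex colors across a triangle face, is barycentric interpolation. This sketch shows the idea on a single triangle with made-up coordinates and colors; it is the generic technique, not Jaiswal et al.'s implementation.

```python
import numpy as np

def interp_color(tri_xy, tri_rgb, p):
    """Interpolate per-vertex colors at a point p inside a triangle
    using barycentric coordinates."""
    a, b, c = tri_xy
    T = np.column_stack([b - a, c - a])   # edge basis of the triangle
    u, v = np.linalg.solve(T, p - a)      # coordinates of p in that basis
    w = np.array([1 - u - v, u, v])       # barycentric weights, sum to 1
    return w @ tri_rgb

tri_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
tri_rgb = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]], dtype=float)
centroid = np.array([1 / 3, 1 / 3])
print(interp_color(tri_xy, tri_rgb, centroid))
```

At the centroid the three weights are equal, so the red, green, and blue vertices blend to an even gray; a rasterizer applies the same weights at every covered pixel.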
2.3. Multiperspective of 2D images approach
Kowalski et al. [33] created an open-source system for live 3D data acquisition using multiple Kinect v2 sensors. To overcome the limitations of the native Kinect v2 SDK, they built this flexible framework. There are three coordinate systems: that of the Kinect v2 sensor, the coordinate system of a marker (located at the center of a given marker), and the world coordinate system. The proposed pipeline is as follows: first, calibration is done by calibrating two types of defined markers; subsequently, the Iterative Closest Point (ICP) algorithm [34] is used to refine the initial estimation.
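The ICP refinement alternates two steps: pair each source point with its nearest destination point, then solve the closed-form (Kabsch/SVD) rigid alignment of the paired sets. The sketch below is a minimal dense-search version for small clouds, assuming a synthetic offset, not the tuned implementation of [33] or [34].

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: nearest-neighbour pairing, then the closed-form
    rigid transform (Kabsch/SVD) that best aligns the paired sets."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]          # nearest destination point per source point
    cs, cm = src.mean(0), matched.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (matched - cm))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cm - R @ cs
    return src @ R.T + t

rng = np.random.default_rng(0)
dst = rng.random((40, 3))
src = dst + np.array([0.05, -0.03, 0.02])     # small known offset, no rotation
for _ in range(5):                            # a few refinement iterations
    src = icp_step(src, dst)
print(np.abs(src - dst).max())
```

Because the initial marker calibration already brings the clouds close, the nearest-neighbour pairings are mostly correct from the start, which is exactly the regime where ICP converges quickly.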
Evangelidis et al. [5] combined low-resolution depth data with high-resolution stereo data, addressing the construction of high-resolution depth maps as a range-stereo fusion problem. The inputs are stereo images (high resolution) and depth data (low resolution) from a range camera. The low-resolution depth data are projected into the color data and refined into a high-resolution sparse disparity map. Subsequently, depth up-sampling algorithms such as triangulation-based interpolation and a joint bilateral filter are applied; then a region-growing fusion is performed, producing a final, denser high-resolution map as the result.
Burns [35] introduced a texture super-resolution (TSR) method for 3D multi-view reconstruction. Their work uses a video sequence as input. In the proposed pipeline, PhotoScan from Agisoft is used to perform multi-view stereo reconstruction and build the 3D mesh model. Then an optical flow algorithm is integrated to register each pixel of neighboring frames to the closest key-frame, using the KLT feature tracker [36]. Afterwards, to support robustness to outliers, fundamental-matrix filtering of the tracked 2D points and RANSAC filtering of the 2D/3D correspondences are applied. Because the 3D mesh is a piecewise-affine approximation of the surface, pixel registration errors may occur; to overcome that issue, an optical flow estimation is used to locate the displacements [37]. The object used is a 2 m x 1 m desk with many textured objects on it, captured as gray-scale images with subsampling applied, acquired using a camera with a 5.5 mm focal length at f/2.8 mounted on a Bayer 1/18" e2v detector. Three experiments were conducted, showing that the proposed method outperforms registration with mesh and camera poses only, and registration with optical flow only.
.
T
ulsiani
et
al.
[38]
studied
multi-vie
w
supervision
for
single-vie
w
reconstruction
and
a
dif
ferentiable
ray
consistenc
y
(DRC)
term
w
as
introduced
which
allo
ws
computing
gradients
of
the
3D
shape
gi
v
en
an
ob-
serv
ation
from
an
arbitraty
vie
w
.
The
dataset
used
is
called
ShapeNet
dataset.
The
follo
wing
steps
to
perform
their
methods
are:
formulation
,
vie
w
consistenc
y
loss
function
is
introduced
aim
at
measuring
the
inconsistenc
y
between
a
predicted
3D
share
and
a
corresponding
observ
ation
image.
shape
r
epr
esentation
,
The
assumption
made
w
as
it
is
possible
to
trace
trays
accross
the
v
ox
el
grid
and
compute
intersection
with
cell
boundaries.
The
3D
shape
representation
is
parametrized
in
a
discretized
3D
v
ox
el
grid.
Observ
ation,
This
aim
at
achie
ving
the
shape
to
be
consistent
with
some
a
v
ailable
observ
ation
such
as
depth
image,
object
fore
ground
mask.
Also
CNN
model
w
as
used
as
a
simpl
e
encoder
-decoder
which
predicts
occupancies
in
a
v
ox
el
grid
from
the
input
RGB
image.
The
result
outperformed
all
the
algorithms
found
in
the
literature
re
vie
w
.
Martin-Brualla et al. [39] extended 3D time-lapse reconstruction so that a virtual camera moves continuously in time and space, using internet photos. Previous work assumed a static camera; adding camera motion during the time-lapse produces a very compelling impression of parallax. The first step is pre-processing: computing the 3D pose of each input image using a structure-from-motion algorithm. Subsequently, the desired path through the reconstructed scene has to be specified. The algorithm then computes time-varying, temporally consistent depth maps for all output frames in the sequence. The proposed 3D time-lapse reconstruction computes time-varying, regularized color profiles for 3D tracks in the scene, and the output video frames are reconstructed from the projected color profiles.
2.4. Video sequences
Sepehrinour and Kasaei [11] introduced a novel algorithm for perspective projection reconstruction using single-view videos of non-rigid surfaces. The system input is a single-view video taken in a completely natural environment. The features extracted are: the projective depth coefficients of all points in each of the input frames, and the projection matrix components (camera calibration, rotation matrix, and translation vector).
Xu et al. [40] developed underwater 3D object reconstruction from multiple views in a video stream via structure from motion (SfM). They aim to capture the inherent geometric variation of 3D objects at multiple visual angles, using a Myring-streamlined AUV system with an onboard CCD camera with a resolution of 480 TVL/PH and a minimum scene illumination of 0.28 lux. The proposed pipeline processes the continuous video stream by combining SfM with object tracking strategies. An object tracker, the so-called particle filter, is applied to the multi-view image sequence to follow the motion trajectories of underwater 3D objects at all times. A process of triangulation, iteration, and other parameter adjustment is set up for the SfM algorithm to recover and estimate the position of the camera, its calibration, and the geometry of the underwater scene as a sparse 3D point cloud.
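The triangulation step inside any SfM pipeline can be sketched with the standard linear (DLT) method: each view's projection of the unknown point contributes two linear constraints, and the homogeneous solution is the right singular vector of the stacked system. The camera matrices and the 3D point below are made-up illustrative values.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two projections."""
    A = np.vstack([x1[0] * P1[2] - P1[0],      # two constraints from view 1
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],      # two constraints from view 2
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                                 # null-space direction of A
    return X[:3] / X[3]                        # de-homogenize

K = np.array([[400.0, 0, 160], [0, 400.0, 120], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0], [0]])])  # 0.2 m baseline
X_true = np.array([0.1, -0.05, 1.5])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))
```

With noisy real correspondences the system no longer has an exact null vector, and the smallest singular vector gives the least-squares point, which SfM then refines by bundle adjustment.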
Lapandic et al. [41] introduced a framework for automated reconstruction of a 3D model from multiple 2D aerial images taken by an Unmanned Aerial Vehicle (UAV). The objective of this work is to achieve near real-time performance with reliable accuracy and execution time. The proposed pipeline is as follows: feature detection and extraction using the FAST algorithm and the Lucas-Kanade method, respectively; 2D point correspondence; point cloud filtering; camera pose estimation; point triangulation; and point cloud calculation.
3. DISCUSSION
The oldest paper cited here is from 1981, and research on 3D reconstruction is still going on. This shows the maturity of research in this area. Numerous algorithms are described, solving numerous problems. In addition, commercial vendors such as Microsoft, Agisoft, Intel (RealSense), Asus, and many other companies develop software and hardware to perform such computation.
The general pipeline found in the literature is as follows. First, image acquisition: several datasets are available for evaluating the performance of proposed algorithms, and there is also the option of capturing one's own objects using the vision setups mentioned earlier. Second, a pre-processing step, applying filters to obtain the best images for reconstruction. Third, 3D point clouds: the alignment algorithm plays an important role in achieving decent accuracy, along with refinement methods for mismatched 3D cloud registrations. Fourth, 3D reconstruction, where texturing and meshing are applied to produce the final result.
4. CONCLUSION
This paper explains several current 3D reconstruction methods from the literature. There are various algorithms for performing each step of the general 3D reconstruction pipeline. Each reconstructed object requires specific algorithms depending on the vision setup and on the texture and size of the observed object. Improvements in sensors, besides more efficient algorithms, could lead to higher accuracy in 3D reconstruction in the future. Modeling using neural networks shows great advantages [26], [8]: the defined network learns the shapes and fills occluded regions automatically.
ACKNOWLEDGEMENT
The authors would like to acknowledge Bina Nusantara University for the research grant funding.
REFERENCES
[1] A. Anwer, S. S. A. Ali, and F. Meriaudeau, "Underwater online 3D mapping and scene reconstruction using low cost kinect RGB-D sensor," 2016 6th International Conference on Intelligent and Advanced Systems (ICIAS), pp. 1-6, 2016. [Online]. Available: http://ieeexplore.ieee.org/document/7824132/
[2] D. Tsiafaki and N. Michailidou, "Benefits and Problems Through the Application of 3D Technologies in Archaeology: Recording, Visualisation, Representation and Reconstruction," SCIENTIFIC CULTURE, vol. 1, no. 3, pp. 37-45, 2015.
[3] F. Santoso, M. Garratt, M. Pickering, and M. Asikuzzaman, "3D-Mapping for Visualisation of Rigid Structures: A Review and Comparative Study," IEEE Sensors Journal, vol. PP, no. 99, pp. 1-1, 2015. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7322186
[4] A. Harjoko, R. M. Hujja, and L. Awaludin, "Low-cost 3D surface reconstruction using Stereo camera for small object," 2017 International Conference on Signals and Systems (ICSigSys), pp. 285-289, 2017. [Online]. Available: http://ieeexplore.ieee.org/document/7967057/
[5] G. D. Evangelidis, M. Hansard, and R. Horaud, "Fusion of Range and Stereo Data for High-Resolution Scene-Modeling," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 11, pp. 2178-2192, 2015.
[6] G.-v. J. M and M.-v. J. C, "Simple and low cost scanner 3D system based on a Time-of-Flight ranging sensor," pp. 3-7, 2017.
[7] R. Ravanelli, A. Nascetti, and M. Crespi, "Kinect V2 and Rgb Stereo Cameras Integration for Depth Map Enhancement," ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLI-B5, no. July, pp. 699-702, 2016. [Online]. Available: http://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLI-B5/699/2016/isprs-archives-XLI-B5-699-2016.pdf
[8] X. Yan, J. Yang, E. Yumer, Y. Guo, and H. Lee, "Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision."
[9] Q. Hao, R. Cai, Z. Li, L. Zhang, Y. Pang, F. Wu, and Y. Rui, "Efficient 2D-to-3D correspondence filtering for scalable 3D object recognition," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, no. 1, pp. 899-906, 2013.
[10] J. Xian-hua and Z. Yuan-qing, "Error Elimination Algorithm in 3D Image Reconstruction," vol. 12, no. 4, pp. 2690-2696, 2014.
[11] M. Sepehrinour and S. Kasaei, "Perspective reconstruction of non-rigid surfaces from single-view videos," 2017 25th Iranian Conference on Electrical Engineering, ICEE 2017, no. Icee2017, pp. 1452-1458, 2017.
[12] J. Yao and R. Taylor, "Assessing accuracy factors in deformable 2D/3D medical image registration using a statistical pelvis model," Proceedings of the IEEE International Conference on Computer Vision, vol. 2, no. Iccv, pp. 1329-1334, 2003. [Online]. Available: http://www.scopus.com/inward/record.url?eid=2-s2.0-0344983014&partnerID=tZOtx3y1
[13] G. Hichem, F. Chouchene, and H. Belmabrouk, "3D model reconstruction of blood vessels in the retina with tubular structure," International Journal on Electrical Engineering and Informatics, vol. 7, no. 4, pp. 724-734, 2015.
[14] S. Sumijan, S. Madenda, J. Harlan, and E. P. Wibowo, "Hybrids Otsu method, Feature region and Mathematical Morphology for Calculating Volume Hemorrhage Brain on CT-Scan Image and 3D Reconstruction," TELKOMNIKA (Telecommunication Computing Electronics and Control), vol. 15, no. 1, p. 283, 2017. [Online]. Available: http://journal.uad.ac.id/index.php/TELKOMNIKA/article/view/3146
[15] F. Caregiver, A. Introduction, D. Traumatic, M. Tbi, M. Tbi, S. Tbis, A. Tbi, T. B. I. Penetration, F. Violence, C. Changes, and P. Changes, "Fact Sheet Traumatic Brain Injury," pp. 1-6, 2018.
[16] M. Monterial, P. Marleau, and S. A. Pozzi, "Single-View 3-D Reconstruction of Correlated Gamma-Neutron Sources," IEEE Transactions on Nuclear Science, vol. 64, no. 7, pp. 1840-1845, 2017.
[17] A. Saxena, S. H. Chung, and A. Y. Ng, "Depth reconstruction from a single still image," Ijcv, vol. 74, no. 1, 2007.
[18] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu, "ShapeNet: An Information-Rich 3D Model Repository," 2015. [Online]. Available: http://arxiv.org/abs/1512.03012
[19] D. J. Rezende, S. M. A. Eslami, S. Mohamed, P. Battaglia, M. Jaderberg, and N. Heess, "Unsupervised Learning of 3D Structure from Images," 2016. [Online]. Available: http://arxiv.org/abs/1607.00662
[20] J. Wu, T. Xue, J. J. Lim, Y. Tian, J. B. Tenenbaum, A. Torralba, and W. T. Freeman, "Single image 3D interpreter network," Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9910 LNCS, pp. 365-382, 2016.
[21] B. Fan, Y. Rao, W. Liu, and Q. Wang, "Region-Based Growing Algorithm for 3D Reconstruction from MRI Images," pp. 521-525, 2017.
[22] T. Nadu, "Brain Tumor Segmentation of MRI Brain Images through FCM clustering and Seeded Region Growing Technique," vol. 10, no. 76, pp. 427-432, 2015.
[23]
M.
Zhang,
Z.
Zhang,
and
W
.
Li,
“3D
Model
Reconstruction
based
on
Plantar
Image
’
s
Feature
Se
gmen-
tation,
”
pp.
1–5,
2017.
[24]
L.
Juan
and
O.
Gwun,
“A
comparison
of
sift,
pca-sift
and
surf,
”
International
J
ournal
of
Ima
g
e
Pr
ocessing
(IJIP)
,
v
ol.
3,
no.
4,
pp.
143–152,
2009.
[25]
W
.
Y
an,
X.
Shi,
X.
Y
an,
and
L.
W
ang,
“Computing
OpenSURF
on
OpenCL
and
general
purpose
GPU,
”
International
J
ournal
of
Advanced
Robotic
Systems
,
v
ol.
10,
pp.
1–12,
2013.
[26]
M.
L.
Group,
M.
Intel,
D.
Ireland,
A.
P
alla
,
D.
Molone
y
,
and
L.
F
anucci,
“Fully
Con
v
olutional
Denoising
Autoencoder
for
3D
Scene
Reconstruction
from
a
single
depth
image,
”
no.
Icsai,
pp.
566–575,
2017.
[27]
M.
Firman,
O.
M.
Aodha,
S.
Julier
,
and
G.
J.
Brosto
w
,
“Structured
Prediction
of
Unobserv
ed
V
ox
els
from
a
Single
Depth
Image,
”
2016
IEEE
Confer
ence
on
Computer
V
ision
and
P
atte
rn
Reco
gnition
(CVPR)
,
pp.
5431–5440,
2016.
[Online].
A
v
ailable:
http://ieee
xplore.ieee.or
g/document/7780955/
[28] S. Liu, C. Chen, and N. Kehtarnavaz, “A computationally efficient denoising and hole-filling method for depth image enhancement,” vol. 9897, p. 98970V, 2016. [Online]. Available: http://proceedings.spiedigitallibrary.org/proceeding.aspx?doi=10.1117/12.2230495
[29] M. Jaiswal, J. Xie, and M. T. Sun, “3D object modeling with a Kinect camera,” 2014 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA 2014, 2014.
[30] J. Xie, Y. Hsu, R. Feris, and M. Sun, “Fine registration of 3D point clouds with iterative closest point using an RGB-D camera,” Circuits and Systems (ISCAS) . . . , pp. 1–4, 2013. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6572486
[31] H. Avron, A. Sharf, C. Greif, and D. Cohen-Or, “ℓ1-Sparse reconstruction of sharp point set surfaces,” ACM Transactions on Graphics, vol. 29, no. 5, pp. 1–12, 2010. [Online]. Available: http://portal.acm.org/citation.cfm?doid=1857907.1857911
[32] M. Isenburg, Y. Liu, J. Shewchuk, and J. Snoeyink, “Streaming computation of Delaunay triangulations,” ACM Transactions on Graphics, vol. 25, no. 3, p. 1049, 2006. [Online]. Available: http://portal.acm.org/citation.cfm?doid=1141911.1141992
[33] M. Kowalski, J. Naruniec, and M. Daniluk, “LiveScan3D: A Fast and Inexpensive 3D Data Acquisition System for Multiple Kinect v2 Sensors,” Proceedings - 2015 International Conference on 3D Vision, 3DV 2015, pp. 318–325, 2015.
[34] P. Besl and N. McKay, “A Method for Registration of 3-D Shapes,” pp. 239–256, 1992.
[35] C. Burns, “Texture Super-Resolution for 3D Reconstruction,” pp. 4–7, 2017.
[36] J.-Y. Bouguet, V. Tarasenko, B. D. Lucas, and T. Kanade, “Pyramidal Implementation of the Lucas Kanade Feature Tracker: Description of the algorithm,” Imaging, vol. 130, no. x, pp. 1–9, 1981.
[37] A. Plyer, G. Le Besnerais, and F. Champagnat, “Massively parallel Lucas Kanade optical flow for real-time video processing applications,” Journal of Real-Time Image Processing, vol. 11, no. 4, pp. 713–730, 2016.
[38] S. Tulsiani, T. Zhou, A. A. Efros, and J. Malik, “Multi-view supervision for single-view reconstruction via differentiable ray consistency,” Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, vol. 2017-Janua, pp. 209–217, 2017.
[39] R. Martin-Brualla, D. Gallup, and S. M. Seitz, “3D Time-Lapse Reconstruction from Internet Photos,” International Journal of Computer Vision, vol. 125, no. 1-3, pp. 52–64, 2017.
[40] X. Xu, R. Che, R. Nian, and B. He, “Underwater 3D Object Reconstruction with Multiple Views in Video Stream via Structure from Motion,” pp. 0–4, 2016.
[41] D. Lapandic, J. Velagic, and H. Balta, “Framework for automated reconstruction of 3D model from multiple 2D aerial images,” Proceedings Elmar - International Symposium Electronics in Marine, vol. 2017-Septe, no. September, pp. 18–20, 2017.
BIOGRAPHIES OF AUTHORS
Hanry Ham is a lecturer and research assistant at Bina Nusantara University. He holds a Master of Engineering (2016) from The Sirindhorn International Thai-German Graduate School of Engineering (Thailand and Germany) and obtained his Bachelor's degree in Computer Science from Bina Nusantara University (Indonesia) in 2014. His research interests are in the fields of image processing, computer vision, and computer graphics. He is affiliated with IEEE as a student member. He is also involved in student associations and serves on the committees of several competitions, such as BNPCHS and the ACM-ICPC Regional Asia Site.
Julian Wesley is a lecturer at Bina Nusantara University, where he obtained his Master of Computer Science (M.TI.) degree in 2016. His research interests are in the fields of image processing, computer vision, and virtual reality. He also works as a technology consultant focused on the IT financial industry; he leads an R&D team at Emerio Indonesia and supervises intern students from multiple universities in Indonesia.
Hendra is a lecturer at Bina Nusantara University. He was born in Tanjungpandan on 18 July 1992. He completed his bachelor's degree at Bina Nusantara University in 2010 and subsequently obtained his master's degree in 2018; both degrees are in Information Technology. He is currently working as a Software Engineer at a start-up company in Indonesia.