International Journal of Electrical and Computer Engineering (IJECE)
Vol. 6, No. 4, August 2016, pp. 1637~1646
ISSN: 2088-8708, DOI: 10.11591/ijece.v6i4.9756
Journal homepage: http://iaesjournal.com/online/index.php/IJECE
Accurate Iris Localization Using Edge Map Generation and Adaptive Circular Hough Transform for Less Constrained Iris Images
Vineet Kumar, Abhijit Asati, Anu Gupta
Department of Electrical and Electronics Engineering, Birla Institute of Technology and Science Pilani, Pilani-333031, India
Article Info

Article history:
Received Dec 27, 2015
Revised Apr 6, 2016
Accepted Apr 21, 2016

ABSTRACT

This paper proposes an accurate iris localization algorithm for iris images acquired under near-infrared (NIR) illumination and containing noise due to eyelids, eyelashes, lighting reflections, non-uniform illumination, eyeglasses, eyebrow hair, etc. The two main contributions of the paper are an edge map generation technique for pupil boundary detection and an adaptive circular Hough transform (CHT) algorithm for limbic boundary detection, which make the iris localization not only more accurate but also faster. The edge map for pupil boundary detection is generated by intersection (logical AND) of two binary edge maps obtained using thresholding, morphological operations and Sobel edge detection, which results in minimal false edges caused by the noise. The adaptive CHT algorithm for limbic boundary detection searches for a set of two arcs in an image instead of a full circle, which counters iris occlusions by the eyelids and eyelashes. The proposed CHT and adaptive CHT implementations for pupil and limbic boundary detection, respectively, use a two-dimensional accumulator array that reduces memory requirements. The proposed algorithm gives accuracies of 99.7% and 99.38% for the challenging CASIA-Iris-Thousand (version 4.0) and CASIA-Iris-Lamp (version 3.0) databases, respectively. The average time cost per image is 905 msec. The proposed algorithm is compared with previous work and shows better results.
Keyword:
Circular Hough transform
Edge map generation
Iris localization
Iris recognition
Iris segmentation
Copyright © 2016 Institute of Advanced Engineering and Science. All rights reserved.
Corresponding Author:
Vineet Kumar,
Department of Electrical and Electronics Engineering,
Birla Institute of Technology and Science Pilani,
Pilani-333031, India.
Email: vineet@pilani.bits-pilani.ac.in
1. INTRODUCTION
Iris recognition [1]-[3] is accepted as one of the most accurate biometric technologies for identifying individuals and has applications in many distinct domains such as border-control services, law enforcement, secure transactions and payments, customer authentication, social-media forums, smart devices, and privacy and data protection. Iris segmentation is an important stage in an iris recognition system; it mainly deals with localizing the iris's inner and outer boundaries (i.e. iris localization) in the captured iris image. Highly accurate iris recognition systems demand iris images captured under constrained imaging environments and with subjects' full cooperation [4]. However, this restricts the range of domains where iris recognition can be applied. Iris localization with high accuracy can be achieved in constrained iris recognition systems, but it is challenging to get accurate iris localization in less constrained systems.

The less constrained (noisy) iris images (Figure 1(b)) may contain reflections caused by a light source and non-uniform illumination caused by the position and angle of the light source while acquiring the images. The other non-ideal issues in noisy iris images are heavy iris occlusions by the eyelids and eyelashes, eyeglasses, low contrast, and eyebrow hair [5]. Moreover, the iris images may have a non-frontal view when the user is not looking ahead towards the camera. Iris images captured using near-infrared (NIR) illuminators are preferred over visible wavelength (VW) images as their irises reveal rich and complex features [2],[6]. Therefore, most of the standard iris databases available on the internet are NIR images [4]. Figure 1 shows sample images from two different NIR databases, where Figure 1(a) is a more close-up iris image as compared to Figure 1(b). It is easier to localize the iris in Figure 1(a) as it has bigger pupil and iris regions with a smaller surrounding area, as compared to Figure 1(b).
Figure 1. (a) Iris image from CASIA-Iris-Interval, version 3.0; (b) Less constrained iris image from CASIA-Iris-Thousand, version 4.0
The earlier iris recognition systems are typically based on Daugman's [1] and Wildes' [2] algorithms, which use the integro-differential operator (IDO) and the circular Hough transform (CHT), respectively, to localize irises. However, their iris localization algorithms work under very controlled environments and do not perform very accurately when dealing with noisy images [6]. Some recent methods for localizing irises in noisy NIR and VW images are described in [5],[7]-[9] and [10]-[12], respectively. Hough transform (HT) based iris localization algorithms consider the iris as a circular ring, and the CHT is used to detect the circles as illustrated in [7],[10],[12]. The literature review reveals that the existing iris localization algorithms for NIR images detect the pupil using either intensity thresholding [13],[14] or edge detection based segmentation techniques [7],[15],[16]. In the CHT based algorithms, optimal edge maps of the iris image are first generated so that they contain minimal false edges and the iris circles can be detected accurately and efficiently, as demonstrated in [7] and [15]. Generating optimal edge maps gets more challenging if the images are noisy, such as the CASIA-Iris-Thousand, version 4.0 (CITHV4) database [17] images. The noisy images are first preprocessed to remove noise such as lighting reflections, non-uniform illumination and low contrast as described in [6]-[9], which improves the accuracy and time performance of the iris localization. Image inpainting techniques are used to remove the lighting reflection spots of the iris images, and histogram equalization is used to compensate for the non-uniform illumination and low contrast [7]. For iris localization in noisy NIR images from the CITHV4 database, Wang et al. [7] proposed an inpainting technique based on Navier-Stokes equations to remove the lighting reflection spots and the Probable boundary (Pb) edge detection operator to counter the non-uniform illumination.
In this paper, the proposed iris localization algorithm has the advantage that it eliminates image preprocessing steps such as inpainting to remove reflections and methods to compensate for non-uniform illumination, yet it still reduces the false edges caused by different types of noise very significantly. In the proposed algorithm, the edge map for pupil boundary detection is obtained by combining two different edge maps using an intersection operation on images, whereas the previous iris localization methods in the literature are not based on combining two or more edge maps into a single edge map. Having detected the pupil boundary using the CHT, the proposed adaptive CHT is used to detect the limbic boundary (the iris's outer boundary). The proposed adaptive CHT detects arcs in the image, since the eyelids and eyelashes occlude the limbic boundary, whereas the previous CHT based iris localization methods search for a full circle. The proposed algorithm targets frontal-view but noisy NIR images (Figure 1(b)) having the non-ideal issues discussed before. To evaluate the performance of the proposed algorithm, the challenging CITHV4 and CASIA-Iris-Lamp, ver. 3.0 (CILV3) iris databases [17] were used. The objective of the work presented in this paper is to overcome the constraints in achieving highly accurate biometric iris recognition.
The rest of the paper is organized as follows. Section 2 describes the proposed algorithm and its implementation, whereas section 3 discusses the performance evaluation results and the comparison with other methods. Section 4 concludes the work in the paper.
2. THE PROPOSED IRIS LOCALIZATION ALGORITHM
The proposed algorithm achieves iris localization for NIR images in two phases: Phase 1) pupil boundary detection, and Phase 2) limbic boundary detection. Each phase consists of two process steps, which are edge map generation from the iris image and circle detection in the edge map. The goal of the edge map generation is to prepare an appropriate input for the CHT so that the iris circles can be detected accurately and rapidly. The original iris image of size 640×480 pixels is scaled down to 320×240 pixels using a scaling factor of 0.5 to speed up the processing. The proposed algorithm is applied on the scaled iris image and the obtained circle parameters are multiplied by two for mapping the parameters onto the original iris image.
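As a rough illustration of this scaling step (the paper's implementation is in MATLAB; the snippet below is a hypothetical Python/OpenCV sketch, and localize_fn is a placeholder for the two-phase detector described next, not a function from the paper):

import cv2

def localize_on_scaled(image_path, localize_fn, scale=0.5):
    """Run iris localization on a half-size image and map the detected
    circle (x, y, r) back to the original 640x480 image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)        # original NIR image
    small = cv2.resize(img, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_AREA)           # 320x240 working image
    x, y, r = localize_fn(small)                                # circle found at half size
    k = 1.0 / scale                                             # = 2 for scale 0.5
    return int(round(x * k)), int(round(y * k)), int(round(r * k))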
2.1. Phase 1: Pupil boundary detection
The two steps involved in the pupil boundary detection are the edge map generation and the CHT for pupil boundary detection, which are discussed below.
2.1.1. Edge map generation
The idea of generating an optimal edge map for pupil boundary detection relies on combining two edge maps obtained via two paths: Path 1 applies intensity thresholding on the iris image to segment the pupil region, followed by edge detection; and Path 2 applies edge detection directly on the intensity iris image. Since both the edge maps obtained via Path 1 and Path 2 have the pupil contour in common, they are combined into a single edge map using the intersection operation (logical AND), which significantly minimizes the false edges due to noise such as eyelids, eyelashes and lighting reflections. The proposed edge map generation is illustrated with the help of Figure 2 and Figure 3. The edge map in Figure 2(e), obtained via Path 1, excludes the effect of reflections but contains edges due to dark illumination, whereas the edge map in Figure 2(f), obtained using Path 2, excludes the edges due to dark illumination but contains edges due to reflections. Therefore, the intersection operation on the two edge maps (Figure 2(e) and Figure 2(f)) removes the effect of both reflections and dark illumination, as shown in Figure 2(g). To get more advantage out of the intersection operation in reducing the false edges, two morphological operations are also used in Path 1, as sketched below.
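A minimal sketch of the two-path edge map follows (assuming NumPy/SciPy rather than the MATLAB toolchain used in the paper; the threshold values, the disc radius and the Sobel magnitude threshold are illustrative guesses, not the tuned settings behind Figure 2, and the pupil is treated as foreground, i.e. the polarity is inverted with respect to the binary images shown in the figures):

import numpy as np
from scipy import ndimage as ndi

def sobel_edges(img, edge_thresh):
    """Binary Sobel edge image (no thinning)."""
    gx = ndi.sobel(img.astype(float), axis=1)
    gy = ndi.sobel(img.astype(float), axis=0)
    return np.hypot(gx, gy) > edge_thresh

def pupil_edge_map(smoothed, pupil_thresh=70, disc_radius=7, edge_thresh=40):
    """Two-path edge map for pupil boundary detection (Section 2.1.1 sketch).
    `smoothed` is the Gaussian-smoothed gray image; all constants are assumptions."""
    # Path 1: intensity thresholding -> hole filling -> opening -> Sobel
    pupil = smoothed < pupil_thresh                       # dark pupil becomes foreground
    pupil = ndi.binary_fill_holes(pupil)                  # fill reflection dots inside pupil
    yy, xx = np.mgrid[-disc_radius:disc_radius + 1, -disc_radius:disc_radius + 1]
    disc = (xx ** 2 + yy ** 2) <= disc_radius ** 2        # disc structuring element
    pupil = ndi.binary_opening(pupil, structure=disc)     # shrink eyelid/eyelash blobs
    e1 = sobel_edges(pupil, edge_thresh=0.5)              # edges of the cleaned binary image

    # Path 2: Sobel directly on the smoothed intensity image
    e2 = sobel_edges(smoothed, edge_thresh)

    return np.logical_and(e1, e2)                          # intersection (logical AND)

Only edges that survive both paths remain, which is what lets the pupil contour dominate the map passed to the CHT.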
Figure 2. Edge map generation for pupil boundary detection: (a) Iris image (320×240) from CITHV4; (b) Gaussian-smoothed iris image (σ=1.0, k=5); (c) Binary image after applying intensity thresholding on (b); (d) Cleaned binary image obtained from (c) using hole filling followed by image opening (se='disk', k=7); (e) Edge image obtained after applying the Sobel edge detector without thinning on (d); (f) Edge image obtained after applying the Sobel edge detector without thinning on (b); (g) Edge map obtained by the intersection (logical AND) operation on (e) and (f); (h) Iris image with pupil localization (shown by the white circle) obtained after applying the CHT on (g)
The two morphological operations are applied on the binary image in Figure 2(c) to get the cleaned binary image shown in Figure 2(d); the objective of these operations is to reduce the noise size so that the noise edges can be avoided in the intersection operation, which is illustrated later using Figure 3. First, a hole filling operation is applied on the binary image in Figure 2(c) to fill the white dots in the pupil region, and then the image opening operation for black objects using a structuring element of type disc [18] is applied to reduce the size of the noise due to eyelids, eyelashes and eyebrow. Figure 2(d) shows the cleaned binary image in which the noise due to eyelids and eyelashes has been completely removed; even when such noise is not removed completely, its size is reduced, because the black regions of the eyelids along with eyelashes in the binary image are not solid, boundary-compact objects like the pupil, and the image opening operation removes the pixels at their boundaries.
Figure 3. Edge map generation for pupil boundary detection: (a) Ideal edge map (image 7) that contains pupil boundary edges only; (b) Edge map (image 7) that contains pupil boundary edges as well as false edges; the images in (a) and (b) are: 1. Iris image from CILV3; 2. Smoothed iris image; 3. Binary image after thresholding 2; 4. Cleaned binary image obtained from 3; 5. Edge image of 4; 6. Edge image of 2; 7. Edge map obtained by the intersection operation on 5 and 6; 8. Pupil-localized iris image obtained after applying the CHT on 7
The edge image of the cleaned binary image, shown in Figure 2(e), has false edges due to dark illumination and the eyeglass frame, but it could have contained other false edges due to the eyelids and eyelashes that can be removed or minimized by the intersection operation, as illustrated using Figure 3. Figure 3 shows that the edge image of the cleaned binary image (image 5) contains false edges due to eyelids and eyelashes, but these false edges are removed completely or partially after the intersection operation, as shown in image 7. The image opening operation on the binary image (image 3) reduces the size of the noise due to the eyelids and eyelashes, and hence the reduced noise size in the cleaned binary image (image 4) is not the same as that detected by the edge detection on the original iris image (image 6). Therefore, the intersection operation on image 5 and image 6 avoids the noise edges completely or partially. Figure 3(a) shows an ideal situation, where the intersection operation removes the false edges completely (image 7), but the edge map in Figure 3(b) still has a few false edges (image 7).
2.1.2. CHT for pupil boundary detection
There are a number of different approaches that can be taken in the CHT implementation [18]-[20]. To meet the requirement of detecting a circle in the edge map of an iris image, we propose an implementation technique for the CHT that detects the single strongest circle in an image. The proposed CHT implementation described in Algorithm 1 uses a 2-D accumulator to store votes for one radius at a time, whereas the standard CHT requires a 3-D accumulator to store votes for multiple radii, which results in large storage requirements and long processing times [20]. At all the edge pixels (a,b), which are the white pixels in the edge map, virtual circles are drawn with different radii using Equation (1). A circle with radius r and center (a,b) can be described with the parametric equations below:

x = a + r cos(θ), y = b + r sin(θ)    (1)

When the angle θ sweeps a full 360 degrees, the circle points (x,y) lying on the perimeter of the circle are generated. A 2-D accumulator array of the same size as the image is initialized to zero. The cells' values in the array are incremented by one every time a circle passes through the cells; the process is known as accumulator voting, as shown in Algorithm 1. The peak (maximum value) in the 2-D accumulator array is determined for every radius. The maximum among all the peaks gives the center and radius of the detected circle. The 2-D accumulator array after voting is shown in Figure 4 when the CHT is applied on the edge map of Figure 2(g). In Figure 4, the radius (r) is equal to the pupil radius; therefore, the coordinates of the peak in the 2-D accumulator array are the coordinates of the pupil center.
Algorithm 1. CHT for pupil boundary detection using a 2-D accumulator array
Inputs: Edge map of iris image, minimum pupil radius (rminp) and maximum pupil radius (rmaxp)
Outputs: pupil circle radius (rp) and center coordinates (xp, yp)
1.  for pupil_radius = rminp:1:rmaxp do                  // comments:
2.    A = zeros(rows, cols);                             // 2-D accumulator of iris image size
3.    for all "white pixels" in edge map of iris image do
4.      for θ = 1 to 360° do
5.        Calculate (x,y) using Equation (1)
6.        if (x,y) is in image bounds do
7.          A(x,y) = A(x,y) + 1;                         // Accumulator-voting step
8.        end if
9.      end for
10.   end for
11.   Find maximum value in A:
12.   M = A(x',y');                                      // M is maximum value in A
13.   Max_Array(pupil_radius) = M;
14.   X_Array(pupil_radius) = x';
15.   Y_Array(pupil_radius) = y';
16. end for
17. Find maximum in Max_Array:
18. M' = Max_Array(index)                                // M' is maximum value in Max_Array
19. rp = index; xp = X_Array(index); yp = Y_Array(index);    // End of CHT algorithm
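The voting loop of Algorithm 1 can be sketched in NumPy roughly as follows; this is a hypothetical re-implementation for illustration (the paper's code is in MATLAB), following Equation (1) and reusing a single 2-D accumulator per radius:

import numpy as np

def cht_pupil(edge_map, r_min, r_max):
    """Single-strongest-circle CHT with a 2-D accumulator per radius
    (sketch of Algorithm 1). Returns (radius, row, col) of the pupil circle."""
    rows, cols = edge_map.shape
    ys, xs = np.nonzero(edge_map)                       # "white pixels" (b, a)
    thetas = np.deg2rad(np.arange(360))
    best = (0, None, None, None)                        # (votes, radius, row, col)
    for r in range(r_min, r_max + 1):
        acc = np.zeros((rows, cols), dtype=np.int32)    # 2-D accumulator, image size
        cx = np.rint(xs[:, None] + r * np.cos(thetas)).astype(int)   # Equation (1)
        cy = np.rint(ys[:, None] + r * np.sin(thetas)).astype(int)
        ok = (cx >= 0) & (cx < cols) & (cy >= 0) & (cy < rows)
        np.add.at(acc, (cy[ok], cx[ok]), 1)             # accumulator-voting step
        peak = acc.max()
        if peak > best[0]:
            y_p, x_p = np.unravel_index(acc.argmax(), acc.shape)
            best = (peak, r, y_p, x_p)
    return best[1], best[2], best[3]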
2.2. Phase 2: Limbic boundary detection
The center of the pupil circle is used as an input in detecting the limbic boundary, as shown in Figure 5. The edge map generation and the adaptive CHT for limbic boundary detection are discussed below.
2.2.1. Edge map generation
The limbic boundary detection may be hindered by the eyelids, eyelashes, reflections and the low contrast between the iris and sclera in the iris images [5]. A subimage is extracted from the iris image using a rectangle centered at the pupil center, as shown in Figure 5(a) and Figure 5(b). The width of the rectangle (or subimage) is twice the maximum possible value of the limbic boundary radius and the height is half of the width. The height of the rectangle can be increased further if the iris occlusion by the eyelids and eyelashes is not severe. The size of the rectangle remains constant for all the images from a database, but the location of the rectangle in the image changes as the rectangle is positioned using the pupil center.
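A small hedged sketch of this cropping geometry follows (Python/NumPy for illustration; r_limbic_max is a database-dependent constant assumed by the caller, and clipping to image bounds is an added safeguard, not something stated in the paper):

def extract_subimage(iris, pupil_center, r_limbic_max):
    """Crop the rectangle used for limbic boundary detection (Section 2.2.1 sketch).
    pupil_center is (row, col) from Phase 1; width = 2*r_limbic_max and
    height = width/2 (e.g. 130x65 for the 320x240 images of Figure 5)."""
    yc, xc = pupil_center
    width = 2 * r_limbic_max
    height = width // 2
    top = max(yc - height // 2, 0)
    left = max(xc - width // 2, 0)
    return iris[top:top + height, left:left + width], (top, left)   # crop + offset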
Figure 5. Limbic boundary detection: (a) Iris image (320×240) after pupil boundary detection; the rectangle in white indicates the size of the subimage to be processed for limbic boundary detection; (b) The subimage (130×65) extracted from the iris image using the rectangle in (a); (c) Filtered subimage after applying a median filter of size 9×9 on (b); the two rectangles in white on the left and right sides of the pupil are used to cover the iris's vertical contours; (d) Edge map obtained after applying Sobel edge detection without thinning in the horizontal direction inside the two rectangles in (c); (e) Circle detection after applying the adaptive CHT on (d); (f) Iris-localized image (320×240)
Figure 4. Surface plot of the 2-D accumulator array after voting, corresponding to one radius
The subimage in Figure 5(b) is filtered using a median filter [21] to suppress noise such as eyelash hair and uneven pixel intensities without damaging the edge structure. The upper and/or lower eyelids occlude the iris in the noisy iris images, but the vertical iris contours are always visible, and these are used for detecting the limbic boundary. The vertical iris contours are covered using two rectangles that are placed as shown in Figure 5(c). Three sides of each rectangle touch the subimage borders, and the fourth side of each rectangle is at a distance of pupil radius (rp) + 5 from the pupil center. To get the edge pixels, the Sobel edge detection without thinning is applied inside the two rectangles in the horizontal (x) direction only. Figure 5(d) shows the edge pixels that are used for the limbic boundary detection using the proposed adaptive CHT.
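This edge selection step could look roughly like the following NumPy/SciPy sketch (the Sobel magnitude threshold is an illustrative assumption; the 9×9 median window and the margin of 5 pixels mirror the description above):

import numpy as np
from scipy import ndimage as ndi

def limbic_edge_map(subimage, pupil_center_sub, r_p, margin=5,
                    median_size=9, edge_thresh=40):
    """Edge map for limbic boundary detection (Section 2.2.1 sketch): 9x9 median
    filtering, then horizontal Sobel edges kept only inside the two rectangles
    on the left and right of the pupil."""
    filt = ndi.median_filter(subimage, size=median_size)
    gx = ndi.sobel(filt.astype(float), axis=1)           # horizontal (x) gradient only
    edges = np.abs(gx) > edge_thresh
    yc, xc = pupil_center_sub
    keep = np.zeros_like(edges, dtype=bool)
    keep[:, :max(xc - (r_p + margin), 0)] = True         # left rectangle
    keep[:, xc + r_p + margin:] = True                    # right rectangle
    return edges & keep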
2.2.2. Adaptive CHT for limbic boundary detection
Radman et al. [22] proposed an adaptive IDO for limbic boundary detection; here, we propose an adaptive CHT for limbic boundary detection. Instead of using the general CHT algorithm for circle detection [20], an adaptive CHT for circular arc detection is applied on the edge map shown in Figure 5(d). The adaptive CHT detects a structure of two circular arcs defined by -45:45 and 135:225 degrees, shown as solid arcs in Figure 6. The voting space in the adaptive CHT is limited to a small region around the pupil center instead of the whole image. The adaptive CHT for limbic boundary detection is useful for images having iris occlusions by the eyelids and eyelashes.
Figure 6. A set of two vertical arcs that the adaptive CHT finds in an image
The accumulator voting part of the adaptive CHT for limbic boundary detection is described in Algorithm 2. At all the white pixels (a,b) in the edge map, the arc structure shown in Figure 6 is drawn using Equation (1) for a radius (r) and the corresponding voting is done. The size of the 2-D accumulator is the same as the subimage, but the voting space in the accumulator is limited to a 10×10 rectangle centered at the pupil center, because the centers of the pupil and limbic boundary circles lie within a small window [6]. The peak in the 2-D accumulator is determined corresponding to each radius, and the maximum among the peaks gives the center and the radius of the limbic boundary circle. The 2-D accumulator after voting is shown in Figure 7 when the adaptive CHT is applied on the edge map of Figure 5(d). Figure 7 shows the surface plot of the 2-D accumulator corresponding to a radius equal to the limbic boundary radius; hence, the coordinates of the peak in the accumulator are the center coordinates of the limbic boundary circle. The adaptive CHT for limbic boundary detection is also faster, as it searches for half the circle length instead of a full circle, which requires only half the virtual circle length to be drawn at each edge pixel.
Algorithm 2. HT accumulator voting in the adaptive CHT for limbic boundary detection
Compute: Center of subimage (xo,yo); imin = xo-5; imax = xo+5; jmin = yo-5; jmax = yo+5
1.  ---
2.  A = zeros(rows, cols)                        // 2-D accumulator of subimage size
3.  for all "white pixels" in edge map of subimage do
4.    for θ = -45° to 45° do
5.      Calculate (x,y) using Equation (1)
6.      if (imin ≤ x ≤ imax) and (jmin ≤ y ≤ jmax) then
7.        A(x,y) = A(x,y) + 1;                   // Accumulator-voting step
8.      end if
9.    end for
10.   for θ = 135° to 225° do
11.     repeat steps (lines) 5, 6, 7, 8
12.   end for
13. end for
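Combining Algorithm 2 with the per-radius peak search of Algorithm 1, a hypothetical NumPy sketch of the adaptive CHT could be written as follows (an illustration under the stated assumptions, not the authors' MATLAB implementation):

import numpy as np

def adaptive_cht_limbic(edge_map, pupil_center_sub, r_min, r_max, win=5):
    """Adaptive CHT sketch: votes are cast only along the two arcs
    (-45..45 and 135..225 degrees) and only inside a small window around
    the pupil centre. Returns (radius, row, col) of the limbic circle."""
    rows, cols = edge_map.shape
    yo, xo = pupil_center_sub
    ys, xs = np.nonzero(edge_map)                          # "white pixels" (b, a)
    arc = np.deg2rad(np.concatenate([np.arange(-45, 46), np.arange(135, 226)]))
    best = (0, None, None, None)
    for r in range(r_min, r_max + 1):
        acc = np.zeros((rows, cols), dtype=np.int32)       # accumulator of subimage size
        cx = np.rint(xs[:, None] + r * np.cos(arc)).astype(int)     # Equation (1)
        cy = np.rint(ys[:, None] + r * np.sin(arc)).astype(int)
        ok = (np.abs(cx - xo) <= win) & (np.abs(cy - yo) <= win)    # 10x10-style window
        ok &= (cx >= 0) & (cx < cols) & (cy >= 0) & (cy < rows)
        np.add.at(acc, (cy[ok], cx[ok]), 1)                # accumulator-voting step
        peak = acc.max()
        if peak > best[0]:
            y_l, x_l = np.unravel_index(acc.argmax(), acc.shape)
            best = (peak, r, y_l, x_l)
    return best[1], best[2], best[3]

Restricting θ to the two arcs halves the number of virtual circle points per edge pixel, which is where the speed-up noted above comes from.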
3. PERFORMANCE EVALUATION
In this section, the performance of the proposed algorithm is evaluated by conducting experiments on CASIA iris databases; the iris localization results are summarized and compared with some state-of-the-art iris localization methods in the literature. The datasets used in the experiments to evaluate the proposed algorithm are described below.
3.1. Datasets used
The datasets are taken from two CASIA iris databases [17]: CITHV4 and CILV3. These databases are chosen because they contain noisy images with noise such as reflections, non-uniform illumination, low contrast, eyeglasses and intrusions by the eyelids, eyelashes and eyebrow hair. Both CITHV4 and CILV3 contain 8-bit gray-level JPEG images with a resolution of 640×480 pixels.
CITHV4 dataset: The total number of images in this database is 20000, collected from 1000 different subjects [17]. Each subject contributes 20 images, which include 10 images from each of the left and right eyes. For extensive experimentation with this database, images from all 1000 subjects are chosen. A total of 5600 images are chosen, which include all the images of the first 100 subjects and 3600 images from the remaining 900 subjects (selecting 4 images from each subject).
CILV3 dataset: This database contains images from 411 different subjects [17]. The total number of images in the database is 16212. For thorough experimentation with the database, 811 images were chosen by selecting the first left-eye and first right-eye image of each subject, except for 11 subjects.
The experiments on the datasets were done using a computer with an Intel i5 CPU @ 2.40 GHz, 8 GB RAM and the Windows 7 operating system. The proposed algorithm is implemented and tested with the MATLAB (version 8.4) tool.
Figure 8. Accurately localized irises in iris images from the two CASIA databases [17]: (a) CITHV4; and (b) CILV3
Figure 7. Surface plot of the 2-D accumulator array in the adaptive CHT after voting, corresponding to one radius; note that the voting space is a 10×10 rectangle centered at the pupil center
3.2. Results and discussion
Sample images with accurately localized irises by the proposed algorithm are shown in Figure 8. The results of the proposed algorithm are summarized in Table 1.
Table 1. Experimental results of the proposed iris localization algorithm

Iris database      | Number of images taken for testing (Nt) | Number of correct iris localized images (Ni) | Accuracy (%) = (Ni/Nt)×100 | Average time cost per image (sec)
CITHV4* (640×480)  | 5600                                    | 5583                                         | 99.7                       | 0.92
CILV3* (640×480)   | 811                                     | 806                                          | 99.38                      | 0.89

*CITHV4: CASIA-Iris-Thousand (version 4.0); *CILV3: CASIA-Iris-Lamp (version 3.0)
The accuracy of the proposed algorithm is close to 100 percent. The accuracy of circle detection in an image by the CHT depends on the amount of false edges the edge map of the image contains: the fewer the false edges, the higher the accuracy. The edge map used for the pupil boundary detection in the proposed algorithm contains very few false edges due to the intersection operation, as discussed in subsection 2.1.1. The use of the adaptive CHT for limbic boundary detection, to counter the iris occlusions by the eyelids and eyelashes, is another cause of the high accuracy.
Table 1 also shows the time performance results of the proposed method. The average time cost is reported in the table; the time taken by the CHT for circle detection is directly proportional to the number of edge pixels in the edge map of the image, so the fewer the false edges in the edge map of the iris image, the lower the time cost. The average time cost per image was calculated by randomly choosing 500 images from each individual database. The MATLAB timer functions 'tic' and 'toc' were used to measure the execution time of the code that localizes irises in the 500 images. The execution time obtained was then divided by 500 to find the average time cost per image.
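For readers re-running the measurement outside MATLAB, a hypothetical Python analogue of the tic/toc procedure might look like this (batch contents and detector are placeholders):

import time

def average_time_cost(images, localize_fn):
    """Mean wall-clock localization time per image (seconds) over a batch,
    analogous to wrapping the batch in MATLAB's tic/toc and dividing by 500."""
    start = time.perf_counter()                  # 'tic'
    for img in images:
        localize_fn(img)
    elapsed = time.perf_counter() - start        # 'toc'
    return elapsed / len(images)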
3.2.1. Comparison with other methods
In our work, we also implemented the popular Wildes' [2] and Daugman's [1] methods for comparison with the proposed algorithm, as the published results of these methods for the CITHV4 and CILV3 databases are not available in the literature. Wildes' method [2] is based on Canny edge detection plus the CHT, whereas Daugman's method [1] uses the IDO as a circular edge detector. We applied both approaches on the Gaussian-smoothed iris images. The pupil was localized prior to the limbic boundary in both methods. While using Wildes' method [2], we found that the limbic boundary accuracy was very low due to the false edges of eyelids, eyelashes and pupil. So, we applied the CHT on selected edge pixels in the Canny edge map, where the edge pixels were selected by placing two rectangles on the left and right sides of the pupil as discussed in our proposed algorithm (Figure 5(c)). The edge maps used for Wildes' method [2] are shown in Figure 9. While using Daugman's IDO [1] for pupil detection, we observed that it is very sensitive to the reflection dots inside the pupil and gives wrong results. So, we removed these reflections during the pupil localization. The accuracy results of both methods [1] and [2] are shown in Table 2.
Table 2. Experimental results of iris localization methods (accuracy, %)

Method                          | CITHV4 | CILV3 | MMUV1*
Wildes [2]                      | 86.9   | 80.5  | 93.33
Daugman [1]                     | 90.6   | 88.12 | 96.44
Wildes [2] + Daugman's IDO [1]  | 92.1   | 91.09 | 98
Proposed                        | 99.7   | 99.38 | 99.55

*MMUV1: Multimedia University, version 1.0

Figure 9. Edge maps of the iris image used in Figure 2 (Canny edge detection with T=[0.034, 0.085] and σ=1.0): (a) Edge map for pupil boundary detection; (b) Edge map for limbic boundary detection
We also observed that localizing the pupil using Wildes' approach [2] and detecting the limbic boundary using Daugman's IDO [1] gives better results than either individual method, as shown in Table 2.
Table 2 shows that both the Wildes [2] and Daugman [1] methods give good accuracy for the Multimedia University, version 1.0 (MMUV1) database [23], as it contains less noisy images, but their accuracy degrades for the noisy images of CITHV4 and CILV3. Wildes [2] gives lower iris localization accuracy mainly due to the reflection spots in CITHV4 and too many false edges from the occlusions by eyelids and eyelashes in CILV3. These noises also reduce the accuracy of Daugman's IDO [1]. The average time cost per image obtained with Wildes' [2] and Daugman's [1] methods is 2.17 sec and 2.45 sec respectively, for the CITHV4 and CILV3 images of size 320×240 pixels. The proposed algorithm is more accurate and faster than Wildes [2] and Daugman [1] as it uses optimal edge maps with very few false edges, and the adaptive CHT for iris boundary detection improves it further.
The comparison of the results of the proposed algorithm with the published results is shown in Table 3. The published methods included in the comparison are chosen on the basis that they used the same databases for experimentation that we have taken. Moreover, Jan et al. [8],[9] show the highest accuracy among all the iris localization methods available in the literature for the CITHV4 and CILV3 databases. The symbol -- in the table shows that the corresponding information was not found in the literature. Table 3 shows that the proposed algorithm has the highest accuracy and the lowest time cost per image, which is due to the proposed edge map and the adaptive CHT used for pupil and limbic boundary detection respectively, as compared to the other methods in the table. In the proposed method, the original iris image is scaled down to half size, which was also done in the Jan et al. methods [8],[9] to speed up the processing. The image resizing by a scaling factor s = 0.5 not only reduces the number of edge pixels to half, but the number of radii taken in a CHT algorithm also becomes half.
Table 3. Comparison with published iris localization results

Method               | Accuracy (%) & Average time cost per image (sec)
                     | CITHV4        | CILV3
Jan et al. [8]       | 99.5 & 6.4    | 98 & 4.93
Jan et al. [9]       | 99.23 & 3.4   | 99.21 & 3.35
Jan et al. [6]       | --            | 99.05 & --
Ibrahim et al. [24]  | --            | 98.28 & --
Proposed             | 99.7 & 0.92   | 99.38 & 0.89
4. CONCLUSION
The proposed iris localization method is tolerant to the non-ideal issues and noise in iris images such as iris occlusions by the eyelids and eyelashes, lighting reflections, non-uniform illumination, eyeglasses, low contrast and eyebrow hair. Moreover, the experimental results show that the proposed method also improves iris localization in images that do not have reflection spots and non-uniform illumination but mainly have iris occlusions by the eyelids and eyelashes. The comparison with the well-known Wildes' approach [2], which is based on Canny edge detection plus the CHT, demonstrates that the introduction of the new edge map for pupil boundary detection and the adaptive CHT for limbic boundary detection makes the proposed iris localization method more accurate and fast. The performance results of the proposed algorithm are much better than both the popular Daugman's [1] and Wildes' [2] approaches. The comparison with some recent published results for CASIA databases also shows that the proposed method has improved performance. The proposed algorithm can be used for accurate iris segmentation in less constrained iris recognition systems.
ACKNOWLEDGEMENTS
We thankfully acknowledge the Chinese Academy of Sciences' Institute of Automation (CASIA) for providing us the iris images. We also thank Multimedia University for providing the MMU iris database.
REFERENCES
[1] J. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Trans. Pattern Anal. Mach. Intell., vol/issue: 15(11), pp. 1148–1161, 1993.
[2] R. P. Wildes, "Iris recognition: an emerging biometric technology," Proc. IEEE, vol/issue: 85(9), pp. 1348–1363, 1997.
[3] L. Ma, et al., "Personal identification based on iris texture analysis," IEEE Trans. Pattern Anal. Mach. Intell., vol/issue: 25(12), pp. 1519–1533, 2003.
[4] K. W. Bowyer, et al., "Image understanding for iris biometrics: A survey," Comput. Vis. Image Underst., vol/issue: 110(2), pp. 281–307, 2008.
[5] F. Jan, et al., "A dynamic non-circular iris localization technique for non-ideal data," Computers and Electrical Engineering, vol/issue: 40(8), pp. 215–226, 2014.
[6] J. Daugman, "How iris recognition works," IEEE Trans. Circuits Syst. Video Technol., vol/issue: 14(1), pp. 21–30, 2004.
[7] N. Wang, et al., "Toward accurate localization and high recognition performance for noisy iris images," Multimedia Tools Appl., vol/issue: 71(3), pp. 1411–1430, 2014.
[8] F. Jan, et al., "Iris localization in frontal eye images for less constrained iris recognition systems," Digit. Signal Process. A Rev. J., vol/issue: 22(6), pp. 971–986, 2012.
[9] F. Jan, et al., "Reliable iris localization using Hough transform, histogram-bisection, and eccentricity," Signal Processing, vol/issue: 93(1), pp. 230–241, 2013.
[10] P. Li, et al., "Robust and accurate iris segmentation in very noisy iris images," Image Vis. Comput., vol/issue: 28(2), pp. 246–253, 2010.
[11] H. Proença, "Iris recognition: On the segmentation of degraded images acquired in the visible wavelength," IEEE Trans. Pattern Anal. Mach. Intell., vol/issue: 32(8), pp. 1502–1516, 2010.
[12] S. A. Sahmoud and I. S. Abuhaiba, "Efficient iris segmentation method in unconstrained environments," Pattern Recognit., vol/issue: 46(12), pp. 3174–3185, 2013.
[13] S. Khalighi, et al., "Iris recognition using robust localization and nonsubsampled contourlet based features," J. Sign. Process. Syst., vol/issue: 81(1), pp. 111–128, 2015.
[14] J. Zuo and N. A. Schmid, "On a methodology for robust segmentation of nonideal iris images," IEEE Trans. Syst. Man, Cybern. Part B Cybern., vol/issue: 40(3), pp. 703–718, 2010.
[15] K. M. I. Hasan and M. A. Amin, "Dual iris matching for biometric identification," Signal, Image and Video Processing, vol/issue: 8(8), pp. 1605–1611, 2014.
[16] T. Marciniak, et al., "Selection of parameters in iris recognition system," Multimed. Tools Appl., vol/issue: 68(1), pp. 193–208, 2014.
[17] CASIA Iris Image Database, 2010. http://biometrics.idealtest.org/.
[18] E. R. Davies, "Computer and machine vision: Theory, algorithms, practicalities," Academic Press, 2012.
[19] S. J. K. Pedersen, "Circular Hough transform," Aalborg University, Vision, Graphics and Interactive Systems, 2007.
[20] H. K. Yuen, et al., "Comparative study of Hough Transform methods for circle finding," Image Vis. Comput., vol/issue: 8(1), pp. 71–77, 1990.
[21] R. C. Gonzalez, et al., "Digital image processing using MATLAB," Gatesmark Publishing, 2009.
[22] A. Radman, et al., "Fast and reliable iris segmentation algorithm," IET Image Process., vol/issue: 7(1), pp. 42–49, 2013.
[23] MMU Iris Image Database, 2004. http://pesona.mmu.edu.my/~ccteo/.
[24] M. T. Ibrahim, et al., "Iris localization using local histogram and other image statistics," Opt. Lasers Eng., vol/issue: 50(5), pp. 645–654, 2012.