International Journal of Electrical and Computer Engineering (IJECE)
Vol. 16, No. 2, April 2026, pp. 924∼944
ISSN: 2088-8708, DOI: 10.11591/ijece.v16i2.pp924-944
The ethics of artificial intelligence technology in academic work: assessing the line between assistance and plagiarism
Md. Owafeeuzzaman Patwary¹, Md. Reazul Islam¹, Abtahi Islam¹, Nur-e Sarjina Khan¹, Md. Abdullah-Al-Jubair¹, Md. Jakir Hossen², M. F. Mridha¹

¹Faculty of Science and Technology, American International University-Bangladesh (AIUB), Dhaka, Bangladesh
²Department of Robotics and Automation, Faculty of Engineering and Technology, Multimedia University, Melaka, Malaysia
Article Info

Article history:
Received Apr 17, 2025
Revised Jan 7, 2026
Accepted Jan 16, 2026

Keywords:
Academic integrity
AI dependency
AI ethics
AI in Academia
AI tools in education
Human-AI interaction
Responsible AI integration
ABSTRACT

The integration of artificial intelligence (AI) into academia has transformed educational practices and enhanced personalized learning and problem-solving capabilities. However, this raises significant ethical concerns regarding the balance between legitimate assistance and plagiarism. This study investigated public perceptions of AI in academic settings, focusing on its impact on effectiveness, dependency, and ethical considerations of AI use. A survey of 498 respondents from various educational roles was conducted, and the data were analyzed using SPSS for descriptive statistics, chi-square tests, and regression analyses. The results identified a significant correlation between people's educational roles and their interaction with AI tools (χ²(6) = 16.488, p = 0.036), reflecting the diverse patterns of interaction within the academic community. More frequent use of AI was linked to less dependency (β = −0.298, p < 0.001), contradicting the widespread belief of over-reliance on AI. Age and educational role had limited explanatory value in perception of AI dependency issues (R² = 0.033). The findings indicate a strong correlation between AI usage frequency and dependency levels, with increased exposure to AI fostering a more critical approach rather than a dependent one. Concerns regarding the unethical use of AI, inaccuracies in AI-generated content, and the need for clear institutional policies were also highlighted. This study underscores the importance of responsible AI integration, advocating for ethical frameworks and educational interventions to ensure that AI enhances learning without compromising academic integrity.

This is an open access article under the CC BY-SA license.
Corresponding Author:
Md. Jakir Hossen
Department of Robotics and Automation, Faculty of Engineering and Technology, Multimedia University
Melaka, Malaysia
Email: jakir.hossen@mmu.edu.my
1. INTRODUCTION

Widespread artificial intelligence (AI) adoption in schools has revolutionized education by allowing personalized learning routes and optimum intellectual stimulation for all types of learners [1], [2]. While the applications of AI provide such advantages as cost savings and accessibility, their use is hindered by dire ethical issues, mainly academic integrity, authorship, and the acceptable limit of assistance versus plagiarism [3]–[5]. Concerns have mounted as organizations fail to establish coherent policies for the responsible use of AI amidst inconsistent usage patterns and murky accountability measures [6], [7]. Parallel to this, there have been severe concerns regarding the integrity and equity of AI systems in learning processes. Evidence exists for biases in grading systems [8], invasion of students' autonomy and privacy by monitoring learning platforms [9], and general confusion regarding the ethics of AI-generated content [10], [11]. These are exacerbated by the rapid rate of AI development occurring before institutional policy interventions, leaving learners and teachers in the dark about what is acceptable practice [12], [13].

Journal homepage: http://ijece.iaescore.com

While the originality of ethical research has long been celebrated, empirical investigations of the frequency of AI use and self-reported dependence are scarce. Earlier studies have typically drawn upon anecdotal or discipline-specific case studies that do not reveal role-based adoption patterns of groups such as students, faculty, and administration [14], [15]. Furthermore, there are limited longitudinal studies examining the effect of repeated exposure to ubiquitous AI on cognitive independence, especially in learning environments with significant heterogeneity between science, technology, engineering and mathematics (STEM) and humanities fields [16].

The current study fills these gaps by examining AI use patterns and perceived dependence among 498 working academics. Using a mixed-methods design, the current study examined whether a higher frequency of use is associated with higher critical literacy or passive dependence. Contrary to the widespread belief that frequent AI use undermines student agency, this study examines the hypothesis that a higher frequency of use translates into more sophisticated and strategic tool use. Statistical procedures, such as chi-square analysis and regression modeling, were employed to evaluate the impact of variables such as age and job on dependency perceptions. The results support human-led AI literacy practices that safeguard academic integrity and enhance the educational impact of AI. By combining quantitative data and qualitative understanding, this study provides evidence-based guidance for role-specific AI training planning, informs institution-level policy, and establishes a more explicit distinction between valid assistance and academic fraud in the age of generative AI.

The paper is organized as follows: Section 2 presents the objectives of the investigation. Section 3 reviews the relevant literature on AI ethics in academic settings. Section 4 describes the methodology employed in this study. Section 5 presents the data analysis technique. Section 6 reports the results of this study, based on both quantitative and qualitative findings. Section 7 discusses the implications of the results, including their limitations and directions for future research. Section 8 concludes the paper with key findings and recommendations.
2. OBJECTIVES

This study analyzes the moral aspects of AI in education in terms of its effects on academic integrity, dependency, and efficiency. This study discusses the employment of AI tools by students, instructors, and administrators for their respective roles. An essential part of the research is whether repeated use results in autonomy or harmful reliance on AI systems. This research also discusses ethical issues such as plagiarism, academic dishonesty, and errors in AI work. Moreover, it examines how demographics (role, age, and AI literacy) influence the attitude towards embracing AI. Based on the findings, this study recommends ethical practices for integrating AI into academia, with a focus on AI literacy programs and institutional policies to maintain academic integrity and ensure worthwhile AI use. This study considered five key aims, integrating quantitative examination with qualitative discourse to create a well-rounded knowledge of AI among academic institutions.

a. Patterns of adoption quantified over academic functions: This study initially quantified differences in the adoption of AI tools across distinct groups of academic stakeholders [15]. It also identified statistically significant usage rates of administrators, faculty, and students based on descriptive statistics and chi-square (χ²) testing.

b. Explored the frequency-dependency relationship: This study explored the complicated relationship between dependency and frequency of AI use [17]. Using linear regression modeling, it also tested the common assumption that over-reliance arises from high frequency, with consideration of the alternative scenario that more frequent exposure may yield more mature and independent patterns of use.

c. Measured the impact of demographics: This study tested the explanatory power of key demographic variables, i.e., educational role and age, on stakeholders' perceptions of dependency on AI [18]. It also used multivariate regression to ascertain if these variables were strong predictors of how individuals perceived using AI tools in their study.

d. Dominant ethical issues and concerns identified: This study identified dominant ethical concerns and issues of the academic community through inductive thematic analysis of open-ended survey questions [9]. This qualitative analysis attempted to build a sense of multifaceted positions on academic integrity, plagiarism, and the perceived truthfulness of AI-generated information.

e. Evidence-based recommendations developed: Based on the findings of the mixed-methods analysis, this study developed a set of evidence-based recommendations for schools. It is intended specifically to guide the development of role-specific training programs for AI and straightforward, action-oriented policies for incorporating the ethical use of AI into the curriculum.
3. LITERATURE REVIEW

This literature review collates emerging conversations surrounding the transformative potential of AI in academia, with a particular focus on the ethical implications of its implementation in research, writing, and knowledge sharing. It examined AI's two-faced role as both an accelerator of academic productivity and a source of trouble for academic integrity, data privacy, and evaluation fairness.

3.1. Setting ethical standards

Scholars rated frameworks as the most effective means of controlling AI use in academia. Ashok et al. [12] created a cross-industry ethical framework that highlights transparency, accountability, and human oversight and argued that such principles avoid misuse in academic settings. They discussed how algorithmic auditing reduces bias in automarking. Castelló-Sirvent et al. [6] responded with a campus-wide plan calling for institution-wide ethics boards and AI ethics integration into the curriculum, citing decentralized policies that only exacerbate enforcement disparity. Their research cited case studies at European universities in which broken guidelines doubled plagiarism cases by 22%. Tang et al. [19] proposed journal-level guidelines for generative AI, calling for authorial responsibility and straightforward AI tool disclosure. They examined 350 articles on science education and found that 37% of the manuscripts had undisclosed AI contributions. Mujtaba et al. [20] dispelled myths of AI substituting human judgment, pointing out that ethical dilemmas occur when tools circumvent required analysis. Their survey of business students reported that 64% confused AI editing with original content in the absence of guidelines. Farooqi et al. [7] rigorously reviewed integration challenges and concluded that the shortage of sector-specific guidelines was the root cause of ethical breaches. They suggested adaptive frameworks for vocations instead of abstract fields.
3.2. Algorithmic bias and data privacy

Studies have uncovered the risks of embedded bias and monitoring within AI systems. Santoni de Sio [8] identified algorithmic bias in admissions and grading programs, demonstrating training data biases that increased socio-economic disparities. Their simulation identified AI admissions software biased in favor of applicants from affluent schools by 19%. Jacob et al. [21] replicated these results in medical school, demonstrating that biased clinical AI software exaggerated diagnosis biases among student doctors. They promoted bias audits and multivariate dataset curation in the name of equity. Dourish and Bell [9] made an early critique of ubiquitous computing, cautioning that the surveillance of students using learning management systems (LMS) threatened student autonomy and facilitated the commodification of data. Their ethnographic research foreshadows today's privacy tensions in AI-monitored examinations. Polat et al. [14] associated biased leadership algorithms with institutional biases through bibliometric analysis. They established that AI-based resource distribution in schools exacerbated gender disparities in STEM enrollment.
3.3.
Ab
use
of
AI
in
Academia
Quantitati
v
e
research
has
re
v
ealed
plagiarism
risks
and
detection
f
ailures.
Perkins
[3]
reported
ram-
pant
lar
ge
language
models
(LLM)
aided
plagiarism,
with
68%
of
students
using
C
hatGPT
unf
airly
under
ambiguous
policies.
His
re
vie
w
of
1,200
assignments
re
v
ealed
that
modern
detectors
missed
45%
of
AI-
paraphrased
content.
Fyfe
[5]
empirically
conrmed
AI-authored
essays,
sho
wing
that
detectors
f
ailed
to
identify
structural
plagiarism
in
72%
of
cases.
He
adv
ocated
for
pedagogical
changes
to
w
ards
process-based
e
xaminations,
such
as
oral
defenses.
Hutson
[4]
re
n
a
med
plagiarism
as
“attrib
utional
ne
gligence”,
contending
that
unethical
use
is
only
present
when
users
conceal
AI
inputs.
His
semantic
analysis
diseng
aged
collaborati
v
e
drafting
from
fraudulent
authorship.
Miao
et
al.
[22]
surv
e
yed
academia
in
nephrology
and
found
that
peer
re
vie
wers
were
unable
to
detect
statistical
manipulation
generated
by
AI
58%
of
the
time.
The
y
designed
a
disclosure
proces
s
for
the
medical
journals.
Chen
[13]
associated
relax
ed
conference
policies
with
escalating
misconduct,
nding
a
31%
rise
in
AI-a
v
oiding
plagiarism
after
the
pandemic.
3.4. AI literacy and policy development

Successful governance hinges on stakeholders' education. Zeb et al. [23] correlated library-provided AI literacy training with 52% greater rates of students' ethical adoption. Their longitudinal study demonstrated that workshops reduced tool misuse by 41%. Mustofa et al. [18] simulated acceptance factors, demonstrating that subjective norms (e.g., peers' attitudes) and tool credibility outweigh technical convenience for ethical use. Their TAM extension questionnaire surveyed 800 Indonesian university students. Falebita and Kok [16] recognized STEM cohorts' technology readiness gaps and suggested policy-incentivized certifications to institute standardized competencies. Structural equation modeling verified that self-efficacy gaps prohibited responsible use. Khalifa and Albadawy [10] placed AI as a work-enhancer when it is paired with literacy education, with researchers utilizing guided tools that came up with 30% more new conclusions. Yim and Su [24] cautioned that there were no age-appropriate ethics modules in K-12 AI programs, and therefore, plagiarism became a threat in primary education.
3.5. Social and ethical implications of AI

Wider social effects were critically explored. Al-Zahrani and Alasmari [11] attributed dependence on AI to lessened critical thinking, indicating that overuse reduced complex problem-solving among undergrads by 30%. Their transdisciplinary investigation spoke in favor of balanced tool use. Gerlich [15] characterized cognitive offloading as a double-edged sword, where gains in efficiency would undermine metacognitive potential in the absence of reflection-based pedagogy; he surveyed cognitive psychology experiments in favor of this trade-off. Cukurova [2] envisioned hybrid frameworks for intelligence that combined AI analysis with human direction to preserve ethical enrichment. In their architecture, co-adaptive learning systems take center stage. Carayannis et al. [25] applied ethics to SME upskilling, demonstrating that unregulated AI training software worsened labour disparities. Prather et al. [26] deconstructed the hype over generative AI, uncovering overblown advantages hiding ethical risks in 70% of the EdTech hype.
3.6. Learning and creativity AI tools

Evidence proves the creativity advantage of ethical regulation. Iqbal et al. [1] revealed that generative AI improved preservice teachers' metacognition by 40% through group problem-solving. Controlled trials require close monitoring to avoid addiction. Chen [27] revealed that composition software used by music students generated greater melodic creativity but had 25% lower theoretical knowledge when left unsupervised. Dibek et al. [28] meta-analyzed 62 studies, concluding AI elevated higher-order thinking only in tasks demanding creative synthesis. Effect sizes were strongest (+0.78) when AI supplemented, not replaced, the cognitive effort. Mohebbi [17] demonstrated that AI language tools promote learner autonomy but decrease grammatical correctness by 18% in the absence of a feedback mechanism.
3.7. Research gaps and future directions

While there is an increasing body of research addressing the ethical considerations of AI in education, the existing literature tends to offer generalized ethical frameworks that are not sensitive to specific contexts. However, as AI is introduced into different fields of study, it becomes clear that ethical considerations and use case areas differ within different disciplines. This distinction indicates the need for more contextualized ethical guidelines that have the flexibility to address the difficulties associated with separate fields and address particular needs. Table 1 shows the summary of existing works on AI ethics in education:

a. Discipline-specific ethical guidelines: Current standards (e.g., Ashok et al. [12]; Castelló-Sirvent et al. [6]) remain too general, failing to keep up with discipline-specific nuances. AI use in creative writing (Chen [27]), for instance, demands different ethical standards than STEM data analysis (Falebita and Kok [16]), but no adaptive frameworks exist to address these disparities. Future studies should create field-specific guidelines with educators in mind, following up on how standards of attribution vary across fields.

b. Longitudinal cognitive effect: Short-term studies reveal cognitive deficits in critical thinking (Al-Zahrani and Alasmari [11]; Gerlich [15]), but the long-term impact of AI-facilitated learning on metacognition and creativity is unclear. Dibek et al. [28] referred to this as a "black box" for educational psychology, calling for cohort studies over a decade exploring the impact of early exposure to AI on graduates' professional ethics and problem-solving ability.

c. Scalable bias mitigation: Algorithmic discrimination solutions (Santoni de Sio [8]; Jacob et al. [21]) remain only at small-scale trials. Bias audits by Jacob et al., although cost-effective in clinical training simulations, are impractical for institution-level deployment, considering computational expenses. Research priorities should be placed on developing low-cost, open-source, bias-detecting tools that are deployable across under-resourced institutions globally.

d. Global equity in policy making: Existing ethics are Anglo-European dominant (Chen [13]; Farooqi et al. [7]), neglecting infrastructural and cultural constraints in Global South schooling. Farooqi et al. determined that 78% of suggested AI governance frameworks assume common high-bandwidth connectivity, making them ineffective in environments with intermittent connectivity. Future studies should employ participatory design practices that prioritize the voices of underrepresented educational contexts.

e. Next-generation assessment models: AI-paraphrased writing is beyond the scope of plagiarism detection (Fyfe [5]; Perkins [3]), and new choices are under-explored. Hutson [4] promoted "process-oriented evaluations," but big models for ideation genesis tracing (e.g., blockchain-documented drafting histories) require interdisciplinary collaboration between pedagogues and AI engineers.
Table 1. Summary of existing works on AI ethics in education

Author(s) | Year | Key contribution | Identified research gap
Ashok et al. [12] | 2022 | Proposed foundational ethical framework identifying 14 key AI ethics principles (intelligibility, accountability, fairness, privacy) | Critical gap in practical implementation guidance
Perkins [3] | 2023 | Redefined academic integrity breach as lack of transparency in AI use rather than usage itself | Need for institutional policies addressing transparency requirements
Castelló-Sirvent et al. [6] | 2024 | Created 3-level roadmap (Micro/Meso/Macro) for ethical AI deployment in universities | Lack of coherent institutional vision for AI integration
Fyfe [5] | 2023 | Developed "proactive cheating" pedagogy to foster critical AI literacy | Need to move beyond plagiarism detection toward active engagement
Tang et al. [19] | 2024 | Established concrete guidelines for generative AI use in academic publishing | Practical implementation gaps in authorship/copyright frameworks
Gerlich [15] | 2025 | Quantified cognitive offloading as mediator between AI use and critical thinking decline | Empirical evidence gap regarding analytical skill erosion
Cukurova [2] | 2025 | Proposed "hybrid intelligence" model (externalization/internalization/extension) | Oversimplified tool-based conceptualization of AI
Jacob et al. [21] | 2025 | Introduced "AI for IMPACTS" framework for clinical tool evaluation | Need for holistic assessment beyond technical accuracy
Mustofa et al. [18] | 2025 | Extended TAM model showing ethics/trust > ease-of-use in AI adoption | Policy focus misalignment with student adoption drivers
Hutson [4] | 2024 | Called for redefinition of plagiarism/originality concepts | Curricular misalignment with AI-assisted writing realities
4. METHODOLOGY

This study explored the ethical issues involved in using AI in academic work, specifically the balance between assistance and plagiarism. This section elucidates the research design, participant selection, data collection methods, and procedures followed in analyzing the collected data and ethical considerations. Figure 1 shows a flowchart of the study methodology and mixed-methods analytical process.
Figure 1. Overview of the study methodology and mixed-methods analytical process
4.1. Research design

The quantitative portion of the study employed a survey-based design to evaluate the prevalence and trends of AI tool use in academic environments. This process entailed the collection of quantitative data through closed-ended question designs, allowing for statistical analysis that revealed the correlations and patterns. The qualitative aspect, embedded in the same survey instrument, involved the invitation of open-ended questions to collect rich, descriptive data on participants' experiences, perceptions, and concerns regarding AI in academia. The qualitative data helped to provide context and depth to the quantitative results, enabling a more nuanced understanding.
4.2.
P
articipants
P
articipants
in
the
study
included
a
sample
of
498
(N
=
498)
students
(at
v
arious
le
v
els),
teach-
ers/f
aculty
members,
administrati
v
e
emplo
yees,
and
other
educational
roles,
respecti
v
ely
.
P
articipants
were
recruited
using
a
con
v
enience
sampling
approach,
allo
wing
for
a
wide
representation
of
perspecti
v
es
in
an
educational
community
.
Be
yond
their
primary
function
as
educational
en
vironment
actors
,
the
y
also
pro
vided
demographic
data,
including
age
and
gender
,
to
undertak
e
a
breakdo
wn
of
ho
w
these
v
ariables
af
fected
their
use
of
and
perspecti
v
es
on
AI.
Such
information
w
ould
enable
an
analysis
of
ho
w
these
may
inuence
AI-related
beliefs
and
acti
vities
concerning
academic
w
ork.
4.3.
Data
collection
Data
we
re
collected
using
an
online
surv
e
y
platform
called
Google
F
orms.
The
surv
e
y
instrument
w
as
specically
de
v
eloped
and
pilot-tested
with
a
small
subset
of
the
population
of
interest
to
ensure
appropriate
w
ording,
v
alidity
,
and
reliability
of
the
questions.
The
surv
e
y
comprised
tw
o
major
components:
a.
Quantitati
v
e
section:
Questions
manipulated
based
on
Lik
ert
scales
to
dene
participant
use
of
AI
tools,
signs
of
dependenc
y
on
AI,
and
perceptions
of
se
v
eral
ethical
considerations
associated
wi
th
AI
in
the
academic
w
orld.
b
.
Qualitati
v
e
section:
This
section
included
open-ended
questions
that
aimed
to
obtain
in-depth
responses
re
g
arding
the
participants’
e
xperiences
with
AI
tools,
their
opinions
on
the
adv
antages
and
disadv
antages
of
AI
in
the
education
sector
,
as
well
as
the
proper
ethical
limits
bet
ween
acceptable
and
plagiarized
AI
assistance.
The
ethics
of
articial
intellig
ence
tec
hnolo
gy
in
academic
work:
...
(Md.
Owafeeuzzaman
P
atwary)
Evaluation Warning : The document was created with Spire.PDF for Python.
930
❒
ISSN:
2088-8708
In
addition,
the
surv
e
y
g
athered
demographic
data.
P
articipants
were
briefed
on
the
objecti
v
e
of
the
study
,
that
their
participation
w
as
v
oluntary
,
and
that
anon
ymity
and
condentiality
w
ould
be
preserv
ed.
4.4.
Data
analysis
Data
were
analyzed
using
SPSS
softw
are
for
quantitati
v
e
and
thematic
analysis
for
the
q
ua
litati
v
e
analysis.
Quantitati
v
e
analysis:
descripti
v
e
statistics
(means,
standard
de
viations,
frequenc
ies)
were
estimated
for
the
demographic
characteristics
of
the
sample
and
the
general
patterns
of
AI
use.
The
follo
wing
statistical
tests
were
performed
to
assess
the
relationships
between
v
ariables:
T
w
o
chi-square
tests
of
independence
were
conducted
to
e
xam
ine
the
relationship
between
educational
purpose
and
AI
tool
utilizati
o
n.
Ho
w
oft
en
is
AI
used,
and
ho
w
dependent
are
the
y
on
AI?
Re
gression
analysis
w
as
used
to
model
the
relationship
between
AI
usage
(independent
v
ariable)
and
dependenc
y
(dependent
v
ariable).
Perceptions
of
dependenc
y
problems
of
AI
(dependent
v
ariable)
according
to
age
and
educational
role
(independent
v
ariables).
All
tests
were
tw
o-
tailed
with
a
signicance
threshold
of
p
≤
0.05.
Qualitati
v
e
analysis
of
open-ended
responses
w
as
thematically
analyzed.
Specically
,
this
included
a
systematic
process
of
coding
the
data
to
unco
v
er
recurrent
themes,
patterns,
and
insights.
This
is
because
the
y
relate
to
participants’
e
xperiences
and
perceptions
of
AI
within
the
academic
conte
xt.
Themes
were
identied
and
interpreted
to
enrich
our
understanding
of
the
quantitati
v
e
results.
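The regression step described above can be sketched in Python. This is a minimal illustration only: the data below are synthetic and seeded (the paper's own model was fit in SPSS on the survey responses), and the variable names and coefficient magnitudes are assumptions for demonstration.

```python
import numpy as np
from scipy.stats import linregress

# Synthetic stand-in for the survey data: 498 respondents,
# AI-usage frequency on a 1-5 Likert scale.
rng = np.random.default_rng(42)
frequency = rng.integers(1, 6, size=498).astype(float)

# Simulate the direction of effect the study reports: more
# frequent use, lower self-reported dependency, plus noise.
dependency = 4.0 - 0.3 * frequency + rng.normal(0, 0.5, size=498)

# Simple linear regression: dependency ~ frequency.
result = linregress(frequency, dependency)
print(f"beta = {result.slope:.3f}, p = {result.pvalue:.3g}, "
      f"R^2 = {result.rvalue**2:.3f}")
```

A negative slope here corresponds to the direction of effect reported in the abstract (β = −0.298, p < 0.001); a two-tailed p-value below 0.05 would lead to rejecting the null hypothesis of no relationship.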
4.5. Ethical considerations

This study was conducted with the utmost respect for the ethical treatment of all participants and followed the highest standards for research involving human subjects. The following measures were taken to ensure ethical conduct throughout the study:

a. Informed consent: Before participation, all subjects received detailed information about the study, including its purpose, procedures, potential risks and benefits, the voluntary nature of participation, and their right to withdraw at any time without consequence. The participants provided informed consent after they had the opportunity to ask questions about the study.

b. Anonymity and confidentiality: Participants' anonymity and confidentiality were strictly maintained. No personal identifying information (e.g., names, email addresses, or institutional affiliations) was collected or linked to individual responses. The data were aggregated and analyzed at the group level to prevent the identification of individual participants.

c. Data security: Collected data were stored on password-protected systems, accessible only to authorized research team members. Additional protections include the encryption and secure storage of physical records in locked cabinets. Data were retained for a specified period and securely destroyed following standard data disposal protocols.

d. Voluntary participation: Participation in the study was entirely voluntary. Participants were informed that they could decline to answer any question or withdraw from the study at any time without penalty. This process was repeated throughout the data-collection process.

e. Minimization of harm: The study was designed to minimize potential risks. The survey questions were reviewed to avoid sensitive or potentially triggering content. The participants were provided with the contact details of the research team regarding their concerns or questions.

f. Use and sharing of data: Participants were informed about the intended use of their data, including research analysis and academic publication. If any data sharing is planned, it will be performed in an anonymized form under strict protection protocols.
5.
D
A
T
A
AN
AL
YSIS
Quantitati
v
e
data
from
498
education
stak
eholders
were
analyzed
using
SPSS
(v28)
with
α
=
0
.
05
.
Statistical
analysis
entailed
descripti
v
e
analysis,
chi-square
tests
of
independence,
and
linear
re
gression
mod-
eling
to
e
xamine
the
k
e
y
relationships.
5.1.
T
est
of
independence
The
chi-square
test
of
independence
is
commonly
used
to
determine
whether
there
is
a
signicant
relationship
between
tw
o
v
ariables
and
the
nature
of
the
rel
ationship.
This
test
is
particularly
useful
for
under
-
standing
the
types
of
non-numeric
data
or
the
relationships
between
them,
such
as
user
groups
and
technology
Int
J
Elec
&
Comp
Eng,
V
ol.
16,
No.
2,
April
2026:
924-944
Evaluation Warning : The document was created with Spire.PDF for Python.
Int
J
Elec
&
Comp
Eng
ISSN:
2088-8708
❒
931
usage
patterns.
5.1.1.
Relationship
between
educational
r
ole
and
AI
adoption
Hypothesis:
•
H
0
:
No
relationship
between
educational
roles
and
the
adoption
of
AI
tools.
•
H
1
:
A
relationship
e
xists
between
educational
roles
and
the
adoption
of
AI
tools.
A
chi-square
test
re
v
ealed
a
statistically
signicant
association
between
the
educational
role
and
AI
adoption:
χ
2
(6)
=
16
.
488
,
p
=
0
.
036
The
rejection
of
the
null
h
ypothesis
(
H
0
)
suggests
role-dependent
adoption
patterns.
Students
represented
the
majority
of
users
(98%
of
adopters),
with
adoption
being
much
lo
wer
for
f
aculty
(0.2%)
and
administrators
(0.4%).
T
o
e
xplore
ho
w
dif
ferent
educational
roles
(students,
teachers,
and
administrators)
inuence
the
adop-
tion
of
AI
tools,
a
chi-square
test
of
independence
w
as
conducted.
Figure
2
and
the
corresponding
bar
chart
in
Figure
3
depict
the
distrib
ution
and
statistical
association.
Chi-Square
v
alue
(
χ
2
)
=
16
.
488
De
grees
of
Freedom
(df)
=
6
p
-v
alue
=
0
.
036
Since
the
p-v
alue
is
less
than
0.05,
reject
the
null
h
ypothesis
H
0
and
accept
the
alternati
v
e
h
ypothesis
H
1
.
This
result
indicates
a
statistically
signicant
relationship
between
the
educational
role
and
AI
adoption.
Figure 3 shows that students represent the largest group of AI tool users, followed by teachers and administrators. This suggests that while AI tools are gaining traction across the board, their adoption is not uniform. Students are more active in adopting AI technologies, likely because they are familiar with educational workloads and digital devices. In contrast, teachers and administrators may adopt such tools more cautiously or selectively. This insight underscores the importance of designing role-specific AI literacy initiatives to promote equal and effective integration.
Figure 2. Association between educational roles and AI adoption based on chi-square test results
5.1.2. Frequency of AI use vs. level of dependency
Hypotheses:
• H0: There is no connection between how often AI is used and how much it is depended upon.
• H1: A connection exists between the frequency of AI use and the degree of reliance on AI.
A very strong association existed between the frequency of use and the level of dependency:

χ²(16) = 531.012, p < 0.001
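The paper reports the statistic but no effect size. For context, a Cramér's V can be derived from the reported values; the 5×5 table shape is inferred from df = 16, and the resulting V is our illustration, not a figure reported by the authors:

```python
# Illustrative effect size (not reported in the paper): Cramér's V from
# the reported chi2(16) = 531.012 with n = 498. A df of 16 = (r-1)(c-1)
# is consistent with a 5x5 cross-tabulation, so min(r-1, c-1) = 4 is an
# assumption here.
import math

chi2 = 531.012
n = 498
min_dim = 4  # assumed min(rows - 1, cols - 1) for a 5x5 table

cramers_v = math.sqrt(chi2 / (n * min_dim))  # roughly 0.52
```

By common conventions a V of this size would indicate a strong association, consistent with the "very strong" characterization in the text.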
The ethics of artificial intelligence technology in academic work: ... (Md. Owafeeuzzaman Patwary)
The strong relationship (H0 rejected) revealed polarized dependency reporting: 46.6% reported "significant dependency," while an equal 46.6% reported being "somewhat" dependent.
This test examined whether frequent use of AI tools was associated with user dependency. As illustrated in Figure 4 and the supporting distribution in Figure 5:
Chi-square value (χ²) = 531.012
p-value < 0.001
The extremely low p-value (< 0.001) confirms a highly significant association. Accordingly, this study accepts H1 and rejects the null hypothesis H0. This suggests that frequency of use and perceived dependence are interconnected.
Interestingly, Figure 5 shows that respondents with both high and low use reported dependency. However, as explored in Section 5.3.1, this relationship is not necessarily linear, nor does it simply increase with use. This result warns institutions against assuming that more frequent use automatically leads to excessive dependence; instead, it highlights the complexity of the psychological and behavioral dynamics surrounding AI engagement.
Figure 3. Bar chart showing adoption of AI among various educational roles
Figure 4. Correlation between AI adoption and student dependency problems: chi-square test
Figure 5. Distribution of AI adoption in relation to reported dependency issues
5.2. Frequency analysis
To quantitatively synthesize the survey data, a frequency analysis was conducted to capture the respondents' demographics, AI usage patterns, and attitudes toward AI's role and influence within educational settings. This analysis serves to put the inferential statistical findings reported later in this paper into perspective. The response pattern across the central survey items is described below.
5.2.1. Respondents' frequency distribution by education role
The demographics of the 498 participants are shown in Figure 6. Most respondents were students, comprising 98.0% (n=488) of the total sample. The remaining participants held other academic roles: parents at 1.0% (n=5), administrators at 0.4% (n=2), respondents who identified as both teachers and parents at 0.4% (n=2), and teachers at 0.2% (n=1). This distribution indicates that the results mostly reflect students' views on AI in education.
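The percentages above follow directly from the reported counts. A minimal sketch of the computation, using the counts from the text:

```python
# Frequency distribution behind Figure 6, computed from the counts
# reported in the text (n = 498 respondents in total).
from collections import Counter

counts = Counter({
    "Student": 488,
    "Parent": 5,
    "Administrator": 2,
    "Teacher & Parent": 2,
    "Teacher": 1,
})
total = sum(counts.values())  # 498

# Percentage of the sample for each role, rounded to one decimal place
percentages = {role: round(100 * k / total, 1) for role, k in counts.items()}
# {"Student": 98.0, "Parent": 1.0, "Administrator": 0.4, ...}
```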
Figure 6. Frequency distribution of respondents by education role (i.e., student, teacher, administrator)
5.2.2. Perceived frequency distribution of AI-induced issues of dependency
Figure 7 shows respondents' perceptions of whether AI tools cause dependency among students. A significant majority of participants confirmed some level of dependency, with responses evenly split between "Somewhat" (46.6%, n=232) and "Yes, significantly" (46.6%, n=232). This equates to a cumulative 93.2% who perceive an issue of dependency. A very small minority were "Neutral" on the question (6.4%, n=32), while just 0.4% (n=2) of respondents answered "No, not at all."
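The cumulative figure can be checked directly from the reported counts:

```python
# Verifying the 93.2% cumulative dependency figure from the reported
# response counts (n = 498).
responses = {
    "Yes, significantly": 232,
    "Somewhat": 232,
    "Neutral": 32,
    "No, not at all": 2,
}
total = sum(responses.values())  # 498

dependency_share = responses["Yes, significantly"] + responses["Somewhat"]
cumulative_pct = round(100 * dependency_share / total, 1)  # 93.2
```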