Articles

Access the latest knowledge in applied science, electrical engineering, computer science and information technology, education, and health.

29,393 Article Results

An information retrieval system for Indian legal documents

10.11591/ijece.v16i1.pp246-255
Rasmi Rani Dhala , A V S Pavan Kumar , Soumya Priyadarsini Panda
In this work, a legal document retrieval system is presented that maps user queries to the appropriate legal sub-domains and quickly extracts the key documents containing the required information. To develop the system, a document repository is prepared comprising documents and case study reports from different Indian legal matters of the last five years. A legal sub-domain classification technique using a deep neural network (DNN) model is used to obtain the relevance of user queries to the respective legal sub-domains for quick information retrieval. A query-document relevance (QDR) score-based technique is presented to rank the output documents in relation to the query terms. The model is evaluated through several experiments under different contexts, achieving an average precision score of 0.98 and a recall score of 0.97. Compared against other retrieval models, the presented model achieves a 13% and 12% increase in average accuracy with respect to precision and recall measures respectively, demonstrating its strength over traditional models.
Volume: 16
Issue: 1
Page: 246-255
Publish at: 2026-02-01
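
The abstract above does not give the exact QDR formula, so the sketch below uses a standard TF-IDF-weighted overlap as an illustrative query-document relevance score; the function name `qdr_scores` and the smoothing constants are assumptions, not the paper's method.

```python
import math
from collections import Counter

def qdr_scores(query, docs):
    """Rank documents by a TF-IDF-weighted overlap with the query terms.

    Illustrative query-document relevance (QDR) scoring only; the paper's
    exact formula is not given in the abstract.
    """
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()                       # document frequency per term
    for toks in tokenized:
        for term in set(toks):
            df[term] += 1
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = 0.0
        for term in query.lower().split():
            if term in tf:
                idf = math.log((n + 1) / (df[term] + 1)) + 1
                score += (tf[term] / len(toks)) * idf
        scores.append(score)
    # indices sorted by descending relevance
    return sorted(range(n), key=lambda i: -scores[i])
```

Documents sharing more (and rarer) query terms rank first; documents with no query terms sink to the bottom.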

Hybrid neurocontrol of irrigation of field agricultural crops

10.11591/ijece.v16i1.pp206-215
Aleksandr S. Kabildjanov , Aziz M. Usmanov , Dilnoza B. Yadgarova
This study investigates a conceptual framework for a hybrid intelligent control system designed to optimize the irrigation of field crops via fertigation technologies. The research aims to enhance irrigation management by improving the prediction, optimization, and regulation processes. This is achieved by combining modern computational intelligence with advanced deep learning-based neural networks, evolutionary optimization algorithms, and the adaptive neuro-fuzzy technique. The hybrid control framework consists of interconnected monitoring and decision-making modules, including subsystems for evaluating soil conditions, monitoring plant growth and physiological development, assessing environmental and climatic conditions, and measuring the intensity of solar radiation. Additional subsystems address the preparation of the fertigation mixture and the control of intelligent decision-making processes. The overall control policy is rendered through a predictive neurocontrol approach executed on a computer platform. A recurrent deep neural model of the long short-term memory (LSTM) type predicts crop growth and development parameters by exploiting temporal dependencies in agricultural processes. Optimization in the predictive control feedback loop is accomplished adaptively through genetic algorithms.
Volume: 16
Issue: 1
Page: 206-215
Publish at: 2026-02-01
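
The abstract does not specify the genetic algorithm's encoding or operators, so the sketch below is a toy real-valued GA (tournament selection, Gaussian mutation) of the kind that could sit in the predictive control loop; the function name, population size, and mutation scale are all illustrative assumptions.

```python
import random

def genetic_minimize(fitness, bounds, pop_size=20, generations=40, seed=0):
    """Minimal real-valued genetic algorithm: tournament pick, Gaussian mutate.

    Toy stand-in for the evolutionary optimizer in the predictive control
    feedback; not the paper's actual configuration.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)                 # tournament of two
            parent = a if fitness(a) < fitness(b) else b
            child = parent + rng.gauss(0, 0.1 * (hi - lo))  # mutation
            nxt.append(min(hi, max(lo, child)))       # clip to bounds
        pop = nxt
    return min(pop, key=fitness)
```

Minimizing a simple quadratic shows the population drifting toward the optimum over successive generations.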

Deep learning architecture for detection of fetal heart anomalies

10.11591/ijece.v16i1.pp414-422
Nusrat Jawed Iqbal Ansari , Maniroja M. Edinburgh , Nikita Nikita
Research has demonstrated that artificial intelligence (AI) techniques have shown tremendous potential over the past decade for analyzing and detecting anomalies in the fetal heart during ultrasound tests. Despite this potential, the adoption of these algorithms remains limited due to concerns over patient privacy, the scarcity of large, well-annotated datasets, and challenges in achieving high accuracy. This research aims to overcome these limitations by proposing an optimal solution. Two methods, deterministic image augmentation and a Wasserstein generative adversarial network with gradient penalty (WGAN-GP), showcase the framework's capacity to seamlessly and effectively expand the original dataset by 14 and 17 times respectively, thereby tackling the problem of data scarcity. An annotation tool is used to precisely categorize anomalies identified in the echocardiogram dataset, and the annotated data is segmented to highlight the region of interest. Nine distinct fetal heart anomalies are identified, compared with the fewer covered in existing research. This study also investigates the state-of-the-art architectures and optimization techniques used in deep learning models. The results clearly indicate that the ResNet-101 model demonstrated superior precision, at 99.15%. To ensure the reliability of the proposed model, its performance underwent thorough evaluation and validation by certified gynecologists and fetal medicine specialists.
Volume: 16
Issue: 1
Page: 414-422
Publish at: 2026-02-01
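
Deterministic augmentation of the kind described above can be sketched with simple geometric transforms on an image stored as a list of rows; the two transforms below (horizontal flip, 90-degree rotation) are illustrative picks, and the paper's WGAN-GP synthesis is not reproduced here.

```python
def hflip(img):
    """Horizontal flip of an image given as a list of rows."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate an image 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def augment(images):
    """Expand a dataset with deterministic transforms.

    Each source image yields three samples: the original, its mirror,
    and its rotation. Chaining more transforms multiplies the dataset
    further, which is how large expansion factors are reached.
    """
    out = []
    for img in images:
        out += [img, hflip(img), rot90(img)]
    return out
```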

Autonomous mobile robot implementation for final assembly material delivery system

10.11591/ijece.v16i1.pp158-173
Ahmad Riyad Firdaus , Imam Sholihuddin , Fania Putri Hutasoit , Agus Naba , Ika Karlina Laila Nur Suciningtyas
This study presents the development and implementation of an autonomous mobile robot (AMR) system for material delivery in a final assembly environment. The AMR replaces conventional transport methods by autonomously moving trolleys between the warehouse, production stations, and recycling areas, thereby reducing human intervention in repetitive logistics tasks. The proposed system integrates a laser-SLAM navigation approach, customized trolley design, RoboShop programming, and robot dispatch system coordination, enabling real-time route planning, obstacle detection, and material scheduling. Experimental validation demonstrated high accuracy in path following, with root mean square error values ranging from 0.001 to 0.020 meters. The AMR achieved an average travel distance of 118.81 meters and a cycle time of 566.90 seconds across three final assembly stations. The overall efficiency reached 57%, primarily due to reduced idle time and optimized material replenishment. These results confirm the feasibility of AMR deployment as a scalable and flexible intralogistics solution, supporting the transition toward Industry 4.0 smart manufacturing systems.
Volume: 16
Issue: 1
Page: 158-173
Publish at: 2026-02-01
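
The path-following RMSE reported above can be computed from paired planned and actual 2-D waypoints as sketched below; the function name and waypoint representation are assumptions for illustration.

```python
import math

def path_rmse(planned, actual):
    """Root mean square error between planned and actual 2-D waypoints (meters)."""
    if len(planned) != len(actual):
        raise ValueError("paths must have the same number of waypoints")
    sq = [(px - ax) ** 2 + (py - ay) ** 2
          for (px, py), (ax, ay) in zip(planned, actual)]
    return math.sqrt(sum(sq) / len(sq))
```

A constant 1 cm lateral offset along a straight path yields an RMSE of 0.01 m, at the low end of the 0.001-0.020 m range reported.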

Internet of things heatstroke detection device

10.11591/ijece.v16i1.pp535-544
Swati Patil , Rugved Ravindra Kulkarni , Karishma Prashant Salunkhe , Vidit Pravin Agrawal
The increasing frequency and intensity of heat waves due to climate change underscore the critical need for proactive measures to prevent heat stroke, a life-threatening condition affecting individuals of all demographics, with particular vulnerability among the elderly and outdoor workers. In response to this pressing public health challenge, we present an internet of things (IoT)-based heat stroke prevention device, a comprehensive solution leveraging a suite of sensors including temperature, atmospheric, pulse rate, blood pressure, and gyroscope sensors, seamlessly integrated with an ESP32 microcontroller and Firebase's real-time database. Central to the device's functionality is a random forest classifier machine learning model, trained on historical data and user-specific parameters, to accurately predict the likelihood of heat stroke onset in real-time. Rigorous testing and validation procedures demonstrate the device's high accuracy and reliability in sensor measurements, data transmission, and model performance. The accompanying web-based dashboard provides users with intuitive access to their current health metrics, including temperature, humidity, blood pressure, pulse rate, and personalized predictions for heat stroke risk. This innovative device serves as a versatile tool for public health agencies, occupational safety programs, and individuals seeking to safeguard their well-being in the face of escalating temperatures and climate uncertainties.
Volume: 16
Issue: 1
Page: 535-544
Publish at: 2026-02-01
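
The paper's risk model is a random forest trained on historical data; as a pure-Python stand-in, the sketch below takes a majority vote over hand-picked threshold rules, each playing the role of one tree. The feature names and thresholds are illustrative assumptions, not clinically validated values.

```python
def heat_risk_vote(reading):
    """Majority vote of simple threshold rules over sensor readings.

    A stand-in for the device's random forest classifier: each rule acts
    like one tree, and the ensemble flags risk when most rules fire.
    Thresholds are illustrative only.
    """
    rules = [
        reading["body_temp_c"] >= 39.0,      # hyperthermia
        reading["ambient_temp_c"] >= 40.0,   # extreme heat
        reading["pulse_bpm"] >= 120,         # tachycardia
        reading["humidity_pct"] >= 75,       # impaired sweat evaporation
        reading["systolic_bp"] <= 90,        # hypotension
    ]
    return "high risk" if sum(rules) >= 3 else "low risk"
```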

Application of deep learning and machine learning techniques for the detection of misleading health reports

10.11591/ijece.v16i1.pp373-382
Ravindra Babu Jaladanki , Garapati Satyanarayana Murthy , Venu Gopal Gaddam , Chippada Nagamani , Janjhyam Venkata Naga Ramesh , Ramesh Eluri
In the current era of vast information availability, the dissemination of misleading health information poses a considerable obstacle, jeopardizing public health and overall well-being. To tackle this challenge, experts have utilized artificial intelligence methods, especially machine learning (ML) and deep learning (DL), to create automated systems that can identify misleading health-related information. This study thoroughly investigates ML and DL techniques for detecting fraudulent health news. The analysis delves into distinct methodologies, exploring their unique approaches, metrics, and challenges. This study explores various techniques utilized in feature engineering, model architecture, and evaluation metrics within the realms of machine learning and deep learning methodologies. Additionally, we analyze the consequences of our results on enhancing the efficacy of systems designed to detect counterfeit health news and propose possible avenues for future investigation in this vital area.
Volume: 16
Issue: 1
Page: 373-382
Publish at: 2026-02-01

Parameter-efficient fine-tuning of small language models for code generation: a comparative study of Gemma, Qwen 2.5 and Llama 3.2

10.11591/ijece.v16i1.pp278-287
Van-Viet Nguyen , The-Vinh Nguyen , Huu-Khanh Nguyen , Duc-Quang Vu
Large language models (LLMs) have demonstrated impressive capabilities in code generation; however, their high computational demands, privacy limitations, and challenges in edge deployment restrict their practical use in domain-specific applications. This study explores the effectiveness of parameter-efficient fine-tuning for small language models (SLMs) with fewer than 3 billion parameters. We adopt a hybrid approach that combines low-rank adaptation (LoRA) and 4-bit quantization (QLoRA) to reduce fine-tuning costs while preserving semantic consistency. Experiments on the CodeAlpaca-20k dataset reveal that SLMs fine-tuned with this method outperform larger baseline models, including Phi-3 Mini 4K base, on the ROUGE-L metric. Notably, applying our approach to the LLaMA 3 3B and Qwen2.5 3B models yielded performance improvements of 54% and 55%, respectively, over untuned counterparts. We evaluate models developed by major artificial intelligence (AI) providers, Google (Gemma 2B), Meta (LLaMA 3 1B/3B), and Alibaba (Qwen2.5 1.5B/3B), and show that parameter-efficient fine-tuning enables them to serve as cost-effective, high-performing alternatives to larger LLMs. These findings highlight the potential of SLMs as scalable solutions for domain-specific software engineering tasks, supporting broader adoption and democratization of neural code synthesis.
Volume: 16
Issue: 1
Page: 278-287
Publish at: 2026-02-01
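
The core LoRA idea above is that the frozen weight W is perturbed by a trainable low-rank product B@A scaled by alpha/r; a minimal sketch with plain nested-list matrices follows. The 4-bit quantization side of QLoRA is omitted, and all shapes and values here are toy numbers.

```python
def matmul(A, B):
    """Plain nested-list matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_forward(W, A, B, x, alpha, r):
    """y = (W + (alpha / r) * B @ A) @ x, the LoRA-adapted linear layer.

    W is the frozen d_out x d_in weight; B (d_out x r) and A (r x d_in)
    are the trainable low-rank factors. Only B and A would receive
    gradients during fine-tuning.
    """
    scale = alpha / r
    delta = matmul(B, A)  # rank-r update to W
    Wp = [[w + scale * d for w, d in zip(wr, dr)]
          for wr, dr in zip(W, delta)]
    return matmul(Wp, x)
```

With r much smaller than the layer dimensions, B and A together hold far fewer parameters than W, which is where the fine-tuning savings come from.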

Scalable resume screening using large language model Meta AI version 3

10.11591/ijai.v15.i1.pp953-961
Asmita Deshmukh , Anjali Raut , Vedant Deshmukh
This research paper explores the use of large language model Meta AI 3 (LLaMA 3) for automating the resume screening process. Traditional resume screening methods that rely on keyword searching and human review can be inefficient, biased, and fail to identify qualified candidates. LLaMA 3, trained on large-scale text datasets, has the potential to accurately analyze resumes by understanding context and semantic details beyond simple keyword matching. The study presents a system that converts resume PDFs to text, inputs the text along with the job description into the LLaMA 3 model, and generates a ranked list of candidates with reasoning for their job fit. The paper discusses the data preparation, model setup, and performance evaluation of this system. Results show LLaMA 3 can rapidly process batches of resumes while reducing human bias in the screening process. The system aims to streamline hiring by automating the initial resume screening stage to surface top candidates for further in-depth evaluation. Key benefits include improved accuracy in identifying relevant skills, reduced bias compared to human screeners, and significant time savings for recruiters. The paper also examines ethical considerations around using AI for hiring decisions. Overall, this work demonstrates the promising application of large language models (LLMs) like LLaMA 3 to transform and enhance resume screening practices.
Volume: 15
Issue: 1
Page: 953-961
Publish at: 2026-02-01
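
The keyword-matching baseline that the paper improves on can be sketched as bag-of-words cosine ranking of resumes against the job description; this is the baseline only, and the LLaMA 3 pipeline's semantic reasoning and per-candidate justifications are not reproduced here.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_resumes(job_description, resumes):
    """Rank resume texts by cosine similarity to the job ad (indices, best first)."""
    jd = Counter(job_description.lower().split())
    scored = [(cosine(jd, Counter(r.lower().split())), i)
              for i, r in enumerate(resumes)]
    return [i for _, i in sorted(scored, reverse=True)]
```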

Multi-scale features assisted knowledge distillation vision transformer for land cover segmentation and classification

10.11591/ijai.v15.i1.pp361-373
Sujata Arjun Gaikwad , Vijaya Musande
The most significant problem in remote sensing interpretation is semantic segmentation, which attempts to assign each pixel in the image a particular class. This research work follows several steps: pre-processing, segmentation, and classification. Initially, high spatial resolution remote sensing images (RSI) are collected from an open-source dataset. In the pre-processing stage, an improved guided filter (Imp-GF) is used to remove various noises from the images. Next, segmentation is performed using a knowledge distillation-based vision transformer approach integrated with an atrous spatial multi-scale pyramidal module (KD-MuViTPy). Based on the segmented image, land cover classes such as vegetation, urban areas, forest, water bodies, and roads are classified. The proposed method outperformed existing approaches on the Bhuvan satellite dataset, achieving better accuracy, precision, recall, F1 score, Dice score, intersection over union (IoU), and Kappa score at values of 98.01%, 98.99%, 97.49%, 98.23%, 98.23%, 96.55%, and 95.91%, respectively.
Volume: 15
Issue: 1
Page: 361-373
Publish at: 2026-02-01
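
Intersection over union (IoU), one of the metrics reported above, can be computed per class from binary masks as sketched below; the mask representation as lists of 0/1 rows is an assumption for illustration.

```python
def iou(pred, truth):
    """Intersection over union of two binary segmentation masks (lists of rows)."""
    inter = union = 0
    for pr, tr in zip(pred, truth):
        for p, t in zip(pr, tr):
            inter += p and t   # pixel in both masks
            union += p or t    # pixel in either mask
    return inter / union if union else 1.0
```

For multi-class land cover maps, the per-class IoUs would be averaged (mean IoU) over vegetation, urban, forest, water, and road masks.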

Benchmarking machine learning models for natural disaster prediction with synthetic IoT data

10.11591/ijai.v15.i1.pp257-268
Moath Alsafasfeh , Abdullah Alhasanat , Atheer Bassel , Moahand Alhasanat
Natural disasters pose severe threats to human life and infrastructure, demanding robust early warning systems (EWS) supported by machine learning (ML) and internet of things (IoT)-based sensing. This study benchmarks ML models for predicting floods and earthquakes using synthetic IoT sensor data. A dataset comprising nine environmental and seismic parameters was generated and labeled into three classes: no disaster, flood, and earthquake, where feature preprocessing was applied during model training. Logistic regression (LR), random forest (RF), and extreme gradient boosting (XGBoost) models were trained and evaluated using accuracy, precision, recall, and F1-score. Experimental results on the World-A test set show that ensemble models consistently outperform LR, with XGBoost and RF achieving F1-scores of up to 97% and 99%, respectively, compared to 79% for LR. An independent test on the separately generated World-B dataset revealed that ensemble models maintained higher generalization capability with F1-scores of 80% for XGBoost and 78% for RF. In contrast, LR showed substantial degradation with an F1-score of 54%. Stress testing on the World-B dataset under simulated situations, such as sensor failures, noise injection, and extreme weather events, confirms the resilience of ensemble models in comparison to LR. These results demonstrate the usefulness of ensemble learning in handling unpredictable IoT data for disaster prediction and highlight their potential integration into intelligent EWS. Future work will focus on expanding the framework to include cross-time prediction, incorporating additional environmental features, and deploying the models in real-time IoT systems for field validation.
Volume: 15
Issue: 1
Page: 257-268
Publish at: 2026-02-01
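
The F1-scores compared above follow the standard precision/recall definition; a minimal per-class computation is sketched below, with the class labels chosen to match the study's three-way labeling.

```python
def f1_score(y_true, y_pred, positive):
    """Binary F1 for one class, as used to compare LR, RF and XGBoost."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```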

Predicting non-performing loans in Vietnam’s financial sector: a deep Q-learning approach

10.11591/ijeecs.v41.i2.pp700-709
Luyen Anh Do , Huong Thi Viet Pham , Thinh Duc Le , Oanh Thi Tran
Non-performing loan (NPL) prediction is a very important task in the risk management of financial institutions. NPLs often lead to substantial losses when loans are not repaid on time. While traditional machine learning (ML) models have been conventionally exploited for credit risk assessment, they frequently face challenges in handling imbalanced data. To deal with this problem, this paper introduces a novel approach using deep reinforcement learning (DRL), specifically deep Q-learning, to enhance the prediction of NPLs. To verify the effectiveness of the method, we introduce a new dataset comprising 83,732 customer records (each described with 22 key features) from one of Vietnam's largest financial entities. Our method is compared with standard ML techniques such as random forest, decision tree, logistic regression, support vector machine, LightGBM, and XGBoost. Experimental results on this dataset demonstrate that deep Q-learning outperforms these traditional models in handling imbalanced data and boosting prediction accuracy. This research highlights the potential of DRL as a robust risk management tool, helping financial institutions make credit assessments more efficiently and reducing decision-making costs.
Volume: 41
Issue: 2
Page: 700-709
Publish at: 2026-02-01
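
Deep Q-learning builds on the Q-learning update rule; the tabular sketch below shows that rule on a toy two-state problem. The paper replaces the table with a neural network and frames classification as a decision process, none of which is reproduced here.

```python
def q_learning(transitions, n_states, n_actions,
               alpha=0.5, gamma=0.9, episodes=200):
    """Tabular Q-learning over a fixed list of (s, a, r, s') steps.

    The update Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    is the same one deep Q-learning approximates with a network.
    """
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        for s, a, r, s2 in transitions:
            target = r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
    return Q
```

On a chain where state 1 pays reward 1 forever, the values converge to the discounted sums 1/(1-gamma) = 10 and gamma * 10 = 9.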

Hybrid SVM–ANN system for automated MRI diagnosis of anterior cruciate ligament injuries

10.11591/ijeecs.v41.i2.pp773-781
Sazwan Syafiq Mazlan , Azizi Miskon , Sharizal Ahmad Sobri
Anterior cruciate ligament (ACL) tears are a frequent cause of knee instability, yet magnetic resonance imaging (MRI) interpretation remains time-consuming and observer-dependent. This paper presents an automated MRI framework for ACL injury screening and severity grading using a hybrid support vector machine–artificial neural network (SVM–ANN) model. A balanced dataset of 600 sagittal knee MRI images from Hospital Taiping (normal, partial tear, complete tear) was standardized via resizing, region-of-interest cropping, contrast enhancement, noise filtering, and segmentation. Morphological and texture features were extracted and reduced using principal component analysis (PCA). The SVM performs the initial screening (injured vs. non-injured) and samples predicted as injured are passed to the artificial neural network (ANN) to classify severity. Using confusion-matrix and receiver operating characteristic (ROC) evaluation, the proposed system achieved 86.2% overall accuracy and 81.7% sensitivity, with the ANN reaching approximately 95% accuracy on injured cases forwarded for grading. A clinician usability survey indicated high acceptance (~95%), supporting the feasibility of deployment as a lightweight decision-support tool. Limitations include reliance on single sagittal slices and single-sequence data; future work will incorporate multi-slice/3D and multi-sequence MRI to improve sensitivity and generalizability.
Volume: 41
Issue: 2
Page: 773-781
Publish at: 2026-02-01
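
The two-stage flow described above (SVM screens injured vs. non-injured, ANN grades only the injured cases) can be sketched as a cascade; the trained SVM and ANN are stood in for by plain callables, and the threshold stubs in the test are purely illustrative.

```python
def cascade_classify(features, screen, grade):
    """Two-stage screening: a binary screener gates a severity grader.

    Mirrors the paper's SVM -> ANN flow: `screen` returns True for
    injured samples, and only those are forwarded to `grade` for
    "partial tear" / "complete tear" classification.
    """
    results = []
    for x in features:
        if not screen(x):
            results.append("normal")       # rejected at stage one
        else:
            results.append(grade(x))       # graded at stage two
    return results
```

Gating this way means the severity model only ever sees injured cases, which is consistent with the ANN's higher accuracy on the forwarded subset.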

Fraud detection using TabNet classifier: a machine learning approach

10.11591/ijeecs.v41.i2.pp601-613
G. Anish Mary , S. Sudha
Detecting fraudulent transactions is a major challenge in the digital financial world. Transaction volumes are growing quickly, and new attack methods often outstrip traditional detection systems. Current fraud-detection models usually lack clarity and do not perform reliably on imbalanced real-world datasets. This highlights the urgent need for clear and explainable deep-learning methods for tabular financial data. This paper presents an interpretable deep learning framework built on the TabNet classifier. It uses attention-driven feature selection, sparse representation learning, and sequential decision reasoning to model complex interactions among transactional, demographic, and geographical factors. The model was tested on a real-world credit card transaction dataset with 23 features. It achieved 99.69% accuracy, a 0.975 F1-score, and a 0.956 ROC-AUC, outperforming benchmark models such as random forest, XGBoost, LightGBM, and logistic regression. In addition to these strong predictive results, interpretability is enhanced by TabNet's attention-based feature attribution. This facilitates a clear understanding of model decisions, supporting its use in regulated financial environments where precision and accountability are crucial.
Volume: 41
Issue: 2
Page: 601-613
Publish at: 2026-02-01
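
The attention-driven feature selection above amounts to computing a mask over feature importance scores and gating the input with it; the sketch below uses plain softmax as a simpler stand-in for TabNet's sparsemax (which produces exactly sparse masks), so it is illustrative rather than the paper's mechanism.

```python
import math

def feature_mask(scores, temperature=1.0):
    """Softmax attention mask over feature importance scores.

    Stand-in for TabNet's sparsemax; softmax weights are all nonzero,
    whereas sparsemax zeroes out low-scoring features entirely.
    """
    m = max(s / temperature for s in scores)          # for numerical stability
    exps = [math.exp(s / temperature - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def masked_features(x, mask):
    """Element-wise gating of a feature vector by the attention mask."""
    return [xi * mi for xi, mi in zip(x, mask)]
```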

An automatic stock price movement prediction using circularly dilated convolutions with orthogonal gated recurrent unit

10.11591/ijeecs.v41.i2.pp823-832
Durga Meena Rajendran , Maharajan Kalianandi , Bhuvanesh Ananthan
Recently, stock trend analysis has played an integral role in understanding trading policy and determining intrinsic stock patterns. Several conventional studies have reported stock trend prediction analyses but failed to obtain good performance due to poor generalization capability and severe gradient vanishing problems. Given the need to forecast stock price trends using both textual and empirical price data, this research proposes a novel hybridized deep learning (DL) model. The proposed approach proceeds through three stages, preprocessing, feature extraction, and prediction, to estimate stock movements properly. Data cleaning, which helps improve data quality, is performed in the preprocessing step. Next, the proposed CDConv-OGRU technique, hybridized circularly dilated convolutions with orthogonal gated recurrent units, is used to extract features and make predictions. Python serves as the platform for processing and analyzing the proposed approach. This research uses the publicly accessible StockNet database for testing and compares results using a number of performance metrics, including accuracy, recall, precision, Matthews correlation coefficient (MCC), and F-score. In the experimental part, the proposed approach obtains 95.16% accuracy, 94.8% precision, 94.89% recall, a 95% confidence interval, and 0.9 MCC, respectively.
Volume: 41
Issue: 2
Page: 823-832
Publish at: 2026-02-01
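
The dilated convolution underlying CDConv can be sketched in one dimension: dilation inserts gaps between kernel taps, widening the receptive field over a price series without adding parameters. The circular variant and the orthogonal GRU from the paper are not reproduced; this is the plain building block only.

```python
def dilated_conv1d(x, kernel, dilation):
    """Valid-mode 1-D convolution with a dilated kernel.

    With dilation d, tap j of the kernel reads x[i + j * d], so a
    length-k kernel spans (k - 1) * d + 1 time steps.
    """
    span = (len(kernel) - 1) * dilation + 1
    return [sum(k * x[i + j * dilation] for j, k in enumerate(kernel))
            for i in range(len(x) - span + 1)]
```

A two-tap averaging kernel with dilation 2 sums values two steps apart, capturing a longer-range dependency than an undilated kernel of the same size.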

RAC: a reusable adaptive convolution for CNN layer

10.11591/ijeecs.v41.i2.pp753-763
Nguyen Viet Hung , Phi Dinh Huynh , Pham Hong Thinh , Phuc Hau Nguyen , Trong-Minh Hoang
This paper proposes reusable adaptive convolution (RAC), an efficient alternative to standard 3×3 convolutions for convolutional neural networks (CNNs). The main advantage of RAC lies in its simplicity and parameter efficiency, achieved by sharing horizontal and vertical 1×k/k×1 filter banks across blocks within a stage and recombining them through a lightweight 1×1 mixing layer. By operating at the operator design level, RAC avoids post-training compression steps and preserves the conventional Conv–BN–activation structure, enabling seamless integration into existing CNN backbones. To evaluate the effectiveness of the proposed method, extensive experiments are conducted on CIFAR-10 using several architectures, including ResNet-18/50/101, DenseNet, WideResNet, and EfficientNet. Experimental results demonstrate that RAC significantly reduces parameters and memory usage while maintaining competitive accuracy. These results indicate that RAC offers a reasonable balance between accuracy and compression, and is suitable for deploying CNN networks on resource-constrained platforms.
Volume: 41
Issue: 2
Page: 753-763
Publish at: 2026-02-01
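
The parameter savings claimed above come from sharing the 1xk and kx1 filter banks across a stage while each block keeps only its own 1x1 mixing layer. The accounting sketch below is my own illustration of that idea (biases and BatchNorm are ignored, and the exact bank sizing is assumed), not the paper's reported counts.

```python
def standard_params(channels, k, blocks):
    """Parameters for a stage of `blocks` standard k x k conv layers."""
    return blocks * channels * channels * k * k

def rac_params(channels, k, blocks):
    """Parameters for the same stage built from shared-bank RAC layers.

    Illustrative accounting: one horizontal (1 x k) and one vertical
    (k x 1) bank are stored once per stage (2 * C * C * k), and each
    block adds only a 1 x 1 mixing layer (C * C).
    """
    shared_banks = 2 * channels * channels * k
    mixing = blocks * channels * channels
    return shared_banks + mixing
```

For a 4-block stage with 64 channels and k = 3, this sketch gives 40,960 parameters versus 147,456 for standard 3x3 convolutions, illustrating why sharing grows cheaper as the block count rises.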

Discover Our Library

Embark on a journey through our expansive collection of articles and let curiosity lead your path to innovation.
