Articles

Access the latest knowledge in applied science, electrical engineering, computer science and information technology, education, and health.

Synaptic shield: fusion of ResNext-50 and long short-term memory for enhanced deepfake detection

10.11591/ijres.v15.i1.pp224-235
Amit Mishra , Prajwal Chinchmalatpure , Govinda B. Sambare , Viomesh Kumar Singh , Atul Gulabrao Pawar , Rahul Prakash Mirajkar , Priyanka K. Takalkar , Kuldeep Vayadande
Recent developments in deepfakes have raised serious concerns about the authenticity of digital content, calling for reliable detection mechanisms. This paper presents Synaptic Shield, an innovative deep learning (DL) framework customized to detect deepfake alterations with high precision. It employs convolutional neural networks (CNNs) together with temporal feature extraction modules to analyze spatial and motion indicators in video data. High-level preprocessing pipelines, combined with a confidence scoring mechanism, make Synaptic Shield adaptive to manipulation techniques such as FaceSwap and DeepFake. The model surpasses other deepfake detection models with a high accuracy of 98.3%. These results are based on exhaustive experimentation on standard datasets such as FaceForensics++, the DeepFake detection challenge (DFDC), and Celeb DeepFake (Celeb-DF). Synaptic Shield delivers outstanding results, maintaining a confidence score consistent with its precision and reliability. Its capacity to accommodate various manipulation techniques and levels of video quality indicates robustness, offering an effective method for ensuring integrity in digital media. The work is an important step forward in addressing the problems created by deepfake technologies.
Volume: 15
Issue: 1
Page: 224-235
Publish at: 2026-03-01
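The architecture of Synaptic Shield is not detailed in this listing, but the spatial-temporal fusion idea the abstract describes — per-frame (CNN-style) scores aggregated over time before a video-level verdict — can be sketched as follows. The smoothing factor and 0.5 threshold are assumed illustrative parameters, not the authors' design:

```python
# Toy sketch of spatial-temporal fusion for deepfake scoring.
# Frame scores stand in for ResNeXt-50 outputs; the smoothing pass
# stands in for an LSTM's temporal modeling.

def temporal_smooth(frame_scores, alpha=0.6):
    """Exponentially smooth per-frame fake probabilities over time."""
    state = frame_scores[0]
    smoothed = [state]
    for s in frame_scores[1:]:
        state = alpha * state + (1 - alpha) * s
        smoothed.append(state)
    return smoothed

def classify_video(frame_scores, threshold=0.5):
    """Video-level verdict plus a confidence score in [0, 1]."""
    smoothed = temporal_smooth(frame_scores)
    video_score = sum(smoothed) / len(smoothed)
    label = "fake" if video_score > threshold else "real"
    confidence = abs(video_score - threshold) * 2
    return label, round(confidence, 3)

print(classify_video([0.9, 0.8, 0.95, 0.7]))
```

A real detector would replace the frame scores with CNN outputs and the smoothing pass with a learned recurrent layer; the aggregation-then-threshold structure is the same.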

Energy-efficient multilevel inverter for electric vehicles using wireless sensor network monitoring

10.11591/ijres.v15.i1.pp130-137
Nishalini Delcy , Francis Thomas Josh , Kannadhasan Suriyan
This research presents a unique energy-efficient routing strategy aimed at optimizing energy consumption and prolonging network longevity using an innovative clustering probability. Cluster-based routing algorithms facilitate versatile configurations and extend the network's lifetime until the last node ceases operation. This study introduces an energy-efficient hierarchical clustering algorithm for wireless sensor networks (WSNs), enhancing the low-energy adaptive clustering hierarchy (LEACH) algorithm. The objective of this algorithm is to reduce power consumption by the strategic selection of new cluster heads (CH) in each data transfer round and to prevent network conflicts. This objective is accomplished by employing an efficient function to identify the optimal CH nodes in each cycle, considering the current energy levels of the sensors. The suggested technique enhances the cluster formation process by utilizing the reduced distance to the base station. This study's findings will enhance packet scheduling algorithms for data aggregation in WSNs to minimize the number of packets transmitted from sensors to CH. Simulation findings validate the system's efficacy in comparison to alternative compression techniques and non-compression scenarios utilized in LEACH and multi-hop LEACH.
Volume: 15
Issue: 1
Page: 130-137
Publish at: 2026-03-01
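The LEACH-style cluster-head rotation this work builds on uses the standard threshold T(n) = p / (1 - p * (r mod 1/p)); the residual-energy weighting shown here is an assumed illustration of energy-aware CH selection, not the authors' exact function:

```python
import random

def leach_threshold(p, r):
    """Classic LEACH threshold for round r with desired CH fraction p."""
    return p / (1 - p * (r % int(1 / p)))

def elect_cluster_heads(node_energies, p=0.1, r=0, max_energy=1.0, seed=42):
    """Pick CHs: LEACH threshold scaled by residual energy (assumed weighting)."""
    rng = random.Random(seed)
    t = leach_threshold(p, r)
    heads = []
    for node, energy in node_energies.items():
        # Nodes with more residual energy are proportionally more likely to lead.
        if rng.random() < t * (energy / max_energy):
            heads.append(node)
    return heads

nodes = {f"n{i}": random.Random(i).random() for i in range(20)}
print(elect_cluster_heads(nodes))
```

Note how the threshold grows as the round index approaches 1/p, guaranteeing that every node eventually serves as a CH within each rotation epoch.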

FPGA implementation and bit error rate analysis of the forward error correction algorithms in voice signals

10.11591/ijres.v15.i1.pp86-96
Ramjan Khatik , Afzal Shaikh , Shraddha Sawant , Pritika Patil
Viterbi codes are broadly utilized in wireless communication systems because of the relative simplicity of decoding the transmitted message. This paper develops a performance analysis of the decoder by means of bit error rate (BER) examination. The Galois-field-based decoder algorithm is commonly employed in communication systems. The Viterbi-based decoder is implemented using field programmable gate arrays (FPGA) and MATLAB, and this paper compares the performance of both implementations. The reconfigurable MicroBlaze soft processor on a Spartan-3E FPGA is utilized for this purpose, and MATLAB code is used to carry out the BER analysis on the FPGA implementation output.
Volume: 15
Issue: 1
Page: 86-96
Publish at: 2026-03-01
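The BER metric at the center of this analysis is simply the ratio of differing bits to total bits compared; a minimal reference computation, independent of the paper's FPGA/MATLAB setup, is:

```python
def bit_error_rate(transmitted, received):
    """BER = (# differing bits) / (total bits compared)."""
    if len(transmitted) != len(received):
        raise ValueError("bit streams must be the same length")
    errors = sum(t != r for t, r in zip(transmitted, received))
    return errors / len(transmitted)

tx = [1, 0, 1, 1, 0, 0, 1, 0]
rx = [1, 0, 0, 1, 0, 1, 1, 0]  # two bit flips
print(bit_error_rate(tx, rx))  # 2/8 = 0.25
```

In practice the received stream comes from the decoder output after a noisy channel, and BER is plotted against signal-to-noise ratio.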

Portable verification IP: a UVM-based approach for reusable verification environments in complex IP and SoC verification

10.11591/ijres.v15.i1.pp78-85
Harinagarjun Chippagi , Vangala Sumalatha
Reusable and portable verification techniques are becoming more and more necessary due to the growing complexity of system-on-chip (SoC) designs and the need for quick time-to-market. In order to facilitate cross-project reusability, automation, and scalability in SoC verification, this paper introduces a portable verification IP (PVIP) framework based on the universal verification methodology (UVM). The suggested framework improves coverage efficiency and verification portability across heterogeneous platforms by integrating UVM with the portable stimulus standard (PSS). In comparison to traditional UVM-based methods, experimental evaluation shows that the PVIP framework achieves 92% functional coverage, enhances reusability by 87%, and shortens verification cycle time by 27%. These findings demonstrate how PVIP can greatly speed up verification closure, minimize engineering effort, and assist in the development of the next generation of intelligent, scalable, and industry-ready SoC verification environments.
Volume: 15
Issue: 1
Page: 78-85
Publish at: 2026-03-01

Multi-modal sensor integration in chicken-fish-vegetable greenhouse agriculture based on internet of things

10.11591/ijres.v15.i1.pp138-149
Muhammad Risal , Pujianti Wahyuningsih , Suwatri Jura , Irmawaty Iskandar , Abdul Jalil
Integrated chicken-fish-vegetable farming is a type of agriculture that combines the benefits of all three within a single ecosystem. The objective of this study is to develop a control and monitoring system for integrated greenhouse-based chicken-fish-vegetable farming using the internet of things (IoT). The monitoring method employs the integration of multi-modal sensors in the greenhouse, consisting of a camera, water level, DHT11, pH, TDS, DS18B20, light dependent resistor (LDR), and infrared (IR) sensor. The camera functions as a visual monitoring tool for the farm, the water level sensor detects hydroponic water levels, the DHT11 measures air temperature and humidity, the pH sensor measures water acidity, the TDS sensor detects water nutrients, the DS18B20 measures pond water temperature, the LDR detects weather conditions, and the IR sensor measures sunlight intensity. The processing units used to control the sensors and output devices are the ESP32 and Raspberry Pi. The system outputs include a relay for pump control, an LCD for text messages, and IoT information visualization using the Blynk platform. The results of this study demonstrate that the multi-modal sensor device can effectively monitor the conditions of integrated greenhouse-based chicken-fish-vegetable farming, achieving an accuracy of up to 96%, with an average data transmission time of 6 seconds through the Blynk IoT platform.
Volume: 15
Issue: 1
Page: 138-149
Publish at: 2026-03-01
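The control side of such a system typically reduces to threshold rules over the fused sensor readings; a simplified sketch is shown below. All thresholds and actuator names are assumed for illustration, not taken from the paper:

```python
def control_actions(readings):
    """Map multi-modal sensor readings to actuator commands (assumed thresholds)."""
    actions = {}
    # Hydroponic water level: top up via the pump relay when low.
    actions["water_pump"] = readings["water_level_cm"] < 10.0
    # Pond temperature from the DS18B20: flag aeration outside 26-30 C.
    actions["aerator"] = not (26.0 <= readings["pond_temp_c"] <= 30.0)
    # Nutrient dosing from the TDS sensor reading (ppm).
    actions["nutrient_doser"] = readings["tds_ppm"] < 800
    # Coop ventilation from the DHT11 humidity reading.
    actions["coop_fan"] = readings["humidity_pct"] > 75
    return actions

sample = {"water_level_cm": 8.2, "pond_temp_c": 27.5,
          "tds_ppm": 650, "humidity_pct": 80}
print(control_actions(sample))
```

On the actual hardware the ESP32 would evaluate rules like these locally and drive the relay, while the Raspberry Pi forwards readings to Blynk for visualization.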

Learning customer preference dynamics using rank-aware matrix factorization and enhanced collaborative filtering model

10.11591/ijres.v15.i1.pp159-169
Sathya Sundar , Eswara Thevar Ramaraj , Padmapriya Arumugam
Understanding how customer preferences evolve over time is a critical challenge for modern recommender systems operating in large-scale, implicit-feedback–driven e-commerce environments. The primary objective of this study is to develop a unified and interpretable framework that simultaneously models ranking-based preferences, collaborative similarity structures, and temporal behavioral evolution of customers. To achieve this, the study proposes a novel hybrid framework that integrates rank-aware matrix factorization (RA-MF), enhanced collaborative filtering (CF), K-means clustering, and temporal cluster migration matrices (TCMM) for learning customer preference dynamics. The ranking factorization model effectively captures implicit signals such as purchase frequency and recency decay, while CF provides complementary similarity-based insights. K-means segmentation reveals diverse customer personas, including high-value loyal buyers and exploratory shoppers, with significant differences in spending and purchasing behavior. Quantitative evaluations demonstrate strong performance improvements, with 11–18% gains in NDCG@10, 10–15% increases in Precision@10, and notable reductions in root mean square error (RMSE) and mean absolute error (MAE). The results highlight the framework’s ability to deliver both accurate recommendations and interpretable behavioral insights, offering valuable contributions to personalized marketing, customer retention, and data-driven e-commerce strategy.
Volume: 15
Issue: 1
Page: 159-169
Publish at: 2026-03-01
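The implicit signals mentioned above (purchase frequency with recency decay) are commonly converted into confidence weights before factorization; a sketch with an assumed exponential decay rate, not the paper's exact formulation:

```python
import math

def implicit_weight(frequency, days_since_last, decay_rate=0.01):
    """Confidence weight: purchase frequency damped by exponential recency decay.
    decay_rate is an assumed hyperparameter, not from the paper."""
    return frequency * math.exp(-decay_rate * days_since_last)

# A frequent but stale buyer vs. a recent occasional buyer.
print(round(implicit_weight(frequency=10, days_since_last=200), 3))
print(round(implicit_weight(frequency=3, days_since_last=5), 3))
```

Weights like these would then feed a rank-aware factorization objective, so that recent, repeated interactions dominate the learned latent factors.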

Deployment and evaluation of facial expression recognition on Android and Temi V3 in controlled settings

10.11591/ijres.v15.i1.pp42-53
Mohamad Hariz Nazamid , Rozita Jailani , Nur Khalidah Zakaria , Anwar P. P. Abdul Majeed
Facial expression recognition (FER) is vital for improving human-robot interaction (HRI). This study presents the deployment and evaluation of an optimized FER model on Android devices, specifically tested on the Temi V3 robot in controlled environments. Trained using the FER+ and CK+ datasets and optimized with TensorFlow Lite (TFLite) and MobileNetV2, the model achieved a validation accuracy of 92.32%. Its performance was assessed on the Temi V3 robot and a Samsung A52 smartphone, focusing on CPU usage, memory, and power consumption. Cross-device compatibility and real-time performance challenges were addressed through model quantization and thread optimization. Real-time testing on the Temi V3 showed an overall accuracy of 82.28%, with emotion-specific accuracies ranging from 46.19% to 92.28%. This study offers practical insights for optimizing FER systems across Android platforms, with potential applications in education, healthcare, and customer service. The results support the feasibility of implementing FER models as backends in Android applications, enabling more intuitive and responsive HRI. Future work will focus on improving model efficiency for lower-end devices and exploring on-device learning techniques to boost accuracy in diverse real-world environments.
Volume: 15
Issue: 1
Page: 42-53
Publish at: 2026-03-01

Robust multi-faces recognition and tracking via fuzzy genetic algorithms and deep coupled features

10.11591/ijaas.v15.i1.pp209-218
Adil Abdulhur Abushana , Yousif Samer Mudhafar
In real-world surveillance environments, face recognition and tracking remain challenging due to partial occlusion, pose variation, illumination changes, and background clutter. This paper presents a robust hybrid framework that integrates fuzzy genetic algorithms (FGA) with deep coupled feature learning for multi-face recognition and tracking. The proposed system comprises three main modules: i) face detection and preprocessing using the multi-task cascaded convolutional network (MTCNN), ii) deep coupled ResNet embeddings that jointly learn identity and appearance-invariant representations, and iii) a fuzzy rule-based genetic optimizer that adaptively refines tracking decisions based on uncertainty in motion, appearance similarity, and occlusion levels. The novelty of this work lies in the fusion of fuzzy inference with evolutionary search to guide the genetic optimization process, allowing dynamic adaptation to noisy and uncertain visual conditions. Moreover, probabilistic data association filters (PDAF) and conditional joint likelihood filters (CJLF) are employed to further enhance temporal consistency under occlusion and appearance variation. The results confirm that fuzzy evolutionary optimization, when coupled with deep feature learning, significantly improves robustness and stability for real-time face tracking in complex, dynamic scenes.
Volume: 15
Issue: 1
Page: 209-218
Publish at: 2026-03-01

ELLMW: an enhanced vision–language model for reliable text extraction from manually composed scripts

10.11591/ijres.v15.i1.pp194-203
Dhivya Venkatesh , Brintha Rajakumari Sivaraj
While conventional optical character recognition (OCR) systems can digitize text, they struggle with diverse handwriting styles, noisy inputs, and unstructured layouts, limiting their effectiveness. This study proposes enhanced large language model whisperer (ELLMW), a vision–language framework for accurate text extraction (TE) from fully handwritten scripts. The methodology integrates advanced preprocessing (noise reduction, binarization, and skew correction), deep learning–based handwriting recognition via a convolutional neural network–long short-term memory (CNN–LSTM) model, and LLM-based post-correction to ensure context-aware and structurally coherent outputs. The system converts scanned images, portable document formats (PDFs), and irregularly formatted answer sheets into machine-readable text, while automatically correcting errors in spelling, grammar, and layout. Experimental evaluation on a curated dataset of handwritten examination answer scripts (HEAS) demonstrates that ELLMW achieves 97.8% accuracy, a 1.04% character error rate (CER), and a 3.24% word error rate (WER), outperforming widely used OCR tools including Tesseract, EasyOCR, Google Cloud Vision (GCV), PaddleOCR, ABBYY FineReader, and Transym OCR. The results highlight the model’s robustness across varying handwriting styles, noisy backgrounds, and complex document structures.
Volume: 15
Issue: 1
Page: 194-203
Publish at: 2026-03-01
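The CER reported above is the character-level edit distance between predicted and reference text divided by the reference length; a minimal reference implementation of that metric:

```python
def levenshtein(a, b):
    """Edit distance (insertions, deletions, substitutions), O(len(a)*len(b))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def char_error_rate(predicted, reference):
    """CER = edit distance / reference length."""
    return levenshtein(predicted, reference) / len(reference)

print(char_error_rate("handwritng", "handwriting"))
```

The word error rate is computed the same way, but over token sequences instead of characters.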

Inquisitive biometric feature analysis and implementation for recognition tasks using camouflaged segmentation with AI and IoT

10.11591/ijres.v15.i1.pp119-129
Mahesh Shankarrao Patil , Harsha J. Sarode , Abhijit Banubakode , Prakash Tukaram Patil , Nutan Patil , Vijayakumar Varadarajan , Deshinta Arrova Dewi
Human activity recognition (HAR) plays a vital role in reconfigurable and embedded systems deployed in smart environments and healthcare monitoring applications. However, the potential leakage of sensitive user attributes raises serious privacy issues, because data collected at end devices must be transmitted to more powerful platforms for inference. Addressing this key challenge is especially crucial for resource-constrained embedded systems where energy efficiency is a chief design requirement. The aim of this paper is to present an energy-aware, privacy-preserving HAR framework appropriate for low-power embedded platforms. A machine learning–based camouflaged signal segmentation technique is proposed to transform the collected sensor data, eliminating sensitive information while preserving activity-relevant features. To characterize the trade-off between energy consumption and recognition accuracy, the model's parameters are extensively tuned through careful optimization. Experimental evaluations demonstrate that the method significantly reduces the inference of sensitive attributes such as gender, age, height, and weight, with minimal impact on HAR accuracy. Furthermore, the system supports configurable trade-offs between energy usage and classification performance, making it suitable for implementation on low-power embedded devices.
Volume: 15
Issue: 1
Page: 119-129
Publish at: 2026-03-01

Energy-efficient reconfigurable architectures for Edge AI in healthcare IoT: trends, challenges, and future directions

10.11591/ijres.v15.i1.pp1-20
Tole Sutikno , Aiman Zakwan Jidin , Lina Handayani
The integration of Edge artificial intelligence (AI) with internet of things (IoT) technologies is transforming healthcare applications, including wearable monitoring, telemedicine, and implantable medical devices, by enabling low-latency and intelligent data processing close to patients. However, stringent requirements on energy efficiency, reliability, real-time responsiveness, and data privacy continue to hinder scalable and long-term deployment in resource-constrained healthcare environments. Energy-efficient reconfigurable architectures—such as field-programmable gate arrays (FPGAs), coarse-grained reconfigurable arrays (CGRAs), and emerging memory-centric and heterogeneous platforms—have emerged as promising solutions to address these challenges by balancing flexibility, adaptability, and power efficiency. This review systematically examines recent advances in reconfigurable Edge AI architectures for healthcare IoT, highlighting key trends in hardware–software co-design, AI-assisted design automation, memory-centric optimization, and domain-specific overlays. It further identifies critical challenges, including energy–performance trade-offs, runtime reconfiguration overheads, security and privacy vulnerabilities, limited standardization, and reliability concerns in dynamic clinical settings. Finally, future research directions are outlined, emphasizing self-optimizing and context-aware architectures, secure and trustworthy reconfiguration mechanisms, unified frameworks for heterogeneous healthcare workloads, and sustainable, carbon-aware edge computing. Collectively, this review positions energy-efficient reconfigurable architectures as a foundational enabler for next-generation Edge AI in IoT-enabled healthcare systems.
Volume: 15
Issue: 1
Page: 1-20
Publish at: 2026-03-01

FPGA implementation of a coprocessor architecture for random data generation and encryption

10.11591/ijres.v15.i1.pp21-30
Manoj Kumar
Coprocessors are designed to perform specific tasks to enhance system performance and speed. Information security is the main focus in internet of things (IoT), cryptography, and cybersecurity applications. In this work, a coprocessor architecture is designed to generate 4 bits of random data and perform encryption. The coprocessor architecture uses true random number generator (TRNG) and pseudo-random number generator (PRNG) architectures to generate random data. A modified linear feedback shift register (LFSR)-based PRNG and modified transition effect ring oscillator (TERO) and ring oscillator-based TRNG architectures are designed and implemented for performing encryption. A serial-in-parallel-out (SIPO) shift register circuit is used to generate the 4-bit random data. A 15-bit instruction word is assigned to the coprocessor architecture to perform its task. The coprocessor architecture is designed using VHSIC Hardware Description Language (VHDL) and implemented on an Artix-7 field programmable gate array (FPGA). All simulation and synthesis results of the proposed coprocessor architecture are obtained with the Xilinx Vivado 2015.2 tool. The coprocessor architecture's efficiency (throughput (Mbps)/LUTs) is 2.31, and it operates at a 100 MHz clock.
Volume: 15
Issue: 1
Page: 21-30
Publish at: 2026-03-01
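The LFSR-based PRNG at the core of such a design is easy to model in software; below is a sketch of a plain 4-bit Fibonacci LFSR. The tap positions are a standard maximal-length choice (x^4 + x^3 + 1), illustrative only and not the paper's modified design:

```python
def lfsr4(seed=0b1001, taps=(3, 2), steps=15):
    """4-bit Fibonacci LFSR: feedback bit = XOR of the tapped state bits.
    Taps (3, 2) yield the maximal period of 2**4 - 1 = 15 states."""
    state = seed & 0xF
    out = []
    for _ in range(steps):
        out.append(state)
        fb = ((state >> taps[0]) ^ (state >> taps[1])) & 1
        state = ((state << 1) | fb) & 0xF  # shift left, feed back into bit 0
    return out

seq = lfsr4()
print([f"{s:04b}" for s in seq])
```

With a nonzero seed the register cycles through all 15 nonzero 4-bit states before repeating, which is why LFSRs are a cheap hardware source of pseudo-random data.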

Advanced MRI-based deep learning for brain tumors: a five-year review of oncology–radiology–AI synergy

10.11591/ijres.v15.i1.pp214-223
Shrisha Maddur Ramesh , Chitrapadi Gururaj
Rapid advancements in computer vision and machine learning have significantly revolutionized medical imaging; one such application is brain tumor detection and classification. Deep learning has emerged as a powerful tool, offering exceptional capabilities in handling complex medical datasets. However, current systems still face challenges in achieving optimal accuracy, robustness, and clinical interpretability. This study presents a comprehensive survey of brain tumor segmentation, classification, and detection techniques using deep learning, metaheuristic, and hybrid approaches. Detailed quantitative evaluations of conventional and emerging methods are conducted by examining key performance metrics, dataset characteristics, strengths, and limitations. This review highlights recent breakthroughs by analyzing state-of-the-art techniques from the past five years, along with research gaps and potential directions for future advancements. These findings provide insights into novel architectures, optimization strategies, and clinical applications, ultimately guiding researchers towards more robust, interpretable, and clinically impactful artificial intelligence (AI)-driven solutions for brain tumor analysis.
Volume: 15
Issue: 1
Page: 214-223
Publish at: 2026-03-01

Heart disease prediction using hybrid deep learning and medical imaging with wavelet-based feature extraction

10.11591/ijres.v15.i1.pp183-193
Chairmadurai Palanisamy , Kavitha Pachamuthu , Arun Kumar Ramamoorthy
Heart disease prediction is based on patient medical information, which can include medical images as well as the results of an electrocardiogram (ECG) conducted to determine the risk of developing heart disease. Hybrid deep learning (DL) algorithms are developed using past data and can identify trends related to cardiovascular diseases (CVDs). The current paper offers a new method of heart disease prediction that combines high-quality image processing with hybrid DL to enhance the effectiveness of predictions and avoid the shortcomings of existing approaches. First, medical images such as ECG images are pre-processed with a Butterworth adaptive 2D wavelet filter, which ensures maximal noise reduction while maintaining spatial and frequency information. A Gabor wavelet-based feature extraction technique is applied to extract meaningful patterns, including both spatial and frequency domain information, which is essential for detecting heart-related anomalies. The resultant features are then classified using both convolutional neural networks (CNN) and long short-term memory (LSTM) networks to make reliable and precise predictions of heart disease. Performance indicators including accuracy (92.4%), precision (91.2%), recall (93.5%), and F1-score (91.0%) are reported. The model yields significant levels of reliability and generalization compared to traditional approaches.
Volume: 15
Issue: 1
Page: 183-193
Publish at: 2026-03-01

An edge AIoT system for non-invasive biological indicators estimation and continuous health monitoring using PPG and ECG signals

10.11591/ijres.v15.i1.pp97-108
Hung K. Nguyen , Manh V. Pham
This paper presents the design and implementation of an artificial intelligence of things (AIoT)-based system that integrates deep learning and edge computing for real-time non-invasive health monitoring, focusing on the estimation of mean arterial pressure (MAP) alongside vital parameters such as heart rate (HR), blood oxygen saturation (SpO₂), and body temperature. Photoplethysmography (PPG) and electrocardiography (ECG) signals are acquired using low-power MAX30102 and AD8232 sensors, preprocessed with lightweight digital filters, and processed through a 1D convolutional neural network (CNN) deployed on a SEEED Studio XIAO ESP32S3 microcontroller. The model, trained using the cuff-less blood pressure estimation dataset, achieved a mean absolute error (MAE) of 2.51 mmHg on the embedded microcontroller and 2.93 mmHg when validated against a standard blood pressure monitor. Experimental results demonstrate high accuracy, achieving an MAE below 5 mmHg, thereby meeting the AAMI and British Hypertension Society (BHS) Grade A standards for blood pressure measurement. The system achieves real-time inference with an average latency of 16 ms and efficient memory utilization, ensuring suitability for wearable and embedded devices. Physiological data are transmitted via Wi-Fi to a Firebase cloud platform and visualized through a cross-platform mobile application. The proposed system demonstrates strong potential for remote healthcare applications, particularly in continuous monitoring and early health risk detection.
Volume: 15
Issue: 1
Page: 97-108
Publish at: 2026-03-01
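For reference, the mean arterial pressure that such a system estimates is conventionally approximated from systolic and diastolic readings; the standard clinical formula (a textbook relation, not the paper's CNN estimator) is MAP = DBP + (SBP - DBP) / 3:

```python
def mean_arterial_pressure(systolic, diastolic):
    """Standard clinical approximation: MAP = DBP + (SBP - DBP) / 3 (mmHg)."""
    return diastolic + (systolic - diastolic) / 3

print(mean_arterial_pressure(120, 80))  # 80 + 40/3, about 93.3 mmHg
```

A learned PPG/ECG estimator aims to recover this quantity continuously, without a cuff providing the systolic and diastolic endpoints.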