Articles

Access the latest knowledge in applied science, electrical engineering, computer science and information technology, education, and health.

29,734 Article Results

FPGA implementation and bit error rate analysis of the forward error correction algorithms in voice signals

10.11591/ijres.v15.i1.pp86-96
Ramjan Khatik , Afzal Shaikh , Shraddha Sawant , Pritika Patil
Viterbi codes are widely used in wireless communication systems because of their relatively simple decoding of transmitted messages. This paper develops a performance analysis of the decoder by means of bit error rate (BER) examination. The Galois field-based decoder algorithm is also commonly used in communication systems. The Viterbi-based decoder is implemented using field programmable gate arrays (FPGA) and MATLAB, and the paper compares the performance of both algorithms. The reconfigurable MicroBlaze processor on a Spartan-3E FPGA is used for this purpose, and MATLAB code performs the BER analysis on the FPGA implementation output.
Volume: 15
Issue: 1
Page: 86-96
Publish at: 2026-03-01
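
A minimal Python sketch of the kind of BER experiment the abstract describes: a hard-decision Viterbi decoder for a rate-1/2, constraint-length-3 convolutional code with generators (7, 5) octal, exercised over a binary symmetric channel. The code parameters are illustrative assumptions; this is a software model, not the authors' MicroBlaze/FPGA design.

```python
import numpy as np

G = (0b111, 0b101)            # generator polynomials (7, 5) octal, K = 3 (assumed)
N_STATES = 4                  # 2**(K-1) encoder states

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                        # [new bit | 2-bit state]
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1                              # shift register
    return np.array(out, dtype=np.uint8)

def viterbi(rx, n_bits):
    pm = [0.0] + [float("inf")] * (N_STATES - 1)      # path metrics
    paths = [[] for _ in range(N_STATES)]
    for t in range(n_bits):
        r0, r1 = rx[2 * t], rx[2 * t + 1]
        new_pm = [float("inf")] * N_STATES
        new_paths = [None] * N_STATES
        for s in range(N_STATES):
            for b in (0, 1):
                reg = (b << 2) | s
                e0, e1 = (bin(reg & g).count("1") & 1 for g in G)
                m = pm[s] + (e0 != r0) + (e1 != r1)   # Hamming branch metric
                if m < new_pm[reg >> 1]:
                    new_pm[reg >> 1] = m
                    new_paths[reg >> 1] = paths[s] + [b]
        pm, paths = new_pm, new_paths
    return np.array(paths[int(np.argmin(pm))], dtype=np.uint8)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 2000, dtype=np.uint8)
tx = encode(bits)
for p in (0.01, 0.05, 0.10):                          # BSC crossover probabilities
    rx = tx ^ (rng.random(tx.size) < p).astype(np.uint8)
    print(f"p={p:.2f}  BER={np.mean(viterbi(rx, bits.size) != bits):.4f}")
```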

Online method for identifying Thevenin model parameters of Li-ion batteries and estimating SOC using EKF

10.11591/ijres.v15.i1.pp54-67
Mouhssine Lagraoui , Ali Nejmi , Mouna Lhayani , Mohamed Benfars , Ahmed Abbou
Accurate state of charge (SOC) estimation is critical for the reliable operation of battery management systems (BMS) in electric vehicles (EVs) and energy storage applications. This paper presents a method for online identification of Thevenin model (TM) parameters and SOC estimation using the extended Kalman filter (EKF). The objective is to improve estimation accuracy by precisely characterizing the SOC-dependent variations of model parameters, including open-circuit voltage (VOCV), internal resistance R1, polarization resistance R2, and capacitance C2. These parameters are identified using least squares regression based on experimental discharge data from a 1.83 Ah lithium-ion (Li-ion) battery. The resulting model is validated under pulsed discharge conditions, achieving a mean absolute error (MAE) of 0.0059 V and root mean square error (RMSE) of 0.0074 V, indicating high modeling accuracy. Subsequently, an EKF algorithm is implemented using the identified model to estimate SOC in real time. Experimental results show excellent performance with an SOC estimation MAE of 0.059% and RMSE of 0.0798%, demonstrating high precision, fast convergence, and stability. The method effectively combines empirical parameter identification with a recursive filtering technique, offering a practical and embeddable solution for BMS applications. The study concludes that accurate parameter modeling significantly enhances EKF-based SOC estimation, providing a robust foundation for real-time battery monitoring and control. 
Volume: 15
Issue: 1
Page: 54-67
Publish at: 2026-03-01
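
A minimal sketch of one EKF predict/update step over the first-order Thevenin model the abstract describes. The parameter values, the linear OCV(SOC) curve, and the noise covariances below are illustrative assumptions, not the paper's identified values.

```python
import numpy as np

Q_AH, DT = 1.83, 1.0                    # cell capacity (Ah), sample time (s)
R1, R2, C2 = 0.05, 0.03, 1200.0         # assumed Thevenin parameters
A2 = np.exp(-DT / (R2 * C2))            # RC polarization decay per step

def ocv(soc):  return 3.2 + 0.9 * soc   # assumed linear OCV(SOC) curve
def docv(soc): return 0.9               # its slope, dVocv/dSOC

def ekf_step(x, P, i_k, v_meas, Qn, Rn):
    """One EKF iteration; state x = [SOC, V2], discharge current i_k > 0."""
    # Predict: coulomb counting for SOC, RC dynamics for polarization V2
    F = np.array([[1.0, 0.0], [0.0, A2]])
    x = np.array([x[0] - DT * i_k / (3600 * Q_AH),
                  A2 * x[1] + R2 * (1 - A2) * i_k])
    P = F @ P @ F.T + Qn
    # Update with measured terminal voltage v = OCV(SOC) - V2 - R1*i
    H = np.array([[docv(x[0]), -1.0]])
    y = v_meas - (ocv(x[0]) - x[1] - R1 * i_k)        # innovation
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rn)     # Kalman gain (2x1)
    x = x + K.ravel() * y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([1.0, 0.0]), np.eye(2) * 1e-3
Qn, Rn = np.eye(2) * 1e-7, np.array([[1e-4]])
x, P = ekf_step(x, P, i_k=0.9, v_meas=4.05, Qn=Qn, Rn=Rn)
print(f"SOC estimate: {x[0]:.4f}")
```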

Design of a solar system with a PID controller based on the Tyrannosaurus optimization algorithm

10.11591/ijres.v15.i1.pp170-182
Kadhim Sabah Rahimah , Issa Ahmed Abed , Afrah Abood Abdul Kadhim
Although photovoltaic (PV) power generation systems are an efficient way to use solar energy, their conversion efficiency is very low. Keeping the DC output power from the panel consistent is the key challenge with solar PV systems. Radiation and temperature are two variables that can impact a panel's output power. This study proposes a unique hunting-based optimization technique called the Tyrannosaurus optimization algorithm (TROA), modeled on the hunting behavior of the Tyrannosaurus rex, and demonstrates that TROA can achieve maximum power point tracking (MPPT) for lithium-ion battery charging with solar panels. MPPT is used to regulate the solar array's output in PV systems. The charge controller uses a buck converter for DC-DC conversion, balancing the impedance of the batteries and solar panels to deliver maximum power. To maximize power transfer, the algorithm adjusts the gating signal's duty cycle based on the voltage and current sensed at the solar panel. TROA's performance is compared against three well-known optimization methods: the gorilla troops optimization (GTO) algorithm, particle swarm optimization (PSO), and the cultural algorithm (CA). The proposed approach yields superior results compared to current approaches.
Volume: 15
Issue: 1
Page: 170-182
Publish at: 2026-03-01
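
Since the abstract does not give TROA's update equations, the sketch below stands in with a generic population-based duty-cycle search over a toy PV model; it illustrates metaheuristic MPPT in the same spirit, not the TROA itself.

```python
import numpy as np

def pv_power(d, irr=800.0):
    """Toy PV + buck model (assumed): panel power as a function of duty cycle."""
    v = d * 40.0                                       # effective operating voltage
    i = irr / 1000.0 * 8.0 * (1 - np.exp((v - 38.0) / 2.5))
    return max(v * max(i, 0.0), 0.0)

rng = np.random.default_rng(1)
pop = rng.uniform(0.1, 0.9, 10)                        # candidate duty cycles
for it in range(50):
    fitness = np.array([pv_power(d) for d in pop])
    best = pop[fitness.argmax()]
    # "Hunt": move the pack toward the best candidate with shrinking steps
    step = 0.3 * (1 - it / 50)
    pop = np.clip(best + step * rng.standard_normal(pop.size), 0.01, 0.99)
    pop[0] = best                                      # elitism: keep the best
print(f"duty cycle at MPP ~ {best:.3f}, power ~ {pv_power(best):.1f} W")
```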

Inquisitive biometric feature analysis and implementation for recognition tasks using camouflaged segmentation with AI and IoT

10.11591/ijres.v15.i1.pp119-129
Mahesh Shankarrao Patil , Harsha J. Sarode , Abhijit Banubakode , Prakash Tukaram Patil , Nutan Patil , Vijayakumar Varadarajan , Deshinta Arrova Dewi
Human activity recognition (HAR) plays a vital role in reconfigurable and embedded systems deployed in smart environments and healthcare monitoring applications. However, because data collected on end devices must be transmitted to more powerful platforms for inference, the potential leakage of sensitive user attributes raises serious privacy issues. Addressing this challenge is particularly crucial for resource-constrained embedded systems, where energy efficiency is a chief design requirement. This paper presents an energy-aware, privacy-preserving HAR framework suited to low-power embedded platforms. A machine learning–based camouflaged signal segmentation technique is proposed to transform the collected sensor data, eliminating sensitive information while preserving activity-relevant features. To characterize the trade-off between energy consumption and recognition accuracy, the model's parameters are extensively tuned through careful optimization. Experimental evaluations demonstrate that the method significantly reduces the inference of sensitive attributes such as gender, age, height, and weight, with minimal impact on HAR accuracy. Furthermore, the system supports configurable trade-offs between energy usage and classification performance, making it suitable for implementation on low-power embedded devices.
Volume: 15
Issue: 1
Page: 119-129
Publish at: 2026-03-01

Learning customer preference dynamics using rank-aware matrix factorization and enhanced collaborative filtering model

10.11591/ijres.v15.i1.pp159-169
Sathya Sundar , Eswara Thevar Ramaraj , Padmapriya Arumugam
Understanding how customer preferences evolve over time is a critical challenge for modern recommender systems operating in large-scale, implicit-feedback–driven e-commerce environments. The primary objective of this study is to develop a unified and interpretable framework that simultaneously models ranking-based preferences, collaborative similarity structures, and temporal behavioral evolution of customers. To achieve this, the study proposes a novel hybrid framework that integrates rank-aware matrix factorization (RA-MF), enhanced collaborative filtering (CF), K-means clustering, and temporal cluster migration matrices (TCMM) for learning customer preference dynamics. The ranking factorization model effectively captures implicit signals such as purchase frequency and recency decay, while CF provides complementary similarity-based insights. K-means segmentation reveals diverse customer personas, including high-value loyal buyers and exploratory shoppers, with significant differences in spending and purchasing behavior. Quantitative evaluations demonstrate strong performance improvements, with 11–18% gains in NDCG@10, 10–15% increases in Precision@10, and notable reductions in root mean square error (RMSE) and mean absolute error (MAE). The results highlight the framework’s ability to deliver both accurate recommendations and interpretable behavioral insights, offering valuable contributions to personalized marketing, customer retention, and data-driven e-commerce strategy.
Volume: 15
Issue: 1
Page: 159-169
Publish at: 2026-03-01
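
The abstract does not state RA-MF's exact objective, so the sketch below uses Bayesian personalized ranking (BPR), a standard rank-aware matrix-factorization loss for implicit feedback, as a stand-in: each SGD step pushes a purchased item above a sampled unpurchased one.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 100, 50, 16                    # toy problem sizes (assumed)
purchases = {u: set(rng.choice(n_items, 5, replace=False)) for u in range(n_users)}

U = rng.normal(0, 0.1, (n_users, k))                 # user latent factors
V = rng.normal(0, 0.1, (n_items, k))                 # item latent factors
lr, reg = 0.05, 0.01

for _ in range(20000):                               # SGD over (u, i, j) triples
    u = rng.integers(n_users)
    i = rng.choice(list(purchases[u]))               # positive (purchased) item
    j = rng.integers(n_items)                        # sampled negative item
    while j in purchases[u]:
        j = rng.integers(n_items)
    x = U[u] @ (V[i] - V[j])                         # preference margin
    g = 1.0 / (1.0 + np.exp(x))                      # BPR gradient weight
    U[u] += lr * (g * (V[i] - V[j]) - reg * U[u])
    V[i] += lr * (g * U[u] - reg * V[i])
    V[j] += lr * (-g * U[u] - reg * V[j])

top10 = np.argsort(-(U[0] @ V.T))[:10]               # ranked list for user 0
print("top-10 items for user 0:", top10)
```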

Portable verification IP: a UVM-based approach for reusable verification environments in complex IP and SoC verification

10.11591/ijres.v15.i1.pp78-85
Harinagarjun Chippagi , Vangala Sumalatha
Reusable and portable verification techniques are becoming more and more necessary due to the growing complexity of system-on-chip (SoC) designs and the need for quick time-to-market. In order to facilitate cross-project reusability, automation, and scalability in SoC verification, this paper introduces a portable verification IP (PVIP) framework based on the universal verification methodology (UVM). The suggested framework improves coverage efficiency and verification portability across heterogeneous platforms by integrating UVM with the portable stimulus standard (PSS). In comparison to traditional UVM-based methods, experimental evaluation shows that the PVIP framework achieves 92% functional coverage, enhances reusability by 87%, and shortens verification cycle time by 27%. These findings demonstrate how PVIP can greatly speed up verification closure, minimize engineering effort, and assist in the development of the next generation of intelligent, scalable, and industry-ready SoC verification environments.
Volume: 15
Issue: 1
Page: 78-85
Publish at: 2026-03-01

Energy-efficient reconfigurable architectures for Edge AI in healthcare IoT: trends, challenges, and future directions

10.11591/ijres.v15.i1.pp1-20
Tole Sutikno , Aiman Zakwan Jidin , Lina Handayani
The integration of Edge artificial intelligence (AI) with internet of things (IoT) technologies is transforming healthcare applications, including wearable monitoring, telemedicine, and implantable medical devices, by enabling low-latency and intelligent data processing close to patients. However, stringent requirements on energy efficiency, reliability, real-time responsiveness, and data privacy continue to hinder scalable and long-term deployment in resource-constrained healthcare environments. Energy-efficient reconfigurable architectures—such as field-programmable gate arrays (FPGAs), coarse-grained reconfigurable arrays (CGRAs), and emerging memory-centric and heterogeneous platforms—have emerged as promising solutions to address these challenges by balancing flexibility, adaptability, and power efficiency. This review systematically examines recent advances in reconfigurable Edge AI architectures for healthcare IoT, highlighting key trends in hardware–software co-design, AI-assisted design automation, memory-centric optimization, and domain-specific overlays. It further identifies critical challenges, including energy–performance trade-offs, runtime reconfiguration overheads, security and privacy vulnerabilities, limited standardization, and reliability concerns in dynamic clinical settings. Finally, future research directions are outlined, emphasizing self-optimizing and context-aware architectures, secure and trustworthy reconfiguration mechanisms, unified frameworks for heterogeneous healthcare workloads, and sustainable, carbon-aware edge computing. Collectively, this review positions energy-efficient reconfigurable architectures as a foundational enabler for next-generation Edge AI in IoT-enabled healthcare systems.
Volume: 15
Issue: 1
Page: 1-20
Publish at: 2026-03-01

FPGA implementation of a coprocessor architecture for random data generation and encryption

10.11591/ijres.v15.i1.pp21-30
Manoj Kumar
Coprocessors are designed to perform specific tasks to enhance system performance and speed. Information security is the main focus in internet of things (IoT), cryptography, and cybersecurity applications. In this work, a coprocessor architecture is designed to generate 4 bits of random data and perform encryption. The coprocessor architecture uses true random number generator (TRNG) and pseudo-random number generator (PRNG) architectures to generate random data. A modified linear feedback shift register (LFSR)-based PRNG and modified transition effect ring oscillator (TERO)- and ring oscillator-based TRNG architectures are designed and implemented for performing encryption. A serial-in-parallel-out (SIPO) shift register circuit is used to generate the 4-bit random data. A 15-bit instruction word is assigned to the coprocessor architecture to perform its task. The coprocessor architecture is designed using VHSIC Hardware Description Language (VHDL) and implemented on an Artix-7 field programmable gate array (FPGA). All simulation and synthesis results of the proposed coprocessor architecture are obtained with the Xilinx Vivado 2015.2 tool. The coprocessor architecture's efficiency (throughput (Mbps)/LUTs) is 2.31, and it operates at a 100 MHz clock.
Volume: 15
Issue: 1
Page: 21-30
Publish at: 2026-03-01
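
A software sketch of the PRNG idea (the tap positions are an assumption, not the paper's modified design): a maximal-length 15-bit Fibonacci LFSR whose serial output is grouped into 4-bit words, mirroring the SIPO shift register.

```python
def lfsr_stream(seed=0b101010101010101, taps=(15, 14)):
    """15-bit Fibonacci LFSR, x^15 + x^14 + 1 (a known primitive polynomial)."""
    state = seed
    while True:
        bit = ((state >> (taps[0] - 1)) ^ (state >> (taps[1] - 1))) & 1
        state = ((state << 1) | bit) & 0x7FFF      # shift, insert feedback bit
        yield bit

gen = lfsr_stream()
words = []
for _ in range(8):                                 # eight 4-bit random words
    nibble = 0
    for _ in range(4):                             # serial-in, parallel-out
        nibble = (nibble << 1) | next(gen)
    words.append(nibble)
print([f"{w:04b}" for w in words])
```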

Synaptic shield: fusion of ResNeXt-50 and long short-term memory for enhanced deepfake detection

10.11591/ijres.v15.i1.pp224-235
Amit Mishra , Prajwal Chinchmalatpure , Govinda B. Sambare , Viomesh Kumar Singh , Atul Gulabrao Pawar , Rahul Prakash Mirajkar , Priyanka K. Takalkar , Kuldeep Vayadande
Recent developments in deepfakes have created considerable anxiety about the authenticity of digital content, calling for detection mechanisms that work accordingly. This paper presents Synaptic Shield, an innovative deep learning (DL) framework customized to detect deepfake alterations with high precision. It employs convolutional neural networks (CNNs) together with temporal feature extraction modules to assess spatial and motion indicators in video data. High-level preprocessing pipelines combined with a confidence scoring mechanism make Synaptic Shield adaptive to manipulation techniques such as FaceSwap and DeepFake. The model surpasses other deepfake detection models, reaching a high accuracy of 98.3%. These results are based on exhaustive experimentation on standard datasets such as FaceForensics++, the DeepFake detection challenge (DFDC), and Celeb DeepFake (Celeb-DF). Synaptic Shield achieves outstanding results, maintaining a confidence score commensurate with its precision and reliability. Its capacity to accommodate various manipulation techniques and levels of video quality indicates robustness, offering an effective method for ensuring integrity in digital media. The work is an important step forward in addressing the problems created by deepfake technologies.
Volume: 15
Issue: 1
Page: 224-235
Publish at: 2026-03-01
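
A minimal PyTorch sketch of the fusion named in the title: ResNeXt-50 extracts per-frame features, an LSTM aggregates them over time, and a linear head scores the clip. The hidden size and head layout are assumptions; the paper's exact configuration is not given in the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models

class SynapticShieldSketch(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        backbone = models.resnext50_32x4d()        # per-frame spatial CNN
        backbone.fc = nn.Identity()                # keep 2048-d features
        self.cnn = backbone
        self.lstm = nn.LSTM(2048, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)           # real / fake logits

    def forward(self, clips):                      # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))      # (B*T, 2048)
        feats = feats.view(b, t, -1)
        _, (h, _) = self.lstm(feats)               # last hidden state
        return self.head(h[-1])                    # (B, 2)

logits = SynapticShieldSketch()(torch.randn(2, 8, 3, 224, 224))
print(logits.shape)                                # torch.Size([2, 2])
```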

Multi-modal sensor integration in chicken-fish-vegetable greenhouse agriculture based on internet of things

10.11591/ijres.v15.i1.pp138-149
Muhammad Risal , Pujianti Wahyuningsih , Suwatri Jura , Irmawaty Iskandar , Abdul Jalil
Integrated chicken-fish-vegetable farming is a type of agriculture that combines the benefits of all three within a single ecosystem. The objective of this study is to develop a control and monitoring system for integrated greenhouse-based chicken-fish-vegetable farming using the internet of things (IoT). The monitoring method employs the integration of multi-modal sensors in the greenhouse, consisting of a camera, water level, DHT11, pH, TDS, DS18B20, light dependent resistor (LDR), and infrared (IR) sensors. The camera functions as a visual monitoring tool for the farm, the water level sensor detects hydroponic water levels, the DHT11 measures air temperature and humidity, the pH sensor measures water acidity, the TDS sensor detects water nutrients, the DS18B20 measures pond water temperature, the LDR detects weather conditions, and the IR sensor measures sunlight intensity. The processing units used to control the sensors and output devices are the ESP32 and Raspberry Pi. The system outputs include a relay for pump control, an LCD for text messages, and IoT information visualization using the Blynk platform. The results of this study demonstrate that the multi-modal sensor device can effectively monitor the conditions of integrated greenhouse-based chicken-fish-vegetable farming, achieving an accuracy of up to 96%, with an average data transmission time of 6 seconds through the Blynk IoT platform.
Volume: 15
Issue: 1
Page: 138-149
Publish at: 2026-03-01
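
A schematic Python polling loop showing how such a multi-modal node might be structured. Every read_* function and send_to_blynk are hypothetical stubs standing in for the real sensor drivers and the Blynk client; the pump threshold is illustrative.

```python
import time, random

def read_dht11():    return 29.0 + random.random(), 70.0   # air temp C, RH %
def read_ph():       return 6.4 + 0.2 * random.random()    # water acidity
def read_tds():      return 850.0                          # nutrients, ppm
def read_ds18b20():  return 27.5                           # pond temp, C
def read_level():    return 0.82                           # tank level, 0..1

def send_to_blynk(payload):                                # hypothetical stub
    print("Blynk <-", payload)

PUMP_ON_BELOW = 0.75                                       # assumed threshold

for _ in range(10):                                        # ten reporting cycles
    data = {"air_t": read_dht11()[0], "ph": read_ph(),
            "tds": read_tds(), "pond_t": read_ds18b20(),
            "level": read_level()}
    data["pump"] = data["level"] < PUMP_ON_BELOW           # relay decision
    send_to_blynk(data)
    time.sleep(6)                                          # ~6 s reporting cadence
```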

Deployment and evaluation of facial expression recognition on Android and Temi V3 in controlled settings

10.11591/ijres.v15.i1.pp42-53
Mohamad Hariz Nazamid , Rozita Jailani , Nur Khalidah Zakaria , Anwar P. P. Abdul Majeed
Facial expression recognition (FER) is vital for improving human-robot interaction (HRI). This study presents the deployment and evaluation of an optimized FER model on Android devices, specifically tested on the Temi V3 robot in controlled environments. Trained using the FER+ and CK+ datasets and optimized with TensorFlow Lite (TFLite) and MobileNetV2, the model achieved a validation accuracy of 92.32%. Its performance was assessed on the Temi V3 robot and a Samsung A52 smartphone, focusing on CPU usage, memory, and power consumption. Cross-device compatibility and real-time performance challenges were addressed through model quantization and thread optimization. Real-time testing on the Temi V3 showed an overall accuracy of 82.28%, with emotion-specific accuracies ranging from 46.19% to 92.28%. This study offers practical insights for optimizing FER systems across Android platforms, with potential applications in education, healthcare, and customer service. The results support the feasibility of implementing FER models as backends in Android applications, enabling more intuitive and responsive HRI. Future work will focus on improving model efficiency for lower-end devices and exploring on-device learning techniques to boost accuracy in diverse real-world environments.
Volume: 15
Issue: 1
Page: 42-53
Publish at: 2026-03-01
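
A minimal sketch of on-device inference with a converted TFLite model, using TensorFlow's standard interpreter API; the model filename is hypothetical, and num_threads mirrors the thread optimization the abstract mentions.

```python
import numpy as np
import tensorflow as tf

# Load the converted model; num_threads tunes CPU parallelism on the device
interp = tf.lite.Interpreter(model_path="fer_mobilenetv2.tflite", num_threads=4)
interp.allocate_tensors()
inp = interp.get_input_details()[0]
out = interp.get_output_details()[0]

# Stand-in face crop shaped to whatever the model expects
face = np.random.rand(*inp["shape"]).astype(np.float32)
interp.set_tensor(inp["index"], face)
interp.invoke()
probs = interp.get_tensor(out["index"])[0]
print("predicted expression class:", int(np.argmax(probs)))
```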

FPGA implementation of high-performance Huffman encoder for image processing applications

10.11591/ijres.v15.i1.pp68-77
Masood Ahmad Mahammad , Appala Raju Uppala , Shaik Mazhar Hussain , Anusha Marouthu
An optimized Huffman encoder is essential in all applications where it is necessary to achieve the best performance, such as audio coding, data encryption, data compression, and image processing applications. This article presents a space-optimized encoding scheme to maximize performance and minimize latency in Dual Huffman encoding. The proposed approach employs dynamic tree selection using Dual Huffman encoding. A Dual Huffman code with dynamic tree selection can be run in parallel to support high-throughput applications. The resulting design optimally creates the Huffman dual encoding. This codeword table is based on a dynamic tree generation and selection algorithm, leading to a faster encoding process with lower latency. The architecture was developed using Xilinx Vivado 2023.2 and tested on three different field programmable gate array (FPGA) platforms (Zynq 7045, Zynq 7100, and Kria KV260 AI Vision board). A performance comparison between devices demonstrates that the Kria KV260 had the lowest latency (100 ns), as opposed to the Zynq 7045 and Zynq 7100, which had latencies of 200 ns and 150 ns, respectively. These results elucidate the scalability of the architecture and its suitability for real-time image compression. When implemented on the Kria KV260, the dynamic tree selection-based Dual Huffman encoder is capable of fast, parallel image compression, making it a good candidate for advanced FPGA-based image processing systems in internet of things (IoT) applications.
Volume: 15
Issue: 1
Page: 68-77
Publish at: 2026-03-01
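
A software reference of the dual-table idea. The dynamic selection rule below (pick whichever prebuilt Huffman table yields the shorter bitstream for a block) is an assumption for illustration; the paper's hardware selection logic is not detailed in the abstract.

```python
import heapq
from collections import Counter

def huffman_table(data):
    """Classic heap-based Huffman code construction (symbol -> bitstring)."""
    heap = [[w, [sym, ""]] for sym, w in Counter(data).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]: pair[1] = "0" + pair[1]
        for pair in hi[1:]: pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

def encode_block(block, tables):
    """Dynamic tree selection (assumed rule): keep the shortest encoding."""
    best = None
    for tid, tab in enumerate(tables):
        if not all(s in tab for s in block):           # table must cover block
            continue
        bits = "".join(tab[s] for s in block)
        if best is None or len(bits) < len(best[1]):
            best = (tid, bits)
    return best                                        # (table id, bitstream)

img_rows = [bytes([10, 10, 12, 200]), bytes([200, 200, 12, 10])]
tables = [huffman_table(r) for r in img_rows]          # two candidate trees
for row in img_rows:
    print(encode_block(row, tables))
```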

Advanced MRI-based deep learning for brain tumors: a five-year review of oncology–radiology–AI synergy

10.11591/ijres.v15.i1.pp214-223
Shrisha Maddur Ramesh , Chitrapadi Gururaj
Rapid advancements in computer vision and machine learning have significantly revolutionized medical imaging; one such application is brain tumor detection and classification. Deep learning has emerged as a powerful tool, offering exceptional capabilities in handling complex medical datasets. However, current systems still face challenges in achieving optimal accuracy, robustness, and clinical interpretability. This study presents a comprehensive survey of brain tumor segmentation, classification, and detection techniques using deep learning, metaheuristic, and hybrid approaches. Detailed quantitative evaluations of conventional and emerging methods are conducted by examining key performance metrics, dataset characteristics, strengths, and limitations. This review highlights recent breakthroughs by analyzing state-of-the-art techniques from the past five years, along with research gaps and potential directions for future advancements. These findings provide insights into novel architectures, optimization strategies, and clinical applications, ultimately guiding researchers towards more robust, interpretable, and clinically impactful artificial intelligence (AI)-driven solutions for brain tumor analysis.
Volume: 15
Issue: 1
Page: 214-223
Publish at: 2026-03-01

Heart disease prediction using hybrid deep learning and medical imaging with wavelet-based feature extraction

10.11591/ijres.v15.i1.pp183-193
Chairmadurai Palanisamy , Kavitha Pachamuthu , Arun Kumar Ramamoorthy
Heart disease prediction is based on patient medical information, which can include medical images as well as the results of an electrocardiogram (ECG) conducted to determine the risk of developing heart disease. Hybrid deep learning (DL) algorithms are developed from historical data to identify trends related to cardiovascular disease (CVD). This paper offers a new method of heart disease prediction that combines high-quality image processing and hybrid DL to enhance prediction effectiveness and avoid the shortcomings of current approaches. First, medical images such as ECG images are pre-processed with a Butterworth adaptive 2D wavelet filter, which ensures maximal noise reduction while maintaining spatial and frequency information. A Gabor wavelet-based feature extraction technique is applied to extract meaningful patterns, including both spatial and frequency domain information, which is essential for detecting heart-related anomalies. The resulting features are then classified using both convolutional neural networks (CNN) and long short-term memory (LSTM) networks to make reliable and precise predictions of heart disease. The model achieves an accuracy of 92.4%, precision of 91.2%, recall of 93.5%, and an F1-score of 91.0%, demonstrating significant reliability and generalization compared to traditional approaches.
Volume: 15
Issue: 1
Page: 183-193
Publish at: 2026-03-01
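
A sketch of the Gabor feature-extraction stage only, using OpenCV's standard Gabor kernel; the filter-bank settings (orientations, wavelengths, kernel size) are assumptions, and the random array stands in for a preprocessed ECG image.

```python
import cv2
import numpy as np

img = np.random.rand(128, 128).astype(np.float32)   # stand-in ECG image

features = []
for theta in np.arange(0, np.pi, np.pi / 4):        # 4 orientations
    for lambd in (4.0, 8.0):                        # 2 wavelengths
        # ksize, sigma, theta, lambda, gamma, psi
        kern = cv2.getGaborKernel((15, 15), 3.0, theta, lambd, 0.5, 0)
        resp = cv2.filter2D(img, cv2.CV_32F, kern)  # filter response map
        features += [resp.mean(), resp.std()]       # simple response statistics

print(np.array(features).shape)                     # 16-dimensional descriptor
```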

An edge AIoT system for non-invasive biological indicators estimation and continuous health monitoring using PPG and ECG signals

10.11591/ijres.v15.i1.pp97-108
Hung K. Nguyen , Manh V. Pham
This paper presents the design and implementation of an artificial intelligence of things (AIoT)-based system that integrates deep learning and edge computing for real-time non-invasive health monitoring, focusing on the estimation of mean arterial pressure (MAP) alongside vital parameters such as heart rate (HR), blood oxygen saturation (SpO₂), and body temperature. Photoplethysmography (PPG) and electrocardiography (ECG) signals are acquired using low-power MAX30102 and AD8232 sensors, preprocessed with lightweight digital filters, and processed through a 1D convolutional neural network (CNN) deployed on a SEEED Studio XIAO ESP32S3 microcontroller. The model, trained using the cuff-less blood pressure estimation dataset, achieved a mean absolute error (MAE) of 2.51 mmHg on the embedded microcontroller and 2.93 mmHg when validated against a standard blood pressure monitor. Experimental results demonstrate high accuracy, with MAE below 5 mmHg, thereby meeting the AAMI and British Hypertension Society (BHS) Grade A standards for blood pressure measurement. The system achieves real-time inference with an average latency of 16 ms and efficient memory utilization, ensuring suitability for wearable and embedded devices. Physiological data are transmitted via Wi-Fi to a Firebase cloud platform and visualized through a cross-platform mobile application. The proposed system demonstrates strong potential for remote healthcare applications, particularly in continuous monitoring and early health risk detection.
Volume: 15
Issue: 1
Page: 97-108
Publish at: 2026-03-01
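
An architectural sketch of a 1D CNN regressing MAP from a two-channel PPG+ECG window; all layer sizes and the 125 Hz / 5 s window are assumptions, since the abstract does not specify the network.

```python
import torch
import torch.nn as nn

model = nn.Sequential(                    # input: (B, 2, 625) ~ 5 s @ 125 Hz
    nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, 1),                     # predicted MAP in mmHg
)
window = torch.randn(8, 2, 625)           # batch of synchronized PPG+ECG windows
map_mmHg = model(window)                  # (8, 1)
print(map_mmHg.shape)
```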

Discover Our Library

Embark on a journey through our expansive collection of articles and let curiosity lead your path to innovation.
