Articles

Access the latest knowledge in applied science, electrical engineering, computer science and information technology, education, and health.

25,016 Article Results

A novel approach for generating physiological interpretations through machine learning

10.11591/ijeecs.v38.i2.pp1339-1352
Md. Jahirul Islam , Md. Nasim Adnan , Md. Moradul Siddique , Romana Rahman Ema , Md. Alam Hossain , Syed Md. Galib
Predicting blood glucose trends and implementing suitable interventions are crucial for managing diabetes. Modern sensor technologies enable the collection of continuous glucose monitoring (CGM) data along with diet and activity records. However, machine learning (ML) techniques are often used for glucose level predictions without explicit physiological interpretation. This study introduces a method to extract physiological insights from ML-based glucose forecasts using constrained programming. A feed-forward neural network (FFNN) is trained for glucose prediction using CGM data, diet, and activity logs. Additionally, a physiological model of glucose dynamics is optimized in tandem with FFNN forecasts using sequential quadratic programming and individualized constraints. Comparisons between the constrained response and ML predictions show higher root mean square error (RMSE) in certain intervals for the constrained approach. Nevertheless, Clarke error grid (CEG) analysis indicates acceptable accuracy for the constrained method. This combined approach merges the generalization capabilities of ML with physiological insights through constrained optimization.
Volume: 38
Issue: 2
Page: 1339-1352
Publish at: 2025-05-01

Intrusion detection in clustering wireless network by applying extreme learning machine with deep neural network algorithm

10.11591/ijeecs.v38.i2.pp887-896
Palaniraj Rajidurai Parvathy , Satheeshkumar Sekar , Bharat Tidke , Rudraraju Leela Jyothi , Venugopal Sujatha , Madappa Shanmugathai , Subbiah Murugan
Nowadays, intrusion detection systems (IDSs) are increasingly regarded as an essential safeguard for the security of wireless networks. In wireless networks, the exponential growth of traffic reduces the effectiveness of conventional IDSs at identifying hostile intrusions, because the sheer volume of traffic leaves fewer opportunities to spot potential threats. We propose an extreme learning machine with deep neural network (DNN) algorithm-based intrusion detection in clustering (EIDC) wireless networks. The main objective of this article is to detect intrusions efficiently and minimize the false alarm rate. The mechanism uses the extreme learning machine (ELM) with a deep neural network algorithm to optimize the input weights and hidden-node biases and deduce the network output weights. Simulation results illustrate that the EIDC mechanism not only assures better detection accuracy but also considerably reduces the intrusion detection time and lowers the false alarm rate.
Volume: 38
Issue: 2
Page: 887-896
Publish at: 2025-05-01
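The core ELM idea the abstract relies on can be sketched independently of the paper: hidden-layer weights are random and never trained, and only the linear output weights are fitted. The sketch below is a minimal illustration under stated assumptions (toy data, hypothetical hyperparameters, and a gradient-descent least-squares fit to stay dependency-free), not the authors' EIDC network:

```python
import math
import random

# Sketch of an extreme learning machine (ELM): the hidden layer gets
# random, untrained weights; only the linear output layer is fitted.
rnd = random.Random(0)

def hidden_features(x, W, b):
    # tanh activations of the fixed random hidden layer
    return [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

def elm_fit(X, y, hidden=16, lr=0.05, epochs=300):
    dim = len(X[0])
    W = [[rnd.gauss(0, 1) for _ in range(dim)] for _ in range(hidden)]
    b = [rnd.gauss(0, 1) for _ in range(hidden)]
    H = [hidden_features(x, W, b) for x in X]
    beta = [0.0] * hidden                  # output weights, the only trained part
    for _ in range(epochs):                # least-squares fit via gradient descent
        for h, target in zip(H, y):
            err = sum(bj * hj for bj, hj in zip(beta, h)) - target
            for j in range(hidden):
                beta[j] -= lr * err * h[j]
    return W, b, beta

# Toy "traffic" data: label 1 when the feature sum is positive.
X = [[rnd.uniform(-1, 1) for _ in range(4)] for _ in range(120)]
y = [1.0 if sum(x) > 0 else 0.0 for x in X]
W, b, beta = elm_fit(X, y)
preds = [1.0 if sum(bj * hj for bj, hj in zip(beta, hidden_features(x, W, b))) > 0.5
         else 0.0 for x in X]
accuracy = sum(p == t for p, t in zip(preds, y)) / len(y)
```

Because the hidden layer is fixed, fitting reduces to a linear least-squares problem, which is what makes ELM training fast compared with full backpropagation.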

A variant of particle swarm optimization in cloud computing environment for scheduling workflow applications

10.11591/ijeecs.v38.i2.pp1392-1401
Ashish Tripathi , Rajnesh Singh , Suveg Moudgil , Pragati Gupta , Nitin Sondhi , Tarun Kumar , Arun Pratap Srivastava
Cloud computing offers on-demand access to shared resources, with user costs based on resource usage and execution time. To attract users, cloud providers need efficient schedulers that minimize these costs. Achieving cost minimization is challenging because both execution and data transfer costs must be considered, and existing scheduling techniques often fail to balance them effectively. This study proposes a variant of the particle swarm optimization algorithm (VPSO) for scheduling workflow applications in a cloud computing environment. The approach aims to reduce both execution and communication costs. We compared VPSO with several PSO variants, including inertia-weighted PSO, Gaussian disturbed particle swarm optimization (GDPSO), dynamic PSO, and dynamic adaptive particle swarm optimization with self-supervised learning (DAPSO-SSL). Results indicate that VPSO generally offers significant cost reductions and efficient workload distribution across resources, although there are specific scenarios where other algorithms perform better. VPSO provides a robust and cost-effective solution for cloud workflow scheduling, enhancing task-resource mapping and reducing costs compared to existing methods. Future research will explore further enhancements and additional PSO variants to optimize cloud resource management.
Volume: 38
Issue: 2
Page: 1392-1401
Publish at: 2025-05-01
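The baseline PSO machinery that VPSO and the compared variants all build on can be sketched in a few lines. The cost function, bounds, and coefficients below are illustrative assumptions, not the paper's scheduling model:

```python
import random

# Minimal particle swarm optimization (PSO) loop minimizing a toy
# cost function over continuous decision variables.
def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    rnd = random.Random(seed)
    pos = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best positions
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy cost: sum of squares, minimum 0 at the origin.
best, best_cost = pso(lambda x: sum(v * v for v in x), dim=3)
```

In a real scheduler, the cost function would encode execution and data-transfer costs of a candidate task-to-resource mapping; variants like VPSO typically change the velocity update or the adaptation of `w`, `c1`, and `c2`.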

Boosting stroke prediction with ensemble learning on imbalanced healthcare data

10.11591/ijeecs.v38.i2.pp1137-1148
Outmane Labaybi , Mohamed Bennani Taj , Khalid El Fahssi , Said El Garouani , Mohamed Lamrini , Mohamed El Far
Detecting strokes at an early stage is crucial for preventing serious health issues and potentially saving lives. Predicting strokes accurately can be challenging, especially when working with imbalanced healthcare datasets. In this article, we propose a thorough method combining machine learning (ML) algorithms and ensemble learning techniques to improve the accuracy of stroke prediction. Our approach includes preprocessing methods for tackling imbalanced data, feature engineering for extracting key information, and different ML algorithms such as random forest (RF), decision tree (DT), and gradient boosting (GBoost) classifiers. Through ensemble learning, we combine the advantages of various models to generate stronger and more reliable predictions. Through thorough tests and assessments on a variety of datasets, we demonstrate that our approach handles imbalanced stroke datasets and greatly enhances prediction accuracy. We conducted comprehensive testing and validation to ensure the reliability and applicability of our method, improving the accuracy of stroke prediction and supporting healthcare planning and resource-allocation strategies.
Volume: 38
Issue: 2
Page: 1137-1148
Publish at: 2025-05-01
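The ensemble step described above, combining the outputs of several classifiers into one prediction, can be illustrated with a minimal hard-voting sketch. The per-model predictions below are hypothetical, not results from the paper:

```python
from collections import Counter

# Hard majority voting: each sample's final label is the most common
# label among the individual models' predictions.
def majority_vote(predictions):
    """predictions: list of per-model label lists, all the same length."""
    n = len(predictions[0])
    return [Counter(model[i] for model in predictions).most_common(1)[0][0]
            for i in range(n)]

# Three hypothetical models voting on five patients (1 = stroke risk).
rf_pred = [1, 0, 1, 1, 0]
dt_pred = [1, 0, 0, 1, 0]
gb_pred = [0, 0, 1, 1, 1]
combined = majority_vote([rf_pred, dt_pred, gb_pred])
```

With an odd number of binary voters there are no ties; soft voting (averaging predicted probabilities) is a common alternative when models expose calibrated scores.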

Multi-camera multi-person tracking with DeepSORT and MySQL

10.11591/ijeecs.v38.i2.pp997-1009
Shashank Horakodige Raghavendra , Yashasvi Sorapalli , Nehashri Poojar S. V. , Hrithik Maddirala , Ramakanth Kumar P. , Azra Nasreen , Neeta Trivedi , Ashish Agarwal , Sreelakshmi K.
Multi-camera multi-object tracking refers to the process of simultaneously tracking numerous objects using a network of connected cameras. Constructing an accurate depiction of an object's movements requires analyzing video data from many camera feeds, detecting objects of interest, and associating them across camera perspectives. The objective is to accurately estimate the trajectories of the objects as they navigate through a monitored area. It has several uses, including surveillance, robotics, self-driving cars, and augmented reality. The current version of the DeepSORT object tracking algorithm does not account for errors caused by occlusion or for deployments with multiple cameras. In this paper, DeepSORT is extended with new states to improve tracking performance in scenarios where objects are occluded in the presence of multiple cameras. Track information is communicated across cameras with the help of a database. The proposed system performs better when targets are occluded, whether by other objects or by people.
Volume: 38
Issue: 2
Page: 997-1009
Publish at: 2025-05-01

Word embedding for contextual similarity using cosine similarity

10.11591/ijeecs.v38.i2.pp1170-1180
Yessy Asri , Dwina Kuswardani , Amanda Atika Sari , Atikah Rifdah Ansyari
Perspectives on technology often have similarities in certain contexts, such as information systems and informatics engineering. The opinion data come from the Quora application, limited to the last 5 years. This research aims to apply IndoBERT, a variant of the bidirectional encoder representations from transformers (BERT) model optimized for the Indonesian language, to the classification of information system (IS) and information technology (IT) topics. The 414 original samples were augmented to 828 using the synonym replacement method. Data augmentation evaluates model performance by substituting synonyms and rearranging text while maintaining meaning and structure. The approach labels the opinion text based on the cosine similarity of token embeddings from the IndoBERT model; the IndoBERT model is then applied to classify the reviews. The experimental results show that using IndoBERT to classify IS and IT topics based on contextual similarity achieves 90% accuracy based on the confusion matrix. These positive results show the great potential of transformer-based language models, such as IndoBERT, for analyzing comments and related topics in Indonesian.
Volume: 38
Issue: 2
Page: 1170-1180
Publish at: 2025-05-01
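The labeling step rests on plain cosine similarity between embedding vectors, which takes only a few lines. The 4-dimensional embeddings and the 0.8 threshold below are illustrative assumptions, not values from the paper (real IndoBERT embeddings have hundreds of dimensions):

```python
import math

# Cosine similarity: dot product of two vectors divided by the
# product of their Euclidean norms; 1.0 means identical direction.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical sentence embeddings for two opinions.
emb_is = [0.2, 0.8, 0.1, 0.5]
emb_it = [0.3, 0.7, 0.2, 0.4]
sim = cosine_similarity(emb_is, emb_it)
label = "similar" if sim >= 0.8 else "different"   # illustrative threshold
```

Because cosine similarity ignores vector magnitude, it compares only the direction of the embeddings, which is why it is a common choice for contextual-similarity labeling.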

Performance of rocket data communication system using wire rope isolator on sounding rocket RX

10.11591/ijeecs.v38.i2.pp783-793
Kandi Rahardiyanti , Shandi Prio Laksono , Khaula Nurul Hakim , Yuniarto Wimbo Nugroho , Andreas Prasetya Adi , Salman Salman , Kurdianto Kurdianto
The rocket experiment (RX) ballistic rocket requires a reliable data communication system capable of withstanding intense vibrations and shocks during flight. This study investigates the application of wire rope isolators (WRI) to damp mechanical disturbances and protect the rocket's communication system; in this experiment, the WRI were installed in the compression position. A series of vibration tests was conducted using 4 WRI installed in the rocket's 30 kg data communication compartment, covering frequencies between 4 Hz and 1500 Hz with accelerations of 8.37 g to 20.37 g. Higher g readings on the test-object sensor compared to the vibration-machine readings are usually caused by phenomena such as resonance, differences in dynamic response, non-linear behavior, sensor placement, and swing effects as the vibration machine oscillates; this is a natural mechanical response to external vibration during testing. In flight tests, the RX rocket experienced accelerations of 8 g to 9.3 g. The results showed that the WRI dampers are effective in protecting the data communication system and ensuring uninterrupted transmission of flight data to the ground control station (GCS).
Volume: 38
Issue: 2
Page: 783-793
Publish at: 2025-05-01

A deep learning approach to detect DDoS flooding attacks on SDN controller

10.11591/ijeecs.v38.i2.pp1245-1255
Abdullah Ahmed Bahashwan , Mohammed Anbar , Selvakumar Manickam , Taief Alaa Al-Amiedy , Iznan H. Hasbullah
Software-defined networking (SDN), integrated into technologies like internet of things (IoT), cloud computing, and big data, is a key component of the fourth industrial revolution. However, its deployment introduces security challenges that can undermine its effectiveness. This highlights the urgent need for security-focused SDN solutions, driving advancements in SDN technology. The absence of inherent security countermeasures in the SDN controller makes it vulnerable to distributed denial of service (DDoS) attacks, which pose a significant and pervasive threat. These attacks specifically target the controller, disrupting services for legitimate users and depleting its resources, including bandwidth, memory, and processing power. This research aims to develop an effective deep learning (DL) approach to detect such attacks, ensuring the availability, integrity, and consistency of SDN network functions. The proposed DL detection approach achieves 98.068% accuracy, 98.085% precision, 98.067% recall, 98.057% F1-score, 1.34% false positive rate (FPR), and 1.713% detection time.
Volume: 38
Issue: 2
Page: 1245-1255
Publish at: 2025-05-01
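The reported accuracy, precision, recall, F1-score, and FPR all derive from confusion-matrix counts in the standard way. The counts below are hypothetical, chosen only to illustrate the formulas, and do not reproduce the paper's figures:

```python
# Standard detection metrics from confusion-matrix counts
# (tp: attacks caught, fp: benign flagged, fn: attacks missed, tn: benign passed).
def detection_metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    fpr = fp / (fp + tn)
    return accuracy, precision, recall, f1, fpr

# Hypothetical counts for illustration only.
acc, prec, rec, f1, fpr = detection_metrics(tp=980, fp=13, fn=20, tn=987)
```

Reporting FPR alongside accuracy matters here because on imbalanced traffic a detector can score high accuracy while still flooding operators with false alarms.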

S-commerce: competition drives action through small medium enterprise top management

10.11591/ijeecs.v38.i2.pp1042-1050
Erwin Sutomo , Nur Shamsiah Abdul Rahman , Awanis Romli
This study investigates the factors influencing the continued use of S-commerce in small and medium enterprises (SMEs), focusing on the roles of top management (TM) support, competitive pressure (CP), facilitating conditions, and service quality. Data were collected from 341 SME owners and analyzed using structural equation modeling (SEM) in SmartPLS with a two-step approach. The findings indicate that TM support significantly impacts the continued use of S-commerce by influencing facilitating conditions and service quality, while CP affects TM behavior and usage continuity. However, the findings reveal that operational factors, such as infrastructure and service quality, play a more critical role in sustaining S-commerce engagement than external pressures. Facilitating conditions, in particular, were found to have a strong influence on service quality and platform engagement, underscoring the importance of technical and organizational resources. The study extends prior research by highlighting the interplay between internal and external drivers in fostering the continuous use of S-commerce, offering practical insights for SMEs and future research directions.
Volume: 38
Issue: 2
Page: 1042-1050
Publish at: 2025-05-01

Optimizing cloud tasks scheduling based on the hybridization of darts game hypothesis and beluga whale optimization technique

10.11591/ijeecs.v38.i2.pp1195-1207
Manish Chhabra , Rajesh E.
This paper presents the hybridization of two metaheuristic algorithms from different categories for optimizing task scheduling in a cloud environment. Hybridizing a game-based metaheuristic, the darts game optimizer (DGO), with a swarm-based metaheuristic, beluga whale optimization (BWO), yields a new algorithm called the "hybrid darts game hypothesis – beluga whale optimization" (hybrid DGH-BWO) algorithm. Task scheduling optimization in a cloud environment is a critical process and a non-deterministic polynomial (NP)-hard problem. Metaheuristics are high-level optimization algorithms designed to solve a wide range of complex optimization problems. Hybridizing the DGO and BWO algorithms combines the exploration and convergence capabilities of both, which increases the chance of finding higher-quality solutions than either algorithm alone. The proposed algorithm also improves overall efficiency, as hybrid DGH-BWO exploits the complementary strengths of DGO and BWO to converge to optimal solutions more quickly, and it introduces greater diversity into the search space, which helps avoid getting trapped in local optima.
Volume: 38
Issue: 2
Page: 1195-1207
Publish at: 2025-05-01

Assessment of cloud-free normalized difference vegetation index data for land monitoring in Indonesia

10.11591/ijeecs.v38.i2.pp845-853
Ahmad Luthfi Hadiyanto , Sukristiyanti Sukristiyanti , Arif Hidayat , Indri Pratiwi
Continuous land monitoring in Indonesia using optical remote sensing satellites is difficult due to frequent cloud cover. We therefore studied the feasibility of monthly land monitoring during the second half of 2019, using moderate resolution imaging spectroradiometer (MODIS) normalized difference vegetation index (NDVI) data from the Terra and Aqua satellites. We divided Indonesia into seven regions (Sumatra, Java, Kalimantan, Sulawesi, Nusa Tenggara, Maluku, and Papua) and examined NDVI data for each region. We also calculated the hourly cloud occurrence percentage using Himawari-8 data to compare cloud conditions at different acquisition times. This research shows that the Terra satellite provides more cloud-free pixels than Aqua, while combining data from both significantly increases the number of cloud-free NDVI pixels. Monthly monitoring is feasible in most regions because the cloudy areas cover less than 10%. However, in Sumatra, the cloudy area exceeded 10% in October 2019, so further data processing is needed to improve the feasibility of continuous monitoring there. This research concludes that monthly monitoring is still feasible in Indonesia, although some data require further processing. Using additional data from other satellites can be an option for further research.
Volume: 38
Issue: 2
Page: 845-853
Publish at: 2025-05-01
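NDVI itself is a simple ratio of near-infrared (NIR) and red surface reflectance. A minimal sketch, with illustrative reflectance values rather than actual MODIS pixel data:

```python
# NDVI = (NIR - red) / (NIR + red), ranging from -1 to 1.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

# Healthy vegetation reflects strongly in NIR, so NDVI approaches 1;
# water reflects more red than NIR, giving values at or below zero.
vegetation = ndvi(nir=0.5, red=0.08)
water = ndvi(nir=0.02, red=0.05)
```

Cloud-contaminated pixels corrupt both bands, which is why the study's cloud-free pixel counts directly determine how usable each month's NDVI composite is.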

Trends in machine learning for predicting personality disorder: a bibliometric analysis

10.11591/ijeecs.v38.i2.pp1299-1307
Heni Sulistiani , Admi Syarif , Warsito Warsito , Khairun Nisa Berawi
Over the last decade, research on artificial intelligence (AI) in the medical field has increased. However, unlike other disciplines, work on AI for personality disorders remains scarce. For this reason, we map the research using bibliometric analysis and build a visualization map with VOSviewer for AI applied to predicting personality disorders. We conducted a literature review using the systematic literature review (SLR) method, consisting of three stages: planning, implementation, and reporting. The evaluation involved 22 scientific articles on AI for predicting personality disorders indexed in Scopus Quartiles Q1–Q4, drawn from the Google Scholar database over the last five years (2018–2023). The bibliometric analysis reveals the most productive publishers, the evolution of scientific articles, and citation counts. In addition, VOSviewer's visualization of the most frequently occurring terms in abstracts and titles makes it easier for researchers to find novel and infrequently studied subjects in AI on personality disorders.
Volume: 38
Issue: 2
Page: 1299-1307
Publish at: 2025-05-01

Detection of COVID-19 based on cough sound and accompanying symptom using LightGBM algorithm

10.11591/ijeecs.v38.i2.pp940-949
Wiharto Wiharto , Annas Abdurrahman , Umi Salamah
Coronavirus disease 2019 (COVID-19) is an infectious disease whose diagnosis is carried out using antigen-antibody tests and reverse transcription polymerase chain reaction (RT-PCR). Apart from these two methods, several alternative early detection methods using machine learning have been developed. However, they still have limited accessibility, are invasive, and involve many parties in their implementation, which could even increase the risk of spreading COVID-19. Therefore, this research aims to develop a non-invasive early detection method that uses the LightGBM algorithm to detect COVID-19 from features extracted from cough sounds and from accompanying symptoms that can be identified independently. This research uses cough sound samples and symptom data from the Coswara dataset; cough sound features were extracted using the log mel-spectrogram, mel frequency cepstrum coefficient (MFCC), chroma, zero crossing rate (ZCR), and root mean square (RMS) methods. The cough sound features are then combined with symptom data to train the LightGBM model. The model trained on cough sound features and patient symptoms obtained the best performance, with 95.61% accuracy, 93.33% area under the curve (AUC), 88.74% sensitivity, 97.91% specificity, 93.17% positive predictive value (PPV), and 96.33% negative predictive value (NPV). It can be concluded from the AUC values that the trained model has excellent classification capabilities.
Volume: 38
Issue: 2
Page: 940-949
Publish at: 2025-05-01
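Two of the listed features, ZCR and RMS, are simple enough to compute directly from raw audio samples. The sample values below are illustrative, not Coswara data:

```python
import math

# Zero crossing rate: fraction of adjacent sample pairs whose signs differ.
def zero_crossing_rate(signal):
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return crossings / (len(signal) - 1)

# Root mean square: overall energy of the signal.
def rms(signal):
    return math.sqrt(sum(x * x for x in signal) / len(signal))

# A tiny hypothetical cough-sound frame.
samples = [0.1, -0.2, 0.3, -0.1, 0.05, 0.2, -0.4, 0.1]
features = [zero_crossing_rate(samples), rms(samples)]
```

In practice these scalars are computed per frame and concatenated with spectral features (mel-spectrogram, MFCC, chroma) and symptom indicators to form the LightGBM input vector.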

A GRU-based approach for botnet detection using deep learning technique

10.11591/ijeecs.v38.i2.pp1098-1105
Suchetha G. , Pushpalatha K.
The increasing volume of network traffic data exchanged among interconnected devices on the internet of things (IoT) poses a significant challenge for conventional intrusion detection systems (IDS), especially in the face of evolving and unpredictable security threats. It is crucial to develop adaptive and effective IDS for IoT to mitigate false alarms and ensure high detection accuracy, particularly with the surge in botnet attacks. These attacks can turn seemingly harmless devices into zombies that generate malicious traffic and disrupt network operations. This paper introduces a novel approach to IoT intrusion detection, leveraging machine learning techniques and the extensive UNSW-NB15 dataset. Our primary focus lies in designing, implementing, and evaluating machine learning (ML) models, including K-nearest neighbors (KNN), random forest (RF), long short-term memory (LSTM), and gated recurrent unit (GRU), against prevalent botnet attacks. Successful testing against prominent botnet attacks using a dedicated dataset further validates the approach's potential for enhancing intrusion detection accuracy in dynamic and evolving IoT landscapes.
Volume: 38
Issue: 2
Page: 1098-1105
Publish at: 2025-05-01

High-efficiency multimode charging interface for Li-Ion battery with renewable energy sources in 180 nm CMOS

10.11591/ijeecs.v38.i2.pp744-754
Hajjar Mamouni , Karim El Khadiri , Anass El Affar , Mohammed Ouazzani Jamil , Hassan Qjidaa
The high-efficiency multi-source Lithium-Ion battery charger with multiple renewable energy sources described in this paper is based on supply voltage management and a variable current source. The variable current source makes it possible both to charge the battery in constant current (CC) mode and to control the supply voltage of the charging circuit, which may improve the charger's energy efficiency. To prevent harming the Li-Ion battery, the charging current must be reduced by switching from the CC state to the constant voltage (CV) state. The charging circuit is designed in 0.18 μm CMOS technology, and simulation results were obtained with the Cadence Virtuoso simulator: a trickle charge (TC) holding current of approximately 250 mA, a maximum charging current (LC) of approximately 1.3 A, a maximum battery voltage of 4.2 V, and a charge time of only 29 minutes.
Volume: 38
Issue: 2
Page: 744-754
Publish at: 2025-05-01
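The CC-to-CV handover described above can be illustrated with a toy simulation. The voltage-rise and current-decay rules below are crude placeholders chosen only to show the two phases and the cutoff, not the behavior of the paper's 180 nm circuit:

```python
# Toy CC/CV charging profile: constant current until the battery reaches
# the full voltage, then constant voltage while the current decays
# toward a cutoff near the trickle-charge level.
def simulate_cc_cv(i_cc=1.3, v_full=4.2, i_cutoff=0.25, dt_min=1.0):
    v, i, t = 3.0, i_cc, 0.0      # start at 3.0 V with full CC current
    charged = 0.0                 # charge delivered, in Ah
    while i > i_cutoff:
        charged += i * dt_min / 60.0
        t += dt_min
        if v < v_full:
            v = min(v_full, v + 0.05 * i)   # CC phase: voltage rises
        else:
            i *= 0.8                        # CV phase: current decays
    return t, v, charged

t, v, charged = simulate_cc_cv()
```

The structure mirrors the abstract's description: the charger holds ~1.3 A until 4.2 V is reached, then tapers the current so the cell is never overdriven.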

Discover Our Library

Embark on a journey through our expansive collection of articles and let curiosity lead your path to innovation.
