Articles

Access the latest knowledge in applied science, electrical engineering, computer science and information technology, education, and health.

29,939 Article Results

High performance modified bit-vector based packet classification module on low-cost FPGA

10.11591/ijece.v11i5.pp3855-3863
Anita P. , Manju Devi
Packet classification plays a significant role in many network systems: incoming packets must be categorized into different flows, and specific actions must be taken according to functional and application requirements. As network speeds keep increasing, so does the demand on the packet classifier, and its complexity grows further because multiple header fields must be matched against a large number of rules. In this manuscript, an efficient, high-performance modified bit-vector (MBV) based packet classification (PC) module is designed and implemented on a low-cost Artix-7 FPGA. The proposed MBV-based PC employs a pipelined architecture, which offers low latency and high throughput. The MBV-based PC utilizes <2% of the slices, operates at 493.102 MHz, and consumes 0.1 W total power on the Artix-7 FPGA. The proposed PC takes only 4 clock cycles to classify an incoming packet and provides 74.95 Gbps throughput. Comparative results show that the proposed work improves on existing PC approaches in both hardware utilization and performance efficiency.
Volume: 11
Issue: 5
Page: 3855-3863
Publish at: 2021-10-01
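The bit-vector idea behind the MBV module can be sketched in a few lines. In this illustrative sketch (the rule table, prefix matching, and field set are invented, not the paper's), each header field keeps a bitmap over the rule set; ANDing the per-field bitmaps yields the rules matched on every field, and the lowest set bit is the highest-priority match:

```python
# Hypothetical bit-vector (BV) packet classification sketch. Bit i of each
# per-field bitmap means "rule i matches this field"; the toy two-field
# rule table and string-prefix matching are purely illustrative.

RULES = [  # (src_prefix, dst_prefix); index = priority (0 = highest)
    ("10.0", "192.168"),
    ("10.0", "*"),
    ("*", "192.168"),
    ("*", "*"),
]

def field_bitmap(field_idx, value):
    """Bitmap of rules whose field `field_idx` matches `value`."""
    bv = 0
    for i, rule in enumerate(RULES):
        pat = rule[field_idx]
        if pat == "*" or value.startswith(pat):
            bv |= 1 << i
    return bv

def classify(src, dst):
    """AND the per-field bit-vectors; lowest set bit = best-priority rule."""
    bv = field_bitmap(0, src) & field_bitmap(1, dst)
    if bv == 0:
        return None
    return (bv & -bv).bit_length() - 1  # index of lowest set bit

print(classify("10.0.1.2", "192.168.0.5"))  # both specific fields match rule 0
print(classify("172.16.0.1", "8.8.8.8"))    # only the wildcard rule 3 matches
```

In hardware, the per-field bitmaps are precomputed lookup tables and the AND plus priority encoding map naturally onto the pipelined FPGA stages the abstract describes.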

Impact of optical current transformer on protection scheme of hybrid transmission line

10.11591/ijeecs.v24.i1.pp1-11
Zainal Arifin , Muhammad Zulham , Eko Prasetyo
Continuity of power transmission is important to ensure the reliability of the electricity supply. As most system faults are temporary, the auto-reclose (AR) scheme has been used extensively to minimise outage duration and prevent widespread outages, thus increasing system stability. Meanwhile, the hybrid transmission line (HTL), which combines an overhead line (OHL) with high-voltage cable, has been introduced as an inexpensive solution for urban power grids. Protecting an HTL with a conventional protection system would forbid operation of the AR scheme, because it is difficult to determine whether a fault occurred on the OHL or the cable section. Therefore, the circulating current protection (CCP) scheme is used in the cable section to establish the fault location and block the AR scheme. The optical current transformer (OCT), one of the non-conventional instrument transformers (NCIT), has emerged to address the drawbacks of the conventional current transformer (CCT). Consequently, this paper investigates the impact of using an OCT instead of a CCT for CCP of the HTL. The results show that the OCT can be used for CCP on much longer cable sections, increasing reliability because the AR scheme can be applied to longer or multiple cable sections.
Volume: 24
Issue: 1
Page: 1-11
Publish at: 2021-10-01

Maximum convergence algorithm for WiFi based indoor positioning system

10.11591/ijece.v11i5.pp4027-4036
Vinh Truong-Quang , Thong Ho-Sy
WiFi-based indoor positioning is widely exploited thanks to the existing WiFi infrastructure in buildings and the built-in sensors in smartphones. Indoor positioning techniques require high-density training data to achieve high accuracy, at high computational complexity. In this paper, an approach called the maximum convergence algorithm is proposed, which finds the accurate location from the strongest received signal in a small cluster and the K nearest neighbours (KNN) of the other clusters. In addition, K-means clustering is applied for each access point to reduce the computational complexity of the offline database. Moreover, the pedestrian dead reckoning (PDR) method and a Kalman filter, fed with received signal strength (RSS) and inertial sensor data, are combined with WiFi fingerprinting to improve the estimate of the mobile object's position. Experiments compare the proposed algorithm with others using KNN and PDR, and the results show that the recommended framework achieves significant improvement. The average positioning error of the system can be below 1.02 meters when tested in a laboratory environment with an area of 7x7 m using three access points.
Volume: 11
Issue: 5
Page: 4027-4036
Publish at: 2021-10-01
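The KNN fingerprinting baseline that the maximum convergence algorithm builds on can be sketched briefly. In this illustrative sketch (the fingerprint database, AP count, and coordinates are invented), each fingerprint maps an RSS vector to a known position, and a query is located at the mean position of its K nearest fingerprints in RSS space:

```python
# Minimal KNN WiFi-fingerprinting sketch; data and geometry are made up.
import math

FINGERPRINTS = [  # (RSS vector over 3 APs in dBm, known (x, y) in metres)
    ((-40, -70, -80), (0.0, 0.0)),
    ((-70, -40, -80), (7.0, 0.0)),
    ((-80, -70, -40), (3.5, 7.0)),
    ((-55, -55, -75), (3.5, 0.0)),
]

def locate(rss, k=2):
    """Average the positions of the k fingerprints closest in RSS space."""
    ranked = sorted(FINGERPRINTS, key=lambda fp: math.dist(fp[0], rss))
    xs, ys = zip(*(pos for _, pos in ranked[:k]))
    return sum(xs) / k, sum(ys) / k

x, y = locate((-45, -65, -78))  # query lands between the two nearest fingerprints
```

The paper's refinements, clustering the database per access point and fusing the estimate with PDR through a Kalman filter, sit on top of this basic lookup.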

An integrated machine learning model for indoor network optimization to maximize coverage

10.11591/ijeecs.v24.i1.pp394-402
Ahmed Wasif Reza , Abdullah Al Rifat , Tanvir Ahmed
Indoor network optimization is not a simple task due to obstacles, interference, and signal attenuation in the environment. Intense noise can affect the intelligibility of the signal and significantly reduce coverage strength, resulting in a poor user experience. Most existing works locate the devices via mathematical or generic algorithmic approaches, but very few focus on applying machine learning algorithms. The purpose of this research is to introduce an integrated machine learning model that finds maximum indoor coverage with a minimum number of transmitters. Users in the indoor environment are allocated based on the most reliable signal strength, and the system is also capable of allocating new users. K-means clustering, K-nearest neighbor (KNN), support vector machine (SVM), and Gaussian naïve Bayes (GNB) have been used to provide an optimized solution. KNN, SVM, and GNB obtained a maximum accuracy of 100% in some cases; among all the algorithms, KNN performed best, with an average accuracy of 93.33%. A K-fold cross-validation (Kf-CV) technique has been added to validate the experimental simulations and re-evaluate the outcomes of the machine learning models.
Volume: 24
Issue: 1
Page: 394-402
Publish at: 2021-10-01
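The user-allocation step the abstract mentions can be illustrated with a simple strongest-signal rule. This sketch assigns each user to the transmitter giving the highest received signal under a log-distance path-loss model; the transmitter positions, transmit power, and path-loss exponent are illustrative assumptions, not values from the paper:

```python
# Strongest-signal user allocation under a log-distance path-loss model.
import math

TX = [(2.0, 2.0), (8.0, 8.0)]    # transmitter positions (metres), illustrative
P_TX, N_EXP = 20.0, 3.0          # Tx power (dBm) and path-loss exponent

def rss(tx, user):
    d = max(math.dist(tx, user), 0.5)         # clamp to avoid log(0)
    return P_TX - 10 * N_EXP * math.log10(d)  # log-distance path loss

def allocate(user):
    """Index of the transmitter with the strongest signal at `user`."""
    return max(range(len(TX)), key=lambda i: rss(TX[i], user))

print(allocate((1.0, 1.0)))  # nearest to TX 0
print(allocate((9.0, 7.0)))  # nearest to TX 1
```

The paper's classifiers (KNN, SVM, GNB) can then be trained on such allocations so that new users are placed without recomputing the signal model.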

Engaging students to fill surveys using chatbots: University case study

10.11591/ijeecs.v24.i1.pp473-483
Nadir Belhaj , Abdemounaime Hamdane , Nour El Houda Chaoui , Habiba Chaoui , Moulhime El Bekkali
The use of chatbots, or conversational agents, to hold smart conversations with users is becoming common in many fields. Backed by artificial intelligence and natural language processing, they provide a strong platform for engaging users. These strengths can benefit the educational sector, especially in conducting online surveys. This study explores the feasibility of a chatbot-based survey as a new survey method in a Moroccan university, to overcome the common response-quality problems of web surveys. Indeed, having student feedback before and after graduation is essential for university assessment. The new approach keeps students engaged, supportive, and even excited to offer feedback without getting bored and dropping the conversation, which matters especially in Moroccan universities, known for student overcrowding, where collecting feedback is difficult. This feedback feeds into the university's databases for further reporting and decision making to improve the quality of educational content and student-oriented services. Finally, we show the effectiveness of the approach through a comparative data study between a traditional online survey and the chatbot.
Volume: 24
Issue: 1
Page: 473-483
Publish at: 2021-10-01

IoT for wheel alignment monitoring system

10.11591/ijece.v11i5.pp3809-3817
Mohamad Hadi Sulaiman , Suhana Sulaiman , Azilah Saparon
A great deal of previous research into wheel alignment has focused on alignment techniques that involve big, bulky equipment that is costly to maintain. The procedures are tedious and can only be performed in spacious areas by trained mechanics. IoT is an alternative, thanks to the evolution of smartphones with numerous sensors that support research and development of IoT applications in vehicles. In this work, a smaller, portable wheel alignment monitoring system is introduced, using a communication protocol between sensors, a microcontroller, and a mobile phone application. A graphical user interface (GUI) is provided to the system over wireless communication using the TCP/IP protocol. The system has been tested against the functional architecture of wheel alignment, to give the user early awareness of wheel misalignment. In addition, the application has been successfully integrated with an Android mobile application via the TCP/IP protocol, and views the results on a smartphone in real time.
Volume: 11
Issue: 5
Page: 3809-3817
Publish at: 2021-10-01

Multi-label classification approach for quranic verses labeling

10.11591/ijeecs.v24.i1.pp484-490
Abdullahi Adeleke , Noor Azah Samsudin , Mohd Hisyam Abdul Rahim , Shamsul Kamal Ahmad Khalid , Riswan Efendi
Machine learning involves the task of training systems to make decisions without being explicitly programmed. Important among machine learning tasks is classification, the process of training machines to make predictions from predefined labels. Classification is broadly categorized into three distinct groups: single-label (SL), multi-class, and multi-label (ML) classification. This research work presents an application of a multi-label classification (MLC) technique in automating Quranic verse labeling. MLC has been gaining attention in recent years, due to the increasing number of works on real-world classification problems with multi-label data. In traditional classification problems, patterns are associated with a single label from a set of disjoint labels; in MLC, an instance of data is associated with a set of labels. In this paper, three standard MLC methods, binary relevance (BR), classifier chain (CC), and label powerset (LP), are implemented with four baseline classifiers: support vector machine (SVM), naïve Bayes (NB), k-nearest neighbors (k-NN), and J48. The research methodology adopts the multi-label problem transformation (PT) approach. The results are validated using six conventional performance metrics: hamming loss, accuracy, one error, micro-F1, macro-F1, and avg. precision. From the results, the classifiers effectively achieved above the 70% accuracy mark. Overall, SVM achieved the best results with the CC and LP algorithms.
Volume: 24
Issue: 1
Page: 484-490
Publish at: 2021-10-01
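The simplest of the three problem-transformation methods, binary relevance, is easy to show concretely. This sketch turns an L-label task into L independent binary tasks, one per label; the tiny feature vectors and label names are invented for illustration, not taken from the Quranic dataset:

```python
# Binary relevance (BR) problem transformation: one binary dataset per label.

DATA = [  # (feature vector, set of labels); contents are illustrative
    ([1, 0, 1], {"faith", "charity"}),
    ([0, 1, 1], {"charity"}),
    ([1, 1, 0], {"faith"}),
]
LABELS = ["faith", "charity"]

def binary_relevance(data, labels):
    """Return {label: [(x, 0/1), ...]} -- one binary dataset per label."""
    return {
        lab: [(x, int(lab in ys)) for x, ys in data]
        for lab in labels
    }

sets = binary_relevance(DATA, LABELS)
# each per-label dataset keeps every instance, relabelled 0/1; any base
# classifier (SVM, NB, k-NN, J48) can then be trained per label
```

Classifier chains extend this by feeding earlier labels' predictions as extra features, and label powerset instead treats each distinct label set as one class.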

Forecasting number of vulnerabilities using long short-term neural memory network

10.11591/ijece.v11i5.pp4381-4391
Mohammad Shamsul Hoque , Norziana Jamil , Nowshad Amin , Azril Azam Abdul Rahim , Razali B. Jidin
Cyber-attacks are launched by exploiting existing vulnerabilities in software, hardware, systems, and networks. Machine learning algorithms can be used to forecast the number of post-release vulnerabilities. Traditional neural networks work like a black box, so it is unclear how past data points are used in inferring subsequent ones. The long short-term memory (LSTM) network, a variant of the recurrent neural network, addresses this limitation by introducing loops in its network to retain and utilize past data points for future calculations. Building on our previous finding, we further enhance the results by developing a time-series-based sequential model using an LSTM neural network. Specifically, this study developed a supervised, non-linear sequential time-series forecasting model with an LSTM neural network to predict the number of vulnerabilities for the three vendors with the most vulnerabilities published in the national vulnerability database (NVD), namely Microsoft, IBM, and Oracle. Our proposed model outperforms the existing models with a prediction root mean squared error (RMSE) as low as 0.072.
Volume: 11
Issue: 5
Page: 4381-4391
Publish at: 2021-10-01
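Before any sequence model can train on vulnerability counts, the series must be recast as supervised pairs: a window of past values predicts the next one. This generic windowing sketch shows that preprocessing step (not the paper's exact pipeline); the counts are invented:

```python
# Sliding-window transformation of a count series into supervised samples.

def make_windows(series, width):
    """Each sample: (past `width` values, the value that follows them)."""
    return [
        (series[i:i + width], series[i + width])
        for i in range(len(series) - width)
    ]

counts = [12, 15, 9, 20, 18, 25]      # e.g. per-period CVE counts, illustrative
samples = make_windows(counts, width=3)
# samples[0] == ([12, 15, 9], 20): the model learns series[t] from t-3..t-1
```

An LSTM then consumes each window as a sequence, which is what lets it retain and reuse past points rather than treating them as unordered features.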

Method for improving ripple reduction during phase shedding in multiphase buck converters for SCADA systems

10.11591/ijeecs.v24.i1.pp29-36
Mini P. Varghese , A. Manjunatha , T. V. Snehaprabha
In the current digital environment, central processing units (CPUs), field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), and peripherals are growing progressively complex. On motherboards across many areas of computing, from laptops and tablets to servers and Ethernet switches, multiphase buck regulators are increasingly common because of higher power requirements. This study describes a four-stage buck converter with a phase shedding scheme that can be used to power processors in programmable logic controllers (PLCs). The proposed power supply is designed to generate a regulated voltage with minimal ripple. Thanks to the suggested phase shedding method, it also offers better light-load efficiency. For this objective, a multiphase system with phase shedding is modeled in MATLAB SIMULINK, and the findings are validated.
Volume: 24
Issue: 1
Page: 29-36
Publish at: 2021-10-01
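The quantity phase shedding trades against light-load efficiency is the inductor current ripple of each active phase. As a quick sanity check, the standard continuous-conduction buck relation ΔI = Vout·(1 − D)/(L·f_sw) can be evaluated directly; the component values below are illustrative, not from the paper:

```python
# Per-phase peak-to-peak inductor ripple of a buck stage in CCM.

def buck_ripple(v_in, v_out, L, f_sw):
    """Peak-to-peak inductor current ripple (A), continuous conduction."""
    duty = v_out / v_in
    return v_out * (1 - duty) / (L * f_sw)

dI = buck_ripple(v_in=12.0, v_out=1.2, L=1e-6, f_sw=500e3)
# 1.2 * 0.9 / (1e-6 * 5e5) = 2.16 A peak-to-peak per phase
```

With fewer phases carrying the load, each phase's ripple contribution at the output grows, which is why the paper pairs shedding with a ripple-reduction method.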

A computational experimental of noise suppressing technique stand on hard decision threshold dissimilarity

10.11591/ijeecs.v24.i1.pp144-156
Vorapoj Patanavijit , Kornkamol Thakulsukanant
Owing to the extreme demand for digital image processing, many modern noise-suppression techniques consist of a dissimilarity stage and a suppression stage. One of the most capable dissimilarity measures is the hard decision threshold (HDT) dissimilarity, introduced in 2012 for suppressing impulsive noise in photographs. This computational experiment therefore investigates the capability of a noise-suppression technique based on HDT dissimilarity for photographs corrupted by fixed-intensity impulse noise (FIIN). The paper makes three primary contributions. The first is the statistical average of the HDT dissimilarity of noise-free elements, computed from many ground-truth photographs while varying the window size, to find the best HDT window size. The second is the statistical average of the HDT dissimilarity of corrupted elements, computed from many corrupted photographs while varying the outlier density, for the best HDT window size. The final contribution is the statistical interrelation between the capability of the noise-suppression technique and the hard consistence of the HDT dissimilarity, investigated by varying the outlier density for the best HDT hard consistence.
Volume: 24
Issue: 1
Page: 144-156
Publish at: 2021-10-01
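One plausible reading of the detect-then-suppress structure described above is: flag a sample as an impulse when its absolute difference from the local window median exceeds a hard threshold, and replace only flagged samples. This sketch is a 1-D illustration under that assumption, not the paper's exact HDT formulation; the window size, threshold, and signal are invented:

```python
# Hard-threshold impulse detection followed by selective median replacement.
import statistics

def hdt_suppress(signal, window=3, threshold=50):
    out = list(signal)
    half = window // 2
    for i in range(half, len(signal) - half):
        med = statistics.median(signal[i - half:i + half + 1])
        if abs(signal[i] - med) > threshold:  # hard decision on dissimilarity
            out[i] = med                       # suppress only detected impulses
    return out

clean = hdt_suppress([100, 102, 255, 101, 99, 0, 98])
# the 255 and 0 impulses are replaced; uncorrupted samples pass untouched
```

Tuning the window size and the threshold against noise-free and corrupted statistics is exactly the kind of calibration the paper's three contributions measure.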

Artificial intelligence techniques over the fifth generation mobile networks: a review

10.11591/ijeecs.v24.i1.pp317-328
Ashwaq N. Hassan , Sarab Al-Chlaihawi , Ahlam R. Khekan
Fifth generation (5G) mobile networks have become a common phrase in recent years; we have all heard it and know its importance. By 2025, the number of 5G-based devices will reach about 100 billion. By then, about 2.5 billion users are expected to consume more than a gigabyte of streamed data per month. 5G will play important roles in a variety of new areas, from smart homes and cars to smart cities, virtual reality and mobile augmented reality, and 4K video streaming. Much higher bandwidth than the fourth generation, greater reliability, and lower latency are some of the features that distinguish this generation of mobile networks from its predecessors. At first glance these features may seem very impressive and useful, but they pose serious challenges for operators and communications companies, as they lead to considerable complexity. Managing this network, preventing errors, and minimizing latency are some of the challenges the fifth generation of mobile networks will bring. The use of artificial intelligence and machine learning is therefore a good way to address them; in other words, in such a situation, proper management of the 5G network must rely on powerful tools such as artificial intelligence. Various research efforts in this field are currently under way, aiming to enable automated management and servicing and to reduce human error as much as possible. In this paper, we review the artificial intelligence techniques used in communications networks. Creating a robust and efficient communications network using artificial intelligence techniques is a great incentive for future research; indeed, the sixth generation (6G) of cellular communications places a lot of emphasis on the use of artificial intelligence.
Volume: 24
Issue: 1
Page: 317-328
Publish at: 2021-10-01

Reliability assessment for electrical power generation system based on advanced Markov process combined with blocks diagram

10.11591/ijece.v11i5.pp3647-3659
A. A. Tawfiq , M. Osama abed el-Raouf , A. A. El-Gawad , M. A. Farahat
This paper presents power generation system reliability assessment using an advanced Markov process combined with a block diagram technique. The effectiveness of the suggested methodology is demonstrated at the HL-I level of the IEEE_EPS_24_bus system. The proposed method evaluates the generation reliability and availability of an electrical power system using a Markov chain based on operational transitions from state to state, represented as a matrix. MATLAB code is developed for the Markov chain construction, and transitions between probability states are represented by changing the failure and repair rates. A reduced number of generation states is used with the Markov process to assess the availability, unavailability, and reliability of the generation system. Additionally, the proposed technique calculates the frequency and duration of states, the probability of each generation capacity state going out of or remaining in service for each failure state, and the reliability indices. A considerable improvement in reliability indices is found when using the block diagram technique, which reduces the infinite number of transition states while assessing system reliability. The proposed technique achieves accurate and faster reliability assessment for the power system.
Volume: 11
Issue: 5
Page: 3647-3659
Publish at: 2021-10-01
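The simplest instance of this Markov approach is a single generating unit with constant failure rate λ and repair rate μ; the two-state (up/down) chain has the closed-form steady-state availability A = μ/(λ + μ). The sketch below evaluates it with illustrative rates, not values from the IEEE 24-bus study:

```python
# Steady-state availability of a 2-state (up/down) Markov process.

def availability(lam, mu):
    """A = mu / (lam + mu): long-run fraction of time in the 'up' state."""
    return mu / (lam + mu)

lam = 0.01   # failures per hour -> mean time to failure 100 h
mu = 0.50    # repairs per hour  -> mean time to repair 2 h
A = availability(lam, mu)
U = 1 - A    # unavailability
```

Multi-unit generation models compose such unit chains into a capacity-state matrix, which is where the paper's block diagram reduction keeps the state count tractable.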

Artificial neural network technique for improving prediction of credit card default: A stacked sparse autoencoder approach

10.11591/ijece.v11i5.pp4392-4402
Sarah A. Ebiaredoh-Mienye , E. Esenogho , Theo G. Swart
The use of a credit card has become an integral part of the contemporary banking and financial system. Predicting potential credit card defaulters or debtors is a crucial business opportunity for financial institutions, and some machine learning methods have been applied to this task. However, with the dynamic and imbalanced nature of credit card default data, it is challenging for classical machine learning algorithms to produce robust models with optimal performance. Research has shown that the performance of machine learning algorithms can be significantly improved when they are provided with optimal features. In this paper, we propose an unsupervised feature learning method to improve the performance of various classifiers using a stacked sparse autoencoder (SSAE). The SSAE was optimized to achieve improved performance, and it learned excellent feature representations that were used to train the classifiers. The performance of the proposed approach is compared with an instance where the classifiers were trained on the raw data, and with previous scholarly works; the proposed approach showed superior performance over other methods.
Volume: 11
Issue: 5
Page: 4392-4402
Publish at: 2021-10-01

NLP-based personal learning assistant for school education

10.11591/ijece.v11i5.pp4522-4530
Ann Neethu Mathew , Rohini V. , Joy Paulose
Computer-based knowledge and computation systems are becoming major sources of leverage for multiple industry segments. Hence, educational systems and learning processes across the world are on the cusp of a major digital transformation. This paper explores the concept of an artificial intelligence and natural language processing (NLP) based intelligent tutoring system (ITS) in the context of computer education in primary and secondary schools. One component of an ITS is a learning assistant, which enables students to seek assistance whenever and wherever they need it. As part of this research, a pilot prototype chatbot was developed to serve as a learning assistant for the subject Scratch (a graphical utility used to teach school children the concepts of programming). Using an open-source natural language understanding (NLU)/NLP library and a Slack-based UI, student queries were input to the chatbot, which returned the sought explanation as the answer. Through a two-stage testing process, the chatbot's NLP extraction and information retrieval performance were evaluated. The testing results showed that the ontology modelling for such a learning assistant was done relatively accurately, and the assistant shows potential to be pursued as a cloud-based solution in future.
Volume: 11
Issue: 5
Page: 4522-4530
Publish at: 2021-10-01

Outage probability analysis for hybrid TSR-PSR based SWIPT systems over log-normal fading channels

10.11591/ijece.v11i5.pp4233-4240
Hoang Thien Van , Hoang-Phuong Van , Danh Hong Le , Ma Quoc Phu , Hoang-Sy Nguyen
Employing simultaneous wireless information and power transfer (SWIPT) technology in cooperative relaying networks has drawn considerable attention from the research community. Several existing studies focus on Rayleigh and Nakagami-m fading channels, which model outdoor scenarios. Unlike those studies, this one considers an indoor scenario modelled by log-normal fading channels. Specifically, we investigate a hybrid time switching relaying (TSR)-power splitting relaying (PSR) protocol in an energy-constrained cooperative amplify-and-forward (AF) relaying network. We evaluate system performance in terms of outage probability (OP), expressing it analytically and simulating it with the Monte Carlo method. The impact of the power-splitting (PS) ratio, time-switching (TS) ratio, and signal-to-noise ratio (SNR) on the OP is also investigated, and the performance of the TSR, PSR, and hybrid TSR-PSR schemes is compared. The simulation results align well with the analysis, confirming the derivations.
Volume: 11
Issue: 5
Page: 4233-4240
Publish at: 2021-10-01
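The Monte Carlo verification step used above is straightforward to sketch: declare an outage whenever the instantaneous SNR falls below a threshold, and count the fraction of fading realizations where that happens. Under log-normal shadowing the SNR in dB is Gaussian around its mean; the mean SNR, spread, and threshold below are illustrative, not the paper's system parameters:

```python
# Monte Carlo outage probability over a log-normal fading channel.
import random

def outage_probability(snr_avg_db, snr_th_db, sigma_db=4.0, trials=200_000):
    random.seed(1)  # reproducible estimate
    outages = 0
    for _ in range(trials):
        # log-normal fading: SNR in dB is Gaussian around the average SNR
        snr_db = random.gauss(snr_avg_db, sigma_db)
        if snr_db < snr_th_db:
            outages += 1
    return outages / trials

p = outage_probability(snr_avg_db=10.0, snr_th_db=5.0)
# should sit near the analytic value Phi((5 - 10) / 4) ~ 0.106
```

Matching such simulated curves against the closed-form OP expressions is exactly how the paper validates its TSR, PSR, and hybrid TSR-PSR analysis.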
