Indonesian Journal of Electrical Engineering and Computer Science
Vol. 40, No. 1, October 2025, pp. 202-215
ISSN: 2502-4752, DOI: 10.11591/ijeecs.v40.i1.pp202-215

DigiScope: IoT-enhanced deep learning for skin cancer prognosis

Aymane Edder 1, Fatima-Ezzahraa Ben-Bouazza 1,2,3, Oumaima Manchadi 1, Idriss Tafala 1, Bassma Jioudi 1
1 BRET Lab, Mohammed VI University of Sciences and Health, Casablanca, Morocco
2 LaMSN, La Maison des Sciences Numériques, France
3 Artificial Intelligence Research and Application Laboratory (AIRA Lab), Faculty of Science and Technology, Hassan 1st University, Settat, Morocco

Article history: Received Jun 5, 2024; Revised Apr 24, 2025; Accepted Jul 3, 2025
Keywords: Deep learning; Early detection; Internet of things; Low-cost devices; Rural areas; Skin cancer

ABSTRACT
In dermatology, early identification and intervention are crucial for optimizing patient outcomes in skin cancer care. Recent technological advances, particularly in the internet of things (IoT), have led to significant growth in telemedicine. This study introduces a system that proactively predicts the emergence of skin cancer by combining deep learning algorithms, IoT devices, and sophisticated medical imaging techniques. The experimental setup leverages a high-resolution mobile camera for dermoscopy, coupled with a cloud-integrated machine learning framework. The proposed algorithm comprehensively examines lesions, utilizing color, texture, and shape characteristics to evaluate the probability of malignancy. A cloud-hosted machine learning model then analyzes the collected data, yielding a thorough diagnostic evaluation. Initial results reveal that the system achieves a predictive accuracy exceeding 97.6%, enabling swift and efficient skin cancer detection. These promising findings emphasize the potential for rapid, efficient, and proactive diagnosis, significantly improving patient prognosis and reinforcing the value of telemedicine in contemporary healthcare.

This is an open access article under the CC BY-SA license.

Corresponding Author:
Aymane Edder
BRET Lab, Mohammed VI University of Sciences and Health, Casablanca, Morocco
Email: aedder@um6ss.ma

1. INTRODUCTION
Skin cancer poses a significant health challenge across the globe, emphasizing the need for effective detection and treatment methods to improve patient outcomes and reduce its impact. The rise of technology in healthcare offers new possibilities for tackling this challenge, as artificial intelligence (AI) and the internet of things (IoT) are becoming powerful tools in the fight against skin cancer [1]. By leveraging these technologies, we are entering an era where early detection and precise diagnosis are increasingly within reach. AI algorithms are capable of analyzing enormous volumes of data, unveiling intricate patterns often missed by human clinicians [2]. This capability is further enhanced by the integration of data streams from wearable sensors and medical imaging devices within the IoT framework, enabling comprehensive analyses. Recent studies underscore the transformative power of these technologies, with deep learning algorithms redefining lesion detection and classification [3].
IoT-enabled devices, such as intelligent
skin patches, facilitate advanced data acquisition and transmission for in-depth analysis, thereby pushing the boundaries of medical innovation.

Gajera et al. [4] employed deep features derived from pre-trained convolutional neural network (CNN) models to assess dermoscopic images for melanoma diagnosis, advocating for border localization to safeguard critical skin lesion sites. A total of eight CNN models were systematically examined for feature extraction, using four distinct datasets in the experiments; the integration of DenseNet-121 with a multilayer perceptron yielded a commendable classification rate. Kumar et al. [5] effectively discerned preliminary indicators of three distinct types of skin cancer through computational methodologies. They employed a deep evolutionary artificial neural network (DEANN) for the classification of skin cancer, alongside techniques such as local binary patterns (LBP), gray level co-occurrence matrix (GLCM), color space analysis, and RGB techniques to extract pertinent image features critical for accurate classification of the condition. Chaturvedi et al. [6] proposed a methodology for the classification of malignant cutaneous melanoma that demonstrates superior performance compared to both dermatological assessments and existing deep learning approaches. Khan et al. [7] developed a system that integrates deep learning models, specifically leveraging DenseNet for classification and mask regional convolutional neural network (Mask-RCNN) for segmentation. Srinivasu et al. [8] utilized MobileNetV2 as the architecture for the classification of diverse dermatological conditions, integrating long short-term memory (LSTM) to enhance the model's performance. Hosny et al. [9] presented a novel methodology for the classification of skin lesions, utilizing transfer learning in conjunction with a deep neural network architecture known as AlexNet. The public ISIC 2018 database served as the foundational dataset for training, testing, and comparative analysis of the proposed methodology against state-of-the-art techniques; the methodology effectively classifies seven unique categories of skin lesions, with the authors reporting outstanding classification performance. Sae-Lim et al. [10] employed a modified MobileNet architecture for the classification of skin lesions. The findings indicated that the modified model exhibited superior performance compared to the conventional MobileNet model, as evidenced by enhancements in accuracy, specificity, sensitivity, and F1-score. During the preprocessing phase, data upsampling and data augmentation proved beneficial in addressing class imbalance, with data augmentation also serving to mitigate the risk of overfitting. Zaqout et al. [11] formulated an automated diagnostic framework aimed at the preliminary evaluation of melanoma, utilizing image processing techniques grounded in the widely recognized ABCD medical protocol; the proposed system employs a range of image processing techniques to facilitate precise, rapid, cost-effective, and readily accessible diagnosis of melanoma. Hasan et al. [12] introduced an innovative automatic skin lesion segmentation network designated DSNet.
This network exhibits robustness and incorporates a proposed loss function that integrates a binary cross-entropy component alongside an intersection over union component. Adegun and Viriri [13] developed a deep learning-based computer-aided diagnosis system aimed at the detection and identification of skin lesions for diagnosing skin cancer. Chatterjee et al. [6] introduced an innovative kernel sparse coding methodology aimed at the segmentation and classification of skin lesions; their approach demonstrated competitive performance relative to alternative techniques in experimental evaluations utilizing both dermoscopy and digital datasets. Sàez et al. employed a computerized system designed to quantify melanoma thickness through the analysis of dermoscopic images. Yu et al. [14], Hameed et al. [15], Khan et al. [16], Hoang et al. [3], Zhang et al. [17], Periera et al. [18], Shetty et al. [19], Dhivyaa et al. [20], Mahbod et al. [21], and Alenezi et al. [22] have made significant contributions to the domain of skin lesion classification. Yu et al. [14] introduced an innovative methodology for the classification of dermoscopy images, employing a compact architectural framework alongside local descriptor encoding techniques; convolutional features were extracted from an image using a deep residual network, followed by the application of the Fisher vector technique to encode these features into more intricate representations. Hameed et al. [15] employed a combination of traditional machine learning methodologies alongside advanced deep learning approaches to assess early-stage skin lesions. The deep learning methodology utilized transfer learning directly from the images, whereas the conventional approach first conducted pre-processing, categorization, and feature extraction before a subsequent categorization step. The proposed methodology demonstrated superior performance compared to the multi-class single-level classification algorithm, attaining elevated accuracy across both approaches. Khan et al. [7] introduced an innovative computer-aided diagnosis (CAD) system aimed at the classification of skin lesions through deep learning methodologies. The system employs ResNet-50 and ResNet-101 architectures for the extraction of features from enhanced dermoscopic images, utilizing a novel methodology referred to
as KcPCA for the selection of significant features. A multi-class support vector machine (SVM) with a radial basis function kernel is employed, incorporating the upper 60% of these features as input. Hoang et al. [3] developed a straightforward methodology for the classification of skin lesions, demonstrating superior performance compared to 20 alternative methods while requiring 79 times fewer parameters. The researchers employed a deep learning methodology to effectively segment and classify skin lesions, attaining remarkable outcomes when the lesion's foreground is discernible from the background through texture and color differentiation. Zhang et al. [17] proposed an innovative methodology utilizing a CNN framework for the diagnosis of skin cancer; a modified variant of the whale optimization algorithm was employed to enhance the efficacy of CNNs and to minimize the discrepancy between the network's output and the intended output. Thurnhofer-Hemsi et al. presented a methodology for the classification of skin lesions through deep CNNs, demonstrating enhanced reliability compared to traditional CNN classification methods. Shetty et al. [19] proposed a methodology for the classification of skin lesion photographs employing CNN and machine learning techniques, with outcomes assessed on the HAM10000 dataset. Dhivyaa et al. [20] integrated learning theory with the decision tree-based random forest classification methodology to enhance the accuracy and robustness of skin lesion image categorization. Mahbod et al. [21] investigated the influence of image dimensions on the efficacy of transfer learning classification in the context of skin lesion analysis. Alenezi et al. [22] introduced an approach for the classification of skin lesions that integrates wavelet-based preprocessing techniques, deep residual neural networks, and extreme learning machine classifiers.

However, despite these advancements in skin cancer detection and diagnosis, individuals in rural areas continue to face significant barriers to timely and accurate healthcare access. Current technologies, including telemedicine and existing mobile health solutions, often fall short in providing the high-resolution imaging and robust computational resources needed for precise skin cancer classification. There is a noticeable lack of integration between mobile imaging devices and cloud-based deep learning models that could bridge the diagnostic gap between urban and rural populations [23]. This gap underscores the need for an architecture that not only facilitates high-quality dermoscopic imaging but also ensures seamless data transfer and analysis in cloud environments. Furthermore, there is a critical need for a system that delivers diagnostic insights to remote healthcare providers, ensuring that patients in underserved areas receive comparable levels of care to those in urban centers [4]. In light of these shortcomings, this paper proposes a new architecture aimed at improving skin cancer classification, specifically focusing on individuals in rural areas who may have limited access to healthcare. The architecture combines a high-resolution mobile camera designed for dermoscopy with a deep learning model hosted on the cloud.
This setup enables not only the capture and analysis of dermoscopic images but also the seamless transfer of data to cloud servers equipped with ample computational resources for thorough analysis [1]. The resulting insights are then shared with medical centers, allowing healthcare professionals to remotely access diagnostic results. This method ensures that patients in rural areas receive the same level of diagnostic scrutiny as those in urban settings, thereby closing a significant gap in healthcare accessibility [12]. This research has the potential to significantly improve early diagnosis and timely intervention for skin cancer, particularly in underserved rural communities where access to specialized healthcare is limited. The proposed deep learning model facilitates a thorough examination of skin lesions with enhanced precision and efficiency, far surpassing traditional approaches that often rely on manual analysis [24]. By enabling timely detection and efficient diagnostic processes, the system ensures that treatment can commence promptly, leading to better patient outcomes and potentially reducing mortality rates associated with skin cancer. Beyond its academic contributions, this research offers practical applications in real-world scenarios, such as mobile clinics and telehealth platforms, leading to positive impacts on global health outcomes and equity in healthcare access [25].

The remainder of this paper is organized as follows. Section 2, materials and methods, provides a detailed account of the techniques and technologies used in the study, including the architecture proposed for classifying skin cancer; it also covers the workflow of DigiScope in Node-RED, demonstrating how data is processed and analyzed in a real-time environment, and gives a comprehensive overview of the types and sources of data utilized, with a specific emphasis on dermoscopic images obtained from rural areas. Section 3 presents the results and discussion, evaluating the effectiveness of the proposed methods. Section 4 discusses the challenges and limitations encountered during the study. Finally, section 5 concludes, encapsulating the principal discoveries and proposing possible directions for subsequent investigations.
2. MATERIALS AND METHODS
2.1. Dataset: HAM10000
2.1.1. Data settings
The HAM10000 dataset comprises a total of 10,015 dermatoscopic images, gathered over a span of two decades from two distinct locations: the Department of Dermatology at the Medical University of Vienna, Austria, and the skin cancer practice of Cliff Rosendahl in Queensland, Australia [4]. The Australian site employed PowerPoint files and Excel databases for storing both images and metadata. The Austrian site began collecting images prior to the advent of digital cameras, and subsequently preserved the images alongside corresponding metadata in diverse formats across different periods. The lesion is positioned at the centre of each 800x600 pixel image, at a resolution of 72 dots per inch (DPI). The entirety of the data records pertaining to the HAM10000 dataset has been archived within the Harvard Dataverse repository. Table 1 presents a summary of the image count within the HAM10000 training set, categorized by diagnosis, and juxtaposed with data from existing databases. The images and associated metadata can be accessed via the public ISIC archive, both through the archive gallery and through standardized API calls (https://isic-archive.com/api/v1).

Table 1. Summary of dermatological datasets: total images, pathologic verification percentages, and class distribution
Dataset          Total images   Pathologic verification   akiec   bcc   bkl    df    mel    nv      vasc
PH2              200            20.5%                     -       -     -      -     40     160     -
Atlas            1024           unknown                   5       42    70     20    275    582     30
ISIC 2017        13786          26.3%                     2       33    575    7     1019   11861   15
Rosendahl        2259           100%                      295     296   490    30    342    803     3
ViDIR Legacy     439            100%                      0       5     10     4     67     350     3
ViDIR MoleMax    3954           1.2%                      0       2     124    30    24     3720    54
HAM10000         10015          53.3%                     327     514   1099   115   1113   6705    142

The HAM10000 dataset thus comprises 10,015 images of various skin lesions categorized into seven different classes [26], which are visually depicted in Figure 1.

Figure 1. HAM10000 database classes

2.1.2. Data preparation
Applying a range of transformations to existing images is a common practice in skin cancer imaging to expand and diversify the dataset. Various techniques are employed to create different variations of images, including rotation, flipping, scaling, cropping, and color adjustment, as shown in Figure 2. This process enhances the reliability and precision of machine learning models by enabling them to learn from a wider variety of data, minimizing overfitting and improving their capacity to generalize to unfamiliar images. Data augmentation plays a vital role in tackling the limited availability of labeled medical images and enhancing the effectiveness of skin cancer detection algorithms; a minimal augmentation sketch is given after Figure 2.

Figure 2. Data augmentation example
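As a concrete illustration of the transformations listed above, the following is a minimal Keras data-augmentation sketch. The specific parameter values (rotation range, zoom, shifts, brightness) are illustrative assumptions and not the exact settings used in this study.

```python
# Illustrative augmentation pipeline: rotation, flipping, scaling (zoom),
# shift-based cropping, and simple color/brightness adjustment.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=30,           # random rotation
    horizontal_flip=True,        # flipping
    vertical_flip=True,
    zoom_range=0.1,              # scaling
    width_shift_range=0.1,       # shifts approximate random cropping
    height_shift_range=0.1,
    brightness_range=(0.9, 1.1), # color/brightness adjustment
)

# Assuming x_train has shape (N, H, W, 3) and y_train holds integer class labels,
# augmented batches can be drawn on the fly during training:
# train_flow = augmenter.flow(x_train, y_train, batch_size=128)
# model.fit(train_flow, epochs=50)
```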
2.2. Workflow of the proposed approach
The flowchart in Figure 3 illustrates the workflow for a deep learning project focused on skin cancer classification using the HAM10000 dataset:
- HAM10000: a dataset containing images of skin lesions, used for training and testing the model.
- Pre-processing: the raw data from HAM10000 is pre-processed. Pre-processing may include normalization, resizing of images, data augmentation, and other techniques to prepare the data for training.
- Training model: after pre-processing, the data is fed into a machine learning model for training. This involves using algorithms to learn patterns from the training data.
- Classification: the trained model is then used for classification, making predictions on new, unseen data.
- Yes (successful classification): if the classification results are satisfactory, the workflow proceeds to deployment.
- No (unsuccessful classification): if the classification results are not satisfactory, the workflow moves to the results analysis phase.
- Results analysis: the classification results are analyzed. This step involves assessing the performance of the model, identifying any shortcomings, and understanding the reasons behind incorrect classifications.
- Hyperparameters update: based on the analysis, the model's hyperparameters are updated. Hyperparameter tuning is crucial for improving model performance; once updated, the model is retrained with the new settings.
- Deployment: if the classification is successful, the model is deployed, i.e., integrated into a production environment where it can be used for real-time predictions.
- Optimization: after deployment, the model is further optimized to enhance its performance and efficiency in the production environment.
- Real-world integration: the final step involves integrating the optimized model into real-world applications, making it accessible for end-users and ensuring it performs well in practical scenarios.
This workflow is iterative, with the loop between results analysis, hyperparameters update, and model training ensuring continuous improvement until satisfactory classification performance is achieved; a schematic sketch of this loop is given below.
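The iterative loop described above can be summarized in a short Python sketch. This is a schematic outline only: the model builder and the prepared data are assumed inputs, and the stopping criterion and hyperparameter-update rule are illustrative, not the study's actual procedure.

```python
# Schematic of the Figure 3 loop: train -> classify/evaluate -> analyze results ->
# update hyperparameters -> retrain, until classification is satisfactory.
def development_loop(build_model, data, hyperparams, target_accuracy=0.97, max_rounds=5):
    x_train, y_train, x_test, y_test = data
    model = None
    for _ in range(max_rounds):
        model = build_model(hyperparams)                      # training model step
        model.fit(x_train, y_train,
                  epochs=hyperparams["epochs"],
                  batch_size=hyperparams["batch_size"])
        _, accuracy = model.evaluate(x_test, y_test)          # classification step
        if accuracy >= target_accuracy:                       # successful classification
            return model                                      # proceed to deployment
        # results analysis + hyperparameters update (illustrative rule only)
        hyperparams["epochs"] += 10
    return model

# Example usage (hypothetical builder and pre-split arrays):
# model = development_loop(build_cnn_from_params, (x_tr, y_tr, x_te, y_te),
#                          {"epochs": 50, "batch_size": 128})
```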
Figure 3. Workflow of the proposed approach

2.3. Workflow of the proposed architecture
In this paper, we devised an entirely autonomous methodology that harnesses the power of CNNs to discern and classify cutaneous anomalies with high precision. The central emphasis of our study was the exploration and evaluation of efficacious pre-processing methodologies and classification algorithms. In order to assess the efficacy of our methodology, we utilised the HAM10000 dataset, which encompasses 10,015 diverse images depicting a wide range of skin lesions, meticulously classified into seven distinct categories. The sequential procedure that we employed is graphically represented in Figure 4. In the subsequent section, we explore the data employed in this study, elucidating the preprocessing procedures that were implemented; we then delve into the proposed theoretical framework, examining its components and scrutinising its hyperparameters.

Figure 4. Workflow of the proposed architecture

2.3.1. The proposed architecture
In contrast to a traditional neural network, a CNN is designed to elucidate intricate patterns through the direct application of filters to the unprocessed pixels of an image. We used the Python libraries TensorFlow and Keras to develop and implement the CNN model. Table 2 provides an overview of the layers and hyperparameters utilized in our network. These layers and hyperparameters play a crucial role in defining the structure and behavior of the CNN model.
Table 2. CNN layers and hyperparameters
Layer       Hyperparameters
Conv2D      16 filters, 3x3 filter size, ReLU activation, same padding
Conv2D      32 filters, 3x3 filter size, ReLU activation
MaxPool2D   2x2 pool size
Conv2D      32 filters, 3x3 filter size, ReLU activation, same padding
Conv2D      64 filters, 3x3 filter size, ReLU activation, same padding
MaxPool2D   2x2 pool size, same padding
Flatten     2304 units
Dense       64 units, ReLU activation
Dense       32 units, ReLU activation
Dense       7 units, SoftMax activation

2.3.2. Model hyperparameters
We carefully selected commonly used hyperparameter values to ensure a more accurate evaluation of our model. Table 3 lists the specific hyperparameter values employed in our CNN model, and the rationale behind selecting them is explained below. By choosing appropriate hyperparameter values, we aimed to optimize the performance and effectiveness of our CNN model for skin lesion identification and classification:
- Optimizer: Adam was selected as the optimization technique for training deep neural networks due to its ease of use, computational efficiency, and efficacy in managing substantial volumes of data and parameters.
- Loss function: the loss function employed in the multi-class scenario is the sparse categorical cross-entropy, which facilitates the computation of the loss value.
- Epochs: the epoch count is set at 50. This was determined through experimentation, which found that 50 epochs resulted in a model with low loss and no overfitting to the training set (or the least amount of overfitting possible).
- Batch size: a series of preliminary experiments were conducted utilizing batch sizes of 20, 30, 60, and 90, with the findings indicating that a batch size of 128 yielded the most favorable outcomes.

Table 3. CNN model's hyperparameters
Hyperparameters    Value
Optimizer          Adam
Loss function      Sparse categorical cross-entropy
Epochs             50
Batch size         128
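To make the configuration in Tables 2 and 3 concrete, the following is a minimal TensorFlow/Keras sketch of such a network. It is an illustrative reconstruction rather than the authors' exact code: the 28x28x3 input resolution is an assumption (the paper does not state the input size), so the Flatten width may differ from the 2304 units reported in Table 2.

```python
# Minimal sketch of the CNN in Table 2, compiled with the Table 3 hyperparameters.
# The input resolution (28x28x3) is an assumption; adjust it to the actual image size.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_digiscope_cnn(input_shape=(28, 28, 3), num_classes=7):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPool2D(pool_size=(2, 2)),
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPool2D(pool_size=(2, 2), padding="same"),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(32, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # seven HAM10000 classes
    ])
    # Table 3: Adam optimizer, sparse categorical cross-entropy loss
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_digiscope_cnn()
model.summary()
# Training with the Table 3 settings (x_train/y_train assumed to be prepared arrays):
# model.fit(x_train, y_train, epochs=50, batch_size=128, validation_split=0.1)
```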
2.4. DigiScope framework
2.4.1. The proposed edge-AI framework
The DigiScope edge-AI framework is a novel medical AI paradigm that uses self-learning and large-scale data evolution. Figure 5 illustrates a system for managing skin disease data using IoT and cloud technologies, divided into three main parts:
- Edge devices: this part includes devices such as dermatoscopic cameras, smartphones, smartwatches, and other IoT devices. These devices are responsible for collecting data related to skin diseases, including images and other health metrics. Once collected, the data is transmitted to the cloud using secure communication protocols facilitated by routers, ensuring that the data is sent efficiently and securely.
- Cloud: this part represents the cloud infrastructure, which includes storage, processing units, and machine learning models. When data from the edge devices reaches the cloud, it is stored and processed. The cloud infrastructure uses machine learning algorithms to analyze the data, providing insights and updates. The cloud also updates the model parameters based on new data, ensuring that the analysis remains accurate and up-to-date. The results of the data processing are then sent back to the edge devices and forwarded to the online medical services.
- Online medical services: this part includes healthcare services such as telemedicine platforms, hospitals, ambulances, and healthcare providers. These services utilize the processed data and insights provided by the cloud to offer medical advice, diagnosis, and treatment options. By integrating the data from the cloud, healthcare professionals can access real-time updates and therapeutic protocols, which helps in improving patient care and outcomes. This part of the system ensures that the processed data is effectively used to provide timely and accurate medical services to patients.
This integrated system allows for efficient data collection, processing, and utilization, thereby enhancing the management and treatment of skin diseases through a connected and intelligent infrastructure.

Figure 5. DigiScope: medical edge-AI framework

2.4.2. DigiScope workflow in Node-RED
Figure 6 illustrates the theoretical transfer of skin cancer image data via a secure communication technique from a dermatoscopic camera to the cloud for processing, depicting the projected data travel. The images are transmitted from edge devices to Google Cloud via MQTT, ensuring secure and efficient data transfer. The MQTT broker publishes the data to a pub/sub system, which forwards it to a vision module for processing. The processed data is then sent to an AutoML module for machine learning analysis. The analysis results update the IoT configuration and send commands back through the pub/sub system to the MQTT broker for terminal visualization. This simulation helps conceptualize the architecture, with real data flow planned for future projects; an illustrative edge-side publishing sketch follows Figure 6.

Figure 6. DigiScope workflow in Node-RED
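To make the edge-to-cloud step concrete, below is a minimal Python sketch of how a captured image could be published over MQTT from an edge device, using the paho-mqtt client. The broker address, port, and topic name are hypothetical placeholders; the actual Node-RED/Google Cloud pub/sub configuration of DigiScope is not reproduced here.

```python
# Illustrative edge-side publish of a dermatoscopic image over MQTT.
import base64
import json
import paho.mqtt.publish as publish

def publish_lesion_image(image_path: str, device_id: str) -> None:
    with open(image_path, "rb") as f:
        payload = json.dumps({
            "device_id": device_id,
            "image_b64": base64.b64encode(f.read()).decode("ascii"),
        })
    publish.single(
        topic="digiscope/lesions",        # hypothetical topic
        payload=payload,
        qos=1,                            # at-least-once delivery
        hostname="broker.example.com",    # hypothetical broker endpoint
        port=1883,                        # a tls=... argument would secure the channel
    )

# Example (hypothetical file and device ID):
# publish_lesion_image("lesion_0001.jpg", device_id="dermatoscope-01")
```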
3. RESULTS AND DISCUSSIONS
In the present study, we implemented the CNN algorithm on a computational platform with 16 gigabytes (GB) of random access memory (RAM) and an Intel i7-8650U processor. This setup facilitated efficient data processing and execution of the CNN algorithm, with training averaging 25 minutes and classification of a single sample taking approximately 0.130 milliseconds. Python was utilized for implementation, employing libraries such as Keras, Pandas, and Scikit-Learn. The model demonstrated remarkable efficacy, achieving an overall precision rate of 98% on an independent test dataset and a loss of 0.19 during 50 epochs of training, with minimal signs of overfitting. Notably, data augmentation techniques enhanced model accuracy. The findings underscore the efficacy of deep learning models in the precise classification of skin lesions within practical, real-world contexts.

This research presents a comparative examination of our deep learning model against established methodologies for the classification of skin lesions, revealing superior accuracy and speed compared to conventional techniques. The CNN algorithm's performance, as assessed by metrics such as recall, precision, F1-score, and support, which can be calculated from the values shown in Figure 7 (a short sketch of this computation follows the tables below), demonstrated results comparable to the SVM algorithm. However, our approach excels in efficiently identifying positive instances and minimizing false positives, as shown in Tables 4-6. The findings align with previous studies that emphasize the benefits of deep learning for skin lesion classification. Despite its strengths, our study has limitations, such as potential biases in the training data and the need for further validation in diverse clinical settings. Unexpectedly, the CNN model exhibited a notably low loss with data augmentation, underscoring its robustness in various conditions.

Figure 7. Multi-class confusion matrix of the customised CNN model

Table 4. Multi-class classification report of the customised CNN model
Class          Precision   Recall   F1-score   Support
0-nv           0.99        1.00     1.00       1359
1-mel          0.98        1.00     0.99       1318
2-bkl          0.96        0.98     0.97       1262
3-bcc          1.00        1.00     1.00       1351
4-vasc         0.99        0.88     0.93       1374
5-akiec        1.00        1.00     1.00       1358
6-df           0.94        0.99     0.97       1365
macro avg      0.98        0.98     0.98       9387
weighted avg   0.98        0.98     0.98       9387

Table 5. Model metrics
Metrics        Classification
               Dense     SVM
Accuracy       0.98      0.98
Precision      0.98      0.98
Recall         0.98      0.98
F1-score       0.98      0.98

Table 6. CNN model learning results
CNN model    Accuracy   Loss
Test set     0.98       0.19
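For reference, the per-class figures in Table 4 and the confusion matrix in Figure 7 can be reproduced from model predictions with Scikit-Learn, which the study lists among its libraries. The sketch below is illustrative: the variable names (model, x_test, y_test) are assumed from the earlier training sketch, and the class ordering follows Table 4.

```python
# Derive the confusion matrix (Figure 7) and the per-class precision/recall/
# F1-score/support report (Table 4) from the trained model's predictions.
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

CLASS_NAMES = ["nv", "mel", "bkl", "bcc", "vasc", "akiec", "df"]  # order as in Table 4

def evaluate_model(model, x_test, y_test):
    y_pred = np.argmax(model.predict(x_test), axis=1)  # softmax outputs -> class index
    matrix = confusion_matrix(y_test, y_pred)
    report = classification_report(y_test, y_pred, target_names=CLASS_NAMES, digits=2)
    return matrix, report

# Example usage:
# matrix, report = evaluate_model(model, x_test, y_test)
# print(matrix); print(report)
```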
The principal objective of this investigation was to examine the efficacy of deep learning algorithms in the classification of dermal lesions, with the results suggesting considerable promise for practical clinical implementation. This study highlights the critical role of sophisticated computational methodologies in the early identification and management of skin cancer, potentially resulting in enhanced patient outcomes and diminished healthcare expenditures. Nonetheless, several questions persist, particularly regarding the model's generalizability across diverse populations and the incorporation of these systems into clinical practice. Subsequent investigations ought to concentrate on mitigating these deficiencies and enhancing models for more extensive applicability. Through the integration of the findings presented in this study, we can facilitate the progression of innovative diagnostic instruments within the field of dermatology.

The evaluation of our proposed model against contemporary methodologies utilizing the HAM10000 dataset reveals its enhanced performance with respect to accuracy. As illustrated in Table 7 and in Figures 8 and 9, which depict the accuracy and loss curves respectively, the proposed model attained an accuracy of 98%, thereby significantly surpassing multiple well-established architectures. For example, InceptionV3 and Xception, recognized for their strong feature extraction abilities, achieved accuracies of 91.56% and 91.47%, respectively. In a comparable analysis, InceptionResNetV2, recognized as a leading model in the field, achieved an accuracy of 93.20%, which remains significantly inferior to the performance of the model we propose. Alternative methodologies, such as Shifted 2-Nets and EW-FCM+wide-shufflenet, demonstrated even lower accuracy rates, recording 83.20% and 84.80%, respectively. These findings underscore the effectiveness of the proposed methodology, demonstrating a significant enhancement compared to conventional techniques in the classification of dermatological images. The notable improvement in precision can be ascribed to the model's capacity to discern complex patterns and characteristics present in skin lesion images, thereby providing a viable approach for the accurate and dependable diagnosis of skin lesions.

Table 7. The proposed work compared with recent existing techniques on the HAM10000 dataset
Method                          Accuracy (%)
InceptionV3 [6]                 91.56
InceptionResNetV2 [6]           93.20
Xception [6]                    91.47
Shifted 2-Nets [27]             83.20
EW-FCM+wide-shufflenet [3]      84.80
Proposed model                  98.00

Figure 8. Accuracy of the customized CNN model