International Journal of Electrical and Computer Engineering (IJECE)
Vol. 15, No. 1, February 2025, pp. 356-364
ISSN: 2088-8708, DOI: 10.11591/ijece.v15i1.pp356-364

ReRNet: recursive neural network for enhanced image correction in print-cam watermarking

Said Boujerfaoui 1, Hassan Douzi 1, Rachid Harba 2, Khadija Gourrame 1,2
1 IRF-SIC Laboratory, Faculty of Sciences, Ibn Zohr University, Agadir, Morocco
2 PRISME Laboratory, Polytech Orléans, Orléans University, Orléans, France

Article Info
Article history:
Received Jun 16, 2024
Revised Sep 3, 2024
Accepted Oct 1, 2024

Keywords:
Corner detection
Fourier transform
Geometric distortions
Image watermarking
Neural networks
Print-cam process

ABSTRACT
Robust image watermarking that can resist camera shooting has gained considerable attention in recent years due to the need to protect sensitive printed information from being captured and reproduced without authorization. Indeed, the evolution of smartphones has made identity watermarking a feasible and convenient process. However, this process also introduces challenges such as perspective distortions, which can significantly impair the effectiveness of watermark detection on freehandedly digitized images. To meet this challenge, ReRNet, a recursive convolutional neural network-based correction method, is presented for the print-cam process, specifically applied to identity images. Accordingly, this paper proposes an improved Fourier watermarking method based on ReRNet to rectify perspective distortions. Experimental results validate the robustness of the enhanced scheme and demonstrate its superiority over existing methods, especially in handling perspective distortions encountered in the print-cam process.

This is an open access article under the CC BY-SA license.
Corresponding Author:
Said Boujerfaoui
IRF-SIC Laboratory, Faculty of Sciences, Ibn Zohr University
BP 8106, Dakhla District, Agadir 80000, Morocco
Email: said.boujerfaoui@edu.uiz.ac.ma

1. INTRODUCTION
Today, data is a fundamental pillar for industries and businesses, fortified by the ongoing surge in technological progress [1]. These advances have improved the efficiency of data transfer, but have also introduced challenges such as unauthorized data manipulation, affecting copyright protection and data integrity. As a result, industries are now faced with the imperative task of seeking real-time solutions for secure data processing. Since the 1990s, digital watermarking has emerged as an important research direction, particularly with the rise of smartphones, making watermarking algorithms viable for mobile systems to meet industrial security challenges [2].
Print-cam image watermarking [3] involves embedding a watermark into an image intended to be printed on a paper medium and then freehandedly captured using a smartphone camera. This uncontrolled operation introduces major challenges linked to perspective distortions and desynchronization problems [4], as freehand captures at varying angles can distort the watermark and introduce artifacts. These alterations complicate watermark detection, making it difficult or inaccurate [5]. The print-cam watermarking process is illustrated in Figure 1.

Journal homepage: http://ijece.iaescore.com
Figure 1. Print-cam watermarking process (watermark embedding, printing, capture of the printed image, and image ready for detection)

Image watermarking approaches have advanced using spatial and frequency domains [6], [7] such as the discrete cosine transform (DCT) [8], discrete wavelet transform (DWT) [9], and discrete Fourier transform (DFT) [10], each with distinct advantages and limitations. To deal with geometric distortions, various strategies have been devised, including frame synchronization mechanisms [11], convex optimization frameworks [12], and pseudo-random sequences [13]. Deep learning-based approaches, using convolutional neural networks, automate watermarking by learning correlations between watermarked and original images and exploiting imperceptible perturbations for data hiding [14]. However, research specific to print-cam scenarios remains limited, focusing mainly on learning-based techniques such as fine-tuned segmentation [15], [16], distortion mapping frameworks [17], [18], invariance layers [19], and 3D rendering distortion networks [20].
Image watermarking faces significant challenges in print-cam scenarios due to perspective distortions caused by the joint effect of rotation, translation, scaling (RST), and tilt angle. Despite their critical impact on watermark robustness, only a few approaches have addressed this process. Accordingly, leveraging deep learning methodologies and geometric transformation modeling could improve the robustness of watermarking schemes against these distortions.
This paper introduces ReRNet, a recursive neural network-based method to address perspective distortions found in ID images during the print-cam process.
ReRNet locates the ID image corners using a recursive convolutional neural network, enabling a projective transformation for image rectification and accurate watermark alignment. As a result, an improved robust image watermarking technique is proposed to address print-cam attacks. This approach combines a Fourier transform-based embedding method [21] with ReRNet, employed to rectify image distortions. The Fourier-based approach is selected for its proven ability to withstand the geometric attacks common in the print-cam process [22]. We conducted practical experiments on a selection of framed ID images, which were subjected to real print-cam attacks using a printer and two smartphones. The performance of the improved watermarking method was then evaluated and compared with existing competitive methods.
The rest of the paper is organized as follows: section 2 presents the complete watermarking method, including the ReRNet architecture. Section 3 covers the experimental results. Finally, section 4 concludes the paper.

2. PRINT-CAM WATERMARKING SCHEME
First, the watermark is embedded into the original image. Once the watermarked image is printed and captured by a camera, perspective distortions are corrected using the proposed correction technique. Finally, in the detection phase, we determine whether or not the watermark is present in the corrected image. The components of the improved watermarking scheme are described below.

2.1. Fourier-based embedding
The watermark is embedded into the image DFT magnitude, specifically along a circular area with a defined radius r. The embedding process affects the luminance component of the original image, keeping the chrominance components unmodified. To improve the detection rate, a low-pass filter is employed on the embeddable DFT coefficients [21].
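This magnitude-domain insertion can be sketched in a few lines of NumPy. The helper below is illustrative, not the paper's implementation: the function name, the sampling of a half-circle with mirrored conjugate-symmetric coefficients, and the additive rule with strength alpha are assumptions, and the low-pass filtering step is omitted for brevity.

```python
import numpy as np

def embed_fourier_watermark(luma, watermark, radius, alpha):
    """Illustrative sketch: add alpha * W to DFT magnitude samples on a
    circle of the given radius around the spectrum centre (luminance only;
    the paper's low-pass filtering of the coefficients is omitted)."""
    F = np.fft.fftshift(np.fft.fft2(luma))
    mag, phase = np.abs(F), np.angle(F)
    cy, cx = luma.shape[0] // 2, luma.shape[1] // 2
    n = len(watermark)
    # sample the upper half-circle; mirror each change so the spectrum
    # stays conjugate-symmetric and the output remains real-valued
    for k, t in enumerate(np.linspace(0.0, np.pi, n, endpoint=False)):
        y = int(round(cy + radius * np.sin(t)))
        x = int(round(cx + radius * np.cos(t)))
        mag[y, x] += alpha * watermark[k]
        mag[2 * cy - y, 2 * cx - x] = mag[y, x]
    F_w = mag * np.exp(1j * phase)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_w)))
```

In practice alpha would be tuned until the watermarked image reaches the target PSNR of 40 dB.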
Hence, the watermark W is inserted into the filtered coefficients as (1):

M_W = M_f + α × W   (1)

Here, M_W denotes the watermarked coefficients, M_f the filtered coefficients, and α the insertion strength. α is adjusted to achieve the desired peak signal-to-noise ratio (PSNR) of 40 dB. Inverse DFT is
applied to the watermarked image to obtain its luminance, and the final color image is then recovered using the unmodified chrominance.

2.2. Fourier-based detection
During the detection phase, only the watermarked image and the initial watermark are used, without the need for the original, unwatermarked image. Firstly, the luminance of the rectified image undergoes DFT processing. Next, coefficients are extracted from the magnitude along the predefined radius. Finally, the maximum normalized cross-correlation, denoted C_max, is calculated between the original watermark W and the extracted DFT coefficients F as (2):

C_max = max_{0 ≤ j ≤ N−1} [ Σ_{i=0}^{N−1} (W(i) − W̄)(F(i+j) − F̄) / √( Σ_{i=0}^{N−1} (W(i) − W̄)² · Σ_{i=0}^{N−1} (F(i+j) − F̄)² ) ]   (2)

where N is the watermark length, and W̄ and F̄ are respectively the means of the watermark sequence and of the extracted coefficients. If C_max exceeds a certain threshold t, the watermark is considered present.

2.3. Print-cam perspective correction
To improve the resilience of the watermarking system against perspective attacks during the print-cam process, a corrective technique is carried out as a complementary measure. In this section, we present ReRNet, a neural network-based method to detect the corners of the wireframe around the ID image. Our approach treats the challenge as a keypoint detection problem. These pivotal points correspond to the four corners of the image frame: top left (TL), top right (TR), bottom right (BR), and bottom left (BL). An overview of the proposed correction method can be seen in Figure 2.

Figure 2. ReRNet architecture: a 32 × 32 image serves as input to the corner detector model, which predicts four corners; these are then used by the region extractor algorithm to obtain four corner images.
Each of these corner images undergoes iterative refinement by the corner refiner model to produce the final output.

In light of Figure 2, ReRNet can be divided into two main steps: initially, a deep convolutional neural network, modeled on the structure of ResNet-20, is employed to predict the four corners of the image collectively. Subsequently, each of these predictions is individually refined through the iterative application of a second convolutional neural network that predicts one corner's coordinates. The architecture of ResNet-20 is depicted in Figure 3.

Figure 3. ResNet-20 architecture (stacks of 3 × 3 convolutional layers with 16, 32, and 64 filters, followed by a fully connected layer)

2.3.1. Corner detector
In the first stage, the corner detector model is designed to predict the coordinates of the four corners of an input ID image, using the ResNet-20 network. As shown in Figure 3, ResNet-20 comprises sets of three convolutional blocks, each incorporating basic residual blocks. These blocks consist of two convolutional layers
followed by batch normalization and rectified linear unit (ReLU) activation. Flattening the feature maps leads to a fully connected layer for regression, with the final dense layer sized to the output (default: 8 coordinates). Parameter initialization adheres to best practices, including He initialization for convolutional layers. The output of the model is the predicted coordinates of the four corners, yielding a total of eight values. The structure of the corner detector model can be seen in Figure 4.

Figure 4. Corner detector structure

Ideally, we would like our model to achieve precise corner localization. However, this cannot be fully realized due to two fundamental challenges. Feature variation: the final convolutional layer is primarily designed to capture high-level features, which are effective for classification or bounding-box detection; these features might not generalize well to regression and precise localization tasks. Upscale error: upscaling input and output images can introduce a considerable ±10 pixel error, undermining the model's ability to precisely pinpoint the corners of interest.
Given this inherent imprecision, the output from the corner detector does not serve as the final result. Instead, it is a crucial intermediate step in our approach. We leverage this output to extract four distinct regions from the image, ensuring that each region contains one of the four corners. To accomplish this, we use a dedicated algorithm, aptly named the region extractor, which effectively identifies and extracts these regions based on the corner detector's predictions. The region extractor algorithm identifies and extracts specific regions from the image based on the predicted corner points, labeled TL', TR', BR', and BL'.
The process involves cropping the image above and below each corner prediction along the x-axis and to the right and left along the y-axis. To focus on the region containing TL, for example, the method draws a line parallel to the y-axis through the middle of the x-coordinates of TL' and TR', thus eliminating the right-hand part of the image. Similarly, it excludes the area below a line parallel to the x-axis intersecting the midpoint of the y-coordinates of TL' and BL'. Where the area above and to the left of TL exceeds that below or to the right, adjustments are made to balance them by removing a strip of the image on the top or left-hand side. It is important to note that this approach applies to the predictions of all four corners and ensures that the extracted regions are normalized to the image size.

2.3.2. Corner refiner
The four regions extracted by the region extractor algorithm are then fed into the second model, referred to as the corner refiner, specially designed to locate a lone corner within images where only one corner of the frame is present. This model is also built upon the ResNet-20 architecture and features two regression heads, corresponding to one corner's coordinates. Notably, it adopts the same parameters as the first model. Figure 5 gives an overview of the corner refiner structure.

Figure 5. Corner refiner structure

However, due to the complexity of precisely locating the corner in a single step, the corner refiner model is used iteratively to converge on the accurate position of the corner. At each iteration, a portion of the image closest to the predicted point is retained in both vertical and horizontal directions based on a hyperparameter called β (retain factor), where 0 < β < 1.
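The midpoint-based cropping performed by the region extractor (section 2.3.1) can be sketched as follows. This is a minimal illustration under assumed conventions: the function name, the (x, y) corner format, and the (x0, y0, x1, y1) box format are not from the paper, and the strip-balancing adjustment described in the text is omitted.

```python
def extract_corner_regions(w, h, tl, tr, br, bl):
    """Sketch of the region-extractor idea: cut the image at the midpoints
    between neighbouring corner predictions so that each crop contains
    exactly one predicted corner. Corners are (x, y) points; returned
    boxes are (x0, y0, x1, y1)."""
    top_cut = (tl[0] + tr[0]) // 2      # vertical line between TL' and TR'
    bottom_cut = (bl[0] + br[0]) // 2   # vertical line between BL' and BR'
    left_cut = (tl[1] + bl[1]) // 2     # horizontal line between TL' and BL'
    right_cut = (tr[1] + br[1]) // 2    # horizontal line between TR' and BR'
    return {
        "TL": (0, 0, top_cut, left_cut),
        "TR": (top_cut, 0, w, right_cut),
        "BR": (bottom_cut, right_cut, w, h),
        "BL": (0, left_cut, bottom_cut, h),
    }
```

Because each cut passes through the midpoint between two predictions, a corner-detector error of up to half the frame width still leaves the true corner inside its crop.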
As a result, after n iterations, the original (H, W) image is cropped down to (H × β^n, W × β^n), and the process continues until the image size is smaller than 10 × 10 pixels, which serves as the termination condition. Thanks to this recursive mechanism, we achieve precise corner detection with remarkable accuracy. Figure 6 shows a visual representation of this recursive corner refinement process.
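The recursive refinement loop can be sketched as follows, with `predict` standing in for the corner-refiner network. The centring-and-clamping policy and the value β = 0.7 are illustrative assumptions; the paper leaves β as a tunable hyperparameter with 0 < β < 1.

```python
def refine_corner(predict, width, height, beta=0.7, min_size=10):
    """Sketch of the recursive refinement: `predict` returns a corner
    estimate (x, y) for a given crop. Each iteration retains a
    beta-fraction of the crop around the estimate, so after n steps the
    crop is roughly (width * beta**n, height * beta**n); iteration stops
    once the crop is smaller than min_size x min_size."""
    x0, y0, cw, ch = 0, 0, width, height
    while cw > min_size and ch > min_size:
        px, py = predict(x0, y0, cw, ch)
        nw, nh = int(cw * beta), int(ch * beta)
        # centre the smaller crop on the prediction, clamped inside the old crop
        x0 = min(max(px - nw // 2, x0), x0 + cw - nw)
        y0 = min(max(py - nh // 2, y0), y0 + ch - nh)
        cw, ch = nw, nh
    return predict(x0, y0, cw, ch)
```

Even a predictor with a fixed pixel error benefits from this loop, since the same error matters less and less as the crop shrinks around the corner.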
Figure 6. Corner retention process: based on the prediction, the system uses the predicted point to eliminate the zone least likely to contain the corner, then returns the cropped image to the model, with the boxes indicating the areas observed at each iteration

2.3.3. Training setup
To prepare the training data, we started with 500 ID images sourced from the PICS dataset [23]. These images were then printed and captured manually using iPhone 6 and Galaxy S5 smartphones with factory settings. The ground truth was labeled manually with the ImageJ application [24], using the labels top left (TL), top right (TR), bottom right (BR), and bottom left (BL) to indicate the corner positions. Subsequently, this dataset was augmented to a total of 16,000 images by applying random rotations, cropping, contrast adjustments, and changes in brightness [25]. For the first model, the captured images must contain four corners, whereas for the second model, these images are divided into four individual regions, with one corner included in each.
The input image is resized to 32 × 32 before being passed through the network. This helps to balance the trade-offs between computational efficiency, memory usage, and network performance. Stochastic gradient descent (SGD) is used with a learning rate of 0.05, momentum of 0.9, and weight decay of 0.00001. The training process spans 40 epochs, with the learning rate dynamically adjusted at epochs 10, 20, and 30. This process iterates through mini-batches of 32 samples, tracking the mean square error (MSE) loss. For validation, root mean squared error (RMSE) is used, with evaluations carried out after each epoch.

2.3.4. Geometric correction
The process of capturing images freehandedly with a smartphone establishes geometric relationships between the camera position and the image [26].
This mapping of 3D world coordinate points onto the 2D image plane is known as a projective transformation, often represented using matrix relations:

p = H × P   (3)

where p represents the 2D image coordinates, H stands for the 3 × 4 projective matrix describing the transformation, and P denotes the 3D world coordinates. In our scenario, where we capture an image of a 2D plane, the perspective transformation simplifies to:

[x, y, 1]^T = H [X, Y, 1]^T   (4)

where H is the 3 × 3 perspective matrix [9]. The H matrix has 8 degrees of freedom, requiring the estimation of 8 unknown variables. To determine the transformation that corrects the distorted image, the coordinates of the four corners are used to solve the system as (5):

x = (h11 X + h12 Y + h13) / (h31 X + h32 Y + 1)
y = (h21 X + h22 Y + h23) / (h31 X + h32 Y + 1)   (5)

In summary, ReRNet is used to detect the four corners of the image frame. Then, the projective matrix is estimated by solving the system of equations in (5) using the four corresponding points. Ultimately, an inverse transformation is applied to the image to remap it into a rectified state.
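Solving (5) from the four corner correspondences reduces to an 8 × 8 linear system, which the following NumPy sketch illustrates under the assumption of non-degenerate corner positions (no three corners collinear); the function names are illustrative.

```python
import numpy as np

def estimate_homography(world_pts, image_pts):
    """Solve the system in (5): each correspondence (X, Y) -> (x, y)
    yields two linear equations in the eight unknowns h11..h32
    (with h33 = 1), so four corners determine H exactly."""
    A, b = [], []
    for (X, Y), (x, y) in zip(world_pts, image_pts):
        A.append([X, Y, 1, 0, 0, 0, -x * X, -x * Y]); b.append(x)
        A.append([0, 0, 0, X, Y, 1, -y * X, -y * Y]); b.append(y)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def project(H, pt):
    """Apply H to a 2D point in homogeneous coordinates, as in (4)."""
    x, y, s = H @ np.array([pt[0], pt[1], 1.0])
    return x / s, y / s
```

Rectification then amounts to warping the captured image with the inverse of the estimated H, mapping the detected corner quadrilateral back to the reference rectangle.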
3. RESULTS AND DISCUSSION
In this section, we compare the enhanced print-cam watermarking method with its predecessor [22] in terms of detection rate. Both methods use Fourier watermarking to embed the watermark and incorporate perspective correction techniques to correct distortions introduced by the print-cam process. In Gourrame et al. [22], the Hough transform was used to rectify the captured images, whereas our approach employs ReRNet for the same purpose.
The evaluation of the two methods was conducted under real print-cam conditions using 600 ID images from the PICS dataset [23], 300 of them watermarked and the rest unwatermarked. These images were printed on paper using a Konica Minolta Bizhub 450i, with dimensions of 44 × 44 mm. To capture the printed images, two mobile devices were used, a Redmi Note 10 and a Galaxy S21, with 64- and 108-megapixel cameras respectively. The acquisition process involved capturing the images freehandedly, without using any filters or flash. All images were taken under daylight illumination conditions. The test process is shown in Figure 7.
After perspective correction, the probability of true positive detection is calculated at different detection thresholds, and the results are shown in Figure 8. As seen from the figure, the proposed method exhibits remarkable results, achieving an 80% detection rate at a relatively high threshold of 0.3, surpassing the alternative method (42%) at the same threshold. This underscores the significant contribution of ReRNet in addressing perspective distortions during the print-cam process, ultimately improving the accuracy of watermark detection.

Figure 7. Real-world print-cam test procedure

Figure 8.
Probability of true positive detection at different threshold values under attacks (black line), and after perspective correction using the proposed method (red line) and the method in [22] (blue line)

According to Figure 9, the proposed method demonstrates a remarkable enhancement in detection rates, reaching 82% even at low levels of probability of false alarm (PFA). Consequently, after ReRNet rectification, the probability of true positive detection increases substantially, marking a notable improvement over its predecessor [22]. As a result, our proposed ReRNet-based watermarking technique shows outstanding performance, achieving a minimum error rate of 1.02%. In comparison, the other
method exhibits a higher error rate of 1.37%. This confirms the advantage of our approach, which significantly reduces the error rate compared to the other method. In summary, the insights from our study demonstrate the considerable influence of the suggested correction technique, ReRNet, in elevating the effectiveness of the Fourier watermarking approach in print-cam scenarios. The results show that ReRNet maintains high detection quality even under challenging attacks, approaching the industry benchmark error rate of 1%. This highlights its robustness and potential for implementation on smartphones.

Figure 9. Detection rate in terms of receiver operating characteristic (ROC) curves of the proposed approach and the method in [22]

4. CONCLUSION
In this paper, we present an improved Fourier-based image watermarking method for the print-cam process. The approach includes ReRNet, a new correction neural network to address perspective distortions often encountered in images freehandedly taken with a smartphone camera. Real-world testing was conducted to validate the effectiveness and reliability of our correction technique in an industrial setting applied to ID images. Results underscore the remarkable performance of the proposed scheme in successfully mitigating perspective distortions, a critical factor in a variety of print-cam applications. Additionally, our approach demonstrates improved detection rate values and shows a reduced error rate of 1.02% compared to 1.37% observed with its predecessor. The collective results from these assessments suggest that the improved Fourier watermarking method holds promise in addressing the challenges presented by the print-cam process and contributes to the advancement of the digital image watermarking field.

REFERENCES
[1] S. Pouyanfar, Y. Yang, S.-C. Chen, M.-L. Shyu, and S. S.
Iyengar, "Multimedia big data analytics," ACM Computing Surveys, vol. 51, no. 1, pp. 1–34, Jan. 2019, doi: 10.1145/3150226.
[2] S. P. Mohanty, A. Sengupta, P. Guturu, and E. Kougianos, "Everything you want to know about watermarking: from paper marks to hardware protection," IEEE Consumer Electronics Magazine, vol. 6, no. 3, pp. 83–91, Jul. 2017, doi: 10.1109/MCE.2017.2684980.
[3] K. Gourrame, F. Ros, H. Douzi, R. Harba, and R. Riad, "Fourier image watermarking: print-cam application," Electronics, vol. 11, no. 2, p. 266, Jan. 2022, doi: 10.3390/electronics11020266.
[4] V. Licks and R. Jordan, "Geometric attacks on image watermarking systems," IEEE Multimedia, vol. 12, no. 3, pp. 68–78, Jul. 2005, doi: 10.1109/MMUL.2005.46.
[5] A. Pramila, A. Keskinarkaus, and T. Seppänen, "Camera based watermark extraction - problems and examples," in Proceedings of the Finnish Signal Processing Symposium, 2007.
[6] L. Rakhmawati, W. Wirawan, and S. Suwadi, "A recent survey of self-embedding fragile watermarking scheme for image authentication with recovery capability," EURASIP Journal on Image and Video Processing, vol. 2019, no. 1, p. 61, Dec. 2019, doi: 10.1186/s13640-019-0462-3.
[7] M. Begum and M. S. Uddin, "Digital image watermarking techniques: a review," Information, vol. 11, no. 2, p. 110, Feb. 2020, doi: 10.3390/info11020110.
[8] K. Rangel-Espinoza, E. Fragoso-Navarro, C. Cruz-Ramos, R. Reyes-Reyes, M. Nakano-Miyatake, and H. M. Pérez-Meana, "Adaptive removable visible watermarking technique using dual watermarking for digital color images," Multimedia Tools and Applications, vol. 77, no. 11, pp. 13047–13074, Jun. 2018, doi: 10.1007/s11042-017-4931-3.
[9] T. Huynh-The et al., "Selective bit embedding scheme for robust blind color image watermarking," Information Sciences, vol. 426, pp. 1–18, Feb. 2018, doi: 10.1016/j.ins.2017.10.016.
[10] Sunesh and R. R. Kishore, "A novel and efficient blind image watermarking in transform domain," Procedia Computer Science, vol. 167, pp. 1505–1514, 2020, doi: 10.1016/j.procs.2020.03.361.
[11] T. Nakamura, A. Katayama, M. Yamamuro, and N. Sonehara, "Fast watermark detection scheme for camera-equipped cellular phone," in Proceedings of the 3rd International Conference on Mobile and Ubiquitous Multimedia, New York, NY, USA: ACM, Oct. 2004, pp. 101–108, doi: 10.1145/1052380.1052395.
[12] L. Dong, J. Chen, C. Peng, Y. Li, and W. Sun, "Watermark-preserving keypoint enhancement for screen-shooting resilient watermarking," in 2022 IEEE International Conference on Multimedia and Expo (ICME), IEEE, Jul. 2022, pp. 1–6, doi: 10.1109/ICME52920.2022.9859950.
[13] A. Pramila, A. Keskinarkaus, V. Takala, and T. Seppänen, "Extracting watermarks from printouts captured with wide angles using computational photography," Multimedia Tools and Applications, vol. 76, no. 15, pp. 16063–16084, Aug. 2017, doi: 10.1007/s11042-016-3895-z.
[14] Z. Wang et al., "Data hiding with deep learning: a survey unifying digital watermarking and steganography," IEEE Transactions on Computational Social Systems, vol. 10, no. 6, pp. 2985–2999, 2023, doi: 10.1109/TCSS.2023.3268950.
[15] M. Tancik, B. Mildenhall, and R. Ng, "StegaStamp: invisible hyperlinks in physical photographs," in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Jun. 2020, pp. 2114–2123, doi: 10.1109/CVPR42600.2020.00219.
[16] C. Yu, J. Wang, C. Peng, C. Gao, G. Yu, and N. Sang, "BiSeNet: bilateral segmentation network for real-time semantic segmentation," in Computer Vision - ECCV 2018,
Lecture Notes in Computer Science, Cham: Springer, 2018, pp. 334–349, doi: 10.1007/978-3-030-01261-8_20.
[17] S. Boujerfaoui, H. Douzi, R. Harba, and F. Ros, "Cam-Unet: print-cam image correction for zero-bit Fourier image watermarking," Sensors, vol. 24, no. 11, p. 3400, May 2024, doi: 10.3390/s24113400.
[18] S. Boujerfaoui, H. Douzi, and R. Harba, "Print-cam image distortion correction for robust image watermarking," in 2024 26th International Conference on Digital Signal Processing and its Applications (DSPA), IEEE, Mar. 2024, pp. 1–6, doi: 10.1109/DSPA60853.2024.10510115.
[19] X. Zhong, P.-C. Huang, S. Mastorakis, and F. Y. Shih, "An automated and robust image watermarking scheme based on deep neural networks," IEEE Transactions on Multimedia, vol. 23, pp. 1951–1961, 2021, doi: 10.1109/TMM.2020.3006415.
[20] J. Jia et al., "RIHOOP: robust invisible hyperlinks in offline and online photographs," IEEE Transactions on Cybernetics, vol. 52, no. 7, pp. 7094–7106, Jul. 2022, doi: 10.1109/TCYB.2020.3037208.
[21] R. Riad, F. Ros, R. Harba, H. Douzi, and M. El Hajji, "Pre-processing the cover image before embedding improves the watermark detection rate," in 2014 Second World Conference on Complex Systems (WCCS), IEEE, Nov. 2014, pp. 705–709, doi: 10.1109/ICoCS.2014.7060967.
[22] K. Gourrame et al., "A zero-bit Fourier image watermarking for print-cam process," Multimedia Tools and Applications, vol. 78, no. 2, pp. 2621–2638, Jan. 2019, doi: 10.1007/s11042-018-6302-0.
[23] P. Hancock, "Psychological image collection at Stirling (PICS)," https://pics.stir.ac.uk/ (accessed Jan. 23, 2022).
[24] C. A. Schneider, W. S. Rasband, and K. W. Eliceiri, "NIH Image to ImageJ: 25 years of image analysis," Nature Methods, vol. 9, no. 7, pp. 671–675, Jul. 2012, doi: 10.1038/nmeth.2089.
[25] C. Shorten and T. M. Khoshgoftaar, "A survey on image data augmentation for deep learning," Journal of Big Data, vol.
6, no. 1, Dec. 2019, doi: 10.1186/s40537-019-0197-0.
[26] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision. Cambridge University Press, 2004, doi: 10.1017/CBO9780511811685.

BIOGRAPHIES OF AUTHORS
Said Boujerfaoui is a Ph.D. student at Ibn Zohr University in Morocco. He obtained his master's degree in data science from the same university in 2020. His areas of research include artificial intelligence, image processing, digital watermarking, and computer vision. He can be contacted at email: said.boujerfaoui@edu.uiz.ac.ma.
Hassan Douzi received the Doctorat (French Ph.D.) from the University of Paris IX (Dauphine) on the application of wavelets to the seismic inversion problem. Since 1993 he has been a Research Professor at Ibn Zohr University of Agadir in Morocco. His research interests include wavelets, image and signal processing. He can be contacted at email: h.douzi@uiz.ac.ma.
Rachid Harba graduated from ENS Cachan in electrical engineering, Paris, France, in 1983, and obtained his Ph.D. in electrical engineering in 1985 from INPG Grenoble, France. In 1988, he became an associate professor at the University of Orléans, France, and is currently a professor at the University of Orléans, in the Polytech'Orléans department. His research focuses on the statistical processing of signals and images, particularly for medical and industrial applications. Currently, he is the leader of the European project STANDUP #777661 for the early detection of diabetic foot using a smartphone. He can be contacted at email: rachid.harba@univ-orleans.fr.
Khadija Gourrame holds a Ph.D. in computer science from Ibn Zohr University in Morocco and the University of Orléans in France, under a cotutelle scheme. She received a master's degree in computer science from the University of Mysore, India, in 2014. Her research interests include image processing, digital watermarking, and computer vision. She can be contacted at email: khadija.gourrame@edu.uiz.ac.ma.