Robotic product-based manipulation in simulated environment
10.11591/ijece.v15i6.pp5894-5903
Juan Camilo Guacheta-Alba, Anny Astrid Espitia-Cubillos, Robinson Jimenez-Moreno
Before deploying algorithms in industrial settings, it is essential to validate them in virtual environments to anticipate real-world performance, identify potential limitations, and guide necessary optimizations. This study presents the development and integration of artificial intelligence algorithms for detecting labels and container formats of cleaning products using computer vision, enabling robotic manipulation via a UR5 arm. Label identification is performed using the speeded-up robust features (SURF) algorithm, ensuring robustness to scale and orientation changes. For container recognition, multiple methods were explored: edge detection using Sobel and Canny filters, Hopfield networks trained on filtered images, 2D cross-correlation, and finally, a you only look once (YOLO) deep learning model. Among these, the custom-trained YOLO detector provided the highest accuracy. For robotic control, smooth joint trajectories were computed using polynomial interpolation, allowing the UR5 robot to execute pick-and-place operations. The entire process was validated in the CoppeliaSim simulation environment, where the robot successfully identified, classified, and manipulated products, demonstrating the feasibility of the proposed pipeline for future applications in semi-structured industrial contexts.
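To illustrate the label-identification step, the following is a minimal sketch of SURF-based matching, assuming OpenCV's contrib module (`cv2.xfeatures2d`, which requires a non-free build); the paper does not specify its implementation, and the function names, threshold values, and ratio-test constant here are illustrative assumptions rather than the authors' code.

```python
# Hypothetical sketch of SURF-based label matching (not the paper's implementation).
import cv2

def match_label(template_path, scene_path, min_matches=10):
    """Return True if the label template appears to be present in the scene image."""
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)

    # SURF keypoints/descriptors are invariant to scale and rotation,
    # which motivates their use for labels seen at varying distances and poses.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_t, des_t = surf.detectAndCompute(template, None)
    kp_s, des_s = surf.detectAndCompute(scene, None)

    # Match descriptors and keep only unambiguous matches (Lowe's ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_t, des_s, k=2)
    good = [m for m, n in knn if m.distance < 0.7 * n.distance]

    return len(good) >= min_matches
```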
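The smooth joint trajectories mentioned for the UR5 can be sketched with a simple polynomial time-scaling; the example below assumes a quintic profile with zero boundary velocity and acceleration, which is one common choice. The degree of the polynomial, the function names, and the example configurations are assumptions for illustration, not taken from the paper.

```python
# Hypothetical sketch of polynomial joint interpolation between two configurations.
import numpy as np

def quintic_trajectory(q0, qf, T, steps=100):
    """Joint positions over [0, T] with zero start/end velocity and acceleration."""
    q0, qf = np.asarray(q0, float), np.asarray(qf, float)
    t = np.linspace(0.0, T, steps)
    s = t / T
    # Normalized quintic profile: s(0)=0, s(1)=1, zero 1st/2nd derivatives at both ends.
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5
    return q0 + np.outer(blend, qf - q0)  # shape: (steps, n_joints)

# Example: interpolate the six UR5 joints between two illustrative configurations.
q_start = [0.0, -1.57, 1.57, 0.0, 1.57, 0.0]
q_goal = [0.5, -1.0, 1.2, -0.3, 1.4, 0.2]
path = quintic_trajectory(q_start, q_goal, T=2.0)
```

Each row of `path` can then be streamed to the simulated UR5 (e.g., as joint targets in CoppeliaSim) to execute the pick-and-place motion smoothly.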