
Autonomous Gluing Shoe Sole Based on Pointcloud Data


Adhesive gluing is an important step in leather shoe manufacturing: it bonds the shoe parts together, especially the sole and the upper. However, this task still relies heavily on manual operation and on robot teaching by playback. Such approaches can hardly achieve high productivity, because several thousand shoe models must be processed and the shoe itself is a non-rigid object that is easily deformed during manufacturing. Soles fall into two kinds: flat soles can be treated as 2D objects, whereas high-heel soles must be treated as 3D objects, which makes the definition of the gluing trajectory more complex for high heels. To ensure satisfactory quality, the adhesive tip should remain perpendicular to the sole at every point of the trajectory. This research presents a practical strategy for automating the gluing task in the footwear industry using a 6-DOF industrial robot and an RGB-D camera. When a random sole is placed inside the robot's working area, the 3D point-cloud boundary contour of the sole is extracted from the camera data and interpolated into an accurate, smooth trajectory, which the manipulator then traces autonomously. Experiments demonstrate the efficiency of the system in terms of time, stability, and accuracy.
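The abstract describes keeping the adhesive tip perpendicular to the sole at each point of the trajectory, which requires a surface normal at every trajectory point. The following is a minimal, hypothetical sketch (not the authors' implementation) of one common way to estimate such a normal from raw point-cloud data: PCA over a point's nearest neighbours, taking the eigenvector with the smallest eigenvalue of the local covariance as the normal direction.

```python
import numpy as np

def estimate_normal(points, query_idx, k=8):
    """Estimate the surface normal at points[query_idx] via PCA over
    its k nearest neighbours: the eigenvector belonging to the smallest
    eigenvalue of the local covariance approximates the normal."""
    p = points[query_idx]
    dists = np.linalg.norm(points - p, axis=1)
    nbrs = points[np.argsort(dists)[:k]]           # k nearest neighbours
    cov = np.cov((nbrs - nbrs.mean(axis=0)).T)     # 3x3 local covariance
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues ascending
    n = eigvecs[:, 0]                              # smallest-eigenvalue axis
    return n / np.linalg.norm(n)

# Toy cloud sampled on the z = 0 plane, so the normal should align with z.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(50, 2))
cloud = np.hstack([xy, np.zeros((50, 1))])
normal = estimate_normal(cloud, 0)
```

In a real system, a point-cloud library such as PCL or Open3D would typically provide the neighbour search and normal estimation; this NumPy-only version just illustrates the geometry behind orienting the gluing tool along the local normal.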
Copyright © 2022 Praise Worthy Prize - All rights reserved.


Gluing Shoe; Computer Vision; RGB-D Camera; Pointcloud; Industrial Robot



