
Autonomous Gluing Shoe Sole Based on Pointcloud Data

Le Duc Hanh(1*), Nguyen Thanh Nhat(2)

(1) Department of Mechatronics, Faculty of Mechanical Engineering, Ho Chi Minh City University of Technology (HCMUT), Viet Nam
(2) Department of Mechatronics, Faculty of Mechanical Engineering, Ho Chi Minh City University of Technology (HCMUT), Viet Nam
(*) Corresponding author


DOI: https://doi.org/10.15866/irea.v10i1.20674

Abstract


Adhesive gluing is an important step in leather shoe manufacturing: it bonds the shoe parts together, especially the sole and the upper body of the shoe. However, this task still relies heavily on manual operation and on robot teaching by playback. It is difficult for those approaches to achieve high productivity, because the number of shoe models to be processed runs into the thousands and the shoe itself is a non-rigid object that is easily twisted during manufacturing. Shoes fall into two kinds: flat soles, which can be treated as 2D objects, and high-heel soles, which must be considered 3D objects; defining the gluing trajectory is therefore more complex for high-heel soles. To ensure satisfactory quality, the adhesive tip should be perpendicular to the sole at every point of the trajectory. This research presents a practical strategy for automating the gluing task in the footwear industry using a 6-DOF industrial robot and an RGB-D camera. When a shoe sole is placed at a random position inside the robot working area, the 3D pointcloud boundary contour of the sole is extracted with the 3D camera and interpolated to generate an accurate, smooth trajectory, which the manipulator then traces autonomously. The efficiency of the system in terms of time, stability, and accuracy is demonstrated through experiments.
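The two geometric steps the abstract describes — interpolating the extracted boundary contour into a smooth, evenly spaced trajectory, and keeping the glue tip perpendicular to the sole edge at each waypoint — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it assumes the sole boundary has already been extracted from the RGB-D pointcloud as an ordered list of 3D points, uses arc-length linear interpolation as a simple stand-in for the paper's smoother interpolation, and takes a user-supplied inward reference direction instead of a true surface normal.

```python
import numpy as np

def resample_contour(points, n_samples):
    """Resample a closed 3D boundary contour to n_samples evenly spaced
    waypoints by arc-length linear interpolation (a simplified stand-in
    for the smooth trajectory interpolation described in the paper)."""
    pts = np.asarray(points, dtype=float)
    closed = np.vstack([pts, pts[:1]])              # close the loop
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])     # cumulative arc length
    t = np.linspace(0.0, s[-1], n_samples, endpoint=False)
    out = np.empty((n_samples, 3))
    for k in range(3):                              # interpolate x, y, z separately
        out[:, k] = np.interp(t, s, closed[:, k])
    return out

def tool_axes(traj, inward):
    """Approximate the glue-tip axis at each waypoint as the cross product
    of the local path tangent and an inward reference direction, so the
    tip stays perpendicular to the sole edge along the trajectory."""
    tangents = np.gradient(traj, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    axes = np.cross(tangents, inward)
    axes /= np.linalg.norm(axes, axis=1, keepdims=True)
    return axes

# Toy example: a square "sole" contour in the XY plane.
boundary = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
traj = resample_contour(boundary, 8)                # 8 evenly spaced waypoints
axes = tool_axes(traj, np.array([0.0, 0.0, 1.0]))   # unit tip axes per waypoint
```

For a real sole the waypoints and axes would be transformed from the camera frame into the robot base frame via hand-eye calibration before being sent to the 6-DOF manipulator.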
Copyright © 2022 Praise Worthy Prize - All rights reserved.

Keywords


Gluing Shoe; Computer Vision; RGB-D Camera; Pointcloud; Industrial Robot







