
Robotic Manipulator Control Using CNN and Deep Q-Network Algorithm




DOI: https://doi.org/10.15866/ireaco.v15i5.19762

Abstract


Convolutional Neural Networks are combined with a Reinforcement Learning algorithm to control a robotic arm. The objective is to move the end-effector to a desired position using the minimum number of actions, without prior knowledge of the robot's structure or its mathematical model. The proposed methodology processes camera images of the environment and feeds this information as raw pixels to the agent. The Deep Q-Network algorithm then trains the agent to select appropriate actions based on a value function that estimates future rewards. An experience replay feature improves the agent's performance by memorizing the highest-rewarded sequence of actions and replaying it when needed. Simulation runs in the graphical robotic simulator CoppeliaSim validate the proposed algorithm, as the agent finds the optimal number of actions to move the robot's end-effector to the desired target using camera images as inputs. Four different cases are presented, and the results show that the proposed algorithm reduces the number of required actions significantly.
Copyright © 2022 Praise Worthy Prize - All rights reserved.
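As an illustrative sketch only (not the authors' code), the core of the approach described above, an epsilon-greedy agent trained with the Deep Q-Network update and an experience replay memory, can be shown on a toy task. A tabular Q-function stands in for the paper's CNN, and the 1-D "reaching" environment, its reward values, and all hyperparameters are assumptions chosen for clarity:

```python
import random
from collections import deque

# Toy 1-D "reaching" task: the end-effector sits at an integer position
# and must reach a fixed target using left/right moves (actions 0 and 1).
N_POS, TARGET = 7, 5
ACTIONS = (-1, +1)

def step(pos, a):
    nxt = max(0, min(N_POS - 1, pos + ACTIONS[a]))
    done = nxt == TARGET
    return nxt, (10.0 if done else -1.0), done   # reward shaping is assumed

# Tabular Q-function standing in for the paper's CNN Q-network.
Q = [[0.0, 0.0] for _ in range(N_POS)]
buffer = deque(maxlen=500)                       # experience replay memory
alpha, gamma, eps = 0.5, 0.9, 0.2

random.seed(0)
for episode in range(300):
    pos = random.randrange(N_POS)
    for _ in range(30):
        # Epsilon-greedy action selection.
        a = random.randrange(2) if random.random() < eps \
            else max((0, 1), key=lambda x: Q[pos][x])
        nxt, r, done = step(pos, a)
        buffer.append((pos, a, r, nxt, done))
        # Replay a minibatch of stored transitions and apply the DQN target
        # y = r + gamma * max_a' Q(s', a')   (y = r on terminal transitions).
        for s, act, rew, s2, d in random.sample(buffer, min(8, len(buffer))):
            y = rew if d else rew + gamma * max(Q[s2])
            Q[s][act] += alpha * (y - Q[s][act])
        pos = nxt
        if done:
            break

# The greedy policy should now reach the target with the minimum number
# of actions, mirroring the paper's objective.
pos, moves = 0, 0
while pos != TARGET and moves < 20:
    pos, _, _ = step(pos, max((0, 1), key=lambda x: Q[pos][x]))
    moves += 1
print(moves)
```

The sketch keeps the two ingredients the abstract highlights, a value function that estimates future rewards and a replay memory that reuses past experience, while replacing the CNN and camera pixels with a trivial state encoding so the example stays self-contained.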

Keywords


Reinforcement Learning; Deep Q-Networks; Convolutional Neural Networks; Robot Manipulator; Image Processing







