Optimizing Deep Learning Methods in Neural Network Architectures
Deep neural networks are a powerful tool for machine learning and have achieved significant success in numerous computer vision and image processing tasks. This paper discusses several new neural network structures that outperform the traditional feedforward architecture. A method of network structure optimization based on the gradient descent and heavy-ball algorithms is proposed, along with an approach based on sparse representation for training and optimizing the network structure simultaneously. Experiments on the CIFAR-10 and CIFAR-100 classification tasks show that optimizing the ResNet and DenseNet structures with the gradient descent and heavy-ball algorithms, respectively, yields better performance as the depth of the network increases. The sparse-representation-based network achieves the highest accuracy on both datasets; this strategy encourages quick adaptation to the data at each iteration. These results can be used to design deeper neural networks without loss of accuracy or computational speed.
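The abstract contrasts plain gradient descent with the heavy-ball (Polyak momentum) method. The paper's own optimization procedure is not given here, so the following is only a minimal illustrative sketch of the standard heavy-ball update, shown on a simple quadratic objective; the function names and hyperparameters are assumptions, not the authors' implementation.

```python
import numpy as np

def heavy_ball(grad, x0, lr=0.1, beta=0.9, steps=200):
    """Heavy-ball (Polyak momentum) iteration:
        v_{t+1} = beta * v_t - lr * grad(x_t)
        x_{t+1} = x_t + v_{t+1}
    With beta = 0 this reduces to plain gradient descent.
    """
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        v = beta * v - lr * grad(x)  # accumulate momentum
        x = x + v                    # take the damped step
    return x

# Minimize f(x) = 0.5 * x^T A x, whose unique minimizer is x = 0.
# The ill-conditioned A is where momentum helps over plain descent.
A = np.diag([1.0, 10.0])
grad = lambda x: A @ x
x_star = heavy_ball(grad, [5.0, 5.0])
```

The momentum term `beta * v` carries information from previous gradients, which damps oscillation along steep directions and accelerates progress along flat ones, the usual motivation for preferring heavy-ball over plain gradient descent on ill-conditioned problems.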
Copyright © 2021 Praise Worthy Prize - All rights reserved.