Analog Computing and a Hybrid Approach to the Element Base of Artificial Intelligence Applications
Strong demand for artificial intelligence technology is driving the development of complex, high-performance applications with lower power consumption. Analog computing offers high performance and system simplicity because it directly simulates the physical processes occurring in nature. The universality of digital coding, in turn, yields accurate computation results and allows data to be stored and restored without loss. The benefits of digital and analog computing systems can be enhanced by their hybridization, with the type and level of hybridization depending on the complexity of the task at hand. Hardware implementation of neural networks offers promising solutions for computing tasks that require compact, low-power technologies. Artificial neural networks (ANNs), like biological neural networks, are characterized by their capacity to learn and memorize information, which depends on the network architecture and its synaptic weights. The literature review shows that stable weight storage with a reduced footprint can be achieved by combining digitally stored weights with analog multipliers. The proposed network architecture provides optimal conditions for maintaining synaptic weights and increases processing speed through parallel weight perturbation.
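The combination of digitally stored weights, analog multiplication, and parallel weight perturbation mentioned above can be illustrated with a minimal software sketch. This is an assumption-laden toy model, not the paper's actual circuit or method: weights are quantized to a fixed digital grid (standing in for digital registers), the forward pass is a linear multiply-accumulate (standing in for analog multipliers), and training uses a simultaneous-perturbation (SPSA-style) update, in which all weights are perturbed in parallel and the gradient is estimated from just two loss evaluations. The function names, bit depth, and hyperparameters are all illustrative choices.

```python
# Illustrative sketch only: parallel (simultaneous) weight perturbation
# training of a single linear layer with digitally quantized weights.
# All names and parameters are assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, levels=256, w_max=1.0):
    """Snap weights to a digital grid (e.g. an 8-bit register per weight)."""
    step = 2 * w_max / (levels - 1)
    return np.clip(np.round(w / step) * step, -w_max, w_max)

def loss(w, x, y):
    """Mean squared error of a linear 'analog multiplier' layer."""
    return np.mean((x @ w - y) ** 2)

def train_step(w, x, y, delta=0.01, lr=0.1):
    """Perturb every weight at once with random signs and estimate the
    gradient from a single pair of loss evaluations (SPSA-style)."""
    signs = rng.choice([-1.0, 1.0], size=w.shape)
    l_plus = loss(quantize(w + delta * signs), x, y)
    l_minus = loss(quantize(w - delta * signs), x, y)
    grad_est = (l_plus - l_minus) / (2 * delta) * signs
    return quantize(w - lr * grad_est)

# Toy regression target: y = x @ w_true
w_true = np.array([0.5, -0.25, 0.75])
x = rng.normal(size=(64, 3))
y = x @ w_true
w = np.zeros(3)
for _ in range(500):
    w = train_step(w, x, y)
print(loss(w, x, y))  # loss shrinks toward the quantization floor
```

The point of the sketch is the cost structure: unlike backpropagation, the perturbation update needs only forward evaluations of the network, which maps naturally onto analog hardware, while keeping the weights on a digital grid sidesteps analog weight drift.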
Copyright © 2020 Praise Worthy Prize - All rights reserved.