Hardware implementation of an asynchronous analog neural network with training based on unified CMOS IP blocks

Abstract

An approach to designing neuromorphic electronic devices based on convolutional neural networks with backpropagation training is presented. The approach is aimed at improving the energy efficiency and performance of autonomous systems. It relies on a neural network topology compiler built around five basic CMOS blocks intended for the analog implementation of all computational operations in the training and inference modes. The developed crossbar arrays of functional analog CMOS blocks with digital control of the conductance level perform the matrix-vector multiplication in the convolutional and fully connected layers without DACs, while ADCs are required in the synaptic weight control circuits only in the training mode. The effectiveness of the approach is demonstrated on the digit classification problem, solved with an accuracy of 97.87 % on test data using the developed model of a hardware implementation of an asynchronous analog neural network with training.
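To illustrate the computing principle summarized above, the following minimal numerical sketch models a crossbar whose synaptic conductances are digitally quantized and which performs the matrix-vector multiplication as Kirchhoff current summation on the output lines. All parameters (16 conductance levels, the maximum conductance value, the layer size) and the differential G+/G− pair used to represent signed weights are illustrative assumptions, not details taken from the paper.

import numpy as np

# Assumed, illustrative parameters (not from the paper):
N_LEVELS = 16            # digitally controlled conductance levels per block
G_MAX = 1e-6             # maximum block conductance, siemens
IN_DIM, OUT_DIM = 784, 10

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, (OUT_DIM, IN_DIM))

def quantize_to_conductance(w, n_levels=N_LEVELS, g_max=G_MAX):
    """Map signed weights onto two arrays of non-negative, quantized
    conductances (assumed differential pair: W ~ G+ - G-)."""
    scale = np.abs(w).max()
    levels = np.round(np.clip(w / scale, -1.0, 1.0) * (n_levels - 1))
    g_pos = np.where(levels > 0, levels, 0.0) * g_max / (n_levels - 1)
    g_neg = np.where(levels < 0, -levels, 0.0) * g_max / (n_levels - 1)
    return g_pos, g_neg, scale

def crossbar_mvm(v_in, g_pos, g_neg):
    """Kirchhoff current summation: each output line collects
    I_i = sum_j (G+_ij - G-_ij) * v_j, i.e. an analog MVM."""
    return g_pos @ v_in - g_neg @ v_in

g_pos, g_neg, scale = quantize_to_conductance(weights)
v = rng.uniform(0.0, 1.0, IN_DIM)        # input voltages, no DAC needed
i_out = crossbar_mvm(v, g_pos, g_neg)    # output currents
approx = i_out * scale / G_MAX           # back to weight units
print("max |quantized - ideal| pre-activation error:",
      np.max(np.abs(approx - weights @ v)))

The printed value is the pre-activation error introduced by conductance quantization alone; in the architecture described in the abstract, this discretization is set by the digital control of the conductance level of the synaptic blocks.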

About the authors

M. Petrov

Saint Petersburg Electrotechnical University ETU “LETI”

Email: nvandr@gmail.com
St. Petersburg, Russia

E. Ryndin

Saint Petersburg Electrotechnical University ETU “LETI”

Email: nvandr@gmail.com
St. Petersburg, Russia

N. Andreeva

Saint Petersburg Electrotechnical University ETU “LETI”

Corresponding author.
Email: nvandr@gmail.com
St. Petersburg, Russia

References

  1. He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition // IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016. P. 770–778.
  2. Zagoruyko S., Komodakis N. Wide residual networks // arXiv:1605.07146. 2016. P. 1–15.
  3. Voulodimos A., Doulamis N., Doulamis A., Protopapadakis E. Deep learning for computer vision: A brief review // Computational Intelligence and Neuroscience. 2018. V. 2018. N. 1. 7068349.
  4. Goyal P., Pandey S., Jain K. Deep learning for natural language processing. Berkeley, CA: Apress, 2018. 277 p.
  5. Petrov M.O., Ryndin E.A., Andreeva N.V. Compiler for Hardware Design of Convolutional Neural Networks with Supervised Learning Based on Neuromorphic Electronic Blocks // 2024 Sixth International Conference Neurotechnologies and Neurointerfaces (CNN). 2024. P. 1–4.
  6. Petrov M.O., Ryndin E.A., Andreeva N.V. Automated design of deep neural networks with in-situ training architecture based on analog functional blocks // The European Physical Journal Special Topics. 2024. P. 1–14.
  7. Gupta I., Serb A., Khiat A., Zeitler R., Vassanelli S., Prodromakis T. Sub 100 nW volatile nano-metal-oxide memristor as synaptic-like encoder of neuronal spikes // IEEE Transactions on Biomedical Circuits and Systems. 2018. V. 12. N. 2. P. 351–359.
  8. Valueva M.V., Valuev G.V., Babenko M.G., Chernykh A., Cortes-Mendoza J.M. A method of hardware implementation of a convolutional neural network based on the residue number system // Proceedings of the Institute for System Programming of the RAS. 2022. V. 34. N. 3. P. 61–74. (in Russian)
  9. LeCun Y., Bengio Y., Hinton G. Deep learning // Nature. 2015. V. 521. N. 7553. P. 436–444.
  10. Goodfellow I., Bengio Y., Courville A. Regularization for deep learning // Deep learning. 2016. P. 216–261.
  11. Schmidhuber J. Deep learning in neural networks: An overview // Neural Networks. 2015. V. 61. P. 85–117.
  12. TensorFlow. MNIST dataset in Keras. 2024. URL: https://www.tensorflow.org/api_docs/python/tf/keras/datasets/mnist

© Russian Academy of Sciences, 2025