
Number Systems for Deep Neural Network Architectures

  • Posted by: literator
  • Date: 28-09-2023, 04:25
Title: Number Systems for Deep Neural Network Architectures
Author: Ghada Alsuhli, Vasilis Sakellariou, Hani Saleh, Mahmoud Al-Qutayri
Publisher: Springer
Year: 2024
Pages: 100
Language: English
Format: pdf (true), epub
Size: 11.2 MB

This book provides readers with a comprehensive introduction to alternative number systems for more efficient representation of Deep Neural Network (DNN) data. Various number systems (conventional and unconventional) exploited for DNNs are discussed, including Floating Point (FP), Fixed Point (FXP), Logarithmic Number System (LNS), Residue Number System (RNS), Block Floating Point (BFP), Dynamic Fixed Point (DFXP), and Posit Number System (PNS). The authors explore the impact of these number systems on the performance and hardware design of DNNs, highlighting the challenges associated with each number system and the various solutions proposed to address them.
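To make the representation trade-off concrete, here is a minimal sketch (not taken from the book) of fixed-point (FXP) quantization in Python/NumPy; the function name quantize_fxp and the 8-bit format with 4 fractional bits are illustrative assumptions:

```python
import numpy as np

def quantize_fxp(x, total_bits=8, frac_bits=4):
    """Quantize a float tensor to signed fixed-point with the given
    bit split, then dequantize back to float for comparison."""
    scale = 2.0 ** frac_bits
    qmin = -(2 ** (total_bits - 1))        # most negative code
    qmax = 2 ** (total_bits - 1) - 1       # most positive code
    # Round to the nearest representable step and saturate to the range.
    q = np.clip(np.round(x * scale), qmin, qmax)
    return q / scale

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=1000)  # stand-in for DNN weights
w_q = quantize_fxp(weights, total_bits=8, frac_bits=4)
print("max abs error:", np.max(np.abs(weights - w_q)))
```

Lowering frac_bits coarsens the representable grid (step size 2^-frac_bits), while lowering total_bits shrinks the saturation range; this accuracy-versus-hardware-cost trade-off is the kind of question the book's number-system comparisons examine.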

During the past decade, DNNs have shown outstanding performance in a myriad of Artificial Intelligence (AI) applications. Since their early successes in speech and image recognition, DNNs have drawn great attention from academia and industry, which has led to a wide range of products that utilize them. Although DNNs are inspired by the deep hierarchical structure of the human brain, they have exceeded human accuracy in a number of domains. Nowadays, the contribution of DNNs is notable in many fields, including self-driving cars, speech recognition, computer vision, natural language processing (NLP), and medical applications. This DNN revolution has been fueled by the massive accumulation of data and the rapid growth in computing power.

Due to the substantial computational complexity and memory demands of DNNs, accelerating their processing has typically relied on either high-performance general-purpose compute engines, such as Central Processing Units (CPUs) and Graphics Processing Units (GPUs), or customized hardware, such as Field Programmable Gate Arrays (FPGAs) and Application-Specific Integrated Circuits (ASICs). While general-purpose compute engines continue to dominate DNN processing in academic settings, industry places greater emphasis on deploying DNNs in resource-constrained edge devices, such as smartphones and wearables, which are commonly used for practical applications. Whether DNNs run on GPUs or dedicated accelerators, speeding up DNN processing and/or increasing its hardware efficiency without sacrificing accuracy remains a demanding task.

Convolutional Neural Networks (CNNs) are a fundamental component of Deep Learning and have revolutionized many areas of machine learning, including computer vision and natural language processing. In a CNN, the network itself learns, through iterative adaptation of its filter coefficients, to extract the input features with the highest information content, such as edges, corners, and textures, which can then be used to classify, segment, or recognize objects. An important property of convolution is that it preserves spatial information, which also enables the use of CNNs for tasks such as object detection and localization, as the sketch below illustrates. Because CNNs capture spatial dependencies within the input, they are most commonly used in image and video processing tasks.
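As a concrete illustration (a minimal sketch, not taken from the book), the NumPy snippet below slides a hand-coded vertical-edge kernel over a toy image; in a real CNN the kernel values would be learned rather than fixed:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: take a dot product of the kernel with
    each image patch, so the output keeps the spatial layout of the input.
    (No kernel flip, i.e. cross-correlation, as in most DL frameworks.)"""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image with a dark-to-bright step between columns 2 and 3.
image = np.zeros((6, 6))
image[:, 3:] = 1.0
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)  # vertical-edge filter
print(conv2d(image, sobel_x))
```

The strong responses in the output line up exactly with the intensity step in the input, showing how the sliding window preserves where a feature occurs rather than just whether it occurs.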

Contents:


Download Number Systems for Deep Neural Network Architectures