Parallel and High-Performance Computing in Artificial Intelligence
- Posted by: literator
- Date: Today, 17:25
- Comments: 0

Author: Mukesh Raghuwanshi, Pradnya Borkar, Rutvij H. Jhaveri, Roshani Raut
Publisher: CRC Press
Year: 2025
Pages: 340
Language: English
Format: pdf, epub (true)
Size: 13.8 MB
Parallel and High-Performance Computing in Artificial Intelligence explores high-performance architectures for data-intensive applications as well as efficient analytical strategies to speed up data processing and applications in automation, Machine Learning, Deep Learning, healthcare, bioinformatics, natural language processing (NLP), and vision intelligence.
The book’s two major themes are high-performance computing (HPC) architecture and techniques and their application in Artificial Intelligence. Highlights include:
HPC use cases, application programming interfaces (APIs), and applications
Parallelization techniques
HPC for Machine Learning
Implementation of parallel computing with AI in big data analytics
HPC with AI in healthcare systems
AI in industrial automation
Coverage of HPC architecture and techniques includes multicore architectures, parallel-computing techniques, and APIs, as well as dependence analysis for parallel computing. The book also covers hardware acceleration techniques, including those for GPU acceleration to power Big Data systems.
The relationship between Machine Learning algorithms and high-performance computing is a powerful and significant one, although it may not be immediately apparent. If we trace the evolution of Machine Learning algorithms, we can identify key components that existed long before computers. The advent of computing systems brought renewed hope for machine learning algorithms, but that hope quickly faded: the computational power these algorithms required was far beyond what the hardware of the day could provide.
In general, most Machine Learning algorithms require a training phase before they can make predictions or generalize to unseen data. Training can be time consuming, with its duration varying by algorithm. It is well established that as the size of the training dataset or the number of iterations increases, the training process becomes slower. It is equally well established that more training data and more iterations tend to produce better outcomes. Consequently, shortening the training process usually comes at the expense of the model's performance.
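As a rough illustration (not taken from the book), the sketch below times the same classifier on synthetic datasets of increasing size with a fixed iteration budget; the scikit-learn model and the synthetic task are assumptions chosen purely for demonstration.

```python
# Minimal sketch: training time grows with dataset size at a fixed iteration budget.
# The synthetic task and SGDClassifier are illustrative assumptions, not from the book.
import time
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

for n_samples in (10_000, 50_000, 250_000):
    X, y = make_classification(n_samples=n_samples, n_features=50, random_state=0)
    clf = SGDClassifier(max_iter=20, tol=None, random_state=0)  # run exactly 20 epochs
    start = time.perf_counter()
    clf.fit(X, y)
    print(f"{n_samples:>9} samples: {time.perf_counter() - start:.2f} s")
```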
Another crucial factor is that most Machine Learning algorithms have hyper-parameters that must be tuned empirically to obtain the best results for a given problem. Determining the ideal hyper-parameters frequently requires repeating the training phase many times. The process can be time consuming and may be cut short before the full training set and hyper-parameter space have been explored thoroughly.
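A minimal sketch of that repeated cost, again using an assumed scikit-learn setup rather than anything from the book: every candidate hyper-parameter setting retrains the model from scratch, and cross-validation multiplies the number of trainings further.

```python
# Minimal sketch: exhaustive hyper-parameter search repeats the training phase
# once per candidate and fold. The grid and model are illustrative assumptions.
from itertools import product
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=20_000, n_features=50, random_state=0)
grid = {"alpha": [1e-5, 1e-4, 1e-3], "penalty": ["l2", "l1"]}

best_score, best_params = -float("inf"), None
for alpha, penalty in product(grid["alpha"], grid["penalty"]):
    clf = SGDClassifier(alpha=alpha, penalty=penalty, max_iter=50, random_state=0)
    score = cross_val_score(clf, X, y, cv=3).mean()  # three more trainings per candidate
    if score > best_score:
        best_score, best_params = score, (alpha, penalty)

print(best_params, best_score)
```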
Machine Learning algorithms can reach new heights with the help of high-performance computing, which can shoulder the computationally intensive training process and more. There is a caveat, however. Parallel execution is a critical component of modern high-performance computers, so high-performance computing can only be fully exploited by Machine Learning if its algorithms can be broken down into parallelizable subroutines. In some cases this step is as simple as searching for the optimal hyper-parameters in parallel; in others it may require redesigning the algorithm's architecture.
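Because each candidate in a hyper-parameter search is independent of the others, it is an embarrassingly parallel workload. The sketch below (same assumed scikit-learn setup as above, not the book's code) hands the fan-out to GridSearchCV with n_jobs=-1, so the candidate-times-fold trainings are spread across all available CPU cores.

```python
# Minimal sketch: the search above is embarrassingly parallel, so the independent
# trainings can be distributed across cores with n_jobs=-1. Illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=20_000, n_features=50, random_state=0)
grid = {"alpha": [1e-5, 1e-4, 1e-3], "penalty": ["l2", "l1"]}

search = GridSearchCV(
    SGDClassifier(max_iter=50, random_state=0),
    param_grid=grid,
    cv=3,
    n_jobs=-1,  # run the candidate x fold trainings on all CPU cores
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```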
As AI is increasingly being integrated into HPC applications, the book explores emerging and practical applications in such domains as healthcare, agriculture, bioinformatics, and industrial automation. It illustrates technologies and methodologies to boost the velocity and scale of AI analysis for fast discovery. Data scientists and researchers can benefit from the book’s discussion on AI-based HPC applications that can process higher volumes of data, provide more realistic simulations, and guide more accurate predictions. The book also focuses on Deep Learning and edge computing methodologies with HPC and presents recent research on methodologies and applications of HPC in AI.
Download Parallel and High-Performance Computing in Artificial Intelligence
