
The Cambridge Handbook of Computational Cognitive Sciences (2nd Edition)

Title: The Cambridge Handbook of Computational Cognitive Sciences (2nd Edition)
Author: Ron Sun
Publisher: Cambridge University Press
Year: 2023
Pages: 1325
Language: English
Format: pdf (true)
Size: 17.4 MB

The Cambridge Handbook of Computational Cognitive Sciences is a comprehensive reference for this rapidly developing and highly interdisciplinary field. Written with both newcomers and experts in mind, it provides an accessible introduction to paradigms, methodologies, approaches, and models, with ample detail and illustrative examples. It should appeal to researchers and students working within the computational cognitive sciences, as well as those working in adjacent fields including philosophy, psychology, linguistics, anthropology, education, neuroscience, Artificial Intelligence, Computer Science, and more.

Models (or theories) in cognitive sciences can be divided roughly into computational, mathematical, and verbal-conceptual ones. Although each of these types of models/theories has its role to play, this handbook is mainly concerned with computational models/theories. The reason for this emphasis is that, at least at present, computational modeling appears to be the most promising approach in many ways and offers the flexibility and the expressive power that no other approaches can match. (Mathematical models may be viewed as a kind of subset of computational models, as they can usually lead to computational implementation.) Furthermore, a computational model can often be viewed as a theory by itself and may be important intellectually in this way.

Computational cognitive sciences explore the essence of cognition (broadly defined here to include all kinds of mental processes, such as motivation, emotion, perception, and so on, far beyond pure cognition) and various cognitive functionalities, by developing detailed, mechanistic, process-based understanding through computational models (in a broad sense) of representations, mechanisms, and processes. These models embody descriptions of cognition in computer algorithms and programs, based on or inspired by Artificial Intelligence and Computer Science. That is, they impute computational processes onto cognitive functions and thereby produce runnable programs, and detailed simulations and other operations can then be conducted with these models. Computational cognitive sciences may be considered a field in itself, although various parts of it are also embedded within separate disciplines.
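
As a toy illustration (not taken from the handbook), the sketch below shows what such a runnable model can look like: a simple reinforcement-learning agent choosing between two options by trial and error, a style of model commonly used in computational cognitive modeling. The learning rate, exploration rate, and reward probabilities are illustrative assumptions.

```python
import random

def simulate(trials=200, alpha=0.1, epsilon=0.1, reward_probs=(0.8, 0.2)):
    """Simulate one agent learning which of two options is more rewarding."""
    values = [0.0, 0.0]          # learned value estimates for the two options
    choices = []
    for _ in range(trials):
        # epsilon-greedy choice: mostly pick the currently higher-valued option
        if random.random() < epsilon:
            choice = random.randrange(2)
        else:
            choice = max(range(2), key=lambda i: values[i])
        reward = 1.0 if random.random() < reward_probs[choice] else 0.0
        # incremental (delta-rule) update of the chosen option's value
        values[choice] += alpha * (reward - values[choice])
        choices.append(choice)
    return choices

if __name__ == "__main__":
    choices = simulate()
    print("Proportion of better-option choices:",
          sum(c == 0 for c in choices) / len(choices))
```

Running the simulation many times, or fitting its parameters to human choice data, is the kind of "detailed simulation and other operations" that a runnable computational model makes possible.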

Recent, relevant trends in research on Deep Learning include graph convolutional networks, deep networks with attention mechanisms, generative adversarial networks, and deep reinforcement learning, among many others. All these advances have been made possible by the impressive, relentless development of the hardware, software libraries, and datasets that have become everyday tools for researchers in the field. Advances in hardware technology have provided Deep Learning algorithms with faster, highly parallel processing, thanks to many-core processors, high-bandwidth memory, and accelerators suited to learning and induction tasks. The most popular form of accelerator is based on the graphics processing unit (GPU), originally devised for fast image manipulation but equipped with processing capabilities that match the computations required in deep neural networks. Due to the impressive growth in the use of GPUs for deep learning, manufacturers have begun to incorporate neural-network-specific instruction sets, or dedicated tensor cores, in their GPUs. Software layers realizing deep neural network functionality on GPUs have been developed as well, and they have become extremely popular among practitioners; major instances are the TensorFlow and PyTorch libraries, among many others. Recently, other forms of accelerators have been proposed, namely field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). Although both FPGAs and ASICs are promising for realizing neural networks, due to their speed and extreme flexibility, they have yet to overtake GPUs, largely because their software ecosystems cannot compete with those available for GPUs.
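
As a minimal sketch of how such software layers hide the accelerator details (assuming a standard PyTorch installation; the model size, synthetic data, and hyperparameters are illustrative only), the snippet below trains a tiny network and runs unchanged on a CPU or a CUDA GPU:

```python
import torch
import torch.nn as nn

# Select the accelerator in one place; the rest of the code is device-agnostic.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic batch (illustrative only): 128 samples, 32 features, 2 classes.
x = torch.randn(128, 32, device=device)
y = torch.randint(0, 2, (128,), device=device)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()      # gradients are computed on whichever device holds the tensors
    optimizer.step()

print("final loss:", loss.item())
```

The same pattern applies whether the backend is a GPU, a tensor-core-equipped GPU, or a CPU: the library maps the tensor operations onto whatever hardware is available.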

Download The Cambridge Handbook of Computational Cognitive Sciences (2nd Edition)
