Title: Search Methods in Artificial Intelligence
Author: Deepak Khemani
Publisher: Cambridge University Press
Year: 2024
Pages: 488
Language: English
Format: PDF
Size: 11.3 MB
This book is designed to provide in-depth knowledge of how search plays a fundamental role in problem solving. Meant for undergraduate and graduate students pursuing courses in Computer Science and Artificial Intelligence, it covers a wide spectrum of search methods. Readers can begin with simple approaches and gradually progress to more complex algorithms applied to a variety of problems. The book demonstrates that search is all-pervasive in Artificial Intelligence and equips the reader with the relevant skills.
This book is meant for the serious would-be practitioner of building intelligent machines: machines that are aware of the world around them, that have goals to achieve, and that can imagine the future and make appropriate choices to achieve those goals. It is an introduction to a fundamental building block of Artificial Intelligence (AI). As the book shows, search is central to intelligence.
A neuron is a simple device that computes a simple function of the inputs it receives. Collections of interconnected neurons can, however, perform complex computations. Insights into animal brains have prompted many researchers to pursue the path of creating artificial neural networks (ANNs). An ANN is a computational model that can be trained to perform certain tasks by repeatedly showing it a stimulus and the expected response.
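To make the idea of training on stimulus-response pairs concrete, here is a minimal sketch (not from the book) of a single artificial neuron fitted by gradient descent. The function names and the toy AND dataset are illustrative assumptions, not anything the author specifies.

```python
import math
import random

def neuron(weights, bias, inputs):
    """Compute the neuron's output: a sigmoid of the weighted sum of its inputs."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(data, epochs=5000, lr=0.5):
    """Repeatedly show each stimulus with its expected response and nudge the
    weights to reduce the squared error (stochastic gradient descent)."""
    weights = [random.uniform(-1, 1) for _ in range(len(data[0][0]))]
    bias = random.uniform(-1, 1)
    for _ in range(epochs):
        for inputs, expected in data:
            out = neuron(weights, bias, inputs)
            # Gradient of the squared error passed back through the sigmoid.
            delta = (out - expected) * out * (1.0 - out)
            weights = [w - lr * delta * x for w, x in zip(weights, inputs)]
            bias -= lr * delta
    return weights, bias

if __name__ == "__main__":
    # Toy task (an assumption for illustration): learn logical AND from four pairs.
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    w, b = train_neuron(data)
    for inputs, expected in data:
        print(inputs, round(neuron(w, b, inputs), 3), "expected", expected)
```

A single neuron suffices here only because AND is linearly separable; the point of the sketch is the training loop of showing stimuli and adjusting weights toward the expected response.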
The development of newer architectures and newer algorithms was instrumental in the surge of interest in deep neural networks. Equally responsible, perhaps, was the explosion in the amount of data available on the internet, for example the millions of captioned images uploaded by users, along with rapid advances in available computing power. Deep networks got further impetus from open-source software such as TensorFlow from Google, which makes implementing machine learning models easier for researchers. More recently, generative neural networks have been successfully deployed for language generation and even for creating paintings, for example by OpenAI. Generative models embody a form of unsupervised learning from large amounts of data and are trained to generate data similar to the data they were trained on. Having been fed millions of images and texts with their associated captions, they have now learnt to generate similar pictures or stories from text prompts. Programs like ChatGPT, Imagen, and DALL-E have created quite a flurry among users on the internet. Deep neural networks are very good at the task of pattern recognition.
In the solution space search algorithms we have seen so far, new candidates are generated from old ones by perturbing a parent candidate. The neighbourhood function makes a small change in a candidate to produce a variant in its neighbourhood. This is possible when candidates and solutions are made up of components that can be replaced by other components. The candidate can be thought of as a chromosome, and its components as genes. Genetic Algorithms take a cue from nature and produce a new candidate from two parents. This is done by a process called crossover, which mixes the genes of the two parents. Genetic algorithms, also known as evolutionary algorithms, work with a population of candidates and are therefore called population-based methods. There are some differences between the way evolution happens in nature and the manner in which we implement genetic algorithms. While competition between individuals is the common theme, the way it comes about is different.
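The following is a minimal sketch of the ideas just described: chromosomes as bit-strings, a population of candidates, selection, crossover, and a small mutation playing the role of the neighbourhood perturbation. The OneMax fitness function and helper names (`tournament`, `crossover`, `mutate`) are assumptions chosen for illustration, not the book's own example.

```python
import random

CHROMOSOME_LENGTH = 20
POPULATION_SIZE = 30
GENERATIONS = 50
MUTATION_RATE = 0.02

def fitness(chromosome):
    """OneMax: the more genes set to 1, the fitter the candidate."""
    return sum(chromosome)

def tournament(population, k=3):
    """Pick the fittest of k randomly chosen candidates (selection pressure)."""
    return max(random.sample(population, k), key=fitness)

def crossover(parent_a, parent_b):
    """Single-point crossover: mix the genes of two parents to form a child."""
    point = random.randint(1, CHROMOSOME_LENGTH - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(chromosome):
    """Flip each gene with a small probability, akin to a neighbourhood move."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in chromosome]

# Start from a random population of candidate chromosomes.
population = [[random.randint(0, 1) for _ in range(CHROMOSOME_LENGTH)]
              for _ in range(POPULATION_SIZE)]

# Each generation, children are bred from two selected parents and mutated.
for generation in range(GENERATIONS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POPULATION_SIZE)]

best = max(population, key=fitness)
print("best candidate:", best, "fitness:", fitness(best))
```

Note how competition enters only through selection: fitter candidates are more likely to become parents, which is the algorithmic stand-in for the competition between individuals that drives evolution in nature.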