
Moral Codes: Designing Alternatives to AI

  • Posted by: literator
  • Date: 10-08-2024, 15:46
  • Comments: 0
Title: Moral Codes: Designing Alternatives to AI
Author: Alan F. Blackwell
Publisher: The MIT Press
Year: 2024
Pages: 239
Language: English
Format: pdf (true)
Size: 10.1 MB

Why the world needs less AI and better programming languages.

Decades ago, we believed that robots and computers would take over all the boring jobs and drudgery, leaving humans to a life of leisure. This hasn't happened. Instead, humans are still doing boring jobs, and even worse, AI researchers have built technology that is creative, self-aware, and emotional, doing the tasks humans were supposed to enjoy. How did we get here? In Moral Codes, Alan Blackwell argues that there is a fundamental flaw in the research agenda of AI. What humanity needs, he contends, is better ways to tell computers what we want them to do, through new and better programming languages: More Open Representations, Access to Learning, and Control Over Digital Expression, in other words, MORAL CODE.

Blackwell draws on his deep experience as a programming language designer, a practice he has pursued since 1983, to unpack fundamental principles of interaction design and explain their technical relationship to ideas of creativity and fairness. Taking aim at software that constrains our conversations with strict word counts or infantilizes human interaction with likes and emojis, Blackwell shows how to design software that is better: not more efficient or more profitable, but better for society and better for all people. Covering recent research and the latest smart tools, Blackwell offers rich design principles for a better kind of software, and a better kind of world.

Software developers and computer scientists like me are fascinated by what computers can do. Our imagination and enthusiasm have resulted in IT systems, personal devices, and social media that have changed the world in many ways. Unfortunately, imagination and enthusiasm, even if well intentioned, can have unintended consequences. As software has come to rule more and more of the world, a disturbing number of our social problems seem to be caused by software, and by the things we software engineers have made.

I want to make clear that there are two fundamentally different kinds of AI when we look at these problems and systems from a human perspective. The first kind used to be described as "cybernetics" or "control systems" when I did my own undergraduate major in that subject. These are automated systems that use sensors to observe or measure the physical world, and then control some kind of motor in response to what they observe. A familiar example is the household thermostat, which measures the temperature in your house and automatically turns on a heat pump or boiler if it is too cold. Another is automobile cruise control, which measures the speed of the car and automatically accelerates if it is too slow. The second kind of AI is concerned not with achieving practical automated tasks in the physical world, but with imitating human behavior for its own sake; it deals with human subjective experience rather than the objective world. This is the kind of AI that Turing proposed as a philosophical experiment into the nature of what it means to be human, inspired by so many literary fantasies.
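The thermostat illustrates the whole pattern of this first kind of AI: sense, compare, act. Here is a minimal sketch in Python (my illustration, not the book's; read_temperature and set_heater are hypothetical stand-ins for whatever sensor and actuator interface a real system would provide):

```python
import time

SETPOINT_C = 20.0   # desired room temperature, in degrees Celsius
HYSTERESIS = 0.5    # dead band, to avoid rapid on/off switching

def thermostat_loop(read_temperature, set_heater):
    """A feedback controller: observe the world with a sensor,
    then drive an actuator in response to what is observed."""
    heating = False
    while True:
        temp = read_temperature()
        if temp < SETPOINT_C - HYSTERESIS:
            heating = True    # too cold: turn the boiler on
        elif temp > SETPOINT_C + HYSTERESIS:
            heating = False   # warm enough: turn it off
        set_heater(heating)
        time.sleep(5)         # poll the sensor every few seconds
```

Cruise control follows exactly the same loop, with a speed sensor in place of the thermometer and the throttle in place of the boiler.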

A common way to make the argument that these two kinds of AI are really the same (or that they could become the same in the future even if they are different today) is to invoke the speculative brand of "artificial general intelligence" (AGI). This idea interprets the Turing Test as an engineering prediction, arguing that the machine "learning" algorithms of today will, as they increase in power, naturally evolve to think subjectively like humans, complete with emotion, social skills, consciousness, and so on. The claims that increasing computer power will eventually result in fundamental change are hard to justify on technical grounds; some say this is like arguing that if we make airplanes fly fast enough, eventually one will lay an egg. I am writing at a time of exciting technical advance, when large language models (or LLMs) such as ChatGPT are gaining popularity and increasingly able to output predictive text that makes them seem surprisingly like an intelligent human.
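To make "predictive text" concrete, here is a toy sketch (mine, not the book's) of the idea behind LLM-style next-word prediction, shrunk down to a bigram model over a made-up corpus: predict whichever word most often followed the current one in the training text.

```python
from collections import Counter, defaultdict

# A made-up training corpus; a real LLM is trained on vastly more text.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # prints "cat" ("cat" follows "the" twice)
```

An LLM replaces these raw counts with a deep neural network and predicts over whole contexts rather than single words, but the task, outputting likely continuations of text, is the same.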

In an introductory class in programming, students learn to code. They will be taught the syntax and keywords of some programming language notation, perhaps Python or Java, and will learn how to translate an algorithm into the conventional idioms of that particular programming language. Introductions to Machine Learning are not yet quite as familiar or popular as learn-to-code initiatives, but they are equally accessible to children—I have a friend who teaches Machine Learning methods to eight-year-olds in an after-school club. The current popularity of Machine Learning, now the most widely taught approach to AI, dates back only to about 2010, when public interest was captured by a sudden increase in performance of "deep neural network" algorithms, especially after those algorithms were trained using very large quantities of free content that had been scraped from the internet.
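As an illustration of the kind of exercise such an introductory class assigns (this example is mine, not the book's), here is an everyday algorithm, "scan a list and remember the biggest value seen so far", translated into the idioms of Python:

```python
def largest(numbers):
    biggest = numbers[0]      # start with the first value
    for n in numbers[1:]:     # scan the rest of the list
        if n > biggest:
            biggest = n       # remember the biggest seen so far
    return biggest

print(largest([3, 17, 8, 42, 5]))   # prints 42
```

The algorithm itself is language-neutral; the lesson is in expressing it with one language's particular notation, here Python's indentation, for loops, and slices.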

Download Moral Codes: Designing Alternatives to AI
