Building Personality-Driven Language Models: How Neurotic Is ChatGPT
- Added by: literator
- Date: 23-03-2025, 06:58
- Comments: 0

Author: Karol Przystalski, Jan K. Argasiński, Natalia Lipp, Dawid Pacholczyk
Publisher: Springer
Year: 2025
Pages: 183
Language: English
Format: pdf, epub
Size: 10.1 MB
This book provides an innovative exploration into the realm of artificial intelligence (AI) by developing personalities for large language models (LLMs) using psychological principles. Aimed at making AI interactions feel more human-like, the book guides you through the process of applying psychological assessments to AIs, enabling them to exhibit traits such as extraversion, openness, and emotional stability. Perfect for developers, researchers, and entrepreneurs, this work merges psychology, philosophy, business, and cutting-edge computing to enhance how AIs understand and engage with humans across various industries like gaming and healthcare. The book not only unpacks the theoretical aspects of these advancements but also equips you with practical coding exercises and Python code examples, helping you create AI systems that are both innovative and relatable. Whether you're looking to deepen your understanding of AI personalities or integrate them into commercial applications, this book offers the tools and insights needed to pioneer this exciting frontier.
Fine-tuning is a cornerstone technique for adapting pre-trained LLMs to specialized tasks or datasets. Through fine-tuning, the model's parameters are adjusted to better fit the target domain, improving its performance there. The process retrains the model on task-specific data while leveraging the knowledge transferred from pre-training, striking a balance between generalization and task specificity.

Building an LLM proceeds in a few steps. Like most machine learning models, an LLM starts with data, which must be cleaned to make it more valuable for later use. Cleanup includes noise reduction, text preprocessing, and text deduplication. Next, the text is tokenized: a token is a meaningful unit of text, such as a word or a punctuation mark. The tokens are then used to build representations of words as vectors of numbers; algorithms struggle to capture the meaning of words from raw text alone, which is why we convert text into vectors. This process is called word vectorization, or embedding, and there are several methods for it. The resulting vectors are then combined with positional encoding, which captures each token's position in a sentence or sequence of words. Finally, there are three major LLM architectures: encoder-only, decoder-only, and encoder-decoder.
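The cleanup and tokenization steps described above can be sketched in a few lines of Python. This is a minimal illustration, not the book's code: the regex rules and the helper names (`clean_text`, `tokenize`) are assumptions chosen for the example.

```python
import re

def clean_text(text):
    """Minimal cleanup: strip markup (noise reduction), lowercase,
    and collapse whitespace (text preprocessing)."""
    text = re.sub(r"<[^>]+>", " ", text)      # drop HTML-like noise
    text = re.sub(r"\s+", " ", text.lower())  # normalise case and spacing
    return text.strip()

def tokenize(text):
    """Split cleaned text into word and punctuation tokens."""
    return re.findall(r"\w+|[^\w\s]", text)

corpus = ["<p>Hello, world!</p>", "Hello,  world!", "A new sentence."]
# Deduplication: identical cleaned strings collapse to one entry.
cleaned = list(dict.fromkeys(clean_text(t) for t in corpus))
tokens = [tokenize(t) for t in cleaned]
```

Note how the second document becomes identical to the first after cleanup, so deduplication removes it before tokenization.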
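Word vectorization can be illustrated with the simplest possible scheme, a one-hot encoding over the vocabulary; real LLMs instead learn dense embeddings, but the principle of mapping tokens to numeric vectors is the same. The function names here are hypothetical, chosen for the sketch.

```python
def build_vocab(token_lists):
    """Assign each distinct token a stable integer index."""
    vocab = {}
    for tokens in token_lists:
        for tok in tokens:
            vocab.setdefault(tok, len(vocab))
    return vocab

def one_hot(token, vocab):
    """Represent a token as a vector with a single 1 at its vocab index."""
    vec = [0.0] * len(vocab)
    vec[vocab[token]] = 1.0
    return vec

vocab = build_vocab([["hello", "world"], ["world", "!"]])
vector = one_hot("world", vocab)
```

Dense embeddings (e.g. word2vec or a learned embedding layer) replace these sparse vectors with short, trainable ones, but both give algorithms the numeric input they need.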
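Positional encoding can likewise be sketched with the sinusoidal scheme from the original Transformer paper, where even dimensions use sine and odd dimensions use cosine at geometrically spaced frequencies. This is one common choice, not the only one; learned positional embeddings are an alternative.

```python
import math

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding: returns a seq_len x d_model
    matrix where each row uniquely encodes one position, letting the
    model distinguish token order."""
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)        # even dims: sine
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)  # odd dims: cosine
    return pe
```

These rows are added to (or concatenated with) the token embedding vectors before the encoder or decoder layers see them.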
Download Building Personality-Driven Language Models: How Neurotic Is ChatGPT
