Hands-On Large Language Models Language Understanding and Generation (6th Early Release)
Author: Jay Alammar, Maarten Grootendorst
Year: 2024-03-21
Format: EPUB
File size: 11.0 MB
Language: ENG
AI has acquired startling new language capabilities in just the past few years. Driven by rapid advances in deep learning, language AI systems can write and understand text better than ever before. This trend has enabled the rise of new features, products, and entire industries. With this book, Python developers will learn the practical tools and concepts they need to use these capabilities today.

One of the most common tasks in natural language processing, and in machine learning in general, is classification: the goal is to train a model to assign a label or class to some input text. Text categorization is used across the world for a wide range of applications, from sentiment analysis and intent detection to entity extraction and language detection.

We can use an LLM to represent the text that is fed into our classifier. The choice of this model, however, may not be as straightforward as you might think. Models differ in the languages they handle, their architecture, size, inference speed, accuracy on certain tasks, and more. BERT is a great underlying architecture for representing text and can be fine-tuned for a number of tasks, including classification. Although there are generative models we could use, like the well-known Generative Pre-trained Transformers (GPT) behind ChatGPT, BERT models often excel when fine-tuned for specific tasks.
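The "represent, then classify" pattern described above can be sketched in a few lines. This is a minimal illustration, not the book's code: a toy bag-of-words vector stands in for the LLM representation (in practice you would swap in a BERT-style encoder, e.g. via the Hugging Face `transformers` library), and a simple nearest-centroid rule stands in for the trained classifier, so the example runs with no downloads.

```python
# Sketch of "represent the text, then classify it".
# embed() is a toy stand-in for an LLM encoder such as BERT; the
# vocabulary and training texts below are made up for illustration.
from collections import Counter

VOCAB = ["great", "love", "terrible", "awful", "good", "bad"]

def embed(text: str) -> list[float]:
    """Map text to a fixed-size vector (stand-in for an LLM embedding)."""
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in VOCAB]

def train_centroids(examples: list[tuple[str, str]]) -> dict[str, list[float]]:
    """'Train' by averaging the vectors of each class's example texts."""
    sums: dict[str, list[float]] = {}
    counts: dict[str, int] = {}
    for text, label in examples:
        vec = embed(text)
        acc = sums.setdefault(label, [0.0] * len(VOCAB))
        sums[label] = [a + v for a, v in zip(acc, vec)]
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [x / counts[lbl] for x in s] for lbl, s in sums.items()}

def classify(text: str, centroids: dict[str, list[float]]) -> str:
    """Assign the label whose centroid is closest to the text's vector."""
    vec = embed(text)
    def sq_dist(c: list[float]) -> float:
        return sum((a - b) ** 2 for a, b in zip(vec, c))
    return min(centroids, key=lambda lbl: sq_dist(centroids[lbl]))

train = [
    ("great movie love it", "positive"),
    ("good and great", "positive"),
    ("terrible awful film", "negative"),
    ("bad and awful", "negative"),
]
centroids = train_centroids(train)
print(classify("what a great film love it", centroids))  # prints "positive"
```

A real pipeline keeps the same two-step shape: the LLM produces the representation, and a lightweight classifier (or a fine-tuned classification head on top of BERT) maps that representation to a label.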