How GPT Works (MEAP V01) by Drew Farris, Stella Biderman, and Edward Raff
English | 2024 | ISBN: 9781633437081 | 70 pages | MOBI | 1.63 Mb
Learn how large language models like GPT and Gemini work under the hood in plain English.
How GPT Works translates years of expert research on large language models into a readable, focused introduction to working with these amazing systems. It explains clearly how LLMs function, introduces optimization techniques for fine-tuning them, and shows how to create pipelines and processes to ensure your AI applications are efficient and error-free.
In How GPT Works you will learn how to
Test and evaluate LLMs
Use human feedback, supervised fine-tuning, and retrieval-augmented generation (RAG)
Reduce the risk of bad outputs, high-stakes errors, and automation bias
Design human-computer interaction systems
Combine LLMs with traditional ML
How GPT Works is written by some of the best machine learning researchers at Booz Allen Hamilton, including researcher Stella Biderman, Director of AI/ML Research Drew Farris, and Director of Emerging AI Edward Raff. In clear and simple terms, these experts lay out the foundational concepts of LLMs, the technology’s opportunities and limitations, and best practices for incorporating AI into your organization.
about the book
How GPT Works is an introduction to LLMs that explores OpenAI’s GPT models. The book takes you inside ChatGPT, showing how a prompt becomes text output. In clear, plain language, this illuminating book shows you when and why LLMs make errors, and how you can account for inaccuracies in your AI solutions. Once you know how LLMs work, you’ll be ready to explore the bigger questions of AI, such as how LLMs “think” differently than humans, how best to design LLM-powered systems that work well with human operators, and what ethical, legal, and security issues can and will arise from AI automation.
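For a concrete picture of the prompt-to-text loop described above, here is a minimal sketch of autoregressive generation in Python. It assumes the Hugging Face transformers library and the public "gpt2" checkpoint; these are illustrative choices, not necessarily the tooling used in the book's own examples.

# A minimal sketch of the loop a GPT-style model runs: tokenize the prompt,
# predict the next token, append it, and repeat. The Hugging Face
# `transformers` library and the "gpt2" checkpoint are assumptions made for
# illustration; the book's own examples may use different tooling.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Large language models generate text by"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(20):                          # generate 20 tokens, one at a time
    with torch.no_grad():
        logits = model(input_ids).logits     # a score for every vocabulary token
    next_id = logits[0, -1].argmax()         # greedily pick the most likely token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))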
about the reader
Includes examples in Python. No knowledge of ML or AI systems is required.
about the authors
Stella Biderman is a machine learning researcher at Booz Allen Hamilton and the executive director of the non-profit research center EleutherAI. She is a leading advocate for open-source artificial intelligence and has trained many of the world's most powerful open-source AI models. She has a master's degree in computer science from the Georgia Institute of Technology and degrees in mathematics and philosophy from the University of Chicago.
Drew Farris is a Director of AI/ML Research at Booz Allen Hamilton. He works with clients to build information retrieval, machine learning, and large-scale data management systems, and has co-authored Booz Allen's Field Guide to Data Science and Machine Intelligence Primer, as well as Manning Publications' Taming Text, the 2013 Jolt Award-winning book on computational text processing. He is a member of the Apache Software Foundation and has contributed to a number of open source projects, including Apache Accumulo, Lucene, Mahout, and Solr.
Edward Raff is a Director of Emerging AI at Booz Allen Hamilton, where he leads the machine learning research team. He has worked in healthcare, natural language processing, computer vision, and cybersecurity, alongside fundamental AI/ML research. The author of Inside Deep Learning, Dr. Raff has over 100 research articles published at top artificial intelligence conferences. He is the author of the Java Statistical Analysis Tool library, a Senior Member of the Association for the Advancement of Artificial Intelligence, and has twice chaired the Conference on Applied Machine Learning and Information Technology and the AI for Cyber Security workshop. Dr. Raff's work has been deployed and used by anti-virus companies all over the world.