LLM Prompt Engineering For Developers: The Art and Science of Unlocking LLMs' True Potential

Author: Aymen El Amri


"Such a comprehensive view over Prompt Engineering. It is hard to find a book of this quality and depth covering what is a very emergent field." ~ MR G STEWART (Amazon Review) "Valuable and Readable. This is a friendly but rigorous guide to prompt engineering. It is full of clear explanations and easy-to-follow Python code. I read it to prepare to develop a specialized chatbot. I feel much better prepared now, and I have learned many things that I can't wait to try!" ~ Iver (Amazon Review) A practical approach to Prompt Engineering for developers. Dive into the world of Prompt Engineering agility, optimizing your prompts for dynamic LLM interactions. Learn with hands-on examples from the real world and elevate your developer experience with LLMs. Discover how the


📄 Text Preview (First 20 pages)


📄 Page 1
(This page has no text content)
📄 Page 2
LLM Prompt Engineering For Developers
The Art and Science of Unlocking LLMs' True Potential
Aymen El Amri

This book is for sale at http://leanpub.com/LLM-Prompt-Engineering-For-Developers

This version was published on 2024-04-22

Published by FAUN. At FAUN, we empower developers by keeping them at the forefront of what truly matters in the ever-evolving tech landscape.

© Aymen EL AMRI - www.faun.dev
📄 Page 3
Tweet This Book!

Please help Aymen El Amri by spreading the word about this book on Twitter! The suggested hashtag for this book is #PromptEngineering. Find out what other people are saying about the book by searching for this hashtag on Twitter: #PromptEngineering
📄 Page 4
Contents

Preface ... 1
What Are You Going to Learn? ... 3
To Whom is This Guide For? ... 5
Join the Community ... 5
About the Author ... 5
The Companion Toolkit ... 6
Your Feedback Matters ... 7
From NLP to Large Language Models ... 9
What is Natural Language Processing? ... 9
Language Models ... 9
Statistical Models (N-Grams) ... 10
Knowledge-Based Models ... 11
Contextual Language Models ... 12
Neural Network-Based Models ... 12
Feedforward Neural Networks ... 12
Recurrent Neural Networks (RNNs) ... 13
Long Short-Term Memory (LSTM) ... 14
Gated Recurrent Units (GRUs) ... 14
Transformer Models ... 15
Bidirectional Encoder Representations from Transformers (BERT) ... 16
Generative Pre-trained Transformer (GPT) ... 17
What's Next? ... 18
Introduction to Prompt Engineering ... 19
📄 Page 5
OpenAI GPT and Prompting: An Introduction ... 23
Generative Pre-trained Transformers (GPT) Models ... 23
What Is GPT and How Is It Different from ChatGPT? ... 23
The GPT Models Series: A Closer Look ... 24
GPT-3.5 ... 25
GPT-4 ... 26
Other Models ... 26
API Usage vs. Web Interface ... 26
Tokens ... 27
Costs, Tokens, and Initial Prompts: How to Calculate the Cost of Using a Model ... 29
Prompting: How Does It Work? ... 30
Probability and Sampling: At the Heart of GPT ... 34
Understanding the API Parameters ... 35
Temperature ... 36
Top-p ... 36
Top-k ... 38
Sequence Length (max_tokens) ... 38
Presence Penalty (presence_penalty) ... 38
Frequency Penalty (frequency_penalty) ... 39
Number of Responses (n) ... 39
Best of (best_of) ... 40
OpenAI Official Examples ... 40
Using the API without Coding ... 41
Completion (Deprecated) ... 42
Chat ... 44
Insert (Deprecated) ... 44
Edit (Deprecated) ... 45
Setting Up the Environment ... 47
Choosing the Model ... 47
Choosing the Programming Language ... 47
Installing the Prerequisites ... 47
Installing the OpenAI Python library ... 49
Getting an OpenAI API key ... 49
A Hello World Example ... 49
📄 Page 6
Interactive Prompting ... 52
Interactive Prompting with Multiline Prompt ... 54
Few-shot Learning and Chain of Thought ... 57
What Is Few-Shot Learning? ... 57
Zero-Shot vs Few-Shot Learning ... 57
Approaches to Few-Shot Learning ... 57
Prior Knowledge about Similarity ... 58
Prior Knowledge about Learning ... 58
Prior Knowledge of Data ... 58
Examples of Few-Shot Learning ... 59
Limitations of Few-Shot Learning ... 63
Chain of Thought (CoT) ... 68
Zero-shot CoT Prompting ... 74
Auto Chain of Thought Prompting (AutoCoT) ... 78
Self-Consistency ... 80
Transfer Learning ... 88
What Is Transfer Learning? ... 88
Inductive Transfer ... 89
Transductive Transfer ... 89
Inductive vs. Transductive Transfer ... 90
Transfer Learning, Fine-Tuning, and Prompt Engineering ... 92
Fine-Tuning with a Prompt Dataset: A Practical Example ... 93
Why Is Prompt Engineering Vital for Transfer Learning and Fine-Tuning? ... 100
Perplexity as a Metric for Prompt Optimization ... 103
Avoid Surprising the Model ... 103
How to Calculate Perplexity? ... 104
A Practical Example with Betterprompt ... 107
Hack the Prompt ... 114
ReAct: Reason + Act ... 116
📄 Page 7
What Is It? ... 116
ReAct Using LangChain ... 117
General Knowledge Prompting ... 123
What Is General Knowledge Prompting? ... 123
Example of General Knowledge Prompting ... 123
Introduction to Azure Prompt Flow ... 132
What Is Azure Prompt Flow? ... 132
Prompt Engineering Agility ... 133
Considerations before Using Azure Prompt Flow ... 134
Creating Your First Prompt Flow ... 137
Deploying the Flow for Real-Time Inference ... 143
LangChain: The Prompt Engineer's Guide ... 146
What is LangChain? ... 146
Installation ... 146
Getting Started ... 147
Prompt Templates and Formatting ... 149
Partial Prompting ... 152
Composing Prompts Using Pipeline Prompts ... 154
Chat Prompt Templates ... 162
The Core Building Block of LangChain: LLMChain ... 166
Custom Prompt Templates ... 168
Few-Shot Prompt Templates ... 170
Better Few-Shot Learning with Example Selectors ... 174
NGram Overlap Example Selector ... 187
Max Marginal Relevance Example Selector ... 188
Length-Based Example Selector ... 188
The Custom Example Selector ... 189
Few-Shot Learning with Chat Models ... 196
Using Prompts from a File ... 203
Validating Prompt Templates ... 207
A Practical Guide to Testing and Scoring Prompts ... 208
How and What to Evaluate in a Prompt ... 208
Testing and Scoring Prompts with promptfoo ... 212
📄 Page 8
promptfoo: Using Variables ... 216
promptfoo: Testing with Assertions ... 217
Integration of promptfoo with LangChain ... 218
Reusing Assertions with Templates in promptfoo (DRY) ... 222
Streamlining the Test with promptfoo Scenarios ... 226
General Guidelines and Best Practices ... 230
Introduction ... 230
Start with an Action Verb ... 232
Provide a Clear Context ... 232
Use Role-Playing ... 235
Use References ... 238
Use Double Quotes ... 239
Use Single Quotes When Needed ... 241
Use Text Separators ... 244
Be Specific ... 244
Give Examples ... 247
Indicate the Desired Response Length ... 249
Guide the Model ... 251
Don't Hesitate to Refine ... 254
Consider Looking at Your Problem from a Different Angle ... 254
Consider Opening Another Chat (ChatGPT) ... 255
Use the Right Words and Phrases ... 256
Experiment and Iterate ... 256
Stay Mindful of LLMs Limitations ... 261
How and Where Prompt Engineering Is Used ... 263
Creative Writing ... 263
Content Generation, SEO, Marketing, and Advertising ... 264
Customer Service ... 266
Data Analysis, Reporting, and Visualization ... 271
Virtual Assistants and Smart Devices ... 272
Game Development ... 272
Healthcare and Medical ... 273
Story Generation and Role-Playing ... 274
Business Intelligence and Analytics ... 274
📄 Page 9
Image Generation ... 274
Anatomy of a Prompt ... 277
Role or Persona ... 277
Instructions ... 278
Input Data ... 279
Context ... 280
Rules ... 281
Output ... 282
Examples ... 284
Types of Prompts ... 286
Direct Instructions ... 286
Open-Ended Prompts ... 287
Socratic Prompts ... 287
System Prompts ... 289
Other Types of Prompts ... 291
Interactive Prompts ... 291
Prompt Databases, Tools, and Resources ... 292
Prompt Engine ... 292
Prompt Generator for ChatGPT ... 292
PromptAppGPT ... 293
Promptify ... 294
PromptBench ... 294
PromptFlow ... 295
promptfoo ... 296
PromptPerfect: A Prompt Optimization Tool ... 296
AIPRM for ChatGPT: Prompt Management and Database ... 298
FlowGPT: A Visual Interface for ChatGPT and Prompt Database ... 299
Wnr.ai: A No-Code Tool to Create Animated AI Avatars ... 301
Afterword ... 302
What's Next? ... 302
Your Feedback Matters ... 302
📄 Page 10
1 - Preface

Many of you have likely experimented with ChatGPT or other Large Language Models (LLMs) and initially been swept up in the excitement. The initial "wow" effect was undeniable. However, after a few interactions, the shine began to wear off. You might have noticed its quirks and inconsistencies, realizing that, in its raw form, it doesn't seamlessly fit into our daily tasks.

[Figure: text-davinci-003 fail]

Have you ever asked a language model to write a blog post on a specific topic, only to receive a response that was off-mark or lacked depth? Or perhaps you asked it to summarize a long and complex article, and instead of a concise summary, you got a jumbled mix of sentences lacking the main points. Such experiences can be frustrating, especially when you know the potential that lies within these models.

You may have searched for "perfect" prompts in an online prompt database - a new phenomenon that has emerged in the wake of ChatGPT's and other LLMs' popularity - hoping for a quick solution. However, you soon realize that what works for one person may not work for another. Each prompt is unique and tailored to specific needs and contexts. It is a mistake to assume that a one-size-fits-all approach will always work.

In all cases, be assured: this guide goes beyond just providing prompts and is not a collection of prompts. Instead, it explains the "how" behind crafting
📄 Page 11
effective prompts. With this knowledge and skill, you can create prompts that align with your objectives and lead to better interactions with LLMs.

In reality, if you were sometimes disappointed, it's not entirely the LLM's fault. The key lies in how we communicate with it. Just as we need to frame our questions correctly when seeking answers from a search engine, we need to craft our prompts effectively for ChatGPT or any other LLM for that matter. It's just that search engines do not need as much guidance as LLMs do.

This art of guiding LLMs and optimizing our interactions with them is more than just a technique; it's a paradigm shift, a new experimental science that we call "Prompt Engineering". Indeed, it underscores the importance of clear, structured communication with AI. As developers, we're not merely coding; we're teaching, guiding, and refining the AI's understanding. This guide, "LLM Prompt Engineering For Developers", serves as your comprehensive guide to mastering this emerging discipline. Dive in and discover how to unlock the true potential of ChatGPT and similar models, transforming them into invaluable tools in your development arsenal.

The art of communicating with LLMs heavily relies on prompt engineering. Consider it as finding the perfect key for a lock. While humans can interpret vague or incomplete information, LLMs require precise instructions to deliver optimal results. The goal isn't to converse in human terms, but to effectively translate our needs into a language the model comprehends.

Often, individuals approach LLMs as if they're interacting with another human, projecting their own thought processes and expecting similar interpretations. However, this anthropomorphic perspective can be misleading. Instead of expecting the LLM to think like us, we must strive to understand its "logic" so that it can better serve our needs. This mutual understanding forms the essence of prompt engineering. By refining our prompts, we guide the LLM to produce the most relevant and insightful responses.

Prompt engineering is not an isolated discipline; it stands at the intersection of several fields, each contributing unique insights. From Artificial Intelligence algorithms to data-driven strategies in Data Science and predictive models in Machine Learning, prompt engineering draws from a rich
📄 Page 12
tapestry of knowledge. This multidisciplinary nature might seem daunting initially, suggesting a steep learning curve that requires expertise in various domains. To excel in this field, possessing robust skills in Artificial Intelligence, Data Science, and Machine Learning is essential. Proficiency in these areas is invaluable for understanding the workings of generative AIs and writing prompts with greater precision and accuracy.

However, this guide, "LLM Prompt Engineering For Developers," aims to demystify the complexity of prompt engineering. Its goal is to distill the essence of prompt engineering into its most accessible form. While a foundational understanding of these disciplines certainly aids in mastering prompt engineering, it's not a strict prerequisite to start practicing and seeing tangible results. "LLM Prompt Engineering For Developers" is designed to be a bridge, allowing anyone with a basic grasp of programming to dive into the world of prompt engineering and prompt engineering agility. Through clear explanations, practical examples, and step-by-step hands-on labs, we'll simplify the journey, ensuring that you not only understand the core concepts but also feel empowered to take your first steps in this exciting field.

1.1 - What Are You Going to Learn?

In "LLM Prompt Engineering For Developers," we take a comprehensive journey into the world of LLMs and the art of crafting effective prompts for them. The guide starts by laying the foundation, exploring the evolution of Natural Language Processing (NLP) from its early days to the sophisticated LLMs we interact with today. You will dive deep into the complexities of models such as GPT, understanding their architecture, capabilities, and nuances.

As we progress, this guide emphasizes the importance of effective prompt engineering and its best practices. While LLMs like ChatGPT (gpt-3.5) are
📄 Page 13
powerful, their full potential is only realized when they are communicated with effectively. This is where prompt engineering comes into play. It's not simply about asking the model a question; it's about phrasing, context, and understanding the model's logic. Through chapters dedicated to Azure Prompt Flow, LangChain, and other tools, you'll gain hands-on experience in crafting, testing, scoring, and optimizing prompts. We'll also explore advanced concepts like Few-shot Learning, Chain of Thought, Perplexity, and techniques like ReAct and General Knowledge Prompting, equipping you with a comprehensive understanding of the domain.

This guide is designed to be hands-on, offering practical insights and exercises. In fact, as you progress, you'll familiarize yourself with several tools:

• OpenAI Python library: You will dive into the core of OpenAI's LLMs and learn how to interact with and fine-tune models to achieve precise outputs tailored to specific needs.
• Promptfoo: You will master the art of crafting effective prompts. Throughout the guide, we'll use Promptfoo to test and score prompts, ensuring they're optimized for desired outcomes.
• LangChain: You'll explore the LangChain framework, which elevates LLM-powered applications. You'll dive into understanding how a prompt engineer can leverage the power of this tool to test and build effective prompts.
• Betterprompt: Before deploying, it's essential to test. With Betterprompt, you'll ensure the LLM prompts are ready for real-world scenarios, refining them as needed.
• Azure Prompt Flow: You will experience the visual interface of Azure's tool, streamlining LLM-based AI development. You'll design executable flows, integrating LLMs, prompts, and Python tools, ensuring a holistic understanding of the art of prompting.
• And more!

With these tools in your toolkit, you will be well-prepared to craft powerful and effective prompts. The hands-on exercises will help solidify your
📄 Page 14
understanding. Throughout the process, you'll be actively engaged, and by the end, not only will you appreciate the power of prompt engineering, but you'll also possess the skills to implement it effectively.

1.2 - To Whom is This Guide For?

This guide is designed for those passionate about LLMs and the emerging field of prompt engineering. Regardless of your AI expertise or familiarity with programming, this guide provides a clear route to mastering the nuances of creating effective prompts for LLMs. So if you are a beginner, don't worry. We'll start from the basics and build up from there. For those eager to unlock the vast capabilities of language models in practical scenarios, consider this guide your essential starting point.

1.3 - Join the Community

This guide was published by FAUN, a community of developers, architects, and software engineers who are passionate about learning and sharing their knowledge. If you're interested in joining the community, you can start by subscribing to our newsletter at https://faun.dev/join. Every week, we share the most important and relevant articles, tutorials, and videos on the latest technologies and trends, including AI, ML, and NLP.

You can also follow us on Twitter at @joinFAUN (https://twitter.com/joinFAUN) and LinkedIn (https://www.linkedin.com/company/22322295) to stay up-to-date with the latest news and announcements.

1.4 - About the Author

Aymen El Amri is a software engineer, author, and entrepreneur. He is the founder of the FAUN Developer Community. He is also the author of
📄 Page 15
multiple books on software engineering. You can find him on Twitter (https://twitter.com/eon01) and LinkedIn (https://www.linkedin.com/in/elamriaymen/).

1.5 - The Companion Toolkit

For an enhanced learning experience, we have created a companion toolkit that includes all the code snippets and prompts used in this guide. You can download it using the following URL: https://from.faun.to/r/llmcd or by scanning the QR code below.
📄 Page 16
[Image: QR code for the companion toolkit]

1.6 - Your Feedback Matters

Countless hours have been poured into crafting this guide, aiming to provide you with the best knowledge and tools to master prompt engineering. I would love to hear your thoughts and suggestions. Your feedback will help me improve this guide and create better content in the future. Your
📄 Page 17
insights could also light the way for others! If you feel I've achieved my goal and this guide has been valuable to you, I'd be deeply honored if you shared your experience with a thoughtful review on the marketplace where you purchased this book, on social media, or directly with me via email (aymen@faun.dev). Your feedback will help me quantify the impact of this work and help others discover this book and benefit from the knowledge within.
📄 Page 18
2 - From NLP to Large Language Models

2.1 - What is Natural Language Processing?

Natural language refers to the language that humans use to communicate with each other. It encompasses spoken and written language, as well as sign language. Natural language is distinct from formal language, which is used in mathematics and computer programming. Generative AI systems such as ChatGPT are capable of understanding and producing both natural and formal languages. In both cases, interactive AI assistants like ChatGPT use natural language to communicate with humans. Their output could be a natural language response or a mix of natural and formal languages.

To process, understand, and generate natural language, a whole field of AI has emerged: Natural Language Processing (NLP). NLP, by definition, is the field of artificial intelligence that focuses on the understanding and generation of human language by computers. It is employed in a wide range of applications, including voice assistants, machine translation, chatbots, and more. In other words, when we talk about NLP, we refer to the ability of computers to understand and generate natural language.

NLP has experienced rapid growth in recent years, largely due to advancements in language models such as GPT and BERT. These models are some of the most powerful NLP models to date. But what is a language model?
📄 Page 19
2.2 - Language Models

Models are intelligent computer programs that can perform a specific task. For example, a model can be trained to recognize images of cats and dogs, to write social media posts or blog posts, to provide medical assistance or legal advice, and so on. These models are the result of a training process that uses large datasets to teach the model how to perform a specific task. The larger the dataset, the more accurate the model will be. This is why models trained on large datasets are often more accurate than models trained on smaller datasets. Through training, models acquire the capability to make predictions on new data. For example, a model trained on a dataset of images of cats and dogs can predict whether a new image contains a cat or a dog.

Language models are a subset of models capable of generating, understanding, or manipulating text or speech in natural language. These models are essential in the field of NLP and are used in various applications such as machine translation, speech recognition, text generation, chatbots, and more. Here are some types of language models:

• Statistical models (n-grams)
• Neural network-based models
  – Feedforward neural networks
  – Recurrent neural networks (RNNs)
  – Long short-term memory (LSTM)
  – Gated recurrent units (GRUs)
• Knowledge-based models
• Contextual language models
• Transformer models
  – Bidirectional encoder representations from transformers (BERT)
  – Generative pre-trained transformer (GPT)
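Before looking at each family in turn, it helps to state what all of these models have in common. In standard notation (a general formulation, not a formula quoted from this guide), a language model assigns a probability to a sequence of words by predicting each word from the words that precede it:

P(w_1, w_2, \dots, w_m) = \prod_{i=1}^{m} P(w_i \mid w_1, \dots, w_{i-1})

The families listed above differ mainly in how they estimate this next-word prediction, from raw counts in n-gram models to learned representations in neural and Transformer models.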
📄 Page 20
2.3 - Statistical Models (N-Grams)

Statistical models, like n-gram models, serve as foundational language models commonly used for text classification and language modeling. They can also be adapted for text generation, although more advanced models are typically better suited for complex text-to-text tasks. Within statistical models, word sequence probabilities are derived from training data, enabling the model to estimate the likelihood of the next word in a sequence. N-gram models specifically consider the preceding n-1 words when estimating the probability of the next word. For instance, a bigram model takes into account only the preceding single word, while a trigram model examines the two preceding words. This makes n-gram models fast to train and use, but limited in their ability to capture long-range dependencies.

In a trigram model, each current word is paired with the two preceding words, forming sequences of three words. For instance, in the sentence "A man of knowledge restrains his words," the observed trigrams would include "A man of," "man of knowledge," "of knowledge restrains," "knowledge restrains his," and "restrains his words." These sequential 3-word patterns are then employed by the model to estimate the probabilities of subsequent words.

Rather than clustering words, n-gram models leverage local word order and context derived from the training data. By focusing on these short-term sequences, n-gram models can make predictions about forthcoming words without modeling global semantics. Although they are efficient and straightforward, their local context makes them less suitable for generating lengthy texts.

Statistical models, particularly n-grams, are quite different from the more recent neural language models. The concept of "prompt engineering" as it's understood today is more closely associated with the latter. However, there are ways in which the design of input or the preprocessing of data for n-gram models can be thought of as a precursor to prompt engineering.
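To make the trigram example above concrete, here is a minimal Python sketch (illustrative only, not code from this guide or its companion toolkit; the tiny corpus and function names are invented for the demonstration). It counts the trigrams in a couple of sentences and uses those counts to estimate the probability of the next word given the two preceding words:

from collections import defaultdict

def build_trigram_counts(corpus):
    """Count how often each word follows each pair of preceding words."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        tokens = sentence.lower().split()
        for w1, w2, w3 in zip(tokens, tokens[1:], tokens[2:]):
            counts[(w1, w2)][w3] += 1
    return counts

def next_word_probabilities(counts, w1, w2):
    """Estimate P(next word | w1, w2) from raw trigram counts."""
    following = counts.get((w1.lower(), w2.lower()), {})
    total = sum(following.values())
    return {word: count / total for word, count in following.items()}

# A tiny invented corpus; the first sentence is the book's example.
corpus = [
    "A man of knowledge restrains his words",
    "A man of few words is often a man of knowledge",
]

counts = build_trigram_counts(corpus)
print(next_word_probabilities(counts, "a", "man"))   # {'of': 1.0}
print(next_word_probabilities(counts, "man", "of"))  # roughly {'knowledge': 0.67, 'few': 0.33}

With only two sentences the estimates are crude, but the mechanism is exactly the one described above: the model looks up the two preceding words and proposes continuations in proportion to how often they followed that pair in the training data.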