Author: Daniel Nastase
Publish Year: 2024
Language: English
File Format: PDF
Copyright © 2024 by Daniel Nastase. All rights reserved. No part of this book may be reproduced in any form or by any electronic or mechanical means, including information storage and retrieval systems, without permission in writing from the author. The only exception is by a reviewer, who may quote short excerpts in a review.

Contents

1. Introduction
  1.1. Getting Book Updates and Code Examples
  1.2. About the author
  1.3. How this book came to be
  1.4. What is LangChain
  1.5. Overview of this book
  1.6. Requirements and how to use this book
2. Models, Prompts and Chains
  2.1. Introduction
  2.2. Getting Started
  2.3. The API key for the OpenAI ChatGPT model
  2.4. Using the ChatOpenAI LangChain model
  2.5. Prompt templates
  2.6. Chains
  2.7. LCEL - LangChain Expression Language
3. Streams in LangChain
  3.1. Introduction and project setup
  3.2. Project refactor
  3.3. Streaming the response
4. Output parsers
  4.1. Introduction
  4.2. Example setup
  4.3. StringOutputParser
  4.4. CommaSeparatedListOutputParser
  4.5. The getFormatInstructions() method and custom parsers
  4.6. StructuredOutputParser
5. Chat memory
  5.1. Introduction and project setup
  5.2. Injecting messages into the conversation memory
  5.3. Full conversation history
6. RAG / Chatting with documents
  6.1. Introduction
  6.2. Example setup
  6.3. Using local documents
  6.4. Components of the RAG process
  6.5. Vectors, Embeddings and Vector databases
  6.6. Using CheerioWebBaseLoader and MemoryVectorStore
7. AI Agents
  7.1. Introduction and project setup
  7.2. Making an agent
  7.3. Monitoring agents and performance considerations
  7.4. Recap
  7.5. Final Words
1. Introduction
1.1. Getting Book Updates and Code Examples

To receive the latest versions of the book, subscribe to our mailing list by emailing daniel@js-craft.io. I strive to keep the book up to date with the latest versions of LangChain and the other libraries. I regularly update the code and content, so make sure to subscribe to stay informed about these updates. To access the full source code for all the completed projects, or the PDF version of the book, just write to me at daniel@js-craft.io.
1.2. About the author

Hi there, friend! I'm Daniel, a software developer and educator. I like computers. I try to make them like me back. More than computers, I like humans. I think every person has value, intelligence, and good inside. I believe that education is the key to building a better, more stable, and richer world.

I used to work at companies such as Skillshare, Tradeshift, and ING, where I had the chance to be exposed to completely different types of frontend development in various teams. Over the past five years, I've been writing articles on js-craft.io about JavaScript, CSS, and other software development topics. I've always enjoyed teaching, holding both in-class and online classes, and being involved in tech education startups.

You can always reach me at daniel@js-craft.io, and read more about me at https://www.js-craft.io/about/. You can also find me on:

GitHub: github.com/daniel-jscraft
Twitter: @js_craft_hq
Mastodon: @daniel_js_craft
YouTube: @js-craftacademy6740
1.3. How this book came to be

The first time I wrote a computer program was 26 years ago, in Turbo Pascal. To someone who did not know how to code, it felt like casting spells. But I understood how spells were made. If-then-else statements, functions, variables: this was how you made spells.

The only time I've had that feeling again was in 2022, when a friend showed me ChatGPT. It felt like some mystical magic. But this time I was not the one writing the spells. I did not truly understand how that 'AI thing' was made; you couldn't create it with just the programming I was used to.

So, I went down the "learn how AI works" rabbit hole. And oh boy, this was a deep one. I was trying to understand this thing from the ground up, building and training my own models from scratch. After some time, my only practical project was a pure JavaScript neural network, written from scratch, that could be trained to detect whether a hand-drawn shape was a digit. I gave up after a while. With all the different architectures, bias neurons, training data sets, backpropagation, and gradient descent, it was too much for my mediocre brain. The AI thing was now a bit more logical and less mystical, but I was far from being able to use the new learnings practically. Therefore I went back to my normal job as a web developer. But in the back of my mind was still the feeling that I should keep paying attention to AI. There was something there.

After a while, I was listening to an episode of the excellent Latent Space podcast. I had been following Shawn "swyx" Wang, the co-founder of this podcast, from the time he was speaking about React (the JavaScript framework, not ReAct prompting).
In that episode, Swyx mentioned something about LangChain, a framework made for integrating AI models with "traditional" apps. I realized that I did not fully understand all the inner mechanics of databases or operating systems either, but I was using them anyway. Therefore I decided to give AI another shot, this time from a more practical angle.

One book later, I can now say that I love LangChain for all it taught me about how Large Language Models (LLMs for short) work. The abstractions, the mental models, and the use cases of this framework will teach you a lot about AI models and how to use them in conjunction with JavaScript-powered apps. LangChain can be seen as the orchestrator that connects nearly everything in the AI-webapp integration system. This makes it an excellent gateway for understanding how all the components work together. So, let's learn!
1.4. What is LangChain

LangChain is a framework designed to simplify the creation of applications that integrate Large Language Models (LLMs). It provides all the AI integration building blocks in one framework, offering a modular, flexible, and scalable architecture that is easy to maintain. You can see LangChain as the glue layer for almost everything in the AI ecosystem. By learning LangChain, you will gain a deep understanding of the structure, workflows, and practices of AI Engineering.

Some use cases for LangChain include:

- LLMs are trained on human-written, unstructured text. While this works well for human interactions, APIs expect structured data like JSON rather than unformatted text. LangChain can format the input and output in LLM interactions.
- LLMs are stateless. They don't remember who you are or what you said a few seconds ago. LangChain provides support for both long-term and short-term memory.
- LLMs can be slow. LangChain can stream responses as they are generated, providing fast feedback to the user.
- LLMs have knowledge cutoff dates. For example, GPT-3.5 does not have training data after 2021, and GPT-4o does not know anything after October 2023. With LangChain, we can create and manage AI Agents that go online, search for information, and use tools to parse that information.
- We can chain multiple LLMs together. For example, we can generate an article with Google's Gemini model and then pass the text to Midjourney to generate images for that article.
- LLMs lack access to internal organization documents. We can use Retrieval-Augmented Generation (RAG) to provide extra context to a model, allowing it to interact with users based on an organization's rules and knowledge. LangChain has a great toolset for RAG operations.

LangChain standardizes operations like the ones above, making it easy to swap components as needed. Want to use a different LLM because it's better or cheaper? Want to change the vector database that stores the RAG embeddings? No problem, LangChain has you covered.

LangChain abstracts and standardizes common patterns when working with LLMs. Remember the old days of browser incompatibilities? Then jQuery appeared. This is what LangChain aims to do for LLM integration. While it is possible to create any of these apps without LangChain, it simplifies the process to a great extent and is much better than manual prompting. If you want to delve deeper into the subject, I recommend listening to the episode with Harrison Chase, the founder of LangChain, on the excellent Latent Space podcast.
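To make the "format the input" point concrete, here is a minimal plain-JavaScript sketch of what a prompt template does: substitute variables into a reusable prompt string. LangChain's PromptTemplate automates and extends this pattern; the formatPrompt helper below is purely illustrative and not part of the library.

```javascript
// Illustrative sketch (not LangChain code): replace {placeholders}
// in a template string with values from an object.
function formatPrompt(template, values) {
  return template.replace(/\{(\w+)\}/g, (match, key) =>
    key in values ? values[key] : match
  )
}

const template = "Write a short story for kids about {subject}."
console.log(formatPrompt(template, { subject: "unicorns" }))
// → "Write a short story for kids about unicorns."
```

LangChain's real templates add input validation, composition, and chat-message formatting on top of this basic idea.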
1.5. Overview of this book

In this book, we embark on a fun, hands-on, and pragmatic journey to learning LangChain. Every main chapter has an associated example application that gradually evolves during that chapter. Each step introduces a few new LangChain core concepts in a manageable way, without overwhelming you. The book is purposefully broken down into short chapters, each focusing on a different topic. Here is how the main chapters and example applications are structured:

- Chapter 2: We will use a Story Generator for Kids app to introduce the basics of LangChain. We will connect to the first LLM, and use prompt templates and chains. We will delve deeper into partial templates, template composition, and more.
- Chapter 3: After the basic setup in Chapter 2, this chapter will introduce the concept of streaming. Using the same app, Story Generator for Kids, we will stream long responses from the LLMs to improve the user experience.
- Chapter 4: One challenge with LLMs is that they give responses in an unstructured format, as they were trained on human-intelligible text. In this chapter, we will study output parsers by making a full Trivia Game from scratch using ChatGPT.
- Chapter 5: The memory module enables LLM memory for LangChain apps. We can remember past conversations and responses in the current session, similar to how ChatGPT does it. In this chapter, we will build a Tea Facts Wiki application.
- Chapter 6: One of the key features of LangChain is its support for RAG (Retrieval-Augmented Generation). In this chapter, we will build a Chat Bot app that answers questions based on external context. We will see how to provide data from sources like PDFs or online documents, what embeddings are, and how vector data stores work.
- Chapter 7: More flexible and versatile than chains, AI agents help you build complex solutions and enable access to third-party tools such as Google search and math calculators. They can decide which tool is appropriate to use in a given situation.

From building a Story Generator to creating a Trivia Game and a real Chat Bot application, each chapter gradually expands your understanding. By the end, we will have explored streaming, output parsing, memory modules, AI agents, and RAG capabilities, empowering you to develop complex solutions. Get ready to embark on an enriching journey into the world of LangChain development.
1.6. Requirements and how to use this book

LangChain offers implementations in both JavaScript and Python. Throughout this book, our focus will be on JavaScript. You'll find that you learn best by coding along with the examples provided in the book. My suggestion is to initially go through each chapter, absorbing the content to grasp the concepts. Then, during a second pass, code along as you progress.

We'll utilize a combination of LangChain, React, and Next.js. While a basic understanding of React is expected, you need not be an expert. For setting up Next.js, the simplest approach is to utilize the create-next-app utility. In most instances, our work will be confined to just 2 files:

- src/app/page.js - handling the frontend; its primary function is to display information and data.
- src/app/api/route.js - managing the backend; this file will handle the API interactions with the LLM.

The examples will progress in a step-by-step, iterative manner. Each modification will be denoted by the ! sign, with the path of the modified file provided for reference. For instance:
```javascript
// src/app/page.js - path of the updated file

// ! add the PromptTemplate to avoid repetition - file change
import { PromptTemplate } from "@langchain/core/prompts"

// ! make a new ChatGPT model - file change
const model = new ChatOpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
  temperature: 0.9
})
```

Please note that LangChain is currently under heavy development and is changing rapidly. While I'll strive to update the book frequently, there might be brief periods where method names or import statements are not aligned with the latest version. For support, feel free to reach out via email at daniel@js-craft.io.
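To give a feel for how the two files cooperate, here is a hypothetical minimal sketch of a src/app/api/route.js handler (the { subject } request field and the echoed story are illustrative assumptions, not the book's actual code):

```javascript
// Hypothetical sketch: route.js exposes a POST endpoint that the
// page.js frontend would call with fetch("/api", { method: "POST", ... }).
// The body shape and the echoed story are illustrative placeholders.
export async function POST(request) {
  const { subject } = await request.json()
  // later chapters replace this echo with a real call to the LLM
  return Response.json({ story: `A story about ${subject}` })
}
```

In later chapters, the body of this handler is where the LangChain model, prompt templates, and chains live, while page.js stays a thin UI layer.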
2. Models, Prompts and Chains
2.1. Introduction

It's now time to create our first app using LangChain and JavaScript. You have been assigned to create an app called "The Story Maker". This app will generate titles for children's stories. For instance, if we specify that we want a story with unicorns as the main characters, the app will generate a story title like this:

"The Magical Adventures of Stardust the Unicorn"
2.2. Getting Started

The easiest way to get started is by using the create-next-app utility. Once you have it installed, create a new project named story-generator:

```shell
yarn create next-app
```

Alternatively, you can use the starting code from the code examples folder. Just remember to run npm install from within the folder of the example.

We'll need to install a couple of npm dependencies: LangChain and OpenAI. Go to the root folder of the project and, in your terminal, execute the following command:

```shell
npm i langchain @langchain/openai
```

At the time of writing this book, these are the current versions of the installed modules:

```json
"@langchain/openai": "^0.0.14",
"langchain": "^0.1.19",
```

Please note that LangChain is currently undergoing rapid development, and as it progresses, certain parts of the code may become obsolete. I'm striving to keep the book updated, but if, for any reason, something breaks, it might be wise to revert to these version numbers.

Once the dependencies are set up, we'll proceed to create the frontend of our app using React. Insert the following code into src/app/page.js:

```javascript
// code/story-generator/src/app/page.js
export default function Home() {
  const onSubmitHandler = async (event) => {
    event.preventDefault()
    const subject = event.target.subject.value
    console.log(subject)
  }
  return (
    <>
      <h1>The Story Maker</h1>
      <em>This app uses a GPT Model to generate a story for kids.</em>
      <form onSubmit={onSubmitHandler}>
        <label htmlFor="subject">Main subject of the story: </label>
        <input name='subject' placeholder='subject...' />
        <button>Ask AI Model</button>
      </form>
    </>
  )
}
```

Nothing particularly remarkable at this stage. It's just a React UI that fetches the subject of the story from a text input and logs it to the console. We'll launch our app by executing the following command in the terminal:

```shell
npm run dev
```

Currently, if the user enters a subject, no story will be generated, as we haven't yet integrated the app with OpenAI. We'll address this in the next chapter.
2.3. The API key for the OpenAI ChatGPT model

To use OpenAI's ChatGPT model, you will need an API key. The API is subscription-based, and the key can be generated from the following URL:

https://platform.openai.com/api-keys

You can top up your account with the minimum value. For example, writing all the examples in this book cost me under 2 USD. Once you've generated your API key, return to the Next.js project and paste it into the .env file:

```shell
# code/story-generator/.env
OPENAI_API_KEY='sk-1234567890B2tT4HT3BlbkFJXkATN1arB9DXABkRQ1uj'
```

This key will be used only in the backend part of our app. To access the API key in the code/story-generator/src/app/api/route.js file, we can write the following:

```javascript
// code/story-generator/src/app/api/route.js
console.log( process.env.OPENAI_API_KEY )
```

The key will later be passed to the LangChain model object constructor. This step will be the default procedure for each of the code examples in the following chapters of this book. If you don't wish to use an API key, you can check out this list of LLMs supported by LangChain (https://js.langchain.com/docs/modules/model_io/models/) and pick a free model. You can even use a free HuggingFace token to avoid the OpenAI API.
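A missing or misnamed key is a common first stumbling block, so here is a small hypothetical helper (not from the book) that fails fast with a clear message when OPENAI_API_KEY is absent from the environment:

```javascript
// Hypothetical helper (not from the book): read the OpenAI API key from
// the environment and throw a descriptive error if it is missing.
// Only server-side code (route.js) should ever read this value.
function getApiKey(env = process.env) {
  const key = env.OPENAI_API_KEY
  if (!key) {
    throw new Error("Missing OPENAI_API_KEY - add it to your .env file")
  }
  return key
}
```

Checking the key once at startup like this gives a much clearer error than the generic 401 response the OpenAI API returns when the key is empty.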