Author: Evan Anderson

Explore the theory and practice of designing and writing serverless applications using examples from the Knative project. With this practical guide, mid-level to senior application developers and team managers will learn when and why to target serverless platforms when developing microservices or applications. Along the way, you’ll also discover warning signs that suggest cases when serverless might cause you more trouble than joy. Drawing on author Evan Anderson’s 15 years of experience developing and maintaining applications in the cloud, and more than 6 years of experience with serverless platforms at scale, this book acts as your guide into the high-velocity world of serverless application development. You’ll come to appreciate why Knative is the most widely adopted open source serverless platform available.

With this book, you will:

• Learn what serverless is, how it works, and why teams are adopting it
• Understand the benefits of Knative for cloud native development teams
• Learn how to build a serverless application on Knative
• Explore the challenges serverless introduces for debugging and the tools that can help improve it
• Learn why event-driven architecture and serverless compute are complementary but distinct
• Understand when a serverless approach might not be the right system design

ISBN: 1098142071
Publisher: O'Reilly Media
Publish Year: 2023
Language: English
Pages: 252
File Format: PDF
File Size: 5.3 MB
Building Serverless Applications on Knative: A Guide to Designing and Writing Serverless Cloud Applications
Evan Anderson
CLOUD COMPUTING

“This in-depth exploration of modern application patterns incorporates cloud provider services, building composable solutions that scale while keeping cost optimization in mind.” —Carlos Santana, Senior Specialist Solutions Architect, AWS

“Serverless is a design pattern that can be layered on top of any platform. This is a blueprint.” —Kelsey Hightower, Distinguished Engineer

Evan Anderson is a founding member of the Knative project and has served on the technical oversight committee, the trademark committee, and in several working groups. He also worked at Google on Compute Engine, App Engine, Cloud Functions, and Cloud Run, as well as in the SRE organization. He’s currently a principal engineer at Stacklok.

US $65.99 / CAN $82.99 · ISBN: 978-1-098-14207-0
Building Serverless Applications on Knative
by Evan Anderson

Copyright © 2024 Evan Anderson. All rights reserved.
Printed in the United States of America.

Published by O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.

O’Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://oreilly.com). For more information, contact our corporate/institutional sales department: 800-998-9938 or corporate@oreilly.com.

Acquisitions Editor: John Devins
Development Editor: Shira Evans
Production Editor: Clare Laylock
Copyeditor: Stephanie English
Proofreader: Sharon Wilkey
Indexer: nSight, Inc.
Interior Designer: David Futato
Cover Designer: Karen Montgomery
Illustrator: Kate Dullea

November 2023: First Edition

Revision History for the First Edition
2023-11-15: First Release

See http://oreilly.com/catalog/errata.csp?isbn=9781098142070 for release details.

The O’Reilly logo is a registered trademark of O’Reilly Media, Inc. Building Serverless Applications on Knative, the cover image, and related trade dress are trademarks of O’Reilly Media, Inc.

The views expressed in this work are those of the author and do not represent the publisher’s views. While the publisher and the author have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the author disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.

ISBN: 978-1-098-14207-0
[LSI]
Table of Contents

Preface

Part I. The Theory of Serverless

1. What Is Serverless, Anyway?
   Why Is It Called Serverless?
   A Bit of Terminology
   What’s a “Unit of Work”?
   Connections
   Requests
   Events
   It’s Not (Just) About the Scale
   Blue and Green: Rollout, Rollback, and Day-to-Day
   Creature Comforts: Undifferentiated Heavy Lifting
   Creature Comforts: Managing Inputs
   Creature Comforts: Managing Process Lifecycle
   Summary

2. Designing from Scratch
   Setup for This Chapter
   A Single-Page App
   Bootstrapping a React App
   A Basic Web UI
   Packaging Our App
   Into the Cloud on Your Laptop!
   Under the Covers: How Requests Are Handled
   Continuing to Build
   Adding an API
   API Gateways and Composing an App
   Splitting an API into Components
   Augmenting a Proxy
   Complementary Services
   Key-Value Storage
   Object Storage
   Timers (Cron)
   Task Queues
   Workflows
   Summary

3. Under the Hood: Knative
   Infrastructure Assumptions
   Hardware and Operating System Management
   Scheduling and the Datacenter as a Computer
   Serving
   Control-Plane Concepts
   Life of a Request
   Autoscaling Control Loop
   Comparison with AWS Lambda
   Eventing
   Control-Plane Concepts
   Delivery Guarantees
   Life of an Event
   Ecosystem
   Comparison with Amazon SNS and Other Cloud Providers
   Comparison with RabbitMQ
   Comparison with Apache Kafka
   Eventing Summary
   Functions
   Summary

4. Forces Behind Serverless
   Top Speed: Reducing Drag
   Continuously Delivering Value
   Winning the (Business) Race
   Microbilling
   A Deal with the Cloud
   Clouding on Your Own
   What Happens When You’re Not Running
   The Gravitational Force of Serverless
   Implications for Languages
   Implications for Sandboxing
   Implications for Tooling
   Implications for Security
   Implications for Infrastructure
   Summary

Part II. Designing with Serverless

5. Extending the Monolith
   The Monolith Next Door
   Microservice Extension Points
   Asynchronous Cleanup and Extension
   Extract, Transform, Load
   The Strangler Fig Pattern
   Experiments
   Dark Launch and Throwaway Work
   A Serverless Monolith?
   Summary

6. More on Integration: Event-Driven Architecture
   Events and Messages
   Why CloudEvents
   Events as an API
   What Does Event-Driven Mean?
   Event Distribution
   Content-Based Routing Versus Channels
   Internal and External Events
   Building for Extension with “Inner Monologue”
   Workflow Orchestration
   Long-Lived Workflows, Short-Lived Compute
   Workflow as Declarative Manifests
   Event Broadcast
   What’s So Tricky About Broadcast?
   Broadcast in Fast-Moving Systems
   Task Queues
   Task Queue Features
   Scaling in Time Instead of Instances
   Summary

7. Developing a Robust Inner Monologue
   Ambient Event Publishing
   The Active Storage Pattern
   Is Your Database Your Monologue?
   Scenarios for Inner Monologues
   Key Events
   Workflows
   Inner Monologues Versus RPCs
   Sensitive Data and Claim Checks
   How to Use an Inner Monologue
   Extending the Monolith: Workflow Decoration
   Scatter-Gather Advice
   Account Hijacking Detection
   Versioned Events, Unversioned Code
   Clouds Have Monologues Too
   Kubernetes: Only a Monologue
   Audit Log or Mono-Log?
   Summary

8. Too Much of a Good Thing Is Not a Good Thing
   Different Types of Work in the Same Instance
   Work Units That Don’t Signal Termination
   Protocol Mismatch
   Inelastic Scaling
   Instance Addressability and Sharding
   Summary

Part III. Living with Serverless

9. Failing at the Speed of Light
   Meltdown
   Narrowest Bottleneck
   Feedback Loop
   Cold Start, Extra Slow
   The Race to Ready
   Avoiding Cold Starts
   Forecasting Is Hard
   Loopback
   Hotspots Don’t Scale
   Graphs and Focal Points
   Data Size
   Locking
   Exactly Once Is Hard
   Summary

10. Cracking the Case: Whodunnit
   Log Aggregation
   Tracing
   Metrics
   Live Tracing and Profiling
   APM Agents
   Summary

Part IV. A Brief History of Serverless

11. A Brief History of Serverless
   35 Years of Serverless
   inetd
   CGI
   Stored Procedures
   Heroku
   Google App Engine
   Cloud Foundry
   AWS Lambda
   Azure and Durable Functions
   Knative and Cloud Run
   Cloudflare Workers and Netlify Edge Functions
   Where to Next?
   AI
   Distributed and the Edge
   Beyond Stateless
   Summary

Index
Preface

Serverless has become a major selling point of cloud service providers. Over the last four years, hundreds of services from both major cloud providers and smaller service offerings have been branded or rebranded as “serverless.” Clearly, serverless has something to do with services provided over a network, but what is serverless, and why does it matter? How does it differ from containers, functions, or cloud native technologies? While terminology and definitions are constantly evolving, this book aims to highlight the essential attributes of serverless technologies and explain why the serverless moniker is growing in popularity.

This book primarily focuses on serverless compute systems; that is, systems that execute user-defined software, rather than performing a fixed function like storage, indexing, or message queuing. (Serverless storage systems exist as well, but they aren’t the primary focus of this book!) With that said, the line between fixed-function storage and general-purpose compute is never as sharp and clear as theory would like—for example, database systems that support the SQL query syntax combine storage, indexing, and the execution of declarative query programs written in SQL. While the architecture of fixed-function systems can be fascinating and important to understand for performance tuning, this book primarily focuses on serverless compute because it’s the interface with the most degrees of freedom for application authors, and the system that they are most likely to interact with day-to-day.

If you’re still not quite sure what serverless is, don’t worry. Given the number of different products on the market, it’s clear that most people are in the same boat. We’ll chart the evolution of the “serverless” term in “Background” later in this preface and then lay out a precise definition in Chapter 1.
Who Is This Book For?

The primary audience for this book is software engineers¹ and technologists who are either unfamiliar with serverless or are looking to deepen their understanding of the principles and best practices associated with serverless architecture.

New practitioners who want to immediately dive into writing serverless applications can start in Chapter 2, though I’d recommend Chapter 1 for additional orientation on what’s going on and why serverless matters. Chapter 3 provides additional practical material to develop a deeper understanding of the architecture of the Knative platform used in the examples.

The order of the chapters should be natural for readers who are familiar with serverless. Chapters 5 and 6 provide a checklist of standard patterns for applying serverless, while Chapter 8 and onward provide a sort of “bingo card” of serverless warning signs and solution sketches that may be handy on a day-to-day basis. Chapter 11’s historical context also provides a map of previous technology communities to examine for patterns and solutions.

For readers who are more interested in capturing the big-picture ideas of serverless, Chapters 1, 4, and 7 have some interesting gems to inspire deeper understanding and new ideas. Chapter 11’s historical context and future predictions may also be of interest in understanding the arc of software systems that led to the current implementations of scale-out serverless offerings.

For readers who are new not only to serverless computing, but also to backend or cloud native development, the remainder of this preface will provide some background material to help set the stage. Like much of software engineering, these areas move quickly, so the definitions I provide here may have changed somewhat by the time you read this book.

¹ Including operationally focused engineers like site reliability engineers (SREs) or DevOps practitioners.
When in doubt, these keywords and descriptions may save some time when searching for equivalent services in your environment of choice.

Background

Over the last six years, the terms “cloud native,” “serverless,” and “containers” have all been subject to successive rounds of hype and redefinition, to the point that even many practitioners struggle to keep up or fully agree on the definitions of these terms. The following sections aim to provide definitions of some important reference points in the rest of the book, but many of these definitions will probably continue to evolve—take them as general reference context for the rest of this book, but not as the one true gospel of serverless computing. Definitions change as ideas germinate and
grow, and the gardens of cloud native and serverless over the last six years have run riot with new growth.

Also note that this background is organized such that it makes sense when read from beginning to end, not as a historical record of what came first. Many of these areas developed independently of one another and then met and combined after their initial flowering (replanting ideas from one garden into another along the way).

Containers

Containers—either Docker or Open Container Initiative (OCI) format—provide a mechanism to subdivide a host machine into multiple independent runtime environments. Unlike virtual machines (VMs), container environments share a single OS kernel, which provides a few benefits:

Reduced OS overhead, because only one OS is running
   This limits containers to running the same OS as the host, typically Linux. (Windows containers also exist but are much less commonly used.)

Simplified application bundles that run independently of OS drivers and hardware
   These bundles are sufficient to run different Linux distributions on the same kernel with consistent behavior across Linux versions.

Greater application visibility
   The shared kernel allows monitoring application details like open file handles that would be difficult to extract from a full VM.

A standard distribution mechanism for storing a container in an OCI registry
   Part of the container specification describes how to store and retrieve a container from a registry—the container is stored as a series of filesystem layers stored as a compressed TAR (tape archive) such that new layers can add and delete files from the underlying immutable layers.

Unlike any of the following technologies, container technologies on their own benefit the running of applications on a single machine, but don’t address distributing an application across more than one machine.
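The layering behavior described above (each layer adds or deletes files on top of immutable lower layers) can be illustrated with a toy model. This is my sketch of the general idea, not the OCI format itself; real images use tar layers and "whiteout" marker files rather than a dict with `None` values.

```python
# Toy model of OCI-style image layers: each layer maps paths to file
# contents, or to None to mark a file as deleted (a "whiteout").
def flatten(layers):
    """Merge layers bottom-to-top into the filesystem a container sees."""
    fs = {}
    for layer in layers:
        for path, content in layer.items():
            if content is None:
                fs.pop(path, None)   # whiteout: hide the file from lower layers
            else:
                fs[path] = content   # add or override a file
    return fs

base = {"/bin/sh": "busybox", "/etc/os-release": "distro A"}
app_layer = {"/app/server": "my web app", "/etc/os-release": "distro B"}
cleanup = {"/bin/sh": None}          # an upper layer can also remove files

image = flatten([base, app_layer, cleanup])
print(image)
# {'/etc/os-release': 'distro B', '/app/server': 'my web app'}
```

Because each layer is immutable, a registry can store and transfer only the layers a client is missing, which is what makes container distribution efficient.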
In the context of this book, containers act as a common substrate to enable easily distributing an application that can be run consistently on one or multiple computers.
Cloud Providers

Cloud providers are companies that sell remote access to computing and storage services. Popular examples include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Compute and storage services include VMs, blob storage, databases, message queues, and more custom services. These companies rent access to the services by the hour or even on a finer-grained basis, making it easy for companies to get access to computing power when needed without having to plan and pay for datacenter space, hardware, and networking up front.

Cloud Provider Economics

Major cloud providers make much of their money selling or multiplexing access to underlying physical hardware—for example, they might buy a server at (amortized) $5,000/year, but then divide it into 20 slots, which they sell at $0.05/hour. For a company looking to rent half a server for testing for a few hours, this means access to $2,500+ of hardware for less than $1. Cloud providers are therefore attractive to many types of businesses where demand is not consistent day to day or hour to hour, and where access over the internet is acceptable. If the cloud provider can sell three-fourths of the machine (15 slots × $0.05 = $0.75/hour of income), they can make $0.75 × 24 × 365 = $6,570/year of income—not a bad return on investment.

While some cloud computing services are basically “rent a slice of hardware,” the cloud providers have also competed on developing more complex managed services, either hosted on individual VMs per customer or using a multitenant approach in which the servers themselves are able to separate the work and resources consumed by different customers within a single application process. It’s harder to build a multitenant application or service, but the benefit is that it becomes much easier to manage and share server resources among customers—and reducing the cost of running the service means better margins for cloud providers.
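The slot arithmetic above can be checked with a few lines; all of the figures are the illustrative numbers from the sidebar, not real list prices, and the 1.5-hour rental is my own example of “a few hours.”

```python
# Illustrative cloud-slot economics, using the sidebar's hypothetical figures.
server_cost_per_year = 5_000      # amortized hardware cost of one server
slots_per_server = 20
price_per_slot_hour = 0.05        # $/slot-hour

# A customer renting half a server (10 slots) for 1.5 hours:
customer_cost = 10 * price_per_slot_hour * 1.5
print(f"customer pays ${customer_cost:.2f} for access to ~$2,500 of hardware")

# Provider income if 15 of the 20 slots stay rented all year:
hourly_income = 15 * price_per_slot_hour          # $0.75/hour
yearly_income = hourly_income * 24 * 365          # $6,570/year
print(f"provider earns ${yearly_income:,.0f}/year on a ${server_cost_per_year:,} server")
```

The same multiplexing logic is what makes finer-grained (per-second or per-invocation) billing attractive to providers: the more customers share a machine, the higher its utilization and the better the margin.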
The serverless computing patterns described in this book were largely developed either by the cloud providers themselves or by customers who provided guidance and feedback on what would make services even more attractive (and thus worth a higher price premium). Regardless of whether you’re using a proprietary single-cloud service or self-hosting a solution (see the next sections for more details as well as Chapter 3), cloud providers can offer an attractive environment for provisioning and running serverless applications.
Kubernetes and Cloud Native

While cloud providers started by offering compute as virtualized versions of physical hardware (so-called infrastructure as a service, or IaaS), it soon became clear that much of the work of securing and maintaining networks and operating systems was repetitive and well suited to automation. An ideal solution would use containers as a repeatable way to deploy software, running on bulk-managed Linux operating systems with “just enough” networking to privately connect the containers without exposing them to the internet at large. I explore the requirements for this type of system in more detail in “Infrastructure Assumptions” in Chapter 3.

A variety of startups attempted to build solutions in this space with moderate success: Docker Swarm, Apache Mesos, and others. In the end, a technology introduced by Google and contributed to by Red Hat, IBM, and others won the day—Kubernetes. While Kubernetes may have had some technical advantages over the competing systems, much of its success can be attributed to the ecosystem that sprang up around the project.

Not only was Kubernetes donated to a neutral foundation (the Cloud Native Computing Foundation, or CNCF), but it was soon joined by other foundational projects including gRPC and observability frameworks, container packaging, database, reverse proxy, and service mesh projects. Despite being a vendor-neutral foundation, the CNCF and its members advertised and marketed this suite of technologies effectively to win attention and developer mindshare, and by 2019, it was largely clear that the Kubernetes + Linux combination would be the preferred infrastructure container platform for many organizations.

Since that time, Kubernetes has evolved to act as a general-purpose system for controlling infrastructure systems using a standardized and extensible API model.
The Kubernetes API model is based on custom resource definitions (CRDs) and infrastructure controllers, which observe the state of the world and attempt to adjust the world to match a desired state stored in the Kubernetes API. This process is known as reconciliation, and when properly implemented, it can lead to resilient and self-healing systems that are simpler to implement than a centrally orchestrated model.

The technologies related to Kubernetes and other CNCF projects are called “cloud native” technologies, whether they are implemented on VMs from a cloud provider or on physical or virtual hardware within a user’s own organization. The key features of these technologies are that they are explicitly designed to run on clusters of semi-reliable computers and networks and to gracefully handle individual hardware failures while remaining available for users. By contrast, many pre-cloud-native technologies were built on the premise of highly available and redundant individual hardware nodes where maintenance would generally result in planned downtime or an outage.
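The reconciliation pattern described above—observe actual state, compare it to desired state, and compute actions that nudge the world toward the goal—can be sketched in a few lines. This is a simplified model of the control-loop shape, not Kubernetes API code; real controllers watch the API server and act on live cluster objects.

```python
# Minimal sketch of a Kubernetes-style reconciliation step.
# `desired` plays the role of the spec stored in the API server;
# `actual` is the observed state of the world (e.g., running replicas).
def reconcile(desired, actual):
    """Return the actions needed to move `actual` toward `desired`."""
    actions = []
    for name, want in desired.items():
        have = actual.get(name, 0)
        if have < want:
            actions.append(("create", name, want - have))
        elif have > want:
            actions.append(("delete", name, have - want))
    for name, have in actual.items():
        if name not in desired:       # garbage-collect unwanted workloads
            actions.append(("delete", name, have))
    return actions

desired = {"frontend": 3, "worker": 1}
actual = {"frontend": 1, "old-job": 2}
print(reconcile(desired, actual))
# [('create', 'frontend', 2), ('create', 'worker', 1), ('delete', 'old-job', 2)]
```

Running this loop repeatedly is what makes the system self-healing: if a node dies and `actual` drops below `desired`, the next pass simply computes the replacement actions with no central orchestrator involved.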
Cloud-Hosted Serverless

While a rush has occurred in the last five years to rebrand many cloud-provider technologies as “serverless,” the term originally referred to a set of cloud-hosted technologies that simplified service deployment for developers. In particular, serverless allowed developers focused on mobile or web applications to implement a small amount of server-side logic without needing to understand, manage, or deploy application servers (hence the name). These technologies split into two main camps:

Backend as a service (BaaS)
   Structured storage services with a rich and configurable API for managing the stored state in a client. Generally, this API included a mechanism for storing small-to-medium JavaScript Object Notation (JSON) objects in a key-value store with the ability to send device push notifications when an object was modified on the server. The APIs also supported defining server-side object validation, automatic authentication and user management, and mobile-client-aware security rules. The most popular examples were Parse (acquired by Facebook, now Meta, in 2013 and became open source in 2017) and Firebase (acquired by Google in 2014). While handy for getting a project started with a small team, BaaS eventually ran into a few problems that caused it to lose popularity:

   • Most applications eventually outgrew the fixed functionality. While adopting BaaS might provide an initial productivity boost, it almost certainly guaranteed a future storage migration and rewrite if the app became popular.

   • Compared with other storage options, it was expensive and scaled poorly. While application developers didn’t need to manage servers, many of the implementation architectures required a single frontend server to avoid complex object-locking models.

Function as a service (FaaS)
   In this model, application developers wrote individual functions that would be invoked (called) when certain conditions were met.
In some cases, this was combined with BaaS to solve some of the fixed-function problems, but it could also be combined with scalable cloud-provider storage services to achieve much more scalable architectures. In the FaaS model, each function invocation is independent and may occur in parallel, even on different computers. Coordination among function invocations needs to be handled explicitly using transactions or locks, rather than being handled implicitly by the storage API as in BaaS.

The first widely popular implementation of FaaS was AWS Lambda, launched in 2014. Within a few years, most cloud providers offered similar competing services, though without any form of standard APIs.
Unlike IaaS, cloud-provider FaaS offerings are typically billed per invocation or per second of function execution, with a maximum duration of 5 to 15 minutes per invocation. Billing per invocation can result in very low costs for infrequently used functions, as well as favorable billing for bursty workloads that receive thousands of requests and are then idle for minutes or hours. To enable this billing model, cloud providers operate multitenant platforms that isolate each user’s functions from one another despite running on the same physical hardware within a few seconds of one another.

By around 2019, “serverless” had mostly come to be associated with FaaS, as BaaS had fallen out of favor. From that point, the serverless moniker began to be used for noncompute services, which worked well with the FaaS billing model: charging only for access calls and storage used, rather than for long-running server units. We’ll discuss the differences between traditional serverful and serverless computing in Chapter 1, but this new definition allows the notion of serverless to expand to storage systems and specialized services like video transcoding or AI image recognition.

While the definitions of “cloud provider” or “cloud native software” mentioned have been somewhat fluid over time, the serverless moniker has been especially fluid—a serverless enthusiast from 2014 would be quite confused by most of the services offered under that name eight years later.

One final note of disambiguation: 5G telecommunications networking has introduced the confusing term “network function as a service,” which is the idea that long-lived network routing behavior such as firewalls could run as a service on a virtualized platform that is not associated with any particular physical machine.

² The last part is one chapter because I couldn’t resist adding some historical footnotes in Chapter 11.
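The per-invocation billing model described above rewards bursty, infrequent workloads. A quick back-of-the-envelope sketch makes the point; every price here is a made-up round number for illustration, not any provider’s actual rate card.

```python
# Illustrative comparison: per-use FaaS billing vs. an always-on VM
# for a bursty workload. All prices are hypothetical.
faas_price_per_invocation = 0.0000002   # hypothetical $/invocation
faas_price_per_second = 0.00001         # hypothetical $/second of execution
vm_price_per_hour = 0.10                # hypothetical always-on VM rate

invocations_per_day = 10_000            # bursty, then idle for hours
avg_seconds_per_invocation = 0.2

faas_daily = invocations_per_day * (
    faas_price_per_invocation
    + avg_seconds_per_invocation * faas_price_per_second
)
vm_daily = vm_price_per_hour * 24       # billed whether idle or not

print(f"FaaS: ${faas_daily:.4f}/day  vs  always-on VM: ${vm_daily:.2f}/day")
```

The crossover works the other way for steady, high-volume traffic: once a function is busy most of the day, paying per second of execution approaches (or exceeds) the cost of the server it runs on.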
In this case, the term “network function” implies a substantially different architecture with long-lived but mobile servers rather than a serverless distributed architecture.

How This Book Is Organized

This book is divided into four main parts.² I tend to learn by developing a mental model of what’s going on, then trying things out to see where my mental model isn’t quite right, and finally developing deep expertise after extended usage. The parts correspond to this model, as shown in Table P-1.
Table P-1. Parts of the book

Part I, “The Theory of Serverless”
   Chapter 1: Definitions and descriptions of what serverless platforms offer.
   Chapter 2: Building by learning: a stateless serverless application on Knative.
   Chapter 3: A deep dive into implementing Knative, a serverless compute system.
   Chapter 4: This chapter frames the serverless movement in terms of business value.

Part II, “Designing with Serverless”
   Chapter 5: With an understanding of serverless under our belt, this chapter explains how to apply the patterns from Chapter 2 to existing applications.
   Chapter 6: Events are a common pattern for orchestrating stateless applications. This chapter explains various patterns of event-driven architecture.
   Chapter 7: While Chapter 6 covers connecting events to an application, this chapter focuses specifically on building a serverless application that natively leverages events.
   Chapter 8: After four chapters of cheerleading for serverless, this chapter focuses on patterns that can frustrate a serverless application architecture.

Part III, “Living with Serverless”
   Chapter 9: Following Chapter 8’s warnings about serverless antipatterns, this chapter chronicles operational obstacles to serverless nirvana.
   Chapter 10: While Chapter 9 focuses on the spectacular meltdowns, this chapter covers debugging tools needed to solve regular, everyday application bugs.

Part IV, “A Brief History of Serverless”
   Chapter 11: Historical context for the development of the serverless compute abstractions.

Conventions Used in This Book

The following typographical conventions are used in this book:

Italic
   Indicates new terms, URLs, email addresses, filenames, and file extensions.

Constant width
   Used for program listings, as well as within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords.
Constant width bold
   Shows commands or other text that should be typed literally by the user.

Constant width italic
   Shows text that should be replaced with user-supplied values or by values determined by context.
This element signifies a tip or suggestion.

This element signifies a general note.

This element indicates a warning or caution.

Using Code Examples

Supplemental material (code examples, exercises, etc.) is available for download at https://oreil.ly/BSAK-supp.

This book is here to help you get your job done. In general, if example code is offered with this book, you may use it in your programs and documentation. You do not need to contact us for permission unless you’re reproducing a significant portion of the code. For example, writing a program that uses several chunks of code from this book does not require permission. Selling or distributing examples from O’Reilly books does require permission. Answering a question by citing this book and quoting example code does not require permission. Incorporating a significant amount of example code from this book into your product’s documentation does require permission.

We appreciate, but generally do not require, attribution. An attribution usually includes the title, author, publisher, and ISBN. For example: “Building Serverless Applications on Knative by Evan Anderson (O’Reilly). Copyright 2024 Evan Anderson, 978-1-098-14207-0.”

If you feel your use of code examples falls outside fair use or the permission given above, feel free to contact us at permissions@oreilly.com.
O’Reilly Online Learning

For more than 40 years, O’Reilly Media has provided technology and business training, knowledge, and insight to help companies succeed.

Our unique network of experts and innovators share their knowledge and expertise through books, articles, and our online learning platform. O’Reilly’s online learning platform gives you on-demand access to live training courses, in-depth learning paths, interactive coding environments, and a vast collection of text and video from O’Reilly and 200+ other publishers. For more information, visit https://oreilly.com.

How to Contact Us

Please address comments and questions concerning this book to the publisher:

O’Reilly Media, Inc.
1005 Gravenstein Highway North
Sebastopol, CA 95472
800-889-8969 (in the United States or Canada)
707-829-7019 (international or local)
707-829-0104 (fax)
support@oreilly.com
https://www.oreilly.com/about/contact.html

We have a web page for this book, where we list errata, examples, and any additional information. Access this page at https://oreil.ly/BuildingServerlessAppsKnative.

For news and information about our books and courses, visit https://oreilly.com.

Find us on LinkedIn: https://linkedin.com/company/oreilly-media.
Follow us on Twitter: https://twitter.com/oreillymedia.
Watch us on YouTube: https://youtube.com/oreillymedia.