Async Rust: Unleashing the Power of Fearless Concurrency (Maxwell Flitton, Caroline Morton)

Authors: Maxwell Flitton, Caroline Morton


Already popular among programmers for its memory safety and speed, the Rust programming language is also valuable for asynchrony. This practical book shows you how asynchronous Rust can help you solve problems that require multitasking. You'll learn how to apply async programming to real problems, and you'll dive deeper into async runtimes, implementing your own ways for an async runtime to handle incoming tasks. Authors Maxwell Flitton and Caroline Morton also show you how to use the Tokio software library to help you manage incoming traffic, communicate between threads with shared memory and channels, and design a range of complex solutions using actors. You'll also learn to perform unit and end-to-end tests on a Rust async system.

With this book, you'll learn:

• How Rust approaches async programming
• How coroutines relate to async Rust
• Reactive programming and how to implement pub/sub in async Rust
• How to solve problems using actors
• How to customize Tokio to gain control over how tasks are processed
• Async Rust design patterns
• How to build an async TCP server using just the standard library
• How to unit test async Rust

By the end of the book, you'll be able to implement your own async TCP server from the standard library alone, with zero external dependencies, and unit test your async code.


ISBN: 978-1-098-14909-3  US $59.99  CAN $74.99
RUST PROGRAMMING

Maxwell Flitton is the author of Rust Web Programming and other technical books. He specializes in building real-time systems in Rust for healthcare and financial applications. Caroline Morton is a doctor-turned-software-engineer who has developed innovative Rust-based solutions for medical simulation software. She is currently researching synthetic health data generation using Rust.

Already popular among programmers for its memory safety and speed, asynchronous Rust can help you solve problems that require multitasking. This practical book shows you how to apply async programming to solve problems and dive deeper into async runtimes, implementing your own ways to handle incoming tasks and get the specific control you want for your system. Authors Maxwell Flitton and Caroline Morton also show you how to use the Tokio software library to help you manage incoming traffic, communicate between threads with shared memory and channels, and design a range of complex solutions using actors. You’ll also learn to perform unit and end-to-end tests on a Rust async system.

• Discover how Rust approaches async programming and how coroutines relate to async Rust
• Learn about reactive programming and how to implement pub/sub in async Rust
• Solve problems using actors and customize Tokio to gain control over task processing
• Build an async TCP server using just the standard library
• And more

“For those who sweat while climbing the Rust hill, sweat no more. This book will guide you toward the heights of understanding. It will help you build the confidence and foundation to walk the rest of your async Rust journey at your own pace.”
Glen De Cauwsemaecker, Cofounder, Plabayo
Async Rust
by Maxwell Flitton and Caroline Morton

Copyright © 2025 Maxwell Flitton and Caroline Morton. All rights reserved. Printed in the United States of America.

Published by O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.

O’Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://oreilly.com). For more information, contact our corporate/institutional sales department: 800-998-9938 or corporate@oreilly.com.

Acquisitions Editor: Brian Guerin
Development Editor: Melissa Potter
Production Editor: Jonathon Owen
Copyeditor: Sharon Wilkey
Proofreader: Heather Walley
Indexer: Sue Klefstad
Interior Designer: David Futato
Cover Designer: Karen Montgomery
Illustrator: Kate Dullea

November 2024: First Edition
ISBN: 978-1-098-14909-3 [LSI]

Revision History for the First Edition
2024-11-12: First Release

See http://oreilly.com/catalog/errata.csp?isbn=9781098149093 for release details.

The O’Reilly logo is a registered trademark of O’Reilly Media, Inc. Async Rust, the cover image, and related trade dress are trademarks of O’Reilly Media, Inc.

The views expressed in this work are those of the authors and do not represent the publisher’s views. While the publisher and the authors have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the authors disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.
Table of Contents

Preface  vii

1. Introduction to Async  1
   What Is Async?  2
   Introduction to Processes  5
   What Are Threads?  10
   Where Can We Utilize Async?  16
   Using Async for File I/O  17
   Improving HTTP Request Performance with Async  20
   Summary  22

2. Basic Async Rust  23
   Understanding Tasks  23
   Futures  29
   Pinning in Futures  30
   Context in Futures  32
   Waking Futures Remotely  35
   Sharing Data Between Futures  37
   High-Level Data Sharing Between Futures  41
   How Are Futures Processed?  43
   Putting It All Together  44
   Summary  48

3. Building Our Own Async Queues  49
   Building Our Own Async Queue  50
   Increasing Workers and Queues  57
   Passing Tasks to Different Queues  59
   Task Stealing  61
   Refactoring Our spawn_task Function  63
   Creating Our Own Join Macro  65
   Configuring Our Runtime  66
   Running Background Processes  68
   Summary  69

4. Integrating Networking into Our Own Async Runtime  71
   Understanding Executors and Connectors  72
   Integrating hyper into Our Async Runtime  73
   Building an HTTP Connection  74
   Implementing the Tokio AsyncRead Trait  77
   Implementing the Tokio AsyncWrite Trait  79
   Connecting and Running Our Client  81
   Introducing mio  82
   Polling Sockets in Futures  83
   Sending Data over the Socket  86
   Summary  88

5. Coroutines  89
   Introducing Coroutines  90
   What Are Coroutines?  90
   Why Use Coroutines?  91
   Generating with Coroutines  94
   Implementing a Simple Generator in Rust  95
   Stacking Our Coroutines  96
   Calling a Coroutine from a Coroutine  98
   Mimicking Async Behavior with Coroutines  100
   Controlling Coroutines  104
   Testing Coroutines  109
   Summary  113

6. Reactive Programming  115
   Building a Basic Reactive System  115
   Defining Our Subjects  116
   Building Our Display Observer  118
   Building Our Heater and Heat-Loss Observer  121
   Getting User Input via Callbacks  124
   Enabling Broadcasting with an Event Bus  127
   Building Our Event Bus Struct  128
   Building Our Event Bus Handle  131
   Interacting with Our Event Bus via Async Tasks  132
   Summary  135

7. Customizing Tokio  137
   Building a Runtime  137
   Processing Tasks with Local Pools  142
   Getting Unsafe with Thread Data  147
   Graceful Shutdowns  149
   Summary  154

8. The Actor Model  155
   Building a Basic Actor  155
   Working with Actors Versus Mutexes  157
   Implementing the Router Pattern  160
   Implementing State Recovery for Actors  165
   Creating Actor Supervision  170
   Summary  176

9. Design Patterns  179
   Building an Isolated Module  179
   Waterfall Design Pattern  185
   The Decorator Pattern  186
   The State Machine Pattern  190
   The Retry Pattern  193
   The Circuit-Breaker Pattern  194
   Summary  196

10. Building an Async Server with No Dependencies  197
    Setting Up the Basics  197
    Building Our std Async Runtime  199
    Building Our Waker  200
    Building Our Executor  202
    Running Our Executor  205
    Building Our Sender  207
    Building Our Receiver  208
    Building Our Sleep  209
    Building Our Server  210
    Accepting Requests  211
    Handling Requests  214
    Building Our Async Client  215
    Summary  216

11. Testing  219
    Performing Basic Sync Testing  219
    Mocking Async Code  222
    Testing for Deadlocks  224
    Testing for Race Conditions  227
    Testing Channel Capacity  229
    Testing Network Interactions  231
    Fine-Grained Future Testing  233
    Summary  235

Index  237
Preface

What Is Async Rust?

Asynchronous programming in Rust, often referred to as Async Rust, is a powerful paradigm that allows developers to write concurrent code that is more efficient and scalable. In contrast to traditional synchronous programming, where tasks are executed one after another, async programming enables tasks to run concurrently, which is particularly useful when dealing with I/O-bound operations like network requests or file handling. This approach allows for the efficient use of system resources and can lead to significant performance improvements in applications that need to handle multiple tasks at once without the need for additional cores.

Rust’s type system and ownership model provide the safety guarantees that we all love. However, mastering async Rust requires an understanding of specific concepts, such as futures, pinning, and executors. This book will guide you through these concepts, equipping you with the knowledge needed to apply async to your own projects and programs.

Who Is This Book For?

This book is aimed at intermediate Rust developers who want to learn how to improve their applications and programs by using the range of asynchronous functionality available to them. If you are new to Rust or programming in general, this book may not be the best starting point. Instead, we recommend the following resources for learning Rust from the ground up:

• The Rust Programming Language by Steve Klabnik and Carol Nichols (No Starch Press, 2022)
• Rust Web Programming by Maxwell Flitton (Packt Publishing, 2023)
• Rust by Example, a collection of online runnable examples written by the Rust community

Overview of the Chapters

Chapter 1, “Introduction to Async”, gives a high-level overview of what async programming is and how it can be useful in particular types of programs—for example, for I/O operations. This chapter also explores threads and processes to explain the context in which async programming is implemented in relation to the operating system.

Chapter 2, “Basic Async Rust”, delves into the basics of async programming in Rust, looking at what a future is and why pinning and context are needed to implement async Rust. We’ll finish off with examples of basic data sharing between futures. This chapter will enable you to write basic async futures, implement the Future trait, and run async code.

Chapter 3, “Building Our Own Async Queues”, puts together the information from the previous two chapters to create our own queue, where we implement passing tasks to different queues and stealing tasks. We create our own join macros and do some basic configuration of our runtime. This chapter shows how async tasks are passed through an async runtime and processed.

Chapter 4, “Integrating Networking into Our Own Async Runtime”, is a relatively complicated chapter that drills down into what an executor and connector are and how to create a program that can do networking. We also get into mio and how to use sockets. This builds on the previous chapter, but feel free to skip this chapter if it is too challenging early on and come back to it later. This chapter shows how to integrate networking primitives into an async runtime.

Chapter 5, “Coroutines”, introduces coroutines and how to implement them in Rust. We draw the parallels between async/await and coroutines and implement a basic generator in Rust. This chapter also shows how async tasks are essentially coroutines by building async functionality without any extra threads.
Chapter 6, “Reactive Programming”, introduces reactive programming within the context of async Rust. We look at a heater system and build a simple example of a reactive system. We end the chapter learning how to create an event bus.

Chapter 7, “Customizing Tokio”, does what it says on the tin. This chapter guides you through customizing your Tokio setup to solve your particular problem. We cannot cover the whole of Tokio, which is an extensive library, but we do go through building a runtime, local pools, and graceful shutdowns. With this chapter, you get to achieve fine-grained control of how your Tokio async tasks are processed, including pinning async tasks to specific threads so those tasks can reference the state of that thread when being polled to completion.

Chapter 8, “The Actor Model”, illustrates the power of async as we build our own actor system. We look at the differences between actors and mutexes and why actors are a helpful design pattern to know. We build a basic key-value storage mechanism using actors so you can get a feel for the design and monitoring of actors.

Chapter 9, “Design Patterns”, is a brief overview of some of the common design patterns that work well within an async system. We take an isolated modular approach and apply various design patterns to highlight their benefits and pitfalls. This chapter enables you to integrate async code with existing synchronous codebases.

Chapter 10, “Building an Async Server with No Dependencies”, brings together a lot of the content from the preceding chapters. We get into building our own async system, including our own executor and waker, using the standard library only. It is always helpful to be able to build from scratch if needed, and this chapter highlights some of the benefits and downsides of using a library.

Chapter 11, “Testing”, introduces testing an async system. We look at mocking, standard testing, and Tokio testing capabilities. We also consider what we are testing for—deadlocks and data races—so we can write clearer and more useful tests.

Conventions Used in This Book

The following typographical conventions are used in this book:

Italic
Indicates new terms, URLs, email addresses, filenames, and file extensions.

Constant width
Used for program listings, as well as within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords.

This element signifies a tip or suggestion.

This element signifies a general note.
This element indicates a warning or caution.

O’Reilly Online Learning

For more than 40 years, O’Reilly Media has provided technology and business training, knowledge, and insight to help companies succeed. Our unique network of experts and innovators share their knowledge and expertise through books, articles, and our online learning platform. O’Reilly’s online learning platform gives you on-demand access to live training courses, in-depth learning paths, interactive coding environments, and a vast collection of text and video from O’Reilly and 200+ other publishers. For more information, visit https://oreilly.com.

How to Contact Us

Please address comments and questions concerning this book to the publisher:

O’Reilly Media, Inc.
1005 Gravenstein Highway North
Sebastopol, CA 95472
800-889-8969 (in the United States or Canada)
707-827-7019 (international or local)
707-829-0104 (fax)
support@oreilly.com
https://oreilly.com/about/contact.html

We have a web page for this book, where we list errata, examples, and any additional information. You can access this page at https://oreil.ly/async-rust.

For news and information about our books and courses, visit https://oreilly.com.

Find us on LinkedIn: https://linkedin.com/company/oreilly-media.
Watch us on YouTube: https://youtube.com/oreillymedia.
Acknowledgments

We have had a huge amount of support from so many people through the process of writing this book. Thank you so much to everyone who helped us to make it a reality. We would like to give an especially big thanks to the following people:

First, the people of O’Reilly—in particular, Melissa Potter, Jonathon Owen, and Brian Guerin—for your feedback and encouragement throughout the whole book cycle. Thank you to our technical reviewers, Glen De Cauwsemaecker, Allen Wyma, and Julio Merino. Thanks for the hours you put in to make this book better for our audience.

We would both like to thank the people at SurrealDB who have supported both of us during the writing of this book. In particular, thank you to Tobie Morgan Hitchcock, Jaime Morgan Hitchcock, Kirstie Marsh, Lizzie Holmes, Charli Baptie, Ned Rudkins-Stow, Meriel Cunningham, and many others for support on this journey. Thank you to Harry Tsiligiannis, an amazing DevOps engineer who took the time to teach both of us about how to build a robust system. We learned a lot from you, and your knowledge has immensely improved our programming skills and our approach to problems.

Maxwell

My output would not have been possible without the extensive support I get from my family. My wife, Melanie Zhang, has been an amazing supportive partner in this journey, alongside raising our son, Henry Flitton, together. My mother, Allison Barson, and Mel’s mother, Ping Zhang, have also been incredible in the amount of support they have given me when writing my books. They are mentioned here because without them, I would not have been able to write this book.

The engineering team at SurrealDB has been amazing, and I have learned so much from them. Emmanuel Keller has never ceased to teach me something new and has not held back on the time spent to help me. Hugh Kaznowski has always been open in bouncing around ideas over coffee and has pushed me to think deeply about approaches.
Mees Delzenne has shown me innovative ways to structure Rust code. Alexander Fridriksson and Obinna Ekwuno have shown me again and again how to structure a message and convey information to developers. Ignacio Paz has been invaluable at showing me how to collaborate with people who have different perspectives and priorities. Corrado Bogni has never hesitated to show me how to carry out a task in style. Ned Rudkins-Stow taught me how to keep refining something until it’s ready to be presented. Micha de Vries has repeatedly demonstrated that you can never have too much energy when approaching work. Jaime Morgan Hitchcock has shown me that sleep is essential for a human to function properly and succeed; in short, don’t follow Jaime’s example. Salvador Girones Gil has taught me that if you break a huge task into small jobs, you can achieve the unthinkable in a short time, as he has led the cloud development at SurrealDB at a shocking pace. Raphael Darley has been a reminder that you can juggle multiple things at once, as he studies computer science full-time at Oxford University, while working with us at SurrealDB and running his own company. I’m also grateful to Mark Gyles and Paz Macdonald, who remind me of the human element when communicating and engaging with developers and communities. Finally, Tobie Morgan Hitchcock has been an inspiration in coming up with an idea, fleshing out the details, and delivering on that idea as he turned his Oxford University thesis into the basis of SurrealDB.

And a special thanks to Caroline Morton for always having my back on all the projects we tackle together, and Professor Christos Bergeles for supporting my studies and projects in bioengineering with Rust.

Caroline

To my mum, Margaret Morton, and her partner, Jacques Lumb, thank you for your incredible support and constant care throughout this process. Your kindness has been a source of strength for me.

This book would not have been possible without the companionship and encouragement of my wonderful friends. In no particular order, I am deeply grateful to Tabitha Grimshaw and Emma Laurence, who have cheered me on and taken me out for much-needed breaks. I would also like to thank Professor Rohini Mathur and Dr. Kate Mansfield for their company, guidance, and support, and Dr. Marie Spreckley for being a trusted sounding board. A special thank you to Dr. Rob Johnson, a great longtime friend, who has always encouraged me to think clearly and articulate my thoughts. Dr. Jo Horsburgh, thank you for your unwavering support; and Kristen Petit, one of my oldest friends, your boundless enthusiasm has always lifted my spirits.
I am immensely grateful to Professor Sue Smith, who took a chance and gave me my first academic job, setting me on this path. You embody what a great leader should be—kind, knowledgeable, and without ego.

This book is dedicated to my dad, Michael Frank Morton, who was taken from us too soon. I also want to remember my dear aunt, Marilyn Bickerton, who we lost too early. In her memory, I encourage everyone to support ALS research.

Finally, thank you to Maxwell for inviting me on this journey! It would not have been possible without you. Your depth of knowledge and expertise continue to surprise and amaze me. Thank you for all your hard work!
CHAPTER 1
Introduction to Async

For years, software engineers have been spoiled by the relentless increase in hardware performance. Phrases like “just chuck more computing power at it” or “write time is more expensive than read time” have become popular one-liners when justifying using a slow algorithm, rushed approach, or slow programming language. However, at the time of this writing, multiple microprocessor manufacturers have reported that semiconductor advancement has slowed since 2010, leading to the controversial statement from NVIDIA CEO Jensen Huang in 2022 that “Moore’s law is dead.” With the increased demand on software and the increasing number of I/O network calls in systems such as microservices, we need to be more efficient with our resources. This is where async programming comes in.

With async programming, we do not need to add another core to the CPU to get performance gains. Instead, with async, we can effectively juggle multiple tasks on a single thread if there is some dead time in those tasks, such as waiting for a response from a server. We live our lives in an async way. For instance, when we put the laundry into the washing machine, we do not sit still, doing nothing, until the machine has finished. Instead, we do other things. If we want our computer and programs to live an efficient life, we need to embrace async programming.

However, before we roll up our sleeves and dive into the weeds of async programming, we need to understand where this topic sits in the context of our computers. This chapter provides an overview of how threads and processes work, demonstrating the effectiveness of async programming in I/O operations. After reading this chapter, you should understand what async programming is at a high level, without knowing the intricate details of an async program.
You will also understand some basic concepts around threads and Rust; these concepts pop up in async programming due to async runtimes using threads to execute async tasks. You should be ready to explore the details of how async programs work in the following chapter, which focuses on more concrete examples of async programming.

If you are familiar with processes, threads, and sharing data between them, feel free to skip this chapter. In Chapter 2, we cover async-specific concepts like futures, tasks, and how an async runtime executes tasks.

What Is Async?

When we use a computer, we expect it to perform multiple tasks at the same time. Our experience would be pretty bad otherwise. However, think about all the tasks that a computer does at one time. As we write this book, we’ve clicked onto the activity monitor of our Apple M1 MacBook with eight cores. The laptop at one point was running 3,118 threads and 453 processes while only using 7% of the CPU. Why are there so many processes and threads? The reason is that there are multiple running applications, open browser tabs, and other background processes. So how does the laptop keep all these threads and processes running at the same time?

Here’s the thing: the computer is not running all 3,118 threads and 453 processes at the same time. The computer needs to schedule resources. To demonstrate the need for scheduling resources, we can run some computationally expensive code to see how the activity monitor changes. To stress our CPU, we employ a recursive calculation like this Fibonacci number calculation:

```rust
fn fibonacci(n: u64) -> u64 {
    if n == 0 || n == 1 {
        return n;
    }
    fibonacci(n - 1) + fibonacci(n - 2)
}
```

We can then spawn eight threads and calculate the 4,000th number with the following code:

```rust
use std::thread;

fn main() {
    let mut threads = Vec::new();

    for i in 0..8 {
        let handle = thread::spawn(move || {
            // Naive recursion for n = 4000 will not complete in any
            // reasonable time; the point here is only to saturate the CPU.
            let result = fibonacci(4000);
            println!("Thread {} result: {}", i, result);
        });
        threads.push(handle);
    }
    for handle in threads {
        handle.join().unwrap();
    }
}
```

If we then run this code, our CPU usage jumps to 99.95%, but our processes and threads do not change much. From this, we can deduce that most of these processes and threads are not using CPU resources all the time.

Modern CPU design is very nuanced. What we need to know is that a portion of CPU time is allocated when a thread or process is created. Our task in the created thread or process is then scheduled to run on one of the CPU cores. The process or thread runs until it is interrupted or until it voluntarily yields the CPU. Once the interruption has occurred, the CPU saves the state of the process or thread, and then the CPU switches to another process or thread.

Now that you understand at a high level how the CPU interacts with processes and threads, let’s see basic asynchronous code in action. The specifics of the asynchronous code are covered in the following chapter, so right now it’s not important to understand exactly how every line of code works but instead to appreciate how asynchronous code is utilizing CPU resources. First, we need the following dependencies:

```toml
[dependencies]
reqwest = "0.11.14"
tokio = { version = "1.26.0", features = ["full"] }
```

The Rust library Tokio gives us a high-level abstraction of an async runtime, and reqwest enables us to make async HTTP requests. HTTP requests are a good, simple real-world example of using async because of the latency through the network when making a request to a server. The CPU doesn’t need to do anything when waiting on a network response.
We can time how long it takes to make a simple HTTP request when using Tokio as the async runtime with this code:

```rust
use std::time::Instant;
use reqwest::Error;

#[tokio::main]
async fn main() -> Result<(), Error> {
    let url = "https://jsonplaceholder.typicode.com/posts/1";
    let start_time = Instant::now();
    let _ = reqwest::get(url).await?;
    let elapsed_time = start_time.elapsed();
    println!("Request took {} ms", elapsed_time.as_millis());
    Ok(())
}
```
Your time may vary, but at the time of this writing, it took roughly 140 ms to make the request. We can increase the number of requests by merely copying and pasting the request another three times, like so:

```rust
let first = reqwest::get(url);
let second = reqwest::get(url);
let third = reqwest::get(url);
let fourth = reqwest::get(url);

let first = first.await?;
let second = second.await?;
let third = third.await?;
let fourth = fourth.await?;
```

Running our program again gave us 656 ms. This makes sense, since we have increased the number of requests by four. If our time was less than 140 × 4, the result would not make sense, because the increase in total time would not be proportional to increasing the number of requests by four.

Note that although we are using async syntax, we have essentially just written synchronous code. This means we are executing each request after the previous one has finished. To make our code truly asynchronous, we can join the tasks together and have them running at the same time with the following code:

```rust
let (_, _, _, _) = tokio::join!(
    reqwest::get(url),
    reqwest::get(url),
    reqwest::get(url),
    reqwest::get(url),
);
```

Here we are using tokio::join!, a macro provided by Tokio. This macro enables multiple tasks to run concurrently. Unlike the previous example, where requests were awaited one after another, this approach allows them to progress simultaneously. As expected, running this code gives us a duration of 137 ms. That’s a 4.7-times increase in the speed of our program without increasing the number of threads!

This is essentially async programming. Using async programming, we can free up CPU resources by not blocking the CPU with tasks that can wait. See Figure 1-1. To help you understand the context around async programming, we need to briefly explore how processes and threads work. While we will not be using processes in asynchronous programming, it is important to understand how they work and communicate with each other in order to give us context for threads and asynchronous programming.
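The timeline contrast shown in Figure 1-1 can also be reproduced without any network access or external crates. The following is a minimal sketch using only std threads, with thread::sleep standing in for request latency (the 100 ms delay and the four-task count are illustrative choices, not the chapter's measurements):

```rust
use std::thread;
use std::time::{Duration, Instant};

// Stand-in for a network request: the "latency" is pure waiting,
// which is exactly what an HTTP call looks like from the CPU's view.
fn simulated_request() {
    thread::sleep(Duration::from_millis(100));
}

fn main() {
    // Blocking version: each "request" starts only after the previous
    // one finishes, so the total is roughly four latency periods.
    let start = Instant::now();
    for _ in 0..4 {
        simulated_request();
    }
    let sequential = start.elapsed();

    // Concurrent version: all four wait at the same time, so the total
    // is roughly one latency period.
    let start = Instant::now();
    let handles: Vec<_> = (0..4).map(|_| thread::spawn(simulated_request)).collect();
    for handle in handles {
        handle.join().unwrap();
    }
    let concurrent = start.elapsed();

    println!("sequential: {} ms", sequential.as_millis());
    println!("concurrent: {} ms", concurrent.as_millis());
    assert!(concurrent < sequential);
}
```

Note that the overlap here comes from spawning threads rather than from an async runtime, so this sketch illustrates the timeline only, not the mechanism; the rest of the book shows how async achieves the same overlap without one thread per task.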
Figure 1-1. Blocking synchronous timeline compared to asynchronous timeline

Introduction to Processes

Standard async programming in Rust does not use multiprocessing; however, we can achieve async behavior by using multiprocessing. For this to work, our async systems must sit within a process. Let’s think about the database PostgreSQL. It spawns a process for every connection made. These processes are single-threaded. If you have ever looked at Rust web frameworks, you might have noticed that the functions defining the endpoints of the Rust web servers are async functions, which means that processes are not spawned per connection for Rust servers. Instead, the Rust web server usually has a thread pool, and incoming HTTP requests are async tasks that are run on this thread pool. We cover how async tasks interact with a thread pool in Chapter 3. For now, let’s focus on where processes fit within async programming.

A process is an abstraction provided by an operating system that is executed by the CPU. Processes can be run by a program or application. The instructions of the program are loaded into memory, and the CPU executes these instructions in a sequence to perform a task or set of tasks. Processes, like threads, can listen for external inputs (like those from a user via a keyboard or data from other processes) and can generate output, as seen in Figure 1-2.
Figure 1-2. How processes relate to a program

Processes differ from threads in that each process consists of its own memory space, and this is an essential part of how the CPU is managed because it prevents data from being corrupted or bleeding over into other processes. A process has its own ID, called a process ID (PID), which can be monitored and controlled by the computer’s operating system. Many programmers have used PIDs to kill stalled or faulty programs by using the command kill PID without realizing exactly what this PID represents. A PID is a unique identifier that the OS assigns to a process. It allows the OS to keep track of all the resources associated with the process, such as memory usage and CPU time.

Going back to PostgreSQL, while we must acknowledge that historical reasons do play a role in spawning a process per connection, this approach has some advantages. If a process is spawned per connection, then we have true fault isolation and memory protection per connection. This means that a connection has zero chance of accessing or corrupting the memory of another connection. Spawning a process per connection also has no shared state and is a simpler concurrency model.

However, shared state can lead to complications. For instance, if two async tasks representing individual connections each rely on data from shared memory, we must introduce synchronization primitives such as locks. These synchronization primitives are at risk of adding complications such as deadlocks, which can end up grinding all connections that are relying on that lock to a halt. These problems can be hard to debug, and we cover concepts such as testing for deadlocks in Chapter 11. The simpler concurrency model of processes reduces the risk of sync complications, but the risk is not completely eliminated; acquiring external locks such as file locks can still cause complications regardless of state isolation.

The state isolation of processes can also protect against memory bugs. For instance, in a language like C or C++, we could have code that does not deallocate the memory afterwards, resulting in
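The PIDs discussed above can be observed directly from Rust. The following is a minimal sketch, assuming a Unix-like system where the sleep command is available (the child command is purely illustrative):

```rust
use std::process::Command;

fn main() {
    // The OS assigned this program its own PID when it was launched.
    println!("our PID: {}", std::process::id());

    // Spawning a child creates a new process, with its own PID and its
    // own isolated memory space.
    let mut child = Command::new("sleep")
        .arg("1")
        .spawn()
        .expect("failed to spawn child process");

    // This is the number you could pass to `kill` from a terminal.
    println!("child PID: {}", child.id());

    // Wait for the child to exit so the OS can release its resources.
    let status = child.wait().expect("failed to wait on child");
    println!("child exited with: {}", status);
}
```

Running this more than once typically prints different PIDs, since the OS hands out a fresh identifier per process; this same mechanism underlies PostgreSQL's process-per-connection model described above.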