Spark in Action, Second Edition

Author: Jean-Georges Perrin

Technology

The Spark distributed data processing platform provides an easy-to-implement tool for ingesting, streaming, and processing data from any source. In Spark in Action, Second Edition, you'll learn to take advantage of Spark's core features and incredible processing speed, with applications including real-time computation, lazy evaluation, and machine learning. Unlike many Spark books written for data scientists, Spark in Action, Second Edition is designed for data engineers and software engineers who want to master data processing using Spark without having to learn a complex new ecosystem of languages and tools. You'll instead learn to apply your existing Java and SQL skills to take on practical, real-world challenges.

Key features:
· Lots of examples based on the Spark Java APIs, using real-life datasets and scenarios
· Examples based on Spark v2.3
· Ingestion through files, databases, and streaming
· Building custom ingestion processes
· Querying distributed datasets with Spark SQL

For beginning to intermediate developers and data engineers comfortable programming in Java. No experience with functional programming, Scala, Spark, Hadoop, or big data is required.

About the technology: Spark is a powerful general-purpose analytics engine that can handle massive amounts of data distributed across clusters with thousands of servers. Optimized to run in memory, this impressive framework can process data up to 100x faster than most Hadoop-based systems.

About the author: An experienced consultant and entrepreneur passionate about all things data, Jean-Georges Perrin was the first IBM Champion in France, an honor he has now held for ten consecutive years. Jean-Georges has managed many teams of software and data engineers.
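As a taste of the Java-first approach the description promises, here is a minimal sketch of the pattern the book teaches: ingest a CSV file into a dataframe, then query it with Spark SQL. The file path and column names are hypothetical, invented for illustration rather than taken from the book's datasets.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class FirstIngestionApp {

  public static void main(String[] args) {
    // A local session; on a real cluster, the master URL would come
    // from spark-submit rather than being hardcoded here.
    SparkSession spark = SparkSession.builder()
        .appName("CSV ingestion and Spark SQL")
        .master("local[*]")
        .getOrCreate();

    // Ingest a CSV file into a dataframe (a Dataset<Row> in Java).
    // data/restaurants.csv and its columns are hypothetical.
    Dataset<Row> df = spark.read()
        .format("csv")
        .option("header", true)
        .load("data/restaurants.csv");

    // Register the dataframe as a view and query it with plain SQL.
    df.createOrReplaceTempView("restaurants");
    spark.sql("SELECT county, COUNT(*) AS cnt "
            + "FROM restaurants GROUP BY county ORDER BY cnt DESC")
        .show(5);

    spark.stop();
  }
}
```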

📄 File Format: PDF
💾 File Size: 36.0 MB

📄 Text Preview (First 20 pages)


📄 Page 1
Jean-Georges Perrin
Foreword by Rob Thomas
SECOND EDITION
With examples in Java, Python, and Scala
Covers Apache Spark 3
MANNING
📄 Page 2
Lexicon
Summary of the Spark terms involved in the deployment process

Application: Your program that is built on and for Spark. Consists of a driver program and executors on the cluster.
Application JAR: A Java archive (JAR) file containing your Spark application. It can be an uber JAR including all the dependencies.
Cluster manager: An external service for acquiring resources on the cluster. It can be the Spark built-in cluster manager. More details in chapter 6.
Deploy mode: Distinguishes where the driver process runs. In cluster mode, the framework launches the driver inside the cluster. In client mode, the submitter launches the driver outside the cluster. You can find out which mode you are in by calling the deployMode() method, which returns a read-only property.
Driver program: The process running the main() function of the application and creating the SparkContext. Everything starts here.
Executor: A process launched for an application on a worker node. The executor runs tasks and keeps data in memory or in disk storage across them. Each application has its own executors.
Job: A parallel computation consisting of multiple tasks that gets spawned in response to a Spark action (for example, save() or collect()); check out appendix I.
Stage: Each job gets divided into smaller sets of tasks, called stages, that depend on each other (similar to the map and reduce stages in MapReduce).
Task: A unit of work that will be sent to one executor.
Worker node: Any node that can run application code in the cluster.

[Figure: Apache Spark components. Your code ships as an application JAR; the driver program (SparkSession/SparkContext) can access its deployment mode and talks to the cluster manager, which launches executors on worker nodes. Each executor has a cache and runs tasks. Jobs are parallel tasks triggered after an action is called, and jobs are split into stages.]
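The deployMode() note in the lexicon is easy to check from code. The following is a minimal Java sketch (assuming a plain local run, where Spark reports client mode; on a cluster the value depends on how spark-submit launched the driver):

```java
import org.apache.spark.sql.SparkSession;

public class DeployModeCheck {

  public static void main(String[] args) {
    // local[*] is an assumption for a workstation run; a submitted
    // application would get its master from spark-submit instead.
    SparkSession spark = SparkSession.builder()
        .appName("Deploy mode check")
        .master("local[*]")
        .getOrCreate();

    // deployMode() exposes the read-only property described above:
    // "client" when the driver runs outside the cluster, "cluster"
    // when the framework launches it inside the cluster.
    String mode = spark.sparkContext().deployMode();
    System.out.println("Deploy mode: " + mode);

    spark.stop();
  }
}
```

Run locally, this prints Deploy mode: client.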
📄 Page 3
Spark in Action
SECOND EDITION
JEAN-GEORGES PERRIN
FOREWORD BY ROB THOMAS
MANNING
SHELTER ISLAND
📄 Page 4
For online information and ordering of this and other Manning books, please visit www.manning.com. The publisher offers discounts on this book when ordered in quantity. For more information, please contact Special Sales Department, Manning Publications Co., 20 Baldwin Road, PO Box 761, Shelter Island, NY 11964. Email: orders@manning.com

©2020 by Manning Publications Co. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by means electronic, mechanical, photocopying, or otherwise, without prior written permission of the publisher.

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in the book, and Manning Publications was aware of a trademark claim, the designations have been printed in initial caps or all caps.

Recognizing the importance of preserving what has been written, it is Manning's policy to have the books we publish printed on acid-free paper, and we exert our best efforts to that end. Recognizing also our responsibility to conserve the resources of our planet, Manning books are printed on paper that is at least 15 percent recycled and processed without the use of elemental chlorine.

Manning Publications Co., 20 Baldwin Road, PO Box 761, Shelter Island, NY 11964

Development editor: Marina Michaels
Technical development editor: Al Scherer
Review editor: Aleks Dragosavljević
Production editor: Lori Weidert
Copy editor: Sharon Wilkey
Proofreader: Melody Dolab
Technical proofreaders: Rambabu Posa and Thomas Lockney
Typesetter: Gordan Salinovic
Cover designer: Marija Tudor

ISBN 9781617295522
Printed in the United States of America
📄 Page 5
Liz,
Thank you for your patience, support, and love during this endeavor.

Ruby, Nathaniel, Jack, and Pierre-Nicolas,
Thank you for being so understanding about my lack of availability during this venture.

I love you all.
📄 Page 6
(This page has no text content)
📄 Page 7
contents

foreword xiii
preface xv
acknowledgments xvii
about this book xix
about the author xxv
about the cover illustration xxvi

PART 1 THE THEORY CRIPPLED BY AWESOME EXAMPLES 1

1 So, what is Spark, anyway? 3
1.1 The big picture: What Spark is and what it does 4
    What is Spark? 4 ■ The four pillars of mana 6
1.2 How can you use Spark? 8
    Spark in a data processing/engineering scenario 8 ■ Spark in a data science scenario 9
1.3 What can you do with Spark? 10
    Spark predicts restaurant quality at NC eateries 11 ■ Spark allows fast data transfer for Lumeris 11 ■ Spark analyzes equipment logs for CERN 12 ■ Other use cases 12
1.4 Why you will love the dataframe 12
    The dataframe from a Java perspective 13 ■ The dataframe from an RDBMS perspective 13 ■ A graphical representation of the dataframe 14
📄 Page 8
1.5 Your first example 14
    Recommended software 15 ■ Downloading the code 15 ■ Running your first application 15 ■ Your first code 17

2 Architecture and flow 19
2.1 Building your mental model 20
2.2 Using Java code to build your mental model 21
2.3 Walking through your application 23
    Connecting to a master 24 ■ Loading, or ingesting, the CSV file 25 ■ Transforming your data 28 ■ Saving the work done in your dataframe to a database 29

3 The majestic role of the dataframe 33
3.1 The essential role of the dataframe in Spark 34
    Organization of a dataframe 35 ■ Immutability is not a swear word 36
3.2 Using dataframes through examples 37
    A dataframe after a simple CSV ingestion 39 ■ Data is stored in partitions 44 ■ Digging in the schema 45 ■ A dataframe after a JSON ingestion 46 ■ Combining two dataframes 52
3.3 The dataframe is a Dataset<Row> 57
    Reusing your POJOs 58 ■ Creating a dataset of strings 59 ■ Converting back and forth 60
3.4 Dataframe's ancestor: the RDD 66

4 Fundamentally lazy 68
4.1 A real-life example of efficient laziness 69
4.2 A Spark example of efficient laziness 70
    Looking at the results of transformations and actions 70 ■ The transformation process, step by step 72 ■ The code behind the transformation/action process 74 ■ The mystery behind the creation of 7 million datapoints in 182 ms 77 ■ The mystery behind the timing of actions 79
4.3 Comparing to RDBMS and traditional applications 83
    Working with the teen birth rates dataset 83 ■ Analyzing differences between a traditional app and a Spark app 84
4.4 Spark is amazing for data-focused applications 86
4.5 Catalyst is your app catalyzer 86
📄 Page 9
5 Building a simple app for deployment 90
5.1 An ingestionless example 91
    Calculating π 91 ■ The code to approximate π 93 ■ What are lambda functions in Java? 99 ■ Approximating π by using lambda functions 101
5.2 Interacting with Spark 102
    Local mode 103 ■ Cluster mode 104 ■ Interactive mode in Scala and Python 107

6 Deploying your simple app 114
6.1 Beyond the example: The role of the components 116
    Quick overview of the components and their interactions 116 ■ Troubleshooting tips for the Spark architecture 120 ■ Going further 121
6.2 Building a cluster 121
    Building a cluster that works for you 122 ■ Setting up the environment 123
6.3 Building your application to run on the cluster 126
    Building your application's uber JAR 127 ■ Building your application by using Git and Maven 129
6.4 Running your application on the cluster 132
    Submitting the uber JAR 132 ■ Running the application 133 ■ Analyzing the Spark user interface 133

PART 2 INGESTION 137

7 Ingestion from files 139
7.1 Common behaviors of parsers 141
7.2 Complex ingestion from CSV 141
    Desired output 142 ■ Code 143
7.3 Ingesting a CSV with a known schema 144
    Desired output 145 ■ Code 145
7.4 Ingesting a JSON file 146
    Desired output 148 ■ Code 149
7.5 Ingesting a multiline JSON file 150
    Desired output 151 ■ Code 152
7.6 Ingesting an XML file 153
    Desired output 155 ■ Code 155
📄 Page 10
7.7 Ingesting a text file 157
    Desired output 158 ■ Code 158
7.8 File formats for big data 159
    The problem with traditional file formats 159 ■ Avro is a schema-based serialization format 160 ■ ORC is a columnar storage format 161 ■ Parquet is also a columnar storage format 161 ■ Comparing Avro, ORC, and Parquet 161
7.9 Ingesting Avro, ORC, and Parquet files 162
    Ingesting Avro 162 ■ Ingesting ORC 164 ■ Ingesting Parquet 165 ■ Reference table for ingesting Avro, ORC, or Parquet 167

8 Ingestion from databases 168
8.1 Ingestion from relational databases 169
    Database connection checklist 170 ■ Understanding the data used in the examples 170 ■ Desired output 172 ■ Code 173 ■ Alternative code 175
8.2 The role of the dialect 176
    What is a dialect, anyway? 177 ■ JDBC dialects provided with Spark 177 ■ Building your own dialect 177
8.3 Advanced queries and ingestion 180
    Filtering by using a WHERE clause 180 ■ Joining data in the database 183 ■ Performing ingestion and partitioning 185 ■ Summary of advanced features 188
8.4 Ingestion from Elasticsearch 188
    Data flow 189 ■ The New York restaurants dataset digested by Spark 189 ■ Code to ingest the restaurant dataset from Elasticsearch 191

9 Advanced ingestion: finding data sources and building your own 194
9.1 What is a data source? 196
9.2 Benefits of a direct connection to a data source 197
    Temporary files 198 ■ Data quality scripts 198 ■ Data on demand 199
9.3 Finding data sources at Spark Packages 199
9.4 Building your own data source 199
    Scope of the example project 200 ■ Your data source API and options 202
📄 Page 11
9.5 Behind the scenes: Building the data source itself 203
9.6 Using the register file and the advertiser class 204
9.7 Understanding the relationship between the data and schema 207
    The data source builds the relation 207 ■ Inside the relation 210
9.8 Building the schema from a JavaBean 213
9.9 Building the dataframe is magic with the utilities 215
9.10 The other classes 220

10 Ingestion through structured streaming 222
10.1 What's streaming? 224
10.2 Creating your first stream 225
    Generating a file stream 226 ■ Consuming the records 229 ■ Getting records, not lines 234
10.3 Ingesting data from network streams 235
10.4 Dealing with multiple streams 237
10.5 Differentiating discretized and structured streaming 242

PART 3 TRANSFORMING YOUR DATA 245

11 Working with SQL 247
11.1 Working with Spark SQL 248
11.2 The difference between local and global views 251
11.3 Mixing the dataframe API and Spark SQL 253
11.4 Don't DELETE it! 256
11.5 Going further with SQL 258

12 Transforming your data 260
12.1 What is data transformation? 261
12.2 Process and example of record-level transformation 262
    Data discovery to understand the complexity 264 ■ Data mapping to draw the process 265 ■ Writing the transformation code 268 ■ Reviewing your data transformation to ensure a quality process 274 ■ What about sorting? 275 ■ Wrapping up your first Spark transformation 275
📄 Page 12
12.3 Joining datasets 276
    A closer look at the datasets to join 276 ■ Building the list of higher education institutions per county 278 ■ Performing the joins 283
12.4 Performing more transformations 289

13 Transforming entire documents 291
13.1 Transforming entire documents and their structure 292
    Flattening your JSON document 293 ■ Building nested documents for transfer and storage 298
13.2 The magic behind static functions 301
13.3 Performing more transformations 302
13.4 Summary 303

14 Extending transformations with user-defined functions 304
14.1 Extending Apache Spark 305
14.2 Registering and calling a UDF 306
    Registering the UDF with Spark 309 ■ Using the UDF with the dataframe API 310 ■ Manipulating UDFs with SQL 312 ■ Implementing the UDF 313 ■ Writing the service itself 314
14.3 Using UDFs to ensure a high level of data quality 316
14.4 Considering UDFs' constraints 318

15 Aggregating your data 320
15.1 Aggregating data with Spark 321
    A quick reminder on aggregations 321 ■ Performing basic aggregations with Spark 324
15.2 Performing aggregations with live data 327
    Preparing your dataset 327 ■ Aggregating data to better understand the schools 332
15.3 Building custom aggregations with UDAFs 338

PART 4 GOING FURTHER 345

16 Cache and checkpoint: Enhancing Spark's performances 347
16.1 Caching and checkpointing can increase performance 348
    The usefulness of Spark caching 350 ■ The subtle effectiveness of Spark checkpointing 351 ■ Using caching and checkpointing 352
📄 Page 13
16.2 Caching in action 361
16.3 Going further in performance optimization 371

17 Exporting data and building full data pipelines 373
17.1 Exporting data 374
    Building a pipeline with NASA datasets 374 ■ Transforming columns to datetime 378 ■ Transforming the confidence percentage to confidence level 379 ■ Exporting the data 379 ■ Exporting the data: What really happened? 382
17.2 Delta Lake: Enjoying a database close to your system 383
    Understanding why a database is needed 384 ■ Using Delta Lake in your data pipeline 385 ■ Consuming data from Delta Lake 389
17.3 Accessing cloud storage services from Spark 392

18 Exploring deployment constraints: Understanding the ecosystem 395
18.1 Managing resources with YARN, Mesos, and Kubernetes 396
    The built-in standalone mode manages resources 397 ■ YARN manages resources in a Hadoop environment 398 ■ Mesos is a standalone resource manager 399 ■ Kubernetes orchestrates containers 401 ■ Choosing the right resource manager 402
18.2 Sharing files with Spark 403
    Accessing the data contained in files 404 ■ Sharing files through distributed filesystems 404 ■ Accessing files on shared drives or file server 405 ■ Using file-sharing services to distribute files 406 ■ Other options for accessing files in Spark 407 ■ Hybrid solution for sharing files with Spark 408
18.3 Making sure your Spark application is secure 408
    Securing the network components of your infrastructure 408 ■ Securing Spark's disk usage 409

appendix A Installing Eclipse 411
appendix B Installing Maven 418
appendix C Installing Git 422
appendix D Downloading the code and getting started with Eclipse 424
appendix E A history of enterprise data 430
appendix F Getting help with relational databases 434
appendix G Static functions ease your transformations 438
appendix H Maven quick cheat sheet 446
appendix I Reference for transformations and actions 450
📄 Page 14
appendix J Enough Scala 460
appendix K Installing Spark in production and a few tips 462
appendix L Reference for ingestion 476
appendix M Reference for joins 488
appendix N Installing Elasticsearch and sample data 499
appendix O Generating streaming data 505
appendix P Reference for streaming 510
appendix Q Reference for exporting data 520
appendix R Finding help when you're stuck 528
index 533
📄 Page 15
foreword

The analytics operating system

In the twentieth century, scale effects in business were largely driven by breadth and distribution. A company with manufacturing operations around the world had an inherent cost and distribution advantage, leading to more-competitive products. A retailer with a global base of stores had a distribution advantage that could not be matched by a smaller company. These scale effects drove competitive advantage for decades. The internet changed all of that.

Today, three predominant scale effects exist:

■ Network—Lock-in that is driven by a loyal network (Facebook, Twitter, Etsy, and so forth)
■ Economies of scale—Lower unit cost, driven by volume (Apple, TSMC, and so forth)
■ Data—Superior machine learning and insight, driven from a dynamic corpus of data

In Big Data Revolution (Wiley, 2015), I profiled a few companies that are capitalizing on data as a scale effect. But, here in 2019, big data is still largely an unexploited asset in institutions around the world. Spark, the analytics operating system, is a catalyst to change that.

Spark has been a catalyst in changing the face of innovation at IBM. Spark is the analytics operating system, unifying data sources and data access. The unified programming model of Spark makes it the best choice for developers building data-rich analytic applications. Spark reduces the time and complexity of building analytic
📄 Page 16
workflows, enabling builders to focus on machine learning and the ecosystem around Spark. As we have seen time and again, an open source project is igniting innovation, with speed and scale.

This book takes you deeper into the world of Spark. It covers the power of the technology and the vibrancy of the ecosystem, and covers practical applications for putting Spark to work in your company today. Whether you are working as a data engineer, data scientist, or application developer, or running IT operations, this book reveals the tools and secrets that you need to know, to drive innovation in your company or community.

Our strategy at IBM is about building on top of and around a successful open platform, and adding something of our own that's substantial and differentiated. Spark is that platform. We have countless examples in IBM, and you will have the same in your company as you embark on this journey.

Spark is about innovation—an analytics operating system on which new solutions will thrive, unlocking the big data scale effect. And Spark is about a community of Spark-savvy data scientists and data analysts who can quickly transform today's problems into tomorrow's solutions. Spark is one of the fastest-growing open source projects in history. Welcome to the movement.

—ROB THOMAS
SENIOR VICE PRESIDENT, CLOUD AND DATA PLATFORM, IBM
📄 Page 17
preface

I don't think Apache Spark needs an introduction. If you're reading these lines, you probably have some idea of what this book is about: data engineering and data science at scale, using distributed processing. However, Spark is more than that, which you will soon discover, starting with Rob Thomas's foreword and chapter 1.

Just as Obelix fell into the magic potion,¹ I fell into Spark in 2015. At that time, I was working for a French computer hardware company, where I helped design highly performing systems for data analytics. As one should be, I was skeptical about Spark at first. Then I started working with it, and you now have the result in your hands. From this initial skepticism came a real passion for a wonderful tool that allows us to process data in—this is my sincere belief—a very easy way.

I started a few projects with Spark, which allowed me to give talks at Spark Summit, IBM Think, and closer to home at All Things Open, Open Source 101, and through the local Spark user group I co-animate in the Raleigh-Durham area of North Carolina. This allowed me to meet great people and see plenty of Spark-related projects. As a consequence, my passion continued to grow.

This book is about sharing that passion. Examples (or labs) in the book are based on Java, but the book's repository contains Scala and Python as well. As Spark 3.0 was coming out, the team at Manning and I

¹ Obelix is a comics and cartoon character. He is the inseparable companion of Asterix. When Asterix, a Gaul, drinks a magic potion, he gains superpowers that allow him to regularly beat the Romans (and pirates). As a baby, Obelix fell into the cauldron where the potion was made, and the potion has an everlasting effect on him. Asterix is a popular comic in Europe. Find out more at www.asterix.com/en/.
📄 Page 18
decided to make sure that the book reflects the latest versions, and not as an afterthought.

As you may have guessed, I love comic books. I grew up with them. I love this way of communicating, which you'll see in this book. It's not a comic book, but its nearly 200 images should help you understand this fantastic tool that is Apache Spark.

Just as Asterix has Obelix for a companion, Spark in Action, Second Edition has a reference companion supplement that you can download for free from the resource section on the Manning website; a short link is http://jgp.net/sia. This supplement contains reference information on Spark static functions and will eventually grow to more useful reference resources.

Whether you like this book or not, drop me a tweet at @jgperrin. If you like it, write an Amazon review. If you don't, as they say at weddings, forever hold your peace. Nevertheless, I sincerely hope you'll enjoy it.

Alea iacta est.²

² The die is cast. This sentence was attributed to Julius Caesar (Asterix's arch frenemy) as Caesar led his army over the Rubicon: things have happened and can't be changed back, like this book being printed, for you.
📄 Page 19
acknowledgments

This is the section where I express my gratitude to the people who helped me in this journey. It's also the section where you have a tendency to forget people, so if you feel left out, I am sorry. Really sorry. This book has been a tremendous effort, and doing it alone probably would have resulted in a two- or three-star book on Amazon, instead of the five-star rating you will give it soon (this is a call to action, thanks!).

I'd like to start by thanking the teams at work who trusted me on this project, starting with Zaloni (Anupam Rakshit and Tufail Khan), Lumeris (Jon Farn, Surya Koduru, Noel Foster, Divya Penmetsa, Srini Gaddam, and Bryce Tutt; all of whom almost blindly followed me on the Spark bandwagon), the people at Veracity Solutions, and my new team at Advance Auto Parts.

Thanks to Mary Parker of the Department of Statistics at the University of Texas at Austin and Cristiana Straccialana Parada. Their contributions helped clarify some sections.

I'd like to thank the community at large, including Jim Hughes, Michael Ben-David, Marcel-Jan Krijgsman, Jean-Francois Morin, and all the anonymous contributors posting pull requests on GitHub. I would like to express my sincere gratitude to the folks at Databricks, IBM, Netflix, Uber, Intel, Apple, Alluxio, Oracle, Microsoft, Cloudera, NVIDIA, Facebook, Google, Alibaba, numerous universities, and many more who contribute to making Spark what it is. More specifically, for their work, inspiration, and support, thanks to Holden Karau, Jacek Laskowski, Sean Owen, Matei Zaharia, and Jules Damji.
📄 Page 20
During this project, I participated in several podcasts. My thanks to Tobias Macey for the Data Engineering Podcast (http://mng.bz/WPjX), IBM's Al Martin for "Making Data Simple" (http://mng.bz/8p7g), and the Roaring Elephant by Jhon Masschelein and Dave Russell (http://mng.bz/EdRr).

As an IBM Champion, it has been a pleasure to work with so many IBMers during this adventure. They either helped directly, indirectly, or were inspirational: Rob Thomas (we need to work together more), Marius Ciortea, Albert Martin (who, among other things, runs the great podcast called Making Data Simple), Steve Moore, Sourav Mazumder, Stacey Ronaghan, Mei-Mei Fu, Vijay Bommireddipalli (keep this thing you have in San Francisco rolling!), Sunitha Kambhampati, Sahdev Zala, and, my brother, Stuart Litel.

I want to thank the people at Manning who adopted this crazy project. As in all good movies, in order of appearance: my acquisition editor, Michael Stephens; our publisher, Marjan Bace; my development editors, Marina Michaels and Toni Arritola; and production staff, Erin Twohey, Rebecca Rinehart, Bert Bates, Candace Gillhoolley, Radmila Ercegovac, Aleks Dragosavljević, Matko Hrvatin, Christopher Kaufmann, Ana Romac, Cheryl Weisman, Lori Weidert, Sharon Wilkey, and Melody Dolab.

I would also like to acknowledge and thank all of the Manning reviewers: Anupam Sengupta, Arun Lakkakulam, Christian Kreutzer-Beck, Christopher Kardell, Conor Redmond, Ezra Schroeder, Gábor László Hajba, Gary A. Stafford, George Thomas, Giuliano Araujo Bertoti, Igor Franca, Igor Karp, Jeroen Benckhuijsen, Juan Rufes, Kelvin Johnson, Kelvin Rawls, Mario-Leander Reimer, Markus Breuer, Massimo Dalla Rovere, Pavan Madhira, Sambaran Hazra, Shobha Iyer, Ubaldo Pescatore, Victor Durán, and William E. Wheeler. It does take a village to write a (hopefully) good book.

I also want to thank Petar Zečević and Marko Bonaći, who wrote the first edition of this book. Thanks to Thomas Lockney for his detailed technical review, and also to Rambabu Posa for porting the code in this book. I'd like to thank Jon Rioux (merci, Jonathan!) for starting the PySpark in Action adventure. He coined the idea of "team Spark at Manning."

I'd like to thank again Marina. Marina was my development editor during most of the book. She was here when I had issues, she was here with advice, she was tough on me (yeah, you cannot really slack off), but instrumental in this project. I will remember our long discussions about the book (which may or may not have been a pretext for talking about anything else). I will miss you, big sister (almost to the point of starting another book right away).

Finally, I want to thank my parents, who supported me more than they should have and to whom I dedicate the cover; my wife, Liz, who helped me on so many levels, including understanding editors; and our kids, Pierre-Nicolas, Jack, Nathaniel, and Ruby, from whom I stole too much time writing this book.