Author: Monica Beckwith
ISBN: 0134659953
Publisher: Pearson
Publish Year: 2024
Language: English
Pages: 506
JVM Performance Engineering: Inside OpenJDK and the HotSpot Java Virtual Machine
Monica Beckwith

A NOTE FOR EARLY RELEASE READERS

With Early Release eBooks, you get books in their earliest form—the author’s raw and unedited content as they write—so you can take advantage of these technologies long before the official release of these titles. Please note that the GitHub repo will be made active closer to publication. If you have comments about how we might improve the content and/or examples in this book, or if you notice missing material within this title, please reach out to Pearson at PearsonITAcademics@pearson.com.
Contents

Preface
Acknowledgments
About the Author
Chapter 1: The Performance Evolution of Java: The Language and the Virtual Machine
Chapter 2: Performance Implications of Java’s Type System Evolution
Chapter 3: From Monolithic to Modular Java: A Retrospective and Ongoing Evolution
Chapter 4: The Unified Java Virtual Machine Logging Interface
Chapter 5: End-to-End Java Performance Optimization: Engineering Techniques and Micro-benchmarking with JMH
Chapter 6: Advanced Memory Management and Garbage Collection in OpenJDK
Chapter 7: Runtime Performance Optimizations: A Focus on Strings and Locks
Chapter 8: Accelerating Time to Steady State with OpenJDK HotSpot VM
Chapter 9: Harnessing Exotic Hardware: The Future of JVM Performance Engineering
Table of Contents

Preface
  Intended Audience
  How to Use This Book
Acknowledgments
About the Author

Chapter 1: The Performance Evolution of Java: The Language and the Virtual Machine
  A New Ecosystem Is Born
  A Few Pages from History
  Understanding Java HotSpot VM and Its Compilation Strategies
  HotSpot Garbage Collector: Memory Management Unit
  The Evolution of the Java Programming Language and Its Ecosystem: A Closer Look
  Embracing Evolution for Enhanced Performance

Chapter 2: Performance Implications of Java’s Type System Evolution
  Java’s Primitive Types and Literals Prior to Java SE 5.0
  Java’s Reference Types Prior to Java SE 5.0
  Java’s Type System Evolution from Java SE 5.0 until Java SE 8
  Java’s Type System Evolution: Java 9 and Java 10
  Java’s Type System Evolution: Java 11 to Java 17
  Beyond Java 17: Project Valhalla
  Conclusion

Chapter 3: From Monolithic to Modular Java: A Retrospective and Ongoing Evolution
  Introduction
  Understanding the Java Platform Module System
  From Monolithic to Modular: The Evolution of the JDK
  Continuing the Evolution: Modular JDK in JDK 11 and Beyond
  Implementing Modular Services with JDK 17
  JAR Hell Versioning Problem and Jigsaw Layers
  Open Services Gateway Initiative
  Introduction to Jdeps, Jlink, Jdeprscan, and Jmod
  Conclusion

Chapter 4: The Unified Java Virtual Machine Logging Interface
  The Need for Unified Logging
  Unification and Infrastructure
  Tags in the Unified Logging System
  Diving into Levels, Outputs, and Decorators
  Practical Examples of Using the Unified Logging System
  Optimizing and Managing the Unified Logging System
  Asynchronous Logging and the Unified Logging System
  Understanding the Enhancements in JDK 11 and JDK 17
  Conclusion

Chapter 5: End-to-End Java Performance Optimization: Engineering Techniques and Micro-benchmarking with JMH
  Introduction
  Performance Engineering: A Central Pillar of Software Engineering
  Metrics for Measuring Java Performance
  The Role of Hardware in Performance
  Performance Engineering Methodology: A Dynamic and Detailed Approach
  The Importance of Performance Benchmarking
  Conclusion

Chapter 6: Advanced Memory Management and Garbage Collection in OpenJDK
  Introduction
  Overview of Garbage Collection in Java
  Thread-Local Allocation Buffers and Promotion-Local Allocation Buffers
  Optimizing Memory Access with NUMA-Aware Garbage Collection
  Exploring Garbage Collection Improvements
  Future Trends in Garbage Collection
  Practical Tips for Evaluating GC Performance
  Evaluating Garbage Collection Performance in Various Workloads
  Live Data Set Pressure

Chapter 7: Runtime Performance Optimizations: A Focus on Strings and Locks
  Introduction
  String Optimizations
  Enhanced Multithreading Performance: Java Thread Synchronization
  Transitioning from the Thread-per-Task Model to More Scalable Models
  Conclusion

Chapter 8: Accelerating Time to Steady State with OpenJDK HotSpot VM
  Introduction
  JVM Start-up and Warm-up Optimization Techniques
  Decoding Time to Steady State in Java Applications
  Managing State at Start-up and Ramp-up
  GraalVM: Revolutionizing Java’s Time to Steady State
  Emerging Technologies: CRIU and Project CRaC for Checkpoint/Restore Functionality
  Start-up and Ramp-up Optimization in Serverless and Other Environments
  Boosting Warm-up Performance with OpenJDK HotSpot VM
  Conclusion

Chapter 9: Harnessing Exotic Hardware: The Future of JVM Performance Engineering
  Introduction to Exotic Hardware and the JVM
  Exotic Hardware in the Cloud
  The Role of Language Design and Toolchains
  Case Studies
  Envisioning the Future of JVM and Project Panama
  Concluding Thoughts: The Future of JVM Performance Engineering
Preface

For over 20 years, I have been immersed in the JVM and its associated runtime, constantly in awe of its transformative evolution. This detailed and insightful journey has provided me with invaluable knowledge and perspectives that I am excited to share in this book. As a performance engineer and a Java Champion, I have had the honor of sharing my knowledge at various forums. Time and again, I’ve been approached with questions about Java and JVM performance, the nuances of distributed and cloud performance, and the advanced techniques that elevate the JVM to a marvel. In this book, I have endeavored to distill my expertise into a cohesive narrative that sheds light on Java’s history, its innovative type system, and its performance prowess.

This book reflects my passion for Java and its runtime. As you navigate these pages, you’ll uncover problem statements, solutions, and the unique nuances of Java. The JVM, with its robust runtime, stands as the bedrock of today’s advanced software architectures, powering some of the most state-of-the-art applications and fortifying developers with the tools needed to build resilient distributed systems. From the granularity of microservices to the vast expanse of cloud-native architectures, Java’s reliability and efficiency have cemented its position as the go-to language for distributed computing.

The future of JVM performance engineering beckons, and it’s brighter than ever. As we stand at this juncture, there’s a call to action. The next chapter of JVM’s evolution awaits, and it’s up to us, the community, to pen this narrative. Let’s come together, innovate, and shape the trajectory of JVM for generations to come.
Intended Audience

This book is primarily written for Java developers and software engineers who are keen to enhance their understanding of JVM internals and performance tuning. It will also greatly benefit system architects and designers, providing them with insights into the JVM’s impact on system performance. Performance engineers and JVM tuners will find advanced techniques for optimizing JVM performance. Additionally, computer science and engineering students and educators will gain a comprehensive understanding of the JVM’s complexities and advanced features. With the hope of furthering education in performance engineering, particularly with a focus on the JVM, this text also aligns with advanced courses on programming languages, algorithms, systems, computer architectures, and software engineering. I am passionate about fostering a deeper understanding of these concepts and excited about contributing to coursework that integrates the principles of JVM performance engineering and prepares the next generation of engineers with the knowledge and skills to excel in this critical area of technology.

Focusing on the intricacies and strengths of the language and runtime, this book offers a thorough dissection of Java’s capabilities in concurrency, its strengths in multithreading, and the sophisticated memory management mechanisms that drive peak performance across varied environments.

In Chapter 1, we trace Java’s timeline from its inception in the mid-1990s. Java’s groundbreaking runtime environment, complete with the Java VM, expansive class libraries, and a formidable set of tools, has set the stage with creative advancements and flexibility. We spotlight Java’s achievements, from the transformative garbage collector to the streamlined Java bytecode. The Java HotSpot VM, with its advanced JIT compilation and avant-garde optimization techniques, exemplifies Java’s commitment to performance. Its intricate compilation methodologies, harmonious synergy between the “client” compiler (C1) and “server” compiler (C2), and dynamic optimization capabilities ensure Java applications remain agile and efficient.
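As a hedged aside (not drawn from the book's own examples), the interplay between C1, C2, and tiered compilation can be observed straight from the command line; the flags below are standard HotSpot options, while app.jar is only a placeholder:

    # Print each method as HotSpot JIT-compiles it, including its compilation tier (1-4)
    java -XX:+PrintCompilation -jar app.jar

    # Tiered compilation (C1 then C2) is the default on modern JDKs;
    # turning it off forces a C2-only profile for comparison
    java -XX:-TieredCompilation -jar app.jar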
The brilliance of Java extends to memory management with the HotSpot Garbage Collector. Embracing generational garbage collection and the weak generational hypothesis, it efficiently employs parallel and concurrent GC threads, ensuring peak memory optimization and application responsiveness. From Java 1.1’s foundational features to the trailblazing innovations of Java 17, Java’s trajectory has been one of progress and continuous enhancement. Java’s legacy emerges as one of perpetual innovation and excellence.

In Chapter 2, we delve into the heart of Java: its type system. This system, integral to any programming language, has seen a remarkable evolution in Java, with innovations that have continually refined its structure. We begin by exploring Java’s foundational elements—primitive and reference types, interfaces, classes, and arrays—that anchored Java programming prior to Java SE 5.0. The narrative continues with the transformative enhancements from Java SE 5.0 to Java SE 8, where enumerations and annotations emerged, amplifying Java’s adaptability. Subsequent versions, Java 9 to Java 10, brought forth the Variable Handle Typed Reference, further enriching the language. And as we transition to the latest iterations, Java 11 to Java 17, we spotlight the advent of Switch Expressions, Sealed Classes, and the eagerly awaited Records. We then venture into the realms of Project Valhalla, examining the performance nuances of the existing type system and the potential of future value classes. This chapter offers insights into Project Valhalla’s ongoing endeavors, from refined generics to the conceptualization of classes for basic primitives. Java’s type system is more than just a set of types—it’s a reflection of Java’s commitment to versatility, efficiency, and innovation. The goal of this chapter is to illuminate the type system’s past, present, and promising future, fostering a profound understanding of its intricacies.

Chapter 3 extensively covers the Java Platform Module System (JPMS), showcasing its breakthrough impact on modular programming. As we step into the modular era, Java, with JPMS, has taken a giant leap into this future. For those new to this domain, we start by unraveling the essence of modules, complemented by hands-on examples that guide you through module creation, compilation, and execution. Java’s transition from a monolithic JDK to a modular one demonstrates its dedication to evolving needs and creative progress. A standout section of this chapter is the practical implementation of modular services using JDK 17. We navigate the intricacies of module interactions, from service providers to consumers, enriched by working examples. Key concepts like encapsulation of implementation details and the challenges of JAR Hell versioning are addressed, with the introduction of Jigsaw layers offering solutions in the modular landscape. A hands-on segment further clarifies these concepts, providing readers with tangible insights. For a broader perspective, we draw comparisons with OSGi, spotlighting the parallels and distinctions, to give readers a comprehensive understanding of Java’s modular systems. Essential tools such as Jdeps, Jlink, Jdeprscan, and Jmod are introduced, each integral to the modular ecosystem. Through in-depth explanations and examples, we aim to empower readers to effectively utilize these tools. As we wrap up, we contemplate the performance nuances of JPMS and look ahead, speculating on the future trajectories of Java’s modular evolution.
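For readers who want a concrete picture before reaching Chapter 3, here is a minimal sketch of a module declaration and the commands used to compile and run it; the module name, package, and paths are illustrative rather than taken from the book:

    // src/com.example.greeter/module-info.java (illustrative)
    module com.example.greeter {
        exports com.example.greeter.api;   // only this package is visible to consuming modules
        requires java.net.http;            // an explicit dependency on a platform module
    }

    // Compile the module into its own output directory, then run it by name:
    //   javac -d mods/com.example.greeter src/com.example.greeter/module-info.java \
    //         src/com.example.greeter/com/example/greeter/api/*.java
    //   java --module-path mods -m com.example.greeter/com.example.greeter.api.Main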
Logs are the unsung heroes of software development, providing invaluable insights and aiding debugging. Chapter 4 highlights Java’s Unified Logging System, guiding you through its proficiencies and best practices. We commence by acknowledging the need for unified logging, highlighting the challenges of disparate logging systems and the advantages of a unified approach. The chapter then highlights the unification and infrastructure, shedding light on the pivotal performance metrics for monitoring and optimization. We explore the vast array of log tags available, diving into their specific roles and importance. Ensuring logs are both comprehensive and insightful, we tackle the challenge of discerning any missing information. The intricacies of log levels, outputs, and decorators are meticulously examined, providing readers with a lucid understanding of how to classify, format, and direct their logs. Practical examples further illuminate the workings of the unified logging system, empowering readers to implement their newfound knowledge in tangible scenarios.

Benchmarking and performance evaluation stand as pillars of any logging system. This chapter equips readers with the tools and methodologies to gauge and refine their logging endeavors effectively. We also touch upon the optimization and management of the unified logging system, ensuring its sustained efficiency. With continuous advancements, notably in JDK 11 and JDK 17, we ensure readers remain abreast of the latest in Java logging. Concluding this chapter, we emphasize the importance of logs as a diagnostic tool, shedding light on their role in proactive system monitoring and reactive problem-solving. Chapter 4 highlights the power of effective logging in Java, underscoring its significance in building and maintaining robust applications.
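To give a flavor of the unified logging syntax ahead of the chapter's own walkthrough, here is a hedged example of a selector that combines tags, a level, an output, and decorators; the log file name and application jar are placeholders:

    # Log every gc-related tag set at info level to a file, decorated with
    # wall-clock time, JVM uptime, log level, and the tags themselves
    java -Xlog:gc*=info:file=gc.log:time,uptime,level,tags -jar app.jar

    # A narrower selector: detailed heap messages only, to standard output
    java -Xlog:gc+heap=debug -jar app.jar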
Chapter 5 focuses on the essence of performance engineering within the Java ecosystem, emphasizing that performance transcends mere speed—it’s about crafting an unparalleled user experience. Our voyage commences with a formative exploration of performance engineering’s pivotal role within the broader software development realm. By unraveling the multifaceted layers of software engineering, we accentuate performance’s stature as a paramount quality attribute. With precision, we delineate the metrics pivotal to gauging Java’s performance, encompassing aspects from footprint to the nuances of availability, ensuring readers grasp the full spectrum of performance dynamics. Stepping in further, we explore the intricacies of response time and its symbiotic relationship with availability. This inspection provides insights into the mechanics of application timelines, intricately weaving the narrative of response time, throughput, and the inevitable pauses that punctuate them. Yet the performance narrative is only complete once we acknowledge the profound influence of hardware. This chapter decodes the symbiotic relationship between hardware and software, emphasizing the harmonious symphony that arises from the confluence of languages, processors, and memory models. From the subtleties of memory models and their bearing on thread dynamics to the Java Memory Model’s foundational principles, we journey through the maze of concurrent hardware, shedding light on the order mechanisms pivotal to concurrent computing.

Transitioning to the realm of methodology, we introduce readers to the dynamic world of performance engineering methodology. This section offers a panoramic view, from the intricacies of experimental design to formulating a comprehensive statement of work, championing a top-down approach that guarantees a holistic perspective on the performance engineering process. Benchmarking, the cornerstone of performance engineering, receives its due spotlight. We underscore its indispensable role, guiding the reader through the labyrinth of the benchmarking regime. This encompasses everything from its inception in planning to the culmination in analysis. The chapter provides a view into the art and science of JVM memory management benchmarking, serving as a compass for those passionate about performance optimization. Finally, the Java Microbenchmark Harness (JMH) emerges as the pièce de résistance. From its foundational setup to the intricacies of its myriad features, the journey runs from the genesis of writing benchmarks to their execution, enriched with insights into benchmarking modes, profilers, and JMH’s pivotal annotations. This chapter should inspire a fervor for relentless optimization and arm readers with the arsenal required to unlock Java’s unparalleled performance potential.
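As a taste of what the JMH sections build toward, here is a minimal benchmark sketch; the class, method, and parameter values are illustrative and not drawn from the book's listings:

    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.*;

    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @State(Scope.Thread)
    @Warmup(iterations = 5)
    @Measurement(iterations = 5)
    @Fork(1)
    public class StringConcatBenchmark {
        @Param({"10", "100"})
        int count;

        @Benchmark
        public String naiveConcat() {
            String s = "";
            for (int i = 0; i < count; i++) {
                s += i;   // deliberately naive; the harness, not the code, is the point
            }
            return s;     // returning the result keeps dead-code elimination honest
        }
    }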
Memory management is the silent guardian of Java applications, often operating behind the scenes but crucial to their success. Chapter 6 offers a leap into the world of garbage collection, unraveling the techniques and innovations that ensure Java applications run efficiently and effectively. Our journey begins with an overview of garbage collection in Java, setting the stage for the intricate details that follow. We then venture into Thread-Local Allocation Buffers (TLABs) and Promotion-Local Allocation Buffers (PLABs), elucidating their pivotal roles in memory management. As we progress, the chapter sheds light on optimizing memory access, emphasizing the significance of NUMA-aware garbage collection and its impact on performance.

The highlight of this chapter lies in its exploration of advanced garbage collection techniques. We review the G1 Garbage Collector (G1 GC), unraveling its revolutionary approach to heap management. From grasping the advantages of a regionalized heap to optimizing G1 GC parameters for peak performance, this section promises a holistic cognizance of one of Java’s most advanced garbage collectors. But the exploration doesn’t end there. The Z Garbage Collector (ZGC) stands as a pinnacle of technological advancement, offering unparalleled scalability and low latency for managing multi-terabyte heaps. We look into the origins of ZGC, its adaptive optimization techniques, and the advancements that make it a game-changer in real-time applications. This chapter also offers insights into the emerging trends in garbage collection, setting the stage for what lies ahead.

Practicality remains at the forefront, with a dedicated section offering invaluable tips for evaluating GC performance. From sympathizing with various workloads, such as Online Analytical Processing (OLAP), Online Transaction Processing (OLTP), and Hybrid Transactional/Analytical Processing (HTAP), to synthesizing live data set pressure and data lifespan patterns, the chapter equips readers with the tools and knowledge to optimize memory management effectively. This chapter is an accessible guide to the advanced garbage collection techniques that Java professionals need to navigate the topography of memory management.
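To tie the G1 and ZGC discussions to something runnable, here is a hedged sketch of the launcher flags involved; the pause-time goal and heap size are illustrative starting points, not recommendations from the book:

    # G1, the default collector since JDK 9, with a pause-time goal and GC logging
    java -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xlog:gc* -jar app.jar

    # ZGC, production-ready since JDK 15, aimed at large heaps and very low pauses
    java -XX:+UseZGC -Xmx16g -Xlog:gc* -jar app.jar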
The ability to efficiently manage concurrent tasks and optimize string operations stands as a testament to the language’s evolution and adaptability. Chapter 7 covers the intricacies of Java’s concurrency mechanisms and string optimizations, offering readers a comprehensive exploration of advanced techniques and best practices. We commence our journey with an extensive review of string optimizations. From mastering the nuances of literal and interned string optimization in the HotSpot VM to the innovative string deduplication optimization introduced in Java 8, the chapter sheds light on techniques to reduce string footprint. We take a further look into the “Indy-fication” of string concatenation and the introduction of compact strings, ensuring a holistic conceptualization of string operations in Java.

Next, the chapter focuses on enhanced multithreading performance, highlighting Java’s thread synchronization mechanisms. We study the role of monitor locks, the various lock types in OpenJDK’s HotSpot VM, and the dynamics of lock contention. The evolution of Java’s locking mechanisms is meticulously detailed, offering insights into the improvements in contended locks and monitor operations. To tap into our learnings from Chapter 5, with the help of practical testing and performance analysis, we visualize contended lock optimization, harnessing the power of JMH and Async-Profiler.

As we navigate the world of concurrency, the transition from the thread-per-task model to the scalable thread-per-request model is highlighted. The examination of Java’s Executor Service, ThreadPools, ForkJoinPool framework, and CompletableFuture ensures a robust comprehension of Java’s concurrency mechanisms. Our journey in this chapter concludes with a glimpse into the future of concurrency in Java as we reimagine concurrency with virtual threads. From understanding virtual threads and their carriers to discussing parallelism and integration with existing APIs, the chapter is a practical guide to advanced concurrency mechanisms and string optimizations in Java.
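As a glimpse of the thread-per-request direction sketched above, the following fragment uses virtual threads (finalized in JDK 21, beyond the JDK 17 baseline much of this book targets); the workload inside each task is a placeholder:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.stream.IntStream;

    public class VirtualThreadSketch {
        public static void main(String[] args) {
            // One cheap virtual thread per task, instead of a bounded platform-thread pool
            try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
                IntStream.range(0, 10_000).forEach(i ->
                    executor.submit(() -> {
                        Thread.sleep(10);   // stand-in for blocking I/O such as a remote call
                        return i;
                    }));
            }   // close() implicitly waits for submitted tasks to complete
        }
    }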
In Chapter 8, the journey from start-up to steady-state performance is explored in depth. This chapter ventures deep into JVM start-up and warm-up, covering techniques and best practices that ensure peak performance. We begin by distinguishing between the often-confused concepts of warm-up and ramp-up, setting the stage for fully understanding the JVM’s start-up dynamics. The chapter emphasizes the importance of JVM start-up and warm-up performance, dissecting the phases of JVM start-up and the journey to an application’s steady state. As we navigate the application’s lifecycle, the significance of managing state during start-up and ramp-up becomes evident, highlighting the benefits of efficient state management. The study of Class Data Sharing (CDS) offers insights into the anatomy of shared archive files, memory mapping, and the benefits of multi-instance setups. Moving on to Ahead-Of-Time (AOT) compilation, the contrast between AOT and JIT compilation is meticulously highlighted, with GraalVM heralding a paradigm shift in Java’s performance landscape and with HotSpot VM’s up-and-coming Project Leyden and its forecasted ability to manage states via CDS and AOT. The chapter also addresses the unique challenges and opportunities of serverless computing and containerized environments. The emphasis on ensuring swift start-ups and efficient scaling in these environments underscores the evolving nature of Java performance optimization.

Our journey then transitions to boosting warm-up performance with the OpenJDK HotSpot VM. The chapter offers a holistic view of warm-up optimizations, from compiler enhancements to the segmented code cache and, in the near future, Project Leyden enhancements. The evolution from PermGen to Metaspace is also highlighted to showcase its start-up, warm-up, and steady-state implications. The chapter culminates with a survey of various OpenJDK projects, such as CRIU and Project CRaC, that are revolutionizing Java’s time to steady state by introducing groundbreaking checkpoint/restore functionality.
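To make the Class Data Sharing workflow concrete, here is a hedged two-step sketch using dynamic CDS archives (available since JDK 13); the jar, main class, and archive names are placeholders:

    # Step 1: run the application once and record its loaded classes into an archive at exit
    java -XX:ArchiveClassesAtExit=app.jsa -cp app.jar com.example.Main

    # Step 2: later runs memory-map the shared archive, trimming class-loading work at start-up
    java -XX:SharedArchiveFile=app.jsa -cp app.jar com.example.Main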
Our final chapter (Chapter 9) focuses on the intersection of exotic hardware and the Java Virtual Machine (JVM). This chapter offers readers a considered exploration of the world of exotic hardware, its integration with the JVM, and its galvanizing impact on performance engineering. We start with an introduction to exotic hardware and its growing prominence in cloud environments. The pivotal role of language design and toolchains quickly becomes evident, setting the stage for case studies showcasing the real-world applications and challenges of integrating exotic hardware with the JVM. From the Lightweight Java Game Library (LWJGL), a baseline example that offers insights into the intricacies of working with the JVM, to Aparapi, which bridges the gap between Java and OpenCL, each case study is carefully detailed, demonstrating the challenges, limitations, and successes of each integration. The chapter then shifts to Project Sumatra, a significant effort in JVM performance optimization, followed by TornadoVM, a specialized JVM tailored for hardware accelerators. Through these case studies, the symbiotic potential of integrating exotic hardware with the JVM becomes increasingly evident, leading up to an overview of Project Panama, a new horizon in JVM performance engineering.

At the heart of Project Panama lies the Vector API, a symbol of innovation designed for vector computations. But it’s not just about computations—it’s about ensuring they are efficiently vectorized and tailored for hardware that thrives on vector operations. This API is an example of Java’s commitment to evolving, ensuring that developers have the tools to express parallel computations optimized for diverse hardware architectures. But Panama isn’t just about vectors. The Foreign Function and Memory API emerges as a pivotal tool, a bridge that allows Java to converse seamlessly with native libraries. This is Java’s answer to the age-old challenge of interoperability, ensuring Java applications can interface effortlessly with native code, breaking language barriers.

Yet every innovation comes with its set of challenges. Integrating exotic hardware with the JVM is no walk in the park. From managing intricate memory access patterns to deciphering hardware-specific behaviors, the path to optimization is laden with complexities. But these challenges drive innovation, pushing the boundaries of what’s possible. Looking to the future, we envision Project Panama as the gold standard for JVM interoperability. The horizon looks promising, with Panama poised to redefine performance and efficiency for Java applications. This isn’t just about the present or the imminent future. The world of JVM performance engineering is on the cusp of a revolution. Innovations are knocking at our door, waiting to be embraced—with TornadoVM’s Hybrid APIs, and with the HAT toolkit and Project Babylon on the horizon.
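Before moving on, here is a small, hedged sketch in the spirit of the Vector API described above; the API is still an incubating module in current JDKs (compile and run with --add-modules jdk.incubator.vector), and the method shown is illustrative rather than taken from the book:

    import jdk.incubator.vector.FloatVector;
    import jdk.incubator.vector.VectorSpecies;

    public class VectorSketch {
        // Let the runtime pick the widest vector shape the hardware supports
        private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

        // c[i] = a[i] * b[i] + c[i], processed one full vector lane-width at a time
        static void fma(float[] a, float[] b, float[] c) {
            int i = 0;
            int upper = SPECIES.loopBound(a.length);
            for (; i < upper; i += SPECIES.length()) {
                FloatVector va = FloatVector.fromArray(SPECIES, a, i);
                FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
                FloatVector vc = FloatVector.fromArray(SPECIES, c, i);
                va.fma(vb, vc).intoArray(c, i);
            }
            for (; i < a.length; i++) {   // scalar tail for the leftover elements
                c[i] = a[i] * b[i] + c[i];
            }
        }
    }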
How to Use This Book

1. Sequential Reading for Comprehensive Understanding: This book is designed to be read from beginning to end, as each chapter builds upon the knowledge of the previous ones. This approach is especially recommended for readers new to JVM performance engineering.
2. Modular Approach for Specific Topics: Experienced readers may prefer to jump directly to chapters that address their specific interests or challenges. The table of contents and index can guide you to relevant sections.
3. Practical Examples and Code: Throughout the book, practical examples and code snippets are provided to illustrate key concepts. To get the most out of these examples, readers are encouraged to type out and run the code themselves.
4. Visual Aids for Enhanced Understanding: In addition to written explanations, this book employs a variety of visual aids to deepen your understanding.
   a. Case Studies: Real-world scenarios that demonstrate the application of JVM performance techniques.
   b. Screenshots: Visual outputs depicting profiling results as well as various GC plots, which are essential for understanding the GC process and phases.
   c. Use-Case Diagrams: Visual representations that map out the system’s functional requirements, showing how different entities interact with each other.
   d. Block Diagrams: Illustrations that outline the architecture of a particular JVM or system component, highlighting performance features.
   e. Class Diagrams: Detailed object-oriented designs of various code examples, showing relationships and hierarchies.
   f. Process Flowcharts: Step-by-step diagrams that walk you through various performance optimization processes and components.
   g. Timelines: Visual representations of the different phases or state changes in an activity and the sequence of actions that are taken.
5. Utilizing the Companion GitHub Repository: A significant portion of the book’s value lies in its practical application. To facilitate this, I have created the JVM Performance Engineering GitHub Repository (https://github.com/mo-beck/JVM-Performance-Engineering). Here, you will find:
   a. Complete Code Listings: All the code snippets and scripts mentioned in the book are available in full. This allows you to see the code in its entirety and experiment with it.
   b. Additional Resources and Updates: The field of JVM performance engineering is ever evolving. The repository will be periodically updated with new scripts, resources, and information to keep you abreast of the latest developments.
   c. Interactive Learning: Engage with the material by cloning the repository, running the GC scripts against your GC log files, and