Java NIO (Ron Hitchens)

Author: Ron Hitchens

Technology

Java NIO explores the new I/O capabilities of version 1.4 in detail and shows you how to put these features to work to greatly improve the efficiency of the Java code you write. This compact volume examines the typical challenges that Java programmers face with I/O and shows you how to take advantage of the capabilities of the new I/O features. You'll learn how to put these tools to work using examples of common, real-world I/O problems and see how the new features have a direct impact on responsiveness, scalability, and reliability. Because the NIO APIs supplement the I/O features of version 1.3, rather than replace them, you'll also learn when to use new APIs and when the older 1.3 I/O APIs are better suited to your particular application.

📄 File Format: PDF
💾 File Size: 1.4 MB

📄 Text Preview (First 20 pages)


Java™ NIO
Ron Hitchens
Publisher: O'Reilly
First Edition, August 2002
ISBN: 0-596-00288-2, 312 pages

Table of Contents

Dedication
Preface
    Organization
    Who Should Read This Book
    Software and Versions
    Conventions Used in This Book
    How to Contact Us
    Acknowledgments
Chapter 1. Introduction
    1.1 I/O Versus CPU Time
    1.2 No Longer CPU Bound
    1.3 Getting to the Good Stuff
    1.4 I/O Concepts
    1.5 Summary
Chapter 2. Buffers
    2.1 Buffer Basics
    2.2 Creating Buffers
    2.3 Duplicating Buffers
    2.4 Byte Buffers
    2.5 Summary
Chapter 3. Channels
    3.1 Channel Basics
    3.2 Scatter/Gather
    3.3 File Channels
    3.4 Memory-Mapped Files
    3.5 Socket Channels
    3.6 Pipes
    3.7 The Channels Utility Class
    3.8 Summary
Chapter 4. Selectors
    4.1 Selector Basics
    4.2 Using Selection Keys
    4.3 Using Selectors
    4.4 Asynchronous Closability
    4.5 Selection Scaling
    4.6 Summary
Chapter 5. Regular Expressions
    5.1 Regular Expression Basics
    5.2 The Java Regular Expression API
    5.3 Regular Expression Methods of the String Class
    5.4 Java Regular Expression Syntax
    5.5 An Object-Oriented File Grep
    5.6 Summary
Chapter 6. Character Sets
    6.1 Character Set Basics
    6.2 Charsets
    6.3 The Charset Service Provider Interface
    6.4 Summary
Appendix A. NIO and the JNI
Appendix C. NIO Quick Reference
    C.1 Package java.nio
    C.2 Package java.nio.channels
    C.4 Package java.nio.charset
    C.5 Package java.nio.charset.spi
    C.6 Package java.util.regex
Colophon

Dedication

To my wife, Karen. What would I do without you?

Preface

Computers are useless. They can only give you answers.
—Pablo Picasso

This book is about advanced input/output on the Java platform, specifically I/O using the Java 2 Standard Edition (J2SE) Software Development Kit (SDK), Version 1.4 and later. The 1.4 release of J2SE, code-named Merlin, contains significant new I/O capabilities that we'll explore in detail. These new I/O features are primarily collected in the java.nio package (and its subpackages) and have been dubbed New I/O (NIO). In this book, you'll see how to put these exciting new features to work to greatly improve the I/O efficiency of your Java applications.

Java has found its true home among Enterprise Applications (a slippery term if ever there was one), but until the 1.4 release of the J2SE SDK, Java has been at a disadvantage relative to natively compiled languages in the area of I/O. This weakness stems from Java's greatest strength: Write Once, Run Anywhere. The need for the illusion of a virtual machine, the JVM, means that compromises must be made to make all JVM deployment platforms look the same when running Java bytecode. This need for commonality across operating-system platforms has resulted, to some extent, in a least-common-denominator approach. Nowhere have these compromises been more sorely felt than in the arena of I/O.

While Java possesses a rich set of I/O classes, they have until now concentrated on providing common capabilities, often at a high level of abstraction, across all operating systems. These I/O classes have primarily been stream-oriented, often invoking methods on several layers of objects to handle individual bytes or characters. This object-oriented approach, composing behaviors by plugging I/O objects together, offers tremendous flexibility but can be a performance killer when large amounts of data must be handled. Efficiency is the goal of I/O, and efficient I/O often doesn't map well to objects. Efficient I/O usually means that you must take the shortest path from Point A to Point B. Complexity destroys performance when doing high-volume I/O.

The traditional I/O abstractions of the Java platform have served well and are appropriate for a wide range of uses. But these classes do not scale well when moving large amounts of data, nor do they provide some common I/O functionality widely available on most operating systems today. These features — such as file locking, nonblocking I/O, readiness selection, and memory mapping — are essential for scalability and may be required to interact properly with non-Java applications, especially at the enterprise level. The classic Java I/O mechanism doesn't model these common I/O services.

Real companies deploy real applications on real systems, not abstractions. In the real world, performance matters — it matters a lot. The computer systems that companies buy to deploy their large applications have high-performance I/O capabilities (often developed at huge expense by the system vendors), which Java has until now been unable to fully exploit. When the business need is to move a lot of data as fast as possible, the ugly-but-fast solution usually wins out over pretty-but-slow. Time is money, after all.

JDK 1.4 is the first major Java release driven primarily by the Java Community Process. The JCP (http://jcp.org/) provides a means by which users and vendors of Java products can propose and specify new features for the Java platform. The subject of this book, Java New I/O (NIO), is a direct result of one such proposal. Java Specification Request #51 (http://jcp.org/jsr/detail/51.jsp) details the need for high-speed, scalable I/O, which better leverages the I/O capabilities of the underlying operating system. The new classes comprising java.nio and its subpackages, as well as java.util.regex and changes to a few preexisting packages, are the resulting implementation of JSR 51. Refer to the JCP web site for details on how the JSR process works and the evolution of NIO from initial request to released reference implementation.

With the Merlin release, Java now has the tools to make use of these powerful operating-system I/O capabilities where available. Java no longer needs to take a backseat to any language when it comes to I/O performance.

Organization

This book is divided into six chapters, each dealing with a major aspect of Java NIO. Chapter 1 discusses general I/O concepts to set the stage for the specific discussions that follow. Chapter 2 through Chapter 4 cover the core of NIO: buffers, channels, and selectors. Following that is a discussion of the new regular expression API. Regular expression processing dovetails with I/O and was included under the umbrella of the JSR 51 feature set. To wrap up, we take a look at the new pluggable character set mapping capabilities, which are also a part of NIO and JSR 51.

For the impatient, anxious to jump ahead, here is the executive summary:

Buffers

The new Buffer classes are the linkage between regular Java classes and channels. Buffers implement fixed-size arrays of primitive data elements, wrapped inside an object with state information. They provide a rendezvous point: a Channel consumes data you place in a Buffer (write) or deposits data (read) you can then fetch from the buffer. There is also a special type of buffer that provides for memory-mapping files. We'll discuss buffer objects in detail in Chapter 2.
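
To make the fill-and-drain rendezvous concrete, here is a minimal sketch of my own (not taken from the book's example code); the capacity and contents are arbitrary:

    import java.nio.ByteBuffer;

    public class BufferSketch {
        public static void main(String[] args) {
            ByteBuffer buffer = ByteBuffer.allocate(16);   // fixed-size buffer of 16 bytes

            // Fill the buffer, as a channel read would
            buffer.put((byte) 'J').put((byte) 'a').put((byte) 'v').put((byte) 'a');

            buffer.flip();                                 // switch from filling to draining

            while (buffer.hasRemaining()) {                // drain the buffer
                System.out.print((char) buffer.get());
            }
            buffer.clear();                                // ready to be filled again
        }
    }

The flip() call is the heart of the state information mentioned above: the same object tracks where filling stopped so that draining knows where to begin.
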
Channels

The most important new abstraction provided by NIO is the concept of a channel. A Channel object models a communication connection. The pipe may be unidirectional (in or out) or bidirectional (in and out). A channel can be thought of as the pathway between a buffer and an I/O service. In some cases, the older classes of the java.io package can make use of channels. Where appropriate, new methods have been added to gain access to the Channel associated with a file or socket object. Most channels can operate in nonblocking mode, which has major scalability implications, especially when used in combination with selectors. We'll examine channels in Chapter 3.

File locking and memory-mapped files

The new FileChannel object in the java.nio.channels package provides many new file-oriented capabilities. Two of the most interesting are file locking and the ability to memory map files. File locking is an essential tool for coordinating access to shared data among cooperating processes. The ability to memory map files allows you to treat file data on disk as if it was in memory. This exploits the virtual memory capabilities of the operating system to dynamically cache file content without committing memory resources to hold a copy of the file. File locking and memory-mapped files are also discussed in Chapter 3.

Sockets

The socket channel classes provide a new method of interacting with network sockets. Socket channels can operate in nonblocking mode and can be used with selectors. As a result, many sockets can be multiplexed and managed more efficiently than with the traditional socket classes of java.net. The three new socket channels, ServerSocketChannel, SocketChannel, and DatagramChannel, are covered in Chapter 3.
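
Here is a minimal sketch of my own (not from the book) of a socket channel in nonblocking mode; the host name and port are placeholders:

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;

    public class NonBlockingConnectSketch {
        public static void main(String[] args) throws Exception {
            SocketChannel channel = SocketChannel.open();
            channel.configureBlocking(false);                            // never block this thread on I/O
            channel.connect(new InetSocketAddress("example.com", 80));   // placeholder address

            while (!channel.finishConnect()) {
                // the connection is being established in the background;
                // a real program would do useful work here instead of spinning
            }

            ByteBuffer buffer = ByteBuffer.allocate(1024);
            int count = channel.read(buffer);   // returns immediately, possibly with 0 bytes
            System.out.println("read " + count + " bytes without blocking");
            channel.close();
        }
    }
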
Selectors

Selectors provide the ability to do readiness selection. The Selector class provides a mechanism by which you can determine the status of one or more channels you're interested in. Using selectors, a large number of active I/O channels can be monitored and serviced by a single thread easily and efficiently. We'll discuss selectors in detail in Chapter 4.

Regular expressions

The new java.util.regex package brings Perl-like regular expression processing to Java. This is a long-awaited feature, useful for a wide range of applications. The new regular expression APIs are considered part of NIO because they were specified by JSR 51 along with the other NIO features. In many respects, it's orthogonal to the rest of NIO but is extremely useful for file processing and many other purposes. Chapter 5 discusses the JDK 1.4 regular expression APIs.

Character sets

The java.nio.charset package provides new classes for mapping characters to and from byte streams. These new classes allow you to select the mapping by which characters will be translated or create your own mappings. Issues relating to character transcoding are covered in Chapter 6.
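
A minimal sketch of my own (not from the book's example code) of the charset API just described; the charset name and sample string are arbitrary:

    import java.nio.ByteBuffer;
    import java.nio.CharBuffer;
    import java.nio.charset.Charset;

    public class CharsetSketch {
        public static void main(String[] args) {
            Charset utf8 = Charset.forName("UTF-8");       // look up a standard charset by name

            ByteBuffer bytes = utf8.encode("café");        // chars -> bytes
            System.out.println("encoded to " + bytes.remaining() + " bytes");

            CharBuffer chars = utf8.decode(bytes);         // bytes -> chars
            System.out.println("decoded back to: " + chars);
        }
    }
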
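
And since the java.util.regex package summarized above is part of the same JSR 51 feature set, here is an equally small sketch of it (again mine, with an arbitrary pattern and input string):

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class RegexSketch {
        public static void main(String[] args) {
            // Find tokens that look like Java package names, e.g. "java.nio.channels"
            Pattern pattern = Pattern.compile("\\bjava(\\.\\w+)+\\b");
            Matcher matcher = pattern.matcher("The java.nio and java.util.regex packages are new in 1.4.");

            while (matcher.find()) {
                System.out.println("matched: " + matcher.group());
            }
        }
    }
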
Who Should Read This Book

This book is intended for intermediate to advanced Java programmers: those who have a good handle on the language and want (or need!) to take full advantage of the new capabilities of Java NIO for large-scale and/or sophisticated data handling. In the text, I assume that you are familiar with the standard class packages of the JDK, object-oriented techniques, inheritance, and so on. I also assume that you know the basics of how I/O works at the operating-system level, what files are, what sockets are, what virtual memory is, and so on. Chapter 1 provides a high-level review of these concepts but does not explain them in detail.

If you are still learning your way around the I/O packages of the Java platform, you may first want to take a look at Java I/O by Elliotte Rusty Harold (O'Reilly) (http://www.oreilly.com/catalog/javaio/). It provides an excellent introduction to the java.io packages. While this book could be considered a follow-up to that book, it is not a continuation of it. This book concentrates on making use of the new java.nio packages to maximize I/O performance and introduces some new I/O concepts that are outside the scope of the java.io package.

We also explore character set encoding and regular expressions, which are a part of the new feature set bundled with NIO. Those programmers implementing character sets for internationalization or for specialized applications will be interested in the java.nio.charset package discussed in Chapter 6. And those of you who've switched to Java, but keep returning to Perl for the ease of regular expression handling, no longer need to stray from Java. The new java.util.regex package provides all but the most obscure regular expression capabilities from Perl 5 in the standard JDK (and adds a few new things as well).

Software and Versions

This book describes the I/O capabilities of Java, particularly the java.nio and java.util.regex packages, which first appear in J2SE, Version 1.4. Therefore, you must have a working version of the Java 1.4 (or later) SDK to use the material presented in this book. You can obtain the Java SDK from Sun by visiting their web site at http://java.sun.com/j2se/1.4/. I also refer to the J2SE SDK as the Java Development Kit (JDK) in the text. In the context of this book, they mean the same thing.

This book is based on the final JDK version, 1.4.0, released in February 2002. Early access (beta) versions of 1.4 were widely available for several months prior. Important changes were made to the NIO APIs shortly before final release. For that reason, you may see discussions of NIO published before the final release that conflict with some details in this book. This text has been updated to include all known last-minute changes and should be in agreement with the final 1.4.0 release. Later releases of J2SE may introduce further changes that conflict with this text. Refer to the documentation provided with your software distribution if there is any doubt.

This book contains many examples demonstrating how to use the APIs. All code examples and related information can be downloaded from http://www.javanio.info/. Additional examples and test code are available there. Additional code examples provided by the NIO implementation team are available at http://java.sun.com/j2se/1.4/docs/guide/nio/example/.

Conventions Used in This Book

Like all programmers, I have my religious beliefs regarding code-formatting style. The samples in this book are formatted according to my preferences, which are fairly conventional. I'm a believer in eight-column tab indents and lots of separating whitespace. Some of the code examples have had their indents reduced to four columns because of space constraints. The source code available on the web site has tab indents.

When I provide API examples and lists of methods from a class in the JDK, I generally provide only the specific methods referenced in the immediate text. I leave out the methods that are not of interest at that point. I often provide the full class API at the beginning of a chapter or section, then list subsets of the API near the specific discussions that follow. These API samples are usually not syntactically correct; they are extracts of the method signatures without the method bodies and are intended to illustrate which methods are available and the parameters they accept. For example:

    public class Foo {
        public static final int MODE_ABC
        public static final int MODE_XYZ

        public abstract void baz (Blather blather);

        public int blah (Bar bar, Bop bop)
    }

In this case, the method baz() is syntactically complete because abstract declarations consist of nothing but signature. But blah() lacks a semi-colon, which implies that the method body follows in the class definition. And when I list public fields defining constants, such as MODE_ABC and MODE_XYZ, I intentionally don't list the values they are initialized to. That information is not important. The public name is defined so that you can use it without knowing the value of the constant.

Where possible, I extract this API information directly from the code distributed with the 1.4 JDK. When I started writing this book, the JDK was at Version 1.4 beta 2. Every effort has been made to keep the code snippets current. My apologies for any inaccuracies that may have crept in. The source code included with the JDK is the final authority.

Font Conventions

I use standard O'Reilly font conventions in this book. This is not entirely by choice. I composed the manuscript directly as XML using a pure Java GUI editor (XXE from http://www.xmlmind.com/), which enforced the DTD I used, O'Reilly's subset of DocBook (http://www.oasis-open.org/). As such, I never specified fonts or type styles. I'd select XML elements such as <filename> or <programlisting>, and O'Reilly's typesetting software applied the appropriate type style. This, of course, means nothing to you. So here's the rundown on font conventions used in this text:

Italic is used for:

• Pathnames, filenames, and program names
• Internet addresses, such as domain names and URLs
• New terms where they are defined

Constant Width is used for:
• Names and keywords in Java code, including method names, variable names, and class names
• Program listings and code snippets
• Constant values

Constant Width Bold is used for:

• Emphasis within code examples

This icon designates a note, which is an important aside to the nearby text.

This icon designates a warning relating to the nearby text.

How to Contact Us

Although this is not the first book I've written, it's the first I've written for general publication. It's far more difficult to write a book than to read one. And it's really quite frightening to expound on Java-related topics because the subject matter is so extensive and changes rapidly. There are also vast numbers of very smart people who can and will point out the slightest inaccuracy you commit to print.

I would like to hear any comments you may have, positive or negative. I believe I did my homework on this project, but errors inevitably creep in. I'm especially interested in constructive feedback on the structure and content of the book. I've tried to structure it so that topics are presented in a sensible order and in easily absorbed chunks. I've also tried to cross-reference heavily so it will be useful when accessed randomly. Offers of lucrative consulting contracts, speaking engagements, and free stuff are appreciated. Spurious flames and spam are cheerfully ignored. You can contact me at ron@javanio.info or visit http://www.javanio.info/.

O'Reilly and I have verified the information in this book to the best of our ability, but you may find that features have changed (or even that we have made mistakes!). Please let us know about any errors you find, as well as your suggestions for future editions, by writing to:

O'Reilly & Associates, Inc.
1005 Gravenstein Highway North
Sebastopol, CA 95472
(800) 998-9938 (U.S. and Canada)
(707) 829-0515 (international/local)
(707) 829-0104 (fax)

You can also contact O'Reilly by email. To be put on the mailing list or request a catalog, send a message to:

info@oreilly.com

We have a web page for this book, where we list errata, examples, and any additional information. You can access this page at:

http://www.oreilly.com/catalog/javanio/

To ask technical questions or comment on the book, send email to:

bookquestions@oreilly.com

For more information about O'Reilly books, conferences, Resource Centers, and the O'Reilly Network, see O'Reilly's web site at:

http://www.oreilly.com/

Acknowledgments

It's a lot of work putting a book together, even one as relatively modest in scope as this. I'd like to express my gratitude to several people for their help with this endeavor.

First and foremost, I'd like to thank Mike Loukides, my editor at O'Reilly, for affording me the chance to join the ranks of O'Reilly authors. I still wonder how I managed to wind up with a book deal at O'Reilly. It's a great honor and no small responsibility. Thanks Mike, and sorry about the comma splices. I'd also like to thank Bob Eckstein and Kyle Hart, also of O'Reilly, for their efforts on my behalf: Bob for his help with early drafts of this book and Kyle for giving me free stuff at JavaOne (oh, that marketing campaign may be helpful too). Jessamyn Read turned my clumsy pictures into professional illustrations. I'd also like to thank the prolific David Flanagan for mentioning my minuscule contribution to Java in a Nutshell, Fourth Edition (O'Reilly), and for letting me use the regular expression syntax table from that book.

Authors of technical books rely heavily on technical reviewers to detect errors and omissions. Technical review is especially important when the material is new and evolving, as was the case with NIO. The 1.4 APIs were literally a moving target when I began work on this project. I'm extremely lucky that Mark Reinhold of Sun Microsystems, Specification Lead for JSR 51 and author of much of the NIO code in JDK 1.4, agreed to be a reviewer. Mark reviewed a very early and very rough draft. He kindly set me straight on many points and provided valuable insight that helped me tremendously.
Mark also took time out while trying to get the 1.4.1 release in shape to provide detailed feedback on the final draft. Thanks Mark.

Several other very smart people looked over my work and provided constructive feedback. Jason Hunter (http://www.servlets.com/) eagerly devoured the first review draft within hours and provided valuable organizational input. The meticulous John G. Miller, Jr., of Digital Gamers, Inc. (johnmiller@digigamers.com, http://www.digigamers.com/), carefully reviewed the draft and example code. John's real-world experience with NIO on a large scale in an online, interactive game environment made this book a better one. Will Crawford (http://www.williamcrawford.info/) found time he couldn't afford to read the entire manuscript and provided laser-like, highly targeted feedback.

I'd also like to thank Keith J. Koski and Michael Daudel (mgd@ronsoft.com), fellow members of a merry band of Unix and Java codeslingers I've worked with over the last several years, known collectively as the Fatboys. The Fatboys are thinning out, getting married, moving to the suburbs, and having kids (myself included), but as long as Bill can suck gravy through a straw, the Fatboy dream lives on. Keith and Mike read several early drafts, tested code, gave suggestions, and provided encouragement. Thanks guys, you're "phaser enriched."

And last but not least, I want to thank my wife, Karen. She doesn't grok this tech stuff but is wise and caring and loves me and feeds me fruit. She lights my soul and gives me reason. Together we pen the chapters in our book of life.

Chapter 1. Introduction

Get the facts first. You can distort them later.
—Mark Twain

Let's talk about I/O. No, no, come back. It's not really all that dull. Input/output (I/O) is not a glamorous topic, but it's a very important one. Most programmers think of I/O in the same way they do about plumbing: undoubtedly essential, can't live without it, but it can be unpleasant to deal with directly and may cause a big, stinky mess when not working properly. This is not a book about plumbing, but in the pages that follow, you may learn how to make your data flow a little more smoothly.

Object-oriented program design is all about encapsulation. Encapsulation is a good thing: it partitions responsibility, hides implementation details, and promotes object reuse. This partitioning and encapsulation tends to apply to programmers as well as programs. You may be a highly skilled Java programmer, creating extremely sophisticated objects and doing extraordinary things, and yet be almost entirely ignorant of some basic concepts underpinning I/O on the Java platform. In this chapter, we'll momentarily violate your encapsulation and take a look at some low-level I/O implementation details in the hope that you can better orchestrate the multiple moving parts involved in any I/O operation.

1.1 I/O Versus CPU Time

Most programmers fancy themselves software artists, crafting clever routines to squeeze a few bytes here, unrolling a loop there, or refactoring somewhere else to consolidate objects. While those things are undoubtedly important, and often a lot of fun, the gains made by optimizing code can be easily dwarfed by I/O inefficiencies. Performing I/O usually takes orders of magnitude longer than performing in-memory processing tasks on the data. Many coders concentrate on what their objects are doing to the data and pay little attention to the environmental issues involved in acquiring and storing that data.

Table 1-1 lists some hypothetical times for performing a task on units of data read from and written to disk. The first column lists the average time it takes to process one unit of data, the second column is the amount of time it takes to move that unit of data from and to disk, and the third column is the number of these units of data that can be processed per second. The fourth column is the throughput increase that will result from varying the values in the first two columns.

Table 1-1. Throughput rate, processing versus I/O time

    Process time (ms)   I/O time (ms)   Throughput (units/sec)   Gain (%)
    5                   100             9.52                      (benchmark)
    2.5                 100             9.76                      2.44
    1                   100             9.9                       3.96
    5                   90              10.53                     10.53
    5                   75              12.5                      31.25
    5                   50              18.18                     90.91
    5                   20              40                        320
    5                   10              66.67                     600
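
The table is consistent with computing throughput as 1,000 ms divided by the total per-unit time, and the gain relative to the 9.52 units/sec benchmark; here is a small sketch of my own (not from the book) that reproduces one row:

    public class ThroughputSketch {
        public static void main(String[] args) {
            double processMs = 5, ioMs = 90;                    // the fourth row of Table 1-1
            double benchmark = 1000.0 / (5 + 100);              // 9.52 units/sec

            double throughput = 1000.0 / (processMs + ioMs);    // 10.53 units/sec
            double gain = (throughput / benchmark - 1) * 100;   // 10.53%

            System.out.println("throughput = " + throughput + " units/sec, gain = " + gain + "%");
        }
    }
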
The first three rows show how increasing the efficiency of the processing step affects throughput. Cutting the per-unit processing time in half results only in a 2.2% increase in throughput. On the other hand, reducing I/O latency by just 10% results in a 9.7% throughput gain. Cutting I/O time in half nearly doubles throughput, which is not surprising when you see that time spent per unit doing I/O is 20 times greater than processing time. These numbers are artificial and arbitrary (the real world is never so simple) but are intended to illustrate the relative time magnitudes.

As you can see, I/O is often the limiting factor in application performance, not processing speed. Programmers love to tune their code, but I/O performance tuning is often an afterthought, or is ignored entirely. It's a shame, because even small investments in improving I/O performance can yield substantial dividends.

1.2 No Longer CPU Bound

To some extent, Java programmers can be forgiven for their preoccupation with optimizing processing efficiency and not paying much attention to I/O considerations. In the early days of Java, the JVMs interpreted bytecodes with little or no runtime optimization. This meant that Java programs tended to poke along, running significantly slower than natively compiled code and not putting much demand on the I/O subsystems of the operating system.

But tremendous strides have been made in runtime optimization. Current JVMs run bytecode at speeds approaching that of natively compiled code, sometimes doing even better because of dynamic runtime optimizations. This means that most Java applications are no longer CPU bound (spending most of their time executing code) and are more frequently I/O bound (waiting for data transfers).

But in most cases, Java applications have not truly been I/O bound in the sense that the operating system couldn't shuttle data fast enough to keep them busy. Instead, the JVMs have not been doing I/O efficiently. There's an impedance mismatch between the operating system and the Java stream-based I/O model. The operating system wants to move data in large chunks (buffers), often with the assistance of hardware Direct Memory Access (DMA). The I/O classes of the JVM like to operate on small pieces — single bytes, or lines of text. This means that the operating system delivers buffers full of data that the stream classes of java.io spend a lot of time breaking down into little pieces, often copying each piece between several layers of objects. The operating system wants to deliver data by the truckload. The java.io classes want to process data by the shovelful. NIO makes it easier to back the truck right up to where you can make direct use of the data (a ByteBuffer object).
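
The difference in grain size shows up directly in code; here is a rough sketch of my own (the file name is a placeholder) contrasting the two styles:

    import java.io.FileInputStream;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    public class ShovelVersusTruck {
        public static void main(String[] args) throws Exception {
            // Shovelful: the classic stream model hands back one byte per call
            FileInputStream in = new FileInputStream("bigfile.dat");   // placeholder file
            int b, byteCount = 0;
            while ((b = in.read()) != -1) {
                byteCount++;
            }
            in.close();

            // Truckload: NIO fills a whole buffer per call
            FileChannel channel = new FileInputStream("bigfile.dat").getChannel();
            ByteBuffer buffer = ByteBuffer.allocate(64 * 1024);
            long total = 0;
            while (channel.read(buffer) != -1) {
                buffer.flip();
                total += buffer.remaining();   // process the chunk here
                buffer.clear();
            }
            channel.close();

            System.out.println(byteCount + " bytes via stream, " + total + " bytes via channel");
        }
    }
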
This is not to say that it was impossible to move large amounts of data with the traditional I/O model — it certainly was (and still is). The RandomAccessFile class in particular can be quite efficient if you stick to the array-based read() and write() methods. Even those methods entail at least one buffer copy, but are pretty close to the underlying operating-system calls.

As illustrated by Table 1-1, if your code finds itself spending most of its time waiting for I/O, it's time to consider improving I/O performance. Otherwise, your beautifully crafted code may be idle most of the time.

1.3 Getting to the Good Stuff

Most of the development effort that goes into operating systems is targeted at improving I/O performance. Lots of very smart people toil very long hours perfecting techniques for schlepping data back and forth. Operating-system vendors expend vast amounts of time and money seeking a competitive advantage by beating the other guys in this or that published benchmark.

Today's operating systems are modern marvels of software engineering (OK, some are more marvelous than others), but how can the Java programmer take advantage of all this wizardry and still remain platform-independent? Ah, yet another example of the TANSTAAFL principle.[1]

[1] There Ain't No Such Thing As A Free Lunch.

The JVM is a double-edged sword. It provides a uniform operating environment that shelters the Java programmer from most of the annoying differences between operating-system environments. This makes it faster and easier to write code because platform-specific idiosyncrasies are mostly hidden. But cloaking the specifics of the operating system means that the jazzy, wiz-bang stuff is invisible too. What to do?

If you're a developer, you could write some native code using the Java Native Interface (JNI) to access the operating-system features directly. Doing so ties you to a specific operating system (and maybe a specific version of that operating system) and exposes the JVM to corruption or crashes if your native code is not 100% bug free. If you're an operating-system vendor, you could write native code and ship it with your JVM implementation to provide these features as a Java API. But doing so might violate the license you signed to provide a conforming JVM. Sun took Microsoft to court about this over the JDirect package which, of course, worked only on Microsoft systems. Or, as a last resort, you could turn to another language to implement performance-critical applications.

The java.nio package provides new abstractions to address this problem. The Channel and Selector classes in particular provide generic APIs to I/O services that were not reachable prior to JDK 1.4. The TANSTAAFL principle still applies: you won't be able to access every feature of every operating system, but these new classes provide a powerful new framework that encompasses the high-performance I/O features commonly available on commercial operating systems today. Additionally, a new Service Provider Interface (SPI) is provided in java.nio.channels.spi that allows you to plug in new types of channels and selectors without violating compliance with the specifications.

With the addition of NIO, Java is ready for serious business, entertainment, scientific and academic applications in which high-performance I/O is essential. The JDK 1.4 release contains many other significant improvements in addition to NIO. As of 1.4, the Java platform has reached a high level of maturity, and there are few application areas remaining that Java cannot tackle. A great guide to the full spectrum of JDK features in 1.4 is Java In A Nutshell, Fourth Edition by David Flanagan (O'Reilly).

1.4 I/O Concepts

The Java platform provides a rich set of I/O metaphors. Some of these metaphors are more abstract than others. With all abstractions, the further you get from hard, cold reality, the tougher it becomes to connect cause and effect. The NIO packages of JDK 1.4 introduce a new set of abstractions for doing I/O. Unlike previous packages, these are focused on shortening the distance between abstraction and reality. The NIO abstractions have very real and direct interactions with real-world entities. Understanding these new abstractions and, just as importantly, the I/O services they interact with, is key to making the most of I/O-intensive Java applications.

This book assumes that you are familiar with basic I/O concepts. This section provides a whirlwind review of some basic ideas just to lay the groundwork for the discussion of how the new NIO classes operate. These classes model I/O functions, so it's necessary to grasp how things work at the operating-system level to understand the new I/O paradigms. In the main body of this book, it's important to understand the following topics:

• Buffer handling
• Kernel versus user space
• Virtual memory
• Paging
• File-oriented versus stream I/O
• Multiplexed I/O (readiness selection)

1.4.1 Buffer Handling

Buffers, and how buffers are handled, are the basis of all I/O. The very term "input/output" means nothing more than moving data in and out of buffers. Processes perform I/O by requesting of the operating system that data be drained from a buffer (write) or that a buffer be filled with data (read). That's really all it boils down to.
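
In NIO terms, those two requests map directly onto channel calls; here is a minimal sketch of my own of the drain (write) direction, with a placeholder file name:

    import java.io.FileOutputStream;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    public class DrainBufferSketch {
        public static void main(String[] args) throws Exception {
            FileChannel channel = new FileOutputStream("out.dat").getChannel();  // placeholder file

            ByteBuffer buffer = ByteBuffer.wrap("hello, kernel".getBytes("US-ASCII"));

            while (buffer.hasRemaining()) {
                channel.write(buffer);   // ask the operating system to drain our buffer
            }
            channel.close();
        }
    }
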
All data moves in or out of a process by this mechanism. The machinery inside the operating system that performs these transfers can be incredibly complex, but conceptually, it's very straightforward. Figure 1-1 shows a simplified logical diagram of how block data moves from an external source, such as a disk, to a memory area inside a running process. The process requests that its buffer be filled by making the read() system call. This results in the kernel issuing a command to the disk controller hardware to fetch the data from disk. The disk controller writes the data directly into a kernel memory buffer by DMA without further assistance from the main CPU. Once the disk controller finishes filling the buffer, the kernel copies the data from the temporary buffer in kernel space to the buffer specified by the process when it requested the read() operation.

Figure 1-1. Simplified I/O buffer handling

This obviously glosses over a lot of details, but it shows the basic steps involved. Note the concepts of user space and kernel space in Figure 1-1. User space is where regular processes live. The JVM is a regular process and dwells in user space. User space is a nonprivileged area: code executing there cannot directly access hardware devices, for example. Kernel space is where the operating system lives. Kernel code has special privileges: it can communicate with device controllers, manipulate the state of processes in user space, etc. Most importantly, all I/O flows through kernel space, either directly (as described here) or indirectly (see Section 1.4.2).

When a process requests an I/O operation, it performs a system call, sometimes known as a trap, which transfers control into the kernel. The low-level open(), read(), write(), and close() functions so familiar to C/C++ coders do nothing more than set up and perform the appropriate system calls. When the kernel is called in this way, it takes whatever steps are necessary to find the data the process is requesting and transfer it into the specified buffer in user space. The kernel tries to cache and/or prefetch data, so the data being requested by the process may already be available in kernel space. If so, the data requested by the process is copied out. If the data isn't available, the process is suspended while the kernel goes about bringing the data into memory.

Looking at Figure 1-1, it's probably occurred to you that copying from kernel space to the final user buffer seems like extra work. Why not tell the disk controller to send it directly to the buffer in user space? There are a couple of problems with this. First, hardware is usually not able to access user space directly.[2] Second, block-oriented hardware devices such as disk controllers operate on fixed-size data blocks.
The user process may be requesting an oddly sized or misaligned chunk of data. The kernel plays the role of intermediary, breaking down and reassembling data as it moves between user space and storage devices.

[2] There are many reasons for this, all of which are beyond the scope of this book. Hardware devices usually cannot directly use virtual memory addresses.

1.4.1.1 Scatter/gather

Many operating systems can make the assembly/disassembly process even more efficient. The notion of scatter/gather allows a process to pass a list of buffer addresses to the operating system in one system call. The kernel can then fill or drain the multiple buffers in sequence, scattering the data to multiple user space buffers on a read, or gathering from several buffers on a write (Figure 1-2).

Figure 1-2. A scattering read to three buffers

This saves the user process from making several system calls (which can be expensive) and allows the kernel to optimize handling of the data because it has information about the total transfer. If multiple CPUs are available, it may even be possible to fill or drain several buffers simultaneously.
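
In NIO, scatter/gather surfaces as the channel read and write methods that accept an array of buffers; here is a minimal sketch of my own of a scattering read, with a placeholder file name and arbitrary buffer sizes:

    import java.io.FileInputStream;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    public class ScatterSketch {
        public static void main(String[] args) throws Exception {
            FileChannel channel = new FileInputStream("somefile.dat").getChannel();  // placeholder file

            ByteBuffer header = ByteBuffer.allocate(16);
            ByteBuffer body   = ByteBuffer.allocate(1024);
            ByteBuffer[] buffers = { header, body };

            // One call; the kernel can fill both buffers in sequence
            long bytesRead = channel.read(buffers);
            System.out.println("scattered " + bytesRead + " bytes into two buffers");

            channel.close();
        }
    }
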
1.4.2 Virtual Memory

All modern operating systems make use of virtual memory. Virtual memory means that artificial, or virtual, addresses are used in place of physical (hardware RAM) memory addresses. This provides many advantages, which fall into two basic categories:

1. More than one virtual address can refer to the same physical memory location.
2. A virtual memory space can be larger than the actual hardware memory available.

The previous section said that device controllers cannot do DMA directly into user space, but the same effect is achievable by exploiting item 1 above. By mapping a kernel space address to the same physical address as a virtual address in user space, the DMA hardware (which can access only physical memory addresses) can fill a buffer that is simultaneously visible to both the kernel and a user space process. (See Figure 1-3.)

Figure 1-3. Multiply mapped memory space

This is great because it eliminates copies between kernel and user space, but requires the kernel and user buffers to share the same page alignment. Buffers must also be a multiple of the block size used by the disk controller (usually 512 byte disk sectors). Operating systems divide their memory address spaces into pages, which are fixed-size groups of bytes. These memory pages are always multiples of the disk block size and are usually powers of 2 (which simplifies addressing). Typical memory page sizes are 1,024, 2,048, and 4,096 bytes. The virtual and physical memory page sizes are always the same. Figure 1-4 shows how virtual memory pages from multiple virtual address spaces can be mapped to physical memory.

Figure 1-4. Memory pages

1.4.3 Memory Paging

To support the second attribute of virtual memory (having an addressable space larger than physical memory), it's necessary to do virtual memory paging (often referred to as swapping, though true swapping is done at the process level, not the page level). This is a scheme whereby the pages of a virtual memory space can be persisted to external disk storage to make room in physical memory for other virtual pages. Essentially, physical memory acts as a cache for a paging area, which is the space on disk where the content of memory pages is stored when forced out of physical memory.

Figure 1-5 shows virtual pages belonging to four processes, each with its own virtual memory space. Two of the five pages for Process A are loaded into memory; the others are stored on disk.

Figure 1-5. Physical memory as a paging-area cache

Aligning memory page sizes as multiples of the disk block size allows the kernel to issue direct commands to the disk controller hardware to write memory pages to disk or reload them when needed. It turns out that all disk I/O is done at the page level. This is the only way data ever moves between disk and physical memory in modern, paged operating systems.

Modern CPUs contain a subsystem known as the Memory Management Unit (MMU). This device logically sits between the CPU and physical memory. It contains the mapping information needed to translate virtual addresses to physical memory addresses. When the CPU references a memory location, the MMU determines which page the location resides in (usually by shifting or masking the bits of the address value) and translates that virtual page number to a physical page number (this is done in hardware and is extremely fast).

If there is no mapping currently in effect between that virtual page and a physical memory page, the MMU raises a page fault to the CPU. A page fault results in a trap, similar to a system call, which vectors control into the kernel along with information about which virtual address caused the fault. The kernel then takes steps to validate the page. The kernel will schedule a pagein operation to read the content of the missing page back into physical memory. This often results in another page being stolen to make room for the incoming page. In such a case, if the stolen page is dirty (changed since its creation or last pagein) a pageout must first be done to copy the stolen page content to the paging area on disk.

If the requested address is not a valid virtual memory address (it doesn't belong to any of the memory segments of the executing process), the page cannot be validated, and a segmentation fault is generated. This vectors control to another part of the kernel and usually results in the process being killed.

Once the faulted page has been made valid, the MMU is updated to establish the new virtual-to-physical mapping (and if necessary, break the mapping of the stolen page), and the user process is allowed to resume. The process causing the page fault will not be aware of any of this; it all happens transparently. This dynamic shuffling of memory pages based on usage is known as demand paging. Some sophisticated algorithms exist in the kernel to optimize this process and to prevent thrashing, a pathological condition in which paging demands become so great that nothing else can get done.

1.4.4 File I/O

The above is a preview of the first 20 pages.
