Tag Archives: java7

Java 6 EOL extended to Feb 2013

Oracle has extended the Java 6 end-of-life to February 2013. That’s not far away. After that there won’t be any further updates to Java 6, so I suggest we all start moving over to Java 7. Java 7 is now on update 6 and therefore I would say that it’s had enough time to mature as a major release. A lack of updates for Java 6 is a fairly compelling reason to move over, as it is inevitable that Java 6 users will encounter JDK bugs in production, and if they do they will no longer have the luxury of receiving fixes through minor updates. And I’m sure I don’t need to remind you of all the other reasons to move over. 🙂

jsr166e: Upcoming java.util.concurrent APIs for Java 8

Jsr166e is to Java 8 what jsr166y was to Java 7. Jsr166y introduced the fork/join framework and Phaser to Java 7, both of which are worthy of blog posts of their own. The fork/join framework enables us to introduce fine grained inversion of concurrency, whereby we can code logic without really needing to think about, or implement, how that logic will perform on arbitrary hardware.

Now that Java 7 has been released jsr166e has emerged as a repository of utilities that are intended for inclusion into Java 8 next year. Having followed the discussion on the concurrency mailing list I’ve become absolutely fascinated by the work going on in jsr166e for the simple reason that it is catering for use cases that have been directly relevant to my recent work. So without further delay here is an abridged guide to the current state of jsr166e.

Collections

  • ConcurrentHashMapV8: A candidate replacement for java.util.concurrent.ConcurrentHashMap with a lower memory footprint. I’ve yet to explore the exact improvements over the old implementation. Here’s the mail thread announcement and discussion.
  • ConcurrentHashMapV8.MappingFunction: A (well overdue) mechanism for automatically computing a value for a key that doesn’t already have one. I’ve waited a long time for this and in my opinion this is the most basic requirement of a concurrent map as without this you always end up locking to create a new mapping.
  • LongAdderTable: A concurrent counting map where a key is associated with an efficient concurrent primitive counter. This provides significantly improved performance over AtomicLong under high contention as it utilises striping across multiple values. I’ve desperately needed this in my job and I am overjoyed that this has finally been written by the experts group. I’ve been exploring and coding up various implementations of my own of such a class recently but I’d rather have this provided by the JDK. Again a very basic requirement and a class that’s well overdue.
  • ReadMostlyVector: Same as Vector but with reduced contention and better throughput for concurrent reads. I’m a little surprised about this one. Does anyone even use Vector anymore? Why not just replace the underlying implementations of Hashtable and Vector with more performant ones? Is there any backward compatibility constraint that’s restricting this?
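With hindsight, the MappingFunction idea described above is what eventually landed in Java 8 as ConcurrentHashMap.computeIfAbsent. A minimal sketch of atomic mapping creation without external locking (example keys and values are my own):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ComputeIfAbsentExample {
    public static void main(String[] args) {
        ConcurrentMap<String, List<String>> index = new ConcurrentHashMap<>();
        // Atomically create the list for a key on first use - no external lock needed
        index.computeIfAbsent("users", k -> new ArrayList<>()).add("alice");
        index.computeIfAbsent("users", k -> new ArrayList<>()).add("bob");
        System.out.println(index.get("users")); // [alice, bob]
    }
}
```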

Adders

The following adders are essentially high performance concurrent primitive counters that dynamically adapt to growing contention to reduce it. The key value add here is achieved by striping writes across multiple values and aggregating across the stripes on reads.

Again, high performance primitive counters are something I’ve desperately needed in my work lately. Imagine you are implementing client server protocols. You may need message sequence numbers to ensure you can discard out of order or older messages. You might also need request/response id correlation, for which id generation is necessary. For any such id generation I wanted to use primitive longs for efficiency, and as a result needed a high performance primitive long counter, and now I have one!

Important: it’s important to note one limitation of these counting APIs. There are no compound methods like incrementAndGet() or addAndGet(), which significantly reduces the utility of such an API. I can see why this is the case: although writes can stripe across values, a read must act across all striped values and as a result is quite expensive. I therefore need to think about how much this will compromise the use of this API for the use case of an efficient id generator.

  • DoubleAdder: A high performance concurrent primitive double counter.
  • LongAdder: A high performance concurrent primitive long counter.
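With hindsight, these adders shipped in Java 8 as java.util.concurrent.atomic.LongAdder and DoubleAdder. A minimal sketch of the striped counting behaviour under contention (thread and iteration counts are my own):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

public class LongAdderExample {
    public static void main(String[] args) throws InterruptedException {
        LongAdder hits = new LongAdder();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            pool.execute(() -> {
                for (int j = 0; j < 100_000; j++) {
                    hits.increment(); // writes stripe across internal cells under contention
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println(hits.sum()); // 400000 - sum() aggregates across all stripes
    }
}
```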

MaxUpdaters

The following exhibit similar performance characteristics to the adders above but instead of maintaining a count or sum they maintain a maximum value. These also use striped values for writes and reading across striped values to compute aggregate values.

  • DoubleMaxUpdater: A high performance primitive double maximum value maintainer.
  • LongMaxUpdater: A high performance primitive long maximum value maintainer.
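As far as I can tell the max updaters were later generalised in Java 8 into LongAccumulator, which covers the same use case. A minimal sketch of maintaining a running maximum (the example values are my own):

```java
import java.util.concurrent.atomic.LongAccumulator;

public class MaxUpdaterExample {
    public static void main(String[] args) {
        // Long::max combines values; Long.MIN_VALUE is the identity element
        LongAccumulator max = new LongAccumulator(Long::max, Long.MIN_VALUE);
        for (long v : new long[] {3, 41, 7, 19}) {
            max.accumulate(v); // writes stripe under contention, just like the adders
        }
        System.out.println(max.get()); // 41
    }
}
```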

Synchronizers

  • SequenceLock: Finally, jsr166e adds an additional synchronisation utility. This is an interesting class which took me two or three reviews of the javadoc example to understand its value add. Essentially it offers a more accommodating conversation between you and the lock provider: you can choose not to lock and still retain consistent visibility, and, more fundamentally, you can detect when other threads have been active simultaneously with your logic, allowing you to retry your behaviour until your read of any state is completely consistent at that moment in time. I can see what value this adds and how to use it but I need to think about real world use cases for this utility.
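With hindsight, SequenceLock itself didn’t survive in this form, but a closely related optimistic read idea shipped in Java 8 as StampedLock. A minimal sketch of that conversation with the lock provider (class and field names are my own):

```java
import java.util.concurrent.locks.StampedLock;

public class OptimisticReadExample {
    private final StampedLock lock = new StampedLock();
    private double x, y;

    void move(double dx, double dy) {
        long stamp = lock.writeLock();
        try {
            x += dx;
            y += dy;
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    double distanceFromOrigin() {
        long stamp = lock.tryOptimisticRead(); // read without actually locking
        double cx = x, cy = y;
        if (!lock.validate(stamp)) {           // a writer intervened - retry under a real lock
            stamp = lock.readLock();
            try {
                cx = x;
                cy = y;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return Math.sqrt(cx * cx + cy * cy);
    }

    public static void main(String[] args) {
        OptimisticReadExample p = new OptimisticReadExample();
        p.move(3, 4);
        System.out.println(p.distanceFromOrigin()); // 5.0
    }
}
```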

What is still missing?

Sadly, despite the above, Java shows no signs of addressing a number of other real world use cases of mine.

  • Concurrent primitive key maps
  • Concurrent primitive value maps
  • Concurrent primitive key value maps
  • Externalised (inverted) striping utilities that allow you to hash an incoming key to a particular lock across a distribution of locks. This means that you no longer have to lock entire collections but just the lock relevant to the input you are working with. This is absolutely fundamental and essential in my opinion and has already been written by EhCache for their own use but this should ideally be provided as a building block by the JDK.
  • There’s also been a lot of talk about core-striping as opposed to lock striping which I suppose is an interesting need. In other words instead of the distribution of contention being across lock instances they are across representations (IDs) of physical processor cores. Check the mailing list for details.
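For illustration, here’s a minimal hand-rolled sketch of the kind of externalised striping utility described above; the class name, stripe count and bit-spreading step are my own, not anyone’s production code:

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class StripedLocks {

    private final Lock[] stripes;

    StripedLocks(int n) {
        stripes = new Lock[n];
        for (int i = 0; i < n; i++) {
            stripes[i] = new ReentrantLock();
        }
    }

    // Hash the incoming key to one of the stripes so unrelated keys rarely contend
    Lock lockFor(Object key) {
        int h = key.hashCode();
        h ^= (h >>> 16); // spread the hash bits, in the style of HashMap
        return stripes[Math.abs(h % stripes.length)];
    }

    public static void main(String[] args) {
        StripedLocks striping = new StripedLocks(16);
        Lock lock = striping.lockFor("user:42");
        lock.lock();
        try {
            // mutate only the state associated with "user:42"
        } finally {
            lock.unlock();
        }
        System.out.println("locked and unlocked a single stripe");
    }
}
```

The point is that callers lock only the stripe relevant to their input rather than an entire collection.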

Summary

I’m very excited indeed by the additions in jsr166e, not only because they directly address a number of my real world use cases but also because they give an early peek at what’s to come in Java 8. The additional support for primitives is welcome as it will eliminate reliance on ghastly autoboxing and the GC churn of primitive wrappers. I’ll certainly be using these utilities for my own purposes. Keep up the great work! However, I’d love to hear why the use cases under ‘What is still missing?’ above still haven’t seen any activity in Java.

Java 7 loop predication bugs surface and workaround known

Software and bugs always have been and always will be inseparable. Java 7 certainly wasn’t going to be the first exception to this rule. Unsurprisingly, since internal testing can never compete with the testing that takes place through mass adoption, in less than a day after Java 7’s release bugs surfaced with loop predication.

Oracle are aware of the issues and Mark Reinhold has suggested a workaround while they work on fixes for an early update release. Apparently update 1 will contain security fixes only, so the loop fixes are more likely to appear in update 2, though they will try to push them into update 1 if possible. Keep at it Oracle – you have our support.

Supposedly the bugs were found by the Apache Lucene and Solr projects, but I’m sure I’m not the first to wonder why these projects neglected to test with the OpenJDK nightly snapshots and particularly with the release candidate.

Update [01/08/2011]: The following links provide a bit more insight on what happened: ‘The real story behind the Java 7 GA bugs affecting Apache Lucene / Solr’ and ‘Don’t Use Java 7? Are you kidding me?’

Java 7 released!

As if you didn’t know – Java 7 is released (1, 2, 3). As the linked post says it’s been a long five years but hopefully more regular release cycles and expert innovation of the kind we’ve already seen in Java 7 will become the norm and turn the droves of skeptics, cynics and deserters back to believing in Java and the JVM as the supreme platform.

The delay hasn’t been all bad. In fact I think it’s been quite positive in many ways. The lack of growth in Java has fostered innovation in JVM languages trying to plug the inadequacies while also creating new things. It’s also spurred Java’s loyal users to do more with less, to explore alternative languages and paradigms, and to contribute back to Java what they’ve learnt. With Project Lambda, Java 8 has the benefit of hindsight: it can examine, for example, how Scala and Clojure have done things and take the best of all worlds, but at the same time it will need to compete effectively with other languages both on and outside of the JVM. The ubiquitous nature of Java means that it must grow and compete in all directions to continue to be so.

This is, in my opinion, only the beginning, as I’m sure you realise if you look back at what’s gone on in the past year and what is provisionally to come in the next year or two. With Oracle heading Java this is very much a commercial endeavour, and with the first release over the audience is more unrelenting than ever and eagerly awaiting the next.

Oracle celebrates upcoming Java 7 release on video

Oracle recently celebrated the upcoming release of Java 7 with great pomp and show and subsequently made recordings of the event available as a series of videos. If you haven’t already done so, watch the videos in order below and read the blog post. There are also some thoughts on what’s upcoming in Java 8 in the final Q&A video.

It’s great to see Oracle engaging with the community to this extent and so publicly. This could have been just another release but I’m glad it received more publicity and visibility in this way, particularly in giving sub-project leads within Java 7 the recognition they deserve and, I hope, the inspiration to carry on doing their great work. I’ve also subscribed to the Oracle Java Magazine to see what it offers in due time.

Introducing Java 7: Moving Java Forward

Technical breakout sessions

In addition to the main presentation there were also smaller and more specialised technical breakout sessions as below.

Making Heads and Tails of Project Coin, Small Language Changes in JDK 7 (slides)

Divide and Conquer Parallelism with the Fork/Join Framework (slides)

The New File System API in JDK 7 (slides)

A Renaissance VM: One Platform, Many Languages (slides)

Meet the Experts: Q&A and Panel Discussion

Thoughts

A few thoughts that occurred to me having watched the above presentations follow below.

  • In Joe’s presentation I realised just how important good editor support is in prompting developers to adopt the Project Coin proposals over older ways of achieving the same ends. I was very impressed watching NetBeans detect older syntax, prompt the developer with helpful warnings and change old to new syntax instantaneously. I really hope Eclipse does the same. Eclipse has asked for quick fix, refactoring and template suggestions, and in response I would say that, beyond supporting the language itself, the most important addition would be supporting idiomatic transitions from Java 6 to Java 7.
  • Watching Joe Darcy go through how they implemented switch on strings and the associated performance considerations was fascinating. They actually use the hashCode values of the strings to generate offsets and then use the offsets to execute the logic in the original case statements.
  • I found it very cool that Stuart Marks actually retrofitted the existing JDK code to utilise some of the Project Coin features, not by hand but in an automated fashion. Apparently the JDK team used annotation-based processing and NetBeans-based tooling to help them upgrade the JDK codebase to use the new features.
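The switch-on-strings lowering mentioned above can be sketched in plain Java. This is only an illustration of the idea, not the exact code javac emits: a first switch on hashCode() (with equals() guards against collisions) computes a stable offset, and a second switch on that offset runs the original case logic.

```java
public class StringSwitchDesugared {
    public static void main(String[] args) {
        String a = "bar";

        // Roughly what javac does for: switch (a) { case "foo": ... case "bar": ... }
        int offset = -1;
        switch (a.hashCode()) {
            case 101574: // "foo".hashCode()
                if (a.equals("foo")) offset = 0;
                break;
            case 97299:  // "bar".hashCode()
                if (a.equals("bar")) offset = 1;
                break;
        }

        // Second switch executes the logic from the original case statements
        switch (offset) {
            case 0:
                System.out.println("received foo!");
                break;
            case 1:
                System.out.println("received bar!");
                break;
        }
    }
}
```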

Oracle discusses Java 7 & 8 new features on video

NOTE: If this post interests you you should definitely check out the Java 7 celebration launch and the detailed technical breakout session videos here.

Watch the fascinating video discussion (also embedded below) about upcoming features in Java 7 & 8 between Adam Messinger, Mark Reinhold, John Rose and Joe Darcy. They talk about quite a few things that may not be general knowledge and certainly haven’t been committed to officially yet.

  • The upcoming Project Lambda in Java 8 will provide multicore support with parallel execution of lambda code. This makes sense: functions just define units of work, and while developers define how those units of work are composed, they need not be concerned with how they are scheduled or sequenced; that’s the VM’s job. An analogy can be drawn between this and the kinds of things that ParallelArray can do.
  • Java 8 will provide Project Jigsaw – Java’s first modularity feature.
  • Java 8 will provide Collection Literals which will finally eliminate boiler plate collection declaration and access code.
  • Java 8 will merge JRockit into the HotSpot VM. This work has already begun in Java 7 with JMX enhancements and a reduction in the use of permgen space. Permgen will be eliminated altogether in Java 7 updates, taking inspiration from JRockit.
  • Java 8 may have big data support in collections (large collections) – 64 bit support and primitive collections.
  • There was a considerable amount of talk about the new invokedynamic instruction, the first new instruction since Java 5, which will optimise how JVM languages perform and also help with parallelising closures. As I am not a language designer that wasn’t particularly relevant to me, though I’m always happy to gain from evolving VM technology, as that really is the true essence of Java in my opinion.

Needless to say I cannot wait for Java 7, but I absolutely cannot wait for Java 8. As Mark and Joe said, Java 7 is an evolutionary release but Java 8 shall be a revolutionary release. I really must give credit to Oracle for getting all this moving after it remained stagnant for so many years at Sun. My favourite features in Java 7 are Project Coin and NIO.2, and in Java 8 will be Project Lambda and Collection Literals, though any VM features HotSpot gains from JRockit will obviously be significant.

Also, apparently, Java 7 is out tomorrow (7th July) though that post doesn’t actually clarify whether it’s just the party tomorrow or the release as well. I cannot believe Eclipse still isn’t supporting Java 7 when every other major editor has had this for quite some time. I guess I’ll have to fall back to Netbeans for a while (IntelliJ I just can’t figure out how to use).

Update [17/08/2012]: Nice to see this post featured on the StackOverflow discussion titled ‘Does .NET have something like Java’s permgen?‘.

Java SE 7 API Javadocs receive new colour scheme

The Java SE 7 API specification (colloquially known as the javadocs) has received a stylistic facelift. Compare v7 with v6. What do you think? Do you think it’s an improvement? Do you think it was even necessary? My opinion is twofold.

Firstly, although overall the javadocs appear to look nicer and more professional and corporate (as opposed to academic), the method specifications aren’t as visually prominent as they were before because the styles and colours overwhelm the text. Both the styles and the colour of the text are subtle, making methods more difficult to tell apart; it’s not as clear as the bold blue on white that came before. This means the reader will probably have to start reading the text to tell methods apart instead of just glancing at the visual image, which was previously quite striking. A friend of mine also mentioned that there was just too much going on in terms of boxes, and I can see what he means.

Secondly, and this is the more important point: if they had decided to spend time enhancing the javadocs at all, what required the most attention was in fact navigability and searchability. The activity that takes up most of my time when using javadocs is finding the pieces of information I’m interested in. Better indexing and type-ahead retrieval for classes, packages, properties and methods would be immensely useful, rather than relying on the browser’s multiframe search, which can fail at times by searching in the wrong frame. And before anyone mentions it, I’m aware that there are third party sites which do such indexing and retrieval, but I want this officially. So, that’s my 2p, Oracle. I appreciate the time you’ve put into the javadocs but there’s much more room for improvement. Being a purist I really feel that, with this, it is more about content and usability than it is about appearance.

P.S. I think the new Google dark colour scheme and navigation bar is absolutely horrid. I want the old google back! 🙁

JDK7 Project Coin Primer

Although I knew about the small language enhancements going into JDK7, named Project Coin, quite some time back it was only today that I got around to actually coding and trying them all out primarily due to prior laziness and poor editor support. Yes I know – there’s plenty of docs on Project Coin out there already. This is mine.

And now that JDK7 actually has a finite number of steps in its release schedule and is scheduled for release on 28/07/2011, we know our efforts in learning the new feature set are not going to waste, and that very soon we’ll be able to write production code with this knowledge to make the industry a better place, which is ultimately what is important.

Here I present a quick primer for the uninitiated. The small language enhancements going into JDK7 are as follows.

  1. Strings in switch
  2. Binary integral literals
  3. Underscores in numeric literals
  4. Multi-catch and more precise rethrow
  5. Improved type inference for generic instance creation(diamond)
  6. try-with-resources statement
  7. Simplified varargs method invocation

Below I provide one example per feature that will take you through each feature in a flash.

Strings in switch

A long overdue and seemingly basic feature but better late than never.

package name.dhruba.kb.jdk7;

public class StringsInSwitch {

    public static void main(String[] args) {
        for (String a : new String[]{"foo", "bar", "baz"}) {
            switch (a) {
                case "foo":
                    System.out.println("received foo!");
                    break;
                case "bar":
                    System.out.println("received bar!");
                    break;
                case "baz":
                    System.out.println("received baz!");
                    break;
            }
        }
    }
    
}

Binary integral literals

I can’t say I’ve ever felt the absence of this rather unusual feature, though it was evidently felt to be compelling enough to add. It is primarily a readability advantage – a semantic representation.

package name.dhruba.kb.jdk7;

public class BinaryLiterals {

    public static void main(String[] args) {

        // An 8-bit 'byte' literal.
        byte aByte = (byte) 0b00100001;

        // A 16-bit 'short' literal.
        short aShort = (short) 0b1010000101000101;

        // Some 32-bit 'int' literals.
        int anInt1 = 0b10100001010001011010000101000101;
        int anInt2 = 0b101;
        int anInt3 = 0B101; // The B can be upper or lower case.

        // A 64-bit 'long' literal. Note the "L" suffix.
        long aLong = 0b1010000101000101101000010100010110100001010001011010000101000101L;

    }
    
}

Underscores in numeric literals

I’ve often found myself adding javadoc to make a constant declaration clearer. This makes them somewhat clearer which is definitely helpful.

package name.dhruba.kb.jdk7;

public class UnderScoredLiterals {

    public static void main(String[] args) {

        long creditCardNumber = 1234_5678_9012_3456L;
        long socialSecurityNumber = 999_99_9999L;
        float pi = 3.14_15F;
        long hexBytes = 0xFF_EC_DE_5E;
        long hexWords = 0xCAFE_BABE;
        long maxLong = 0x7fff_ffff_ffff_ffffL;
        byte nybbles = 0b0010_0101;
        long bytes = 0b11010010_01101001_10010100_10010010;

    }
    
}

Multi-catch

The multi-catch is very useful indeed and significantly reduces the number of lines of code needed to do such things compared to before.

package name.dhruba.kb.jdk7;

public class MultiCatchException {

    static class Exception1 extends Exception {}
    static class Exception2 extends Exception {}

    public static void main(String[] args) {
        try {
            boolean test = true;
            if (test) {
                throw new Exception1();
            } else {
                throw new Exception2();
            }
        } catch (Exception1 | Exception2 e) {
            // handle both exception types in one block
        }
    }
    
}

More precise exception rethrow

The more precise rethrow is a tricky one to understand. See if you can spot what the new feature is in the example below. Will it compile on pre-java-7? If not how would it need to be changed to compile on pre-java-7? The answers lie after the example.

package name.dhruba.kb.jdk7;

public class MorePreciseExceptionRethrow {

    static class Exception1 extends Exception {}
    static class Exception2 extends Exception {}
    
    public static void main(String[] args) throws Exception1, Exception2 {
        try {
            boolean test = true;
            if (test) {
                throw new Exception1();
            } else {
                throw new Exception2();
            }
        } catch (Exception e) {
            throw e;
        }
    }
    
}

On Java 6, compiling the above gives the following compiler error.

Foo.java:18: unreported exception java.lang.Exception; must be caught or declared to be thrown
            throw e;
            ^
1 error

This can be fixed for Java 6 by changing:

public static void main(String[] args) throws Exception1, Exception2 {

to:

public static void main(String[] args) throws Exception {

So now you see the improvement that Java 7 offers with this feature. You can be more precise in the declaration of the exceptions that you rethrow. Very nice indeed.

Improved type inference for generic instance creation(diamond)

Oh my God. Thank you Oracle for this feature. I breathe a huge sigh of relief. How long has it taken to get this out? How many keystrokes and keyboards have I wasted over the period of my career? And how much of a penalty have the tips of my fingers paid over the years for typing out the right hand side of a generic assignment? This curse is no more. Particularly note the constructor inference, which is also new to Java 7.

package name.dhruba.kb.jdk7;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GenericDiamondTypeInference {
 
    static class MyClass <X> {
        <Y> MyClass(Y y) {
        }
    }
    
    public static void main(String[] args) {
        
        // constructor inference
        // <X> = <Integer>, <Y> = <String>
        MyClass<Integer> myClass1 = new MyClass<>("");
        
        // standard stuff
        List<String> list = new ArrayList<>();
        Map<String, List<String>> map = new HashMap<>();
        
    }
    
}

try-with-resources statement

This is simply beautiful and incredibly reassuring. Resource closing is not only automatic and requires a lot less code, but if multiple exceptions are thrown in the examples below (i.e. one in the closing of the resource and one in the use of the resource) then the latter no longer gets swallowed (unlike pre-Java-7), and you can finally access any suppressed exceptions (i.e. the former) via the new API.

package name.dhruba.kb.jdk7;

public class TryWithResources {

    static class MyResource1 implements AutoCloseable {
        @Override
        public void close() throws Exception {
            System.out.println("MyResource1 was closed!");
        }
    }
    
    static class MyResource2 implements AutoCloseable {
        @Override
        public void close() throws Exception {
            System.out.println("MyResource2 was closed!");
        }
    }


    public static void main(String[] args) throws Exception {
        /*
         * close() methods called in opposite order of creation
         */
        try (MyResource1 myResource1 = new MyResource1();
             MyResource2 myResource2 = new MyResource2()) {}
    }
    
}

NOTE: In the above example the resources are closed in an order opposite to their order of creation.
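The suppressed exception behaviour described above can be sketched with the new Throwable.getSuppressed() API. A minimal sketch (class names and messages are my own):

```java
public class SuppressedExceptions {

    static class FailingResource implements AutoCloseable {
        @Override
        public void close() throws Exception {
            throw new Exception("failed in close()");
        }
    }

    public static void main(String[] args) {
        try {
            try (FailingResource r = new FailingResource()) {
                throw new Exception("failed in body");
            }
        } catch (Exception e) {
            // The exception from the try body wins; close()'s exception is attached to it
            System.out.println(e.getMessage()); // failed in body
            for (Throwable suppressed : e.getSuppressed()) {
                System.out.println("suppressed: " + suppressed.getMessage());
            }
        }
    }
}
```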

Simplified varargs method invocation

This was a very tricky one to investigate. It’s still not clear to me exactly what ‘simplified varargs method invocation’ refers to, but one difference (improvement) I was able to narrow down is a more specific and helpful warning that JDK7 adds to certain varargs code, as below.

package name.dhruba.kb.jdk7;

import java.util.Collections;
import java.util.List;
import java.util.Map;

public class BetterVarargsWarnings {

    static <T> List<T> foo(T... elements) {
        return null;
    }

    static List<Map<String, String>> bar() {
        Map<String, String> m = Collections.singletonMap("a", "b");
        return foo(m, m, m);
    }

}

In Java 6 the above generates the following warning.

/Users/dhruba/NetBeansProjects/SwitchTest/src/name/dhruba/kb/jdk7/BetterVarargsWarnings.java:15: warning: [unchecked] unchecked generic array creation of type java.util.Map<java.lang.String,java.lang.String>[] for varargs parameter
        return foo(m, m, m);
                  ^
1 warning

In Java 7 however an additional warning is generated.

/Users/dhruba/NetBeansProjects/SwitchTest/src/name/dhruba/kb/jdk7/BetterVarargsWarnings.java:9: warning: [unchecked] Possible heap pollution from parameterized vararg type T
    static <T> List<T> foo(T... elements) {
                                ^
  where T is a type-variable:
    T extends Object declared in method <T>foo(T...)
/Users/dhruba/NetBeansProjects/SwitchTest/src/name/dhruba/kb/jdk7/BetterVarargsWarnings.java:15: warning: [unchecked] unchecked generic array creation for varargs parameter of type Map<String,String>[]
        return foo(m, m, m);
                  ^
2 warnings
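Related to this, Java 7 also adds the @SafeVarargs annotation, which suppresses the new declaration-site warning when the method genuinely never misuses its varargs array. A minimal sketch (the method name is my own):

```java
import java.util.Arrays;
import java.util.List;

public class SafeVarargsExample {

    // @SafeVarargs asserts the method never misuses its T... array,
    // silencing the declaration-site heap pollution warning
    @SafeVarargs
    static <T> List<T> listOf(T... elements) {
        return Arrays.asList(elements);
    }

    public static void main(String[] args) {
        List<String> names = listOf("foo", "bar", "baz");
        System.out.println(names); // [foo, bar, baz]
    }
}
```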

Compiling Java 7

  • Download JDK7. I used OpenJDK OS X Build on my Mac.
  • Download Netbeans 7 which is what I used again on the Mac. Or Intellij 10.5.
  • Be happy. There’s no link for that but if you don’t feel this at this point you may want to think of a change of career.

Thanks to LingPipe for linking to this post.

Concurrency Pattern: Finding and exploiting areas of latent parallelism

With the JDK 7 developer preview out and a final release fast approaching, it’s important not only to become aware of what the new version offers but also, in certain areas where existing programming paradigms have radically changed, to make a mental shift in the way we think and to understand how we can best leverage these new paradigms to our advantage. One such area is that of finding and exploiting areas of latent parallelism using a coarse grained parallelism approach.

As I mentioned in my previous post about the JDK7 developer preview being released, we’ve been using jsr166y and extra166y at work for some time now, and this post really stems from an impassioned discussion that took place on finding and exploiting areas of latent parallelism in code. So here’s what I have to say on the matter (inspired obviously by Doug Lea, Brian Goetz and my esteemed colleagues). The traditional and very much outdated mindset has only understood threads and, ever since Java 5, the executor framework on top. However this mechanism is fundamentally limited in its design in the extent of parallelism it can offer.

Firstly, threads are expensive, not only in their creation and stack size allocation but also in terms of context switching between them. Deciding how many threads to have is also always, at best, an educated guess. A particular service within a process may decide to use all available cores, but if every service in the process does the same then you have a disproportionately large number of threads; I have worked with applications with more than 150-200 threads operating at a time. Secondly, the executor framework has helped considerably in taking away some of the decision making from the developer and absorbing that complexity, but it still suffers from heavy contention from multiple threads on the internal queue of tasks that it holds, again adversely impacting performance. Thirdly, threads and executor frameworks normally do not scale up or down based on the hardware that they’re running on, and certainly do not scale based on load. Their performance is very much constant by way of their underlying design.

Enter the fork join framework and parallel arrays. This is not a paragraph about how to use these new features but, in my opinion, a far more important note on how to rid ourselves of a legacy mindset on parallelism and make room for a new one. The fork join framework and parallel arrays (which are backed by the fork join framework and fork join pool internally) should not be perceived only as threading tools. That’s very dangerous as it means that we are only likely to use them in those areas where we previously used threads. They can in fact help us find and exploit areas of latent parallelism.

What does that mean? In all applications there are areas of code that operate sequentially. This code may be thread confined or stack confined and we almost never reconsider the way they perform. With FJ/PA we can now start making these areas concurrent. How is this an improvement? Well FJ/PA offer the following key features which makes them an ideal fit for such a use case.

Firstly, they are fundamentally decoupled from the number of threads in the way they add value, which is a good thing: they tend to perform well regardless of how many threads they are using. Secondly, instead of using a single work queue for all threads they use one work queue per thread, which means further decoupling between threads and the way tasks are stored. Thirdly, given multiple work queues and multiple threads, FJ/PA perform work stealing. Every queue is a double-ended queue, and when one thread has completed all its tasks it starts to process tasks from the tail of another queue; because it is dequeuing off the tail there is no contention on the head of the queue from which the owner of the queue is dequeuing. Not only that, but the largest tasks are placed towards the end of queues, so that when another thread does steal work it gets enough to effectively reduce the interval at which it steals again, thereby further reducing contention. And finally, and most importantly, a given piece of FJ/PA code will not only scale up but effectively scale down, based not only on the hardware it runs on but also on the load of the incoming work. When you understand this new paradigm, suddenly the legacy paradigm seems primitive and fundamentally stunted.
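As a concrete illustration of the divide and conquer style described above, here is my own minimal sketch of summing an array with the fork/join framework (the class name and threshold are my own choices):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ForkJoinSum extends RecursiveTask<Long> {

    private static final int THRESHOLD = 1_000;
    private final long[] values;
    private final int from, to;

    ForkJoinSum(long[] values, int from, int to) {
        this.values = values;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;                 // small enough: compute directly
            for (int i = from; i < to; i++) {
                sum += values[i];
            }
            return sum;
        }
        int mid = (from + to) >>> 1;      // otherwise split; idle workers steal the forked half
        ForkJoinSum left = new ForkJoinSum(values, from, mid);
        ForkJoinSum right = new ForkJoinSum(values, mid, to);
        left.fork();
        return right.compute() + left.join();
    }

    public static void main(String[] args) {
        long[] values = new long[100_000];
        for (int i = 0; i < values.length; i++) {
            values[i] = i + 1;
        }
        long sum = new ForkJoinPool().invoke(new ForkJoinSum(values, 0, values.length));
        System.out.println(sum); // 5000050000
    }
}
```

Note that we fork one half and compute the other in the current thread, which keeps the current worker busy instead of parking it.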

So the next time you are browsing your code consider using jsr166y and extra166y to find and exploit latent areas of parallelism. Generally the rule of thumb should be that this approach works best for operations that are cpu intensive and the legacy paradigm is better for io or network bound operations for obvious reasons. If operations are io or network bound there is less contention and the limitations of the legacy paradigm are less exposed. Don’t forget that the two libraries above can be used in java 6 so there’s no need to wait for java 7!