Tag Archives: google

Java SE 7 API Javadocs receive new colour scheme

The Java SE 7 API specification (colloquially known as javadocs) has received a stylistic facelift. Compare v7 with v6. What do you think? Do you think it’s an improvement? Do you think it was even necessary? My opinion is twofold.

Firstly, although overall the javadocs look nicer and more professional and corporate (as opposed to academic), the method specifications are less visually prominent than before because the styles and colours overwhelm the text. Both the styles and the colours of the text are subtle, making methods harder to tell apart; it’s not as clear as the old bold blue on white. The reader will probably have to start reading the text to distinguish methods rather than just glancing at the page, whose visual layout was previously quite striking. A friend of mine also mentioned that there was just too much going on in terms of boxes, and I can see what he means.

Secondly, and this is the more important point: if they were going to spend time enhancing the javadocs, what required the most attention was in fact navigability and searchability. The activity that takes up most of my time when using javadocs is finding the pieces of information I’m interested in. Better indexing and type-ahead retrieval for classes, packages, properties and methods would be immensely useful, rather than relying on the browser’s multi-frame search, which can fail at times by searching in the wrong frame. And before anyone mentions it, I’m aware that there are third-party sites which do such indexing and retrieval, but I want this officially. So, that’s my 2p, Oracle. I appreciate the time you’ve put into the javadocs but there’s much more room for improvement. Being a purist I really feel that this is more about content and usability than appearance.

P.S. I think the new Google dark colour scheme and navigation bar is absolutely horrid. I want the old google back! 🙁

Concurrency pattern: Concurrent multimaps in Java

Preamble

Maintaining state is one of the most important ways in which we can increase the performance of our code. I don’t mean using a database or some goliath system somewhere. I mean local memory caches. Often they can be more than adequate to allow us to maintain state and provide a dramatic improvement in performance.

Before I start with the content of this post, let me state the obvious, or at least what should be obvious: caching as a whole is an incredibly difficult and inexhaustible area of study and work, which is why dedicated distributed cache providers have sprung up all over the place and why companies normally resort to them in favour of in-memory caches.

However, a combination of in-memory and distributed caches is often in use, and this post focuses on one aspect of in-memory caches: concurrent multimaps in Java. It is the resource I wish I had had when I was tackling this problem numerous times in the past. The post focuses exclusively on copy on write multimap implementations, as that allows the read operation to be lock free, which can be a significant advantage depending on what you want to do on read.

Singular value caches

When an in-memory cache is desired one always resorts to some kind of map structure. If you’re storing singular key value pairs then creating a cache can be as easy as picking a map implementation, though you still have the check-then-act operation of checking whether a value exists, returning it if so, and otherwise populating and then returning it, which can result in some blocking. Nonetheless these problems have already been solved a thousand times over.
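In its naive form the check-then-act race looks like the sketch below. This is a deliberately broken illustration; the class and the `computeValue` method are hypothetical names, not from any library.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/*
 * Deliberately naive check-then-act cache. Even though the map itself is
 * thread safe, the get/put sequence is not atomic: two threads can both
 * observe a missing key and both compute the value, and the last put wins.
 * computeValue is a hypothetical stand-in for an expensive computation.
 */
class NaiveCache<K, V> {

    private final ConcurrentMap<K, V> cache = new ConcurrentHashMap<K, V>();

    public V get(K key) {
        V value = cache.get(key);      // check
        if (value == null) {
            value = computeValue(key); // may run concurrently in two threads
            cache.put(key, value);     // act: last writer wins
        }
        return value;
    }

    protected V computeValue(K key) {
        return null; // placeholder for the expensive computation
    }
}
```

Under concurrent access two threads can interleave between the check and the act, so the expensive computation may run twice for the same key; the memoizing MapMaker cache below avoids exactly this.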

For example, Google Guava’s MapMaker provides an excellent implementation of the memoization pattern for a cache, as follows, which is probably the most complex case of a simple singular key value pair cache.

package name.dhruba.kb.concurrency.mapmaker;

import java.util.concurrent.ConcurrentMap;

import com.google.common.base.Function;
import com.google.common.collect.MapMaker;

public class CacheWithExpensiveValues<K, V> {

    private final ConcurrentMap<K, V> cache = new MapMaker().makeComputingMap(new Function<K, V>() {
        @Override
        public V apply(K input) {
            return acquireExpensiveCacheValue();
        }
    });

    public V get(K key) { return cache.get(key); }
    private V acquireExpensiveCacheValue() { return null; }

}

This implementation, similar in concept to the memoization pattern put forward by Brian Goetz in Java Concurrency in Practice, guarantees that a value for a given key is acquired/resolved only once during the lifetime of the cache, which can be very useful if creating/computing the value is an expensive call. Threads which request an uncomputed value while it is being computed wait until the computation already in progress is finished.

This can be said to be strongly consistent in its guarantees. If you are willing to compromise on the guarantees that a cache makes, making it a weakly consistent cache, you may be able to achieve faster performance in some cases by relying solely on atomic CAS operations.
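A minimal sketch of such a weakly consistent, CAS-only cache might look like this (illustrative names; uses the JDK 8 `java.util.function.Function` for brevity):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

/*
 * Weakly consistent cache sketch relying solely on the CAS-based
 * putIfAbsent. Two threads racing on an uncomputed key may both compute
 * the value, but only one result is ever published and both callers see
 * that one result. No locks are taken anywhere.
 */
class WeaklyConsistentCache<K, V> {

    private final ConcurrentMap<K, V> cache = new ConcurrentHashMap<K, V>();
    private final Function<K, V> computer;

    public WeaklyConsistentCache(Function<K, V> computer) {
        this.computer = computer;
    }

    public V get(K key) {
        V value = cache.get(key);
        if (value == null) {
            value = computer.apply(key);                // may run more than once per key
            V existing = cache.putIfAbsent(key, value); // CAS: first writer wins
            if (existing != null) {
                value = existing;                       // we lost the race; discard our copy
            }
        }
        return value;
    }
}
```

The trade-off versus the strongly consistent version is that the expensive computation may occasionally run more than once per key. On JDK 8 and later, `ConcurrentHashMap.computeIfAbsent` restores the at-most-once guarantee directly.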

Multi-value caches

A standard singular key value pair cache is fairly straightforward these days. But what happens if you suddenly realise that you actually need multiple values per key? There’s a little more to it than first meets the eye. If you are well informed about what’s out there then, as a knee-jerk reaction, you might immediately think of Google Guava multimaps and create something similar to the example below.

package name.dhruba.kb.concurrency.guava;

import java.util.List;

import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.ListMultimap;
import com.google.common.collect.Multimaps;

public class MultiMapCacheExample<K, V> {

    private final ListMultimap<K, V> cache = Multimaps.synchronizedListMultimap(ArrayListMultimap
            .<K, V> create());

    public List<V> get(K k) {
        return cache.get(k);
    }

    public List<V> remove(K k) {
        return cache.removeAll(k);
    }

    public void put(K k, V v) {
        cache.put(k, v);
    }

    public boolean remove(K k, V v) {
        return cache.remove(k, v);
    }

}

However, the astute programmer soon realises the inadequacies of such a solution. The synchronised wrapper pattern here is very similar to that used in the JDK: it synchronises every call in the interface, and it synchronises each method in its entirety, meaning that all paths through any given method must contend for and acquire a lock. To put it another way, no path of execution through any method is non-blocking.

As a result, this implementation is likely to perform very poorly under heavy concurrent load. There will be a lot of contention and the cache will only be able to serve one operation at a time. So where do we go from here? Googling didn’t bring much success on the subject of concurrent multimaps in Java when I looked for what was out there already, so I decided to explore this area from first principles. Below I present the process of iteratively developing an efficient concurrent multimap, over the course of a few implementations making it eventually as non-blocking as possible.

It’s interesting to read why Google Guava have not and will not implement a concurrent multimap though I’m not sure I agree. I think a couple of general purpose concurrent multimaps or at the very least a copy on write multimap would be of value to the public as I’ve seen this pattern quite a lot over the years. But admittedly it wouldn’t just be one implementation. It would need to support a range of backing collections.

Concurrent multimaps

In the following example implementations I present only the most important mutative calls in the map interface, as they are the most challenging and the best for illustration. Bear in mind the following design considerations as you read through the implementations.

  • Mutative versus an immutable copy on write approach
  • Size of critical sections and thereby the degree of blocking
  • Strongly consistent or weakly consistent in mutual exclusion guarantees
  • When removing the last value for a key should the multimap remove the key and associated empty collection?

Weakly consistent implementations are very common in the industry but are prone to interleaving. For example, if a put() is in progress and someone calls remove() on the key altogether then, after the remove has completed, the put() may put the key value association back in, which may not be desirable at all. Or the put may add to a value collection that is no longer referenced because the key has been removed. These methods should ideally be mutually exclusive, and the final implementation achieves this quality. Bear in mind, though, that for certain use cases weakly consistent guarantees are acceptable, and it is for you to say what is and isn’t acceptable for your use case.
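The hazard can be made concrete by replaying the interleaving by hand in a single thread against a lock-free multimap built from `ConcurrentHashMap` and `CopyOnWriteArrayList` (a deterministic simulation of the race, not a real multi-threaded run; all names here are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CopyOnWriteArrayList;

/*
 * Deterministic replay of the put()/remove() interleaving. "Thread A" is
 * a put("key", "v2") paused after it has looked up the value list;
 * "thread B" then removes the key outright; A resumes and adds to a list
 * that is no longer reachable from the map, so the value is silently lost.
 */
class InterleavingDemo {

    static boolean valueVisibleAfterInterleaving() {
        ConcurrentMap<String, List<String>> cache =
                new ConcurrentHashMap<String, List<String>>();
        cache.put("key", new CopyOnWriteArrayList<String>(Arrays.asList("v1")));

        // Thread A (put): step 1 - look up the list for the key
        List<String> listSeenByA = cache.get("key");

        // Thread B: remove the key and its entire collection
        cache.remove("key");

        // Thread A (put): step 2 - add to the now-orphaned list
        listSeenByA.add("v2");

        // "v2" was accepted without error but no reader can ever see it
        return cache.containsKey("key");
    }

    public static void main(String[] args) {
        System.out.println(valueVisibleAfterInterleaving()); // prints false
    }
}
```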

Fully blocking multimap

The fully blocking implementation is equivalent to the synchronised wrapper approach because it synchronises all the methods in their entirety. This is without doubt the poorest-performing implementation, though on the plus side it allocates minimally, unlike the copy on write implementations that follow.

Advantages
  • Strongly consistent.
  • Doesn’t allocate any more than it needs to (unlike the copy on write pattern).
Disadvantages
  • Very poor performance.
  • Uses a hashmap which isn’t thread safe so offers no visibility guarantees.
  • All calls – reads/writes are blocking.
  • All paths through the blocking calls are blocking.
package name.dhruba.kb.concurrency.multimap;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FullyBlockingMutativeArrayListMultiMap<K, V> {

    private final Map<K, List<V>> cache = new HashMap<K, List<V>>();

    public synchronized List<V> get(K k) {
        return cache.get(k);
    }

    public synchronized List<V> remove(K k) {
        return cache.remove(k);
    }

    public synchronized void put(K k, V v) {
        List<V> list = cache.get(k);
        if (list == null) {
            list = new ArrayList<V>();
            cache.put(k, list);
        }
        list.add(v);
    }

    public synchronized boolean remove(K k, V v) {
        List<V> list = cache.get(k);
        if (list == null) {
            return false;
        }
        if (list.isEmpty()) {
            cache.remove(k);
            return false;
        }
        boolean removed = list.remove(v);
        if (removed && list.isEmpty()) {
            cache.remove(k);
        }
        return removed;
    }

}

Copy on write multimap using synchronisation

This is an initial implementation of a copy on write approach, but without using the JDK copy on write collections. It is strongly consistent but still synchronises too much on writes.

Advantages
  • Strongly consistent.
  • Uses concurrent hash map so we can have non-blocking read.
Disadvantages
  • The synchronisation lock blocks on the entire cache.
  • The blocking calls are entirely blocking so all paths through them will block.
  • Concurrent hash map is blocking itself although at a fine grained level using stripes.
package name.dhruba.kb.concurrency.multimap;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CopyOnWriteArrayListMultiMap<K, V> {

    private final ConcurrentMap<K, List<V>> cache = new ConcurrentHashMap<K, List<V>>();

    public List<V> get(K k) {
        return cache.get(k);
    }

    public synchronized List<V> remove(K k) {
        return cache.remove(k);
    }

    public synchronized void put(K k, V v) {
        List<V> list = cache.get(k);
        if (list == null || list.isEmpty()) {
            list = new ArrayList<V>();
        } else {
            list = new ArrayList<V>(list);
        }
        list.add(v);
        cache.put(k, list);
    }

    public synchronized boolean remove(K k, V v) {
        List<V> list = cache.get(k);
        if (list == null) {
            return false;
        }
        if (list.isEmpty()) {
            cache.remove(k);
            return false;
        }
        // copy before mutating so lock-free readers never see an in-place change
        List<V> copy = new ArrayList<V>(list);
        boolean removed = copy.remove(v);
        if (removed) {
            if (copy.isEmpty()) {
                cache.remove(k);
            } else {
                cache.put(k, copy);
            }
        }
        return removed;
    }

}

Copy on write multimap but using the JDK CopyOnWriteArrayList

Here we opt to use the copy on write array list from the JDK. There is no synchronisation in the class itself (only within the backing structures) but it is dangerously prone to interleaving and therefore weakly consistent. Personally I wouldn’t be happy about put() and remove() not being mutually exclusive and interleaving through each other; that to me would be unacceptable. Amazingly, I’ve seen this implementation all too often at work.

Advantages
  • Uses {@link ConcurrentHashMap} for thread safety and visibility.
  • Uses {@link CopyOnWriteArrayList} for list thread safety and visibility.
  • No blocking in class itself. Instead the backing jdk classes handle blocking for us.
  • Blocking has been reduced to key level granularity instead of being at the cache level.
Disadvantages
  • Prone to interleaving. It is weakly consistent and does not guarantee mutually exclusive and atomic calls. The {@link #remove(K)} call can interleave through the lines of the put method, and key value pairs can potentially be added back in if a {@link #remove(K)} is called part way through the {@link #put(K,V)} call. To be strongly consistent, the {@link #remove(K)} and {@link #put(K,V)} calls need to be mutually exclusive.
package name.dhruba.kb.concurrency.multimap;

import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class JdkCopyOnWriteArrayListMultiMap<K, V> {

    private final ConcurrentMap<K, List<V>> cache = new ConcurrentHashMap<K, List<V>>();

    public List<V> get(K k) {
        return cache.get(k);
    }

    public List<V> remove(K k) {
        return cache.remove(k);
    }

    public void put(K k, V v) {
        List<V> list = cache.get(k);
        if (list == null) {
            list = new CopyOnWriteArrayList<V>();
            List<V> oldList = cache.putIfAbsent(k, list);
            if (oldList != null) {
                list = oldList;
            }
        }
        list.add(v);
    }

    public boolean remove(K k, V v) {
        List<V> list = cache.get(k);
        if (list == null) {
            return false;
        }
        if (list.isEmpty()) {
            cache.remove(k);
            return false;
        }
        boolean removed = list.remove(v);
        if (removed && list.isEmpty()) {
            cache.remove(k);
        }
        return removed;
    }

}

Partially blocking copy on write multimap

So from the previous implementation we return to a strongly consistent implementation, but this time we block only on certain paths through the put() and remove() methods, at the cost of a little additional allocation. However, the lock used is still a global one, which means operations on different keys become sequential, which is obviously not desirable.

Advantages
  • Strongly consistent.
  • Use of {@link ConcurrentHashMap} for thread safety and visibility guarantees.
  • The {@link #get(Object)} and {@link #remove(Object)} calls don’t block at all in this class.
  • The {@link #put(Object, Object)} and {@link #remove(Object, Object)} methods do block but only for certain paths. There are paths through these methods which won’t block at all. The {@link #put(Object, Object)} method only blocks if the {@link ConcurrentHashMap#putIfAbsent(Object, Object)} fails and the {@link #remove(Object, Object)} only blocks if there is something there to remove.
Disadvantages
  • We allocate a list initially in the {@link #put(Object, Object)} which may not be needed.
  • {@link ConcurrentHashMap} still blocks although at a finer level using stripes.
  • The blocking synchronisation we are using is still blocking the entire cache. What we really want is to block only the keys that hash to the value bucket that we are currently working with. A more fine grained blocking strategy is called for which we’ll see in the next implementation.
package name.dhruba.kb.concurrency.multimap;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PartiallyBlockingCopyOnWriteArrayListMultiMap<K, V> {

    private final ConcurrentMap<K, List<V>> cache = new ConcurrentHashMap<K, List<V>>();

    public List<V> get(K k) {
        return cache.get(k);
    }

    public List<V> remove(K k) {
        synchronized (cache) {
            return cache.remove(k);
        }
    }

    public void put(K k, V v) {
        List<V> list = Collections.singletonList(v);
        List<V> oldList = cache.putIfAbsent(k, list);
        if (oldList != null) {
            synchronized (cache) {
                list = cache.get(k);
                if (list == null || list.isEmpty()) {
                    list = new ArrayList<V>();
                } else {
                    list = new ArrayList<V>(list);
                }
                list.add(v);
                cache.put(k, list);
            }
        }
    }

    public boolean remove(K k, V v) {
        List<V> list = cache.get(k);
        if (list == null) {
            return false;
        }
        synchronized (cache) {
            list = cache.get(k);
            if (list == null) {
                return false;
            }
            if (list.isEmpty()) {
                cache.remove(k);
                return false;
            }
            // copy before mutating: the published list may be the immutable
            // singleton from put() and must never be changed in place
            List<V> copy = new ArrayList<V>(list);
            boolean removed = copy.remove(v);
            if (removed) {
                if (copy.isEmpty()) {
                    cache.remove(k);
                } else {
                    cache.put(k, copy);
                }
            }
            return removed;
        }
    }

}

Striped lock copy on write multimap

The final implementation – strongly consistent, with a non-blocking backing structure, fine-grained locking at key level and locking only on necessary paths. This example uses a striped lock provider. Its purpose is to take a key as input and provide a lock as output to lock on; it is consistent in that it always provides the same lock for the same key, guaranteeing the mutual exclusion that is necessary.

It takes the number of locks desired as a constructor input (2048 by default), which means we can decide how many locks to make available in the distribution. It then hashes keys consistently across that distribution of locks, and provides a better key distribution than non-concurrent hash maps. The concept behind the striped lock provider and its implementation is a very interesting topic which will form a post of its own in the future. Stay tuned!
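Pending that post, here is a minimal sketch of what such a provider could look like. The class name and the 2048 default mirror the text, but the hashing details of the real IntrinsicStripedLockProvider are assumptions.

```java
/*
 * Sketch of a striped lock provider: a fixed, power-of-two-sized array of
 * plain monitor objects. A key's hash is spread (as ConcurrentHashMap does)
 * and masked to pick a stripe. The same key always yields the same lock;
 * distinct keys may share one, which is safe, merely coarser.
 */
class StripedLockProvider {

    private final Object[] locks;

    public StripedLockProvider() {
        this(2048); // default distribution size from the text
    }

    public StripedLockProvider(int numLocks) {
        int size = 1;
        while (size < numLocks) {
            size <<= 1; // round up to a power of two so we can mask, not mod
        }
        locks = new Object[size];
        for (int i = 0; i < size; i++) {
            locks[i] = new Object();
        }
    }

    public Object getLockForKey(Object key) {
        int h = key.hashCode();
        h ^= (h >>> 16); // spread high bits down into the low bits
        return locks[h & (locks.length - 1)];
    }
}
```

The masking also handles negative hash codes, and sharing a stripe between two keys only coarsens locking slightly; it never breaks mutual exclusion for any single key.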

Advantages
  • Strongly consistent. Implements correct mutual exclusion of calls.
  • Uses {@link NonBlockingHashMap} instead of {@link ConcurrentHashMap} so the backing cache member does not block at all. Far more efficient and scalable than {@link ConcurrentHashMap}.
  • The read calls are completely non-blocking even at the cache structure level.
  • The {@link #put(Object, Object)} and {@link #remove(Object, Object)} methods do block but only for certain paths. There are paths through these methods which won’t block at all. The {@link #put(Object, Object)} method only blocks if the {@link NonBlockingHashMap#putIfAbsent(Object, Object)} fails and the {@link #remove(Object, Object)} only blocks if there is something there to remove.
  • And to save the best for last – there is no longer any blocking at the cache level. We now apply mutual exclusion only at the key level.

This implementation has the best of all worlds really as long as the copy on write approach is acceptable to you.

Disadvantages
  • Fundamentally being a copy on write approach it does more allocation than a mutative approach.
package name.dhruba.kb.concurrency.multimap;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ConcurrentMap;

import name.dhruba.kb.concurrency.striping.IntrinsicStripedLockProvider;

import org.cliffc.high_scale_lib.NonBlockingHashMap;

public class StripedLockArrayListMultiMap<K, V> {

    private final IntrinsicStripedLockProvider stripedLockProvider = new IntrinsicStripedLockProvider();
    private final ConcurrentMap<K, List<V>> cache = new NonBlockingHashMap<K, List<V>>();

    public List<V> get(K k) {
        return cache.get(k);
    }

    public List<V> remove(K k) {
        Object lock = stripedLockProvider.getLockForKey(k);
        synchronized (lock) {
            return cache.remove(k);
        }
    }

    public void put(K k, V v) {
        List<V> list = Collections.singletonList(v);
        List<V> oldList = cache.putIfAbsent(k, list);
        if (oldList != null) {
            Object lock = stripedLockProvider.getLockForKey(k);
            synchronized (lock) {
                list = cache.get(k);
                if (list == null || list.isEmpty()) {
                    list = new ArrayList<V>();
                } else {
                    list = new ArrayList<V>(list);
                }
                list.add(v);
                cache.put(k, list);
            }
        }
    }

    public boolean remove(K k, V v) {
        List<V> list = cache.get(k);
        if (list == null) {
            return false;
        }
        Object lock = stripedLockProvider.getLockForKey(k);
        synchronized (lock) {
            list = cache.get(k);
            if (list == null) {
                return false;
            }
            if (list.isEmpty()) {
                cache.remove(k);
                return false;
            }
            // copy before mutating: the published list may be the immutable
            // singleton from put() and must never be changed in place
            List<V> copy = new ArrayList<V>(list);
            boolean removed = copy.remove(v);
            if (removed) {
                if (copy.isEmpty()) {
                    cache.remove(k);
                } else {
                    cache.put(k, copy);
                }
            }
            return removed;
        }
    }

}

Conclusion

When designing concurrent structures it is important not to resort blindly to what’s out there, and for custom concurrent data structures there will be no ready-made solution. For those cases concurrent patterns such as this are invaluable, and best practices – reducing critical sections, limiting blocking to the paths that need it, reducing the granularity of the locks being used and selecting the right backing structures – are absolutely key to an efficient concurrent data structure. If you have any feedback on how to do better, or if I’ve made any mistakes, please do let me know. Enjoy and thanks for reading.

Links

FYI – it seems Joe Kearney has done an alternative implementation that does not rely on copy on write.

Update [05/07/2011]: Code updated with bug fixes for edge cases.
Update [17/04/2013]: After a long wait I finally fixed the bug that Charlie reported below. Thanks Charlie!

Presentation: Development at the Speed and Scale of Google

Since I’ve never had the good fortune of being able to afford QCon (one day this will change), I appreciate that InfoQ posts QCon videos online for free, albeit late. Recently I watched ‘Development at the Speed and Scale of Google‘.

Prior to watching this presentation I knew only what I had encountered in the wider industry and really could not have foreseen any of what I was about to watch. The tools that I use on a daily basis and the difficulties that impede me now both seem primitive and outdated in comparison to the progress Google has made. The key point is that the presentation is not about development itself but about what makes it possible to develop at the speed and scale of Google: in this case, build and release engineering.

Highlights from the talk that I found worthy of note are listed below.

  • Working on build and engineering tools requires strong computer science skills and as such the best people.
  • We cannot improve what we cannot measure. Measure everything. This, in my opinion, is a fantastic quote. This stops a team going off on open ended endeavours that yield either intangible or no results.
  • Compute intensive IDE functions have been migrated to the cloud such as generating and searching indexes for cross referencing types across a large codebase.
  • The codebase required for building and running tests is generally larger than the part being worked upon, but delivering the entire codebase to every developer, in either source or binary form, would kill the network. Instead, a daemon detects when surrounding code is required, via a FUSE (user space) filesystem, and retrieves it incrementally on demand.
  • For similar reasons they’ve developed a virtual filesystem under Eclipse and contributed it back. The obvious benefit is that directly importing a large codebase into Eclipse kills it, whereas incremental loads perform well.
  • They build from source, not binaries, and maintain an extremely stable trunk from which they release. If you consider that all code is in a single repository (in fact the largest Perforce repository in the world) then it really puts into perspective the achievement of releasing only from trunk.
  • The designated owners for a given project who review code have at their fingertips all the intelligence metadata on the code to assist them in the reviewing process. If you think about it that makes a lot of sense. To review you need more than just the code to spend your time effectively. You may want the output of introspection, test runs etc.
  • Compilations are distributed and parallelised in the cloud and output is aggressively cached. It’s fascinating to hear a case study where this has actually been implemented. I’ve often considered remote compilations but never come across a concrete implementation until now.

The importance of build and release engineering is often underestimated. It is often portrayed and perceived as an area of work that’s second class in nature and rather unglamorous. However, as this talk attests, it is very much the contrary. It can massively boost developer and organisational productivity and efficiency and requires the best people. I’ll end with a quote from the presenter: “Every developer worth their salt has worked on a build system at least once”.

Android for Java Developers Talk

A quick reminder that there is a London Java Community talk on ‘Android for Java Developers‘ on Thu July 23 at 7pm at Google London. I’ll be there.

Update: post-talk impressions. Android was cool but, given that it should have been a wholly developer-oriented talk, there was unfortunately rather a lot of marketing and PR, and it overran quite a bit.

HTC Hero & Android – What’s the value add?

I just watched the official video on the HTC Hero and I can’t help wondering: what is the value add? Let me say at the outset that I admire the Android platform and the handsets adopting it, and that what I’m about to say is in no way a criticism. However I would like to ask: how will Android and its handsets distinguish themselves in the shadow of the iPhone? The fundamental problem they face is that it has all been done before by Apple, who have had the added advantage of refining and maturing their product over time. The typical characteristics of the OS and the handset marketed in that video – touchscreen, seamless integration with the internet and catering for all possible needs of the user – have all been done before, and replicating that unfortunately gives the impression of a lack of originality.

Android attempts to add more animation and eye candy, but it’s all too easy to overdo that. The only real unique value add I can see is that it allows you to fully customise your desktop; that is one of the annoyances of the iPhone and a real plus point of Android. Some might say that the platform being open and supporting multiple languages including Java is a major win. However this cannot be the defining value add: if you have no consumers, who will you develop for? For the common consumer there has to be very real, tangible value over alternative platforms, and sadly, having come much later than the iPhone and being still relatively new, Android is in my opinion at a serious disadvantage. It nevertheless shows more promise than Symbian and other non-Apple platforms, and in due time I’m sure it will have overcome this difficulty to an extent and established its place in the market. I look forward to its progress. If I can get my hands on an Android handset cheaply maybe I’ll even try my hand at some development.

Jerry Yang to step down as CEO of Yahoo

So it seems that the series of recent mishaps at Yahoo! have finally come to a head and had a collective consequence: the departure (techblog) of Jerry Yang as CEO of Yahoo! The breakdown of a struck deal with Google and the repeated failure to reach an agreement on the takeover of Yahoo! by Microsoft severely damaged Yang’s and Yahoo!’s reputations and attracted a great deal of criticism from both shareholders and the public. Recently, Yahoo! was even seen actively trying to pursue the deal with Microsoft. Although Yang was CEO for only 18 months, what makes this news surprising is that he was a co-founder of Yahoo!. The irony is that shares in Yahoo! soared 10% upon the news surfacing, in the hope that the deal with Microsoft may yet be struck after his departure.

Google and T-Mobile release G1

Today Google and T-Mobile released the first Android handset in the UK. It is called the ‘G1‘ and under the hood is an HTC Dream. The user experience at first glance looks highly impressive, although unsurprisingly it falls far short of the usability of the Apple iPhone. The basic idea appears to be that you sign in with your Google account and all Google services become available immediately without you needing to sign in again. As everything is synced with the internet, losing the phone means you lose only the hardware and not the data.

The user interface lacks the finesse and polish of the iPhone’s and is distinctly dull. However it provides great utility through all the usual Google provisions. The one strength Android has over the iPhone is an open development platform, which rather excitingly is based on the Java language and an Eclipse plugin, although Google have written their own virtual machine, Dalvik, optimised for embedded use. The lack of a standard headphone socket has annoyed some, however.

A friend of mine said earlier today that he will be getting one of these instead of an iPhone, and my response to him is what prompted this post. In my reply I pointed out that T-Mobile has the worst network coverage of all four networks in the UK, and that I had learnt the hard way, after purchasing the iPhone, that mobile applications are only as good as the network coverage and conditions allow them to be.

Today I struggled to do anything whatsoever on my iPhone while sitting in Starbucks. 2G and 3G coverage were both intermittent and neither worked reliably. The wifi was provided by T-Mobile/BT OpenZone and (surprisingly) required payment, and there was no free ‘Cloud‘ hotspot available. As such, all the applications I wanted to use, and tried in vain to use, were absolutely useless. It’s surprising just how little value and utility an iPhone offers when offline – and by offline I include the frequent times when voice reception is absent or intermittent at best. I will definitely be reviewing my choice in 18 months, at which point I hope that Android, the iPhone OS and the mobile networks will have learnt to work together more harmoniously for the betterment of the customer.

Update: With the release of Android handsets every move of either Android or iPhone will be compared with its competitor. This is already beginning to happen as Apple further constrains SDK developers’ free speech.