Comparison of Linux compression commands

Recently I had to back up a virtual machine server I had in the US before decommissioning it. As I had a lot of large folders, I realised that compressing them would take some time. Being the lazy administrator that I am, I took some time out to find the fastest compression command on Linux before I began my backup process.

Here is what I found for a 117MB folder.


dhruba : /backup # time tar -cjf root.home.tbz /root/
real    0m34.600s
user    0m34.040s
sys     0m0.357s


dhruba : /backup # time tar -czf root.home.tgz /root/
real    0m6.255s
user    0m5.672s
sys     0m0.415s


dhruba : /backup # time tar --use-compress-program=lzop -cf root.home.tlzo /root/
real    0m2.624s
user    0m2.061s
sys     0m0.383s

The file sizes for the three commands were as follows.

dhruba : /backup # ls -lShr
-rw-r--r-- 1 root root 81M Dec 14 20:46 root.home.tbz
-rw-r--r-- 1 root root 83M Dec 14 20:43 root.home.tgz
-rw-r--r-- 1 root root 88M Dec 14 20:43 root.home.tlzo

As you can see, lzop is by far the fastest but produces the largest compressed file. If, like me, this is the trade-off you are looking for, then lzop is your answer.
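lzop has no binding in the Python standard library, but the same speed-versus-size trade-off can be reproduced with the stdlib codecs zlib (the algorithm gzip uses), bz2 and lzma. The sketch below is illustrative only; absolute timings depend entirely on the data and the machine.

```python
import bz2
import lzma
import time
import zlib

# Highly compressible sample data standing in for the backup folder.
data = b"the quick brown fox jumps over the lazy dog\n" * 50_000

for name, compress in [("zlib", zlib.compress),
                       ("bz2", bz2.compress),
                       ("lzma", lzma.compress)]:
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed:.3f}s, {len(out):,} bytes from {len(data):,}")
```

Run it on your own data to see which codec sits at the point of the trade-off you care about.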

How to print assembly for your Java code in OS X

0. Write program.

package name.dhruba;

public class Main {
  public static void main(String[] args) {
    System.out.println("Hello World!");
  }
}

1. Add JVM arguments to your program.

-XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly

2. Run your program. You will probably see the output below.

Java HotSpot(TM) 64-Bit Server VM warning: PrintAssembly is enabled; turning on DebugNonSafepoints to gain additional output
Could not load hsdis-amd64.dylib; library not loadable; PrintAssembly is disabled
Hello World!
Process finished with exit code 0

3. Download the missing library from here. The direct link to the lib file itself is here. I downloaded the one named ‘gnu-bsd-libhsdis-amd64.dylib’ as I’m running 64-bit. This produces a file on your system called ‘hsdis-amd64.dylib’.

4. Move it to where the JDK you are running looks for it. I was using Java 8.

sudo mv ~/Downloads/hsdis-amd64.dylib /Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/lib/hsdis-amd64.dylib

5. Run your program again and see assembly! The output for Hello World is huge so I can’t paste all of it here but here’s the initial bit where you can see that the disassembler has been loaded.

Java HotSpot(TM) 64-Bit Server VM warning: PrintAssembly is enabled; turning on DebugNonSafepoints to gain additional output
Loaded disassembler from /Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/lib/hsdis-amd64.dylib
Decoding compiled method 0x000000010f4c23d0:
[Disassembling for mach='i386:x86-64']
[Entry Point]
  # {method} {0x00000001280fe660} 'arraycopy' '(Ljava/lang/Object;ILjava/lang/Object;II)V' in 'java/lang/System'
  # parm0:    rsi:rsi   = 'java/lang/Object'
  # parm1:    rdx       = int
  # parm2:    rcx:rcx   = 'java/lang/Object'
  # parm3:    r8        = int
  # parm4:    r9        = int
  #           [sp+0x60]  (sp of caller)
  0x000000010f4c2540: mov    0x8(%rsi),%r10d
  0x000000010f4c2544: shl    $0x3,%r10
  0x000000010f4c2548: cmp    %r10,%rax
  0x000000010f4c254b: je     0x000000010f4c2558
  0x000000010f4c2551: jmpq   0x000000010f407b60  ;   {runtime_call}
  0x000000010f4c2556: xchg   %ax,%ax

Credit: Nitsan Wakart.

Upcoming Coursera course: Principles of Reactive Programming

Just a quick note to say that a new Coursera course titled ‘Principles of Reactive Programming’, taught by Martin Odersky, Erik Meijer and Roland Kuhn, begins on 4th Nov 2013 and looks very interesting indeed. It is intended as a follow-on course to its precursor, ‘Functional Programming Principles in Scala’, also taught by Martin Odersky, which concludes right when the new one starts. To hear the instructors give an introduction to the new course watch this video.

Scala Cookbook: Simplifying nested pattern matching

Pattern matching in Scala is a natural and intuitive way of expressing branched logic and can greatly improve the readability of complex branching. But what if you need to branch within a branch? This can result in unsightly code and duplication of logic. Let’s see this first hand with an example.

Let’s try to code a function removeAt(n, xs) which removes the zero-indexed nth element from the supplied list and returns the resulting list. Our first attempt, as we transcribe our thoughts step by step into code, may look as follows.

  def removeAt[T](n: Int, xs: List[T]): List[T] = n match {
    case 0 => xs match {
      case Nil => Nil
      case y :: ys => removeAt(n - 1, ys)
    }
    case _ => xs match {
      case Nil => Nil
      case y :: ys => y :: removeAt(n - 1, ys)
    }
  }

Here you can see that, although the logic is correct and reasonably easy to interpret, there is duplication of code and the function could be made even easier to read and a little shorter. On our second attempt we may try to extract the duplicate Nil => Nil cases outside of the match as below.

  def removeAt[T](n: Int, xs: List[T]): List[T] =
    if (xs.isEmpty) Nil else n match {
      case 0 => xs match {
        case y :: ys => removeAt(n - 1, ys)
      }
      case _ => xs match {
        case y :: ys => y :: removeAt(n - 1, ys)
      }
    }

This also works and has in fact achieved our goals – the function is shorter and does not duplicate code. However, it is now blending if/else logic with pattern matching and, most importantly, producing warnings in Scala IDE for Eclipse. The warnings state that our secondary (internal) pattern matches do not cater for the Nil case, and that’s correct.

Description	Resource	Path	Location	Type
match may not be exhaustive. It would fail on the following input: Nil	ListUtils.scala	/scala-test/src/week5	line 13	Scala Problem
match may not be exhaustive. It would fail on the following input: Nil	ListUtils.scala	/scala-test/src/week5	line 16	Scala Problem

Even though Scala IDE cannot see that, in the bigger picture, the logic is actually correct, it is right about what it says specifically. I also consider it bad practice to leave warnings in code, so let’s go through one more simplification attempt. How else can we restructure this function to achieve the same goals?

  def removeAt[T](n: Int, xs: List[T]): List[T] =
    xs match {
      case Nil => Nil
      case y :: ys => n match {
        case 0 => removeAt(n - 1, ys)
        case _ => y :: removeAt(n - 1, ys)
      }
    }

The answer is to invert the order of the high-level pattern match subjects: the subject whose cases were being duplicated (the list) rises to become the primary pattern match, and the other subject (the index) becomes secondary. And there we have the end result of the simplification.


A couple of afterthoughts for completeness. Firstly, the above progression of implementations was chosen to illustrate how to incrementally simplify a function; it does not suggest that one cannot arrive at the final solution first time. Secondly, the algorithm above, which is needlessly verbose, was chosen to illustrate nested pattern matching and is not the simplest or most concise solution possible. For that see below.

  def removeAt[T](n: Int, xs: List[T]): List[T] =
    (xs take n) ::: (xs drop n+1)
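The same take/drop idea collapses to a single slice expression in other languages; here is a hypothetical Python analogue, for comparison only:

```python
def remove_at(n, xs):
    # Everything before index n, then everything after it.
    return xs[:n] + xs[n + 1:]

print(remove_at(1, ["a", "b", "c"]))  # → ['a', 'c']
```

As with the Scala take/drop version, an out-of-range n simply returns the list unchanged.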

Scala Cookbook: Tail recursive factorial

The direct port of a human readable description of the factorial algorithm may look as follows.

  def factorial(x: BigInt): BigInt = {
    if (x == 0) 1 else x * factorial(x - 1)
  }

However – this isn’t tail recursive and will fail for large inputs.

Exception in thread "main" java.lang.StackOverflowError
	at java.math.BigInteger.getInt(

The classic nested accumulator helper function idiom can be used to make this tail recursive.

  import scala.annotation.tailrec

  def factorial(x: BigInt): BigInt = {
    @tailrec
    def f(x: BigInt, acc: BigInt): BigInt =
      if (x == 0) acc else f(x - 1, x * acc)
    f(x, 1)
  }

So what’s going on here? Well it’s quite simple really.

The first function uses a number of stack frames proportional to the size of the input because it expands to the right: when it calls itself it still has to multiply the result by x afterwards, so the current stack frame cannot be discarded or reused. The stack is only freed once it has been fully expanded and then collapsed.

The second function is tail recursive – in other words, it calls itself as its final action. The Scala compiler essentially compiles this to a loop using only one stack frame regardless of the size of the input. It is therefore not possible to get a StackOverflowError with a tail recursive function.
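The same contrast can be observed in Python, which performs no tail-call optimisation at all, so the accumulator version must be written as the explicit loop the Scala compiler would generate. This is a hypothetical analogue, not from the original post:

```python
import sys

def factorial_rec(x):
    # One stack frame per step, like the first Scala version.
    return 1 if x == 0 else x * factorial_rec(x - 1)

def factorial_iter(x):
    # The accumulator version unrolled into the loop a tail call compiles to.
    acc = 1
    while x > 0:
        acc *= x
        x -= 1
    return acc

try:
    factorial_rec(sys.getrecursionlimit() + 100)
except RecursionError:
    print("naive recursion blew the stack")

print(factorial_iter(5))  # → 120
```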

Note the @tailrec annotation: if a function annotated with it compiles, the Scala compiler has certified it as tail recursive. It is a quick and easy way to check at compile time whether your function is tail recursive, and it also serves as a useful hint for other developers maintaining your code.

A pattern matched equivalent of the tail recursive function for fun is below.

  def factorial4(x: BigInt): BigInt = {
    def f(x: BigInt, acc: BigInt): BigInt = x match {
      case y: BigInt if y == 0 => acc
      case _ => f(x - 1, x * acc)
    }
    f(x, 1)
  }

Scala Cookbook: Pattern matching on BigInt

Let’s take the factorial algorithm as an example.

  def factorial(x: BigInt): BigInt = {
    if (x == 0) 1 else x * factorial(x - 1)
  }

One might expect to convert this into a pattern matched equivalent as follows.

  def factorial(x: BigInt): BigInt = x match {
    case 0 => 1
    case _ => x * factorial(x - 1)
  }

In fact this will give you the error below.

type mismatch;  found   : Int(0)  required: BigInt	Factorial.scala

The reason is that the constant pattern 0 is typed as Int, which does not conform to BigInt, and BigInt is not a case class so it has no unapply() method in its companion object to extract against.

However a small modification allows the pattern matching implementation to succeed.

  def factorial(x: BigInt): BigInt = x match {
    case y: BigInt if y == 0 => 1
    case _ => x * factorial(x - 1)
  }

Arguably the if/else equivalent is simpler, but if you favour pattern matching throughout you’ll want this version.

OpenCL Cookbook: 10 tips for high performance kernels

Today we don our performance hat in the OpenCL Cookbook series. OpenCL kernel optimisation is a black art and very, very hard. Recently, when I came across a list of 10 tips for high performance kernels in Matthew Scarpino’s excellent book OpenCL in Action, I just had to share it as it was a true gem. Bear in mind that this is an abridged account – if you want more detail read the appropriate section in the book.

  1. Unroll loops: Comparison operations are expensive – if you know how many iterations you need simply perform them individually.
  2. Disable processing of denormalised numbers: The processing of denormalised numbers takes time. Disable them using the -cl-denorms-are-zero option. You can also disable processing of infinite values and NaNs using -cl-finite-math-only.
  3. Transfer constant primitive values to the kernel with compiler defines instead of private memory parameters. Example: clBuildProgram(program, 0, NULL, "-DSIZE=128", NULL, NULL);.
  4. Store small variable values in private memory instead of local memory: Private memory is faster and should be used unless you need to share data between work items in which case use local memory.
  5. Avoid local memory bank conflicts by accessing local memory sequentially: Successive 32-bit elements are stored in successive local memory banks. Sequential bank access is parallel whereas contending on same bank access is serial.
  6. Avoid using modulo operator: This operator is slow and alternatives should be used instead.
  7. Reuse private variables throughout the kernel but use macros to distinguish each separate use.
  8. For multiply-and-add operations use the fma function if available.
  9. Inline non-kernel functions in a program file: This uses more memory but removes the context switches and stack operations associated with regular function calls.
  10. Avoid branch miss penalties by coding conditional statements to be true more often than false: Processors often optimise for the true case and this can result in a penalty for the false case known as the branch miss penalty. Code your conditionals to evaluate to true as often as possible.
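As a concrete instance of tip 6, when the divisor is a power of two the modulo can be replaced by a bitwise AND. The identity holds for unsigned integers in OpenCL C just as it does for non-negative integers here; it is sketched in Python purely for brevity.

```python
def mod_pow2(i, m):
    # For non-negative i and m a power of two, i % m == i & (m - 1).
    assert m > 0 and m & (m - 1) == 0, "m must be a power of two"
    return i & (m - 1)

assert all(mod_pow2(i, 16) == i % 16 for i in range(1000))
print("bitwise AND matches % for every tested value")
```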

It’s easy to list rules for optimising kernels but the truth is that optimising kernels is hard; very hard. The best way to approach it is, in my opinion, to profile your application and kernels as well as to experiment with various changes. There will certainly be optimisations that you’ll apply that will turn out to be ineffectual or even slower so trial and error is key.

AMD release APP SDK 2.8, CodeXL 1.0 and Bolt

This is old news now, but AMD have released APP SDK 2.8, CodeXL 1.0 and Bolt. Bolt is a C++ abstraction of OpenCL which allows you to write OpenCL applications without actually writing OpenCL. I personally haven’t tried it as we are not going to be using it at work, but it sounds like a good step forward in making GPGPU more accessible to the mainstream. For anyone interested there are a number of blog posts by AMD on how to use Bolt.

OpenCL Cookbook: Multi-device utilisation strategies

A quick note on all the different ways in which one can utilise multiple devices in OpenCL. This overlaps with my previous post on the subject in the series but is a higher-level overview.

Here are the different ways one can load balance across multiple devices.

  • Use CPU and GPU devices.
  • Use multiple GPU devices.

For each of the above one can utilise them in the following ways.

  • Single context – Use one context in total across all devices and one command queue per device.
  • Multiple contexts single-host-thread – Use one context and one command queue per device.
  • Multiple contexts multi-host-threaded – Same as multiple contexts above but use one host thread per device to enqueue to the command queue for that device. This can also be in the form of starting one host process instance per device which in essence is the same.
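The multi-threaded strategy can be sketched abstractly with plain Python threads standing in for host threads and a Queue standing in for each device’s command queue. This is a hypothetical illustration of the structure only; it involves no real OpenCL, and the squaring stands in for a kernel launch.

```python
import queue
import threading

NUM_DEVICES = 2  # pretend we have two GPUs

def device_worker(device_id, command_queue, results):
    # One host thread per device, draining that device's own command queue.
    while True:
        task = command_queue.get()
        if task is None:  # sentinel: no more work for this device
            break
        results.append((device_id, task * task))  # stand-in for a kernel launch

command_queues = [queue.Queue() for _ in range(NUM_DEVICES)]
results = []
threads = [threading.Thread(target=device_worker,
                            args=(d, command_queues[d], results))
           for d in range(NUM_DEVICES)]
for t in threads:
    t.start()

# Round-robin load balancing across the per-device queues.
for i in range(8):
    command_queues[i % NUM_DEVICES].put(i)
for q in command_queues:
    q.put(None)
for t in threads:
    t.join()

print(sorted(r for _, r in results))  # → [0, 1, 4, 9, 16, 25, 36, 49]
```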

For an elaboration on the latter strategies see my previous post on the subject.