I’m free!

After almost 21 years as an employee I decided to take the chance: I quit my job as a Senior iPhone Software Developer in Berlin, and now I am self-employed.

Feels very good to make my own decisions. I am planning to concentrate my contract work on iPhone and Mac software development, Adobe Flex and AIR, and finally Ruby and Ruby on Rails. The contract situation looks very promising at the moment, so if everything works out I can also spend half of my time on my own iPhone application idea, which has been itching in the back of my mind for some time now.

As my own boss I have already booked the ticket, flight and hotel for the upcoming Adobe MAX conference in L.A. in October. Since I visited MAX 2007 in Barcelona and still think about my findings there all the time, it seemed like a good idea to get fresh insights into Adobe's plans for the future.

On the other hand I already ordered tons of new hardware 🙂

I am typing this article on my new 13″ MacBook Pro with a 2.53 GHz processor. It arrived yesterday, after I changed my order from the 250 GB HDD to the 320 GB one. Apple seems to have trouble keeping up with the huge success of this model. I’m planning to swap the HDD for a 120 GB OCZ Vertex and to replace the superdrive with an OptiBay holding a 500 GB HDD for Time Machine backups and for storing bigger files like movies. The setup is completed by a shiny new 24″ LED Cinema Display and the Bluetooth Mighty Mouse and keyboard.

I installed Snow Leopard from the DVD I got at WWDC and it works great, although I have to live without 1Password at the moment, and Flex Builder wants to install Rosetta during setup, which I’d prefer not to do.

Time will tell if I will switch back to regular Leopard before I have full tool support.


Low power-consuming fileserver barebone

A couple of days ago Mr. Toto mentioned this barebone on Twitter. It has an Intel Atom 330 Dualcore CPU with hyperthreading and 5 HDD slots built-in:


Equipped with 1 GB RAM it costs £372 (approx. 394 €) including VAT and shipping to Germany. That’s almost double the price of the hand-built system, and I don’t know whether Solaris runs perfectly on it…

Shopping cart

Maybe someone else has already tried this out?

Building my ZFS-Fileserver

After a good amount of time I have come up with my preferred hardware setup for my fileserver. I have a couple of old hard drives lying around, so I won’t need to buy new ones. I also don’t want to build in a CD-ROM drive permanently; I will use an external one for setup.

The German PC builder site of Alternate is very good and has some nice checks built in to prevent an incompatible setup. I always looked for the cheapest components.


AMD Sempron64 LE-1250 Boxed, OPGA, “Sparta”

with bundled cooling system. Although it’s single-core it can run in 64-Bit mode and should not get too warm.

30,99 €


Zalman ZM360B-APS

With 4 S-ATA power connectors.

49,99 €


Sharkoon Rebel9 Economy-Edition

Plenty of space for 9 external 5.25″ drives in a midi-sized tower (200 mm × 435 mm × 486 mm).

41,99 €

Case cooling

Arctic-Cooling AF8025 PWM

(3,99 € × 2) 7,98 €


Asrock N61P-S

It has 4 S-ATA connectors on an NVIDIA nForce6 chipset. The built-in LAN is only 100 MBit, but I want to connect the fileserver via PowerLAN, so that should be enough. Although it is not listed on Sun’s Solaris hardware compatibility list, I’ll try my luck with it.

36,49 €


Crucial DIMM 2 GB DDR2-667

The single-core CPU can only handle this type of RAM, but I guess that will be sufficient.

19,79 €


That’s it for the bare system.

187,14 €

From the site:


Build your own Drobo-Replacement based on ZFS

The Drobo hype

I saw the first Drobo presentation video on YouTube almost 2 years ago. Since then I had been longing to get one, but 490 € for an empty box without a single hard drive was too much in my opinion.

As a regular podcast listener I heard everywhere about the sexiness of the Drobo: MacBreak Weekly mentions it in every episode, the guys at BitsUndSo are collecting them like crazy, and finally Tim Pritlove also got one. But then he mentioned on his podcast Mobile Macs his strange difficulties with the Drobo, which made me think a little more about the subject.

The Drobo has in my opinion some major drawbacks:

  • it doesn’t know anything about the hosted data and the filesystem used, so during a resilvering task it has to duplicate every block of a hard drive, even the unused ones
  • one has to set an upper limit for the hosted filesystem, so the Drobo appears to the host machine as one physical drive with a fixed size
  • you cannot use all the disk space if you use drives with different sizes
  • it is limited to a maximum amount of 4 drives
  • if your Drobo fails you cannot access your data

We can do better!

So what would I want from my dream backup device?

  • it should not have an upper storage limit
  • it should be able to heal itself silently
  • it should be network accessible (especially for Macs)
  • the drives should also work when connected to other hardware
  • it should be usable as a Time Machine target
  • it HAS to be cheaper than a Drobo

It doesn’t have to be beautiful or silent, since I want to put it in my storage room and want to forget about it.

Initial thoughts

In my opinion the most modern and future-proof filesystem at the moment is ZFS. Ever since listening to the very good (German) podcast episode CRE049 of Chaos Radio Express about ZFS by Tim Pritlove, I have wanted to use it. Unfortunately Apple is very lazy with its ZFS plans: Leopard can only read ZFS, and the plans for Snow Leopard are very vague. So Mac OS X is no option at the moment. Since I want it to be cheap, Mac hardware is no option either.

FreeBSD seems to have a recent version of ZFS, but I gave OpenSolaris a try: since ZFS is developed by Sun, I think Solaris will be the first OS where new ZFS features appear. Bleeding edge is always best 😉 so I looked further into this setup.

Test driving OpenSolaris

I wanted to investigate some more, so before using real hardware I test drove it with virtualization software. After downloading the current ISO image of OpenSolaris 2008.11 I tried to install it on my Mac with VMware Fusion, but at that time I didn’t know that Solaris and OpenSolaris are essentially the same, so I had difficulties setting up the VM properly.

So I tried VirtualBox and hoped that, since it is now owned by Sun, it would virtualize Solaris like a charm. I set up a new VM with one boot disk and four RAID disks.


I switched on all the fancy CPU extensions. There is only one ISO image for both x86/x64, so I turned on x64 and it automatically used the 64-bit kernel. The hardware recommendations for ZFS say that it works best with 64 bit and at least 1 GB of RAM.

Installing OpenSolaris from the live system worked very well. I was surprised by the polished look thanks to Gnome (although I am a KDE fanboy). ZFS is now the default boot filesystem on OpenSolaris, so I had to do nothing to activate it.

When the installation was complete and the system was up and running, I made a little tweak to get SSH access to the VM. Since it uses NAT, I set up a port forward from port 2222 on my Mac to port 22 in the guest. I edited the XML file of my virtual machine (~/Library/VirtualBox/Machines/Open Solaris/Open Solaris.xml in my case) and inserted the following lines into the ExtraData section:

      <ExtraDataItem name="VBoxInternal/Devices/e1000/0/LUN#0/Config/ssh/HostPort" value="2222"/>
      <ExtraDataItem name="VBoxInternal/Devices/e1000/0/LUN#0/Config/ssh/GuestPort" value="22"/>
      <ExtraDataItem name="VBoxInternal/Devices/e1000/0/LUN#0/Config/ssh/Protocol" value="TCP"/>

After starting the VM I could connect with “ssh -l <user> -p 2222 localhost” to it.

I worked with a Linux system for several years and after that on Mac OS X, and I had no problems adapting to the Solaris world, since it integrates many tools from the open source world like bash. So the learning curve seems quite gentle.

To get some info about the system I entered the following commands:

Get info about the kernel mode

$ isainfo -kv
64-bit amd64 kernel modules

Listing all system devices

$ prtconf -pv
System Configuration:  Sun Microsystems  i86pc
Memory size: 1061 Megabytes
System Peripherals (PROM Nodes):

Node 0x000001
    bios-boot-device:  '80'

Show the system log

$ cat /var/adm/messages

Setting up the storage pool

Since I want one huge storage pool which can grow over time, I used RAIDZ.

The following commands were entered as root.

First, list all connected storage devices:

# format
Searching for disks...done

       0. c3d0 <DEFAULT cyl 2044 alt 2 hd 128 sec 32>
       1. c5t0d0 <ATA-VBOX HARDDISK-1.0-8.00GB>
       2. c5t1d0 <ATA-VBOX HARDDISK-1.0-8.00GB>
       3. c5t2d0 <ATA-VBOX HARDDISK-1.0-8.00GB>
       4. c5t3d0 <ATA-VBOX HARDDISK-1.0-8.00GB>
Specify disk (enter its number): ^C

The device ids (c5t0d0 through c5t3d0) are what we need to set up the storage pool. To create a pool named “tank” I entered:

# zpool create -f tank raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0

To show the available pools type:

# zpool list
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool  3,97G  2,90G  1,07G    73%  ONLINE  -
tank   31,8G   379K  31,7G     0%  ONLINE  -

rpool is the pool on the boot device. You can see that connecting 4 drives of 8 GB each yields a pool of almost 32 GB raw capacity. Anything stored on the pool is kept redundantly: one drive’s worth of space (here a quarter of the raw capacity) goes to parity, so the pool survives a single failing device.
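The parity overhead can be checked with a quick calculation (a minimal shell sketch; disk count and size are taken from my VM setup, and the actual space reported by zfs list is slightly less because of filesystem overhead):

```shell
# raidz1 keeps one disk's worth of parity: usable space = (n - 1) disks.
disks=4      # number of drives in the raidz vdev
size_gb=8    # capacity of each drive in GB
raw=$(( disks * size_gb ))
usable=$(( (disks - 1) * size_gb ))
echo "raw: ${raw} GB, usable: ${usable} GB"
```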

Now I created a filesystem on that pool

# zfs create tank/home

It is mounted automatically at /tank/home. To list all live ZFS filesystems, enter

# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   3,44G   482M    72K  /rpool
rpool/ROOT              2,71G   482M    18K  legacy
rpool/ROOT/opensolaris  2,71G   482M  2,56G  /
rpool/export             746M   482M    21K  /export
rpool/export/home        746M   482M    19K  /export/home
rpool/export/home/mk     746M   482M  45,4M  /export/home/mk
tank                     682M  22,7G  26,9K  /tank
tank/home                681M  22,7G  29,9K  /tank/home

In this example I copied the OpenSolaris ISO image to my new filesystem. It occupies 681M there; on the pool it occupies 911M, the difference being the parity overhead.

# zpool list
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool  3,97G  3,44G   545M    86%  ONLINE  -
tank   31,8G   911M  30,9G     2%  ONLINE  -

A very nice feature of ZFS is built-in compression. Filesystem properties are inherited, so if you set compression on tank/home, every new filesystem created inside it is compressed automatically:

# zfs set compression=on tank/home
# zfs get compression tank/home
NAME       PROPERTY     VALUE      SOURCE
tank/home  compression  on         local
# zfs create tank/home/mk
# zfs get compression tank/home/mk
NAME          PROPERTY     VALUE         SOURCE
tank/home/mk  compression  on            inherited from tank/home

Health insurance

ZFS ensures data validity with internal checksums, so it can see on the fly whether data is still valid and reconstruct it if necessary.

To get the status of a pool enter

# zpool status -v tank
  pool: tank
 state: ONLINE
 scrub: none requested

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c5t0d0  ONLINE       0     0     0
            c5t1d0  ONLINE       0     0     0
            c5t2d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0

errors: No known data errors

A scrub is a filesystem check, which should be done weekly with consumer-grade drives, and can be triggered with

# zpool scrub tank

and after some time check the result with

# zpool status -v tank
  pool: tank
 state: ONLINE
 scrub: scrub completed after 0h2m with 0 errors on Tue Jan 13 11:36:38 2009

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c5t0d0  ONLINE       0     0     0
            c5t1d0  ONLINE       0     0     0
            c5t2d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0

errors: No known data errors
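To automate the weekly scrub, a cron entry could be used. A sketch (untested on my setup) of a root crontab entry:

```shell
# Run "zpool scrub tank" every Sunday at 03:00 (add via "crontab -e" as root).
0 3 * * 0 /usr/sbin/zpool scrub tank
```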

Gnome Time Machine

When logging into a Gnome session and browsing through the menus I noticed the program “Time Slider Setup”.


When you activate it, it creates ZFS snapshots on a regular basis. These snapshots cost almost no disk space, and you can travel back in time (not as fancy as with Time Machine, but who cares) with the Gnome file browser Nautilus.


That is a killer feature, and if I were still a Linux/Java developer this would be a good reason for me to switch to OpenSolaris. You don’t need a RAID to make ZFS snapshots, so your filesystem has a built-in time-based versioning system. *cool*
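Under the hood Time Slider simply takes periodic ZFS snapshots; the same can be done by hand (the snapshot name here is made up for illustration):

```shell
# Create a named snapshot of tank/home, list all snapshots, roll back if needed.
zfs snapshot tank/home@before-cleanup
zfs list -t snapshot
zfs rollback tank/home@before-cleanup
```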

Network accessibility

My next step was to set up Netatalk on Solaris using this guide. At first I had some difficulties and had to install gmake and gcc. The described patches didn’t work correctly since the file /usr/ucbinclude/sys/file.h was missing, so I changed the #ifdef statement from

#if defined( sun ) && defined( __svr4__ )
#include </usr/ucbinclude/sys/file.h>
#else /* sun __svr4__ */

to

#if defined( sun ) && !defined( __svr4__ )
#include </usr/ucbinclude/sys/file.h>
#else /* sun __svr4__ */

in all source files where it occurred:

  • etc/atalkd/main.c
  • etc/cnid_dbd/dbif.c
  • etc/papd/main.c
  • etc/papd/lp.c
  • sys/solaris/tpi.c

This needs further work to get it running with Time Machine.

Getting real

So everything looks promising at first glance. The next logical step would be to buy the hardware components and try everything out. I configured bare systems with midi towers using AMD or Intel processors for approximately 170 to 190 €. That’s about half the price of a Drobo. For each SATA hard drive I would buy a hot-pluggable case. Another option would be external USB drives, but that might lead to a quite cluttered and fragile construction.

I need to investigate further to use the correct components. Motherboards tested with OpenSolaris are listed here: http://www.sun.com/bigadmin/hcl/

ZFS has a hotplug feature, so a failed device can be replaced without rebooting or typing any commands. And if that fails, one can always fall back to the command line.
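For the manual case, a sketch of what replacing a failed disk would look like (device ids as in the format output above; the replacement disk id c5t4d0 is hypothetical):

```shell
# Swap the failed disk c5t2d0 for a new disk c5t4d0; ZFS resilvers automatically.
zpool replace tank c5t2d0 c5t4d0
zpool status -v tank
```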

Next steps

I really need to spend more time with the VM, try to corrupt some disk images and see how ZFS reacts to that. Expanding existing pools with drives of different sizes will also be interesting.

Of course I need to get Netatalk working and try to use it as a Time Machine target. I could use the VM on a different host to simulate the final system.

Stay tuned for my future investigations and please don’t start trolling about the superiority of the Drobo. I think that is a matter of taste.

How to retrieve the current language on Mac OS X

I have a client who needs localizations for several languages. To retrieve the current language I used

NSString * localeString = [[NSLocale currentLocale] localeIdentifier];
NSString * language = [[[localeString componentsSeparatedByString:@"_"] objectAtIndex:0] uppercaseString];

Most of the time the content of localeString matched the settings in the System Preferences pane. But when I set English as the first language and Finland as the region in Formats, it returned “fi_FI” as the localeString, even though the app used the English nib files.

Preference pane Language

Preference pane Formats

To get the languages in the order from the preference pane “Language” one has to use the following code:

NSUserDefaults* defs = [NSUserDefaults standardUserDefaults];
NSArray* languages = [defs objectForKey:@"AppleLanguages"];
NSString* language = [[languages objectAtIndex:0] uppercaseString];

The language variable now contains “EN” as expected.

Using UITableViewCell with InterfaceBuilder

The code examples for the iPhone SDK only show how to construct table cells programmatically. If you want to use Interface Builder to create a proper layout, which might also be resizable etc., you can use this ViewFactory to handle the creation and reuse of your table cells.

First create a new empty Interface Builder file. Design all the UITableViewCells you want, but they must be root objects so the ViewFactory can find them. Each cell MUST have its “identifier” field set; it is in fact the reuseIdentifier required to make proper recycling of your cell possible.


Now create the ViewFactory, the header ViewFactory.h

@interface ViewFactory : NSObject {
    NSMutableDictionary * viewTemplateStore;
}

- (id) initWithNib: (NSString*)aNibName;

- (UITableViewCell*)cellOfKind: (NSString*)theCellKind forTable: (UITableView*)aTableView;

@end


and the implementation ViewFactory.m

#import "ViewFactory.h"

@implementation ViewFactory

- (id) initWithNib: (NSString*)aNibName
{
    if (self = [super init]) {
        viewTemplateStore = [[NSMutableDictionary alloc] init];
        NSArray * templates = [[NSBundle mainBundle] loadNibNamed:aNibName owner:self options:nil];
        for (id template in templates) {
            if ([template isKindOfClass:[UITableViewCell class]]) {
                UITableViewCell * cellTemplate = (UITableViewCell *)template;
                NSString * key = cellTemplate.reuseIdentifier;
                if (key) {
                    // archive the cell so a fresh copy can be unarchived for every new row
                    [viewTemplateStore setObject:[NSKeyedArchiver archivedDataWithRootObject:cellTemplate]
                                          forKey:key];
                } else {
                    @throw [NSException exceptionWithName:@"Unknown cell"
                                                   reason:@"Cell has no reuseIdentifier"
                                                 userInfo:nil];
                }
            }
        }
    }
    return self;
}

- (void) dealloc
{
    [viewTemplateStore release];
    [super dealloc];
}

- (UITableViewCell*)cellOfKind: (NSString*)theCellKind forTable: (UITableView*)aTableView
{
    UITableViewCell *cell = [aTableView dequeueReusableCellWithIdentifier:theCellKind];

    if (!cell) {
        NSData * cellData = [viewTemplateStore objectForKey:theCellKind];
        if (cellData) {
            // unarchiving creates a brand-new cell from the stored template
            cell = [NSKeyedUnarchiver unarchiveObjectWithData:cellData];
        } else {
            NSLog(@"Don't know anything about cells of kind %@", theCellKind);
        }
    }
    return cell;
}

@end


You have to make the ViewFactory instance available for your TableViews as a Singleton.

Whenever you want to fill you table cell in your UITableViewDataSource put this line in your method

- (UITableViewCell *)tableView: (UITableView *)aTableView cellForRowAtIndexPath: (NSIndexPath *)indexPath
   UITableViewCell * cell = [viewFactory cellOfKind:@"news" forTable:aTableView];
   // Access the views using [cell viewWithTag:X] in the old way
   return cell;

where “news” in this example is the value of the reuseIdentifier.

Internally I store each UITableViewCell instance as a template in archived NSData format, accessible via its reuseIdentifier. So whenever cellOfKind:forTable: is called, it first looks into the reuse queue of the given table for an old table cell that can be recycled; only if there is none does it instantiate a new one from the archived template.

This technique makes working with tables much more flexible, and I hope you will have fun using it.

Quick iPhone Dev Tip: Creating a UIColor from just one RGB value

Are you also tired of splitting that long RGB value into red, green and blue and converting them to a format that UIColor understands? I was, so I wrote a macro that converts the value at compile time and returns an autoreleased UIColor object with the correct values:

#define UIColorFromRGB(rgbValue) [UIColor \
 colorWithRed:((float)((rgbValue & 0xFF0000) >> 16))/255.0 \
 green:((float)((rgbValue & 0xFF00) >> 8))/255.0 \
 blue:((float)(rgbValue & 0xFF))/255.0 alpha:1.0]

In your code you can use it like this:

UIColor * myGreen = UIColorFromRGB(0x00FF00);

Put this in your global header referenced from <project_name>_Prefix.pch and you can use it everywhere in your project without importing it explicitly.
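To verify the bit shifting, the same decomposition can be done in a shell; the color value 0x3366CC is just an arbitrary example:

```shell
# Mask out each channel and shift it down to a 0-255 value, as the macro does.
rgb=$(( 0x3366CC ))
r=$(( (rgb & 0xFF0000) >> 16 ))   # 0x33 = 51
g=$(( (rgb & 0x00FF00) >> 8 ))    # 0x66 = 102
b=$((  rgb & 0x0000FF ))          # 0xCC = 204
echo "red=$r green=$g blue=$b"
```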