Adding a reflection to an NSImage

To add a reflection to an NSImage object in Cocoa, you can use the following NSImage category:

@interface NSImage(MKAddReflection)
- (NSImage*) addReflection:(CGFloat)percentage;
@end

@implementation NSImage(MKAddReflection)

- (NSImage*) addReflection:(CGFloat)percentage
{
	NSAssert(percentage > 0 && percentage <= 1.0, @"Please use percentage between 0 and 1");
	CGRect offscreenFrame = CGRectMake(0, 0, self.size.width, self.size.height*(1.0+percentage));
	NSBitmapImageRep * offscreen = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
														pixelsWide:offscreenFrame.size.width
														pixelsHigh:offscreenFrame.size.height
													 bitsPerSample:8
												   samplesPerPixel:4 
														  hasAlpha:YES
														  isPlanar:NO 
													colorSpaceName:NSDeviceRGBColorSpace
													  bitmapFormat:0
													   bytesPerRow:offscreenFrame.size.width * 4
													  bitsPerPixel:32];
	
	[NSGraphicsContext saveGraphicsState];
	[NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:offscreen]];
	
	[[NSColor clearColor] set];
	NSRectFill(offscreenFrame);
	
	NSGradient * fade = [[NSGradient alloc] initWithStartingColor:[NSColor colorWithCalibratedWhite:1.0 alpha:0.2] endingColor:[NSColor clearColor]];
	CGRect fadeFrame = CGRectMake(0, 0, self.size.width, offscreen.size.height - self.size.height);
	[fade drawInRect:fadeFrame angle:270.0];	
	
    NSAffineTransform* transform = [NSAffineTransform transform];
    [transform translateXBy:0.0 yBy:fadeFrame.size.height];
    [transform scaleXBy:1.0 yBy:-1.0];
    [transform concat];
	
	// Draw the image over the gradient -> becomes reflection
	[self drawAtPoint:NSMakePoint(0, 0) fromRect:CGRectMake(0, 0, self.size.width, self.size.height) operation:NSCompositeSourceIn fraction:1.0];
	
	[transform invert];
	[transform concat];

	// Draw the original image
	[self drawAtPoint:CGPointMake(0, offscreenFrame.size.height - self.size.height) fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
	
	[NSGraphicsContext restoreGraphicsState];
	
	NSImage * imageWithReflection = [[NSImage alloc] initWithSize:offscreenFrame.size];
	[imageWithReflection addRepresentation:offscreen];
	
	return imageWithReflection;
}
@end

To get a copy of an NSImage with a reflection applied, call [image addReflection:0.3], where the float value defines the height of the reflection as a fraction of the input image's height, e.g.

NSImage * input = [[NSImage alloc] initWithContentsOfFile:@"/Users/mk/Desktop/input.jpg"];
NSImage * output = [input addReflection:0.4];
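The size of the returned image follows directly from the offscreenFrame calculation above: the width is unchanged, and the height grows by the given percentage. A quick sketch of the math (plain Ruby, just for illustration; the method and argument names are made up here):

```ruby
# Output pixel size of the addReflection: category method above:
# width is unchanged, height grows by `percentage` of the input height.
def reflected_size(width, height, percentage)
  raise ArgumentError, 'percentage must be in (0, 1]' unless percentage > 0 && percentage <= 1.0
  [width, (height * (1.0 + percentage)).round]
end

reflected_size(400, 300, 0.4)  # => [400, 420]
```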

Kerning comparison OSX – Windows – TLF

I did some tests comparing the same text on OS X in TextEdit, on Windows XP in WordPad, and on OS X in Safari on the Adobe Text Layout Framework demo site. In all three cases I used Times New Roman at a font size of 28.
Although Windows does not use kerning, the text looks almost the same, apart from a difference of a few pixels. Have a look for yourself:
Comparing a text on OSX, Windows XP and TLF

Gesture recognition on the iPhone

Inspired by this detailed article by Carl D. Worth, I began experimenting with stroke recognition on the iPhone. Unfortunately, the sources for xstroke are very hard to find nowadays and are unmaintained. I finally did find them, but I did not want to port all that X11 stuff, so I decided to start from scratch with a small feasibility study, which I want to show you here.

I created the project “KrikelKrakel” (German for scribbling) on BitBucket.

The most interesting class to look at is KrikelKrakelView. It inherits from UIView and does all the tracking and recognition. A gesture is recognized when the touches end. The area where the touches took place is divided into a 3×3 grid of 9 cells, and the path the finger took is then described by the sequence of cell ids. For the details, have a look at the article mentioned earlier.
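The grid encoding is easy to sketch. The following Ruby snippet is a hypothetical illustration of the idea only (the real project is Objective-C, and xstroke's engine is more elaborate): normalize the stroke's bounding box, map every point to one of the 9 cells, and collapse consecutive duplicates so the gesture becomes a short cell-id string that can serve as a lookup key.

```ruby
# Hypothetical sketch of a 3x3 grid stroke encoding (not code from KrikelKrakel).
# Maps a stroke (list of [x, y] points) to a string of cell ids 1..9.
def encode_stroke(points)
  xs = points.map { |p| p[0] }
  ys = points.map { |p| p[1] }
  min_x, min_y = xs.min, ys.min
  w = [xs.max - min_x, 1].max.to_f   # avoid division by zero for flat strokes
  h = [ys.max - min_y, 1].max.to_f

  cells = points.map do |x, y|
    col = [((x - min_x) / w * 3).to_i, 2].min   # 0..2
    row = [((y - min_y) / h * 3).to_i, 2].min   # 0..2
    (row * 3 + col + 1).to_s                    # cell ids 1..9, row-major
  end
  # Collapse consecutive duplicates: the sequence of visited cells is the key.
  cells.chunk { |c| c }.map(&:first).join
end

# A straight diagonal stroke from the top-left to the bottom-right corner
# passes through cells 1, 5 and 9:
diagonal = (0..10).map { |i| [i * 10, i * 10] }
encode_stroke(diagonal)  # => "159"
```

A recognizer then only needs a dictionary mapping such strings to letters, which is essentially what the strokes.dict file described below contains.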

One can register as a delegate to get called on different occasions:

- (void) willDrawGesture;

This will be called right inside the touchesBegan method.

- (void) didLearnNewGesture:(NSString*)text;

When a gesture has not been recognized, the user will be presented with an alert box where they can enter a letter or some longer text.

- (void) didRecognizeGesture:(NSString*)text;

When a gesture has been recognized, this method will be called and the stored letter/text will be delivered in the text parameter.

The learned gestures are stored in the application documents directory under the name “strokes.dict”. If there is no such file on first start, the bundled strokes.dict will be used as the initial version.
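This first-start seeding is a common pattern: if no learned file exists in the documents directory yet, copy the bundled default there and use it from then on. A small Ruby sketch of the idea (hypothetical; the real project is Objective-C, and the two directory arguments here are placeholders for the app's real paths):

```ruby
require 'fileutils'

# Return the path of the learned strokes.dict, seeding it from the
# bundled default on first start. documents_dir and bundle_dir are
# placeholders for the app's documents directory and resource bundle.
def strokes_dict_path(documents_dir, bundle_dir)
  learned = File.join(documents_dir, 'strokes.dict')
  unless File.exist?(learned)
    # First start: copy the bundled version as the initial dictionary.
    FileUtils.cp(File.join(bundle_dir, 'strokes.dict'), learned)
  end
  learned
end
```

Later writes go to the copy in the documents directory, so newly learned gestures survive app updates while the bundled file stays untouched.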

See a demo video here.

Setting up an OpenSolaris root server at Hetzner

Several months ago I ordered a root server called EQ4 from the German hosting provider Hetzner. It is quite powerful: an Intel Core i7-920 quad-core CPU, 8 GB RAM and two 750 GB HDDs for only 45 €/month. Since at first glance they only provide several Linux flavors (openSUSE, Fedora, CentOS), I decided to use CentOS; I had already had some very good experiences with it a couple of years ago. The installation process was very easy.

After a couple of months without much time to fiddle with the server it just sat there in its rack and got bored.

After the very inspiring NoSQL meeting in Berlin last Thursday I decided to spend some time with my server, installing Erlang, CouchDB and nginx as a reverse proxy to do authentication and SSL stuff.

Installing the software packages went very well. Some of them I grabbed via yum, others I installed from source. Connecting to my system via an ssh session worked fine, but there was a very strange iptables setup in the CentOS installation which drove me crazy. I could not reach the proxy from outside, and after several hours I decided to try a reinstall. At Hetzner one can reboot the server into a so-called rescue mode. This rescue system can of course be Linux, but also FreeBSD or OpenSolaris. Digging a little further, I discovered a page in the Hetzner wiki describing how to install OpenSolaris through this rescue system.

I used JollyFastVNC to establish a VNC session to the rescue system and used the graphical OpenSolaris installer to install it on the first HDD. After installation I used my directions from an earlier post to create a ZFS mirror using both HDDs.

This is my hardware configuration discovered by OpenSolaris:

# prtdiag -v
System Configuration: MSI MS-7522
BIOS Configuration: American Megatrends Inc. V8.2 04/20/2009

==== Processor Sockets ====================================

Version                          Location Tag
-------------------------------- --------------------------
Intel(R) Core(TM) i7 CPU         920  @ 2.67GHz CPU 1

==== Memory Device Sockets ================================

Type        Status Set Device Locator      Bank Locator
----------- ------ --- ------------------- ----------------
other       in use 0   DIMM0               BANK0
other       in use 0   DIMM1               BANK1
other       in use 0   DIMM2               BANK2
other       empty  0   DIMM3               BANK3
other       in use 0   DIMM4               BANK4
other       empty  0   DIMM5               BANK5
FLASH       in use 0                        

==== On-Board Devices =====================================

==== Upgradeable Slots ====================================

ID  Status    Type             Description
--- --------- ---------------- ----------------------------
1   available PCI              PCI1
2   available PCI Express      PCIE2
3   available PCI Express      PCIE3
4   available PCI Express      PCIE4

Next I used the CouchDB directions in the Joyent Wiki to install the entire required software stack from source. After some fiddling with directory write permissions I had my CouchDB system up and running.

To install nginx I followed the directions on the official site. I wanted password authentication on my site. Since nginx doesn't come with htpasswd, I ran it on my Mac:

$ htpasswd -nbd user password
user:TYVlO9aeSogv6

I copied the output line into the file /etc/nginx/htpasswd on my server.

To create a self-signed certificate in the folder /etc/nginx, I used the following commands:

# openssl req -new -nodes -keyout selfsigned.key -out selfsigned.csr
Generating a 1024 bit RSA private key
............................................................................................................................++++++
........................++++++
writing new private key to 'selfsigned.key'
...
# openssl x509 -req -days 1095 -in selfsigned.csr -signkey selfsigned.key -out selfsigned.crt
Signature ok
...
Getting Private key

My nginx setup file contents are:

#/etc/nginx/nginx.conf

#user  nobody;
worker_processes  2;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] $request '
    #                  '"$status" $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
                auth_basic "Restricted";
                auth_basic_user_file /etc/nginx/htpasswd;
                rewrite /couchdb/(.*) /$1 break;
                proxy_pass http://localhost:5984;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }

    #
    # HTTPS server
    #
    server {
        listen       443;
        server_name  localhost;

        ssl                  on;
        ssl_certificate      /etc/nginx/selfsigned.crt;
        ssl_certificate_key  /etc/nginx/selfsigned.key;

        ssl_session_timeout  5m;

        ssl_protocols  SSLv2 SSLv3 TLSv1;
        ssl_ciphers  ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
        ssl_prefer_server_ciphers   on;

        location / {
                auth_basic "Restricted";
                auth_basic_user_file /etc/nginx/htpasswd;
                rewrite /couchdb/(.*) /$1 break;
                proxy_pass http://localhost:5984;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}

Now when I open http://my.secret.server.ipaddress/ I can log in with the created user credentials stored in htpasswd and get the warm CouchDB welcome message: '{"couchdb":"Welcome","version":"0.10.0"}'. I can also use the secure entry at https://my.secret.server.ipaddress/.
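For scripted access through the proxy, the same htpasswd credentials go into an HTTP Basic Authorization header. A minimal Ruby sketch (the credentials and path are placeholders; the request is only built here, not sent):

```ruby
require 'net/http'

# Build a request against the nginx proxy in front of CouchDB.
# 'user'/'password' are placeholders for the htpasswd credentials.
req = Net::HTTP::Get.new('/couchdb/')
req.basic_auth('user', 'password')

# Basic auth is just base64("user:password") in the Authorization header:
req['Authorization']  # => "Basic dXNlcjpwYXNzd29yZA=="
```

Sending it would then be Net::HTTP.start('my.secret.server.ipaddress', 80) { |http| http.request(req) }, which should return the CouchDB welcome JSON shown above.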

After every successful step I made a ZFS snapshot; snapshots are the feature I appreciate most now. By the way, a nice ZFS cheat sheet can be found here.

I don't know why it worked so well with OpenSolaris when I had so many problems with CentOS. Maybe my system is now wide open and completely insecure, but I like it much better this way, because now I can close all the open doors step by step and make it more secure.

Next I will also move my domain to Hetzner and point it at my server. Then I will set up a mail server, maybe install some Ruby on Rails stuff (http://www.redmine.org/), and write an Adobe Flex application for a customer which will rely completely on CouchDB #bliss.

Setting up my Solaris server as a centralized backup server

After some months of work I now have the time to set up my server properly so that it can back up all my computers without a hassle. Since I wanted the server to control when the backups are made, I wrote a Ruby script which runs every hour and backs up all the available hosts (which are of course Macs ;-). The script must not run concurrently with an older instance of itself and should produce a decent logfile.

Set up environment

First I had to make sure that the server had the correct time. By default the ntp daemon did not run, so I configured it using the description at the grey blog. I did not use the European ntp pool, though; instead I used de.pool.ntp.org.

To install the current version 1.8.7 of Ruby I entered as root:

# pfexec pkg install SUNWruby18

Then I created an ssh key for my Solaris root user:

# ssh-keygen -t dsa
Enter file in which to save the key (/root/.ssh/id_dsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
*DISCLOSED*

To get an ssh connection to each of my hosts, which rsync needs, I then copied the contents of /root/.ssh/id_dsa.pub to the file authorized_keys on each host, e.g. user1@host1:.ssh/authorized_keys

Now whenever I enter "ssh user1@host1" no password is needed to get a remote shell.

The Ruby Script

#!/usr/bin/ruby
# This script will fetch the current files from a couple of hosts via rsync
# and stores them locally
require 'ping'
require 'tempfile'
require 'open3'
require 'logger'
require 'fileutils'

ZFS_POOL="your_zfs_pool_here"
LOCAL_BACKUP_PATH="/#{ZFS_POOL}"

# Parse the commandline parameters
ARGV.each do |arg|
  case
  when arg == '--stdout'
    LOG_STDOUT = true
  when arg == '--dry-run'
    DRY_RUN = true
  end
end
LOG_STDOUT = (LOG_STDOUT rescue false) # Set default value to false
DRY_RUN = (DRY_RUN rescue false)       # dto.

# Check which output should be used for logging
if LOG_STDOUT
  # Log to stdout
  $LOG = Logger.new($stdout)
  $LOG.datetime_format = '%H:%M:%S'
else
  # Logfile will not exceed 1 MB
  $LOG = Logger.new('/var/log/backup_rb.log', 0, 1 * 1024 * 1024)
  $LOG.datetime_format = '%d.%m.%y %H:%M:%S'
end

# Kill older processes of this script
pids_to_kill = []
`ps -Al -o pid -o args|grep -e ruby|grep -e #{__FILE__}`.split("\n").each do |line|
  other_pid = line.split(" ")[0].to_i
  if other_pid != $$
    pids_to_kill << other_pid
    `ps -Al -o pid,ppid=MOM -o args|grep "1 rsync"|grep -v grep`.split("\n").each do |child_line|
      child_pid = child_line.split(" ")[0].to_i
      pids_to_kill << child_pid
    end
  end
end

if pids_to_kill.length > 0
  $LOG.info "****** Cleaning up... *******"
  $LOG.info "Killing old backup processes #{pids_to_kill.join(",")}"
  `kill -9 #{pids_to_kill.join(" ")}`
end

# Execute a command and store its output
class ExecCmd
  attr_reader :output,:error_output,:cmd,:exec_time

  def initialize(cmd,cmd_id)
    @output = ""
    @error_output = ""
    @exec_time = 0
    @cmd = cmd
    @cmd_id = cmd_id
  end

  def run
    start_time = Time.now
    begin
      $LOG.info "[#{@cmd_id}] Starting command: #{@cmd}..."
      Open3.popen3(@cmd) do |stdin, stdout, stderr|
        @output = stdout.read
        @error_output = stderr.read
      end
    rescue Exception => e
      @error_output += e.to_s
    ensure
      @exec_time = Time.now - start_time
      $LOG.info "[#{@cmd_id}] Command completed in #{@exec_time} seconds."
    end
  end

  # Log the stdio and stderr outputs
  def log_results
    $LOG.info "[#{@cmd_id}] #{@cmd}:"
    if @error_output.length > 0
      @error_output.split("\n").each { |line| $LOG.error "[#{@cmd_id}]  #{line}" }
    end
    if @output.length > 0
      @output.split("\n").each { |line| $LOG.info "[#{@cmd_id}]  #{line}" }
    end
  end

  # Returns false if the command hasn't been executed yet
  def run?
    return @exec_time > 0
  end

  # Returns true if the command was successful.
  def success?
    return @error_output.length == 0
  end
end

# Define for each host which user accounts are being backed up and which files should be excluded
default_excludes = ['.Trash', 'Downloads', 'Desktop', 'Music/iTunes/iTunes Music/Podcasts',
                    'Library/Caches', 'Library/Logs']
# format: hostname => { username => [excluded_files] }
HOSTS={ 'host1' => { 'user1' => default_excludes },
        'host2' => { 'user2' => default_excludes, 'user1' => default_excludes },
        'host3' => { 'user2' => default_excludes + ['Music/iTunes']},
        'host4' => { 'user3' => default_excludes }
      }

$LOG.info "****** Backup started... *******"

# Make a ZFS snapshot
snapshot_name = "#{ZFS_POOL}@backup-#{Time.now.strftime('%y-%m-%d_%H:%M')}"
$LOG.info "Creating ZFS snapshot #{snapshot_name}"
`zfs snapshot #{snapshot_name}`

pending_commands = {}
HOSTS.each do |hostname,user_data|
  $LOG.info "Calling #{hostname} ..."
  if Ping.pingecho(hostname)
    user_data.each do |user,excluded_files|
      exclude_file = Tempfile.new("tempfile")
      excluded_files.each { |filepath| exclude_file << filepath << "\n" }
      exclude_file.close
      user_hostname = "#{user}@#{hostname}"
      $LOG.info "Backing up #{user_hostname} ..."
      local_backup_path = "#{LOCAL_BACKUP_PATH}/#{hostname}/#{user}"
      FileUtils.mkdir(local_backup_path) unless File.exists? local_backup_path
      command = "rsync -#{DRY_RUN ? 'n' : ''}avz --delete --partial --exclude-from=#{exclude_file.path} #{user_hostname}: #{local_backup_path}/"
      rsync = ExecCmd.new(command, user_hostname)
      pending_commands[user_hostname] = rsync
      Thread.new do
        rsync.run
      end
    end
  else
    $LOG.warn "#{hostname} does not respond!"
  end
end

# Wait for the backup processes to complete
while pending_commands.length > 0
  pending_commands.each do |user_hostname, exec_cmd|
    if exec_cmd.run?
      exec_cmd.log_results
      pending_commands.delete(user_hostname)
    end
  end

  if pending_commands.length > 0
    $LOG.info "Still #{pending_commands.length} tasks backing up #{pending_commands.keys.join(', ')}"
    sleep 60
  end
end

$LOG.info "****** Backup complete. *******\n"

What it does

You can use the command line arguments --dry-run and --stdout. The first will call rsync with its --dry-run option and the second will write the log to stdout instead of a logfile.

On start the script looks for other instances of itself and kills them along with all orphaned rsync child processes.

It will create a ZFS snapshot of the target pool with the current time and date as a label.

Then it pings all the hosts defined in HOSTS, constructs for each defined user the rsync command with all the excluded files, and starts a separate thread in which the command is executed.

Then it will loop until all the rsync tasks have been completed.

The logfile is /var/log/backup_rb.log and looks like this:

I, [15.10.09 11:44:11#7343]  INFO -- : ****** Cleaning up... *******
I, [15.10.09 11:44:11#7343]  INFO -- : Killing old backup processes 7316,7325,7332,7328,7334
I, [15.10.09 11:44:11#7343]  INFO -- : ****** Backup started... *******
I, [15.10.09 11:44:11#7343]  INFO -- : Creating ZFS snapshot daten@backup-09-10-15_11:44
I, [15.10.09 11:44:11#7343]  INFO -- : Calling host1 ...
I, [15.10.09 11:44:11#7343]  INFO -- : Backing up user2@host1 ...
I, [15.10.09 11:44:11#7343]  INFO -- : [user2@host1] Starting command: rsync -avz --delete --partial --exclude-from=/tmp/tempfile20091015-7343-1hpu6rm-0 user2@host1: /daten/host1/user2/...
I, [15.10.09 11:44:11#7343]  INFO -- : Calling host2 ...
I, [15.10.09 11:44:11#7343]  INFO -- : Backing up user1@host2 ...
I, [15.10.09 11:44:11#7343]  INFO -- : [user1@host2] Starting command: rsync -avz --delete --partial --exclude-from=/tmp/tempfile20091015-7343-1f9jzvu-0 user1@host2: /daten/host2/user1/...
I, [15.10.09 11:44:11#7343]  INFO -- : Calling host3 ...
W, [15.10.09 11:44:16#7343]  WARN -- : host3 does not respond!
I, [15.10.09 11:44:16#7343]  INFO -- : Calling host4 ...
I, [15.10.09 11:44:16#7343]  INFO -- : Backing up user2@host4 ...
I, [15.10.09 11:44:17#7343]  INFO -- : [user2@host4] Starting command: rsync -avz --delete --partial --exclude-from=/tmp/tempfile20091015-7343-yv60ta-0 user2@host4: /daten/host4/user2/...
I, [15.10.09 11:44:17#7343]  INFO -- : Backing up user1@host4 ...
I, [15.10.09 11:44:17#7343]  INFO -- : [user1@host4] Starting command: rsync -avz --delete --partial --exclude-from=/tmp/tempfile20091015-7343-644sq6-0 user1@host4: /daten/host4/user1/...
I, [15.10.09 11:44:17#7343]  INFO -- : Still 4 tasks backing up user1@host2, user1@host4, user2@host4, user2@host1
I, [15.10.09 11:45:17#7343]  INFO -- : Still 4 tasks backing up user1@host2, user1@host4, user2@host4, user2@host1
I, [15.10.09 11:45:41#7343]  INFO -- : [user2@host4] Command completed in 84.087238
I, [15.10.09 11:46:17#7343]  INFO -- : [user2@host4] rsync -avz --delete --partial --exclude-from=/tmp/tempfile20091015-7343-yv60ta-0 user2@host4: /daten/host4/user2/:
I, [15.10.09 11:46:17#7343]  INFO -- : [user2@host4] receiving file list ... done
I, [15.10.09 11:46:17#7343]  INFO -- : [user2@host4] Dropbox/
I, [15.10.09 11:46:17#7343]  INFO -- : [user2@host4] Library/Application Support/SyncServices/Local/
I, [15.10.09 11:46:17#7343]  INFO -- : [user2@host4] Library/Application Support/SyncServices/Local/admin.syncdb
I, [15.10.09 11:46:17#7343]  INFO -- : [user2@host4] Library/Application Support/SyncServices/Local/TFSM/com.apple.Calendars/
I, [15.10.09 11:46:17#7343]  INFO -- : [user2@host4] Library/Application Support/SyncServices/Local/clientdata/120c2b27e9ab530b442181ced8799e35b30c85cb/
I, [15.10.09 11:46:17#7343]  INFO -- : [user2@host4] Library/Application Support/SyncServices/Local/conflicts/
I, [15.10.09 11:46:17#7343]  INFO -- : [user2@host4] Library/Calendars/
I, [15.10.09 11:46:17#7343]  INFO -- : [user2@host4] Library/Calendars/Calendar Cache
I, [15.10.09 11:46:17#7343]  INFO -- : [user2@host4] Library/Calendars/Calendar Sync Changes/
I, [15.10.09 11:46:17#7343]  INFO -- : [user2@host4] Library/Calendars/FE3DF9D9-8D76-4F44-973A-525E02717BFE.calendar/Info.plist
I, [15.10.09 11:46:17#7343]  INFO -- : [user2@host4] Library/Logs/Sync/syncservices.log
I, [15.10.09 11:46:17#7343]  INFO -- : [user2@host4] Library/Mail/
I, [15.10.09 11:46:17#7343]  INFO -- : [user2@host4] Library/Preferences/
I, [15.10.09 11:46:17#7343]  INFO -- : [user2@host4] Library/Preferences/iCalExternalSync.plist
I, [15.10.09 11:46:17#7343]  INFO -- : [user2@host4]
I, [15.10.09 11:46:17#7343]  INFO -- : [user2@host4] sent 13772 bytes  received 1001023 bytes  13440.99 bytes/sec
I, [15.10.09 11:46:17#7343]  INFO -- : [user2@host4] total size is 64974563109  speedup is 64027.28
I, [15.10.09 11:46:17#7343]  INFO -- : Still 3 tasks backing up user1@host2, user1@host4, user2@host1
I, [15.10.09 11:47:17#7343]  INFO -- : Still 3 tasks backing up user1@host2, user1@host4, user2@host1

Run in crontab

Finally I added a new entry to the root user's crontab with "crontab -e" which will start the script every hour.

0 * * * * /usr/bin/ruby /root/backup.rb

Setting up my Solaris fileserver (Part 2)

To enable mirroring on my two HDDs I tried to follow the steps described at http://darkstar-solaris.blogspot.com/2008/09/zfs-root-mirror.html and http://malsserver.blogspot.com/2008/08/mirroring-resolved-correct-way.html but got a little confused by the different device names.

What I needed to do was copy the partition table from the first drive to the second one; then I could attach it to the rpool. I performed the following steps as the root user.

# zpool status

 pool: rpool
 state: ONLINE
 scrub: none requested
config:

 NAME        STATE     READ WRITE CKSUM
 rpool       ONLINE       0     0     0
   c8d0s0    ONLINE       0     0     0

errors: No known data errors

Meaning: my first disk is c8d0s0 and it is attached directly to the rpool.

# format

Searching for disks...done

AVAILABLE DISK SELECTIONS:
 0. c8d0 <DEFAULT cyl 19454 alt 2 hd 255 sec 63>
 /pci@0,0/pci-ide@9/ide@0/cmdk@0,0
 1. c8d1 <DEFAULT cyl 19454 alt 2 hd 255 sec 63>
 /pci@0,0/pci-ide@9/ide@0/cmdk@1,0

So my second drive's name is c8d1. I chose option 1 and used the fdisk command to create a Solaris2 partition. Then I quit the format command.

To copy the partition table from the first drive to the second one I used:

# prtvtoc /dev/rdsk/c8d0s2|fmthard -s - /dev/rdsk/c8d1s2

Then I could force attach the second drive to the rpool:

# zpool attach rpool c8d0s0 c8d1s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c8d1s0 overlaps with /dev/dsk/c8d1s2

# zpool attach -f rpool c8d0s0 c8d1s0
Please be sure to invoke installgrub(1M) to make 'c8d1s0' bootable.

# zpool status
 pool: rpool
 state: ONLINE
 scrub: resilver completed after 0h2m with 0 errors on Mon Jul 13 10:16:55 2009
config:

 NAME        STATE     READ WRITE CKSUM
 rpool       ONLINE       0     0     0
   mirror    ONLINE       0     0     0
     c8d0s0  ONLINE       0     0     0
     c8d1s0  ONLINE       0     0     0  4,18G resilvered

errors: No known data errors

To make the second drive bootable as well, I invoked installgrub:

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8d1s0
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 271 sectors starting at 50 (abs 16115)

The next task will be to install the four different 1 TB HDDs I also bought into that Chenbro case and create a zpool for them.

Setting up my Solaris fileserver (Part 1)

Finally I had the time (and money :-) to fulfil my long-standing wish to set up my own fileserver running Solaris and ZFS. Since I also wanted to use it as a potential test server for my projects, I decided on a slightly bigger processor. I ordered a BTO system at http://www.alternate.de with the following configuration:

  • Fan: Arctic Alpine  64 PRO
  • CPU: AMD X2CV  GE4450E AM2 2300    2000 1MB
  • Power supply: Corsair CMPSU-400CX   400W ATX2
  • Case: A+case  Seenium         Black
  • Mainboard: GiBy GA-M85M-US2H GF8100 RGVSM
  • 1st boot HDD: Samsung  160 GB SAT2 HD161GJ
  • 2nd boot HDD: Maxtor   160 GB SATA STM3160813AS
  • RAM: D2 2GB  800-5     128×8     tMS
  • DVD: Lite DH-16D3P        16x AT        Bl  B

The first installation of OpenSolaris 2009.06 went quite smoothly, but then I discovered that I was back in tinker land: the onboard network interface was not recognized. After many hours of reinstalling, searching the web and so on (I know why I use a Mac ;-) I found this page by someone who had the same problem: http://www.linuxdynasty.org/basic-networking-howto-on-opensolaris.html

"scanpci -v" returned this in my case:

pci bus 0x0000 cardnum 0x0a function 0x00: vendor 0x10de device 0x0760
 nVidia Corporation MCP78S [GeForce 8200] Ethernet
 CardVendor 0x1458 card 0xe000 (Giga-byte Technology, Card unknown)
 STATUS    0x00b0  COMMAND 0x0007
 CLASS     0x02 0x00 0x00  REVISION 0xa2
 BIST      0x00  HEADER 0x00  LATENCY 0x00  CACHE 0x00
 BASE0     0xfc008000 SIZE 4096  MEM
 BASE1     0x0000dc00 SIZE 8  I/O
 BASE2     0xfc009000 SIZE 256  MEM
 BASE3     0xfc00a000 SIZE 16  MEM
 BASEROM   0x00000000  addr 0x00000000
 MAX_LAT   0x14  MIN_GNT 0x01  INT_PIN 0x01  INT_LINE 0x0f

Since I have a different mainboard with a GeForce 8200 chip, I tried to use the latest drivers. After two reinstalls I got it up and running with these steps, executed as the root user:

gunzip nfo-2.6.3.tar.gz
tar -xvf nfo-2.6.3.tar
cd nfo-2.6.3
rm obj Makefile
ln -s Makefile.${KARCH}_${COMPILER} Makefile  ( for me it was ln -s Makefile.amd64_gcc  Makefile )
ln -s ${KARCH} obj ( for me it was ln -s amd64 obj )
rm Makefile.config
ln -s Makefile.config_gld3 Makefile.config
/usr/ccs/bin/make
/usr/ccs/bin/make install
cp nfo.conf /kernel/drv/nfo.conf
./adddrv.sh

After these steps a reboot did the trick.