Improvements to Snapshot Backup Scripts

The snapshot scripts that I blogged about earlier have undergone a number of important changes. I had been having a lot of problems with the cleanup of snapshot volumes and with the deletion of old backup logical volumes. This was all related to this bug. After applying the workarounds described there, the backup procedure is completely robust again.

Also, I have added improved logging together with scripts to check the result of a backup.
Additionally, the software is now also available in an RPM repository (works at least on opensuse 11.3).

For more information, have a look at the snapshot website.

Posted in Devops/Linux | Leave a comment

Git server setup on linux using smart HTTP

After seeing a presentation from Linus Torvalds I decided to read more about git. After looking into git some more, I have decided to slowly move to git. New projects will use git instead of subversion and I will move some existing projects over to git when I get the chance.

The first question that arises in such a case is how to deploy it. Of course, there are free solutions available such as github, but this has some disadvantages for me. First of all, the access will be slower compared to a solution that I host at home, and second, I also have private repositories that are really private, so I really don’t want them to be hosted on github (even if github protects the data). Apart from this, the distributed nature of git would allow me to easily put the source code for an open source project on github anyway, should one of my projects ever become popular.

So the question remains how to host it at home. Of course, I have my current infrastructure already, consisting of a linux server running apache. Looking at the options for exposing git, there are several solutions:

Remote access through ssh
  Pros:
    • zero setup time because ssh is already running
  Cons:
    • requires complete trust of a client; possible version incompatibilities
    • requires a system account for every user and additional configuration to prevent logins and other types of access
    • (corporate) firewalls can block SSH, making it inaccessible from there

Apache webdav: remote access through apache using webdav
  Pros:
    • easy to set up, simple apache configuration
    • uses proven apache stability and security
  Cons:
    • additional configuration required in git to make this work (git update-server-info)
    • requires complete trust of a client, with the same risk of version incompatibilities
    • definite performance impact

Apache smart http: remote access using apache with a CGI based solution (basically using HTTP as transport for git)
  Pros:
    • easy to set up
    • uses proven apache stability and security
    • does not require trusting a particular client
  Cons:
    • some overhead of HTTP (although much less than with webdav)

Git native
  Pros:
    • doesn’t require trusting a client
    • most efficient solution
  Cons:
    • does not easily pass through firewalls
    • server code maturity
    • lack of authentication

In the above table, the phrase “trusting a client” means trusting the client software. Allowing a client full control over the modification of the repository files is risky. There could be clients with bugs or clients using different versions of git and there could even be malicious clients that could corrupt a repository. This risk is not present with the native git and smart http approaches.

As is clear from the above, ssh is the most problematic of all. At the other extreme, native git is the most efficient but lacks the required security, and it would require me to open up yet another port on the firewall and run yet another service. For these reasons I decided on an HTTP based setup. In fact, I experimented early on with the webdav based approach simply because I hadn’t yet found the smart http approach, which is relatively new. That experiment did show that webdav is much slower than the smart http setup. In fact, I think smart http is even faster than subversion when pushing changes.

The setup of smart HTTP is quite easy, basically it is a CGI based approach where HTTP requests are delegated to a CGI helper program which does the work. In effect, this is the git protocol over HTTP. Standard apache features are used to implement authentication and authorization. The smart HTTP approach is described already quite well here and here, but I encountered some issues and would like to clarify what I did to get it working.
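Once such a server is running, clients interact with it using ordinary git commands over HTTP. A hypothetical example (the host name and repository path below are made up for illustration):

```shell
# Clone a public repository anonymously (hypothetical URL):
git clone http://git.example.org/git/public/myproject.git
cd myproject

# Pushing hits the git-receive-pack URL, for which apache
# demands authentication:
git push origin master
```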

These are the steps I took to get it working on opensuse 11.3:

  • Set up the user accounts and groups that you need to authenticate against using htpasswd, and put the result in the file /etc/apache2/conf.d/git.passwd
  • Make sure that the cgi, alias, and env modules are enabled by checking the APACHE_MODULES setting in /etc/sysconfig/apache2.
  • Now we are going to edit the apache configuration file for the (virtual) domain we are using. In this example, I assume we have /data/git/public hosting public repositories (anonymous read and authenticated write) and /data/git/private hosting private repositories (authenticated read and write). Also the git repositories are going to be exposed under a /git context root.
    • By default, export all repositories that are found:
      SetEnv GIT_HTTP_EXPORT_ALL

      This can also be configured on a per repository basis, see the git-http-backend page for details.

    • Configure the CGI program used to handle requests for git.
      ScriptAlias /git/ /usr/lib/git/git-http-backend/

      This directive had me quite puzzled because the apache documentation says the second ScriptAlias argument should be a directory, but in this case it is an executable and it works anyway.

    • Set the root directory where git repositories reside
      SetEnv GIT_PROJECT_ROOT /data/git
    • By default, the git-http-backend allows push for authenticated users, and this directive tells the backend when a user is authenticated:
      SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER

      I had to google a lot to find this one because it is not mentioned in the documentation. Without it, I had to configure “http.receivepack” to “true” for every repository to allow “git push”.

    • General CGI configuration allowing the execution of the CGI programs. This is more or less self explanatory:
      <Directory "/usr/lib/git/">
        AllowOverride None
        Options +ExecCGI -Includes
        Order allow,deny
        Allow from all
      </Directory>
    • Next is the configuration of the public repositories:
      <LocationMatch "^/git/public/.*/git-receive-pack$">
        AuthType Basic
        AuthName "Public Git Repositories on"
        AuthUserFile /etc/apache2/conf.d/git.passwd
        Require valid-user
      </LocationMatch>

      This requires an authenticated user for every push request. See the apache documentation for the various options such as requiring the user to belong to a group. In my setup, I simply use one global git.passwd file and any authenticated user has access to any repository.

    • Finally, there is the setup of the private repositories, which requires a valid user for any URL:
      <LocationMatch "^/git/private/.*$">
        AuthType Basic
        AuthName "Private Git Repositories on"
        AuthUserFile /etc/apache2/conf.d/git.passwd
        Require valid-user
      </LocationMatch>

      In this case I could have also used a “Location” element instead of “LocationMatch”.

    • Finally, restart apache using
      /etc/init.d/apache2 restart

      or reload the configuration with /etc/init.d/apache2 force-reload, and try it out.
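For reference, the fragments above can be assembled into a single virtual host configuration. This is a sketch under the assumptions made in the steps (the /data/git paths, the /git context root, and the REMOTE_USER workaround for Apache’s ScriptAlias internal redirect), so adapt it to your own host:

```apache
<VirtualHost *:80>
  # Export every repository under GIT_PROJECT_ROOT by default.
  SetEnv GIT_HTTP_EXPORT_ALL
  SetEnv GIT_PROJECT_ROOT /data/git

  # After the ScriptAlias internal redirect, apache only provides
  # REDIRECT_REMOTE_USER; copying it into REMOTE_USER lets
  # git-http-backend see an authenticated user and allow push.
  SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER

  # Delegate all /git/... requests to the smart HTTP CGI backend.
  ScriptAlias /git/ /usr/lib/git/git-http-backend/

  <Directory "/usr/lib/git/">
    AllowOverride None
    Options +ExecCGI -Includes
    Order allow,deny
    Allow from all
  </Directory>

  # Public repositories: anonymous read, authenticated push.
  <LocationMatch "^/git/public/.*/git-receive-pack$">
    AuthType Basic
    AuthName "Public Git Repositories"
    AuthUserFile /etc/apache2/conf.d/git.passwd
    Require valid-user
  </LocationMatch>

  # Private repositories: authentication for every request.
  <LocationMatch "^/git/private/.*$">
    AuthType Basic
    AuthName "Private Git Repositories"
    AuthUserFile /etc/apache2/conf.d/git.passwd
    Require valid-user
  </LocationMatch>
</VirtualHost>
```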

Hope this helps others setting up their git servers on linux. In my experience this setup is quite fast for both push and pull. I am currently working on one project that you can access by doing

  git clone

A gitweb interface for browsing the public repositories is here.

Have a lot of fun!

Posted in Devops/Linux | 9 Comments

Initial experiences with the Samsung Galaxy S II and Android

A few days ago, on May 11th, I received my new phone, the Samsung Galaxy S II. This is one of the first dual core phones running gingerbread. After a few days of working with it, I must say I am truly impressed. On the software side, the phone is rock-solid, loads better than my previous Nokia N97 and (the absolutely terrible) Sony Ericsson P990i (which used to reset ‘to improve system performance’ in standby mode in the initial software version). It is nice to use a phone that just works. I haven’t even discovered a single glitch. Nokia and Sony Ericsson should take note here.

Even making calls is better than on the N97. On that phone you lose control completely every time someone else hangs up: the screen goes black and you cannot do anything with it for the next 10 seconds. It is also nice to have a music application that actually performs.

It’s too early to say anything about battery life as my use of the phone has been extreme for the past days, which included continuous downloading over wifi for hours on end and almost continuous use.

The phone feels really solid and looks great. Performance is excellent. With this phone I have the feeling that finally we have similar performance and usability again as with the good old Palm from approx. 6 years ago. It is also nice to know that the phone has Gorilla glass and uses the latest Sirf Star IV chip for GPS. All in all a quality product.

Looking at Android and in particular the Android market I am also impressed. The quality of the applications that I tried is quite good. One such application is a tuner (for tuning musical instruments). In the past it was difficult to find good applications for this. On Palm I used phontuner for instance but all the other applications sucked and I haven’t been able to find a suitable application on Symbian at all. On Android I have tried two which both worked quite well. The review system on the market makes it relatively easy to find good applications and saves a lot of time dealing with the bad ones. Buying stuff is also easy and fast.

Posted in Misc, Software | 2 Comments

Countdown has started again!

Yes folks! The countdown timer has been started again. This time I decided to renew it a bit and use a bit of javascript with jQuery instead of the good ‘ole java applet.

It is an important occasion for me this time. Basically it is the closure of a long period starting in 1998 that is now over. It is exciting to start something new again!

Posted in Fun, Java, Misc | 1 Comment

Processor evolution, will history repeat itself?

It is interesting to see what is going on in the industry with regard to the development of CPUs. In particular, the first dual core smartphones are being released right now and quad core CPUs are expected later this year. The latter is quite interesting because it appears such a chip can already beat a 2GHz desktop processor of only a couple of years ago. In addition, NVidia is predicting a 75-fold increase in smartphone compute power within only a couple of years.

The demos are quite impressive. They show typically single-threaded applications such as web browsers utilizing all cores and actually accelerating the experience. Of course, not all processor cores run at their maximum frequency all the time, but that is not that important since what counts is the end user experience. This also raises other interesting questions. For instance, will these mobile phone processors surpass desktop processors in performance? And if so, will mobile OSes such as Android and iOS compete directly with current desktop and laptop systems running windows and linux?

How about Intel, will they be able to catch up with their smartphone atom processor? I would expect so of course given the large number of smart people they employ and their research budget (nothing can compete with that). And, how about the technology of these smartphone processors entering in the regular desktop and server domain?

Wait! This happened before when Core 2 Duo replaced Pentium D using the architecture from Pentium M processors. So there, mobile technology (laptops) entered the desktop domain. Will the same happen again with smartphone processors? I sure hope so because that will lead to more low power (quiet) servers and will be good for the planet as such. So let’s hope that history will repeat itself.

“All of this has happened before, and it will all happen again.” — Peter Pan — Battlestar Galactica

Posted in Devops/Linux | 1 Comment

Bad quality scales superlinearly

Take any given production process and assume it is not producing good enough quality; let’s call it ‘crap’ to make it a bit more expressive. Now, ask yourself what happens if you scale this process by adding more workers and machines/material. Well? Continue reading

Posted in Misc, Process/WoW | Leave a comment

Looking back on the Nokia N97

I went to the phone shop today to get a newer SIM card because it could have been the cause of my reception problems in Switzerland some time ago (so customer service told me). So I told the guy in the shop about this and after taking a short look at my phone he said: “well, I think it might just as well have been the phone itself”. I asked “really?” and he said “Well, name me one problem and the Nokia N97 has it; these phones are really problematic.”

And come to think of it, I think he is right about this. Just listing the problems I had with the phone produces an impressive list:

  • Reception problems: Other phones (also other Nokias) have reception while the N97 has lost it.
  • Poor keyboard: Indeed the keyboard took a lot of getting used to.
  • Unusable GPS reception: GPS reception quality was bad enough to be unusable for car navigation (even after the hardware fix).
  • Scratching lens: The lens has a tendency to scratch easily because of the lid.
  • Battery life and life span: The battery often dies during one day of use. Also, I am now at my third battery.
  • Slow operating system: The N97 just gets slower and slower over time (unacceptable as I know that OSes exist that in fact get faster the longer you have them turned on).
  • Outlook incompatibility: Some repeating appointments in outlook can simply not be entered in the calendar application.
  • Bad screen connector: After 1.5 years the connector became unreliable. I am now using scotch tape to make sure I cannot open the phone, in an attempt to keep it working a bit longer, and I use google calendar to enter most of my appointments.
  • Unusable as a phone: After ending a call, you always lose control over it because the screen goes black and the phone doesn’t respond to anything for at least 10 seconds.

So indeed a big list. Now what should I do with my N97 once I get my new phone? Please leave a comment on this post. I will then film the winning suggestion with my new phone and post the result on youtube.

Posted in Fun, Misc | 1 Comment

KVM Setup Overview

The server has been running stable now for quite some time in the new setup with several virtual machines providing the actual functionality using Kernel Virtual Machine.

The setup is as follows. The host (falcon) runs a linux server and hosts 3 virtual machines: shikra, sparrow, and windowsxp, all running under KVM. The windowsxp VM is switched off most of the time and only runs when I need it. Its main purpose is to hold some licensed software that cannot be moved to another windows installation for licensing reasons.

The shikra image is basically the old server minus the continuous integration and maven functionality. Every linux virtual machine has two network interfaces: one bridged interface for the outside world and one NAT interface for pure host-VM and VM-VM communication. The latter interface is mainly used for backups, where it is useful to minimize the impact on the external network interfaces. Sparrow is dedicated to automated builds and provides the nexus repository for RPM generation. Having this functionality separate from the core server (shikra) is desirable so that automated builds cannot impact shikra.

From the internet, all SSH traffic is forwarded to the host so I can always get into the server, even if a VM is having problems, and HTTP, HTTPS, IMAPS, and SMTP traffic is routed directly to shikra.

In the future I want to generalize this setup a bit more, by creating a separate VM for mythtv functionality. Also, I am considering to create a separate, very small, VM for just the reverse proxy.

As part of this setup I had to automate some tasks for starting up and shutting down VMs. This is provided by the kvmcustom package (see the yum repository). Also see the post about automated management of this yum repo.

Posted in Devops/Linux | 1 Comment

Two worlds meet (1): Automated creation of Yum Repos with Maven, Nexus, and Hudson

This is the first of a series of blogs titled ‘Two worlds meet’, about how two technologies can be used together to solve a problem. Mostly one world will be linux, or more specifically a virtualized linux setup using kernel virtual machine, with the other world being java.

In this blog I will look at automated creation of a yum repository using maven, nexus, and hudson. First, however, some background is needed. Some time ago I bought a new server with more than sufficient resources to run multiple virtual machines. The aim was to achieve some separation of concerns by having virtual machines with different purposes, and also to be able to run conflicting software. Doing that introduces a whole new problem of maintaining the software on these virtual machines. Of course, I use a standard linux distribution such as opensuse, but I still have some custom scripts that I need and want to have available on all VMs.

Using the standard linux tooling, an obvious method is to create my own Yum repository, publish my own RPMs in it, and then add that Yum repo as a channel in all of my VMs. Of course, the challenge is then to easily and automatically create such a Yum repository. Fortunately, since I work quite a lot with Java and Maven (earning a living with it, basically), there is a quite easy solution with a nice separation of concerns.
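Adding the repo as a channel on each VM is a one-time step. A sketch, with a hypothetical repository URL and alias:

```shell
# Add the Yum repo as a zypper channel (URL and alias are examples):
zypper addrepo http://yum.example.org/public my-rpms

# From then on, updated RPMs arrive via the normal update path:
zypper refresh
zypper up
```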

The ingredients of this solution are:

  • maven for building the rpm using the maven rpm plugin
  • the maven release plugin for tagging a released version of the RPM, stepping its version, and publishing it into a maven repository
  • a maven repository such as nexus for publishing RPMs into
  • hudson for detecting changes and automatically updating/building the Yum repository upon changes to the RPMs.

In addition, some basic infrastructure is needed such as:

  • a version control system such as subversion
  • apache for providing access to subversion and for serving the Yum repo to all VMs
  • an application server such as glassfish for running hudson and nexus

This may seem like a lot of infrastructure, but before I started I already had most of this except for the nexus maven repository, so all in all the effort for this solution was quite limited.

The main new ingredient of the solution is the script to create the Yum repository from the nexus repository. This script exploits the fact that nexus stores its repositories in a standard maven directory structure (an approach using REST web services is also possible):



#!/bin/bash

# The locations below are example values: REPO is where nexus
# stores its repository, YUM is the staging area for the Yum repo.
REPO=/data/nexus/storage/releases
YUM=/tmp/yum-staging

# Create the repo directory
rm -rf $YUM
mkdir -p $YUM
mkdir -p $YUM/noarch

# Find the RPMs in the nexus repository and use hardlinks
# to preserve modification times and avoid the overhead of
# copying
for rpm in $( find $REPO -name '*.rpm' ); do
  echo "RPM $rpm"
  ln $rpm $YUM/noarch
done

# createrepo is a standard command available on opensuse
# to create a Yum repository
createrepo $YUM

# sign it
gpg -a --detach-sign $YUM/repodata/repomd.xml
gpg -a --export F0ABC836 > $YUM/repodata/repomd.xml.key

# sync the results to their final destination to make them
# available
rsync --delete -avz $YUM/ /data/www/http.wamblee.org_yum/public

Using this approach it is really easy to update an RPM and make it available on all my VMs. The procedure is basically as follows:

  • edit the source of the RPMs and check in
  • Now tag it and step the versions using:
    mvn release:prepare
  • Deploy the just tagged version to the nexus repository:
    mvn release:perform
  • Some time later the Yum repository has been automatically updated by hudson based on the contents of the nexus repository
  • On a specific VM simply use a standard update using
    zypper up

    Note that this may require an explicit

    zypper refresh

    to make sure that zypper sees the latest versions of all RPMs. Autorefresh will also work but might require some more time before zypper sees the latest versions.

Therefore, in the end this is a really simple procedure to quickly make RPMs available on all VMs while also making sure each version is properly tagged in subversion. The only issue is that hudson runs on every SCM change, not only when an RPM is released, but I consider that a minor issue.

The YUM repo is here.

An example pom fragment is below:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    ...
    <description>KVM guest support</description>
    ...
    <copyright>Apache License 2.0, 2010</copyright>
    <packager>Erik Brakkee</packager>
    ...
</project>
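The <copyright> and <packager> elements above belong to the rpm plugin configuration. A minimal sketch of how a pom might wire up the rpm-maven-plugin (the group and file mappings here are assumptions for illustration, not the original values):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>rpm-maven-plugin</artifactId>
      <extensions>true</extensions>
      <configuration>
        <group>Applications/System</group>
        <copyright>Apache License 2.0, 2010</copyright>
        <packager>Erik Brakkee</packager>
        <mappings>
          <!-- Hypothetical mapping: install scripts into /usr/bin -->
          <mapping>
            <directory>/usr/bin</directory>
            <filemode>755</filemode>
            <sources>
              <source>
                <location>src/main/scripts</location>
              </source>
            </sources>
          </mapping>
        </mappings>
      </configuration>
    </plugin>
  </plugins>
</build>
```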

Posted in Devops/Linux, Java, Software | 15 Comments

Flexible JDBC Realm for Glassfish

Approximately three years ago I started the development of a simple JDBC based security realm for Glassfish. The reason was that I was migrating from JBoss to glassfish and ran into problems with one application. That application simply stored authentication data (user, group, password digests) in a database. I had been relying on a simple configuration for this on JBoss but ran into limitations of JDBCRealm on Glassfish. Therefore, I wrote my own realm. It is now being used in several places. With version 1.1 I consider this security realm feature complete. More info is at the web site.

Posted in Java, Software | 4 Comments