Creating a USB install for CentOS 6.4

The days of rotating disks for storing information, and in particular for installing OSes, are nearing their end. Why rely on something with rotating parts for storing data in the 21st century? Unfortunately, not every software vendor has caught up with this, so in some cases special measures must be taken to install an OS from a USB disk. One example is CentOS/RHEL, which does not come with a USB install by default. There is a procedure from Red Hat that can be used, but it is limited to starting an installation when you already have the installation media available somewhere (e.g. on a hard drive).

One common method to create such a USB install is to use the livecd-iso-to-disk script. Unfortunately that did not appear to work, even though I tried it many times. After reading an interesting discussion on unix.stackexchange.com, I decided to give it another shot, and this time it worked.

What I did was the following on a laptop running CentOS 6.4 (a consolidated command sketch follows the list):

  • Insert the USB stick: Find out the device name (e.g. using dmesg). Make sure the stick is unmounted as it could be automounted.
  • Partitioning: Make sure the disk is partitioned to contain one single primary partition (e.g. /dev/sdb1), using for example cfdisk. From now on I will assume that /dev/sdb is the USB stick; make sure to substitute the correct device in the instructions below.
  • File system: Create an ext3 filesystem on /dev/sdb1
    mkfs.ext3 /dev/sdb1

    I did not try ext2 or ext4, but these could also work. Optionally, you can also do a

    tune2fs -m0 /dev/sdb1

    to increase the available space by removing the blocks reserved for the root user (these are not needed on an install medium anyway).

  • Install livecd tools: Install using yum:
    yum install livecd-tools
  • Transfer the ISO to the USB stick: Transfer disk 1 of the CentOS 6.4 installation to the USB stick:
    livecd-iso-to-disk  CentOS-6.4-x86_64-bin-DVD1.iso  /dev/sdb1

    Note that it is important to specify /dev/sdb1 here and not /dev/sdb.
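
For reference, here is the whole sequence as a consolidated sketch (assuming the stick shows up as /dev/sdb; double-check this with dmesg before running anything destructive):

dmesg | tail                  # identify the device name of the stick
umount /dev/sdb1              # make sure it is not (auto)mounted
mkfs.ext3 /dev/sdb1           # single primary partition created with e.g. cfdisk
tune2fs -m0 /dev/sdb1         # optional: reclaim the reserved blocks
yum install livecd-tools
livecd-iso-to-disk CentOS-6.4-x86_64-bin-DVD1.iso /dev/sdb1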


Testing

After this step, the USB stick can be tested locally using qemu-kvm.

To simply verify that the USB stick is found and the boot menu is recognized, boot up a virtual machine with only the USB disk:

/usr/libexec/qemu-kvm -hda /dev/sdb -m 256 -vga std

Then use a VNC viewer (e.g. vncviewer from tigervnc) to view the VM. This should show a boot menu and should allow you to start the installation up to the point where the installation procedure cannot continue anymore (there is no disk to install to yet).

If you want to test a full installation, create a disk using logical volume management:

lvcreate -L 10g -n bladibla vg_mylaptop

where vg_mylaptop is a volume group with at least 10GB of free space. Then start qemu-kvm with the created logical volume as disk hdb, giving it a bit more memory:

/usr/libexec/qemu-kvm -boot c -hda /dev/sdb -hdb /dev/vg_mylaptop/bladibla -m 2048 -vga std

After the install is completed, start the VM again without the USB stick:

/usr/libexec/qemu-kvm -boot c  -hda /dev/vg_mylaptop/bladibla -m 2048 -vga std

The VM should now start up successfully. The USB boot stick is also recognized natively by my laptop, and it looks like I can do a full OS install there as well (at least the upgrade, which of course did nothing in my case, ran to completion).

Disclaimer: As mentioned in the discussion at the link above, the whole procedure might give different results depending on the USB stick you use. I tested this procedure on a Dell Latitude M4700 laptop using a Kingston GT160 8GB memory stick.

Posted in Devops/Linux, Software | 4 Comments

Java from the trenches: improving reliability

Java and the JVM are great things. In contrast to writing native code, making a mistake in your Java code will not (or should not) crash the virtual machine. However, in my new position at a SaaS company I have been closer to production systems than ever before, and in the short time I have been there I have already gained a lot of experience with the JVM. Crashes and hangs occur, but there is something we can do about them. These experiences are based on running Java 1.6 update 29 on CentOS 6.2 and RHEL 6 as well as Windows Server 2003.

Java Service Wrapper

To start off with, I would like to recommend the Java Service Wrapper. This is a great little piece of software that allows you to run Java as a service, with both a Windows and a Linux implementation. The service wrapper monitors your Java process and restarts it when it crashes, or restarts it explicitly when it appears hung. The documentation is excellent and it works as advertised. It has given us no problems at all, apart from tweaking the timeout after which a Java process is considered hung.

The service wrapper writes its own log file, but we found that it also contained every log statement written by the application. The cause turned out to be the console handler of java.util.logging, which was still enabled. This problem was easily solved by setting the handlers property to empty in jre/lib/logging.properties:

handlers=
#handlers= java.util.logging.ConsoleHandler

This also solved a performance problem whereby, due to a bug in the application, excessive logging was being done and the Java Service Wrapper simply could not keep up anymore.

With a default JRE logging configuration, the logging output can also be disabled by setting the following properties in the wrapper.conf file:

wrapper.syslog.loglevel=NONE
wrapper.console.loglevel=NONE
wrapper.logfile.loglevel=STATUS
wrapper.java.command.loglevel=STATUS

Of course, with the console logging turned off, it should be possible to remove the wrapper.console.loglevel setting (not tried yet).

Garbage collection

Since we would like to achieve low response time and minimize server freezes due to garbage collection, we settled on the CMS (Concurrent Mark and Sweep) garbage collector.

Using the CMS collector we ran into one important issue: on Windows the server would run perfectly, but on Linux it would become unresponsive after just a couple of hours of traffic. The cause was quickly found to be PermGen space. It turns out that garbage collection behavior differed between Windows and Linux: the PermGen space was being collected on Windows but not on Linux. After hours and hours of searching, we found the option that fixes this behavior:

-XX:+CMSClassUnloadingEnabled

The full list of options we use for garbage collection is now as follows:

-XX:+UseConcMarkSweepGC
-XX:+ExplicitGCInvokesConcurrent
-XX:+CMSClassUnloadingEnabled
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-verbose:gc
-Xloggc:/var/log/gc.log

The last four options are for garbage collection logging which is useful for troubleshooting potential garbage collection issues after the fact.
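
For reference, combined into a single launch command this looks roughly as follows (the jar name is just a placeholder for your own application):

java -XX:+UseConcMarkSweepGC \
     -XX:+ExplicitGCInvokesConcurrent \
     -XX:+CMSClassUnloadingEnabled \
     -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
     -verbose:gc -Xloggc:/var/log/gc.log \
     -jar myserver.jar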

One of the issues with the above configuration is that upon restart of the JVM, the garbage collection log file is overwritten instead of appended to, so information is lost when the JVM crashes. This can be worked around with a ‘tail -F gc.log > gc.log.all’ command, but that solution is not nice as it creates very large log files. The optimal solution would be for the JVM to cooperate with standard facilities on Linux such as logrotate. Similar to how, for instance, Apache handles logging, the JVM could simply close the gc.log file when it receives a signal and then reopen it. That would be sufficient for logrotate to work. Unfortunately, this is not yet implemented in the JVM as far as I can tell.

Crashes in libzip.so or zip.dll

It turns out that this problem can occur when a zip file is overwritten while it is being read. The cause could of course lie in the application, but the JVM still should not crash on this. It appears to be a known issue which was fixed in 6u21-rev-b09, but the fix is disabled by default.

If you set the system property

-Dsun.zip.disableMemoryMapping=true

then memory-mapped IO will no longer be used for zip files, which solves this issue. This system property only works on Linux and Solaris, not on Windows. Luckily a colleague found this solution. Even when you know what you are looking for, this setting is very difficult to find on the internet, which is full of stories about crashes in the zip library.
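
When running under the Java Service Wrapper, such a system property goes into one of the wrapper.java.additional.<n> entries in wrapper.conf (the index 3 below is just an example; use the next free number in your configuration):

wrapper.java.additional.3=-Dsun.zip.disableMemoryMapping=true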

Crashes in networking libraries/general native code

Another issue we ran into was occasional crashes, mostly in the native code of networking libraries. This also appears to be a known issue with 64-bit JVMs. The cause is that insufficient stack space is left for native code to execute.

How it works is as follows. The Java virtual machine uses a fixed size for the stack of a thread, which can be specified with the -Xss option if needed. While executing Java code, the JVM can figure out whether there is enough space to execute a call and throw a StackOverflowError if there is not. With native code, however, the JVM cannot do that, so instead it checks whether a minimum amount of space is left for the native code. This minimum is configured using the StackShadowPages option. It turns out that by default this value is too low on older 64-bit JVMs, causing crashes in for instance socket libraries (e.g. when database access is being done). See for instance here. In particular, on JDK 1.6 update 29 the default value is 6, while on JDK 1.7 update 5 it is 20.

Therefore, a good setting for this flag is 20:

-XX:StackShadowPages=20

The size of one page is 4096 bytes, so increasing the stack shadow pages from 6 to 20 means that you need 56KB of additional stack size. This page size can be verified by running java with a low stack size and passing different values for StackShadowPages, like this:

erik@pelican> java -Xss128k -XX:StackShadowPages=19 -version
The stack size specified is too small, Specify at least 156

The stack size per thread may be important on memory-constrained systems. For instance, with a stack size of 512KB, a thousand threads would consume about 500MB of memory. This may matter for smaller systems (especially 32-bit ones, if these are still around), but it is no issue at all for a modern server.

Debug JVM options

To find out what the final (internal) settings are for the JVM, execute:

java -XX:+PrintFlagsFinal <myadditionalflags> -version
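
For example, to check the effective value of the stack shadow pages setting discussed above:

java -XX:+PrintFlagsFinal -XX:StackShadowPages=20 -version | grep StackShadowPages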

Logging

If your environment still uses log4j for some reason, be aware that log4j effectively synchronizes your entire application: appenders are invoked while holding a lock, so all threads that log serialize on it. We found an issue where an exception with a huge message string and stack trace was being logged. The toString() method of the exception took about one minute, during which the entire application froze. To reduce these synchronization issues, use an AsyncAppender, specify a larger buffer size (128 is the default), and set blocking to false. The async appender may have some overhead in single-threaded scenarios, but for a server application it is certainly recommended.
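
A sketch of what this looks like in log4j 1.2's XML configuration (the appender names and file path are made up; note that AsyncAppender can only be configured through the XML format, not through a properties file):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
  <appender name="file" class="org.apache.log4j.FileAppender">
    <param name="File" value="/var/log/app.log"/>
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d %p [%t] %c - %m%n"/>
    </layout>
  </appender>
  <!-- The async appender decouples application threads from disk I/O. -->
  <appender name="async" class="org.apache.log4j.AsyncAppender">
    <param name="BufferSize" value="1024"/> <!-- default is 128 -->
    <param name="Blocking" value="false"/>  <!-- drop events instead of blocking -->
    <appender-ref ref="file"/>
  </appender>
  <root>
    <priority value="info"/>
    <appender-ref ref="async"/>
  </root>
</log4j:configuration>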

Posted in Devops/Linux, Java, Software | 1 Comment

Why do developers write instead of reuse?

I am frequently amazed at the amount of software that is written from scratch instead of simply looking around and reusing what’s already available. In practice I have seen a lot of reasons for this:

  • Our problems are unique: The misconception that “our problems are unique”. I can’t count how many times I have seen this; it really occurs a lot.
  • Not looking for similar solutions: Simply forgetting to look for similar solutions on the internet to see what’s available (if only as an inspiration on how to best solve the problem). This is often also a side effect of thinking that this is a unique problem.
  • Underestimation of the problem: The misconception that it’s easy to write it yourself. In most cases it is easy to come up with a first (half-)working version that does approximately what you need. However, the work involved in making that same solution maintainable and giving it the correct feature set makes it much more expensive (the 80/20 rule).
  • Limited scope: A developer specialized in platform X (e.g. X = Java) will typically only look for solutions in that area, whereas looking more broadly will reveal more solutions.
  • Coolness factor: It is cool to develop it yourself. Perhaps it involves an opportunity to do something cool with clustering or another chance to use one of your favorite frameworks. Perhaps you could use one of those cloud databases?
  • Overestimation of oneself: The idea that we can do something better in a few weeks time than what the industry or open source community has come up with using man years of development.
  • The desire for fame by writing reusable software: Paradoxically, the desire for reusable software can encourage you to roll your own. The problem is that writing reusable software (or calling it reusable) brings fame (even if only within your local department). The reality, however, is that reuse can only exist through the willingness of people to use other people’s software. If there is one developer writing a reusable piece of software and 20 others using it, then clearly the willingness to use others’ software far outweighs writing it yourself.

I have seen these problems in companies of all sizes.

Posted in Software | 2 Comments

Moving countdown


Yes folks! The countdown timer has been started again. This time it is counting down to the moment the move really starts and the first boxes are loaded onto a truck towards my new home.


Really looking forward to it…

Posted in Misc | Leave a comment

Moving

The last time I moved to a different city was 13 years ago, and before that I had been moving every two years or so. So when I finally settled in 1998, I decided that I was going to stay in one place for a much longer time. Now, however, it is time to move again: I got a new job in a new location and moving makes a lot of sense. For one, I will have a much better house (buying a house in the middle of the credit crunch), with a very nice garden, and it will reduce my traveling time to and from work considerably. The area is also quite nice: my favorite mountain biking locations are closer, and there are many more opportunities for mountain biking nearby.

One of the most important things when moving is of course… my server. Naturally, I depend on it a lot: it runs my mail server, handles a number of mailing lists, hosts four web sites, and is also my VCR (MythTV).

Therefore, it is important to me to minimize downtime of the server during the move. Luckily, I am already prepared for this, since the server is already running as a virtual machine. So as part of the move I will run this virtual machine on my laptop, which gives me plenty of time to disassemble the server rack and set it all up again at my new location. In fact, as I write this, I am already running the server from my laptop. This is easy for me to do because my regular server backups are bootable, see here.

Because of this setup, I can keep the total downtime of my web sites in the order of minutes, and the mail downtime to at most a day in total (though no one will notice that, because mail servers retry sending mail).

Interestingly, I had quite a fight today to get things working again with my TVIX M-6500, which allows me to play movies hosted on the server (through NFS) on my TV. It turns out there are subtle issues with network bridges on Linux dropping UDP packets in some cases, see here. The TVIX uses UDP for NFS, which can cause problems with bridged network interfaces on virtual machines. Luckily, I managed to solve this by replacing the virtio network model of the machine with device emulation of an RTL8139 chipset, as sketched below. Anyway, all is good now. The server VM is fully functional again and I can watch movies, send and receive mail, and all my websites are up. The only thing I cannot do right now is record TV, but OK, this is only for the next 10 days or so. On the 16th of February I hope to start the server again at its new location.
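
For those interested, a sketch of the change on the qemu-kvm command line (the disk path and tap setup here are just illustrative placeholders, not my exact configuration):

# Before (virtio model, which dropped UDP packets in the bridged setup):
#   -net nic,model=virtio -net tap,script=/etc/qemu-ifup
# After (emulated RTL8139 chipset):
/usr/libexec/qemu-kvm -hda /dev/vg_laptop/server -m 2048 \
    -net nic,model=rtl8139 -net tap,script=/etc/qemu-ifup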

Posted in Misc | 1 Comment

Nested Logical Volume Management for VMs

As I blogged earlier, I have replaced my original server setup with a virtualized one. This introduces the concept of a “hardware-independent server” and makes it easy to run the server on any hardware without modification. Previously I used to replace the server hardware before it really broke, but in this setup I can run it until it breaks. Should I have a serious hardware failure, I can simply run the server(s) from any other hardware, such as a laptop. This is because I have “bootable backups”: if the server breaks, I can either run a replacement server based on the same data or simply use a laptop and run the backup in a virtualized manner.

As part of the original migration from native to virtualized I kept the setup identical, which meant passing physical hardware partitions to the virtual machine. The virtual machine then used Linux Logical Volume Management on top of these hardware partitions. For new virtual machines I used another approach: allocating a “disk” logical volume on the host, then partitioning it in the guest and using LVM again to manage storage within the guest. This in fact results in nested logical volume management and, as I have seen with one of the new virtual machines, works like a charm. It provides a nice separation of concerns, where the host simply assigns storage to guests and the guests decide how to use it.

However, there was still one virtual machine (the original hardware-based server) that was being passed physical disk partitions. This introduced the problem that both the host and the virtual machine saw the same logical volumes, with the accompanying chance of administrative errors and data corruption should multiple OSes concurrently access the same logical volumes.

To remedy this, I used the following procedure (a command sketch follows the list):

  • Allocate a physical volume on the host and a “disk” logical volume on it big enough to contain all logical volumes from the VM
  • Stop the VM
  • Add this virtual disk to the VM.
  • Start the VM
  • Partition the new disk on the VM and extend existing volume groups to use physical partitions on this disk.
  • Use pvmove to move data to the disk and remove the old unused physical partitions from the volume groups afterwards.
  • Stop the VM
  • Remove old physical partitions from the VM, leaving only the new “disk” logical volume
  • Start the VM
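
A sketch of the commands involved (the volume group and device names are made up; the exact names depend on your setup):

# On the host: a "disk" LV big enough for all of the VM's logical volumes.
lvcreate -L 100G -n vmdisk vg_host

# In the guest, after the new disk has appeared as e.g. /dev/vdb:
fdisk /dev/vdb                 # create one partition of type 8e (Linux LVM)
pvcreate /dev/vdb1
vgextend vg_guest /dev/vdb1    # extend the existing volume group onto it
pvmove /dev/sda2               # migrate all extents off the old partition
vgreduce vg_guest /dev/sda2    # remove the old partition from the volume group
pvremove /dev/sda2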

In executing this procedure I ran into the basic problem that I did not have enough storage. To solve this, I temporarily connected a separate disk to the server. After executing the procedure, all physical storage of the existing logical volumes (on the RAID array) was unused, so on the host I extended the volume group holding the “disk” logical volume with the freed-up storage on the RAID array. Then I again used pvmove, this time to move the data from the temporary disk to the RAID array, and afterwards removed the now unused physical volumes of the temporary disk from the volume group. Of course, all of this was done while the virtual machine was up and running (no one likes downtime).

The new setup reduces the chance of administrative error considerably and allows me to move storage for virtual machines to other locations without even having to shutdown a virtual machine. It also nicely separates the allocation of storage to VMs on the host from how each VM uses its allocated storage.

Posted in Devops/Linux | Leave a comment

Improvements to Snapshot Backup Scripts

The snapshot scripts that I blogged about earlier have undergone a number of important changes. I had been having a lot of problems with the cleanup of snapshot volumes and with the deletion of old backup logical volumes, all related to this bug. After applying the workarounds described there, the backup procedure is completely robust again.

Also, I have added improved logging, together with scripts to check the result of a backup.
Additionally, the software is now also available in an RPM repository (works at least on openSUSE 11.3).

For more information, have a look at the snapshot website.

Posted in Devops/Linux | Leave a comment

Git server setup on linux using smart HTTP

After seeing a presentation by Linus Torvalds I decided to read more about git. After looking into it some more, I have decided to slowly move to git: new projects will use git instead of subversion, and I will move some existing projects over when I get the chance.

The first question that arises in such a case is how to deploy it. Of course, there are free solutions available such as github, but these have some disadvantages for me. First of all, access will be slower compared to a solution hosted at home, and second, I also have private repositories that are really private, so I don’t want them hosted on github (even if github protects the data). Apart from this, the distributed nature of git would allow me to easily put the source code of an open source project on github anyway, should one of my projects ever become popular.

So the question remains how to host it at home. I already have an infrastructure consisting of a Linux server running apache. Looking at the options for exposing git, there are several solutions:

ssh: remote access through ssh.

  Pros:
    • Zero setup time, because ssh is already running.
  Cons:
    • Requires complete trust of a client; possible version incompatibilities.
    • Requires a system account for every user, plus additional configuration to prevent logins and other types of access.
    • (Corporate) firewalls can block SSH, making the repository inaccessible from there.

apache webdav: remote access through apache using WebDAV.

  Pros:
    • Easy to set up: simple apache configuration.
    • Uses proven apache stability and security.
  Cons:
    • Additional configuration required in git to make this work (git update-server-info).
    • Requires complete trust of a client, with the same risk of version incompatibilities.
    • Definite performance impact.

apache smart http: remote access using apache with a CGI-based solution (basically using HTTP as a transport for the git protocol).

  Pros:
    • Easy to set up.
    • Uses proven apache stability and security.
    • Does not require trusting a particular client.
  Cons:
    • Some overhead of HTTP (although much less than with WebDAV).

git native: remote access using the native git protocol (git daemon).

  Pros:
    • Does not require trusting a client.
    • Most efficient solution.
  Cons:
    • Does not easily pass through firewalls.
    • Server code maturity.
    • Lack of authentication.

In the overview above, “trusting a client” means trusting the client software. Allowing a client full control over the modification of the repository files is risky: there could be clients with bugs, clients using different versions of git, or even malicious clients that could corrupt a repository. This risk is not present with the git native and smart HTTP approaches.

As is clear from the above, ssh is the most problematic of all. At the other end, git native is the most efficient but lacks the required security, and it would require me to open up yet another port on the firewall and run yet another service. For these reasons I decided on an HTTP-based setup. In fact, early on I experimented with the WebDAV-based approach, simply because I had not yet found the smart HTTP approach, which is relatively new. That experiment did show that HTTP WebDAV is much slower than the smart HTTP setup. In fact, I think smart HTTP is even faster than subversion when pushing changes.

The setup of smart HTTP is quite easy: basically it is a CGI-based approach where HTTP requests are delegated to a CGI helper program that does the work. In effect, this is the git protocol over HTTP. Standard apache features are used to implement authentication and authorization. The smart HTTP approach is already described quite well here and here, but I encountered some issues and would like to clarify what I did to get it working.

These are the steps I took to get it working on openSUSE 11.3 (the combined configuration is shown after the list):

  • Set up the user accounts and groups that you need to authenticate against using htpasswd, and put the result in the file /etc/apache2/conf.d/git.passwd.
  • Make sure that the cgi, alias, and env modules are enabled by checking the APACHE_MODULES setting in /etc/sysconfig/apache2.
  • Now we are going to edit the apache configuration file for the (virtual) domain we are using. In this example, I assume we have /data/git/public hosting public repositories (anonymous read and authenticated write) and /data/git/private hosting private repositories (authenticated read and write). Also the git repositories are going to be exposed under a /git context root.
    • by default export all repositories that are found
      SetEnv GIT_HTTP_EXPORT_ALL

      This can also be configured on a per repository basis, see the git-http-backend page for details.

    • Configure the CGI program used to handle requests for git.
      ScriptAlias /git/ /usr/lib/git/git-http-backend/

      This directive had me quite puzzled because the apache documentation mentions that the second ScriptAlias argument should be a directory, but in this case it is an executable and it works.

    • Set the root directory where git repositories reside
      SetEnv GIT_PROJECT_ROOT /data/git
    • By default, the git-http-backend allows push for authenticated
      users and this directive tells the backend when a user is authenticated.

      SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER

      I had to google a lot to find this one because it is not mentioned in the documentation. Without this, I had to configure “http.receivepack” to “true” for every repository to allow “git push”.

    • General CGI configuration allowing the execution of the CGI programs. This is more or less self-explanatory:
      <Directory "/usr/lib/git/">
        AllowOverride None
        Options +ExecCGI -Includes
        Order allow,deny
        Allow from all
      </Directory>
    • Next is the configuration of the public repositories
      <LocationMatch "^/git/public/.*/git-receive-pack$">
        AuthType Basic
        AuthName "Public Git Repositories on wamblee.org"
        AuthUserFile /etc/apache2/conf.d/git.passwd
        Require valid-user
      </LocationMatch>

      This requires an authenticated user for every push request. See the apache documentation for the various options such as requiring the user to belong to a group. In my setup, I simply use one global git.passwd file and any authenticated user has access to any repository.

    • Finally, there is the setup of the private repositories, which requires a valid user for any URL.
      <LocationMatch "^/git/private/.*$">
        AuthType Basic
        AuthName "Private Git Repositories on wamblee.org"
        AuthUserFile /etc/apache2/conf.d/git.passwd
        Require valid-user
      </LocationMatch>

      In this case I could have also used a “Location” element instead of “LocationMatch”.

    • Finally restart apache using
      /etc/init.d/apache2 restart

      or reload it (“force-reload”) and try it out.
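
Putting all fragments together, the git-related part of the virtual host configuration looks roughly like this (using the paths and repository layout assumed above):

SetEnv GIT_HTTP_EXPORT_ALL
SetEnv GIT_PROJECT_ROOT /data/git
SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER
ScriptAlias /git/ /usr/lib/git/git-http-backend/

<Directory "/usr/lib/git/">
  AllowOverride None
  Options +ExecCGI -Includes
  Order allow,deny
  Allow from all
</Directory>

<LocationMatch "^/git/public/.*/git-receive-pack$">
  AuthType Basic
  AuthName "Public Git Repositories on wamblee.org"
  AuthUserFile /etc/apache2/conf.d/git.passwd
  Require valid-user
</LocationMatch>

<LocationMatch "^/git/private/.*$">
  AuthType Basic
  AuthName "Private Git Repositories on wamblee.org"
  AuthUserFile /etc/apache2/conf.d/git.passwd
  Require valid-user
</LocationMatch>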

I hope this helps others setting up their git servers on Linux. In my experience this setup is quite fast for both push and pull. I am currently working on one project that you can access by doing:

  git clone https://wamblee.org/git/public/xmlrouter

A gitweb interface for browsing the public repositories is here.

Have a lot of fun!

Posted in Devops/Linux | 9 Comments

Initial experiences with the Samsung Galaxy S II and Android

A few days ago, on May 11th, I received my new phone, the Samsung Galaxy S II. This is one of the first dual-core phones running Gingerbread. After a few days of working with it, I must say I am truly impressed. On the software side the phone is rock solid, really loads better than my previous Nokia N97 and (the absolutely terrible) Sony Ericsson P990i (which, in its initial software version, used to reset ‘to improve system performance’ while in standby). It is nice to use a phone that just works. I haven’t even discovered a single glitch. Nokia and Sony Ericsson should take note.

Even making calls is better than on the N97. On that phone you would lose control completely every time the other party hung up: the screen would go black and you would not be able to do anything with it for the next 10 seconds. It is also nice to have a music application that actually performs.


It’s too early to say anything about battery life as my use of the phone has been extreme for the past days, which included continuous downloading over wifi for hours on end and almost continuous use.

The phone feels really solid and looks great. Performance is excellent. With this phone I have the feeling that we finally have the same performance and usability as the good old Palm from approximately 6 years ago. It is also nice to know that the phone has Gorilla Glass and uses the latest SiRF Star IV chip for GPS. All in all a quality product.

Looking at Android, and in particular the Android Market, I am also impressed. The quality of the applications I tried is quite good. One such application is a tuner (for tuning musical instruments). In the past it was difficult to find good applications for this: on Palm I used phontuner, for instance, but all the other applications sucked, and on Symbian I was not able to find a suitable application at all. On Android I have tried two, which both worked quite well. The review system on the Market makes it relatively easy to find good applications and saves a lot of time dealing with the bad ones. Buying stuff is also easy and fast.

Posted in Misc, Software | 2 Comments

Countdown has started again!

Yes folks! The countdown timer has been started again. This time I decided to renew it a bit and use some JavaScript with jQuery instead of the good ol’ Java applet.

It is an important occasion for me this time: basically it is the closure of a long period that started in 1998. It is exciting to start something new again!

Posted in Fun, Java, Misc | 1 Comment