Kernel Virtual Machine (KVM) Benchmark Results

Test Setup

All tests are executed on a Sony Vaio F11 laptop with 8GB of memory and a Core i7 Q720 processor (default clock 1.6GHz). The hard disk is a 500GB 7200 RPM Seagate ST9500420AS. Tests are run at night or during the day while I was at work, so nobody is using the machine interactively. Runlevel 3 (multi-user with network) is used on both host and guest, which eliminates the overhead of the desktop environment on the host and the effects of user interaction. Each virtual machine was given 4GB of memory.

To compare results on the host with those on the guest, tests on the host are carried out with half the memory reserved as huge pages. This effectively reduces the available memory of the host to 4GB and makes it more comparable to a guest, because normal applications cannot use the huge pages.
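
For reference, a minimal sketch of how such a reservation could look on the host (this assumes 2MB huge pages, so 2048 pages amount to roughly 4GB):

# reserve 2048 x 2MB huge pages (about 4GB) on the host
echo 2048 > /proc/sys/vm/nr_hugepages
# verify how many huge pages were actually reserved
grep HugePages_Total /proc/meminfo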

Tests were done on openSUSE 11.3 with kernel version 2.6.34.7-0.5-default, both for host and guest. The only difference is that (obviously) the host has more software installed because it is used as a full-featured desktop.

By default, huge pages support is switched on and, unless mentioned otherwise, the noop IO scheduler is used for the guest, since these are the settings commonly recommended on the internet. The noop scheduler seems a good default for guests because the guest is unaware of the physical disks (it gets its storage from the host), so the host can do more efficient scheduling. The tests focus on showing the effect of individual tuning parameters on performance by deviating from this baseline.
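
To illustrate, the baseline could be set up roughly as follows (the device name vda and the hugetlbfs mount point are assumptions that depend on the actual setup):

# in the guest: switch the virtual disk (assumed to be vda) to the noop IO scheduler
echo noop > /sys/block/vda/queue/scheduler
# or permanently, by adding elevator=noop to the guest kernel command line

# on the host: make huge pages available to qemu-kvm via hugetlbfs
mount -t hugetlbfs hugetlbfs /dev/hugepages
# and back the guest memory with them, e.g. qemu-kvm -m 4096 -mem-path /dev/hugepages ...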

Disk based tests were carried out on an ext4 (fourth extended) filesystem. Each guest gets a hard disk from the host in the form of an LVM logical volume. The guest in turn uses a 512MB /boot partition (non-LVM) and LVM for the root partition, so nested logical volume management is used.
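
As an illustration, such a guest disk could be created on the host along these lines (the volume group name and size are made up):

# on the host: create a logical volume that serves as the raw disk of the guest
lvcreate -L 20G -n vm1-disk vg_host
# /dev/vg_host/vm1-disk is then handed to the guest as its virtual disk; inside the
# guest, the installer creates the 512MB /boot partition and an LVM root on top of it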

The laptop was connected to a 100Mbps network using a wired connection. All VMs were configured with a bridged network setup. For the network tests, a separate Linux server on the local network ran the netperf server.
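
The netperf runs themselves look roughly like this (the server name is of course just an example):

# on the remote Linux server: start the netperf daemon
netserver
# on the host or guest under test: run, for instance, a TCP throughput test against it
netperf -H server.example.org -t TCP_STREAM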

Before each test is executed, the caches are flushed on the host (and on the guest if a guest is involved). This is done using:

sync
echo 3 > /proc/sys/vm/drop_caches

This makes the tests independent of each other because it eliminates any reuse of cached data from a previous test.

5 Responses to Kernel Virtual Machine (KVM) Benchmark Results

  1. Pingback: New Server setup is complete! | Nonsense and other useful things

  2. acathur says:

    Great post indeed, thank you very much.
    I have spent days reading through lots of pages on this subject, including IBM’s library, and I must say this is a nice, well written and comprehensive article that summarizes them all!
    Thanks again.

  3. Erik Brakkee says:

    Thanks for your comments. I started to look at KVM because I had a lot of issues with open-source Xen, including display driver compatibility and stability problems.

    I have been using KVM since December 2010 and I must say I am really satisfied: no issues whatsoever with this technology. At the moment I am running 4 virtual machines, 3 of which are always running, and I haven’t experienced any significant issues at all.

    One thing I learned recently is that with an LVM based setup, with VMs running straight from a logical volume, you should use native IO and disable caching. Especially the native IO can reduce the load on your host system. Although that load figure is a bit artificial at times, it is nice to have a load value on the host which is never much higher than that of all guests combined. These settings are also the default on RHEL 6.2 and CentOS 6.2.
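
    For illustration, with plain qemu-kvm this roughly corresponds to a drive definition like the one below (the logical volume path is made up); with libvirt the equivalent is cache='none' and io='native' on the disk's driver element.

    # hand the logical volume to the guest without host caching and with native AIO
    qemu-kvm ... -drive file=/dev/vg_host/vm1-disk,if=virtio,cache=none,aio=native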

  4. Erik Brakkee says:

    It is also interesting to know that recent Linux kernels (and also the custom ones shipped with CentOS/RHEL 6.2) have a feature called ‘transparent hugepages’ which eliminates the need to configure anything special for hugepages.
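
    Whether it is active can be checked with something like the command below (this is the mainline path; RHEL/CentOS 6 uses a slightly different location under /sys/kernel/mm).

    # prints always, madvise and never, with the active setting in brackets
    cat /sys/kernel/mm/transparent_hugepage/enabled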

  5. acathur says:

    Yes, I noticed transparent hugepages on Ubuntu Server 12.04 as well.
    Thanks for the heads up too, appreciate it.
