Kernel Virtual Machine (KVM) Benchmark Results

Disk Performance

A disk performance test was done using bonnie++ (1.03d-6.1), passing it the option -r 4096, which is the actual (available) RAM size of both host and guest.
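
For reference, the benchmark was invoked along the following lines (the target directory and user are illustrative, not taken from the original setup):

    bonnie++ -r 4096 -d /mnt/bench -u root

Given -r 4096, bonnie++ by default sizes its test data at twice the stated RAM size, so that the working set cannot be served entirely from the page cache.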

The chart below shows write performance for character-based writes, block-based writes, and random access writes. The schedulers for host and guest are given in the form ‘<host>-<guest>’. For example, deadline-cfq denotes that the host uses the deadline scheduler and the guest uses the cfq scheduler.
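
For completeness, the I/O scheduler for a disk can be selected at runtime through sysfs; a minimal sketch, assuming the disk in question is /dev/sda:

    # select the deadline scheduler for sda (run on host or guest as appropriate)
    echo deadline > /sys/block/sda/queue/scheduler
    # verify; the active scheduler is shown in brackets
    cat /sys/block/sda/queue/scheduler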

[Chart: write performance per host-guest scheduler combination]


The results of the read test are shown below:

[Chart: read performance per host-guest scheduler combination]


From the tests it is clear that the choice of scheduler does not have a significant effect on performance, although the combination deadline-noop seems to have a slight advantage. The results for random seeks are also comparable for all configurations, at around 0.3KB/s.

Next we take a look at the effect of paravirtualized drivers (virtio) compared to both IDE and SCSI emulation. The performance of the host itself is also measured as a baseline.
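
As a sketch of how these variants are selected (the disk image path is illustrative), the storage backend corresponds to the if= parameter of qemu's -drive option:

    # paravirtualized virtio disk (remaining VM options omitted)
    qemu-kvm -drive file=/var/lib/vm/disk.img,if=virtio
    # emulated IDE disk
    qemu-kvm -drive file=/var/lib/vm/disk.img,if=ide
    # emulated SCSI disk
    qemu-kvm -drive file=/var/lib/vm/disk.img,if=scsi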

[Chart: read and write performance for virtio, IDE emulation, SCSI emulation, and the host]


These results show that the choice of disk emulation has a large effect, especially on write performance. In addition, SCSI emulation performs significantly worse than IDE emulation for writes. What is also striking is that the host appears to perform slightly worse than the guest using virtio. One explanation is that the host performs caching on behalf of the guest, which in effect gives the guest extra memory for caching and allows optimizations that are unavailable to the host itself. A similar effect can be seen by further reducing the RAM size passed to bonnie++ (the -r option), which also shows increased performance.
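
To illustrate, understating the RAM size makes the benchmark's working set fit in the page cache more easily (directory and user again illustrative):

    # claim only 2GB of RAM, so bonnie++ uses a smaller test set that caches better
    bonnie++ -r 2048 -d /mnt/bench -u root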

5 Responses to Kernel Virtual Machine (KVM) Benchmark Results

  1. Pingback: New Server setup is complete! | Nonsense and other useful things

  2. acathur says:

    Great post indeed, thank you very much.
    It’s been days since I’ve been reading through lots of pages, including IBM’s library, on this subject, and I must say this is a nice, well-written and comprehensive article that summarizes them all!
    Thanks again.

  3. Erik Brakkee says:

    Thanks for your comments. I started to look at KVM because I had a lot of issues with open-source Xen, including compatibility with display drivers and stability problems.

    I have been using KVM since December 2010 and I must say I am really satisfied; no issues whatsoever with this technology. At the moment I am running 4 virtual machines, 3 of which are always running, and I haven’t experienced any significant issues at all.

    One thing I learned recently is that with an LVM-based setup, with VMs running straight from a logical volume, you should use native IO and disable caching. Native IO in particular can reduce the load on your host system. Although the load figure is a bit artificial, it is nice to have a load value on the host which is never much higher than that of all guests combined. These settings are also the default on RHEL 6.2 and CentOS 6.2.
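
    For reference, such a setup might be expressed on the qemu command line roughly as follows (the logical volume path is illustrative):

        # raw logical volume with host caching disabled and native AIO
        qemu-kvm -drive file=/dev/vg0/vm-disk,if=virtio,cache=none,aio=native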

  4. Erik Brakkee says:

    It is also interesting to know that recent Linux kernels (and also the custom ones shipped with CentOS/RHEL 6.2) have a feature called ‘transparent hugepages’ which eliminates the need to configure anything special for hugepages.
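
    On recent mainline kernels the current mode can be inspected through sysfs; the active setting is shown in brackets:

        # typical output: [always] madvise never
        cat /sys/kernel/mm/transparent_hugepage/enabled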

  5. acathur says:

    Yeah, I noticed transparent hugepages on Ubuntu Server 12.04.
    Thanks for the heads up too, appreciate it.
