Kernel Virtual Machine (KVM) Benchmark Results


As with all benchmarking, one has to be careful when interpreting the results, and there are several valid objections to the current setup. For instance, I ran only one VM at a time, and I did not measure the load on the host while testing a VM. Also, the tests were done on a laptop, so results could vary on different hardware. Nevertheless, the purpose of this benchmarking was simply to get a feeling for the effects of the various tuning parameters.

The conclusions that I am drawing from this are:

  • The differences in performance between the various IO schedulers on host and guest are not that significant. There is, however, a slight tendency for deadline on the host and noop on the guest to be the best combination. This seems to be in line with what some vendors also recommend.
  • The difference in network performance between para-virtualized drivers (virtio) and emulated drivers is negligible on a 100 Mbps network. Also, the network performance of the guests is practically identical to that of the host.
  • Hugepages on the host can result in a small speedup of a guest.
  • Additional caching by the host probably helps the performance of the guests.
  • Para-virtualized drivers for disk IO help performance a lot compared to emulated drivers, in particular when it comes to write performance. Nevertheless, unixbench and bonnie++ seem to contradict each other when comparing the disk performance of the host with that of the guest.
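For reference, the IO scheduler can be inspected and switched at runtime through sysfs. This is a sketch only: sda and vda are placeholder device names, root privileges are required, and the available schedulers depend on your kernel.

```shell
# On the host: show the available schedulers; the active one is in brackets.
cat /sys/block/sda/queue/scheduler
# Switch the host disk to the deadline scheduler (sda is a placeholder).
echo deadline > /sys/block/sda/queue/scheduler
# Inside the guest: switch the virtio disk to noop (vda is a placeholder).
echo noop > /sys/block/vda/queue/scheduler
```

The change takes effect immediately but does not survive a reboot; to make it permanent you would typically add an elevator= kernel parameter or a boot script.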

Based on these benchmarking results I am going to use the following settings for KVM:

  • Use para-virtualized disk IO drivers. This is the most essential optimization to make.
  • Use hugepages on the host to be used by the guests.
  • Use deadline scheduler on the host.
  • Use noop scheduler on the guest.
  • Use para-virtualized network drivers. Even though this showed no performance advantage in the benchmarks, it should be more efficient, so I am including it for ‘theoretical’ reasons.
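Taken together, these settings correspond roughly to a libvirt domain definition like the fragment below. This is a sketch, not my exact configuration: /dev/vg0/vm1 and br0 are placeholder names, and the surrounding domain XML is abbreviated.

```xml
<domain type='kvm'>
  <!-- Back guest memory with hugepages reserved on the host. -->
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
  <devices>
    <!-- Para-virtualized (virtio) disk; /dev/vg0/vm1 is a placeholder volume. -->
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/vg0/vm1'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <!-- Para-virtualized (virtio) network interface on a placeholder bridge. -->
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
```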

Para-virtualized drivers can cause problems during the initial install and during upgrades because they require special drivers in the guest. Fortunately, a VM that was previously started with para-virtualized drivers can easily be started with emulated drivers instead; in fact, this is what I did for these tests. In any case, these tuning parameters can easily be changed later, either in the VM configuration on the host or with a simple change inside the guest.

Finally, many thanks to the KVM community for making such an excellent virtualization solution. Throughout these tests it held up fine and worked without a glitch. I am now completely convinced that I will use KVM on the new server.

This entry was posted in Server/LAN.

5 Responses to Kernel Virtual Machine (KVM) Benchmark Results

  1. Pingback: New Server setup is complete! | Nonsense and other useful things

  2. acathur says:

    Great post indeed, Thank you very much.
    I have spent days reading through lots of pages on this subject, including IBM’s library, and I must say this is a nice, well-written and comprehensive article that summarizes them all!
    Thanks again.

  3. Erik Brakkee says:

    Thanks for your comments. I started looking at KVM because I had a lot of issues with open-source Xen, including compatibility problems with display drivers and stability problems.

    I have been using KVM since December 2010 and I must say I am really satisfied: no issues whatsoever with this technology. At the moment I am running four virtual machines, three of which are always on, and I haven’t experienced any significant issues at all.

    One thing I learned recently is that with an LVM-based setup, with VMs running straight from a logical volume, you should use native IO and disable host-side caching. Especially the native IO can reduce the load on your host system. Although the load value is somewhat artificial, it is nice to have a load on the host that is never much higher than that of the guests combined. These settings are also the default on RHEL 6.2 and CentOS 6.2.
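    In libvirt terms, native IO and no host-side caching are the cache and io attributes on the disk driver element. A sketch for an LVM-backed virtio disk, with /dev/vg0/vm1 as a placeholder logical volume:

```xml
<disk type='block' device='disk'>
  <!-- Native AIO and no host-side caching for an LVM-backed disk. -->
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/vg0/vm1'/>
  <target dev='vda' bus='virtio'/>
</disk>
```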

  4. Erik Brakkee says:

    It is also interesting to know that recent Linux kernels (including the custom ones shipped with CentOS/RHEL 6.2) have a feature called ‘transparent hugepages’ which eliminates the need to configure anything special for hugepages.
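    Whether transparent hugepages are enabled can be checked through sysfs; the bracketed value in the output is the current mode ([always], [madvise] or [never]). A minimal sketch, which also handles kernels without the feature:

```shell
# Show the current transparent hugepage mode, if the kernel supports it.
thp=/sys/kernel/mm/transparent_hugepage/enabled
if [ -r "$thp" ]; then
    cat "$thp"
else
    echo "transparent hugepages not available on this kernel"
fi
```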

  5. acathur says:

    Yeah, I noticed transparent hugepages on Ubuntu Server 12.04 as well.
    Thanks for the heads up too, appreciate it.
