General Approach
Over the past week, I have been benchmarking KVM performance. The reason for this is that I want to use KVM on my new server and need to know how big the performance impact of KVM is and how significant the effect of certain optimizations is.
Googling about KVM performance, I found that a number of optimizations can be useful:
- huge pages: By default, Linux uses 4K memory pages, but it is possible to use much larger memory pages (e.g. 2MB) for virtual machines as well, increasing performance; see for instance here for a more elaborate explanation.
- para-virtualized drivers (virtio): Linux includes virtio, a standardized API for para-virtualized drivers. Using this API, the same para-virtualized IO drivers can be (re)used for different virtualization mechanisms or different versions of the same virtualization mechanism. Para-virtualized drivers are attractive because they eliminate the overhead of device emulation, the approach whereby the hypervisor emulates real existing hardware (e.g. the RTL8139 network chipset) so that a guest can run unmodified. Para-virtualized drivers can be used for disk and/or network.
- IO scheduler (elevator): Linux provides the completely fair queueing scheduler (CFQ), the deadline scheduler, and the noop scheduler. The question is which scheduler is optimal on the host in combination with which one on the guest. A rough configuration sketch covering these three options follows below.
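To make this a bit more concrete, below is a rough sketch of how these three options can be set up on the host and on the qemu-kvm command line. The guest memory size, logical volume, disk device, page count and MAC address are just example values, not the ones used in the benchmarks.

```
# Reserve 2MB huge pages on the host (1024 pages = 2GB; size this to the guests' RAM)
echo 1024 > /proc/sys/vm/nr_hugepages
# Mount hugetlbfs so qemu-kvm can back guest memory with huge pages
mkdir -p /dev/hugepages
mount -t hugetlbfs hugetlbfs /dev/hugepages

# Change the IO scheduler (elevator) of a host disk at runtime; sda is an example device
echo deadline > /sys/block/sda/queue/scheduler

# Start a guest with huge page backed memory and para-virtualized (virtio) disk and network;
# /dev/vg0/vm1 and the MAC address are made-up examples
qemu-kvm -m 2048 -mem-path /dev/hugepages \
  -drive file=/dev/vg0/vm1,if=virtio,format=raw \
  -net nic,model=virtio,macaddr=52:54:00:12:34:56 -net tap
```

With libvirt, the equivalents are a <memoryBacking><hugepages/></memoryBacking> element in the domain XML, bus='virtio' on the disk target and model type='virtio' on the network interface; the host scheduler setting stays the same.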
The tests will include specific benchmarks focused on disk and network, as well as more general benchmarks such as unixbench and a kernel compilation.
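The kernel compilation is simply a timed build of a kernel tree, run both on the host and inside the guests; roughly along these lines (the source location and job count are just examples):

```
cd /usr/src/linux     # example location of the kernel source tree
make defconfig        # generate a default configuration
time make -j4         # time the build; -j roughly matching the number of (virtual) CPUs
```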
Great post indeed, thank you very much.
I’ve been reading through lots of pages on this subject for days, including IBM’s library, and I must say this is a nice, well written and comprehensive article that summarizes them all!
Thanks again.
Thanks for your comments. I started to look at KVM because I had a lot of issues with open-source Xen, including compatibility with display drivers and stability problems.
I have been using KVM since December 2010 and I must say I am really satisfied: no issues whatsoever with this technology. At the moment I am running 4 virtual machines, 3 of which are always running, and I haven’t experienced any significant issues at all.
One thing I learned recently is that with an LVM based setup, with VMs running straight from a logical volume, you should use native IO and disable caching. Native IO in particular can reduce the load on your host system. Although the load value is a somewhat artificial metric, it is nice to have a load on the host that is never much higher than that of the guests combined. These settings are also the default on RHEL 6.2 and CentOS 6.2.
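For reference, the qemu-kvm equivalent of those libvirt settings (cache='none' and io='native' on the disk driver element) looks roughly like this; /dev/vg0/vm1 is a made-up logical volume:

```
# LVM-backed virtio disk with host caching off and native (kernel AIO) IO
qemu-kvm -m 2048 \
  -drive file=/dev/vg0/vm1,if=virtio,format=raw,cache=none,aio=native
```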
It is also interesting to know that recent Linux kernels (and also the custom ones shipped with CentOS/RHEL 6.2) have a feature called ‘transparent hugepages’, which eliminates the need to configure anything special for hugepages.
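A quick way to check this on the host (these are the standard sysfs/procfs locations):

```
# Current THP mode; the active setting is shown in brackets, e.g. "[always] madvise never"
cat /sys/kernel/mm/transparent_hugepage/enabled
# How much anonymous memory is currently backed by transparent huge pages
grep AnonHugePages /proc/meminfo
```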
Yeah, I noticed transparent hugepages on Ubuntu Server 12.04 as well.
Thanks for the heads up too, appreciate it.