As I blogged earlier, I have replaced my original server setup with a virtualized one. This makes the server “hardware independent”: it runs on any hardware without modification. More concretely, it lets me keep running the hardware until it actually fails. Previously I replaced the server hardware before it really broke; now I can simply run it into the ground. Should I have a serious hardware failure, I can run the server(s) from any other machine, such as a laptop, because I have “bootable backups”: if the server breaks, I can either bring up a replacement server from the same data or run the backup virtualized on a laptop.
For the original migration from native to virtualized I kept the setup identical, which meant passing the physical hardware partitions through to the virtual machine. The virtual machine then ran Linux Logical Volume Management on top of these partitions. For new virtual machines I took another approach: allocate a “disk” logical volume on the host, partition it in the guest, and use LVM again to manage storage within the guest. This results in nested logical volume management, and as one of the new virtual machines shows, it works like a charm. It gives a nice separation of concerns: the host simply assigns storage to guests, and each guest decides how to use that storage.
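To make that approach concrete, here is roughly what it looks like in commands. This is only a sketch: the volume group names (vg_host, vg_guest), the logical volume names, the sizes, and the device name the guest sees (/dev/vdb) are made-up examples, not my actual configuration.

```sh
# On the host: carve a logical volume out of the host volume group to act as
# the guest's disk (names and sizes are hypothetical)
lvcreate -L 100G -n newvm-disk vg_host
# Attach /dev/vg_host/newvm-disk to the guest as a virtual disk,
# e.g. via virt-manager or "virsh edit".

# Inside the guest, where the disk shows up as e.g. /dev/vdb:
parted /dev/vdb mklabel msdos
parted /dev/vdb mkpart primary 1MiB 100%
pvcreate /dev/vdb1            # turn the partition into an LVM physical volume
vgcreate vg_guest /dev/vdb1   # guest-level volume group on top of it
lvcreate -L 20G -n root vg_guest
```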
However, one virtual machine (the original hardware-based server) was still being passed physical disk partitions. As a result both the host and the virtual machine saw the same logical volumes, with a real risk of administrative error and data corruption should both operating systems access the same logical volumes concurrently.
To remedy this, I used the following procedure (a command-level sketch follows the list):
- Allocate a physical volume on the host and a “disk” logical volume on it big enough to contain all logical volumes from the VM
- Stop the VM
- Attach this new “disk” logical volume to the VM as a virtual disk
- Start the VM
- Partition the new disk in the VM and extend the existing volume groups with physical volumes created on these partitions
- Use pvmove to move the data onto the new disk, then remove the old, now unused physical volumes from the volume groups
- Stop the VM
- Remove the old physical partitions from the VM configuration, leaving only the new “disk” logical volume
- Start the VM
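A rough command-level sketch of the procedure, with hypothetical names throughout (vg_host and vg_guest for the host and guest volume groups, oldvm for the VM, /dev/vdb for the new disk as the guest sees it, /dev/sda3 for one of the old physical partitions):

```sh
# On the host: allocate the "disk" logical volume and attach it to the VM
lvcreate -L 200G -n oldvm-disk vg_host
virsh shutdown oldvm
# add /dev/vg_host/oldvm-disk to the VM definition, e.g. with "virsh edit oldvm"
virsh start oldvm

# Inside the VM: partition the new disk and fold it into the volume group
parted /dev/vdb mklabel msdos
parted /dev/vdb mkpart primary 1MiB 100%
pvcreate /dev/vdb1
vgextend vg_guest /dev/vdb1   # new physical volume joins the existing volume group
pvmove /dev/sda3              # move all data off an old physical partition (online)
vgreduce vg_guest /dev/sda3   # drop the old physical volume from the volume group
pvremove /dev/sda3            # repeat pvmove/vgreduce/pvremove for each old partition

# Finally, on the host: stop the VM, remove the old physical partitions from its
# configuration (again "virsh edit"), and start it with only the new virtual disk.
```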
In executing this procedure I ran into the basic problem that I did not have enough free storage on the host for the new “disk” logical volume, so I temporarily connected a separate disk to the server and allocated it there. After the procedure, the storage the VM had previously used on the RAID array was free again, so on the host I extended the volume group holding the “disk” logical volume with that freed space on the RAID array, used pvmove once more to move the data from the temporary disk to the RAID array, and afterwards removed the now unused physical volumes on the temporary disk from the volume group. All of this, of course, while the virtual machine was up and running (no-one likes downtime).
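On the host that last shuffle looks roughly like this; again the device names (/dev/md0p1 for the freed space on the RAID array, /dev/sdc1 for the temporary disk) are hypothetical:

```sh
# On the host, with the VM still running:
pvcreate /dev/md0p1           # the partition on the RAID array freed by the migration
vgextend vg_host /dev/md0p1   # add it to the volume group holding the "disk" LV
pvmove /dev/sdc1              # move everything off the temporary disk, online
vgreduce vg_host /dev/sdc1    # remove the temporary disk from the volume group
pvremove /dev/sdc1
```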
The new setup reduces the chance of administrative error considerably and allows me to move storage for virtual machines to other locations without even having to shut down a virtual machine. It also nicely separates the allocation of storage to VMs on the host from how each VM uses its allocated storage.