
KVM virtual machines host setup


Note: this is a mini series in two parts on getting started with KVM. In this first part, we’ll see how to set up a KVM virtual machines host. In the next post, I’ll write about KVM LVM backup, cloning and more.

Why KVM virtual machines?

It occurred to me today that I haven’t written any new posts in a while… I have been very busy with work, but I also recently moved from the UK to Finland and haven’t had a chance to even think about the blog in the meantime. But now that I am settled, I thought a good way to start again would be to write about a subject that is new to this blog (and somewhat long overdue): virtualisation.

The post won’t be a detailed description of the various technologies – there’s Wikipedia for that. I’ll describe here the setup and basic maintenance of a virtualisation environment similar to what I regularly use myself for development and testing; it’s a fairly simple setup, although its operation is done through a mix of a GUI and, often, command line tools. For the virtual machine host I use KVM. Again, I won’t go into the details of the technology here (I don’t know much about its architecture anyway, beyond how to use it), but I’ll briefly highlight why I like it and use it for development and testing:

  • KVM is already part of the Linux kernel, therefore it doesn’t require the installation of separate software (albeit you may have to install some packages, depending on your distro, in order to enable the hypervisor). So it’s not as complicated to install and configure as other hypervisors can be.
  • KVM is based on a simpler architecture than that of other hypervisors, in that it leaves the management of several things – scheduling, devices, etc. – to Linux itself, so KVM can “focus” on just the features that concern the hypervisor. Because of this, and because it’s part of the Linux kernel, chances are that its development will be pretty fast over time compared to that of competing hypervisors.
  • KVM already has some great support from companies such as Redhat, IBM, Intel, Cisco, etc. with some of them (e.g. Redhat) pushing KVM as the preferred hypervisor these days.
  • Performance is great in most scenarios.
  • KVM works with normal, unmodified operating systems, unlike other hypervisors.


One limitation to be aware of: KVM requires CPUs with hardware support for virtualisation (Intel VT-x or AMD-V) in order to work. However, nearly all server CPUs as well as many desktop ones offer virtualisation capabilities these days, so this is no longer really an issue. My work machine is a MacBook Pro, but I also own a PC with hardware virtualisation support, so it works great as a KVM virtual machines host.

The setup I am about to describe works fairly well for me, since I only need to manage a few virtual machines at any time. But if you are looking for something more scalable (especially if you are looking to sell services based on virtualisation), and/or would prefer managing your virtualisation environment almost completely with a user friendly GUI, you really should look into a complete solution such as OnApp’s cloud management software (disclaimer: I work for OnApp). It gives you a lot more than just a management tool for your virtualisation layer, and it supports several hypervisors. You can even register for a fully featured free license that will let you manage hypervisors with up to 16 cores.

If you are looking for a super quick setup, and development & testing is the reason why you want to set up a virtualised environment, then the setup I am about to describe will be fine. You can also look into alternatives such as Virtualbox if you want to minimise the administration with command line tools.

If you go for my setup, be warned that graphics performance of VMs with Windows will be terrible, so if you need to virtualise Windows you may want to use something else. I still use Parallels Desktop on my Mac, rather than KVM virtual machines, whenever I need to test something on Windows, and its performance is just great.

KVM virtual machines host set up

Please note that I mostly use Ubuntu as my distro, so the following instructions will be based on Ubuntu. However, it shouldn’t be difficult to adapt them to other distros. Also note that, while it is not strictly required, it is usually recommended to use a 64 bit Linux distro for the KVM virtual machines host, to ease the management of amounts of memory larger than 4GB, among other things. I also recommend installing the server edition of the OS, so as to have more resources available to the KVM virtual machines. Thanks to X11 forwarding to your work machine, using a server edition of Linux on the host won’t prevent you from using a GUI to perform part of the administration of your virtual machines, as we’ll see later.

I’ll assume here that you’ve already installed the OS on your host, and that you want to store the KVM virtual machines disks as logical volumes through LVM. Using LVM makes it easier to manage the available storage in general, and it also makes it easier to backup your virtual machines (without shutting them down) thanks to LVM snapshots, as we’ll see later in this post.

First of all, you need to ensure you can use KVM virtual machines on your host, since this depends on the CPU installed:

# as sudo
root@vmserver:~# egrep -c '(vmx|svm)' /proc/cpuinfo

The number you see in the output is the number of CPU cores (logical CPUs, to be precise) that have hardware support for virtualisation; if the number is 0, it means you won’t be able to run KVM virtual machines on your host. In that case, you may also want to check whether hardware support is simply disabled in the BIOS.
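To make the check explicit, here’s a small sketch that runs the same egrep against a hardcoded two-CPU sample instead of /proc/cpuinfo, so it behaves the same on any machine (on a real host, grep /proc/cpuinfo as shown above):

```shell
# A sample cpuinfo excerpt with two logical CPUs exposing the vmx flag (Intel VT-x);
# AMD CPUs would show svm instead. The count is the number of flag lines matched.
sample='flags : fpu vme de pse msr vmx ssse3
flags : fpu vme de pse msr vmx ssse3'
count=$(printf '%s\n' "$sample" | egrep -c '(vmx|svm)')
echo "$count logical CPUs with hardware virtualisation support"
```

A count of 0 here would mean no hardware virtualisation support was detected.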

Setting up LVM

Unless you’ve already partitioned your system disk with LVM when installing the OS on your host, you’ll need to install the LVM tools:

sudo apt-get install lvm2 lvm-common

Next, you need to “initialise” each physical storage device that you want to use with LVM. This is done with the pvcreate command. You can list the physical disks available with:

fdisk -l /dev/{s,h}d? 2> /dev/null | grep '^Disk /dev.*'

(or just use the quicker df if you know what the info displayed means). For example in my case this is what I see:

Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdb: 300.1 GB, 300069052416 bytes
Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes

You can see that I have a 300GB drive (/dev/sdb) which I use as the system drive for the OS, and 3 x 1.5TB drives that I use for KVM virtual machines, backups and general storage. Note that pvcreate will make any existing data on a device inaccessible, so double check the device names first. So in my case, to use these disks with LVM I had to run:

pvcreate /dev/sda
pvcreate /dev/sdc
pvcreate /dev/sdd
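Since pvcreate takes one device at a time, the three calls above can be wrapped in a loop. In this sketch, echo only prints the commands so it is safe to run without root; drop the echo to actually initialise the disks:

```shell
# print (not run) the pvcreate command for each disk to be used with LVM;
# remove "echo" to really initialise them - this destroys existing data
for disk in /dev/sda /dev/sdc /dev/sdd; do
  echo pvcreate "$disk"
done
```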

You can check that LVM now “knows” about the drive(s) with pvs:

root@vmserver:~# pvs
PV         VG               Fmt  Attr PSize   PFree
/dev/sda   virtual-machines lvm2 a-     1.36t  1.36t
/dev/sdb5  vmserver         lvm2 a-   279.22g 12.00m
/dev/sdc   storage          lvm2 a-     1.36t      0
/dev/sdd   storage          lvm2 a-     1.36t      0

You can also use the pvscan or pvdisplay commands to see a few more details.

You can now create volume groups; a volume group is a collection of one or more physical disks, and contains one or more logical volumes created on those disks. A logical volume is simply an allocation of space belonging to the parent volume group, and it will in turn contain the partitions holding the actual data.

For example on my host I dedicated one of the 1.5TB physical disks (/dev/sda) to a volume group named virtual-machines for use with KVM virtual machines, and the other two to a volume group named storage for backups and general storage. From the previous snippet you may notice in the “VG” column that I also have another volume group named vmserver (VG stands for volume group). This only contains the 300GB physical disk I use as system disk; the reason I have the OS also on a volume group is that when I installed Ubuntu on this host I chose to use LVM also to partition the system disk. You can create volume groups with the vgcreate command:

# create a volume group with a single disk
vgcreate <name of the volume group> /dev/sda

# create a volume group with multiple disks
vgcreate <name of the volume group> /dev/sda /dev/sdb

It is also possible to add to or remove disks from an existing volume group, which is one of the reasons why LVM is pretty flexible:

# add a disk to an existing volume group
vgextend <name of the volume group> /dev/sdc

# remove a disk from an existing volume group
vgreduce <name of the volume group> /dev/sdc

You can even rename volume groups, if needed:

vgrename <old name of the volume group> <new name>

Like for the physical disks, you can list the volume groups with vgs:

root@vmserver:~# vgs
VG               #PV #LV #SN Attr   VSize   VFree
storage            2   2   0 wz--n-   2.73t      0
virtual-machines   1   1   0 wz--n-   1.36t  1.36t
vmserver           1   2   0 wz--n- 279.22g 12.00m

Or vgscan / vgdisplay if you want more details to be displayed. If you want to show details for a single volume group, just pass the volume group’s name as argument.
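The vgs output is also easy to use in scripts (vgs additionally supports a --noheadings flag for exactly this purpose). As a sketch, here is how you could list the volume groups that still have free space; the sample output from above is hardcoded so the snippet can run anywhere:

```shell
# sample vgs listing (headers stripped); VFree is the 7th column
vgs_sample='storage            2   2   0 wz--n-   2.73t      0
virtual-machines   1   1   0 wz--n-   1.36t  1.36t
vmserver           1   2   0 wz--n- 279.22g 12.00m'

# print name and free space of every VG whose VFree is not 0
printf '%s\n' "$vgs_sample" | awk '$7 != "0" { print $1, "free:", $7 }'
```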

We’ll see later how to manage logical volumes. For now, make sure you have a volume group ready to contain the KVM virtual machines disks as logical volumes (in my case, it is “virtual-machines”).
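We’ll mostly create logical volumes through virt-manager later on, but for reference, a VM disk can also be created by hand with lvcreate. The sketch below only composes the command rather than running it (vm1 and 20G are hypothetical example values; virtual-machines is my volume group), so it is safe to run anywhere:

```shell
# compose (but don't run) an lvcreate command for a new VM disk;
# vm1 / 20G are hypothetical example values
make_lv_cmd() {
  vm_name=$1; size=$2; vg=$3
  printf 'lvcreate -L %s -n %s-disk %s\n' "$size" "$vm_name" "$vg"
}
make_lv_cmd vm1 20G virtual-machines
```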

Setting up KVM virtual machines

(Note: from this point on I am assuming you’ve started a session as root/sudo since it’s easier)

Installing / enabling KVM is pretty easy. For example on Ubuntu all you need is:

apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils virt-manager

You can verify that KVM is correctly configured with the kvm-ok command:

root@vmserver:~# kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

And, to verify that it is possible to connect to KVM:

root@vmserver:~# virsh -c qemu:///system list
 Id Name                 State
----------------------------------

If you see the same output everything is OK. The list should be empty for now since you don’t have any virtual machines yet at this stage.

Creating the first virtual machine

When you installed the necessary packages with apt-get, you also installed a GUI that you can use to create KVM virtual machines and perform some of the administration without having to use command line tools. I never manage to remember all the commands and all the syntax, so I find it handy. If you have followed my advice earlier and installed a server edition of the OS, you won’t be able to run this GUI directly on the host, since the host OS won’t have a full desktop environment configured.

You can still use the GUI from your work machine, though, thanks to X11 forwarding. First, you need to ensure that your user on the hypervisor host is allowed to run virt-manager:

sudo adduser `id -un` kvm
sudo adduser `id -un` libvirtd

Then log off and on again.

Now, if you use Linux on your work machine start a new SSH session to the KVM server passing the argument -X:

ssh -X <hostname or IP of your virtual machine host>

And then just run virt-manager, which is the command that starts the KVM administration GUI. You may also need to set the DISPLAY environment variable – you’ll see an error if it isn’t set correctly.
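As a quick sanity check before launching virt-manager, you can verify whether the SSH session has DISPLAY set at all (a minimal sketch; ssh -X normally sets it to something like localhost:10.0):

```shell
# report whether an X11 display is available to GUI apps in this session
check_display() {
  if [ -n "$1" ]; then
    echo "DISPLAY is set ($1), virt-manager should find the X server"
  else
    echo "DISPLAY is not set, check that you connected with ssh -X or -Y"
  fi
}
check_display "$DISPLAY"
```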

On a Mac, you can instead use the -Y argument, which doesn’t require you to set the DISPLAY env variable:

ssh -Y <hostname or IP of your virtual machine host>

If the GUI doesn’t show up when you run virt-manager from within an SSH session, add the -v argument to ssh and it will print information that might help you figure out what went wrong. If the output shows something like “connection refused” or similar, you may need to authorise your virtual machine host to run X11 apps on your Mac. To do this, run xterm on your Mac, and from that terminal run:

xhost + <IP of the virtual machine host>

At this point you should be able to run virt-manager remotely, from your work machine. You should see the GUI in the picture below if X11 forwarding is working correctly:

Now from Edit choose Connection Details, and open the Storage tab. Click the + button to add a storage pool, and select logical: LVM Volume Group from type. Then select the volume group you’ve created earlier, and that you want to use to store your virtual machines:

Choose a name (I usually choose LVM, so as to remember that I am using a volume group to store my KVM virtual machines), and create the storage pool. I also recommend removing the default storage pool (the - button) to avoid placing KVM virtual machines in it by mistake, since it doesn’t use LVM. Your Connection Details window should now look similar to the one in the following picture, with only the LVM storage pool configured:

Remember to check the Autostart option on the LVM storage pool and to click Apply to save the changes. You can now close the Connection Details window and click New to create your first virtual machine. The wizard is pretty straightforward: just give the VM a name, and optionally select the type of OS you want to install in the VM (I am not sure whether it actually makes any difference, TBH); then select whether you want to insert the OS installation disc into the host’s optical drive, or use an ISO image instead. Finally, select the amount of memory you want to allocate to the machine and the number of cores you want it to use (I just pick the max available).

Next, you’ll have to create the virtual disk for your VM: choose Select managed or other existing storage => Browse. Then select the LVM storage pool (it should be the only one available), and then create a new volume within that storage pool.

Choose the newly created volume and complete the creation of your virtual machine. The VM will be automatically started and you’ll be able to install and setup your virtual machine from a nice graphical console:

Fixing” networking

One thing that you will notice soon when setting up and working with your KVM virtual machines is that they cannot communicate with the other devices and computers on the local network, nor can you SSH into them directly from your work machine. This is because of the default NAT setup that KVM virtual machines use unless you customise the configuration of their virtual network NICs.

To fix this, and allow your VMs to communicate with your normal network, you need to switch to a bridged configuration.

First of all, it’s easier if your host has a static IP address; either way, in a bridged setup it is the bridge, not eth0, that holds the host’s address. Edit /etc/network/interfaces (as sudo/root) and change:

auto eth0
iface eth0 inet dhcp

to:

auto eth0
iface eth0 inet manual

or something similar depending on your network configuration. Then add the following section and save the file:

auto br0
iface br0 inet dhcp
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0

If you want the host to have a static address, use iface br0 inet static instead, followed by the usual address, netmask and gateway lines for your network.
Now edit /etc/resolv.conf and set the correct nameservers if needed. Next,

sudo apt-get install bridge-utils
sudo /etc/init.d/networking restart

The host should now be good to go. You can check whether the bridge has been correctly set up with:

root@vmserver:~# ifconfig br0
br0       Link encap:Ethernet  HWaddr 00:01:29:a6:5e:45
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::201:29ff:fea6:5e45/64 Scope:Link
          RX packets:1665925 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1036864 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1735996352 (1.7 GB)  TX bytes:578254981 (578.2 MB)

Next, you need to update your virtual machine(s). From virt-manager, open a VM’s details window, and in the configuration of the virtual NIC change the source device from the default NAT to the br0 bridge you have just created.

You will need to shut down and restart the VM for the new configuration to take effect. Once that’s done, you should be able to SSH into the VM from another device on the normal network, and to reach those devices from within the VM.

Conclusion of part 1: what’s next

This concludes the first part of this mini series on getting started with KVM. Over the next few days I will publish a second part covering:

  • how to backup a VM’s raw disk
  • how to mount a virtual disk’s partition to a location on the host, to access the data directly
  • how to take advantage of LVM snapshots for consistent backups
  • possible issues you may encounter when cloning virtual machines

So, if you are interested in knowing more, stay tuned! In the meantime, I hope you’ll find this first part useful. As usual, please let me know in the comments if there’s anything you’d like to add.

© Vito Botta