This HowTo describes the installation of an Ubuntu Cloud Image on a home server running Ubuntu Xenial (16.04 LTS) with qemu-kvm. This will be done without the help of libvirt, which comes with an additional abstraction layer I don’t really need. This blog post is more about a basic understanding of virtualization and how to set it up in a more or less simple way – unfortunately it turned out more complex than I initially thought. But with this HowTo that shouldn’t be a problem anymore.
But what is the Ubuntu Cloud Image? – Well, this is the simple part to explain: the Ubuntu Cloud Image is a pre-installed virtual machine you only have to adjust a bit to your needs. Download it and start it (almost) right away. In the ‘old’ days one had to provide an installation image first in order to install a virtual machine. You had to go through the whole installation process every time – but not anymore.
And what are the scenarios it can be used for? You can clone a virtual machine by copying the original one and start it right away – if you decide to create a web server or one more test client to play with, it’s a matter of a few minutes. And you can run it on every machine where qemu-kvm is installed. Just copy the image and its seed to the destination host and start it again. Every virtual machine has its own IP address and hostname, and you can access it e.g. with VNC from everywhere in your network. In other words: awesome!
OK – here we go. This HowTo will describe these topics below. We will:
- Install all needed deb-packages
- Download the Ubuntu Cloud Image
- Create a seed image with cloud-localds
- Create a bridge interface
- Start the virtual machine
- Access the machine with VNC or Spice
- Create a common share for Guest and Host
- Tweak the Guest
Not all parts of this blog are necessary, but they make sense in real-life situations. E.g. sharing the same hard disk is a pretty nice feature if you want to avoid the usage of cifs aka samba or remote file transfers. I also describe how to use the vmware driver for X11 to run a resolution of 1920×1080. But you can quit reading after you’ve started your guest with a working internet connection – the rest is optional.
Now let’s begin.
Installation of deb-packages
We need the following Debian (.deb) installation packages:
$ sudo apt install cloud-image-utils qemu-kvm
cloud-image-utils: This will install cloud-localds, a helper tool that generates an image file which will be given to our primary Ubuntu Cloud Image to mount at boot time. It contains the password credentials so you can log into your new virtual machine.
qemu-kvm: Since we run a Linux inside a Linux and both are based on the x86/x64 architecture, we can speed things up dramatically by using an additional hypervisor (kvm). For emulating other architectures like ARM or Motorola 680xx CPUs qemu alone would be enough, but also slower.
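Before installing anything, you may want to confirm that the host can actually use KVM acceleration. Here is a minimal sketch (kvm-ok from the cpu-checker package does a more thorough job):

```shell
# quick sanity check: does the CPU advertise hardware virtualization,
# and is the /dev/kvm device node present?
if grep -Eq 'vmx|svm' /proc/cpuinfo; then
    echo "CPU supports hardware virtualization"
else
    echo "no vmx/svm flag found - KVM acceleration will not work"
fi
if [ -e /dev/kvm ]; then
    echo "/dev/kvm exists"
else
    echo "/dev/kvm missing - is the kvm module loaded?"
fi
```

Without /dev/kvm, qemu falls back to pure emulation and the guest will crawl.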
Ubuntu Cloud Image download
Pay attention here – it is important to download the image with “*disk1*” in its name, otherwise it won’t work. Create a working directory and cd into it.
$ wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
Create a seed with credentials
First we have to create a file. Open a file called seed with an editor of your choice (here it is nano):
$ nano seed
Paste the following content into it – and adjust the password to your needs.
#cloud-config
password: my_passw0rd_here
chpasswd: { expire: False }
ssh_pwauth: True
Save it and create the seed image now:
$ cloud-localds -H myhost_hostname_here seed.img seed
Replace myhost_hostname_here with a hostname of your choice.
Your directory should now look like this (well, your user most likely won’t be ‘acme‘ of course):
drwxr-xr-x 2 root root 4096 Jun 19 20:40 ./
drwxr-xr-x 4 acme acme 4096 Jun 19 20:08 ../
-rwxr-xr-x 1 acme acme 98 Jun 19 20:23 seed*
-rw-r--r-- 1 acme acme 374784 Jun 19 20:40 seed.img
-rw-r--r-- 1 acme acme 287506432 Jun 18 17:15 xenial-server-cloudimg-amd64-disk1.img
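As a side note, cloud-init only treats user-data as cloud-config when the very first line is exactly #cloud-config. A quick check of the seed file (a sketch) saves a confusing boot where your password silently never gets set:

```shell
# cloud-init ignores user-data that does not start with "#cloud-config",
# so verify the seed file before (or after) building seed.img
if head -n 1 seed | grep -qx '#cloud-config'; then
    echo "seed looks ok"
else
    echo "first line must be exactly #cloud-config"
fi
```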
Btw.: this is the first milestone. Starting cloud-localds without any parameters will tell you very briefly how to start this guest with a poor man’s LAN access. It says at the very bottom:
Example:
* cat my-user-data
#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
* echo "instance-id: $(uuidgen || echo i-abcdefg)" > my-meta-data
* cloud-localds my-seed.img my-user-data my-meta-data
* kvm -net nic -net user,hostfwd=tcp::2222-:22 \
-drive file=disk1.img,if=virtio -drive file=my-seed.img,if=virtio
* ssh -p 2222 ubuntu@localhost
If you are impatient, you can try this already – but it’s not really a solution though.
Creating a bridge (br0)
Now we are close to running our guest. Actually we could already boot the new image, but we wouldn’t be able to do much, since we don’t have a network connection for it. That’s why we’re going to create a so-called ‘bridge’.
A bridge will enable your guest to communicate with the rest of your network and also with the internet. Since we keep things simple without libvirt, there won’t be an automatically created interface (old style: virbr0 / new style: lxcbr0). Our bridge will be called the old-fashioned way: br0 – and we make it static!
The traffic of your host and the traffic of your guests (virtual machines) will all be routed over this bridge interface. Additionally, and fully automatically, another network device will be created: it is called tap0. Don’t worry, qemu-kvm will create and use it automatically. Btw., tap does the transport on layer 2, whereas tun (not used here) does layer 3. In other words, your guest will think it has a fully fledged NIC at its disposal.
In order to run our guest without root privileges and with networking at the same time, we need to work around what is more or less a bug in the qemu-bridge-helper tool. We have to do two steps before we create the bridge (br0) in /etc/network/interfaces:
Changing the privileges of qemu-bridge-helper
$ sudo chmod u+s /usr/lib/qemu/qemu-bridge-helper
Also documented here: http://wiki.qemu.org/Features/HelperNetworking
Granting permissions to br0
Create a directory first:
$ sudo mkdir /etc/qemu
Now create the file ‘bridge.conf‘ with the following one-liner inside the newly created directory:
allow br0
Now creating the bridge for real
Now open /etc/network/interfaces and add this to it:
#auto enp4s0
#iface enp4s0 inet dhcp

auto br0
iface br0 inet static
    address 192.168.1.27
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports enp4s0
    bridge_fd 15
    bridge_stp yes
    dns-nameserver 192.168.1.1 8.8.8.8
Important
- Remove, or better comment out, all lines which configure your current enpXs0 device. The new br0 will bring the device up for you.
- Also adjust the address, netmask, gateway, bridge_ports and dns-nameserver entries above, since these are the ones for my machine and network.
Restarting the network
Now it’s time to bring the new interface up. You can do this by invoking:
$ sudo systemctl restart networking
The result of ifconfig should look approximately like this:
br0 Link encap:Ethernet HWaddr d1:50:19:12:2e:6b
inet addr:192.168.1.27 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:56733 errors:0 dropped:218 overruns:0 frame:0
TX packets:38977 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:297510526 (297.5 MB) TX bytes:2897392 (2.8 MB)
enp4s0 Link encap:Ethernet HWaddr d1:50:19:12:2e:6b
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:226482 errors:0 dropped:17 overruns:0 frame:0
TX packets:42793 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:310279886 (310.2 MB) TX bytes:3130336 (3.1 MB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:320 errors:0 dropped:0 overruns:0 frame:0
TX packets:320 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:23680 (23.6 KB) TX bytes:23680 (23.6 KB)
If this doesn’t bring up your br0 interface – reboot. If you encounter interface names like virbr0 or lxcbr0, then you have to check whether you accidentally installed libvirt ($ dpkg -l | grep libvirt). Remove these packages, since you will end up in a mess otherwise. Also don’t install cloud-init!
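A quick read-only way to confirm the bridge came up correctly (a sketch; the interface names are the ones from my config above, adjust them to yours):

```shell
# check whether br0 exists, which address it carries, and which
# interfaces are enslaved to it (enp4s0 should show up as a member)
if ip link show br0 >/dev/null 2>&1; then
    ip addr show br0            # should show the static address from above
    ls /sys/class/net/br0/brif  # lists the bridge members
else
    echo "br0 not found - check /etc/network/interfaces and restart networking"
fi
```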
Starting the virtual machine
Now comes the great moment: we can finally start the virtual machine. Before you do, please don’t use ‘sudo‘ – running a virtual machine as ‘root‘ is mostly discouraged.
When starting the machine, a window with your console will pop up. This of course can only happen if you are really in front of a real Linux box. I’ve done it all remotely with ‘xhost +‘ and ‘export DISPLAY‘ or ‘ssh -XC‘, but this is not part of this blog. So start your guest now with:
$ kvm -hda xenial-server-cloudimg-amd64-disk1.img -hdb seed.img -net nic,macaddr=52:54:00:22:33:44,netdev=hn0 -netdev tap,id=hn0,helper=/usr/lib/qemu/qemu-bridge-helper
If everything went alright, you should see a terminal window booting your new guest. You will end up at the login prompt. The default user is ubuntu. Changing the user is not possible unless we use more sophisticated tools like cloud-init – but we won’t do that here.
Enter your previously chosen password now and hit enter. If everything went right, you should be able to ping the internet or the machines in your network.
The MAC address from above is important for your router at home. This is what the router will see, and it will handle the hostname and routing accordingly. For each new machine you must provide a different MAC address, else you’ll end up with a chaotic situation (well, I did) on your router. Just vary the last digits.
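Instead of varying the digits by hand, you can generate a fresh MAC per guest. A small bash sketch (assumes bash for $RANDOM; 52:54:00 is the locally administered prefix conventionally used with QEMU/KVM, so only the last three octets are randomized):

```shell
# print a random MAC address in the QEMU/KVM 52:54:00 prefix,
# suitable for the macaddr= parameter of the -net nic switch
printf '52:54:00:%02x:%02x:%02x\n' \
    $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))
```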
Connecting the Guest
This chapter is going to describe how one can connect the new guest with either VNC or Spice.
Connection with VNC

If you want to connect to your new guest with VNC, you will have to provide this on launch. This is the switch:
-display vnc=:0,share=ignore
This will allow connections from everywhere – the “:0” reflects this. This is of course only meaningful if you are in your private local area network. If you want to narrow the access, just provide an IP in front of the “:0” with no space in between (e.g. “-display vnc=192.168.1.0:0”). The trailing “0” of the address means access for the whole netblock. The zero behind the colon refers to the monitor number (usually it’s zero). The “...,share=ignore” switch says it’s not an exclusive connection – everyone will be able to join the same session. So these settings are quite permissive – be careful.
The VNC client I use on my Windows machine is the RealVNC Viewer (v6.1.1, 64-bit). The xvnc4viewer and x11vnc for Linux used to crash sometimes. I didn’t investigate the reason so far, because I’m primarily working on my Windows machine.
Connection with Spice

Spice is also a connection protocol like VNC, but it is said to be faster.
I was struggling with stability on both sides, Linux and Windows. I can’t remember seeing overwhelming speed differences. This could be due to the 1 GBit LAN I’m sitting in – as is common nowadays. Possibly this protocol rules supreme on slow connections. Here’s the switch for Spice:
-spice port=5900,addr=0.0.0.0,disable-ticketing
“...,disable-ticketing” says (in an awkward manner) that you don’t need authentication. The addr switch is again chosen with a network mask for access from everywhere. The port is also the default VNC port here. The Windows client I used for testing was virt-viewer 5.0 Win x64. Oddly enough, the window caption says “Connection details” – in the start menu it’s called “Remote viewer”, the application itself is called remote-viewer.exe and resides in the C:\Program Files\VirtViewer v5.0-256\bin\VirtViewer folder after installation (MSI) on your hard disk – well.
A common Share for Guest and Host
This one took me a while. The scenario is straightforward: I want to use the same folder on my host as well as on the guest machine for data exchange, without samba aka cifs. I ran into a bunch of errors, since there are many different HowTos out there with many slightly varying configuration settings. I won’t go into details – if you are using Ubuntu 16.04, this will probably work for you out of the box:
- Create a share (non-root) on your Host system
$ mkdir host-share
- Create a share (non-root) on your Guest system
$ mkdir client-share
- Add this to your /etc/fstab on the Guest (at the end):
client-share /client-share 9p trans=virtio 0 0
- Now start your Guest with these additional switches:
-fsdev local,security_model=mapped,id=fsdev0,path=./host-share -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=client-share
Of course you can call “host-share” and “client-share” whatever you like. You only have to match the names in /etc/fstab on your guest and in the launch parameters on your host. If you encounter access or permission problems, loosen the permissions of your folder:
$ chmod 777 host-share
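The -fsdev/-device switches and the guest’s fstab entry have to agree on the mount tag, which is easy to get wrong. A small sketch (variable names are just examples) that builds the switches from one place:

```shell
# define the host path and the mount tag once, then derive the kvm switches
# from them - the mount tag must match the first field in the guest's fstab
SHARE_PATH=./host-share
MOUNT_TAG=client-share
mkdir -p "$SHARE_PATH"
KVM_SHARE_OPTS="-fsdev local,security_model=mapped,id=fsdev0,path=$SHARE_PATH -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=$MOUNT_TAG"
echo "$KVM_SHARE_OPTS"   # append this to your kvm command line
```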
Tweaking
There are some obvious things still missing. Your guest starts with 512 MB of memory and a single CPU core. Also a default VGA VESA driver is used, which won’t allow you to run X with a resolution of e.g. 1920×1080 – the default nowadays. Also the keyboard will default to US, and you may want to start your guest as a daemon. And maybe you want to increase the partition size, since the original one is pretty small.
Change Memory size
To change the memory, invoke your guest with:
-m 2048
in order to be able to use 2GB of memory in this example.
Adding some CPU-cores
Start your virtual machine with more cores, e.g. with two:
-smp 2
Starting as daemon
Maybe you don’t want to create a systemd service to start your guest. Then you can start your guest manually as a daemon in the background like this:
-daemonize
Increasing the partition size
There is not much to say, just perform:
$ qemu-img resize your-image-here.img +5G
Customizing the keyboard
If you’re using a non-US keyboard, like I do – then you might want to use this switch (here a German keyboard):
-k de
It might be necessary to adjust the console as well. This can be done this way:
$ sudo apt install console-data
$ sudo dpkg-reconfigure console-data
Navigate through these settings and choose the most obvious ones.
Note: If after reconfiguration the Alt Gr keys still don’t work with the VNC viewer – then you could try an absolutely freaky workaround I figured out by accident:
Press Ctrl+Shift, release these keys again, then press Alt Gr + <your desired key>. The drawback is of course that you have to do this every time you want to use an Alt Gr key combination, e.g. for the pipe or backslash. At least you can use them at all, because these two are quite vital for Linux 😉
Using vmware VGA driver with 1920×1080
The vmware driver is very versatile and has a bunch of parameters. Force your guest to use the vmware driver, adapt your xorg.conf on the guest side and use the 1920x1080 resolution like this:
- Start your guest with:
-vga vmware
- Edit or create the xorg.conf file in the /etc/X11 folder on your guest system as root. Paste this into it:

Section "Device"
    ### Available Driver options are:-
    ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
    ### <string>: "String", <freq>: "<f> Hz/kHz/MHz",
    ### <percent>: "<f>%"
    ### [arg]: arg optional
    #Option "HWcursor"          # [<bool>]
    #Option "Xinerama"          # [<bool>]
    #Option "StaticXinerama"    # <str>
    #Option "GuiLayout"         # <str>
    #Option "AddDefaultMode"    # [<bool>]
    #Option "RenderAccel"       # [<bool>]
    #Option "DRI"               # [<bool>]
    #Option "DirectPresents"    # [<bool>]
    #Option "HWPresents"        # [<bool>]
    #Option "RenderCheck"       # [<bool>]
    Identifier "Card0"
    Driver "vmware"
    BusID "PCI:0:2:0"           # <-- use lspci to figure out your VGA BusID
EndSection

Section "Monitor"
    Identifier "Monitor0"
    HorizSync 1.0 - 100000000.0
    VertRefresh 1.0 - 10000.0
    # 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz
    Modeline "1920x1080" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
    # 1368x768 59.88 Hz (CVT) hsync: 47.79 kHz; pclk: 85.25 MHz
    Modeline "1368x768" 85.25 1368 1440 1576 1784 768 771 781 798 -hsync +vsync
    # 1280x720 59.86 Hz (CVT 0.92M9) hsync: 44.77 kHz; pclk: 74.50 MHz
    Modeline "1280x720" 74.50 1280 1344 1472 1664 720 723 728 748 -hsync +vsync
EndSection

Section "Screen"
    Identifier "Screen0"
    Device "Card0"
    Monitor "Monitor0"
    SubSection "Display"
        ViewPort 0 0
        Depth 24
        Modes "1920x1080" "1368x768" "1280x720"
    EndSubSection
    SubSection "Display"
        ViewPort 0 0
        Depth 32
        Modes "1920x1080" "1368x768" "1280x720"
    EndSubSection
EndSection
If you are missing the other sections, like keyboard, mouse, etc. – well, they are not necessary. Leave them out; it works much better this way, since X defaults to the best settings itself.
Important:
I’m not sure if the modelines will work for you – in case of problems please create these modelines yourself. This is quite easy. Just invoke this on the command line:
$ cvt 1920 1080
The result for me is:

# 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz
Modeline "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync

Now you can insert your result or replace the existing one in the xorg.conf.
Restart your X11 now and you should default to 1920×1080, this is also true for the VNC- and Spice-viewer.
Openbox Installation
If you haven’t already done so, here is a quick Openbox installation guide. Openbox is a very lightweight window manager for X. I use it almost everywhere. In the previous chapter I described how to tweak the xorg.conf file. This is of course useless unless you install a window manager, like Gnome, KDE, LXDE, … – or Openbox 🙂
Here are the steps on how to install and launch it:
$ sudo apt install openbox lxpanel menu lxterminal dbus-x11 xserver-xorg-legacy xinit xserver-xorg
$ sudo dpkg-reconfigure xserver-xorg-legacy
Choose “Anybody”, else you’ll only be able to start Openbox as root and will get an error starting it as a regular user.
- Create a ~/.xinitrc and populate it with this content:
#!/bin/bash
lxpanel &
#setxkbmap de #optional (for German keyboard only)
dbus-launch --exit-with-session openbox-session
- Start Openbox now with:
$ startx
And now all together…
If we now take all the switches together, this is how it looks (on my system):
kvm -hda ./ubuntu-16.04-server-cloudimg-amd64-disk1.img -hdb seed.img -net nic,macaddr=52:54:00:12:34:57,netdev=hn0 -netdev tap,id=hn0,helper=/usr/lib/qemu/qemu-bridge-helper -m 4096 -smp 2 -fsdev local,security_model=mapped,id=fsdev0,path=/mnt/A/kvm-images/jdl/share2 -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=hostshare -display vnc=:0,share=ignore -k de -daemonize -vga vmware
Yeah, I know – this is how you can scare noobs.
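A tiny wrapper script tames it. This is a sketch – the paths, MAC and sizes are from my setup, adjust them to yours. It only echoes the command, so you can inspect the dry run before starting the guest for real:

```shell
#!/bin/bash
# start-vm.sh - wrap the long kvm invocation in a maintainable script
IMG=./xenial-server-cloudimg-amd64-disk1.img
SEED=./seed.img
MAC=52:54:00:12:34:57

CMD="kvm -hda $IMG -hdb $SEED \
 -net nic,macaddr=$MAC,netdev=hn0 \
 -netdev tap,id=hn0,helper=/usr/lib/qemu/qemu-bridge-helper \
 -m 4096 -smp 2 -k de -daemonize -vga vmware \
 -display vnc=:0,share=ignore"

echo "$CMD"   # dry run: replace this echo with  eval "$CMD"  once it looks right
```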