===== KVM Howto =====

==== Documentation ====

//Before trying anything with KVM, read the official Red Hat documentation on KVM:// http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Virtualization_Guide/index.html

I really mean it, READ THE OFFICIAL DOCUMENTATION!

==== Host installation ====

To install a KVM host it is necessary to use SL >= 5.4; the minimum required rpms are:

  * kmod-kvm
  * kvm
  * etherboot-roms-kvm
  * qemu-img
  * vnc

If you're using yum, simply issue this command:

  yum install kvm

==== Network Configuration ====

To be able to use public IPs within your VMs, you need to perform the actions described in this section. Be careful: the operations described in this paragraph may take your network offline, so be ready to access a physical console of the host you are configuring.

On your host add the file ///etc/sysconfig/network-scripts/ifcfg-br0// with the following content (fill in the host's IP address and netmask):

  DEVICE=br0
  TYPE=Bridge
  ONBOOT=yes
  DELAY=0
  BOOTPROTO=static
  IPADDR=
  NETMASK=

After this, edit the file ///etc/sysconfig/network-scripts/ifcfg-eth0// so that it resembles this template:

  DEVICE=eth0
  HWADDR=00:30:48:c0:93:00
  ONBOOT=yes
  BRIDGE=br0

Then restart the network with '''service network restart''' and the machine should come up with the new bridge. To check, you can use this command:

  # brctl show
  bridge name     bridge id               STP enabled     interfaces
  br0             8000.003048c09300       no              eth0
  virbr0          8000.000000000000       yes

If //eth0// appears on the same line as //br0//, your configuration is ok and you can now install machines with public IP addresses.

==== Libvirt ====

It's highly suggested to use the libvirt tools to install, launch and monitor VMs.

=== VM installation with libvirt ===

Here is an example of VM installation with //virt-install// using a standard INFN-T1 configuration:

  virt-install \
    --connect qemu:///system \
    --name wn-test-kvm \
    --accelerate \
    --ram 2048 \
    --disk path=/virtual/wn-test-kvm_disk,size=10 \
    --network bridge:br0 \
    --mac 00:16:3e:00:00:xx \
    --arch x86_64 \
    --location http://os-server.cnaf.infn.it/distro/SL/53/x86_64/ \
    --vnc \
    --extra-args ks=http://quattorsrv.cr.cnaf.infn.it/ks/wn-test-kvm.cr.cnaf.infn.it.ks

This creates a VM named wn-test-kvm using KVM, with 2 GB of RAM, a single virtual CPU, a 10 GB disk and a network interface connected to the bridge br0 (to use a public IP), and installs it over the network using os-server as the repository server, with a specific kickstart file. For other options see the virt-install man page.

== Virtio installation ==

To use the virtio drivers, your '''guest''' machine needs the latest kernel from its SL release:

  * sl4.x: kernel >= 2.6.9-89.0.3.EL
  * sl5.x: kernel >= 2.6.18-164.6.1.el5

If you want to install a machine with virtio drivers, add these lines to your virt-install command:

  --os-type=linux \
  --os-variant=virtio26 \

These options enable the virtio drivers at install time (for both sl4 and sl5). Please remember that virtio disks are called vdX instead of hdX; this has to be taken into account if you are installing the machine via kickstart, because the partitioning directives may break.
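As an illustration only (the device names and sizes below are hypothetical and depend on your disk layout), a kickstart partitioning section written for an hdX disk has to reference the virtio name instead:

  # partitioning for a disk seen as hda (IDE naming)
  clearpart --all --drives=hda
  part /boot --fstype ext3 --size=100 --ondisk=hda
  part swap  --size=2048 --ondisk=hda
  part /     --fstype ext3 --size=1 --grow --ondisk=hda

  # the same partitioning when the disk is a virtio device (vda)
  clearpart --all --drives=vda
  part /boot --fstype ext3 --size=100 --ondisk=vda
  part swap  --size=2048 --ondisk=vda
  part /     --fstype ext3 --size=1 --grow --ondisk=vda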
=== VM management with libvirt ===

After installing a machine with //virt-install//, open //virt-manager// and you will find a specific entry for your machine: from there you can perform several activities, such as accessing the console, monitoring resources, migrating the VM and managing its power state.

=== VM management with virsh ===

//virsh// allows you to issue several useful control commands to VMs (see the on-line manual). In order to do that, you have to install the machine with virt-install or add an already running machine to libvirt control (not documented in this wiki).

When a machine is under libvirt/virsh control, you can find an xml file describing its configuration under ///etc/libvirt/qemu/<vm-name>.xml//. If you know exactly what you are doing, you can edit the //xml// file directly to change some parameters, for example to enable virtio for the network. After editing the configuration, you must run:

  virsh define /etc/libvirt/qemu/<vm-name>.xml

This updates the configuration of your VM to match the definition in the xml file. It is very important to run this command while virt-manager is not running on the hypervisor and while the VM is not running, otherwise the changes made to the xml may not be taken into account and the xml may be overwritten.

=== Serial Console ===

To enable a serial console for every VM you need to do the following steps.

In ///etc/inittab// add this line:

  co:2345:respawn:/sbin/agetty ttyS0 9600 vt100-nav

In ///boot/grub/menu.lst// add this to the kernel line:

  console=ttyS0

After this, while the machine is running, you can access the console of your VM with:

  virsh console <vm-name>

To disconnect, simply use the combination '''ctrl-]'''.

----

==== Legacy ====

The following section has to be considered legacy: the Red Hat integration of KVM is rather stable, so users should not need to use the bare command line to control KVM VMs.

=== Network Configuration ===

If you did not configure your host with br0 as described before, or if you are on a legacy node, to be able to properly configure a tunnel for a public IP address you also need:

  * bridge-utils
  * tunctl

To configure the kvm service on the host node you need to add a couple of scripts.

  #!/bin/sh
  # /etc/qemu-ifup: attach the tap interface passed as $1 to the default bridge
  switch=$(/sbin/ip route list | awk '/^default / { print $NF }')
  /sbin/ifconfig $1 0.0.0.0 up
  /usr/sbin/brctl addif ${switch} $1

This has to be put in ///etc/qemu-ifup// and will be invoked every time you start a virtual machine, to properly configure a tunnel for your network interface.

The second script sets up a bridge to be used when connecting VMs with a public IP address. It can be accessed via the quattor template: http://quattorsrv.cr.cnaf.infn.it/scdb/trunk/cfg/standard/kvm/host/config.tpl

=== Disk image creation ===

To create a file to be used as a disk image:

  qemu-img create -f qcow2 <disk-file> 10G

=== Launching a machine ===

To launch a machine:

  qemu-kvm -hda <disk-file> -net nic,model=e1000,macaddr=00:16:3e:00:00:00 -net tap -m 2048 -vnc :0

This command line specifies a machine with these options:

  * disk on a file
  * e1000 network driver (suggested!)
  * mac address 00:16:3e:00:00:00
  * 2048 MB of RAM
  * vnc configured on display 0; the console can be reached with ''vncviewer :0''

For other options, see the qemu-kvm manual.

==== Appendix A: Converting a running machine from standard IO to virtio ====

=== Net Virtio ===

In order to enable your virtual machine to use the //virtio net driver//, you need to change the //interface// section of the xml file defining your guest so that it declares //virtio// as the network card model.
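A minimal sketch of the change, assuming a bridged setup like the one described above (the MAC address is just an example and your //interface// element will contain additional tags):

  <!-- before: emulated e1000 card -->
  <interface type='bridge'>
    <mac address='00:16:3e:00:00:01'/>
    <source bridge='br0'/>
    <model type='e1000'/>
  </interface>

  <!-- after: para-virtualized virtio card -->
  <interface type='bridge'>
    <mac address='00:16:3e:00:00:01'/>
    <source bridge='br0'/>
    <model type='virtio'/>
  </interface>

After editing, re-read the definition with //virsh define// as described in the libvirt section.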
If you don't know how to deal with the xml, read the section about libvirt above. After this operation, turn on the VM and follow the messages on the console. It may happen that the system does not recognize the new network card: in this case edit the //modprobe.conf// file, adding this line:

  alias eth0 virtio_net

and possibly reboot. The node should now use the new driver.

=== HD virtio ===

Quite often you will also want to use the virtio driver for the boot disk. This causes problems, because the initrd used at boot time is missing the virtio drivers. A quick and dirty hack is to copy an initrd file for your kernel from a machine that already uses the virtio disk driver with the same kernel as the machine you want to convert. However, due to differences that may be hidden in the configuration, it is better to re-create a working initrd with something like:

  mkinitrd -f -v --with=virtio_pci --with=virtio_net --with=virtio_blk --with=virtio --with=virtio_ring initrd.img `uname -r`

If the output is correct, you can move the new initrd to /boot and reference it in grub.conf for your kernel. After this, make sure that grub.conf and fstab do not contain references to hdX devices (if they do, you need to change them to vdX). Once this is done, shut down the VM and change the //disk// section of the xml so that the disk is attached through the virtio bus.
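A minimal sketch of the corresponding //disk// change (the file path is the one from the virt-install example above; your element may contain additional tags):

  <!-- before: disk presented as an IDE device (hda) -->
  <disk type='file' device='disk'>
    <source file='/virtual/wn-test-kvm_disk'/>
    <target dev='hda' bus='ide'/>
  </disk>

  <!-- after: disk presented as a virtio device (vda) -->
  <disk type='file' device='disk'>
    <source file='/virtual/wn-test-kvm_disk'/>
    <target dev='vda' bus='virtio'/>
  </disk>

Note that the target device name changes from hdX to vdX, consistent with what the guest will see.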
You can now power on your machine again and the virtio disk //should// work.

----

==== Appendix B: Virtio using the command line ====

If you want to use virtio devices via the command line, use something like this:

  /usr/libexec/qemu-kvm -m 2048 -smp 1 -name wn-test-kvm -boot c -drive file=/virtual/wn-test-kvm_disk,if=virtio,boot=on -net nic,macaddr=00:16:3e:00:00:01,model=virtio -net tap,ifname=vnet0 -vnc :0

==== Appendix C: Migrating net driver from e1000 to virtio_net, using sl53_x86_64 ====

**DISCLAIMER: This method has been tested on a sl53_x86_64 guest node.**

To switch a machine from the e1000 driver to virtio (and vice versa) follow these steps:

  * edit ///etc/modprobe.conf// and change **alias eth0 e1000** to **alias eth0 virtio_net**
  * disable kudzu on boot: //chkconfig kudzu off//
  * restart the machine

After rebooting, if your machine is configured to get its address via dhcp, it will load the new module and obtain the IP automatically.

An alternative approach is this:

  * edit ///etc/modprobe.conf// and remove the line **alias eth0 e1000**
  * remove the file ///etc/sysconfig/network-scripts/ifcfg-eth0//

==== Appendix D: Migrating net driver from e1000 to virtio_net, using slc4x_x86_64 ====

**DISCLAIMER: This method has been tested on a slc4x_x86_64 guest node, with kernel 2.6.9-89.0.16.EL.cern.**

To switch a machine from the e1000 driver to virtio (and vice versa) follow these steps:

  * edit ///etc/modprobe.conf// and remove the line **alias eth0 e1000**
  * disable kudzu on boot: //chkconfig kudzu off//
  * remove the line starting with HWADDR from ///etc/sysconfig/network-scripts/ifcfg-eth0//
  * restart the machine

After rebooting, if your machine is configured to get its address via dhcp, it will load the new module and obtain the IP automatically.

===== KVM Howto Legacy =====

===== Repository =====

The suggested repository to get the latest RPMs for kvm is: http://www.lfarkas.org/linux/packages/centos/

===== Requirements =====

The hypervisor must be an SL 5.x node.

===== Installation =====

To install a kvm capable machine the minimum required rpms are:

  * kmod-kvm
  * kvm
  * etherboot-roms-kvm
  * qemu-img
  * vnc

To be able to properly configure a tunnel for a public IP address you also need:

  * bridge-utils
  * tunctl

Since the kvm module is external to the standard SL kernel, when installing the //kmod-kvm// rpm you have to pay attention to which kernel version the module was compiled for. To do so, perform a query on the kmod-kvm rpm and check the version number.

===== Configuration =====

To configure kvm you need to add a couple of scripts.

  #!/bin/sh
  # /etc/qemu-ifup: attach the tap interface passed as $1 to the default bridge
  switch=$(/sbin/ip route list | awk '/^default / { print $NF }')
  /sbin/ifconfig $1 0.0.0.0 up
  /usr/sbin/brctl addif ${switch} $1

This has to be put in ///etc/qemu-ifup// and will be invoked every time you start a virtual machine, to properly configure a tunnel for your network interface.
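As a sketch of how the script comes into play (reusing the example MAC address and a generic disk file from above): qemu-kvm invokes ///etc/qemu-ifup// for ''-net tap'' interfaces, and the ''script='' option lets you point to it explicitly:

  # launch a VM by hand, pointing -net tap explicitly at the ifup script
  qemu-kvm -hda <disk-file> -m 2048 \
    -net nic,model=e1000,macaddr=00:16:3e:00:00:00 \
    -net tap,script=/etc/qemu-ifup \
    -vnc :0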
The second script is a modification of the standard init script contained in the kvm rpm:

  #!/bin/sh
  # kvm_cnaf init script: takes care of all VMM tasks
  #
  # chkconfig: - 99 01
  # description: The KVM is a kernel level Virtual Machine Monitor. \
  #              Currently it starts a bridge and attaches eth0 to it

  dir=$(dirname "$0")
  ifnum=${ifnum:-$(ip route list | awk '/^default / { print $NF }' | sed 's/^[^0-9]*//')}
  ifnum=${ifnum:-0}
  switch=${sw0:-sw${ifnum}}
  pif=${pif:-eth${ifnum}}
  antispoof=${antispoof:-no}
  command=$1

  if [ -f /etc/sysconfig/network-scripts/network-functions ]; then
      . /etc/sysconfig/network-scripts/network-functions
  fi

  # check for bonding link aggregation
  bond_int=$(awk < /etc/sysconfig/network-scripts/ifcfg-${pif} '/^MASTER=/ { print $NF }' | sed 's/MASTER=//')
  if [ ${bond_int}"0" != "0" ]; then
      pif=${bond_int}
  fi

  if [ -f /etc/sysconfig/network-scripts/ifcfg-${pif} ]; then
      . /etc/sysconfig/network-scripts/ifcfg-${pif}
  fi

  get_ip_info() {
      addr=`ip addr show dev $1 | egrep '^ *inet' | sed -e 's/ *inet //' -e 's/ .*//'`
      gateway=$(ip route list | awk '/^default / { print $3 }')
      broadcast=$(/sbin/ip addr show dev $1 | grep inet | awk '/brd / { print $4 }')
  }

  # When a bonding device link goes down, its slave interfaces
  # get detached, so they should be re-added
  bond_link_up () {
      dev=$1
      is_bonding=$(echo ${dev} | awk '/^bond/ { print $NF }')
      if [ ${is_bonding}"0" != "0" ]; then
          for slave in `awk < /proc/net/bonding/bond0 '/Slave Interface: / { print $3 }'`; do
              ifenslave $dev $slave
          done
      fi
  }

  do_ifup() {
      if [ ${addr} ] ; then
          ip addr flush $1
          bond_link_up $1
          ip addr add ${addr} broadcast ${broadcast} dev $1
          ip link set dev $1 up
      fi
  }

  link_exists() {
      if ip link show "$1" >/dev/null 2>/dev/null
      then
          return 0
      else
          return 1
      fi
  }

  create_switch () {
      local switch=$1
      if [ ! -e "/sys/class/net/${switch}/bridge" ]; then
          brctl addbr ${switch} >/dev/null 2>&1
          brctl stp ${switch} off >/dev/null 2>&1
          brctl setfd ${switch} 0.1 >/dev/null 2>&1
      fi
      ip link set ${switch} up >/dev/null 2>&1
  }

  add_to_switch () {
      local switch=$1
      local dev=$2
      if [ ! -e "/sys/class/net/${switch}/brif/${dev}" ]; then
          brctl addif ${switch} ${dev} >/dev/null 2>&1
      fi
      ip link set ${dev} up >/dev/null 2>&1
  }

  # taken from Xen
  transfer_routes () {
      local src=$1
      local dst=$2
      # List all routes and grep the ones with $src in.
      # Stick 'ip route del' on the front to delete.
      # Change $src to $dst and use 'ip route add' to add.
      ip route list | sed -ne "
  /dev ${src}\( \|$\)/ {
    h
    s/^/ip route del /
    P
    g
    s/${src}/${dst}/
    s/^/ip route add /
    P
    d
  }" | sh -e
  }

  change_ips() {
      local src=$1
      local dst=$2
      # take care also of the case where we do not have /etc/sysconfig data (the switch as src)
      if [ -z "$BOOTPROTO" ]; then
          if [ -z "$(pgrep dhclient)" ]; then
              BOOTPROTO="null"
          else
              BOOTPROTO="dhcp"
          fi
      fi
      if [ "$BOOTPROTO" = "dhcp" ]; then
          ifdown ${src} >/dev/null 2>&1 || true
          ip link set ${src} up >/dev/null 2>&1
          bond_link_up ${src}
          pkill dhclient >/dev/null 2>&1
          for ((i=0;i<3;i++)); do
              pgrep dhclient >/dev/null 2>&1 || i=4
              sleep 1
          done
          dhclient ${dst} >/dev/null 2>&1
      else
          get_ip_info ${src}
          ifconfig ${src} 0.0.0.0
          do_ifup ${dst}
          transfer_routes ${src} ${dst}
          ip route add default via ${gateway} dev ${dst}
      fi
  }

  antispoofing () {
      iptables -P FORWARD DROP >/dev/null 2>&1
      iptables -F FORWARD >/dev/null 2>&1
      iptables -A FORWARD -m physdev --physdev-in ${pif} -j ACCEPT >/dev/null 2>&1
  }

  status () {
      local dev=$1
      local sw=$2
      echo '============================================================'
      ip addr show ${dev}
      ip addr show ${sw}
      echo ' '
      brctl show ${sw}
      echo ' '
      ip route list
      echo ' '
      route -n
      echo '============================================================'
      gateway=$(ip route list | awk '/^default / { print $3 }')
      ping -c 1 ${gateway} || true
      echo '============================================================'
  }

  start () {
      if [ "${switch}" = "null" ] ; then
          return
      fi
      create_switch ${switch}
      add_to_switch ${switch} ${pif}
      change_ips ${pif} ${switch}
      if [ ${antispoof} = 'yes' ] ; then
          antispoofing
      fi
      grep -q GenuineIntel /proc/cpuinfo && /sbin/modprobe kvm-intel
      grep -q AuthenticAMD /proc/cpuinfo && /sbin/modprobe kvm-amd
  }

  stop () {
      if [ "${switch}" = "null" ]; then
          return
      fi
      if ! link_exists "$switch"; then
          return
      fi
      change_ips ${switch} ${pif}
      ip link set ${switch} down
      brctl delbr ${switch}
      grep -q GenuineIntel /proc/cpuinfo && /sbin/modprobe -r kvm-intel
      grep -q AuthenticAMD /proc/cpuinfo && /sbin/modprobe -r kvm-amd
      /sbin/modprobe -r kvm
  }

  case "$command" in
      start)
          echo -n $"Starting KVM: "
          start
          echo
          ;;
      stop)
          echo -n $"Shutting down KVM: "
          stop
          echo
          ;;
      status)
          status ${pif} ${switch}
          ;;
      *)
          echo "Unknown command: $command" >&2
          echo 'Valid commands are: start, stop, status' >&2
          exit 1
  esac

===== Usage =====

Here are some useful commands to get a kvm machine started.

==== Disk image creation ====

To create a sparse file to be used as a disk image:

  qemu-img create -f qcow2 <disk-file> 10G

==== Launching a machine ====

To launch a machine:

  qemu-kvm -hda <disk-file> -net nic,model=e1000,macaddr=00:16:3e:00:00:00 -net tap -m 2048 -vnc :0

This command line specifies a machine with these options:

  * disk on a file
  * e1000 network driver (suggested!)
  * mac address 00:16:3e:00:00:00
  * 2048 MB of RAM
  * vnc configured on display 0; the console can be reached with //vncviewer :0//

For other options, see the qemu-kvm manual.

==== Virtio ====

Para-virtualized drivers are available starting from the SL5 kernel as a guest. **You can not use para-virtualized drivers on an SL 4 guest machine or below.**

To use these drivers, the command line is something like:

  qemu-kvm -drive file=wn-test-kvm_virtio_disk,if=virtio,boot=on -boot c -net nic,model=virtio,macaddr=00:16:3e:00:00:00 -net tap -m 2048