Virtualization

+ virt-manager - Assign Mobile to Guest (June 11, 2022, 10:15 p.m.)

Assigning a host USB device to a guest VM:
1- Enable Developer Mode on the mobile.
2- Enable USB Debugging on the mobile.
3- Open virt-manager and select the VM. From the menu click "Edit" and select "Virtual Machine Details".
4- Click the "Add Hardware" button, choose "USB Host Device", and finally choose your mobile's name from the list.
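The same attachment can also be done from the command line. A minimal sketch, assuming the phone's vendor/product IDs were found with lsusb (18d1:4ee7 and the file/VM names below are placeholders):

lsusb
# e.g.: Bus 001 Device 007: ID 18d1:4ee7 ...  (vendor:product)

cat > usb-phone.xml <<'EOF'
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x18d1'/>
    <product id='0x4ee7'/>
  </source>
</hostdev>
EOF

# attach the device to the running guest
virsh attach-device <vm_name> usb-phone.xml --live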

+ virt-manager - Set up shared folders (Feb. 15, 2022, 6:06 p.m.)

1- Turn off the guest operating system.
2- Switch the view to the detailed hardware view: View > Details
3- Go to Add Hardware > Filesystem
4- Fill in the source path (/home/mohsen/Programs) and the virtual target path (/media/Programs)
5- Switch the mode to Mapped if you need to have write access from the guest.
6- Confirm and start the VM again.
7- Now you can mount your shared folder from the VM:
sudo mkdir -p /media/Programs
sudo mount -t 9p -o trans=virtio /media/Programs /media/Programs
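To make the mount permanent inside the guest, a hedged /etc/fstab entry (assuming the virtual target path /media/Programs set above is the 9p mount tag):

# guest /etc/fstab: mount tag, mount point, fs type, options
/media/Programs  /media/Programs  9p  trans=virtio,rw,_netdev  0  0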

+ VirtualBox - Extension Pack (Feb. 8, 2022, 12:59 p.m.)

For enabling USB 2.0 or USB 3.0 in VirtualBox: when checking `Enable USB 2.0...` in the settings, I noticed an alert at the bottom of the window, `Invalid settings detected`. Hovering the mouse over it, it displayed: "USB 2.0 is currently enabled for this virtual machine. However, this requires the Oracle VM VirtualBox Extension Pack to be installed..."
To solve this problem:
1- Check what version of VirtualBox you're using:
VBoxManage --version
It will display something like 6.1.32r149290
2- Open this link and follow the version of VirtualBox you got from step 1:
http://download.virtualbox.org/virtualbox/
3- Find the package and download it:
Oracle_VM_VirtualBox_Extension_Pack-6.1.32r149290.vbox-extpack
4- Install the downloaded package:
sudo vboxmanage extpack install Oracle_VM_VirtualBox_Extension_Pack-6.1.32r149290.vbox-extpack
5- Now add your username to the "vboxusers" group in order to gain access to your USB devices in the virtual machine:
sudo usermod -a -G vboxusers mohsen
6- Restart your PC/laptop. Finished.

For viewing a list of installed packages:
VBoxManage list extpacks
For uninstalling the package:
sudo vboxmanage extpack uninstall "Oracle VM VirtualBox Extension Pack"
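Steps 1-4 can also be scripted. A rough sketch only, assuming the file naming shown above; double-check the exact file name in the directory listing for your version:

VER=$(VBoxManage --version)    # e.g. 6.1.32r149290
BASE=${VER%r*}                 # strip the revision -> 6.1.32
wget "https://download.virtualbox.org/virtualbox/${BASE}/Oracle_VM_VirtualBox_Extension_Pack-${VER}.vbox-extpack"
sudo VBoxManage extpack install "Oracle_VM_VirtualBox_Extension_Pack-${VER}.vbox-extpack"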

+ Windows 7 & 10 - Guest Agents (Sept. 27, 2020, 9:33 a.m.)

For Windows 10, download the following ISO file, mount it, and install the tools:
https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/virtio-win.iso
--------------------------------------------------------------------
For Windows 7, download:
http://www.spice-space.org/download/windows/spice-guest-tools/spice-guest-tools-latest.exe
--------------------------------------------------------------------

+ Virsh - Enable libvirt debug logging (Feb. 5, 2020, 4:19 p.m.)

vim /etc/libvirt/libvirtd.conf

log_level = 1
log_outputs="1:file:/var/log/libvirt/libvirtd.log"

service libvirtd restart
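To confirm the log file is being written and to skim it for problems (plain grep, nothing libvirt-specific):

tail -f /var/log/libvirt/libvirtd.log
grep -iE 'error|warning' /var/log/libvirt/libvirtd.log | tail -n 50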

+ GPU passthrough with libvirt qemu kvm (Feb. 2, 2020, 12:36 p.m.)

1- Go into the BIOS (EFI) settings and turn on VT-d and IOMMU support.
VT-d and Virtualization configuration params are the same thing; some EFI firmwares don't have a separate IOMMU configuration setting.
IOMMU (input-output memory management unit) is a memory management unit (MMU) that connects a direct-memory-access-capable (DMA-capable) I/O bus to the main memory.
--------------------------------------------------------------------------------
2- Enable IOMMU support in the kernel:
vim /etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT="splash quiet mem_encrypt=off maybe-ubiquity intel_iommu=on vfio-pci.ids=10de:1f06,10de:10f9,10de:1ada,10de:1adb video=efifb:off"
GRUB_CMDLINE_LINUX=""

And apply the changes:
update-grub2
# grub-mkconfig -o /boot/grub/grub.cfg

Reboot the server and check that IOMMU is turned on and works:
dmesg | grep 'IOMMU enabled'
[    0.000000] DMAR: IOMMU enabled
--------------------------------------------------------------------------------
3- Locate the PCI IDs of the passthrough GPU device.
The following command will list the IOMMU groups on your rig:
for iommu_group in $(find /sys/kernel/iommu_groups/ -maxdepth 1 -mindepth 1 -type d); do echo "IOMMU group $(basename "$iommu_group")"; for device in $(ls -1 "$iommu_group"/devices/); do echo -n $'\t'; lspci -nns "$device"; done; done

Look for the VGA and HDMI Audio devices:
09:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1f06] (rev a1)
09:00.1 Audio device [0403]: NVIDIA Corporation TU106 High Definition Audio Controller [10de:10f9] (rev a1)

Note the PCI IDs of the GPU and also the HDMI audio device. The IDs are the last values in brackets at the end of each line: 10de:1f06 and 10de:10f9 in my case.
--------------------------------------------------------------------------------
4- Virtual Function I/O (VFIO) allows a virtual machine (VM) direct access to a PCI hardware resource, such as a graphics processing unit (GPU). Virtual machines set up with GPU passthrough can gain close to bare-metal performance, which makes running games in a Windows virtual machine possible.
Create the file /etc/modprobe.d/vfio.conf and add both PCI IDs of the device to pass through, using the PCI IDs from the previous step:
options vfio-pci ids=10de:1f06,10de:10f9,10de:1ada,10de:1adb

After adding this file and running update-initramfs in the next steps, the command "nvidia-smi" on your server will no longer display the GPU information. That means you will receive an error like the following (and that is okay!):
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.

Since the Windows 10 update 1803 the following additional entry needs to be set:
/etc/modprobe.d/kvm.conf
options kvm ignore_msrs=1
--------------------------------------------------------------------------------
5- In order to alter the load sequence in favor of vfio_pci before the Nvidia driver, create a file in the modprobe.d folder and add the following lines:
vim /etc/modprobe.d/nvidia.conf

softdep nouveau pre: vfio-pci
softdep nvidia pre: vfio-pci
softdep nvidia* pre: vfio-pci
softdep drm pre: vfio-pci
options kvm_amd avic=1

update-initramfs -u
--------------------------------------------------------------------------------
6- Enable passthrough in qemu's config:
vim /etc/libvirt/qemu.conf

Uncomment:
nvram = ["/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd"]

Delete the rest of the lines inside this list, or you'll get an error when libvirtd starts.
--------------------------------------------------------------------------------
7- Create a VM in virt-manager or select an existing VM you wish to pass the PCI device through to.
From virt-manager click Add Hardware > PCI Host Device and select the passthrough GPU and then also its HDMI audio device. It is recommended to pass all components of a GPU device. The video & audio devices would be:
09:00.0
09:00.1
09:00.2 *

* I had to add the 09:00.2 device too because of the following error:
Please make sure all devices within the iommu_group are bound to their vfio bus driver.

Start the VM. Once inside the operating system, be sure to install the GPU's drivers as one would do with any bare-metal OS install.
--------------------------------------------------------------------------------
8- Add the following to your VM's virsh XML:
<qemu:commandline>
  <qemu:arg value='-cpu'/>
  <qemu:arg value='host,hv_time,kvm=off,hv_vendor_id=null'/>
</qemu:commandline>
--------------------------------------------------------------------------------
Debugging:
nvidia-smi
lsmod | grep -i nvidia
update-pciids
lspci | grep -i vga
glxinfo | egrep "OpenGL vendor|OpenGL renderer*"
lspci -vnnn | perl -lne 'print if /^\d+\:.+(\[\S+\:\S+\])/' | grep VGA
lspci -vnn | grep VGA -A 12
lshw -numeric -C display
lshw -short | grep -i --color display
dmesg | grep NVRM
dmesg | grep -i -e DMAR -e IOMMU
lspci -v -s $(lspci | grep ' VGA ' | cut -d" " -f 1)
--------------------------------------------------------------------------------
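One more check worth doing after steps 4 and 5: confirm that the GPU functions are actually bound to vfio-pci and not to nouveau/nvidia. A small sketch, assuming the 09:00.x addresses from step 3:

lspci -nnk -s 09:00.0
# expected in the output:
#   Kernel driver in use: vfio-pci
lspci -nnk -s 09:00.1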

+ Install KVM Hypervisor on Debian 10 (Buster) (Jan. 27, 2020, 10:42 a.m.)

1- Check whether the virtualization extension is enabled or not:
egrep -c '(vmx|svm)' /proc/cpuinfo
If the output of the above command is more than zero, virtualization technology is enabled on your system. If the output is zero, restart the system, go into the BIOS settings, and enable VT-x (Virtualization Technology Extension) for an Intel processor or AMD-V for an AMD processor.

2- Verify whether your processor is Intel or AMD and supports hardware virtualization:
grep -E --color '(vmx|svm)' /proc/cpuinfo
If the output contains vmx, you have an Intel-based processor; svm confirms that it is an AMD processor.

3- Install the QEMU-KVM & libvirt packages:
apt install qemu-kvm libvirt-daemon bridge-utils virtinst libvirt-daemon-system

4- Start the default network and add the vhost_net module:
virsh net-list --all
As we can see in the above output, the default network is inactive, so to make it active and auto-start across reboots, run the following commands:
virsh net-start default
virsh net-autostart default

5- When you create a virtual machine, it will get NAT and can't have a separate IP address in the range of your local office network. To solve this problem (i.e. to have static IP addresses for each VM you create) you need to create a bridge network on the server machine.
Open the file /etc/network/interfaces:

allow-hotplug enp4s0
auto br1
iface br1 inet static
    bridge_ports enp4s0
    bridge_stp off
    address 192.168.1.100/24
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8

"enp4s0" is the name of the interface on the server machine.
"192.168.1.100" is the IP address we're assigning to the server machine.

6- You now need to select "br1" when creating a virtual machine (see the virt-install sketch after this note). This way, you will be able to assign it a separate IP address in the range of your office network. Inside the guest, /etc/network/interfaces would look like:

auto ens3
allow-hotplug ens3
iface ens3 inet static
    address 192.168.1.101/24
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8
----------------------------------------------------------------------
Bridge configuration for netplan (step 5):
vim /etc/netplan/01-netcfg.yaml

network:
  version: 2
  renderer: networkd
  ethernets:
    enp4s0:
      dhcp4: no
  bridges:
    br1:
      interfaces: [enp4s0]
      dhcp4: no
      addresses: [ 192.168.1.100/24 ]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
----------------------------------------------------------------------
Network configuration for netplan (step 6):
vim /etc/netplan/01-netcfg.yaml

network:
  version: 2
  renderer: networkd
  ethernets:
    enp1s0:
      addresses: [ 192.168.1.101/24 ]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
----------------------------------------------------------------------
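A hedged virt-install example that uses the br1 bridge from steps 5 and 6; the VM name, ISO path, and sizes are placeholders:

virt-install \
  --virt-type kvm \
  --name debian10-guest \
  --memory 2048 --vcpus 2 \
  --cdrom /var/lib/libvirt/images/debian-10-netinst.iso \
  --disk size=20 \
  --os-variant debian10 \
  --network bridge=br1
# then assign the static IP (e.g. 192.168.1.101/24) inside the guest as shown in step 6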

+ Virtualbox - Increase VDI size (July 29, 2019, 10:40 a.m.)

vboxmanage modifymedium /media/mohsen/Programs/Virtual\ OS/VirtualBox\ VMs/Windows\ 10/<disk>.vdi --resize 22000
(The path must point to the VM's .vdi file inside that folder; the size is given in MB.)
After resizing, use the Disk Management tool available in Windows, right-click on partition C:, and extend it.
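To verify the new size afterwards (same .vdi path as above; the size is reported in MB):

vboxmanage showmediuminfo disk /media/mohsen/Programs/Virtual\ OS/VirtualBox\ VMs/Windows\ 10/<disk>.vdi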

+ Virtualbox (Nov. 4, 2015, 10:01 a.m.)

Download from the following link, and install:
https://www.virtualbox.org/wiki/Linux_Downloads

OR

1- Add the following line to your /etc/apt/sources.list:
deb [arch=amd64] https://download.virtualbox.org/virtualbox/debian <mydist> contrib
2- apt-key adv --keyserver keyserver.ubuntu.com --recv-keys A2F683C52980AECF
3- To install VirtualBox:
apt update
apt-get install virtualbox-6.1

+ VMware Workstation (June 21, 2016, 4:07 p.m.)

Using this address, find the bundle file in "/linux/core/":
http://softwareupdate.vmware.com/cds/vmw-desktop/ws/
Extract the file (if it's a tar file) and run the bundle file with root permission:
# bash ./VMware-Workstation-12.5.2-4638234.x86_64.bundle
------------------------------------------------------------------------------
After installation, you'll need a serial number. Google the version and you'll find it ;-)
For this current version (12.5.2) the serial number is:
5A02H-AU243-TZJ49-GTC7K-3C61N
------------------------------------------------------------------------------
Uninstall:
# vmware-installer -u vmware-workstation
------------------------------------------------------------------------------

+ libvirt - Change the default Storage Pool (May 23, 2018, 10:01 a.m.)

1- virsh pool-list
2- virsh pool-destroy default
3- virsh pool-undefine default
4- virsh pool-define-as --name default --type dir --target /home/mohsen/virt-manager/pool
5- virsh pool-autostart default
6- virsh pool-start default
7- virsh pool-list
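To confirm the new pool really points at the new directory:

virsh pool-info default
virsh pool-dumpxml default | grep -A 2 '<target>'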

+ TLS encryption on SPICE server (May 29, 2018, 2:15 p.m.)

1- Uncomment the following lines in the file /etc/libvirt/qemu.conf:
spice_tls = 1
spice_tls_x509_cert_dir = "/etc/pki/libvirt-spice"
2- Using the note "SPICE - Script to generate cert files for SSL" in my Virtualization notes, create the cert files.
3- Test the connection:
virsh domdisplay <machine_id>
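For reference, the guest's graphics device can also be forced to use only the TLS port. A sketch of the <graphics> element (edit with `virsh edit <machine_id>`; the listen address is an assumption):

<graphics type='spice' autoport='yes' defaultMode='secure'>
  <listen type='address' address='0.0.0.0'/>
</graphics>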

+ SPICE - Script to generate cert files for SSL (May 29, 2018, 3:31 p.m.)

#!/bin/bash

SERVER_KEY=server-key.pem

# creating a key for our ca
if [ ! -e ca-key.pem ]; then
    openssl genrsa -des3 -out ca-key.pem 1024
fi

# creating a ca
if [ ! -e ca-cert.pem ]; then
    openssl req -new -x509 -days 1095 -key ca-key.pem -out ca-cert.pem -subj "/C=IL/L=Raanana/O=Red Hat/CN=my CA"
fi

# create server key
if [ ! -e $SERVER_KEY ]; then
    openssl genrsa -out $SERVER_KEY 1024
fi

# create a certificate signing request (csr)
if [ ! -e server-key.csr ]; then
    openssl req -new -key $SERVER_KEY -out server-key.csr -subj "/C=IL/L=Raanana/O=Red Hat/CN=my server"
fi

# signing our server certificate with this ca
if [ ! -e server-cert.pem ]; then
    openssl x509 -req -days 1095 -in server-key.csr -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
fi

# now create a key that doesn't require a passphrase
openssl rsa -in $SERVER_KEY -out $SERVER_KEY.insecure
mv $SERVER_KEY $SERVER_KEY.secure
mv $SERVER_KEY.insecure $SERVER_KEY

# show the results (no other effect)
openssl rsa -noout -text -in $SERVER_KEY
openssl rsa -noout -text -in ca-key.pem
openssl req -noout -text -in server-key.csr
openssl x509 -noout -text -in server-cert.pem
openssl x509 -noout -text -in ca-cert.pem

# copy *.pem files to /etc/pki/libvirt-spice
if [[ -d "/etc/pki/libvirt-spice" ]]; then
    cp ./*.pem /etc/pki/libvirt-spice
else
    mkdir /etc/pki/libvirt-spice
    cp ./*.pem /etc/pki/libvirt-spice
fi

# echo --host-subject
echo "your --host-subject is" \" `openssl x509 -noout -text -in server-cert.pem | grep Subject: | cut -f 10- -d " "` \"

+ libvirt - description (May 30, 2018, 12:58 p.m.)

libvirt is an open-source virtualization API, daemon, and management toolkit, originally developed by Red Hat, that provides a single interface for managing different virtualization technologies (QEMU/KVM, Xen, LXC, VirtualBox, VMware ESX, and more).

+ Virtualbox - Fedora (Nov. 14, 2018, 10:24 a.m.)

1- cd /etc/yum.repos.d/
2- wget http://download.virtualbox.org/virtualbox/rpm/fedora/virtualbox.repo
3- dnf update
4- Install the following dependency packages:
dnf install binutils gcc make patch libgomp glibc-headers glibc-devel kernel-headers kernel-devel dkms
5- Install the latest VirtualBox version (currently 5.2.20):
dnf install VirtualBox-5.2
6- /usr/lib/virtualbox/vboxdrv.sh setup
7- Add VirtualBox user(s) to the vboxusers group:
usermod -a -G vboxusers mohsen

+ Virtualbox - SSH to a guest (Nov. 14, 2018, 10:24 a.m.)

1- In the Network settings choose NAT.
2- Find the name of your guest machine using the command "VBoxManage list vms". Then create a port forwarding rule for the machine:
VBoxManage modifyvm <machine_name> --natpf1 "ssh,tcp,,3022,,22"
3- ssh -p 3022 user@127.0.0.1
----------------------------------------------------------------
- The command "VBoxManage list vms" will show the virtual machines created by the current user.
- If VirtualBox is open, you won't see the created port forwarding rule in the GUI until you close and reopen it.
- For viewing the created port forwarding rule in the terminal, use the command:
VBoxManage showvminfo Debian | grep 'Rule'
----------------------------------------------------------------
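Optionally, a host-side ~/.ssh/config entry saves retyping the forwarded port (the alias "vbox-guest" and the user name are made up):

Host vbox-guest
    HostName 127.0.0.1
    Port 3022
    User user

# then simply: ssh vbox-guest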

+ Virtualbox - Commands (Nov. 17, 2018, 9:03 a.m.)

List running machines:
VBoxManage list runningvms
----------------------------------------------------------
Run in the background:
VBoxHeadless --startvm Debian
----------------------------------------------------------
Retrieve a full list of hard disks:
vboxmanage list hdds
----------------------------------------------------------
Delete a virtual disk:
vboxmanage unregistervm <NAME>
vboxmanage closemedium disk <location.vdi> --delete
rm -r <path_to_folder>
----------------------------------------------------------
Clone:
vboxmanage clonevm "Ubuntu 18.04" --name "Countly"
vboxmanage registervm /home/mohsen/VirtualBox\ VMs/Countly/Countly.vbox
----------------------------------------------------------
List of port forwarding rules:
vboxmanage showvminfo Grafana | grep -i rule
----------------------------------------------------------
Create a port forwarding rule:
VBoxManage modifyvm <machine_name> --natpf1 "guest_ssh,tcp,,2222,,22"
VBoxManage modifyvm <machine_name> --natpf1 "guest_http,tcp,,3000,,3000"

If for some reason the guest uses a statically assigned IP address not leased from the built-in DHCP server, it is required to specify the guest IP when registering the forwarding rule:
VBoxManage modifyvm "VM name" --natpf1 "guestssh,tcp,,2222,10.0.2.19,22"
In this example the NAT engine is being told that the guest can be found at the 10.0.2.19 address.

To forward all incoming traffic from a specific host interface to the guest, specify the IP of that host interface like this:
VBoxManage modifyvm "VM name" --natpf1 "guestssh,tcp,127.0.0.1,2222,,22"
----------------------------------------------------------
Delete a port forwarding rule:
VBoxManage modifyvm "Ubuntu 18.04" --natpf1 delete guestssh
----------------------------------------------------------
VBoxManage list vms
VBoxManage startvm "Debian - 8"
----------------------------------------------------------

+ VNCviewer (Dec. 17, 2018, 11:59 a.m.)

VNC stands for Virtual Network Computing. It is a remote display system which allows you to view a computing `desktop' environment not only on the machine where it is running but from anywhere on the Internet and from a wide variety of machine architectures.
The difference between xtightvncviewer and the normal vncviewer is the data encoding, which is optimized for low-bandwidth connections. If the client does not support jpeg or zlib encoding it can use the default one. Later versions of xvncviewer (> 3.3.3r2) support a new automatic encoding that should be roughly as good as the tightvnc encoding.
------------------------------------------------------------
Download & install:
https://www.realvnc.com/en/connect/download/viewer/linux/
------------------------------------------------------------

+ Install VMware tools (June 15, 2019, 9:01 a.m.)

1- apt install binutils cpp gcc make psmisc linux-headers-$(uname -r)
2- Mount the cdrom drive inside the guest server:
mount /dev/cdrom /mnt
OR
mount /dev/sr0 /mnt
3- Extract VMware tools to the tmp directory:
tar -C /tmp -zxvf /mnt/VMwareTools-x.x.x-x.tar.gz
4- Start the installation:
/tmp/vmware-tools-distrib/vmware-install.pl

+ qemu - command (March 6, 2018, 2:24 p.m.)

qemu-system-x86_64 --enable-kvm -m 3000 Win10.qcow
---------------------------------------------------------------
qemu-system-x86_64 -hda vdisk.img -cdrom /path/to/boot-media.iso -boot d -m 384
---------------------------------------------------------------
System emulation using QEMU:
dd if=/dev/zero of=disk.img bs=1048576 count=4096
qemu-system-i386 -hda disk.img -cdrom your-favourite-os-install.iso -boot d
qemu-system-i386 -hda disk.img
---------------------------------------------------------------
qemu-img resize <the_image> +200G
---------------------------------------------------------------
Converting between image formats:
qemu-img convert -f vmdk -O raw image.vmdk image.img
qemu-img convert -f raw -O qcow2 image.img image.qcow2
qemu-img convert -f vmdk -O qcow2 image.vmdk image.qcow2
qemu-img convert -f qcow2 -O raw image.qcow2 image.iso
qcow to VirtualBox (VDI):
qemu-img convert -O vdi Windows-7even.qcow2 Windows-7even.vdi
---------------------------------------------------------------
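For a growable disk instead of the raw dd image above, a qcow2 image can be created and used the same way (file names are placeholders):

qemu-img create -f qcow2 disk.qcow2 20G
qemu-system-x86_64 --enable-kvm -m 2048 -hda disk.qcow2 -cdrom your-favourite-os-install.iso -boot d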

+ QEMU options (March 6, 2018, 1:50 p.m.)

Display options:
-display sdl - Display video output via SDL (usually in a separate graphics window).
-display curses - Display video output via curses.
-display none - Do not display video output. This option is different from the -nographic option. See the man page for more information.
-display gtk - Display video output in a GTK window. This is probably the option most users are looking for.
-display vnc <X> - Start a VNC server on display X (accepts an argument (X) for the display number). Substitute X with the number of the display.
For example, to have QEMU send the display to a GTK window, add the -display gtk option to the command line.
----------------------------------------------------------------------
Processor:
-cpu <CPU> - Specify a processor architecture to emulate. To see a list of supported architectures, run: qemu-system-x86_64 -cpu ?
-cpu host - (Recommended) Emulate the host processor.
-smp <NUMBER> - Specify the number of cores the guest is permitted to use. The number can be higher than the available cores on the host system.
----------------------------------------------------------------------
RAM:
-m MEMORY - Specify the amount of memory (default: 128 MB). For instance: -m 256M (M stands for Megabyte, G for Gigabyte).
----------------------------------------------------------------------
Hard drive:
-hda IMAGE.img - Set a virtual hard drive and use the specified image file for it.
-drive - Advanced configuration of a virtual hard drive:
-drive file=IMAGE.img,if=virtio - Set a virtual VirtIO hard drive and use the specified image file for it.
-drive file=/dev/sdX#,cache=none,if=virtio - (Recommended) Set a virtual VirtIO hard drive and use the specified partition for it.
-drive id=disk,file=IMAGE.img,if=none -device ahci,id=ahci -device ide-drive,drive=disk,bus=ahci.0 - Set up an emulation layer for an ICH-9 AHCI controller (not yet stable) and use the specified image file for it. The AHCI emulation supports NCQ, so multiple read or write requests can be outstanding at the same time.
----------------------------------------------------------------------
Optical drives:
-cdrom IMAGE.iso - Set a virtual CDROM drive and use the specified image file for it.
-cdrom /dev/cdrom - Set a virtual CDROM drive and use the host drive for it.
-drive - Advanced configuration of a virtual CDROM drive:
-drive file=IMAGE.iso,media=cdrom - Set a virtual CDROM drive and use the specified image file for it. With this syntax you can set multiple drives.
----------------------------------------------------------------------
Boot order:
-boot c - Boot the first virtual hard drive.
-boot d - Boot the first virtual CD-ROM drive.
-boot n - Boot from the virtual network.
----------------------------------------------------------------------
Graphics card:
QEMU can emulate several graphics cards:
-vga cirrus - Simple graphics card. Every guest OS has a built-in driver.
-vga std - Supports resolutions >= 1280x1024x16. Linux, Windows XP and newer guests have a built-in driver.
-vga vmware - VMware SVGA-II, a more powerful graphics card. Install x11-drivers/xf86-video-vmware in Linux guests, VMware Tools in Windows XP and newer guests.
-vga qxl - A more powerful graphics card for use with SPICE. To get more performance, use the same color depth on your host as you use in the guest.
----------------------------------------------------------------------
USB:
-usbdevice tablet - (Recommended) Use a USB tablet instead of the default PS/2 mouse. Recommended because the tablet sends the mouse cursor's position so it matches the host mouse cursor.
-usbdevice host:VENDOR-ID:PRODUCT-ID - Pass-through of a host USB device to the virtual machine. Determine the device's vendor and product ID with lsusb, e.g.:
user $ lsusb
Bus 001 Device 006: ID 08ec:2039 M-Systems Flash Disk Pioneers
08ec is the vendor ID, 2039 is the product ID.
----------------------------------------------------------------------
Keyboard layout:
-k LAYOUT - Set the keyboard layout, e.g. de for German keyboards. Recommended for VNC connections.
----------------------------------------------------------------------
Snapshot:
-snapshot - Temporary snapshot: write all changes to temporary files instead of the hard drive image.
-hda OVERLAY.img - Overlay snapshot: write all changes to an overlay image instead of the hard drive image. The original image is kept unmodified. To create the overlay image:
user $ qemu-img create -f qcow2 -b ORIGINAL.img OVERLAY.img
----------------------------------------------------------------------
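Putting several of the options above together, a hedged example invocation (the image and ISO names are placeholders):

qemu-system-x86_64 \
  -enable-kvm \
  -cpu host -smp 2 -m 4G \
  -drive file=disk.qcow2,if=virtio \
  -cdrom install.iso -boot d \
  -vga qxl -display gtk \
  -usbdevice tablet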

+ Virsh - Errors (March 6, 2018, 1:36 p.m.)

Error - Unsupported machine type. Use -machine help to list supported machines.
1- qemu-system-x86_64 --machine help
2- virsh edit <the_machine>
(Make sure you're editing the file with "virsh edit".)
3- Change the machine type to one of the supported types listed in step 1:
<os>
  <type arch='x86_64' machine='pc-i440fx-2.9'>hvm</type>
</os>

+ KVM - Move Guest to Another Host (Jan. 27, 2018, 5:30 p.m.)

On the old host (kvm01):
1- Shut down the VM:
virsh shutdown vm
2- Dump its definition to /tmp/vm.xml and copy it over to the new host:
virsh dumpxml vm > /tmp/vm.xml
scp /tmp/vm.xml kvm02:/tmp/vm.xml
3- Copy the VM's image to the new host:
scp /var/lib/libvirt/images/vm.qcow2 kvm02:/var/lib/libvirt/images/vm.qcow2
4- Undefine the VM and delete its image:
virsh undefine vm
rm /var/lib/libvirt/images/vm.qcow2
----------------------------------------------------------------------
On the new host (kvm02):
1- Define the VM from /tmp/vm.xml:
virsh define /tmp/vm.xml
2- Start the VM:
virsh start vm
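The same steps wrapped in a small script for convenience; a sketch only, assuming password-less SSH to kvm02 and the default image path:

#!/bin/bash
VM=vm
DEST=kvm02
IMG=/var/lib/libvirt/images/${VM}.qcow2

virsh shutdown "$VM"
# wait until the guest is really off
while [ "$(virsh domstate "$VM")" != "shut off" ]; do sleep 5; done

virsh dumpxml "$VM" > /tmp/${VM}.xml
scp /tmp/${VM}.xml ${DEST}:/tmp/${VM}.xml
scp "$IMG" ${DEST}:"$IMG"

# define and start the guest on the new host
ssh ${DEST} "virsh define /tmp/${VM}.xml && virsh start ${VM}"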

+ Setting up Bridge Network (Jan. 27, 2018, 4:06 p.m.)

Between VM guests:
By default, QEMU uses macvtap in VEPA mode to provide NAT internet access or bridged access with other guests. Unfortunately, this setup does not let the host communicate with the guests.
--------------------------------------------------------------------------
Between VM host and guests:
To allow communication between the VM host and VM guests, you may set up a macvlan bridge on top of a dummy interface, similar to the example below. After the configuration, you can select interface dummy0 (macvtap) in bridged mode as the network configuration in the VM guests' configuration.

modprobe dummy
ip link add dummy0 type dummy
ip link add link dummy0 macvlan0 type macvlan mode bridge
ifconfig dummy0 up
ifconfig macvlan0 192.168.1.2 broadcast 192.168.1.255 netmask 255.255.255.0 up
--------------------------------------------------------------------------
Between VM host, guests, and the world:
In order to allow communication between the host, guests, and the outside world, you may set up a bridge as described on the QEMU page. (Mohsen: Look at the next tutorial for an easier way to create a bridge connection.)
For example, you may modify the network configuration file /etc/network/interfaces to set up the ethernet interface eth0 as a bridge interface br0, similar to the example below. After the configuration, you can select the bridge interface br0 as the network connection in the VM guests' configuration.

auto lo
iface lo inet loopback

# The primary network interface
auto eth0
# make sure we don't get addresses on our raw device
iface eth0 inet manual
iface eth0 inet6 manual

# set up bridge and give it a static ip
auto br0
iface br0 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    network 192.168.1.0
    broadcast 192.168.1.255
    gateway 192.168.1.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
    dns-nameservers 8.8.8.8

# allow autoconf for ipv6
iface br0 inet6 auto
    accept_ra 1
--------------------------------------------------------------------------
https://www.cyberciti.biz/faq/how-to-add-network-bridge-with-nmcli-networkmanager-on-linux/

nmcli con show
nmcli connection show --active
nmcli con add ifname br0 type bridge con-name br0
# In the next command, instead of eno1, write the DEVICE name from the previous command
nmcli con add type bridge-slave ifname eno1 master br0
nmcli connection show

Now turn on the bridge interface:
nmcli con down "Wired connection 2"
nmcli con up br0
# After about 10 seconds it will get connected
nmcli con show

# Delete br0 and its slave (in case you want to delete the connections)
nmcli connection delete br0
nmcli connection delete bridge-slave-eno1
--------------------------------------------------------------------------
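Whichever method is used, the resulting bridge can be checked like this (the br0/eno1 names follow the examples above):

brctl show br0            # from bridge-utils
ip link show master br0   # interfaces enslaved to the bridge
ip addr show br0          # the bridge's own IP address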

+ Virt Tools - Description (Jan. 27, 2018, 4:01 p.m.)

QEMU: QEMU is a generic and open source machine emulator and virtualizer. When used as an emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (e.g. your own x86_64 PC). When used as a virtualizer, QEMU achieves near native performance by executing the guest code directly on the host CPU using KVM. -------------------------------------------------------------------------- KVM: KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on hardware containing virtualization extensions. It consists of a loadable kernel module that exposes virtualization APIs to userspace for use by applications such as QEMU. -------------------------------------------------------------------------- Libvirt: Libvirt is a library and daemon providing a stable open source API for managing virtualization hosts. It targets multiple hypervisors including QEMU, KVM, LXC, Xen, OpenVZ, VMWare ESX, VirtualBox and more. -------------------------------------------------------------------------- Libguestfs: Libguestfs is a set of tools for accessing and modifying virtual machine (VM) disk images. You can use this for viewing and editing files inside guests, scripting changes to VMs, monitoring disk used / free statistics, creating guests, P2V, V2V, performing backups, cloning VMs, building VMs, formatting disks, resizing disks, and much more. -------------------------------------------------------------------------- virt-manager Virt-manager is a desktop user interface for managing virtual machines through libvirt. It primarily targets KVM VMs, but also manages Xen and LXC. It also includes the command line provisioning tool virt-install. -------------------------------------------------------------------------- libosinfo: Libosinfo provides a database of information about operating system releases to assist in optimally configuring hardware when deploying virtual machines. It includes a C library for querying information in the database, which is also accessible from any language supported by GObject Introspection. --------------------------------------------------------------------------

+ KVM (Jan. 27, 2018, 9:16 a.m.)

KVM stands for Kernel-based Virtual Machine. It is a module of the Linux kernel which allows a program to access and make use of the virtualization capabilities of modern processors by exposing the /dev/kvm interface.
-------------------------------------------------------------------
QEMU is the software which actually performs the OS emulation. It is an open source machine emulator and virtualizer which can use the acceleration feature provided by KVM when running an emulated machine with the same architecture as the host.
-------------------------------------------------------------------
Check whether the virtualization extension is enabled or not:
egrep -c '(vmx|svm)' /proc/cpuinfo
If the output of the above command is more than zero, virtualization technology is enabled on your system. If the output is zero, restart the system, go into the BIOS settings, and enable VT-x (Virtualization Technology Extension) for an Intel processor or AMD-V for an AMD processor.
-------------------------------------------------------------------
Verify whether your processor is Intel or AMD and supports hardware virtualization:
grep -E --color '(vmx|svm)' /proc/cpuinfo
If the output contains vmx, you have an Intel-based processor; svm confirms that it is an AMD processor.
-------------------------------------------------------------------
Finally, we have to start the libvirtd daemon. The following command both enables it at boot time and starts it immediately:
# systemctl enable --now libvirtd
-------------------------------------------------------------------

+ Installing Virtual Machines with virt (Jan. 24, 2018, 9:52 a.m.)

Install these packages:
apt install qemu-kvm libvirt-clients libvirt-daemon-system libosinfo-bin bridge-utils

adduser root libvirt
adduser root libvirt-qemu
-------------------------------------------------------------------
Create the new virtual machine:
virt-install --virt-type=kvm --name=debian-9 --vcpus=1 --memory=2048 --cdrom=/root/debian-9.3.0-amd64-netinst.iso --disk size=4 --os-variant debian9
-------------------------------------------------------------------
The --os-variant option is not mandatory, but it is highly recommended since it can improve the performance of the virtual machine. The option will try to fine-tune the guest for the specific OS version. If the option is not passed, the program will attempt to auto-detect the correct value from the installation media. To obtain a list of all supported systems you can run:
$ osinfo-query os
-------------------------------------------------------------------
If the path option is not specified, the disk will be created in $HOME/.local/share/libvirt/images if the command is executed as a normal user (member of the kvm group), or in /var/lib/libvirt/images if running it as root.
-------------------------------------------------------------------

+ Virsh (Aug. 26, 2017, 12:58 p.m.)

Managing guests with virsh:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Virtualization/chap-Virtualization-Managing_guests_with_virsh.html
-------------------------------------------------------------------------
Connecting to the hypervisor:
virsh connect {hostname OR URL}

Creating a virtual machine XML dump (configuration file):
virsh dumpxml {domain-id, domain-name or domain-uuid}
virsh dumpxml GuestID > guest.xml

Creating a guest from a configuration file:
virsh create configuration_file.xml

Editing a guest's configuration file:
virsh edit softwaretesting

Suspending a guest:
virsh suspend {domain-id, domain-name or domain-uuid}

Resuming a guest:
virsh resume {domain-id, domain-name or domain-uuid}

Saving a guest:
virsh save {domain-name, domain-id or domain-uuid} filename

Shutting down a guest:
virsh shutdown {domain-id, domain-name or domain-uuid}

Rebooting a guest:
virsh reboot {domain-id, domain-name or domain-uuid}

Forcing a guest to stop:
virsh destroy {domain-id, domain-name or domain-uuid}

Getting the domain ID of a guest:
virsh domid {domain-name or domain-uuid}

Getting the domain name of a guest:
virsh domname {domain-id or domain-uuid}

Getting the UUID of a guest:
virsh domuuid {domain-id or domain-name}

Displaying guest information:
virsh dominfo {domain-id, domain-name or domain-uuid}

Displaying host information:
virsh nodeinfo

Displaying the guests:
virsh list
The --inactive option lists inactive guests (that is, guests that have been defined but are not currently active), and the --all option lists all guests:
virsh list --all

Displaying virtual CPU information:
virsh vcpuinfo {domain-id, domain-name or domain-uuid}

Configuring virtual CPU affinity:
virsh vcpupin domain-id vcpu cpulist

Configuring virtual CPU count:
virsh setvcpus {domain-name, domain-id or domain-uuid} count

Configuring memory allocation:
virsh setmem {domain-id or domain-name} count

Displaying guest block device information:
virsh domblkstat GuestName block-device

Displaying guest network device information:
virsh domifstat GuestName interface-device

Migrating guests with virsh:
virsh migrate --live GuestName DestinationURL

Managing virtual networks:
virsh net-list
This command generates output similar to:
# virsh net-list
Name        State    Autostart
default     active   yes
vnet1       active   yes
vnet2       active   yes

To view network information for a specific virtual network:
# virsh net-dumpxml NetworkName

Other virsh commands used in managing virtual networks are:
virsh net-autostart network-name - autostarts the network specified as network-name.
virsh net-create XMLfile - generates and starts a new network using an existing XML file.
virsh net-define XMLfile - generates a new network device from an existing XML file without starting it.
virsh net-destroy network-name - destroys the network specified as network-name.
virsh net-name networkUUID - converts the specified networkUUID to a network name.
virsh net-uuid network-name - converts the specified network-name to a network UUID.
virsh net-start nameOfInactiveNetwork - starts an inactive network.
virsh net-undefine nameOfInactiveNetwork - removes the definition of an inactive network.
-------------------------------------------------------------------------
Guest management commands:
help          Prints basic help information.
list          Lists all guests.
dumpxml       Outputs the XML configuration file for the guest.
create        Creates a guest from an XML configuration file and starts the new guest.
start         Starts an inactive guest.
destroy       Forces a guest to stop.
define        Defines a guest from an XML configuration file without starting it.
domid         Displays the guest's ID.
domuuid       Displays the guest's UUID.
dominfo       Displays guest information.
domname       Displays the guest's name.
domstate      Displays the state of a guest.
quit          Quits the interactive terminal.
reboot        Reboots a guest.
restore       Restores a previously saved guest stored in a file.
resume        Resumes a paused guest.
save          Saves the present state of a guest to a file.
shutdown      Gracefully shuts down a guest.
suspend       Pauses a guest.
undefine      Deletes all files associated with a guest.
migrate       Migrates a guest to another host.
-------------------------------------------------------------------------
Resource management options:
setmem            Sets the allocated memory for a guest.
setmaxmem         Sets the maximum memory limit for the hypervisor.
setvcpus          Changes the number of virtual CPUs assigned to a guest. Note that this feature is unsupported in Red Hat Enterprise Linux 5.
vcpuinfo          Displays virtual CPU information about a guest.
vcpupin           Controls the virtual CPU affinity of a guest.
domblkstat        Displays block device statistics for a running guest.
domifstat         Displays network interface statistics for a running guest.
attach-device     Attaches a device to a guest, using a device definition in an XML file.
attach-disk       Attaches a new disk device to a guest.
attach-interface  Attaches a new network interface to a guest.
detach-device     Detaches a device from a guest; takes the same kind of XML descriptions as the attach-device command.
detach-disk       Detaches a disk device from a guest.
detach-interface  Detaches a network interface from a guest.
domxml-from-native  Converts from a native guest configuration format to domain XML format. See the virsh man page for more details.
domxml-to-native    Converts from domain XML format to a native guest configuration format. See the virsh man page for more details.
-------------------------------------------------------------------------

+ SPICE (April 29, 2017, 11:51 a.m.)

What is SPICE?
SPICE (Simple Protocol for Independent Computing Environments) is a communication protocol for virtual environments. It allows users to see the console of virtual machines (VMs) from anywhere via the Internet. It is a client-server model in which Virtualization Station acts as the host and users connect to VMs via the SPICE client.
--------------------------------------------------------------------
remote-viewer spice://srv1:5908
remote-viewer "spice://srv1:5901?password=1362913207771306286"
--------------------------------------------------------------------
SPICE tools:
https://www.spice-space.org/download.html
--------------------------------------------------------------------
To compile the SPICE agent on Linux, download the agent from the following link:
https://www.spice-space.org/download/releases/spice-vdagent-0.17.0.tar.bz2
Install the following packages:
1- apt install libglib2.0-dev libdrm-dev sudo libxxf86vm-dev libxt-dev xutils-dev flex bison xcb libx11-xcb-dev libxcb-glx0 libxcb-glx0-dev xorg-dev libxcb-dri2-0-dev libasound2-dev libdbus-1-dev
2- Extract the already downloaded agent file, and:
./configure
make
sudo make install
--------------------------------------------------------------------
SPICE client on Ubuntu:
1- sudo apt install spice-vdagent
2- Create a file /etc/default/spice-vdagentd with the value:
SPICE_VDAGENTD_EXTRA_ARGS=-X
--------------------------------------------------------------------

+ Virt-Manager (Jan. 9, 2017, 4:35 p.m.)

apt install virt-manager virtinst python3-libvirt qemu virt-viewer bridge-utils
-----------------------------------------------------------------
Virsh:
You can use the virsh application to manage virtual machines. This utility is built around the libvirt management API and operates as an alternative to the xm tool or the graphical Virtual Machine Manager. virsh is a command line interface that can be used to create, destroy, stop, start, and edit virtual machines and configure the virtual environment (such as virtual networks).
-----------------------------------------------------------------
virt-install is a command line tool that simplifies the process of creating a virtual machine.
-----------------------------------------------------------------
virt-manager is a GUI that can be used to create, destroy, stop, start, and edit virtual machines and configure the virtual environment (such as virtual networks).
-----------------------------------------------------------------
Resize disk:
https://fatmin.com/2016/12/20/how-to-resize-a-qcow2-image-and-filesystem-with-virt-resize/
-----------------------------------------------------------------