π₯οΈTopic 351: Full Virtualization

π§ 351.1 Virtualization Concepts and Theory
Weight: 6
Description: Candidates should know and understand the general concepts, theory and terminology of virtualization. This includes Xen, QEMU and libvirt terminology.
Key Knowledge Areas:
π₯οΈ Understand virtualization terminology
βοΈ Understand the pros and cons of virtualization
π οΈ Understand the various variations of Hypervisors and Virtual Machine Monitors
π Understand the major aspects of migrating physical to virtual machines
π Understand the major aspects of migrating virtual machines between host systems
πΈ Understand the features and implications of virtualization for a virtual machine, such as snapshotting, pausing, cloning and resource limits
π Awareness of oVirt, Proxmox, systemd-machined and VirtualBox
π Awareness of Open vSwitch
π 351.1 Cited Objects
π₯οΈ Hypervisors
π’ Type 1 Hypervisor (Bare-Metal Hypervisor)
π Type 1 Definition
Runs directly on the host's physical hardware, providing a base layer to manage VMs without the need for a host operating system.
π Type 1 Characteristics
β‘ High performance and efficiency.
β±οΈ Lower latency and overhead.
π’ Often used in enterprise environments and data centers.
π‘ Type 1 Examples
VMware ESXi: A robust and widely used hypervisor in enterprise settings.
Microsoft Hyper-V: Integrated with Windows Server, offering strong performance and management features.
Xen: An open-source hypervisor used by many cloud service providers.
KVM (Kernel-based Virtual Machine): Integrated into the Linux kernel, providing high performance for Linux-based systems.
π Type 2 Hypervisor (Hosted Hypervisor)
π Type 2 Definition
Runs on top of a conventional operating system, relying on the host OS for resource management and device support.
π Type 2 Characteristics
π οΈ Easier to set up and use, especially on personal computers.
π§ More flexible for development, testing, and smaller-scale deployments.
π’ Typically less efficient than Type 1 hypervisors due to additional overhead from the host OS.
π‘ Type 2 Examples
VMware Workstation: A powerful hypervisor for running multiple operating systems on a single desktop.
Oracle VirtualBox: An open-source hypervisor known for its flexibility and ease of use.
Parallels Desktop: Designed for Mac users to run Windows and other operating systems alongside macOS.
QEMU (Quick EMUlator): An open-source emulator and virtualizer, often used in conjunction with KVM.
βοΈ Key Differences Between Type 1 and Type 2 Hypervisors
Deployment Environment:
Type 1 hypervisors are commonly deployed in data centers and enterprise environments due to their direct interaction with hardware and high performance.
Type 2 hypervisors are more suitable for personal use, development, testing, and small-scale virtualization tasks.
Performance:
Type 1 hypervisors generally offer better performance and lower latency because they do not rely on a host OS.
Type 2 hypervisors may experience some performance degradation due to the overhead of running on top of a host OS.
Management and Ease of Use:
Type 1 hypervisors require more complex setup and management but provide advanced features and scalability for large-scale deployments.
Type 2 hypervisors are easier to install and use, making them ideal for individual users and smaller projects.
π Migration Types
In the context of hypervisors, which are technologies used to create and manage virtual machines, the terms P2V migration and V2V migration are common in virtualization environments. They refer to processes of migrating systems between different types of platforms.
π₯οΈβ‘οΈπ₯οΈ P2V - Physical to Virtual Migration
P2V migration refers to the process of migrating a physical server to a virtual machine. In other words, an operating system and its applications, running on dedicated physical hardware, are "converted" and moved to a virtual machine that runs on a hypervisor (such as VMware, Hyper-V or KVM).
Example: You have a physical server running a Windows or Linux system, and you want to move it to a virtual environment, like a cloud infrastructure or an internal virtualization server. The process involves copying the entire system state, including the operating system, drivers, and data, to create an equivalent virtual machine that can run as if it were on the physical hardware.
π₯οΈππ₯οΈ V2V - Virtual to Virtual Migration
V2V migration refers to the process of migrating a virtual machine from one hypervisor to another. In this case, you already have a virtual machine running in a virtualized environment (like VMware), and you want to move it to another virtualized environment (for example, to Hyper-V or to a new VMware server).
Example: You have a virtual machine running on a VMware virtualization server, but you decide to migrate it to a Hyper-V platform. In this case, the V2V migration converts the virtual machine from one format or hypervisor to another, ensuring it can continue running correctly.
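In practice these migrations are often performed with the virt-v2v and virt-p2v tools from the libguestfs project. A hedged sketch (the input file and output options below are placeholders):

```shell
# V2V: convert a guest exported from VMware (as an OVA) so it runs
# under KVM/libvirt. "exported-guest.ova" is a placeholder file name.
virt-v2v -i ova exported-guest.ova -o libvirt -os default

# P2V: virt-p2v is booted on the physical machine itself (from live media)
# and streams its disks over the network to a server running virt-v2v,
# which performs the actual conversion.
```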
π§© HVM and Paravirtualization
βοΈ Hardware-assisted Virtualization (HVM)
π HVM Definition
HVM leverages hardware extensions provided by modern CPUs to virtualize hardware, enabling the creation and management of VMs with minimal performance overhead.
π HVM Key Characteristics
π₯οΈ Hardware Support: Requires CPU support for virtualization extensions such as Intel VT-x or AMD-V.
π οΈ Full Virtualization: VMs can run unmodified guest operating systems, as the hypervisor provides a complete emulation of the hardware environment.
β‘ Performance: Typically offers near-native performance because of direct execution of guest code on the CPU.
π Isolation: Provides strong isolation between VMs since each VM operates as if it has its own dedicated hardware.
π‘ HVM Examples
VMware ESXi, Microsoft Hyper-V, KVM (Kernel-based Virtual Machine).
β HVM Advantages
β Compatibility: Can run any operating system without modification.
β‘ Performance: High performance due to hardware support.
π Security: Enhanced isolation and security features provided by hardware.
β HVM Disadvantages
π οΈ Hardware Dependency: Requires specific hardware features, limiting compatibility with older systems.
π§ Complexity: May involve more complex configuration and management.
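Whether a given host can run HVM guests can be verified from Linux by checking the CPU flags (a quick sketch; vmx indicates Intel VT-x, svm indicates AMD-V):

```shell
# Count logical CPUs advertising hardware virtualization extensions.
# A result of 0 means HVM guests cannot run on this host.
grep -Ec '(vmx|svm)' /proc/cpuinfo

# On Debian/Ubuntu, the cpu-checker package offers a friendlier check:
kvm-ok
```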
π§© Paravirtualization
π Paravirtualization Definition
Paravirtualization involves modifying the guest operating system to be aware of the virtual environment, allowing it to interact more efficiently with the hypervisor.
π Paravirtualization Key Characteristics
π οΈ Guest Modification: Requires changes to the guest operating system to communicate directly with the hypervisor using hypercalls.
β‘ Performance: Can be more efficient than traditional full virtualization because it reduces the overhead associated with emulating hardware.
π Compatibility: Limited to operating systems that have been modified for paravirtualization.
π‘ Paravirtualization Examples
Xen with paravirtualized guests, VMware tools in certain configurations, and some KVM configurations.
β Paravirtualization Advantages
β‘ Efficiency: Reduces the overhead of virtualizing hardware, potentially offering better performance for certain workloads.
β Resource Utilization: More efficient use of system resources due to direct communication between the guest OS and hypervisor.
β Paravirtualization Disadvantages
π οΈ Guest OS Modification: Requires modifications to the guest OS, limiting compatibility to supported operating systems.
π§ Complexity: Requires additional complexity in the guest OS for hypercall implementations.
βοΈ Key Differences
π₯οΈ Guest OS Requirements
HVM: Can run unmodified guest operating systems.
Paravirtualization: Requires guest operating systems to be modified to work with the hypervisor.
β‘ Performance
HVM: Typically provides near-native performance due to hardware-assisted execution.
Paravirtualization: Can offer efficient performance by reducing the overhead of hardware emulation, but relies on modified guest OS.
π§° Hardware Dependency
HVM: Requires specific CPU features (Intel VT-x, AMD-V).
Paravirtualization: Does not require specific CPU features but needs modified guest OS.
π Isolation
HVM: Provides strong isolation using hardware features.
Paravirtualization: Relies on software-based isolation, which may not be as robust as hardware-based isolation.
π§© Complexity
HVM: Generally more straightforward to deploy since it supports unmodified OS.
Paravirtualization: Requires additional setup and modifications to the guest OS, increasing complexity.
π§ NUMA (Non-Uniform Memory Access)
NUMA (Non-Uniform Memory Access) is a memory architecture used in multiprocessor systems to optimize memory access by processors. In a NUMA system, memory is distributed unevenly among processors, meaning that each processor has faster access to a portion of memory (its "local memory") than to memory that is physically further away (referred to as "remote memory") and associated with other processors.
π Key Features of NUMA Architecture
Local and Remote Memory: Each processor has its own local memory, which it can access more quickly. However, it can also access the memory of other processors, although this takes longer.
Differentiated Latency: The latency of memory access varies depending on whether the processor is accessing its local memory or the memory of another node. Local memory access is faster, while accessing another node's memory (remote) is slower.
Scalability: NUMA architecture is designed to improve scalability in systems with many processors. As more processors are added, memory is also distributed, avoiding the bottleneck that would occur in a uniform memory access (UMA) architecture.
β Advantages of NUMA
β‘ Better Performance in Large Systems: Since each processor has local memory, it can work more efficiently without competing as much with other processors for memory access.
π Scalability: NUMA allows systems with many processors and large amounts of memory to scale more effectively compared to a UMA architecture.
β Disadvantages
π οΈ Programming Complexity: Programmers need to be aware of which regions of memory are local or remote, optimizing the use of local memory to achieve better performance.
π’ Potential Performance Penalties: If a processor frequently accesses remote memory, performance may suffer due to the higher latency.
NUMA is common in high-performance multiprocessor systems, such as servers and supercomputers, where scalability and memory optimization are critical.
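On Linux, the NUMA topology and placement policy can be inspected and controlled with numactl (a sketch; ./my-workload is a placeholder binary):

```shell
# Show NUMA nodes, their CPUs, memory sizes and inter-node distances
numactl --hardware

# Run a process bound to node 0's CPUs, allocating its memory on node 0 only
numactl --cpunodebind=0 --membind=0 ./my-workload
```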
π Open Source Solutions
π oVirt: https://www.ovirt.org/
π Oracle VirtualBox: https://www.virtualbox.org/
π Open vSwitch: https://www.openvswitch.org/
ποΈ Types of Virtualization
π₯οΈ Hardware Virtualization (Server Virtualization)
π HV Definition
Abstracts physical hardware to create virtual machines (VMs) that run separate operating systems and applications.
π οΈ HV Use Cases
Data centers, cloud computing, server consolidation.
π‘ HV Examples
VMware ESXi, Microsoft Hyper-V, KVM.
π¦ Operating System Virtualization (containerization)
π containerization Definition
Allows multiple isolated user-space instances (containers) to run on a single OS kernel.
π οΈ containerization Use Cases
Microservices architecture, development and testing environments.
π‘ containerization Examples
Docker, Kubernetes, LXC.
π Network Virtualization
π Network Virtualization Definition
Combines hardware and software network resources into a single, software-based administrative entity.
π οΈ Network Virtualization Use Cases
Software-defined networking (SDN), network function virtualization (NFV).
π‘ Network Virtualization Examples
VMware NSX, Cisco ACI, OpenStack Neutron.
πΎ Storage Virtualization
π Storage Virtualization Definition
Pools physical storage from multiple devices into a single virtual storage unit that can be managed centrally.
π οΈ Storage Virtualization Use Cases
Data management, storage optimization, disaster recovery.
π‘ Storage Virtualization Examples
IBM SAN Volume Controller, VMware vSAN, NetApp ONTAP.
π₯οΈ Desktop Virtualization
π Desktop Virtualization Definition
Allows a desktop operating system to run on a virtual machine hosted on a server.
π οΈ Desktop Virtualization Use Cases
Virtual desktop infrastructure (VDI), remote work solutions.
π‘ Desktop Virtualization Examples
Citrix Virtual Apps and Desktops, VMware Horizon, Microsoft Remote Desktop Services.
π± Application Virtualization
π Application Virtualization Definition
Separates applications from the underlying hardware and operating system, allowing them to run in isolated environments.
π οΈ Application Virtualization Use Cases
Simplified application deployment, compatibility testing.
π‘ Application Virtualization Examples
VMware ThinApp, Microsoft App-V, Citrix XenApp.
ποΈ Data Virtualization
π Data Virtualization Definition
Integrates data from various sources without physically consolidating it, providing a unified view for analysis and reporting.
π οΈ Data Virtualization Use Cases
Business intelligence, real-time data integration.
π‘ Data Virtualization Examples
Denodo, Red Hat JBoss Data Virtualization, IBM InfoSphere.
π Benefits of Virtualization
β‘ Resource Efficiency: Better utilization of physical resources.
π° Cost Savings: Reduced hardware and operational costs.
π Scalability: Easy to scale up or down according to demand.
π§ Flexibility: Supports a variety of workloads and applications.
π Disaster Recovery: Simplified backup and recovery processes.
π Isolation: Improved security through isolation of environments.
Emulation
Emulation involves simulating the behavior of hardware or software on a different platform than originally intended.
This process allows software designed for one system to run on another system that may have different architecture or operating environment.
While emulation provides versatility by enabling the execution of unmodified guest operating systems or applications, it often comes with performance overhead.
This overhead arises because the emulated system needs to interpret and translate instructions meant for the original system into instructions compatible with the host system. As a result, emulation can be slower than native execution, making it less efficient for resource-intensive tasks.
Despite this drawback, emulation remains valuable for running legacy software, testing applications across different platforms, and facilitating cross-platform development.
systemd-machined
The systemd-machined service is dedicated to managing virtual machines and containers within the systemd ecosystem. It provides essential functionalities for controlling, monitoring, and maintaining virtual instances, offering robust integration and efficiency within Linux environments.
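Registered machines are inspected and controlled with machinectl (a sketch; "vm1" is a placeholder machine name):

```shell
# List VMs and containers currently registered with systemd-machined
machinectl list

# Inspect, enter and stop a machine named "vm1"
machinectl status vm1
machinectl shell vm1
machinectl terminate vm1
```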
π§ 351.2 Xen


Weight: 3
Description: Candidates should be able to install, configure, maintain, migrate and troubleshoot Xen installations. The focus is on Xen version 4.x.
Key Knowledge Areas:
Understand architecture of Xen, including networking and storage
Basic configuration of Xen nodes and domains
Basic management of Xen nodes and domains
Basic troubleshooting of Xen installations
Awareness of XAPI
Awareness of XenStore
Awareness of Xen Boot Parameters
Awareness of the xm utility
π§ Xen

Xen is an open-source type-1 (bare-metal) hypervisor that allows multiple operating systems to run concurrently on the same physical hardware. Xen provides a layer between the physical hardware and virtual machines (VMs), enabling efficient resource sharing and isolation.
Architecture: Xen operates with a two-tier system where Domain 0 (Dom0) is the privileged domain with direct hardware access and manages the hypervisor. Other virtual machines, called Domain U (DomU), run guest operating systems and are managed by Dom0.
Types of Virtualization: Xen supports both paravirtualization (PV), which requires modified guest OS, and hardware-assisted virtualization (HVM), which uses hardware extensions (e.g., Intel VT-x or AMD-V) to run unmodified guest operating systems. Xen is widely used in cloud environments, notably by Amazon Web Services (AWS) and other large-scale cloud providers.
π’ XenSource
XenSource was the company founded by the original developers of the Xen hypervisor at the University of Cambridge to commercialize Xen. The company provided enterprise solutions based on Xen and offered additional tools and support to enhance Xen's capabilities for enterprise use.
Acquisition by Citrix: In 2007, XenSource was acquired by Citrix Systems, Inc. Citrix used Xen technology as the foundation for its Citrix XenServer product, which became a popular enterprise-grade virtualization platform based on Xen.
Transition: After the acquisition, the Xen project continued as an open-source project, while Citrix focused on commercial offerings like XenServer, leveraging XenSource technology.
π Xen Project
Xen Project refers to the open-source community and initiative responsible for developing and maintaining the Xen hypervisor after its commercialization. The Xen Project operates under the Linux Foundation, with a focus on building, improving, and supporting Xen as a collaborative, community-driven effort.
Goals: The Xen Project aims to advance the hypervisor by improving its performance, security, and feature set for a wide range of use cases, including cloud computing, security-focused virtualization (e.g., Qubes OS), and embedded systems.
Contributors: The project includes contributors from various organizations, including major cloud providers, hardware vendors, and independent developers.
XAPI and XenTools: The Xen Project also includes tools such as XAPI (XenAPI), which is used for managing Xen hypervisor installations, and various other utilities for system management and optimization.
ποΈ XenStore
Xen Store is a critical component of the Xen Hypervisor. Essentially, Xen Store is a distributed key-value database used for communication and information sharing between the Xen hypervisor and the virtual machines (also known as domains) it manages.
Here are some key aspects of Xen Store:
Inter-Domain Communication: Xen Store enables communication between domains, such as Dom0 (the privileged domain that controls hardware resources) and DomUs (user domains, which are the VMs). This is done through key-value entries, where each domain can read or write information.
Configuration Management: It is used to store and access configuration information, such as virtual devices, networking, and boot parameters. This facilitates the dynamic management and configuration of VMs.
Events and Notifications: Xen Store also supports event notifications. When a particular key or value in the Xen Store is modified, interested domains can be notified to react to these changes. This is useful for monitoring and managing resources.
Simple API: Xen Store provides a simple API for reading and writing data, making it easy for developers to integrate their applications with the Xen virtualization system.
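From Dom0, the XenStore can be inspected and modified with the xenstore-* utilities (a sketch; the paths shown are typical per-domain entries):

```shell
# Dump the entire XenStore tree
xenstore-ls

# Read and write individual keys
xenstore-read /local/domain/0/name
xenstore-write /local/domain/0/data/example "hello"
```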
π XAPI
XAPI, or XenAPI, is the application programming interface (API) used to manage the Xen Hypervisor and its virtual machines (VMs). XAPI is a key component of XenServer (now known as Citrix Hypervisor) and provides a standardized way to interact with the Xen hypervisor to perform operations such as creating, configuring, monitoring, and controlling VMs.
Here are some important aspects of XAPI:
VM Management: XAPI allows administrators to programmatically create, delete, start, and stop virtual machines.
Automation: With XAPI, it's possible to automate the management of virtual resources, including networking, storage, and computing, which is crucial for large cloud environments.
Integration: XAPI can be integrated with other tools and scripts to provide more efficient and customized administration of the Xen environment.
Access Control: XAPI also provides access control mechanisms to ensure that only authorized users can perform specific operations in the virtual environment.
XAPI is the interface that enables control and automation of the Xen Hypervisor, making it easier to manage virtualized environments.
π Xen Summary
Xen: The core hypervisor technology enabling virtual machines to run on physical hardware.
XenSource: The company that commercialized Xen, later acquired by Citrix, leading to the development of Citrix XenServer.
Xen Project: The open-source initiative and community that continues to develop and maintain the Xen hypervisor under the Linux Foundation.
XenStore: The communication and configuration intermediary between the Xen hypervisor and the VMs, streamlining the operation and management of virtualized environments.
XAPI: The interface that enables control and automation of the Xen hypervisor, making it easier to manage virtualized environments.
π₯οΈ Domain0 (Dom0)
Domain0, or Dom0, is the control domain in a Xen architecture. It manages other domains (DomUs) and has direct access to hardware. Dom0 runs device drivers, allowing DomUs, which lack direct hardware access, to communicate with devices. Typically, it is a full instance of an operating system, like Linux, and is essential for Xen hypervisor operation.
π» DomainU (DomU)
DomUs are non-privileged domains that run virtual machines. They are managed by Dom0 and do not have direct access to hardware. DomUs can be configured to run different operating systems and are used for various purposes, such as application servers and development environments. They rely on Dom0 for hardware interaction.
π§© PV-DomU (Paravirtualized DomainU)
PV-DomUs use a technique called paravirtualization. In this model, the DomU operating system is modified to be aware that it runs in a virtualized environment, allowing it to communicate directly with the hypervisor for optimized performance. This results in lower overhead and better efficiency compared to full virtualization.
βοΈ HVM-DomU (Hardware Virtual Machine DomainU)
HVM-DomUs are virtual machines that utilize full virtualization, allowing unmodified operating systems to run. The Xen hypervisor provides hardware emulation for these DomUs, enabling them to run any operating system that supports the underlying hardware architecture. While this offers greater flexibility, it can result in higher overhead compared to PV-DomUs.
π Xen Network
Paravirtualized Network Devices 
Bridging 
π 351.2 Cited Objects
π 351.2 Notes
vif
In Xen, "vif" stands for Virtual Interface and is used to configure networking for virtual machines (domains).
By specifying "vif" directives in the domain configuration files, administrators can define network interfaces, assign IP addresses, set up VLANs, and configure other networking parameters for virtual machines running on Xen hosts. For example, vif = ['bridge=xenbr0'] connects the VM's network interface to the Xen bridge named xenbr0.
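A minimal xl domain configuration illustrating the vif directive might look like this (all values are examples):

```
# /etc/xen/example.cfg -- minimal HVM domain (all values are examples)
name   = "example"
type   = "hvm"
memory = 2048
vcpus  = 2
disk   = ['file:/var/lib/xen/images/example.img,xvda,w']
vif    = ['bridge=xenbr0,mac=00:16:3e:00:00:01']
```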
Xen Lab
Use this script for lab provisioning: \xen.sh
π» 351.2 Important Commands
ποΈ xen-create-image
π xen-list-images
β xen-delete-image
ποΈ xenstore-ls
βοΈ xl
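Day-to-day domain management is done with xl, run in Dom0 (a sketch; "example" is a placeholder domain name):

```shell
xl list                         # show running domains and their state
xl create /etc/xen/example.cfg  # start a domain from its config file
xl console example              # attach to its console (Ctrl-] to detach)
xl pause example
xl unpause example
xl shutdown example             # graceful shutdown
xl destroy example              # immediate hard power-off
```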
π₯οΈ 351.3 QEMU

Weight: 4
Description: Candidates should be able to install, configure, maintain, migrate and troubleshoot QEMU installations.
Key Knowledge Areas:
Understand the architecture of QEMU, including KVM, networking and storage
Start QEMU instances from the command line
Manage snapshots using the QEMU monitor
Install the QEMU Guest Agent and VirtIO device drivers
Troubleshoot QEMU installations, including networking and storage
Awareness of important QEMU configuration parameters
π 351.3 Cited Objects
π οΈ 351.3 Important Commands
π 351.3 Other Commands
π§ͺ check kvm module
π ip
π brctl
πΎ qemu-img
π₯οΈ qemu-system-x86_64
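A typical workflow creates a disk with qemu-img and boots it with qemu-system-x86_64 (a sketch; file names and sizes are examples):

```shell
# Create a 20 GiB qcow2 image (thin-provisioned, grows on demand) and inspect it
qemu-img create -f qcow2 disk0.qcow2 20G
qemu-img info disk0.qcow2

# Boot an installer ISO with KVM acceleration, virtio disk and network,
# and the QEMU monitor available on stdio
qemu-system-x86_64 \
    -enable-kvm \
    -m 2048 -smp 2 \
    -drive file=disk0.qcow2,if=virtio \
    -cdrom install.iso \
    -nic user,model=virtio-net-pci \
    -monitor stdio
```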
π₯οΈ QEMU Monitor
To start the QEMU monitor on the command line, pass the -monitor stdio parameter to qemu-system-x86_64.
Exit the QEMU monitor with the quit command.
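A few common monitor commands, typed at the (qemu) prompt (savevm/loadvm require a qcow2 disk):

```
(qemu) info status      # running or paused
(qemu) stop             # pause the guest
(qemu) cont             # resume the guest
(qemu) savevm snap1     # create internal snapshot "snap1"
(qemu) info snapshots   # list snapshots
(qemu) loadvm snap1     # revert to "snap1"
(qemu) quit             # terminate QEMU
```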
π€ Guest Agent
To enable the QEMU Guest Agent, add a virtio-serial channel named org.qemu.guest_agent.0 to the VM and install the agent inside the guest.
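A hedged host/guest sketch (the socket path and package manager vary by distribution):

```shell
# Host side: add these options to the VM's qemu-system-x86_64 command line
# to expose a virtio-serial channel named org.qemu.guest_agent.0
#   -chardev socket,path=/tmp/qga.sock,server=on,wait=off,id=qga0
#   -device virtio-serial
#   -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0

# Guest side (Linux): install and start the agent
apt-get install qemu-guest-agent    # or: dnf install qemu-guest-agent
systemctl enable --now qemu-guest-agent
```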
π’ 351.4 Libvirt Virtual Machine Management


Weight: 9
Description: Candidates should be able to manage virtualization hosts and virtual machines ("libvirt domains") using libvirt and related tools.
Key Knowledge Areas:
Understand the architecture of libvirt
Manage libvirt connections and nodes
Create and manage QEMU and Xen domains, including snapshots
Manage and analyze resource consumption of domains
Create and manage storage pools and volumes
Create and manage virtual networks
Migrate domains between nodes
Understand how libvirt interacts with Xen and QEMU
Understand how libvirt interacts with network services such as dnsmasq and radvd
Understand libvirt XML configuration files
Awareness of virtlogd and virtlockd
π 351.4 Cited Objects
π οΈ 351.4 Important Commands
π₯οΈ virsh
ποΈ virt-install
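A hedged sketch of creating and managing a KVM domain with these tools (names, sizes and paths are examples):

```shell
# Define and start a new guest from an installer ISO
virt-install \
    --name demo \
    --memory 2048 --vcpus 2 \
    --disk size=20,format=qcow2 \
    --cdrom /var/lib/libvirt/images/install.iso \
    --os-variant generic \
    --network network=default

# Everyday virsh operations on the "demo" domain
virsh list --all
virsh start demo
virsh snapshot-create-as demo snap1
virsh shutdown demo
virsh undefine demo --remove-all-storage
```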
πΎ 351.5 Virtual Machine Disk Image Management

Weight: 3
Description: Candidates should be able to manage virtual machines disk images. This includes converting disk images between various formats and hypervisors and accessing data stored within an image.
Key Knowledge Areas:
Understand features of various virtual disk image formats, such as raw images, qcow2 and VMDK
Manage virtual machine disk images using qemu-img
Mount partitions and access files contained in virtual machine disk images using libguestfs
Copy physical disk content to a virtual machine disk image
Migrate disk content between various virtual machine disk image formats
Awareness of Open Virtualization Format (OVF)
π 351.5 Cited Objects
π οΈ 351.5 Important Commands
πΎ 351.5.1 qemu-img
π guestfish
ποΈ guestmount
ποΈ guestunmount
π virt-df
ποΈ virt-filesystems
π virt-inspector
π± virt-cat
π virt-diff
π§Ή virt-sparsify
π virt-resize
π₯ virt-copy-in
π€ virt-copy-out
π virt-ls
π virt-rescue
π§° virt-sysprep
π virt-v2v
π virt-p2v
π½ virt-p2v-make-disk
π 351.5 Notes
π¦ OVF: Open Virtualization Format
OVF: An open format that defines a standard for packaging and distributing virtual machines across different environments.
The generated package has the .ova extension and contains the following files:
.ovf: XML file with metadata defining the virtual machine environment
Image files: .vmdk, .vhd, .vhdx, .qcow2, .raw
Additional files: metadata, snapshots, configuration, hash
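Since an .ova package is a plain tar archive, it can be inspected and its disks converted with standard tools (a sketch; "appliance.ova" is a placeholder):

```shell
# List and unpack the package: expect the .ovf descriptor, disks and a manifest
tar -tf appliance.ova
tar -xf appliance.ova

# Convert an extracted VMDK disk to qcow2 for use with QEMU/KVM
qemu-img convert -f vmdk -O qcow2 appliance-disk1.vmdk appliance.qcow2
```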