What is virtualisation?

Virtualisation abstracts hardware, software, storage and networks from their physical form, making it possible to use these IT resources more flexibly and efficiently.


What does virtualisation mean?

Virtualisation is the process of creating a virtual version of physical computing components with the aim of distributing these flexibly and in line with demand. This ensures better utilisation of resources. Both hardware and software components can be virtualised. An IT component created as part of virtualisation technology is called a virtual or logical component and can be used in the same way as its physical equivalent.

One of the main advantages of virtualisation is the abstraction layer between the physical resource and the virtual image. This forms the foundation of various cloud services that are growing increasingly vital in daily business operations. It’s important to differentiate virtualisation from the (often very similar) concepts of simulation and emulation.

What’s the difference between virtualisation, simulation and emulation?

Anyone who deals with virtualisation technology will inevitably come across the terms simulation and emulation. These terms are often used synonymously, but they differ both from each other and from the concept of virtualisation.

  • Simulation: Simulation is the complete replication of a system in software. Here, ‘complete’ means imitating not only the functionality that interacts with other systems, but also all system components and their internal logic. Simulators are used to run programs on a system they weren’t originally designed for and to analyse how they behave on it.

  • Emulation: While simulation aims to replicate systems, emulation provides the functionality of hardware or software components, but not their internal logic. The aim of emulation is to achieve the same results with a simulated system that are achieved with its real counterpart. In contrast to a simulator, an emulator can replace the system completely.
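The idea behind emulation, reproducing a system’s observable behaviour rather than its internal construction, can be sketched with a toy example (a made-up two-instruction machine, not any real processor):

```python
# Toy emulator for a made-up stack machine: it reproduces the machine's
# observable results, not its internal circuitry. Illustrative only.

def emulate(program):
    """Run a list of ('PUSH', n) / ('ADD',) instructions, return the stack."""
    stack = []
    for instr in program:
        if instr[0] == "PUSH":
            stack.append(instr[1])
        elif instr[0] == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack

print(emulate([("PUSH", 2), ("PUSH", 3), ("ADD",)]))  # [5]
```

A real emulator (for example, for a legacy games console) follows the same principle at a much larger scale: as long as every instruction produces the same results as the original hardware, the emulator can replace it completely.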

Simulators and emulators are used in three scenarios:

  • Simulation of a hardware environment so that an operating system can be run on a processor platform that it wasn’t originally developed for
  • Simulation of an operating system so that applications can be executed that were written for other systems
  • Simulation of a hardware environment for outdated software since the original components are no longer available

It’s important to distinguish emulators and simulators from software solutions that merely provide a compatibility layer to bridge incompatibilities between different hardware and software components. With this concept, only a part of a system is simulated (for example, an interface) and not the entire system. Examples include Wine (a recursive acronym for Wine Is Not an Emulator) and Cygwin.

How does virtualisation work?

Virtualisation is similar to simulation and emulation but serves a different purpose. Simulators and emulators implement the software model of a computer system to address compatibility issues. Ideally, virtualisation should be structured to minimise the need for simulation or emulation. The primary purpose of virtualisation technology is to create an abstraction layer that allows IT resources to be provided independently of their original physical form.

Here is an example: Virtualisation software can be used if you want to run one or more virtual versions of Windows 10 on a Windows 10 computer for test purposes. But if you want to run two virtual versions of Ubuntu on the same computer, the virtualisation software must also bridge the incompatibilities between the underlying Windows system and the Linux guest systems by means of emulation.

Numerous software solutions used in virtualisation contain emulators. In practice, the two concepts often overlap. Nevertheless, the two concepts are different.

What types of virtualisation are there?

In modern IT landscapes, there are different types of virtualisation, which involve abstracting IT resources such as hardware, software, storage, data or network components. Distinctions are therefore made between:

  • Hardware virtualisation
  • Software virtualisation
  • Storage virtualisation
  • Data virtualisation
  • Network virtualisation

Hardware virtualisation

The term hardware virtualisation refers to virtualisation technology that makes it possible to provide hardware components using software regardless of their physical form. A good example of hardware virtualisation is a virtual machine (VM for short).

A VM behaves like a physical machine including the hardware and operating system. The abstraction layer between the physical basis and the virtual system is created during hardware virtualisation by different types of hypervisors.

Note

A hypervisor (also called Virtual Machine Monitor, VMM) is software that allows multiple guest systems to run on one host system.

Hypervisors manage the hardware resources provided by the host system such as CPU, RAM, hard disk space and peripherals, and distribute them to any number of guest systems. This can be done via full virtualisation or paravirtualisation.

  • Full virtualisation: In full virtualisation, the hypervisor creates a complete hardware environment for each virtual machine. Each VM has its own contingent of virtual hardware resources assigned by the hypervisor and can run applications on this basis. The physical hardware of the host system, on the other hand, remains hidden from the guest operating system. This approach allows the operation of unmodified guest systems. Popular full virtualisation software solutions include Oracle VM VirtualBox, Parallels Workstation, VMware Workstation, Microsoft Hyper-V and Microsoft Virtual Server.

  • Paravirtualisation: While full virtualisation provides a separate virtual hardware environment for each VM, in paravirtualisation the hypervisor merely provides an application programming interface (API) that allows the guest operating systems to access the physical hardware of the host system directly. Compared to full virtualisation, paravirtualisation offers a performance advantage. However, it requires that the kernel of the guest operating system has been ported to the API, which means that only modified guest systems can be paravirtualised.
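The hypervisor’s resource-management role described above can be sketched as a small toy model (hypothetical class and method names, not a real hypervisor API): the host’s physical resources are carved into contingents and assigned to guest systems until they are exhausted.

```python
# Toy model of a hypervisor's resource management: the host's physical
# resources are split into contingents assigned to guest VMs.
# Illustrative only -- real hypervisors schedule resources dynamically.

class Hypervisor:
    def __init__(self, cpus: int, ram_gb: int):
        self.free_cpus = cpus
        self.free_ram_gb = ram_gb
        self.vms = {}

    def create_vm(self, name: str, cpus: int, ram_gb: int) -> bool:
        """Assign a contingent of virtual hardware to a new guest system."""
        if cpus > self.free_cpus or ram_gb > self.free_ram_gb:
            return False  # host resources exhausted
        self.free_cpus -= cpus
        self.free_ram_gb -= ram_gb
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb}
        return True

host = Hypervisor(cpus=16, ram_gb=64)
print(host.create_vm("guest1", cpus=8, ram_gb=32))  # True
print(host.create_vm("guest2", cpus=8, ram_gb=32))  # True
print(host.create_vm("guest3", cpus=4, ram_gb=16))  # False: host is full
```

In practice, hypervisors can also overcommit resources (assigning more virtual CPUs than physically exist, for example), which the simple first-fit check above deliberately leaves out.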

For end users, the virtual machine is indistinguishable from a physical computer. Hardware virtualisation is therefore the concept of choice when it comes to providing a variety of virtual servers for different users based on a powerful computing platform. This is the basis of the popular shared hosting model.

Note

When it comes to shared hosting, a hosting provider operates and maintains the physical machine in an optimised data centre and provides its customers with virtualised hardware resources as closed guest systems.

Another application area of hardware virtualisation is server consolidation in corporate environments. This brings three benefits:

  • Improved server processor utilisation
  • Effective distribution of storage media
  • Lower power consumption for operation and cooling

Hardware virtualisation is considered a comparatively secure virtualisation type. Each guest system runs in an isolated virtual hardware environment. If one of the guest systems is infiltrated by hackers or its functions are affected by malware, this usually has no influence on other guest systems on the same host system.

Advantages and disadvantages of hardware virtualisation:

Advantages:

  • Server consolidation allows hardware resources to be allocated dynamically and used more efficiently
  • Consolidated hardware is more energy efficient than separate computers
  • VMs offer a comparatively high degree of isolation and security for workload isolation

Disadvantages:

  • Simulating a hardware environment including the operating system leads to overhead
  • The performance of a virtual machine can be affected by other VMs on the same host system

Software virtualisation

If software components are virtualised instead of hardware components, this is referred to as software virtualisation. Common approaches to this virtualisation concept are:

  • Application virtualisation
  • Desktop virtualisation
  • Operating system virtualisation

Application virtualisation

Application virtualisation is the abstraction of individual applications from the underlying operating system. Application virtualisation systems allow programs to run in isolated runtime environments and to be distributed across different systems without requiring changes to the local operating system, file system or registry.

Application virtualisation is suitable for local use and protects the underlying operating system from possible malware. Alternatively, virtualised applications can be provided on a server to multiple clients on the network. In this case, users can access virtualised applications via application streaming. The encapsulation of applications including the runtime environment also makes it possible to copy programs to portable storage media such as USB sticks and run them directly on these.

The goal of application virtualisation is to separate programs from the operating system so that they can be easily ported and centrally maintained. In a business context, this is useful for providing office applications such as Word, for example.

Advantages and disadvantages of application virtualisation:

Advantages:

  • Application software can be provided, managed and maintained centrally
  • By isolating the application, the underlying system is protected against malware
  • The software can be completely removed from the system

Disadvantages:

  • Applications that are tightly integrated with the operating system or require access to specific device drivers cannot be virtualised
  • Application virtualisation raises licensing issues

Desktop virtualisation

Desktop virtualisation is a concept in which desktop environments can be centrally provided and accessed via a network. This approach is primarily applied in business contexts.

Desktop virtualisation is based on a client-server structure. Data transfer between server and client takes place via remote display protocols. Depending on where the computing power for providing the virtual desktop resides, a distinction is made between host-based and client-based approaches.

  • Host-based desktop virtualisation: Host-based desktop virtualisation includes all approaches that run virtual desktops directly on the server. With this type of desktop virtualisation, the entire computing power for providing the desktop environment and for operating applications is provided by the server hardware. Users access host-based virtual desktops with any client device over the network. Host-based desktop virtualisation can be implemented using the following approaches:

  • Host-based virtual machine: With this virtualisation approach, each user connects to their own virtual machine on the server via a client device. A distinction is made between persistent desktop virtualisation, in which a user connects to the same VM at each session, and non-persistent approaches, in which virtual machines are assigned randomly.

  • Terminal service: If the client is only used as a display device for centrally hosted desktop environments, it is referred to as presentation virtualisation or terminal services. These are provided by a terminal server.

  • Blade servers: If users need to remotely access separate physical machines, this is usually done using a blade server. This is a modular server or server housing containing several single-board computers known as blades.

  • Client-based desktop virtualisation: If desktop virtualisation is implemented in client-based form, the resources for operating the desktop environment must be provided by the respective client device.

  • Client-based virtual machines: With this approach to virtualisation, the desktop environment runs in a virtual machine on the client device. A hypervisor is usually used.

  • OS streaming: During OS streaming, the operating system of the desktop environment runs on the local hardware. Only the boot process is carried out remotely via an image on the server.

Advantages and disadvantages of desktop virtualisation:

Advantages:

  • Desktop virtualisation enables central administration of desktop environments
  • Users can access their virtual desktop from a variety of devices
  • Desktop virtualisation enables centralised backups
  • Thin clients enable cost savings in acquisition and operation

Disadvantages:

  • Desktop virtualisation is primarily suitable for homogeneous infrastructures
  • Some approaches require a constant network connection
  • High demands on server performance, storage capacity and network bandwidth

Operating system virtualisation

Virtualisation concepts at the operating system level make use of native kernel functions of Unix-like operating systems that make it possible to run several isolated user space instances in parallel. This differs from hardware virtualisation, where a complete guest system including its own kernel is duplicated: applications virtualised at the operating system level share the host system’s kernel.

Note

For security reasons, modern operating systems distinguish between two virtual memory areas: kernel space and user space. While the kernel and other core components run in kernel space, user space is reserved for user applications. On Unix-like operating systems, it is possible to execute several virtual user space instances in parallel. This feature is the basis of operating system virtualisation.

Each user space instance represents a self-contained virtual runtime environment, which is called a container, partition, virtualisation engine (VE) or jail, depending on the technology used. Operating system-based virtualisation celebrated a revival with container platforms such as Docker. Users now have sophisticated alternatives to the market leader in the form of rkt, OpenVZ/Virtuozzo and runC.

User space instances are virtualised using native chroot mechanisms that all Unix-like operating systems provide. Chroot (short for ‘change root’) is a system call that changes the root directory of a running process. If implemented correctly, processes placed in a virtual root directory can only access files within that directory. However, chroot alone does not encapsulate processes sufficiently: although the system call provides basic virtualisation functions, it was never intended as a mechanism for securing processes. Container technologies therefore combine chroot with other native kernel functions such as cgroups and namespaces to give processes an isolated runtime environment with limited access to hardware resources. Processes run in this way are referred to as containerised.

  • Cgroups: Cgroups are control groups for resource management that make it possible to limit a process’s access to hardware resources such as CPU time and memory.
  • Namespaces: Namespaces partition kernel resources such as process identifiers, interprocess communication (IPC) and network resources. A namespace can be used to restrict a process and its child processes to a desired section of the underlying system.

A software container contains an application including all dependencies such as libraries, utilities and configuration files. Applications can then be transferred from one system to another without further adaptations. The container approach therefore shows its strengths in providing applications in the network (deployment).
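As a sketch of this packaging idea, a minimal Dockerfile might bundle an application with its dependencies into a portable image (the file names `app.py` and `requirements.txt` are hypothetical placeholders):

```dockerfile
# Minimal sketch: package an application together with its dependencies
# into a container image that can be moved between systems unchanged.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .
CMD ["python", "app.py"]
```

Because the image carries the runtime and libraries with it, the resulting container behaves the same on any host with a compatible kernel, which is exactly the deployment strength described above.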

If containers are used as part of microservice architectures, users also benefit from high scalability.

Advantages and disadvantages of operating system virtualisation:

Advantages:

  • Operating system level virtualisation concepts do not require a hypervisor and therefore involve minimal virtualisation overhead
  • When containers are used in applications based on a combination of different microservices, users benefit from high scalability
  • Containers can be provided immediately without complex installation processes
  • Software can be completely removed
  • A large number of prefabricated containers are available online for the most important platforms

Disadvantages:

  • Virtualisation at the operating system level is geared towards microservice architectures. Container technology loses some of its advantages (for example, in terms of scalability) when used with monolithically structured applications
  • Unlike VMs, containers run directly in the kernel of the host operating system. This imposes certain technical requirements, and these dependencies limit the portability of containers: Linux containers cannot run on Windows systems without emulation
  • Containers offer significantly less isolation than VMs. Container virtualisation is therefore not suitable for implementing security measures and strategies

Storage virtualisation

The aim of storage virtualisation is to virtually map a company’s various storage resources such as hard drives, flash memory or tape drives and make them available as a coherent storage pool.

Virtual storage can also be divided into contingents and allocated to selected applications. Thanks to virtualisation, users access stored data via the same file paths even when the physical location changes. This is ensured by an assignment table managed by the virtualisation software, which maps the physical storage media to logical drives (also called volumes).

In business contexts, storage virtualisation is usually implemented in a block-based way. In block storage, data is divided into blocks of the same size. Each data block has a unique address. This is stored by the virtualisation software in the central mapping table. In practice, block-based virtualisation can be implemented on a host, device or network basis.
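The central mapping table described above can be sketched as a simplified model (hypothetical class and device names, not a real storage product): logical block addresses are translated into physical locations, so data can move between devices without its logical address changing.

```python
# Toy model of block-based storage virtualisation: a mapping table
# translates logical block addresses into (device, physical block)
# locations, so data can move physically without its address changing.

class VirtualVolume:
    def __init__(self):
        self.mapping = {}   # logical block -> (device, physical block)
        self.devices = {}   # device name -> {physical block: data}

    def write(self, logical: int, device: str, phys: int, data: bytes):
        self.mapping[logical] = (device, phys)
        self.devices.setdefault(device, {})[phys] = data

    def read(self, logical: int) -> bytes:
        device, phys = self.mapping[logical]
        return self.devices[device][phys]

    def migrate(self, logical: int, new_device: str, new_phys: int):
        """Move a block to another device; the logical address stays stable."""
        data = self.read(logical)
        self.devices.setdefault(new_device, {})[new_phys] = data
        self.mapping[logical] = (new_device, new_phys)

vol = VirtualVolume()
vol.write(0, "ssd0", 42, b"payload")
vol.migrate(0, "hdd1", 7)  # the physical location changes ...
print(vol.read(0))         # ... but the logical address still resolves
```

The metadata overhead mentioned in the table below corresponds to maintaining exactly this kind of mapping for every block in the pool.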

Host-based virtualisation is typically used in combination with virtual machines. In this concept, a host system presents one or more guest systems (see hardware virtualisation) with virtual disks on an abstraction level. Access to the hardware is possible via the host system’s device drivers.

Host-based storage virtualisation requires no additional hardware, supports any storage device and can be implemented with little effort. In addition, the approach offers the best performance compared to other concepts, since each storage device is addressed directly, without additional latency. However, users have to accept that storage virtualisation (and thus the possibility of optimising storage utilisation) is limited to the respective host.

Disk arrays (mass storage devices that can be used to provide hard disks in the network) also offer the possibility of virtualising storage resources in the context of device-based storage virtualisation. RAID schemes are used here. RAID (short for Redundant Array of Independent Disks) is a data storage concept in which several physical drives are combined into a virtual storage platform.

Tip

Further information about disk arrays and RAID schemes can be found in our article on network-attached storage (NAS).

Device-based storage virtualisation also offers good performance due to low I/O latency. Apart from the disk arrays to be merged, no other hardware components are required.

Network-based storage virtualisation is particularly useful if storage resources of heterogeneous systems are to be combined into a virtual storage pool. In business contexts, this approach is usually used as part of a storage area network (SAN).

Advantages and disadvantages of storage virtualisation:

Advantages:

  • Physical storage resources are used more effectively
  • Physical storage resources combined into a logical drive can be managed centrally

Disadvantages:

  • Storage virtualisation is always associated with overhead resulting from the need to generate and process metadata
  • Under heavy load, processing I/O requests can become a bottleneck, slowing down the entire storage system

Data virtualisation

In the context of data warehouse analyses, data virtualisation combines different virtualisation approaches that aim to give applications access to data abstracted from its physical conditions by creating a master copy (a virtual image of the entire database). Data virtualisation can therefore be seen as a method of data integration: it allows data from different sources to be read and manipulated while leaving the data physically unchanged. Data virtualisation software solutions integrate data on a virtual level only and provide real-time access to the physical data source. In contrast, ETL (extract, transform, load) extracts data from differently structured sources and merges it in a uniform format in a target database.
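The contrast with ETL can be sketched using SQLite, where a view integrates two sources at query time without copying any data (the table names are hypothetical):

```python
# Sketch of data virtualisation with SQLite: a view integrates two
# source tables at query time; no data is copied, and new rows in the
# sources are visible immediately. Table names are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE crm_customers (id INTEGER, name TEXT)")
con.execute("CREATE TABLE shop_customers (id INTEGER, name TEXT)")
con.execute("INSERT INTO crm_customers VALUES (1, 'Alice')")
con.execute("INSERT INTO shop_customers VALUES (2, 'Bob')")

# The view is the virtual integration layer: the sources stay intact.
con.execute("""
    CREATE VIEW all_customers AS
    SELECT id, name FROM crm_customers
    UNION ALL
    SELECT id, name FROM shop_customers
""")
print(con.execute("SELECT COUNT(*) FROM all_customers").fetchone()[0])  # 2

# Real-time access: a new source row appears in the view immediately,
# without any extract-transform-load step.
con.execute("INSERT INTO crm_customers VALUES (3, 'Carol')")
print(con.execute("SELECT COUNT(*) FROM all_customers").fetchone()[0])  # 3
```

An ETL pipeline would instead copy both tables into a separate target database on a schedule, which is precisely the materialisation step that data virtualisation avoids.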

Advantages and disadvantages of data virtualisation:

Advantages:

  • The storage requirement for physical data copies is reduced
  • Time-consuming data extraction (e.g. via ETL) is no longer necessary
  • New data sources can be connected via self-service BI tools without any technical knowledge
  • Virtualised data can be processed with a variety of data management tools

Disadvantages:

  • In contrast to the data warehouse approach, data virtualisation is not suitable for recording and maintaining historical snapshots of a database

Network virtualisation

Network virtualisation comprises various approaches in which network resources at the hardware and software level are abstracted from their physical basis. As a rule, this type of virtualisation is used as part of security strategies. There are two main objectives:

  • Physical network resources should be combined into a logical unit by means of virtualisation.
  • Physical network resources should be divided into different virtual units by means of virtualisation.

An illustrative example of network virtualisation is the Virtual Private Network (VPN). A VPN is a virtual network based on a physical network. In practice, VPNs are used to establish secure connections over insecure lines, for example when an employee wants to access a company’s private network from outside the office.

Another example of network virtualisation is virtual local area networks (VLANs). VLANs are virtual subnetworks based on a physical computer network.
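The segmentation idea behind VLANs can be illustrated with a toy model (hypothetical port names, not a real network stack): switch ports are assigned VLAN IDs, and traffic is only forwarded between ports in the same virtual subnetwork.

```python
# Toy model of VLAN segmentation: switch ports are assigned VLAN IDs,
# and traffic is only forwarded within the same virtual subnetwork.
# Illustrative only -- real VLANs tag Ethernet frames per IEEE 802.1Q.

port_vlan = {
    "port1": 10,  # e.g. accounting department
    "port2": 10,
    "port3": 20,  # e.g. engineering department
    "port4": 20,
}

def can_forward(src_port: str, dst_port: str) -> bool:
    """A frame may only cross between ports that share a VLAN ID."""
    return port_vlan[src_port] == port_vlan[dst_port]

print(can_forward("port1", "port2"))  # True: same VLAN
print(can_forward("port1", "port3"))  # False: different VLANs
```

All four ports can share one physical switch, yet the two VLANs behave like separate networks, which is the "divide physical resources into virtual units" objective listed above.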

One concept that allows virtual network resources to be centrally controlled without having to manually access physical network components is software-defined networking (SDN). SDN is based on the separation of the virtual control plane from the physical network plane responsible for forwarding the data packets (data plane).

Advantages and disadvantages of network virtualisation:

Advantages:

  • Cost savings through multiple use of the physical network infrastructure
  • Virtual network resources can be centrally managed, easily scaled and dynamically allocated
  • Network virtualisation offers various approaches for implementing security measures at network level on the software side, making them more cost-effective

Disadvantages:

  • Running multiple virtual subnets on a physical network requires powerful hardware components
  • A redundant physical network infrastructure may be required to ensure resilience