viable", i.e. the accuracy and consistence representing the mechanics of virtualization will be always true.

Specific implementation features are heterogeneous and vary with the abstraction layer at which virtualization happens. In order to identify concepts that characterize virtualization in general terms, the idea of 'machine' must be broadened to include any part of a computer system. Virtualization introduces flexibility between interfaces, so a virtual machine must be understood more generally as the hosting environment that allows software to run as a 'containable' resource.

Old wine in new bottles

Virtualization first reached prominence in the early 1970s, achieving commercial success with the IBM 370 series: mainframes able to run multiple operating systems simultaneously.

At that time, when memory and computing power were scarce resources, virtualization's multiplexing ability, also termed server consolidation, increased the level of sharing and utilization of computing resources [FDF05]. It allows software environments (virtual machines, or VMs) to execute software (either an individual process or a full system) in the same manner as the machine for which the software was developed.

IBM 370 series computer systems allowed a number of different VMs to run together (as guests) on a single real machine (termed the host). Each environment is isolated, that is, it does not interfere with the other VMs or with the host machine; this enhances security and provides each user with the illusion of having a private machine.

The technology, which originated for specific hardware (i.e. IBM computer systems) and for the need to save memory and computing resources, declined with the advent of low-cost microcomputers and personal computers in the 1980s.

Recently, it has exploded in popularity as a way to address horizontal scalability. Although today's servers are much less expensive and more powerful than the machines of decades past, their total cost of ownership includes maintenance, support, and administration, as well as the costs associated with security breaches and system failures. The server-consolidation ability to run a number of VMs as guests on a host computer system improves the utilization of existing hardware, while providing security and availability.

Today virtualization is the enabling technology for high-level application orchestration and Cloud solutions.

Virtualization theory

Formally, virtualization involves the construction of an isomorphism that maps a virtual guest system to a real host (Popek and Goldberg [PG74]). When a system is virtualized, its interface and the resources visible through that interface are mapped onto the interface and resources of the real system implementing it. Consequently, the real system appears to be a different, virtual system, or even a set of multiple virtual systems.

The isomorphism theorized by Popek and Goldberg implies that the process of virtualization consists of two parts (a minimal sketch follows the list below):

  1. the mapping of virtual resources or state, e.g. registers, memory, or files, to real resources in the underlying machine;
  2. the use of real machine instructions and/or system calls to carry out the action specified by virtual machine instructions and/or system calls.
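
As a minimal illustration of these two parts, consider the following Java sketch (all names are invented for this example; real virtual machine monitors operate on machine state, not on Java arrays). A virtual register file is mapped onto a slice of the host's state, and real host arithmetic carries out a virtual ADD instruction.

```java
// Minimal sketch of Popek-Goldberg's two-part virtualization process.
// All names are hypothetical and purely illustrative.
public class TinyVm {

    // Part 1: the state mapping -- virtual registers are backed by a region
    // of the "real" (host) state, here modelled as a plain array.
    private final long[] hostState = new long[1024];
    private final int regBase = 512;           // virtual registers live here

    private long readReg(int r)           { return hostState[regBase + r]; }
    private void writeReg(int r, long v)  { hostState[regBase + r] = v; }

    // Part 2: real instructions (host arithmetic) carry out the action
    // specified by a virtual instruction, e.g. "ADD rd, rs1, rs2".
    public void executeAdd(int rd, int rs1, int rs2) {
        writeReg(rd, readReg(rs1) + readReg(rs2));  // host '+' emulates the guest ADD
    }

    public static void main(String[] args) {
        TinyVm vm = new TinyVm();
        vm.writeReg(1, 40);
        vm.writeReg(2, 2);
        vm.executeAdd(0, 1, 2);
        System.out.println(vm.readReg(0));  // prints 42
    }
}
```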

This suggests a close connection between the layers of abstraction at which virtualization happens and the specific mechanisms which implement it (based on real machine instructions). Implementation features are therefore heterogeneous and vary depending on the abstraction layer considered. Conversely, some virtualization properties are general, characterizing this body of knowledge as a discipline.

Over time, Goldberg's work was progressively ignored and virtualization came to be described as a collection of single solutions and ad hoc implementations.

The increasing bag of tricks driving virtualization in recent years moved Smith and Nair to suggest an investigation from a more structured point of view. They start by observing that the "complexity of modern computer systems is managed by dividing them into levels of abstraction (or layers) separated by well-defined interfaces". Levels of abstraction allow implementation details at lower levels of a design to be ignored or simplified, thereby simplifying the design of components at higher levels. However, well-defined interfaces also reduce interoperability, because subsystems and components designed for one interface will not work with those designed for another; this is a limitation, especially in a world of networked computers where it is advantageous to move software as freely as data.

Especially interested in interfaces at or near the hardware/software boundary, Smith and Nair define virtualization as the layer between the hardware and the operating system. Virtualizing software provides a mapping between virtual interfaces, with all the resources visible through them, and the interface and resources of the real system implementing them.

Their virtualization taxonomy is based on the term "machine". "A virtual machine executes software (either an individual process or a full system, depending on the type of machine) in the same manner as the machine for which the software was developed". On the other hand, "Computer software is executed by a machine (a term that dates back to the beginning of computers; platform is the term more in vogue today). So:

  • From the perspective of the operating system, a machine is largely composed of hardware, including one or more processors that run a specific instruction set, some real memory, and I/O devices of a computer.
  • From the perspective of application programs, for example, the machine is a combination of the operating system and those portions of the hardware accessible through user-level instructions".

In conclusion, the work of Smith and Nair is valuable but not complete. The model does not explain virtualization technologies such as the para-virtualization performed by Xen, or User-Mode Linux. Furthermore, the model is closely tied to the operating system's point of view: process and system VMs are related to specific abstraction layers, such as the one near the ISA (and connected to the kernel).

All parts of a computer system can be virtualized. Virtualization can therefore happen at every abstraction layer, and a truly viable model should be more general than the one proposed, which seems to be motivated by the narrow need to distinguish Java from the rest of the world.

VMs as a monolithic "view"

Virtualization is commonly understood as the art of developing VMs as full OS systems. At higher abstraction levels, such VMs can be orchestrated as black-box technologies, which can be quickly obtained (e.g. from the VMware marketplace) and easily managed (e.g. on Amazon EC2), so immediate results can be achieved.

However, VMs define reusable, scalable, and secure components able to overcome heterogeneity constraints in general terms.

Following viewOS, we can break the traditional monolithic approach (i.e. VMs seen as a full system) into a lighter one that addresses the challenge of reusable, scalable, and secure distributed environments which can be decomposed and distributed through the Cloud.

J2EE Application Servers: a case study

J2EE application servers are especially interesting technologies. Basically, they add a layer to the Java stack (J2SE) in order to abstract the notion of platform in its traditional meaning, i.e. the machine which allows software to execute.

J2EE enables a component-based paradigm that allows components to run in a special execution environment termed a container. Containers use a well-known virtualization mechanism termed "code injection" to intercept calls to components and interpose internal managing code. In this way, a container is able both to deliver middleware services (e.g. transaction and state management) to components and to manage their life cycle.
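
The interception performed by a container can be sketched with a JDK dynamic proxy: the proxy stands between the caller and the component and interposes managing code around every call (here, a stand-in for transaction handling). The OrderService interface and the printed messages are hypothetical; real J2EE containers rely on bytecode-level injection and deliver far richer services.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Hypothetical business component; a real container would manage EJBs or CDI beans.
interface OrderService {
    void placeOrder(String item);
}

class OrderServiceImpl implements OrderService {
    public void placeOrder(String item) {
        System.out.println("ordering " + item);
    }
}

// Sketch of container-style interception: managing code is interposed
// around the component call, in the spirit of J2EE "code injection".
class ContainerProxy implements InvocationHandler {
    private final Object target;

    ContainerProxy(Object target) { this.target = target; }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        System.out.println("container: begin transaction");  // interposed service
        try {
            return method.invoke(target, args);              // delegate to the component
        } finally {
            System.out.println("container: commit transaction");
        }
    }

    @SuppressWarnings("unchecked")
    static <T> T wrap(T target, Class<T> iface) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[]{iface}, new ContainerProxy(target));
    }
}

public class ContainerDemo {
    public static void main(String[] args) {
        OrderService svc = ContainerProxy.wrap(new OrderServiceImpl(), OrderService.class);
        svc.placeOrder("book");  // the call is intercepted by the container proxy
    }
}
```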

This is an example of why Java should be regarded as more than an HLL for code portability; it carries virtualization in an embryonic form. For instance, J2EE uses mechanisms such as code encapsulation, polymorphism, sandboxing, and method overriding as bare "virtualities" upon which a higher-level "virtuality", i.e. the application server, is implemented.

J2EE, Java, and component-based architecture enhance IT agility, which refers to "an organization's ability to sense environmental change and respond efficiently and effectively to that change" (Gartner).

Partial "Virtualities": an extended idea of containers

The J2EE idea of containers can be extended to that of a hosting environment which executes software together with the embedded services it requires to run.

In general terms, a container is a resource which "has the capability to host other resources or containables and which offer some set of services to those resources".

Containers can implement partial-VMs (i.e. VMs that do not implement a full OS at all). A partial-VM is an execution environment which implements a "virtuality" at a specific abstraction level, but not necessarily a full OS.
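
A minimal sketch of this extended notion, with hypothetical interface names: a container hosts containable resources and offers them a set of services, and a partial-VM is simply a container implementing a single "virtuality" (here, a virtual clock) rather than a full OS.

```java
import java.util.List;

// Hypothetical interfaces illustrating the extended container idea:
// a container hosts "containable" resources and offers them services.
interface Containable {
    void runInside(Container host);
}

interface Container {
    void host(Containable resource);         // hosting capability
    List<String> offeredServices();          // services exposed to hosted resources
}

// A partial-VM: a container implementing a single "virtuality"
// (a virtual system clock), not a full operating system.
class VirtualClockContainer implements Container {
    private long virtualEpoch = 0L;          // the virtualized resource

    public void host(Containable resource)  { resource.runInside(this); }
    public List<String> offeredServices()   { return List.of("virtual-clock"); }

    long currentVirtualTime()                { return virtualEpoch++; }
}

public class PartialVmDemo {
    public static void main(String[] args) {
        VirtualClockContainer vm = new VirtualClockContainer();
        vm.host(host -> System.out.println("services: " + host.offeredServices()));
        System.out.println("virtual time: " + vm.currentVirtualTime());
    }
}
```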

Partial-VMs can be multiplexed by the computer system, i.e. software can be dropped into "partial containers" in order to consolidate the server. The concept of AS containers is thus enlarged to mean isolated components which act as separate physical environments.

We follow the viewOS idea [DGG06] to free components, as well as processes, from the global view. Just as the OS imposes a view on processes in terms of, e.g., the system name and the current time, the AS container imposes a view on deployed applications, i.e. the visible .jar libraries, the service name, etc.
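
By analogy with viewOS, the following sketch (all names hypothetical) models the view an AS container could impose on a deployed application: the service name and the visible .jar libraries are decided by the container rather than by the global system, so two deployments of the same application can perceive different environments.

```java
import java.util.List;

// Hypothetical model of the "view" a container imposes on a deployed application,
// in analogy with the view viewOS imposes on processes (system name, current time, ...).
class ContainerView {
    final String serviceName;        // the name the application believes it has
    final List<String> visibleJars;  // libraries visible from inside the container

    ContainerView(String serviceName, List<String> visibleJars) {
        this.serviceName = serviceName;
        this.visibleJars = visibleJars;
    }
}

public class ViewDemo {
    public static void main(String[] args) {
        // Two deployments of the same application perceive different views.
        ContainerView a = new ContainerView("orders-eu", List.of("orders.jar", "jdbc-driver.jar"));
        ContainerView b = new ContainerView("orders-us", List.of("orders.jar", "metrics.jar"));
        System.out.println(a.serviceName + " sees " + a.visibleJars);
        System.out.println(b.serviceName + " sees " + b.visibleJars);
    }
}
```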

As partial-VMs, components will be able to run on different machines, while maintaining the perception of being managed by a single execution environment, i.e. a single application server instance.

Heterogeneous Applicative Deployment (HEAD) is the application server's capability of running different sets of services and application components across the cluster. As a result, HEAD allows clustered application servers to be configured for optimal management in support of specific functions. As further work, the idea of containers as partial-VMs can be extended to enable applications to be quickly assembled from components.
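
A toy illustration of HEAD, with invented node and component names: each clustered application server instance is configured with its own subset of services and components, so the cluster as a whole covers the application while every node is tuned for a specific function.

```java
import java.util.List;
import java.util.Map;

// Toy illustration of Heterogeneous Applicative Deployment (HEAD):
// each clustered node hosts a different subset of services and components.
public class HeadDemo {
    public static void main(String[] args) {
        Map<String, List<String>> cluster = Map.of(
                "node-1", List.of("web-frontend", "session-service"),
                "node-2", List.of("order-component", "transaction-service"),
                "node-3", List.of("reporting-component", "batch-service"));

        // The union of the nodes covers the whole application, while each node
        // can be configured and sized for the specific function it supports.
        cluster.forEach((node, deployed) ->
                System.out.println(node + " runs " + deployed));
    }
}
```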

A finer-grained proposal

The structured approach introduces a finer granularity into the model in terms of "a view". Virtualization means designing a software environment able to run code as the real entity it virtualizes. Such a software environment defines, in a nutshell, a view for the code running inside it. This view can be more or less detailed depending on the implementation approach adopted.

The standard monolithic VM implementation strategy defines a coarse-grained view: an entire system must be virtualized in order to enable specific 'virtualities' such as the file system.

viewOS and its partial virtual machine idea define a finer view: single virtualities can be enabled in a mount-like way.

The container approach proposes an even finer-grained view, enabling pieces of code, as components, to be distributed in clustered environments and accessed with a Cloud Computing paradigm. Components can be dynamically connected together on demand and managed as services offered by the system, without reference to the resources utilized in providing those services.
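
A minimal sketch of such on-demand wiring, assuming a simple in-memory registry (all names hypothetical): a component asks for a service by name and obtains an implementation without knowing which node or resources back it.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Minimal in-memory service registry: components are connected on demand,
// without any reference to the resources that actually provide the service.
class ServiceRegistry {
    private final Map<String, Supplier<Object>> providers = new ConcurrentHashMap<>();

    void publish(String name, Supplier<Object> provider) { providers.put(name, provider); }

    Object lookup(String name) {
        Supplier<Object> p = providers.get(name);
        if (p == null) throw new IllegalStateException("no provider for " + name);
        return p.get();  // the caller never sees where the service actually runs
    }
}

public class OnDemandWiringDemo {
    public static void main(String[] args) {
        ServiceRegistry registry = new ServiceRegistry();
        registry.publish("greeting", () -> "hello from some node");

        String greeting = (String) registry.lookup("greeting");  // wired on demand
        System.out.println(greeting);
    }
}
```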

In this way a component can be deployed as part of a larger body of code: the missing code is referenced and imported as a package exposing the required interfaces. This enhances the scalability and reliability of the software, supporting appliances and horizontal deployment. Furthermore, it allows enterprises to join projects without having to share code, and it suits special environments such as sensor networks, which can take part in a program without holding the full code, e.g. having little hardware capability while sharing in a huge amount of computing power.

Conclusion

Up to now, many high-level paradigms have been developed in order to scale out infrastructures in an agile way, e.g. Vagrant.

Although they do not declare it, such solutions de facto meet the enlarged idea of containers.

Starting in 2013, containers and partial virtual machines come together in a single solution: Docker. As its home page claims, it is an open platform for developers and sysadmins to build, ship, and run distributed applications. Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. As a result, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud.

Partial virtual machines, together with the idea of containers as the sandbox where processes run in a cross-platform infrastructure, de facto implement a new abstraction level at the top of the computer system's layered architecture, delivered by more or less granular "partial virtualities".
