Hardware Virtualization

How Virtualization Happens

Diane Barrett, Gregory Kipper, in Virtualization and Forensics, 2010

Virtualizing Hardware Platforms

Hardware virtualization, sometimes called platform or server virtualization, is executed on a particular hardware platform by host software. Essentially, it hides the physical hardware. The host software, which is in effect a control program, is called a hypervisor. The hypervisor creates a simulated computer environment for the guest software, which can be anything from user applications to complete OSes. The guest software performs as if it were running directly on the physical hardware. However, access to physical resources such as network access and physical ports is usually managed at a more restrictive level than access to the processor and memory. Guests are often restricted from accessing specific peripheral devices, and managing network connections and external ports such as USB from inside the guest software can be challenging. Figure 1.4 shows the concept behind virtualizing hardware platforms.


Figure 1.4. Hardware Virtualization Concepts


URL: https://www.sciencedirect.com/science/article/pii/B9781597495578000011

Understanding Microsoft virtualization strategies

Thomas Olzak, ... James Sabovik, in Microsoft Virtualization, 2010

Hardware virtualization layer

The hardware virtualization layer is created by installing Microsoft Hyper-V on one or more compatible hardware platforms. Hyper-V, Microsoft's entry into the hypervisor market, is a very thin layer that presents a small attack surface. It can do this because Microsoft does not embed drivers. Instead, Hyper-V uses vendor-supplied drivers to manage VM hardware requests.

Warning

Hardware targeted for virtualization must support virtualization, as specified in Chapter 1.

Each VM exists within a partition, starting with the root partition. The root partition must run Windows Server 2008 x64 or Windows Server 2008 Server Core x64. Subsequent partitions, known as child partitions, usually communicate with the underlying hardware via the root partition. Some calls from a child partition directly to Hyper-V are possible using WinHv (defined below) if the OS running in the partition is "enlightened." An enlightened OS understands how to behave in a Hyper-V environment. Communication is limited for an unenlightened OS partition, and applications there tend to run much more slowly than those in an enlightened one. The performance issues generally stem from the emulation software required to interface with hosted services.

Note

Enlightened-capable operating systems include Windows Server 2003/2008, Windows Vista, Windows XP, and SUSE Enterprise Linux.

The Hyper-V components responsible for managing VM, hypervisor, and hardware communication are the VMBus, VSCs, and VSPs. These and other Hyper-V components are shown in Figure 2.4.


Figure 2.4. Hyper-V components.

Advanced Programmable Interrupt Controller (APIC)—An APIC allows priority levels to be assigned to interrupt outputs.

Hypercalls—Hypercalls are made to Hyper-V to optimize partition calls for service. An enlightened partition may use WinHv or UnixHv to speak directly to the hypervisor instead of routing certain requests through the root partition.

Integration Component (IC)—An IC allows child partitions to communicate with other partitions and the hypervisor.

Memory Service Routine (MSR)

Virtualization Infrastructure Driver (VID)—The VID provides partition management services, virtual processor management services, and memory management services.

VMBus—The VMBus is a channel-based communication mechanism. It enables interpartition communication and device enumeration. It is included in and installed with Hyper-V Integration Services.

Virtual Machine Management Service (VMMS)—The VMMS is responsible for managing the state of all VMs in child partitions.

Virtual Machine Worker Process (VMWP)—The VMWP is a user-mode component of the virtualization stack. It supports the VMMS in the root partition so that it can manage VMs in the child partitions; a separate worker process runs for each VM.

Virtualization Service Client (VSC)—The VSC is a synthetic device instance residing in a child partition. It uses hardware resources provided by VSPs. A VSC and VSP communicate via the VMBus.

Virtualization Service Provider (VSP)—The VSPs reside in the root partition. They work with VSCs to provide device support to child partitions over the VMBus.

Windows Hypervisor Interface Library (WinHv)—The WinHv is a bridge between a hosted operating system's drivers and the hypervisor. It allows drivers to call the hypervisor using standard Windows calling conventions when an enlightened environment is running within the partition.

Windows Management Instrumentation (WMI)—The WMI exposes a set of APIs for managing virtual machines.
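
To make the VSC/VSP/VMBus relationship concrete, here is a minimal conceptual sketch in Python. It is not Hyper-V code, and every class and method name is invented for illustration; it only mirrors the routing described above, in which a synthetic device in a child partition forwards requests over the VMBus to a provider in the root partition.

```python
# Conceptual sketch of the VSC -> VMBus -> VSP request path described above.
# This is NOT Hyper-V code; all names are invented for illustration only.

class VMBus:
    """Channel-based interpartition communication (conceptual)."""
    def __init__(self):
        self.channels = {}  # device_type -> provider (VSP) in the root partition

    def register_provider(self, device_type, vsp):
        self.channels[device_type] = vsp

    def send(self, device_type, request):
        # Route a child-partition request to the root-partition provider.
        return self.channels[device_type].handle(request)

class VSP:
    """Virtualization Service Provider: lives in the root partition and
    fulfills device requests using the real (vendor-supplied) drivers."""
    def handle(self, request):
        return f"root partition completed: {request}"

class VSC:
    """Virtualization Service Client: synthetic device in a child partition."""
    def __init__(self, bus, device_type):
        self.bus, self.device_type = bus, device_type

    def io_request(self, request):
        # The child partition never touches hardware directly; it asks the
        # root partition over the VMBus.
        return self.bus.send(self.device_type, request)

bus = VMBus()
bus.register_provider("disk", VSP())
vsc = VSC(bus, "disk")
print(vsc.io_request("read block 42"))
```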

Note

Hyper-V relies primarily on vendor-supplied drivers to communicate with the underlying hardware.


URL: https://www.sciencedirect.com/science/article/pii/B9781597494311000023

Virtualization

Rajkumar Buyya, ... S. Thamarai Selvi, in Mastering Cloud Computing, 2013

Hypervisors

A fundamental element of hardware virtualization is the hypervisor, or virtual machine manager (VMM). It recreates a hardware environment in which guest operating systems are installed. There are two major types of hypervisor: Type I and Type II (see Figure 3.7).


Figure 3.7. Hosted (left) and native (right) virtual machines. This figure provides a graphical representation of the two types of hypervisors.

Type I hypervisors run directly on top of the hardware. They take the place of the operating system: they interact directly with the ISA interface exposed by the underlying hardware and emulate this interface to allow the management of guest operating systems. This type of hypervisor is also called a native virtual machine, since it runs natively on hardware.

Type II hypervisors require the support of an operating system to provide virtualization services. This means that they are programs managed by the operating system, which interact with it through the ABI and emulate the ISA of virtual hardware for guest operating systems. This type of hypervisor is also called a hosted virtual machine since it is hosted within an operating system.

Conceptually, a virtual machine manager is internally organized as described in Figure 3.8. Three main modules, dispatcher, allocator, and interpreter, coordinate their activity in order to emulate the underlying hardware. The dispatcher constitutes the entry point of the monitor and reroutes the instructions issued by the virtual machine instance to one of the two other modules. The allocator is responsible for deciding the system resources to be provided to the VM: whenever a virtual machine tries to execute an instruction that results in changing the machine resources associated with that VM, the allocator is invoked by the dispatcher. The interpreter module consists of interpreter routines. These are executed whenever a virtual machine executes a privileged instruction: a trap is triggered and the corresponding routine is executed.


Figure 3.8. A hypervisor reference architecture.
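
As a rough illustration of this organization, the following Python sketch models the dispatcher, allocator, and interpreter as parts of a single monitor class. All names and the object-based "instructions" are invented for illustration; a real VMM operates on machine instructions and hardware traps, not Python objects.

```python
# Minimal sketch of the dispatcher/allocator/interpreter organization
# described above. Instruction names and fields are invented; only
# sensitive instructions ever reach the monitor via a trap.

class Instruction:
    def __init__(self, name, privileged=False, changes_resources=False, pages=0):
        self.name = name
        self.privileged = privileged
        self.changes_resources = changes_resources
        self.pages = pages

class VMM:
    def __init__(self):
        self.resources = {"memory_pages": 1024}

    def dispatch(self, vm, instr):
        # Entry point of the monitor: reroute each trapped instruction
        # to the allocator or to an interpreter routine.
        if instr.changes_resources:
            return self.allocate(vm, instr)
        if instr.privileged:
            return self.interpret(vm, instr)
        raise RuntimeError("nonsensitive instructions never trap here")

    def allocate(self, vm, instr):
        # Allocator: decide which system resources the VM may receive.
        self.resources["memory_pages"] -= instr.pages
        return f"granted {instr.pages} pages to {vm}"

    def interpret(self, vm, instr):
        # Interpreter routine: emulate the effect of a privileged instruction.
        return f"emulated '{instr.name}' on behalf of {vm}"

vmm = VMM()
print(vmm.dispatch("vm0", Instruction("map_memory", changes_resources=True, pages=4)))
print(vmm.dispatch("vm0", Instruction("disable_interrupts", privileged=True)))
```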

The design and architecture of a virtual machine manager, together with the underlying hardware design of the host machine, determine the full realization of hardware virtualization, where a guest operating system can be transparently executed on top of a VMM as though it were run on the underlying hardware. The criteria that need to be met by a virtual machine manager to efficiently support virtualization were established by Popek and Goldberg in 1974 [23]. Three properties have to be satisfied:

Equivalence. A guest running under the control of a virtual machine manager should exhibit the same behavior as when it is executed directly on the physical host.

Resource control. The virtual machine manager should be in complete control of virtualized resources.

Efficiency. A statistically dominant fraction of the machine instructions should be executed without intervention from the virtual machine manager.

The major factor that determines whether these properties are satisfied is represented by the layout of the ISA of the host running a virtual machine manager. Popek and Goldberg provided a classification of the instruction set and proposed three theorems that define the properties that hardware instructions need to satisfy in order to efficiently support virtualization.

Theorem 3.1

For any conventional third-generation computer, a VMM may be constructed if the set of sensitive instructions for that computer is a subset of the set of privileged instructions.

This theorem establishes that all instructions that change the configuration of system resources should trap in user mode and be executed under the control of the virtual machine manager. This allows the hypervisor to efficiently control only those instructions that would reveal the presence of the abstraction layer, while executing all other instructions without considerable performance loss. The theorem always guarantees the resource control property when the hypervisor is in the most privileged mode (Ring 0). Nonprivileged instructions are executed without hypervisor intervention, and the equivalence property also holds, since the output of the code is the same in both cases because the code is not changed.
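
Theorem 3.1 reduces to a set-inclusion test, as the following sketch shows. The instruction sets are illustrative only; POPF is the classic example of an x86 instruction that is sensitive but not privileged, which is why the pre-VT x86 ISA failed this condition.

```python
# Theorem 3.1 as a set-inclusion check: an ISA is (classically) virtualizable
# if every sensitive instruction is also privileged, i.e. traps in user mode.
# The instruction sets below are illustrative, not a real ISA inventory.

def virtualizable(sensitive, privileged):
    return sensitive <= privileged  # subset test

# A well-behaved ISA: every sensitive instruction traps in user mode.
print(virtualizable({"HLT", "LGDT"}, {"HLT", "LGDT", "IN", "OUT"}))  # True

# Classic pre-VT x86 counterexample: POPF is sensitive (it can alter the
# interrupt flag) but silently ignores that change in user mode, so it
# never traps and breaks the Popek-Goldberg condition.
print(virtualizable({"HLT", "LGDT", "POPF"}, {"HLT", "LGDT"}))       # False
```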

Theorem 3.2

A conventional third-generation computer is recursively virtualizable if:

It is virtualizable and

A VMM without any timing dependencies can be constructed for it.

Recursive virtualization is the ability to run a virtual machine manager on top of another virtual machine manager. This allows nesting hypervisors as long as the capacity of the underlying resources can accommodate that. Virtualizable hardware is a prerequisite to recursive virtualization.

Theorem 3.3

A hybrid VMM may be constructed for any conventional third-generation machine in which the set of user-sensitive instructions is a subset of the set of privileged instructions.

There is a related concept, the hybrid virtual machine (HVM), which is less efficient than a true virtual machine system. In an HVM, more instructions are interpreted rather than executed directly: all instructions issued in virtual supervisor mode are interpreted. Whenever a behavior-sensitive or control-sensitive instruction is attempted, the HVM either controls the execution directly or gains control via a trap; all sensitive instructions are caught by the HVM and simulated.

This reference model represents what we generally consider classic virtualization: the ability to execute a guest operating system in complete isolation. More broadly, hardware-level virtualization includes several strategies that differ from one another in terms of what support is expected from the underlying hardware, what is actually abstracted from the host, and whether the guest needs to be modified.


URL: https://www.sciencedirect.com/science/article/pii/B9780124114548000036

Cloud Computing Architecture

Rajkumar Buyya, ... S. Thamarai Selvi, in Mastering Cloud Computing, 2013

4.2.2 Infrastructure- and hardware-as-a-service

Infrastructure- and Hardware-as-a-Service (IaaS/HaaS) solutions are the most popular and developed market segment of cloud computing. They deliver customizable infrastructure on demand. The available options within the IaaS offering umbrella range from single servers to entire infrastructures, including network devices, load balancers, and database and Web servers.

The main technology used to deliver and implement these solutions is hardware virtualization: one or more virtual machines, suitably configured and interconnected, define the distributed system on top of which applications are installed and deployed. Virtual machines also constitute the atomic components that are deployed and priced according to the specific features of the virtual hardware: memory, number of processors, and disk storage. IaaS/HaaS solutions bring all the benefits of hardware virtualization: workload partitioning, application isolation, sandboxing, and hardware tuning. From the perspective of the service provider, IaaS/HaaS allows better exploitation of the IT infrastructure and provides a more secure environment for executing third-party applications. From the perspective of the customer, it reduces administration and maintenance costs as well as the capital costs of purchasing hardware. At the same time, users can take advantage of the full customization offered by virtualization to deploy their infrastructure in the cloud; in most cases virtual machines come with only the selected operating system installed, and the system can be configured with all the required packages and applications. Other solutions provide prepackaged system images that already contain the software stack required for the most common uses: Web servers, database servers, or LAMP stacks. Besides basic virtual machine management, additional services are generally provided, including SLA resource-based allocation, workload management, support for infrastructure design through advanced Web interfaces, and the ability to integrate third-party IaaS solutions.

Figure 4.2 provides an overall view of the components forming an Infrastructure-as-a-Service solution. It is possible to distinguish three principal layers: the physical infrastructure, the software management infrastructure, and the user interface. At the top layer the user interface provides access to the services exposed by the software management infrastructure. Such an interface is generally based on Web 2.0 technologies: Web services, RESTful APIs, and mash-ups. These technologies allow either applications or final users to access the services exposed by the underlying infrastructure. Web 2.0 applications allow developing full-featured management consoles completely hosted in a browser or a Web page. Web services and RESTful APIs allow programs to interact with the service without human intervention, thus providing complete integration within a software system. The core features of an IaaS solution are implemented in the infrastructure management software layer. In particular, management of the virtual machines is the most important function performed by this layer. A central role is played by the scheduler, which is in charge of allocating the execution of virtual machine instances. The scheduler interacts with the other components that perform a variety of tasks:


Figure 4.2. Infrastructure-as-a-Service reference implementation.

The pricing and billing component takes care of the cost of executing each virtual machine instance and maintains data that will be used to charge the user.

The monitoring component tracks the execution of each virtual machine instance and maintains data required for reporting and analyzing the performance of the system.

The reservation component stores the information of all the virtual machine instances that have been executed or that will be executed in the future.

If support for QoS-based execution is provided, a QoS/SLA management component will maintain a repository of all the SLAs made with the users; together with the monitoring component, this component is used to ensure that a given virtual machine instance is executed with the desired quality of service.

The VM repository component provides a catalog of virtual machine images that users can use to create virtual instances. Some implementations also allow users to upload their specific virtual machine images.

A VM pool manager component is responsible for keeping track of all the live instances.

Finally, if the system supports the integration of additional resources belonging to a third-party IaaS provider, a provisioning component interacts with the scheduler to provide a virtual machine instance that is external to the local physical infrastructure directly managed by the pool.
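
As a rough sketch of how the scheduler might coordinate the components just listed when a user requests a new instance, consider the following Python fragment. Every class, method, and name here is invented for illustration; real IaaS stacks such as OpenNebula or Eucalyptus expose far richer interfaces.

```python
# Toy model of the scheduler/component interaction described above.
# All interfaces are hypothetical stand-ins for real IaaS components.

class Repository:                                  # VM repository
    def lookup(self, name): return {"image": name}

class Pool:                                        # VM pool manager
    def pick_host(self): return "host-01"

class Monitoring:                                  # monitoring component
    def track(self, inst): print("monitoring", inst)

class Billing:                                     # pricing and billing
    def start_metering(self, user, inst): print("billing", user, "for", inst)

class Reservation:                                 # reservation component
    def record(self, user, inst): print("reserved", inst, "for", user)

class Scheduler:
    """Allocates the execution of VM instances using the other components."""
    def __init__(self):
        self.repo, self.pool = Repository(), Pool()
        self.monitoring, self.billing = Monitoring(), Billing()
        self.reservation = Reservation()

    def launch(self, user, image_name):
        image = self.repo.lookup(image_name)       # fetch the image
        host = self.pool.pick_host()               # choose a live host
        instance = (host, image["image"])          # stand-in for a started VM
        self.reservation.record(user, instance)
        self.monitoring.track(instance)
        self.billing.start_metering(user, instance)
        return instance

Scheduler().launch("alice", "ubuntu-lamp")
```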

The bottom layer is composed of the physical infrastructure, on top of which the management layer operates. As previously discussed, the infrastructure can be of different types; the specific infrastructure used depends on the specific use of the cloud. A service provider will most likely use a massive datacenter containing hundreds or thousands of nodes. A cloud infrastructure developed in house, in a small or medium-sized enterprise or within a university department, will most likely rely on a cluster. At the bottom of the scale it is also possible to consider a heterogeneous environment where different types of resources—PCs, workstations, and clusters—can be aggregated. This case mostly represents an evolution of desktop grids where any available computing resource (such as PCs and workstations that are idle outside of working hours) is harnessed to provide huge compute power. From an architectural point of view, the physical layer also includes the virtual resources that are rented from external IaaS providers.

In the case of complete IaaS solutions, all three levels are offered as a service. This is generally the case with public cloud vendors such as Amazon, GoGrid, Joyent, Rightscale, Terremark, Rackspace, ElasticHosts, and Flexiscale, which own large datacenters and give access to their computing infrastructures using an IaaS approach. Other solutions instead cover only the user interface and the infrastructure software management layers; they require users to provide credentials to access third-party IaaS providers or to own a private infrastructure in which the management software is installed. This is the case with Enomaly, Elastra, Eucalyptus, OpenNebula, and specific IaaS (M) solutions from VMware, IBM, and Microsoft.

The proposed architecture represents only a reference model for IaaS implementations. It has been used to provide general insight into the most common features of this approach to providing cloud computing services and the operations commonly implemented at this level. Different solutions can feature additional services or omit support for some of the features discussed here. Finally, the reference architecture applies to IaaS implementations that provide computing resources, especially with regard to the scheduling component. If storage is the main service provided, it is still possible to distinguish the same three layers; the role of the infrastructure management software is then not to keep track of and manage the execution of virtual machines but to provide access to large infrastructures and implement storage virtualization solutions on top of the physical layer.


URL: https://www.sciencedirect.com/science/article/pii/B9780124114548000048

Cloud Hardware and Software

Dan C. Marinescu, in Cloud Computing (Second Edition), 2018

8.1 Challenges; Virtual Machines and Containers

Computing systems have evolved from single processors to multiprocessors, to multicore multiprocessors, and to clusters. Warehouse-scale computers (WSCs) with hundreds of thousands of processors are no longer a fiction, but serve millions of users, and are analyzed in computer architecture textbooks [56,228].

WSCs are controlled by increasingly complex software stacks. Software helps integrate a very large number of system components and contributes to the challenge of ensuring efficient and reliable operation. The scale of the cloud infrastructure combined with the relatively low mean-time to failure of the off-the-shelf components used to assemble a WSC make the task of ensuring reliable services quite challenging.

At the same time, long-running cloud services require a very high degree of availability. For example, 99.99% availability means that the services can be down for less than one hour per year (0.01% of 8,760 hours is about 53 minutes). Only a fair level of hardware redundancy combined with software support for error detection and recovery can ensure such a level of availability [228].

Virtualization. The goal of virtualization is to support portability, improve efficiency, increase reliability, and shield the user from the complexity of the system. For example, threads are virtual processors, abstractions that allow a processor to be shared among different activities, thus increasing its utilization and effectiveness. RAIDs are abstractions of storage devices designed to increase reliability and performance.

Processor virtualization, running multiple independent instances of one or more operating systems, was pioneered by IBM in the early 1970s and revived for computer clouds. Cloud virtual machines run applications inside a guest OS, which runs on virtual hardware under the control of a hypervisor. Running multiple VMs on the same server allows applications to better share the server resources and achieve higher processor utilization. The instantaneous resource demands of the applications running concurrently are likely to differ and complement each other, so the idle time of the server is reduced.

Processor virtualization by multiplexing is beneficial for both users and cloud service providers. Cloud users appreciate virtualization because it allows better isolation of applications from one another than the traditional process-sharing model. CSPs enjoy larger profits due to the low cost of providing cloud services.

Another advantage is that an application developer can choose to develop the application in a familiar environment and under the OS of her choice. Virtualization also provides more freedom for system resource management because VMs can be easily migrated. The VM migration proceeds as follows: the VM is stopped, its state is saved as a file, the file is transported to another server, and the VM is restarted.
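
The four-step migration sequence can be sketched as follows. This is a toy illustration, not a hypervisor API: the VM class, the paths, and the pickle-based state file are stand-ins for the opaque state files and transports that real platforms use.

```python
# Sketch of the cold (non-live) migration sequence described above.
# All names and the state-file format are invented for illustration.

import os, pickle, shutil, tempfile

class VM:
    def __init__(self, name):
        self.name, self.running = name, False
    def stop(self):
        self.running = False
    def start(self):
        self.running = True

def migrate(vm, src_dir, dst_dir):
    vm.stop()                                     # 1. stop the VM
    state_file = os.path.join(src_dir, vm.name + ".state")
    with open(state_file, "wb") as f:             # 2. save its state as a file
        pickle.dump(vm, f)
    moved = shutil.copy(state_file, dst_dir)      # 3. transport it to another server
    with open(moved, "rb") as f:                  # 4. restore the state and restart
        restored = pickle.load(f)
    restored.start()
    return restored

src, dst = tempfile.mkdtemp(), tempfile.mkdtemp()
vm = VM("web01")
vm.start()
print(migrate(vm, src, dst).running)  # True: the copy now runs "on" dst
```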

On the other hand, virtualization contributes to increased complexity of the system software and has undesirable side effects on application performance and security. Processor sharing is now controlled by a new layer of software, the hypervisor, also called a Virtual Machine Monitor. It is often argued that a hypervisor is more compact software, a few hundred thousand lines of code versus the millions of lines of code of a typical OS, and is therefore less likely to be faulty.

Unfortunately, though the footprint of the hypervisor is small, a server must run a management OS in addition to the hypervisor. For example, Xen, the hypervisor used by AWS and others, initially invokes Dom0, a privileged domain that starts and manages unprivileged domains called DomU. Dom0 runs the Xen management toolstack, is able to access the hardware directly, and provides Xen with virtual disks and network access for guests.

Containers. Containers are based on operating-system-level virtualization rather than hardware virtualization. An application running inside a container is isolated from applications running in other containers, and both are isolated from the physical system where they run. Containers are portable, and the resources used by a container can be limited. Containers are more transparent than VMs and thus easier to monitor and manage. Containers have several other benefits, including the following:

1. Streamline the creation and deployment of applications.

2. Decouple applications from the infrastructure; application container images are created at build time rather than deployment time.

3. Support portability; containers run independently of the environment.

4. Support application-centric management.

5. Favor a deployment philosophy in which applications are broken into smaller, independent pieces that can be managed dynamically.

6. Support higher resource utilization.

7. Lead to predictable application performance.

Containers were initially designed to support the isolation of the root file system. The concept can be traced back to the chroot system call, implemented in Unix in 1979, which (i) changes the root directory for the running process issuing the call and for its children, and (ii) prohibits access to files outside the directory tree. Later, BSD and Linux adopted the concept, and in 2000 FreeBSD expanded it and introduced the jail command. The environment created with chroot was used to create and host a new virtualized copy of the software system.
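
The chroot primitive is still available today; the following minimal Python sketch uses the real os.chroot call (Unix only, and it requires root privileges). The jail path is hypothetical, and a usable jail must first be populated with whatever binaries and libraries the confined process needs.

```python
# chroot-style root filesystem isolation via the real os.chroot call.
# Unix only; requires root. The jail path below is a hypothetical example.

import os

def enter_jail(new_root):
    os.chroot(new_root)  # '/' now refers to new_root for this process
    os.chdir("/")        # leave no open handle to the old directory tree

# enter_jail("/srv/jail")
# open("/etc/passwd")    # would now resolve to /srv/jail/etc/passwd
```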

Container technology has emerged as an ideal solution combining isolation with increased productivity for application developers, who no longer need to be aware of the details of cluster organization and management. Container technology is now ubiquitous and has a profound impact on cloud computing. Docker's containers gained widespread acceptance for their ease of use, while Google's Kubernetes is performance-oriented.

Cluster management systems have evolved, and each system has benefited from the experience gathered from the previous generation. Mesos, a system developed at U.C. Berkeley, is now widely used by more than 50 organizations and has also morphed into a variety of systems, such as Aurora used by Twitter, Marathon offered by Mesosphere, and Jarvis used by Apple. Borg, Omega, and Kubernetes are the milestones in Google's cluster management development effort discussed in this chapter.


URL: https://www.sciencedirect.com/science/article/pii/B978012812810700011X

Choosing the Right Solution for the Task

In Virtualization for Security, 2009

Hardware Virtualization

Computer processors that offer capabilities to run multiple virtual machines simultaneously are considered a form of hardware virtualization. These capabilities improve a virtualization platform's ability to switch from running one virtual machine to another. Examples of hardware virtualization technologies include Intel Virtualization Technology (VT) and AMD Virtualization (AMD-V). Some virtualization platforms, such as Microsoft Hyper-V, require these virtualization extensions in order to run.
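
One common way to check for these extensions on a Linux host is to look for the CPU feature flags they advertise: "vmx" for Intel VT and "svm" for AMD-V. The sketch below reads /proc/cpuinfo; other operating systems expose the same information through different tools.

```python
# Detect the hardware virtualization extensions named above on Linux/x86:
# Intel VT-x advertises the 'vmx' CPU flag, AMD-V the 'svm' flag.

def hw_virt_support(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        flags = {w for line in f if line.startswith("flags") for w in line.split()}
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return None

print(hw_virt_support() or "no hardware virtualization extensions visible")
```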

Tools & Traps…

Management

Development of virtualization technology began in the early 1960s but has skyrocketed in the past 10 years as it has been applied to the ubiquitous x86 line of processing technology. In that time there has been a renaissance of virtualization technology such as hypervisors and hardware virtualization, but what has become increasingly apparent is that management technology is the largest driver of virtualization solutions. The more you work with virtualization, the more you will want to script, automate, and manage virtualization deployments. When you are considering what type of virtualization technology to use, make sure that there is a strong programmatic interface to the platform. A virtualization platform with a strong application programming interface allows you to customize how the solution works and enables open source projects as well as third party vendors to develop new and innovative solutions for the platform.


URL: https://www.sciencedirect.com/science/article/pii/B9781597493055000025

Virtualization on embedded boards as enabling technology for the Cloud of Things

B. Bardhi, ... L. Taccari, in Internet of Things, 2016

6.2.3 KVM ARM virtualization

KVM stands for Kernel-based Virtual Machine. KVM is a Type-2 hypervisor based on the Linux kernel that supports a variety of processors with hardware virtualization extensions. KVM was merged into the Linux kernel in 2007 and over the years was ported from x86 to a number of different architectures, including PowerPC and ARM. KVM consists of a loadable kernel module that provides the core virtualization infrastructure, plus processor-specific modules. Using them, the Linux kernel acts as a host that can run multiple VMs, each with private virtualized hardware.

In the ARM port, KVM introduces split-mode virtualization, allowing the hypervisor to split its execution across CPU modes [19]. This means that KVM can use the Hyp mode provided by ARM processors with hardware virtualization capabilities. The hypervisor is split into low-visor and high-visor components. The low-visor runs in Hyp mode, deals directly with the hardware, and manages interrupts and the isolation of execution contexts. The high-visor runs in kernel mode and uses the Linux kernel to execute operations that do not directly need access to Hyp mode.

As a Type-2 hypervisor, KVM lets the VMs use the real host processor (thus being transparent to them), using context switches to alternate the host and the VMs on the processor. The role of the hypervisor is therefore to save and restore the state of the host and/or VMs during the context switches. On the ARM architecture, during these operations the Hyp stack is used to store register contents, and the Stage-2 page table base register is modified according to the VM or host that is to be executed. On every architecture, interrupts may be trapped, depending on which kernel is going to be executed. KVM uses the Stage-2 translation page tables to control the memory allocated to each VM, which simplifies the memory virtualization architecture. I/O virtualization is based on load and store operations to MMIO device regions; the Stage-2 translation makes it impossible for a VM to use the physical devices directly. Finally, KVM virtualizes interrupts, using the kernel to trap physical-device interrupts to Hyp mode and forwarding them to the VMs as virtual interrupts. Timer virtualization is based directly on ARM hardware virtualization features, allowing VMs to read timers and counters directly.
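
The KVM kernel interface itself is exposed through ioctl calls on /dev/kvm, and a minimal probe of it can be written in a few lines of Python. The two ioctl numbers below are from the stable KVM API (the API version has been 12 since it stabilized); the sketch assumes a Linux host with the KVM modules loaded and read access to /dev/kvm.

```python
# Minimal probe of the real KVM interface: /dev/kvm answers a
# KVM_GET_API_VERSION ioctl (stable value: 12), and KVM_CREATE_VM
# returns a file descriptor for a new, empty VM with no vCPUs yet.

import fcntl, os

KVM_GET_API_VERSION = 0xAE00  # _IO(KVMIO = 0xAE, 0x00)
KVM_CREATE_VM       = 0xAE01  # _IO(KVMIO, 0x01)

kvm = os.open("/dev/kvm", os.O_RDWR)
print("KVM API version:", fcntl.ioctl(kvm, KVM_GET_API_VERSION, 0))
vm_fd = fcntl.ioctl(kvm, KVM_CREATE_VM, 0)  # 0 selects the default machine type
os.close(vm_fd)
os.close(kvm)
```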


URL: https://www.sciencedirect.com/science/article/pii/B978012805395900006X

Designing Your Exchange 2007 Server

Pierre Bijaoui, Juergen Hasslauer, in Designing Storage for Exchange 2007 SP1, 2008

Appropriate use Cases

What are appropriate use cases for running Exchange in a virtual environment? If in the future Microsoft changes the support policy, then you can run Exchange as a guest in hardware virtualization solutions without worries about support issues. Until such time, we recommend using virtualization for lab environments. This is what we do on a daily basis—it's a perfect test environment. There are only a few things that you cannot test in a virtual environment, such as VSS-based backups using VSS hardware providers.

Let's assume that you accept the reduced support or that Microsoft has already changed its support statement. What are the most appropriate Exchange server roles to be deployed in a virtual environment? You have already learned that there are issues with backup and restore of large mailbox servers. If you have strict mailbox quotas and the database size is manageable for LAN backups, then you can consider the mailbox server role. Other interesting options are using Standby Continuous Replication (SCR) to create database copies on a mailbox server running in a remote virtual environment. Once again, this is interesting for smaller deployments; if you have to activate the SCR target, then the virtual machine has to handle the resource requirements of an active mailbox server accessed by clients. For small environments you can create a cost-efficient recovery data center using virtualization.

CAS or HT servers are a better fit compared to mailbox servers. These roles do not hold large amounts of data that you have to back up on a daily basis. You might only back up servers with the CAS and HT role after a configuration change. The I/O demands of these roles are rather low compared to a mailbox server. So you can consider the CAS and HT roles.

If your RTO allows it, then you can think about running a small mailbox server in a virtual environment. We do not doubt that a VMware ESX server can provide adequate performance for a mailbox server during a regular user workload. It is the lack of appropriate backup and recovery methods that is the reason you should think twice about whether it is a good idea to run a large mailbox server in a virtual machine.


URL: https://www.sciencedirect.com/science/article/pii/B9781555583088000053

Introduction

Erez Metula, in Managed Code Rootkits, 2011

Why Do You Need This Book?

This book covers application-level rootkits and other types of malware hidden inside the application VM runtime. It is the first book on this subject, covering a concept rather than a vulnerability—a problem that won't go away by simply installing a missing patch.

Tip

Do not confuse the application-level VM with the OS-level VM. The application VM provides a platform-independent programming environment for processes, whereas the OS VM provides hardware virtualization for execution of a complete operating system.

Most of this book was written from the attacker's point of view, to teach you (one of the “good guys”) what the bad guys probably already know. Part II of the book covers techniques for developing and deploying MCRs. We'll cover the basics of managed code environments, and move on to malware deployed as managed code inside the VM. We'll also talk about practical problems the attacker needs to resolve when deploying malware on your system.

Attackers aren't the only ones who can employ MCR techniques for tasks such as manipulating the runtime, as we'll be covering in Part II. You can use these techniques to create your own version of a VM—for example, to create a subclass of a VM that is dedicated to solving issues with security and performance, fixing bugs, and basically doing anything you want your VM to do. The same techniques used to deploy a backdoor, for example, can be used to deploy security mechanisms for creating a “hardened” VM. It all depends on the user and his intentions.

Note

Proliferation of managed code environments in the future could potentially raise the significance of this kind of research.

How This Book Is Organized

Before digging into the details of MCRs, let's review the book's structure. The book is divided into four main parts, titled “Overview,” “Malware Development,” “Countermeasures,” and “Where Do We Go from Here?”

Part I: Overview

In Part I of the book, which comprises this chapter and Chapter 2, you'll receive an overview of MCRs. In this chapter, we'll explore managed code environment models and how they use application VMs so that we can understand how managed code can be related to rootkits. In Chapter 2, we'll discuss attack scenarios and discover why MCRs are attractive to attackers.

Part II: Malware Development

In Part II, which comprises Chapters 3 through 8, you'll learn all about MCR development, from analysis to successful deployment. You'll do that while focusing on interesting MCR attack vector scenarios—from backdooring authentication forms, to deploying secret reverse shells inside the VM, performing DoS attacks, and stealing encryption keys, among other scenarios.

We'll start in Chapter 3, where we'll look at what tools are used to produce and deploy MCRs. Then we'll move on to Chapter 4, where we'll demonstrate how you can change the meaning of a programming language, thereby forcing the language grammar to change and creating different meanings for keywords.

Next, in Chapter 5, we'll discuss how to manipulate the runtime, before moving on to Chapter 6, where we'll go over the steps required to strategically develop an MCR, along with the ability to extend the language grammar by adding a new malware API to the language via function injection.

Next, we'll take a look in Chapter 7 at ReFrameworker, a language modification tool that helps tremendously with the intense process of deploying an MCR.

We'll round out Part II with Chapter 8 and a discussion of advanced topics related to MCR deployment and language manipulation.

Part III: Countermeasures

Part III, which consists of Chapter 9, deals with the possible countermeasures you can deploy to protect yourself from an MCR.

We'll start with a discussion of how MCRs are everybody's problem, from developers to system administrators to end users, and what we can do to minimize the risks associated with MCRs.

We'll also talk about technical solutions, focusing on prevention, detection, and response tactics.

Part IV: Where Do We Go from Here?

Part IV of the book, which consists of Chapter 10, provides a gateway for further research. Specifically, we look at how MCR-like techniques can be applied as an alternative problem-solving approach to creating more secure runtimes, performing runtime optimizations, and so on. We'll also see how to use ReFrameworker to help us in these tasks.

How This Book Is Different from Other Books on Rootkits

Most malware books are related to unmanaged (native) code, such as assembly, C, or C++, and cover malware topics from an OS point of view.

In this book, we talk about high-level attacks developed in intermediate languages (i.e., languages that are executed by an application VM). This book covers those attacks from an application-level point of view. Specifically, in Part II, we talk about attacking mechanisms inside the applications rather than looking at the system as a whole.

Also, we focus on three popular runtimes based on an application VM—the .NET CLR, the Java JVM, and Android Dalvik, which we'll use in case studies to demonstrate the concepts and ideas expressed in this book. Since the concept we cover is not tied to a specific OS or VM, it is intended to serve as a stepping-stone for research of other platforms as well.

Note

Although the technical details of implementing MCRs differ from one runtime environment to another, the methods stay the same.

Application VMs and managed code environments are becoming increasingly important and are often seen today as a better option for new software projects, whether in .NET, Java, or some other platform based on managed code concepts. The VM software layer provides many facilities, such as exception management, memory management, and garbage collection, taking care of runtime exceptions, memory allocation, cleanup, disposal, and addressing. With application VMs and managed code environments, the significance of critical security problems such as buffer overflows, heap overflows, and array indexing errors, which have been major vulnerabilities in unmanaged code such as C/C++, is minimized. A buffer overflow or array indexing problem that could overwrite the return address on the stack, for instance, is now caught by the runtime, which throws an exception. Although it is still possible to mount a DoS attack, since the application can crash due to uncaught exceptions, the attack surface has been reduced drastically.
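
Python is itself a managed runtime, so the bounds-checking behavior just described can be demonstrated in one short, self-contained snippet (the buffer size and value here are arbitrary):

```python
# Managed-runtime bounds checking: an out-of-range write raises an exception
# instead of silently corrupting adjacent memory as it can in C/C++.

buf = [0] * 8
try:
    buf[10] = 0x41414141  # would overwrite neighboring memory in unmanaged code
except IndexError as e:
    print("runtime caught the overflow attempt:", e)
```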

Application VMs are even integrated deep into the OS. Take the Microsoft Windows family, for example, in which the .NET Framework and its associated CLR are performing more OS functions than ever before. As Table 1.1 shows, the .NET Framework has been preinstalled in the Windows family of operating systems since Windows Server 2003.

Table 1.1. Major .NET Framework Version List in Relation to Windows OS

.NET Framework Version    Release Date     Preinstalled in Windows
1.0                       February 2002    No
1.1                       April 2003       Windows Server 2003
2.0                       November 2005    No
3.0                       November 2006    Windows Vista, Windows Server 2008
3.5                       November 2007    Windows 7, Windows Server 2008 R2
4.0                       April 2010       No (not yet)

Similarly, the Java JVM is preinstalled in many OSes, such as Mac OS X, various Linux OS distributions, and the Solaris OS, among others.

In the future, Microsoft plans to release an entire OS developed in managed code. In this experimental OS, codenamed Singularity, which has been in development since 2003, the kernel, device drivers, and applications are all written in managed code. Although the lowest-level interrupt code is written in assembly language and C, most of the OS core, including the kernel, is written in managed code using the Sing# language (an extension of C#). For more information, please refer to the Microsoft Research homepage on the Singularity OS: http://research.microsoft.com/en-us/projects/singularity/.

Other interesting managed code OSes include the following:

Midori—Microsoft's future OS based on the Singularity research project

SharpOS—An open source General Public License (GPL) OS in C#

Cosmos—An open source Berkeley Software Distribution (BSD) OS in C#

In other words, rootkits considered user-mode rootkits today are the kernel or Ring 0 rootkits of the future.

Tip

MCRs implemented in a managed code OS are equivalent to the kernel-level rootkits of today's operating systems. When managed code OSes are used, MCRs will become even more important, since MCRs will go even deeper. Don't forget to review this book again when that day arrives.


URL: https://www.sciencedirect.com/science/article/pii/B9781597495745000015

Are We There Yet?

Christian B. Lahti, Roderick Peterson, in Sarbanes-Oxley IT Compliance Using Open Source Tools (Second Edition), 2007

Xen Virtual Machine

Xen is the premier open source product for server virtualization on the Linux platform. The open source version allows you to create Linux and NetBSD guests, providing the fastest and most secure virtualization software available for these architectures. XenSource and other vendors offer a Windows version and formal support. Xen allows you to increase your server utilization and lower your TCO by consolidating multiple virtual servers on a smaller number of physical systems, each with resource guarantees to ensure that its application-layer performance is met, hence enabling you to meet your SLAs. With Xen virtualization, a thin software layer known as the Xen hypervisor is inserted between the server's hardware and the operating system. This thin software layer provides an abstraction layer that allows each physical server to run one or more "virtual servers," effectively decoupling the operating system and its applications from the underlying physical server.

Once a virtual server image has been created it can run on any server that supports Xen. Some of the key features of Xen include:

Support for up to 32-way SMP guests.

Intel® VT-x and AMD Pacifica hardware virtualization support.

PAE support for 32-bit servers with over 4 GB of memory.

x86/64 support for both AMD64 and EM64T.

Extreme compactness – less than 50,000 lines of code. That translates to extremely low overhead and near-native performance for guests.

Live relocation capability – the ability to move VMs to any machine brings the benefits of server consolidation and increased utilization to the vast majority of servers in the enterprise.

Superb resource partitioning for CPU, memory, and block and network I/O – this resource protection model leads to improved security because guests and drivers are immune to denial-of-service attacks. Xen is fully open, and its security is continuously tested by the community. Xen is also the foundation for a multi-level secure system architecture being developed by XenSource, IBM, and Intel.

Extraordinary community support – the industry has endorsed Xen as the de facto open source virtualization standard, and it is backed by the industry's leading enterprise solution vendors.
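
For a flavor of what defining a guest looked like in practice, here is a minimal paravirtualized domU configuration of that era; the classic xm toolstack parsed such files as Python. All names, paths, and values below are hypothetical examples rather than a tested configuration.

```python
# Hypothetical minimal Xen PV guest configuration (classic xm toolstack,
# which evaluated these files as Python). Paths and values are examples only.

name   = "guest01"
kernel = "/boot/vmlinuz-2.6-xen"            # paravirtualized guest kernel
memory = 512                                 # MB, backed by Xen's resource guarantees
vcpus  = 2                                   # up to 32-way SMP guests are supported
disk   = ["phy:/dev/vg0/guest01,xvda,w"]     # physical backing device, writable
vif    = ["bridge=xenbr0"]                   # attach to the host bridge

# Started with something like:  xm create /etc/xen/guest01.cfg
```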


URL: https://www.sciencedirect.com/science/article/pii/B9781597492164000082
