Understanding Virtualization: Hardware and Cloud Platforms

Even a simple understanding of how social media platforms function, and of how people use them, reveals the need for additional computer hardware to satisfy a growing demand for computing power. There are at least two major requirements for a computer system to function properly and support a cloud platform. First, the computer hardware must have the capability to run a particular operating system, or OS. Second, the computer system must interpret the commands and requests that users make via the OS. As a consequence, a greater demand for functional computer systems traditionally required the purchase of more hardware. To reduce the cost of buying additional expensive hardware, a process known as virtualization was developed to meet these requirements and the increasing demand for greater computing power.

The Essence of Virtualization

In a nutshell, virtualization comes from the root word virtual, meaning to simulate the appearance of an object or idea. For example, when people talk about virtual reality, they are discussing a process of simulating a real-world setting, such as a person's bedroom or living room. As a result, when images of these areas are made available, one can see objects and shapes that resemble the said areas. However, these are all virtual or simulated images of the actual rooms. In a book entitled Virtualization Essentials, the author stated that this particular process is a form of technology that enables the user to create simulated environments or dedicated resources using a single computer system (Portnoy, 2015). Red Hat, a company renowned for developing world-class software, described the procedure as the use of specific software that allows the user to connect directly to the hardware while splitting the system into separate units known as virtual machines, or VMs (Red Hat, Inc., 2017, par. 1). The different VMs are not only separate but also distinguished from each other, so that a single VM is one distinct system.

The separation and distinction of each individual VM explain the cost-efficiency advantage of using virtualization technology, because each VM functions as if it owns a dedicated computer system. However, it has to be made clear that several VMs operate within one virtual environment, and all of these machines use only one physical hardware device. According to Matthew Portnoy, businesses prefer this configuration, especially if they are not yet sure how many servers are needed to support a project (2015). For example, if the company agrees to take on new projects, the server needs are satisfied by using virtual servers rather than by purchasing new physical servers. Thus, the appropriate use of the said technology enables a business enterprise or a corporation to maximize the company's resources.

The Virtualization of Computer Hardware

The need for virtualization came into existence because of the original design of the computer system, or personal computer. To provide a simplified illustration of the concept, imagine the form and function of a typical personal computer. A basic system requires a combination of hardware and software. The software handles the commands coming from the user, and then the same software utilizes the computer hardware to perform certain calculations. In this configuration, one person typing on a keyboard elicits a response from the computer hardware setup. In this layout, it is impossible for another person to access the same computer, because it is dedicated to one user.

The virtualization of computer hardware requires the use of software known as a hypervisor, for the purpose of creating a mechanism that enables the user or the system administrator to share the resources of a single hardware device. As a result, several students, engineers, designers, and other professionals may use the same server or computer system. This setup makes sense, because one person cannot utilize the full computing power of a single hardware device. Consequently, an ordinary person without the extensive knowledge of a network administrator can use the computer as if it were a dedicated machine powered by its own processor. Sharing the computing capability of a single piece of hardware maximizes the full potential of the said system.

It is important to point out that virtualization is not utilized solely for the purpose of reducing the cost of operations. According to Red Hat, the application of virtualization technology leads to the creation of separate, distinct, and secure virtual environments (Red Hat, Inc., 2017, par. 1). In other words, there are two additional advantages when administrators and ordinary users adopt this type of technology. First, the creation of distinct VMs makes it easier to isolate and study errors or problems in the system. Second, the creation of separate VMs makes it easier to figure out the vulnerability of the system or the source of external attacks on the system (Portnoy, 2015). Therefore, the adoption of virtualization technologies is a practical choice in terms of safety, ease of management, and cost-efficiency.

The Virtualization of the Central Processing Unit or CPU

A special software component known as the hypervisor enables the user to create virtualization within a computer hardware system. The specific target of the hypervisor is the computer's central processing unit, or CPU. Once in effect, the hypervisor unlocks the OS that was once linked to a specific CPU. After the unlocking process, the hypervisor in effect supports multiple operating systems, or guest operating systems (Cvetanov, 2015). Without this procedure, the original OS is limited to one CPU. In a traditional setup, there is a one-to-one relationship between the OS and the CPU. For example, a rack server requires an OS to function as a web server. Thus, when there is a need to build twenty web servers, it is also necessary to purchase the same number of machines. By placing a hypervisor on top of a traditional OS, one can enjoy the benefits of twenty web servers while using only the resources of one rack server. However, the success of the layout depends on the quality of the RAM and the processors. Thus, the quality of the VMs is dependent on the quality of the CPU.
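
To make the rack-server example concrete, the short sketch below estimates how many web-server VMs a single host could support given its RAM and CPU cores. The host and per-VM figures are illustrative assumptions rather than measurements taken from the sources cited above.

```python
# Rough capacity estimate for consolidating web servers onto one host.
# All figures below are illustrative assumptions.

HOST_RAM_GB = 256        # assumed RAM installed in the rack server
HOST_CORES = 48          # assumed physical cores
HYPERVISOR_RAM_GB = 8    # assumed overhead reserved for the hypervisor

VM_RAM_GB = 8            # assumed RAM per web-server VM
VM_VCPUS = 2             # assumed vCPUs per VM
CPU_OVERCOMMIT = 2.0     # assumed acceptable vCPU-to-core ratio

max_by_ram = (HOST_RAM_GB - HYPERVISOR_RAM_GB) // VM_RAM_GB
max_by_cpu = int(HOST_CORES * CPU_OVERCOMMIT) // VM_VCPUS

print(f"VMs limited by RAM: {max_by_ram}")
print(f"VMs limited by CPU: {max_by_cpu}")
print(f"Supported web-server VMs: {min(max_by_ram, max_by_cpu)}")
```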

Specialized software like the hypervisor primarily functions as a tool that manipulates the computer's CPU so that the CPU shares its processing power with multiple VMs. In other words, sharing resources among multiple virtual machines is not part of the CPU's original design. However, in the article entitled How Server Virtualization Works, the author pointed out that CPU manufacturers are designing CPUs that are ready to interact with virtual servers (Strickland, 2017). One can argue that cutting-edge technology in new CPUs can help magnify the advantages that are made possible through virtualized environments. On the other hand, the ill-advised or improper use of deployment strategies created to virtualize CPUs can lead to severe performance issues.
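
As a quick way to see whether a given Linux host exposes these hardware virtualization extensions, one can inspect the CPU flags. The sketch below simply looks for the Intel (vmx) or AMD (svm) flags in /proc/cpuinfo; it assumes a Linux host and is meant only as an illustration.

```python
# Check whether the CPU advertises hardware virtualization extensions
# (Intel VT-x appears as the "vmx" flag, AMD-V as "svm") on a Linux host.

def has_hw_virtualization(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

if __name__ == "__main__":
    print("Hardware virtualization supported:", has_hw_virtualization())
```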

It is imperative to ensure the flawless operation of VMs. One can argue that a major criterion in measuring the successful implementation of a virtualized CPU is a system that functions the same way it did before its processor was shared by multiple VMs. In other words, it is imperative that users are unable to detect the difference between an ordinary computer and one that utilizes shared resources via the process of virtualization. In Portnoy's book, he described the importance of figuring out the virtualization technology that best fits the organization's needs. Three of the most popular virtualization technologies available in the market are Xen, VMware, and QEMU (Portnoy, 2015). The following pages describe how network administrators use Xen's hypervisor.

Organizing a Xen Virtual Machine

Xen's virtualization software package directs the hypervisor to interact with the hardware's CPU. In the book entitled Getting Started with Citrix XenApp, the author highlighted the fact that the hypervisor manipulates the CPU's scheduling and memory-partitioning capability in order to properly manage the multiple VMs using the hardware device (Cvetanov, 2015). In organizing a Xen virtual machine, the network administrator must have extensive knowledge of at least three major components: the Xen hypervisor, Domain 0, and Domain U. Although the importance of the hypervisor has been outlined earlier, it is useless without Domain 0, otherwise known as Domain zero or simply the host domain (Cvetanov, 2015). In the absence of Domain 0, Xen's virtual machines are not going to function correctly. It is the job of Domain 0, as the host domain, to initiate the process and to pave the way for the management of Domain U, known as DomU or simply as the unprivileged domains (Cvetanov, 2015). Once Xen's Domain 0 is activated, it enables users of VMs to access the resources and capabilities of the hardware device.
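
For illustration, the hypothetical sketch below uses the libvirt Python bindings to connect to a Xen host and list Domain 0 alongside the guest domains. The connection URI, the domain names, and the availability of the libvirt bindings are assumptions made for the example, not details taken from the sources cited above.

```python
# List Xen domains on a host via libvirt (a sketch; assumes the libvirt
# Python bindings are installed and a Xen hypervisor is running).
import libvirt

conn = libvirt.openReadOnly("xen:///system")  # assumed connection URI
try:
    for dom in conn.listAllDomains():
        state, max_mem_kb, mem_kb, vcpus, _ = dom.info()
        role = "Domain 0 (host domain)" if dom.ID() == 0 else "Domain U (guest)"
        print(f"{dom.name():<20} {role:<25} vCPUs={vcpus} mem={mem_kb // 1024} MiB")
finally:
    conn.close()
```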

Xen's Domain 0 is actually a unique virtual machine that has two primary functions. First, it has special rights when it comes to accessing the CPU's resources and other aspects of the computer hardware. Second, it interacts with other VMs, especially those classified as PV and HVM Guests (Cvetanov, 2015). Once the host domain is up and running, it enables the so-called unprivileged VMs to make requests, such as requests for network support and local disk access. These processes are made possible by the existence of the Network Backend Driver and the Block Backend Driver.

All Domain U VMs have no direct access to the CPU. However, there are two types of VMs under the Domain U label: Domain U PV Guests and Domain U HVM Guests. The Domain U PV Guest is different because it was designed to know its limitations with regard to accessing the resources of the physical hardware. Domain U PV Guests are also aware of the fact that there are other VMs utilizing the same hardware device. This assertion is based on the fact that Domain U PV Guest VMs are equipped with two critical drivers, the PV Network Driver and the PV Block Driver, which the VMs employ for network and disk utilization. On the other hand, Domain U HVM Guests do not have the capability to detect the presence of a setup that allows the sharing of hardware resources.

In a simplified process, the Domain U PV Guest communicates with the Xen hypervisor via Domain 0. When a network or disk request is made, the PV Block Driver linked to the Domain U PV Guest receives the request to access the local disk in order to write a specific set of data. This procedure is made possible by the hypervisor directing the request to a specific area of local memory that is shared with the host domain. The conventional design of the Xen software features the less than ideal process known as the event channel, wherein requests go back and forth between the PV Block Backend Driver and the PV Block Driver. However, recent innovative changes enable the Domain U Guest to access the local hardware without going through the host domain.
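
As a purely conceptual illustration of the split-driver flow described above (frontend driver in the guest, shared memory, backend driver in Domain 0), the toy Python sketch below passes a disk-write request through a shared queue. It models the idea only and does not use any real Xen interfaces.

```python
# Toy model of Xen's split-driver I/O path: a PV frontend driver in a
# Domain U guest places a request on a shared ring, and the backend
# driver in Domain 0 services it against the physical disk.
from collections import deque

shared_ring = deque()          # stands in for the shared-memory ring
event_channel = []             # stands in for event-channel notifications

def frontend_write(sector, data):
    """Domain U PV block frontend: queue a request and notify Dom0."""
    shared_ring.append({"op": "write", "sector": sector, "data": data})
    event_channel.append("request-pending")

def backend_service(physical_disk):
    """Domain 0 PV block backend: drain the ring and touch real storage."""
    while event_channel:
        event_channel.pop()
        req = shared_ring.popleft()
        physical_disk[req["sector"]] = req["data"]
        print(f"Dom0 backend wrote sector {req['sector']}")

disk = {}
frontend_write(42, b"guest data")
backend_service(disk)
```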

An Example of a Virtual Environment

Fig. 1. Oracle's application of the Xen technology in running multiple virtual servers (Oracle Corporation, 2017).

Oracle's use of Xen technology to run a number of x86 servers provides an example of a virtual environment. In this example, the hypervisor and the Domain 0 virtual machine function like a lock and key. The Oracle VM labeled as domain zero initiates the process, and the hypervisor acts as the mediator that processes the requests from other VMs to utilize a single computer's CPU, RAM, network, storage, and PCI resources. Based on this illustration, one can see how a single server becomes the host to multiple VMs.

In the said configuration, one can see three additional benefits of using virtual environments. First, the user benefits from the power of consolidation, because a network administrator is able to combine the capability of several VMs using only one server, reducing the need for more space and eliminating the problem of the excess heat emitted by a large number of servers crammed into one area (Strickland, 2017). Second, it allows the administrator to experience the advantages of redundancy without the need to spend more money on several systems, because the VMs make it possible to run a single application in different virtual environments (Strickland, 2017). As a result, when a single virtual server fails to perform as expected, there are similar systems that were created beforehand to run the same set of applications. Finally, network administrators are given the capability to test an application or an operating system without the need to buy another physical machine (Strickland, 2017). This added capacity is made possible by the fact that a virtualized server is not dependent on or linked to other servers. In the event that an unexpected error or bug in the OS leads to irreversible consequences, the problem is isolated in that particular virtual server and not allowed to affect the other VMs.

Virtualization in Cloud Platforms

The advantages of VMs in terms of cost-efficiency, redundancy, consolidation, and security are magnified when placed within the discussion of cloud platforms. Cloud platforms are physical resources provided by third parties or companies that are in the business of cloud computing. In simpler terms, an ordinary user does not need to buy his or her own storage facility in order to manage and secure important data. In the past, businesses and ordinary individuals had to acquire physical machines to store data. The old way of storing information was expensive and risky, because the user had to spend a lot of money acquiring the physical devices and was compelled to spend even more on their maintenance. In addition, users had to invest in the construction of an appropriate facility in order to sustain the business operation that depended on the said computer systems. Worse, in the case of man-made disasters and other unforeseen events, the data stored in the said devices could become inaccessible or useless. It is much better to have the capability to transmit or store critical data via a third-party service provider. However, in the absence of virtualization technologies, the party handling the storage facility has to deal with the same set of problems described earlier.

Cloud platforms utilize the same principles that were highlighted in the previous discussion. Cloud platforms were created to handle the tremendous demand for storage space and the use of added resources. The only difference is that the server is not located within the company building or the building that houses the company's management information system. The servers that host the VMs are located in different parts of the country or in different parts of the globe. Although the configuration is different because the user does not have full control of the host computer or the VMs, the same principles that made virtualization cost-efficient, reliable, and secure are still in effect. Consider, for example, the requirements of companies like Google and Facebook. Without the use of virtualization, the demand for storage space and additional computing power becomes unmanageable. However, with the appropriate use of virtualization technology, it is possible for multiple users to share resources when sending emails and accessing images that they store via cloud platforms. It is interesting to note that when users of Gmail or Facebook access these two websites, they are not conscious of the fact that they are utilizing a system of shared resources via a process known as virtualization.

Conclusion

Virtualization technologies came about after the limitations of the conventional computer design were recognized. In the old setup, a single user has exclusive access to the resources of a computer hardware device, yet typical usage does not require the full capacity of the CPU, RAM, storage, and networking capability of the computer system. Thus, virtualization technologies enabled the sharing of resources and maximized the potential of a single computer system. This type of technology allows users to enjoy the benefits of consolidation, redundancy, safety, and cost-efficiency. The technology's ability to create distinct and separate VMs made it an indispensable component of cloud computing. As a result, network administrators, programmers, and ordinary users are able to develop a system that runs the same set of applications on multiple machines. It is now possible not only to multiply the capability of a single computer hardware configuration, but also to test applications without fear of affecting the other VMs that are performing critical operations.

References

Cvetanov, K. (2015). Getting started with Citrix XenApp. Birmingham, UK: Packt Publishing.

Oracle Corporation. (2017). Web.

Portnoy, M. (2015). Virtualization essentials. Indianapolis, IN: John Wiley & Sons.

Red Hat, Inc. (2017). Web.

Strickland, J. (2017). How server virtualization works. Web.

Virtualization Versus Emulation

There are critical differences between virtualization and emulation that demand particular attention. Virtualization is a method of splitting a single physical device into several environments (Taylor, n.d.). Emulation is a way to execute processes designed for one type of system within another system with a different architecture (Taylor, n.d.). This paper will discuss the differences between these two concepts and their importance.

Virtualization and emulation may often be confused by a regular user, although they are fundamentally distinct. Virtual machines possess code that is sufficient to run on a computer, while an emulator uses an interpreter to converse with the already running operating system (Taylor, n.d.). Virtualization accesses the system's hardware resources directly, creating the potential for optimization (Hammad, 2021). In turn, emulation is much slower than virtualization since it translates its actions first (Hammad, 2021). The significant advantage of emulation is its cost efficiency, as virtual machines are often more demanding (Taylor, n.d.). Their usage differs, since virtualization is usually implemented to increase workload efficiency, while emulation is deployed when the product must be used or tested within a different environment (Hammad, 2021). Understanding these concepts makes it easier for a developer to adequately assess the resources necessary for work. Virtualization can take the form of several instances of an operating system on a machine, while emulation can be presented as software that behaves the way a real machine with the desired specifications would.
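
To make the distinction concrete, the toy sketch below contrasts the two ideas: an "emulated" path interprets each instruction of a foreign program one at a time, while a "virtualized" path simply runs native code directly. This is a conceptual illustration only, not how any particular hypervisor or emulator is implemented.

```python
# Conceptual contrast: emulation interprets foreign instructions one by
# one, while virtualization lets guest code run natively on the CPU.

def emulate(program, registers):
    """Interpret a tiny made-up instruction set (slow: one dispatch per op)."""
    for op, *args in program:
        if op == "LOAD":
            registers[args[0]] = args[1]
        elif op == "ADD":
            registers[args[0]] += registers[args[1]]
    return registers

def run_virtualized(native_fn, *args):
    """Stand-in for virtualization: the code runs directly, no translation."""
    return native_fn(*args)

# "Guest" program for the emulator: r0 = 2; r1 = 3; r0 = r0 + r1
guest_program = [("LOAD", "r0", 2), ("LOAD", "r1", 3), ("ADD", "r0", "r1")]
print(emulate(guest_program, {}))                 # {'r0': 5, 'r1': 3}
print(run_virtualized(lambda a, b: a + b, 2, 3))  # 5, executed natively
```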

In conclusion, virtualization and emulation are two different processes that are defined by their usage of the system's resources, their cost-efficiency, their role in testing environments, and their general purpose. Machines that run virtualization have these systems installed directly and use more resources, while emulation is deployed on top of an existing structure, making it easier to manipulate. Both can serve as testing environments, but each has its specific application.

References

Hammad, M. (2021). GeeksforGeeks.

Taylor, K. (n.d.). HiTechNectar.

Impact of Virtualization Technologies on Data Management in Organizations

Introduction

Technology is completely changing the face of the earth. New inventions are being introduced into society, making work much easier. Four factors have always motivated the work of scientists. The first factor for which scientists have always struggled to find a solution is time. Time is very important in the current society.

Every second counts, and people are struggling to find the best way in which they can manage time. Scientists are therefore concerned with coming up with machines that would help in the management of time.

The second factor is comfort. According to Menken (2008, p. 124), as technology advances, a human being becomes lazier. Individuals in the current society like comfort in every single activity they do. They prefer machines that would help them work with the least effort.

The third factor is space. The population of the world is becoming larger with every passing minute. The level of literacy is also increasing. This means that more people are being hired in various offices in various firms. This reduces the space available. As firms grow, the numbers of potential customers increase.

Therefore, there would be a need for increased space. Every additional space is important, however little it may be. Scientists have the responsibility of coming up with strategies that would ensure that the little space available is fully utilized. The last factor has been security.

There is a need to ensure that the world becomes safer. Scientists work tirelessly to come up with technologies that would enhance the safety of human beings. These are the challenges that a typical firm faces in its operations.

Virtualization technologies therefore came at the right time. They came when they were needed most, not only by large corporations but also by individuals. Companies require systems that would ensure the security of their data in this increasingly competitive market environment.

They need a system that would be fast in entering, saving, transferring and retrieving data (Shroff 2010, p. 78). These firms also need a system that would enhance comfort and reduce space in the office. The system should also allow data sharing in the relevant departments. All these factors play a major role in enhancing this technology.

Various technological firms currently offer solutions in this field. VMware is one of the leading technology companies that have come out strongly in the market. One of the most popular solutions that this firm offers is cloud computing.

This technology has become very popular because of the space it saves. It allows firms to store large amounts of data without acquiring the physical hardware. This has a number of economic benefits.

One of the customers of VMware, Abacus International Pte Ltd, narrates how the technology has helped it. The application of cloud computing has helped it become efficient in the market and hence very competitive.

Advantages and Disadvantages of Moving to a Virtual Environment/Cloud

According to Vagadia (2012, p. 45), the business society is moving to the virtual world and this cannot be avoided. Technology is moving very fast. The above four factors have made the physical environment a little too clumsy, especially when it concerns data management.

Space for keeping large files is scarce. These firms must come up with virtual technology where space, speed, and efficiency will not be an issue. The virtual technology comes with both advantages and disadvantages, as discussed below.

Advantages

Virtualization comes with a number of advantages. It is one of the most current inventions in the field of technology. One of the main advantages of this technology is the reduced amount of hardware it requires. This technology allows all the servers to be integrated and operated using a single physical computer.

This has a threefold effect. The first benefit of this integration is that the space used by the system is reduced. The other benefit is that the cost of power and hardware is reduced. A single computer would use less power than five of them running.

Moreover, the firm would only spend funds on buying a single computer and its maintenance would be cheap. The third benefit of this technology is that it enhances the process of sharing data. Various departments would find it easy to share data without the challenge of using the internet.

The workforce that this firm would require for the management of the data and the maintenance of the hardware would be reduced (Shroff 2010, p. 78). This would reduce the cost of operation in the firm. The above benefits are clearly elaborated in the case study below on Abacus. This technology transformed the operations of this firm completely. The firm became more efficient in its operations, hence more profitable.
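
The hypothetical calculation below illustrates the kind of saving consolidation can produce by comparing the annual power cost of several stand-alone servers with that of a single, larger host. Every figure (server count, wattage, electricity price) is an assumption made for the example.

```python
# Illustrative power-cost comparison: five stand-alone servers versus
# one consolidated virtualization host. All inputs are assumptions.

HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.15           # assumed electricity price in dollars

standalone_servers = 5
watts_per_server = 350         # assumed average draw per physical server
consolidated_host_watts = 600  # assumed draw of one larger host running VMs

def annual_cost(watts):
    return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

before = annual_cost(standalone_servers * watts_per_server)
after = annual_cost(consolidated_host_watts)
print(f"Before consolidation: ${before:,.0f} per year")
print(f"After consolidation:  ${after:,.0f} per year")
print(f"Estimated saving:     ${before - after:,.0f} per year")
```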

Disadvantages

This technology comes with various advantages, as discussed above. However, any firm that is planning to implement it should not ignore certain disadvantages. Understanding the disadvantages is crucial, as it enables the implementing team to determine whether the firm is ready to withstand the risks associated with the project.

It carries high risks when the physical hardware supporting the whole system fails. Having large amounts of data from various servers supported by a single piece of physical hardware is a big advantage, but the failure of that hardware would bring the whole system to a standstill and may affect the operations of the firm. This would cost the firm because it would delay service delivery.

The system is very complicated. The virtualization process is very challenging for people who lack knowledge of VMware. It would require the firm to hire employees who have a deep understanding of this software for the process to be successful. Finding such technocrats may not be easy. The limited supply of such specialists in the labor market makes them very expensive.

One of the biggest challenges associated with cloud computing is security. Security of data is one of the leading concerns as firms use the emerging technologies. However, the process of storing data in a cloud puts the firm in a very dangerous position.

The system provider may compromise the system and access customers' data. If such data reaches a competitor, it would be very easy for the competitor to come up with counter-strategies that would edge the company out of the market. There are cases where an individual would hack into the system and tamper with the information of the firm.

When such occurrences take place, it becomes very easy for the firm to fall despite its financial strength. Some applications may not support this technology.

There are cases where the system would act in a different manner without giving any clues. This is very dangerous, as the result may be a loss of data. Although the firm in this case study has not been directly affected by this, the threat cannot be ignored as it may strike at any minute.

The Economic Benefits Achieved through Application of Cloud Computing

A Case Study of Virtualization at Abacus International Pte Ltd.

Case Study: Abacus International Pte Ltd Cuts its Server Costs by Implementing VMware Software.

Abacus is a large firm that operates globally. This firm was started in 1988 and has its headquarters in Singapore. The firm operates in the tours and travel industry. It enables travel agencies to book airlines to various destinations online (Abacus International Pte Ltd, 2005, p. 1).

The parent firm, Abacus International Pte Ltd, started this firm to help it operate in an industry that was getting increasingly competitive. The parent firm wanted to manage local competition that had been complicated by the entry of new airlines.

Moreover, other large airline firms dominated the industry. It had to select a niche and devise ways through which it could protect this market. It was therefore necessary to start another firm that would increase the competitiveness of the firm by focusing on a market segment.

This firm was very successful, and, as Abacus International Pte Ltd (2005, p. 2) reports, it was granted semi-autonomous operations in 1990. However, the management of this firm came to understand that many of the customers who visited the facility looking for its services were customers of other airlines.

There were travel agencies that needed its services in order to book other airlines, such as British Airways. At first, the firm turned down their requests. However, the level of inquiry increased.

It became clear to this firm that it could no longer ignore this market. After consulting with its parent firm, Abacus expanded its operations. This included assisting customers in booking any other airline that the customer would prefer.

This industry is very lucrative. However, it is very competitive. Various other firms in this industry have come out with strong strategies on how to remain competitive in the market. One of the most important factors that would determine competitiveness of a firm in this industry is communication.

Information flow is very important in this sector. For a firm to be competitive in this industry, it would be expected to communicate with its customers effectively and pass this information to the desired airlines. The industry heavily relies on repeat purchases.

Most of the customers are regular travelers who go around the world as tourists (Abacus 2012, p. 3). It would therefore be prudent to offer them maximum satisfaction in order to convince them to make regular purchases. Abacus was having problems with its communication and data storage system. There were cases where customer requests were not appropriately followed up on due to technical hitches.

The management of this firm decided to use VMware to help enhance its communication and data storage system. The management integrated its servers and hired a technocrat who would ensure that the system runs normally. The result of this move was a massive success.

The expenses on energy were reduced because the amount of hardware was reduced. The management also reduced the number of employees in this department because work was made much easier.

The biggest impact of this technology was its efficiency in handling customer needs. The communication process was improved. It could easily communicate with customers, code communication appropriately, and pass the coded message to the respective airlines that the customer preferred.

The airline would give a confirmation that the booking process was complete. The firm would then reach back to the customer with all the relevant information about his or her purchase. According to Abacus (2012, p. 2), this saw the firm increase its customer base by over twenty percent in the larger Asia region. Revenues also increased because the size of the workforce required was reduced.

This has seen the firm succeed even during the recession. Currently, this firm is one of the largest companies in this industry in Asia. This system has increased its efficiency. For instance, the report by Abacus International Pte Ltd (2005, p. 3) states that the system saves 40 man-hours per day. The report further demonstrates how other areas of this field have been improved, including a reduction of migration by 50 percent.

It can be hypothesized that this system reduces the costs of operation by about 60 percent, given the figures in the report. The figure below shows hypothesized data on how the costs at the firm have been reduced. The cost benefits for this firm are illustrated in the comparative calculation below.

These benefits do not cover the increased revenues caused by the increased customer base. They are based on the cost of production.

Table 1: Cost of Production before and After Implementing VMware

| Cost item         | Cost Before ($) | Cost After ($) |
|-------------------|-----------------|----------------|
| Internal Labor    | 530             | 210            |
| Outsourced Labor  | 280             | 121            |
| Maintenance       | 187             | 114            |
| Hardware Purchase | 630             | 210            |
| Consultancy Cost  | 128             | 68             |
| Emergency         | 198             | 101            |
| Energy            | 240             | 112            |
| Total             | 2193            | 936            |

Source: Hypothesized data for this research
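
Since the table is hypothesized, the short calculation below simply verifies its totals and derives the implied percentage reduction in production cost; it uses only the figures from Table 1.

```python
# Verify the hypothesized cost table and compute the implied reduction.
cost_before = {"Internal Labor": 530, "Outsourced Labor": 280, "Maintenance": 187,
               "Hardware Purchase": 630, "Consultancy Cost": 128,
               "Emergency": 198, "Energy": 240}
cost_after = {"Internal Labor": 210, "Outsourced Labor": 121, "Maintenance": 114,
              "Hardware Purchase": 210, "Consultancy Cost": 68,
              "Emergency": 101, "Energy": 112}

total_before = sum(cost_before.values())   # 2193
total_after = sum(cost_after.values())     # 936
reduction = (total_before - total_after) / total_before * 100

print(f"Total before: ${total_before}, total after: ${total_after}")
print(f"Implied cost reduction: {reduction:.1f}%")   # roughly 57%
```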

The above data can best be represented in a graph. It is clear from the data that the cost of production has been reduced by roughly 57 percent, close to the hypothesized 60 percent. This is a marked increase in efficiency.

The bar graph below clearly shows that Abacus reduced its costs of operations by almost sixty percent following the implementation of this technology. It clearly demonstrates that this technology is beneficial to firms, especially in the current world where technology has redefined the way of life.

Graph: Cost of Production before and After Implementing VMware

Source: Derived from the above data

How Virtualization or Cloud Technologies Have Helped Businesses

The biggest concern for firms in the industry is continuity. The ability of the firm to withstand forces in the external environment is always very important. Profitability of the firm must be based on sustainability. Emerging technologies have resulted in big success for some firms, just as they have pushed others out of the market.

Based on how the management of a firm responds to the emerging technologies, the result can be either positive or negative for the firm. Virtualization technology is one of the best ways through which a firm can remain sustainable in the market. The above case about Abacus clearly demonstrates this. Three main factors are important in ensuring business continuity.

The factors are reduced cost of operation, strong base of loyal customers, and the ability to put to check market competition. These are some of the factors that virtualization has improved. The discussion above clearly demonstrates that virtualization is one of the best ways through which a firm can cut down on its costs of operation.

This would increase its profitability, making it strong enough to withstand the forces of competition in the market. The case study above also shows that this technology improves the quality of service that a firm offers to its customers.

Abacus was able to offer its customers a quality experience after implementing virtualization technologies. This would automatically lead to increased customer satisfaction, which is a key factor in developing a base of loyal customers.

With loyal customers and increased profitability, it would be easy for a firm to keep market competition in check. It would be able to monitor the actions of its competitors and come up with the best response strategies. These factors would maintain continuity (Huss 2010, p. 41).

The Four-Phased Methodology Designed To Create a Range of Virtual Infrastructure Solutions

The implementation of a range of virtual infrastructure must follow a very clear process for it to be successful. A four-phased methodology would be appropriate for ensuring that the process is implemented without any hitches. Abacus considered it a wise move to implement technology that was new in the industry.

The management followed the four-phased methodology in this implementation, and the result was a big success (Clark 2005, p. 85). The first step in this strategy would be initiation. At this stage, the concerned parties would bring forward the idea of implementing virtual technologies in the firm.

They will have to explain to all stakeholders the need for this technology in the firm. The biggest role would be to create the need for the project in the entire firm. The management plays a big role in this stage. The second stage would be planning. This task would involve the management and all the implementing parties within the firm.

Not all projects involving virtual technologies have succeeded. A number of projects have failed because of inadequate planning. All officers must come up with a proper plan that would help the firm succeed in this project. The management must have a clear understanding of the present situation of the firm, and the desired destination. The desired destination must be defined by the current capacity of the firm.

This is because the present forces would dictate the future of the firm. The planning must, therefore, be based on the current financial strength of the firm and its workforce. The third stage would be the execution of the project. All planned strategies would be put in place to ensure that the project achieves its mission.

It would involve the integration of servers and the installation of the necessary hardware and software in the system. The management of the firm must ensure that all officers are present so that they understand how the system should work.

The management must also work closely, at this stage, with the production and marketing department to ensure that the system is working to their advantage. The two departments are very important because they are directly involved in ensuring that customers get superior value from the products. The last stage would be inspection and maintenance of the system.

Technocrats would have to conduct regular inspections to ensure that the system is running with minimal problems. Good maintenance of the system starts with a deep understanding of the system.

All the staff members must have a deep understanding of the system. They should know how to operate the system in a way that would prolong its life. This way, cases of mechanical damage would be eliminated.

Evaluation of Hybrid Cloud Solutions and Cross-Platform Technologies

In the current business environment, a firm must be smart in dealing with its data. A hybrid cloud solution is one of the best strategies that a firm can consider implementing. In this strategy, a firm would make use of both private and public clouds. These two types of cloud technology would help the firm reduce the cost of managing data while still maintaining the secrecy of its strategies.

In this approach, a firm would collaborate with a provider of public cloud. The firm would determine which information to store in the public cloud, and which to store in a private cloud. It is always preferred that the firm stores all its sensitive data in the private cloud. All data relating to strategies that the firm is using in the market, and data relating to customers of the firm, should be kept in the private cloud (Hiles 2011, p. 67).

Through this, the firm would prevent competitors or any other adversary from accessing its sensitive information. General information of the firm that does not have serious implications if revealed to the public should be kept in the public cloud. Cross-platform technologies would be important to enhance the compatibility of the software and make the system more user-friendly.
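
A minimal sketch of the data-placement rule described above is given below: records flagged as sensitive (strategy or customer data) are routed to the private cloud, and everything else to the public cloud. The classification labels and storage back-ends are purely hypothetical.

```python
# Route records between private and public clouds based on sensitivity.
# The categories and storage back-ends are hypothetical placeholders.

SENSITIVE_CATEGORIES = {"strategy", "customer"}

private_cloud, public_cloud = [], []

def store(record):
    """Place a record according to the hybrid-cloud policy."""
    target = private_cloud if record["category"] in SENSITIVE_CATEGORIES else public_cloud
    target.append(record)

store({"category": "customer", "body": "client contact list"})
store({"category": "strategy", "body": "2024 pricing plan"})
store({"category": "press",    "body": "public product announcement"})

print("Private cloud holds", len(private_cloud), "records")
print("Public cloud holds", len(public_cloud), "records")
```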

Conclusion

Virtualization technologies are the future of data management in organizations. Emerging technologies are changing the approach taken in the business society. These new technological inventions are redefining the way business organizations should approach their strategic plans. It is evident from the above discussion that these technologies cannot be ignored.

Firms must come to appreciate the fact that the world is being transformed. Concerns such as comfort, security, speed, and efficiency are some of the defining factors behind the emerging technologies. It would require a firm to understand these forces in order to manage market competition. Virtualization offers a firm an opportunity to integrate its servers, hence eliminating the need for several physical servers in the firm.

This is important in ensuring that the organization's data is centralized and easily accessed by any member of the organization from any department. Information is crucial to the prosperity of a firm in the current competitive market.

Management of data within the organization through this technology would not only enhance the process, but also reduce the cost. The case study above shows that this technology has the capacity to reduce the costs of operation related to data management by about sixty percent. It is important to embrace the emerging technologies. It is only through this that a firm may ensure continuity in the market.

List of References

Abacus International Pte Ltd, 2005, Abacus International Cuts Server Costs as First Singapore-Based Firm to Implement VMware Virtual Infrastructure Software, News Release, VMware, Inc., Palo Alto.

Abacus, 2012, Abacus International Poised to Help Airlines and Agencies Capitalize on China Market Opening, News Release, Abacus, Palo Alto.

Clark, T 2005, Storage virtualization: technologies for simplifying data storage and management, Addison-Wesley, Upper Saddle River.

Hiles, A 2011, The definitive handbook of business continuity management, John Wiley & Sons, Chichester.

Huss, M 2010, Disaster Recovery Professional Business Continuity, Course Technology PTR, New York.

Menken, I 2008, A complete guide on virtualization, Emereo Pty, Melbourne.

Shroff, G 2010, Enterprise cloud computing: technology, architecture, and applications, Cambridge University Press, Cambridge.

Vagadia, B 2012, Strategic outsourcing: the alchemy to business transformation in a globally converged world, Springer, Berlin.

Virtualization Network Security

Virtualization has made a great impact on the development of IT technologies and network communication. Moreover, it is a great benefit from the point of view of saving investment in data centers. The implementation of virtualization provides certain security advantages to the environment.

In case of theft or loss of a device, the risk of losing important data is diminished thanks to centralized storage. If a virtual machine is affected by a virus attack, it can be recovered by returning it to a previous state. The implementation of desktop virtualization provides additional possibilities for operating system control by bringing it into accordance with the requirements of the organization. Virtualization decreases the possibility of system errors and improves the ways of resolving them. The possibility of role separation helps arrange the enterprise's activity in accordance with its requirements.
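
As an illustration of the "return to a previous state" recovery path, the hypothetical sketch below uses the libvirt Python bindings to take a snapshot of a VM and later revert to it. The connection URI, the domain name, and the assumption that the underlying hypervisor supports snapshots are all illustrative.

```python
# Snapshot-and-revert sketch using libvirt (assumes the libvirt Python
# bindings, a running hypervisor at the given URI, and snapshot support).
import libvirt

SNAPSHOT_XML = """
<domainsnapshot>
  <name>clean-state</name>
  <description>Known-good state before exposure</description>
</domainsnapshot>
"""

conn = libvirt.open("qemu:///system")          # assumed connection URI
dom = conn.lookupByName("desktop-vm-01")       # hypothetical VM name

# Take a snapshot of the known-good state.
dom.snapshotCreateXML(SNAPSHOT_XML, 0)

# ... later, after the VM is suspected to be compromised ...
snap = dom.snapshotLookupByName("clean-state", 0)
dom.revertToSnapshot(snap, 0)                  # roll the VM back
print("Domain reverted to its clean snapshot")

conn.close()
```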

In general, virtualization gives new security opportunities to enterprises. It creates another network “that is a hybrid between the established physically centred network and the new virtual or logical environment” (Virtualization Security, 2012). For the purposes of better network security, virtualization must be implemented simultaneously with the new virtual infrastructure. Virtualization has resulted in the creation of data service centers of a new generation, with improved efficiency and readiness for the most critical workloads.

One of the possible solutions for virtualization is Microsoft Hyper-V. It is a platform that permits isolated systems to share a single hardware platform.

With the help of virtualization technology, Hyper-V allows users to create a virtualized environment and to control it. During the installation of Hyper-V, the required components are installed.

Hyper-V provides the infrastructure that makes it possible to virtualize applications and workloads. It permits the fulfillment of different tasks aimed at increasing the effectiveness of the firm and cutting costs, for instance, the creation and extension of a private cloud. Hyper-V helps to expand the usage of common resources and to regulate their usage in accordance with changing requirements. Hyper-V also promotes the effective usage of the equipment: by concentrating servers and workloads on a small number of more powerful computers, it is possible to reduce resource consumption. The strategy of installing centralized virtual desktops promotes the faster fulfillment of business tasks and the increase of data security.

Another possible solution is VMware ESXi. It is a bare-metal hypervisor that is installed directly on the physical server and divides it into several virtual machines. All the virtual machines use the same resources and may work simultaneously. The hypervisor on the VMware ESXi platform is managed by means of remote control tools.

The management functions of the hypervisor are integrated into the VMkernel. Thereby, the volume of required disk space is decreased to about 150 MB. This fact significantly reduces the sensitivity of the hypervisor to attacks by malicious software and network threats, making it more reliable and secure.

The VMware ESXi architecture has few generic parameters and is more convenient to use. That is why a virtual infrastructure based on it is more convenient to service.

Reference List

Virtualization Security. (2012). Web.

Server Virtualization and Its Benefits for Company

In order to survive in the present-day highly competitive and fast-paced business world, companies have to be flexible and quick in adopting new technologies and strategies for further modernization. The globalized market has become largely reliant on the IT landscape, which enables organizations to achieve a competitive advantage through agility, scalability, server consolidation, cost optimization, and innovation. The primary objective of any company is to minimize expenses by using the available resources in the most efficient way possible. This and many other goals can be attained with the help of virtualization (Chowdhury & Boutaba, 2010).

Defined in the most general terms, virtualization is the introduction of a software abstraction layer (called a virtual machine monitor, or hypervisor) between the physical platform and the operating system. It is now applied in testing, training, production, and other kinds of environments in order to provide independent virtual resources (Metzler, 2011).

Besides the demands advanced by the world of IT, virtualization is implemented with a view to economy, as it helps reduce the power consumption and air conditioning needs of a company. Moreover, it does not require the additional space that is usually associated with growth (Chowdhury & Boutaba, 2010).

If we sum up the major benefits that a company may receive through virtualization, they will run as follows:

  1. It helps save energy thereby supporting environment-friendliness. As it has already been mentioned, virtualization saves money spent on electricity and thus contributes to resolving ecological problems (Truong, 2010).
  2. It ensures consolidation, which means that fewer physical platforms are required to combine all the workloads. This leads to optimization of hardware, which becomes capable of supporting a number of various environments (Metzler, 2011).
  3. It makes server provisioning faster, which guarantees instantaneous capacity of business units on request (Metzler, 2011).
  4. It releases the company from the necessity to stick to one server vendor. Since virtualization abstracts from physical platforms, the company becomes more flexible in choosing equipment (Truong, 2010).
  5. It improves recovery from faults. A disaster recovery system is independent from hardware that can be easily replaced when failover is required (Truong, 2010).
  6. It increases uptime. The ability to switch from one server to another, which allows immediate recovery from system outages, also ensures continuity (Truong, 2010).
  7. It provides isolation of applications, which improves utilization of resources and thereby minimizes server waste (Metzler, 2011).
  8. It prolongs life of old applications. Virtualization allows encapsulating applications together with the environment that supports them (Metzler, 2011).
  9. It helps the company join the cloud. Virtualization provides storage of virtually unlimited capacity (Truong, 2010).

However, even knowing all the advantages of virtualization, the management should also be aware of its risks and challenges in order to come to a reasonable decision concerning its implementation. The most common problem now is the security risk, which arises from fluctuating workloads, VM vulnerabilities, incompetent process management, wrong configuration settings, and a number of other factors. Besides, it is important to consider the overall maintenance cost of a virtualized environment, which includes hardware and software upgrades, licensing, VM support, etc. Although these problems cannot be called minor, they may still be resolved with the right approach (Metzler, 2011).

If an organization performing computing operations across multiple sites with a server count of more than 400 opts for virtualization, it is essential to determine the necessary size, capacity, and capabilities of the host system. With Hyper-V, you should first and foremost develop a proper configuration (Velte & Velte, 2009). The implementation plan will be as follows (a rough sizing sketch is given after the list):

  • to size your server for supporting virtualization (since, in this case, the server count amounts to 400, we deal with high system utilization and considerable traffic, which means that standard server specifications might not be sufficient for virtualization);
  • to consider RAM (since we are concerned with a lot of guest sessions, we should make sure that the host system is configured with minimum 24GB of RAM);
  • to identify processor requirements (in order to meet the performance demands, each session should have up to four cores);
  • to provide disk storage for the server (there should be enough space to support both system files and guest sessions);
  • to plan the budget (there must be enough funds to satisfy all the server needs) (Velte & Velte, 2009).
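
The minimal sketch below turns that plan into arithmetic: given an assumed host configuration and the per-session guidelines mentioned above (at least 24 GB of host RAM, up to four cores per guest session), it estimates how many hosts a 400-server estate might require. All host-side figures are assumptions made for illustration.

```python
# Rough Hyper-V host sizing for consolidating 400 existing servers.
# Host specifications and per-guest figures are illustrative assumptions.
import math

TOTAL_GUEST_SESSIONS = 400

# Assumed per-guest requirements (derived loosely from the plan above).
GUEST_RAM_GB = 4
GUEST_CORES = 4

# Assumed specification of each Hyper-V host.
HOST_RAM_GB = 256            # comfortably above the 24 GB minimum
HOST_CORES = 64
HOST_RESERVED_RAM_GB = 16    # assumed reservation for the parent partition
CORE_OVERCOMMIT = 2.0        # assumed acceptable vCPU-to-core ratio

guests_per_host_ram = (HOST_RAM_GB - HOST_RESERVED_RAM_GB) // GUEST_RAM_GB
guests_per_host_cpu = int(HOST_CORES * CORE_OVERCOMMIT) // GUEST_CORES
guests_per_host = min(guests_per_host_ram, guests_per_host_cpu)

hosts_needed = math.ceil(TOTAL_GUEST_SESSIONS / guests_per_host)
print(f"Guests per host: {guests_per_host}")
print(f"Hosts needed for {TOTAL_GUEST_SESSIONS} sessions: {hosts_needed}")
```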

Since there exist several approaches to server virtualization, the most effective strategy to explain the basic concept to senior management would be to highlight the main benefits and drawbacks of each approach, for them to decide which one suits the organization’s needs best:

  • full virtualization: provides the ability of entire simulation of physical hardware;
  • para virtualization: provides simulation of most but not all the components of the physical platform;
  • hardware-assisted virtualization: allows guests to communicate with the hypervisor, which runs at the root level;
  • operating system virtualization: allows running several instances of OS in parallel (Metzler, 2011).

Taking into account the above-given plan and the specifics of the organization, it seems reasonable to opt for para virtualization. This type of infrastructure is cheaper and much easier to introduce than full virtualization. This is accounted for by the fact that the host OS switches to a suspended mode as soon as it launches the VM monitor, whereas the guest OS continues running in an active state. It will ensure high performance for the network in cases where hardware assistance is not available.

References

Chowdhury, N. M. K., & Boutaba, R. (2010). A survey of network virtualization. Computer Networks, 54(5), 862-876.

Metzler, J. (2011). Virtualization: Benefits, challenges, and solutions. Riverbed Technology, San Francisco.

Truong, D. (2010). How cloud computing enhances competitive advantages: A research model for small businesses. The Business Review, Cambridge, 15(1), 59-65.

Velte, A., & Velte, T. (2009). Microsoft virtualization with Hyper-V. McGraw-Hill.

Cloud Computing and Virtualization Technologies

Introduction

Cloud computing and virtualization technologies refer to new systems of information communication in which many computers are linked to each other through a real-time communication network, such as the internet. Scientists identify cloud computing as the ability to operate a program on various processors simultaneously.

It is useful in advertising, particularly in deploying hosted services in the sense of application service provisioning. It runs client-server software even from a remote location.

Cloud computing and virtualization technologies were previously available to large organizations only, but they are currently employed in boosting small-scale businesses in the modern society (Ryan, 2011).

It should be understood that the two technologies do not mean the same thing; instead, they stand for different concepts. Virtualization is a foundational technology, since it facilitates cloud computing. Nevertheless, the two technologies are employed together in organizations to create a private cloud infrastructure that enables information distribution.

Virtualization software permits every physical server to operate several computing services. In other words, it entails obtaining several virtual servers for every physical server bought by the organization. On the other hand, cloud computing utilizes a number of data centers, which are full of networks that enable servers to deliver cloud offerings. However, cloud computing alone does not provide a single dedicated server to each customer.

In this regard, cloud-computing suppliers divide the data in the server in order to permit every customer to work discretely with an effective instance of the related software. Through cloud computing, the organization is in a position to access complex applications and enormous computing resources through the internet.

In the new merger, the organization can employ cloud computing by simply subscribing to a cloud-based service, such as Cisco WebEx, as opposed to building its own cloud infrastructure.

Benefits of Cloud Computing and Virtualization Technologies

Through cloud computing and virtualization technologies, the organization is likely to benefit from economies of scale. This is mainly because cloud computing depends on resource sharing, which would probably achieve coherence.

Cloud computing and virtualization technologies are connected to other concepts, such as converged infrastructure and shared services, which would certainly lower expenses for the new business. Maximization of resource utilization and effectiveness is guaranteed through cloud computing, since users not only share resources but also have them dynamically reallocated based on demand.

Based on this, it is possible to allocate resources to other users in varied periods and time zones (Ryan, 2011). Since the new organization is said to be a multinational corporation, it can benefit a lot from the two technologies, given that users on one continent could utilize the resources during their business hours while the information would reach other users on different continents in different time zones.

In this regard, users in Europe could use an application such as email, while other users in North America would be able to receive the information through a different application, such as a web server. Through this, computing power would be maximized while environmental effects would be reduced. The technology uses less power, meaning that the organization would not need extra services, such as additional air conditioning.

As earlier noted, cloud computing and virtualization technologies are closely related to other concepts, such as moving to the cloud, which means that the two technologies permit large-scale organizations to cut costs associated with infrastructure.

This would indeed give organizations employing the two technologies an advantage in the market since they would have an opportunity to come up with other projects that would easily differentiate their operations.

The cloud, in simple terms, implies the process in which an organization moves away from the conventional capex model, which is characterized by traditional practices that force an organization to acquire hardware before installing the software. The organization would move to a new model, referred to as the opex model, whereby various servers would be shared easily among various computers.

Some proponents are of the view that cloud computing and virtualization technologies are faster when compared to other traditional applications. The new technologies come with improved manageability, reduced costs of repair, and improved information technology systems. This would definitely help an organization in meeting irregular and erratic business demands.

Virtualization technology is mostly applied in small-scale enterprises, even though the new merger between the local organization and an international corporation would still benefit from its services. The organization stands to benefit since it would have to buy fewer servers. Moreover, the maintenance costs of the servers would be lower than the cost of servicing a larger number of servers.

In this case, the organization will operate efficiently since fewer resources are used in maintaining computer related services. Information experts have proved through research that a virtualized server tends to utilize the capacity of the server more effectively as opposed to a non-virtualized server.

Individuals working with virtualized technology are in a position of running more applications on each server, which increases productivity (Winkler, 2011). Through virtualization software, the resources of each physical server can be partitioned to generate a number of virtual environments, which are referred to as virtual machines.

Each virtual machine has the ability to run its own operating system, as well as other production applications, according to the corporation's requirements. In some instances, virtualization technology is employed for storage, where it abstracts the underlying storage hardware. Information technology experts observe that an organization is likely to benefit from pooling its storage hardware in this way, since doing so allows maximum utilization.

Storage virtualization assembles capacity from numerous storage devices into one shared virtual storage repository, which is accessible to all systems irrespective of location.

Cloud computing is applicable in large-scale organizations, which implies that it would be recommended for use in the new organization. Unlike virtualization technology alone, cloud computing is of great importance because it does not demand that each person maintain his or her own infrastructure; instead, it relies on a shared network.

The organization will employ cloud computing effectively in resolving various customer issues, since it allows the deployment of enterprise-grade applications, including customer relationship management (Winkler, 2011). Other customer-related services offered through cloud computing include hosted voice over IP (VoIP) and off-site storage.

The costs of these services would be far higher without cloud computing. Employees are able to access an application or a service through a web browser. Cloud computing and virtualization technologies operate on a one-to-many model, meaning that one server is shared among many users, allowing many computers to share the same network.

References

Ryan, F. (2011). Regulation of the Cloud in India. Journal of Internet Law, 15(4), 103-121.

Winkler, V. (2011). Securing the Cloud: Cloud Computer Security Techniques and Tactics. Waltham: Elsevier.

Concept of the Network Virtualization

Abstract

Network virtualization is a relevant area of study because assumptions about network devices, topology, and management must be reconsidered in light of the self-service, scalability, and resource-sharing requirements of cloud computing infrastructures. Network virtualization (NV) is the capacity to create logical, virtual networks that are decoupled from the underlying network hardware, so that the network can integrate with and better support increasingly virtual environments.

The paper also highlights the structure and layers of the OSI model. OSI is a reference model for how applications communicate over a network; a reference model is a conceptual framework for understanding relationships. The purpose of the OSI reference model is to help vendors and developers create products that interoperate and to encourage a clear description of the functions of a network or transmission system.

Introduction

This paper reviews the concept of network virtualization and its importance to data centers and organizations. The research will be conducted using secondary sources such as academic journals and related literature.

This paper is structured in four sections: an abstract, an introduction, an overview of network virtualization, and a conclusion. Network consultants focus on server virtualization when they examine cloud computing; however, they often overlook the broader ramifications of that innovation. The capacity to build virtual environments means that network providers can create, destroy, activate, deactivate, and move them around within the cloud framework.

This flexibility and portability have significant ramifications for how network services are defined and managed when providing cloud services; thus, both servers and network assets benefit from virtualization. Network virtualization is becoming a widely discussed topic, not only in the trade press but also within organizations such as Oracle and other firms that have invested in network virtualization.

Network virtualization is a relevant area of study because assumptions about network devices, topology, and management must be reconsidered in light of the self-service, scalability, and resource-sharing requirements of cloud computing infrastructures. Static network designs and traffic flows should be re-evaluated and redesigned to exploit innovations such as virtual NICs and switches, bandwidth control, load control, and network separation.

For instance, conventional multi-tier web services that once exchanged traffic over Ethernet cabling can now be virtualized and hosted on shared-resource frameworks that communicate within a large server at high speed, increasing performance and reducing wired network traffic. Virtualized traffic flows can be observed and adjusted as needed to tune network performance for cloud infrastructures and requirements.

Furthermore, as virtual environments evolve, manual network configuration cannot keep up with routing and scalability requirements, so virtualizing the network becomes a necessity. Network virtualization is relevant because it decreases the number of physical devices required for computing, effortlessly segments networks, permits rapid change, scalability, and smart growth, secures physical devices, and creates a failover mode.

Network Virtualization

Virtualization is the capacity to simulate a hardware platform, for example a server or a network asset, in software. The functionality is isolated from the hardware platform and reproduced with the capacity to work like the conventional equipment (Abdelaziz et al., 2017). A single hardware platform can support multiple virtual devices or machines. Accordingly, a virtualized framework is far more portable, adaptable, and cost-effective than a conventional hardware-based arrangement.

Network virtualization (NV) is the capacity to create logical, virtual networks that are decoupled from the underlying network hardware, so that the network can integrate with and better support increasingly virtual environments. NV abstracts the connectivity and services that have traditionally been delivered in hardware into a virtual network that is decoupled from, and operates autonomously over, a physical interface. NV integrates virtualized L4-7 services and addresses networking challenges, helping data centers to program and provision networks on demand without making physical contact with the infrastructure (Han, Gopalakrishnan, Ji, & Lee, 2015).

With NV, organizations can simplify how they roll out, scale, and adjust capacity and resources to meet computing requirements. Virtual networking empowers data centers to organize and control suitable and effective routing structures for cloud applications and to adjust network configurations using software-based management (Jain & Paul, 2013). It is possible to build logical networks that are decoupled from physical servers and to arrange tasks across the virtual space.

Table 1: Network Virtualization Capabilities.

  • Link sharing
  • Address isolation
  • Node sharing
  • Performance isolation
  • Topology abstraction
  • Control isolation

The Operating Concept of Network Virtualization

One of the methods of network virtualization is to disaggregate service capabilities from the hardware that provides them and move them into software. The physical devices whose functions are abstracted in this way include switches and router infrastructure.

These devices operate at the second and third layers of the Open Systems Interconnection (OSI) model. A virtual network adapter serves as the bridge, or interface unit, between the server and the network. Using a virtual network, a user can replace many physical assets with software. For instance, a virtual switch contains the entire packet-forwarding logic found in a physical switch: it performs forwarding by locating the addresses of packet destinations and directing traffic accordingly (Jain & Paul, 2013). Thus, network virtualization is the act of decoupling forwarding behavior from physical devices and their hardware addresses.
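
To make the forwarding idea concrete, the following minimal Python sketch mimics the MAC-learning and forwarding behavior a virtual switch reproduces in software; the class name, MAC addresses, and port numbering are invented for illustration and do not correspond to any particular product.

```python
class VirtualSwitch:
    """Minimal sketch of the packet-forwarding logic a virtual switch reproduces in software."""
    def __init__(self):
        self.mac_table = {}  # learned MAC address -> port mapping

    def receive(self, frame_src, frame_dst, in_port):
        # Learn which port the source MAC address lives on.
        self.mac_table[frame_src] = in_port
        # Forward to the known port, or flood when the destination is unknown.
        out_port = self.mac_table.get(frame_dst)
        return out_port if out_port is not None else "flood"

# Example: two hosts attached to ports 1 and 2 of the same virtual switch.
vswitch = VirtualSwitch()
print(vswitch.receive("aa:aa", "bb:bb", in_port=1))  # "flood" (bb:bb not yet learned)
print(vswitch.receive("bb:bb", "aa:aa", in_port=2))  # 1 (aa:aa was learned on port 1)
```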

Physical Network and Virtual Network

The physical network includes system links, server units, workstations, switches, adapters, connecting cables, and physical servers used to create networking capability between one or more nodes (Figure 1).

Figure 1: Physical Network.

A virtual network, however, is built from logical constructs, creating virtual nodes and links in place of physical infrastructure. The fundamental motivation behind a virtual network is to enable a service center to provide a reliable network for different application hosts. Virtual networks are commonly classified into three categories, namely the virtual private network (VPN), the virtual LAN (VLAN), and the virtual extensible LAN (VXLAN) (Khatibi & Correia, 2015).

Figure 2: Virtual Network.

General Approach to Network Virtualization

Virtualization is a strategy for combining or separating the resources of a computer. Virtualization abstracts away the physical server yet gives the client the impression of interacting directly with it through an allocated portion of the hardware resources.

Alongside the rapid advance of operating-system virtualization, researchers began to consider the virtualization of switches. This capability means running several virtual switches on the same machine as the basis for controlling different networks, with each virtual switch or router belonging to a dedicated server, giving the user full access and control. Although virtual private networks provide a virtualized channel over a physical network, they have several weaknesses in comparison with full virtual networks (Khatibi & Correia, 2015): they must be built on similar protocols, topologies, and address configurations, which makes it difficult to deploy heterogeneous networks.

Virtual local area networks (VLANs) are the most widely recognized case of link virtualization, permitting the establishment of separated networks in which every isolated node sits in its own communication domain (Khatibi & Correia, 2015).

Most Ethernet switches support link virtualization in the form of tagged and untagged VLANs. Beyond the virtual local area network, tunneling techniques such as the encapsulation of Ethernet frames in IP datagrams and multiprotocol label switching are used to build VPNs and, in the long run, to establish virtual networks across wide area networks. Link virtualization can be either wired or wireless (Liang & Yu, 2014).
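
As a concrete illustration of tagging, the Python sketch below assembles an Ethernet frame carrying an IEEE 802.1Q VLAN tag; the MAC addresses, VLAN ID, and payload are placeholder values chosen only for the example.

```python
import struct

def add_vlan_tag(dst_mac: bytes, src_mac: bytes, vlan_id: int, ethertype: int, payload: bytes) -> bytes:
    """Sketch: build an Ethernet frame carrying an IEEE 802.1Q VLAN tag.
    The 4-byte tag sits between the source MAC and the original EtherType."""
    tpid = 0x8100                    # Tag Protocol Identifier for 802.1Q
    pcp, dei = 0, 0                  # default priority and drop-eligible indicator
    tci = (pcp << 13) | (dei << 12) | (vlan_id & 0x0FFF)  # Tag Control Information
    return dst_mac + src_mac + struct.pack("!HH", tpid, tci) + struct.pack("!H", ethertype) + payload

# Placeholder broadcast frame on VLAN 100 carrying an IPv4 (0x0800) payload.
frame = add_vlan_tag(b"\xff" * 6, b"\xaa" * 6, vlan_id=100, ethertype=0x0800, payload=b"...")
print(frame.hex())
```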

In wired link virtualization, time-sharing systems permit clients to work from remote terminals connected to a centralized mainframe computer, with the impression of working at an individual computer. A node, or computer hardware, is a physical device that can determine the result of abstract operations through physical controls. The components of a node include input, storage, processing, and output; the input is the instantiation of abstract operations in a physical state.

Virtualization improves resource efficiency and offers clients enhanced functionality. The literature provides additional detail about processor virtualization methods, together with an intriguing and enlightening account of early time-sharing systems. One strategy for storage virtualization is virtual memory, which is another component of node virtualization (Liang & Yu, 2014). The use of virtual memory not only automates storage allocation effectively, but also enables machine independence, program modularity, adequate memory addressing, and the ability to organize structured information.
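
The following short Python sketch illustrates the core of the virtual-memory idea, translating a virtual address to a physical one through a page table; the page size and table entries are hypothetical values used only for the worked example.

```python
PAGE_SIZE = 4096  # bytes; a common page size

# Hypothetical page table mapping virtual page numbers to physical frame numbers.
page_table = {0: 7, 1: 3, 2: 11}

def translate(virtual_address: int) -> int:
    """Translate a virtual address to a physical address via the page table."""
    vpn, offset = divmod(virtual_address, PAGE_SIZE)
    if vpn not in page_table:
        raise LookupError(f"page fault: virtual page {vpn} is not resident")
    return page_table[vpn] * PAGE_SIZE + offset

print(translate(8200))  # virtual page 2, offset 8 -> physical address 11*4096 + 8 = 45064
```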

Node virtualization, on the other hand, facilitates productive sharing and isolation of processing resources, for example CPU and memory, so that different operating systems can run simultaneously on the same physical hardware. Techniques for virtualizing physical nodes include full virtualization, paravirtualization, and container-based virtualization. Full virtualization provides a wholly emulated machine on which a guest operating system can function properly.

Full virtualization supports the highest level of isolation in a virtual network; however, it can create performance and resource challenges. Paravirtualization creates a virtual machine monitor to operate and manage multiple virtual machines (VMs), and each VM appears as a separate system with its own operating system and software (Liang & Yu, 2014). Paravirtualization offers greater flexibility when several operating systems are required on one physical node.

This flexibility results in extra overhead compared with container-based virtualization. Container-based virtualization builds numerous partitions of operating system resources, called containers, and depends on advanced scheduling for resource isolation. Thus, the level of achievable isolation is usually lower in comparison with paravirtualization (Han et al., 2015), and every container is bound to run the same version of the operating system as the host machine.

Link Virtualization

The purpose of links is to exchange data between nodes in a dependable way. Links consist of conceptual entities and the physical assets that instantiate them, so that data can be transferred between nodes, as seen in Figure 3.

Thus, connections must be virtualized in the abstract space, and links can be wired or wireless. There are critical differences between wired and wireless links because of the nature of the physical assets involved. Both wired and wireless links transfer data using the electromagnetic spectrum; however, in wired networks the spectrum is segregated from other connections by the physical cable, whereas in wireless networks all connections share the broadcast medium, so the network operator must provide effective isolation and consistent quality.

Link virtualization may be wired or wireless. In wired link virtualization, time-sharing systems permit clients to work from remote terminals connected to a centralized mainframe, with the impression of working at an individual computer (Abdelaziz et al., 2017). Network communication is carried either over dial-up lines or over private lines leased from PSTN operators. Dial-up lines are considerably less expensive but suffer from security and reliability problems, whereas leased lines are secure but costly.

VPN services offer comparable security and performance at lower rates by exploiting the nature of network traffic between nodes. In this manner, physical connections can be time-shared to give the illusion of private connections, known as virtual circuits. VPNs can be scaled and customized to suit client preferences. Although virtual private networks can separate diverse network systems over a common framework, they are prone to a few limitations (Abdelaziz et al., 2017).

For example, the coexistence of different networking arrangements is not possible, and the virtual networks are not entirely autonomous. Another restriction is that broadcast transmission is not supported as it is in native networks. Consequently, the approach adds operational cost while limiting network quality, consistency, and latency (Panchal, Yates, & Buddhikot, 2013).

Although wired link virtualization has existed for a long time, wireless link virtualization remains an active research area. Wireless link virtualization is the procedure of virtualizing wireless links, creating virtual resources that are isolated and can use different technologies and designs independently. Wireless resource sharing differs from virtualization in that resource sharing does not create independent resources.

Furthermore, virtualization enables the grouping of resources, which simple resource sharing does not permit. Several difficulties that exist in wireless link virtualization do not arise in wired link virtualization because of the differences between wired and wireless networks (Panchal et al., 2013). Owing to the inherent variability of wireless channels, it is difficult to forecast the data throughput of wireless links. Secondly, wireless links are broadcast and can therefore interfere with other wireless links, which does not happen with wired links. A further difficulty is that wireless nodes tend to be highly mobile, which makes it hard to predict the data exchange between nodes.

Figure 3: Link Virtualization Network.

Node Virtualization

Some terms must be defined for the study of network virtualization. Abstraction refers to the act of disregarding or concealing details in order to study general qualities rather than concrete particulars. On this premise, abstraction governs the manner in which systems interact, and the complexity of that interaction, by hiding details that are not significant to the relationship. Abstraction therefore allows structures to be used more effectively, although this comes at the expense of reduced flexibility and customization.

Although abstraction is an essential idea in computing, since it governs the communication between people and computers, it is not in itself of direct significance to virtualization. Nevertheless, it is important to note the contrast between abstraction and the abstract: the term abstract refers to ideas and concepts that have no physical manifestation. This section considers how virtualization applies to wireless systems. A wireless network is an arrangement of nodes that can exchange data through links, where some of the links may be broadcast (Figure 4). Thus, wireless systems comprise two parts: links and nodes.

Figure 4: Node Virtualization Network.

A node, or computer hardware, is a physical device that can determine the result of abstract operations through physical controls. The components of a node include input, storage, processing, and output; the input is the instantiation of abstract operations in a physical state. All of these capacities are required in a node, because a processing component without input is useless. These four capacities correspond to the IPO+S (input, processing, output, plus storage) model of computing. It is important to note that each of these resources can be represented in the abstract domain and virtualized, which allows resources to be used more efficiently and can create new functionality.

Virtualization, however, must take place in the abstract space. A network architect can therefore virtualize these resources simultaneously by introducing an additional level of representation; if all four capacities are to be virtualized, this extra level of representation is essential. Node virtualization facilitates productive sharing and isolation of processing resources, for example CPU and memory, so that different operating systems can run simultaneously on the same physical hardware. Techniques for virtualizing physical nodes include full virtualization, paravirtualization, and container-based virtualization.

The capacities and functions of node virtualization make it vital in wireless systems. Process virtualization and storage virtualization are two forms of node virtualization. Time-sharing was created to overcome the limited human-to-machine interaction of batch computing, which had led to programming errors, long debugging times, and ever larger programs. Time-sharing allows many people to make use of a computer at the same time: rather than granting clients exclusive links to processing resources, which can lead to overload and memory issues, the operator buffers input and runs client programs in sequence.

The switching between client programs happens frequently enough that the computer appears fully responsive to every client. By allocating slices of machine time to each client and enforcing strict isolation, the system gives every client the impression of exclusive use of a dedicated processor; the illusion of multiple 'virtual' processors is thereby created. Thus, virtualization improves resource efficiency and offers clients enhanced functionality.
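
A minimal round-robin sketch in Python, with invented job names and an arbitrary time quantum, illustrates how interleaving short turns of CPU time creates this impression of dedicated processors.

```python
from collections import deque

def round_robin(jobs, quantum):
    """Toy round-robin scheduler: 'jobs' maps a client name to its remaining
    compute time; each client runs for at most 'quantum' units per turn,
    giving every client the illusion of its own processor."""
    queue = deque(jobs.items())
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        timeline.append((name, run))               # this client occupies the CPU for 'run' units
        if remaining - run > 0:
            queue.append((name, remaining - run))  # not finished: back of the queue
    return timeline

print(round_robin({"alice": 5, "bob": 3, "carol": 4}, quantum=2))
```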

The literature provides additional detail about processor virtualization methods and an enlightening account of time-sharing systems. One strategy for storage virtualization is virtual memory, which is another component of node virtualization. The use of virtual memory not only automates storage allocation effectively, but also enables machine independence, program modularity, adequate memory addressing, and the ability to organize structured information.

Network Virtualization and Open Systems Interconnection (OSI) Model

OSI is a reference model for how applications communicate over a network; a reference model is a conceptual framework for understanding relationships. The purpose of the OSI reference model is to help vendors and developers create products that interoperate and to encourage a clear description of the functions of a network or transmission system (Scroggins, 2017). Most vendors involved in telecommunications attempt to describe their products and services in relation to the OSI model (Suresh, 2016). It is therefore vital for an IT expert to understand the OSI model, and its layered approach to investigating network issues facilitates troubleshooting.

Layer 1 of the OSI Model

The first layer of the Open Systems Interconnection (OSI) model is the physical layer. It defines the electrical and physical specifications for devices and describes the connection between a device and a transmission medium, for example a copper or optical link. The layer covers the layout of pins, voltages, cable specifications, hubs, repeaters, network adapters, and bus connectors. The significant functions performed by the physical layer can be summarized in three categories.

  1. Creating and terminating links to a communication medium.
  2. Providing sharing mechanisms whereby the communication resources are shared among multiple users.
  3. Modulation, or conversion between the representation of digital information on user equipment and the corresponding signals transmitted over the channel, whether that channel is a physical link or a radio connection.

Layer 2 of the OSI Model

The second layer of the OSI model is the data link layer. This layer provides the means to exchange information between network hosts and to detect transmission errors. The layer was designed for point-to-point and multipoint media, typically the wide-area media of the telephone system. It is important to note that the LAN architecture was developed independently within IEEE Project 802 (Suresh, 2016); the Institute of Electrical and Electronics Engineers adopted sub-layering and management functions that are not required for WANs. In current practice, error detection is present in data link protocols and LAN networks.

The data link layer can also provide flow control, a function otherwise handled at the transport layer; for example, TCP is deployed in niches where X.25 once offered this capability (Suresh, 2016). The ITU-T G.hn standard, which provides high-speed LAN connectivity over existing wiring, incorporates a complete data link layer with error and flow control based on a selective-repeat sliding-window protocol. Both wide area and local area networks group the bits from the physical layer into sequences called frames, although not all bits are carried in frames, since some perform physical-layer functions.
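
To show what error detection at this layer amounts to, the sketch below computes a simplified CRC over a frame payload in Python; it omits the initial value, bit reflection, and final inversion used by the real IEEE 802.3 frame check sequence, so it only illustrates the principle rather than reproducing the exact Ethernet checksum.

```python
def crc32_bits(data: bytes, poly: int = 0x04C11DB7) -> int:
    """Bitwise CRC sketch of the frame-check-sequence idea used at the data link layer.
    Simplified parameters: the value differs from the exact IEEE 802.3 FCS."""
    crc = 0
    for byte in data:
        crc ^= byte << 24
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFFFFFF if crc & 0x80000000 else (crc << 1) & 0xFFFFFFFF
    return crc

frame_payload = b"example frame"
print(hex(crc32_bits(frame_payload)))                              # checksum appended by the sender
print(crc32_bits(frame_payload) == crc32_bits(b"examp1e frame"))   # False: corruption is detected
```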

Layer 3 of the OSI Model

The third layer of the Open Systems Interconnection model is the network layer. The network layer implements routing, fragmentation and reassembly, and error reporting. Routers operate at this layer, sending information through the extended network and making the Internet possible. The addressing scheme is a logical, non-hierarchical one whose values the network designer chooses.

A number of layer-management protocols belong to the network layer, including routing protocols, multicast group management, network-layer information, and network address assignment. The network layer of the OSI model is divided into three sub-layers: sub-network access, sub-network dependent convergence, and sub-network independent convergence. Sub-network access deals with protocols that handle the interface to the network; X.25 is an example of this function.

The sub-network dependent convergence sub-layer is used when it is necessary to bring a transit network up to the level of the networks on either side, while the sub-network independent convergence sub-layer regulates and controls communication across multiple networks. ISO 8473, sometimes referred to as IPv7, is a specific example of the sub-network independent convergence function (Panchal et al., 2013).

Conclusion

Network virtualization describes the combination of one or more platforms to form a virtual network. It combines hardware and software infrastructure and enables network users to emulate links between services and applications. To do so, IT experts combine one or more resources, including network hardware, network elements, networks, storage devices, machine elements, network media, and mobile network devices.

Network virtualization can be internal or external, and it is achieved through links and nodes. Node virtualization includes process virtualization, storage virtualization, machine virtualization, and input and output virtualization; the combination of process and storage virtualization creates machine (node) virtualization. Link virtualization creates an active connection for data transfer and may be wired or wireless. There are seven layers in the OSI model; the first three are the physical layer, the data link layer, and the network layer.

References

Abdelaziz, A., Fong, A., Gani, A., Khan, S., Alotaibi, F., & Khan, M. (2017). On software-defined wireless network (SDWN) network virtualization: Challenges and open issues. The Computer Journal, 60(10), 1510-1519. Web.

Han, B., Gopalakrishnan, V., Ji, L., & Lee, S. (2015). Network function virtualization: Challenges and opportunities for innovations. IEEE Communications Magazine, 53(2), 90-97.

Jain, R., & Paul, S. (2013). Network virtualization and software defined networking for cloud computing: A survey. IEEE Communications Magazine, 51(11), 24-31.

Khatibi, S., & Correia, L. (2015). A model for virtual radio resource management in virtual RANs. EURASIP Journal on Wireless Communications and Networking, 2015(1), 67-68.

Liang, C., & Yu, F. (2014). Wireless network virtualization: A survey, some research issues, and challenges. IEEE Communications Surveys & Tutorials, 17(1), 358-380.

Panchal, J., Yates, R., & Buddhikot, M. (2013). Mobile network resource sharing options: Performance comparisons. IEEE Transactions on Wireless Communications, 12(9), 4470-4482.

Scroggins, R. (2017). Emerging virtualization technology. Global Journal of Computer Science and Technology: Information & Technology, 17(3), 1-7.

Suresh, P. (2016). Survey on seven-layered architecture of OSI model. International Journal of Research in Computer Applications and Robotics, 4(8), 1-10.

Understanding Virtualization: Hardware and Cloud Platforms

A simple understanding of how social media platforms function and how people use social media sites reveal the need for additional computer hardware usage in order to satisfy a growing demand for computing power. There are at least two major requirements in order for a computer system to function properly and support a cloud platform. First, the computer hardware must have the capability to run a particular operating system or OS. Second, the computer system must interpret the commands and request that users make via the OS. As a consequence, a greater demand for functional computer systems requires the purchase of a greater number of computer hardware. In order to reduce the cost of buying more expensive hardware, a process known as virtualization was developed for the purpose of reducing the cost of meeting certain requirements and the increasing demand for greater computing power.

The Essence of Virtualization

In a nutshell, virtualization comes from the root word “virtual” meaning to simulate the appearance of an object or idea. For example, when people talk about “virtual reality” they are discussing a process of simulating a real-world setting, such as, a person’s bedroom or living room. As a result, when images of these areas are made available, one can see objects and shapes that resemble the said areas. However, these are all virtual images or simulated images of the actual rooms. In a book entitled, Virtualization Essentials, the author stated that this particular process is a form of technology that enables the user to create simulated environments or dedicated resources using a single computer system (Portnoy, 2015). Red Hat, a company renowned for developing world-class software, described the procedure as the utilization of a specific software that allows the user to connect directly to the hardware, and at the same time, splitting the system into separate units known as virtual machines or VMs (Red Hat, Inc., 2017, par. 1). The different VMs are not only separate, they are also distinguished from each other, so that a single VM is one distinct system.

The separation and the distinction of each individual VM explain the cost-efficiency advantage of using virtualization technology, because each VM functions as if it owns a dedicated computer system. However, it has to be made clear that there are several VMs that are operating within one virtual environment, and all of these machines are using only one computer hardware device. According to Matthew Portnoy, businessmen prefer to use this configuration, especially if they are not yet sure how many servers are needed to support a project (2015). For example, if the company accepts to work on new projects, the server needs are going to be satisfied by using virtual servers and not by purchasing new physical servers. Thus, the appropriate utilization of the said technology enables a business enterprise or a corporation to maximize the company’s resources.

The Virtualization of a Computer Hardware

The need for virtualization came into existence because of the computer system or personal computer’s original design. For the purpose of providing a simplified illustration of the concept, imagine the form and function of typical personal computer. A basic system requires a hardware and software combination. The software handles the commands coming from the user, and then the same software utilizes the computer hardware to perform certain calculations. In this configuration, one person typing on a keyboard elicits a response from the computer hardware setup. In this layout, it is impossible for another person to access the same computer, because it is dedicated to one user.

The virtualization of computer hardware requires the use of a piece of software known as a hypervisor, which creates a mechanism that enables the user or the system administrator to share the resources of a single hardware device. As a result, several students, engineers, designers, and professionals may use the same server or computer system. This setup makes sense, because one person cannot utilize the full computing power of a single hardware device. An ordinary person without the extensive knowledge of a network administrator can therefore use the computer believing it is a dedicated machine powered by its own processor. Sharing the computing capability of a single piece of hardware maximizes the full potential of the system.

It is important to point out that virtualization is not only utilized for the purpose of reducing the cost of operations. According to Red Hat, the application of the virtualization technology leads to the creation of separate, distinct, and secure virtual environments (Red Hat, Inc., 2017, par. 1). In other words, there are two additional advantages when administrators and ordinary users adopt this type of technology. First, the creation of distinct VMs makes it easier to isolate and study errors or problems in the system. Second, the creation of separate VMs makes it easier to figure out the vulnerability of the system or the source of external attacks to the system (Portnoy, 2015). Therefore, the adoption of virtualization technologies is a practical choice in terms of safety, ease of management, and cost-efficiency.

The Virtualization of the Central Processing Unit or CPU

A special software component known as the hypervisor enables the user to create virtualization within a computer hardware system. The specific target of the hypervisor is the computer's central processing unit, or CPU. Once in effect, the hypervisor unlocks the OS that was once bound to a specific CPU. After the unlocking process, the hypervisor in effect supports multiple operating systems, or guest operating systems (Cvetanov, 2015). Without this procedure, the original OS is limited to one CPU; in a traditional setup there is a one-to-one relationship between the OS and the CPU. For example, a rack server requires an OS to function as a web server, so when there is a need to build twenty web servers, it would also be necessary to purchase the same number of machines. By placing a hypervisor on top of a traditional OS, one can enjoy the benefits of twenty web servers while using only the resources of one rack server. However, the success of the layout depends on the quality of the RAM and the processors; thus, the quality of the VMs is dependent on the quality of the CPU.
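
A back-of-the-envelope calculation in Python shows why the twenty-web-server consolidation described above can work; the per-server load, host specification, and headroom figures are hypothetical and chosen only to illustrate the sizing logic.

```python
# Hypothetical sizing exercise for the twenty-web-server example.
host_cores, host_ghz_per_core = 16, 2.5
host_capacity_ghz = host_cores * host_ghz_per_core      # 40 GHz of aggregate CPU on one rack server

web_servers = 20
avg_demand_ghz = 0.5                                     # assumed average load per web server
headroom = 0.25                                          # keep 25% spare for peaks and the hypervisor

required_ghz = web_servers * avg_demand_ghz / (1 - headroom)
print(required_ghz, required_ghz <= host_capacity_ghz)   # about 13.3 GHz, True: one host suffices
```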

Specialized software like the hypervisor primarily functions as a tool that manipulates the computer's CPU so that the CPU can share its processing power with multiple VMs; sharing its resources with multiple virtual machines is not the natural design of the CPU. However, in the article entitled "How Server Virtualization Works," the author pointed out that CPU manufacturers are designing CPUs that are ready to interact with virtual servers. One can argue that cutting-edge technology in new CPUs can help magnify the advantages made possible through virtualized environments. On the other hand, ill-advised or improper deployment strategies for virtualizing CPUs can lead to severe performance issues.

It is imperative to ensure the flawless operation of VMs. One can argue that a major criterion for measuring the successful implementation of a virtualized CPU is a system that functions the same way it did before its core processor was shared by multiple VMs. In other words, users should be unable to detect the difference between an ordinary computer and one that utilizes shared resources via virtualization. In his book, Portnoy described the importance of figuring out the virtualization technology that best fits the organization's needs. Three of the most popular virtualization technologies available in the market are Xen, VMware, and QEMU (Portnoy, 2015). The following pages describe how network administrators use Xen's hypervisor.

Organizing a Xen Virtual Machine

Xen's virtualization software package directs the hypervisor to interact with the hardware's CPU. In the book entitled Getting Started with Citrix XenApp, the author highlighted the fact that the hypervisor manipulates the CPU's scheduling and memory partitioning capability in order to properly manage the multiple VMs using the hardware device (Cvetanov, 2015). In organizing a Xen virtual machine, the network administrator must have extensive knowledge of at least three major components: the Xen hypervisor, Domain O, and Domain U. Although the importance of the hypervisor has been outlined earlier, it is useless without Domain O, otherwise known as Domain zero or simply the host domain (Cvetanov, 2015). In the absence of Domain O, Xen's virtual machines are not going to function correctly. It is the job of Domain O, as the host domain, to initiate the process and pave the way for the management of Domain U, known as DomU or simply the underprivileged domains (Cvetanov, 2015). Once Xen's Domain O is activated, it enables users of VMs to access the resources and capabilities of the hardware device.

Xen’s Domain O is actually a unique virtual machine that has two primary functions. First, it has special rights when it comes to accessing the CPU’s resources and other aspects of the computer hardware. Second, it interacts with other VMs, especially those classified as PV and HVM Guests (Cvetanov, 2015). Once the host domain is up and running, it enables the so-called underprivileged VMs to make requests, such as, requesting for support network and local disk access. These processes are made possible by the existence of the Network Backend Driver and the Block Backend Driver.

All Domain U VMs lack direct access to the CPU. However, there are two types of VMs under the Domain U label: Domain U PV Guests and Domain U HVM Guests. The Domain U PV Guest is different because it was designed to know its limitations with regard to accessing the resources of the physical hardware. Domain U PV Guests are also aware that other VMs are utilizing the same hardware device. This is because Domain U PV Guest VMs are equipped with two critical drivers, the PV Network Driver and the PV Block Driver, which the VMs employ for network and disk utilization. On the other hand, Domain U HVM Guests do not have the capability to detect the presence of a setup that allows the sharing of hardware resources.

In a simplified process, the Domain U PV Guest communicates with the Xen hypervisor via the Domain O. For a network or disk request, the PV Block Driver linked to the Domain U PV Guest receives the request to access the local disk in order to write a specific set of data. This procedure is made possible by the hypervisor directing the request to a specific region of local memory that is shared with the host domain. The conventional design of the Xen software features the less-than-ideal process known as the "event channel," wherein requests go back and forth between the PV Block Backend Driver and the PV Block Driver. However, recent innovations enable the Domain U Guest to access the local hardware without going through the host domain.
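
The toy Python sketch below imitates this request flow in spirit only: a guest-side driver places a disk request in memory shared with the host domain and signals it, and the backend completes the write. The class names, the notify callback, and the dictionary standing in for a disk are all invented stand-ins, not Xen's actual interfaces.

```python
from collections import deque

class SharedRing:
    """Toy stand-in for the shared-memory ring that a PV guest and the host domain both map."""
    def __init__(self):
        self.requests = deque()
        self.responses = deque()

class HostDomain:
    """Plays the role of the host domain running the block backend driver."""
    def __init__(self, disk):
        self.disk = disk
    def handle_event(self, ring):
        while ring.requests:
            req = ring.requests.popleft()
            self.disk[req["sector"]] = req["data"]  # the backend performs the real disk write
            ring.responses.append({"sector": req["sector"], "status": "ok"})

class PVGuest:
    """Plays the role of a Domain U PV guest using its PV block driver."""
    def write(self, ring, notify, sector, data):
        ring.requests.append({"sector": sector, "data": data})  # place the request in shared memory
        notify()                                                # event-channel style notification
        return ring.responses.popleft()                         # collect the completed response

disk = {}
ring = SharedRing()
host = HostDomain(disk)
guest = PVGuest()
print(guest.write(ring, lambda: host.handle_event(ring), sector=42, data=b"hello"))
```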

An Example of a Virtual Environment

Fig. 1 Oracle’s application of the Xen technology in running multiple virtual servers (Oracle Corporation, 2017).

Oracle's use of Xen technology to run a number of x86-type servers provides an example of a virtual environment. In this example, the hypervisor and the Domain O virtual machine function like a lock and key: the Oracle VM labeled as domain zero initiates the process, and the hypervisor acts as the mediator that processes requests from other VMs to utilize a single computer's CPU, RAM, network, storage, and PCI resources. Based on this illustration, one can see how a single server becomes the host to multiple VMs.

In the said configuration, one can see three additional benefits of using virtual environments. First, the user benefits from the power of consolidation, because a network administrator is able to combine the capability of several VMs using only one server, reducing the need for more space and eliminating the problem of excess heat emitted by a large number of servers cramped into one area (Strickland, 2017). Second, it allows the administrator to enjoy the advantages of redundancy without spending more money on several systems, because the VMs make it possible to run a single application in different virtual environments (Strickland, 2017). As a result, when a single virtual server fails to perform as expected, there are similar systems, created beforehand, that run the same set of applications. Finally, network administrators are given the capability to test an application or an operating system without the need to buy another physical machine (Strickland, 2017). This added capacity is possible because a virtualized server is not dependent on or linked to other servers. In the event that an unexpected error or bug in the OS leads to irreversible consequences, the problem is isolated in that particular virtual server and does not affect the other VMs.

Virtualization in Cloud Platforms

The advantages of VMs in terms of cost-efficiency, redundancy, consolidation, and security are magnified when placed within the discussion of cloud platforms. Cloud platforms are physical resources provided by third parties or companies that are in the business of cloud computing. In simpler terms, an ordinary user does not need to buy his or her own storage facility in order to manage and secure important data. In the past, businessmen and ordinary individuals had to acquire physical machines to store data. The old way of storing information is expensive and risky, because the user had to spend a lot of money in acquiring the physical devices and compelled to spend even more in the maintenance of the said computer hardware devices. In addition, users had to invest in the construction of an appropriate facility in order to sustain the business operation that depended on the said computer systems. Nevertheless, in the case of man-made disasters and other unforeseen events, the data stored in the said devices are no longer accessible or useful. It is much better to have the capability to transmit or store critical data via a third party service provider. However, in the absence of virtualization technologies the one handling the storage facility has to deal with the same set of problems described earlier.

Cloud platforms utilize the same principles that were highlighted in the previous discussion. Cloud platforms were created to handle the tremendous demand for storage space and the use of added resources. The only difference is that the server is not located within the company building or the building that houses the company’s management information system. The servers that hosted the VMs are located in different parts of the country or in different parts of the globe. Although the configuration is different because the user does not have full control of the host computer or the VMs, the same principles that made virtualization cost-efficient, reliable, and secure are still in effect. Consider for example the requirements of companies like Google and Facebook. Without the use of virtualization the demand for storage space and additional computing power becomes unmanageable. However, with appropriate use of virtualization technology, it is possible for multiple users to share resources when sending emails and accessing images that they stored via cloud platforms. It is interesting to note that when users of Gmail or Facebook access these two websites, they are not conscious of the fact that they are utilizing a system of shared resources via a process known as virtualization technology.

Conclusion

The existence of virtualization technologies came about after the limitations of the conventional computer design were recognized. In the old setup, one user has limited access to the resources of a computer hardware device, yet typical usage does not require the full capacity of the CPU, RAM, storage, and networking capability of the computer system. Thus, virtualization technologies enable the sharing of resources and maximize the potential of a single computer system. This type of technology allows users to enjoy the benefits of consolidation, redundancy, safety, and cost-efficiency. The technology's ability to create distinct and separate VMs has made it an indispensable component of cloud computing. As a result, network administrators, programmers, and ordinary users are able to develop a system that runs the same set of applications on multiple machines. It is now possible not only to multiply the capability of a single computer hardware configuration, but also to test applications without fear of affecting the other VMs that are performing critical operations.

References

Cvetanov, K. (2015). Getting started with Citrix XenApp. Birmingham, UK: Packt Publishing.

Oracle Corporation. (2017). Web.

Portnoy, M. (2015). Virtualization essentials. Indianapolis, IN: John Wiley & Sons.

Red Hat, Inc. (2017). Web.

Strickland, J. (2017). Web.

Process Virtualization Theory Overview

Introduction

The world is becoming more virtual than ever before. 'Process virtualization' is common in most areas, such as formal education, shopping, and the development of friendship. However, processes differ in how amenable they are to virtualization; in other words, electronic shopping may work better in some applications than in others. Based on this observation, this paper focuses on factors that influence the virtualizability of a process (Overby 277).

‘Virtualizability of a process’ has gained recognition as information technology has transformed most physical processes into virtual processes. Therefore, the article proposes a ‘process virtualization theory’, which includes “four main constructs (sensory requirements, relationship requirements, synchronism requirements, and identification and control requirements) that affect whether a process is amenable or resistant to being conducted virtually” (Overby 277).

Justification

Most aspects of traditional processes previously conducted through physical means have turned into virtual processes due to developments in information technology (IT). These processes include online learning, shopping, and meeting friends. We can conclude that developments in information technology have enabled society to replace physical processes with virtual processes, and most virtual processes have gained recognition among users.

The process of virtualization has gained full momentum. This is evident from the virtualization of processes that seemed difficult to change a few years ago. However, virtualization has proved simpler in some areas than in others. This article focuses on a theory of process virtualization, which has two parts. First, it explores factors like "sensory, relationship, synchronism, and identification and control requirements" (Overby 277); these requirements determine whether a process is amenable or resistant to being conducted virtually. Second, the article explores factors related to "the representation, reach, and monitoring capabilities of information technology" (Overby 278) and their roles in transforming various activities into virtual processes that affect businesses and society. Therefore, the author found it appropriate to address the theoretical underpinning of IT in virtualization processes to account for the gap in the field. The author based the study on a work by Orlikowski and Iacono, which advocates theoretical models as a way of addressing the role of IT, its intended and unintended effects, and why IT matters.

Critique

Eric Overby acknowledges previous works of other authors in the same field. This gives his work credibility. The article provides a fundamental role in understanding IT and the need to understand changes in society, as well as factors that influence virtualization. This knowledge enables interested parties to understand virtualization theory.

The author enables us to understand whether the virtualization process is resistant or amenable. In this context, he focuses on “sensory, relationship, synchronism, and identification and control requirements” (Overby 278) in virtualization. These requirements apply whether processes use IT systems or not.

He acknowledges that processes with high requirements are complex to virtualize, while those with low requirements involve little complexity. The author shows that IT capabilities facilitate the integration of virtual processes, which is why society has seen so many IT-based applications in the digital age.

The theory is broad because it focuses on diverse areas in IT applications under virtual processes. This broad-based application of the theory limits its effectiveness in a given field. This is because various areas of IT virtualization processes have different factors, which influence outcomes.

The researcher also acknowledges the limitations of his work. For instance, there are some aspects that this theory cannot address and that need further study. The theory applies only to migration from physical processes to virtual processes; it does not account for migration from virtual back to physical processes.

Summary

The author proposes process virtualization theory to explain and predict issues that influence amenability or resistance to virtualization. According to Overby, "the transition from a physical process to a virtual process is process virtualization" (Overby 278). In a virtual process, people no longer interact physically with other people or with objects. This theory is valuable because it provides a framework for understanding the factors that influence process virtualization, regardless of whether the process is virtual or physical.

The facilitating force behind process virtualization is the development of IT. However, there are also non-IT-based virtualization processes, which do not depend on IT.

The theory has four main constructs that affect a process's virtualizability. First, sensory requirements include "tasting, seeing, smelling, hearing, and touching" (Overby 279); they account for users' need to experience the sensory aspects of a process, and they relate negatively to process virtualizability (Overby 282). Second, there are relationship requirements, which demand that process participants interact with others professionally and socially; interaction among process participants often leads to the acquisition of "knowledge, trust, and friendship" (Overby 280). Relationship requirements likewise relate negatively to process virtualizability (Overby 282). Third, there are synchronism requirements, meaning "the degree to which the activities that make up a process need to occur quickly with minimal delay" (Overby 282); these also relate negatively to process virtualizability (Overby 282). Finally, identification and control requirements concern the unique identification of users and the ability of the process to control users' behavior; they too relate negatively to process virtualizability (Overby 282).

Process virtualization theory also identifies three IT capability constructs (representation, reach, and monitoring capability) that affect virtualization. These capabilities positively moderate the relationships between the main requirement constructs and process virtualizability (Overby 283). The three capability constructs also apply to some non-IT-based virtual processes.

Representation refers to the ability of IT processes to provide relevant information to users.

The reach construct allows for participation across both time and space. As a result, many processes can occur throughout a given period.

Another construct is monitoring capability. This aspect of process virtualization theory provides capabilities for process authentication and tracks users’ activities. It aids in the identification and control requirements in processes.
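
Overby states these relationships qualitatively; the small Python sketch below merely illustrates their direction, with a penalty for high requirements (the negative main effects) that is softened by the IT capabilities just described (the positive moderation). The numeric scales and weighting are invented for the illustration and are not part of the theory.

```python
def virtualizability(requirements, it_capability):
    """Illustrative score only: higher requirement levels (0-1) push virtualizability
    down, while IT capabilities (representation, reach, monitoring; 0-1) soften that
    penalty. The weighting is invented for this sketch, not taken from Overby."""
    penalty = sum(requirements.values()) / len(requirements)       # negative main effects
    moderation = sum(it_capability.values()) / len(it_capability)  # positive moderation
    return round(1.0 - penalty * (1.0 - moderation), 2)

reqs = {"sensory": 0.9, "relationship": 0.6, "synchronism": 0.4, "identification_control": 0.3}
it = {"representation": 0.8, "reach": 0.9, "monitoring": 0.7}
print(virtualizability(reqs, it))  # closer to 1 means more amenable to virtualization
```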

Findings, Limitations, Conclusion, and Remarks

Process virtualization theory indicates whether a process is resistant or amenable to being conducted virtually. The theory rests on "sensory, relationship, synchronism, and identification and control requirements" (Overby 277), and these aspects apply to both IT-based and non-IT-based processes. Processes with high requirements are difficult to virtualize. IT constructs like "reach, representation, and monitoring capabilities" (Overby 282) facilitate the integration of virtual processes with the required elements.

Process virtualization theory applies to both research studies and practices. The theory provides frameworks to classify amenable or resistant factors in virtualization. The virtualization process is in almost every aspect of society. Therefore, it applies to various fields like communication, sociology, economics, and management studies. The theory shows the role of IT in society and explains why IT has significant influences on society and business.

Process virtualization theory provides analytical opportunities for migrating processes to virtual applications. It also provides a virtual process design for migrating systems. It enables practitioners to consider elements like “representation, reach, and monitoring capabilities” (Overby 282) of processes so that they can meet other requirements. Process virtualization theory is broad. Based on this broadness, the theory lacks concrete explanations for various domains. This is because factors differ from one domain to another, and the theory does not address these specific factors.

Some aspects like teamwork, governance, and cultures of organizations are not a part of process virtualization theory. In addition, the theory fails to account for the suitability of virtual or physical processes. In other words, the theory fails to provide reasons to explain why some consumers prefer online shopping while others prefer visiting a bookstore. The theory only focuses on migration from physical to virtual processes. This implies that the theory cannot account for new systems without physical beginnings. The author proposes this as an area for further studies.

Overby concludes that process virtualization theory needs empirical data to support it. This shall ensure that the model improves and provides explanations for various constructs of the theory. Empirical data may also be useful in the identification of new constructs for improving the theory. The author also notes that studies in areas like distance learning, media usage, virtual teams, and electronic commerce can provide useful information for developing process virtualization theory.

Developments in IT applications shall increase virtual applications in society and business. The author also notes that society is not ready to abandon physical processes for virtual processes. Process virtualization theory also enables us to “understand processes that resist virtualization” (Overby 289). These are processes with “high relationships, sensory, synchronism, and identification and control requirements” (Overby 289). Developments of various theories for explaining migration to virtual environments shall be essential as businesses and society continue to change.

Works Cited

Overby, Eric. "Process Virtualization Theory and the Impact of Information Technology." Organization Science 19.2 (2008): 277–291. Print.

VMware Server Virtualization Solution

Abstract

The VMware server and desktop virtualization solution is used to reduce the Total Cost of Ownership of the hardware and software used in an Information Technology organization. The VMware virtualization infrastructure provides high availability with Disaster Recovery, Consolidated Backup, and VMotion for migrating Virtual Machines. This report describes how the three main components of a computer (the CPU, input/output, and memory) are virtualized in the VMware solution. The report also describes the fault tolerance and security features of the VMware infrastructure.

Enterprise desktop is one of the important advantages of VMware that not only eliminates the CPU unit from a desktop but also enables easy configuration and control of desktops in an enterprise. The consolidation of servers on one virtual server and enterprise desktop together provide a test and development environment with optimized hardware resources and centralized management via VMware VirtualCenter.

Server containment is an additional advantage of consolidation: once a virtual server is configured new VMs can be configured on the server for new applications instead of adding new boxes. The report also includes two case studies on how VMware server and desktop virtualization solution solved seemingly unrelated problems. Reduction of carbon footprint and rapid desktop deployment problems are solved for WWF-UK and Bell Canada respectively with VMware solution.

Introduction

"The term virtualization broadly describes the separation of a resource or request for a service from the underlying physical delivery of that service" (Virtualization Overview, 2006).

Virtualization is a hardware and software solution for allocating and configuring computing resources so that they are optimally utilized, reducing operational and maintenance costs and yielding a higher Return On Investment for the business. Virtualization is especially useful in computing-intensive businesses such as data centres, where it can be used to consolidate the functionality of multiple servers onto a single physical server.

When administrators are unfamiliar with virtualization technology and computer servers are affordable, deployments stay simple in order to reduce the probability of software conflicts: because servers are easy to obtain, administrators may configure only a few applications on each server. However, the increased number of machines leads to higher maintenance cost and wasted processing power. To increase resource utilization and reduce the operational and maintenance cost of computer servers, powerful processors are used and multiple applications are installed on a single server (Enhanced Virtualization on Intel Architecture-based Servers, 2006).

Thesis – Virtualization Overview


A layer is created between the hardware and the Operating System (OS); it manages the Virtual Machines (VMs) created on top of it and enables these VMs to access the underlying hardware resources. This layer is generally called the Virtual Machine Monitor (VMM) or virtualization layer, and it controls the VMs, which are abstractions of the hardware resources of a single server. The VMs thus give the illusion of multiple servers on a single physical server. Each VM runs a Guest OS, which is distinct from the underlying layer, whether that layer is a hypervisor or a hosted OS. The applications running on a VM assume that there is a single OS, i.e. the Guest OS running inside the Virtual Machine, and underlying hardware, just as in a non-virtual server (Virtualization Technology Overview, 2007).
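
To make the layering concrete, the following minimal Python sketch models a VMM that partitions one host's resources into isolated VMs. The class names, attributes and capacity check are illustrative assumptions for this report, not VMware's implementation.

```python
# Illustrative sketch only: a toy VMM partitioning one host's resources into
# isolated VMs. All names and the simple capacity check are hypothetical.
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    name: str
    guest_os: str
    vcpus: int
    memory_mb: int

@dataclass
class VirtualMachineMonitor:
    host_cpus: int
    host_memory_mb: int
    vms: list = field(default_factory=list)

    def create_vm(self, name, guest_os, vcpus, memory_mb):
        # Refuse to over-commit memory in this simplified model;
        # a real VMM can over-commit and reclaim pages.
        used = sum(vm.memory_mb for vm in self.vms)
        if used + memory_mb > self.host_memory_mb:
            raise RuntimeError("insufficient host memory for new VM")
        vm = VirtualMachine(name, guest_os, vcpus, memory_mb)
        self.vms.append(vm)
        return vm

vmm = VirtualMachineMonitor(host_cpus=8, host_memory_mb=32768)
vmm.create_vm("web01", "Linux", vcpus=2, memory_mb=4096)
vmm.create_vm("db01", "Windows", vcpus=4, memory_mb=8192)
print([vm.name for vm in vmm.vms])   # each VM sees only its own slice
```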

VMware Infrastructure

Figure 1. VMware Infrastructure (Virtualization Technology Overview, 2007).

VMware virtualization technology allocates resources to VMs according to the resource-requirement policies of the applications running on each Virtual Machine. Each VM has its own resources such as CPU, memory, network bandwidth, auxiliary storage and BIOS. The VMware central management server manages the ESX servers and the VMs running on them, maintaining a consolidated database of the physical resources of all hosts, the host configurations and the VM resources. Control information such as VM statistics, alarms and user permissions is stored in order to ensure that critical applications have the required resources at all times. Wizard-driven templates provide control over the creation and management of VMs and automate routine management.

The pool of hardware resources drawn from more than one physical server provides a virtual server solution for a data centre; this solution reduces Total Cost of Ownership (TCO) and improves resource utilization. Virtual Machines and the applications running on them can be assigned resources from this hardware pool, which makes virtualization both a software and a hardware technology. Virtualization is a technique that separates a service from the resources used by the service; the resources may be shared between services, and the services are not necessarily aware of each other's presence.

A service may assume that the available resources are for its exclusive use. The VMware virtual infrastructure provides an abstraction layer between services and resources; this abstraction of pooled resources makes it convenient for administrators to manage resources and to better utilize the organization's infrastructure (Virtualization Overview, 2006).
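
As a rough illustration of the pooled-resource idea, the sketch below aggregates capacity from two hypothetical hosts and hands slices of it to VMs. The names and the simple first-fit allocation policy are assumptions for the example, not the VMware resource scheduler.

```python
# Illustrative sketch of a pooled-resource model; names and policy are assumed.
class ResourcePool:
    def __init__(self):
        self.total_mhz = 0
        self.total_mb = 0
        self.allocations = {}          # vm name -> (cpu MHz, memory MB)

    def add_host(self, cpu_mhz, memory_mb):
        # Contribute one physical server's capacity to the shared pool.
        self.total_mhz += cpu_mhz
        self.total_mb += memory_mb

    def allocate(self, vm_name, cpu_mhz, memory_mb):
        free_mhz = self.total_mhz - sum(a[0] for a in self.allocations.values())
        free_mb = self.total_mb - sum(a[1] for a in self.allocations.values())
        if cpu_mhz > free_mhz or memory_mb > free_mb:
            raise RuntimeError("pool exhausted")
        self.allocations[vm_name] = (cpu_mhz, memory_mb)

pool = ResourcePool()
pool.add_host(cpu_mhz=2 * 2600, memory_mb=16384)   # hypothetical host 1
pool.add_host(cpu_mhz=4 * 2600, memory_mb=32768)   # hypothetical host 2
pool.allocate("erp-app", cpu_mhz=4000, memory_mb=8192)
pool.allocate("mail", cpu_mhz=2000, memory_mb=4096)
print(pool.allocations)
```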

Figure 2. System configurations before and after virtualization (Virtualization Overview, 2006).

Virtualization Advantages

The following comparison of system configurations before and after virtualization summarizes these advantages (Virtualization Overview, 2006; Intel Virtualization Technology, 2006):

  • Before: The physical server hosts a single Operating System image. After: The server's memory and other hardware and software resources are partitioned to emulate a single physical server for each VM, and each partition has a separate OS image.
  • Before: The OS binds the software applications to the underlying hardware resources. After: An abstract software layer (the virtualization layer) that emulates the server resources sits between the partitions and the hardware resources.
  • Before: When multiple applications are executed on the same machine they may conflict over resources, resulting in system overload or data corruption; mutual exclusion mechanisms or additional resources must be added at extra hardware cost, or the conflicting applications must be run on different physical servers. After: The abstract layer ensures mutual exclusion of resources between partitions (the VMs), so conflicting applications can be executed on different VMs in the same physical server.
  • Before: Resources on the server may be under-utilized because of system idle time and the presence of extra resources. After: System idle time is reduced by running multiple VMs and more applications, so the system resources are better utilized.
  • Before: The tightly-coupled architecture makes the system inflexible because all applications run on the same OS image, and the under-utilized resources add to the operational and maintenance cost*. After: Configuring VMs provides flexibility because applications can run on different Guest OS images, and better resource utilization reduces the operational and maintenance cost.

Note: * Operational cost is administrative, cooling and power cost of the running system. Maintenance cost is the refurbishment cost due to damages or new requirements.

Dissertation Structure

This report describes the server virtualization technology. VMware virtualization solution is used as a reference to describe the server virtualization technology. The report is organized as follows:

  • Chapter 1: Introduction describes server virtualization in brief.
  • Chapter 2: Virtualization Techniques describes the hardware and software techniques used to implement server virtualization.
  • Chapter 3: VMware ESX Server Architecture describes the processes and features of the VMware ESX server.
  • Chapter 4: Fault Tolerance in VMware describes the add-on applications that provide fault-tolerance functionality in the VMware virtualization solution. The applications described in this chapter are: VMware High Availability, Disaster Recovery and VMotion for migration of Virtual Machine and Consolidated Backup for Virtual Machine backup on the secondary storage device.
  • Chapter 5: Advantage of Server Virtualization is a descriptive list of benefits of implementing server virtualization in an enterprise.
  • Chapter 6: Security in VMware describes how security is implemented in server virtualization.
  • Chapter 7: Conclusion is the summary of the report.

Theory – Virtualization Techniques

Hardware Level

The hardware level virtualization is implemented by the sharing of physical server hardware resources between the VMs configured on the server.

Virtual CPU

The challenges to virtualization are:

  1. sharing of server physical resources
  2. execution of instructions that cannot be virtualized

The first challenge is addressed in the hosted-OS and hypervisor methods of virtual server configuration: the physical resources such as the I/O interface, memory and CPU are shared among the VMs configured on the server. The second is addressed with the following techniques (Understanding Full Virtualization, Para-virtualization, and Hardware Assist, 2007):

Full virtualization using binary translation
Figure 3. Binary Translation for Full Virtualization (Understanding Full Virtualization, Paravirtualization, and Hardware Assist, 2007).

The instructions that cannot be virtualized are replaced with instructions that can be executed on the virtual server. The kernel code with the replaced instructions has the same effect on the virtual server's hardware resources as the original kernel code would have on a non-virtual (native) server's hardware resources. The new kernel code is placed in the virtualization layer (hypervisor). This translation of instructions is called binary translation. Full virtualization is achieved with binary translation because the Guest OS is decoupled from the underlying hardware resources and the hypervisor translates the instructions that cannot be executed on virtual resources. The translation is performed on the fly: after an instruction is issued by the VM, the hypervisor translates it before it is executed by the CPU.

Non-OS instructions in user applications are executed directly on the host CPU, but OS instructions that require complex transactions, such as access to other resources, require simplification, which is achieved by binary translation performed in the VMM. The VMM decouples the Guest OS from the hardware layer and thus emulates a real CPU as a virtual CPU. The advantage of full virtualization is that, since the translation is performed in the hypervisor and the Guest OS remains unmodified, the VM can be ported to any virtual server or to native hardware.
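
The following toy sketch illustrates the binary-translation idea described above: privileged guest instructions are rewritten into safe substitutes that act on virtual state, while user-level instructions pass through unchanged. The instruction names and substitutions are invented for illustration only.

```python
# Toy illustration of binary translation; instruction names are invented.
PRIVILEGED = {"MOV_CR3": "VMM_SET_PAGE_TABLE", "HLT": "VMM_YIELD_CPU"}

def translate_block(guest_instructions):
    """Return a translated block the host CPU can execute directly."""
    translated = []
    for ins in guest_instructions:
        if ins in PRIVILEGED:
            translated.append(PRIVILEGED[ins])   # emulate against virtual hardware
        else:
            translated.append(ins)               # user-level code runs unchanged
    return translated

guest_block = ["ADD", "MOV_CR3", "CMP", "HLT"]
print(translate_block(guest_block))
# ['ADD', 'VMM_SET_PAGE_TABLE', 'CMP', 'VMM_YIELD_CPU']
```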

OS assisted virtualization or para-virtualization
Figure 4. Para-virtualization (Understanding Full Virtualization, Paravirtualization, and Hardware Assist, 2007).

Para-virtualization is also called “alongside virtualization” because this method addresses the communication between the Guest OS and the hypervisor in order to handle the instructions that cannot be virtualized; the prefix “para-” means “alongside”. Unlike full virtualization, where the Guest OS is not modified and instructions are instead translated in the kernel, para-virtualization provides hypercalls, or wrapper functions, between the Guest OS and the kernel to communicate instructions that cannot be virtualized. The kernel may also provide hypercalls for other complex operations such as memory management, interrupts and timers. The Guest OS must be modified to include these hypercalls.

The modified Guest OS runs in Ring 0, i.e. the layer for the most privileged instructions. Because of the modifications required in the Guest OS and the kernel, the compatibility and portability of para-virtualization are poor: the modified Guest OS cannot be ported to a new virtualization layer (hypervisor) without implementing hypercalls compatible with that layer. For the same reason, para-virtualization adds to virtual server maintenance cost. Para-virtualization is nevertheless used because it is relatively easy to modify the Guest OS and the virtualization layer compared with achieving full virtualization. For example, in the open source Xen project the virtual processor and virtual memory are implemented with Linux kernel modifications, and virtual I/O is implemented with Guest OS device drivers.
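
A minimal sketch of the hypercall idea, assuming invented names: the modified guest kernel routes a privileged operation (here, a page-table update) through a hypervisor call instead of executing it directly.

```python
# Sketch of para-virtualization: the Guest OS is modified to issue hypercalls
# instead of non-virtualizable instructions. All names are hypothetical.
class Hypervisor:
    def __init__(self):
        self.page_tables = {}

    def hypercall_update_page_table(self, vm_id, virtual_page, machine_page):
        # The hypervisor validates and applies the mapping on the VM's behalf.
        self.page_tables.setdefault(vm_id, {})[virtual_page] = machine_page

class ParavirtualizedGuestOS:
    """A guest kernel modified to route privileged work through hypercalls."""
    def __init__(self, vm_id, hypervisor):
        self.vm_id = vm_id
        self.hv = hypervisor

    def map_page(self, virtual_page, machine_page):
        # Instead of writing its own page tables, the guest asks the hypervisor.
        self.hv.hypercall_update_page_table(self.vm_id, virtual_page, machine_page)

hv = Hypervisor()
guest = ParavirtualizedGuestOS("vm1", hv)
guest.map_page(0x1000, 0x9F000)
print(hv.page_tables)
```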

Comparison of virtual CPU techniques
  • Full virtualization – Modifications: virtualization layer only. Portability: yes, the VM is portable.
  • Para-virtualization – Modifications: Guest OS. Portability: poor, the VM is not portable.
  • Hardware assisted virtualization – Modifications: virtualization layer and hardware. Portability: yes, the VM is portable.

Besides the virtual CPU, para-virtualization is also applied to implement virtual I/O, with shared memory between the Guest OS and the kernel; an example is VMware vmxnet.

Hardware assisted virtualization
Figure 5. Hardware Assisted Virtualization (Understanding Full Virtualization, Paravirtualization, and Hardware Assist, 2007).

Hardware assisted virtualization is a technique that eliminates the software modifications, such as binary translation and para-virtualization, required for executing privileged instructions that cannot be virtualized. The technique therefore requires enhancements in the CPU. Intel Virtualization Technology (VT-x) and AMD Virtualization (AMD-V) are the first generation of hardware assisted virtualization solutions. They introduce a new CPU execution mode so that instructions that cannot be executed in the virtual server are trapped to the VMM in the hypervisor. These instructions are executed in the new CPU execution mode, and the VM state information is stored in Virtual Machine Control Structures (VT-x) or Virtual Machine Control Blocks (AMD-V) of the Intel and AMD processors respectively.

The VM runs in a non-privileged CPU mode, and the VMM executes in the privileged CPU mode to trap the OS instructions. CPU virtualization is achieved by ensuring that all instructions issued by the VM are executed on the virtual resources. In the software solutions, the modifications required to execute instructions on the shared hardware resources are made in the Guest OS and in the layers beneath the VMs. The transition overhead of the new CPU execution mode in the first generation of hardware assisted virtualization is higher than that of the VMware binary translation solution for full virtualization.
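
The trap-and-emulate flow can be sketched as below. The set of sensitive instructions, the state dictionary and the function names are assumptions made for the example, not the actual VT-x/AMD-V mechanics.

```python
# Conceptual trap-and-emulate loop for hardware-assisted virtualization:
# sensitive instructions cause a "VM exit" to the VMM, which emulates them
# and resumes the guest. Instruction names and state layout are invented.
SENSITIVE = {"IN", "OUT", "MOV_CR3"}

def vm_exit(ins, vm_state):
    # The VMM reads the saved VM state (a VMCS/VMCB-like structure), emulates
    # the instruction against virtual devices, and records the exit.
    vm_state["exits"].append(ins)
    vm_state["retired"] += 1

def run_guest(instruction_stream, vm_state):
    for ins in instruction_stream:
        if ins in SENSITIVE:
            vm_exit(ins, vm_state)       # control transfers to the VMM
        else:
            vm_state["retired"] += 1     # guest code runs directly on the CPU

state = {"retired": 0, "exits": []}
run_guest(["ADD", "OUT", "CMP", "MOV_CR3"], state)
print(state)   # {'retired': 4, 'exits': ['OUT', 'MOV_CR3']}
```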

Virtual I/O

A virtual server that hosts Virtual Machines must also provide a dedicated input/output interface for each VM. Possible solutions must consider the hardware cost of the NICs, the operational and maintenance cost, and the effect of the solution on server consolidation. Software-based and hardware-based I/O virtualization solutions are discussed here (The Future of Ethernet I/O Virtualization is Here Today, 2006):

Software-based virtual I/O
Figure 6. Software-based virtual I/O (The Future of Ethernet I/O Virtualization is Here Today, 2006).

In a virtual server the I/O port is virtualized by introducing a virtual NIC per VM. The virtual NIC provides a native NIC connection to the VM through a Virtual Switched Network (VSN). Each virtual NIC is assigned a MAC address and an IP address, thereby emulating the physical NIC that connects the virtual server to the organization's network. The VSN uses shared memory for buffers and asynchronous buffer descriptors for I/O data transfer, and it resides in the Virtual Machine Monitor (VMM). Since the VSN has to process I/O requests from multiple virtual NICs, the CPU utilization of the virtual server increases. The VSN must ensure that every virtual NIC gets an appropriate share of resources such as shared memory buffers, buffer descriptors and network bandwidth in order to provide native-NIC-like functionality.
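
A toy model of the multiplexing performed by the VSN is sketched below; the round-robin drain and the class names are illustrative assumptions, not VMware's virtual switch.

```python
# Sketch of a Virtual Switched Network: frames from several virtual NICs are
# queued and multiplexed onto one physical NIC. Classes are illustrative only.
from collections import deque

class VirtualNIC:
    def __init__(self, mac):
        self.mac = mac
        self.tx_queue = deque()

    def send(self, payload):
        self.tx_queue.append((self.mac, payload))

class VirtualSwitch:
    def __init__(self, vnics):
        self.vnics = vnics

    def flush_to_physical_nic(self):
        """Drain each virtual NIC in round-robin order onto the physical link."""
        wire = []
        while any(v.tx_queue for v in self.vnics):
            for v in self.vnics:
                if v.tx_queue:
                    wire.append(v.tx_queue.popleft())
        return wire

vm1, vm2 = VirtualNIC("00:50:56:aa:00:01"), VirtualNIC("00:50:56:aa:00:02")
vm1.send("http request")
vm2.send("db query")
vm1.send("http request 2")
print(VirtualSwitch([vm1, vm2]).flush_to_physical_nic())
```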

The advantages of software-based virtual I/O are:

CPU utilization – In a virtual server, the traffic from the multiple VMs (data sources) is multiplexed onto the same physical NIC. Multiplexing packets from multiple sources leaves the CPU less idle because the packet queue remains non-empty most of the time. In a non-virtual server only the traffic from or to a single physical source has to be processed, so CPU utilization remains low.

Network Bandwidth Utilization – The physical link bandwidth that remains under-utilized in a non-virtual server, where traffic comes from only one physical data source, is better utilized when traffic from multiple VMs is multiplexed onto one physical link in a virtual server.

The disadvantages of software-based virtual I/O are:

Latency – Because there is one VM corresponding to each consolidated server, the allocation and de-allocation of shared resources for the multiplexed traffic from these VMs is performed by the VSN. The processing overhead of the VSN may add latency to the traffic. This latency may hinder server consolidation in a data centre by restricting the number of servers that can be consolidated onto one physical server.

Hardware-based virtual I/O

Hardware-based virtual I/O is achieved with an intelligent NIC that implements virtual I/O, TCP/IP and upper-layer functionality in the NIC itself. Its main advantage is that it removes the virtual I/O functionality implemented in the Virtual Machine Monitor from the data path and thus eliminates the latency introduced by software-based virtual I/O.

The methods for implementing hardware-based virtual I/O are:

Direct Assignment

Multiple NICs are installed in the virtual server, with at least one NIC configured per Virtual Machine. The virtual I/O functionality implemented in the intelligent NIC eliminates the need for data packet processing in the Virtual Machine Monitor.

Figure 7. Direct Assignment of NICs to VMs (The Future of Ethernet I/O Virtualization is Here Today, 2006).

In direct assignment of a NIC to a Virtual Machine, the Virtual Machine Monitor is bypassed, so there has to be a mechanism in either the VM or the hardware to associate the memory resources used for I/O with a specific VM. To achieve this association, a DMA Remapping function must be implemented in the hardware. This function maps the system memory accessed by the I/O device to the VM-specific memory pool; I/O page tables are used for the remapping. The VMM retains a role in controlling the I/O operations of VMs: it isolates the DMA access requests of one VM from another, but it is not involved in data packet processing.

The situation can be illustrated as follows: there is one DMA engine in the virtual server, and the DMA Remapping function assigns memory pools M1, M2 and M3 to Virtual Machines VM1, VM2 and VM3 respectively. The Virtual Machine Monitor configures the memory pools for the VMs in the DMA remapping tables in order to isolate one VM from another. A VM may then issue a memory transfer command by writing into the DMA queue.

When a new data packet arrives, the DMA engine writes the packet's memory buffer address into the VM queue. The VMM is not involved in processing the data packet to determine its source or destination VM (shown in the figure as the VM-to-DMA path, and vice versa, running behind the VMM). The benefit of bypassing the VMM software layer in the data path must be weighed against the advantages and requirements of server consolidation in the data centre, and this method of hardware-based virtual I/O has the additional cost of one NIC per VM.
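
The isolation role of DMA remapping can be sketched as follows; the page numbers, pool contents and class names are invented for the example, and a real IOMMU enforces this in hardware rather than in Python.

```python
# Toy model of DMA remapping for direct device assignment: an I/O page table
# per VM translates device DMA addresses into that VM's memory pool, and any
# access outside the pool is rejected. Address values are made up.
class DMARemapper:
    def __init__(self):
        self.io_page_tables = {}     # vm -> {device dma page: machine page}
        self.pools = {}              # vm -> set of machine pages it owns

    def assign_pool(self, vm, machine_pages):
        self.pools[vm] = set(machine_pages)
        self.io_page_tables[vm] = {}

    def map(self, vm, dma_page, machine_page):
        if machine_page not in self.pools[vm]:
            raise PermissionError("page not in this VM's pool")
        self.io_page_tables[vm][dma_page] = machine_page

    def translate(self, vm, dma_page):
        # Consulted on every DMA access; isolation is enforced here, not in the VMM.
        return self.io_page_tables[vm][dma_page]

iommu = DMARemapper()
iommu.assign_pool("VM1", machine_pages=[0x100, 0x101])
iommu.assign_pool("VM2", machine_pages=[0x200])
iommu.map("VM1", dma_page=0x10, machine_page=0x100)
print(hex(iommu.translate("VM1", 0x10)))    # 0x100
```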

Shared Physical Architecture
Figure 8. Shared Physical NIC Architecture (The Future of Ethernet I/O Virtualization is Here Today, 2006).

All VMs share the physical NIC device(s) of the physical server; there is no separate physical NIC device for each VM. The shared physical NIC device is called a virtual NIC. DMA remapping is implemented in the virtual NIC hardware, and the virtual NIC supports the configuration of a separate virtual NIC driver for every configured VM. The virtual NIC is also responsible for configuring separate resources for every VM and for implementing the virtual networking functionality for virtual I/O that is otherwise performed by the VMM in the software-based method. Parameters that must be configured include MTU size, TCP segmentation parameters, interrupts and physical link bandwidth allocation. The data packet L2/L3 header processing that determines the destination VM is also implemented in the virtual NIC. The virtual NICs on the server must provide a function to interface with the host OS and to let the VMM configure the virtual NIC.

Enhancements for virtual I/O

The other enhancements proposed to reduce the processing overhead for virtual I/O and to enable inter-operability of virtual I/O devices are:

Support for logical CPU – The processor hardware will support a logical CPU for each configured VM on the physical server. An I/O Memory Management Unit will be allocated for every logical CPU, and the DMA remapping function is included in the logical CPU. The virtual NIC will maintain a DMA I/O Translation Look-aside Buffer in its cache for recently accessed and pre-fetched addresses. These enhancements bypass the VMM and Guest OS processing otherwise required for I/O memory access.

PCI support for I/O Virtualization – PCI Express I/O Endpoints with I/O Virtualization WG specification implementation will enable configuration of virtual servers with shared PCI based I/O devices. These PCIe I/O Endpoints will be able to interoperate with platform specific (such as different CPU or OS) DMA Remapping functions. It will also be possible to share a single PCIe I/O Virtualization Endpoint between multiple VMs on a single virtualized server. Example: Blades in a blade server may host multiple VMs and each blade may have a PCIe I/O Endpoint, this endpoint will be a virtual I/O interface for the VMs on the blade.

Virtual Memory

Figure 9. Memory hierarchy in a computer system (Mano, pg. 446).

In a computer system, programs and data are stored in auxiliary memory and are brought into main memory as and when required by the CPU. The transfer between auxiliary and main memory is performed by the I/O processor. The main memory is generally smaller than the auxiliary memory, so the complete program and data may not be transferred into main memory in one I/O transaction.

The programmer who builds the program and data uses virtual addresses to refer to memory locations. When the program is executed or the data is accessed by the CPU, each virtual address is mapped to a physical address in main memory. In a computer with virtual memory, the virtual address space is larger than the physical address space of the main memory.
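
A minimal sketch of the virtual-to-physical mapping, assuming a 4 KB page size and a naive load-on-fault policy chosen only for illustration:

```python
# Minimal sketch of virtual-to-physical address translation with a page table;
# pages not yet in main memory are "loaded" on first access. Sizes and the
# absence of a replacement policy are simplifications for the example.
PAGE_SIZE = 4096

class VirtualMemory:
    def __init__(self):
        self.page_table = {}          # virtual page -> physical frame
        self.next_free_frame = 0

    def translate(self, virtual_address):
        page, offset = divmod(virtual_address, PAGE_SIZE)
        if page not in self.page_table:               # page fault
            self.page_table[page] = self.next_free_frame   # load from auxiliary memory
            self.next_free_frame += 1
        return self.page_table[page] * PAGE_SIZE + offset

vm = VirtualMemory()
print(hex(vm.translate(0x12345)))   # first access faults, then maps to a frame
print(hex(vm.translate(0x12FFF)))   # same page, no fault
```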

Figure 10. Memory Virtualization (Understanding Full Virtualization, Paravirtualization, and Hardware Assist, 2007).

In a virtual server the Virtual Machine Monitor is responsible for mapping the virtual addresses to main memory addresses. The physical memory shown in the figure is the memory presented to each VM; it is allocated per VM from the server's main memory, which is a shared resource. The machine memory is the host's actual physical memory, which the VMM manages beneath the VMs.

Storage Area Network

A Storage Area Network (SAN) is a network of block-storage devices. SAN devices appear as locally attached auxiliary memory to the Operating System of the hosts and servers that connect to the SAN. The communication protocol used between the servers and the SAN devices is the SCSI protocol. Because a SAN may be co-located in the server room or located at a distance, SCSI cables are not used for the physical interface between the SAN device and the server.

The physical medium used to connect the SAN to the servers is typically fibre optic cable running the Ethernet protocol. The SCSI protocol is mapped over TCP/IP and Ethernet; the mapping protocol can be iSCSI or HyperSCSI, and other options such as iFCP and FICON also exist. The iSCSI protocol requires TCP/IP over Ethernet, and the underlying physical layer does not necessarily have to be fibre optic cable. The SAN disk drives are assigned a Logical Unit Number (LUN). A server that has to access the SAN device initiates a TCP/IP session to the target LUN, and the established iSCSI session emulates a SCSI hard disk for the server.
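
Conceptually, the LUN-to-server relationship can be modelled as below; the IQN, LUN numbers, sizes and local device name are placeholders invented for the example.

```python
# Conceptual sketch of how LUNs on a SAN are presented to servers: a server
# opens a session to a target LUN and then treats it as a local SCSI disk.
class IscsiTarget:
    def __init__(self, iqn):
        self.iqn = iqn
        self.luns = {}                     # LUN number -> size in GB

    def add_lun(self, lun, size_gb):
        self.luns[lun] = size_gb

class Server:
    def __init__(self, name):
        self.name = name
        self.virtual_disks = {}            # local device name -> (target IQN, LUN)

    def login(self, target, lun, device_name):
        if lun not in target.luns:
            raise LookupError("unknown LUN")
        # After a successful session the LUN behaves like a local hard disk.
        self.virtual_disks[device_name] = (target.iqn, lun)

san = IscsiTarget("iqn.2008-01.com.example:storage1")   # placeholder IQN
san.add_lun(0, size_gb=500)
san.add_lun(1, size_gb=500)
esx = Server("esx-host-1")
esx.login(san, lun=0, device_name="disk0")
print(esx.virtual_disks)
```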

Storage area networks are used to provide virtual memory (external storage) for the servers in the data centre. Because more than one server is consolidated onto one physical server, a larger-capacity hard disk is required in a virtual server. Instead of installing multiple hard disk devices inside the virtual server, the hard disk devices are consolidated to form a Storage Area Network. These devices are accessible by LUN address and are also called “virtual hard drives”. Providing this virtual memory access for the Virtual Machines configured on the server has many benefits:

  • Consolidating hard disks on a SAN eliminates the need to install additional hard disks in the physical virtual server. The data security that would have been achieved with a separate hard disk for every VM can still be achieved by configuring a separate hard disk on the SAN for each VM. However, to optimize storage utilization and reduce the power, cooling and hardware costs of the hard disks, partitions may be created on the SAN devices. The SAN is also used for containment of the hard disks that would otherwise be scattered when servers are consolidated as VMs on one physical server.
  • Disaster recovery is simplified and system downtime is reduced when a SAN is connected to the physical servers. The VM image and data are stored on the SAN, which is connected to both the active and the passive servers. When a VM has to be migrated for disaster recovery, only the VM's active state data needs to be ported offline or transferred online with VMotion. After recovery, the new active VM accesses the SAN disk with the same LUN that the previous VM used.
  • The hard disks used by Consolidated Backup for VM backup may also be configured on the same SAN; these disks may be either co-located or at a distance.

Network Attached Storage

NAS is a file-level storage device connected to a LAN/WAN. NAS devices are server appliances used only to store data; they provide data access and manage data access and storage activities. The hard disks in a NAS system may be arranged as logical and redundant storage devices or as Redundant Arrays of Independent Disks (RAID). NAS is accessible with file-based protocols such as NFS on UNIX and SMB on Windows systems.

The advantage of using NAS over SAN is that the former provides file-system based storage whereas the latter is a block-memory storage that must be supported with a separate file-system. NAS may be used to attach an external memory resource to a virtual server. Secondary storage SAN that stores VM backup may be accessible through NAS.

Blade Server

“A blade server is a server chassis housing multiple thin, modular electronic circuit boards, known as server blades. Each blade is a server in its own right, often dedicated to a single application” (Blade Server, 2008).

“A server blade is a thin, modular electronic circuit board containing one, two, or more microprocessors and memory, that is intended for a single, dedicated application (such as serving Web pages) and that can be easily inserted into a blade server, which is a space-saving rack with many similar servers” (Server Blade, 2005).

Blade servers evolved from the need to reduce infrastructure complexity in data centres: the requirement was to increase the processing power and storage space of a single physical system without adding complexity to the infrastructure layout or raising the operational costs of power, cooling and floor space (Blade Server Technology Overview, 2008). A blade server may contain more than one server blade, and a server blade can host more than one VM.

Advantages of blade servers are:

  • The processing power and memory are directly proportional to the number of server blades contained in the blade server.
  • Blade server also reduces the infrastructure space requirements by consolidating more than one computer system in one box, every server blade can be considered as an independent computer system.
  • Blade servers also reduce operational and maintenance cost by reducing power requirements and simplifying cabling requirements.
  • Other than on-board memory, blade servers may include separate auxiliary memory that is connected to the high-speed bus that connects all the server blades inside the box. Blade server may also connect to Network Attached Storage (NAS) or Storage Area Network (SAN) through iSCSI protocol on network port.
  • The advantage of consolidating multiple resources into a single box is that it reduces administrative overhead and all resources can be managed through single management interface.

Blade servers are also referred to as high-density servers and are used for clustering servers that perform similar tasks in order to achieve server consolidation and containment. Load balancing and fault tolerance features for high availability are also built into the blade server.

Desktop Virtualization

The applications and data are stored on a remote server that is accessible from the thin-client user desktop. In a client-server architecture, a thin client is a client computer or piece of software used for data input and output from and to the user. The thin client does not store or process the data; it provides input data to the remote server for processing and receives the processed data for output.

In financial services such as banking, data security is of utmost importance, so data is processed and stored on the remote servers and data encryption is used for input and output data transfers. The speed of data input and output operations on the desktop depends on the data processing speed of the backend server and the processing speed of the server's input/output interface. The data processing and I/O interfaces of the backend server are shared by all desktops connected to the server.

The types of virtual desktop configurations are (Desktop Virtualization, 2008):

  • Single Remote Desktop – The desktop PC is accessed from remote location via remote access tools such as GoToMyPC and PCAnywhere etc.
  • Shared Desktop – Multiple desktops connect to a server to access shared server resources. A powerful server, e.g. a mainframe computer, can handle hundreds of desktop sessions simultaneously.
  • Virtual Machine Desktop – Desktops connect to a server that hosts multiple VMs.
  • Physical PC Blade Desktop – PC blades are used to host multiple user sessions for better utilization of resources. Centralized processing power is provided for the user desktops that connect to a PC blade.

In order to better utilize the CPU processing power ClearCube provides a PC blade virtualization solution that enables multiple users to connect to a PC blade (Virtualization Solutions, n.d.).

Figure 11. ClearCube PC blade virtualization solution (Virtualization Solutions, n.d.).

A PC blade is an Intel based computer that consists of all PC components and provides a centralized PC functionality to multiple users thus ensuring optimized utilization of computer resources. ClearCube PC blades provide virtualization with VMware virtualization solution and are manageable via ClearCube Sentral (PC Blades, n.d.).

Intel Virtualization Technology

Diverse operating systems can be installed and executed on the same server. By installing multiple environments on the same server, more data processing can be achieved with fewer computer systems, reducing the need for a large data centre. Multiple environments on a single computer also help developers and engineers build and test applications for diverse environments. A smaller infrastructure reduces power and cooling costs, and the workload and complexity for system administrators are reduced because fewer computer systems have to be managed for the same maintenance or administrative operations, such as backup. Virtualization is a software solution that isolates operating systems and their applications from the platform hardware resources and from each other (Intel Virtualization Technology, 2006).

Figure 12. Interaction between I/O and processor virtualization (Intel Virtualization Technology for Directed I/O, 2007).

The partition with an OS is called a Virtual Machine (VM), and the applications in this partition run on this VM. The Operating System contained in a VM is called the Guest OS. Each VM has its own security arrangements to secure Guest OS resources, applications and data, and a VM may operate in isolation without affecting the activities in other VMs. The Virtual Machine Monitor (VMM) is an interface between the VMs and the hardware resources: it emulates hardware resources for the Guest OS and retains control of the hardware platform resources. The Guest OS assumes that the hardware resources available to it are not shared and that it has direct access to these resources.

The advantage of virtualization is that redundant VMs can be configured on the same hardware to provide high availability for business continuity. Both test and production environments can be created on the same server to reduce cost by eliminating duplicate systems: new software releases can be tested on the test VM and then loaded on the production VM at release time. Heterogeneous Operating Systems on the same server gain greater compatibility. Hardware assisted virtualization provides greater OS and VMM independence, and VMs isolated with hardware assisted virtualization reduce security risks and protect applications from faults that might otherwise propagate through the VM and VMM in a purely software-based virtualization solution. Intel Virtualization Technology also enables 64-bit support for Operating Systems and applications.

Hardware-based tasks

Intel virtualization technology reduces the computational load on the VMM by performing compute-intensive operations in the hardware; without it, platform control may have to be passed to the OS. Examples of compute-intensive operations that can be performed in hardware are memory buffer management tasks, such as the allocation and de-allocation of buffers from the memory space allocated to a VM. The “dedicated memory space that stores CPU and OS state information” (Intel Virtualization Technology, 2006) is accessible only by the VMM, to prevent data corruption.

Software Level

Hosted

In the hosted virtual system architecture the VMs run on top of a host Operating System. There are thus two Operating Systems: a Guest OS included in each VM, and a native (host) OS that resides between the VMs and the hardware resources.

Figure 13. Virtualization Hosted Architecture.

The virtualization layer provides an interface between the Guest OS and the host OS. The host OS has direct control of all the underlying hardware resources, and a VM is installed and executed like any other application on the host OS. The advantage of the hosted architecture is that the VM may contain a Guest OS different from the host OS (Virtualization Overview, 2006).

Hypervisor

The virtualization software runs on top of the clean x86 system without any underlying OS and therefore hypervisor architecture is also called a “bare-metal” approach.

Figure 14. Virtualization Hypervisor Architecture.

Unlike the hosted architecture, there is no host OS and no separate virtualization layer included in a VM. The virtualization layer above the bare x86 platform controls the system hardware resources and provides a configuration interface for them. The Operating System included in the VM acts as a native OS, unaware of the virtualization layer beneath it. In the VMware hypervisor solution a service console is provided for system resource configuration management. Bare-metal hypervisors may be agnostic to Operating Systems and can therefore provide configuration flexibility by supporting VMs with different Operating Systems on the same physical server; a hypervisor that is tightly coupled with one OS will support VMs with that OS only.

A hypervisor may also support more than one, but a finite number of, Operating Systems. Because of its direct access to the hardware resources, a hypervisor provides better resource utilization and hence greater performance and scalability. The absence of a host OS reduces the number of failure points, so disaster recovery is simpler and the system is more robust. The VMM in the hypervisor software provides an abstract hardware resource interface for each VM that it supports (Virtualization Overview, 2006; Understanding Full Virtualization, Para-virtualization, and Hardware Assist, 2007).

Research Method 1- VMware ESX Server Architecture

VMware ESX server is a hypervisor that consists of VMkernel and applications that run on top of it. The ESX server 3i is either embedded in the server firmware or distributed as software to be installed from the boot disk.

VMkernel is a POSIX-like operating system; the core functionalities it provides are (What is POSIX?, n.d.):

  • Resource scheduling
  • I/O stacks
  • Device Drivers

The ESX server may be embedded in the firmware or installed on the boot disk of the virtual server. On system boot, VMkernel detects the hardware devices present in the server and installs the device drivers for these devices. The default configuration files are created; these are accessible and modifiable with VMware management tools such as VMware VirtualCenter and the VI Client. After system initialization the DCUI process is launched.

Figure 15. ESX Server Architecture.

The processes that run on top of VMkernel are:

Direct Console User Interface

This is a menu-driven “local user interface that is displayed only on the console of an ESX server 3i system” (The Architecture of VMware ESX Server 3i, 2007). DCUI provides an interface for initial minimal configuration such as to set root password or to assign IP address in order to connect the server to the network if automatic DHCP configuration does not work. At a later time when network connectivity is available remote management tools may be used. Remote Management tools provided by VMware are VirtualCenter, VI Client and remote command line interface (The Architecture of VMware ESX Server 3i, 2007).

Virtual Machine Monitor

VMM is the process that provides an execution environment such as software and hardware resource control interface for the VM. There is exactly one instance of VMM and the helper process VMX per VM (The Architecture of VMware ESX Server 3i, 2007).

Agents for remote interface with VMware Infrastructure management –

Common Information Model (CIM) system

The CIM system consists of a CIM Object Manager, called the CIM broker, and a set of providers, called CIM providers, that give access to computing resources such as device drivers and other hardware resources. VMware supplies CIM providers for virtualization-specific resources, for the storage infrastructure and for monitoring server hardware (The Architecture of VMware ESX Server 3i, 2007).

Other User World Processes

The “user world” is the framework for running processes in the hypervisor architecture of a virtual server. Native VMkernel applications run in the “user world”; general-purpose arbitrary applications do not. Some of the management agents that run in the “user world” are (The Architecture of VMware ESX Server 3i, 2007):

  • The hostd process provides an interface for user management. It integrates with direct VI Client connections and the VI API.
  • The vpxa process is an intermediary process that provides a connection between hostd agent and VirtualCenter.
  • The VMware HA agent also runs in the user world.
  • The syslog daemon provides the logging facility on local and remote storage. When remote logging feature is enabled the system logs are stored on the remote target.
  • The iSCSI target discovery process finds the target iSCSI device and then VMkernel handles the iSCSI traffic.

Open Network Ports

There are some open network ports on ESX server 3i to access important services such as (The Architecture of VMware ESX Server 3i, 2007):

  • 80 – To display a static web page.
  • 443 – A reverse proxy port for SSL-encrypted communication with services such as the VMware Virtual Infrastructure (VI) API.
  • 427 – A service location protocol port for locating appropriate VI API for a service.
  • 5989 – CIM server port for 3rd party management tools

The VI API provides the access path used by tools such as the RCLI, the VirtualCenter Server, the VI Client and the SDK.
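
For administrators who want to verify reachability of the ports listed above, a small standard-library check such as the following can be used as a sketch; the host name is a placeholder, and closed or filtered ports will simply be refused or time out.

```python
# Simple reachability check for the management ports listed above.
# The host name is a placeholder; point it at your own ESX host.
import socket

PORTS = {80: "static web page", 443: "reverse proxy (SSL)",
         427: "service location protocol", 5989: "CIM server"}

def check(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

host = "esx-host.example.com"      # placeholder host name
for port, purpose in PORTS.items():
    state = "open" if check(host, port) else "closed/filtered"
    print(f"{port:>5}  {purpose:<28} {state}")
```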

File System

VMkernel has an in-memory file system that is distinct from the VMware VMFS that stores the VMs; the latter may be stored on any external memory. The in-memory files, such as log files, do not persist across a power shutdown, so if they are to be saved they must be stored on external memory. A syslog daemon stores logs on the remote target and in local memory. The advantage of storing VMFS on external memory is that, if the files are stored on NAS or SAN, a local hard disk is not required, which saves power and reduces the probability of system faults due to hard disk failure.

Both the in-memory file system and VMFS are accessible with the remote command line interface, and HTTPS get and put access is provided for authenticated users. The user and group authentication information and access privileges are configured on the local server, i.e. the server that owns the files; the local server may keep these files in local or remote storage (The Architecture of VMware ESX Server 3i, 2007).

User and Groups

The system can distinguish between users that access the server through the Virtual Infrastructure Client, the remote command line interface or the VIM API. Groups are created for ease of configuration: settings for multiple users can be applied in a single step by assigning the users to a group. The files /etc/passwd, /etc/shadow and /etc/group store the user and group configuration definitions. These files reside in the in-memory file system and are restored from persistent storage on reboot (The Architecture of VMware ESX Server 3i, 2007).

State Information

VMkernel stores the system configuration in memory, and this configuration information is copied to persistent memory periodically to avoid information loss due to a sudden power shutdown; a local hard disk is therefore not necessary in the VMware virtual server. The ability to store configuration information on remote storage facilitates server configuration backup: if a server or a VM on a server fails, it can be restored to its pre-failure state by downloading the configuration from the backup (The Architecture of VMware ESX Server 3i, 2007).

VI API

External applications can integrate with the VMware infrastructure through the Virtual Infrastructure API or the command line interface; both provide an interface to VirtualCenter, which maintains the agents and the state of transactions. The ESX server 3i is positioned behind VirtualCenter and remains stateless since no local agents are installed on it, so all system resources are available for computing. The advantage of this model is that the agents for monitoring and management can be stored and executed on an external system, and the ESX server remains independent of any changes to these agents. The VI API and CIM thus provide a model for easy maintenance and control of ESX server resources (The Architecture of VMware ESX Server 3i, 2007).

Research Method 2 – Fault Tolerance in VMware

In order to ensure business continuity with minimum Total Cost of Ownership (TCO), minimum system downtime, minimum recovery time and almost no service disruption, the VMware virtualization software includes fault-tolerance applications that may be installed as add-on modules.

High Availability

The VMware High Availability (HA) feature ensures business continuity by detecting system faults and recovering from them. VMware HA monitors the ESX server host for faults, and if a fault results in the failure of a Virtual Machine, the VM is restarted on the same or an alternate host. The VMware Failure Monitoring functionality in VMware HA monitors both Host OS and Guest OS failures; the Host OS is the Operating System between the VMs and the server hardware resources, while the Guest OS is the Operating System included in the Virtual Machine (refer to figure 1). VMware Failure Monitoring also monitors VM availability by checking heartbeats: the VMware Tools on a Virtual Machine send a heartbeat to VMware Failure Monitoring every second, VMware Failure Monitoring checks the heartbeats every 20 seconds, and a defaulting Virtual Machine is declared failed and is reset.

VMware Failure Monitoring differentiates between Virtual Machines that are powered off, migrated or suspended, those that are powered on and sending heartbeats, and those that have stopped sending heartbeats or are so overloaded that VMware Tools is starved of resources; starved VMware Tools get less CPU time and therefore send fewer heartbeats. The heartbeat send frequency, the heartbeat monitoring time and the stabilization time after start-up are configurable parameters (Virtual Machine Failure Monitoring, 2007).
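
The heartbeat scheme described above can be sketched as a toy monitor that uses the 1-second send and 20-second check intervals quoted in the text. The class and method names are invented, and the exclusion of powered-off, suspended or migrating VMs is simplified to a single parameter.

```python
# Toy heartbeat monitor; thresholds are configurable in the real product and
# the names used here are illustrative only.
import time

CHECK_INTERVAL = 20           # seconds between monitor sweeps
# (VMware Tools in each VM is described as sending one heartbeat per second.)

class FailureMonitor:
    def __init__(self):
        self.last_heartbeat = {}          # vm name -> timestamp of last heartbeat

    def heartbeat(self, vm, now):
        self.last_heartbeat[vm] = now

    def sweep(self, now, excluded=()):
        """Return VMs to reset: silent too long and not in an excluded state."""
        failed = []
        for vm, ts in self.last_heartbeat.items():
            if vm in excluded:
                continue                  # powered off, suspended or migrating
            if now - ts > CHECK_INTERVAL:
                failed.append(vm)
        return failed

monitor = FailureMonitor()
t0 = time.time()
monitor.heartbeat("web01", t0)
monitor.heartbeat("db01", t0)
# 25 simulated seconds later, only web01 has kept sending heartbeats.
monitor.heartbeat("web01", t0 + 25)
print(monitor.sweep(now=t0 + 25))   # ['db01'] would be declared failed and reset
```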

Disaster Recovery

Traditional disaster recovery methods require redundant hardware and the same software configuration on both the active and the standby system. This may require new hardware purchases and adds operational and maintenance overhead for the redundant hardware. The standby system used in recovery scenarios may remain functionally inactive most of the time, yet it still occupies floor space and requires cooling and power to keep it up to date. In addition, disaster recovery requires that backups of the active system be taken at regular intervals so that the latest system state is kept in secure storage for recovery from a catastrophic failure in which both the active and the standby systems may be lost. The backup may be taken offline on tapes or online over a live network connection that keeps the secondary storage up to date at all times.

The disaster recovery process is complex because the maximum downtime of the system must be kept low in order to minimize service disruption and maintain business continuity. To eliminate double-fault scenarios, such as a tape fault, a system image fault, or the loss of both the active and the standby system, more stringent disaster recovery mechanisms are required, such as methods that avoid physical handling of tapes, simplify image creation for different system configurations, and allow testing of the standby system. Virtualized servers provide a solution to these complex disaster recovery requirements (Disaster Recovery Solutions from VMware, 2007).

Figure 16. Traditional disaster recovery (Disaster Recovery Solutions from VMware, 2007).

The hardware configuration, OS installation and system configuration are a one-time initial process that is repeated for every active server in the data centre. Recurring expenses can be reduced by backing up only the critical applications that have the highest impact on business continuity and the recovery time objective (RTO). The “recovery time objective (RTO) [...] is the maximum outage duration that your end users can withstand without being disruptive to your business” (Disaster Recovery Solutions from VMware, 2007). Higher RTO values can be assigned to non-critical applications so that frequent backups are not required. The risk in this approach is that critical applications are often coupled with lower-end or non-critical applications; faults in these non-critical applications must not propagate to the critical applications, so the RTO for the critical applications must be defined carefully.

The core properties of VMware infrastructure that are useful in disaster recovery are (Disaster Recovery Solutions from VMware, 2007):

  • Partitioning – Virtual Machines (VM) are created by partitioning of server resources. Multiple applications and Operating Systems are consolidated onto one physical system in order to save costs on hardware resources, floor space, cooling and power requirements. Partitioning isolates applications from one another and helps reduce disaster recovery expenditure because of server consolidation.
  • Hardware Independence – VMware VM can be installed on any x86 hardware platforms. The hardware independence property of VM makes configuration of virtualized servers simple. The disaster recovery is faster because complexity in creation of system images for different hardware platforms is eliminated and system start-up is simplified. Also, instead of purchasing new hardware for disaster recovery, the VM for disaster recovery can be installed on any server with required hardware resources for the VM.
  • Encapsulation – The entire server/VM image, including the OS, applications, configurations, policies, data and current state information, can be stored in a file. This simplifies migration to another server machine or VM: for disaster recovery only a simple file transfer is required, and there is no need to rebuild the entire system from scratch.
  • Isolation – Since the applications in a VM are isolated from the applications in another VM the system updates, faults and disaster recovery in VMs can be isolated from other VMs.

The recovery process in a virtualized server is illustrated in the figure below:

Figure 17. Recovery in Virtual Server (Disaster Recovery Solutions from VMware, 2007).

In comparison with the traditional disaster recovery process shown in figure 16, the VMware infrastructure recovery process for a virtualized server is simple and short. A physical-to-physical traditional disaster recovery process would take about 40 hours, whereas the virtual-to-virtual disaster recovery process would take only about 4 hours. This reduction comes from the smaller number of tasks involved: a separate hardware configuration and bare-metal installation are not required for every system.

Tapes are the most common mechanism for system backup; VMware infrastructure provides the following three methods (Disaster Recovery Solutions from VMware, 2007):

Backup from within a VMware Virtual Machine

A third-party backup agent is installed in the VM. If the VM is connected to the network (intranet/internet), the default backup location can be specified. The agent may also have additional features to specify the backup frequency and the files/folders that should be backed up. A file-level backup can be taken and restored on the VM.

Backup from VMware ESX Server Service Console

From the ESX server console a full system image can be made; the backup agent resides on the server. This method can be used to create a backup file for the VMs on the server, and the file can then be used to port or restore a VM. The backup granularity is not file level: a full VM image is made without affecting the applications installed in the VM. The in-machine mechanism described above complements this method, and the two together provide a complete backup feature for the VM.

VMware Consolidated Backup

VMware consolidated backup agent can be integrated with a third-party centralized backup application in order to facilitate the backup of VM contents from a central server. Example: Microsoft Windows 2003 proxy server. There are many advantages of consolidated backup:

  • Both VM file level and VM image backup can be taken.
  • A backup agent is not required on every VM, VMware consolidated backup is included in VMware ESX server.
  • Backup traffic on LAN is eliminated because third-party agent can directly attach to the SAN that connects to the VMs on the virtual server. As shown in the figure the backup agent can mount SAN storage disks to make backup disks.

VMware Consolidated Backup is explained in more detail in section 4.4.

VMware infrastructure can work with many types of replication solutions such as host server based, Redundant Array of Inexpensive Disks (RAID) and network solutions SAN or NAS. VMware infrastructure also supports the periodic testing of the disaster recovery plan.

VMotion

VMware VMotion technology is used for the migration of VM from one physical server to another physical server or to optimize the server resources dynamically without any disruption in the Guest OS or system downtime. VMotion enhances the data centre efficiency because it provides a non-disruptive solution to maintenance activities. The VM state data is stored on a shared memory that is accessible from both the active and the redundant server. The active state data is then transmitted over a high-speed virtual network for quick migration (VMware VMotion and CPU Compatibility, 2007).

All records of migration activities are maintained in order to comply with audit requirements. The VM migration schedule can be pre-configured so that an administrator need not be present at the actual time of migration. VMotion also ensures that the shared and virtual resources are available to mission-critical applications during migration in order to avoid any disruption of services. Although VMotion supports migration of a VM to any hardware platform, disruption of the services provided by a VM is avoided by first finding the most suitable placement before migration is initiated: the server from which the shared resources are accessible and where the required virtual resources, such as virtual I/O devices, virtual memory and virtual CPU, are available.

Requirements for VMotion host and destination are (VMware VMotion and CPU Compatibility, 2007):

  • Datastore compatibility – The shared resources on the network may be made available with SAN or iSCSI interface. Alternately VMFS or shared NAS may also be used to share storage disks between the source and the destination of the VMotion.
  • Network compatibility – The source and destination servers of VMotion and ESX servers must be connected to the same gigabit Ethernet subnet.
  • CPU compatibility – The source and destination host server CPU compatibility is required for VMotion in order to ensure that the virtual CPU functionality in the target VM is supported on the destination server. Refer section 2.1.1 for virtual CPU functionality.

If all the components of a server are virtualized, the Virtual Machine on the server holds the state information for the virtual BIOS, virtual I/O, virtual CPU and virtual memory. All the VM state data can be encapsulated in a file that can easily be ported to another server for a smooth and fast migration. The following types of migration can be performed:

  • Powered Off – Also known as cold migration, the host VM is powered OFF and after file transfer is complete the VM is powered ON on the destination server.
  • Suspended – Also known as suspend/resume migration, the host VM is suspended and then resumed on the destination server after the VM state file transfer is complete.
  • Powered-on or live – Also known as live migration performed by VMotion, the VM is migrated from the host to destination server without any disruption.

In suspended or live migration the VM application state is unaffected by the migration, so these are also known as hot migrations. VMotion performs a CPU compatibility check on the destination server before migration to ensure that the migrated VM does not crash after it goes live. The CPUID instruction is used to determine the host CPU's instructions and features; CPUs undergo improvements and the ISA is augmented with new instructions over time.

CPU vendors add new OS modules to support new features. If a CPU feature or instruction set extension is present on the host server and is used by the VM, the same feature or extension must be available to the VM on the destination server after migration in order to pass the VMotion compatibility check. For example, Intel supports multimedia features with the SSE instruction set, comparable to AMD's 3DNow!. The compatibility check considers the following aspects (VMware VMotion and CPU Compatibility, 2007):

  • CPU micro-architecture: Different CPU vendors or a processor family with different micro-architecture may support the same Instruction Set Architecture (ISA). Example: Intel Core-based CPU and P4 have different micro-architecture but same x86 ISA is implemented.
  • Privileged/non-privileged code – OS instructions that execute at the highest privilege level (ring 0) are atomic and cannot be pre-empted. Applications consist mostly of non-privileged instructions but may also call OS APIs that execute privileged instructions. Because privileged instructions are atomic, the execution of such an instruction must complete without interruption from a live migration. Migrating a VM from one server to another therefore requires that, in the hypervisor architecture, the Guest OS APIs map to the same privileged instructions in the kernel, and that, in the hosted architecture, the host OS supports the Guest OS APIs.

For example, Intel's SSE3 instruction set provides faster floating-point operations, while AMD's 3DNow! instruction set improves multimedia processing.

VMware tools for CPU compatibility check

  • CPUID instruction – “VMware provides a bootable CPUID image on VMware ESX Server media” (VMware VMotion and CPU Compatibility, 2007). An application may execute the CPUID instruction at start-up and, based on the result, choose whether to enable certain features.
  • Managed Object Browser – An interface on ESX hosts used to determine the host CPU features; it is accessed through a web browser.
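
The kind of comparison such a compatibility check implies can be illustrated with a minimal sketch, assuming Linux hosts whose CPU feature flags can be read from /proc/cpuinfo; this is an illustration only, not VMware's actual tool, and it conservatively requires every source flag to exist on the destination.

    def cpu_flags(cpuinfo_text: str) -> set:
        """Extract the CPU feature flags advertised in /proc/cpuinfo output."""
        for line in cpuinfo_text.splitlines():
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
        return set()


    def vmotion_compatible(source_info: str, dest_info: str) -> bool:
        """Conservative check: a VM might use any feature of the source CPU,
        so every source flag must also be present on the destination."""
        missing = cpu_flags(source_info) - cpu_flags(dest_info)
        if missing:
            print("Destination lacks features present on source:", sorted(missing))
        return not missing


    # Usage, assuming /proc/cpuinfo was collected from both hosts:
    # with open("source_cpuinfo.txt") as s, open("dest_cpuinfo.txt") as d:
    #     print(vmotion_compatible(s.read(), d.read()))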

Consolidated Backup

The VMware Consolidated Backup feature backs up Virtual Machines; the process is made simple, efficient and agile by performing the backup from proxy servers instead of the production ESX host server. The advantages of Consolidated Backup are listed below, followed by a simplified sketch of the proxy-based backup flow (VMware Consolidated Backup, 2007):

  • Online backup of Virtual Machine snapshot is possible.
  • Workload on ESX server hosts is reduced by performing backup from proxy servers.
  • A backup agent is not required on every Virtual Machine and backup is performed from centralized backup proxy.
  • VMFS snapshot technology can be leveraged to back up file-level snapshots.
  • VMware Consolidated Backup can also integrate with any other backup software present on the Virtual Machine.
  • Backup of Virtual Machines stored on an iSCSI SAN, NAS or the ESX server's local storage is supported.
  • Virtual Machine backup snapshots can be moved over the LAN, so SAN-based storage can be eliminated. Using SAN-based storage requires that the SAN device be mounted on the system and used as a local hard disk.
  • The VMware Consolidated Backup proxy server may be installed and run in a Virtual Machine without the requirement for a tape drive in the Virtual Machine. The backup may be stored on the SAN, NAS or another secondary storage connected to the LAN.
  • Consolidated Backup images can be restored with VMware Converter; an image may also be customized before restoration. VMware Converter is an add-on component of VirtualCenter and can also be used to create a VM template that serves as a Consolidated Backup image for provisioning multiple Virtual Machines on a virtual server.
  • Both the 64-bit and 32-bit editions of Windows Server 2003 are supported as the proxy server operating system.
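
The proxy-based flow described above can be sketched schematically as follows; the function is illustrative Python, the snapshot and release steps are placeholders, and none of the names correspond to VMware's actual API.

    import shutil
    from pathlib import Path


    def consolidated_backup(vm_name: str, vmdk_path: str, backup_target: str) -> Path:
        """Schematic proxy-side backup flow (hypothetical helper, illustration only).

        1. Ask the ESX host to take an online snapshot of the VM.
        2. The VLUN driver exposes the snapshot VMDK to the proxy as a virtual drive.
        3. Copy the VMDK to secondary storage (SAN, NAS or a LAN-attached disk).
        4. Release the snapshot so the VM continues without disruption.
        """
        print(f"Requesting snapshot of {vm_name}")            # step 1 (placeholder)
        source = Path(vmdk_path)                              # step 2: VMDK made visible
        destination = Path(backup_target) / f"{vm_name}.vmdk"
        shutil.copyfile(source, destination)                  # step 3: copy off the host
        print(f"Releasing snapshot of {vm_name}")             # step 4 (placeholder)
        return destination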

Consolidated Backup with iSCSI storage

Figure 18. Consolidated Backup with iSCSI SAN (VMware Consolidated Backup, 2007).

The iSCSI connections are implemented in two ways:

  • Software initiator – The iSCSI functionality is emulated in the software and a simple Ethernet adapter is used for network connectivity.
  • Hardware initiator – Dedicated iSCSI network adapters that act as HBAs are used for network connectivity.

In the figure, the ESX server host, the iSCSI SAN and the backup proxy server are all connected via software or hardware iSCSI initiators. With a software iSCSI initiator, the backup proxy server can itself be consolidated onto a virtual server as a Virtual Machine and will function just like a backup proxy server on a physical host. The green colour shows the Virtual Machine, its data file and the backup path to the storage device. The VLUN driver is installed on the Virtual Machine, or on the physical host, that runs the backup proxy server; the driver presents the VM3 data file (VMDK) as a virtual drive to the backup proxy server. The VMDK file is then stored on the secondary storage device connected to the backup proxy server.

LAN-based Data Mover

When the backup proxy server and the Virtual Machine are hosted on separate physical servers, the two servers may be connected via a LAN interface. The advantage of the LAN-based data mover technology is that, with Consolidated Backup, an online backup of a Virtual Machine snapshot can be taken even without block-based storage such as an iSCSI SAN or other shared storage. The other advantages of the LAN-based Consolidated Backup feature are (VMware Consolidated Backup, 2007):

  • The backup of Virtual Machine stored on a Network Attached Storage (NAS) device or local storage can be done without any disruption.
  • The existing infrastructure with storage and Ethernet connections can be leveraged to implement Consolidated Backup of Virtual Machine without a requirement for iSCSI SAN.
  • The backup proxy server can be implemented in a dedicated physical server or a Virtual Machine that is connected to the LAN.
  • The backup device for a Virtual Machine may be the same as the Virtual Machine's original storage, such as the shared local storage on the host virtual server. It is not necessary to install a backup agent on the Virtual Machine; the backup can be taken from a centralized backup proxy server connected to the LAN by mounting the local storage of the target Virtual Machine.
Figure 19. LAN-based Consolidated Backup (VMware Consolidated Backup, 2007).

In this figure the green colour highlights the Virtual Machine, its local storage and the backup path for the Consolidated Backup feature. The backup proxy server creates a snapshot VMDK of Virtual Machine VM2; this storage is accessible via the LAN interface that connects the ESX server to the LAN, so the VMDK file is mounted on the backup proxy server and copied to the secondary storage device. Because the Virtual Machine's local storage is accessed over the LAN, and resources such as CPU, network bandwidth and memory buffers are provided by the VM itself, the backup procedure may affect the performance of the other services provided by the VM. To ensure the best utilization of resources without any disruption to critical services, the backup activity must therefore be scheduled in off-peak hours, as in the sketch below.
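
As a trivial illustration of that scheduling rule, the sketch below defers backups outside an assumed maintenance window; the window boundaries are assumptions, not values taken from VMware documentation.

    from datetime import datetime

    OFF_PEAK_START = 1   # 01:00 local time -- assumed maintenance window start
    OFF_PEAK_END = 5     # 05:00 local time -- assumed maintenance window end


    def in_off_peak_window(now=None) -> bool:
        """Return True only inside the assumed off-peak window, so LAN-based
        backups do not compete with the VM's CPU, memory and network bandwidth."""
        hour = (now or datetime.now()).hour
        return OFF_PEAK_START <= hour < OFF_PEAK_END


    if in_off_peak_window():
        print("Starting Consolidated Backup over the LAN")
    else:
        print("Deferring backup until the off-peak window")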

Usability of Consolidated Backup

If a backup fails, Consolidated Backup cleans up the failed job and generates notifications. If the Consolidated Backup procedure is interrupted, it exits gracefully after cleaning up temporary files and snapshots. The Consolidated Backup proxy and VirtualCenter Management can be installed on the same host.

Resource Management

The ESX server provides management interfaces for the following software and hardware resources (ESX Server 2 Systems Management, 2004):

  • The Virtual Machine Guest OS and the Virtual Machine applications.
  • The configuration and management of multiple Virtual Machines on the virtual server.
  • The virtualization of hardware resources such as allocation of CPU, memory, storage and network bandwidth for the Virtual Machines.
  • Management of hardware resources on the virtual server.

The ESX server has three types of management interface that can be integrated with other enterprise system management software:

  1. The VMware Management Interface, a web-based graphical user interface for managing and monitoring the ESX server and the Virtual Machines running on top of it.
  2. Perl and COM APIs are supported to facilitate integration with the proprietary management application of an enterprise.
  3. ESX Server has a SNMP Management Information Base and an agent to provide configuration and performance information. The SNMP MIB and the agent can be integrated with HP OpenView or IBM Director.
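
As a minimal illustration of that kind of integration, the sketch below queries an SNMP agent using the standard net-snmp snmpget command from Python. The host name and community string are assumptions, and the OID used is the generic MIB-2 sysDescr object rather than a VMware-specific one.

    import subprocess

    ESX_HOST = "esx01.example.com"        # assumed host name
    COMMUNITY = "public"                  # assumed read-only community string
    SYS_DESCR_OID = "1.3.6.1.2.1.1.1.0"   # generic MIB-2 sysDescr, for illustration


    def snmp_get(oid: str) -> str:
        """Query one object from the host's SNMP agent via the net-snmp CLI."""
        result = subprocess.run(
            ["snmpget", "-v2c", "-c", COMMUNITY, ESX_HOST, oid],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()


    print(snmp_get(SYS_DESCR_OID))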

Research Method 3 – Security

The performance of a virtual server is affected by the following hardware-dependent issues:

Protection exception error

Critical system resources such as certain key instructions, memory locations and processor registers may carry access restrictions so that only the core OS and kernel have access privileges. In a virtual server, the Guest OS on a VM may not be given access to these resources; for this reason the VM may behave differently from an actual server and have some functional or behavioural limitations. It is important that the hypervisor or the hosting OS restricts the VM Guest OS's access to these critical system resources, because unauthorized access to them may cause a “protection error”.

Virtual address space access

The translation lookaside buffer (TLB) in the x86 processor is maintained in hardware. This makes VM virtual address space management difficult. In order to provide native-server-like resources for a VM, the Virtual Machine Monitor (VMM) may need to maintain a set of memory management tables for all VMs. Since the VMM has direct access to the hardware resources, it can perform the mapping between each VM's memory map and the hardware resources, keeping the VM oblivious to the presence of a layer between it and the hardware, as the sketch below illustrates.
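
A toy sketch of this double mapping: the Guest OS maps guest virtual pages to what it believes are physical pages, and the VMM maps those guest "physical" pages to real host frames. The page numbers are arbitrary illustrative values.

    # Guest OS view: guest virtual page -> guest "physical" page
    guest_page_table = {0x0000: 0x10, 0x0001: 0x11, 0x0002: 0x12}

    # VMM view: guest "physical" page -> host physical page (real hardware frame)
    vmm_mapping = {0x10: 0x8A, 0x11: 0x3C, 0x12: 0x77}


    def shadow_page_table(guest_pt, vmm_map):
        """Compose the two mappings so the hardware can be given direct
        guest-virtual -> host-physical translations, keeping the VM unaware
        of the virtualization layer underneath it."""
        return {gva: vmm_map[gpa] for gva, gpa in guest_pt.items()}


    print(shadow_page_table(guest_page_table, vmm_mapping))
    # {0: 138, 1: 60, 2: 119}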

VMware ESX server has the following security arrangements for the three main components: Virtual Machines, VMware Virtualization Layer and Service Console (ESX Server 2 Security White Paper, 2004):

  • All Virtual Machines are isolated from each other and run a dedicated Operating System image; since the Operating System is protected by VM isolation, access to OS resources is secured. If the kernel in one Virtual Machine crashes, this does not affect the other Virtual Machines, which continue to run uninterrupted.
  • A Virtual Machine can communicate with other hosts, servers and devices on the network through a virtual I/O interface device: either a dedicated NIC, as in the direct-assignment NIC configuration mode, or a virtual NIC, as in the shared physical architecture (refer to section 2.1.2.2.2). If security software such as a firewall or antivirus software is installed on a Virtual Machine, the Virtual Machine is protected from external faults in the same manner as it would be on a native physical server.
  • Because the Guest OS is included in the Virtual Machine and the virtualization layer lies beneath the Virtual Machines, the Virtual Machines are isolated from each other: even a user with system administrator privileges for one Virtual Machine cannot access another Virtual Machine (refer to figure 1). Access to a Virtual Machine requires system administrator privileges for that Virtual Machine, and login access to the system shell is restricted.
  • Fine-grained resource controls are used to secure the resource allocation for Virtual Machines and to avoid performance degradation or denial of service caused by resource consumption by other Virtual Machines sharing the resource. A minimum resource allocation is configured for a Virtual Machine to ensure business continuity without any disruption to the services it provides. The Virtual Machine kernel that mediates resource allocation does not provide any mechanism that would allow one Virtual Machine to access another Virtual Machine's resources.
  • VMware has a provision for configuring per VM network security policies such as information access privileges for NAS/SAN.
  • VMware products are audited by independent 3rd party firms who get access to the source code and support from VMware engineers.
  • The promiscuous mode configuration for network adapters is disabled by default. In promiscuous mode a guest network adapter receives all the packets on its network segment, including packets destined for other network adapters, and can therefore “sniff” this traffic, which can be a security hazard. Promiscuous mode may be used when configuring the virtual NIC in the shared physical architecture for virtual I/O. In the direct-assignment NIC architecture, one of the NICs may be configured in promiscuous mode in order to add an additional security layer for the other Virtual Machines: the Virtual Machine with this promiscuous-mode NIC can inspect and filter the packets destined for the other Virtual Machines on the server.
  • Troubleshooting information may be logged in files; however, the size of these files must be limited in order to avoid a denial-of-service attack caused by a storage shortage (a minimal example follows this list). The log files may also be saved on Storage Area Network or Network Attached Storage disks.
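
A minimal sketch of such a cap, using Python's standard RotatingFileHandler; the file name, size limit and backup count are assumptions.

    import logging
    from logging.handlers import RotatingFileHandler

    # Cap troubleshooting logs so a flood of messages cannot exhaust storage
    # and cause a denial of service; 5 MB per file and 3 backups are assumptions.
    handler = RotatingFileHandler(
        "vm-troubleshooting.log", maxBytes=5 * 1024 * 1024, backupCount=3
    )
    logger = logging.getLogger("vm")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    logger.info("Troubleshooting event recorded")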

Practical Part – Advantages of Server Virtualization

Server virtualization is "an abstraction technology that enables the division of the hardware resources of a given server into multiple execution environments and enables the consolidation of multiple servers and hardware resources into a single computing resource" (Virtualization Technology Overview, 2007).

The advantages of server virtualization are:

  • In a virtual server, more than one physical server box is consolidated into one virtual server box and their workloads are virtualized by sharing the physical resources of that box. This single high-performance computer system reduces the number of boxes in the infrastructure. Further, the operational and maintenance cost is reduced and the Return On Investment (ROI) is increased by adding Virtual Machines to the virtual server instead of adding a new box.
  • The user gets the illusion of running the service on the legacy hardware, i.e. a dedicated physical server; no modifications are required to the existing service software, hardware resources or network infrastructure when the service is migrated onto a high-performance virtualization-enabled machine. The virtual server for migration is selected by matching the service's hardware requirements with the availability of shared hardware resources on the virtual server.
  • The virtual server provides increased flexibility in the configuration and utilization of existing data-centre resources because it can run more than one Operating System and set of applications on a single hardware system simultaneously. Each Virtual Machine that runs on the virtual server has its own Operating System, called the Guest OS.
  • The Virtual Machines of the server are isolated from each other and can communicate either via the underlying virtualization layer or through an external network such as a LAN. Each VM may be assigned an IP address and a dedicated physical NIC or a shared virtual NIC to connect to the LAN. This secure configuration of VMs ensures that a fault in an application on one VM does not affect applications running on the other VMs, does not adversely impact system performance and does not increase system downtime. A standby VM may be configured on the same virtual server or on a separate virtual server for disaster recovery in order to ensure business continuity.
  • The system provisioning, high availability and system migration of VMs are made simpler with a single management interface for similar tasks. To achieve this, a task that comprises multiple applications, which would otherwise be configured on separate legacy servers, must be configured on VMs of the same virtual server.
  • Virtualization does not restrict a user from working simultaneously on different VMs of a virtual server, and more than one user can work simultaneously on the same or different VMs. If a VM crashes, it does not affect the users of the other VMs on the server.

In order to benefit from these advantages, the virtual server must provide performance equal to or higher than the total performance of all the legacy servers it replaces. Performance here covers processing power, memory, I/O ports, network interfaces, power and cooling requirements, cost and floor space.

On a blade server, dynamic load balancing can be achieved by moving an entire VM from an overloaded server blade to another blade with available resources. This dynamism is achieved by VMware VMotion's ability to move a VM at run-time based on application and system context data. Application context data may be the time of day when consumer demand, such as web site access or downloads, is expected to peak; system context data may be CPU utilization and memory buffer thresholds. A minimal sketch of such a trigger follows.
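
This sketch combines system context (CPU utilization, memory buffers) with application context (expected peak hours) to decide whether a VM should be moved; the thresholds and metric sources are assumptions and the decision logic is illustrative, not VMware's scheduling algorithm.

    from datetime import datetime

    CPU_THRESHOLD = 0.80        # assumed utilization threshold
    MEMORY_THRESHOLD = 0.85     # assumed memory-buffer threshold
    PEAK_HOURS = range(9, 18)   # assumed daily peak for the hosted web service


    def should_migrate(cpu_util: float, mem_util: float, now: datetime) -> bool:
        """Trigger a VMotion-style live migration when the blade is overloaded,
        or when a traffic peak is expected while resources are already tight."""
        overloaded = cpu_util > CPU_THRESHOLD or mem_util > MEMORY_THRESHOLD
        peak_expected = now.hour in PEAK_HOURS
        return overloaded or (peak_expected and cpu_util > 0.6)


    # The measurements would come from the virtual server's monitoring interface.
    print(should_migrate(cpu_util=0.9, mem_util=0.5, now=datetime.now()))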

Multiple servers required for a testing infrastructure can be replaced with a single blade in a blade server or with a virtual server. This replacement, together with virtualization technology, eliminates cabling requirements, reduces power and cooling costs and allows multiple users to test applications simultaneously on different VMs without interfering with other VMs or users.

More advantages of server virtualization are described below:

Server Consolidation and Containment

With the VM configuration on a server, more than one application is contained on a single server and the server resource utilization rate is increased from 5-15% to 60-80%. The consolidation of more than one OS platform onto a single server eliminates the server sprawl problem. The aim of virtualization is not only to enhance system performance but also to reduce the number of servers in the organization's infrastructure, i.e. to reduce the ‘box count’. This reduction in ‘box count’ also lowers recurring operational cost by reducing power cost and floor-space requirements.

The goal of server containment is unification; therefore new applications are installed on a VM on an existing server rather than on newly purchased hardware. The benefits of server consolidation and containment can be measured by Total Cost of Ownership (TCO), i.e. the one-time purchase and setup cost plus the recurring operational and maintenance cost (a small worked sketch follows). Effective server consolidation and containment require automated provisioning of resources such as CPU scheduling, memory allocation, Direct Memory Access for disk read/write operations, an iSCSI interface for SAN, network bandwidth allocation, high-availability configuration, and load balancing between applications on a VM or between VMs.
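
The TCO comparison can be made concrete with a small worked sketch; the cost figures below are entirely hypothetical and serve only to show how the comparison is structured.

    def tco(purchase_cost: float, annual_opex: float, years: int) -> float:
        """Total Cost of Ownership = one-time purchase and setup cost
        plus recurring operational and maintenance cost over the period."""
        return purchase_cost + annual_opex * years


    # Hypothetical figures for illustration only.
    legacy = 10 * tco(purchase_cost=4_000, annual_opex=1_500, years=3)   # ten legacy boxes
    virtual = tco(purchase_cost=25_000, annual_opex=6_000, years=3)      # one consolidated server

    print(f"Legacy servers: {legacy:,.0f}  Virtual server: {virtual:,.0f}")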

Test & Development Optimization

The virtual server can run more than one OS platform on a single hardware platform, and the Guest OSs that run on the Virtual Machines can differ. Using a virtual server in a test environment reduces the number of server boxes, which optimizes development and test activity by reducing administrative and maintenance cost. A developer or tester may use the VMs on a single server to develop and test applications that require multiple environments.

Business Continuity

System downtime is reduced by consolidating the activities of more than one server onto a single physical server. A hot standby of an active Virtual Machine can be configured on the same or a different virtual server. The hot standby may be configured to use the active VM's memory resources so that, in a fault or overload condition, the standby VM can take over the role of the active VM without affecting system performance. VMware VMotion technology can be used to migrate the VM to a different server under critical conditions. Critical conditions can be detected using pre-configured threshold values such as CPU utilization percentage, memory buffer usage, or latency caused by network congestion when user traffic to the service provided by the VM (e.g. access to a web site) is high.

The VMware High Availability feature may also be used to detect the system conditions under which either disaster recovery or migration is required. In the event of disaster recovery, the image of a single computer system may be copied to a new server to restore the system. VMware Consolidated Backup may be used to store the VM on secondary storage for restoration in case of faults such as memory corruption caused by an application fault or a security breach.

Enterprise Desktop

A virtual desktop can be connected to a backend virtual server with multiple VMs, and more than one desktop may be connected to the backend server. A workstation dedicated to a single user wastes resources because of idle time when the system is not in use and non-optimal use at other times. A virtual desktop provides optimized utilization of computing, storage and networking resources. The information security risks of a dedicated workstation, such as password loss or hard-disk theft, are also mitigated with a virtual desktop. End-user autonomy is preserved by including a security policy layer in the software enclosing each VM, e.g. user authentication and data access permissions on each VM. The storage hardware is not present at the virtual desktop site but is kept in more secure server rooms.

Case Study

WWF-UK

A Carbon Footprint is a measure of the impact our activities have on the environment in terms of the amount of greenhouse gases we produce. It is measured in units of carbon dioxide (Carbon Footprint, 2008).

World Wide Fund for Nature (WWF-UK) is a science-based conservation organization that addresses environmental issues such as species and habitat survival, climate change, environmental education and sustainable business. The WWF-UK office in Godalming, Surrey, has a staff of over 300. As a conservation organization, WWF-UK follows environmentally friendly business practices, and reducing the organization's carbon footprint was set as an objective as part of those practices. To meet this objective, the IT department decided to reduce its cooling requirements, which in turn required reducing the number of servers it used.

To ensure business continuity and disaster recovery with fewer servers, WWF-UK decided to install VMware server virtualization software on its servers. VMware Infrastructure 3 Enterprise was installed by SNS Ltd., a VMware VIP Enterprise Reseller. The VMware components installed included VMware ESX Server 3, VMware VirtualCenter 2 and VMware VMotion.

Result

WWF-UK achieved a reduced carbon footprint, a smaller hardware infrastructure and reliable business continuity with the VMware server virtualization solution. Networking software, the financial system, a contact database of millions of supporters and other HR applications were configured to run on the virtual servers.

Bell Canada

Bell Canada Enterprises (BCE) is Canada’s largest communications company. The main subsidiary, Bell Canada, provides local telephone, long distance, wireless communications, Internet access, data, satellite television and other services to residential and business customers through some 27 million customer connections (Bell Canada & CGI Group, 2006).

To meet the customer support requirements for its large customer base, Bell Canada took an initiative in October 2004 to avoid hardware attrition and reduce the TCO. Bell Canada set a goal to provide customized workstations for its 8,000 call agents in order to facilitate outsourcing and telecommuting.

“Bell Canada came to us with a project to provision, connect and securely deploy 400 desktop environments within three weeks,” says Martin Quigley, CGI senior technical consultant to Bell Canada (Bell Canada & CGI Group, 2006).

Because of space and security restrictions, CGI suggested that Bell Canada use a VMware virtualization solution to provide virtual desktops. Bell Canada had specific requirements due to the nature of its business:

Due to enhanced security requirements from Bell Canada's clients, it had to be possible to create a ‘lockdown environment’ for employees: because of file sharing, no programs could be installed on the desktop hard drive.

The VMware desktop virtualization solution could fulfil Bell Canada's requirements:

Low Total Cost of Ownership

  • Since the server count is reduced and a separate CPU unit is not required for a virtual desktop, TCO is reduced.
  • The reduced number of hardware units saves floor space as well as power and cooling cost.
  • Operational and maintenance overhead is reduced.
  • It eliminates the need for site visits by telecommuting employees or for couriering desktop hardware for upgrades or incidents.
  • It eliminates move/add/change (MAC) requests.

Telecommute support

A telecommuting employee's desktop can be controlled from the datacenter because the virtual desktop runs as a VM on the virtual server.

Optimized Development Environment

Multiple desktops can share the hardware resources of a virtual server. A separate Virtual Machine can be configured on the virtual server for each connected desktop; this VM emulates the physical desktop, and the user cannot tell the difference between the two configurations.

Centralized Simplified Management

Desktop control is centralized: a single VM image is configured and multiple copies can then be loaded onto the virtual server through VMware VirtualCenter.

Rapid Deployment

The centralized and simplified desktop management with VMware VirtualCenter reduces desktop configuration time and hence the time from order to deployment is considerably reduced.

Hardware Independence

Call agents who telecommute may use any desktop hardware when they are not using Bell Canada desktop hardware. The only requirement is that the Microsoft Windows XP Remote Desktop Protocol (RDP) client be installed on the agent's desktop in order to connect to the Bell Canada virtual server.

Seamless user experience

When employees move to a different LAN, only one network connection on the virtual server must be changed.

Disaster recovery and backup

Since all virtual desktop environments are identical, VMotion and Disaster Recovery can be used to migrate and restore VMs, respectively, to different servers.

Result

VMware provides a secure desktop environment that is used by Bell Canada's internal and external agents to serve its clients. Hardware attrition is eliminated and, with fewer boxes in the infrastructure, TCO is also reduced.

Conclusion

Server virtualization is a technology used to reduce the Total Cost of Ownership of hardware and software resources. A data-intensive enterprise requires computing, storage and networking resources. To gain a high Return On Investment, resources must be used optimally and operational and maintenance overheads such as floor space, power and cooling must be reduced. The study of the VMware solution for server virtualization shows that VMware provides the necessary software components, which can be integrated with hardware solutions to provide a secure and fault-tolerant virtualization solution.

If the enterprise hardware does not support hardware-assisted virtualization as provided by Intel and AMD, the VMware para-virtualization solution can be used to run complex applications that include instructions that cannot be executed directly on shared hardware resources.

The VMware ESX server enables the use of Network Attached Storage or a Storage Area Network for sharing external storage, eliminating the hard disk requirement on the virtual server. Desktop virtualization integrated with a virtual server that connects to NAS or SAN provides hardware modularity in which user interfaces, computing and storage can be physically separated without any adverse effect on quality of service. The advantage of this configuration is that large enterprises can centralize data storage while keeping computing local, ensuring data integrity.

The VMware fault-tolerance features also ensure that system downtime is reduced and resource utilization is optimized, because a Virtual Machine can be live-migrated with VMotion in case of a fault or when service requirements change, for example when reduced customer traffic on certain days of the week means that less computing resource is required. The ability to create an online backup of a Virtual Machine with Consolidated Backup mitigates the risk of damage to physical devices.

VMware provides web-based, API and SNMP interfaces for integration with enterprise system management software in order to provision the shared resources of a virtual server.

References

A Guide to Harvard Referencing. (2005) Leeds Metropolitan University. Web.

Bell Canada & CGI Group. (2006) VMware. Web.

Blade Server Technology Overview. (2008) Blade. Web.

. (2008) SearchDataCenter. Web.

Carbon Footprint. (2008). Web.

. (2008) Wikipedia. Web.

Disaster Recovery Solutions from VMware. (2007) VMware. Web.

Enhanced Virtualization on Intel Architecture-based Servers. (2006) Intel. Web.

ESX Server 2 Security White Paper. (2004) VMware. Web.

ESX Server 2 Systems Management. (2004) VMware. Web.

Intel Virtualization Technology for Directed I/O. (2007) Intel. Web.

Intel Virtualization Technology. (2006) Intel. Web.

. (2008) Wikipedia. Web.

Mano, M. Morris. (1999) Computer System Architecture. 3rd Ed. Prentice Hall.

. (2008) Wikipedia. Web.

PC Blades. (n.d.) ClearCube. Web.

. (2008) Wikipedia. Web.

Server Blade. (2005) SearchDataCenter. Web.

. (2008) Wikipedia. Web.

The Architecture of VMware ESX Server 3i. (2007) VMware. Web.

The Future of Ethernet I/O Virtualization is Here Today. (2006) Netxen. Web.

Thin Client. (2008) Wikipedia. Web.

Understanding Full Virtualization, Paravirtualization, and Hardware Assist. (2007) VMware. Web.

. (2006) VMware. Web.

Virtualization Solutions. (n.d.) ClearCube. Web.

Virtualization Technology Overview. (2007) Blade. Web.

VMware Consolidated Backup. (2007) VMware. Web.

VMware VMotion and CPU Compatibility. (2007) VMware. Web.

What is POSIX? (n.d.) LynuxWorks. Web.