Virtualization: Virtual Servers and Virtual Storage
Virtualization is one of the major software trends of 2007. This type of software allows multiple operating systems (OSs) to run side by side on a single server. Servers are computer systems that provide services to other systems, or clients. All major vendors are emphasizing virtualization, including IBM, Intel, Microsoft, and, especially, VMware.
Introduction
Virtualization supports virtual operating systems, servers, storage, and network functions. It works in conjunction with autonomic computing, in which an organization’s IT infrastructure manages itself according to perceived activity, and with utility computing, in which clients pay for application use and processing power on an as-needed basis.
The goal of virtualization, and of the trends that accompany it, is to centralize administrative processing and improve scalability and workload management. Virtualization reduces infrastructure expenditures through consolidation.
This article discusses the history of virtualization in IT, the components of current virtualization trends, such as virtual servers and virtual storage, and how those trends improve productivity.
Background
Virtualization began in the late 1960s and early 1970s with IBM’s release of the System/370, which contained virtual memory support. In 1972, the first virtual operating system, VS1, was released. Storage area networks followed in 1999, and the technology has continued evolving ever since.
The first IT technology to become virtualized was the data network. This was accomplished through network switches and routers, which provided the hardware to connect multiple computers within a local area network (LAN) and to forward and filter data packets. The same technology that virtualized telephone connections was used in this early incarnation of virtualization.
Here, instead of connections between telephones being established manually by operators, standardized switches were installed in central offices. Telephones compatible with the centralized switches could request a connection and connect automatically, without the need for human intervention. Data network virtualization worked in much the same way, allowing computers to connect with each other regardless of location or the destination computer’s physical connectivity, because the connection was virtual.
Storage became the next branch of IT to be virtualized. Common storage protocols and high-speed connectivity enabled storage switches that provided shared storage for a multitude of servers. Storage area networks (SANs) protect against failure: if one server fails to connect to the shared storage database, another connection is made automatically through a functioning server. Since the data is stored virtually over a network, any server that can make the connection can be used.
However, in this early manifestation, a server failure required the re-establishment of all database connections and requests, which was difficult to manage.
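To make the failover idea concrete, the following is a minimal Python sketch of a client that tries a list of storage gateways in turn and uses the first functioning one. The hostnames, the port number, and the connect_with_failover helper are hypothetical illustrations, not a real SAN API.

    # Hypothetical sketch of SAN-style failover; names and port are invented.
    import socket

    GATEWAYS = [("san-gw-1.example.com", 3260),
                ("san-gw-2.example.com", 3260),
                ("san-gw-3.example.com", 3260)]

    def connect_with_failover(gateways, timeout=2.0):
        """Try each storage gateway in turn; return the first live connection."""
        for host, port in gateways:
            try:
                sock = socket.create_connection((host, port), timeout=timeout)
                return sock  # connected through a functioning server
            except OSError:
                continue     # this server failed; fall through to the next one
        raise ConnectionError("no storage gateway reachable")

The point of the sketch is the loop: as long as any gateway answers, the client reconnects automatically, which is exactly the availability property described above.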
The current generation of virtualization focuses on data area networks (DANs). These work similarly to SANs and, indeed, require SAN technology to function. Application servers connect to individual IP (Internet Protocol) addresses and ports. DAN switches then route the connection to the appropriate server.
As with SANs, if a server fails, the connection is automatically rerouted to a functioning server. Routing is determined by optimal use of resources, processing priorities, server availability, and security.
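This routing policy can be illustrated with a short, hypothetical Python sketch. The Server fields and the (priority, load) scoring rule are assumptions standing in for whatever a real DAN switch uses to weigh availability, priorities, and resource use.

    # Illustrative DAN-style routing policy; server names and weights are assumed.
    from dataclasses import dataclass

    @dataclass
    class Server:
        name: str
        available: bool   # is the server up?
        load: float       # 0.0 (idle) to 1.0 (saturated)
        priority: int     # lower number = preferred for this workload

    def route(servers):
        """Pick the functioning server with the best (priority, load) score."""
        candidates = [s for s in servers if s.available]
        if not candidates:
            raise RuntimeError("no functioning server to route to")
        return min(candidates, key=lambda s: (s.priority, s.load))

    pool = [Server("app-1", True, 0.70, 1),
            Server("app-2", False, 0.10, 1),   # failed: skipped automatically
            Server("app-3", True, 0.35, 2)]
    print(route(pool).name)  # -> "app-1" (available, best priority)

Failed servers simply drop out of the candidate list, so rerouting around a failure is the same operation as ordinary load-aware routing.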
Uses
As the history of virtualization suggests, there are three main fields of virtualization: network virtualization, storage virtualization, and server virtualization. Network virtualization combines available network resources by dividing bandwidth into independent channels that can be assigned to particular servers in real time.
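As a toy illustration of this channel division, the following hypothetical Python sketch carves a 1,000 Mbps link into per-server slices; the function, server names, and figures are invented for the example.

    # Toy illustration of dividing a link's bandwidth into assignable channels.
    def allocate_channels(total_mbps, requests):
        """Assign each server a slice of the link, in request order."""
        remaining = total_mbps
        allocation = {}
        for server, mbps in requests:
            grant = min(mbps, remaining)  # never exceed remaining capacity
            allocation[server] = grant
            remaining -= grant
        return allocation

    print(allocate_channels(1000, [("web", 400), ("db", 400), ("backup", 400)]))
    # -> {'web': 400, 'db': 400, 'backup': 200}  (capacity caps the last grant)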
Storage virtualization collects storage from multiple network devices into what appears to be a single storage device managed by a central console. Finally, server virtualization hides server resources such as the number and identity of individual servers and operating systems to simplify the user’s experience.
Server resource details remain hidden, while resource sharing, utilization, and expansion capacity increase and adjust automatically behind the scenes.
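The storage-virtualization idea described above can be sketched in the same spirit: a hypothetical Python class that concatenates several backing devices into one logical address space, so a single console addresses many devices. The class and device names are invented for illustration.

    # Sketch of storage pooling: several devices presented as one logical disk.
    class VirtualDisk:
        """Concatenates fixed-size devices into a single address space."""
        def __init__(self, devices):
            self.devices = devices  # list of (name, size_in_blocks)

        def locate(self, block):
            """Map a logical block number to (device, local block number)."""
            for name, size in self.devices:
                if block < size:
                    return name, block
                block -= size
            raise IndexError("block beyond end of virtual disk")

    vdisk = VirtualDisk([("disk-a", 1000), ("disk-b", 2000)])
    print(vdisk.locate(1500))  # -> ('disk-b', 500): one console, many devices

The caller sees one 3,000-block device; which physical disk actually holds a block is resolved behind the scenes, mirroring the "single storage device managed by a central console" described above.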
Virtualization provides optimal utilization and efficiency of database resources by delivering high levels of availability and scalability. In addition, the ability to run several systems on one piece of high-performance hardware means that fewer machines run at higher rates of utilization.
Personnel expenditures are reduced as well, since fewer machines require fewer people to supervise and maintain them. The benefits of virtualization thus coincide with the benefits of consolidation.
Points of Interest
Virtualization trends for 2007 include the mandatory incorporation of SANs into an organization’s virtual architecture. In hardware, the trend is toward high-density, multi-core central processing units (CPUs); for example, in 2007 an 8-core host is expected to sustain 32 virtual machines. Disaster data recovery will also be a concern for virtual platform development, and virtual machine (VM) management will encourage vendors to develop data center automation tools. In application virtualization, Microsoft, Citrix, and Symantec continue in 2007 to develop products in this area that they introduced in 2006.