How Virtualization Works
Virtualization is a process in which multiple virtual operating systems run at the same time on one physical computer. It is a technique for sharing and partitioning physical resources so that a piece of hardware can be pushed much closer to its maximum capacity.
To understand the power of virtualization, it helps to start with Moore's Law, which predicted that computer processing power would double roughly every 18 months.
The practical consequence is that while computing power grows geometrically, the hardware needed to carry out the same computing tasks stays roughly constant, so a typical workload occupies an ever smaller share of a modern machine.
Because of this, an inexpensive 1U dual-core system can be transformed into multiple virtual servers, each running its own operating system. This is what makes virtualization such a powerful technology: it allows you to achieve far greater density within a single server.
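To make the density idea concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it, including the clock speed, the per-VM demand, and the 10% hypervisor overhead, is an illustrative assumption rather than a benchmark.

```python
# Back-of-the-envelope VM density estimate. All figures below are
# illustrative assumptions, not measurements.

HOST_CORES = 2               # e.g. the 1U dual-core box from the text
HOST_CLOCK_GHZ = 3.0         # assumed per-core clock speed
HYPERVISOR_OVERHEAD = 0.10   # assume ~10% capacity lost to virtualization
PER_VM_DEMAND_GHZ = 0.3      # assumed average CPU demand per VM

usable_ghz = HOST_CORES * HOST_CLOCK_GHZ * (1 - HYPERVISOR_OVERHEAD)
max_vms = int(usable_ghz // PER_VM_DEMAND_GHZ)

print(f"Usable capacity: {usable_ghz:.1f} GHz")
print(f"Rough VM ceiling at these assumptions: {max_vms} VMs")
```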
Virtualization does not increase the total amount of processing power; in fact, it decreases it slightly, because of the overhead the hypervisor needs in order to function. But because the modern $3,000 server is far more powerful than the much more expensive eight-socket servers of a few years back, individuals and organizations can exploit this power at an affordable cost.
The best way to exploit that power is to increase the number of logical operating systems the server hosts. This slashes the bulk of hardware acquisition and maintenance costs, which can translate into substantial savings for firms and organizations.
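As a quick illustration of where those savings come from, the sketch below compares acquisition and maintenance costs before and after consolidation. The prices, server counts, and 15% maintenance rate are made-up figures for the example.

```python
# Hypothetical consolidation savings: all prices and counts are
# made-up illustrations, not quotes.

standalone_servers = 12
price_per_standalone = 3_000       # USD, assumed
annual_maintenance_rate = 0.15     # assume 15% of purchase price per year

virtualization_hosts = 2           # same workloads, consolidated
price_per_host = 3_000             # USD, assumed

before = standalone_servers * price_per_standalone
after = virtualization_hosts * price_per_host
print(f"Acquisition: ${before:,} -> ${after:,} (saves ${before - after:,})")

maint_before = before * annual_maintenance_rate
maint_after = after * annual_maintenance_rate
print(f"Yearly maintenance: ${maint_before:,.0f} -> ${maint_after:,.0f}")
```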
However, the power, efficiency, and flexibility of virtualization technology mean little if one does not take the time to learn how to use it properly. Even more important is knowing when to make use of it.
When Virtualization Should Be Used
Virtualization works well for applications with small to moderate resource demands. It should not be used for extremely high-end applications.
A typical "high-end" situation is one where multiple servers must be clustered together just to meet performance requirements; layering virtualization's added overhead and complexity on top of that would only lower performance. Virtualization greatly increases what a single server can do, but it pays off when machines would otherwise sit partly idle, not when the workload already saturates several servers.
Many experts in the virtualization industry spend too much time chasing higher-than-average CPU utilization numbers and not enough time thinking about the responsiveness of the application.
Two rules of thumb are worth keeping in mind: never let the server exceed 50% CPU utilization during peak loads, and never let application response times exceed the thresholds set in the Service Level Agreement (SLA).
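A minimal monitoring sketch for both rules is shown below, assuming the third-party psutil package is available. The 50% ceiling comes from the rule of thumb above; the 500 ms SLA threshold and the probe_application() helper are hypothetical placeholders.

```python
import time
import psutil  # third-party: pip install psutil

CPU_CEILING = 50.0           # the 50% peak-load rule of thumb
SLA_RESPONSE_SECONDS = 0.5   # illustrative SLA threshold, not from the text

def probe_application() -> float:
    """Hypothetical placeholder: issue one request to the application
    and return its response time in seconds."""
    start = time.perf_counter()
    # ... send a real request here (HTTP call, DB query, etc.) ...
    return time.perf_counter() - start

cpu = psutil.cpu_percent(interval=1)  # sample CPU over one second
response = probe_application()

if cpu > CPU_CEILING:
    print(f"WARNING: CPU at {cpu:.0f}%, above the {CPU_CEILING:.0f}% ceiling")
if response > SLA_RESPONSE_SECONDS:
    print(f"WARNING: response time {response:.3f}s breaches the SLA")
```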
Many modern servers running in-house workloads use no more than 5% of their CPU. Consolidating multiple operating systems onto one server might raise the peak to around 50%, but the average may be far lower, because the peaks and valleys of the different operating systems tend to cancel each other out.
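The following synthetic simulation illustrates that cancellation effect. The traces are random, made-up utilization curves, not measurements; the point is only that the combined peak sits far below the sum of the individual peaks.

```python
import random

random.seed(1)

HOURS = 24
VMS = 8

def vm_trace(peak_hour: int) -> list[float]:
    # Idle near 5% CPU, with a brief spike toward 50% around peak_hour.
    return [5 + 45 * max(0.0, 1 - abs(h - peak_hour) / 2) for h in range(HOURS)]

# Give each simulated VM a busy period at a different time of day.
traces = [vm_trace(random.randrange(HOURS)) for _ in range(VMS)]
combined = [sum(t[h] for t in traces) for h in range(HOURS)]

print(f"Sum of individual peaks: {sum(max(t) for t in traces):.0f}% (naive sizing)")
print(f"Actual combined peak:    {max(combined):.0f}%")
print(f"Actual combined average: {sum(combined) / HOURS:.0f}%")
```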
Although the CPU overhead of most contemporary virtualization tools is quite low, the I/O overhead for networking tends to be high, and the same is true for storage. If a server has an extremely high storage I/O rate, it is better to run that workload on bare metal.
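These rules can be folded into a rough placement heuristic like the one sketched below. The CPU ceiling echoes the 50% rule above, while the 200 MB/s storage cut-off is purely an assumed stand-in for "extremely high."

```python
# Rough placement heuristic based on the rules of thumb above.
# The thresholds are illustrative assumptions, not vendor guidance.

def recommend_placement(peak_cpu_pct: float, storage_mb_s: float) -> str:
    if peak_cpu_pct > 50:
        return "bare metal: peak CPU already exceeds the 50% ceiling"
    if storage_mb_s > 200:  # assumed cut-off for 'extremely high' I/O
        return "bare metal: storage I/O too heavy given hypervisor overhead"
    return "virtualize: light CPU and modest I/O consolidate well"

print(recommend_placement(peak_cpu_pct=30, storage_mb_s=40))
print(recommend_placement(peak_cpu_pct=45, storage_mb_s=600))
```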
Additional Virtualization Facts
One issue that has come to light regarding virtualization is the "all your eggs in one basket" syndrome: some people make the mistake of placing all of their servers, even the critical ones, on a single physical server. This is extremely dangerous, and the best way to avoid the problem is to make sure that no service depends on only one server.
There are a number of different server types, including HTTP, FTP, DNS, DHCP, and RADIUS, and each of these can be spread across separate physical servers while still maintaining redundancy. These server types are simple to cluster because they are simple to switch over when one server fails.
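One simple safeguard is to scan the placement map for services whose replicas all share a host. The sketch below does exactly that; the service names and host assignments are a hypothetical example.

```python
# Sketch of an 'eggs in one basket' check: verify that no service has
# every replica on the same physical host. The map below is hypothetical.

from collections import defaultdict

replica_placement = {
    "dns-1": "host-a",    "dns-2": "host-b",
    "dhcp-1": "host-a",   "dhcp-2": "host-b",
    "radius-1": "host-a", "radius-2": "host-a",  # <- bad: one basket
}

hosts_per_service = defaultdict(set)
for replica, host in replica_placement.items():
    service = replica.rsplit("-", 1)[0]
    hosts_per_service[service].add(host)

for service, hosts in hosts_per_service.items():
    if len(hosts) < 2:
        print(f"WARNING: all replicas of {service} sit on {hosts.pop()}")
```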
When a physical server goes down or needs to be repaired, the virtual server on another physical host should be capable of taking over for it. Because the services straddle multiple physical servers, a single hardware failure does not have to take the critical services down. For more complex services such as MySQL or Oracle, clustering technologies can be used to keep the two logical servers synchronized.
This method is efficient because it reduces the downtime that would normally be experienced during the transition phase, which can otherwise take as long as five minutes. That delay is not caused by virtualization itself but by the complexity of clustering, which needs time to complete the transition.