
2022-09-27

How will CIOs deploy when the virtualization tide strikes?

Whether we like it or not, virtualization has begun to become an integral part of our servers, storage, and even PCs. In 2008, Microsoft's upcoming Windows Server 2008 will make virtualization a standard feature, various Linux distributions have integrated open-source virtualization products, and UNIX's virtual partitioning technology is even more mature. Alongside the boom in software virtualization, Intel and AMD have also launched x86 chips that support virtualization in hardware: Intel VT and AMD-V.

However, despite this trend, the data centers of domestic enterprises remain generally cautious about virtualization. HaoChen Media conducted a survey on this, and the results are shown in the figure below.

As the figure above shows, in 2007 only 14.7% of enterprises had applied virtualization, and most of those deployed it only on non-core IT systems; very few enterprises run virtualization on their most critical IT. Enterprises with no virtualization plan at all still account for 50.4%, indicating that the majority of enterprises continue to keep their distance from virtualization, although 20% to 30% of enterprises are planning a deployment. We must admit that virtualization is not suitable for every user. If an enterprise's data center resource utilization already exceeds 70%, there is no need to deploy virtualization: virtualization can only improve the utilization of existing resources, it cannot create new hardware capacity, and the virtualization layer itself consumes 10% to 20% of resources. If, on the other hand, resource utilization is below 30% and the IT environment is not too complex, virtualization is worth considering. As for the enterprises that do not plan to deploy virtualization, as shown in the figure below, they have their own reasons.

Data source: HaoChen Media, November 2007
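To make the rule of thumb above concrete, here is a minimal sketch in Python. The 70% and 30% utilization thresholds and the 10% to 20% overhead figure come from the paragraph above; the function name and the 15% overhead constant are my own illustrative assumptions, not a vendor sizing formula.

    def virtualization_advisable(cpu_utilization, complex_environment):
        """cpu_utilization: current average data center utilization, 0.0-1.0."""
        overhead = 0.15  # assume the virtualization layer itself consumes roughly 10-20% of resources
        if cpu_utilization > 0.70:
            return "not recommended: little capacity is freed once the virtualization overhead is added"
        if cpu_utilization < 0.30 and not complex_environment:
            return "worth evaluating: consolidation can raise the utilization of idle hardware"
        return "borderline: weigh consolidation savings against added complexity"

    print(virtualization_advisable(0.18, complex_environment=False))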

(I) Security concerns about virtualization:

We found that the most common concern in China is about the security of virtualization, cited by 41.6% of respondents.

The first security concern:

The first security concern stems from users' lack of confidence in the x86 chip itself. In fact, I understand this concern very well: these users may have run RISC chips before and then migrated to the CISC platform (or their environment may have been built on CISC from the start). The primary reason for choosing CISC is that an IT environment built on it is cheaper, but for CIOs who have just moved to the new platform, its reliability still needs time to prove itself. Joking aside, the CIOs and CTOs of today's major enterprises may well have been the campus computer enthusiasts of ten years ago; they may still remember overclocking Intel and AMD chips and the frequent crashes that followed, while admiring the SPARC and POWER series chips that were out of reach. We also have to admit that x86 chips were somewhat late in entering the server market: the earlier Pentium Pro had very limited impact there, and Intel's first server-oriented Xeon processor was not released until 1998. In the first five years after Xeon's release its influence was still relatively limited, but the five years since then have been years of rapid growth for Intel Xeon and AMD Opteron, and x86 chips have earned more and more trust with their stable performance.

The world's TOP500 supercomputers have always been a weathervane for the IT industry. As shown in the figure above, according to the TOP500 statistics of November 2007, most of the world's strongest 500 computers used Intel and AMD products. Specifically, Xeon 51xx (Woodcrest core) systems account for 215 of the TOP500, or 43.0%; Xeon 53xx (Clovertown) reaches 102 systems, or 20.4%; and dual-core AMD Opteron accounts for 69 systems, or 13.8%.

The following table lists the top ten systems in the TOP500.

We found that the systems ranked third, fourth, and fifth use Xeon, while the sixth uses AMD Opteron. To be fair, if the New Mexico Computing Applications Center, ranked third in the world, went down or suffered a CPU failure, the loss would certainly be more serious than that of any domestic enterprise. Since these TOP500 systems trust the x86 CPUs of Intel and AMD, what reason do we have to be suspicious of them? The Intel Xeon Woodcrest dual-core processor was also used by the Geophysical Exploration Research Institute of Sinopec's Shengli Oilfield, which ranked first in China's HPC Top100 in November 2007. It can be seen that we do not have to worry about the x86 chips of Intel and AMD: ten years ago, or even five years ago, such worries were reasonable, but their capabilities have long since changed, and we can trust x86.

The second security concern:

Some CIOs may say at this point: actually, I am not particularly worried about the x86 chips; my worry stems from the high resource utilization after deploying virtualization. Indeed, after virtualization multiple key applications or data sets are consolidated onto the same server. If the cost of an outage before virtualization was 100,000, the cost of an outage after virtualization may be as high as 500,000, which makes CIOs and CTOs fear the disastrous consequences that downtime could bring.
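A toy calculation makes the concern concrete. The 100,000 figure comes from the example above; the assumption that five workloads share one host after consolidation is mine, purely for illustration.

    cost_per_app_per_hour = 100000   # downtime cost of one application (example figure from the text)
    apps_per_host_before = 1
    apps_per_host_after = 5          # assumed: five workloads share one server after consolidation
    outage_hours = 1

    print("Cost of one host outage before:", cost_per_app_per_hour * apps_per_host_before * outage_hours)
    print("Cost of one host outage after: ", cost_per_app_per_hour * apps_per_host_after * outage_hours)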

The sharp increase in downtime cost is indeed another reason IT managers hold back from virtualization. Let me give an example. CIOs understand SaaS well, and I believe many have heard of 37signals, one of the pioneers of SaaS. In short, 37signals takes over tasks that would otherwise run in an enterprise's own data center, so users can simply use project management and other services through a browser. It is fair to say that 37signals cares more about its data center than most domestic enterprises do; after all, the data center is everything 37signals depends on for survival. As of November 2007, the company had 30 servers with a total of 100 CPUs and 200 GB of memory, and stored more than 5.9 TB of user-uploaded files (part of the storage is also outsourced to Amazon S3). Under these circumstances, after careful evaluation, the company decided to deploy virtualization across the board starting in November 2007, reducing 30 servers to 16, with the overall deployment expected to finish in February 2008. It is now mid-January, and all signs indicate that the company's virtualization is progressing smoothly: none of the data belonging to 37signals' thousands of customer companies has been damaged or lost, and the overall system remains stable.

In fact, as the 37signals example shows, if handled properly we do not have to worry about downtime losses caused by high resource utilization. Virtualization software vendors have already considered this need. As shown in the figure below, VMware HA, for example, can continuously monitor the health of all physical servers in a resource pool, ensure that enough spare capacity is reserved in the pool, and restart virtual machines on other servers when a server fails.
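The failover idea behind such an HA feature can be sketched in a few lines. The data structures and function below are simplified inventions of mine for illustration; they are not VMware's actual API or algorithm.

    def failover(hosts):
        """hosts: dict of host name -> {"healthy": bool, "capacity": int, "vms": [vm names]}."""
        for name, host in hosts.items():
            if host["healthy"]:
                continue
            for vm in list(host["vms"]):
                # pick any healthy host that still has spare capacity and restart the VM there
                target = next((h for h in hosts.values()
                               if h["healthy"] and len(h["vms"]) < h["capacity"]), None)
                if target is not None:
                    host["vms"].remove(vm)
                    target["vms"].append(vm)
                    print("restarted", vm, "from failed host", name)
                else:
                    print("ALERT: no spare capacity reserved for", vm)

    hosts = {
        "host1": {"healthy": False, "capacity": 4, "vms": ["erp", "mail"]},
        "host2": {"healthy": True,  "capacity": 4, "vms": ["crm"]},
    }
    failover(hosts)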

In fact, today's x86 virtualization software is very capable after roughly ten years of development, and even a hardware failure will not stop our virtual machines from running. Furthermore, if a disaster strikes the entire data center environment, then as long as our key data is backed up to disk and tape, recovery time objectives (RTO) and recovery point objectives (RPO) are defined, and disk redundancy is in place, the data is no worse off with virtualization than without it.
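For readers less familiar with the two objectives just mentioned, the small illustration below shows how they are checked in practice; all of the numbers are hypothetical.

    backup_interval_hours = 4   # how often backups are taken (bounds the data that can be lost)
    restore_time_hours = 2      # how long a restore from disk or tape takes

    rpo_hours = 6               # recovery point objective: at most 6 hours of data may be lost
    rto_hours = 3               # recovery time objective: service must be back within 3 hours

    print("RPO met:", backup_interval_hours <= rpo_hours)
    print("RTO met:", restore_time_hours <= rto_hours)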

Our concerns about x86 operating systems may also make us distrust x86 virtualization: Linux and Windows have always felt less stable than UNIX. I believe every user of early Windows versions has plenty of blue-screen memories, and at that time our understanding of Linux seemed limited to the hobby of the open-source community. But these views have become history; in fact, the glory days of UNIX have passed. Look at the table below, which shows the operating system share of the TOP500 in 1997: UNIX accounted for 98.8%, and the remaining 1.2% belonged not to Linux or Windows but to BSD. After all, Linux and Windows had not been around for long in 1997.

But this situation has reversed over the past 10 years. The following table shows the TOP500 of November 2007: UNIX has only 6% of the seats left, while Linux accounts for 85.2%. Since the TOP500, the weathervane of the computer industry, trusts Linux so much, what reason do we have to doubt it? After 10 years of development, Linux has long ceased to be merely a hobby of the open-source community; it has grown into a software industry with hundreds of commercial vendors.

As for Microsoft Windows, its server market share has kept growing. According to IDC's statistics for the second quarter of 2007, Windows Server's share of the server market reached 38.2%, surpassing UNIX by nearly 7 percentage points and becoming the most widely deployed operating system in the server market. Many world-class financial groups have migrated their data from UNIX to Windows over the past 10 years.

In fact, x86 virtualization can stand comparison with UNIX virtualization. UNIX virtualization has been in use for decades, and system resource utilization on those platforms has always been very high. We used to agree that UNIX systems are stable, but Linux and Windows now show the same strength, so why should we not accept virtualization on these platforms? It follows that the "don't put all your eggs in one basket" objection does not hold for x86 virtualization. Windows and Linux, and x86 virtualization itself, have been gaining more and more users, which shows convincingly that our concerns about x86 chips and operating system stability are unnecessary.

(II) Understanding and promotion of virtualization:

Doubts about the maturity of virtualization technology accounted for 33.2% in this survey. This is easy to understand: these respondents belong to a group that does not know much about virtualization, and they are probably not IT professionals. Accepting a new concept always takes time. In fact, x86 virtualization has been developing for 10 years. Perhaps in the early years the technology was not comprehensive enough, but just as with Windows and Linux, we cannot judge virtualization by how it looked 10 years ago. Take VMware, the largest virtualization vendor, as an example: its current product line covers every aspect of the data center.

As shown in the figure above, VMware's virtualization products have advanced from version 1.0 in 1998 to version 3.0 in 2006, and have just moved from 3.0 to 3.5. Its add-on components are also very comprehensive: the VI3 package includes as many as seven of them, which can not only consolidate servers but also offer preliminary storage consolidation. Of course, this is not an advertisement for VMware; Xen and Microsoft Virtual Server are also very good choices. My point is simply that, just as today's Windows Server 2003 R2 does not blue-screen from time to time the way Windows 95 did, virtualization has become very mature, and many large enterprises have already deployed it.

In fact, CIOs are computer professionals who have long been following IT trends of every kind, and I believe they may already understand virtualization very well. The problem is that everyone in the enterprise other than the CIO knows little about virtualization.
