Blade Systems and Modular Computing

Copyright © 2003, John Harker
Principal Technical Marketing Consultant
ZNA Communications

At the recent Intel Developer Forum (IDF) in San Jose, Intel announced its entry into the blade server market with the introduction of a dual Xeon processor blade co-developed with IBM. Along with the system, called the Server Compute Blade SBXL52, came a roadmap of future two- and four-processor blade systems, a proposed standard for BIOS extensions useful for single-point management of rack and blade systems, and a bundled offering of Veritas distributed system management software.

Intel also renewed its commitment to what it calls "Enterprise Modular Computing": systems built from scalable, network-based compute and storage resources, with automated policy-based administration, common management interface standards, and system virtualization capabilities. It is clear that Intel sees the standardization of modular computing as important to making blade servers profitable volume sellers.

What is Modular Computing?

Intel is focusing on a new computer architecture for enterprise servers that has emerged, centered on modular systems (Ref.: Gartner's George Weiss, "Future of the Server", http://www4.gartner.com/pages/story.php.id.9288.s.8.jsp). This emerging architecture is being driven by the evolution of the data center toward racks of a variety of computers and storage units instead of one or more proprietary UNIX systems or mainframes. That shift is in turn driving rapid advances in high-speed networking, Internet and Web services interfaces, blade servers, and clustered and grid computing. Soon all midrange and higher servers will follow this modular systems model, and a standard "server" will include multiple computers with shared storage.

A computer architecture incorporates the nature and elements of the system and the way they are assembled. Traditionally a computer system has included a CPU, memory, I/O devices, and storage. It is assumed that there is an operating system that allows programs to run in that environment. In the past, it was also assumed that there was a great deal of overhead in accessing other systems or storage available via the network. This helped drive a single-system-centric model. But that is changing.

Now, the evolving data center increasingly uses multiple systems made up of a set of standard, modular building blocks: network edge devices, load-balancing devices, application servers and high-performance, hardened database systems. These are deployed in three-tier configurations, with a front end of network edge and load-balancing devices, a second tier of banks of application servers running in parallel, and a third tier of back-end high-performance database servers. Much of the work being done consists of network transactions occurring across multiple systems.
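
To make the tiering concrete, here is a minimal sketch in Python of how a front-end dispatcher might spread incoming requests across a bank of application servers. The host names and request path are hypothetical placeholders; in practice each application server would in turn query the back-end database tier.

    # Round-robin dispatch across a bank of application servers (tier two).
    # Host names are hypothetical; each app server would query the database tier.
    import itertools
    import urllib.request

    APP_SERVERS = ["http://app01:8080", "http://app02:8080", "http://app03:8080"]
    _rotation = itertools.cycle(APP_SERVERS)

    def dispatch(path):
        """Forward one incoming request to the next application server in turn."""
        target = next(_rotation) + path
        with urllib.request.urlopen(target) as response:
            return response.read()

    # e.g. dispatch("/catalog/item/42"): the chosen app server handles the
    # business logic and reaches back to the shared database tier for data.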

Multiple rack-mounted PCs and blade systems are key to this building-block idea. The customer goal is to use processors as a resource, just like storage: if more power is needed, bring more processors to bear; if the workload changes, shift processors from one application to another. Increasingly, IT professionals are using multiple PCs in a rack environment to deploy solutions with these characteristics. And industry leaders such as Intel and Microsoft are reacting, helping define extensions to PC architecture standards, such as the BIOS standard, in order to make processors manageable as a modular resource.

High-speed networking, once the domain of custom switches and Fibre Channel, is rapidly advancing and becoming a commodity-priced item. Gigabit Ethernet is now common and 10 Gigabit Ethernet is becoming available. At those speeds, and with the economies of volume manufacturing, interconnecting discrete computers at bus-level speeds is becoming practical and affordable.

Storage Area Networks (SANs) are changing the way storage is viewed as storage systems move from being a system-attached resource to a high-speed network-attached resource. This removes the performance penalty formerly associated with remote storage. SAN adoption is being driven by more efficient use of disk capacity, along with easier management and control of stored data.

How do applications run on a Modular System?

There are three evolving areas of distributed systems software that enable modular computing: Web services, grid/clustered computing and distributed provisioning and management services. They are complementary in that they can be used together to solve the whole puzzle.

Web services decouple an application from any single operating system environment. Applications written to a Web services model can transparently access any other portion of the application via TCP/IP, no matter what computer or operating system that portion runs on.
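
As a rough illustration of that decoupling, the sketch below makes a remote call over HTTP using XML-RPC from Python's standard library. The endpoint URL and the check_stock method are hypothetical; the service behind them could run on any hardware or operating system, since only the interface contract matters.

    # Call a remote service over HTTP/XML; the caller neither knows nor cares
    # what platform serves the endpoint. URL and method name are hypothetical.
    import xmlrpc.client

    inventory = xmlrpc.client.ServerProxy("http://app01:8000/RPC2")
    count = inventory.check_stock("part-1234")
    print(count)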

Grids, clusters and virtual operating systems enable processor power to be scaled to the application on demand. They also facilitate building fault-tolerant applications for improved reliability. After years as a proprietary high-end offering from OEMs such as IBM, DEC, Tandem, HP, and Siemens, clustering is becoming standards-based and open-system oriented. Grid computing offers the same advantages as clustering, but across multiple operating systems and hardware platforms and without the need for tightly integrated system software. Both complement Web services and are important elements of solutions that scale applications and make them more reliable.
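
The scaling idea is easy to picture in miniature. The sketch below fans independent tasks out to a pool of workers using Python's standard library; on a real cluster or grid the pool would span many machines and operating systems, but here a local process pool stands in for it, and render_report is a hypothetical unit of application work.

    # Split a workload into independent tasks and farm them out to workers.
    # A local process pool stands in for a cluster of machines.
    from concurrent.futures import ProcessPoolExecutor

    def render_report(region):
        """Hypothetical unit of application work."""
        return "report for " + region

    if __name__ == "__main__":
        regions = ["east", "west", "north", "south"]
        with ProcessPoolExecutor() as pool:
            for result in pool.map(render_report, regions):
                print(result)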

Distributed provisioning and management software allows software installation and ongoing control of multiple servers and/or the individual blades in a blade server. Interestingly enough, blade servers are driving systems software vendors to improve their offerings since management and provisioning are fundamental requirements for blade server customers.
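
In its simplest form, provisioning means pushing the same installation step to every blade in a chassis and recording the outcome. The sketch below does that over SSH; the blade host names and the install script path are hypothetical placeholders for whatever a real provisioning tool would manage.

    # Push one provisioning command to each blade over SSH and report status.
    # Blade host names and the remote install script are hypothetical.
    import subprocess

    BLADES = ["blade%02d" % n for n in range(1, 15)]  # e.g. a 14-blade chassis

    def provision(host, command):
        """Run a provisioning command on one blade; return (host, exit code)."""
        result = subprocess.run(["ssh", host, command], capture_output=True, text=True)
        return host, result.returncode

    for host, code in (provision(h, "/opt/deploy/install-webapp.sh") for h in BLADES):
        print(host, "ok" if code == 0 else "failed")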

All of these elements are converging to define the new modular computing architecture, in which the standard system environment consists of multiple computers of a variety of types with high-performance, networked access to a unified data store. In this converged environment, applications run over Internet, Web services, and grid computing interfaces that together make up a new 'operating system' providing a uniform view of, and access to, all of the systems in the server. Products implementing these elements will help each other establish themselves in the market and greatly reduce the cost of ownership for customers of advanced server systems.

Why the interest in Modular Computing?

Intel and the PC OEMs have long been interested in this type of higher-margin, high-end system. Long stymied by the limits of even SMP microprocessor-based systems, PC architectures have, with the advent of modular, open, standards-based distributed systems, vaulted into the lead over specialized proprietary architectures in high-performance computing, and will soon do the same in corporate computing via cluster- and grid-based foundation applications. Hence their interest in modular computing standards; from Intel's view, the more that is standardized, the better.

# # # #