Cloud computing is a term used to describe the many networks of infrastructure (server, storage, networking, and other I/O subsystems) that make up the data center. The faster, more efficient processors available to server architects drive the need for a better interconnect between the I/O subsystem and the processor. PCI Express is already a key component in today's high-performance processor architectures, so it is no surprise to see PCI Express as the interconnect of choice between the processor and the I/O, as is the case inside the box in servers and storage subsystems. Outside the box, PCI Express offers significant benefits in the clustering environment that makes up the "cloud": low-latency connections between the applications running on the servers have a direct impact on overall cloud performance. The server and its I/O are not the only roles for PCI Express in the cloud, however. Its versatility has allowed PCI Express to penetrate and embed itself in nearly every component of the cloud infrastructure, from the data plane in the servers and I/O subsystems where applications actually run, to the control plane of the routers and switches whose function is to interconnect systems and provide a gateway to the cloud from the outside world.
High availability and redundancy are key requirements in the cloud environment. Downtime and loss of data are detrimental to any business that relies on the cloud as a means of providing services to the end user. PCI Express provides standard Hot-Plug support, which allows modules to be replaced without bringing the system down. Moreover, PCI Express switches available from PLX provide support for Non-Transparency as well as Multi-Root features, which are industry-proven to provide the right levels of redundancy and failover in enterprise-class applications.
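To make the failover role of Non-Transparency concrete, the sketch below models how two hosts connected through a PCIe Non-Transparent (NT) bridge port can run a heartbeat over the port's scratchpad and doorbell registers. This is an illustration, not a PLX driver: real NT ports expose these registers through a BAR and signal the peer with an interrupt, while here they are plain Python attributes so the example runs anywhere; all names are hypothetical.

```python
# Hypothetical model of an NT bridge port's register block. In hardware the
# scratchpad and doorbell registers are visible from both sides of the bridge.
class NTPort:
    def __init__(self):
        self.scratchpad = [0] * 8  # mailbox values shared between the two hosts
        self.doorbell = 0          # bits set by one host to interrupt the other

def heartbeat_send(nt: NTPort, seq: int) -> None:
    """Active host publishes a liveness counter and rings the doorbell."""
    nt.scratchpad[0] = seq
    nt.doorbell |= 1 << 0          # in hardware this would raise an MSI on the peer

def heartbeat_alive(nt: NTPort, last_seen: int) -> bool:
    """Standby host checks that the counter advanced since its last poll."""
    return nt.scratchpad[0] != last_seen

nt = NTPort()
heartbeat_send(nt, 1)
print(heartbeat_alive(nt, 0))  # True: the active host is making progress
heartbeat_send(nt, 1)          # counter stalls: the active host has failed
print(heartbeat_alive(nt, 1))  # False: the standby host can initiate failover
```

The standby host polls (or waits for the doorbell interrupt) and takes over the workload when the counter stops advancing, which is the essence of NT-based failover in clustered systems.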
Scalable switch fabrics make the cloud usage model a reality. The ability to increase the density of the compute and I/O resources in the data center is paramount in scaling the number of applications and users subscribed to a given data center. The physical interface of PCI Express is also well suited for out-of-the-box cabled applications over various transport media (copper and optical), providing a scalable path to increase the number of systems to suit business and application needs. The Non-Transparent function provides the mechanisms required in failover systems and clustering environments.
For the backplane, PLX offers a variety of PCIe switches ranging from 48 to 96 lanes, providing the connectivity between server nodes and/or I/O subsystems. PCI Express is already in the box, implemented directly on the processor. Consequently, PCIe benefits greatly from economies of scale, since any I/O device interfacing to the processor needs to provide a PCIe connection. The result is industry-standard components of high quality and low cost.
Efficient use of data center resources is key to maximizing the return on capital expenditures. I/O virtualization is an important component, as it allows a single I/O device to be used by multiple hosts and applications, eliminating device inefficiencies such as idle time. An additional benefit of an I/O virtualization approach is the consolidation of I/O devices, which results in lower power consumption and consequently lower operational costs. PCI Express switches available from PLX offer the lowest power consumption without sacrificing performance. Furthermore, PLX has solutions available today that enable I/O virtualization.
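The utilization argument behind I/O virtualization can be shown with a small back-of-the-envelope sketch: when each host owns a dedicated device, every device sits mostly idle, whereas one shared device idles only when all hosts are idle. The numbers below are hypothetical, chosen only to illustrate the consolidation effect.

```python
# Illustrative utilization math for I/O consolidation (hypothetical figures).

def dedicated_utilization(busy_fractions):
    """Average utilization when each host has its own device."""
    return sum(busy_fractions) / len(busy_fractions)

def shared_utilization(busy_fractions):
    """Utilization of a single device shared by all hosts (capped at 100%)."""
    return min(1.0, sum(busy_fractions))

hosts = [0.10, 0.15, 0.20, 0.05]       # each host keeps its own NIC 5-20% busy
print(dedicated_utilization(hosts))    # 0.125 -> four NICs, each mostly idle
print(shared_utilization(hosts))       # 0.5   -> one shared NIC, far less idle time
```

Consolidating four lightly loaded devices into one also removes the standby power of the three eliminated devices, which is the operational-cost saving the paragraph above refers to.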
PLX is at the leading edge of PCI Express switch development, enabling new ways to enhance system performance by embedding the DMA function for data movement between processors and between CPUs and I/O. As the leading vendor of PCI Express switches and bridges, PLX is at the heart of the data center.
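The benefit of a switch-embedded DMA function is that the CPU only builds small descriptors while the engine performs the bulk copies. The sketch below models that descriptor-ring pattern in simulated memory; the structures and field names are hypothetical, not the programming model of any specific PLX device.

```python
# Illustrative descriptor-based DMA engine (all structures hypothetical).
from dataclasses import dataclass

@dataclass
class Descriptor:
    src: int     # source offset in the (simulated) address space
    dst: int     # destination offset
    length: int  # number of bytes to move

class DMAEngine:
    def __init__(self, memory: bytearray):
        self.memory = memory
        self.ring = []           # descriptor ring filled by the CPU

    def submit(self, desc: Descriptor) -> None:
        self.ring.append(desc)   # cheap for the CPU: a few register writes

    def run(self) -> int:
        """Process all pending descriptors; return total bytes moved."""
        moved = 0
        for d in self.ring:
            # The engine, not the CPU, performs the copy.
            self.memory[d.dst:d.dst + d.length] = self.memory[d.src:d.src + d.length]
            moved += d.length
        self.ring.clear()
        return moved

mem = bytearray(b"hello-dma" + bytes(16))
dma = DMAEngine(mem)
dma.submit(Descriptor(src=0, dst=9, length=9))  # queue the copy, no CPU loop
print(dma.run())                                # 9 bytes moved by the engine
print(bytes(mem[9:18]))                         # b'hello-dma'
```

In hardware the same pattern moves data directly between a CPU's memory and another CPU or I/O device across the switch, freeing processor cycles for the applications themselves.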