To understand the context and motivation of SRSD, it is useful to review the evolution of computing architectures over the last several decades. From the late 1990s to the early 2000s, the best practice for customers building data centers was to select best-of-breed proprietary hardware and software for each layer of the stack, then hire consultants to connect the pieces to support a small number of enterprise applications. It was a great time for technology companies, especially Cisco, EMC, Sun Microsystems, and Oracle, collectively dubbed the Four Horsemen of the Internet.

Then the internet bubble burst. In the aftermath, in order to survive, technology companies offered more mid-range and low-end products to make technology affordable to customers. Some began offering integrated IT stack solutions such as Vblocks and FlexPods to streamline deployments and squeeze further costs out of IT infrastructure. Hyperconvergence, an architecture championed by Nutanix, started to gain traction among enterprise customers eager to get their hands on an elastic infrastructure without carrying a significant payroll of infrastructure specialists. Open source software and standard x86 hardware became a scale-out alternative to proprietary scale-up architectures.

Over the past few years, cloud, social, mobile, and IoT have fueled the rise of hyperscale computing giants. To further improve data center efficiency and cost savings, the Open Compute Project (OCP) was founded in 2011, with the mission of rethinking data center hardware design through the sharing of Intellectual Property (IP) and technical specifications.
That brings us to Rack Scale Design (RSD) today. For people who do not live and breathe RSD, its messaging can be daunting. Intel® often describes RSD in terms of three elements: disaggregation of physical resources (e.g., compute, networking, and storage); silicon photonics as a high-speed, low-latency rack interconnect; and an open RESTful API called Redfish* that is designed to replace IPMI. Redfish* provides secure hardware management at rack scale and enables interoperability among RSD offerings from different vendors. Acting as a south-bound API for the underlying hardware and a north-bound API for plugging into existing application environments such as OpenStack, Redfish*, in my opinion, is the heart and soul of RSD. It allows data center operators to deploy RSD-compliant solutions today on existing rack mount hardware, without requiring new technologies such as silicon photonics.
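To make the API a little more concrete, here is a minimal sketch of how a management tool might discover the systems in a rack through Redfish*. The JSON payloads below are illustrative samples (field names follow the DMTF Redfish schema, but the contents are invented), and `fake_fetch` stands in for the authenticated HTTPS GETs a real client would issue against a BMC or Pod Manager:

```python
import json

# Illustrative Redfish payloads (sample data, not from a live BMC);
# the service root lives at /redfish/v1/ and links to collections
# such as Systems via "@odata.id" references.
SERVICE_ROOT = json.loads("""
{
  "@odata.id": "/redfish/v1/",
  "RedfishVersion": "1.0.0",
  "Systems": { "@odata.id": "/redfish/v1/Systems" }
}
""")

SYSTEMS_COLLECTION = json.loads("""
{
  "@odata.id": "/redfish/v1/Systems",
  "Members": [
    { "@odata.id": "/redfish/v1/Systems/1" },
    { "@odata.id": "/redfish/v1/Systems/2" }
  ]
}
""")

def list_system_links(service_root, fetch):
    """Follow the Systems link from the service root and return the
    @odata.id of each ComputerSystem in the collection."""
    systems_url = service_root["Systems"]["@odata.id"]
    collection = fetch(systems_url)  # in practice: an HTTPS GET with auth
    return [member["@odata.id"] for member in collection["Members"]]

# Stand-in for an HTTP client; a real tool would GET the URL over HTTPS
# and parse the JSON response body.
def fake_fetch(url):
    return {"/redfish/v1/Systems": SYSTEMS_COLLECTION}[url]

print(list_system_links(SERVICE_ROOT, fake_fetch))
# ['/redfish/v1/Systems/1', '/redfish/v1/Systems/2']
```

The point of the sketch is that a client never hard-codes resource paths beyond the service root: it navigates the hypermedia links the service advertises, which is what lets the same tooling work across different vendors' RSD offerings.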
Now that I have given you a long-winded answer on why rack scale, let me address why now. Unlike the mega public cloud players such as Amazon, the vast majority of cloud service providers, telecoms, and Fortune 500 companies simply cannot afford to build their own Facebook-style modern data centers capable of achieving a PUE of 1.07 at full load. To these customers, a solution like RSD is very appealing because it offers a rack-scale total solution they can use to build their own agile, efficient, software-defined data centers.
In booth 600 at IDF 2016, we will demonstrate how to elevate hardware management from the individual server BMC to the rack level using our new Redfish*-based Pod Manager and Rack Management Module (RMM), which operate on existing Supermicro rack mount servers (i.e., X10 servers). To give you a glimpse of how this technology can be used in existing application environments, we will showcase OpenStack running on top of pooled SRSD compute resources. We will also participate in the multi-vendor RSD interoperability demo at the Intel Pavilion.
We are excited to take the first step of a long journey that promises data center agility, freedom, and efficiency to our cloud service provider, telecom, and large enterprise customers. Come visit us at booth 600. We look forward to talking to you!