MENLO PARK, California — Since you last saw Frank Frankovsky, his beard has grown to epic lengths. And it suits him.
As the man at the center of Facebook’s Open Compute Project, Frankovsky spent the last two years rethinking the very essence of the computer hardware that runs the company’s massive social network — and sharing his ever-evolving data center ideology with the rest of the tech world. He’s a kind of hardware philosopher. And now he looks like one too.
When you sit down with the burly Texan, inside Facebook’s Northern California headquarters, he takes the Open Compute philosophy to new extremes, revealing the blueprint for a computer server that doesn’t even look like a computer server. This design lets you add or remove a server’s primary part — the processor — whenever you like. Nowadays, if you want a new processor, you need, well, a new server. But Frankovsky and the Open Compute Project aim to change that, sharing the new design with anyone who wants it.
“By modularizing the design, you can rip and place the bits that need to be upgraded, but you can leave the stuff that’s still good,” Frankovsky says, pointing to memory and flash storage as hardware that you don’t have to replace as often as the processor. “Plus, you can better match your hardware to the software that it’s going to run.”
The new design is still a long way from live data centers. At this point, it’s just a specification for a motherboard slot that processors will plug into. But Intel and AMD — the two largest server chip designers — have put their weight behind the idea, as have two companies working to build servers using low-power ARM processors akin to the one in your iPhone: Calxeda and AppliedMicro.
‘By modularizing the design, you can rip and place the bits that need to be upgraded, but you can leave the stuff that’s still good.’
— Frank Frankovsky
It’s one more way the Open Compute Project seeks to significantly reduce the cost and the hassle of the hardware that underpins today’s online operations. Facebook and Frankovsky founded the project in the spring of 2011, urging companies across the industry to share and collaborate on new data-center hardware designs, and though Facebook is still the primary force behind the project, Open Compute has now been spun off as a not-for-profit operation — with its own full-time employee — and it’s backed by a wide range of companies, including hardware buyers such as Rackspace, Goldman Sachs, and Fidelity as well as hardware makers and sellers such as Intel, AMD, and Dell.
At first glance, some may seem out of place. Dell is a participant even though the project’s open source designs threaten to cut into its traditional server business — Facebook’s servers are built by lesser-known manufacturers in Asia — and in backing the project’s modular processor idea, Intel is giving buyers a way to readily replace their Intel chips with processors from AMD and countless outfits backing the ARM architecture. But this can only be a sign of how important the project has become. And Frankovsky says there’s no point in trying to parse the industry politics.
“I tend to ignore the politics. Nobody should take sides over technology. Everyone should test, see what works best for them, and choose that. There shouldn’t be any motivation other than what delivers the best results for the infrastructure,” Frankovsky says. “[The Open Compute Project] is about empowering the user to take control of infrastructure design.”
Due for a formal unveiling on Wednesday, when Open Compute members meet in Santa Clara, California, for their latest summit, the modular processor spec is a natural extension of earlier hardware design “open sourced” by Facebook. In May, at the previous summit, Frankovsky unveiled a new breed of server rack capable of holding its own power supplies, which meant you could separate the power supply from the servers housed in the rack. “You don’t have to embed a new power supply every time you install a new CPU,” Frankovsky said then.
Now, Facebook and others have also separated the processor from the server. Basically, Facebook has offered up the spec for the motherboard slot that processors can plug in to, and four companies — Intel, AMD, AppliedMicro, and Calxeda — have already built preliminary hardware that uses this spec. As Facebook man John Kenevey demonstrates, just before Wednesday’s Open Compute summit, the setup even allows two different processors from two different manufacturers to operate on the same motherboard.
“It’s always frustrated me — for years — that we’ve had to design two separate motherboards: one for Intel [processor] sockets and one for AMD sockets,” says Frankovsky, who worked at Dell for 14 years before moving to Facebook. “But now any [processor] maker in the world can design to this new specification. It will be the great equalizer.” The common slot used by these processors — or SoCs, systems on a chip — is based on the PCIe connector used in today’s servers.
At the same time, Intel has released the specifications for a 100-Gigabit silicon photonics bus that will sit in the rack and connect these modular servers to networking switches, the devices that tie your servers to a larger network of machines. In short, the project is working to split servers into as many pieces as possible — all of which you can install or remove with relative ease.
“Historically, the industry has built very monolithic servers. Everything got put onto a motherboard. The motherboard got put into a chassis. The chassis got put into a rack. And the chassis got connected to a switch,” Frankovsky says. “We want to better match how the software is going to exercise the hardware. We want to disaggregate the hardware components so you can better take advantage of each component.”
As this effort continues to gestate, Facebook has also open sourced two other new server designs. One is the latest version of the Facebook web server — a machine that delivers webpages — and the other is the company’s first custom-built database server. Both are meant to reduce costs by stripping the hardware to the bare essentials, but the database server goes a step further. It doesn’t use a hard drive. It runs entirely on flash memory, the superfast solid-state storage medium that is gradually replacing the hard drive across the industry.
Codenamed “Dragonstone,” the Facebook database is designed for use with a new 3.2-terabyte flash memory card from Silicon Valley outfit Fusion-io. According to Frankovsky and Fusion-io CEO David Flynn, the card was designed in tandem with Facebook engineers — Facebook wanted all storage space on a single card — but it’s now available to the rest of the world as well. Plugging into a PCIe connector, this sort of flash card provides an added level of performance, but it’s also more reliable than a mechanical hard drive, which, in Frankovsky’s words, breaks down more often than any other device in the data center. The new server even boots from the flash card.
These servers were built specifically for Facebook’s data centers. The Dragonstone database machine is slated for use in the company’s new facility in Lulea, Sweden. But in sharing the designs with the world at large, Facebook hopes that others can use them too — or at least re-purpose parts of them in machines tailored to different tasks.
It seems like such an idealistic endeavor. But it’s working. Inspired by Facebook, Texas-based cloud computing outfit Rackspace was due to unveil its own server designs on Wednesday, following in the footsteps of AMD and Intel, which have designed boards in tandem with financial houses like Fidelity and Goldman Sachs. And it was Intel that designed the modular processor prototype set to be flaunted at the summit, allowing, yes, its x86 processors to run alongside an ARM design from AppliedMicro.
The man with the beard is worth listening to.