OpenFlow @ GOOG

2012-04-17 Thread Eugen Leitl

http://www.wired.com/wiredenterprise/2012/04/going-with-the-flow-google/all/1

Going With The Flow: Google’s Secret Switch To The Next Wave Of Networking

By Steven Levy | April 17, 2012 | 11:45 am

Categories: Data Centers, Networking

In early 1999, an associate computer science professor at UC Santa Barbara
climbed the steps to the second floor headquarters of a small startup in Palo
Alto, and wound up surprising himself by accepting a job offer. Even so, Urs
Hölzle hedged his bet by not resigning from his university post, but taking a
year-long leave.

He would never return. Hölzle became a fixture in the company — called
Google. As its czar of infrastructure, Hölzle oversaw the growth of its
network operations from a few cages in a San Jose co-location center to a
massive internet power; a 2010 study by Arbor Networks concluded that if
Google were an ISP it would be the second largest in the world (the largest is
Level 3, which services over 2,700 major corporations in 450 markets over
100,000 fiber miles). “You have all those multiple devices on a network but
you’re not really interested in the devices — you’re interested in the
fabric, and the functions the network performs for you,” Hölzle says.

Google treats its infrastructure like a state secret, so Hölzle rarely speaks
about it in public. Today is one of those rare days: at the Open Networking
Summit in Santa Clara, California, Hölzle is announcing that Google
essentially has remade a major part of its massive internal network,
providing the company a bonanza in savings and efficiency. Google has done
this by brashly adopting a new and radical open-source technology called
OpenFlow.

Hölzle says that the idea behind this advance is the most significant change
in networking in the entire lifetime of Google.

In the course of his presentation Hölzle will also confirm for the first time
that Google — already famous for making its own servers — has been designing
and manufacturing much of its own networking equipment as well.

“It’s not hard to build networking hardware,” says Hölzle, in an advance
briefing provided exclusively to Wired. “What’s hard is to build the software
itself as well.”

In this case, Google has used its software expertise to overturn the current
networking paradigm.

If any company has potential to change the networking game, it is Google. The
company has essentially two huge networks: the one that connects users to
Google services (Search, Gmail, YouTube, etc.) and another that connects
Google data centers to each other. It makes sense to bifurcate the
information that way because the data flow in each case has different
characteristics and demand. The user network has a smooth flow, generally
adopting a diurnal pattern as users in a geographic region work and sleep.
The user network is also held to higher performance standards, as users will
get impatient (or leave!) if services are slow. In the user-facing network
you also need every packet to arrive intact — customers would be pretty
unhappy if a key sentence in a document or e-mail was dropped.

The internal backbone, in contrast, has wild swings in demand — it is
“bursty” rather than steady. Google is in control of scheduling internal
traffic, but it faces difficulties in traffic engineering. Often Google has
to move many petabytes of data (indexes of the entire web, millions of backup
copies of user Gmail) from one place to another. When Google updates or
creates a new service, it wants it available worldwide in a timely fashion —
and it wants to be able to predict accurately how long the process will
take.

“There’s a lot of data center to data center traffic that has different
business priorities,” says Stephen Stuart, a Google distinguished engineer
who specializes in infrastructure. “Figuring out the right thing to move out
of the way so that more important traffic could go through was a challenge.”
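
To make that prioritization concrete, here is a rough Python sketch of handing a congested link's capacity to the most important transfers first. The transfer names, priority values, and the greedy policy are illustrative assumptions for this example only, not a description of Google's actual traffic-engineering system.

# Toy illustration of the problem Stuart describes: when a link is full,
# lower-priority bulk transfers yield capacity to higher-priority ones.
# Names, priorities, and the greedy policy are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class Transfer:
    name: str
    priority: int        # higher value = more business-critical
    demand_gbps: float   # bandwidth the transfer would like to use


def allocate(link_capacity_gbps: float, transfers: list[Transfer]) -> dict[str, float]:
    """Grant bandwidth to the most important transfers first."""
    remaining = link_capacity_gbps
    allocation: dict[str, float] = {}
    for t in sorted(transfers, key=lambda t: -t.priority):
        granted = min(t.demand_gbps, remaining)
        allocation[t.name] = granted
        remaining -= granted
    return allocation


# A user-facing index push outranks a background backup copy on a 100 Gb/s link.
print(allocate(100.0, [
    Transfer("gmail-backup", priority=1, demand_gbps=80.0),
    Transfer("search-index-push", priority=5, demand_gbps=60.0),
]))
# {'search-index-push': 60.0, 'gmail-backup': 40.0}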

But Google found an answer in OpenFlow, an open source system jointly devised
by scientists at Stanford and the University of California at Berkeley.
Adopting an approach known as Software Defined Networking (SDN), OpenFlow
gives network operators a dramatically increased level of control by
separating the two functions of networking equipment: packet switching and
management. OpenFlow moves the control functions to servers, allowing for
more complexity, efficiency and flexibility.
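
As a rough illustration of that split, the Python sketch below models a controller that programs a chosen path by installing match/action entries into switch flow tables, while the switches themselves do nothing but look packets up against those entries. Every class and field name here is invented for the example; this is not the OpenFlow protocol itself, any vendor's API, or Google's implementation.

# Conceptual sketch of separating control from forwarding. The controller
# (software on a server) decides where traffic should go; switches only match
# packets against the entries it installs. All names are illustrative.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class FlowEntry:
    match_dst: str   # destination prefix to match (toy string prefix, e.g. "10.1.")
    out_port: int    # port to forward matching packets to
    priority: int    # higher-priority entries are consulted first


@dataclass
class Switch:
    """Data plane: holds a flow table and forwards packets; no routing logic."""
    name: str
    flow_table: list[FlowEntry] = field(default_factory=list)

    def install(self, entry: FlowEntry) -> None:
        self.flow_table.append(entry)
        self.flow_table.sort(key=lambda e: -e.priority)

    def forward(self, dst: str) -> int | None:
        for entry in self.flow_table:
            if dst.startswith(entry.match_dst):   # toy prefix match
                return entry.out_port
        return None   # table miss: a real switch would punt to the controller


class Controller:
    """Control plane: global view of topology and policy, pushes flow entries."""

    def __init__(self, switches: dict[str, Switch]):
        self.switches = switches

    def program_path(self, dst_prefix: str, hops: list[tuple[str, int]],
                     priority: int = 100) -> None:
        # hops: (switch name, egress port) pairs along the centrally chosen path
        for switch_name, port in hops:
            self.switches[switch_name].install(FlowEntry(dst_prefix, port, priority))


# Usage: one program routes a prefix across two switches end to end.
s1, s2 = Switch("edge-1"), Switch("core-1")
ctrl = Controller({"edge-1": s1, "core-1": s2})
ctrl.program_path("10.1.", [("edge-1", 3), ("core-1", 7)])
assert s1.forward("10.1.4.9") == 3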

“We were already going down that path, working on an inferior way of doing
software-defined networking,” says Hölzle. “But once we looked at OpenFlow,
it was clear that this was the way to go. Why invent your own if you don’t
have to?”

Google became one of several organizations to sign on to the Open Networking
Foundation, which is devoted to promoting OpenFlow. (Other members include
Yahoo, Microsoft, Facebook, Verizon, Deutsche Telekom, and an innovative
startup called Nicira.) But none of the partners so far have announced any
implementation as extensive as Google’s.

Why is 

Re: OpenFlow @ GOOG

2012-04-17 Thread Marshall Eubanks
I wonder if this will be contributed to the DC (DataCenter) work
currently gearing up in the IETF.

Regards
Marshall

On Tue, Apr 17, 2012 at 12:37 PM, Eugen Leitl eu...@leitl.org wrote:

 http://www.wired.com/wiredenterprise/2012/04/going-with-the-flow-google/all/1
