Re: /. Terabit Ethernet is Dead, for Now

2012-09-27 Thread Dan Shechter
If they had rolled out 1000G networks now, I guess we would have to
plug in 17 MTP interfaces ;)



HTH,
Dan #13685 (RS/Sec/SP)
 The CCIE troubleshooting blog: http://dans-net.com
 Bring order to your Private VLAN network: http://marathon-networks.com





On Thu, Sep 27, 2012 at 2:51 PM, Eugen Leitl eu...@leitl.org wrote:


 http://slashdot.org/topic/datacenter/terabit-ethernet-is-dead-for-now/

 Terabit Ethernet is Dead, for Now

 by Mark Hachman | September 26, 2012

 A straw poll of the IEEE's high-speed Ethernet group finds that 400 Gbits/s
 is almost unanimously preferred.

 Sorry, everybody: terabit Ethernet looks like it will have to wait a while
 longer.

 The IEEE 802.3 Industry Connections Higher Speed Ethernet Consensus group
 met this week in Geneva, Switzerland, with attendees concluding—almost to
 a man—that 400 Gbits/s should be the next step in the evolution of
 Ethernet. A straw poll at its conclusion found that 61 of the 62 attendees
 who voted supported 400 Gbits/s as the basis for the near-term “call for
 interest,” or CFI.

 The bandwidth call to arms was sounded by a July report from the IEEE,
 which concluded that, if current trends continue, networks will need to
 support capacity requirements of 1 terabit per second in 2015 and 10
 terabits per second by 2020. In 2015 there will be nearly 15 billion fixed
 and mobile-networked devices and machine-to-machine connections.

 The report goes on to predict that, from 2010 to 2015, global IP traffic
 will experience a fourfold increase, from 20 exabytes per month in 2010 to
 81 exabytes per month in 2015, a 32 percent CAGR. Storage is expected to
 grow to 7,910 exabytes in 2015, with over half of it accessed via
 Ethernet. Of course, one of the first places the new, faster Ethernet
 links will appear is the data center.
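
 A quick sanity check of that growth figure, as a minimal Python sketch
 (the 20 EB/month and 81 EB/month endpoints are taken from the report as
 quoted above):

     # CAGR implied by 20 EB/month (2010) growing to 81 EB/month (2015)
     start, end, years = 20.0, 81.0, 5
     cagr = (end / start) ** (1 / years) - 1
     print(f"{cagr:.1%}")  # -> 32.3%, matching the article's 32 percent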

 With that in mind, the IEEE 802.3 group began formulating a response.
 However, virtually all attendees seemed to be in agreement before the
 meeting opened, as only one presentation focused on the feasibility of
 one-terabit Ethernet, eventually concluding that 400 Gbits/s made more
 sense in the near term.

 Kai Cui and Peter Stassar from Huawei Technologies suggested that the most
 cost-effective method for developing a 1-terabit Physical Medium Dependent
 (PMD) would be to leverage today’s 100-Gbit technology, which isn’t yet in
 high volume, and therefore not cost-optimized. “[The] cost target for
 1Tb/s needs to be at or below 100G cost/bit*sec and required R&D
 investments should be modest,” they wrote as part of their presentation.

 “100GbE technology based architecture would imply 40 lanes at 25G, which
 clearly would imply impractically big packages and large amount of
 interface signals,” Cui and Stassar added, meaning the number of
 electrical and optical interface lanes would need to be reduced to enable
 a reasonable package size. While alternative modulation formats could be
 used (5λx200G DP-16QAM, 4 bits/symbol, 25G), “neither the multi-level nor
 the phase modulation format based technologies have been demonstrated to
 be sufficiently mature to justify usage in client PMDs towards 100Gb/s to
 1Tb/s applications.”
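
 The lane arithmetic behind that objection, sketched in Python (the 25
 Gb/s serial lane rate is the assumption from the presentation):

     # Lanes needed to hit a target rate with 25 Gb/s serial lanes
     LANE_RATE = 25  # Gb/s per electrical/optical lane
     for target in (100, 400, 1000):  # Gb/s
         print(f"{target}G needs {target // LANE_RATE} lanes")
     # 100G needs 4 lanes; 400G needs 16; 1000G needs 40

 Forty parallel lanes is what makes the 1Tb/s package impractical; sixteen
 lanes is what makes the 400-Gbit proposal below tractable.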

 They concluded: “1Tb/s does seem a ‘bridge too far’ at least for the
 coming 3 to 4 years.”

 Chris Cole of optical components maker Finisar presented the case for a
 400-Gbit CFI, with backing from Brocade, Cisco, HP, IBM, Intel, Juniper,
 and Verizon, among others.

 Like Huawei’s Cui and Stassar, Cole indicated that 400-Gbit Ethernet can
 reuse 100 GbE building blocks, and fits within the existing dense 100 GbE
 roadmap. Faster data rates require “exotic” implementations, with higher
 R&D investments required and a longer time to market. “Data rates beyond
 400Gb/s require an increasingly impractical number of lanes if 100GbE
 technology is reused,” he said.

 400 Gbit/s also makes more sense than a 4×100 Gb/s link aggregation, Cole
 added, as fewer links promote management efficiency. Individual link
 congestion is also a concern: “Without faster links, [the] link count
 grows exponentially, therefore management pain grows exponentially.”
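
 A minimal sketch of that link-count argument in Python (the doubling
 demand curve is illustrative, not from the article):

     # Parallel links needed as aggregate demand grows, at two link speeds
     demand = 400  # Gb/s today, assumed to double each year
     for year in range(5):
         need = demand * 2 ** year
         links_100g = need // 100
         links_400g = -(-need // 400)  # ceiling division
         print(f"year {year}: {links_100g} x 100G vs {links_400g} x 400G")
     # year 4: 64 x 100G members to monitor, versus 16 x 400G

 Every LAG member is another link to cable, monitor, and balance across,
 which is the “management pain” Cole is pointing at.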

 Cole suggested that a potential 400 Gb/s MAC/PCS ASIC could be fabricated
 in either 20- or 28-nm CMOS, using a 400-bit wide bus and a 1 GHz clock
 rate. “There is a strong desire to reuse 802.3ba, 802.3bj, and 802.3bm
 technology building blocks,” he said.
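
 The bus width and clock rate multiply out to exactly the target line
 rate; a one-line check:

     # A 400-bit-wide datapath clocked at 1 GHz moves 400 Gb/s
     bus_width_bits, clock_hz = 400, 1e9
     print(bus_width_bits * clock_hz / 1e9, "Gb/s")  # -> 400.0 Gb/s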

 That’s not to say that terabit Ethernet won’t be needed, Cole concluded,
 or even 1.6-terabit Ethernet. The timeframes for those follow-on CFIs
 could be three to six years out, he said.

 The CFI hasn’t formally occurred; until it does, nothing has been decided.
 So far, the most likely dates for formalizing the CFI are next month or
 November. But at this point, it looks like terabit Ethernet is a dead
 duck, at least for the near future.




Re: Are people still building SONET networks from scratch?

2012-09-09 Thread Dan Shechter
OT, what is the _expected_ latency on each hop/ADM in the SDH/SONET network?
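
Not that I have the answer, but a back-of-envelope sketch of the
components such a figure would contain (the per-ADM processing number is
a placeholder assumption, not vendor data):

    # Rough end-to-end latency across a SONET/SDH path
    # (frames repeat every 125 us; ADM pass-through buffering is
    # typically a fraction of that, but check vendor specs)
    FIBER_US_PER_KM = 5    # ~c/1.47 in glass
    ADM_US = 50            # placeholder per-ADM pass-through delay
    hops, span_km = 6, 80
    total_us = hops * ADM_US + hops * span_km * FIBER_US_PER_KM
    print(f"~{total_us} us end to end")  # propagation dominates long spans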


HTH,
Dan #13685 (RS/Sec/SP)
The CCIE troubleshooting blog: http://dans-net.com
Bring order to your Private VLAN network: http://marathon-networks.com




On Sun, Sep 9, 2012 at 6:20 PM, Robert E. Seastrom r...@seastrom.com wrote:


 Will Orton w...@loopfree.net writes:

  I've considered using J's PE-4CHOC3-CE-SFP (OC3 emulated SAToP), then I
  could do it all with gig-e underneath. Does anyone make a cheaper OC3
  circuit emulation module or box? Most likely the customer wouldn't believe
  such a thing is possible and we'd have to put something in the contract
  allowing them SLA credit if their OC3 suffers too many timing slips or
  something.

 And so you find yourself at the intersection of two timeless maxims:

 1) The customer is always right, but not everyone needs to be our customer.

 2) Don't say no to the customer, let the customer say no thanks.

 Time to model the cost/benefit/profit margin of having these folks as
 a customer at all (I'd imagine that this circuit is not the only thing
 that they buy from you or you'd be running away even today).  What are
 your engineering costs for this trick?  Are you passing that on to the
 customer?

 You may find it advantageous to do a pricing model where you do
 circuit emulation on a hope-for-the-best basis and count on a maximum
 SLA payout every month (and still make money).  Then if you fail to
 pay SLA credits from time to time, that's pure gravy.
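
 One way to run that model, as a tiny Python sketch (every figure here is
 hypothetical):

     # Price so that even a maximum monthly SLA payout leaves margin
     monthly_price  = 3000.0   # hypothetical recurring charge
     monthly_cost   = 1200.0   # transport plus amortized engineering
     max_sla_credit = 0.25 * monthly_price  # worst-case credit per contract
     worst_case_margin = monthly_price - monthly_cost - max_sla_credit
     print(worst_case_margin)  # > 0: credits are survivable, unpaid ones are gravy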

 -r






Re: Bird vs Quagga revisited

2012-08-31 Thread Dan Shechter
Just for the record, OpenBSD has a fully functional MPLS stack.


HTH,
Dan #13685 (RS/Sec/SP)
The CCIE troubleshooting blog: http://dans-net.com
Bring order to your Private VLAN network: http://marathon-networks.com


On Fri, Aug 31, 2012 at 2:44 PM, Laurent GUERBY laur...@guerby.net wrote:

 On Wed, 2012-08-29 at 16:39 +0100, Edward J. Dore wrote:
  MikroTik RouterOS is indeed based on Linux; however, I believe they
  rolled their own MPLS stack.

 Hi,

 Does Mikrotik publish their modified Linux kernel source? Might be
 interesting to look at it.

 Laurent

  Last time I looked, the mpls-linux project over at SourceForge was
  incomplete and slow - I have no idea if this has changed at all
  recently, however.
 
  Edward Dore
  Freethought Internet
 
  - Original Message -
  From: Walter Keen walter.k...@rainierconnect.net
  To: Seth Mattinen se...@rollernet.us
  Cc: nanog@nanog.org
  Sent: Wednesday, 29 August, 2012 2:00:52 AM
  Subject: Re: Bird vs Quagga revisited
 
  I'm fairly sure that Mikrotik software is based on Linux and supports
  MPLS.
 
  Not too sure which package they use, or if they rolled their own MPLS 
  support...
 
 
 
 
  - Original Message -
 
  From: Seth Mattinen se...@rollernet.us
  To: nanog@nanog.org
  Sent: Tuesday, August 28, 2012 4:42:14 PM
  Subject: Re: Bird vs Quagga revisited
 
 
  What's the state of MPLS on Linux these days?
 
  ~Seth