Re: 10gig coast to coast

2013-06-18 Thread Jakob Heitz
 Date: Mon, 17 Jun 2013 22:04:52 -0600
 From: Phil Fagan philfa...@gmail.com
 ... you could always
 thread the crap out of whatever it is your transactioning across the link
 to make up for TCP's jackknifes...

What is a TCP jackknife?

Cheers.
Jakob.



Re: 10gig coast to coast

2013-06-18 Thread Fred Reimer
It is also called a sawtooth, among other terms.  Just google tcp
sawtooth and you will see many references and images that depict the
traffic pattern.

HTH,

Fred Reimer | Secure Network Solutions Architect
Presidio | www.presidio.com http://www.presidio.com/
3250 W. Commercial Blvd Suite 360, Oakland Park, FL 33309
D: 954.703.1490 | C: 954.298.1697 | F: 407.284.6681 | frei...@presidio.com
CCIE 23812, CISSP 107125, HP MASE, TPCSE 2265




On 6/18/13 9:20 AM, Jakob Heitz jakob.he...@ericsson.com wrote:

 Date: Mon, 17 Jun 2013 22:04:52 -0600
 From: Phil Fagan philfa...@gmail.com
 ... you could always
 thread the crap out of whatever it is your transactioning across the
link
 to make up for TCP's jackknifes...

What is a TCP jackknife?

Cheers.
Jakob.





Re: 10gig coast to coast

2013-06-18 Thread Jakob Heitz
Thanks Fred. Sawtooth is more familiar.
How much of that do you actually see in practice?

Cheers,
Jakob.


On Jun 18, 2013, at 6:27 AM, Fred Reimer frei...@freimer.org wrote:

 It is also called a sawtooth or similar terms.  Just google tcp
 sawtooth and you will see many references, and images that depict the
 traffic pattern.
 
 HTH,
 
 Fred Reimer | Secure Network Solutions Architect
 Presidio | www.presidio.com http://www.presidio.com/
 3250 W. Commercial Blvd Suite 360, Oakland Park, FL 33309
 D: 954.703.1490 | C: 954.298.1697 | F: 407.284.6681 | frei...@presidio.com
 CCIE 23812, CISSP 107125, HP MASE, TPCSE 2265
 
 
 
 
 On 6/18/13 9:20 AM, Jakob Heitz jakob.he...@ericsson.com wrote:
 
 Date: Mon, 17 Jun 2013 22:04:52 -0600
 From: Phil Fagan philfa...@gmail.com
 ... you could always
 thread the crap out of whatever it is your transactioning across the
 link
 to make up for TCP's jackknifes...
 
 What is a TCP jackknife?
 
 Cheers.
 Jakob.
 
 



Re: 10gig coast to coast

2013-06-18 Thread Phil Fagan
Sorry; yes, sawtooth is the more accurate term. I see this daily with
large data-set transfers, generally when the data set is a large multiple
of the initial window. I've never tested medium latency (100ms) with
payloads small enough that threading out many thousands of sessions might
pay off. However, medium latency with large files (50M-10G) threads well
in the sub-200 range and does a pretty good job of filling several gig
links. None of this is scientific; just my observations from the wild,
influenced by end-to-end tunings per environment.
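The sub-200 thread count lines up with back-of-the-envelope arithmetic: a window-limited flow moves at most one window per round trip, so the number of parallel flows needed is roughly the link rate divided by that per-flow ceiling. A rough sketch (the 64 KiB window and 100ms RTT are illustrative assumptions, not measurements from this thread):

```python
import math

def flows_to_fill(link_bps: float, window_bytes: int, rtt_s: float) -> int:
    """Parallel flows needed to fill a link when each flow is window-limited."""
    per_flow_bps = window_bytes * 8 / rtt_s   # one window per round trip
    return math.ceil(link_bps / per_flow_bps)

# Filling 1 Gbit/s at 100 ms with a legacy 64 KiB window:
print(flows_to_fill(1e9, 64 * 1024, 0.100))   # -> 191
```

191 parallel streams to fill a single gig link at that window size is squarely in the "sub 200 range" described above.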




On Tue, Jun 18, 2013 at 7:45 AM, Jakob Heitz jakob.he...@ericsson.comwrote:

 Thanks Fred. Sawtooth is more familiar.
 How much of that do you actually see in practice?

 Cheers,
 Jakob.


 On Jun 18, 2013, at 6:27 AM, Fred Reimer frei...@freimer.org wrote:

  It is also called a sawtooth or similar terms.  Just google tcp
  sawtooth and you will see many references, and images that depict the
  traffic pattern.
 
  HTH,
 
  Fred Reimer | Secure Network Solutions Architect
  Presidio | www.presidio.com http://www.presidio.com/
  3250 W. Commercial Blvd Suite 360, Oakland Park, FL 33309
  D: 954.703.1490 | C: 954.298.1697 | F: 407.284.6681 |
 frei...@presidio.com
  CCIE 23812, CISSP 107125, HP MASE, TPCSE 2265
 
 
 
 
  On 6/18/13 9:20 AM, Jakob Heitz jakob.he...@ericsson.com wrote:
 
  Date: Mon, 17 Jun 2013 22:04:52 -0600
  From: Phil Fagan philfa...@gmail.com
  ... you could always
  thread the crap out of whatever it is your transactioning across the
  link
  to make up for TCP's jackknifes...
 
  What is a TCP jackknife?
 
  Cheers.
  Jakob.
 
 




-- 
Phil Fagan
Denver, CO
970-480-7618


RE: 10gig coast to coast

2013-06-18 Thread James Braunegg
Dear All

We deal with TCP window size all day, every day, across the Southern Cross
from LA to Australia, which adds around 160ms... I've given up looking for a
solution to get around the physics of sending TCP traffic a long distance
at high speed.

UDP traffic, however, comes in very fast.
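The per-flow ceiling here is the bandwidth-delay product: at 160ms, a window-limited TCP flow cannot move more than window/RTT, no matter how fast the link is. A quick sketch (the 160ms figure is from the message above; the window sizes are illustrative assumptions):

```python
def bdp_bytes(link_bps: float, rtt_s: float) -> float:
    """Bytes a flow must keep in flight to fill a path of this speed and RTT."""
    return link_bps / 8 * rtt_s

def flow_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """Ceiling for a window-limited flow: one window per round trip."""
    return window_bytes * 8 / rtt_s

RTT = 0.160  # LA <-> Australia over Southern Cross, per above

# A legacy 64 KiB window caps a single flow far below line rate:
print(flow_throughput_bps(64 * 1024, RTT) / 1e6)   # ~3.3 Mbit/s

# Filling 10 Gbit/s at this RTT takes a ~200 MB window per flow:
print(bdp_bytes(10e9, RTT) / 1e6)                  # ~200 MB
```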

Kindest Regards

James Braunegg
W:  1300 769 972  |  M:  0488 997 207 |  D:  (03) 9751 7616
E:   james.braun...@micron21.com  |  ABN:  12 109 977 666   



This message is intended for the addressee named above. It may contain 
privileged or confidential information. If you are not the intended recipient 
of this message you must not use, copy, distribute or disclose it to anyone 
other than the addressee. If you have received this message in error please 
return the message to the sender by replying to it and then delete the message 
from your computer.


-Original Message-
From: Phil Fagan [mailto:philfa...@gmail.com] 
Sent: Wednesday, June 19, 2013 12:16 AM
To: Jakob Heitz
Cc: nanog@nanog.org
Subject: Re: 10gig coast to coast

Sorry; yes Sawtooth is the more accurate term. I see this on a daily occurance 
with large data-set transfers; generally if the data-set is large multiples of 
the initial window. I've never tested medium latency(
100ms) with small enough payloads where it may pay-off threading out many 
thousands of sessions. However, medium latency with large files (50M-10G) 
threads well in the sub 200 range and does a pretty good job at filling several 
Gig links. None of this is scientific; just my observations from the 
wild.infulenced by end to end tunings per environment.




On Tue, Jun 18, 2013 at 7:45 AM, Jakob Heitz jakob.he...@ericsson.comwrote:

 Thanks Fred. Sawtooth is more familiar.
 How much of that do you actually see in practice?

 Cheers,
 Jakob.


 On Jun 18, 2013, at 6:27 AM, Fred Reimer frei...@freimer.org wrote:

  It is also called a sawtooth or similar terms.  Just google tcp 
  sawtooth and you will see many references, and images that depict 
  the traffic pattern.
 
  HTH,
 
  Fred Reimer | Secure Network Solutions Architect Presidio | 
  www.presidio.com http://www.presidio.com/
  3250 W. Commercial Blvd Suite 360, Oakland Park, FL 33309
  D: 954.703.1490 | C: 954.298.1697 | F: 407.284.6681 |
 frei...@presidio.com
  CCIE 23812, CISSP 107125, HP MASE, TPCSE 2265
 
 
 
 
  On 6/18/13 9:20 AM, Jakob Heitz jakob.he...@ericsson.com wrote:
 
  Date: Mon, 17 Jun 2013 22:04:52 -0600
  From: Phil Fagan philfa...@gmail.com ... you could always thread 
  the crap out of whatever it is your transactioning across the link 
  to make up for TCP's jackknifes...
 
  What is a TCP jackknife?
 
  Cheers.
  Jakob.
 
 




--
Phil Fagan
Denver, CO
970-480-7618


Re: 10gig coast to coast

2013-06-18 Thread Valdis . Kletnieks
On Tue, 18 Jun 2013 15:53:48 -, James Braunegg said:
 We Deal with TCP window size all day every day across the southern cross from
 LA to Australia which adds around 160ms...  I've given up looking for a
 solution to get around physical physics of sending TCP traffic a long distance
 at a high speed

http://www.extremetech.com/extreme/141651-caltech-and-uvic-set-339gbps-internet-speed-record

It's apparently doable. ;)

A quick cheat sheet for the low-hanging fruit:

http://www.psc.edu/index.php/networking/641-tcp-tune

Though to get to *really* high throughput, you may have to play some games
with TCP slow-start so it's not quite as slow (otherwise, for long hauls, it
can take literally hours to open the window after a packet burp at 10G or
higher).

Also, you may want to look at CoDel or related queueing disciplines to
minimize the amount of trouble that bufferbloat can cause you at high
speeds.
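The "literally hours" figure is consistent with standard AIMD arithmetic: in congestion avoidance, cwnd regrows by roughly one MSS per RTT, so recovering the half-window lost to a single drop on a long fat pipe takes a very long time. A rough sketch (the MSS and RTT values are illustrative assumptions):

```python
MSS = 1460      # bytes; typical Ethernet TCP payload (assumption)
RTT = 0.160     # seconds; a long-haul path (assumption)
LINK = 10e9     # 10 Gbit/s

# Packets in flight when the pipe is full:
full_window_pkts = (LINK / 8 * RTT) / MSS

# One loss halves cwnd; Reno-style congestion avoidance regrows it
# by about one packet per RTT, so recovering the lost half takes:
recovery_s = (full_window_pkts / 2) * RTT
print(recovery_s / 3600)   # ~3 hours
```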







RE: 10gig coast to coast

2013-06-18 Thread James Braunegg
Dear Valdis

Thanks for your comments; whilst I know you can optimize servers for TCP
windowing, I was talking more about network backhaul, where you don't have
control over the server sending the traffic.

i.e. backhauling IP transit over the Southern Cross cable system

Kindest Regards


James Braunegg
W:  1300 769 972  |  M:  0488 997 207 |  D:  (03) 9751 7616
E:   james.braun...@micron21.com  |  ABN:  12 109 977 666   





-Original Message-
From: valdis.kletni...@vt.edu [mailto:valdis.kletni...@vt.edu] 
Sent: Wednesday, June 19, 2013 3:19 AM
To: James Braunegg
Cc: Phil Fagan; Jakob Heitz; nanog@nanog.org
Subject: Re: 10gig coast to coast

On Tue, 18 Jun 2013 15:53:48 -, James Braunegg said:
 We Deal with TCP window size all day every day across the southern 
 cross from LA to Australia which adds around 160ms...  I've given up 
 looking for a solution to get around physical physics of sending TCP 
 traffic a long distance at a high speed

http://www.extremetech.com/extreme/141651-caltech-and-uvic-set-339gbps-internet-speed-record

It's apparently doable. ;)

A quick cheat sheet for the low-hanging fruit:

http://www.psc.edu/index.php/networking/641-tcp-tune

Though to get to *really* high througput, you may have to play some games with 
TCP slow-start so it's not quite as slow (otherwise for long hauls it can take 
literally hours to open the window after a packet burp at 10G or higher)

Also, you may want to look at CODEL or related queueing disciplines to minimize 
the amount of trouble that bufferbloat can cause you at high speeds.






Re: 10gig coast to coast

2013-06-18 Thread Valdis . Kletnieks
On Wed, 19 Jun 2013 00:24:15 -, James Braunegg said:

 Thanks for your comments, whilst I know you can optimize servers for TCP
 windowing I was more talking about network backhaul where you don't have
 control over the server sending the traffic.

If you don't have control over the server, why are you allowing your
customer to make their misconfiguration your problem?  (Mostly a rhetorical
question, as I know damned well how this sort of thing ends up happening)






Re: 10gig coast to coast

2013-06-18 Thread Ben Aitchison
On Tue, Jun 18, 2013 at 08:47:41PM -0400, valdis.kletni...@vt.edu wrote:
 On Wed, 19 Jun 2013 00:24:15 -, James Braunegg said:
 
  Thanks for your comments, whilst I know you can optimize servers for TCP
  windowing I was more talking about network backhaul where you don't have
  control over the server sending the traffic.
 
 If you don't have control over the server, why are you allowing your
 customer to make their misconfiguration your problem?  (Mostly a rhetorical
 question, as I know damned well how this sort of thing ends up happening)

Maybe his customers are connecting to normal internet servers.  There are a
lot of servers out there with strangely low limits on window size.

For example, on speedtest.net under Palo Alto there's Fiber Internet Center,
which seems to have a window size of 128k.

It requests files from 66.201.42.23; if you do something like:

curl -O http://66.201.42.23/speedtest/random4000x4000.jpg

then ping 66.201.42.23, divide 1000 by the latency (for example, 1000 /
160), and multiply by 128; that number is about what curl will show on a
fast connection.
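This rule of thumb is just throughput ≈ window / RTT, restated in KB/s and milliseconds; written out:

```python
def est_kbytes_per_s(window_kb: float, rtt_ms: float) -> float:
    """Estimate: a window-limited flow moves one receive window per RTT."""
    return window_kb * 1000 / rtt_ms

# 128k window at 160 ms RTT:
print(est_kbytes_per_s(128, 160))   # -> 800.0 (KB/s)
```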

speedtest.net seems to use 2 parallel connections, which raises the speed
slightly, but it seems reasonably common to come across sites with
sub-optimal TCP/IP configurations. A while back I noticed that
www.godaddy.com seems to use a 2-packet initial window, and if you use a
proxy server that sends Via, they seem to disable compression, so the web
page loads very slowly from a remote location through a proxy server.

Using recent default Linux kernels, network speeds can be very good over
high-latency connections, both for small files and for larger files,
assuming minimal packet loss. The combination of a 10-packet initial window
and CUBIC congestion control helps both small and large transfers, and
Linux has been improving its TCP/IP stack a lot. But there are still quite
a few less-than-ideal TCP/IP peers around.

Also, big buffers really help with microbursts of traffic on fast
connections, and small buffers can really increase the sawtooth effects of
TCP. With all this talk of bufferbloat, in my experience SFQ works better
than CoDel for long-distance throughput.

Ben.



10gig coast to coast

2013-06-17 Thread eric clark
Greetings


I may be needing  10 gig from the West Coast to the East Coast some time in
the next year. I've got my ideas on what that would cost, but I don't have
anything that big.

This could be a leased line, part of a cloud with Verizon, NTT, Sprint, or
whoever as the provider, etc. I'm just looking to see what a budget cost
for something like this is, and who can provide such service.

Your help is greatly appreciated, feel free to respond directly or to the
thread.


E


Re: 10gig coast to coast

2013-06-17 Thread Valdis . Kletnieks
On Mon, 17 Jun 2013 12:51:28 -0700, eric clark said:

 I may be needing  10 gig from the West Coast to the East Coast

Might want to be more specific.  Catalina Island, CA to Buxton, NC
(home of Cape Hatteras High School) will probably be way different
than downtown LA to downtown Boston.




Re: 10gig coast to coast

2013-06-17 Thread eric clark
Fair enough

Seattle to Boston is the general route, real close.

On Monday, June 17, 2013, wrote:

 On Mon, 17 Jun 2013 12:51:28 -0700, eric clark said:

  I may be needing  10 gig from the West Coast to the East Coast

 Might want to be more specific.  Catalina Island, CA to Buxton, NC
 (home of Cape Hatteras High School) will probably be way different
 than downtown LA to downtown Boston.



Re: 10gig coast to coast

2013-06-17 Thread Carlos Alcantar
Typically, the last-mile portion of the circuit is going to cost you the
most, so it's important to know those details.

Carlos Alcantar
Race Communications / Race Team Member
1325 Howard Ave. #604, Burlingame, CA. 94010
Phone: +1 415 376 3314 / car...@race.com / http://www.race.com





-Original Message-
From: eric clark cabe...@gmail.com
Date: Monday, June 17, 2013 3:22 PM
To: valdis.kletni...@vt.edu valdis.kletni...@vt.edu
Cc: nanog@nanog.org nanog@nanog.org
Subject: Re: 10gig coast to coast

Fair enough

Seattle to Boston is the general route, real close.

On Monday, June 17, 2013, wrote:

 On Mon, 17 Jun 2013 12:51:28 -0700, eric clark said:

  I may be needing  10 gig from the West Coast to the East Coast

 Might want to be more specific.  Catalina Island, CA to Buxton, NC
 (home of Cape Hatteras High School) will probably be way different
 than downtown LA to downtown Boston.






Re: 10gig coast to coast

2013-06-17 Thread George Herbert
Also, what are the reliability and redundancy requirements?

10 gigs of bare naked fiber is one thing, but if you need extra paths for
redundancy, figure that out now and specify it.

Is this about latency, bandwidth, or both?  Mission critical, business
critical, or lower priority?  24x7x365, a subset of that, or intermittent
only?


On Mon, Jun 17, 2013 at 6:48 PM, Carlos Alcantar car...@race.com wrote:

 It's typically that the last mile portion of the circuit is going to cost
 you the most, so it's important to know those details.

 Carlos Alcantar
 Race Communications / Race Team Member
 1325 Howard Ave. #604, Burlingame, CA. 94010
 Phone: +1 415 376 3314 / car...@race.com / http://www.race.com





 -Original Message-
 From: eric clark cabe...@gmail.com
 Date: Monday, June 17, 2013 3:22 PM
 To: valdis.kletni...@vt.edu valdis.kletni...@vt.edu
 Cc: nanog@nanog.org nanog@nanog.org
 Subject: Re: 10gig coast to coast

 Fair enough

 Seattle to Boston is the general route, real close.

 On Monday, June 17, 2013, wrote:

  On Mon, 17 Jun 2013 12:51:28 -0700, eric clark said:
 
   I may be needing  10 gig from the West Coast to the East Coast
 
  Might want to be more specific.  Catalina Island, CA to Buxton, NC
  (home of Cape Hatteras High School) will probably be way different
  than downtown LA to downtown Boston.
 






-- 
-george william herbert
george.herb...@gmail.com


Re: 10gig coast to coast

2013-06-17 Thread Jeff Kell
On 6/17/2013 10:32 PM, George Herbert wrote:
 Also, what are reliability and redundancy requirements.

 10 gigs of bare naked fiber is one thing, but if you need extra paths
 redundancy, figure that out now and specify.

 Is this latency, bandwidth, both?  Mission critical, business critical,
 less priority?  24x7x365, or subset of that, or intermittent only?

And are you looking for dark fiber or can you deal with a lambda?  Can
you supply tuned optics for the passive mux carriers?

Dark coast-to-coast is going to cost you a few appendages.  You may land
a lambda for a reasonable price depending on the endpoints; you'll need
an established carrier with DWDM gear on both ends.

Jeff




Re: 10gig coast to coast

2013-06-17 Thread Eric Clark
All of these questions are valid.

The guys who will use it would love to have line rate on the 10G for a
single conversation, but that's not going to happen. So there's a certain
amount of expectation management.

For the purpose we're proposing, this would be an additional link to an
existing office, specifically for test/lab traffic. We would run the lab
management on the existing link(s) and provide some sort of restricted
failover as well.

Sorry I'm not going into more detail; just trying to balance the need for
some info versus ... you know.

This link wouldn't need to be five nines, but with the office primary and
backup, we can provide the connectivity almost 100% of the time.

Thanks for all the comments everyone, they have been helpful.

Eric

On Jun 17, 2013, at 7:32 PM, George Herbert george.herb...@gmail.com wrote:

 Also, what are reliability and redundancy requirements.
 
 10 gigs of bare naked fiber is one thing, but if you need extra paths
 redundancy, figure that out now and specify.
 
 Is this latency, bandwidth, both?  Mission critical, business critical,
 less priority?  24x7x365, or subset of that, or intermittent only?
 
 
 On Mon, Jun 17, 2013 at 6:48 PM, Carlos Alcantar car...@race.com wrote:
 
 It's typically that the last mile portion of the circuit is going to cost
 you the most, so it's important to know those details.
 
 Carlos Alcantar
 Race Communications / Race Team Member
 1325 Howard Ave. #604, Burlingame, CA. 94010
 Phone: +1 415 376 3314 / car...@race.com / http://www.race.com
 
 
 
 
 
 -Original Message-
 From: eric clark cabe...@gmail.com
 Date: Monday, June 17, 2013 3:22 PM
 To: valdis.kletni...@vt.edu valdis.kletni...@vt.edu
 Cc: nanog@nanog.org nanog@nanog.org
 Subject: Re: 10gig coast to coast
 
 Fair enough
 
 Seattle to Boston is the general route, real close.
 
 On Monday, June 17, 2013, wrote:
 
 On Mon, 17 Jun 2013 12:51:28 -0700, eric clark said:
 
 I may be needing  10 gig from the West Coast to the East Coast
 
 Might want to be more specific.  Catalina Island, CA to Buxton, NC
 (home of Cape Hatteras High School) will probably be way different
 than downtown LA to downtown Boston.
 
 
 
 
 
 
 
 -- 
 -george william herbert
 george.herb...@gmail.com




Re: 10gig coast to coast

2013-06-17 Thread Eric Clark
I'm looking for options.

With dark fiber, obviously, I have the ultimate in options.

However, it's also the ultimate in cost, as you say.

The requirement we have is 10 gig of actual throughput. Precisely what
mechanism is used to transport it isn't all that important, though I'm
certain there will be complaints... :)

I'd LOVE to have me some DWDM; I've always wanted to run some of that gear.
But at that point, why stop at 10G?

On Jun 17, 2013, at 7:42 PM, Jeff Kell jeff-k...@utc.edu wrote:

 On 6/17/2013 10:32 PM, George Herbert wrote:
 Also, what are reliability and redundancy requirements.
 
 10 gigs of bare naked fiber is one thing, but if you need extra paths
 redundancy, figure that out now and specify.
 
 Is this latency, bandwidth, both?  Mission critical, business critical,
 less priority?  24x7x365, or subset of that, or intermittent only?
 
 And are you looking for dark fiber or can you deal with a lambda?  Can
 you supply tuned optics for the passive mux carriers?
 
 Dark coast-to-coast is going to cost you a few appendages.  You may land
 a lambda for a reasonable price depending on the endpoints, you'll need
 an established carrier with DWDM gear on both ends.
 
 Jeff
 
 




Re: 10gig coast to coast

2013-06-17 Thread Phil Fagan
I've had pretty good luck with CenturyLink's 10G wave offerings:
http://shop.centurylink.com/largebusiness/enterprisesolutions/products/ethernet/qwave.html

Ethernet hand-off at both sites with IPsec or GRE provided a pretty solid
environment. You should be able to use some UDP blasters to get a feel for
what the latency profile will look like for you. Otherwise you could always
thread the crap out of whatever it is you're transacting across the link
to make up for TCP's jackknifes, along with other tuning.



On Mon, Jun 17, 2013 at 9:51 PM, Eric Clark cabe...@gmail.com wrote:

 I'm looking for options.

 With dark fiber, obviously, I have the ultimate in options.

 However, its the ultimate in cost as you say.

 The requirement we have is 10gig of actual throughput. Precisely what
 mechanism is used to transport it isn't all that important, though I'm
 certain that there will be complaints... :)

 I'd LOVE to have me some DWDM, always wanted to run some of that gear, but
 at that point, why stop at 10G

 On Jun 17, 2013, at 7:42 PM, Jeff Kell jeff-k...@utc.edu wrote:

  On 6/17/2013 10:32 PM, George Herbert wrote:
  Also, what are reliability and redundancy requirements.
 
  10 gigs of bare naked fiber is one thing, but if you need extra paths
  redundancy, figure that out now and specify.
 
  Is this latency, bandwidth, both?  Mission critical, business critical,
  less priority?  24x7x365, or subset of that, or intermittent only?
 
  And are you looking for dark fiber or can you deal with a lambda?  Can
  you supply tuned optics for the passive mux carriers?
 
  Dark coast-to-coast is going to cost you a few appendages.  You may land
  a lambda for a reasonable price depending on the endpoints, you'll need
  an established carrier with DWDM gear on both ends.
 
  Jeff
 
 





-- 
Phil Fagan
Denver, CO
970-480-7618