Re: Is multihoming hard? [was: DNS amplification]

2013-03-24 Thread joel jaeggli

On 3/23/13 9:13 PM, Matt Palmer wrote:

On Sat, Mar 23, 2013 at 07:47:12PM -0700, Kyle Creyts wrote:

You do realize that there are quite a few people (home broadband
subscribers?) who just go do something else when their internet goes
down, right?

[...]


Will they really demand ubiquitous, unabridged connectivity?

When?

Probably around the time their phone, TV, books, shopping, and *life* are
all delivered over that connectivity.  Especially if they don't have any
meaningful local storage or processing, as everything has been delegated to
the cloud.
When the cable is down there's the Verizon USB stick (which at this point 
can be plugged into the router and serve the whole house); when Verizon is 
down there's the T-Mobile handset; when T-Mobile is down there's a work 
phone with AT&T.


When the cable/Verizon/T-Mobile/AT&T are all down for any significant 
length of time, I expect to be digging my neighbors out of the sorts of 
natural disasters that befall California and listening to the radio and 
maybe 2-meter.

In practice, however, I suspect that we as providers will just get a whole
lot better at providing connectivity, rather than have everyone work out how
to do fully-diverse BGP from their homes.
I'm going to be somewhat contrarian: connectivity/availability of cloud 
services is important; where you access them from, not so much. I doubt 
very much that reliance on the cloud drives multihoming for 
end-sites/consumers; it drives a demand for connectivity diversity so 
that one failure mode doesn't leave you stranded.


- Matt







Re: Is multihoming hard? [was: DNS amplification]

2013-03-24 Thread William Herrin
On Sat, Mar 23, 2013 at 10:47 PM, Kyle Creyts kyle.cre...@gmail.com wrote:
 Will they really demand ubiquitous, unabridged connectivity?

 When?

When the older generation that considers the Internet a side show dies off.

When your grandparents' power went out, they broke out candles and
kerosene lamps.

When yours goes out, you pull out flashlights and generators. And when
it stays out you book a motel room so your family can have air
conditioning and television.

For most folks under 30 and many who are older, Internet isn't a side
show, it's a way of life. An outage is like a power failure or the car
going kaput: a major disruption to life's flow.


This need won't be ubiquitous for two to three decades, but every year
between now and then the percentage of your customer base which
demands unabridged connectivity will grow.

What do you have in the pipeline to address that demand as it arrives?
BGP multihoming won't get the job done for the hundred million
households in North America, let alone the seven billion people in the
world.

Regards,
Bill Herrin


-- 
William D. Herrin  her...@dirtside.com  b...@herrin.us
3005 Crane Dr. .. Web: http://bill.herrin.us/
Falls Church, VA 22042-3004



Re: Is multihoming hard? [was: DNS amplification]

2013-03-24 Thread George Herbert




On Mar 23, 2013, at 7:47 PM, Kyle Creyts kyle.cre...@gmail.com wrote:

 Will they really demand ubiquitous, unabridged connectivity?

Let's back up.  End users do not as a rule* have persistent inbound 
connections.  If they have DSL and a Cable Modem they can switch manually (or 
with a little effort automatically) if one goes down.

* Servers-at-home-or-small-office is the use case for Owen's magic BGP box.  
Which is true for many of us and other core geeks, but not for an appreciable 
percentage of the populace.

I believe that full BGP to the end user is less practical for this use case 
than a geographically dispersed, external-facing BGP intermediary whose 
connectivity to the end-user servers is a full mesh of multi-provider, 
multi-physical-link VPNs.

It's a lot easier to manage and has less chance of a config goof blowing up 
bigger network neighbors.
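
To make that concrete, here's a minimal FRR-style sketch of one POP of such an 
intermediary, assuming the end site's /48 rides a VPN tunnel back to the POP 
and is originated there. The ASNs, the 2001:db8:100::/48 prefix, and the 
tun-cust1 interface name are placeholders, not anyone's actual deployment:

  ! One POP of the hypothetical intermediary (sketch only -- a real
  ! deployment wants explicit import/export policy). The end site is
  ! reached via its VPN tunnel; the same /48 is originated at the other
  ! POPs as well.
  ipv6 route 2001:db8:100::/48 tun-cust1
  !
  router bgp 64500
   no bgp ebgp-requires-policy
   neighbor 2001:db8:ffff:10::1 remote-as 64496
   neighbor 2001:db8:ffff:20::1 remote-as 64497
   address-family ipv6 unicast
    network 2001:db8:100::/48
    neighbor 2001:db8:ffff:10::1 activate
    neighbor 2001:db8:ffff:20::1 activate
   exit-address-family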

Every time I look at productizing this, though, the market's too small to 
support it.  Which probably means it's way too small for home BGP...


George William Herbert
Sent from my iPhone




Re: Is multihoming hard? [was: DNS amplification]

2013-03-24 Thread John Curran
On Mar 24, 2013, at 12:06 PM, William Herrin b...@herrin.us wrote:
 ...
 For most folks under 30 and many who are older, Internet isn't a side
 show, it's a way of life. An outage is like a power failure or the car
 going kaput: a major disruption to life's flow.

Yes, this is increasingly the case (and may not be as generational
as you think).

 This need won't be ubiquitous for two to three decades, but every year
 between now and then the percentage of your customer base which
 demands unabridged connectivity will grow.

I believe that the percentage which _expects_ unabridged connectivity 
today is quite high, but that does not necessarily mean actual _demand_
(i.e. folks who go out and make the necessary arrangements despite the
added cost and hassle...)

The power analogy might be apt here; I know many folks who have a home
UPS, a few who have a manual generator, and just one or two who did
the entire automatic home UPS/generator combo that's really necessary
for 100% reliable power.  This reflects a truism: while many people may
expect 100% reliable power today, the demand (in most areas) simply doesn't 
match.
 
 What do you have in the pipeline to address that demand as it arrives?

See above: increasing expectations do not necessarily equate to demand.

FYI,
/John

Disclaimer: My views alone.  Sent via less than 100% reliable networks.





Bandwidth.com SIP trunking issue

2013-03-24 Thread Nanog Mailing List

Hello,

I am curious if anyone else who may use Bandwidth.com for their SIP 
trunks is receiving the "All circuits are busy now" error message today.

This just started this morning, and nothing has changed in machine 
configuration since yesterday (when everything worked correctly).



Thanks for the heads up.

-Bret
Worldlink ISP
206-361-8785



Call for Papers: Energy-Efficient HPC Communication workshop (E2HPC2) 2013. Deadline: March 29th, 2013

2013-03-24 Thread Javier Garcia Blas
Apologies for multiple posting.
=


The first Energy-Efficient High Performance Computing & Communication workshop 
will be co-located with EuroMPI 2013 in Madrid. Energy-awareness is now a major 
topic for HPC systems. The goal of this workshop is to discuss the latest 
research on the impact of communication in such systems and the possible ways 
to leverage it. E2HPC2 solicits original articles, neither published nor under 
review, in the field of energy-aware communication in HPC environments. This 
workshop is co-located with EuroMPI, as MPI is the main communication interface 
in those environments.

http://www.irit.fr/~Georges.Da-Costa/e2hpc2.html

E2HPC2 will be held on Tuesday, 17th of September 2013.
Submission of full papers: March 29th, 2013

Program Chair: Georges Da Costa

Topics
--
Relevant topics for the workshop include, but are not limited to, the 
following:
 * Energy efficient MPI implementation
 * Performance/Energy tradeoffs in MPI programs
 * Energy evaluation and models of HPC applications and communication library
 * Energy-Aware large scale HPC runtime
 * Experience reports on complex applications from an energy/power point of 
view

Time-line
-
 * Submission of full papers: March 29th, 2013
 * Author notification: May 11th, 2013
 * Camera Ready papers due: June 15th, 2013
 * Workshop: September 17th, 2013

Program Committee
-
 * Georges Da Costa, IRIT/Toulouse III
 * Jesus Carretero, Universidad Carlos III de Madrid
 * Emmanuel Jeannot, INRIA
 * Jean-Marc Pierson, IRIT
 * Lars Dittmann, Technical University of Denmark, Department of Photonics Engineering
 * Anne-Cécile Orgerie, CNRS
 * Ariel Oleksiak, Poznan Supercomputing and Networking Center
 * Manuel F. Dolz, Depto. de Ingeniería y Ciencia de los Computadores, Universitat Jaume I
 * Laurent Lefevre, INRIA
 * Julius Žilinskas, Vilnius University
 * Francisco Almeida, La Laguna University
 * Davide Careglio, Universitat Politècnica de Catalunya

Submission
--
Contributors are invited to submit a full paper as a PDF document not exceeding 
6 pages in English. The title page should contain an abstract of at most 100 
words and five specific, topical keywords. The paper must be formatted 
according to the double-column ACM ICPS proceedings style. The use of LaTeX to 
prepare the contribution, and submission in camera-ready format, are strongly 
recommended. Style files can be found at 
http://www.acm.org/publications/icps-instructions/. Papers selected and 
presented during the workshop will be published in the ACM digital library.

All contributions will be fully peer reviewed by the program committee. Papers 
shall be submitted electronically via Easychair, see 
https://www.easychair.org/conferences/?conf=e2hpc22013.



Re: Sabey opens Intergate.Manhattan DC

2013-03-24 Thread joel jaeggli

On 3/23/13 11:20 AM, Jay Ashworth wrote:

1M sq ft datacenter in former VZN CO at 375 Pearl:

http://www.wallstreetandtech.com/it-infrastructure/worlds-largest-high-rise-data-center-ope/240151399

 From the story:


Intergate.Manhattan is not only one of the largest facilities [at 32 stories, 
all rentable space], but it also the only data center that is located in a city 
center, according to Sabey.

Totally wrong without the context.

Sabey Data Center Properties operates 13 facilities in the United 
States, totaling approximately 2 million square feet of data center 
space. Intergate.Manhattan is not only one of the largest facilities, 
but it also the only data center that is located in a city center, 
according to Sabey.


Plenty of other DCs are in city centers; it's the only one of theirs that is...

However, some financial services IT experts question locating a large data center in Manhattan. 
The fact that a building has integrity, and back up systems, is just one thing, says 
Steve Rubinow, CIO, Marketplaces Division, Thomson Reuters. But, as we learned on 9/11 and 
during Sandy, you also need to be able to get people and materiel to and from the building in times 
of an emergency. And after 9/11, it became very apparent that building data centers on an island, 
whether it is Manhattan island or any other island, it is a real bother. It creates an added layer 
of complexity.

Currently, the data center is in its first phase, providing 100,000 square feet 
of data center space and 5.4 Megawatts of power. Eventually, 
Intergate.Manhattan will accommodate 40 Megawatts of data center capacity on 
600,000 square feet of data center floor space, with the additional 400,000 
square feet dedicated to power, cooling and infrastructure.


Cheers,
-- jra





Re: Is multihoming hard? [was: DNS amplification]

2013-03-24 Thread Kyle Creyts
As an under-30 working in the industry, I have to say: when the power goes
out at home for a few days, we pull out the camping gear.

When our cable-based internet goes out, our life changes hardly at all. We
go for a walk, or hike, and do the things we would normally do. I can imagine
that a one-week outage would be slightly different, but I'm pretty sure that
most of the outages which would be resolved by multi-provider solutions like
those outlined here are ones that last less than 48 hours.

On Sun, Mar 24, 2013 at 9:06 AM, William Herrin b...@herrin.us wrote:

 On Sat, Mar 23, 2013 at 10:47 PM, Kyle Creyts kyle.cre...@gmail.com
 wrote:
  Will they really demand ubiquitous, unabridged connectivity?
 
  When?

 When the older generation that considers the Internet a side show dies off.

 When your grandparents' power went out, they broke out candles and
 kerosene lamps.

 When yours goes out, you pull out flashlights and generators. And when
 it stays out you book a motel room so your family can have air
 conditioning and television.

 For most folks under 30 and many who are older, Internet isn't a side
 show, it's a way of life. An outage is like a power failure or the car
 going kaput: a major disruption to life's flow.


 This need won't be ubiquitous for two to three decades, but every year
 between now and then the percentage of your customer base which
 demands unabridged connectivity will grow.

 What do you have in the pipeline to address that demand as it arrives?
 BGP multihoming won't get the job done for the hundred million
 households in North America, let alone the seven billion people in the
 world.

 Regards,
 Bill Herrin


 --
 William D. Herrin  her...@dirtside.com  b...@herrin.us
 3005 Crane Dr. .. Web: http://bill.herrin.us/
 Falls Church, VA 22042-3004




-- 
Kyle Creyts

Information Assurance Professional
BSidesDetroit Organizer


Re: Is multihoming hard? [was: DNS amplification]

2013-03-24 Thread Owen DeLong
I assume those people will not bother with any attempt to multihome in any form.
They are not, therefore, part of what is being discussed here.

Owen

On Mar 23, 2013, at 19:47 , Kyle Creyts kyle.cre...@gmail.com wrote:

 You do realize that there are quite a few people (home broadband 
 subscribers?) who just go do something else when their internet goes down, 
 right?
 
 There are people who don't understand the difference between a site being 
 slow and packet-loss. For many of these people, losing internet service 
 carries zero business impact, and relatively little life impact; they might 
 even realize they have better things to do than watch cat videos or scroll 
 through endless social media feeds.
 
 Will they really demand ubiquitous, unabridged connectivity?
 
 When?
 
 On Mar 23, 2013 12:58 PM, Owen DeLong o...@delong.com wrote:
 
 
  On Mar 23, 2013, at 12:12 , Jimmy Hess mysi...@gmail.com wrote:
 
   On 3/23/13, Owen DeLong o...@delong.com wrote:
   A reliable cost-effective means for FTL signaling is a hard problem 
   without
   a known solution.
  
   Faster than light signalling is not merely a hard problem.
   Special relativity doesn't provide that information may travel faster
   than the maximum
    speed C.  If you want to signal faster than light, then slow down the 
   light.
  
   An idiot-proof simple BGP configuration is a well known solution. 
   Automating
   it would be relatively simple if there were the will to do so.
  
   Logistical problems...  if it's a multihomed connection, which of the
   two or three providers manages it,  and gets to blame the other
   provider(s) when anything goes wrong: or are you gonna rely on the
   customer to manage it?
  
 
  The box could (pretty easily) be built with a Primary and Secondary 
  port.
 
  The cable plugged into the primary port would go to the ISP that sets the
  configuration. The cable plugged into the other port would go to an ISP
  expected to accept the announcements of the prefix provided by the ISP
  on the primary port.
 
  BFD could be used to illuminate a tri-color LED on the box for each port,
  which would be green if BFD state is good and red if BFD state is bad.
 
  At that point, whichever one is red gets the blame. If they're both green,
  then traffic is going via the primary and the primary gets the blame.
 
  If you absolutely have to troubleshoot which provider is broken, then
  start by unplugging the secondary. If it doesn't start working in 5 minutes,
  then clearly there's a problem with the primary regardless of what else
  is happening.
 
  Lather, rinse, repeat for the secondary.
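
The BFD half of that is off-the-shelf today; a minimal FRR-style sketch of the 
box's two upstream sessions might look like the following, with the box 
firmware polling session state to drive the LEDs. The ASNs and addresses are 
documentation placeholders, not part of the design described above:

  ! Sketch: per-neighbor BFD on the Primary and Secondary sessions.
  ! bgpd registers each peer with bfdd automatically; a BFD "down" on
  ! either session is what would turn that port's LED red, and the
  ! firmware can poll state with "show bfd peers".
  router bgp 64512
   ! Primary port
   neighbor 2001:db8:ffff:1::1 remote-as 64496
   neighbor 2001:db8:ffff:1::1 bfd
   ! Secondary port
   neighbor 2001:db8:ffff:2::1 remote-as 64511
   neighbor 2001:db8:ffff:2::1 bfd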
 
   Someone might be able to make a protocol that lets this happen, which
   would need to detect on a per-route basis any performance/connectivity
   issues, but I would say it's not any known implementation of BGP.
 
   A few additional DHCP options could actually cover it from the primary's
   perspective.
 
  For the secondary provider, it's a little more complicated, but could be
  mostly automated so long as the customer identifies the primary provider
  and/or provides an LOA for the authorized prefix from the primary to
  the secondary.
 
   The only complexity in the secondary case is properly filtering the
   announcement of the prefix assigned by the primary.
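
That filtering is the routine part. Here is a minimal FRR-style sketch of the 
secondary's customer-facing session, assuming the LOA covers a single /48 
assigned by the primary (all ASNs, addresses, and the 2001:db8:100::/48 prefix 
are documentation placeholders):

  ! Sketch: accept exactly one prefix from the customer -- the /48 the
  ! primary assigned and the LOA authorizes -- and drop anything else.
  ! (Outbound policy toward the customer is omitted for brevity.)
  ipv6 prefix-list CUST-FROM-PRIMARY seq 5 permit 2001:db8:100::/48
  !
  route-map CUST-IN permit 10
   match ipv6 address prefix-list CUST-FROM-PRIMARY
  !
  router bgp 64511
   neighbor 2001:db8:ffff:2::2 remote-as 64512
   address-family ipv6 unicast
    neighbor 2001:db8:ffff:2::2 activate
    neighbor 2001:db8:ffff:2::2 route-map CUST-IN in
    ! belt and suspenders: drop the session if more than one prefix
    ! ever shows up
    neighbor 2001:db8:ffff:2::2 maximum-prefix 1
   exit-address-family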
 
   1.   ISPs are actually motivated to prevent customer mobility, not 
   enable it.
  
   2.   ISPs are motivated to reduce, not increase the number of multi-homed
sites occupying slots in routing tables.
  
  This is not some insignificant thing.   The ISPs have to maintain
   routing tables
  as well;  ultimately the ISP's customers are in bad shape, if too many 
   slots
  are consumed.
  
 
   I never said it was insignificant. I said that solving the multihoming
   problem in this manner was trivial if there were the will to do so. I also
   said that the above were contributing factors in the lack of will to do so.
 
   How about
 3.  Increased troubleshooting complexity when there are potential
   issues or complaints.
  
 
  I do not buy that it is harder to troubleshoot a basic BGP configuration
  than a multi-carrier NAT-based solution that goes woefully awry.
 
  I'm sorry, I've done the troubleshooting on both scenarios and I have
  to say that if you think NAT makes this easier, you live in a different
  world than I do.
 
   The concept of a fool proof  BGP configuration is clearly a new sort of 
   myth.
 
  Not really.
 
  Customer router accepts default from primary and secondary providers.
  So long as default remains, primary is preferred. If primary default goes
  away, secondary is preferred.
 
  Customer box gets prefix (via DHCP-PD or static config or whatever
  either from primary or from RIR). Advertises prefix to both primary
  and secondary.
 
  All configuration of the BGP sessions is automated within the box
  other than static configuration of customer prefix (if static is desired).
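
For illustration, the configuration such a box generates could be as small as 
this FRR-style sketch; the ASNs, neighbor addresses, and the 2001:db8:100::/48 
prefix are placeholders, not what any shipping box would actually emit:

  ! Sketch of the auto-generated customer config: accept only a default
  ! route from each upstream, prefer the primary, and advertise exactly
  ! one prefix (the one assigned by the primary) to both.
  ipv6 route 2001:db8:100::/48 blackhole
  !
  router bgp 64512
   neighbor 2001:db8:ffff:1::1 remote-as 64496
   neighbor 2001:db8:ffff:2::1 remote-as 64511
   address-family ipv6 unicast
    network 2001:db8:100::/48
    neighbor 2001:db8:ffff:1::1 activate
    neighbor 2001:db8:ffff:1::1 route-map PRIMARY-IN in
    neighbor 2001:db8:ffff:1::1 prefix-list CUST-OUT out
    neighbor 2001:db8:ffff:2::1 activate
    neighbor 2001:db8:ffff:2::1 route-map SECONDARY-IN in
    neighbor 2001:db8:ffff:2::1 prefix-list CUST-OUT out
   exit-address-family
  !
  ipv6 prefix-list DEFAULT-ONLY seq 5 permit ::/0
  ipv6 prefix-list CUST-OUT seq 5 permit 2001:db8:100::/48
  !
  ! the default learned from the primary wins while the primary is up
  route-map PRIMARY-IN permit 10
   match ipv6 address prefix-list DEFAULT-ONLY
   set local-preference 200
  !
  route-map SECONDARY-IN permit 10
   match ipv6 address prefix-list DEFAULT-ONLY
   set local-preference 100

Everything in the sketch other than the prefix itself could plausibly be 
filled in from the DHCP-learned values described above.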
 
  

Last mile multihoming

2013-03-24 Thread Charles Wyble
So isn't the most likely interruption to service due to a last-mile physical 
media issue?  Or, say, a regional fiber cut that takes out the towers you can 
reach and the upstream connection from your cable and telco providers? IMO, at 
the edge, BGP mostly protects you from layer 8 failures (if you've done some 
basic best-practice configuration). In theory, issues below that (at least in 
the dist/core at L1 to L3) are handled by other redundancy protections hidden 
from you (HSRP, fiber ring with protected path, etc.).  

As for DFZ explosion, would MPLS / private AS / VRF be a workable approach for 
BGP at the edge? 

So I live in Austin. I have available to me two HFC providers (Grande and TWC) 
and AT&T. I also have Sprint/Clear, VZW, and T-Mo. I haven't done an analysis 
of WISP offerings (if any are on-list, please email me at char...@thefnf.org 
as I'm looking for a non-ILEC path for redundancy).

So let's break this down:

I only know of one AT&T CO in town. (I'm sure if there are more, you will let 
me know.) So the chances of that failing are decently high. Also, my experience 
with AT&T DSL has been mixed unless I'm homed direct to the CO. VZ DSL, OTOH, 
has always been rock solid. Also, AT&T is retiring DSL/copper. I refuse to use 
U-verse as they don't offer an unbundled modem/router or a way to do bridge 
mode. Oh, and no IPv6. (If you can put a modem in bridge mode and still have 
working TV, please let me know. I've not been able to find a solution.)

The chances of someone driving into the DSLAM serving my complex or the 
pedestal down the street are high (100%, as it has happened a couple of times).

So this means I need wireless backhaul. All of the providers I can reach 
colocate on exactly one tower, surrounded by a chain-link fence, across from a 
Walmart. (I'm in north Austin near Cameron and 183, for anyone who lives in 
town.) The chance of the fiber serving that tower being cut is unknown, but 
not outside the realm of possibility. Or, say, a Walmart big rig overcorrecting 
due to a driver coming around the blind curve near there and plowing into the 
tower. Etc.

So my best bet for uninterrupted connectivity seems to be running two OpenVPN 
tunnels on my home edge pfSense router, each to an endpoint in a colo.

I already have a full rack of gear at JoesDatacenter in KC, and it's fully 
redundant. I also run all of my web/mail/software dev from there, so it's not 
solely for BGP purposes. Most folks, I imagine, may have their stuff in a colo 
as well and not want to run that at home. (I started a thread on that once 
upon a time.) It so happens that I have various things which I can't run there 
(RF equipment which I need to frequently reflash and move around). So running 
BGP on my colo gear and announcing a /48 that I've assigned to my house seems 
like a good idea. And I can easily cross-connect to KCIX and have lots of BGP 
fun. The latency would be a bit high, but it already is, and I don't have any 
redundant connectivity.
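
For what it's worth, the colo side of that is only a couple of stanzas; here's 
a minimal FRR-style sketch assuming the house /48 is 2001:db8:abc::/48, the 
OpenVPN tunnel shows up as tun0, and a single transit session (all ASNs and 
addresses are placeholders). It also assumes the tun interface actually goes 
down when the VPN drops, so the announcement is withdrawn with it:

  ! Sketch: originate the /48 assigned to the house from the colo router.
  ! The static route via the tunnel interface puts the /48 in the RIB so
  ! the network statement fires; if the tun interface goes down, the
  ! static route (and the announcement) goes away with it.
  ipv6 route 2001:db8:abc::/48 tun0
  !
  router bgp 64502
   ! sketch only -- a real config wants explicit import/export policy
   no bgp ebgp-requires-policy
   neighbor 2001:db8:ffff:30::1 remote-as 64499
   address-family ipv6 unicast
    network 2001:db8:abc::/48
    neighbor 2001:db8:ffff:30::1 activate
   exit-address-family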

OK. So that's great. Now who is my secondary? Is a VPS at, say, Linode 
sufficient for a secondary BGP announcer? Will they sell me BGP-enabled 
transit? Will other VPS providers? Do I need a box in a rack at a local NAP? 
Is there an IX in Austin, or should I rack a box in Dallas?

Once I have two providers, then I can easily use pfSense multi-WAN failover, 
and if a circuit goes down, life goes on as I rely on BGP to detect the link 
failure and handle it. Yes? No? Maybe?

So, to me, this seems like a solved problem. Run multiple diverse (carrier, 
media type) circuits to your edge, put in a pfSense (ASA, whatever is your 
poison, but I like pfSense the best for multi-WAN failover), OpenVPN (I can't 
stand IPsec) to the colo, cross-connect to ... oh, I dunno, he.net :) BGP for 
free. Done. 

For about... hmmm... $500.00 a month? (Many colos might not do BGP with you 
for less than a quarter rack, and I presume anyone serious enough about 
uninterrupted service on a reasonable budget can do $500.00 a month.) 

This discussion on SOHO multihoming has been fascinating to me, and I wanted 
to go through a thought exercise for what I imagine is a common scenario (main 
gear in a BGP-enabled SP, office gear needing to be reachable by remote 
personnel in a non-BGP-enabled SP).

Would love to hear what you folks think. 



--
Charles Wyble 
char...@thefnf.org / 818 280 7059 
CTO Free Network Foundation (www.thefnf.org)