Re: Yahoo is now recycling handles

2013-09-04 Thread Leo Bicknell

On Sep 3, 2013, at 10:47 PM, Peter Kristolaitis alte...@alter3d.ca wrote:

 The issue was studied thoroughly by a committee of MBAs who, after extensive 
 thought (read: 19 bottles of scotch), determined that there was money to be 
 made.
 
 whatcouldpossiblygowrong?

Apparently it was implemented by a group of low-bid programmers in a far off 
land.

I have, err, had, a Yahoo! account I used for two things: getting e-mail from 
Yahoo! groups and accessing Flickr.  I was on Flickr no more than two or three 
months ago to fix a picture someone noticed was in the wrong album.

When I saw this I thought I should log in again to reset my one year ticker.  
Off to www.yahoo.com and click sign in.

Enter userid, enter password.

Drops me to a CAPTCHA screen, that's odd, never seen that before, but ok.

Enter CAPTCHA and it redirects me to "https://edit.yahoo.com/forgot", which 
when reached from said CAPTCHA screen renders as a 100% blank page.

That's some fine web coding.

I went to the flickr site, tried to log in.  At least there it tells me my 
userid is in the process of being recycled.  No option to recover.

Try creating a new account with the same userid, sorry, it's in use.

So as far as I can tell:
  - The "must be inactive for one year" rule is BS, and/or logging into Flickr 
didn't count in my case.
  - No notifications are sent, so if you're a person who is there for things 
like Yahoo groups and forwards your e-mail elsewhere you may be using the 
service in a way that generates no logs.
  - There is no way to get an account back that is in the recycling phase, 
which is frankly stupid.

As a result Yahoo! has lost a Flickr and Groups member, and I'm not sure I see 
any reason to sign up again at this point.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/







Re: Yahoo is now recycling handles

2013-09-04 Thread Leo Bicknell

I've got to apologize publicly to Yahoo! here as part of my issue was my own 
stupidity.  It appears in the past I've had multiple Yahoo! ID's and I was 
trying to use the wrong one, one that may have gone away a long time ago, 
rather than my still active ID.  Some helpful people at Yahoo got me 
straightened out on that point.  My apologies for disparaging Yahoo! when it 
was my own fault.

There's still the much more minor point that when I tried to self serve I 
ended up at a blank page on the Yahoo! web site, hopefully they will figure 
that out as well.

On Sep 4, 2013, at 8:36 AM, Leo Bicknell bickn...@ufp.org wrote:

 Apparently it was implemented by a group of low-bid programmers in a far off 
 land.
 
 I have, err, had, a Yahoo! account I used for two things: getting e-mail from 
 Yahoo! groups and accessing Flickr.  I was on Flickr no more than two or three 
 months ago to fix a picture someone noticed was in the wrong album.
 
 When I saw this I thought I should log in again to reset my one year ticker.  
 Off to www.yahoo.com and click sign in.
 
 Enter userid, enter password.
 
 Drops me to a CAPTCHA screen, that's odd, never seen that before, but ok.
 
 Enter CAPTCHA and it redirects me to "https://edit.yahoo.com/forgot", which 
 when reached from said CAPTCHA screen renders as a 100% blank page.
 
 That's some fine web coding.
 
 I went to the flickr site, tried to log in.  At least there it tells me my 
 userid is in the process of being recycled.  No option to recover.
 
 Try creating a new account with the same userid, sorry, it's in use.
 
 So as far as I can tell:
  - The "must be inactive for one year" rule is BS, and/or logging into Flickr 
 didn't count in my case.
  - No notifications are sent, so if you're a person who is there for things 
 like Yahoo groups and forwards your e-mail elsewhere you may be using the 
 service in a way that generates no logs.
  - There is no way to get an account back that is in the recycling phase, 
 which is frankly stupid.
 
 As a result Yahoo! has lost a Flickr and Groups member, and I'm not sure I 
 see any reason to sign up again at this point.


-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/









Re: IP Fragmentation - Not reliable over the Internet?

2013-08-27 Thread Leo Bicknell

On Aug 27, 2013, at 6:24 AM, Saku Ytti s...@ytti.fi wrote:

 On (2013-08-27 10:45 +0200), Emile Aben wrote:
 
 224 vantage points, 10 failed.
 
 48 byte ping:   42 out of 3406 vantage points fail (1.0%)
 1473 byte ping: 180 out of 3540 vantage points fail (5.1%)
 
 Nice, it's starting to almost sound like data rather than anecdote, both
 tests implicate 4-5% having fragmentation issues.
 
 Much larger number than I intuitively had in mind.


I'm pretty sure the failure rate is higher, and here's why.

The #1 cause of fragments being dropped is firewalls.  Too many admins 
configuring a firewall do not understand fragments or how to properly put them 
in the rules.

Where do firewalls exist?  Typically protecting things with public IP space, 
that is, (some) corporate networks and banks of content servers in data centers. 
This also includes on-box firewalls for Internet servers; ipfw or iptables on 
the server is just as likely to be part of the problem.

Now, where are RIPE probes?  Most RIPE probes are probably either with somewhat 
clueful ISP operators, or on clueful Internet engineers' personal connectivity 
(home, or perhaps a box in a colo).  RIPE probes have already significantly 
self-selected for people who like non-broken connectivity.  What's more, the 
ping test was probably to some known good host(s), rather than a broad 
selection of Internet hosts, so effectively it was only testing the probe end, 
not both ends.

Basically, I see RIPE probes as an almost best-case scenario for this sort of 
broken behavior.

I bet the ICSI Netalyzr folks have somewhat better data, perhaps skewed a bit 
towards broken connections as people run Netalyzr when their connection is 
broken!  I suspect reality is somewhere between those two bookends.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/









Re: WaPo writes about vulnerabilities in Supermicro IPMIs

2013-08-16 Thread Leo Bicknell

On Aug 15, 2013, at 9:18 PM, Brandon Martin lists.na...@monmotha.net wrote:

 As to why people wouldn't put them behind dedicated firewalls, imagine 
 something like a single-server colo scenario. 

I have asked about this on other lists, but I'll ask here.

Does anyone know of a small (think Raspberry Pi sized) device that is:

  1) USB powered.
  2) Has two ethernet ports.
  3) Runs some sort of standard open source OS?

You might already see where I'm going with this, a small 2-port firewall device 
sitting in front of IPMI, and powered off the USB bus of the server.  That way 
another RU isn't required.  Making it fit in an expansion card slot and using 
an internal USB header might be interesting too, so from the outside it wasn't 
obvious what it was.

I would actually like to see the thing respond only on the USB side, power + 
console, letting you console in and change L2 firewall rules.  No IP stack on 
it whatsoever.  That would be highly secure and simple.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/








Re: How big is the Internet?

2013-08-15 Thread Leo Bicknell

On Aug 14, 2013, at 3:27 PM, Patrick W. Gilmore patr...@ianai.net wrote:

 Once you define what you mean by how big is the Internet, I'll be happy to 
 spout off about how big it is. :)

Arbitrary definition time: An Internet host is one that can send and receive 
packets directly with at least one far-end device addressed out of RIR-managed 
IPv4 or IPv6 space.

That means behind a NAT counts, behind a firewall counts, but a true private 
network (two PC's into an L2 switch with no other connections) does not, even 
if they use IP protocols.  Note that devices behind a pure L3 proxy do not 
count, but the L3 proxy itself counts.

Now, take those Internet hosts and create a graph where each node has a binary 
state: forwards packets or does not forward packets.  The result is a set of edge 
nodes that do not forward packets.  The simple case is an end user PC; the 
complex case may be something like a server in a data center that, while 
connected to multiple networks, does not forward any packets and is an edge 
node on all of the networks to which it is attached.

To me, all Internet traffic is the sum of all inbound traffic on all edge nodes. 
Note that if I did my definition carefully, out = in - (packet loss + 
undeliverable), which means on the scale of the global Internet I suspect out 
== in, when rounded off.
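
As a toy illustration of that bookkeeping (a sketch only; the host names, byte 
counts, and the "forwards packets" flags below are all invented):

# Toy illustration of the definition above: total "Internet traffic" is the
# sum of inbound traffic measured at every edge node (a node that does not
# forward packets).  All names and numbers are invented for the example.
hosts = {
    # name: (forwards_packets, inbound_bytes)
    "home-pc":    (False, 2_000_000),
    "home-nas":   (False, 1_500_000),
    "cdn-server": (False, 9_000_000),
    "isp-router": (True, 11_000_000),   # forwards packets, so not an edge node
}

total_internet_traffic = sum(
    inbound for forwards, inbound in hosts.values() if not forwards
)
print(total_internet_traffic)   # 12,500,000: only the edge nodes are counted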

So please, carry on and spout off as to how big that is; I think an estimate 
would be very interesting.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/








Re: How big is the Internet?

2013-08-15 Thread Leo Bicknell

On Aug 15, 2013, at 1:27 PM, Patrick W. Gilmore patr...@ianai.net wrote:

 My laptop at home is an edge node under the definition above, despite being 
 behind a NAT. My home NAS is as well. When I back up my laptop to my NAS over 
 my home network, that traffic would be counted as Internet traffic by your 
 definition.
 
 I have a feeling that does not come close to matching the mental model most 
 people have in their head of Internet traffic. But maybe I'm confused.

It matches my mental model.  Your network is connected to the Internet, that's 
traffic between two hosts, it's Internet traffic.

Let's take the same two machines, but I own one and you own one, and let's put 
them on the same network behind a NAT just like your home, but at a coffee 
shop.  Rather than backups we're both running bit torrent and our two machines 
exchange data.

That's Internet traffic, isn't it?  Two unrelated people talking over the 
network?  They just happen to be on the same LAN.

My definition was arbitrary, so feel free to argue another arbitrary definition 
is more useful in some way, but for my arbitrary definition you've applied the 
rules correctly, and I would argue it's the right way to think about things.  In 
a broad English sense, IP packets traversing an Internet-connected network are 
Internet traffic.

It's all graph cross sections.  Peering volume totals a set of particular 
links in the graph, omitting traffic from your laptop to your file server, or 
your NAS to your laptop.  My model attempts to isolate every edge on the graph, 
and generate the total sum of IP traffic crossing any Internet connected 
network, which would always include all forms of local caches (Akamai, Google, 
Netflix) and even your NAT.  I think that's a more interesting number, and a 
number that's easier to count and defend than say a peering or backbone 
number.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/








Re: Service provider T1/PPP question

2013-06-28 Thread Leo Bicknell

On Jun 28, 2013, at 7:26 PM, Mike mike-na...@tiedyenetworks.com wrote:

 I am a clec with colocated facilities, and my targets are rural unserved 
 areas where none of the factors above are considerations. I just want to 
 connect with anyone who's done this and has a qualified technical opinion on 
 optimal deployment strategies; the business considerations are already done.

I find this fascinating, but here's the scoop.

When T1's were the bee's knees (why is that a saying, anyway?) they were sold 
to what today we would call a business customer.  The concept of residential 
users as we know them now didn't really exist during T1's heyday.  Also, during 
this time period MLPPP for high speed (yes, T1's qualified) wasn't really a 
thing; rather, external multiplexers and HSSI (remember those fun cables?) into 
a DS3 interface were the thing.  Remember providers that did Frame Relay over 
NxT1 just so they could mux the multiple customers onto DS-3/OC-3 into 
routers?  Fun times...not.  Anyway, the customer would have a router, like a 
real router, well, a 25xx anyway, and know how to configure it.

That doesn't seem to be the world you're describing though, which is why I 
think you're getting crickets on the mailing list.

Here's my $0.02, this is going to work best if you go somewhat old school in 
the config.  HDLC or PPP over a single T1 to any equipment that supports either 
should more or less work just fine.  Static assignment will work just fine, PPP 
learned assignments should work more or less just fine.  Any of the things that 
can channelize DS-3/OC-3/OC-12/OC-48 down to T1 should work just dandy, it's 
all a matter of how many you have and your particular economics.  Bonded T1's 
is where it gets interesting.  MLPPP should be possible with modern hardware, 
but honestly it got a workout on devices that did things like ISDN, not T1's.  
Still, if you choose carefully I don't see any reason why MLPPP shouldn't be 
reliable and work just fine in today's world.

If you're willing to do without modern features, you should be able to pick up 
a ton of gear that does all this for dirt cheap.  A 7513 with channelized DS-3 
cards is still quite spiffy for terminating static routed T1's for instance, 
and people may even pay you to take them at this point. :)  The CPE will be more 
interesting; there are several vendors that still make CPE with T1 interfaces, 
but that's much rarer.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/









Re: Google's QUIC

2013-06-28 Thread Leo Bicknell

On Jun 28, 2013, at 5:24 PM, Octavio Alvarez alvar...@alvarezp.ods.org wrote:

 That's the point exactly. Google has more power and popularity to
 influence adoption of a protocol, just like with SPDY and QUIC.

This is the main reason why I'm very supportive of this effort.  I'm a bit 
skeptical of what I have read so far, but I know that it's nearly impossible to 
tell how these things really work from theory and simulations.  Live, 
real-world testing, competing with all sorts of other flows, is required.

Google, with their hands in both things like www.google.com and Chrome, is in an 
interesting position to implement both the server and client sides of these 
protocols, and turn them on and off at will.  They can do the real world 
tests, tweak, report on them, and advance the state of the art.

So for now I'll reserve judgement on this particular protocol, while 
encouraging Google to do more of this sort of real world testing at the 
protocol level.

Now, how about an SCTP test? :)

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/









Are undersea cables tapped before they get to ISP's? [was Re: Security over SONET/SDH]

2013-06-25 Thread Leo Bicknell

On Jun 25, 2013, at 7:38 AM, Phil Fagan philfa...@gmail.com wrote:

 Are these private links or customer links? Why encrypt at that layer? I'm
 looking for the niche usecase.

I was reading an article about the UK tapping undersea cables 
(http://www.guardian.co.uk/uk/2013/jun/21/gchq-cables-secret-world-communications-nsa)
 and thought back to my time at AboveNet and dealing with undersea cables.  My 
initial reaction was doubt: there are thousands of users on the cables, ISP's 
and non-ISP's, and working with all of them to split off the data would be 
insanely complicated.  Then I read some more articles that included quotes like:

  Interceptors have been placed on around 200 fibre optic cables where they 
come ashore. This appears to have been done with the secret co-operation 
(http://www.wired.co.uk/news/archive/2013-06/24/gchq-tempora-101)

Which made me immediately realize it would be far simpler to strong arm the 
cable operators to split off all channels before connecting them to the 
customer.  If done early enough they could all be split off as 10G channels, 
even if they are later muxed down to lower speeds reducing the number of 
handoffs to the spy apparatus.

Very few ISP's ever go to the landing stations, typically the cable operators 
provide cross connects to a small number of backhaul providers.  That makes a 
much smaller number of people who might ever notice the splitters and taps, and 
makes it totally transparent to the ISP.  But the big question is, does this 
happen?  I'm sure some people on this list have been to cable landing stations 
and looked around.  I'm not sure if any of them will comment.

If it does, it answers Phil's question.  An ISP encrypting such a link end to 
end foils the spy apparatus for their customers, protecting their privacy.  The 
US for example has laws that provide greater authority to tap foreign 
communications than domestic, so even though the domestic links may not be 
encrypted that may still pose a decent roadblock to siphoning off traffic.

Who's going to be the first ISP that advertises they encrypt their links that 
leave the country? :) 

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/









Re: Security over SONET/SDH

2013-06-25 Thread Leo Bicknell

On Jun 25, 2013, at 6:34 PM, s...@wwcandt.com wrote:

 I believe that if you encrypted your links sufficiently that it was
 impossible to siphon the wanted data from your upstream the response would
 be for the tapping to move down into your data center before the crypto.
 
 With CALEA requirements and the Patriot Act they could easily compel you
 to give them a span port prior to the crypto.

The value here isn't preventing <insert federal agency> from getting the data, 
as you point out there are multiple tools at their disposal, and they will 
likely compel data at some other point in the stack.  The value here is 
increasing the visibility of the tapping, making more people aware of how much 
is going on.  Forcing the tapping out of the shadows and into the light.

For instance if my theory that some cables are being tapped at the landing 
station is correct, there are likely ISP's on this list right now that have 
transatlantic links /and do not know that they are being tapped/.  If the links 
were encrypted and they had to serve the ISP directly to get the unencrypted 
data or make them stop encrypting, that ISP would know their data was being 
tapped.

It also has the potential to shift the legal proceedings to other courts.  The 
FISA court can approve tapping a foreign cable as it enters the country in near 
perfect, unchallengeable secrecy.  If encryption moved that to be a regular 
federal warrant under CALEA there would be a few more avenues for challenging 
the order legally.

People can't challenge what they don't know about.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/









Re: net neutrality and peering wars continue

2013-06-20 Thread Leo Bicknell

On Jun 20, 2013, at 5:47 PM, Robert M. Enger na...@enger.us wrote:

 Perhaps last-mile operators should
 A) advertise each of their metropolitan regional systems as a separate AS
 B) establish an interconnection point in each region where they will accept 
 traffic destined for their in-region customers without charging any fee

C) Buck up and carry the traffic their customers are paying them to carry.

Lest I just sound like a complainer, I actually think this makes rational 
business sense.

The concept of peering was always "equal benefit," not "equal cost."  No one 
ever compares the price of building last mile transport to the cost of building 
huge data centers all over with content close to the users.  The whole 
bit-mile thing represents an insignificant portion of the cost, long haul (in 
large quantities) is dirt cheap compared to last mile or data center build 
costs.  If you think of a pure content play peering with a pure eyeball play 
there is equal benefit, in fact symbiosis, neither could exist without the 
other.  The traffic flow will be highly asymmetric.

Eyeball networks also artificially cap their own ratios with their products.  
Cable and DSL are both 3x-10x down, x up products.  Their TOS policies prohibit 
running servers.  Any eyeball network with an asymmetric edge technology and 
no-server TOS need only look in the mirror to see why their aggregate ratio is 
hosed.

Lastly, simple economics.   Let's theorize about a large eyeball network with 
say 20M subscribers, and a large content network with say 100G of peering 
traffic to go to those subscribers.  

* Choice A would be to squeeze the peer for bad ratio in the hope of getting 
them to pay for, or be behind some other transit customer.  Let's be generous 
and say $3/meg/month, so the 100G of traffic might generate $300,000/month of 
revenue.  Let's even say you can squeeze 5 CDN's for that amount, $1.5M/month 
total.

* Choice B would be to squeeze the subscribers for more revenue to carry the 
100G of imbalanced traffic.  Perhaps an extra $0.10/sub/month.  That would be 
$2M/month in extra revenue.
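
A quick back-of-the-envelope check of those two choices, using only the 
hypothetical numbers above (a sketch, not anyone's real pricing):

# Back-of-the-envelope check of the two choices above, using the hypothetical
# numbers from this message (not anyone's real pricing).
peering_traffic_mbps = 100_000      # 100G of peering traffic, in Mbps
price_per_meg = 3.00                # $/Mbps/month squeezed out of a CDN
cdns_squeezed = 5
choice_a = peering_traffic_mbps * price_per_meg * cdns_squeezed

subscribers = 20_000_000
extra_per_sub = 0.10                # extra $/subscriber/month
choice_b = subscribers * extra_per_sub

print(f"Choice A: ${choice_a:,.0f}/month")   # Choice A: $1,500,000/month
print(f"Choice B: ${choice_b:,.0f}/month")   # Choice B: $2,000,000/month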

Now, consider the customer satisfaction issue.  Would your broadband customers 
pay an extra $0.10 per month if Netflix and Amazon streaming never went out in 
the middle of a movie?  Would they move up to a higher tier of service?

A smart end user ISP would find a way to get uncongested paths to the content 
their users want, and make it rock solid reliable.  The good service will more 
than support not only cost recovery, but higher revenue levels than squeezing 
peers.  Of course we have evidence that most end user ISP's are not smart, they 
squeeze peers and have some of the lowest customer satisfaction rankings of not 
just ISP's, but all service providers!  They want to claim consumers don't want 
Gigabit fiber, but then congest peers so badly there's no reason for a consumer 
to pay for more than the slowest speed.

Squeezing peers is a prime case of cutting off your nose to spite your face.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/









Re: net neutrality and peering wars continue

2013-06-19 Thread Leo Bicknell

On Jun 19, 2013, at 6:03 PM, Randy Bush ra...@psg.com wrote:

 as someone who does not really buy the balanced traffic story, some are
 eyeballs and some are eye candy and that's just life, seems like a lot
 of words to justify various attempts at control, higgenbottom's point.

I agree with Randy, but will go one further.

Requiring a balanced ratio is extremely bad business because it incentivizes 
your competitors to compete in your home market.

You're a content provider who can't meet ratio requirements?  You go into the 
eyeball space, perhaps by purchasing an eyeball provider, or creating one.

Google Fiber, anyone?

Having a requirement that's basically "you must compete with me on all the 
products I sell" is a really dumb peering policy, but that's how the big guys 
use ratio.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/









Re: net neutrality and peering wars continue

2013-06-19 Thread Leo Bicknell

On Jun 19, 2013, at 7:31 PM, Benson Schliesser bens...@queuefull.net wrote:

 What do you mean not really buy the balanced traffic story? Ratio can 
 matter when routing is asymmetric. (If costs can be approximated as distance 
 x volume, forwarding hot-potato places a higher burden on the recipient...) 
 And we've basically designed protocols that route asymmetrically by default. 
 Measuring traffic ratios is the laziest solution to this problem, and thus 
 the one we should've expected.

That was a great argument in 1993, and was in fact largely true in the system that 
existed at that time.  However, today what you describe no longer really makes 
any sense.

While it is technically true that the protocols favor asymmetric routing, your 
theory is based on the idea that a content site exists in one location, and 
does not want to optimize the user experience.  That really doesn't describe 
any of the large sources/sinks today.  When you access www.majorwebsite.com 
today a lot of science (hi Akamai!) goes into directing users to servers that 
are close to them, trying to optimize things like RTT to improve performance.  
Content providers are generally doing the exact opposite of hot potato, they 
are cold potatoing entire racks into data centers close to the eyeballs at 
great cost to improve performance.

But to the extent a few people still have traffic patterns where they can 
asymmetrically route a large amount of traffic, the situation has also changed. 
 In 1993 this was somewhat hard to detect, report, and share.  Today any major 
provider has a netflow infrastructure where they can watch this phenomenon in 
real time; no one is pulling the wool over their eyes.  There are also plenty 
of fixes, for instance providers can exchange MED's to cold potato traffic, or 
could charge a sliding fee to recover the supposed differences.

The denial of peering also makes bad business sense from a dollars perspective. 
 Let's say someone is routing asymmetrically and causing an eyeball network extra 
long haul transport.  Today they deny them peering due to ratio.  The chance 
that the content network will buy full-priced transit from the eyeball network? 
 Zero.  It doesn't happen.  Instead they will buy from some other provider who 
already has peering, and dump off the traffic.  So the eyeball network still 
gets the traffic, gets it hidden in a larger traffic flow where they can't 
complain if it comes from one place, and gets $0 for the trouble.

A much better business arrangement would be to tie a sliding fee to the ratio.  
Peering up to 2:1 is free.  Up to 4:1 is $0.50/meg, up to 6:1 is $1.00/meg, up 
to 10:1 is $1.50 a meg.  Eyeball network gets to recover their long haul 
transport costs, it's cheaper to the CDN than buying transit, and they can 
maintain a direct relationship where they can keep up with each other using 
things like Netflow reporting.  While I'm sure there's some network somewhere 
that does a sane paid peering product like this, I've sure never seen it.  For 
almost all networks it's a pure binary decision, free peering or full priced 
transit.
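
A sketch of what that sliding fee looks like in practice, using the brackets 
above (the billing mechanics here are invented for illustration):

# Sketch of the sliding-fee paid peering idea above.  The ratio brackets and
# per-meg prices are the hypothetical ones from this message; the billing
# mechanics are invented for illustration.
def monthly_peering_fee(out_in_ratio, traffic_mbps):
    if out_in_ratio <= 2:
        price = 0.00            # balanced enough: free peering
    elif out_in_ratio <= 4:
        price = 0.50
    elif out_in_ratio <= 6:
        price = 1.00
    elif out_in_ratio <= 10:
        price = 1.50
    else:
        return None             # beyond 10:1, go buy transit
    return traffic_mbps * price

print(monthly_peering_fee(1.8, 100_000))   # 0.0      (free peering)
print(monthly_peering_fee(5.2, 100_000))   # 100000.0 ($1.00/meg on 100G)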

Quite frankly, if the people with MBA's understood the technical aspects of 
peering all of the current peering policies would be thrown out, and most of 
the peering coordinators fired.  Settlement is a dirty word in the IP realm, 
but the basic concept makes sense.  What was a bad idea was the telco idea of 
accounting for every call, every bit of data.  Remember AT&T's 900-page iPhone 
bills when they first came out?  Doing a settlement based on detailed traffic 
accounting would be stupid, but doing settlements based on traffic levels, and 
bit-mile costs would make a lot of sense, with balanced traffic being free.

Oh, and guess what, if people interconnected between CDN and eyeball networks 
better the users would see better experiences, and might be more likely to be 
satisfied with their service, and thus buy more.  It's good business to have a 
product people like.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/









Re: huawei

2013-06-13 Thread Leo Bicknell

On Jun 13, 2013, at 11:35 AM, Patrick W. Gilmore patr...@ianai.net wrote:

 Also, I find it difficult to believe Huawei has the ability to do DPI or 
 something inside their box and still route at reasonable speeds is a bit 
 silly. Perhaps they only duplicate packets based on source/dest IP address or 
 something that is magically messaged from the mother ship, but I am dubious.

This could be a latent, unused feature from _any_ vendor.

A hard coded backdoor password and username.  A sequence of port-knocking that 
enables ssh on an alternate port with no ACL.  Logins through that mechanism 
not in syslog, not in the currently logged in user table, perhaps the 
process(es) hidden from view.

Do we really trust Cisco and Juniper more than Huawei? :)

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/








Re: chargen is the new DDoS tool?

2013-06-11 Thread Leo Bicknell

On Jun 11, 2013, at 10:39 AM, Bernhard Schmidt be...@birkenwald.de wrote:

 This seems to be something new. There aren't a lot of systems in our
 network responding to chargen, but those that do have a 15x
 amplification factor and generate more traffic than we have seen with
 abused open resolvers.

The number is non-zero?  In 2013?

While blocking it at your border is probably a fine way of mitigating the 
problem, I would recommend doing an internal nmap scan for such things, finding 
the systems that respond, and talking with their owners.
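
If you would rather script that check than wait for a full scan, a quick probe 
of TCP chargen (port 19) is only a few lines.  This is a minimal sketch; the 
host list and timeout are placeholders:

# Minimal check for hosts answering TCP chargen (port 19).  The host list and
# timeout are placeholders; a UDP check would need a small payload sent first.
import socket

def answers_chargen(host, timeout=2.0):
    try:
        with socket.create_connection((host, 19), timeout=timeout) as s:
            s.settimeout(timeout)
            return len(s.recv(512)) > 0     # chargen starts streaming immediately
    except OSError:
        return False

for host in ["192.0.2.10", "192.0.2.11"]:   # replace with your own ranges
    if answers_chargen(host):
        print(host, "answers chargen -- go talk to its owner")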

Please report back to NANOG after talking to them letting us know if the owners 
were still using SunOS 4.x boxes for some reason, had accidentally enabled 
chargen, or if some malware had set up the servers.  Inquiring minds would like 
to know!

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/









Re: Single AS multiple Diverse Providers

2013-06-10 Thread Leo Bicknell

On Jun 10, 2013, at 12:08 PM, Patrick W. Gilmore patr...@ianai.net wrote:

 however, providers a/b at site1 do not send us the two /24s from
 site b..
 
 This is probably incorrect.
 
 The providers are almost certainly sending you the prefixes, but your router 
 is dropping them due to loop detection. To answer your later question, this 
 is the definition of 'standard' as it is written into the RFC.
 
 Use the allow-as-in style command posted later in this thread to fix your 
 router.


I've done this many places, and find allow-as-in can be, uh, problematic. :)  
Everyone says to just turn it on, but it's possible to get some strange paths 
in your table that way, in some circumstances.

For most users having a default route is just as good of a solution.  Each site 
will have a full table minus the small number of prefixes at the other site, 
and a static default will get packets to your upstream that has those routes.  
Don't like a default?  Just static the netblocks at the other side to a 
particular provider.  Already have a default because you weren't taking full 
tables?  You're good to go, no special config needed.

Of course it depends on what your site-to-site requirements are, if they are 
independent islands or talking to each other with critical data all the time.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/









Re: Single AS multiple Diverse Providers

2013-06-10 Thread Leo Bicknell

On Jun 10, 2013, at 2:22 PM, Patrick W. Gilmore patr...@ianai.net wrote:

 Is it enough to keep the standard? Or should the standard have a specific 
 carve out, e.g. for stub networks only, not allowing islands to provide 
 transit. Just a straw man.

For the moment I'm not going to make a statement one way or another if this 
should be enshrined in an RFC or not...

I would like to be able to apply a route-map to allowas-in behavior:

ip prefix-list SPECIAL permit 192.168.0.0/24
!
route-map SAFETY permit 10
  match ip address prefix-list SPECIAL
  set community no-export
!
router bgp XXX
  neighbor a.b.c.d allowas-in route-map SAFETY

This is a belt and suspenders approach; first you can limit this behavior to 
only the netblocks you use at other locations, and be extra safe by marking 
them no-export on the way in.  Implementation should be easy: anything that 
would normally be rejected as an AS-Path loop gets fed into the route-map 
instead.

This would mitigate almost all of the bad effects I can think of that can 
happen when the network and/or its upstreams fail to properly apply filters and 
all of a sudden there are a lot more routes looping than there should be, and no 
mechanism to stop them anymore! :)

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/








Re: PGP/SSL/TLS really as secure as one thinks?

2013-06-07 Thread Leo Bicknell

On Jun 7, 2013, at 10:14 AM, Jeroen Massar jer...@massar.ch wrote:

 If you can't trust the entities where your data is flowing through
 because you are unsure if and where they are tapping you, why do you
 trust any of the crypto out there that is allowed to exist? :)
 
 Think about it, the same organization(s) that you are suspecting of
 having those taps, are the ones who have the top crypto people in the
 world and who have been influencing those standards for decades...

I believe there are two answers to your question, although neither is entirely 
satisfactory.

The same organization(s) you describe use cryptography themselves, and do 
influence the standards.  They have a strong interest in keeping their own 
communication secure.  It would be a huge risk to build in some weakness they 
could exploit and hope that other state funded entities would not be able to 
find the hidden flaw that allows decryption.

Having unbreakable cryptography is not necessary to effect positive change.  
Reading unencrypted communications is O(1).  If cryptography can make reading 
the communications (by breaking the crypto) harder, ideally at least O(n^2), it 
would likely prevent it from being economically feasible to do wide scale 
surveillance.  Basically if they want your individual communications it's still 
no problem to break the crypto and get it, but simply reading everything going 
by from everyone becomes economically impossible.

There's an important point to the second item; when scanning a large data set 
one of the most important details algorithmically is knowing which data _not_ 
to scan.  When the data is in plain text throwing away uninteresting data is 
often trivial.  If all data is encrypted, cycles must be spent to decrypt it 
all just to discover it is uninteresting.
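
To put rough numbers on that intuition (the per-message costs below are 
entirely made up; only the ratio between them matters):

# Entirely made-up per-message costs, just to show why bulk collection gets
# expensive once everything is encrypted.  Targeted decryption of one person's
# traffic stays affordable; decrypting everything does not.
messages_per_day = 10_000_000_000
cost_scan_plaintext = 0.000001      # dollars to scan one plaintext message
cost_break_crypto = 0.10            # dollars to decrypt one message first

bulk_plaintext = messages_per_day * cost_scan_plaintext
bulk_encrypted = messages_per_day * (cost_break_crypto + cost_scan_plaintext)

print(f"scan everything, plaintext: ${bulk_plaintext:,.0f}/day")    # $10,000/day
print(f"scan everything, encrypted: ${bulk_encrypted:,.0f}/day")    # $1,000,010,000/day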

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/









Re: PRISM: NSA/FBI Internet data mining project

2013-06-06 Thread Leo Bicknell

On Jun 6, 2013, at 8:06 PM, jim deleskie deles...@gmail.com wrote:

 Knowing it's going on, knowing nothing online is secret != OK with it, it
 merely understands the way things are.

While there's a whole political aspect of electing people who pass better laws, 
NANOG is not a political action forum.

However, many of the people on NANOG are in positions to effect positive change 
at their respective employers.

- Implement HTTPS for all services.
- Implement PGP for e-mail.
- Implement S/MIME for e-mail.
- Build cloud services that encrypt on the client machine, using a key that is 
only kept on the client machine.
- Create better UI frameworks for managing keys and identities.
- Align data retention policies with the law.
- Scrutinize and reject defective government legal requests.
- When allowed by law, charge law enforcement for access to data.
- Lobby for more sane laws applied to your area of business.

The high tech industry has often made the government's job easy, not by 
intention but by laziness.  Keeping your customer's data secure should be a 
proud marketing point.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/









Re: It's the end of the world as we know it -- REM

2013-04-23 Thread Leo Bicknell
In a message written on Tue, Apr 23, 2013 at 05:41:40PM -0400, Valdis Kletnieks 
wrote:
 I didn't see any mention of this Tony Hain paper:
 
 http://tndh.net/~tony/ietf/ARIN-runout-projection.pdf
 
 tl;dr: ARIN predicted to run out of IP space to allocate in August this year.

Here's a Geoff Huston report from 2005:
https://www.arin.net/participate/meetings/reports/ARIN_XVI/PDF/wednesday/huston_ipv4_roundtable.pdf

I point to page 8, and the prediction: RIR Pool Exhaustion, 4 June
2013.

Those of us who paid attention are well prepared.

tl;dr: Real statistical models properly executed in 2005 were remarkably
   close to the reality 8 years later.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: Verizon DSL moving to CGN

2013-04-08 Thread Leo Bicknell
In a message written on Mon, Apr 08, 2013 at 01:41:34AM -0700, Owen DeLong 
wrote:
 Respectfully, I disagree. If the major content providers were to deploy
 IPv6 within the next 6 months (pretty achievable even now), then the
 need for CGN would at least be very much reduced, if not virtually
 eliminated.

I'm going to disagree, because the tail here I think is quite long.

Owen is spot on when looking at the percentage of bits moved across
the network.  I suspect if the top 20 CDN's were to IPv6 enable
_everything_ that 50-90% of the bits in most networks would be moved
over native IPv6, depending on the exact mix of traffic in the
network.

However, CDN's are a _very_ small part of the address space.  I'd
be surprised if the top 20 CDN's had 0.01% of all IPv4 space.  That
leaves a lot of hosts that need to be upgraded.  There's a lot of
people who buy a $9.95/month VPS to host their personal blog read
by 20 people who don't know anything about IPv4 or IPv6 but want
to be able to reach their site.  The traffic level may be
non-interesting, but they will be quite unhappy without a CGN
solution.

Moving the CDN's to IPv6 native has the potential to save the access
providers a TON of money on CGN hardware, due to the bandwidth
involved.  However those access providers still have to do CGN,
otherwise their NOC's will be inundated with complaints about the
inability to reach a bunch of small sites for a long period of time.

If I were deploying CGN, I would be exerting any leverage I had on CDN's
to go native IPv6.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: route for linx.net in Level3?

2013-04-05 Thread Leo Bicknell
In a message written on Fri, Apr 05, 2013 at 09:32:52AM +0200, Adam Vitkovsky 
wrote:
 I thought people were doing it because IGP converged faster than iBGP and
 in case of an external link failure the ingress PE was informed via IGP that
 it has to find an alternate next-hop. 
 Though now with the advent of BGP PIC this is not an argument anymore. 

You're talking about stuff that's all 7-10 years after the decisions
were made that I described in my previous e-mail.  Tag switching
(now MPLS) had not yet been invented/deployed when the first
next-hop-self wave occurred; it was all about scaling both the IGP
and BGP.

In some MPLS topologies it may speed re-routing to have edge interfaces
in the IGP due to the faster convergence of IGP's.  YMMV, Batteries not
Included, Some Assembly Required.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: route for linx.net in Level3?

2013-04-04 Thread Leo Bicknell
In a message written on Thu, Apr 04, 2013 at 02:57:11PM -0400, Jay Ashworth 
wrote:
 Yes.  In the fallout from the Cloudflare attack of last week it was
 announced that several IXs were going to stop advertising the 
 address space of their peering lan, which properly does not need to
 be advertised anyway.

Well, now that's a big maybe.  I was a big advocate for the peering
exchanges each having their own ASN and announcing the peering block
back in the day, and it seems people may have forgotten some of the
issues with unadvertised peering exchange blocks.

It breaks traceroute for many people:

  The ICMP Time Exceeded messages will come from a non-routed network (the
  exchange LAN).  If they cross another network boundary doing uRPF,
  even in loose mode, those messages will be dropped.

  It also reduces the utility of a tool like MTR.  Without the ICMP
  responses it won't know where to ping, and even if it receives
  the ICMP it's likely packets towards the LAN IP's will be dropped
  with no route to host.

It has the potential to break PMTU discovery for many people:

  If a router is connected to the exchange and a lower MTU link, a
  packet coming in with DF set will get an ICMP fragmentation-needed
  reply.  Most vendors source that from the input interface, e.g. the
  exchange IP.  Like the traceroute case, if it crosses another network
  boundary doing uRPF, even in loose mode, those ICMP messages
  will be lost, resulting in a PMTU black hole.

  Some vendors have knobs to force the ICMP to be emitted from a
  loopback, but not all.  People would have to turn it on.

But hey, this is a good thing because a DDOS caused issues, right?
Well, not so much.  Even if the exchange does not advertise the
exchange LAN, it's probably the case that it is in the IGP (or at
least IBGP) of everyone connected to it, and by extension all of
their customers with a default route pointed at them.  For the most
popular exchanges (AMS-IX, for instance) I suspect the percentage
of end users who can reach the exchange LAN without it being
explicitly routed is well over 80%, perhaps into the upper 90%
range.  So when those boxes DDOS, they are going to all DDOS the
LAN anyway.

Security through obscurity does not work.  This is going to annoy some
people just trying to do their day job, and not make a statistical
difference to the attackers trying to take out infrastructure.

How about we all properly implement BCP 38 instead?

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: route for linx.net in Level3?

2013-04-04 Thread Leo Bicknell
In a message written on Fri, Apr 05, 2013 at 10:01:34AM +0900, Randy Bush wrote:
 it's putting such things in one's igp that disgusts me.  as joe said,
 igp is just for the loopbacks and other interfaces it takes to make your
 ibgp work.

While your method is correct for probably 80-90% of the ISP networks,
the _why_ people do that has almost been lost to the mists of time.
I'm sure Randy knows what I'm about to type, but for the rest of
the list...

The older school of thought was to put all of the edge interfaces
into the IGP, and then carry all of the external routes in BGP.
This caused a one level recursion in the routers:
  eBGP Route -> IXP w/IGP Next Hop -> Output Interface

The Internet then became a thing, and there started to be a lot of
BGP speaking customers (woohoo! T1's for everyone!), and thus lots
of edge /30's in the IGP.  The IGP convergence time quickly got
very, very bad.  I think a network or two may have even broken an
IGP.

The solution was to take edge interfaces (really redistribute
connected for most people) and move them from the IGP to BGP, and
to make that work BGP had to set next-hop-self on the routes.
The exchange /24 would now appear in BGP with a next hop of the
router loopback; the router itself knew it was directly connected.
A side effect is that this caused a two-step lookup in BGP:
  eBGP Route -> IXP w/Router Loopback Next Hop -> Loopback w/IGP Next Hop -> 
Output Interface

IGP's went from O(bgp_customers) routes to O(router) routes, and
stopped falling over and converged much faster.  On the flip side,
every RIB-FIB operation now has to go through an extra step of
recursion for every route, taking BGP resolution from O(routes) to
O(routes * 1.1ish).

Since all this happened, CPU's have gotten much faster, RAM has
gotten much larger.  Most people have never revisited the problem,
the scaling of IGP's, or what hardware can do today.

There are plenty of scenarios where the old way works just spiffy,
and can have some advantages.  For a network with a very low number of
BGP speakers the faster convergence of the IGP may be desirable.

Not every network is built the same, or has the same scaling
properties.  What's good for a CDN may not be good for an access
ISP, and vice versa, for example.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: BCP38 - Internet Death Penalty

2013-03-28 Thread Leo Bicknell
In a message written on Thu, Mar 28, 2013 at 11:39:45AM -0400, William Herrin 
wrote:
 Single homed stub site is not a configuration option in any BGP
 setup I'm aware of, so how would the router select RPF as the default
 for a single-homed stub site?

I'm not sure if this is what the OP was talking about or not, but
it reminded me of a feature I have wanted in the past.

If you think about a simple multi-homing situation where a person
has their own IP space, their own ASN, and connects to two providers
they will announce all of their routes to both providers.  They may
in fact do prepending, or more specifics such that one provider is
preferred, but to get full redundancy all of their blocks need to
go to both providers.

uRPF _strict_ only allows traffic where the active route is back
out the interface.  There are a number of cases where this won't
be true for my simple scenario above (customer uses a depref
community, one ISP is a transit customer of the other being used
for multi-homing, customer has more than one link to the same ISP
and uses prepending on one, etc).  As a result, it can't be applied.

uRPF _loose_ on the other hand only checks if a route is in the
table, and with the table rapidly approaching all of the IP space
in use that's denying less and less every day.

The feature I would like is to set the _packet filter_ based on the
_received routes_ over BGP.  Actually, received routes post prefix list.
Consider this syntax:

 neighbor 1.2.3.4 install-dynamic-filter Gig10/1/2 prefix-list customer-prefixes

Anything that was received would go through the prefix-list
customer-prefixes (probably the same list used to filter their
announcements), and then get turned into a dynamic ACL applied to
the inbound interface (Gig10/1/2 in this case).

I suspect such a feature would allow 99.99% of the BGP speakers to be
RPF filtered in a meaningful way, automatically, where uRPF strict is
not usable today.
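
Until a knob like that exists, the same effect can be approximated offline: 
take the routes actually received from the neighbor, run them through the same 
prefix-list, and generate a source filter for the interface.  A rough sketch 
(the prefixes, prefix-list contents, and interface name are all invented):

# Rough offline approximation of the hypothetical knob above: build a source
# filter for an interface from the prefixes received from a BGP neighbor,
# after they pass the same prefix-list used to filter announcements.
# Prefixes, prefix-list contents, and interface name are invented examples.
import ipaddress

received_from_neighbor = ["192.168.0.0/24", "203.0.113.0/24", "10.99.0.0/16"]
customer_prefixes = {"192.168.0.0/24", "203.0.113.0/24"}    # the prefix-list

def dynamic_filter(interface, received, allowed):
    rules = [f"! generated source filter, applied inbound on {interface}"]
    for prefix in received:
        if prefix in allowed:                   # passed the prefix-list
            net = ipaddress.ip_network(prefix)
            rules.append(f"permit ip {net.network_address} {net.hostmask} any")
    rules.append("deny ip any any")
    return rules

print("\n".join(dynamic_filter("Gig10/1/2", received_from_neighbor, customer_prefixes)))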

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: BCP38 - Internet Death Penalty

2013-03-28 Thread Leo Bicknell
In a message written on Thu, Mar 28, 2013 at 01:10:53PM -0400, William Herrin 
wrote:
 Since you've configured a prefix list to specify what BGP routes
 you're willing to accept from the simple multihomed customer (you
 have, right?) why set a source filter from the same data instead of
 trying to build it from routing table guesswork?

In the simplest case I described (user has for instance one netblock)
the packet filter will match the routing filter, and doing what you
described would not be a huge extra burden.  Howver, it is still a
burden, it's writing everything twice (prefix list plus ACL), and
it's making configs longer and less readable.

But the real power here comes by applying this filter further up the
food chain.  Consider peering with a regional entity at an IX.  Most
people don't prefix filter there (and we could have a lively argument
about the practicality of that), so the prefix list might be something
like:

deny my_prefix/foo le 32
permit 0.0.0.0/0 le 24

With a max-prefix of 100.

That doesn't turn into a useful packet filter for the peer, but using my
method the peer could be RPF filtered based on what they send,
automatically.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: Open Resolver Problems

2013-03-26 Thread Leo Bicknell
In a message written on Tue, Mar 26, 2013 at 06:40:38PM +0200, Saku Ytti wrote:
 Pwn authorative server catering moderately popular domain and share query
 sources as torrent file.

Run an authoritative server hosting command and control for a botnet,
log query sources, and use them for your own purposes.

IPv6 Temporary / Privacy addresses and making your DNS server query from
them outbound might be a useful trick.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: Is multihoming hard? [was: DNS amplification]

2013-03-25 Thread Leo Bicknell
In a message written on Sun, Mar 24, 2013 at 12:54:18PM -0400, John Curran 
wrote:
 I believe that the percentage which _expects_ unabridged connectivity 
 today is quite high, but that does not necessarily mean actual _demand_
 (i.e. folks who go out and make the necessary arrangements despite the
 added cost and hassle...)

Actually, I think most of the people who care have made alternate
arrangements, but I have to back up first.

Like most of the US, I live in a area with a pair of providers,
BigCable and BigTelco.  The reality of those two providers is that
from my house they both run down the same backyard.  The both go
to pedistals next to each other at the edge of the neighborhood.
They both ride the same poles down towards the center of town.  At
some point they finally diverge, to two different central offices.

About 80% of the time when one goes out the other does as well.
The backhoe digs up both wires.  The pole taken out by a car accident
takes them both down.  Heck, when the power goes out to a storm
neither has a generator for their pedistal.  The other 20% of the
time one has an equipment failure and the other does not.

Even if I wanted to pay 2x the monthly cost to have both providers
active (and could multi-home, etc), it really doesn't create a
significantly higher uptime, and thus is economically foolish.

However, there is an alternative that shares none of this infrastructure.
A cell card.  Another option finally available due to higher speeds
and better pricing is a satellite service.  These provide true
redundancy from all the physical infrastructure I described above.

It could be argued, then, that the interesting multi-homing case is between
my Cable Modem and my Cell Card; however, even that is not the case.
Turns out my cell card has bad latency compared to the cable modem,
so I don't want to use it unless I have to, and it also turns out
the cell provider charges me for usage, at a modestly high rate,
so I don't want to use it unless I have to.

The result is an active/passive backup configuration.  A device
like a cradlepoint can detect the cable modem being down and switch
over to the cell card.  Sure, incoming connections are not persistent,
but outbound it's hard to notice other than performance getting
worse.

TL;DR People paying for redundancy want physical redundancy including
the last mile.  In the US, that exists approximately nowhere for
residential users.  With no diverse paths to purchase, the discussion of
higher level protocol issues is academic.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: Cloudflare is down

2013-03-04 Thread Leo Bicknell
In a message written on Mon, Mar 04, 2013 at 09:31:13AM +0200, Saku Ytti wrote:
 Probably only thing you could have done to plan against this, would have
 been to have solid dual-vendor strategy, to presume that sooner or later,
 software defect will take one vendor completely out. And maybe they did
 plan for it, but decided dual-vendor costs more than the rare outages.

From what I have heard so far there is something else they could
have done: hire higher-quality people.

Any competent network admin would have stopped and questioned a
90,000+ byte packet and done more investigation.  Competent programmers
writing their internal tools would have flagged that data as out
of range.

I can't tell you how many times I've sat in a post mortem meeting
about some issue and the answer from senior management is "why don't
you just provide a script to our NOC guys, so the next time they
can run it and make it all better?"  Of course it's easy to say
that, the smart people have diagnosed the problem!

You can buy these scripts for almost any profession.  There are
manuals on how to fix everything on a car, and treatment plans for
almost every disease.  Yet most people intuitively understand you
take your car to a mechanic and your body to a doctor for the proper
diagnosis.  The primary thing you're paying for is expertise in
what to fix, not how to fix it.  That takes experience and training.

But somehow it doesn't sink in with networking.  I would not at all
be surprised to hear that someone over at Cloudflare right now is
saying "let's make a script to check the packet size," as if that
will fix the problem.  It won't.  Next time the issue will be
different, and the same undertrained person who missed the packet
size this time will miss the next issue as well.  They should all be
sitting around asking how they can hire competent network admins for
their NOC, but that would cost real money.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: Is Google Fiber a model for Municipal Networks?

2013-02-04 Thread Leo Bicknell
In a message written on Sun, Feb 03, 2013 at 09:53:50PM -0600, Frank Bulk wrote:
 Sure, Verizon has been able to get their cost per home passed down to $700

To be fair, Verizon has chosen to build their FIOS network in many
expensive to build locations, because that's where they believe
there to be the most high profit customers.  While perhaps not the most
expensive builds possible, I would expect Verizon's FIOS experience to
be on the upper end of the cost scale.

 Real-world FTTH complete overbuilds among RLECs (rural incumbent LECs) are
 typically between $2,000 and $5,000 per home served (that includes the ONT
 and customer turn-up).  Slide 13 of
 http://www.natoa.org/events/NATOAPresentationCalix.pdf shows an average of
 $2,377 per home passed (100% take rate).  You can see on Slide 14 how the
 lower households per square mile leads to substantially greater costs.  

Rural deployments present an entirely different problem of geography.  I
suspect the dark fiber model I advocate for is appropriate for 80% of
the population from large cities to small towns; but for the 20% in
truely rural areas it doesn't work and there is no cheap option as far
as I can tell.

 And for Verizon's cost per home passed: Consider the total project cost of
 Verizon's FiOS, $23B, and then divide that not by the 17M homes passed (as I
 did), but with the actual subscribers (5.1M). This would result in a cost
 per subscriber of $23B/5.1M = $4,500.

But Verizon knows that take rate will go up over time.  Going from
5.1M to 10M subscribers would cut that number roughly in half; going to the
full 17M would cut it by about 70%.  Fiber to the home is a long term play,
with paybacks on 10-20 year timeframes.  I'm sure Wall Street doesn't want to
hear that, but it's the truth.
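
Working those numbers out explicitly (a quick sketch using the $23B project 
cost and subscriber counts quoted above):

# Cost per subscriber at different take rates, using the figures quoted above.
total_build_cost = 23_000_000_000          # Verizon FiOS project cost, ~$23B
for subscribers in (5_100_000, 10_000_000, 17_000_000):
    per_sub = total_build_cost / subscribers
    print(f"{subscribers:>10,} subs -> ${per_sub:,.0f} per subscriber")
# 5.1M -> ~$4,510; 10M -> ~$2,300 (about half); 17M -> ~$1,353 (about 70% less)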

 Remember that Google cherry-picked which city it would serve, so it was able
 to identify location that is likely less challenging and expensive to serve
 than the average.  A lot of Google's Kansas City build will not be buried

True, but I think it means we've bounded the problem.  It appears to
take $1400-$4500 to deploy fiber to the home in urban and suburban
areas, depending on all the fun local factors that affect costs.

Again, if the ROI calculation is done on a realistic for infrastructure
10-20 year time line, that's actually very small money per home.  If
it's done on a 3-year, Wall Street turnaround it will never happen as
it's not profitable.

Which is a big part of why I want municipalities to finance it on 10-30
year government bonds, rather than try and have BigTelco and BigCableCo
raise capital on wall street to do the job.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/



Re: Rollup: Small City Municipal Broadband

2013-02-03 Thread Leo Bicknell
In a message written on Sun, Feb 03, 2013 at 12:07:34AM -0500, Jean-Francois 
Mezei wrote:
 When municipality does the buildout, does it just pass homes, or does it
 actually connect every home ?

I would argue that in a pure dark muni-network, the muni would run the
fiber into the prem to a patch panel, and stop at that point.  I
believe the fiber demarcation should be inside the prem, not outside.
The same would apply for both residential and commercial.

Basically, when the customer (typically the service provider, but
not always) orders a loop to a prem, the muni provider would
OTDR-shoot it from the handoff point with the service provider to the
prem.  The muni would be responsible for ensuring reasonable performance
of the fiber between those two end points.

The customer (again, typically the service provider) would then
plug in any CPE, be it an ONT, or ethernet SFP, or WDM mux.

Note I say typically the service provider, because I want this model
to enable the ability for you and me, if we both have homes in
this area, to pay the same $X/month and get a patch between our two
homes.  No service provider involved.  If we want to stand up GigE
on it because that's cheap, wonderful.  If we want to stand up
16x100GE WDM, excellent as well.

To me it's very similar to the traditional copper model used by the
ILECs.  There is a demark box that terminates the outside plant and
allows the customer to connect the inside plant.  The facilities
provider stops at that box (unless you pay them to do more, of
course).  The provisioning process I'm advocating is substantially
similar to ordering a dry pair in the copper world, although perhaps
with a bit more customer service, since it would be a service the muni
wants to sell!

 In any event,  you still have to worry about responsability if you allow
 Service Providers to install their on ONT or whatever CPE equipment in
 homes. If they damage the fibre cable when customer unsubscribes, who is
 responsible for the costs of repair ? (consider a case where either
 homeowner or SP just cuts the fibre as it comes out of wall when taking
 the ONT out to be returned to the SP.

The box is the demark.  If they damage something on the customer
side, that's their own issue.  If they damage something on the
facilities provider side, the facilities provider will charge them
to fix it.

There would be no fiber just coming out of the wall.  There would be a
6-12 connector SC (FC?) patch panel in a small plastic enclosure, with
the outside plant properly secured (conduit, in the wall, etc.) and not
exposed.  The homeowner or their service provider would plug into that
patch panel.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpz2_XJdFRjf.pgp
Description: PGP signature


Re: Muni fiber: L1 or L2?

2013-02-03 Thread Leo Bicknell
In a message written on Sat, Feb 02, 2013 at 10:53:04PM -0500, Scott Helms 
wrote:
 tightly defined area that is densely populated today.  I'd also say that
 this is not the normal muni network in the US today, since generally
 speaking muni networks spring up where the local area is poorly served by
 commercial operators.

Exactly, that's what I would like to fix.  I personally haven't
talked much about the political and regulatory policies that go along
with the technical details I've discussed, but I want to build these
muni-networks in the areas today dominated by the cablecos and
telcos, and force them to use it rather than doing their own builds
alongside everyone else.

As a citizen I get fewer people digging up streets and yards, and
more competition, since now more than just the incumbent telco and
cableco can play ball.

If Glasgow, Kentucky (population 14,000) can have fiber between any two
buildings in town for $300 a month, why can't I?  Somehow a small
rural town of 14,000 can do it, but the big cities can't?

http://www.epblan.com/ethernet.html

I also suspect we could drop that cost a full order of magnitude with
some economies of scale.

People are doing this, and it does work; it's just being done in
locations the big telcos and cablecos have written off...

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgp8XolOffvIq.pgp
Description: PGP signature


Re: Rollup: Small City Municipal Broadband

2013-02-03 Thread Leo Bicknell
In a message written on Sun, Feb 03, 2013 at 02:39:39PM -0500, Scott Helms 
wrote:
  Basically when the customer (typically the service provider, but
  not always) orders a loop to a customer the muni provider would
  OTDR shoot it from the handoff point to the service provider to the
  prem.  They would be responsible for insuring a reasonable performance
  of the fiber between those two end points.
 
 Been tried multiple times and I've never seen it work in  the US, Canada,
 Europe, or Latin America. That's not to say it can't work, but there lots
 of reasons why it doesn't and I don't think anyone has suggested anything
 here that I haven't already seen fail.

Zayo (nee AboveNet/MFN), Sunesys, Allied Fiber, FiberTech Networks,
and a dozen smaller dark fiber providers work this way today, with
nice, healthy, profitable businesses.  Granted, none of them are in the
residential space today, but I don't see any reason why the prem
being residential would make the model fail.

Plenty of small cities sell dark fiber as well, at least until the
incumbent carriers scare/bribe the legislatures into outlawing it.  I
think that's evidence it works well: they know they can't compete with a
muni network, so they are trying to block it with legal and lobbying efforts.

They all cost a lot more than would make sense for residential, but
most of that is because they lack the economies of scale that going
to every residence would bring.  Their current density of customers
is simply too low.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgp7xiamuhbfB.pgp
Description: PGP signature


Is Google Fiber a model for Municipal Networks?

2013-02-03 Thread Leo Bicknell

I've been searching for a few days for information about Google
Fiber's Kansas City deployment.  While I wouldn't call Google
secretive in this particular case, they haven't been very forthcoming
about some of the technologies.  Based on the equipment they have
deployed there is speculation they are doing both GPON and active
ethernet (point-to-point).

I found this presentation:
http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/pubs/archive/36936.pdf

It has a very good summary of the tradeoffs we've been discussing
regarding home-run fiber with active ethernet compared with GPON,
including the cost of the electronics compared to trenching, the space
required in the CO, and many of the other issues we've touched on
so far.

Here's an article with some economics from several different
deployments: 
http://fastnetnews.com/fiber-news/175-d/4835-fiber-economics-quick-and-dirty

Looks like $500-$700 in capex per residence is the current gold
standard.  Note that the major factor is the take rate; if there are two
providers doing FTTH they are both going to max out at about a 50% take
rate.  By having one provider, a 70-80% take rate can be driven.

Even using a 4%, 10-year government bond, a muni network could finance
out a $700/prem build for $7.09 per month!  Add in some overhead and
there's no reason a muni-network couldn't lease FTTH on a cost-recovery
basis to all takers for $10-$12 a month (no Internet or other services
included).
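
If you want to check the $7.09 figure, the standard level-payment
amortization formula gets you there.  A minimal sketch, assuming monthly
compounding (the $700 and 4%/10-year numbers are the ones above):

#!/usr/bin/perl
# Level monthly payment on a principal financed at a fixed annual rate,
# compounded monthly -- the standard amortization formula.
sub monthly_payment {
    my ($principal, $annual_rate, $years) = @_;
    my $i = $annual_rate / 12;              # monthly interest rate
    my $n = $years * 12;                    # number of monthly payments
    return $principal * $i / (1 - (1 + $i) ** (-$n));
}
printf "\$%.2f per month\n", monthly_payment(700, 0.04, 10);   # ~$7.09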

Anyone know of more info about the Google Fiber deployment?

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgp4GxILKmy7y.pgp
Description: PGP signature


Re: Is Google Fiber a model for Municipal Networks?

2013-02-03 Thread Leo Bicknell
In a message written on Sun, Feb 03, 2013 at 05:03:52PM -0500, Jay Ashworth 
wrote:
  From: Leo Bicknell bickn...@ufp.org
  Looks like $500-$700 in capex per residence is the current gold
  standard. Note that the major factor is the take rate; if there are
  two providers doing FTTH they are both going to max at about a 50% take
  rate. By having one provider, a 70-80% take rate can be driven.
 
 I was seeing 700 to drop, and another 650 to hook up; is that 5-700 supposed
 to include an ONT?

I believe the $500-$700 would include an ONT, if required, but
nothing beyond that.  Hook up would include installing a home
gateway, testing, setting up WiFi, installing TV boxes, etc.

So in the model I advocate, the muni-network would spend $500-$700/home
to get fiber into the prem, and the L3-L7 service provider would
roll a truck and supply the equipment, which together comprise another
$500-$700 to turn up the customer.

In Google Fiber's model they are both, so it's probably $1000-$1400
a home inclusive.  $1400 @4% for 10 years is $14.17 a month per
house passed.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpvPRn8QAqU0.pgp
Description: PGP signature


Re: Followup: Small City Municipal Broadband

2013-02-02 Thread Leo Bicknell
In a message written on Sat, Feb 02, 2013 at 06:14:56PM -0500, Brandon Ross 
wrote:
 This whole thing is the highway analogy to me.  The fiber is the road. 
 The city MIGHT build a rest stop (layer 2), but shouldn't be allowed to 
 either be in the trucking business (layer 3), nor in the 
 business of manufacturing the products that get shipped over the road 
 (IPTV, VOIP, etc.), and the same should apply to the company that 
 maintains the fiber, if it's outsourced.

I think your analogy is largely correct (I'm not sure Rest Stop ==
Layer 2 is perfect, but close enough), but it is a very important
way of describing things to a non-technical audience.

FTTH should operate like roads in many respects, from ownership
and access to how the network is expanded.  For instance, a new
neighborhood would see the developer build both the roads and the fiber
to specifications, and then turn them over to the municipality.
Same model.

Having multiple parties build the infrastructure would be just as
inefficient as if every house had two roads built to it by two private
companies.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpC6aLOUA4B5.pgp
Description: PGP signature


Re: Muni network ownership and the Fourth

2013-02-02 Thread Leo Bicknell
In a message written on Sat, Feb 02, 2013 at 08:55:34PM -0500, Jay Ashworth 
wrote:
  From: Robert E. Seastrom r...@seastrom.com
  There is no reason whatsoever that one can't have centralized
  splitters in one's PON plant. The additional costs to do so are
  pretty much just limited to higher fiber counts in the field, which
  adds, tops, a couple of percent to the price of the build. 
 
 Ok, see, this is what Leo, Owen and I all think, and maybe a couple others.
 
 But Scott just got done telling me it's *so* much more expensive to 
 home-run than ring or GPON-in-pedestals that it's commercially infeasible.

Note, both are right, depending on the starting point and goals.

Historically telcos have installed (relatively) low-count fiber
cables, based on a fiber-to-the-pedestal and copper-to-the-prem
strategy.  If you have one of these existing deployments, the cost
of home-run fiber (basically starting the fiber build from scratch,
since the existing count is so low) is much greater than the cost
of deploying GPON or similar over the existing plant.

However, that GPON equipment will have a lifespan of 7-20 years.

In a greenfield scenario where there is no fiber in the ground, the
cost is in digging the trench.  The fiber going into it is only ~5%
of the cost, and going from a 64-count fiber to an 864-count fiber
only moves that to 7-8%.  The fiber has a life of 40-80 years, and
thus adding high count now is cheaper than doing low count with GPON.

Existing builds are optimizing to avoid sending out the backhoe and
directional boring machine.  New builds, or extremely forward-thinking
builds, are trying to send them out once and never again.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpTLITWvZ_KU.pgp
Description: PGP signature


Re: Muni fiber: L1 or L2?

2013-02-02 Thread Leo Bicknell
In a message written on Sat, Feb 02, 2013 at 09:28:06PM -0500, Scott Helms 
wrote:
 I'm not saying that you have to, but that's the most efficient and
 resilient (both of those are important right?) way of arranging the gear.
  The exact loop length from the shelves to the end users is up to you and
 in certain circumstances (generally really compact areas) you can simply
 home run everyone.  Most muni networks don't look that way though because
 while town centers are generally compact where people (especially the
 better subdivisions) live is away from the center of town in the US.  I
 can't give you a lot insight on your specific area since I don't know it,
 but those are the general rules.

If the goal is to minimize the capital outlay of a greenfield
build, your model can be more efficient, depending on the geography
covered.  Basically you're assuming that the active electronics to
make a ring are cheaper than building high-count fiber back to a
central point.  There are geographies where that is true, and others
where it is not.  I'll give you the benefit of the doubt that your model
is cheaper for a majority of builds.

On the other hand, I am not nearly as interested in minimizing the
up-front capital cost.  It's an issue, sure, but I care much more
about the total lifecycle cost.  I'd rather spend 20% more up front
to end up with 20-80% lower costs over 50 years.  My argument is
not that high-count fiber back to a central location is cheaper in
absolute, up-front dollars, but that it's at worst a minimal amount
more and will have negligible additional cost over a 40-80 year
service life.

By contrast, the ring topology you suggest may be slightly less
expensive up front, but will require the active parts that make up
the ring to be swapped out every 7-20 years.  I believe that will
lead to greater lifecycle cost; and almost as importantly, it will impede
development of new services as the existing gear ends up incompatible
with newer technologies.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpCc5V1mvTmK.pgp
Description: PGP signature


Re: Muni fiber: L1 or L2?

2013-02-02 Thread Leo Bicknell
In a message written on Sat, Feb 02, 2013 at 10:17:24PM -0500, Scott Helms 
wrote:
 Here's the thing, over the time frame your describing you're probably going
 to have to look at more fiber runs just because of growth in areas that you
 didn't build for before.  Even if you nail the total growth of homes and
 businesses in your area your chances of getting both the numbers right
 _and_ the locations are pretty slim. Also, you're going to have to replace
 gear no matter where it is core or nodes on a ring.  Granted gear that
 lives in a CO can be less expensive but its not that much of a difference
 (~1% of gear costs).  Having a ring topology is basically the best way
 we've come up with as of yet to hedge your bets, especially since you can
 extend your ring when you need.

I'm not sure I understand your growth argument; both models will
require additional build costs for growth to the network, and I
think they roughly parallel the tradeoffs we've been discussing.

As for the gear, I agree that the cost per port for the equipment
providing service (Ethernet switch, GPON bits, WDM mux, whatever)
is likely to be roughly similar in a CO and in the field.  There's
not a huge savings on the gear itself.

But I would strongly disagree that the overall costs and services are
similar.  Compare a single CO of equipment to a network with 150
pedestals of active gear around a city.  The CO can have one
generator, and one battery bank.  Most providers don't even put a
generator with each pedestal, and must maintain separate battery
banks for each.  A single CO could relatively cheaply have 24x7x365
hands to correct problems and swap equipment, whereas the distributed
network will add drive time to the equation and require higher
staffing and greater costs (like the truck and fuel).

Geography is a huge factor though.  My concept of home-running all fiber
would be an extremely poor choice for extremely rural, low-density
networks.  Your ring choice would be much, much better.  On the flip
side, in a high-density world, say downtown NYC, my dark-fiber-to-the-end-user
network is far cheaper than building super-small rings and
maintaining the support gear for the equipment (generators and
batteries, if you can even get space for them in most buildings).

Still, I think direct dark fiber has lower lifecycle costs for 70-80% of
the population living in cities and suburban areas.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpUSC9UGSK2M.pgp
Description: PGP signature


Re: Muni fiber: L1 or L2?

2013-02-01 Thread Leo Bicknell
In a message written on Fri, Feb 01, 2013 at 03:29:32PM -0500, Scott Helms 
wrote:
 You're basing your math off of some incorrect assumptions about PON.  I'm

I'd like to know more about the PON limitations; while I understand
the 10,000-foot view, some of the rubber-hitting-the-road issues
are a mystery to me.

My limited understanding is that fiber really has two parameters,
loss and modal dispersion.  For most of the applications folks on
this mailing list deal with, loss is the big issue, and modal dispersion
is something that can be ignored.  However, for many of the more
interesting applications involving splitters, super long distances,
or passive amplifiers, modal dispersion is actually a much larger
issue.

I would imagine if you put X light into a 32:1 splitter, each leg
would get 1/32nd of the light (actually a bit less, no doubt), but
I have an inkling the dispersion characteristics would be much, much
worse.
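
As a point of reference, the ideal power-split loss alone works out to
about 15 dB for a 1:32 split; real splitters add some excess loss on top
of that.  A quick sanity check (my arithmetic, not vendor data):

#!/usr/bin/perl
# Ideal splitting loss in dB for a 1:N passive splitter: 10*log10(N).
# Excess/insertion loss of real parts is not included.
for my $n (2, 4, 8, 16, 32, 64) {
    printf "1:%-2d split: %.1f dB\n", $n, 10 * log($n) / log(10);
}
# A 1:32 split eats ~15.1 dB of the optical budget before any other loss.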

Is this the cause of the shorter distance on the downstream GPON
channel, or does it have more to do with the upstream GPON channel,
which is an odd kettle of fish going through a splitter backwards?
If dispersion is the issue, have any vendors tried dispersion
compensation with any success?

The only place PON made any sense to me was extreme rural areas.
If you could go 20km to a splitter and then hit 32 homes ~1km away
(52km of fiber pair length total), that was a win.  If the homes are
only 2km from the CO, 32 pairs of home runs (64km of fiber pair length
total) was cheaper, once the cost of the GPON splitters and equipment
outweighed the modest savings on fiber.  I'm trying to figure out if my
assessment is correct or not...
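
To make the comparison concrete, the trivial arithmetic behind those two
totals (assuming 32 homes per splitter and the distances above):

#!/usr/bin/perl
# Fiber-pair totals for the two scenarios described above.
my $homes = 32;
my $pon_rural  = 20 + $homes * 1;   # 20km trunk pair + 32 x ~1km drop pairs
my $homerun_2k = $homes * 2;        # 32 dedicated 2km pairs straight from the CO
printf "rural PON case: %dkm of pair; 2km home-run case: %dkm of pair\n",
       $pon_rural, $homerun_2k;
# 52km and 64km respectively.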

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgp24SCG7kQSV.pgp
Description: PGP signature


Re: Muni fiber: L1 or L2?

2013-01-31 Thread Leo Bicknell
In a message written on Wed, Jan 30, 2013 at 09:30:31PM -0800, Owen DeLong 
wrote:
  I would like to build an infrastrucutre that could last 50-100 years,
  like the telephone twisted pair of the last century.  The only tech I
  can see that can do that is home run single mode fiber to the home.
  Anything with electronics has no chance of that lifespan.  Anything with
  splitters and such will be problematic down the road.  Simpler is
  better.
 
 An interesting claim given that the Telco twisted pair you are holding up
 as a shining example did involve electronics, splitters (known as bridge
 taps) etc.

Actually, you're making my point for me.  Telcos have spent billions
removing the electronics, splitters, and bridge taps so they can
have unadulterated copper for higher-speed DSL.  To make the new
tech work, all of the old tech had to be removed from the plant.

Those things may have seemed cheaper/better at the time, but in the
end I don't think their lifecycle cost was lower.  Private industry
is capital-sensitive to a higher degree than government; if a telco
could save $1 of capital cost with a bridge tap, use it for 30
years, and then spend $500 to remove the bridge tap, that looked
better in their capital model.  I'm suggesting it's better to
spend the $1 up front, and never pay the $500 down the road.

The real win isn't the $500 savings, it's the _opportunity_.
Customers in some parts of the US have waited _years_ for high-speed
DSL because of the time it takes to remove bridge taps and otherwise
groom the copper plant.  That's years they are behind other citizens
who aren't on plants with that problem.  Had that junk never been
there in the first place, they could have received upgrades much faster.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpUYahuo4sOC.pgp
Description: PGP signature


Re: Muni network ownership and the Fourth

2013-01-30 Thread Leo Bicknell
In a message written on Wed, Jan 30, 2013 at 08:33:35AM -0600, Jason Baugher 
wrote:
 There is much talk of how many fibers can fit in a duct, can be brought
 into a colo space, etc... I haven't seen much mention of how much space the
 termination in the colo would take, such as splice trays, bulkheads, etc...
 Someone earlier mentioned being able to have millions of fibers coming
 through a vault, which is true assuming they are just passing through the
 vault. When you need to break into one of those 864-fiber cables, the room
 for splice cases suddenly becomes a problem.

Corning makes a pre-terminated breakout bay for the 864-count cable,
nicknamed the mamu.  It is in essence a 7' rack, which is about
90% SC patch panels and 10% splice trays.  The cable comes in and
is fusion-spliced to tails already pre-terminated in the rack.  I
don't know if they now have an LC option, which should be denser.
They are perhaps 1' deep as well, being just patch panels
in a 2-post rack, so they take up much less space than a cabinet.

To run some rough numbers, I live in a town with a population of
44,000 people, grouped into 10,368 households.  It is just the size
at which, if the MMR were pretty much perfectly centered, 10km optics
should reach all corners of the town; were it not centered, more
than one MMR would be needed.

To put that in patch panel racks: 10,368 households * 6 fibers per
house (3 pair) / 864 per rack = 72 racks of patch panels.  Using a
relatively generous (for 2-post patch panels) 20 sq feet per rack, it
would take 1,440 sq feet of colo space to house all of the patch
panels to homes.
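
The same arithmetic in script form, for anyone who wants to plug in
their own town (the density and floor-space figures are my rough
assumptions from above, not vendor specifications):

#!/usr/bin/perl
# MMR sizing check for the rough numbers above.
use POSIX qw(ceil);
my $households      = 10_368;
my $fibers_per_home = 6;       # 3 pair per home
my $fibers_per_rack = 864;     # one pre-terminated breakout bay
my $sqft_per_rack   = 20;      # generous for a 2-post patch rack
my $racks = ceil($households * $fibers_per_home / $fibers_per_rack);
printf "%d racks, %d sq ft of floor space\n", $racks, $racks * $sqft_per_rack;
# 72 racks and 1,440 sq ft -- roughly double it for the provider-side fiber.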

Now, providers coming in would need a similar amount of fiber, so
basically double that amount.  There would also need to be some
room for growth.  Were I sizing a physical colo for this town I
would build a 5,000 square foot space designed to take ~250 fiber
racks.  That would handle today's needs (< 150 racks) and provide
years of growth.

Note also that the room is 100% patch panels and fiber, no electronics.
There would be no need for chillers, generators, and similar
equipment.  No need for raised floor, or a DC power plant.  The sole
difficult part would be fiber patch management; a rather elaborate
overhead tray system would be required.

 The other thing I find interesting about this entire thread is the
 assumption by most that a government entity would do a good job as a
 layer-1 or -2 provider and would be more efficient than a private company.
 Governments, including municipalities, are notorious for corruption, fraud,
 waste - you name it. Even when government bids out projects to the private
 sector these problems are seen.

There is almost nothing to bid out here in my model.  Today when a
new subdivision is built the builder contracts out all of the work
to the telco/cable-co specifications.   That would continue to be
the case with fiber.  The muni would contract out running the main
trunk lines to each neighborhood, and the initial building of the
MMR space.  Once that is done the ongoing effort is a man or two
that can do patching and testing in the MMR, and occasionally
contracting out repair work when fiber is cut.

The real win here is that there aren't 2-5 companies digging up streets
and yards.  Even if the government is corrupt to the tune of doubling
every cost, that's the same in real dollars as two providers building
competitive infrastructure...add in a third and this option is still
cheaper for the end consumer.

However, in my study of government, the more local the government, the
less corruption, on average.  Local folks know what's going on in their
town, and can walk over and talk to the mayor.  City budgets tend to be
balanced as a matter of law in most places.  This would be an entirely
local effort.

Would it be trouble-free?  No.  Would it be better than paying money to
$BigTelcoCableCo, who uses their money to argue for higher PUC rates?
Probably!

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpg4vFmkKvCy.pgp
Description: PGP signature


Re: Muni fiber: L1 or L2?

2013-01-30 Thread Leo Bicknell
In a message written on Wed, Jan 30, 2013 at 08:27:27PM -0500, Jay Ashworth 
wrote:
 You're assuming there, I think, that residential customers will have
 mini-GBIC ports on their routers, which has not been my experience.  :-)

They don't today because there is no demand for such a feature.  My
point is that if people deployed FTTH in this way, there would be demand
for such products.  Many of the chipsets inside these boxes already
support an SFP PHY; they just don't put an SFP connector on them, to save
a couple of bucks.  If there were demand, vendors would have a product out
in months, not years, probably within $10 of current prices (not counting
optics).

 Understand that I'm not concerned with minimizing the build cost to the
 muni; I'm interested in *maximizing the utility of the build*, both to the 
 end-user customers, *and* to local businesses who might/will serve them.

Yes, which is why you want to remove everything electronic possible,
and to a large extent any prismatic devices.  Single-mode fiber from the
1990s will carry 10GE today, if unfettered.  Today's single mode
will carry 100GE+ over a 50+ year lifespan, if properly installed.
Electronics last 5-10 years, and then must be replaced, at a cost passed
on to consumers.  The GigE GPON isn't cutting it anymore?  Fine,
let's replace all the electronics and update the splitters to 10GE,
at great cost!

By having a direct fiber pair to the home, ISPs could run 100Mbps to one
customer, GigE to another customer, and 10x10GE WDM to a third customer,
just for the cost of equipment.  Own two business locations?  You don't
even contract with an ISP; you pay the Muni $10/month for fiber to each
prem, and $2/month for a cross connect, and light it up however you want.
Plug in a GigE LAN switch on each end and off you go.  It's the ultimate
empowerment, fiber for everyone!

 Based also on the point Owen makes about reducing truck rolls by having
 netadmin controlled hardware at the customer end, I'm not at all sure
 I agree; I think it depends a lot on what you're trading it off *against*.

That can be fixed in other ways.  It would be easy to make a standard
SNMP MIB or something that the service provider could poll from the
customer gateway, and service providers could require compatible
equipment.  There are ethernet OAM specs.

When I get a Cisco router with an integrated CSU and the telco sends a
loop-up, my device does it.  No reason the same can't be done with
ethernet, other than the lack of demand today.

In a message written on Wed, Jan 30, 2013 at 09:24:51PM -0500, Jay Ashworth 
wrote:
  To put that in patch panel racks, 10,368 households * 6 fibers per
  house (3 pair) / 864 per rack = 72 racks of patch panels. Using a
  relatively generous for 2-post patch panels 20sq feet per rack it
  would be 1,440 sq feet of colo space to house all of the patch
  panels to homes.
 
 Oh, I hope to ghod we can get higher density that that.

I'm sure it's possible.  I would bet there is an LC solution by now, and
this is also discounting direct fusion splicing, which would be 20-40x
smaller in footprint.

That said, the fiber MMR I'm proposing is of similar size to the telco
COs serving same-size towns today; except of course the telco CO is
filled with expensive switches, generators, battery banks, etc.

I don't want to understate the fiber management problem in the MMR; it's
real.  Some thought and intelligence would have to go into the design of
how patches are made, making heavy use of fusion splice trays rather
than connectors, high-density panels, and so on.  That said, telcos did
a fine job of this with copper for over a century, when every line ran
back to a central frame.  There are fiber providers doing similar things
today, not quite on the same scale but in ways that could easily scale
up.

I would like to build an infrastructure that could last 50-100 years,
like the telephone twisted pair of the last century.  The only tech I
can see that can do that is home-run single mode fiber to the home.
Anything with electronics has no chance of that lifespan.  Anything with
splitters and such will be problematic down the road.  Simpler is
better.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpv5IVOTRWma.pgp
Description: PGP signature


Re: Will wholesale-only muni actually bring the boys to your yard?

2013-01-30 Thread Leo Bicknell
In a message written on Wed, Jan 30, 2013 at 09:37:24AM -0500, Art Plato wrote:
 While I agree in principle. The reality is, from my perspective is that the 
 entities providing the services will fall back to the original position that 
 prompted us to build in the first place. Provide a minimal service for the 
 maximum price. There is currently no other provider in position in our area 
 to provide a competitive service to Charter. Loosely translated, our 
 constituents would lose. IMHO.

I'm not sure what your particular situation is, but I urge you to
look at the hurdles faced by a small business trying to use your
infrastructure.

Back in the day there were a ton of dial-up ISPs.  Why?  Well, all
they had to do was order an IP circuit from someone, a bank of phone
lines from another provider, and a few thousand dollars' worth of
equipment and boom, instant ISP.

Exclude your muni-fiber for the moment, and consider someone who wants
to compete with Charter.  They have to get permits to dig up streets,
place their own cable to each house, be registered with the state PUC as
a result, respond to cable locates, obtain land and build pedestals with
power and network to them, etc.; all before they can think about turning
up a customer.  The barrier to entry is way too high.

Muni-fiber should be able to move things much closer to the glory
days of dial-up, rather than the high barrier to entry the incumbent
telcos and cablecos enjoy.  Look at your deployment: what are the
up-front costs to use it?  Do you require people to have a minimum
number of customers, or a high level of equipment, just to connect?
What's the level of licensing and taxation imposed by your state?

Many of the muni-fiber plants I've read about aren't much better.  They
are often GPON solutions, and require a minimum number of customers to
turn up, purchase of a particular amount of colo space to connect, and
so on.  Just to turn up the first customer is often a matter of tens or
hundreds of thousands of dollars; and if that is the case the incumbents
will win.

Some of this is beyond the reach of muni-fiber.  State PUCs need to
have updated rules to encourage these small players in many cases.  I
think the CALEA requirements need a bit of an overhaul.  If the
providers want to offer voice or video services there's an entirely
different level of red tape.  All of these things need to be modernized
along with muni-fiber deployments, and sadly in many cases the incumbents
are using their muscle to make these ancillary problems worse just to keep
out new entrants...


-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgp51ao3RyNNk.pgp
Description: PGP signature


Re: Muni fiber: L1 or L2?

2013-01-30 Thread Leo Bicknell
In a message written on Wed, Jan 30, 2013 at 10:00:47PM -0500, Jay Ashworth 
wrote:
  That can be fixed in other ways. It would be easy to make a standard
  SNMP mib or something that the service provider could poll from the
  customer gateway, and service providers could require compatable
  equipment. There are ethernet OAM specs.
 
 Customer gateway.  Isn't that the box you're denigrating? :-)
 
 Or do you mean the FSLAM? 

No, there's an important distinction here, and we have a great example
today.

The Cable Modem.

The Cable Modem is in many ways very similar to an FTTH ONT.  It takes
one medium (cable, fiber), does some processing, provides some security
and a test point to the provider, and then hands off ethernet to the
customer.  A majority of customers then plug in a Home Gateway (router,
one of those Linksys/Netgear/Belkin things), although some plug in a
single device.

What goes wrong?  Well, the Home Gateway sees 1000Mbps of GigE to the
cable modem, and tries to send at that rate.  The cable modem is only
allowed to transmit to the plant at maybe 10Mbps though, and so it must
buffer and drop packets, at what appears to be L2.  At that point
virtually any ability the customer had to do QoS is gone!  I believe
some Verizon FIOS customers had similar issues with GigE to the ONT and
then 100Mbps upstream service.

Having the two separate devices significantly degrades the customer
experience in many cases, particularly where there is a speed mismatch.
I want to chuck the cable modem and/or ONT out the window, never to be
seen again, and let the customer plug their home gateway in directly.
No middle box to buffer or drop packets, or otherwise mangle the data
stream in bad ways.

I have no issues with the Home Gateway responding to OAM testing from
the provider.  I have no issues with it learning part of its config
(like a maximum transmit speed) from the provider.

A Cable Modem or ONT is a glorified media converter which should not
exist.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpgBuvFZhSvX.pgp
Description: PGP signature


Re: Muni network ownership and the Fourth

2013-01-29 Thread Leo Bicknell
In a message written on Tue, Jan 29, 2013 at 10:59:31AM -0500, Jay Ashworth 
wrote:
 Regular readers know that I'm really big on municipally owned fiber networks
 (at layer 1 or 2)... but I'm also a big constitutionalist (on the first, 
 second, fourth, and fifth, particularly), and this is the first really good
 counter-argument I've seen, and it honestly hadn't occurred to me.
 
 Rob, anyone, does anyone know if any 4th amendment case law exists on muni-
 owned networks?

I don't, but I'd like to point out here that I've long believed
both sides of the muni-network argument are right, and that we the
people are losing the baby with the bath water.

I am a big proponent of muni-owned dark fiber networks.  I want to
be 100% clear about what I advocate here:

  - Muni-owned MMR space, fiber only, no active equipment allowed.  A
big cross-connect room, where the muni fiber ends and providers are
all allowed to colocate their fiber terminations on non-discriminatory terms.

Large munis will need more than one; no run from a particular MMR
to a home should exceed 9km, allowing the providers to be within
1km of the MMR and still use 10km optics.

  - 4-6 strands per home, home run back to the muni-owned MMR space.
No splitters, WDM, etc.; home-run glass.  Terminating on an optical
handoff inside the home.

  - Fiber leased per month, per pair, on a cost-recovery basis (to
include an estimate of O&M over time), same price to all players.

I do NOT advocate that munis ever run anything on top of the fiber.
No IP, no TV, no telephone, not even teleporters in the future.
Service Providers of all types can drop a large-count fiber cable from
their POP to the muni-owned MMR, request individual customers be
connected, and then provide them with any sort of service they like
over that fiber pair: single play, double play, triple play, whatever.

See, the Comcasts and AT&Ts of the world are right that governments
shouldn't be ISPs; that should be left to the private sector.  I
want a choice of ISPs offering different services, not a single
monopoly.  In this case the technology can provide that, so it
should be available.

At the same time, it is very inefficient to require each provider
to build to every house.  Not only is it a large capital cost and
barrier to entry for new players, but no one wants roads and yards
dug up over and over again.  Reducing down to one player building
the physical, in-the-ground part saves money and saves disruption.

Regarding your 4th amendment concerns, almost all the data the
government wants is with the Service Provider in my model, same as
today.  They can't find out who you called last week without going
to the CDRs or having a tap on every line 24x7, which is not cost
effective.  Could a muni still optically tap a fiber in this case
and suck off all the data?  Sure, and I have no doubt some paranoid
service provider will offer to encrypt everything at the transport
level.

Is it perfect?  No.  However, I think if we could adopt this model,
capital costs would come down (munis can finance fiber on low-rate,
long-term muni bonds, unlike corporations, plus they only build one
network, not N), and competition would come up (small service providers
can reach customers by building only to the MMR space, not to individual
homes), which would be a huge win-win for consumers.

Maybe that's why the big players want to throw the baby out with the
bath water. :P

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgp5AIWIbjcNx.pgp
Description: PGP signature


Re: Muni fiber: L1 or L2?

2013-01-29 Thread Leo Bicknell
In a message written on Tue, Jan 29, 2013 at 12:54:26PM -0500, Jay Ashworth 
wrote:
 Hmmm.  I tend to be a Layer-2-available guy, cause I think it lets smaller
 players play.  Does your position (likely more deeply thought out than 
 mine) permit Layer 2 with Muni ONT and Ethernet handoff, as long as clients
 are *also* permitted to get a Layer 1 patch to a provider in the fashion you
 suggest?

No, and there's a good reason why; I'm about to write a response to Owen
that will also expand on it.

There are a number of issues with the muni running the ONT:

 - The muni now has to have a different level of techs and truck rolls.
 - The muni MMR now is much more complex, requiring power (including
   backup generators, etc.) and likely 24x7 staff as a result.
 - The muni ONT will limit users to the technologies the ONT supports.
   If you want to spin up 96x10GE WDM, your 1G ONT won't allow it.
 - The optic cost is not significantly different whether the muni buys the
   optics and provides lit L2, or the service provider/user provides them.

The muni should sell L1 patches to anyone in the MMR.  Note, this
_includes_ two on-net buildings.  So if your work and home are connected
to the same muni MMR, you could order a patch from one to the other.
The path may now be up to ~20km, so you'll need longer-reach optics, but
if you want to stand up 96x10GE WDM you're good to go.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpa1jke7mH3w.pgp
Description: PGP signature


Re: Muni network ownership and the Fourth

2013-01-29 Thread Leo Bicknell
In a message written on Tue, Jan 29, 2013 at 02:14:46PM -0800, Owen DeLong 
wrote:
 The MMR should, IMHO be a colo facility where service providers can
 lease racks if they choose. The colo should also be operated on a cost
 recovery basis and should only be open to installation of equipment
 directly related to providing service to customers reached via the MMR.

I'm not sure I agree with your point.

The _muni_ should not run any equipment colo of any kind.  The muni
MMR should be fiber only, and not even require so much as a generator
to work.  It should not need to be staffed 24x7, or have anything that
requires PM, etc.

I fully support the muni MMR being inside of a colocation facility
run by some other company (Equinix/DLR/CoreSite, whatever) so folks
can colo on site.  I think it is also important that someone be able
to set up a colo down the street and just drop in a 1,000-strand
fiber cable to the actual MMR.

Why is this important?  Well, look at one of the failure modes of
the CO system.  When DSL was in its heyday, COs would become full,
and no new DSL providers would be able to get colo space.  Plus the
COs could use space/power/hands time/etc. as profit centers.
Muni-fiber should stay as far away from these problems as possible.

I think it's also important to consider the spectrum of deployments
here.  A small town of 1,000 homes may have MuniMMRREIT come in and
build a 5,000 sq foot building, with 1,000 of that leased to the
muni for fiber patch panels and the other 4,000 sold to ISPs by
the rack to provide service.  On the other end, consider a place
like New York City, where MuniFiberCo builds out 50,000 square feet
for fiber racks somewhere, and ISP #1 drops in 10,000 strands from
111 8th Ave, and ISP #2 drops in 10,000 strands from 25 Broadway,
and so on.  In the middle may be a mid-sized town, where they build the
MMR in a business park, and 3 ISPs erect their own colos, and a colo
provider builds a fourth and houses a dozen smaller players.

In the small town case, MuniMMRREIT may agree to a regulated price
structure for colo space.  In the New York City case, it would make
no sense for one colo to try to house all the equipment now and
forever, and there would actually (on a per-strand basis) be very
minimal cost to pull 10,000 strands down the street.  I'll argue
that running 10,000 strands (which is as few as twelve 864-strand fiber
cables) a block or two down the street costs far less than trying
to shoehorn more colo into an existing building where it is hard
to add generators/chillers/etc.

Basically, running fiber a block or two down the street opens up a
host of cheaper real estate/colo opportunities, and it doesn't cost
significantly more than running the fiber from one end of a colo to
another, relative to all the other costs.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpcPnmYQ0Y32.pgp
Description: PGP signature


Re: Muni network ownership and the Fourth

2013-01-29 Thread Leo Bicknell
In a message written on Tue, Jan 29, 2013 at 03:03:51PM -0500, Zachary Giles 
wrote:
 Not to sidestep the conversation here .. but, Leo, I love your concept
 of the muni network, MMR, etc. What city currently implements this? I
 want to move there! :)

I don't know any in the US that have the model I describe.  :(

My limited understanding is that some other countries have a similar
model, but I don't know of any good English-language summaries.  For
instance, I believe the model used in Sweden is substantially similar
to what I describe...

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpBu2MnwbPIL.pgp
Description: PGP signature


Re: Muni fiber: L1 or L2?

2013-01-29 Thread Leo Bicknell
In a message written on Tue, Jan 29, 2013 at 07:11:56PM -0800, Owen DeLong 
wrote:
 I believe they should be allowed to optionally provide L2 enabled services of 
 various
 forms.

Could you expand on why you think this is necessary?  I know you've
given this some thought, and I'd like to understand.

The way I see it, for $100 in equipment (2x$50 optics) anyone can
light 1Gbps over the fiber.  The only way the muni has significantly
cheaper port costs than a provider with a switch and a port per
customer is to do something like GPON, which allows one port to
service a number of customers, but obviously imposes a huge set of
limitations (bandwidths, protocols you can run over it, etc).

I also think the ONT adds unnecessary cost.  ONTs are used today
primarily as a handoff test point, and to protect shared networks
(like GPON) from a bad actor.  With a dedicated fiber pair per
customer I think they are unnecessary.  I can see a future where
the home gateway from the local big-box store has an SFP port (or even
fixed 1000baseLX optics) and plugs directly into the fiber pair.

No ONT cost, no ONT limitations, no need to power it (UPS battery
replacement, etc).  It's a value subtract, not a value add.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpQvpQliT8s4.pgp
Description: PGP signature


Re: Muni network ownership and the Fourth

2013-01-29 Thread Leo Bicknell
In a message written on Tue, Jan 29, 2013 at 07:46:06PM -0800, Owen DeLong 
wrote:
 Case 2, you move the CO Full problem from the CO to the adjacent
 cable vaults. Even with fiber, a 10,000 strand bundle is not small.
 
 It's also a lot more expensive to pull in 10,000 strands from a few
 blocks away than it is to drop a router in the building with the MMR
 and aggregate those cross-connects into a much smaller number
 of fibers leaving the MMR building.
[snip]
 But what happens when you fill the cable vaults?

It's really not an issue.  10,000 fibers will fit in a space not
much larger than my arm.

I have on my desk a 10+ year old sample of a Corning 864-strand
cable (36 ribbons of 24 fibers per ribbon).  It is barely larger
around than my thumb.  Each one terminated into an almost-full
rack of SC patch panels.

A web page on the cable:
http://catalog.corning.com/CableSystems/en-US/catalog/ProductDetails.aspx?cid=pid=105782vid=106018

My company at the time built a duct bank of six 4-inch conduits,
installed three 1.25-inch innerducts in each conduit, and pulled one of
those cables in each innerduct.  That's a potential capacity of
15,552 fibers in a duct bank perhaps 14 inches wide by 8 inches tall.

A vault as used for traditional telco or electrical work (one big
enough for a man to go down in) could hold millions of these fibers.
We never used them, because they were way too big.  There's also
plenty of experience in this area; telcos have been putting much
larger copper cables into COs for a long time.

Were there demand, they could easily put more ribbons in a single
armored sheath.  The actual stack of fibers is about 1/2 inch wide and
3/8 inch thick for the 864 strands.  You could extrapolate a single
10,000-strand cable that would be smaller than the power cables
going to a typical commercial transformer.

The cost of fiber is in terminating it.  Running 864 strands from one
end of a colo to the other, compared with running them a block
down the street, isn't significantly different, modulo any construction
costs.  Obviously if it costs $1M to dig up the street that's bad,
but for instance if there is already an empty duct down the street
and it's just a matter of pulling cable, the delta is darn near zero.

That's why I think, rather than having the muni run colo (which may
fill), they should just allow providers to drop in their own fiber
cables, and run a fiber-patch-only room.  There could then be hundreds
of private colo providers within a 1km radius of the fiber MMR, generating
lots of competition for the space/power side of the equation.  If one
fills up, someone will build another, and it need not be on the same
square of land.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/



Re: Muni fiber: L1 or L2?

2013-01-29 Thread Leo Bicknell
In a message written on Tue, Jan 29, 2013 at 07:53:34PM -0800, Owen DeLong 
wrote:
 It really isn't. You'd be surprised how many uncompensated truck rolls
 are eliminated every day by being able to talk to the ONT from the
 help desk and tell the subscriber Well, I can manage your ONT and
 it's pretty clear the problem is inside your house. Would you like to
 pay us $150/hour to come out and troubleshoot it for you?

I would love statistics from actual providers today.

I don't know of any residential telco services (POTS, ISDN BRI, or
DSL) that have an active handoff they can test to without a truck
roll.

I don't know of any cable services with an active handoff similar
to an ONT, although they can interrogate most cable boxes and modems
for signal quality measurements remotely to get some idea of what
is going on.  On the flip side, when CableCos provide POTS they
must include a modem with a battery, and thus incur the cost of
shipping new batteries out and old batteries back every ~5 years,
which they sometimes do by truck roll...

So it seems to me both of those services find things work just fine
without an ONT-like test point.  ONTs seem unique to FTTH deployments,
of which most today are GPON...

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/



Re: OOB core router connectivity wish list

2013-01-09 Thread Leo Bicknell

I think this list goes too far, and has a decent chance of introducing
other fun failure modes as a result.  The goal of OOB is generally
to gain control of a misbehaving device.  Now, misbehaving can
take many forms, from the device actually being ok and all of its
circuits going down (a fiber cut isolating it), to the device being
very much not ok with a bad linecard trying to lock up the bus,
core dumps, etc.

I'm going to pick on one specific example:

In a message written on Wed, Jan 09, 2013 at 03:37:16PM +0100, Mikael 
Abrahamsson wrote:
 [P1]: Powercycle the RP, switchfabrics and linecards (hard, as in they 
 might be totally dead and I want to cut power to it via the back plane. 
 Also useful for FPGA upgrades).

Most Cisco high-end devices can do this today from the CLI (test
mbus power off on a GSR, for example).  Let's consider what it would
take to move that functionality from the live software to some sort
of ethernet OOB as proposed...

The first big step is that some sort of computer to operate the
ethernet OOB is required.  I think where you're going with this is
some sort of small SoC-type thing connected to the management bus
of the device, not unlike an IPMI device on a server.  Using IPMI
devices on servers as an example, this is now another device that
must be secured and upgraded, and that has the potential for bugs of its
own.  Since only a small fraction of high-end users will use the
OOB at all (in-band is fine for many, many networks), there will not
be a lot of testing of this code, or demand for features/fixes.

So while I agree with the list of features in large part, I'm not sure I
agree with the concept of having some sort of ethernet interface that
allows all of this out of band.  I think it will add cost, complexity,
and a lot of new failure modes.

The reality is that the current situation on high-end gear, a serial
console plus an ethernet management port, is pretty close to a good
situation, and could be a really good one with a few minor
modifications.  My list would be much simpler as a result:

1) I would like to see serial consoles replaced with USB consoles.  They
   would still appear to be serial devices to most equipment, but would
   enable much faster speeds making working on the console a much more
   reasonable option.  For bonus points, an implementation that presents
   2-4 serial terminals over the same USB cable would allow multiple
   people to log into the device without the need for any network
   connectivity.

   This would also allow USB hubs to be used to connect multiple devices
   in a colo, rather than the serial terminal servers needed today.

2) I would like to see management ethernets that live in their own
   walled-off world out of the box.  Yes, I know with most boxes you can
   put them in a VRF or similar, but that should be the default.  I
   should be able to put an IP and default route on a management ethernet
   and still have a 100% empty (main) routing table.  This would allow
   the management port to be homed to a separate network simply and
   easily.

3) I would like to see legacy protocols dumped.  TFTP, bye bye.  FTP,
   bye bye.  rcp, bye bye.  HTTP, HTTPS, and SCP should be supported
   for all operations at all levels of the OS.

In this ideal world, the deployment model is simple.  A small OOB
device would be deployed (think a Cisco 1900 or Juniper SRX
220), connected to a separate network (DSL, cable modem, cell modem,
ethernet to some other provider, or, gasp, even an old-school analog
modem).  Each large router would get an ethernet port and USB console
to that device.  SSH to the right port would get the USB console,
ideally with the 2-4 consoles exposed such that hitting the same port
just cycles through them.

At that point all of the functionality described in the original
post should be available in the normal CLI on the device.  File
transfer operations should be able to specify the management port
("copy [management]http://1.2.3.4/newimage.code flash:") to use that
interface/routing table.

I also think on most boxes this would require no hardware changes.  The
high-end boxes already have Ethernet and USB...it's just a matter of
updating the software to make them act in a much more useful way, rather
than the half brain-dead ways they act now...

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpPBJXXdWsPu.pgp
Description: PGP signature


Re: OOB core router connectivity wish list

2013-01-09 Thread Leo Bicknell
In a message written on Wed, Jan 09, 2013 at 06:39:28PM +0100, Mikael 
Abrahamsson wrote:
 IPMI is exactly what we're going for.

For Vendors that use a PC motherboard, IPMI would probably not be
difficult at all! :)

I think IPMI is a pretty terrible solution though, so if that's your
target I do think it's a step backwards.  Most IPMI cards are prime
examples of my worries: Linux images years out of date, riddled with
security holes, and universally not trusted.  You're going to need a
firewall in front of any such solution to deploy it, so you can't
really eliminate the extra box I proposed, just change its nature.

I also still think there's a lot of potential here to take gigantic
steps backwards.  Replacing a serial console with a Java applet in
a browser (a la most IPMI devices) would be a huge step backwards.
Today it's trivial to script console access; in a Java applet world,
not so much.

Having an IPMI-like device with dedicated ethernet and a connection to
the management bus would allow it to have a web interface to do things
like power-cycle individual line cards, and may be a win, but I would
posit these things exist to work around horribly broken upgrade procedures
that vendors have not given enough thought.  They could be solved with
more intelligent software in the ROM and on the main box without needing
any add-on device.

 So I want to retire serial ports in the front to be needed for normal 
 operation. Look at the XR devices from Cisco for instance. For normal 
 maintenance you pretty much require both serial console (to do rommon 
 stuff one would imagine shouldn't be needed) and also mgmt ethernet (to 
 use tftp for downloading software when you need to turbo-boot because the 
 system is now screwed up because the XR developer (install) team messed 
 up the SMUs *again*).

Your vendor is going to hire those same developers to write the code for
your OOB device.  The solution here is not bad developers writing and
deploying even more code; it's to demand your vendors uplevel their
developers and software.

Ever have these problems on Vendor J?  No, the upgrade process there is
smooth as silk.  Not to say that vendor is perfect; they just have
different warts.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpyNw9nxqeU1.pgp
Description: PGP signature


Re: JunOS IPv6 announcements over IPv4 BGP

2012-12-21 Thread Leo Bicknell
In a message written on Fri, Dec 21, 2012 at 11:45:24AM -0700, Pete Ashdown 
wrote:
 I've got a peer who wishes me to send my IPv6 announcements over IPv4 BGP. 
 I'm running around in circles with JTAC trying to find out how to do this
 in JunOS.  Does anyone have a snippet they can send me?

I believe you got the snippet, but I wanted to expand on why this
is a bad idea.  From a protocol perspective, BGP can create one
session over a particular transport (IPv4 or IPv6, typically) and
then exchange routes for multiple address families (IPv4 unicast,
IPv4 multicast, IPv6 unicast, IPv6 multicast, or even all sorts of
fun MPLS stuff).  From a network management perspective, doing so
can complicate things immensely.

Today, networks want to deploy IPv6 without impacting their IPv4
network.  Adding the IPv6 AFI to an existing IPv4 transport session will
tear the session down, impacting IPv4 customers.

Tomorrow, when the IPv4 transport fails, IPv6 customers are also impacted
by the failure of the transport, even though there may be no IPv6
routing issues.  There is also a chance that IPv6 forwarding fails but
the routing information lives on, running the traffic into a black hole,
since the routing information isn't sharing the failed transport.

In the future, IPv4 will be removed from the network.  If all of
the transport is IPv4, those sessions will have to be torn down and
new ones built with IPv6 transport before the IPv6 only network can
live on.

I believe the vast majority, approaching 100%, of larger ISPs move
IPv4 routes over IPv4 transport and IPv6 routes over IPv6 transport,
treating the two protocols as ships in the night.  It eliminates all
three problems I've listed above at the grand expense of your router
having to open/track 2 TCP connections rather than one; a trivial
amount of overhead compared to the routes being exchanged.
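For what it's worth, the ships-in-the-night setup in JunOS is just two
ordinary groups.  A rough sketch (the peer addresses and AS number below
are made-up examples, and export policies are omitted):

set protocols bgp group ebgp-v4 type external
set protocols bgp group ebgp-v4 peer-as 64496
set protocols bgp group ebgp-v4 neighbor 192.0.2.1 family inet unicast
set protocols bgp group ebgp-v6 type external
set protocols bgp group ebgp-v6 peer-as 64496
set protocols bgp group ebgp-v6 neighbor 2001:db8::1 family inet6 unicast

Two sessions, one per transport, and a failure of either one leaves the
other family's routing untouched.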

Of course, there are people who like to be different, sometimes for good
reasons, often not... :)

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: NTP Issues Today

2012-11-20 Thread Leo Bicknell
In a message written on Mon, Nov 19, 2012 at 04:21:55PM -0700, Van Wolfe wrote:
 Did anyone else experience issues with NTP today?  We had our server
 times update to the year 2000 at around 3:30 MT, then revert back to 2012.

I'm surprised the various time geeks aren't all posting their logs, so
I'll kick off:

/tmp/parse-peerstats.pl peerstats.20121119
56250 76367.354 192.5.41.41 91b4 -378691200.312258363 0.088274002 0.014835425 0.263515353
56250 77391.354 192.5.41.41 91b4 -378691200.312258363 0.088274002 0.018668790 0.263749719
56250 78204.354 192.5.41.40 90b4 -378691200.785377324 0.088179350 0.014812585 0.263668835
56250 78416.355 192.5.41.41 91b4 -378691200.785974681 0.088312507 0.014832943 0.209966600
56250 79229.355 192.5.41.40 90b4 -378691200.785377324 0.088179350 0.018668723 378691200.785523713
56250 79442.355 192.5.41.41 91b4 -378691200.785974681 0.088312507 0.018689918 378691200.786114931

Or in more human readable form:
/tmp/parse-peerstats.pl peerstats.20121119
192.5.41.41 off by -378691200.312258363
192.5.41.41 off by -378691200.312258363
192.5.41.40 off by -378691200.785377324
192.5.41.41 off by -378691200.785974681
192.5.41.40 off by -378691200.785377324
192.5.41.41 off by -378691200.785974681

The script, if you want to run against your own stats:

#!/usr/bin/perl

while (<>) {
  chomp;
  ($day, $second, $addr, $status, $offset, $delay, $disp, $skew) = split;
  if (($offset > 10) || ($offset < -10)) {
#   print "$addr off by $offset\n";   # More human friendly
    print "$_\n";                     # Full details
  }
}

It just looks for servers off by more than 10 seconds and then prints
the line.  378691200 seconds is ~12 years, which lines up with the
year 2000 dates some are reporting.

The IP's are tick.usno.navy.mil and tock.usno.navy.mil.

I can confirm from my vantage point that tick and tock both went about
12 years wrong on Nov 19th for a bit, I can also report that my NTP
server with sufficient sources correctly determined they were haywire
and ignored them.

If your machines switched dates yesterday it probably means your
NTP infrastructure is insufficiently peered and diversified.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: NTP Issues Today

2012-11-20 Thread Leo Bicknell

After some private replies, I'm going to reply to my own post with
some information here.

It appears many people don't understand how the NTP protocol works.
I suspect many people have configured a primary and a backup
NTP server on many of their devices.  It turns out this is the
_WORST_ possible configuration if you want accurate time:

http://support.ntp.org/bin/view/Support/SelectingOffsiteNTPServers#Section_5.3.3.

To protect against two falseticking servers (tick and tock, as we saw on
the 19th) you need _FIVE_ servers minimum configured if they are both in
the list.  More importantly, if you want to protect against a source
(GPS, CDMA, IRIG, WWVB, ACTS, etc.) false ticking, you need a minimum of
_FOUR_ different source technologies in the list as well.

It's not hard; the box I posted the logs from peers with 18 servers
using 8 source technologies, all freely available on the Internet...
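The config side of that is nothing fancy, just a long list of server
lines in ntp.conf.  The hostnames below are placeholders; pick your own
diverse set and mix the underlying source technologies:

# ntp.conf fragment -- placeholder hostnames, mix source technologies
server time-gps.example.net iburst     # GPS-disciplined source
server time-cdma.example.org iburst    # CDMA-disciplined source
server time-acts.example.edu iburst    # ACTS (dial-up) derived source
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
server 3.pool.ntp.org iburst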

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: NTP Issues Today

2012-11-20 Thread Leo Bicknell
In a message written on Tue, Nov 20, 2012 at 02:28:19PM -0500, Jay Ashworth 
wrote:
 I'm curious, Leo, what your internal setup looks like.  Do you have an
 internal pair of masters, all slaved to those externals and one another, 
 with your machines homed to them?  Full mesh?  Or something else?

My particular internal setup is a tad weird, and so rather than
answer your question, I'm going to answer with some generalities.
The right answer of course depends a lot on how important it is
that boxes have the right time.

If you have 4 or more physical sites, I believe the right answer
is to have on the order of 8 NTP servers.  2 each in 4 sites reaches
the minimum nicely with redundancy.  These boxes can have GPS, CDMA
or other technologies if you want, but MUST peer with at least 10
stratum-1 sources outside of your network.  Of course if you have
more sites, one server in each of 8 sites is peachy.  Those on a
budget could probably get by with 4 servers total, but never less!

All critical devices should then be synced to the full set of
internal servers.  4 boxes minimum, 8-10 preferred.  NTP will only
use the 10 best servers in its calculations, so there is a steep
dropoff of diminishing returns beyond 10.  For most ISPs I would
include all routers in this list.
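On IOS-style routers that's just a stack of ntp server statements
pointing at your internal set; the addresses below are made-up examples
of the 8 internal servers (2 per site, 4 sites):

ntp server 10.0.1.10
ntp server 10.0.2.10
ntp server 10.1.1.10
ntp server 10.1.2.10
ntp server 10.2.1.10
ntp server 10.2.2.10
ntp server 10.3.1.10
ntp server 10.3.2.10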

For the non-critical devices?  Well, there it gets more complex.
For most I would only configure one server, their default gateway
router.  Of course, pushing out a set of 4+ to them if that is
easy is a great thing to do.

The interesting thing here is that no devices except for your NTP
servers should ever peer with anything outside of your network.
Why?  Let's say your NTP servers all go crazy together.  The outside
world is cut off, GPS is spoofed, the world is ending.  All that
you have left is that all of your devices are in time with each
other, so at least your logs still correlate and such.  So having
every device under your master set of NTP servers is important.
One guy with an external peer may choose to use that, and leave the
hive mind, so to speak.

For small players, less than 4 sites, typically just use the NTP
pool servers, configuring 4 per box minimum.  If you want the same
protection I just outlined in the paragraph before, make 4 of your
servers talk to the outside world, and make everything else talk
to those.  Want to give back to the community?  Get a GPS/CDMA/Whatever
box and make it part of the NTP pool.  Want to step up your game (which
is what I do), reach out to various Stratum-1's on the net (or find
free, open ones) and peer up 8-20 of them.

 In my last big gig, it was recommended to me that I have all the machines 
 which had to speak to my DBMS NTP *to it*, and have only it connect to the
 rest of my NTP infrastructure.  It coming unstuck was of less operational
 impact than *pieces of it* going out of sync with one another...

Yep, a prime example of the scenario I described above.  Depending on
your level of network redundancy, number of NTP servers, and so on, this
is a fine solution.  With one NTP server (the DBMS) the downstream will
always use it, and stay in sync.  It's a valid and good config in many
situations.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: Google/Youtube problems

2012-11-19 Thread Leo Bicknell
In a message written on Mon, Nov 19, 2012 at 03:59:22PM +0200, Saku Ytti wrote:
 What I'm trying to say, I can't see youtube generating anywhere nearly
 enough revenue who shift 10% (or more) of Internet. And to explain this
 conundrum to myself, I've speculated accounting magic (which I'd frown
 upon) and leveraging market position to get free capacity (which is ok, I'd
 do the same, had I the leverage)

I suspect you're thinking about revenue in terms of, say, the
advertisements they run with the videos.  I believe you're right, that
would never pay the bills.

Consider a different model.  Google checks out your gmail account, and
discovers you really like Red Bull and from your YouTube profile knows
you watch a lot of Ke$ha videos.  It also discovers there are a lot more
folks with the same profile.  They can now sell that data to a marketing
firm, that there is a strong link between energy drinks and Ke$ha
videos.

GOOG-411 - building a corpus of voice data for Android's voice
recognition.

ReCaptcha - improving visual recognition for their book scanning
process.

Most of the free services are simply the cheapest way to get the data
needed for some other service that can make much more money.  It may
seem weird to write off all the costs of YouTube as data acquisition
costs, but there's far more money to be made selling marketing data than
ads against streaming videos...

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: What is BCP re De-Aggregation: strict filtering /48s out of /32 RIR minimums.

2012-11-14 Thread Leo Bicknell
In a message written on Wed, Nov 14, 2012 at 01:10:57PM +, Ben S. Butler 
wrote:
 I am hoping for a bit of advice.  We are rolling out IPv6 en mass now to 
 peers and I am finding that our strict IPv6 ingress prefix filter is 
 meaning a lot of peers are sending me zero prefixes.  Upon investigation I 
 determine they have de-agregrated their /32 for routing reasons / non 
 interconnected islands of address space and in consequence advertise no 
 covering /32 route.  The RIR block that the allocation is from is meant to 
 have a minimum assignment of /32.

You are conflating two different issues, which are essentially
totally unrelated.  There is the smallest size block an RIR will
allocate out of some chunk of address space, and then there is how
people announce it on the Internet.  In the real world they have
almost nothing to do with each other, something folks understand today
in IPv4 but seem to think IPv6 magically fixes; it doesn't.

[Historically there were folks who maintained filters on IPv4 space, but
they gradually disappeared as the filters became so long they were
unmaintainable, and people discovered that when your job is to connect
people, throwing away routes is a bad thing.]

For instance, there are folks who could use the multiple discrete
networks policy to get a /48 for each of their 5 sites.  But instead
they get one /32, use a /48 at each site, and announce them
independently.  Same prefixes in the table, but filtering on the
RIR /32 boundary means you won't hear them.

I'll point out it's not just longer, but shorter prefixes as well:

 ipv6 prefix-list ipv6-ebgp-strict permit 2001:500::/30 ge 48 le 48

F-Root announces 2001:4f8:500:2e::/47.  You're going to miss it.
There are other servers in this block that are in /47's or /46's.

If connectivity is what you value, here's the right filter:

ipv6 prefix-list ipv6-ebgp-permissive permit 2001::/12 ge 13 le 48

Yes, the DOD has a /13, and yes, people expect to be able to announce
down to a /48.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: /. Terabit Ethernet is Dead, for Now

2012-09-27 Thread Leo Bicknell
In a message written on Thu, Sep 27, 2012 at 08:58:09AM -0400, Darius 
Jahandarie wrote:
 I recall 40Gbit/s Ethernet being promoted heavily for similar reasons
 as the ones in this article, but then 100Gbit/s being the technology
 that actually ended up in most places. Could this be the same thing
 happening?

Everything I've read sounds like a repeat of the same broken decision
making that happened last time.

That is unsurprising though, the same people are involved.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: The Department of Work and Pensions, UK has an entire /8

2012-09-19 Thread Leo Bicknell
In a message written on Tue, Sep 18, 2012 at 09:11:50PM -0700, Mike Hale wrote:
 I'd love to hear the reasoning for this.  Why would it be bad policy
 to force companies to use the resources they are assigned or give them
 back to the general pool?

While I personally think ARIN should do more to flush out addresses
that are actually _not in use at all_, the danger here is very
clear.

Forcing the return of address space that is in use but not in the
global default free routing table is making a value judgement
about the use of that address space.  Basically it is the community
saying that using public address space for private, but possibly
interconnected networks is not a worthy use of the space.

For a few years the community tried to force name-based virtual
hosting on the hosting industry, rather than burning one IP address
per host.  That also was a value judgement that turned out to not
be so practical, as people use more than plain HTTP in the hosting
world.

The slippery slope argument is: where does this hunt for underutilized
space stop?  Disconnected networks are bad?  Name-based hosting is
required?  Carrier grade NAT is required for end user networks?
More importantly, are the RIRs set up to make these value judgements
about the usage as they get more and more subjective?

There's also an ROI problem.  People smarter than I have done the
math, and figured out that if X% of the address space can be reclaimed
via these efforts, that gains Y years of address space.  Turns out
Y is pretty darn small no matter how aggressive the search for
underutilized space.  Basically the RIRs would have to spin up
more staff and, well, harass pretty much every IP holder for a
couple of years just to delay the transition to IPv6 by a couple
of years.  In the short term moving the date a couple of years may
seem like a win, but in the long term it's really insignificant.
It's also important to note that RIRs are paid for by the users;
the ramp-up in staff and legal costs of such an effort falls back
on the community.  Is delaying IPv6 adoption worth having RIR fees
double?

If the policy to get companies to look at and return such resources
had been investigated 10-15 years ago it might have been something
that could have been done in a reasonable way with some positive
results.  It wasn't though, and rushing that effort now just doesn't
make a meaningful difference in the IPv4-IPv6 transition, particularly
given the pain of a rushed implementation.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: Big Temporary Networks

2012-09-14 Thread Leo Bicknell
In a message written on Fri, Sep 14, 2012 at 11:53:01AM -0400, Jay Ashworth 
wrote:
 Yes, and I'm told by my best friend who did attend (I didn't make it
 this year) that the hotel wired/wifi was essentially unusable, every
 time he tried.  Hence my interest in the issue.

I find more and more hotel networks are essentially unusable for
parts of the day, conference or no.  Of course, bring in any geek
contingent with multiple devices and heavy usage patterns and the
problems get worse.

What I find most interesting is more often than not the problem
appears to be an overloaded / undersized NAT/Captive portal/DNS
Resolver system.  Behaviors like existing connections working fine,
but no new ones can be created (out of ports on the NAT?).  While
bandwidth is occasionally an issue, I've found an ssh tunnel out
to some other end point solves the issues in 9 out of 10 cases.

I wonder how many hotels upgrade their bandwidth but not the gateway,
get a report that their DS-3/OC-3/Metro-E is only 25% used, and think
all is well.  Meanwhile half their clients can't connect to anything
due to the gateway device.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: NANOG poll: favorite cable labeler?

2012-08-22 Thread Leo Bicknell
In a message written on Wed, Aug 22, 2012 at 01:50:08PM +, Jensen Tyler 
wrote:
 What are people using to label LC jumpers (simplex and duplex)? I like the 
 Sheet idea from Leo but does it work well with things not cat5?

It works fine with fiber, you just get less space.  If you think
about how the label wraps around, part of the bottom of the white
tab wraps over part of the top.

With a small amount of trial and error it's quite easy to hand-write
or make a machine print at the right spacing on the label to get
the words to line up with the flat sides of a duplex fiber jumper.

With simplex you get like one line if you wrap it tight to the cable.
You can also flag them: put the simplex jumper at the bottom of the white,
wrap the clear around the back and up and over the written part so
it stands out like a flag.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: Comcast vs. Verizon for repair methodologies

2012-08-21 Thread Leo Bicknell
In a message written on Tue, Aug 21, 2012 at 10:22:54AM -0500, Daniel Seagraves 
wrote:
 Comcast annoys me. I never have any problems with the people you get when you 
 call in, or the tech support people, but their contractors STINK. 

Whenever techs come to my house I like to ask them about their job.
I find it's fun idle chatter, and you can learn a lot.

In my area the standard residential work for both AT&T and Comcast
is done by contractors.  In both cases they get paid by the job.
IIRC the Comcast tech got like $65 to come out and do a residential
install.  Didn't matter if it took 5 minutes or 5 hours, he got the
same money, provided the service was turned up.

The incentives here should immediately be clear.  If your job is
as simple as plugging in a box and making sure it comes up they are
happy as a clam, will chit chat for a while, check another outlet
with their tester, and so on.

Now imagine if you have a problem where a new line needs to be run,
and it's a pain to run the line so it takes 3 hours to get it placed
properly.  The guy is not only killing his hourly rate on your
install, but also missing the next one where he could make another
$65.  The tech will not be happy, and will do anything to get out
of there as fast as possible, provided the service works.

In my area business class dispatches full-time staff from both AT&T
and Comcast, and as far as I can tell they get paid hourly no matter
how long the job takes.  If it's a 5 hour repair they are happy to get
it done right.

As a result when my cable goes out I always call in the outage on my
business class cable modem rather than my residential TV service.

It's a classic follow the money situation, and I'm sure the bean
counters in higher management look only at $/truck roll and how to
minimize that metric, not paying much attention to truck rolls/sub,
uptime, or customer satisfaction.  People work towards the incentives
that have been set.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: NANOG poll: favorite cable labeler?

2012-08-21 Thread Leo Bicknell

Fancy printers are fine, but if you buy sheets of laser printer self
laminating labels you have the choice of printing on a laser printer,
or simply using a sharpie (fine point recommended) to hand-write a label
and then self laminate.

http://www.cablelabelsusa.com/category/PER-SHEET-16

Keeping a sheet in your bag makes for quick, cheap, durable labels
that require no batteries or machine to use.

Any machine that prints on the same type is a nice upgrade for
high volume labeling.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: Comcast vs. Verizon for repair methodologies

2012-08-20 Thread Leo Bicknell
In a message written on Mon, Aug 20, 2012 at 04:12:22PM -0400, Patrick W. 
Gilmore wrote:
 The story: A piece of underground cable went bad.  The techs didn't pull new 
 underground cable.  They decided it was better to do it arial (if you can 
 call 2 feet arial).  They took apart the two pedestals on either side of 
 the break and ran a new strand of RG6 (yes, the same stuff you use inside 
 your home, not the outside-plant rated stuff) tied to trees with rope.

Why is that cable still in place?

That's a hint, not really a question. :)

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: Any Idea About Spectrum-DMR-104-1 ?!

2012-08-16 Thread Leo Bicknell
In a message written on Thu, Aug 16, 2012 at 11:30:40PM +0430, Shahab 
Vahabzadeh wrote:
 Dear Owen,
 Thanks for your reply, in reply to your factors:
 
 1. 1~2 Kilometers
 2. PTP
 3. Directional
 4. 29db Dish (single or dual)

I know someone already pointed you to the product, but that just
screams like what you want is the Ubiquiti airFiber product.  You
should easily get near the max 1.4Gbps throughput at 1-2km if you
have clear line of sight.  It's plug and play, in that you should
have to do very minimal tuning to get that performance.  Mostly
making sure the two units are aligned properly.

I've not gotten a quote myself, but the Internet forums suggest the
gear is $3k for a single link (so two units).  Just to do high quality
802.11n with dish antennas would probably cost $1k or more.

The 24GHz band they use should be worldwide license-free (check with
your country) and also have less interference than the 5GHz band.

http://www.ubnt.com/airfiber

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: DNS Changer items

2012-08-15 Thread Leo Bicknell
In a message written on Wed, Aug 15, 2012 at 10:46:52AM +0100, Stephen Wilcox 
wrote:
 https://www.ripe.net/internet-coordination/news/clarification-on-reallocated-ipv4-address-space-related-to-dutch-police-order

From the article:

] The address space was quarantined for six weeks before being returned to
] the RIPE NCC's available pool of IPv4 address space. It was then
] randomly reallocated to a new resource holder according to normal
] allocation procedures.
] 
] As the RIPE NCC nears IPv4 exhaustion, it will reduce the quarantine
] period of returned address space accordingly to ensure that there is no
] more IPv4 address space available before the last /8 is reached. The
] RIPE NCC recognises that this shortened quarantine could lead to
] routability problems and offers its members assistance to reduce this.

While I understand that in the face of IPv4 exhaustion long quarantine
periods are probably no longer a good idea, I think 6 weeks is
shockingly short.  I also think blanket-applying the quarantine is
a little short-sighted; there are cases that need a longer cooling
off period, and this may be one of them.

I think the RIPE membership, and indeed the policy-making bodies
of all RIRs, should look at their re-allocation policies with this
case in mind and see if a corner case like this doesn't present a
surprising result.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: DNS Changer items

2012-08-15 Thread Leo Bicknell
In a message written on Wed, Aug 15, 2012 at 08:01:15AM -0700, joel jaeggli 
wrote:
 Remediation of whatever wrong with a given prefix is an active activity, 
 it's not likely to go away unless the prefix is advertised.

Actually, that's not true on two fronts.

On the business relationship front, if the problem is contacting
the right people when the right people have been arrested, and
some police agency now needs to generate the right paperwork, produce
court paperwork, and see a judge, time will absolutely help.  I can see
a scenario here where it might have been worked out to transfer the
block to the appropriate law enforcement agency for a year (with
them paying the usual fees) such that they could wind this down in
an orderly way.

If the problem is technical badness (the block has appeared on
blacklists or grey lists, or been placed into temporary filters
to block DNS Changer badness), time will also help.  Most (although
not all) of those activities are aged out.  As ISPs stop seeing
hits on their DNS Changer ACLs because the machines have been
cleaned up they will remove them.  Greylists will age out.

Indeed, both of these are why there is a cooling off period in place
now at all RIRs.  They have been proven to work.  Previously in
some cases they were 6-12 months though, and what the community has
said is that given that we're out of IPv4 those time periods should
be shorter.  The question becomes how much shorter?  Clearly holding
them back for 1 day isn't long enough to make any business or
technical difference.  The community is saying 6-12 months is too
long.

I am saying 6 weeks sounds too short to me, but if it is appropriate
for ordinary blocks there needs to be an exception for extraordinary
ones.  From time to time we hear about blocks like DNSChanger that
millions of boxes are configured to hit, or I remember the University
of Wisconsin DDOSed by NTP queries from some consumer routers.  When
the block still has high levels of well known, active badness, perhaps
it should be held back longer.

 In the case of dns changer, I would think that if you don't have working 
 DNS for long enough you're going to have your computer fixed or throw it 
 out. if you were an operator using that prefix to prevent customer 
 breakage you should be on notice that's not sustainable indefinitely or 
 indeed for much longer.

The problem here isn't just the infected computers.  Would you want to
receive a netblock from an RIR that came with tens or hundreds of
megabits of DDOS, I mean, background noise when you turned it on?
Whoever receives this block is in for a world of hurt.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: Does anyone use anycast DHCP service?

2012-08-13 Thread Leo Bicknell
In a message written on Mon, Aug 13, 2012 at 08:51:09AM +, Joe wrote:
 We are considering setup  reduant DHCP server clusers by using anycast.

I already see people pointing out problems with Anycast here, but
no one pointing out the best available solution.

Assuming your DHCP servers are properly clustered, simply have your
routers relay all requests to both servers.  Here are instructions
on setting up ISC DHCPD for redundant (pooled) servers:
http://www.madboa.com/geek/dhcp-failover/

Then configure your routers to send to both DHCP servers with
multiple helper-address lines:

interface Gig0/0
  ip helper-address 10.0.0.1
  ip helper-address 10.128.0.1

The way this works is that when a box comes up the router sends DHCP
requests to both servers.  The DHCP server that responds first will
be used by the client, which will complete negotiation with that
server via unicast.  The two DHCP servers will then synchronize
their pools.
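For reference, the ISC side of that (per the guide above) boils down to a
failover declaration plus a reference in each pool.  A rough, untested
sketch, reusing the example server addresses from the helper lines above;
subnet, range, and timers are placeholders you would tune:

# dhcpd.conf on the primary; the secondary is the same minus mclt/split
# and with "secondary" in place of "primary"
failover peer "dhcp-failover" {
  primary;
  address 10.0.0.1;
  peer address 10.128.0.1;
  port 647;
  peer port 647;
  max-response-delay 60;
  max-unacked-updates 10;
  mclt 3600;
  split 128;
}

subnet 192.168.100.0 netmask 255.255.255.0 {
  option routers 192.168.100.1;
  pool {
    failover peer "dhcp-failover";
    range 192.168.100.50 192.168.100.200;
  }
}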

Works great, no single point of failure, no anycast.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: Does anyone use anycast DHCP service?

2012-08-13 Thread Leo Bicknell
In a message written on Mon, Aug 13, 2012 at 08:54:09AM -0500, Ryan Malayter 
wrote:
 1) No third-party witness service for the cluster, making
 split-brain scenarios a very real possibility.

The ISC implementation is designed to continue to work with a split
brain.  I believe the Microsoft solution is as well, but I know
less about it.  There's no need to detect if the redundant pair
can't communicate as things continue to work.  (With some caveats,
see below.)

 2) Multi-master databases are quite challenging in practice. This one
 appears to rely on timestamps from the system clock for conflict
 detection, which has been shown to be unreliable time and again in the
 application space.

You are incorrect.  The ISC implementation divides the free addresses
between the two servers.  The client will only interact with the
first to respond (literally, no timestamps involved).  Clients
talking to each half of a split brain can continue to receive
addresses from the shared range, no timestamps are needed to resolve
conflicts, because the pool was split prior to the loss of
server-to-server communication.

There is a downside to this design, in that if half the brain goes
away, half of the free addresses become unusable with it until it
resynchronizes.  This can be mitigated by oversizing the pools.

 3) There are single points of failure. You've traded hardware as a
 single point of failure for bug-free implementation of clustering
 code on both DHCP servers as a single point of failure. In general,
 software is far less reliable than hardware.

Fair enough.

However I suspect most folks are not protecting against hardware
or software failures, but rather circuit failures between the client
and the DHCP servers.

I've actually never been a huge fan of large, centralized DHCP
servers, clustered or otherwise.  Too many eggs in one basket.  I
see how it may make administration a bit easier, but it comes at
the cost of a lot of resiliency.  Push them out to the edge, make
each one responsible for a local network or two.  The impact of an
outage is much lower.  If the router provides DHCP, the failure
modes work together: the router goes down, so does the DHCP server.

I think a lot of organizations only worry about the redundancy of
DHCP servers because the entire company is dependent on one server
(or cluster), and the rest of their infrastructure is largely
non-redundant.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: BGPttH. Neustar can do it, why can't we?

2012-08-06 Thread Leo Bicknell
In a message written on Mon, Aug 06, 2012 at 01:49:07AM -0500, jamie rishaw 
wrote:
 discuss.

It's a short-sighted result created by the lack of competition.

IP access is a commodity service, with thin margins that will only
get thinner.  Right now those margins are being propped up in many
locations by monopoly or near-monopoly status, which creates a
situation where companies neither need to compete on features and
service quality nor turn to those areas to maintain a profit.

If there was meaningful competition, the margin on raw IP access
would decline and companies would have to turn to value-add services
to maintain a profit margin.  From the simple up-sell of a static
IP address that some do today, to a fee for BGP, a fee for DDOS
mitigation, and so on.  These are all things it's not uncommon to
see when buying service in carrier-neutral colos where there is
competition.

There is no technological problem here.  BGP to the end user has a
cost.  The current business climate is causing people to cut all
possible costs and offering no incentive to innovate and up-charge.

Which leads to an interesting question.  If on top of your $100/month
business class Internet the provider were to charge $50/month for
BGP Access to cover their costs of having a human configure the
session, larger access gear to handle the routes, and larger backbone
gear to deal with a larger routing table, would you still be as
gung ho about BGP to the business?

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: BGPttH. Neustar can do it, why can't we?

2012-08-06 Thread Leo Bicknell
In a message written on Mon, Aug 06, 2012 at 10:05:30AM -0500, Chris Boyd wrote:
 Speaking as someone who does a lot of work supporting small business IT, I 
 suspect the number is much lower.  As a group, these customers tend to be 
 extremely cost averse.  Paying for a secondary access circuit may become 
 important as cloud applications become more critical for the market segment, 
 but existing smart NAT boxes that detect primary upstream failure and switch 
 over to a secondary ISP will work for many cases.  Yes, it's ugly, but it 
 gets them reconnected to the off-site email server and the payment card 
 gateway.

I don't even think the dual-uplink NAT box is that ugly of a solution.
Sure it's outbound only, but for a lot of applications that's fine.

However, it causes me to ask a different question: how will this
work in IPv6?  Does anyone make a dual-uplink IPv6 aware device?
Ideally it would use DHCP-PD to get prefixes from two upstream
providers and would make both available on the local LAN.  Conceptually
it would then be easy to policy route traffic to the correct provider.
But of course the problem comes down to the host: it now needs to
know how to switch between source addresses in some meaningful way,
and the router needs to be able to signal it.

As messy as IPv4 NAT is, it seems like a case where IPv6 NAT might
be a relatively clean solution.  Are there other deployable, or nearly
deployable solutions?

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: BGPttH. Neustar can do it, why can't we?

2012-08-06 Thread Leo Bicknell
In a message written on Mon, Aug 06, 2012 at 05:51:02PM +0100, Mike Jones wrote:
 If you have a router that sends out RAs with lifetime 0 when the
 prefix goes away then this should be deployable for poor mans
 failover (the same category I put IPv4 NAT in), however there are

If your provider does Unicast RPF strict mode, which I hope _all_
end user and small business connections default to, this won't
work.  The traffic has to be policy-routed out based on the source
IP.
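For reference, strict-mode uRPF on the provider's access port is
typically just (IOS-style syntax shown as an example):

interface GigabitEthernet0/1
 ip verify unicast source reachable-via rx
 ipv6 verify unicast source reachable-via rx

Packets sourced from the other provider's prefix get dropped at the first
hop, which is why the CPE really does have to pick the exit based on the
packet's source address.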

Having the host stack do that is an acceptable solution (your dual
router model) I think, but I don't know of a single one that does
that today.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: Cisco Update

2012-07-05 Thread Leo Bicknell
In a message written on Thu, Jul 05, 2012 at 03:51:40PM +, Mario Eirea 
wrote:
 Has anyone seen this yet? Looks like Cisco was forcing people to join its 
 Cloud service through an update for it's consumer level routers.

Perhaps going right to the source would be educational:

http://home.cisco.com/en-us/cloud

The short version appears to be Cisco wanted to move to a model
where you could manage your home gateway remotely, and also store
settings that may (in the future) be able to be reused if you
replaced your device.  All in all it sounds a lot to me like Meraki's
solution (caveat: I've not used Meraki, just gotten the presentation).
There's probably even a market for this sort of service.

Where they appear to have gone horribly wrong is that several models
of Linksys routers with auto-update enabled downloaded this update and
moved to this new management model with no user intervention, notice,
or method of being downgraded.  Thus folks who didn't want these
features and may not have upgraded to them were caught by surprise, and
have been effectively forced to take the new features due to a lack of
downgrade path.

Technology wise it's pretty non-interesting.  Others have been doing
similar things.

From a customer relations point of view it's a total disaster, and
one that should have been entirely predictable.

I was never much of a fan of Linksys pre-Cisco, but post-Cisco it seems
to be in a non-stop downhill slide...

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: job screening question

2012-07-05 Thread Leo Bicknell
In a message written on Thu, Jul 05, 2012 at 01:02:08PM -0400, William Herrin 
wrote:
 You implement a firewall on which you block all ICMP packets. What
 part of the TCP protocol (not IP in general, TCP specifically)
 malfunctions as a result?
 
 My questions for you are:
 
 1. As an expert who follows NANOG, do you know the answer? Or is this
 question too hard?

I suspect you're looking for Path MTU Discovery as an answer.

 2. Is the question too vague? Is there a clearer way to word it?

I believe if you understand ICMP, it could be considered to be
vague.

For instance, blocking all ICMP means that if the network breaks
during communication and a Host/Net Unreachable is generated, the
connection will have to go through a timeout rather than an immediate
teardown.  Similarly, blocking ICMP Source Quench might break
throttling in the 3 TCP implementations in the world that do that.
:)

 3. Is there a better screening question I could pass to HR to ask and
 check the candidate's response against the supplied answer?

"A firewall is configured to block all ICMP packets and a system
 administrator reports problems with TCP connections not transferring
 data.  What is the most likely cause?"

ICMP Packet-Too-Big being dropped and breaking PMTU discovery is
the correct answer.
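If you want to turn the answer into a config teaching point, the usual
fix is to block ICMP selectively instead of wholesale.  An IOS-style
sketch (the ACL number and exact message list are just illustrative):

access-list 101 remark allow the ICMP that TCP/PMTUD depends on
access-list 101 permit icmp any any packet-too-big
access-list 101 permit icmp any any unreachable
access-list 101 permit icmp any any time-exceeded
access-list 101 deny   icmp any any
access-list 101 permit ip any any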

When I study for my CCIE recert every 2 years I find myself relearning
The Cisco Answer, rather than the right answer.  It's not that the
Cisco answers are often wrong per se, but they teach the most likely
causes of things and want them back as the right answer.  Cribbing
from their test materials and study guides puts the questions in familiar
terms that your candidates are likely to have seen, making them less
likely to be thrown off by the question.

Unless you want to throw them off.  Depends on the level of folks you
want to hire.  I would answer your question with "I would never
implement a firewall that breaks all TCP." :)

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: job screening question

2012-07-05 Thread Leo Bicknell
In a message written on Thu, Jul 05, 2012 at 08:32:46PM -0400, William Herrin 
wrote:
 What's an ethernet collision domain? Seriously, when was the last time
 you dealt with a half duplex ethernet?

5 segments
4 repeaters
3 segments with transmitting hosts
2 transit segments
1 collision domain

If any employer thought that was useful knowledge for a job today I
would probably run away, as fast as possible!

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: job screening question

2012-07-05 Thread Leo Bicknell
In a message written on Thu, Jul 05, 2012 at 11:05:21PM -0400, William Herrin 
wrote:
 Incidentally, 100m was the segment limit. IIRC the collision domain
 comprising the longest wire distance between any two hosts was larger,
 something around 200m for fast ethernet. Essentially, the collision

Actually it can be much longer; I worked on a longer such ethernet
many, many moons ago.

The longest spec-compliant, repeater-only network looks like:

   |
   | Host Segment
   | 
   + Copper to Fiber Repeater
   |
   | 2km fiber, no hosts
   |
   + Copper to Fiber Repeater
   |
   | Host Segment, with or without hosts
   |
   + Copper to Fiber Repeater
   |
   | 2km fiber, no hosts
   |
   + Copper to Fiber Repeater
   |
   | Host Segment
   | 

With 10base5, a copper segment can be 500m, so 500+2000+500+2000+500, or
5.5km.

With 10base2, a copper segment can be 185m, so 185+2000+185+2000+185, or
4.5km.

With 10baseT, a copper segment can be 100m, so 100+2000+100+2000+100, or
4.3km.

The introduction of fiber repeaters is why folks started to use the
broken term "half repeater".  This was so folks who learned the
rules as "2 repeaters in the path" could deal with the fact that
it's actually the 5-4-3 rule, so they called the 4 repeaters two
"half repeaters".

Of course, each repeater could be a multi-port repeater (or a hub in
10baseT speak) and thus have a star configuration off of it in the
diagram.

Add in a couple of 2-port bridges to reframe things, and it's quite
possible to run a layer 2 ethernet that is tens of km long and has
thousands of hosts on it.  There was a day when 3000-4000 hosts on
a single layer 2 network at 10Mbps was living large.

Thankfully, not anymore.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: F-ckin Leap Seconds, how do they work?

2012-07-03 Thread Leo Bicknell
In a message written on Tue, Jul 03, 2012 at 10:47:52PM +0200, Eugen Leitl 
wrote:
 Notice that in inertial frame dragging context it's provably
 impossible to synchronize oscillators. Luckily, Earth has
 negligible frame dragging, for the kind of accuracy we
 currently need.

I think everyone on this list is going in the wrong direction with
this issue.  What you're all arguing over is the correct time, for
some definition of correct.

I'm a bit more practical.  How about we write software so a leap
second doesn't crash everything?  We can then let the time nuts
get back to arguing over which leap seconds we should use, or which
time reference, or whatever.

I'd even take "off by a second but didn't crash" over "crashed".

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: FYI Netflix is down

2012-07-02 Thread Leo Bicknell
In a message written on Mon, Jul 02, 2012 at 11:30:06AM -0400, Todd Underwood 
wrote:
 from the perspective of people watching B-rate movies:  this was a
 failure to implement and test a reliable system for streaming those
 movies in the face of a power outage at one facility.

I want to emphasize _and test_.

Work on an infrastructure which is redundant and designed to provide
100% uptime (which is impossible, but that's another story) means
that there should be confidence in a failure being automatically
worked around, detected, and reported.

I used to work with a guy who had a simple test for these things,
and if I was a VP at Amazon, Netflix, or any other large company I
would do the same.  About once a month he would walk out on the
floor of the data center and break something.  Pull out an ethernet.
Unplug a server.  Flip a breaker.

Then he would wait, to see how long before a technician came to fix
it.

If these activities were service impacting to customers, the engineering
or implementation was faulty, and remediation was performed.  Assuming
they acted as designed and the customers saw no faults, the team was
graded on how quickly they detected and corrected the outage.

I've seen too many companies whose test is planned months in advance,
and who exclude the parts they think aren't up to scratch from the test.
Then an event occurs, and they fail, and take down customers.

TL;DR If you're not confident your operation could withstand someone
walking into your data center and randomly doing something, you are
NOT redundant.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: FYI Netflix is down

2012-07-02 Thread Leo Bicknell
In a message written on Mon, Jul 02, 2012 at 12:13:22PM -0400, david raistrick 
wrote:
 you mean like this?
 
 http://techblog.netflix.com/2011/07/netflix-simian-army.html

Yes, Netflix seems to get it, and I think their Simian Army is a
great QA tool.  However, it is not a complete testing system; I
have never seen them talk about testing non-software components,
and I hope they do that as well.  As we saw in the previous Amazon
outage, part of the problem was a circuit breaker configuration.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: FYI Netflix is down

2012-07-02 Thread Leo Bicknell
In a message written on Mon, Jul 02, 2012 at 12:23:57PM -0400, david raistrick 
wrote:
 When the hardware is outsourced how would you propose testing the 
 non-software components?  They do simulate availability zone issues (and 
 AZ is as close as you get to controlling which internal power/network/etc 
 grid you're attached to).

Find a provider with a similar methodology.  Perhaps Netflix never
conducts a power test, but their colo vendor would perform such
testing.

If no colo providers exist that share their values on testing, that
may be a sign that outsourcing it isn't the right answer...

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: How to fix authentication (was LinkedIn)

2012-06-22 Thread Leo Bicknell
In a message written on Thu, Jun 21, 2012 at 04:48:47PM -1000, Randy Bush wrote:
 there are no trustable third parties

With a lot of transactions the second party isn't trustable, and
sometimes the first party isn't as well. :)

In a message written on Thu, Jun 21, 2012 at 10:53:18PM -0400, Christopher 
Morrow wrote:
 note that yubico has models of auth that include:
   1) using a third party
   2) making your own party
   3) HOTP on token
   4) NFC
 
 they are a good company, trying to do the right thing(s)... They also
 don't necessarily want you to be stuck in the 'get your answer from
 another'

Requirements of hardware or a third party are fine for the corporate
world, or sites that make enough money or have enough risk to invest
in security, like a bank.

Requiring hardware for a site like Facebook or Twitter is right
out.  It does not scale, and you can't ship a token to the guy in
Pakistan or McMurdo who wants to sign up.  Trusting a third party
becomes too expensive, and too big of a business risk.

There are levels of security here.  I don't expect Facebook to take
the same security steps as my bank to move my money around.  One
size does not fit all.  Making it so a hacker can't get 10 million
login credentials at once is a quantum leap forward even if doing
so doesn't improve security in any other way.

The perfect is the enemy of the good.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: LinkedIn password database compromised

2012-06-21 Thread Leo Bicknell

I want to start by saying there are lots of different security problems
with accessing a cloud service.  Several folks have already brought up
issues like compromised user machines or actually verifying identity.

One of the problems in this space I think is that people keep looking
for a silver bullet that magically solves all the problems.  It doesn't
exist.  We need a series of technologies that work with each other.

In a message written on Thu, Jun 21, 2012 at 10:43:44AM -0400, AP NANOG wrote:
 How will this prevent man in the middle attacks, either at the users 
 location, the server location, or even on the compromised server itself 
 where the attacker is just gathering data.  This is the same concerns we 
 face now.

There is a sign up problem.  You could sign up with a MTM web site,
which then signs you up with the real web site.

There are a number of solutions: you can try to prevent the MTM attack
with something like DNSSEC, and/or verify the identity of the web site with
something like X.509 certificates verified by a trusted party.  The
first relationship could exchange public keys in both directions, making
the attack a sign-up attack only; once the relationship is established
it's public keys in both directions and, if done right, impervious to a
MTM attack.

Note that plenty of corporations hijack HTTPS today, so MTM attacks
are very real and work should be done in this space.

 Second is regarding the example just made with bickn...@foo.com and 
 super...@foo.com.  Does this not require the end user to have virtually 
 endless number of email addresses if this method would be implemented 
 across all authenticated websites, compounded by numerous devices 
 (iPads, Smartphones, personal laptop, work laptop, etc..)

Not at all.  Web sites can make the same restrictions they make
today.  Many may accept my bickn...@ufp.org key and let me use
that as my login.  A site like gmail or hotmail may force me to use
something like bickn...@gmail.com, because it actually is an e-mail,
but it could also give me the option of using an identifier of my
choice.

While I think use of e-mails is good for confirmation purposes, a
semi-anonymous web site that requires no verification could allow
a signup with "bob" or some other unqualified identifier.

It's just another name space.  The browser is going to cache a mapping
from web site, or domain, to identifier, quite similar to what it does
today...

Today:
  www.facebook.com, login: bob, password: secret

Tomorrow:
  www.facebook.com, key: bob, key-public: ..., key-private: ...


-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: LinkedIn password database compromised

2012-06-20 Thread Leo Bicknell
In a message written on Wed, Jun 20, 2012 at 03:30:58PM -0400, AP NANOG wrote:
 So the question falls back on how can we make things better?

Dump passwords.

The tech community went through this back in oh, 1990-1993 when
folks were sniffing passwords with tcpdump and sysadmins were using
Telnet.  SSH was developed, and the problem was effectively solved.

If you want to give me access to your box, I send you my public
key.  In the clear.  It doesn't matter if the hacker has it or not.
When I want to log in I authenticate with my private key, and I'm
in.

The leaks stop immediately.  There's almost no value in a database of
public keys; heck, if you want one go download a PGP keyring now.  I can
use the same password (key) for every web site on the planet, and web
sites no longer need to enforce dumb rules (one letter, one number, one
character your fingers can't type easily, minimum 273 characters).

SSL certificates could be used this way today.

SSH keys could be used this way today.

PGP keys could be used this way today.

What's missing?  A pretty UI for the users.  Apple, Mozilla, W3C,
Microsoft IE developers and so on need to get their butts in gear
and make a pretty UI to create personal key material, send the
public key as part of a sign up form, import a key, and so on.

There is no way to make passwords secure.  We've spent 20 years
trying, simply to fail in more spectacular ways each time.  Death to
traditional passwords, they have no place in a modern world.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: LinkedIn password database compromised

2012-06-20 Thread Leo Bicknell
In a message written on Wed, Jun 20, 2012 at 02:19:15PM -0700, Leo Vegoda wrote:
 Key management: doing it right is hard and probably beyond most end users.

I could not be in more violent disagreement.

The first time a user goes to sign up on a web page, the browser should
detect it wants a key uploaded and run a simple wizard:

  - Would you like to create an online identity for logging into web
    sites?   Yes / No / Import

If the user says Yes, it creates a key, asking for an e-mail address to
identify it.  Import lets them drag one in from some other program or
format; with No you can't sign up.

The browser now says "would you like to sign up for website 'foobar.com'",
and if the user says yes it submits their public key including the
e-mail they are going to use to log on.  The user doesn't even fill out
a form at all.

The web site still does the usual "click this link to verify you want
to sign up" e-mail to the user.

User goes back to web site later, browser detects auth needed and
public key foo accepted, presents the cert, and the user is logged in.

Notice that these steps _remove_ the user filling out forms to sign up
for simple web sites, and filling out forms to log in.  Anyone who's
used cert-based auth at work is already familiar, the web site
magically knows you.  This is MUCH more user friendly.

So the big magic here is the user has to click on yes to create a key
and type in an e-mail once.  That's it.  There's no web of trust.  No
identity verification (a la SSL).  I'm talking about a very SSH-like
system, but with more polish.

Users would find it much more convenient and wonder why we ever used
passwords, I think...

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: LinkedIn password database compromised

2012-06-20 Thread Leo Bicknell
In a message written on Wed, Jun 20, 2012 at 03:05:17PM -0700, Aaron C. de 
Bruyn wrote:
 You're right.  Multiple accounts is unpossible in every way except
 prompting for usernames and passwords in the way we do it now.
 The whole ssh-having-multiple-identities thing is a concept that could
 never be applied in the browser in any sort of user-friendly way.
 /sarcasm

Aw come on guys, that's really not hard, and code is already in the
browsers to do it.

If you have SSL client certs and go to a web site which accepts
multiple domains you get a prompt: "Would you like to use identity
A or identity B?"  Power users could create more than one identity
(just like more than one SSH key).  Browsers could even generate
them behind the scenes for the user: "create new account at foo.com"
tells the browser to generate bickn...@foo.com and submit it.  If
I want another, a quick trip to the menu creates super...@foo.com
and saves it.  When I go to log back in the web site would say "send
me your @foo.com signed info."

Seriously, it's not that hard to do and make seamless for the user; it's
all UI work, and a very small amount of protocol (HTTP header probably)
update.

In a message written on Wed, Jun 20, 2012 at 02:54:10PM -0700, Matthew Kaufman 
wrote:
 Yes. Those users who have a single computer with a single browser. For 
 anyone with a computer *and* a smartphone, however, there's a huge 
 missing piece. And it gets exponentially worse as the number of devices 
 multiplies.

Yeah, and no one has that problem with a password.

Ok, that was overly snarky.  However people have the same issue
with passwords today.  iCloud to sync them.  Dropbox and 1Password.
GoodNet.  Syncing certs is no worse than syncing passwords.

None of you have hit on the actual downside.  You can't (easily) log in
from your friend's computer, or a computer at the library, due to lack of
key material.  I can think of at least four or five solutions, but
that's the only hard problem here.

This has always failed in the past because SSL certs have been tied to
_Identity_ (show me your driver's license to get one).  SSH keys are NOT;
you create them at will, which is why they work.  You could basically
co-opt SSL client certs to do this with nearly zero code provided people
were willing to give up on the identity part of X.509, which is
basically worthless anyway.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: LinkedIn password database compromised

2012-06-20 Thread Leo Bicknell
In a message written on Wed, Jun 20, 2012 at 06:37:50PM -0400, 
valdis.kletni...@vt.edu wrote:
 I have to agree with Leo on this one.  Key management *is* hard - especially
 the part about doing secure key management in a world where Vint Cerf
 says there's 140M pwned boxes.  It's all nice and sugary and GUI-fied and
 pretty and Joe Sixpack can do it - till his computer becomes part of the 140M
 and then he's *really* screwed.

I'm glad you agree with me. :)  

That's no different than today.  Today Joe Sixpack keeps all his
passwords in his browser's cache.  When his computer becomes part of the
botnet the bot owner downloads that file, and also starts a keylogger to
get more passwords from him.

In the world I propose, when his computer becomes part of the botnet
they will download the private key material, same as before.

My proposal neither helps nor hurts; the problem of Joe Sixpack's
machine being broken into is orthogonal and not addressed.  It needs to
be, but not by what I am proposing.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/




Re: LinkedIn password database compromised

2012-06-20 Thread Leo Bicknell
In a message written on Thu, Jun 21, 2012 at 08:02:58AM +0900, Randy Bush wrote:
 what is the real difference between my having holding the private half
 of an asymmetric key and my holding a good passphrase for some site?
 that the passphrase is symmetric?

The fact that it is symmetric leads to the problem.

The big drawback is that today you have to provide the secret to
the web site to verify it.  It doesn't matter if the secret is
transferred in the clear (e.g. http) or encrypted (e.g. https); they
have it in their RAM, or on their disk, and so on.  Today we _trust_
sites to get rid of that secret as fast as possible, by doing things
like storing a one-way hash and then zeroing the memory.

But what we see time and time again is that sites are lazy.  The
secret is stored in the clear.  The secret is hashed, but with a bad
hash and no salt.  Even if they are good guys and use SHA-256 with a
nice salt, an attacker who gets into their server can still intercept
the secret during processing.
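
For reference, here is roughly what the good-guy version looks like
today (this sketch uses PBKDF2 rather than a bare salted SHA-256, but
the point is the same): the plaintext password still has to reach the
server to be checked, which is exactly the exposure a signed-challenge
scheme avoids.

import hashlib, hmac, os

def store_password(password):
    # Per-user random salt plus an iterated hash, stored server side.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
    return salt, digest

def check_password(password, salt, digest):
    # The plaintext is in the server's RAM right here, on every login.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
    return hmac.compare_digest(candidate, digest)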

With a cryptographic solution the web site would say something like:

"Hi, it's 8:59 PM, transaction ID 1234, cookie ABCD, I am foo.com,
who are you?"

Your computer would (unknown to you) use foo.com to figure out that
bickn...@foo.com (or super...@foo.com) was your login, do some math,
and sign a response with your private key that says:

"Hi, I'm bickn...@foo.com, I agree it's 8:59 PM, transaction 1234,
cookie ABCD."

Even if the attacker has fully compromised the server end, they get
nothing.  There's no replay attack.  No shared secret they can use to
log into another web site.  Zero value.
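
A minimal sketch of that exchange, again with the Python cryptography
package (Ed25519 here, and the challenge format is just something I
made up for illustration):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import \
    Ed25519PrivateKey

# Client: key generated once; only the public half goes to the site.
client_key = Ed25519PrivateKey.generate()
server_copy_of_pubkey = client_key.public_key()

# Server: send a fresh, unpredictable challenge for this login.
challenge = b"site=foo.com;time=20:59;txn=1234;cookie=ABCD"

# Client: sign the challenge; the private key never leaves the client.
signature = client_key.sign(challenge)

# Server: verify.  Nothing stored here lets an attacker log in
# anywhere else, and a captured signature can't be replayed against
# a different challenge.
try:
    server_copy_of_pubkey.verify(signature, challenge)
    print("login ok")
except InvalidSignature:
    print("login rejected")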

 s/onto web sites/this web site/  let's not make cross-site tracking any
 easier than it is today.

Yep.  Don't get me wrong, there's an RFC or two here, and a few pages
of code in web servers and browsers.  I am not asserting this is a
trivial change that could be made by one guy in a few minutes.
However, I am suggesting this is an easy change that could be
implemented in weeks, not months.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpL49D9nWRoj.pgp
Description: PGP signature


Re: LinkedIn password database compromised

2012-06-07 Thread Leo Bicknell
In a message written on Wed, Jun 06, 2012 at 11:14:58PM -0700, Aaron C. de 
Bruyn wrote:
 Heck no to X.509.  We'd run into the same issue we have right now--a
 select group of companies charging users to prove their identity.

Why?

A user providing the public half of a self-signed certificate is
exactly the same as the user providing the public half of a
self-generated SSH key.

The fact that you can have a trust chain may be useful in some
cases.  For instance, I'm not at all opposed to the idea of the
government having a way to issue me a signed certificate that I
then use to access government services, like submitting my tax
return online, renewing my driver's license, or maybe even e-voting.

The X.509 certificates have an added bonus that they can be used
to secure the transport layer, something that your ssh-key-for-login
proposal can't do.

This is all a UI problem.  If Windows/OSX or Safari/Firefox/Chrome
prompted users to create or import a user certificate when first run,
and provided a one-click way to hand it to a form when signing up,
there would be a lot more incentive to use that method.  Today pretty
much the only place you see certificates for users is in enterprises
with Microsoft's certificate tools, because of the UI problem.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpWPTkGZcThO.pgp
Description: PGP signature


Re: Vixie warns: DNS Changer ‘blackouts’ inevitable

2012-05-31 Thread Leo Bicknell
In a message written on Thu, May 31, 2012 at 08:14:40AM -0500, cncr04s/Randy 
wrote:
 Exactly how much can it cost to serve up those requests... I mean for
 9$ a month I have a cpu that handles 2000 *Recursive* Queries a
 second. 900 bux could net me *200,000* a second if not more.
 The government overspends on a lot of things.. they need some one whos
 got the experience to use a bunch of cheap servers for the resolvers
 and a box that hosts the IPs used and then distributes the query
 packets.

The interesting bit with DNSChanger isn't serving up the requests,
but the engineering to do it in place.  Remember, all of the clients
are pointed to specific IP addresses by the malware.

The FBI comes in and takes all the servers because they are going
to be used in the court case, and then has to pay someone to figure
out how to stand a service back up at the exact same IPs serving
those infected clients in a way they won't notice.  This includes
working with the providers of the IP routing, IP address blocks,
colocation space and so on to keep providing the service.

In this case it was also pre-planned to be nearly seamless so that
end users would not see any downtime, and the servers had to be
fully instrumented to capture all of the infected client IP addresses
and report them to various parties for remediation, including
providing further evidence to the court for the legal proceedings.
The FBI also had to convince a judge this was the right thing to do,
so I'm sure someone had to pay some experts to explain all of this
to the court to make it happen.

I suspect the cost of the hardware to handle the queries is
negligible; I doubt that, of all the money spent, more than a few
thousand dollars went to the hardware.  It seems like the engineering
and coordination was rather significant here, and I'll bet that's
where all the money was spent.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpkAb1qwXgzp.pgp
Description: PGP signature


Re: HE.net BGP origin attribute rewriting

2012-05-31 Thread Leo Bicknell
In a message written on Thu, May 31, 2012 at 12:22:16PM -0500, Richard A 
Steenbergen wrote:
 out of the protocol. I don't see anyone complaining when we rewrite 
 someone else's MEDs, sometimes as a trick to move traffic onto your 
 network (*), or even that big of a complaint when we remove another 
 networks' communities, so I don't see why anyone cares about this one.

Take all the politics and contracts out of it, and look at MED from
a 100% pure engineering perspective, with the traditional view that
MED reflects IGP cost, and origin reflects where the route came
from in the first place.

I would argue the right engineering answer is that each network,
on outbound, should set the MED equal to its IGP cost.  Basically,
if an ASN gets 4 routes with 4 different MEDs at 4 peering points
and picks the best, when it passes that route on to the next AS a
metric reflecting an IGP cost one AS away no longer makes any sense.

If the behavior is for each ASN to inject their own MED on outbound,
then rewriting inbound or outbound is just an extension of the
entirely local policy anyway, no different than changing IGP metrics.
Don't want to reflect IGP metrics?  Rewrite to a fixed value.
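
A toy model of that argument, in Python rather than any router's
configuration language (all names are illustrative, and the best-path
selection is grossly simplified):

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Route:
    prefix: str
    med: int       # metric set by the AS that advertised it to me
    igp_cost: int  # my cost to reach the exit where I learned it

def best_of(routes):
    # Simplified: lowest MED wins among otherwise-equal paths.
    return min(routes, key=lambda r: r.med)

def export(route, fixed_med=None):
    # On outbound, inject my own MED: IGP cost, or a fixed value.
    new_med = route.igp_cost if fixed_med is None else fixed_med
    return replace(route, med=new_med)

learned = [Route("192.0.2.0/24", med=10, igp_cost=700),
           Route("192.0.2.0/24", med=50, igp_cost=20)]
# The peer's MED of 10 is meaningless one AS further on; what my
# neighbor needs is *my* cost (700) to the exit I actually chose.
print(export(best_of(learned)))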

The origin is different, at least conceptually.  The origin type
should reflect the state of the route before it went into BGP, a
property which does not change per-AS hop along the way.

That's why, with a pure engineering hat on, I would be much more
surprised/upset to see someone rewriting origin, while I would expect
them to be rewriting MED.

Of course the real world isn't 100% engineering based.  ISPs do
all sorts of weird and fun things, and customers can (usually) vote
with their dollars.  I don't have a problem with an ISP implementing
pretty much any BGP policy they want /provided they disclose it to
their BGP customers/.

Perhaps if a large number of people were a bit more rational with
their peering policies we wouldn't have engineers dedicated to
generating routing funkiness just to meet peering criteria.  It's not
helping anyone get reliable, high-performing network access.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpo5TzStqdZd.pgp
Description: PGP signature


Re: Vixie warns: DNS Changer ‘blackouts’ inevitable

2012-05-23 Thread Leo Bicknell
In a message written on Wed, May 23, 2012 at 12:35:05PM +0900, Randy Bush wrote:
 father of bind?  that's news.

I believe the error is in Paul Vixie's Wikipedia page, and I don't
do Wikipedia editing so I won't be fixing it.

http://en.wikipedia.org/wiki/Paul_Vixie

  In 1988, while employed by DEC, he started working on the popular
   internet domain name server BIND, of which he was the primary author and
   architect, until release 8.

ISC has spent some effort on properly documenting the history of
BIND, and the result of that effort is located at:

http://www.isc.org/software/bind/history

You'll note there are two full paragraphs and a dozen folks involved
before Paul had anything to do with BIND.

ISC is always interested in updating the history if folks have any
additional information.  Feel free to e-mail me if you think you have
something important to add.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpJ1lMQZ5bkZ.pgp
Description: PGP signature


Re: CDNs should pay eyeball networks, too.

2012-05-01 Thread Leo Bicknell
In a message written on Tue, May 01, 2012 at 08:23:07PM +0200, Dominik Bay 
wrote:
 Feeding via some bigger peer networks or classic transit

You have made the assumption that their choice is peering with your
network or sending it out transit.  They may in fact peer with your
upstream.

That makes their choice: peer with you, or peer with your upstream.
Peering with your upstream may allow them to reach many people like
you for the cost of managing only a single peering session, as
compared to maintaining a few dozen.

Also, many networks have minimum volume requirements for peering
relationships.  They may be able to get settlement-free peering
with your upstream by having some minimum traffic level that they
would not have if they peered with some of the individual customers
behind that upstream.  Peering with you may drop them below the
threshold, causing them to pay for transit on tens of gigabits of
traffic.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpBlUPztVq3L.pgp
Description: PGP signature


Re: Operation Ghost Click

2012-05-01 Thread Leo Bicknell
In a message written on Tue, May 01, 2012 at 07:41:35PM +, Livingood, Jason 
wrote:
 All of this above! Plus, the remediation tools to clean up an infection are 
 insufficient to the task right now. Better tools are needed. (See also 
 http://tools.ietf.org/html/rfc6561#section-5.4)

Hey Jason, I'm going to put you on the spot with a crazy idea.

Many customers of the major internet providers also have other
services from them, like TV and Phone.  Perhaps expanding the notice
to those areas would be useful?  Turn on your cable box and get a
notice, or pick up the phone and get a notice?

It might really help in cases where one member of the family (e.g.
the children) is doing something bad that the bill payer (e.g. mom
and dad) doesn't know about.  Hit them on a medium they know more
about.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpKcyNPrSODj.pgp
Description: PGP signature


Re: CDNs should pay eyeball networks, too.

2012-05-01 Thread Leo Bicknell
In a message written on Tue, May 01, 2012 at 03:45:29PM -0500, Jerry Dent wrote:
 Can be for the end users if they wind up on a less direct network path.

Direct is not the only measure.

I would take a 4-hop, 10GE, no packet loss path over a 1-hop, 1GE,
5% packet loss path any day of the week.

Shorter {hops, latency, as-path} does not mean a higher quality end
user experience.
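
A quick back-of-the-envelope illustration using the classic Mathis
et al. approximation for TCP throughput, MSS / (RTT * sqrt(loss));
the RTTs and loss rates below are made-up but plausible numbers:

import math

def tcp_throughput_bps(mss_bytes=1460, rtt_s=0.040, loss=0.0001):
    # Mathis approximation: throughput ~ MSS / (RTT * sqrt(loss)).
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss))

short_lossy = tcp_throughput_bps(rtt_s=0.010, loss=0.05)    # 1 hop, 5% loss
long_clean = tcp_throughput_bps(rtt_s=0.040, loss=0.0001)   # 4 hops, clean
print("%.0f Mbps vs %.0f Mbps" % (short_lossy / 1e6, long_clean / 1e6))
# Roughly 5 Mbps on the "direct" lossy path vs ~29 Mbps on the longer
# clean one, before the 1GE vs 10GE capacity difference even matters.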

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgp5GFrwqbZ0d.pgp
Description: PGP signature


Re: Most energy efficient (home) setup

2012-04-16 Thread Leo Bicknell
In a message written on Sun, Apr 15, 2012 at 09:54:14PM -0400, Luke S. Crawford 
wrote:
 On my current fleet (well under 100 servers)  single bit errors are so rare
 that if I get one, I schedule that machine for removal from production. 

In a previous life, in a previous time, I worked at a place that
had a bunch of Ciscos with parity RAM.  For the time, these boxes
had a lot of RAM, as they had distributed line cards, each with
their own processor and memory.

Cisco was rather famous for these parity errors, mostly because of
their stock answer: "sunspots."  The answer was in fact largely
correct, but it's just not a great response from a vendor.  They
had a bunch of statistics though, collected from many of these
deployed boxes.

We ran the statistics, and given hundreds of routers, each with
many line cards, the math told us we should see approximately one
router every 9-10 months take a single parity error from sunspots
and other random activity (i.e. not a failing RAM module with
hundreds of repeatable errors).  This was, in fact, close to what we
observed.
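
The arithmetic is simple enough to show; the per-module soft-error
rate below is an assumed, illustrative figure chosen to be consistent
with what we saw, not Cisco's actual statistic:

per_module_errors_per_year = 0.0004   # assumed soft-error rate per module
modules_per_router = 10               # line cards with their own RAM
routers = 300

fleet_errors_per_year = (per_module_errors_per_year
                         * modules_per_router * routers)
print("~%.1f errors/year fleet-wide, i.e. one about every %.0f months"
      % (fleet_errors_per_year, 12 / fleet_errors_per_year))
# -> ~1.2 errors/year, one roughly every 10 months.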

This experience gave me two takeaways.  First, single bit flips are
rare, but when you have enough boxes rare shows up often.  It's
very similar to running petabytes of storage: disks fail every
couple of days because you have so many of them.  At the same time
a home user might not see a failure in their lifetime (of disk or
memory).

Second, though, if you're running a business, ECC is a must because
the message you have to deliver is so bad.  "This was caused by
sunspots" is not a customer-inspiring response, no matter how
correct.  "We could have prevented this by spending an extra $50 on
proper RAM for your $1M box" is even worse.

Some quick looking at Newegg: 4GB DDR3 1333 ECC DIMM, $33.99; 4GB
DDR3 1333 non-ECC DIMM, $21.99.  Savings: $12.  (Yes, I realize the
motherboard also needs some extra circuitry; I expect it's less than
$1 in quantity though.)

Pretty much everyone I know values their data at more than $12 if it
is lost.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpxNc88GvYD9.pgp
Description: PGP signature


Re: Network Storage

2012-04-15 Thread Leo Bicknell
In a message written on Thu, Apr 12, 2012 at 05:16:27PM -0400, Maverick wrote:
 1) My goal is to store the traffic may be fore ever, and analyze it in
 the future for security related incidents detected by ids/ips.

Let's just assume you have enough disk space that you can write out
every packet, or even just every packet header.  That's a hard
problem, but you've received plenty of suggestions on how to go down
that path.

Once you have that data, how are you going to process it?

Yes, disk reads are faster than disk writes, but not by that much.
If it takes you 24 hours to write a day of data to disk, it might
take you 12 hours just to read it all back off and process it.
Processing a week's worth of back data could take days.  I'm also
not even starting to count the CPU and memory necessary to build
state tables and statistical analysis tables to generate useful
data.
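
Some quick sizing arithmetic makes the point; the capture rate and
the sequential read rate below are illustrative assumptions:

capture_bps = 400e6                    # 400 Mbps average on a 1GE link
bytes_per_day = capture_bps / 8 * 86400
print("%.1f TB written per day" % (bytes_per_day / 1e12))       # ~4.3 TB

read_rate_Bps = 200e6                  # ~200 MB/s sequential read
hours_to_reread = bytes_per_day / read_rate_Bps / 3600
print("%.0f hours just to read one day back" % hours_to_reread)  # ~6 h
# And that's before any state tables or statistics are built.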

There's a reason why most network traffic tools summarize early,
as early as on the network device itself when using Netflow-type
collection.  It's not just to save storage space on disk; it's to
make the processing fast enough that the results are still relevant
by the time the processing is complete.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpfUOXA2ZFtU.pgp
Description: PGP signature


Re: Cheap Juniper Gear for Lab

2012-04-11 Thread Leo Bicknell
In a message written on Tue, Apr 10, 2012 at 08:31:04PM -0500, Tim Eberhard 
wrote:
 While I know you are a smart engineer and obviously have been working
 with this gear for a long time you're really not adding anything or
 backing up your argument besides saying yet again the packet
 forwarding is different. While this maybe true..It's my understanding
 that enabling packet mode does turn it into a normal packet based
 junos.

I honestly don't remember what caused the problem when I ran into
it, but the first time I configured IPv6 on an SRX I used per-packet
mode and I had all sorts of problems.  After contacting Juniper
support and some friends who ran them, they all told me to configure
flow-based mode for IPv6, and it started working properly.  Juniper
support basically said IPv6 didn't work at all unless it was in flow
mode.

My vague memory, at least, is that OSPFv3 would not come up in
per-packet mode no matter what changes were made, but with flow
mode it came right up.

In any event, I will back up Owen on this one.  Any JunOS box with
a security {} section (which I think means it is of Netscreen
lineage) does a number of weird things when you're used to the JunOS
boxes without a security section.  For instance, they basically
default to a stateful firewall, so when I used a pair for redundancy
and had asymmetrical paths it took way too many lines of config (4-5
features that had to be turned off) to make it not stateful.  That's
a big surprise when you come from working on M-series.

Still, they are very nice boxes, particularly for the capabilities
you get at the price point.  It's just that darn security {} section
that seems to be quite poorly thought out; even the working parts
are laid out in a way that's not intuitive to me and don't seem to
match the rest of JunOS well.  Want to list a netblock?  You have to
put it in an address book.  Want to list two?  They have to be in an
address-book group; you can't just list them between brackets, and
so on.  It may be the only router platform where I turn to the web
GUI from time to time to configure things, otherwise it's an
exercise in frustration trying to get the syntax right.

-- 
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/


pgpiuZJqUZnvh.pgp
Description: PGP signature

