Sending ARP request to unicast MAC instead of broadcast MAC address?

2010-06-16 Thread Chris Woodfield
OK, this sounds Really Wacky (or, Really Hacky if you're into puns) but there's 
a reason for it, I swear...

Will typical OSS UNIX kernels (Linux, BSD, MacOS X, etc) reply to a crafted ARP 
request that, instead of having FF:FF:FF:FF:FF:FF as its destination MAC 
address, is instead sent to the already-known unicast MAC address of the host? 

Next, what would be your utility of choice for crafting such a packet? Or is 
this something one would need to code up by hand in a lower-level language?

Thanks,

-C


Re: Sending ARP request to unicast MAC instead of broadcast MAC address?

2010-06-17 Thread Chris Woodfield
Looks like all the replies I got were private, so thanks all - to summarize, I 
got everything from "Read The Fine Kernel Source" to "Read The Fine RFC" to 
"Read RFC 1122, Section 2.3.2.1, it's quite a Fine read." 

So for other folks out there like me who obviously can't read RFCs, the answer 
is yes. :)
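
As for the second half of the question - actually crafting the packet - scapy 
is one tool that can do it in a few lines. A minimal sketch, assuming Linux 
with root privileges; the interface name, target MAC and IP below are all 
placeholders:

    # unicast_arp.py - ARP who-has sent to a unicast MAC instead of broadcast
    from scapy.all import Ether, ARP, srp

    target_mac = "00:11:22:33:44:55"   # already-known MAC of the host (placeholder)
    target_ip  = "192.0.2.10"          # IP we want it to answer for (placeholder)

    # Ether() dst is the unicast MAC rather than ff:ff:ff:ff:ff:ff; the ARP
    # payload itself is still an ordinary who-has request.
    pkt = Ether(dst=target_mac) / ARP(op="who-has", pdst=target_ip)
    answered, _ = srp(pkt, iface="eth0", timeout=2)
    answered.summary()

tcpdump on the far end will tell you whether the kernel actually answers.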

-C

On Jun 16, 2010, at 3:57:51 PM, Chris Woodfield wrote:

 OK, this sounds Really Wacky (or, Really Hacky if you're into puns) but 
 there's a reason for it, I swear...
 
 Will typical OSS UNIX kernels (Linux, BSD, MacOS X, etc) reply to a crafted 
 ARP request that, instead of having FF:FF:FF:FF:FF:FF as its destination MAC 
 address, is instead sent to the already-known unicast MAC address of the 
 host? 
 
 Next, what would be your utility of choice for crafting such a packet? Or is 
 this something one would need to code up by hand in a lower-level language?
 
 Thanks,
 
 -C




40/100GbEthernet standard ratified

2010-06-23 Thread Chris Woodfield
So let us commence the shipping of stupidly overpriced silicon...802.3ba is an 
official IEEE standard.

http://www.businesswire.com/portal/site/home/permalink/?ndmViewId=news_view&newsId=20100621006382&newsLang=en

-C


Re: XO Routing

2010-09-16 Thread Chris Woodfield
The unconfirmed chatter I'm hearing is that they were leaking peering routes to 
other peers. Can anyone check and confirm this? Renesys?

-C

On Sep 16, 2010, at 9:09:12 AM, William Byrd wrote:

 XO Engineers are telling us that they are aware of packet loss across their
 network and are looking into it. We're experiencing slow/degraded
 connectivity out of St. Louis and Nashville but Atlanta and Dallas are
 problem free.
 
 William Collier-Byrd
 w...@collier-byrd.net
 
 
 On Thu, Sep 16, 2010 at 12:04 PM, Patrick W. Gilmore patr...@ianai.net wrote:
 
 This should probably be on outages@, but XO is definitely having problems
 to places like speedtest.com & RCN from Boston.
 
 --
 TTFN,
 patrick
 
 
 On Sep 16, 2010, at 11:59 AM, Charles Mills wrote:
 
 The internet health report is showing high latency to most of their
 peers.
 
 Chuck.
 
 On Sep 16, 2010 11:57 AM, Stefan Molnar ste...@csudsu.com wrote:
 
 Anyone know the impact on the XO Routing/Peering that is happening right
 now? We have had spotty connectivity for the last hour.
 
 Stefan
 




Re: Did Internet Founders Actually Anticipate Paid, Prioritized Traffic?

2010-09-17 Thread Chris Woodfield

On Sep 17, 2010, at 6:48:02 AM, Jack Bates wrote:

 On 9/17/2010 4:52 AM, Nathan Eisenberg wrote:
 True net-neutrality means no provider can have a better service than 
 another.
 
 This statement is not true - or at least, I am not convinced of its truth.  
 True net neutrality means no provider will artificially de-neutralize their 
 service by introducing destination based priority on congested links.
 
 This is what you want it to mean. If I create a private peer to google, I 
 have de-neutralized their service(destination based priority, even though in 
 both cases, it's the source of the packets we care about) by allowing 
 dedicated bandwidth and lower latency to their cloud.
 

Practically, this is not the case. These days, most congestion tends to happen 
at the customer edge - the cable head-end or the DSLAM - not in the backbone or 
at peering points. 

Also, Google, Yahoo, et al tend to base their peering decisions on technical, 
not business, standards, which makes sense because peering, above all other 
interconnect types, is mutually beneficial to both parties. More to the point, 
even the likes of Comcast won't shut down their peers to Yahoo because Google 
sends them a check.

 Also, let's not forget that the design of many p2p programs were specifically 
 designed to ignore and bypass congestion controls... ie, screw other apps, I 
 will take every bit of bandwidth I can get. This type of behavior causes p2p 
 to have higher priority than other apps in a network that has no traffic 
 prioritized.
 
 While I agree that traffic type prioritization would be preferred over 
 destination based priorities, it often isn't feasible with hardware. 
 Understanding the amount of traffic between your customers and a content 
 provider helps you decide which content providers might be prioritized to 
 give an overall service increase to your customer base.
 
 The fact that a content provider would even pay an ISP, is a high indicator 
 that the content provider is sending a high load of traffic to the ISP, and 
 bandwidth constraints are an issue with the service. Video and voice, in 
 particular, should always try and have precedence over p2p, as they 
 completely break and become unusable, where p2p will just be forced to move 
 slower.
 
 From a false assumption follows false conclusions.
 
 Not really. It's not a neutral world. Private peering is by no means neutral. 
 The provider that does enough traffic with google to warrant a private 
 peering will have better service levels than the smaller guy who has to take 
 the public paths. You view net neutrality as customers within an ISP, while I 
 view it as a provider within a network of providers.
 

It may not be neutral, but it's hardly discriminatory in the ways that I've seen 
many of the non-net-neutrality schemes play out, which seem to be all about 
*deliberately* creating congestion - either proactively or by deciding not to 
upgrade capacity - in order to create a financial incentive for content 
providers to have their traffic prioritized.

And I do agree, a private peer is definitely one technical means by which this 
prioritization could happen, but that's not the practice today. 

 The levels of service and pricing I can maintain as a rural ISP can't be 
 compared to the metropolitan ISPs. A west coast ISP won't have the same level 
 of service as an east coast ISP when dealing with geographical based content. 
 We could take it to the international scale, where countries don't have equal 
 service levels to content.
 
 
 Why do you feel it's true that net-neutrality treads on private (or even 
 public) peering, or content delivery platforms?  In my understanding, they 
 are two separate topics: Net (non)-neutrality is literally about 
 prioritizing different packets on the *same* wire based on whether the 
 destination or source is from an ACL of IPs.  IE this link is congested, 
 Netflix sends me a check every month, send their packets before the ones 
 from Hulu and Youtube.  The act of sending traffic down a different link 
 directly to a peers' network does not affect the neutrality of either party 
 one iota - in fact, it works to solve the congested link problem (Look!  
 Adding capacity fixed it!).
 
 So you are saying, it's perfectly okay to improve one service over another by 
 adding bandwidth directly to that service, but it's unacceptable to 
 prioritize its traffic on congested links (which effectively adds more 
 bandwidth for that service). It's the same thing, using two different methods.
 
 If we consider all bandwidth available between the customer and content (and 
 consider latency as well, as it has an effect on the traffic, especially 
 during congestion), a private peer dedicates bandwidth to content the same as 
 prioritizing its traffic. If anything, the private peer provides even more 
 bandwidth.
 
 ISP has 2xDS3 available for bandwidth total. Netflix traffic is 20mb/s. 
 Bandwidth is considered 

Re: Did Internet Founders Actually Anticipate Paid, Prioritized Traffic?

2010-09-17 Thread Chris Woodfield

On Sep 17, 2010, at 9:23:09 AM, Jack Bates wrote:
 
 Is it unfair that I pay streaming sites to get more/earlier video feeds over 
 the free users? I still have to deal with advertisements in some cases, which 
 generates the primary revenue for the streaming site. Why shouldn't a content 
 provider be able to pay for a higher class of service, so long as others are 
 equally allowed to pay for it?

No, it is definitely not, because *you* are the one paying for priority access 
to the content *you* feel is worth paying extra for faster access to. This is 
not the same thing as a content provider paying the carrier for priority access 
to your DSL line to the detriment of other sites you are interested in.

How would you feel if you paid for priority access to hulu.com via this means, 
only to see your carrier de-prioritize that traffic because they're getting a 
check from Netflix?

 Jack



Re: RIP Justification

2010-09-29 Thread Chris Woodfield
I know of one large-ish provider that does it exactly like that - RIPv2 between 
POP edge routers and provider-managed CPE. In addition to the simplicity, it 
lets them filter routes at redistribution without having to fiddle with 
inter-area OSPF (or, ghod forbid, multiple OSPF processes redistributing 
between each other...)

Where folks run into trouble is vendors that decide that RIP is so 
under-utilized they don't need to fully support or QA it anymore. 
Implementations tend to be a bit more...quirky than OSPF or BGP running on 
the same box. And occasionally you run into the odd vendor that doesn't care 
about things like being able to adjust hello/dead intervals...

-C

On Sep 29, 2010, at 2:50 PM, Jonathon Exley wrote:

 RIP is useful as an edge protocol where there is a single access - less 
 system overhead than OSPF.
 The service provider and the customer can redistribute the routes into 
 whatever routing protocol they use in their own networks.
 
 Jonathon 
 
 -Original Message-
 From: Jesse Loggins [mailto:jlogginsc...@gmail.com] 
 Sent: Thursday, 30 September 2010 9:21 a.m.
 To: nanog@nanog.org
 Subject: RIP Justification
 
 A group of engineers and I were having a design discussion about routing 
 protocols including RIP and static routing and the justifications of use for 
 each protocol. One very interesting discussion was surrounding RIP and its 
 use versus a protocol like OSPF. It seems that many Network Engineers 
 consider RIP an old antiquated protocol that should be thrown in back of a 
 closet never to be seen or heard from again. Some even preferred using a 
 more complex protocol like OSPF instead of RIP. I am of the opinion that 
 every protocol has its place, which seems to be contrary to some engineers 
 way of thinking. This leads to my question. What are your views of when and 
 where the RIP protocol is useful? Please excuse me if this is the incorrect 
 forum for such questions.
 
 --
 Jesse Loggins
 CCIE#14661 (RS, Service Provider)
 
 




Re: RIP Justification

2010-09-29 Thread Chris Woodfield
On Sep 29, 2010, at 6:14 PM, Scott Morris wrote:

 But anything, ask why you are using it.  To exchange routes, yes...  but
 how many.  Is sending those every 30 seconds good?  Sure, tweak it.  But
 are you gaining anything over static routes?

For simple networks, RIP (v2, mind you) works fine. You're correct that the 
number of advertisements sent over the wire every 30 seconds won't scale, but 
with today's routers and bandwidths it takes quite a lot to start to cause 
issues.

The real nail in RIP's coffin is that with most (if not all) routers out there 
today, it's no more work to turn on and configure OSPF than it is to do RIP, 
and OSPF will help you scale much better as you go without being too complex 
for the simpler setups as well. As such, it really doesn't make sense to go 
with RIP for mere nostalgia's sake. If you have a specific reason not to run 
OSPF, fine, but those reasons are few and far between.

-C


Submarine cable sample?

2011-02-23 Thread Chris Woodfield
Hi,

Was wondering where one in the SF Bay area might be able to borrow (or 
otherwise procure at a reasonable cost) a short - less than 1 meter - section 
of undersea fiber cable for a presentation I'll be giving in a few weeks. Feel 
free to unicast your reply if you are in a position to assist.

Thanks,

-Chris


Re: ARIN and IPv6 Requests

2011-02-23 Thread Chris Woodfield
(Yeah, high reply latency...)

Is Carrier V still filtering at sub-/32 on their IPv6 peerings? Last I was in a 
position to check, not even Apple's /45 was visible from inside AS701.

-C

On Feb 10, 2011, at 12:25 PM, Eric Clark wrote:

 Don't remember about the v4 part, but 3 years ago they issued me a /48, 
 specifically for my first site and indicated that a block was reserved for 
 additional sites. I can probably dig that up.
 
 Sent from my iPad
 
 On Feb 10, 2011, at 12:18 PM, Jason Iannone jason.iann...@gmail.com wrote:
 
 It also looks like there isn't a policy for orgs with multiple
 multihomed sites to get a /48 per site.  Is there an exception policy
 somewhere?
 
 On Thu, Feb 10, 2011 at 12:50 PM,  adw...@dstsystems.com wrote:
 Initial. Documenting IPv4 usage is in the request template.
 
 --
 Adam Webb
 
 
 
 
 
 From: Nick Olsen n...@flhsi.com
 To: nanog@nanog.org
 Date: 02/10/2011 01:45 PM
 Subject: re: ARIN and IPv6 Requests
 
 
 
 We requested our initial allocation without any such questions. Is this
 your initial or additional?
 
 Nick Olsen
 Network Operations
 (855) FLSPEED  x106
 
 
 
 From: adw...@dstsystems.com
 Sent: Thursday, February 10, 2011 2:38 PM
 To: nanog@nanog.org
 Subject: ARIN and IPv6 Requests
 
 Why does ARIN require detailed usage of IPv4 space when requesting IPv6
 space? Seems completely irrelevant to me.
 
 --
 Adam Webb
 EN  ES Team
 desk: 816.737.9717
 cell: 916.949.1345
 ---
 The biggest secret of innovation is that anyone can do it.
 ---
 
 
 
 
 
 




Re: Internet Edge Router replacement - IPv6 route table size considerations

2011-03-09 Thread Chris Woodfield
I think this is the point where I get a shovel, a bullwhip and head over to the 
horse graveyard that is CAM optimization...

-C

On Mar 8, 2011, at 5:18:20 PM, Chris Enger wrote:

 Our Brocade reps pointed us to the CER 2000 series, and they can do up to 
 512k v4 or up to 128k v6.  With other Brocade products they spell out the CAM 
 profiles that are available, however I haven't found specifics on the CER 
 series.
 
 Chris
 
 -Original Message-
 From: Julien Goodwin [mailto:na...@studio442.com.au] 
 Sent: Tuesday, March 08, 2011 5:09 PM
 To: 'nanog@nanog.org'
 Cc: Chris Enger
 Subject: Re: Internet Edge Router replacement - IPv6 route table size 
 considerations
 
 On 09/03/11 12:08, Julien Goodwin wrote:
 On 09/03/11 11:57, Chris Enger wrote:
 I did look at a Juniper J6350, and the documentation states it can handle 
 400k routes with 1GB of memory, or 1 million with 2GB.  However it doesn’t 
 spell out how that is divvied up between the two based on a profile setting 
 or some other mechanism.
 It's a software router so the short answer is it isn't
 
 With 3GB of RAM both a 4350 and 6350 can easily handle multiple IPv4
 feeds and an IPv6 feed (3GB just happens to be what I have due to
 upgrading from 1GB by adding a pair of 1GB sticks)
 
 If you need more than ~500Mbit or so then you would want something
 bigger. The MX80 is nice and has some cheap bundles at the moment; it's
 specced for 8M routes (unspecified, but the way Juniper chips typically
 store routes there's less difference in size than the straight 4x)
 
 From others the Cisco ASR1k or Brocade NetIron XMR (2M routes IIRC) are
 the obvious choices.
 And I meant Brocade NetIron CES here.




Re: IP tunnel MTU

2012-10-29 Thread Chris Woodfield
True, but it could be used as an alternative PMTUD algorithm - raise the 
segment size and wait for the "I got this as fragments" option to show up...

Of course, this only works for IPv4. IPv6 users are SOL if something in the 
middle is dropping ICMPv6.
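
For the IPv4 case, the receiving-end signal is at least easy to observe. A 
rough sketch of the "did this arrive as fragments" check using scapy (root 
privileges assumed; this is purely illustrative - it is not SEAL or any 
proposed TCP option):

    # frag_watch.py - watch for traffic that is arriving as IPv4 fragments
    from scapy.all import sniff, IP

    def check(pkt):
        if IP in pkt:
            ip = pkt[IP]
            # MF flag set or a nonzero fragment offset means this packet
            # arrived as a fragment; ip.len is the size of this piece.
            if ip.flags.MF or ip.frag > 0:
                print("fragment from %s: offset=%d bytes, len=%d"
                      % (ip.src, ip.frag * 8, ip.len))

    sniff(filter="ip", prn=check, store=False)

As Fred notes, the largest fragment you see this way isn't necessarily an 
accurate reflection of the true path MTU, so treat it as a hint, not an answer.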

-C

On Oct 29, 2012, at 4:02 PM, Templin, Fred L wrote:

 Hi Bill,
 
 Maybe something as simple as clearing the don't fragment flag and
 adding a TCP option to report receipt of a fragmented packet along
 with the fragment sizes back to the sender so he can adjust his mss to
 avoid fragmentation.
 
 That is in fact what SEAL is doing, but there is no guarantee
 that the size of the largest fragment is going to be an accurate
 reflection of the true path MTU. RFC1812 made sure of that when
 it more or less gave IPv4 routers permission to fragment packets
 pretty much any way they want.
 
 Thanks - Fred
 fred.l.temp...@boeing.com
 




Re: Verio taking twitter down during Iran Election Riots?

2009-06-16 Thread Chris Woodfield
What's interesting is that the !NANOG part of the universe presumes 
the maintenance was to be performed by Twitter, not by their carrier 
(i.e. server, not network, upgrades). Given the fact that the 
WhaleFail has become a commonly-recognizable sight, I can see this 
making people a bit, um, nervous. The real impact of the maintenance 
would have most likely been minimal short of a Murphy strike.


That said, kudos to NTT for backing off in the face of some pretty  
momentous current events, and hope the delay doesn't cause too many  
ripple-effect problems for them.


-C

On Jun 16, 2009, at 10:48 AM, Jack Bates wrote:


Erik Fichtner wrote:

And yet, all upgrades can be postponed with the right... motivation.



Hmmm, you do know that motivation may have strictly been, Your  
maintenance corresponds with a major event, can you put it off for a  
day?


The maintenance in question has obviously been marked critical by  
NTTA with what appears to be short notification and limiting the  
delay to a minimum. They may have been unaware of the event and its  
importance to their customers.


I'm more curious about what maintenance they are actually  
performing. I know they run mixed Cisco/Juniper, and all their  
Junipers should be able to handle in service upgrades. Of course,  
even switching hits of an upgrade warrants setting a maintenance  
window and notification due to Murphy.


Jack






Re: DNS caches that support partitioning ?

2012-08-19 Thread Chris Woodfield
What Patrick said. For large sites that offer services in multiple data centers 
on multiple IPs that can individually fail at any time, 300 seconds is actually 
a bit on the long end.
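
(If you want to see what TTL a given cache is actually handing back - and 
watch it count down - a few lines of dnspython will do it; resolve() is the 
dnspython 2.x call, older versions spell it query(). The resolver address and 
hostname here are just examples:)

    # ttl_check.py - show the remaining TTL a resolver reports for a record
    import dns.resolver

    res = dns.resolver.Resolver()
    res.nameservers = ["8.8.8.8"]      # whichever cache you want to inspect
    answer = res.resolve("www.google.com", "A")
    print("TTL remaining:", answer.rrset.ttl)
    for rr in answer:
        print(" ", rr.address)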

-C

On Aug 18, 2012, at 3:43 PM, Patrick W. Gilmore patr...@ianai.net wrote:

 On Aug 18, 2012, at 8:44, Jimmy Hess mysi...@gmail.com wrote:
 
 And I say that, because some very popular RRs have insanely low TTLs.
 
 Case in point:
 www.l.google.com.    300    IN    A    74.125.227.148
 www.l.google.com.    300    IN    A    74.125.227.144
 www.l.google.com.    300    IN    A    74.125.227.146
 www.l.google.com.    300    IN    A    74.125.227.145
 www.l.google.com.    300    IN    A    74.125.227.147
 www.l.google.com.    300    IN    A    74.125.227.148
 
 Different people have different points of view.
 
 IMHO, if Google loses a datacenter and all users are stuck waiting for a 
 long TTL to run out, that is Very Bad.  In fact, I would call even 2.5 
 minutes (average of 5 min TTL) Very Bad.  I'm impressed they are comfortable 
 with a 300 second TTL.
 
 You obviously feel differently.  Feel free to set your TTL higher.
 
 -- 
 TTFN,
 patrick
 
 




Re: APIs for domain registration and management

2012-09-13 Thread Chris Woodfield
Dynect has a RESTful API as well. They even host a number of sample scripts at 
GitHub: 

http://dyn.com/managed-dns-dynect-5-api-access-load-balancing-geo-traffic-management/
https://github.com/dyninc

-C

On Sep 12, 2012, at 5:18 PM, Miles Fidelman mfidel...@meetinghouse.net wrote:

 Hi Folks,
 
 I expect folks on NANOG would know:  Are there any domain registrars who 
 provide APIs for managing domains and/or DNS records?  It's kind of a pain 
 managing large numbers of domains via klunky web interfaces.  It sure would 
 be nice to tie registry accounts into equipment inventory management systems.
 
 Thanks,
 
 Miles Fidelman
 
 -- 
 In theory, there is no difference between theory and practice.
 In practice, there is.    Yogi Berra
 
 



Re: IPv6 enabled carriers?

2010-03-11 Thread Chris Woodfield
To pile on in the spirit of "if people don't complain, nothing will  
change" - is VZB still insisting on filtering at /32 on their peers,  
while ARIN is allocating /40s and /48s directly?


-C

On Mar 10, 2010, at 2:18 PM, Seth Mattinen wrote:


On 3/10/10 11:00 AM, Charles Mills wrote:

Does anyone have a list of carriers who are IPv6 capable today?

I would assume this would be rolled out in larger cities first but
anything outside of testbed environments and trials as in
Comcast's recent announcement seems to be all that is available.

I'm being tasked with coming up with an IPv6 migration plan for a  
data center.


Mostly interested in if AT&T, Level3, GLBX, Savvis, Verizon Business
and Qwest are capable as those are the typical ones I deal with.



Ones I have personal experience with:

GLBX - yes
SAVVIS - no
VZB - yes, good luck
ATT - Beginning in 1Q2010 MIS will provide the ability to support  
IPv6

in a dual stack mode.

When I disconnected my SAVVIS circuit in November 2009 I explicitly  
told

them IPv6 was a deciding factor. Not all of Verizon's pops are IPv6
enabled, which may cause you trouble ordering it. It's put me in month
11 of trying to turn up a dual-stack circuit because they refuse to  
read

the order and keep putting it in Sacramento (v4 only) when it needs to
go to San Jose (dual-stack). Sprint wasn't on your list, but they are
rolling out native IPv6 support on all of 1239. I've been using their
6175 testbed since 2005.

~Seth






Re: ouch..

2011-09-17 Thread Chris Woodfield
Or... "Go ahead and keep buying 6509 chassis, the 7600 brand is just a 
marketing thing"

-C

On Sep 14, 2011, at 7:41 AM, Leigh Porter wrote:

 
 
 -Original Message-
 From: Always Learning [mailto:na...@u61.u22.net]
 Sent: 14 September 2011 14:39
 To: N. Max Pierson
 Cc: nanog@nanog.org
 Subject: Re: ouch..
 
 
 On Wed, 2011-09-14 at 08:33 -0500, N. Max Pierson wrote:
 
 Either way, it's pathetic. If someone is going to slander in the
 fashion the site has done, they should at least put a contact form
 somewhere for some feedback :)
 
 Slander means falsehood. Cisco tells lies ?
 
 
 --
 With best regards,
 
 Paul.
 England,
 EU.
 
 
 Lies? So who has 100G MX series cards then..?
 
 --
 Leigh
 
 
 
 




AT&T Wireless outage in SoCal

2011-09-24 Thread Chris Woodfield
Hearing rumblings of a major AT&T Wireless outage in southern California. 
Anyone have more detail? Limited to cell towers, or are transit circuits 
affected?

-Chris


Re: So Philip Smith / Geoff Huston's CIDR report becomes worth a good hard look today

2014-08-13 Thread Chris Woodfield
Same reason no vendor has bothered to prune redundant RIB entries (i.e. a 
more-specific pointing to the same NH as its covering route) when programming 
the TCAM...
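
For anyone who hasn't looked at it, the pruning itself is conceptually 
trivial, which is what makes the lack of it grating. A toy sketch of the idea 
in Python - not any vendor's algorithm, and the prefixes and next-hop labels 
are made up:

    # fib_prune.py - drop a more-specific whose next hop matches its covering prefix
    from ipaddress import ip_network

    rib = {
        "10.0.0.0/16":  "nh-A",
        "10.0.1.0/24":  "nh-A",   # redundant: same NH as the covering /16
        "10.0.2.0/24":  "nh-B",   # must stay: different NH
        "192.0.2.0/24": "nh-C",
    }

    def prune(table):
        nets = {ip_network(p): nh for p, nh in table.items()}
        kept = {}
        for net, nh in nets.items():
            # find the longest covering prefix, if any
            covers = [c for c in nets if c != net and net.subnet_of(c)]
            parent = max(covers, key=lambda c: c.prefixlen, default=None)
            if parent is None or nets[parent] != nh:
                kept[str(net)] = nh
        return kept

    print(prune(rib))   # 10.0.1.0/24 disappears; everything else survives

Doing this incrementally, at line rate, while the RIB churns is of course 
where the actual engineering lives.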

-C

On Aug 13, 2014, at 1:42 PM, Randy Bush ra...@psg.com wrote:

 half the routing table is deagg crap.  filter it.  
 
 you mean your vendor won't give you the knobs to do it smartly ([j]tac
 tickets open for five years)?  wonder why.
 
 randy



Re: So Philip Smith / Geoff Huston's CIDR report becomes worth a good hard look today

2014-08-13 Thread Chris Woodfield

 
 Pruning FIB entries, on the other hand, can be done quite safely as
 long as you're willing to accept the conversion of null route to
 don't care. Some experiments were done on this in the IETF a couple
 years back. Draft-zhang-fibaggregation maybe? Savings of 30% in
 typical backbone nodes looked possible. That's 30% of your TCAM
 reclaimable.
 

Hence the “when programming the TCAM” part of my original statement :)

 For the moment it seems to be cheaper to just build bigger TCAMs.
 Cheaper for the router vendors anyway.
 

I think of it more like “why spend development dollars on a feature that will 
cause my customers to keep their existing hardware longer and delay upgrades?” 
Yes, vendors do think like that.

-C

 Regards,
 Bill Herrin
 
 
 
 -- 
 William Herrin  her...@dirtside.com  b...@herrin.us
 Owner, Dirtside Systems . Web: http://www.dirtside.com/
 Can I solve your unusual networking challenges?



Re: Shared cabinet "security"

2016-02-13 Thread Chris Woodfield
I've seen colos sell half-racks where the top and bottom halves of the rack 
each have their own cabinet door. It's not a common thing, though.

-C

> On Feb 12, 2016, at 18:58, Mike Hammett  wrote:
> 
> There are more options when you're not just using someone else's datacenter. 
> 
> 
> 
> 
> - 
> Mike Hammett 
> Intelligent Computing Solutions 
> http://www.ics-il.com 
> 
> Midwest-IX 
> http://www.midwest-ix.com 
> 
> - Original Message -
> 
> From: "Bevan Slattery"  
> To: "Mike Hammett"  
> Cc: "North American Network Operators' Group"  
> Sent: Friday, February 12, 2016 4:44:34 PM 
> Subject: Re: Shared cabinet "security" 
> 
> In a past life we worked with our supplier to create physically separate 
> sub-enclosures. 1/2 and 1/3. Able to build in a separate and secure cable path 
> for interconnects to the meet-me-room and connection to power supplies. 
> 
> Can be done and I think there are now rack suppliers that do this as 
> standard. Been out of DC space for a few years now. 
> 
> [b] 
> 
>> On 13 Feb 2016, at 6:58 AM, Mike Hammett  wrote: 
>> 
>> 
>> That moment when you hit send and remember a couple things… 
>> 
>> Of course labeling of the cables. 
>> 
>> Maybe colored wire loom for fiber and DACs in the vertical spaces to go 
>> along with the previously mentioned color scheme? 
>> 
>> 
>> 
>> 
>> - 
>> Mike Hammett 
>> Intelligent Computing Solutions 
>> http://www.ics-il.com 
>> 
>> Midwest-IX 
>> http://www.midwest-ix.com 
>> 
>> - Original Message - 
>> 
>> From: "Mike Hammett"  
>> To: "North American Network Operators' Group"  
>> Sent: Friday, February 12, 2016 2:53:17 PM 
>> Subject: Re: Shared cabinet "security" 
>> 
>> 
>> I am finding a bunch of covers for the front. I do wish they stuck out more 
>> than an inch (like two). 
>> http://www.middleatlantic.com/~/media/middleatlantic/documents/techdocs/s_sf%20series%20security%20covers_96-035/96_035s_sf.ashx
>>  
>> 
>> It looks like these guys stick out 1.5”. That may be workable… 
>> http://www.lowellmfg.com/tinymce/jscripts/tiny_mce/plugins/filemanager/files/1717-SSCV.pdf
>>  
>> 
>> I guess those covers are really only useful for servers. That really 
>> wouldn’t work with a switch\router. Switches and routers are going to be the 
>> bulk of what we’re dealing with. 
>> 
>> I am finding locking power cables, but that seems to be specific to the PDU 
>> you’re using as it requires the other half of the lock on the PDU. 
>> 
>> I did come across colored power cords. I wonder with some enforced cable 
>> management, colored power cables, etc. we would have “good enough”? You get 
>> some 1U or 2U cable organizers, require cables to be secured to the 
>> management, vertical cables in shared spaces are bound together by customer, 
>> color of Velcro matches color of the power cord? Blue customer, green 
>> customer, red customer, etc. Could do the cat6 patch cables that way too, 
>> but that gets lost when moving to glass or DACs. 
>> 
>> I thought about a web cam that would record anyone coming into the cabinet, 
>> but Equinix doesn’t really allow pictures in their facilities, so that’s not 
>> going to fly. Door contacts should be helpful for an audit log of at least 
>> when the doors were opened or closed. 
>> 
>> Financial penalty from the violator to the victim if there’s an uh oh? 
>> 
>> I’m not trying to save someone from themselves. I’m not trying to lock the 
>> whole thing down. Just trying to prevent mistakes in a shared space. 
>> 
>> 
>> 
>> 
>> - 
>> Mike Hammett 
>> Intelligent Computing Solutions 
>> http://www.ics-il.com 
>> 
>> Midwest-IX 
>> http://www.midwest-ix.com 
>> 
>> - Original Message - 
>> 
>> From: "Mike Hammett"  
>> To: "North American Network Operators' Group"  
>> Sent: Wednesday, February 10, 2016 8:59:08 AM 
>> Subject: Shared cabinet "security" 
>> 
>> I say "security" because I know that in a shared space, nothing is 
>> completely secure. I also know that with enough intent, someone will 
>> accomplish whatever they set out to do regarding breaking something of 
>> someone else's. My concern is mainly towards mitigation of accidents. This 
>> could even apply to a certain degree to things within your own space and 
>> your own careless techs 
>> 
>> If you have multiple entities in a shared space, how can you mitigate the 
>> chances of someone doing something (assuming accidentally) to disrupt your 
>> operations? I'm thinking accidentally unplug the wrong power cord, patch 
>> cord, etc. Accidentally power off or reboot the wrong device. 
>> 
>> Obviously labels are an easy way to point out to someone that's looking at 
>> the right place at the right time. Some devices have a cage around the power 
>> cord, but some do not. 
>> 
>> Any sort of mesh panels you could put on the front\rear of your 

Re: Internet Exchanges supporting jumbo frames?

2016-03-18 Thread Chris Woodfield
I think that’s the problem in a nutshell…until every vendor agrees on the size 
of a “jumbo” packet/frame (and as such, allows that size to be set with a 
non-numerical configuration flag). As is, every vendor has a default that 
results in 1500-byte IP MTU, but changing that requires entering a value…which 
varies from vendor to vendor.

The IEEE *really* should be the ones driving this particular standardization, 
but it seems that they’ve explicitly decided not to. This is…annoying to say 
the least. Have their been any efforts on the IETF side of things to 
standardize this, at least for IPv4/v6 packets?
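
(Verifying a jumbo path across an IX is at least straightforward, standards or 
no. A quick sketch using the Linux iputils ping flags - BSD/macOS spell the 
don't-fragment option differently, and the peer address is a placeholder:)

    # mtu_probe.py - check that ~9000-byte IPv4 packets pass with DF set
    import subprocess

    PEER = "192.0.2.1"            # far-side IX participant (placeholder)
    PAYLOAD = 9000 - 20 - 8       # IP MTU minus IPv4 header minus ICMP header

    result = subprocess.run(
        ["ping", "-c", "3", "-M", "do", "-s", str(PAYLOAD), PEER],
        capture_output=True, text=True,
    )
    print(result.stdout)
    print("jumbo path OK" if result.returncode == 0 else "jumbo path broken")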

-C

> On Mar 9, 2016, at 10:38 PM, Frank Habicht  wrote:
> 
> Hi,
> 
> On 3/10/2016 9:23 AM, Tassos Chatzithomaoglou wrote:
>> Niels Bakker wrote on 10/3/16 02:44:
>>> * nanog@nanog.org (Kurt Kraut via NANOG) [Thu 10 Mar 2016, 00:59 CET]:
 I'm pretty confident there is no need for a specific MTU consensus and not 
 all IXP participants are obligated to raise their interface MTU if the IXP 
 starts allowing jumbo frames.
>>> 
>>> You're wrong here.  The IXP switch platform cannot send ICMP Packet Too Big 
>>> messages.  That's why everybody must agree on one MTU.
>>> 
>>> 
>> Isn't that the case for IXP's current/default MTU?
>> If an IXP currently uses 1500, what effect will it have to its customers if 
>> it's increased to 9200 but not announced to them?
> 
> none.
> everyone has agreed on 1500. it is near impossible to get close to
> everyone to agree on 9200 (or similar number) and implement it (at the
> same time or in a separate VLAN) (Nick argues, and i see the problem).
> The agreement and actions of the (various) operators of L3 devices
> connected at the IXP is what matters and seems not trivial.
> They are not under one control.
> 
> Frank



Re: Krebs on Security booted off Akamai network after DDoS attack proves pricey

2016-09-25 Thread Chris Woodfield
> On Sep 24, 2016, at 7:47 AM, John Levine  wrote:
> 
>>> Well...by anycast, I meant BGP anycast, spreading the "target"
>>> geographically to a dozen or more well connected/peered origins.  At that
>>> point, your ~600G DDoS might only be around
>> 
>> anycast and tcp? the heck you say! :)
> 
> People who've tried it say it works fine.  Routes don't flap that often.
> 

There are a number of companies terminating anycasted TCP endpoints without 
issue. It’s not exactly turnkey, but it’s hardly black magic either. 

Here’s Nick Holt @Microsoft presenting their experience: 
https://www.youtube.com/watch?v=40MONHHF2BU 
 

-Chris

Re: Dyn DDoS this AM?

2016-10-21 Thread Chris Woodfield
As a Twitter network  engineer (and the guy Patrick let camp out in your hotel 
room all day) - thank you for this. Whoever was behind this just poked a 
hornet’s nest. 

“Govern yourselves accordingly”.

-C

(Obviously speaking for myself, not my employer…)

> On Oct 21, 2016, at 10:48 AM, Patrick W. Gilmore  wrote:
> 
> I cannot give additional info other than what’s been on “public media”.
> 
> However, I would very much like to say that this is a horrific trend on the 
> Internet. The idea that someone can mention a DDoS then get DDoS’ed Can Not 
> Stand. See Krebs’ on the Democratization of Censorship. See lots of other 
> things.
> 
> To Dyn and everyone else being attacked:
> The community is behind you. There are problems, but if we stick together, we 
> can beat these miscreants.
> 
> To the miscreants:
> You will not succeed. Search "churchill on the beaches”. It’s a bit 
> melodramatic, but it’s how I feel at this moment.
> 
> To the rest of the community:
> If you can help, please do. I know a lot of you are thinking “what can I do?" 
> There is a lot you can do. BCP38 & BCP84 instantly come to mind. Sure, that 
> doesn’t help Mirai, but it still helps. There are many other things you can 
> do as well.
> 
> But a lot of it is just willingness to help. When someone asks you to help 
> trace an attack, do not let the request sit for a while. Damage is being 
> done. Help your neighbor. When someone’s house is burning, your current 
> project, your lunch break, whatever else you are doing is almost certainly 
> less important. If we stick together and help each other, we can - we WILL - 
> win this war. If we are apathetic, we have already lost.
> 
> 
> OK, enough motivational speaking for today. But take this to heart. Our 
> biggest problem is people thinking they cannot or do not want to help.
> 
> -- 
> TTFN,
> patrick
> 
>> On Oct 21, 2016, at 10:55 AM, Chris Grundemann  wrote:
>> 
>> Does anyone have any additional details? Seems to be over now, but I'm very
>> curious about the specifics of such a highly impactful attack (and it's
>> timing following NANOG 68)...
>> 
>> https://krebsonsecurity.com/2016/10/ddos-on-dyn-impacts-twitter-spotify-reddit/
>> 
>> -- 
>> @ChrisGrundemann
>> http://chrisgrundemann.com
> 



Re: Multi-CDN Strategies

2017-03-10 Thread Chris Woodfield
I have some experience with this; a few things off the top of my head:

- It’s usually best to leverage some sort of “smart” DNS to handle CNAME 
distribution, giving you the ability to weight traffic across your CDNs rather 
than only using one CDN all the time, or to prefer different CDNs in various 
global regions. I’ve had decent experience with Dyn here, but Route53 has all 
the features you’d want as well. If possible, write tooling towards your DNS 
provider’s API to automate your failovers (a rough sketch against Route53 
appears after this list).

- Weight your distribution such that you never have one CDN turned off 
completely; you’ll want a small trickle of user traffic hitting every CDN so 
that the caches won’t be cold when you switch over to it.

- Make sure you have a distributed metrics service (ThousandEyes, WebMetrics, 
et al) testing your CDNs individually as well as the external hostname.

- Stay away from HTML- or Header-munging features when possible; stick with 
feature sets that are common (and implementable in similar ways) across your 
providers. (Similar advice goes for multi-vendor *anything*, TBH)
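
To make the automation point above concrete, here’s a rough sketch of weighted 
CNAMEs via the Route53 API with boto3. The zone ID, hostname and CDN endpoints 
are placeholders and error handling is omitted; the same pattern works against 
Dyn’s API:

    # cdn_weights.py - shift traffic weights between two CDN CNAMEs in Route53
    import boto3

    r53 = boto3.client("route53")
    ZONE_ID = "Z123EXAMPLE"              # placeholder hosted zone ID

    def set_weight(cdn_name, target, weight):
        # UPSERT one weighted CNAME; SetIdentifier distinguishes the CDNs
        r53.change_resource_record_sets(
            HostedZoneId=ZONE_ID,
            ChangeBatch={"Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com.",
                    "Type": "CNAME",
                    "SetIdentifier": cdn_name,
                    "Weight": weight,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": target}],
                },
            }]},
        )

    # A 90/10 split keeps the second CDN's caches warm; failover is just
    # re-running these calls with different numbers.
    set_weight("cdn-a", "www.example-a.cdn.net", 90)
    set_weight("cdn-b", "www.example-b.cdn.net", 10)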

I could keep going, but if so, I might as well stick them into a powerpoint and 
submit a talk for Bellevue :)

-C

> On Mar 10, 2017, at 9:25 AM, Chris Grundemann  wrote:
> 
> Hail NANOG;
> 
> Is anyone here leveraging multiple CDN providers for resiliency and have
> best practices or other advice they'd be willing to share?
> 
> Thanks,
> ~Chris
> 
> -- 
> @ChrisGrundemann
> http://chrisgrundemann.com



Re: Admiral Hosting in London

2017-08-08 Thread Chris Woodfield
And I’d *love* to hear the story they come up with when you ask why they only 
want to rent space vs buy it…

-C

> On Jul 27, 2017, at 9:22 PM, Randy Bush  wrote:
> 
>> We were contacted by Admiral Hosting in London to rent some our
>> unused IP space.
> 
> anyone wanting to rent/lease space is 99% sure to be not nice folk.
> if you get your space back, it will be radioactive with an amazingly
> long half life.
> 
> if you are willing to let go of the space, just tell them the price
> for which you are willing to sell it.
> 
> randy
>