Re: IoT security

2017-02-07 Thread Ray Soucy
I think the fundamental problem here is that these devices aren't good
network citizens in the first place.  The odds of getting them to add
functionality to support a new protocol are even lower than the odds of
getting them to not have open services externally, IMHO.

Couldn't a lot of this be caught by proactive vulnerability scanning and
working with customers to have an SPI firewall in place, or am I missing
something?

Historically, residential ISP CPE options have been terrible.  If you could
deliver something closer to user expectations you would likely see much
more adoption and less desire to rip and replace.  Ideally a cloud-managed
device so that the config wouldn't need to be rebuilt in the event of a
hardware swap.

On Mon, Feb 6, 2017 at 5:31 PM, William Herrin  wrote:

> This afternoon's panel about IoT's lack of security got me thinking...
>
>
> On the issue of ISPs unable to act on insecure devices because they
> can't detect the devices until they're compromised and then only have
> the largest hammer (full account ban) to act...
>
> What about some kind of requirement or convention that upon boot and
> successful attachment to the network (and maybe once a month
> thereafter), any IoT device must _by default_ emit a UDP packet to an
> anycast address reserved for the purpose which identifies the device
> model and software build. The ISP can capture traffic to that anycast
> address, compare the data against a list of devices known to be
> defective and, if desired, respond with a fail message. If the IoT
> device receives the fail message, it must try to report the problem to
> its owner and remove its default route so that it can only communicate
> on the local lan.  The user can override the fail and if desired
> configure the device not to emit the init messages at all. But by
> default the ISP is allowed to disable the device by responding to the
> init message.
>
> Would have to cryptographically sign the fail message and let the
> device query the signer's reputation or something like that to avoid
> the obvious security issue. Obvious privacy issues to consider.
> Anyway, throwing it out there as a potential discussion starting
> point.
>
>
>
> The presentation on bandwidth policers...
>
> Seems like we could use some form of ICMP message similar to
> destination unreachable that provides some kind of arbitrary string
> plus the initial part of the dropped packet. One of the potential
> strings would be an explicit notice to the sender that packets were
> dropped and the bandwidth available.
>
> Yes, we already have ECN, but ECN tells the receiver about congestion,
> not the sender. More to the point, ECN can only be flagged on packets
> that are passed, not the packets that are dropped, so the policer
> would have to be complicated enough to note on the next packet that
> the prior packet was dropped. Also, ECN only advises that you're close
> to the limit; it gives no information about the policer's target limit.
>
> This thought is not fully baked. Throwing it out for conversation purposes.
>
> Regards,
> Bill Herrin
>
>
>
> --
> William Herrin  her...@dirtside.com  b...@herrin.us
> Owner, Dirtside Systems . Web: 
>



-- 
Ray Patrick Soucy
Senior Cyber Security Engineer
Networkmaine, University of Maine System US:IT

207-581-3526


Re: IPv4 Legacy assignment frustration

2016-06-23 Thread Ray Soucy
Regardless of whether or not people "should" do this, I think the horse has
already left the barn on this one.  I don't see any way of getting people
who decided to filter all of APNIC to make changes.  Most of them are
static configurations that they'll never look to update.
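
For anyone who does bother to check, the authoritative answer is a single
whois query away; a quick sketch (exact output fields vary by registry):

  # Transferred ERX space shows the current registrant, not APNIC:
  whois -h whois.arin.net 150.201.15.7

  # Or start at IANA to see which RIR is authoritative for the /8:
  whois -h whois.iana.org 150.0.0.0

But the people filtering on stale CIDR lists are, almost by definition, not
the people running those queries.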

On Wed, Jun 22, 2016 at 12:06 PM, Kraig Beahn  wrote:

> The following might add some clarity, depending upon how you look at it:
>
> We, as "core" engineers know better than to use some of the sources listed
> below, tho, my suspicion is that when an engineer or local IT person, on an
> edge network starts to see various types of attacks, they play wack-a-mole,
> based upon outdated or incomplete data, and never think twice about
> revisiting such, as, from their perspective, everything is working just
> fine.
>
> In a networking psychology test, earlier this morning, I wrote to ten
> well-known colleagues that I was fairly confident didn't regularly follow
> the nanog lists. Such individuals comprised IP and IT engineers who
> manage networks and enterprises of various sizes, ultimately posing the
> question of "Where in the world is 150.201.15.7, as we were researching
> some unique traffic patterns".
>
> *Seven out of ten came back with overseas*. Two came back with more
> questions "as the address space appeared to be assigned to APNIC", but was
> routed domestically.
>
> *One came back with the correct response.* (MORENET)
>
> Two of the queried parties were representative of major networks, one for
> an entire state governmental network with hundreds of thousands of actual
> users and tens of thousands of routers, the other from another major
> university. (Names left out, in the event they see this message later in
> the day or week)
>
> After probing the origin of their responses, I found the following methods
> or data-sources were used:
>
> -Search Engines - by far, the worst offender. Not necessarily "the engines"
> at fault, but a result of indexed sites containing inaccurate or outdated
> CIDR lists.
> -User-generated forums, such as "Block non-North American Traffic for
> Dummies Like Me"
> (Yes - that's the actual thread name on WebMasterWorld.com, from a Sr.
> Member)
> -Static (or aged) CIDR web-page based lists, usually placed for advertorial
> generation purposes and rarely up to date or accurate. (usually via SE's or
> forum referrals)
> -APNIC themselves - A basic SE search resulted in an APNIC page
> <https://www.apnic.net/manage-ip/manage-historical-resources/erx-project/erx-ranges>
> that, on its face, appears to indicate 150.0.0.0/8 is, in fact, part of the
> current APNIC range.
> -GitHub BGP Ranking tools: CIRCL / bgp-ranking example (last updated on
> May 16th, 2011, tho an RT lookup via the CIRCL tool does show the
> appropriate redirect/org)
> -Several routing-oriented books and Cisco examples
> <http://www.cisco.com/c/en/us/support/docs/ip/integrated-intermediate-system-to-intermediate-system-is-is/13796-route-leak.pdf>
> list such ranges, for example, FR/ISBN 2-212-09238-5.
> -And even established ISPs that are publicly announcing their "block
> list", such as Albury's local ISP in Australia
>
> The simple answer is to point IT directors, IP engineers or "the
> receptionist that manages the network" to the appropriate registry
> data-source, which should convince them that corrective action is
> necessary, i.e. fix your routing table or firewall. The complexity begins
> in trying to locate all of these people and directing them to the
> appropriate data-source, which I think is an unrealistic task, even for the
> largest of operators. Maybe a nanog-edge group is in order.
>
> If the issue were beyond just a nuisance and I were in your shoes, I'd
> renumber or use an alternate range for the types of traffic affected by
> such blocks, i.e. administrative web traffic trying to reach major
> insurance portals. (Looks like AS2572 is announcing just over 700k IPv4
> addresses over about 43 ranges, with only some potentially affected)
>
> Realizing that renumbering is also extremely unrealistic, if you haven't
> already jumped on the IPv6 bandwagon, that's an option or, if none of the
> above seem reasonable, you could always put together a one-page PDF that
> points these administrators to the appropriate resource to validate that
> you are, in fact, part of the domestic United States.
>
> I agree that a more accurate tool probably needs to be created for the
> "general population to consume," but then again, even that solution, is
> "just another tool" for the search-engines to index, and you're back at
> square one.
>
> As much as I think most of us would like to help fix this issue, I don't
> know 

Re: Android and DHCPv6 again

2015-10-15 Thread Ray Soucy
Android does not have a complete IPv6 implementation and should not be IPv6
enabled.  Please do your part and complain to Google that Android does not
support DHCPv6 for address assignment.

On Sat, Oct 3, 2015 at 9:52 PM, Baldur Norddahl 
wrote:

> Hi
>
> I noticed that my Nexus 9 tablet did not have any IPv6 although everything
> else in my house is IPv6 enabled. Then I noticed that my Samsung S6 was
> also without IPv6. Hmm.
>
> A little work with tcpdump and I got this:
>
> 03:27:15.978826 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 120)
> fe80::222:7ff:fe49:ffad > ip6-allnodes: [icmp6 sum ok] ICMP6, router
> advertisement, length 120
> hop limit 0, Flags [*managed*, other stateful], pref medium, router
> lifetime 1800s, reachable time 0s, retrans time 0s
>  source link-address option (1), length 8 (1): 00:22:07:49:ff:ad
>  mtu option (5), length 8 (1):  1500
>  prefix info option (3), length 32 (4): 2a00:7660:5c6::/64, Flags [onlink,
> *auto*], valid time 7040s, pref. time 1800s
>  unknown option (24), length 16 (2):
>  0x:  3000  1b80 2a00 7660 05c6 
>
> So my CPE is actually doing DHCPv6 and some nice people at Google decided
> that it will be better for me to be without IPv6 in that case :-(.
>
> But it also has the auto flag, so Android should be able to do SLAAC yes?
>
> My Macbook Pro currently has the following set of addresses:
>
> en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
> ether 3c:15:c2:ba:76:d4
> inet6 fe80::3e15:c2ff:feba:76d4%en0 prefixlen 64 scopeid 0x4
> inet 192.168.1.214 netmask 0xff00 broadcast 192.168.1.255
> inet6 2a00:7660:5c6::3e15:c2ff:feba:76d4 prefixlen 64 autoconf
> inet6 2a00:7660:5c6::b5a5:5839:ca0f:267e prefixlen 64 autoconf temporary
> inet6 2a00:7660:5c6::899 prefixlen 64 dynamic
> nd6 options=1<PERFORMNUD>
> media: autoselect
> status: active
>
> To me it seems that the Macbook has one SLAAC address, one privacy
> extension address and one DHCPv6 managed address.
>
> In fact the CPE manufacturer is a little clever here. They gave me an easy
> address that I can use to access my computer ("899") while still allowing
> SLAAC and privacy extensions. If I want to open ports in my firewall I
> could do that to the "899" address.
>
> But why are my Android devices without IPv6 in this setup?
>
> Regards,
>
> Baldur
>



-- 
*Ray Patrick Soucy*
Network Engineer I
Networkmaine, University of Maine System US:IT

207-561-3526


Re: /27 the new /24

2015-10-07 Thread Ray Soucy
Here is a quick starting point for filtering IPv6 on a Linux host system if
you don't feel comfortable opening up all ICMPv6 traffic:

http://soucy.org/tmp/v6firewall/ip6tables.txt

I haven't really revisited it in a while, so if I'm forgetting something
let me know.
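
In case that link goes stale: the core of it is just permitting the ICMPv6
messages IPv6 can't live without (roughly the RFC 4890 set) ahead of the
default drop.  A minimal sketch along those lines, not the actual contents
of that file:

  # ND: router advertisements and neighbor solicitation/advertisement
  ip6tables -A INPUT -p icmpv6 --icmpv6-type router-advertisement -j ACCEPT
  ip6tables -A INPUT -p icmpv6 --icmpv6-type neighbour-solicitation -j ACCEPT
  ip6tables -A INPUT -p icmpv6 --icmpv6-type neighbour-advertisement -j ACCEPT
  # Error messages; packet-too-big is what keeps PMTU discovery working
  ip6tables -A INPUT -p icmpv6 --icmpv6-type destination-unreachable -j ACCEPT
  ip6tables -A INPUT -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT
  ip6tables -A INPUT -p icmpv6 --icmpv6-type time-exceeded -j ACCEPT
  ip6tables -A INPUT -p icmpv6 --icmpv6-type parameter-problem -j ACCEPT
  # Echo is handy for debugging; rate-limit it if that worries you
  ip6tables -A INPUT -p icmpv6 --icmpv6-type echo-request -m limit --limit 5/s -j ACCEPT
  # Return traffic, then drop everything else
  ip6tables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
  ip6tables -P INPUT DROP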

On Wed, Oct 7, 2015 at 9:13 AM, Stephen Satchell  wrote:

> This is excellent feedback, thank you.
>
> On 10/07/2015 04:54 AM, Owen DeLong wrote:
>
>>
>> On Oct 4, 2015, at 7:49 AM, Stephen Satchell  wrote:
>>>
>>> My bookshelf is full of books describing IPv4. Saying "IPv6 just
>>> works" ignores the issues of configuring intelligent firewalls to block
>>> the ne'er-do-wells using the new IP-level protocol.
>>>
>>
>> You will need most of the same blockages in IPv6 that you needed in IPv4,
>> actually.
>>
>> There are some important differences for ICMP (don’t break PMTU-D or
>> ND), but otherwise, really not much difference between your IPv4
>> security policy and your IPv6 security policy.
>>
>> In fact, on my linux box, I generate my IPv4 iptables file using
>> little more than a global search and replace on the IPv6 iptables
>> configuration which replaces the IPv6 prefixes/addresses with the
>> corresponding IPv4 prefixes/addresses. (My IPv6 addresses for things
>> that take incoming connections have an algorithmic map to IPv4 addresses
>> for things that have them.)
>>
>
> On my box, I have a library of shell functions that do the generation,
> driven by parameter tables.  If I'm reading you correctly, I can just
> augment the parameter tables and those functions to generate the
> appropriate corresponding ip6table commands in parallel with the iptable
> commands.
>
> Question: should I still rate-limit ICMP packets in IPv6?  Also, someone
> on this list pointed me to NIST SP800-119, "Guidelines for the Secure
> Deployment of IPv6", the contents of which which I will incorporate.
>
> There is limited IPv6 support in many of the GUIs still,
>> unfortunately, but the command line tools are all there and for the
>> most part work pretty much identically for v4 and v6, the difference
>> often being as little as ping vs ping6 or   vs.
>>  -6 .
>>
>
> I've not been happy with the GUIs, because getting them to do what I want
> is a royal pain.  For example, I'm forced to use port-based redirection in
> one edge firewall application -- I blew a whole weekend figuring out how to
> do that with the CentOS 7 firewalld corkscrew, for a customer who outgrew
> the RV-220 he used for the application.  At least that didn't need IPv6!
>
> Primarily it involves changing the IPv4 addresses and/or prefixes
>> into IPv6 addresses and/or prefixes.
>>
>
> What about fragmented packets?  And adjusting the parameters in ip6table
> filters to detect the DNS "ANY" requests used in the DDoS amplification
> attacks?
>
> I'm not asking NANOG to go past its charter, but I am asking the
>>> IPv6fanatics on this mailing list to recognize that, even though the net
>>> itself may be running IPv6, the support and education infrastructure is
>>> still behind the curve. Reading RFCs is good, reading man pages is good,
>>> but there is no guidance about how to implement end-network policies in
>>> the wild yet...at least not that I've been able to find.
>>>
>>
>> There is actually quite a bit of information out there. Sylvia
>> Hagen’s IPv6 book covers a lot of this (O’Reilly publishes it).
>>
>
> Um, that would be "books".  Which one do you recommend I start with?
>
> * IPv6 Essentials (3rd Edition), 2014, ASIN: B00RWSNEKG
> * Planning for IPv6 (1st Edition), 2011,  ISBN-10: 1449305393
>
> (I would assume the first, as the NIST document probably covers the
> contents of the second)
>
> There are also several other good IPv6 books.
>>
>
> Recommendations?
>
> "ipv6.disable" will be changed to zero when I know how to set the
>>> firewall to implement the policies I need to keep other edge networks
>>> from disrupting mine.
>>>
>>
>> You do. You just don’t realize that you do. See above.
>>
>
> That's encouraging.  Being able to leverage the knowledge from IPv4 to
> project the same policies into IPv6 makes it easier for me, as I'm already
> using programmatic methods of generating the firewalls.  I expected that
> the implementation of existing applications-level policies would be
> parallel; it's the policies further down the stack that was my concern.
>
> Also, I have a lot of IP level blocks (like simpler Cisco access control
> lists) to shut out those people who like to bang on my SSH front door. I
> believe that people who are so rude as to try to break through dozens or
> hundreds of time a day will have other bad habits, and don't deserve to be
> allowed for anything.  (I have similar blocks for rabid spammers not in the
> DNSBLs, but that's a different story.)  I would expect to maintain a
> separate list of IPv6 subnets, based on experience.
>
> Which brings up another question:  should I block IPv6 access to port 25
> on my 

Re: UDP clamped on service provider links

2015-07-27 Thread Ray Soucy
"It depends on the network" is really the only answer.

It's the kind of thing that happens quietly and often can be transient in
nature (e.g. temporary big stick filters to deal with an active attack).

As far as the reason it happens to UDP:

UDP is a challenge because it's easy to leverage for reflection attacks
where the source IP is spoofed to be the target.

The major targets are small services that are typically left open on host
systems.  The big ones being NTP, DNS, and more recently SSDP (universal
plug and play left open on consumer routers).  Once in a while you see some
really old protocols open like CHARGEN, but these are less common.  The
ones like NTP and DNS are popular because a small request can trigger a
large response (e.g. amplification attack) if services are not
appropriately locked down on the host.

A while back, a big one that caught a lot of people off guard was the NTP
MONLIST function, which resulted in up to a 500:1 amplification.

Hopefully, rate limiting UDP traffic is something that doesn't happen often,
and when people do rate-limit it, they ideally limit the scope to known
problem protocols (like NTP and DNS) and set limits such that normal use
shouldn't be a problem.  That said, I'm sure there are some who just
rate-limit everything (likely arguing that UDP is mostly peer-to-peer
anyway).  It's a bad practice no doubt.
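
For illustration, limiting known problem protocols rather than clamping all
of UDP might look like the sketch below on a Linux router.  The numbers are
placeholders, and real transit gear would use the platform's own policers
rather than iptables:

  # Police NTP and DNS responses transiting the box; leave other UDP alone
  iptables -A FORWARD -p udp --sport 123 -m limit --limit 50/s --limit-burst 100 -j ACCEPT
  iptables -A FORWARD -p udp --sport 123 -j DROP
  iptables -A FORWARD -p udp --sport 53 -m limit --limit 200/s --limit-burst 400 -j ACCEPT
  iptables -A FORWARD -p udp --sport 53 -j DROP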

TCP is still vulnerable to some level of reflection, but these attacks are
generally easy to mitigate, and because the setup and teardown traffic for
TCP is so small, they're not very effective for denial of service.  Not
much happens traffic-wise until the source address has confirmed the
connection, which is why spoofing isn't as big a problem for TCP as it is
for UDP.  Similarly, ICMP is generally not a problem because ICMP responses
are small by design.





On Mon, Jul 27, 2015 at 10:12 AM, Glen Kent glen.k...@gmail.com wrote:

 Hi,

 Is it true that UDP is often subjected to stiffer rate limits than TCP? Is
 there a reason why this is often done so? Is this because UDP is stateless
 and any script kiddie could launch a DOS attack with a UDP stream?

 Given the state of affairs these days how difficult is it going to be for
 somebody to launch a DOS attack with some other protocol?

 Glen




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Whats' a good product for a high-density Wireless network setup?

2015-06-20 Thread Ray Soucy
Compared to the old model of just providing coverage, it's definitely
higher density.  I think the point I was trying to make is that the old
high density is the new normal, and what most on list would consider high
density is more along the lines of stadium wireless.  I wouldn't really
focus on the term too much, though.  It's just a distraction from the real
question.

The answer, as always, is "it depends."  Without detailed floor plans, survey
information, and information on what kind of demand users will place on the
network, there is really no way to tell you what solution will work well.

If you need to service residential areas or hostel units you might be
better off looking at some of the newer AP designs that have come out in
the last year or so targeting that application, like the Cisco 702 or the
Xirrus 320.

The general design of these units is that they're both a low-power AP and a
small switch to provide residents with a few ports to plug in if they need
to.  This allows you to have one cable drop to each room instead of having
to run separate jacks for APs and wired connections.  The units are
wall-mount and if you have a challenging RF environment this design can be
really effective.

I've never run Xirrus personally, but I think they were used for the last
NANOG conference.





On Sat, Jun 20, 2015 at 6:41 AM, Sina Owolabi notify.s...@gmail.com wrote:

 Thanks everybody. I've been corrected on density... I've been informed
 that it's to be a minimum of 1000 users per building.
 That's 8,000 users. (8 buildings, not counting walkways and courtyards,
 admin, etc.)
 Does this qualify as high-density?

 On Sat, Jun 20, 2015 at 5:33 AM Ray Soucy r...@maine.edu wrote:

 Well, I could certainly be wrong, but it's news to me if UBNT started
 supporting DFS in the US.

 Your first screenshot is listing the UAP for 5240 which is channel 48,
 U-NII-1.  The second shows 5825, which is the upper limit of U-NII-3.  I
 don't see any U-NII-2 in what you posted.

 This forum post may be a bit out of date, but I haven't seen any
 announcement or information on the forums to indicate the situation has
 changed, and I'm pretty good at searching:

 https://community.ubnt.com/t5/UniFi-Wireless/DFS/m-p/700461#M54771

 From this thread it looks like the ability to configure DFS channels in
 the
 US was a UI bug and only showing for ZH anyway.  IIRC they actually got in
 a bit of trouble with the FCC over not restricting the use of these
 channels enough.

 Regardless of whether or not the FCC has cleared UBNT indoor products for
 U-NII-2 and U-NII-2-extended (and I haven't seen evidence of that yet),
 until you can configure APs to use those channels in the controller
 without
 violating FCC regulations I don't consider them usable.

 The UAP-AC doesn't seem to support DFS channels at all even without FCC
 restrictions, which kind of kills the point of AC, only 4 x 40 MHz or 2 x
 80 MHz channels doesn't cut it when we're talking about density.

 Note we're talking about indoor wireless and there ARE some UBNT products
 for outdoor WISP use that do support DFS and have been cleared by the FCC,
 but we would only be looking at the UAP-PRO or UAP-AC in this case so
 maybe
 that's the point of confusion here.




 On Fri, Jun 19, 2015 at 11:36 PM, Faisal Imtiaz fai...@snappytelecom.net
 
 wrote:

  FCC Cert claims different.
 
  :)
 
  Faisal Imtiaz
  Snappy Internet  Telecom
  7266 SW 48 Street
  Miami, FL 33155
  Tel: 305 663 5518 x 232
 
  Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net
 
  --
 
  *From: *Josh Luthman j...@imaginenetworksllc.com
  *To: *Faisal Imtiaz fai...@snappytelecom.net
  *Cc: *NANOG list nanog@nanog.org, Ray Soucy r...@maine.edu
  *Sent: *Friday, June 19, 2015 9:16:37 PM
 
  *Subject: *Re: Whats' a good product for a high-density Wireless network

  setup?
 
  Uhm he's not wrong...
 
  Josh Luthman
  Office: 937-552-2340
  Direct: 937-552-2343
  1100 Wayne St
  Suite 1337
  Troy, OH 45373
  On Jun 19, 2015 9:13 PM, Faisal Imtiaz fai...@snappytelecom.net
 wrote:
 
  The thing you need to watch out for with Ubiquiti is that they don't
  support DFS, so the entire U-NII-2 channel space is off limits for 5
 GHz.
 
  Huh 
 
  Please verify your facts before making blanket statements which are not
  accurate ...
 
 
 
  Faisal Imtiaz
  Snappy Internet  Telecom
 
 
  - Original Message -
   From: Ray Soucy r...@maine.edu
   To: Sina Owolabi notify.s...@gmail.com
   Cc: nanog@nanog.org list nanog@nanog.org
   Sent: Friday, June 19, 2015 7:07:01 PM
   Subject: Re: Whats' a good product for a high-density Wireless
 network
  setup?
  
   I know you don't want to hear this answer because of cost but I've
 had
  good
   luck with Cisco for very high density (about 1,000 clients in a
 packed
   auditorium actively using the network as they follow along with the
   presenter).
  
   The thing you need to watch out for with Ubiquiti

Re: Whats' a good product for a high-density Wireless network setup?

2015-06-20 Thread Ray Soucy
I've actually never made it out to a NANOG conference, so I'm not sure.  I
was just told this by peers who attended.

On Sat, Jun 20, 2015 at 5:31 PM, Randy Bush ra...@psg.com wrote:

  I've never run Xirrus personally, but I think they were used for the
  last NANOG conference.

 and how did that work out?  [ though i do not know it was the xirrus
 units ]

 randy




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Whats' a good product for a high-density Wireless network setup?

2015-06-19 Thread Ray Soucy
I know you don't want to hear this answer because of cost but I've had good
luck with Cisco for very high density (about 1,000 clients in a packed
auditorium actively using the network as they follow along with the
presenter).

The thing you need to watch out for with Ubiquiti is that they don't
support DFS, so the entire U-NII-2 channel space is off limits for 5 GHz.
That's pretty significant because you're limited to 9 x 20 MHz channels or
4 x 40 MHz channels.  Keeping the power level down and creating small cells
is essential for high density, so with fewer channels your hands are really
tied in that case.  Also, avoid the Zero Handoff marketing nonsense they
advertise; I'm sure it can work great for a low client residential area but
it requires all APs to share a single channel and depends upon coordinating
only one active transmitter at a time, so it simply won't scale.
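
To put rough numbers on the channel math (US rules as of this writing):
without DFS you're left with U-NII-1 (36, 40, 44, 48) plus U-NII-3 (149,
153, 157, 161, 165), which is where the 9 x 20 MHz figure comes from, and
those pair down to only 4 x 40 MHz channels.  DFS opens up U-NII-2 and
U-NII-2-extended (52-64 and 100-140), roughly tripling the spectrum you
have to build small cells with.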

I don't have experience with other vendors at large scale or high density.

I don't think what you're talking about is really high density anymore
though.  That's just normal coverage.  Wireless is a lot more complicated
than selecting a vendor, though.  If you know what you're doing even
Ubiquiti could work decently, but if you don't even a Cisco solution won't
save you.  You really need to be on top of surveying correctly and having
appropriate AP placement and channel distribution.





On Fri, Jun 19, 2015 at 1:57 AM, Sina Owolabi notify.s...@gmail.com wrote:

 Hi

 We are profiling equipment and design for an expected high user density
 network of multiple, close-knit, residential/hostel units. It's going to be
 8-10 buildings with possibly over 1,000 users at any given time.
 We are looking at Ruckus and Ubiquiti as options to get over the high
 number of devices we are definitely going to encounter.

 How did you do it, and what would you advise for product and layout?

 Thanks in advance!




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Whats' a good product for a high-density Wireless network setup?

2015-06-19 Thread Ray Soucy
Well, I could certainly be wrong, but it's news to me if UBNT started
supporting DFS in the US.

Your first screenshot is listing the UAP for 5240 which is channel 48,
U-NII-1.  The second shows 5825, which is the upper limit of U-NII-3.  I
don't see any U-NII-2 in what you posted.

This forum post may be a bit out of date, but I haven't seen any
announcement or information on the forums to indicate the situation has
changed, and I'm pretty good at searching:

https://community.ubnt.com/t5/UniFi-Wireless/DFS/m-p/700461#M54771

From this thread it looks like the ability to configure DFS channels in the
US was a UI bug and only showing for ZH anyway.  IIRC they actually got in
a bit of trouble with the FCC over not restricting the use of these
channels enough.

Regardless of whether or not the FCC has cleared UBNT indoor products for
U-NII-2 and U-NII-2-extended (and I haven't seen evidence of that yet),
until you can configure APs to use those channels in the controller without
violating FCC regulations I don't consider them usable.

The UAP-AC doesn't seem to support DFS channels at all even without FCC
restrictions, which kind of kills the point of AC, only 4 x 40 MHz or 2 x
80 MHz channels doesn't cut it when we're talking about density.

Note we're talking about indoor wireless and there ARE some UBNT products
for outdoor WISP use that do support DFS and have been cleared by the FCC,
but we would only be looking at the UAP-PRO or UAP-AC in this case so maybe
that's the point of confusion here.




On Fri, Jun 19, 2015 at 11:36 PM, Faisal Imtiaz fai...@snappytelecom.net
wrote:

 FCC Cert claims different.

 :)

 Faisal Imtiaz
 Snappy Internet  Telecom
 7266 SW 48 Street
 Miami, FL 33155
 Tel: 305 663 5518 x 232

 Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net

 --

 *From: *Josh Luthman j...@imaginenetworksllc.com
 *To: *Faisal Imtiaz fai...@snappytelecom.net
 *Cc: *NANOG list nanog@nanog.org, Ray Soucy r...@maine.edu
 *Sent: *Friday, June 19, 2015 9:16:37 PM

 *Subject: *Re: Whats' a good product for a high-density Wireless network
 setup?

 Uhm he's not wrong...

 Josh Luthman
 Office: 937-552-2340
 Direct: 937-552-2343
 1100 Wayne St
 Suite 1337
 Troy, OH 45373
 On Jun 19, 2015 9:13 PM, Faisal Imtiaz fai...@snappytelecom.net wrote:

 The thing you need to watch out for with Ubiquiti is that they don't
 support DFS, so the entire U-NII-2 channel space is off limits for 5 GHz.

 Huh 

 Please verify your facts before making blanket statements which are not
 accurate ...



 Faisal Imtiaz
 Snappy Internet  Telecom


 - Original Message -
  From: Ray Soucy r...@maine.edu
  To: Sina Owolabi notify.s...@gmail.com
  Cc: nanog@nanog.org list nanog@nanog.org
  Sent: Friday, June 19, 2015 7:07:01 PM
  Subject: Re: Whats' a good product for a high-density Wireless network
 setup?
 
  I know you don't want to hear this answer because of cost but I've had
 good
  luck with Cisco for very high density (about 1,000 clients in a packed
  auditorium actively using the network as they follow along with the
  presenter).
 
  The thing you need to watch out for with Ubiquiti is that they don't
  support DFS, so the entire U-NII-2 channel space is off limits for 5
 GHz.
  That's pretty significant because you're limited to 9 x 20 MHz channels
 or
  4 x 40 MHz channels.  Keeping the power level down and creating small
 cells
  is essential for high density, so with less channels your hands are
 really
  tied in that case.  Also, avoid the Zero Handoff marketing nonsense they
  advertise; I'm sure it can work great for a low client residential area
 but
  it requires all APs to share a single channel and depends upon
 coordinating
  only one active transmitter at a time, so it simply won't scale.
 
  I don't have experience with other vendors at large scale or high
 density.
 
  I don't think what you're talking about is really high density anymore
  though.  That's just normal coverage.  Wireless is a lot more
 complicated
  than selecting a vendor, though.  If you know what you're doing even
  Ubiquiti could work decently, but if you don't even a Cisco solution
 won't
  save you.  You really need to be on top of surveying correctly and
 having
  appropriate AP placement and channel distribution.
 
 
 
 
 
  On Fri, Jun 19, 2015 at 1:57 AM, Sina Owolabi notify.s...@gmail.com
 wrote:
 
   Hi
  
   We are profiling equipment and design for an expected high user
 density
   network of multiple, close nit, residential/hostel units. Its going
 to be
   8-10 buildings with possibly a over 1000 users at any given time.
   We are looking at Ruckus and Ubiquiti as options to get over the high
   number of devices we are definitely going to encounter.
  
   How did you do it, and what would you advise for product and layout?
  
   Thanks in advance!
  
 
 
 
  --
  Ray Patrick Soucy
  Network Engineer
  University of Maine System
 
  T: 207-561-3526
  F: 207-561-3531

Re: Anycast provider for SMTP?

2015-06-18 Thread Ray Soucy
I gave a pretty broad answer because the question was about hosting mail
servers using anycast.

I don't think what I was getting at in regards to stateful vs. stateless
was incorrect, but I was talking about the application level, not the
nature of the protocol, and throwing TCP in there confused the issue (I
wasn't talking about TCP itself as a stateful protocol; notice I said
"most things").

You can certainly do anycast with TCP, and for small stateless services it
can be effective.  You can't do anycast for a stateful application without
taking the split-brain problem into account.

The entire CDN model was developed with anycast in mind, so yes, I'm sure
it does work quite well.  It generally fits the description of a stateless
service, and if it does implement a stateful service it's designed such
that nodes have a method of sharing information (perhaps using an
eventually consistent model).

Taking a normal application, like mail or a dynamic website, and just using
anycast for load balancing without designing the service with the anycast
model in mind is probably not a good idea.  You need to expect that the
same user could access different systems, and design for that.
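
An easy way to see that effect in the wild is the hostname.bind convention
from RFC 4892 (not every anycast operator enables it):

  # Ask an anycast DNS service which instance answered; run this from two
  # different networks and you'll often get two different names back.
  dig @k.root-servers.net hostname.bind chaos txt +short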

The real point here is the problem OP is describing should be easily
handled by having proper MX records, and getting into anycast for mail is
likely not the right choice (unless maybe your goal is to be really
efficient at SPAM).

I'd like to know more on what problems he's seeing.


On Thu, Jun 18, 2015 at 4:13 AM, Kurt Kraut lis...@kurtkraut.net wrote:

 Ray,


 Anycast is generally not well-suited for stateful connectivity (e.g. most
 things TCP).

 I don't know anything that would support that claim. I have been using for
 years BGP anycast for audio and video streaming, always in TCP (RTMP, HLS,
 WMS, and even the good and old ShoutCast) and works like a charm. And this
 is the 'secret sauce' of the company I work for, the thing we do better
 than our competitors that make our users happy and never wanting to leave
 us: anycast.

 We have customers that are TV stations and stream 24x7x365 their content
 and they have watchers getting their streaming also 24x7x365 (like waiting
 rooms, airports) with no complaints or instability.


 Best regards,


 Kurt Kraut

 2015-06-17 16:13 GMT-03:00 Ray Soucy r...@maine.edu:

 Anycast is generally not well-suited for stateful connectivity (e.g. most
 things TCP).  The use case for anycast is restricted to simple
 challenge-response protocol design.

 As such, you typically only see it leveraged for simple services (e.g.
 DNS,
 NTP).

 The reason for this, as you suspect, is you can never guarantee that the
 path and thus the server will remain consistent across client connections.

 Ideally you can leverage DNS to provide a response to a unicast resource
 rather than trying to make the service itself anycast.  DNS can be
 anycast,
 and DNS can provide different responses based on geographical location,
 but
 these can happen independently or together.

 As you still want failover, you might opt to announce the MX record with
 the priorities reversed but still pointing to each server.  For example MX
 10 server1, MX 20 server2 on one side, and MX 10 server2, MX 20 server1 on
 the other.

 Typically you would use a DNS load balancer rather than simple anycast DNS
 to achieve this though.


 On Mon, Jun 15, 2015 at 1:50 PM, Joe Hamelin j...@nethead.com wrote:

  I have a mail system where there are two MX hosts, one in the US and
 one in
  Europe.  Both have a DNS MX record metric of 10 so a bastardized
  round-robin takes place.  This does not work so well when one site goes
  down.   My solution will be to place a load balancer in a hosting site
  (virtual, of course) and have it provide HA.  But what about HA for the
  LB?  At first glance anycasting would seem to be a great idea but there
 is
  a problem of broken sessions when routes change.
 
  Have any of you seen something like this work in the wild?
 
 
  --
  Joe Hamelin, W7COM, Tulalip, WA, 360-474-7474
 



 --
 Ray Patrick Soucy
 Network Engineer
 University of Maine System

 T: 207-561-3526
 F: 207-561-3531

 MaineREN, Maine's Research and Education Network
 www.maineren.net





-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Anycast provider for SMTP?

2015-06-17 Thread Ray Soucy
Anycast is generally not well-suited for stateful connectivity (e.g. most
things TCP).  The use case for anycast is restricted to simple
challenge-response protocol design.

As such, you typically only see it leveraged for simple services (e.g. DNS,
NTP).

The reason for this, as you suspect, is you can never guarantee that the
path and thus the server will remain consistent across client connections.

Ideally you can leverage DNS to provide a response to a unicast resource
rather than trying to make the service itself anycast.  DNS can be anycast,
and DNS can provide different responses based on geographical location, but
these can happen independently or together.

As you still want failover, you might opt to announce the MX record with
the priorities reversed but still pointing to each server.  For example MX
10 server1, MX 20 server2 on one side, and MX 10 server2, MX 20 server1 on
the other.

Typically you would use a DNS load balancer rather than simple anycast DNS
to achieve this though.
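
In zone-file terms, the reversed-priority idea is something like the sketch
below (placeholder names; the point is that both views list both servers,
just with the preferences flipped):

  ; view served to US-side resolvers
  example.com.  IN MX 10 mx-us.example.com.
  example.com.  IN MX 20 mx-eu.example.com.

  ; view served to EU-side resolvers
  example.com.  IN MX 10 mx-eu.example.com.
  example.com.  IN MX 20 mx-us.example.com.

Sending MTAs already retry the next-best MX on failure, so failover comes
for free; the split view just steers the common case to the closer server.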


On Mon, Jun 15, 2015 at 1:50 PM, Joe Hamelin j...@nethead.com wrote:

 I have a mail system where there are two MX hosts, one in the US and one in
 Europe.  Both have a DNS MX record metric of 10 so a bastardized
 round-robin takes place.  This does not work so well when one site goes
 down.   My solution will be to place a load balancer in a hosting site
 (virtual, of course) and have it provide HA.  But what about HA for the
 LB?  At first glance anycasting would seem to be a great idea but there is
 a problem of broken sessions when routes change.

 Have any of you seen something like this work in the wild?


 --
 Joe Hamelin, W7COM, Tulalip, WA, 360-474-7474




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Anycast provider for SMTP?

2015-06-17 Thread Ray Soucy
NTP might have been a bad example for the timing reasons.  One thing to
keep in mind with anycast is that unless there are problems, the routes are
fairly stable, and depending on how many servers you deploy and what route
visibility you have, even different providers will often see the same
location as the closest path in terms of BGP.

I believe pool.ntp.org employs anycast to some extent, but I'm not sure
about that.  SNTP seems to have a discovery component designed to work
well with anycast.

RFC 7094 has a good summary of all this.

In general, the consensus seems to be that anycast is better used for
discovery services rather than services themselves.





On Wed, Jun 17, 2015 at 5:12 PM, Chuck Church chuckchu...@gmail.com wrote:

 -----Original Message-----
 From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of Ray Soucy
 Sent: Wednesday, June 17, 2015 3:14 PM
 To: Joe Hamelin
 Cc: NANOG list
 Subject: Re: Anycast provider for SMTP?


 As such, you typically only see it leveraged for simple services (e.g.
 DNS, NTP).

 I've been thinking about this for NTP.  Wouldn't you end up with constant
 corrections with NTP and Anycast?  Or is the assumption your anycasted NTP
 hosts are all peers of each other and extremely close in time to one
 another?  That still wouldn't address the latency differences between the
 different hosts.

 Chuck




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Is it safe to use 240.0.0.0/4

2015-06-17 Thread Ray Soucy
There is already more than enough address space allocated for NAT, you
don't need to start using random prefixes that may or may not be needed for
other purposes in the future.

For all we know, tomorrow someone could write an RFC requesting an address
reserved for local anycast DNS and it could be assigned from this block.

On Wed, Jun 17, 2015 at 5:07 PM, Luan Nguyen lngu...@opsource.net wrote:

 Is that safe to use internally? Anyone using it?
 Just for NATting on Cisco gear...




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Android (lack of) support for DHCPv6

2015-06-12 Thread Ray Soucy
Personally my view is that DHCPv6-PD support would be much better for
tethering, but I don't get to tell Google how to do that just like they
don't get to tell me how to give out addresses.  My only request would be
if you do implement DHCPv6-PD for tethering, please make it only request a
prefix when you actually enable tethering, not as the default.  That would
be bad for different reasons.

Wouldn't the simple play here be for Android to just throw up a message
saying "This network does not support tethering" if SLAAC isn't enabled,
and to let users complain to local operators if that's something they
want?  Google doesn't get blamed, operators are happy, and ultimately users
have a better experience because the expectations of the network are clear
rather than trying something that will likely not work consistently across
networks.

If the concern is that you don't want carriers to use DHCPv6, then you
could just limit it to 802.11.

The point is that there are several options here to address people's
concerns and make IPv6 adoption that much easier, but the position of
Google on DHCPv6 support for Android is just another barrier to widespread
adoption of IPv6.  I honestly appreciate the work Google has been doing for
years to promote IPv6 adoption, but they're wrong here.






On Fri, Jun 12, 2015 at 11:30 AM, Todd Underwood toddun...@gmail.com
wrote:

 lorenzo already stated that the cost was in user satisfaction related to
 tethering and the business reason was the desire to not implement NAT in v6
 on android.

 many people didn't like those reasons or think that they are less
 important than their own reasons.

 shockingly, everyone believes that their own priorities are more important
 than everyone else's priorities.

 the 'cranky about the lack of DHCPv6' crowd has already made their points
 and further shut down conversation by demanding that lorenzo speak for
 Google on this thread.  indeed, shouting loudly and shutting down
 conversation was almost certainly the intent of many of the posts here.  so
 mission accomplished.

 fists have been pounded.  conversation has been halted.  well done.

 can we move on now?

 t

 On Fri, Jun 12, 2015 at 11:18 AM, James R Cutler 
 james.cut...@consultant.com wrote:

 Ray Soucy has given us a nice summary. It goes along with “please let me
 manage my business and don’t take away my tools just to satisfy your
 prejudices.”

 Selection of management policies and implementations is ALWAYS a local
 issue (assuming consideration of legal necessities). Especially in the
 end-to-end model, the requirements for management of end systems have not
 changed because the IP layer protocol has changed. This is a good reason
 for not prohibiting continuing use of DHCP-based solutions. “Purity of
 protocols” is not a reason for increasing management costs such as
 described by Ray.

 This debate about DHCPv6 has been going on far too long.  I want to know
 how much it will cost the “SLAAC-only” faction to quit fighting DHCPv6.
 My conjecture is that it would be minimal, especially as compared to the
 costs for the activities described by Ray.

 Putting it differently: What business purpose is served by fighting
 full-functioned DHCPv6 deployment. Don’t give me any RFC or protocol
 arguments - just tell me the business reasons for forcing others to change
 how they manage their business.

 James R. Cutler
 james.cut...@consultant.com
 PGP keys at http://pgp.mit.edu



  On Jun 12, 2015, at 10:07 AM, Ray Soucy r...@maine.edu wrote:
 
  The only thing I would add is that DHCPv6 is not just about tracking
  clients.  Yes there are ways to do so using SLAAC, but they are not
 pretty.
 
  Giving too much weight to tracking being the reason for DHCPv6 is just
 as
  bad as giving too much weight to tethering as the reason against it.  It
  skews the conversation.
 
  For us, DHCPv6 is about *operational consistency*.
 
  Universities have been running with the end-to-end model everyone is
  looking to IPv6 to restore for a very long time.
 
  When you connect to our network, wired or wireless, you get a public IP
  with no filtering in place in most cases.
 
  We have been living the end-to-end model, and that has given us
 operational
  experience and insight on what it actually takes to support access
 networks
  using this model.
 
  Almost every peer institution I talk to has implemented custom systems
  refined over decades to preserve the end-to-end model in a time of
  increasing security incidents and liability.  These include IPAM
 systems,
  which feed into vulnerability scanning, or host filtering for incident
  response, etc.
 
  These systems are in place for IPv4, and modifying them to support IPv6
  (which most of us have done) is relatively easy in the case of DHCPv6.
 
  By maintaining consistency between IPv4 and IPv6 we avoid having to
 retrain
  hundreds of IT workers on supporting connectivity.  By saying that you
  won't support DHCPv6, you effectively force

Re: Android (lack of) support for DHCPv6

2015-06-12 Thread Ray Soucy
The only thing I would add is that DHCPv6 is not just about tracking
clients.  Yes there are ways to do so using SLAAC, but they are not pretty.

Giving too much weight to tracking being the reason for DHCPv6 is just as
bad as giving too much weight to tethering as the reason against it.  It
skews the conversation.

For us, DHCPv6 is about *operational consistency*.

Universities have been running with the end-to-end model everyone is
looking to IPv6 to restore for a very long time.

When you connect to our network, wired or wireless, you get a public IP
with no filtering in place in most cases.

We have been living the end-to-end model, and that has given us operational
experience and insight on what it actually takes to support access networks
using this model.

Almost every peer institution I talk to has implemented custom systems
refined over decades to preserve the end-to-end model in a time of
increasing security incidents and liability.  These include IPAM systems,
which feed into vulnerability scanning, or host filtering for incident
response, etc.

These systems are in place for IPv4, and modifying them to support IPv6
(which most of us have done) is relatively easy in the case of DHCPv6.
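
To make "relatively easy" concrete: in ISC dhcpd, the v6 configuration reads
almost exactly like the v4 side, so IPAM tooling that generates one can
generate the other.  A minimal sketch, assuming ISC DHCP 4.x, with
placeholder names and prefixes:

  # dhcpd6.conf -- run with "dhcpd -6"
  subnet6 2001:db8:100:1::/64 {
      range6 2001:db8:100:1::1000 2001:db8:100:1::1fff;
  }

  # Static registration, keyed on the client's DUID instead of a MAC:
  host lab-printer {
      host-identifier option dhcp6.client-id 00:01:00:01:1c:39:cf:88:08:00:27:fe:8f:95;
      fixed-address6 2001:db8:100:1::10;
  }

Same config language and the same lease file to parse for logging; only the
objects are v6.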

By maintaining consistency between IPv4 and IPv6 we avoid having to retrain
hundreds of IT workers on supporting connectivity.  By saying that you
won't support DHCPv6, you effectively force us to invest considerable
effort in redesigning systems and retraining IT personnel, while
introducing confusion into the support process, because IPv4 and IPv6 are
delivered using completely different methods.

You have just made it cheaper for us to turn to NAT than to support IPv6.
That's unacceptable.

You might be thinking "well, that's just universities and a small percent
of users," but the point here is that we do these things for a reason; we've
been living without NAT and our collective operational experience doing so
is something that would be wise to take into consideration instead of
dismissing it or trying to call us names.

Organizations running SLAAC who say everything is fine, think everything is
fine because IPv6 has received almost no attention from bad actors in terms
of security incidents or denial of service attacks.  Even well known
servers with IPv6 addresses on our network rarely see SSH attempts over
IPv6 today.

*This will fundamentally change as IPv6 adoption grows*.

Are you prepared to be able to deal with abuse reports of hosts on your
network participating on denial of service attacks?  Can you associate the
identity of a system to an individual when law enforcement demands you to
do so?  How much longer will it take you to track down a host by its IPv6
address to disable it?  How many other users have degraded service while
they're waiting for you to do that?

For most people that are used to the typical IPv4 model (NAT and open pool
DHCP), these events are very infrequent, so of course they don't get it.
For those of us running a more liberal network which preserves the
end-to-end model, free from aggressive filtering on user traffic, these
incidents are literally daily occurrences.

The fact is that you never know what service a user might enable on their
device only to be exploited and degrade service for their peers.

So yes, we are looking to the future in the case of DHCPv6, because we've
learned from the past.





On Fri, Jun 12, 2015 at 3:05 AM, valdis.kletni...@vt.edu wrote:

 On Fri, 12 Jun 2015 02:07:22 -, Laszlo Hanyecz said:

university net nazis
  
   Did you really just write that?
  
 
  As far as net nazi, I meant it in the same sense as a BOFH.  Someone
 who is
  intentionally degrading a user's experience by using technical means to
 block
  specifically targeted applications or behaviors.

 Well, which is more BOFH-ish:

 1) We insist that you connect in a way that allows us to identify and track
 you for DMCA and other legal requirements that, quite frankly, we'd really
 rather not have to do, but it's a cost of doing business.

 2) We don't let you connect at all because we can't do so without exposing
 ourselves to more legal liability than we want.

 Besides which, that's a total red herring.

 If you actually go back and *read*, I don't think anybody's intentionally
 degrading by blocking targeted applications - except those who are
 refusing
 to provide features to allow the applications to work on more network
 configs.
 The vast majority of us would *prefer* that Android got fixed so it can
 ask for
 more addresses and do more stuff. Most of us don't *care* if a user sucks
 up 13
 adresses across 4 devices at the same time - IPv6 addresses are *cheap*.
 On the other hand, covering your ass when a subpoena shows up and you
 realize
 you don't have the data needed to point at the user they're *really*
 looking
 for is incredibly expensive.

 OK? Let me say that again:  We're all asking Google to quit being stubborn
 and add a feature to 

Re: eBay is looking for network heavies...

2015-06-11 Thread Ray Soucy
I really wonder how people get into this field today.  It has gotten
incredibly complex and I've been learning since before I was a teenager
(back when it was much more simple).

I'm 31 now, but I started getting into computers and specifically
networking at a very young age (elementary school).  We had a pair of
teachers that were enthusiasts and built up a computer lab with everything
on token ring running Novell.  I thought the fact that I could change to a
different PC by drive letter in DOS was the most amazing thing I had ever
seen in the 3rd grade.  From there I was really hooked, got really into
BBSing, and when the first dial-up ISPs started popping up I made it a
point to get a job with them.

My school district didn't offer a technical program for Internetworking but
they had a technical school that competed in the SkillsUSA competitions and
approached me about competing in the Internetworking event, without any
education or mentor I won the gold medal at the State level both years I
competed and went on to the nationals (where that lack of guidance and
access to equipment to train on meant I got my slice of humble pie).  I
held my own, but the guys who won at the national level were just so much
more prepared.  Despite the stigma of SkillsUSA being trades focused, the
Internetworking competition was a really great experience that mixed
physical networking and basically a CCNA level of theory (they actually
used an old copy of the CCNA as the exam).

During this same time I got a paid internship for the local hospital and
rebuilt their entire network after seeing the nightmare it was (they had
the AS400 with all their healthcare data sitting on a public IP address
with no firewall and default QSECOFR credentials sitting there for the
taking with 5250 over IP enabled).  It was pretty crazy for a high school
student to be doing a full redesign of a network for a healthcare provider,
even building frame-relay links between facilities and convincing the local
cable company to provide dark fiber between a few.

When I went to university I made it a point to get student employment with
the NOC they ran to provide all of the public schools and libraries in the
state with their Internet access, and that evolved into a full time job for
them within a few years.

Looking back, it's been like a perfect storm of opportunity that I just
don't think exists today.  I'm really happy I was born when I was and able
to have a front row seat to see the explosion of the Internet.  I don't
know if I'm just getting old but I feel like technology has gotten so
easy for young people that most of them have no idea how it works, and no
desire to know.

When we interview for new people, especially fresh out of school, it's
really disappointing when I hear them start to talk about a /24 as a class
C and go on to find out the extent of their understanding ends at a
textbook that is 20 years out of date.  When I ask if they use Linux and
they respond yes, I start getting into the details and learn they don't
even know the basics on the CLI like being able to find and kill a process
(thanks, Ubuntu).  I think it's a big part of why the industry finds so
little value in a degree vs. experience.

That said, there are schools with dedicated networking programs that have
really impressed me.  RIT is the first that comes to mind.





On Thu, Jun 11, 2015 at 8:53 AM, William Waites wwai...@tardis.ed.ac.uk
wrote:

 On Thu, 11 Jun 2015 14:24:31 +0200, Ruairi Carroll 
 ruairi.carr...@gmail.com said:

  What I found is that back in early-mid 00's, the industry was a
  black box.  Unless you knew someone inside of the industry...

 I suspect this is partly a result of the consolidation that went
 on. In the mid 1990s when I started, there were tons of small mom and
 pop ISPs with 28.8 modems stacked on Ikea shelving. The way that I got
 my first job as a student was literally by hanging around one of them
 and pestering them until they hired me part time. These small ISPs
 grew and most were eventually were acquired and people who stuck
 around through that -- especially the often quite complicated network
 integration that happens after acquisitions -- learned quite a lot
 about how the Internet operates at a variety of scales and saw a
 variety of different architectures and technical strategies.

 The scale and stability of today's Internet means that path is mostly
 closed now I think, particularly if what you want to do is get a job
 at a big company. But not entirely, there are still lots of rich
 field-learning opportunities on the periphery, in places where large
 carriers fear to tread...

 -w




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Android (lack of) support for DHCPv6

2015-06-11 Thread Ray Soucy
That's really not the case at all.

You're just projecting your own views about not thinking DHCPv6 is valid
and making yourself and Lorenzo out to be some sort of victims of NANOG
and the ...

 university net nazis

Did you really just write that?

What we're arguing for here is choice, the exact opposite of the
association you're trying to make here.  It's incredibly poor taste to
throw that term around in this context, and adds nothing to the discussion.

People are not logical.  They adopt a position and then look for
information to support it rather than counter it; they even go as far as to
ignore or dismiss relevant information in the face of logic.  That's
religion.  And this entire discussion continues to be rooted in religion
rather than pragmatism.

DHCPv6 is a tool, just as SLAAC is a tool.  IPv6 was designed to support
both options because they both have valid use cases.  Please allow network
operators to use the best tool for the job instead of telling us all we're
required to do it your way.  (Can you even see how ridiculous this whole
nazi name-calling is, given the position you're taking?)

You don't get to just say "I'm not going to implement this because I don't
agree with it," which is what Google is doing in the case of Android.

The reason Lorenzo has triggered such a backlash on NANOG is that his
fundamental argument on why he doesn't see DHCPv6 as valid for Android
is quite frankly a very weak argument at best.  If you're going to stand up
and say you're not going to do what everyone else has already done,
especially when it comes to implementation of fundamental standards that
everything depends upon, you need to have a better reason for it than the
one Lorenzo provided.

I honestly hope he collects himself and takes the time to respond, because
it really is a problem.

As much as you may not want DHCPv6 to be a thing, it's already a thing.





On Thu, Jun 11, 2015 at 7:42 PM, Laszlo Hanyecz las...@heliacal.net wrote:

 Lorenzo is probably not going to post anymore because of this.

 It looks to me like Lorenzo wants the same thing as most everyone here,
 aside from the university net nazis, and he's got some balls to come defend
 his position against the angry old men of NANOG.  Perhaps the approach of
 attacking DHCP is not the right one, but it sounds like his goal is to make
 IPv6 better than how IPv4 turned out.

 Things like privacy extensions, multiple addresses and PD are great
 because they make it harder for people to do address based tracking, which
 is generally regarded as a desirable feature except by the people who want
 to do the tracking.  DHCPv6 is a crutch that allows operators to simply
 implement IPv6 with all the same hacks as IPv4 and continue to do address
 based access control, tracking, etc.  It's like a 'goto' statement - it can
 be used to do clever things, but it can also be used to hack stuff and
 create very hard to fix problems down the road.  I think what Lorenzo is
 trying to do is to use his influence/position to forcefully prevent people
 from doing this, and while that may not be the most diplomatic way, I
 admire his courage in posting here and trying to reason with the mob.

 -Laszlo


 On Jun 10, 2015, at 10:24 PM, Michael Thomas m...@mtcc.com wrote:

  On 06/10/2015 02:51 PM, Paul B. Henson wrote:
  From: Lorenzo Colitti
  Sent: Wednesday, June 10, 2015 8:27 AM
 
  please do not construe my words on this thread as being Google's
 position
  on anything. These messages were sent from my personal email address,
 and I
  do not speak for my employer.
  Can we construe your postings on the issue thread as being Google
 and/or Androids official position? They are posted by lore...@google.com
 with a tag of Project Member, and I believe you also declined the request
 in the issue under that mantle.
 
 
  Oh, stop this. The only thing this will accomplish is a giant black hole
 of silence from anybody at Google and any other $MEGACORP
  in a similar situation.
 
  Mike




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Android (lack of) support for DHCPv6

2015-06-11 Thread Ray Soucy
Well, most systems implemented DHCPv6 support a long time ago.  Despite
other efforts to have Google support DHCPv6 for Android, nothing has
happened.  There is nothing wrong with using NANOG to call out a major
vendor for this, even if they are a significant sponsor.

Just because you don't agree with the direction of the discussion doesn't
mean it has no value.

On Thu, Jun 11, 2015 at 9:03 PM, Ca By cb.li...@gmail.com wrote:

 Yeh, we get it.  Repeating yourself is not helpful. The horse is dead 

 Please move your android feature request to a forum more fit for your
 request.

 On Thursday, June 11, 2015, Paul B. Henson hen...@acm.org wrote:

   From: Laszlo Hanyecz
   Sent: Thursday, June 11, 2015 4:42 PM
  
   from the university net Nazis
 
  Wow, it must be nice to live in a fairyland utopia where there is no
 DMCA,
  no federal laws such as HEOA, and a wide variety of other things you
  clearly
  know nothing about that require universities to be able to track their
  users
  and manage their networks.
 
   attacking DHCP is not the right one, but it sounds like his goal is to
  make IPv6
   better than how IPv4 turned out.
 
  I don't think a single person here has a goal of making IPv6 worse, nor
  necessarily has any objection to the improvements he has suggested.
 OTOH, I
  think the number of people who think he is making a good decision in
  refusing to implement DHCPv6 is practically nil.
 
   diplomatic way, I admire his courage in posting here and trying to
 reason
  with
   the mob.
 
  If he really wants to validate his position on not implementing DHCPv6,
  maybe he should approach all of the other vendors who already did and get
  them to remove it. Being the one and only holdout on implementing a
 widely
  deployed Internet standard looks more like lunatic fringe than visionary
  leader 8-/.
 
 
 




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Android (lack of) support for DHCPv6

2015-06-10 Thread Ray Soucy
So here is the thing.

You can try to use enhanced functionality which depends on multiple
addresses as justification for saying DHCPv6 is not supported.

In practice, your device will just not be supported.

As you pointed out, there isn't anything that forces adoption of IPv6 right
now.

If your client is broken because of an incomplete implementation, I just
won't give it an IPv6 address at all.  I think a lot of others feel the
same way.

At least on our network, the number of Apple devices dwarf Android by
several times.  With Android being a minority and not implementing DHCPv6
support you force us to make Android a second class citizen.

I'm perfectly fine NATing Android, and don't see us giving up the
operational benefits to DHCPv6 anytime soon.

Also, in terms of hotspot functionality being the major driver ... I
already provide ubiquitous wireless coverage.  I don't want users enabling
a hotspot (rogue AP) on campus because it has a negative impact on service
for others, so the whole argument is really meaningless in the context of
higher education and campus networking.

A phone makes a terrible AP when the laptop supports full 802.11ac.  I
really don't know anyone who connects through their phone when WiFi is
available and the phone is also connected to the same WiFi network.  It's
not even a use case I think most people would consider common or valid, but
we're using it as justification to not support something that IS very common
and widely deployed.





On Wed, Jun 10, 2015 at 7:16 AM, Lorenzo Colitti lore...@colitti.com
wrote:

 On Wed, Jun 10, 2015 at 5:31 PM, Sander Steffann san...@steffann.nl
 wrote:

  I can also see more deployment issues (much more state in the routers for
  all those PDs, needing huge amounts of /64s (or larger) to be able to
 deal
  with a few hundred/thousand clients) but it would be very nice if this
 was
  possible :)
 

 Well, in IPv4 you can do 16M devices in 10/8. You can divide IPv6 space in
 exactly the same way and give every client a /64. A /24 becomes a /56, and
 your 10/8 becomes a /40. A /40 is really not hard to get.




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Android (lack of) support for DHCPv6

2015-06-10 Thread Ray Soucy
Actually we do support DHCPv6-PD, but Android doesn't even support DHCPv6
let alone PD, so that's the discussion here, isn't it?

As for thinking long term and the future, we need devices to work
within current models of IPv6 to accelerate _adoption_ of IPv6 _today_
before we can get to that future you're talking about.

Not supporting DHCPv6 ultimately holds back adoption, which makes people
see IPv6 as optional for longer, and discourages deployment because vendor
support is all over the place and seen as not ready.

This isn't theory, we've been _living_ with this as a reality for years
now.  The networks are ready; the clients are not.

Universities see a constant stream of DMCA violation notices that need to
be dealt with and not being able to associate a specific IPv6 address to a
specific user is a big enough liability that the only option is to not use
IPv6.  As I said, Android becomes a second class citizen on the network
under your model.
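
[For reference: serving PD from ISC dhcpd is nearly a one-liner per pool.
A minimal sketch using documentation space; the prefix values are
illustrative, not our production config:]

subnet6 2001:db8:f00::/48 {
  # hand each requesting client a /56 out of this /48
  prefix6 2001:db8:f00:: 2001:db8:f00:ff00:: /56;
}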


On Wed, Jun 10, 2015 at 8:21 AM, Lorenzo Colitti lore...@colitti.com
wrote:

 On Wed, Jun 10, 2015 at 8:35 PM, Ray Soucy r...@maine.edu wrote:

 In practice, your device will just not be supported.

 As you pointed out, there isn't anything that forces adoption of IPv6
 right now.


 It's certainly a possibility for both sides in this debate to say my way
 or the highway, and wait and see what happens when operators start
 removing support for IPv4.


 I'm perfectly fine NATing Android, and don't see us giving up the
 operational benefits to DHCPv6 anytime soon.


 Oh, I definitely see that DHCPv6 is easier for network operators.

 But even if you're dead set on using DHCPv6, what I don't see is why don't
 you support DHCPv6 PD instead of IA_NA? It's the same amount of state. Same
 accountability. Same transaction model. But unlike NA, the device gets as
 many addresses as it needs. Nothing breaks, and you get future flexibility.
 Mobile devices don't have to implement NAT, and application developers
 don't have to work around it. You size your IPv6 pools based on the size of
 your IPv4 pools, and don't run out of addresses. Technically, that sort of
 arrangement is superior to IA_NA in basically every way. So... why use
 IA_NA?

 Even if the answers are that's what we do in IPv4, and we want to do it
 the same way, or we're worried that this is strange and will tickle
 vendor bugs, it's not supported by the IPAM we use today, or this is
 what we've decided, our network our rules, I would hope that we as an
 industry can think a little longer term than that.

 Also, in terms of hotspot functionality being the major driver


 Don't think about yesterday, think about tomorrow. Tethering and 464xlat
 are just two examples of what can be done when you have the ability to use
 more than one IPv6 address and cannot be done without that. We know these
 will break today; tomorrow, we can use multiple addresses to do things we
 haven't thought of yet.




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Android (lack of) support for DHCPv6

2015-06-10 Thread Ray Soucy
The whole conversation is around 464XLAT on IPv6-only networks right?
We're going to be dual-stack for a while IMHO, and by the time we can get
away with IPv6 only for WiFi, 464 should no longer be relevant because
we'll have widespread IPv6 adoption by then.

Carriers can do IPv6 only because they tightly control the clients, for
WiFi clients are and will always be all over the place, so dual stack will
be pretty much a given for a long time.


On Wed, Jun 10, 2015 at 9:50 AM, George, Wes wesley.geo...@twcable.com
wrote:


 On 6/10/15, 2:32 AM, Lorenzo Colitti lore...@colitti.com wrote:

 I'd be happy to work with people on an Internet draft or other
 standard to define a minimum value for N, but I fear that it may not
 possible to gain consensus on that.

 WG] No, I think that the document you need to write is the one that
 explains why a mobile device needs multiple addresses, and make some
 suggestions about the best way to support that. Your earlier response
 detailing those vs how they do it in IPv4 today was the first a lot of us
 have heard of that, because we're not in mobile device development and
 don't necessarily understand the secret sauce involved. This is especially
 true for your mention of things like WiFi calling, and all of the other
 things that aren't tethering or 464xlat, since neither of those are as
 universally agreed-upon as must have on things like enterprise networks.
 I'm sure there are also use cases we haven't thought of yet, so I'm not
 trying to turn this into a debate about which use cases are valid, just
 observing that you might get more traction with the others.


 Asking for more addresses when the user tries to enable features such as
 tethering, waiting for the network to reply, and disabling the features if
 the network does not provide the necessary addresses does not seem like it
 would provide a good user experience.

 WG] Nor does not having IPv6 at all, and being stuck behind multiple
 layers of NAT, but for some reason you seem ok with that, which confuses
 me greatly. The amount of collective time wasted arguing this is likely
 more than enough to come up with cool ways to optimize the ask/wait/enable
 function so that it doesn't translate to a bad user experience, and few
 things on a mobile device are instantaneous anyway, so let's stop acting
 like it's an unsolvable problem.

 Thanks,

 Wes


 Anything below this line has been added by my company’s mail server, I
 have no control over it.
 --


 This E-mail and any of its attachments may contain Time Warner Cable
 proprietary information, which is privileged, confidential, or subject to
 copyright belonging to Time Warner Cable. This E-mail is intended solely
 for the use of the individual or entity to which it is addressed. If you
 are not the intended recipient of this E-mail, you are hereby notified that
 any dissemination, distribution, copying, or action taken in relation to
 the contents of and attachments to this E-mail is strictly prohibited and
 may be unlawful. If you have received this E-mail in error, please notify
 the sender immediately and permanently delete the original and any copy of
 this E-mail and any printout.




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Android (lack of) support for DHCPv6

2015-06-10 Thread Ray Soucy
I've already written systems to do this kind of thing, but the logging
requirements quickly go through the roof for a non-trivial network;
especially in the case of temporary addressing now default on many
systems.  That isn't so much the issue as operational consistency and
supportability.

The requirement for hosts to be dual stack will exist for some time.

For the sanity of IT workers and users alike (most of which don't really
know the details of IPv6 and barely understand address scopes let alone
multiple addresses) there needs to be some level of consistency between how
hosts are addressed and communicate between the two protocols.  Saying
"we'll develop one system for IPv4 and one for IPv6" ultimately results in
"Let's not deploy IPv6 just yet."

This rings especially true when security concerns come up.  Consistency
between IPv4 and IPv6 needs to be possible in this transition period, or
people simply decide to not deploy.  Consistency in addressing and
maintaining a one host one address model has major implications for policy
construction, flow analysis and incident response, denial of service
mitigation, etc.  If the consistency isn't there, you need to re-invent
the wheel for every aspect, then maintain different systems for IPv4 and
IPv6 along side one another.

Even worse, if we keep seeing adoption held up by inconsistent client
implementations I fear we'll start seeing commercial products which NAT or
proxy IPv4 to IPv6 to avoid using IPv6 internally.  It's very ugly and
requires some DNS magic to work, but it's not impossible.  We're already
seeing this in terms of cloud computing and virtualization.  Servers have
IPv4 addresses and only a load balancer with an external IPv6 address.

We absolutely need to make it possible for people to adopt IPv6 before we
can reach a point where IPv4 can go away, and for some organizations the
answer of Just use SLAAC is simply not acceptable.

They just won't do it.

Preventing IPv6 adoption hurts us all.  And Android not supporting a MAJOR
part of IPv6 like DHCPv6 is preventing adoption.





On Wed, Jun 10, 2015 at 1:42 PM, Sander Steffann san...@steffann.nl wrote:

 
  It's not the *only* option. There are large networks - O(100k) IPv6
 nodes - that do ND monitoring for accountability, and it does work for
 them. Many devices support this via syslog, even. As you can imagine, my
 Android device gets IPv6 at work, even though it doesn't support DHCPv6.
 Other universities, too. It's obviously  not your chosen or preferred
 mechanism, but it does work.

 /me starts to write that whitepaper that educates people on how to do this

 Cheers,
 Sander




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Android (lack of) support for DHCPv6

2015-06-10 Thread Ray Soucy
I don't really feel I was trying to take things out of context, but the
full quote would be:

If there were consensus that delegating a prefix of sufficient size via
DHCPv6 PD of a sufficient size is an acceptable substitute for stateful
IPv6 addressing in the environments that currently insist on stateful
DHCPv6 addressing, then it would make sense to implement it. In that
scenario, Android would still not implement DHCPv6 NA, but it would
implement DHCPv6 PD.

To me, that's essentially saying:

EVEN IF we decided to support DHCPv6-PD, and that's a big IF, we will
never support stateful address assignment via DHCPv6.

This rings especially true when compared against the context of everything
else you've written on the subject.

I think that's how most others on this list would read it as well.

If that isn't what you meant to say, then I'm sorry.  I'm certainly not
trying to put words in your mouth.

I still feel that it's a very poor position to take.

Given that you don't speak for Google on the subject, if you're not
the decision maker for this issue on Android, could you pull in the people
at Google who are, or at least point us to them?

A lot of us would like the chance to make our case and expose the harm that
Android is doing by not supporting DHCPv6.

I think the Android team is very overconfident in their ability to shape
the direction of IPv6 adoption, especially with years old Android releases
being still in production and it taking incredibly long for changes to
trickle down through the Android ecosystem.

That delay is also why we have a hard time accepting the mindset that IF
you see a need for it in the future you'll add it.  That will be 4 to 8
years too late.





On Wed, Jun 10, 2015 at 12:29 PM, Lorenzo Colitti lore...@colitti.com
wrote:

 On Thu, Jun 11, 2015 at 12:36 AM, Jeff McAdams je...@iglou.com wrote:

 Then you need to be far more careful about what you say. When you said
 Android would still not support... you, very clearly, made a statement of
 product direction for a Google product.


 Did you intentionally leave out the "in that scenario," words that came right
 before the ones you quoted?

 How does a sentence that says "in that scenario, android would X"
 constitute a statement of direction?




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Android (lack of) support for DHCPv6

2015-06-10 Thread Ray Soucy
I agree that some of the rhetoric should be toned down (go out for a beer
or something, guys ... I did).

There is a difference between fiery debate with Lorenzo and a witch hunt,
and some of this is starting to sound a bit personal.  I shouldn't have
worded things the way I did, I went for the cheap shot in one of those last
notes and that isn't really constructive.  I'm sorry.

I think for many this thread represents years of frustration, though, and
LC making the statements in the way he did made him a focal point for that
frustration.

The problem is there are many of us on the front lines trying to push for
IPv6 adoption outside the bubble of idealism and when people of great
influence like LC take positions like DHCPv6 isn't required it's like a
slap in the face to all that effort.

We really need to see Google and Android come on board with DHCPv6 support
and I'm interested in how we can help make that happen.





On Wed, Jun 10, 2015 at 7:00 PM, Jeff McAdams je...@iglou.com wrote:

 No.

 Given that Lorenzo was posting with absolute statements about Google's
 approach, and with what they would do in the future in response to
 hypothetical standards developments, these questions are completely valid.

 On Jun 10, 2015 5:24 PM, Michael Thomas m...@mtcc.com wrote:
 
  On 06/10/2015 02:51 PM, Paul B. Henson wrote:
   From: Lorenzo Colitti
   Sent: Wednesday, June 10, 2015 8:27 AM
  
   please do not construe my words on this thread as being Google's
 position
   on anything. These messages were sent from my personal email address,
 and I
   do not speak for my employer.
   Can we construe your postings on the issue thread as being Google
 and/or Androids official position? They are posted by lore...@google.com
 with a tag of Project Member, and I believe you also declined the request
 in the issue under that mantle.
  
  
  Oh, stop this. The only thing this will accomplish is a giant black hole
  of silence from anybody at Google and any other $MEGACORP
  in a similar situation.
 
  Mike




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Android (lack of) support for DHCPv6

2015-06-10 Thread Ray Soucy
I like to think of it more like the command tent ;-)

On Wed, Jun 10, 2015 at 9:40 PM, Todd Underwood toddun...@gmail.com wrote:

 Anyone who thinks Lorenzo hasn't been on the front lines of pushing for
 IPv6 adoption is pretty late to the party or confused about the state of
 affairs.

 T

 On Wed, Jun 10, 2015, 21:30 Ray Soucy r...@maine.edu wrote:

 I agree that some of the rhetoric should be toned down (go out for a beer
 or something, guys ... I did).

 There is a difference between fiery debate with Lorenzo and a witch hunt,
 and some of this is starting to sound a bit personal.  I shouldn't have
 worded things the way I did, I went for the cheap shot in one of those
 last
 notes and that isn't really constructive.  I'm sorry.

 I think for many this thread represents years of frustration, though, and
 LC making the statements in the way he did made him a focal point for that
 frustration.

 The problem is there are many of us on the front lines trying to push
 for
 IPv6 adoption outside the bubble of idealism and when people of great
 influence like LC take positions like DHCPv6 isn't required it's like a
 slap in the face to all that effort.

 We really need to see Google and Android come on board with DHCPv6 support
 and I'm interested in how we can help make that happen.





 On Wed, Jun 10, 2015 at 7:00 PM, Jeff McAdams je...@iglou.com wrote:

  No.
 
  Given that Lorenzo was posting with absolute statements about Google's
  approach, and with what they would do in the future in response to
  hypothetical standards developments, these questions are completely
 valid.
 
  On Jun 10, 2015 5:24 PM, Michael Thomas m...@mtcc.com wrote:
  
   On 06/10/2015 02:51 PM, Paul B. Henson wrote:
From: Lorenzo Colitti
Sent: Wednesday, June 10, 2015 8:27 AM
   
please do not construe my words on this thread as being Google's
  position
on anything. These messages were sent from my personal email
 address,
  and I
do not speak for my employer.
Can we construe your postings on the issue thread as being Google
  and/or Androids official position? They are posted by
 lore...@google.com
  with a tag of Project Member, and I believe you also declined the
 request
  in the issue under that mantle.
   
   
   Oh, stop this. The only thing this will accomplish is a giant black
 hole
   of silence from anybody at Google and any other $MEGACORP
   in a similar situation.
  
   Mike
 



 --
 Ray Patrick Soucy
 Network Engineer
 University of Maine System

 T: 207-561-3526
 F: 207-561-3531

 MaineREN, Maine's Research and Education Network
 www.maineren.net




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Android (lack of) support for DHCPv6

2015-06-10 Thread Ray Soucy
Respectfully disagree on all points.

The statement that Android would still not implement DHCPv6 NA, but it
would implement DHCPv6 PD. is troubling because you're not even willing to
entertain the idea for reasons that are rooted in idealism rather
than pragmatism.

Very disappointing to see that this is the position of Google.


On Wed, Jun 10, 2015 at 10:58 AM, Lorenzo Colitti lore...@colitti.com
wrote:

 On Wed, Jun 10, 2015 at 10:06 PM, Ray Soucy r...@maine.edu wrote:

 Actually we do support DHCPv6-PD, but Android doesn't even support DHCPv6
 let alone PD, so that's the discussion here, isn't it?


 It is possible to implement DHCPv6 without implementing stateful address
 assignment.

 If there were consensus that delegating a prefix of sufficient size via
 DHCPv6 PD of a sufficient size is an acceptable substitute for stateful
 IPv6 addressing in the environments that currently insist on stateful
 DHCPv6 addressing, then it would make sense to implement it. In that
 scenario, Android would still not implement DHCPv6 NA, but it would
 implement DHCPv6 PD.

 What needs to be gauged about that course of action is how much consensus
 would be achieved, whether network operators would actually use it (IPv6
 has a long and distinguished history of people claiming I can't support
 IPv6 until I get feature X, feature X appearing, and people changing their
 claim to I can't support IPv6 until I get feature Y), and how much of
 this discussion would be put to bed.

 That course of action would seem most feasible if it were accompanied by
 an IETF document that explained the deployment model and clarified what
 sufficient size is.


 Universities see a constant stream of DMCA violation notices that need to
 be dealt with and not being able to associate a specific IPv6 address to a
 specific user is a big enough liability that the only option is to not use
 IPv6.


 It's not the *only* option. There are large networks - O(100k) IPv6 nodes
 - that do ND monitoring for accountability, and it does work for them. Many
 devices support this via syslog, even. As you can imagine, my Android
 device gets IPv6 at work, even though it doesn't support DHCPv6. Other
 universities, too. It's obviously  not your chosen or preferred mechanism,
 but it does work.




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Android (lack of) support for DHCPv6

2015-06-09 Thread Ray Soucy
It really is too bad.  They're literally the only major player not on board
but claim to champion IPv6.

There is a big difference between saying that something isn't supported and
the Android position that they will NOT support DHCPv6.  To me, that's
something that shouldn't be a decision they get to make, especially when
other operating systems have already come around with DHCPv6 support.  Very
frustrating, and quite frankly an embarrassment to Google.  I really didn't
expect that kind of dismissive response out of Lorenzo but maybe I just
haven't been paying attention.

Clients should support a variety of methods and let network operators choose
the solution that fits the environment.  The whole premise for not
supporting DHCPv6 seems to be that mobile networks don't need it, but that
totally ignores 802.11 which is equally important.

Not to single out Google; on the opposite side it's equally disappointing
that Windows 10 will not support RDNSS (at least I haven't been able to
find any information on it, has anyone been able to confirm?).

I would hope we're past the religious arguments of SLAAC vs DHCPv6 but it
seems like every time the topic comes up the entire conversation turns into
a holy war on what method is the best.  They're both valid, and both useful.





On Mon, Jun 8, 2015 at 11:14 PM, Paul B. Henson hen...@acm.org wrote:

 We're in the beginning steps of bringing up IPv6 at the fairly large
 university where I work. We plan to use DHCPv6 rather than SLAAC for a
 variety of reasons. One of our guys recently noticed that Android has no
 support for DHCPv6, and a rather odd issue thread discussing it:

 https://code.google.com/p/android/issues/detail?id=32621

 It looks like one developer simply refuses to implement it because if he
 did there might be a scenario where somebody might not be able to tether
 8-/? His attitude is that you have to use SLAAC and RDNSS, which we're
 just not going to do. At this point I guess Android devices just won't
 work with IPv6 on our network, and we'll suggest they complain to their
 vendor and/or get a different phone.

 I was just curious what this forum might think of that design decision
 and the discussion on the issue thread.

 Thanks...




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Low Cost 10G Router

2015-05-20 Thread Ray Soucy
You're right I dropped down to the v2 for pricing reasons:

- Supermicro SuperServer 5017R-MTRF
- 4x SATA
- 8x DDR3
- 400W Redundant
- Eight-Core Intel Xeon Processor E5-2640 v2 2.00GHz 20MB Cache (95W)
- 4 x SAMSUNG 2GB PC3-12800 DDR3-1600
- 2 x 500GB SATA 6.0Gb/s 7200RPM - 3.5" - Western Digital RE4 WD5003ABYZ
- Supermicro System Cabinet Front Bezel CSE-PTFB-813B with Lock and Filter
(Black)
- No Windows Operating System (Hardware Warranty Only, No Software Support)
- Three Year Warranty with Advanced Parts Replacement

FWIW I used Sourcecode as the system builder.  They've been great to work
with.

On Tue, May 19, 2015 at 4:46 PM, Joe Greco jgr...@ns.sol.net wrote:

  How cheap is cheap and what performance numbers are you looking for?
 
  About as cheap as you can get:
 
  For about $3,000 you can build a Supermicro OEM system with an 8-core
 Xeon
  E5 V3 and 4-port 10G Intel SFP+ NIC with 8G of RAM running VyOS.  The pro
  is that BGP convergence time will be good (better than a 7200 VXR), and
  number of tables likely won't be a concern since RAM is cheap.  The con
 is
  that you're not doing things in hardware, so you'll have higher latency,
  and your PPS will be lower.

 What 8 core Xeon E5 v3 would that be?  The 26xx's are hideously pricey,
 and for a router, you're probably better off with something like a
 Supermicro X10SRn fsvo n with a Xeon E5-1650v3.  Board is typically
 around $300, 1650 is around $550, so total cost I'm guessing closer to
 $1500-$2000 that route.

 The edge you get there is the higher clock on the CPU.  Only six cores
 and only 15M cache, but 3.5GHz.  The E5-2643v3 is three times the cost
 for very similar performance specs.  Costwise, E5 single socket is the
 way to go unless you *need* more.

 ... JG
 --
 Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
 We call it the 'one bite at the apple' rule. Give me one chance [and]
 then I
 won't contact you again. - Direct Marketing Ass'n position on e-mail
 spam(CNN)
 With 24 million small businesses in the US alone, that's way too many
 apples.




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Low Cost 10G Router

2015-05-20 Thread Ray Soucy
P.S. I went through HotLava Systems for the Intel-based SFP+ NICs to add to
those, http://hotlavasystems.com/ (not trying to plug; these are just hard
to find)

On Wed, May 20, 2015 at 9:08 AM, Ray Soucy r...@maine.edu wrote:

 You're right I dropped down to the v2 for pricing reasons:

 - Supermicro SuperServer 5017R-MTRF
 - 4x SATA
 - 8x DDR3
 - 400W Redundant
 - Eight-Core Intel Xeon Processor E5-2640 v2 2.00GHz 20MB Cache (95W)
 - 4 x SAMSUNG 2GB PC3-12800 DDR3-1600
 - 2 x 500GB SATA 6.0Gb/s 7200RPM - 3.5" - Western Digital RE4 WD5003ABYZ
 - Supermicro System Cabinet Front Bezel CSE-PTFB-813B with Lock and Filter
 (Black)
 - No Windows Operating System (Hardware Warranty Only, No Software Support)
 - Three Year Warranty with Advanced Parts Replacement

 FWIW I used Sourcecode as the system builder.  They've been great to work
 with.

 On Tue, May 19, 2015 at 4:46 PM, Joe Greco jgr...@ns.sol.net wrote:

  How cheap is cheap and what performance numbers are you looking for?
 
  About as cheap as you can get:
 
  For about $3,000 you can build a Supermicro OEM system with an 8-core
 Xeon
  E5 V3 and 4-port 10G Intel SFP+ NIC with 8G of RAM running VyOS.  The
 pro
  is that BGP convergence time will be good (better than a 7200 VXR), and
  number of tables likely won't be a concern since RAM is cheap.  The con
 is
  that you're not doing things in hardware, so you'll have higher latency,
  and your PPS will be lower.

 What 8 core Xeon E5 v3 would that be?  The 26xx's are hideously pricey,
 and for a router, you're probably better off with something like a
 Supermicro X10SRn fsvo n with a Xeon E5-1650v3.  Board is typically
 around $300, 1650 is around $550, so total cost I'm guessing closer to
 $1500-$2000 that route.

 The edge you get there is the higher clock on the CPU.  Only six cores
 and only 15M cache, but 3.5GHz.  The E5-2643v3 is three times the cost
 for very similar performance specs.  Costwise, E5 single socket is the
 way to go unless you *need* more.

 ... JG
 --
 Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
 We call it the 'one bite at the apple' rule. Give me one chance [and]
 then I
 won't contact you again. - Direct Marketing Ass'n position on e-mail
 spam(CNN)
 With 24 million small businesses in the US alone, that's way too many
 apples.




 --
 Ray Patrick Soucy
 Network Engineer
 University of Maine System

 T: 207-561-3526
 F: 207-561-3531

 MaineREN, Maine's Research and Education Network
 www.maineren.net




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Low Cost 10G Router

2015-05-19 Thread Ray Soucy
How cheap is cheap and what performance numbers are you looking for?

About as cheap as you can get:

For about $3,000 you can build a Supermicro OEM system with an 8-core Xeon
E5 V3 and 4-port 10G Intel SFP+ NIC with 8G of RAM running VyOS.  The pro
is that BGP convergence time will be good (better than a 7200 VXR), and
number of tables likely won't be a concern since RAM is cheap.  The con is
that you're not doing things in hardware, so you'll have higher latency,
and your PPS will be lower.

I haven't tried this configuration as a full router in production, but have
been using them in a few places as a firewall solution and they've handled
everything I've thrown their way so far.  Initially, I had these in place
as low-capital solutions that were going to be temporary so we could
start building out a new environment and collect usage data to have real
world sizing data for something like an ASA cluster, but they've worked so
well that we've held off on that purchase for now (given challenging budget
times in higher-education).

The stability of VyOS has been good, and the image-based upgrade system has
worked every time without issues for the past year or two (starting from
1.0.1 to the current 1.1.5).  That said, documentation for VyOS is poor, so
you should be ready to dig into some source code or hit the IRC channel to
get things running.  Having a foundation with general Linux knowledge is
helpful here too.
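
[To give a flavor of the CLI: bringing up an eBGP session on VyOS takes
only a few lines. A rough sketch; the interface, addresses, and AS numbers
are illustrative:]

set interfaces ethernet eth0 address 198.51.100.2/30
set protocols bgp 64512 neighbor 198.51.100.1 remote-as 65000
set protocols bgp 64512 network 203.0.113.0/24
commit
save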

If you just need a 10G link but only commit to 2-3G then this solution
might be able to work well for you.  If you need closer to line-rate 10G at
small packet sizes then you might start running into performance
limitations due to latency.  If this is the case there is the Vyatta
vRouter 5600 (VyOS is based on the GPL portions of the 5400), which claims
to have Intel DPDK support and can handle multi-10G at line rate; but last
time I checked it was really expensive ($10,000 per core or something
ridiculous like that).

In terms of commercial solutions, I think 10G and BGP are two things that
don't combine well for cheap.

An ASR1K might do the trick, but more likely than not you're looking at an
ASR9K if you want full tables; I don't have any experience with the 1K
personally so I can't speak to that.  The ASR 9K is a really great platform
and is what we use for BGP here, but it's pretty much the opposite of cheap.

As far as the firewall stuff goes, I have a draft of VyOS as a firewall
that I've been wanting to put together (still needs work):

http://soucy.org/vyos/UsingVyOSasaFirewall.pdf

P.S. Sorry the documentation for VyOS is so bad, what's there so far in the
User Guide is basically me trying to do a first pass in hopes that others
would help out and there haven't been many updates.





On Tue, May 19, 2015 at 1:22 PM, Colton Conor colton.co...@gmail.com
wrote:

 What options are available for a small, low cost router that has at least
 four 10G ports, and can handle full BGP routes? All that I know of are the
 Juniper MX80, and the Brocade CER line. What does Cisco and others have
 that compete with these two? Any other vendors besides Juniper, Brocade,
 and Cisco to look at?




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: symmetric vs. asymmetric [was: Verizon Policy Statement on Net Neutrality]

2015-04-23 Thread Ray Soucy
Sorry, I know I get long-winded.  That's why I don't post as much as I used
to. ;-)

On Thu, Apr 23, 2015 at 10:09 AM, Jay Ashworth j...@baylink.com wrote:

 There's an op-ed piece in this posting, Ray. Do you want to write it, or
 should I?

 :-)


 On April 23, 2015 10:06:42 AM EDT, Ray Soucy r...@maine.edu wrote:

 It's amazing, really.

 Netflix and YouTube now overtake BitTorrent and all other file sharing
 peer-to-peer traffic combined, even on academic networks, by order(s) of
 magnitude.  The amount of peer-to-peer traffic is not even significant in
 comparison.  It might as well be IRC from our perspective.

 Internet usage habits have shifted quite a bit in the past decade.  I
 think the takeaway is that if you provide content in a way that is fairly
 priced and convenient to access (e.g. DRM doesn't get in your way), most
 people will opt for the legal route.  Something we were trying to explain
 to the MPAA and RIAA years ago when they shoved the DMCA down our throats.

 I'm certainly in favor of symmetrical service.  I think there is a widely
 held myth that DOS attacks will take down the Internet when everyone has
 more bandwidth.  The fact is that DOS attacks are a problem regardless of
 bandwidth, and throttling people isn't a solution.  The other (somewhat
 insulting) argument that people will use greater upload speeds for illegal
 activity is pretty bogus as well.

 The limit on upload bandwidth for most people is a roadblock to a lot of
 the services that people will take for granted a decade from now; cloud
 backup, residential video surveillance over IP, peer-to-peer high
 definition video conferencing.  And likely a lot of things that we haven't
 imagined yet.

 As funny as it sounds, I think Twitch (streaming video games) has been
 the application that has made the younger generation care about their
 upload speed more than anything else.  They now have a use case where their
 limited upload is a real problem for them, and when they find out their ISP
 can't provide anything good enough they get pretty upset about it.





 On Wed, Apr 22, 2015 at 6:02 PM, Jay Ashworth j...@baylink.com wrote:

 - Original Message -
  From: Frank Bulk frnk...@iname.com

  Those are measured at the campus boundary. I don't have visibility inside
  the school's network to know how much intra-campus traffic there may be,
  but we know that peer-to-peer is a small percentage of overall Internet
  traffic flows, and streaming video remains the largest.

 BitTorrent makes special efforts to keep as much traffic local as
 possible,
 I understand; that probably isn't too helpful... except at scales like
 that
 on a resnet at a sizable campus.

 Cheers,
 -- jra
 --
 Jay R. Ashworth  Baylink
 j...@baylink.com
 Designer The Things I Think
  RFC 2100
 Ashworth  Associates   http://www.bcp38.info  2000 Land
 Rover DII
 St Petersburg FL USA  BCP38: Ask For It By Name!   +1 727
 647 1274




 --
 Ray Patrick Soucy
 Network Engineer
 University of Maine System

 T: 207-561-3526
 F: 207-561-3531

 MaineREN, Maine's Research and Education Network
 www.maineren.net


 --
 Sent from my Android phone with K-9 Mail. Please excuse my brevity.




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: symmetric vs. asymmetric [was: Verizon Policy Statement on Net Neutrality]

2015-04-23 Thread Ray Soucy
It's amazing, really.

Netflix and YouTube now overtake BitTorrent and all other file sharing
peer-to-peer traffic combined, even on academic networks, by order(s) of
magnitude.  The amount of peer-to-peer traffic is not even significant in
comparison.  It might as well be IRC from our perspective.

Internet usage habits have shifted quite a bit in the past decade.  I think
the takeaway is that if you provide content in a way that is fairly priced
and convenient to access (e.g. DRM doesn't get in your way), most people
will opt for the legal route.  Something we were trying to explain to the
MPAA and RIAA years ago when they shoved the DMCA down our throats.

I'm certainly in favor of symmetrical service.  I think there is a widely
held myth that DOS attacks will take down the Internet when everyone has
more bandwidth.  The fact is that DOS attacks are a problem regardless of
bandwidth, and throttling people isn't a solution.  The other (somewhat
insulting) argument that people will use greater upload speeds for illegal
activity is pretty bogus as well.

The limit on upload bandwidth for most people is a roadblock to a lot of
the services that people will take for granted a decade from now; cloud
backup, residential video surveillance over IP, peer-to-peer high
definition video conferencing.  And likely a lot of things that we haven't
imagined yet.

As funny as it sounds, I think Twitch (streaming video games) has been the
application that has made the younger generation care about their upload
speed more than anything else.  They now have a use case where their
limited upload is a real problem for them, and when they find out their ISP
can't provide anything good enough they get pretty upset about it.





On Wed, Apr 22, 2015 at 6:02 PM, Jay Ashworth j...@baylink.com wrote:

 - Original Message -
  From: Frank Bulk frnk...@iname.com

  Those are measured at the campus boundary. I don't have visibility inside
  the school's network to know how much intra-campus traffic there may be,
  but we know that peer-to-peer is a small percentage of overall Internet
  traffic flows, and streaming video remains the largest.

 BitTorrent makes special efforts to keep as much traffic local as possible,
 I understand; that probably isn't too helpful... except at scales like that
 on a resnet at a sizable campus.

 Cheers,
 -- jra
 --
 Jay R. Ashworth  Baylink
 j...@baylink.com
 Designer The Things I Think   RFC
 2100
 Ashworth  Associates   http://www.bcp38.info  2000 Land
 Rover DII
 St Petersburg FL USA  BCP38: Ask For It By Name!   +1 727 647
 1274




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: How are you doing DHCPv6 ?

2015-04-01 Thread Ray Soucy
[ 3 year old thread ]

So back in 2012 there was some discussion on DHCPv6 and the challenge of
using a DUID in a dual-stack environment where MAC-based assignments are
already happening through an IPAM.

Update on this since then:




*RFC 6939 - Client Link-Layer Address Option in DHCPv6*

Pretty much exactly what I described in 2012 in terms of leveraging RFC
6422 to do this. Thank you, Halwasia, Bhandari, and Dec @ Cisco :-)

I'd like to think my email back in 2012 influenced this, but I'm sure it
didn't. ;-)



*Support in ISC DHCP 4.3.1+*

https://deepthought.isc.org/article/AA-01112/0/Newly-Pre-defined-Options-in-DHCP-4.3.html


Example to add logging for this in dhcpd.conf:

log(info, concat("Lease for ", binary-to-ascii(16, 16, ":",
    substring(suffix(option dhcp6.ia-na, 24), 0, 16)), " client-linklayer-addr ",
    v6relay(1, (binary-to-ascii(16, 8, ":", option dhcp6.client-linklayer-addr)))));


To create static bindings based on it:

host hostname-1 {
 host-identifier v6relopt 1 dhcp6.client-linklayer-addr 0:1:32:8b:36:ba:ed:ab;
 fixed-address6 2001:db8:100::123;
};


[ These examples taken from Enno Rey, link follows ]

http://www.insinuator.net/2015/02/is-rfc-6939-support-finally-here-checking-the-implementation-of-the-client-link-layer-address-option-in-dhcpv6/
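
[One way to verify that a given relay actually inserts option 79 is to
watch the relayed DHCPv6 traffic arriving at the server; a rough sketch,
interface name illustrative. Whether option 79 gets pretty-printed depends
on the tcpdump version, but worst case it's visible in the hex dump:]

tcpdump -n -vvv -x -i eth0 'udp port 547'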




*Cisco Support?*

Apparently Cisco has started to support this in IOS-XE by default. I
haven't had a chance to verify this yet, but I did check IOS XR and IOS and
still don't see support for it.

Does anyone have details on what platforms and releases from Cisco support
RFC 6939 Option 79 so far? The only thing I can find online is reference
to the Cisco uBR7200 release 12.2(33)SCI, which doesn't really help me.





On Mon, Jan 23, 2012 at 5:23 PM, Ray Soucy r...@maine.edu wrote:

 The requirement of the DUID is a big hurdle to DHCPv6 adoption, I agree.

 Currently, a DUID can be generated in 1 of 3 ways, 2 of which include
 _any_ MAC address of the system at the time of generation.  After
 that, the DUID is stored in software.

 The idea is that the DUID identifies the system and the IAID
 identifies the interface, and that over time, the system will keep its
 DUID even if the network adapter changes.

 This is obviously different from how we use DHCP for legacy IP.

 There are a few problems as a result:

 1. Systems that are built using disk images can all have the same DUID
 unless the admin takes care to remove any generated DUID on the image
 (already see this on Windows 7 and even Linux).

 2. Networks where the MAC addresses for systems are already known
 can't simply build a DHCPv6 configuration based on those MACs.

 If someone were to modify DHCPv6 to address these concerns, I think
 the easiest way to do so would be to extend DHCPv6 relay messages to
 include the MAC address of the system making the request (DHCPv6
 servers on local sub-networks would be able to determine the MAC from
 the packet).  This would allow transitional DHCPv6 configurations to
 be built on MAC addresses rather than DUID without client modification
 (which is key).

 Perhaps this is already possible through the use of RFC 6422 (which
 shouldn't break anything).

 I think more important, though, is a good DHCPv6 server implementation
 with verbose logging capabilities, and the ability to specify a DUID,
 DUID+IAID, or MAC for static assignments.

 I know there are people from ISC on-list.  It would be great to hear
 someone who works on DHCPd chime in.

 How about we start with modifying ISC DHCPd for IPv6 to have proper
 logging and support for configuring IAID, then work on the MAC
 awareness piece.  ISC DHCPd makes use of RAW sockets, so it should
 always have the MAC for a non-relayed request.  Then we just need to
 work with router vendors on adding MACs as a relay option.

 --
 Ray Soucy

 Epic Communications Specialist

 Phone: +1 (207) 561-3526

 Networkmaine, a Unit of the University of Maine System
 http://www.networkmaine.net/




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Broken SSL cert caused by router?

2015-03-27 Thread Ray Soucy
It might be filtering the CRL or OCSP verification for the SSL
certificate.  For GoDaddy I think this would be:

http://crl.godaddy.com/
http://ocsp.godaddy.com/
http://certificates.godaddy.com/

We ran into this when OS X changed how it handles SSL a few years
back: our captive portal was presenting a splash page in place of
Thawte OCSP and crashing the SSL keychain process.  The work-around
was either to respond with a TCP RST for these requests or to allow
them through.
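
[On a Linux-based portal box the RST approach can be a one-liner; a rough
sketch -- the address is illustrative, and since the CRL/OCSP hosts sit
behind pools that change over time, the list needs to be kept current:]

# reset, rather than intercept, HTTP to the OCSP/CRL host
iptables -I FORWARD -p tcp -d 192.0.2.10 --dport 80 -j REJECT --reject-with tcp-reset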

On Thu, Mar 26, 2015 at 11:57 PM, Lewis,Mitchell T.
ml-na...@techcompute.net wrote:
 Meraki Access Points are interesting devices.

 I have found they cause issues with Linux firewalls if the merakis are not 
 configured correctly.

 Meraki Access Points do content inspections which I have found can
 produce symptoms similar to yours, although I have not experienced what you
 are describing. Since the MX64W is both an Access Point and security gateway,
 it has some additional content inspection/intelligence for its security
 appliance role on top of the functions it performs as an access point, the
 same functions which are found in Meraki standalone access points as well.

 I am not sure what the specifics are as I do not use Meraki security 
 appliances but it is worth checking. I have found with Meraki that items in 
 the control panel/dashboard are not always labeled the best so I have found 
 it is usually worth putting in a ticket with them and/or a call to them to 
 see what they think (1-888-490-0918).











 Mitchell T. Lewis
 mle...@techcompute.net
 : www.linkedin.com/in/mlewiscc
 Mobile: (203)816-0371
 PGP Fingerprint: 79F2A12BAC77827581C734212AFA805732A1394E Public PGP Key




 A computer will do what you tell it to do, but that may be much different 
 from what you had in mind. ~Joseph Weizenbaum

 - Original Message -

 From: Mike mike-na...@tiedyenetworks.com
 To: nanog@nanog.org
 Sent: Thursday, March 26, 2015 6:38:55 PM
 Subject: Broken SSL cert caused by router?

 Hi,

 I have a very odd problem.

 We've recently gotten a 'real' ssl certificate from godaddy to
 cover our domain (*.domain.com) and have installed it in several places
 where needed for email (imap/starttls and etc) and web. This works
 great, seems ok according to various online TLS certificate checkers,
 and I get the green lock when testing using my own browsers and such.

 I have a customer however that uses our web mail system now secured
 with ssl. I myself and many others use it and get the green lock. But,
 whenever any station at the customer tries using it, they get a broken
 lock and 'your connection is not private'. The actual error displayed
 below is 'cert_authority_invalid' and it's Go Daddy Secure Certificate
 Authority - G2. And it gets worse - whenever I go to the location and
 use my own laptop, the very one that 'works' when at my office, I ALSO
 get the error. AND EVEN WORSE - when I connect to my cell phone provided
 hotspot, the error goes away!

 As weird as this all sounds, I got it nailed down to one device -
 they have a Cisco/Meraki MX64W as their internet gateway - and when I
 remove that device from the chain and go 'straight' out to the internet,
 suddenly, the certificate problem goes away entirely.

 How is this possible? Can anyone comment on these devices and tell
 me what might be going on here?

 Mike-




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Getting hit hard by CHINANET

2015-03-23 Thread Ray Soucy
I did a test on my personal server of filtering every IP network assigned
to China for a few months and over 90% of SSH attempts and other noise just
went away.  It was pretty remarkable.
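
[For anyone curious, country-level lists like that are easy to generate
from the RIR delegation files. A rough sketch; it assumes the standard
delegated-* format, where IPv4 allocation sizes are powers of two:]

curl -s http://ftp.apnic.net/stats/apnic/delegated-apnic-latest \
  | awk -F'|' '$2 == "CN" && $3 == "ipv4" { print $4 "/" (32 - log($5)/log(2)) }'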

Working for a public university I can't block China outright, but there are
times it has been tempting. :-)

The majority of DDOS attacks I see are sourced from addresses in the US,
though (likely spoofed).  Just saw a pretty large one last week which was
SSDP 1900 to UDP port 80, 50K+ unique host addresses involved.


On Wed, Mar 18, 2015 at 8:32 AM, Eric Rogers ecrog...@precisionds.com
wrote:

 We are using Mikrotik for a BGP blackhole server that collects BOGONs
 from CYMRU and we also have our servers (web, email, etc.) use fail2ban
 to add a bad IP to the Mikrotik.  We then use BGP on all our core
 routers to null route those IPs.

 The ban-time is for a few days, and totally dynamic, so it isn't a
 permanent ban.  Seems to have cut down on the attempts considerably.

 Eric Rogers
 PDSConnect
 www.pdsconnect.me
 (317) 831-3000 x200


 -Original Message-
 From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of Roland Dobbins
 Sent: Wednesday, March 18, 2015 6:04 AM
 To: nanog@nanog.org
 Subject: Re: Getting hit hard by CHINANET


 On 18 Mar 2015, at 17:00, Roland Dobbins wrote:

  This is not an optimal approach, and most providers are unlikely to
  engage in such behavior due to its potential negative impact (I'm
  assuming you mean via S/RTBH and/or flowspec).

 Here's one counterexample:

 https://ripe68.ripe.net/presentations/176-RIPE68_JSnijders_DDoS_Damage_Control.pdf

 ---
 Roland Dobbins rdobb...@arbor.net




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: FTTx Active-Ethernet Hardware

2015-02-10 Thread Ray Soucy
Thank you, this is useful information.  From your perspective as a
user, do things seem fairly stable?

On Tue, Feb 10, 2015 at 9:52 AM, Ammar Zuberi am...@fastreturn.net wrote:
 Hi,

 Here in Dubai they have a wide FTTH deployment (almost 80% of homes and 
 offices) with almost no copper in the service provider networks.

 They use these Planet devices in every deployment I've taken a look at so far.

 Ammar

 On 10 Feb 2015, at 6:42 pm, Ray Soucy r...@maine.edu wrote:

 Price and functionality-wise Planet MGSW-28240F and GSD-1020S look
 pretty close to what I'm looking for.  Anyone have real experience
 with using them on a large scale?  Performance?

 On Tue, Feb 10, 2015 at 8:34 AM, Mike Hammett na...@ics-il.net wrote:
 Check out Mikrotik, Planet and TP-Link.




 -
 Mike Hammett
 Intelligent Computing Solutions
 http://www.ics-il.com



 - Original Message -

 From: Ray Soucy r...@maine.edu
 To: NANOG nanog@nanog.org
 Sent: Tuesday, February 10, 2015 7:31:22 AM
 Subject: FTTx Active-Ethernet Hardware

 One thing I'm personally interested in is the growth of municipal FTTx
 that's starting to happen around the US and possibly applying that
 model to highly rural areas (e.g. 10 mile long town with no side
 streets, existing utility poles, 250 or so homes) and doing a
 realistic cost analysis of what that would take.

 What options are out there for Active-Ethernet hardware? Ideally
 something that could handle G.8032 and 802.1ad in hardware for the
 distribution side (24 or 48-port SFP metro switch) and something
 inexpensive for the access side but still managed (e.g. a 4-port
 switch with an SFP uplink supporting Q-in-Q).

 I'm really looking for something cheap to keep costs down for a
 proof-of-concept. The stuff from Cisco and even Ciena is a bit more
 expensive than my target.




 --
 Ray Patrick Soucy
 Network Engineer
 University of Maine System

 T: 207-561-3526
 F: 207-561-3531

 MaineREN, Maine's Research and Education Network
 www.maineren.net



 --
 Ray Patrick Soucy
 Network Engineer
 University of Maine System

 T: 207-561-3526
 F: 207-561-3531

 MaineREN, Maine's Research and Education Network
 www.maineren.net



-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


FTTx Active-Ethernet Hardware

2015-02-10 Thread Ray Soucy
One thing I'm personally interested in is the growth of municipal FTTx
that's starting to happen around the US and possibly applying that
model to highly rural areas (e.g. 10 mile long town with no side
streets, existing utility poles, 250 or so homes) and doing a
realistic cost analysis of what that would take.

What options are out there for Active-Ethernet hardware?  Ideally
something that could handle G.8032 and 802.1ad in hardware for the
distribution side (24 or 48-port SFP metro switch) and something
inexpensive for the access side but still managed (e.g. a 4-port
switch with an SFP uplink supporting Q-in-Q).
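
[For a proof-of-concept, even a Linux box can speak 802.1ad, which is handy
for testing the access-side behavior before committing to hardware; a
sketch with illustrative interface names and VLAN IDs:]

# outer S-tag (802.1ad) 100, inner C-tag (802.1Q) 200
ip link add link eth0 name eth0.100 type vlan protocol 802.1ad id 100
ip link add link eth0.100 name eth0.100.200 type vlan protocol 802.1Q id 200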

I'm really looking for something cheap to keep costs down for a
proof-of-concept.  The stuff from Cisco and even Ciena is a bit more
expensive than my target.




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: FTTx Active-Ethernet Hardware

2015-02-10 Thread Ray Soucy
Price and functionality-wise Planet MGSW-28240F and GSD-1020S look
pretty close to what I'm looking for.  Anyone have real experience
with using them on a large scale?  Performance?

On Tue, Feb 10, 2015 at 8:34 AM, Mike Hammett na...@ics-il.net wrote:
 Check out Mikrotik, Planet and TP-Link.




 -
 Mike Hammett
 Intelligent Computing Solutions
 http://www.ics-il.com



 - Original Message -

 From: Ray Soucy r...@maine.edu
 To: NANOG nanog@nanog.org
 Sent: Tuesday, February 10, 2015 7:31:22 AM
 Subject: FTTx Active-Ethernet Hardware

 One thing I'm personally interested in is the growth of municipal FTTx
 that's starting to happen around the US and possibly applying that
 model to highly rural areas (e.g. 10 mile long town with no side
 streets, existing utility polls, 250 or so homes) and doing a
 realistic cost analysis of what that would take.

 What options are out there for Active-Ethernet hardware? Ideally
 something that could handle G.8032 and 802.1ad in hardware for the
 distribution side (24 or 48-port SFP metro switch) and something
 inexpensive for the access side but still managed (e.g. a 4-port
 switch with an SFP uplink supporting Q-in-Q).

 I'm really looking for something cheap to keep costs down for a
 proof-of-concept. The stuff from Cisco and even Ciena is a bit more
 expensive than my target.




 --
 Ray Patrick Soucy
 Network Engineer
 University of Maine System

 T: 207-561-3526
 F: 207-561-3531

 MaineREN, Maine's Research and Education Network
 www.maineren.net




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Checkpoint IPS

2015-02-06 Thread Ray Soucy
An IPS doesn't have to be in line.

It can be something watching a tap and scripted to use something else
to block traffic (e.g. hardware filtering options on a router that can
handle it).

An IDS tied into an internal RTBH setup to leverage uRPF filtering in
hardware can be pretty effective at detecting and blocking the typical
UDP attacks out there before they reach systems that don't handle that
as gracefully (e.g. firewalls or host systems).  Even if you keep it
from being automated and just have an IDS that a human responds to,
it's still pretty valuable.
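
A minimal sketch of the uRPF piece on the access routers, assuming
Cisco IOS (the addresses and interface names are made up):

  ! every edge router carries a discard route for the trigger next-hop
  ip route 192.0.2.1 255.255.255.255 Null0
  !
  interface GigabitEthernet0/1
   description customer-facing
   ip verify unicast source reachable-via rx

Any /32 the IDS later injects with a next-hop of 192.0.2.1 resolves to
Null0, so strict uRPF drops traffic sourced from that host in hardware
at the first hop.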


On Thu, Feb 5, 2015 at 6:40 PM, Patrick Tracanelli
eks...@freebsdbrasil.com.br wrote:

 On 05/02/2015, at 12:31, Terry Baranski terry.baranski.l...@gmail.com
 wrote:

 On Thu, Feb 5, 2015 at 8:34 AM, Roland Dobbins rdobb...@arbor.net wrote:

 I've never heard a plausible anecdote, much less seen meaningful

 statistics,

 of these devices actually 'preventing' anything.


 People tend to hear what they want to hear. Surely your claim can't be
 that
 an IPS has never, in the history of Earth, prevented an attack or exploit.
 So it's unclear to me what you're actually trying to say here.

 And the fact that well-known evasion techniques still work against these
 devices today, coupled with the undeniable proliferation of compromised
 hosts residing within networks supposedly 'protected' by these devices,
 militates against your proposition.


 Your tendency of making blanket statements is somewhat baffling given the
 multitude of intricacies, details, and varying circumstances involved in a
 complex topic like this. To me, it's indicative of an overly-simplified
 and/or biased way of looking at things.

 In any case, go ahead and stick with your router ACLs and (stateful!)
 proxies. Different strokes.

 -Terry


 There's room for a good engineered strategy for protection which won't turn
 into a point of failure in the overall networking topology.

 For decades, since first rainbow series books were written and military
 strategy started to be added to information security, it's always been
 about defense in depth and layered defense. Today those buzzwords became an
 incredibly bullshit bingo on sales force strategy on selling magical boxes
 and people tend to forget the basics. Layers and the depth is not a theory
 just to be added to drawings and keynote presentations.

 Considering a simplistic topology for 3-tier (4 if you count T0) depth
 protection strategy:

 (Internet)--[Tier-0]--(Core Router)--[Tier1]--(core
 switch)--[Tier2]--(DMZ)--[Tier3]--(Golden Secret)

 One security layer (tier, whatever) is there to try to fill the gap from the
 previous one.

 How deep you have to dig depends on who you are: the end organization
 trying to protect the golden secret (and how complex its topology is), or
 the carrier or telecom worried only about BGP, PPS, BPS and availability
 rather than the actual value of the attack's real target (if not
 availability).

 In summary, in my experience what will (not) work is:

 1) Tier 0 & Tier 1
 On border, core, (Tier0) or on Tier-1 protection layers (typical
 firewall/dmz topological position)

 - Memory and CPU exhaustion will shut down your operations (increase latency
 and decrease availability) easily, if you have a Protecting Inbound Proxy
 (Web Application Firewall, for the sales jargon), Stateful Firewall or IPS.

 The thing here is, you are insane or naive if you believe a finite state
 machine with finite resources can protect you from a virtually infinite
 (unlimited) source of attacks. No matter how much RAM you have, how much CPU
 cycles you have, they are finite, and since those amazing stateful w/ deep
 inspection, scrub, normalization and reassembly features on a firewall will
 demand at least 4K RAM per session and a couple CPU cycles per test, you
 have a whole line rate (1.4Mpps / 14Mpps for 1GbE 10GbE ports) attack
 potential, and come on, if a single or a 3-way packets sequence will cost
 you a state, it's an easy math count to find out you are in a bad position
 trying to secure those Tier0 and Tier1 topological locations. It's just
 easier and cheaper for the one attacking, than for your amazing firewall or
 IPS.

 So what to do here? Try to get rid of as many automated/scripted/simple
 attacks as you can. You can do ingress filtering, a lot of BCPs, protection
 against SNMP/NTP/DNS amplification, verify reverse path (verrevpath,
 antispoof, versrcreach), and many good / effective (but limited) protections
 with ACLs, data plane protection mechanisms, BGP blackholing, null routing
 (sending stuff to disc0 on BSD, null0 on Cisco, etc.) and stateless
 firewalling.

 Also, an IDS sensor might fit here, because if CPU/Memory starvation happens
 on an IDS sensor, the worst that will happen is some packets getting
 routed without proper inspection. But still, they will get routed.

 Always stateless, always simple tests. No Layer7 inspection of any 

Re: Dynamic routing on firewalls.

2015-02-05 Thread Ray Soucy
It all depends on how much of the firewall functionality is implemented in CPU.

The biggest problem is that firewalls that implement functionality in
software usually saturate CPU when stressed (e.g. DOS) and routing
protocols start dropping.

I'm a strong believer in having a router that can do basic filtering
in hardware (specifically uRPF) as the first line of defense in a DOS
attack and then using a firewall behind that to reach your security
policy goals.  We have a pretty large network so we've expanded the
concept of RTBH filtering internally and use a small ISR (currently
1841) to inject routes into our network to disable hosts using uRPF at
the first router they hit.  The entire thing is scripted and works
very well at containing problematic hosts centrally.
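
The trigger side is tiny, for what it's worth; the script only has to
add or remove a static route.  A hedged sketch of the injection router,
assuming Cisco IOS and iBGP to the rest of the network (the host
address, tag, and ASN are made up):

  ! blackhole one problem host
  ip route 203.0.113.45 255.255.255.255 Null0 tag 666
  !
  route-map RTBH-TRIGGER permit 10
   match tag 666
   set ip next-hop 192.0.2.1
   set community no-export
  !
  router bgp 64512
   redistribute static route-map RTBH-TRIGGER

Every router that carries a discard route for 192.0.2.1 and runs strict
uRPF will then drop the host's traffic at its first hop.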

The other thing to watch out for IMHO is the NGFW hype.  I haven't
seen a NGFW that can actually do everything it claims to at the same
time and meet advertised performance levels.  You're much better off
splitting up the workload and having a series of components
architected to work with each other.

On Thu, Feb 5, 2015 at 9:42 AM, Eugeniu Patrascu eu...@imacandi.net wrote:
 On Thu, Feb 5, 2015 at 4:10 PM, David Jansen da...@nines.nl wrote:

 Hi,

 We have used dynamic routing on firewalls in the old days. We did
 experience several severe outages due to this setup (OSPF and Cisco). As you
 will understand i'm not eager to go back to this solution but I am curious
 about your point of views.

 Is it advisable to do so these days?


 Any specific firewall in mind? As this depends from vendor to vendor.

 I've had some issues with OSPF and CheckPoint firewalls when the firewalls
 would be overloaded and started dropping packets at the interface level
 causing adjacencies to go down, but I solved this by using BGP instead and
 the routing issues went away.

 On Juniper things tend to work OK.

 Other than this, make sure you don't run into asymmetric routing as
 connections might get dropped because the firewall does not know about them
 or packets arrive out of order and the firewall cannot reassemble all of
 them.



-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Recommended wireless AP for 400 users office

2015-02-04 Thread Ray Soucy
Honestly, in a lot of cases a device doesn't even need to support
packet capture as a feature for someone to add it once the device is
compromised.  This is just FUD IMHO.

On Wed, Feb 4, 2015 at 7:24 AM, Paul Nash p...@nashnetworks.ca wrote:
 I love the built-in remote packet captures,

 You, the NSA, and lots and lots of hackers, ALL love the remote packet 
 capture.  If Meraki support can turn it on, so can someone who penetrates 
 their systems (by getting a job there or by hacking), and then they get to 
 see everything happening INSIDE your network.  Not just your WAN traffic, 
 which would be bad enough.

 paul



-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Cisco Nexus

2015-02-03 Thread Ray Soucy
I have a small setup, Nexus 2 x 5596UP + 12 x 2248TP FEX, 2 x B22DELL,
2 x B22HP, 1 x C2248PQ-10GE.

Been using this setup since 2012, so it's getting a bit long in the
tooth.  It's in an Active-Active setup because there wasn't much
guidance at the time on which way to go.  There are some restrictions
with an AA setup you probably want to avoid.  We currently don't do
any FCoE because we're mostly a NetApp and NFS environment.

The performance and stability have been great.

It works well for a traditional environment with a lot of wired ports
to stand-alone servers.  If you do a lot with virtualization it's not
a great solution.  You really want to avoid connecting VM host servers
to FEX ports because of all the restrictions that come with it.  One
restriction that's a real PITA for me right now is that a FEX port
can't be a promiscuous trunk port if you're using PVLAN.

Using config-sync has been a lot of trouble.  There are a lot of
actions that will verify OK but then fail.  The result is that things
are partially configured and the whole system gets out of sync, not
letting you make any other changes; the fix is having to manually go
into each switch to try and get the configuration to match (which
requires comparing the running-configuration to the running
switch-profile configuration).
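
If it helps anyone stuck in the same spot, the comparison that usually
exposes the half-applied commit is (NX-OS; from memory, so verify the
exact forms on your release):

  show running-config
  show running-config switch-profile
  show switch-profile status

Diffing the first two on each switch shows what verified OK but never
actually applied.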


On Mon, Feb 2, 2015 at 1:17 PM, Herman, Anthony
anthony.her...@mattersight.com wrote:
 Nanog,

 I would like to poll the collective for experiences both positive and 
 negative with the Nexus line. More specifically I am interested in hearing 
 about FEX with N2K at the ToR and if this has indeed made any impact on Opex 
 as well as non-obvious shortcomings to using the fabric extenders. Also if 
 anyone is using any of the Nexus line for I/O convergence (FCoE) I would be 
 interested in hearing your experience with this as well.

 Thank you in advance,

 -A



-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Recommended wireless AP for 400 users office

2015-01-29 Thread Ray Soucy
Just curious.  What kind of problems have you seen with the Ubiquiti solution?

I've had a few units in for testing a potential managed wireless for
rural libraries and so far they've been pretty rock solid for the
price.  My biggest critique is that they don't support many features
and are fairly static, so you really need to map out your deployment
and handle power level and channel selection manually.  That said the
test deployments I have going are very, very small.

On Thu, Jan 29, 2015 at 1:19 AM, Tyler Mills tylermi...@gmail.com wrote:
 Have had a lot of experience with Ruckus(and Unifi unfortunately).  The
 Ruckus platform is one of the best. If you will be responsible for
 supporting the deployment, it will save you a lot of frustration when
 compared with UBNT.

 On Thu Jan 29 2015 at 12:18:54 AM Mike Lyon mike.l...@gmail.com wrote:

 Check out Xirrus
 On Jan 28, 2015 9:08 PM, Manuel Marín m...@transtelco.net wrote:

  Dear nanog community
 
  I was wondering if you can recommend or share your experience with APs
  that you can use in locations that have 300-500 users. A friend
  recommended Ruckus Wireless; it would be great if you can share your
  experience with Ruckus or with a similar vendor.  My experience with
  Ubiquiti for this type of requirement was not that good.
 
  Thank you and have a great day
 




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Recommended wireless AP for 400 users office

2015-01-29 Thread Ray Soucy
Yeah, most people ignore ZH.  UBNT marketing hyped it up quite a bit,
and for a residential deployment it can work OK, but if you have any
kind of background in wireless you'll understand that it goes out the
window for a non-trivial deployment due to the requirement of all APs
sharing a channel.

It's too bad they don't support 802.11r (fast roaming) and 802.11k
(radio resource management).


On Thu, Jan 29, 2015 at 8:22 AM, Mike Hammett na...@ics-il.net wrote:
 What problems have you had with UBNT?

 Its zero hand-off doesn't work on unsecured networks, but that's about the 
 extent of the issues I've heard of other than stadium density environments.




 -
 Mike Hammett
 Intelligent Computing Solutions
 http://www.ics-il.com



 - Original Message -

 From: Manuel Marín m...@transtelco.net
 To: nanog@nanog.org
 Sent: Wednesday, January 28, 2015 11:06:39 PM
 Subject: Recommended wireless AP for 400 users office

 Dear nanog community

 I was wondering if you can recommend or share your experience with APs that
you can use in locations that have 300-500 users. A friend recommended
Ruckus Wireless; it would be great if you can share your experience with
Ruckus or with a similar vendor. My experience with Ubiquiti for this type
 of requirement was not that good.

 Thank you and have a great day




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: scaling linux-based router hardware recommendations

2015-01-29 Thread Ray Soucy
"For us, open source isn't just a business model; it's smart
engineering practice." -- Bruce Schneier

I hope I'm not the only one, but I think the NSA (and other state
actors) intentionally introducing systemic weaknesses or backdoors
into critical infrastructure is pretty ... reckless.  I really can't
figure out if it's arrogance or just plain naivety on their part, but
they seem pretty confident that the information won't ever fall into
the wrong hands and keep pushing forward.

So for me, this is an area I've very interested in seeing some progress.

I think most people don't realize that if you only care about 1G
performance levels, commodity hardware can be more than fine.  Linux
netfilter makes a really great firewall, and it's the most
peer-reviewed in the world.
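
To make that concrete, a minimal stateful forwarding policy is only a
few rules.  A hedged sketch (interface names are made up):

  # default-deny for transit traffic
  iptables -P FORWARD DROP
  # drop packets conntrack can't classify
  iptables -A FORWARD -m conntrack --ctstate INVALID -j DROP
  # allow return traffic for established flows
  iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
  # allow new flows from the LAN side out
  iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

Everything beyond that is policy detail running in the same
peer-reviewed kernel path.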




On Wed, Jan 28, 2015 at 6:18 PM, Adrian Chadd adr...@creative.net.au wrote:
 [snip]

 To inject science into the discussion:

 http://bsdrp.net/documentation/examples/forwarding_performance_lab_of_an_ibm_system_x3550_m3_with_10-gigabit_intel_x540-at2

 And he maintains a test setup to check for performance regressions:

 http://bsdrp.net/documentation/examples/freebsd_performance_regression_lab

 Now, this is using the in-kernel stack, not netmap/pfring/etc that
 uses all the batching-y, stack-shallow-y implementations that the
 kernel currently doesn't have. But, there are people out there doing
 science on it and trying very hard to kick things along. The nice
 thing about what has come out of the DPDK related stuff is, well, the
 bar is set very high now. Now it's up to the open source groups to
 stop messing around and do something about it.


 If you're interested in more of this stuff, go poke Jim at pfsense/netgate.


 -adrian
 (This and RSS work is plainly in my stuff I do for fun category, btw.)



-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Muni Fiber and Politics

2014-07-22 Thread Ray Soucy
IMHO the way to go here is to have the physical fiber plant separate.

FTTH is a big investment.  Easy for a municipality to absorb, but not
attractive for a commercial ISP to do.  A business will want to
realize an ROI much faster than the life of the fiber plant, and will
need assurance of having a monopoly and dense deployment to achieve
that.  None of those conditions apply in the majority of the US, so
we're stuck with really old infrastructure delivering really slow
service.

Municipal FTTH needs to be a regulated public utility (ideally at a
state or regional level).  It should have an open access policy at
published rates and be forbidden from offering lit service on the
fiber (conflict of interest).  This covers the fiber box in the house
to the communications hut to patch in equipment.

Think of it like the power company and the separation between
generation and transmission.

That's Step #1.

Step #2 is finding an ISP to make use of the fiber.

Having a single municipal ISP is not really what I think is needed.

Having the infrastructure in place to eliminate the huge investment
needed for an ISP to service a community is.  Hopefully, enough people
jump at the idea and offer service over the fiber, but if they don't,
you need to get creative.

The important thing is that the fiber stays open.  I'm not a fan of
having a town or city be an ISP because I know how the budgets work.
I trust a town to make sure my fiber is passing light; I don't trust
it to make sure I have the latest and greatest equipment to light the
fiber, or bandwidth from the best sources.  I certainly don't trust
the town to allow competition if it's providing its own service.

This is where the line really needs to be drawn IMHO.  Municipal FTTH
is about layer 1, not layer 2 or layer 3.

That said, there are communities where just having the fiber plant
won't be enough.  In these situations, the municipality can do things
like create an incentive program to guarantee a minimum income for an
ISP to reach the community, which gets trimmed back as the ISP gains
subscribers.

I don't think a public option is bad on the ISP side of things; as
long as the fiber is open and people can choose which ISP they want.
The public option might be necessary for very rural communities that
can't get service elsewhere or to simply serve as a price-check, but
most of us here know that a small community likely won't be able to
find the staff to run its own ISP, either.

TL;DR Municipal FTTH should be about fixing the infrastructure issues
and promoting innovation and competition, not creating a
government-run ISP to oust anyone from the market.

Think about it: If you're an ISP, and you can lease fiber and
equipment space (proper hut, secured, with backup power and cooling
etc) for a subsidized rate; for cheaper than anything you could afford
to build out; how much arm twisting would it take for you to invest in
installing a switch or two to deliver service?  If you're a smaller
ISP, you were likely already doing this in working with telephone
companies in the past (until they started trying to oust you).


On Tue, Jul 22, 2014 at 11:27 AM, Aaron aa...@wholesaleinternet.net wrote:
 So let me throw out a purely hypothetical scenario to the collective:

 What do you think the consequences to a municipality would be if they laid
 fiber to every house in the city and gave away internet access for free?
 Not the WiFi builds we have today but FTTH at gigabit speeds for free?

 Do you think the LECs would come unglued?

 Aaron



 On 7/21/2014 8:33 PM, Miles Fidelman wrote:

 I've seen various communities attempt to hand out free wifi - usually in
 limited areas, but in some cases community-wide (Brookline, MA comes to
 mind).  The limited ones (e.g., in tourist hotspots) have been city funded,
 or donated.  The community-wide ones, that I've seen, have been
 public-private partnerships - the City provides space on light poles and
 such - the private firm provides limited access, in hopes of selling
 expanded service.  I haven't seen it work successfully - 4G cell service
 beats the heck out of WiFi as a metropolitan area service.

 When it comes to municipal fiber and triple-play projects, I've generally
 seen them capitalized with revenue bonds -- hence, a need for revenue to pay
 off the financing.  Lower cost than commercial services because municipal
 bonds are low-interest, long-term, and they operate on a cost-recovery
 basis.

 Miles Fidelman

 Aaron wrote:

 Do you have an example of a municipality that gives free internet access
 to its residents?


 On 7/21/2014 2:26 PM, Matthew Kaufman wrote:

 I think the difference is when the municipality starts throwing in free
 or highly subsidized layer 3 connectivity free with every layer 1
 connection

 Matthew Kaufman

 (Sent from my iPhone)

 On Jul 21, 2014, at 12:08 PM, Blake Dunlap iki...@gmail.com wrote:

 My power is pretty much always on, my water is pretty much always on
 and safe, my sewer 

Re: Muni Fiber and Politics

2014-07-22 Thread Ray Soucy
I was mentally where you were a few years ago with the idea of having
switching and L2 covered by a public utility but after seeing some
instances of it I'm more convinced that different ISPs should use
their own equipment.

The equipment is what makes the speed and quality of service.  If you
have shared infrastructure for L2 then what exactly differentiates a
service?  More to the point: if that equipment gets oversubscribed or
neglected, who is responsible for it?  I don't think the
municipality or public utility is a good fit.

Just give us the fiber and we'll decide what to light it up with.

BTW I don't know why I would have to note this, but of course I'm
talking about active FTTH.  PON is basically throwing money away if
you look at the long term picture.

Sure, having one place switch everything and just assign people to the
right VLAN keeps trucks from rolling for individual ISPs, but I don't
think giving up control over the quality of the service is in the
interest of an ISP.  What you're asking for is basically to have a
competitive environment where everyone delivers the same service.
If your service is slow and it's because of L2 infrastructure, no
change in provider will fix that the way you're looking to do it.



On Tue, Jul 22, 2014 at 2:26 PM, Scott Helms khe...@zcorum.com wrote:
 One of the main problems with trying to draw the line at layer 1 is that it's
 extremely inefficient in terms of the gear.  Now, this is in large part a
 function of how gear is built and if a significant number of locales went in
 this direction we _might_ see changes, but today each ISP would have to
 purchase their own OLTs and that leads to many more shelves than the total
 number of line cards would otherwise dictate.  There are certainly many
 other issues, some of which have been discussed on this list before, but
 I've done open access networks for several cities and _today_ the cleanest
 situations by far (that I've seen) had the city handling layer 1 and 2 with
 the layer 2 hand off being Ethernet regardless of the access technology
 used.


 Scott Helms
 Vice President of Technology
 ZCorum
 (678) 507-5000
 
 http://twitter.com/kscotthelms
 


 On Tue, Jul 22, 2014 at 2:13 PM, Ray Soucy r...@maine.edu wrote:

 IMHO the way to go here is to have the physical fiber plant separate.

 FTTH is a big investment.  Easy for a municipality to absorb, but not
 attractive for a commercial ISP to do.  A business will want to
 realize an ROI much faster than the life of the fiber plant, and will
 need assurance of having a monopoly and dense deployment to achieve
 that.  None of those conditions apply in the majority of the US, so
 we're stuck with really old infrastructure delivering really slow
 service.

 Municipal FTTH needs to be a regulated public utility (ideally at a
 state or regional level).  It should have an open access policy at
 published rates and be forbidden from offering lit service on the
 fiber (conflict of interest).  This covers the fiber box in the house
 to the communications hut to patch in equipment.

 Think of it like the power company and the separation between
 generation and transmission.

 That's Step #1.

 Step #2 is finding an ISP to make use of the fiber.

 Having a single municipal ISP is not really what I think is needed.

 Having the infrastructure in place to eliminate the huge investment
 needed for an ISP to service a community is.  Hopefully, enough people
 jump at the idea and offer service over the fiber, but if they don't,
 you need to get creative.

 The important thing is that the fiber stays open.  I'm not a fan of
 having a town or city be an ISP because I know how the budgets work.
 I trust a town to make sure my fiber is passing light; I don't trust
 it to make sure I have the latest and greatest equipment to light the
 fiber, or bandwidth from the best sources.  I certainly don't trust
 the town to allow competition if it's providing its own service.

 This is where the line really needs to be drawn IMHO.  Municipal FTTH
 is about layer 1, not layer 2 or layer 3.

 That said, there are communities where just having the fiber plant
 won't be enough.  In these situations, the municipality can do things
 like create an incentive program to guarantee a minimum income for an
 ISP to reach the community, which gets trimmed back as the ISP gains
 subscribers.

 I don't think a public option is bad on the ISP side of things; as
 long as the fiber is open and people can choose which ISP they want.
 The public option might be necessary for very rural communities that
 can't get service elsewhere or to simply serve as a price-check, but
 most of us here know that a small community likely won't be able to
 find the staff to run its own ISP, either.

 TL;DR Municipal FTTH should be about fixing the infrastructure issues
 and promoting innovation and competition, not creating a
 government-run ISP to oust

Re: The case(s) for, and against, preemption (was Re: Muni Fiber and Politics)

2014-07-22 Thread Ray Soucy
You're over-thinking it.  Use the power company as a model and you'll
be close to the right path.

On Tue, Jul 22, 2014 at 4:05 PM, Eric Brunner-Williams
brun...@nic-naa.net wrote:
 On 7/22/14 11:13 AM, Ray Soucy wrote:

 Municipal FTTH needs to be a regulated public utility (ideally at a
 state or regional level).  It should have an open access policy at
 published rates and be forbidden from offering lit service on the
 fiber (conflict of interest).


 Ray,

 Could you offer a case for state (or regional, including a jurisdictional
 definition) preemption of local regulation?

 Counties in Maine don't have charters, and, like most states in the North
 East, their powers do not extend to incorporated municipalities. Here in
 Oregon there are general law counties, and chartered counties, and in the
 former, county ordinances do not apply, unless by agreement, with
 incorporated municipalities; in the latter, the effect of county ordinances
 is not specified, though Art. VI, sec. 10 could be read as creating
 applicability, where there is a county concern. In agricultural regions
 (the South, the Mid-West, the West), county government powers are
 significantly greater than in the North East, and as in the case of Oregon,
 nuanced by the exceptions of charter vs non-charter, inferior jurisdictions.
 Yet another big issue is Dillon's Rule or Home Rule -- in the former the
 inferior jurisdictions of the state only have express granted powers on
 specific issues, and in the latter the inferior jurisdictions of the state
 have significant powers enshrined in the State(s) Constitution(s).

 I mention all this simply to show that one solution is not likely to fit all
 uses.

 Now because I've worked on Tribal Bonding, I'm aware that the IRS allows
 municipalities to issue tax free bonds for purposes that are wider than the
 government purposes test the IRS has imposed on Tribal Bonding (up until
 last year). Stadiums, golf courses, and {filling a hole in | using pole
 space on} public rights-of-way -- forms of long-term revenue Tribes are
 barred from funding via tax free bonds by an IRS rule.

 The (two, collided) points being, municipalities are likely sources of
 per-build-out funding, via their bonding authority, and you've offered a
 claim, shared by others, that municipalities should be preempted from
 per-build-out regulation of their infrastructure.

 How should it work: money originates in the municipality of X, but
 regulation of the use of that money resides in another jurisdiction?

 Eric




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Muni Fiber and Politics

2014-07-22 Thread Ray Soucy
 and take away
 their customers. This is what we, as consumers, want, isn't it?

 Owen

 
 
  Scott Helms
  Vice President of Technology
  ZCorum
  (678) 507-5000
  
  http://twitter.com/kscotthelms
  
 
 
  On Tue, Jul 22, 2014 at 2:13 PM, Ray Soucy r...@maine.edu wrote:
 
  IMHO the way to go here is to have the physical fiber plant separate.
 
  FTTH is a big investment.  Easy for a municipality to absorb, but not
  attractive for a commercial ISP to do.  A business will want to
  realize an ROI much faster than the life of the fiber plant, and will
  need assurance of having a monopoly and dense deployment to achieve
  that.  None of those conditions apply in the majority of the US, so
  we're stuck with really old infrastructure delivering really slow
  service.
 
  Municipal FTTH needs to be a regulated public utility (ideally at a
  state or regional level).  It should have an open access policy at
  published rates and be forbidden from offering lit service on the
  fiber (conflict of interest).  This covers the fiber box in the house
  to the communications hut to patch in equipment.
 
  Think of it like the power company and the separation between
  generation and transmission.
 
  That's Step #1.
 
  Step #2 is finding an ISP to make use of the fiber.
 
  Having a single municipal ISP is not really what I think is needed.
 
  Having the infrastructure in place to eliminate the huge investment
  needed for an ISP to service a community is.  Hopefully, enough people
  jump at the idea and offer service over the fiber, but if they don't,
  you need to get creative.
 
  The important thing is that the fiber stays open.  I'm not a fan of
  having a town or city be an ISP because I know how the budgets work.
  I trust a town to make sure my fiber is passing light; I don't trust
  it to make sure I have the latest and greatest equipment to light the
  fiber, or bandwidth from the best sources.  I certainly don't trust
  the town to allow competition if it's providing its own service.
 
  This is where the line really needs to be drawn IMHO.  Municipal FTTH
  is about layer 1, not layer 2 or layer 3.
 
  That said, there are communities where just having the fiber plant
  won't be enough.  In these situations, the municipality can do things
  like create an incentive program to guarantee a minimum income for an
  ISP to reach the community, which gets trimmed back as the ISP gains
  subscribers.
 
  I don't think a public option is bad on the ISP side of things; as
  long as the fiber is open and people can choose which ISP they want.
  The public option might be necessary for very rural communities that
  can't get service elsewhere or to simply serve as a price-check, but
  most of us here know that a small community likely won't be able to
  find the staff to run its own ISP, either.
 
  TL;DR Municipal FTTH should be about fixing the infrastructure issues
  and promoting innovation and competition, not creating a
  government-run ISP to oust anyone from the market.
 
  Think about it: If you're an ISP, and you can lease fiber and
  equipment space (proper hut, secured, with backup power and cooling
  etc) for a subsidized rate; for cheaper than anything you could afford
  to build out; how much arm twisting would it take for you to invest in
  installing a switch or two to deliver service?  If you're a smaller
  ISP, you were likely already doing this in working with telephone
  companies in the past (until they started trying to oust you).
 
 
  On Tue, Jul 22, 2014 at 11:27 AM, Aaron aa...@wholesaleinternet.net
  wrote:
  So let me throw out a purely hypothetical scenario to the collective:
 
  What do you think the consequences to a municipality would be if they
  laid
  fiber to every house in the city and gave away internet access for
  free?
  Not the WiFi builds we have today but FTTH at gigabit speeds for free?
 
  Do you think the LECs would come unglued?
 
  Aaron
 
 
 
  On 7/21/2014 8:33 PM, Miles Fidelman wrote:
 
  I've seen various communities attempt to hand out free wifi - usually
  in
  limited areas, but in some cases community-wide (Brookline, MA comes
  to
  mind).  The limited ones (e.g., in tourist hotspots) have been city
  funded,
  or donated.  The community-wide ones, that I've seen, have been
  public-private partnerships - the City provides space on light poles
  and
  such - the private firm provides limited access, in hopes of selling
  expanded service.  I haven't seen it work successfully - 4G cell
  service
  beats the heck out of WiFi as a metropolitan area service.
 
  When it comes to municipal fiber and triple-play projects, I've
  generally
  seen them capitalized with revenue bonds -- hence, a need for revenue
  to pay
   off the financing.  Lower cost than commercial services because
  municipal
  bonds are low-interest, long-term, and they operate on a
  cost-recovery
  basis.
 
  Miles Fidelman

Re: Muni Fiber and Politics

2014-07-22 Thread Ray Soucy
Sometimes the beauty of having government involved in infrastructure
is that you don't need to justify a 3 year ROI.

Creation of the Transcontinental Railroad
Rural Electrification
Building of the Interstate Highway System

Wall St. may have everyone focused on short-term gains, but when it
comes to infrastructure, spending a bit more up front to plan for the
future is in the public interest.

Building active FTTH with proper capacity might not make sense for
Comcast, but then again we're not talking about Comcast.


On Tue, Jul 22, 2014 at 5:10 PM, Scott Helms khe...@zcorum.com wrote:
 I'll be there when I see it can be done practically in the US.  I agree with
 you from a philosophical standpoint, but I don't see it being there yet.


 Scott Helms
 Vice President of Technology
 ZCorum
 (678) 507-5000
 
 http://twitter.com/kscotthelms
 


 On Tue, Jul 22, 2014 at 5:00 PM, Owen DeLong o...@delong.com wrote:

 The beauty is that if you have a L1 infrastructure of star-topology fiber
 from
 a serving wire center each ISP can decide active E or PON or whatever
 on their own.

 That's why I think it's so critical to build out colo facilities with SWCs
 on the other
 side of the MMR as the architecture of choice. Let anyone who wants to be
 an
 ANYTHING service provider (internet, TV, phone, whatever else they can
 imagine)
 install the optical term at the customer prem and whatever they want in
 the colo
 and XC the fiber to them on a flat per-subscriber strand fee basis that
 applies to
 all comers with a per-rack price for the colo space.

 So I think we are completely on the same page now.

 Owen

 On Jul 22, 2014, at 13:37 , Ray Soucy r...@maine.edu wrote:

  I was mentally where you were a few years ago with the idea of having
  switching and L2 covered by a public utility but after seeing some
  instances of it I'm more convinced that different ISPs should use
  their own equipment.
 
  The equipment is what makes the speed and quality of service.  If you
  have shared infrastructure for L2 then what exactly differentiates a
  service?  More to the point; if that equipment gets oversubscribed or
  gets neglected who is responsible for it?  I don't think the
  municipality or public utility is a good fit.
 
   Just give us the fiber and we'll decide what to light it up with.
 
  BTW I don't know why I would have to note this, but of course I'm
  talking about active FTTH.  PON is basically throwing money away if
  you look at the long term picture.
 
  Sure, having one place switch everything and just assign people to the
  right VLAN keeps trucks from rolling for individual ISPs, but I don't
  think giving up control over the quality of the service is in the
  interest of an ISP.  What you're asking for is basically to have a
  competitive environment where everyone delivers the same service.
  If your service is slow and it's because of L2 infrastructure, no
  change in provider will fix that the way you're looking to do it.
 
 
 
  On Tue, Jul 22, 2014 at 2:26 PM, Scott Helms khe...@zcorum.com wrote:
  One of the main problems with trying to draw the line at layer 1 is
  that it's
  extremely inefficient in terms of the gear.  Now, this is in large part
  a
  function of how gear is built and if a significant number of locales
  went in
  this direction we _might_ see changes, but today each ISP would have to
  purchase their own OLTs and that leads to many more shelves than the
  total
  number of line cards would otherwise dictate.  There are certainly many
  other issues, some of which have been discussed on this list before,
  but
  I've done open access networks for several cities and _today_ the
  cleanest
  situations by far (that I've seen) had the city handling layer 1 and 2
  with
  the layer 2 hand off being Ethernet regardless of the access technology
  used.
 
 
  Scott Helms
  Vice President of Technology
  ZCorum
  (678) 507-5000
  
  http://twitter.com/kscotthelms
  
 
 
  On Tue, Jul 22, 2014 at 2:13 PM, Ray Soucy r...@maine.edu wrote:
 
  IMHO the way to go here is to have the physical fiber plant separate.
 
  FTTH is a big investment.  Easy for a municipality to absorb, but not
  attractive for a commercial ISP to do.  A business will want to
  realize an ROI much faster than the life of the fiber plant, and will
  need assurance of having a monopoly and dense deployment to achieve
  that.  None of those conditions apply in the majority of the US, so
  we're stuck with really old infrastructure delivering really slow
  service.
 
  Municipal FTTH needs to be a regulated public utility (ideally at a
  state or regional level).  It should have an open access policy at
  published rates and be forbidden from offering lit service on the
  fiber (conflict of interest).  This covers the fiber box in the house
  to the communications hut to patch in equipment

Re: Muni Fiber and Politics

2014-07-21 Thread Ray Soucy
Agree.

I'd go a step further and say that Dark Fiber as a Public Utility
(which is regulated to provide open access at published rates and
forbidden from providing its own lit service directly) is the only way
forward.

That said, I don't think it's a good idea to see the municipality
provide the fiber and Internet access.  There needs to be some
separation to promote an equal playing field.  That isn't to say the
town couldn't provide their own service within the framework of being
a customer of the utility, which would be helpful as a price-check and
anchor provider.

Just need to make sure it's set up to promote competition, not kill it.

For rural areas where the population density is too low to deliver an
acceptable ROI for companies like Verizon or Comcast, I think
municipal dark fiber to the home is the only hope.

Let the ISPs focus on the cost and investment of the optics and
routers to drive up bandwidth instead of trying to absorb the cost of
a 20 year fiber plant in 3 years.

On a side note, this model actually makes it possible for a smaller
ISP to be viable again, which might not be a bad thing.


On Mon, Jul 21, 2014 at 3:08 PM, Blake Dunlap iki...@gmail.com wrote:
 My power is pretty much always on, my water is pretty much always on
 and safe, my sewer system works, etc etc...

 Why is layer 1 internet magically different from every other utility?

 -Blake

 On Mon, Jul 21, 2014 at 1:38 PM, William Herrin b...@herrin.us wrote:
 On Mon, Jul 21, 2014 at 10:20 AM, Jay Ashworth j...@baylink.com wrote:
 Over the last decade, 19 states have made it illegal for municipalities
 to own fiber networks

 Hi Jay,

 Everything government does, it does badly. Without exception. There
 are many things government does better than any private organization
 is likely to sustain, but even those things it does slowly and at an
 exorbitant price.

 Muni fiber is a competition killer. You can't beat city hall; once
 built it's not practical to compete, even with better service, so
 residents are stuck with only the overpriced (either directly or via
 taxes), usually underpowered and always one-size-fits-all network
 access which results. As an ISP I watched something similar happen in
 Altoona PA a decade and a half ago. It was a travesty.

 The only exception I see to this would be if localities were
 constrained to providing point to point and point to multipoint
 communications infrastructure within the locality on a reasonable and
 non-discriminatory basis. The competition that would foster on the
 services side might outweigh the damage on the infrastructure side.
 Like public roads facilitate efficient transportation and freight
 despite the cost and potholes, though that's an imperfect simile.

 Regards,
 Bill Herrin


 --
 William Herrin  her...@dirtside.com  b...@herrin.us
 Owner, Dirtside Systems . Web: http://www.dirtside.com/
 Can I solve your unusual networking challenges?



-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Net Neutrality...

2014-07-17 Thread Ray Soucy
"In truth, however, market failures like these have never happened,
and nothing is broken that needs fixing."

Prefixing a statement with "in truth" doesn't actually make it true, Bob.


On Wed, Jul 16, 2014 at 10:50 AM, Fred Baker (fred) f...@cisco.com wrote:
 Relevant article by former FCC Chair

 http://www.washingtonpost.com/posteverything/wp/2014/07/14/this-is-why-the-government-should-never-control-the-internet/



-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: FYI: Unbreakable VPN using Vyatta/VyOS -HOW TO-

2014-05-14 Thread Ray Soucy
Thanks for this,

Have you posted this to the VyOS project forums?  It would make a nice
addition to the wiki (*cough* I've been trying to find some help to
complete the VyOS user guide).


On Tue, May 13, 2014 at 5:10 AM, Naoto MATSUMOTO
n-matsum...@sakura.ad.jpwrote:

 Hi all!


 We wrote a TIPS memo about the basic idea of inter-cloud networking using
 a Virtual Router (a.k.a. Brocade Vyatta vRouter and VyOS) with a High
 Availability concept.

 Please enjoy it if you're interested ;-)

 Unbreakable VPN using Vyatta/VyOS -HOW TO-
 http://slidesha.re/1lryGVU

 Best Regards,

 --
 SAKURA Internet Inc. / Senior Researcher
 Naoto MATSUMOTO n-matsum...@sakura.ad.jp
 SAKURA Internet Research Center http://research.sakura.ad.jp/




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Managing ACL exceptions (was Re: Filter NTP traffic by packet size?)

2014-02-28 Thread Ray Soucy
I'm wondering how many operators don't have systems in place to
quickly and efficiently filter problem host systems.
I see a lot of talk of ACL usage, but not much about uRPF and black
hole filtering.

There are a few white papers that are worth a read:

http://www.cisco.com/c/dam/en/us/products/collateral/security/ios-network-foundation-protection-nfp/prod_white_paper0900aecd80313fac.pdf

http://www.cisco.com/web/about/security/intelligence/urpf.pdf

If you have uRPF enabled on all your access routers then you can
configure routing policy such that advertising a route for a specific
host system will trigger uRPF to drop the traffic at the first hop, in
hardware.

This prevents you from having to maintain ACLs or even give out access
to routers.  Instead, you can use a small router or daemon that
disables hosts by advertising them as a route (for example, we just
use a pair of small ISR 1841 routers for this); this in turn can be
tied into IPS or a web UI allowing your NOC to disable a problem host
at the first hop and prevent its traffic from propagating throughout
the network without having to know the overall architecture of the
network or determine the best place to apply an ACL.

I've seen a lot of talk on trying to filter specific protocols, or
rate-limit, etc. but I really feel that isn't the appropriate action
to take.  I think disabling a system that is a problem and notifying
its maintainer that they need to correct the issue is much more
sustainable.  There are also limitations on how much can be done
through the use of ACLs.  uRPF and black hole routing scale much
better, especially in response to a denial of service attack.

When the NTP problems first started popping up, we saw incoming NTP of
several Gb; without the ability to quickly identify and filter this
traffic, a lot of our users would have been dead in the water because
the firewalls they use just can't handle that much traffic; our
routers, on the other hand, have no problem throwing those packets
out.

I only comment on this because one of the comments made to me was
"Can't we just use a firewall to block it?"  It took me over an hour
to explain that the firewalls in use didn't have the capacity to
handle this level of traffic -- and when I tried to discuss hardware
vs. software filtering, I got a deer-in-the-headlights look. :-)


On Thu, Feb 27, 2014 at 8:57 PM, Keegan Holley no.s...@comcast.net wrote:
 It depends on how many customers you have and what sort of contract you have 
 with them if any.  A significant amount of attack traffic comes from 
 residential networks where a one-size-fits-all policy is definitely best.

 On Feb 26, 2014, at 4:01 PM, Jay Ashworth j...@baylink.com wrote:

 - Original Message -
 From: Brandon Galbraith brandon.galbra...@gmail.com

 On Wed, Feb 26, 2014 at 6:56 AM, Keegan Holley no.s...@comcast.net
 wrote:
 More politely stated, it's not the responsibility of the operator to
 decide what belongs on the network and what doesn't. Users can run any
  service that's not illegal or even reuse ports for other
 applications.

 Blocking chargen at the edge doesn't seem to be outside of the realm
 of possibilities.

All of these conversations are variants of "how easy is it to set up a
default ACL for loops, and then manage exceptions to it?".

 Assuming your gear permits it, I don't personally see all that much
 Bad Actorliness in setting a relatively tight bidirectional ACL for
 Random Edge Customers, and opening up -- either specific ports, or
 just to a less-/un-filtered ACL on specific request.

 The question is -- as it is with BCP38 -- *can the edge gear handle it*?

 And if not: why not?  (Protip: because buyers of that gear aren't
 agitating for it)

 Cheers,
 -- jra
 --
 Jay R. Ashworth  Baylink   
 j...@baylink.com
 Designer The Things I Think   RFC 
 2100
 Ashworth  Associates   http://www.bcp38.info  2000 Land Rover 
 DII
 St Petersburg FL USA  BCP38: Ask For It By Name!   +1 727 647 
 1274






-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



Re: Managing ACL exceptions (was Re: Filter NTP traffic by packet size?)

2014-02-28 Thread Ray Soucy
When I was looking at the website before I didn't really see any
mention of uRPF, just the use of ACLs, maybe I missed it, but it's not
encouraging if I can't spot it quickly.  I just tried a search and the
only thing that popped up was a how-to for a Cisco 7600 VXR.

http://www.bcp38.info/index.php/HOWTO:CISCO:7200VXR

On Fri, Feb 28, 2014 at 9:04 AM, Jay Ashworth j...@baylink.com wrote:
 You mean, like Bcp38(.info)?


 On February 28, 2014 9:02:03 AM EST, Ray Soucy r...@maine.edu wrote:

 I'm wondering how many operators don't have systems in place to
 quickly and efficiently filter problem host systems.
 I see a lot of talk of ACL usage, but not much about uRPF and black
 hole filtering.

 There are a few white papers that are worth a read:


 http://www.cisco.com/c/dam/en/us/products/collateral/security/ios-network-foundation-protection-nfp/prod_white_paper0900aecd80313fac.pdf

 http://www.cisco.com/web/about/security/intelligence/urpf.pdf

 If you have uRPF enabled on all your access routers then you can
 configure routing policy such that advertising a route for a specific
 host system will trigger uRPF to drop the traffic at the first hop, in
 hardware.
 This prevents you from having to maintain ACLs or even give out access
 to routers.  Instead, you can use a small router or daemon that
 disables hosts by advertising them as a route (for example, we just
 use a pair of small ISR 1841 routers for this); this in turn can be
 tied into IPS or a web UI allowing your NOC to disable a problem host
 at the first hop and prevent its traffic from propagating throughout
 the network without having to know the overall architecture of the
 network or determine the best place to apply an ACL.

 I've seen a lot of talk on trying to filter specific protocols, or
 rate-limit, etc. but I really feel that isn't the appropriate action
 to take.  I think disabling a system that is a problem and notifying
 its maintainer that they need to correct the issue is much more
 sustainable.  There are also limitations on how much can be done
 through the use of ACLs.  uRPF and black hole routing scale much
 better, especially in response to a denial of service attack.

 When the NTP problems first started popping up, we saw incoming NTP of
 several Gb; without the ability to quickly identify and filter this
 traffic, a lot of our users would have been dead in the water because
 the firewalls they use just can't handle that much traffic; our
 routers, on the other hand, have no problem throwing those packets
 out.

 I only comment on this because one of the comments made to me was
 "Can't we just use a firewall to block it?"  It took me over an hour
 to explain that the firewalls in use didn't have the capacity to
 handle this level of traffic -- and when I tried to discuss hardware
 vs. software filtering, I got a deer-in-the-headlights look. :-)


 On Thu, Feb 27, 2014 at 8:57 PM, Keegan Holley no.s...@comcast.net
 wrote:

  It depends on how many customers you have and what sort of contract you
 have with them if any.  A significant amount of attack traffic comes from
 residential networks where a one-size-fits-all policy is definitely best.

  On Feb 26, 2014, at 4:01 PM, Jay Ashworth j...@baylink.com wrote:

  - Original Message -

  From: Brandon Galbraith brandon.galbra...@gmail.com


  On Wed, Feb 26, 2014 at 6:56 AM, Keegan Holley no.s...@comcast.net
  wrote:

  More politely stated, it's not the responsibility of the operator to
  decide what belongs on the network and what doesn't. Users can run
 any
  services that's not illegal or even reuse ports for other
  applications.


  Blocking chargen at the edge doesn't seem to be outside of the realm
  of possibilities.


  All of these conversations are variants of "how easy is it to set up a
  default ACL for loops, and then manage exceptions to it?".

  Assuming your gear permits it, I don't personally see all that much
  Bad Actorliness in setting a relatively tight bidirectional ACL for
  Random Edge Customers, and opening up -- either specific ports, or
  just to a less-/un-filtered ACL on specific request.

  The question is -- as it is with BCP38 -- *can the edge gear handle
 it*?

  And if not: why not?  (Protip: because buyers of that gear aren't
  agitating for it)

  Cheers,
  -- jra
  --
  Jay R. Ashworth  Baylink
 j...@baylink.com
  Designer The Things I Think
 RFC 2100
  Ashworth  Associates   http://www.bcp38.info  2000 Land
 Rover DII
  St Petersburg FL USA  BCP38: Ask For It By Name!   +1 727
 647 1274






 --
 Sent from my Android phone with K-9 Mail. Please excuse my brevity.



-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



Re: Filter NTP traffic by packet size?

2014-02-24 Thread Ray Soucy
We have had pretty good success in identifying offenders by simply
monitoring flow data for NTP flows destined for our address space with
packet counts higher than 100; we disable them and notify the owner to
correct the configuration on the host.  Granted, we only service about 1,000
different customers.
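
For anyone curious, the detection side can be a single filter on the
collector.  A hedged example assuming an nfdump-based setup (the path
and prefix are made up):

  nfdump -R /var/cache/nfdump \
    'dst net 203.0.113.0/24 and proto udp and dst port 123 and packets > 100'

Anything that matches gets reviewed, disabled if necessary, and the
maintainer notified.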

In cases where a large amount of incoming traffic was generated, we
have been able to temporarily blackhole offenders to not saturate
smaller downstream connections until traffic levels die down;
unfortunately it takes a few days for that to happen, and many service
providers outside the US don't seem to be very responsive to their
published abuse address.

I prefer targeted, temporary, and communicated filtering for actual
incidents over blanket filtering for potential incidents.


On Sun, Feb 23, 2014 at 7:35 PM, Randy Bush ra...@psg.com wrote:
 I've talked to some major peering exchanges and they refuse to take any
 action. Possibly if the requests come from many peering participants
 it will be taken more seriously?

 i have talked to fiber providers and they have refused to take action.
 perhaps if requests came from hundreds of the unclued zombies they would
 take it seriously.

 randy




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



Re: EIGRP support !Cisco

2014-01-08 Thread Ray Soucy
Use a standard protocol and redistribute between the two.  OSPF is likely
the easiest way to go for this.
I like EIGRP, but I don't think I like it enough to try a non-Cisco
implementation of it.  At least with OSPF you know that most of the bugs
have been worked out (hopefully).
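
If you go that route, the mutual redistribution is short.  A hedged
IOS-style sketch (process numbers and the seed metric are made up;
EIGRP needs an explicit seed metric or the redistributed routes won't
show up):

  router ospf 1
   redistribute eigrp 100 subnets
  !
  router eigrp 100
   ! seed metric: bandwidth(kbps) delay(10us) reliability load MTU
   redistribute ospf 1 metric 100000 100 255 1 1500

If you redistribute at more than one point, tag the routes and filter
on the tags or you'll eventually chase a redistribution loop.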


On Wed, Jan 8, 2014 at 12:30 PM, Nick Olsen n...@flhsi.com wrote:

 Looking for EIGRP support in a platform other than Cisco, since it was
 opened up last year. We have a situation where we need to integrate into a
 network running EIGRP and would like to avoid cisco if at all possible.

 Any thoughts?

 Nick Olsen
  Network Operations
 (855) FLSPEED  x106





-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Open source hardware

2014-01-08 Thread Ray Soucy
Just to toss in a few more vendors so not to look biased:

Champion One:
http://www.championone.net/

Have used them with no complaints.

And a new company I heard about off-list:

Luma Optics:
http://www.lumaoptics.net/

I haven't dealt with them before, but their solution seems to be pretty
slick in that they give you the tools to recode optics yourself.

When all is said and done, my experience with third-party optics has been
that they're identical to brand-name optics except for the sticker.  In
fact, it's pretty clear most of the time that they're often made by the
same place.

I haven't counted them all up, but I believe we have over 1,000 third-party
optics in use, so a fair enough sample size.  Most of the optics that I've
replaced in the last year have had a Cisco label on them. ;-)



On Tue, Jan 7, 2014 at 9:58 AM, Ray Soucy r...@maine.edu wrote:

 http://approvedoptics.com/ is a good starting point if you want correct
 vendor codes


 On Tue, Jan 7, 2014 at 8:57 AM, Vlade Ristevski vrist...@ramapo.eduwrote:

 Sorry to get off topic, but is there a company that you can recommend?
 The price of the Cisco single-mode GLC-LH-SMD= is killing me. I see a bunch
 of third-party ones on Amazon and CDW but I'd love to get my hands on one
 that has the correct vendor code without going and trying them all.


 On 1/3/2014 7:48 AM, Ray Soucy wrote:

 You actually buy brand-name SFP's? That's like buying the gold-plated
 HDMI
 Monster Cable at Best Buy at markup ...

 I just find the companies that the vendors contract to make their OEM
 SFP's and buy direct.  Same SFP from the same factory except one has a
 Cisco sticker. ;-)

 You can even get them with the correct vendor code, been doing this for
 years and there is no difference in failure rate or quality and we go
 through hundreds of SFPs.




  Vlad
 Network Manager




 --
 Ray Patrick Soucy
 Network Engineer
 University of Maine System

 T: 207-561-3526
 F: 207-561-3531

 MaineREN, Maine's Research and Education Network
 www.maineren.net




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Vyatta to VyOS

2014-01-07 Thread Ray Soucy
Unfortunately, vyos.net is the only website for VyOS; Brocade still has a
commercial release of the Vyatta vRouter that has all the Vyatta
documentation, etc.  If you're nervous about the lack of resources from the
community project, you might opt to go with the paid version from Brocade.

The VyOS project is still pretty new, so better documentation, forums,
etc. will come in time, I suspect.

I'm not sure if Brocade has upped the pricing, but here is pricing info
from over the summer.  They had 1, 3, and 5 year commitments and different
pricing for virtual vs bare metal and 24-7 vs. business hour support.

MSRP pricing for Vyatta back in May 2013

For 24-7 Support:
Bare Metal  $2,600 (1 yr)  $5,000 (3 yr)
Virtual  $2,000 (1 yr)   $3,000 (3 yr)

For Business Hours support:
Bare Metal $2,000 (1 yr) $3,500 (3 yr)
Virtual $1,800 (1 yr) $2,700 (3 yr)


On Tue, Jan 7, 2014 at 9:24 AM, Vlade Ristevski vrist...@ramapo.edu wrote:

 This project looks interesting. Our 7206 VXR is in its final days, and
 replacing it with an ASR series is very expensive considering we're only
 pushing 600 megs of Internet traffic with a full BGP table.

 When I go to the page linked below, I didn't see a mailing list, forum or
 very much documentation for it. Is there another site with this info? I'd
 love to test a few builds out but I never used Vyatta before.



 On 12/23/2013 10:18 AM, Ray Soucy wrote:

 Many here might be interested,

 In response to Brocade not giving the community edition of Vyatta much
 attention recently, some of the more active community members have created
 a fork of the GPL code used in Vyatta.

 It's called VyOS, and yesterday they released 1.0.

 http://vyos.net/

 I've been playing with the development builds and it seems to be every bit
 as stable as the Vyatta releases.

 Will be interesting to see how the project unfolds :-)


 --
 Vlade Ristevski
 Network Manager
 IT Services
 Ramapo College
 (201)-684-6854





-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Open source hardware

2014-01-07 Thread Ray Soucy
http://approvedoptics.com/ is a good starting point if you want correct
vendor codes


On Tue, Jan 7, 2014 at 8:57 AM, Vlade Ristevski vrist...@ramapo.edu wrote:

 Sorry to get off topic, but is there a company that you can recommend? The
 price of the Cisco single-mode GLC-LH-SMD= is killing me. I see a bunch of
 third-party ones on Amazon and CDW but I'd love to get my hands on one
 that has the correct vendor code without going and trying them all.


 On 1/3/2014 7:48 AM, Ray Soucy wrote:

 You actually buy brand-name SFP's? That's like buying the gold-plated HDMI
 Monster Cable at Best Buy at markup ...

 I just find the companies that the vendors contract to make their OEM
 SFP's and buy direct.  Same SFP from the same factory except one has a
 Cisco sticker. ;-)

 You can even get them with the correct vendor code, been doing this for
 years and there is no difference in failure rate or quality and we go
 through hundreds of SFPs.




  Vlad
 Network Manager




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Open source hardware

2014-01-03 Thread Ray Soucy
You actually buy brand-name SFP's? That's like buying the gold-plated HDMI
Monster Cable at Best Buy at markup ...

I just find the companies that the vendors contract to make their OEM
SFP's and buy direct.  Same SFP from the same factory except one has a
Cisco sticker. ;-)

You can even get them with the correct vendor code, been doing this for
years and there is no difference in failure rate or quality and we go
through hundreds of SFPs.
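
If you want to sanity-check what an optic reports, Linux will dump the
module EEPROM for you on most Intel NICs (assuming driver support for the
module EEPROM interface; eth0 is a placeholder):

ethtool -m eth0 | grep -i vendor

The Vendor name and Vendor PN fields it prints are essentially what
vendor-lock checks look at, which is why correctly coded third-party
optics just work.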

It is nice to have a solution provider if you're only looking at one unit,
but if you're deploying in volume then building and testing your own
configuration really isn't that hard and will save you a lot of money.  You
can even contract an OEM appliance vendor to take care of the actual build
for you and they'll usually provide 3-year replacement on the hardware.
(I've found Sourcecode to be the best price-wise for smaller projects).
As a bonus they'll slap whatever branding you want on the thing for that
professional touch.

Vyatta and now VyOS are important projects for networking.  We really need
to get away from locked down non-free hardware and software for critical
infrastructure.

It's natural that most of the people in this community (myself included)
will be fans of companies like Cisco and Juniper and dismiss anything else,
but that mindset changed for me when I deployed 100+ whitebox units 3 years
ago and saved nearly a million in the process.

Juniper is a FreeBSD shop, and Cisco's new OS lines are based on Linux.
 Ciena is largely based on Linux as well.  In poking around at these
platforms recently, one of the big things I'm noticing is that there is a
lot less done in hardware than we traditionally saw, especially from Cisco.


Having your networking in silicon is great when you have a 100 MHz CPU;
Cisco even conditioned us to be terrified of anything being punted to CPU
by under-sizing and over-pricing their CPUs for years.  But when you have a
modern server-grade platform, multi-Gigabit performance, even with
significant levels of packet processing and small packet sizes, is a joke.
 So at least for the low end of the spectrum there is a huge savings for
equal (often better) performance.

As I mentioned before I haven't done much with 10-Gigabit, but I imagine
with Intel-based cards on a modern PCIe bus that you can at least get
entry-level performance.  Sometimes the biggest push for 10G is avoiding a
2G or 4G port-channel.

With the new Intel DPDK stuff, Intel is claiming 80M PPS performance on a
standard Xeon platform:
http://www.intel.com/content/www/us/en/intelligent-systems/intel-technology/packet-processing-is-enhanced-with-software-from-intel-dpdk.html

Eventually, DPDK support will likely start being included in projects like
VyOS, perhaps in Linux in general.
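
Getting to those numbers is mostly a matter of hugepages and binding the
NIC to a DPDK-compatible driver (a rough sketch from memory; the bind
script name, driver module, and PCI address vary by DPDK release and are
placeholders here):

echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mount -t hugetlbfs nodev /mnt/huge
modprobe uio && insmod igb_uio.ko
./tools/dpdk_nic_bind.py --bind=igb_uio 0000:01:00.0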

As for VyOS, the project is starting to get some momentum and is run by
former Vyatta employees and even some people from UBNT.  I think we'll see
some good stuff from them in the future.  The 1.0 release is solid from
what I've seen (and even fixes some bugs Vyatta hasn't yet).




On Fri, Jan 3, 2014 at 12:01 AM, Jimmy Hess mysi...@gmail.com wrote:

 On Thu, Jan 2, 2014 at 8:53 PM, Andrew Duey 
 andrew.d...@widerangebroadband.net wrote:

  I'm surprised nobody's mentioned vyatta.org or the new fork of VyOs.  We
  are currently using the vyatta community edition and so far it's been
 good
  to to us.  It depends on your hardware and how small of an ISP you are
 but
  it might be a great open source fit for you.


 The orig. author has potentially set course for a world of hurt --  if the
 plan is to scrap robust packaged highly-validated gear having separate
 hardware forwarding planes and ASIC-driven filtering,  to stick cheap x86
 servers in the SP core and internet borders.

 Sure... anyone can install Vyatta on an x86 server,   but  assembly of all
 the pieces and full validation for a resilient platform comparable to
 carrier grade gear, for a mission critical network,  should be a bit more
 involved than that.

 Next up   how to build your own  10-Gigabit  SFPs to avoid paying for
 expensive brand-name SFPs,  by putting together some chips,  wires,  fiber,
 and tying it all together with a piece of duck tape

 just saying... :)


  --Andrew Duey
 
 --
 -JH




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: NSA able to compromise Cisco, Juniper, Huawei switches

2013-12-31 Thread Ray Soucy
I think there needs to be some clarification on how these tools get used,
how often they're used, and if they're ever cleaned up when no longer part
of an active operation.  Of course we'll never get that.

The amount of apologists with the attitude this isn't a big deal, nothing
to see here, the NSA does this kind of thing is kind of shocking for this
community; especially with the information that's been released over the
past few months.

This whole backdoor business is a very, very, dangerous game.




On Tue, Dec 31, 2013 at 12:19 AM, Blair Trosper blair.tros...@gmail.com wrote:

 To supplement and amend what I said:

 These are the KINDS of things we want the NSA to do; however, the
 institutional oversight necessary to make sure it's Constitutional,
 warranted, and kept in bounds is woefully lacking (if any exists at all).
  Even FISA is unsatisfactory.

 At any rate, I agree that the current disposition of the NSA (or, at least,
 what's been leaking the last few months) is simply unacceptable and cannot
 be allowed.  I say that last part from the perspective of a US citizen,
 though I'd imagine most people of other nationalities would agree with me,
 but probably for different reasons.


 On Mon, Dec 30, 2013 at 11:08 PM, Jimmy Hess mysi...@gmail.com wrote:

  On Mon, Dec 30, 2013 at 10:41 PM, Blair Trosper blair.tros...@gmail.com
 wrote:
 
  I'm torn on this.  On one hand, it seems sinister.  On the other, it's
 not
  only what the NSA is tasked with doing, but it's what you'd EXPECT them
 to
  be doing in the role as the NSA.
 
  [snip]
 
  The NSA's role is not supposed to include subterfuge and undermining the
  integrity or security of domestic enterprise infrastructure
 
  With any luck, we'll hopefully find absolutely nothing, or that it was
  targetted backdooring against specific targets only.
 
  And people have a need to know that the security agencies haven't left a
  trail of artificially inserted bugs and backdoors in common IT equipment
  providing critical infrastructures services,  and that the agencies
 haven't
  prepared a collection of instant-root 0days,  that are no more protected
  then the agencies' other poorly guarded secrets.
 
  There would be a risk that any 'backdoors' are ready to be exploited by
  other unintended nefarious actors!
  Because the NSA are apparently  great at prepping the flammables and
  setting fires,but  totally incapable of  keeping the fires contained,
  once they  (or someone else)  lights it.
 
 
  It is not the least bit necessary for the NSA itself to be a nefarious
  actor  exploiting things or even complicit;  for the mere presence of
  any
  backdoor or surreptitious code to eventually have the potential for
 serious
  damage.
 
  It could well be a rogue ex-employee of the NSA, such as Snowden,  or
  others,  that happened to be aware of technical details, hackers, or
  members of a foreign nation state,  who will just happen to have the time
  and energy to track down open doors waiting for the taking,  AND  figure
  out how to abuse them  for evil purposes.
 
 
  There are enough potential 0day risks, without intentional ones,  waiting
  for bad guys to co-opt!
 
  --
  -JH
 




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: NSA able to compromise Cisco, Juniper, Huawei switches

2013-12-30 Thread Ray Soucy
Even more outrageous than the domestic spying is the arrogance to think
that they can protect the details on backdoors into critical
infrastructure.

They may have basically created the framework for an Internet-wide kill
switch, that likely also affects every aspect of modern communication.
 Since they don't disclose any of this to other agencies, it's very likely
that even parts of the DOD is vulnerable.

I hope when [if] the truth is learned it is a lot less prevalent than it
sounds, but I'm not optimistic.

This is why we need all infrastructure to be implemented using open
standards, open hardware designs, and open source software IMHO.

I hope Cisco, Juniper, and others respond quickly with updated images for
all platforms affected before the details leak.


On Mon, Dec 30, 2013 at 6:29 AM, Dobbins, Roland rdobb...@arbor.net wrote:


 On Dec 30, 2013, at 6:18 PM, Saku Ytti s...@ytti.fi wrote:

  I welcome the short-term havok and damage of such disclose if it would
 be anywhere near the magnitude implied, it would create pressure to change
  things.

 This is the type of change we're likely to see, IMHO:

 http://lauren.vortex.com/archive/001074.html

 ---
 Roland Dobbins rdobb...@arbor.net // http://www.arbornetworks.com

   Luck is the residue of opportunity and design.

-- John Milton





-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: NSA able to compromise Cisco, Juniper, Huawei switches

2013-12-30 Thread Ray Soucy
Looking more at the actual leaked information it seems that if the NSA is
working with companies, it's not anything the companies are likely aware
of.

The common form of infection seems to be through software updates performed
by administrators (through the NSA hijacking web traffic).  They are
implemented as firmware and BIOS infections that modify the OS image and
persist through software upgrades to provide a persistent back door (PBD).
 The documents imply that a significant number of systems deployed are
already infected.

So this isn't an issue of the NSA working with Cisco and Juniper to include
back doors; it's an issue of the NSA modifying those releases after the
fact through BIOS implants.  Where exactly the NSA is inserting these we
can't be sure.  They could be targeted or they could be at the assembly
line.

Quick Summary of Leaked Information:
Source: http://www.spiegel.de/international/world/a-941262.html

Firewalls:

(1) Cisco PIX and ASA: Codename JETPLOW
(2) Huawei Eudemon: Codename HALLUXWATER
(3) Juniper Netscreen and ISG: Codename: FEEDTROUGH
(4) Juniper SSG and Netscreen G5, 25, and 50, SSG-series: Codename:
GOURMETTROUGH
(5) Juniper SSG300 and SSG500: Codename SOUFFLETROUGH

Routers:

(1) Huawei Router: Codename HEADWATER
(2) Juniper J-Series: Codename SCHOOLMONTANA
(3) Juniper M-Series: Codename SIERRAMONTANA
(4) Juniper T-Series: Codename STUCCOMONTANA

Servers:
(1) HP DL380 G5: Codename IRONCHEF
(2) Dell PowerEdge: Codename DEITYBOUNCE
(3) Generic PC BIOS: Codename SWAP, able to compromise Windows, Linux,
FreeBSD, or Solaris using FAT32, NTFS, EXT2, EXT3, or UFS filesystems.

USB Cables and VGA Cables:

Codename COTTONMOUTH, this one is a hardware implant hidden in a USB
cable.  The diagram shows it's small enough that you would never know it's
there.
Codename RAGEMASTER, VGA cable, mirrors VGA over the air.

Many others.

I'm not sure that the list is comprehensive, so I wouldn't say that since
Cisco routers are not mentioned (for example) that they're any more safe
than Juniper (which is listed often).






On Mon, Dec 30, 2013 at 11:50 AM, Dobbins, Roland rdobb...@arbor.net wrote:


 On Dec 30, 2013, at 11:18 PM, Sam Moats s...@circlenet.us wrote:

  This might be an interesting example of it's (mis)use.
  http://en.wikipedia.org/wiki/Greek_wiretapping_case_2004%E2%80%932005

 That's one of the cases I know about; it was utilized via Ericsson gear.

 ---
 Roland Dobbins rdobb...@arbor.net // http://www.arbornetworks.com

   Luck is the residue of opportunity and design.

-- John Milton





-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: NSA able to compromise Cisco, Juniper, Huawei switches

2013-12-30 Thread Ray Soucy
On a side note,

I've been involved with organizing the New England regional Collegiate
Cyber-Defense Competition for a while, and one of our Red Team members was
able to make a pretty convincing IOS rootkit using IOS TCL scripting to
mask configuration from the students.  I don't think any students were able
to detect it until word got out after it was used a few years in a row.
 IIRC, Cisco threatened to sue if it was ever released, so, no, it's not
publicly available.  It is possible, however.

Don't assume that your routers are any safer than your servers. :-)



On Mon, Dec 30, 2013 at 1:35 PM, shawn wilson ag4ve...@gmail.com wrote:

 On Mon, Dec 30, 2013 at 1:17 PM, Lorell Hathcock lor...@hathcock.org
 wrote:
  NANOG:
 
  Here's the really scary question for me.
 
  Would it be possible for NSA-payload traffic that originates on our
 private
  networks that is destined for the NSA to go undetected by our IDS
 systems?
 

 Yup. Absolutely. Without a doubt.

  For example tcpdump-based IDS systems like Snort has been rooted to
 ignore
  or not report packets going back to the NSA?  Or netflow on Cisco devices
  not reporting NSA traffic?  Or interface traffic counters discarding
  NSA-packets to report that there is no usage on the interface when in
 fact
  there is?
 

 Do you detect 100% of malware in your IDS? Why would anyone need to do
 anything with your IDS? Craft a PDF, DOC, Java, Flash, or anything
 else that can run code that people download all the time with payload
 of unknown signature. This isn't really a network discussion. This is
 just to say - I seriously doubt there's anything wrong with your IDS -
 don't skin a cat with a flame thrower, it just doesn't need to be that
 hard.

  Here's another question.  What traffic do we look for on our networks
 that
  would be going to the NSA?
 

 Standard https on port 443 maybe? That's how I'd send it. If you need
 to send something bigger than normal, maybe compromise the email
 server and have a few people send off some 5 - 10 meg messages?
 Depends on your normal user base. If you've got a big, complex user
 base, it's not hard to stay under the radar. Google 'Mandiant APT1'
 for some real good reading.




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: The Making of a Router

2013-12-29 Thread Ray Soucy
 for i in /proc/sys/net/ipv4/conf/*/arp_announce; do echo 2 > $i; done

+1 setting arp_announce in Linux is essential if being used as a router
with more than one subnet.

I would also recommend setting arp_ignore.  For Linux-based routers, I've
found the following settings to be optimal:

echo 1 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/all/arp_ignore

On a side note, this underscores what a lot of people on-list are saying:

If you don't understand the internals of a Linux system, for example,
rolling your own will bite you.

It's also pretty rare to find a network engineer who is also a Linux
system-level developer, so finding and maintaining that talent can often be
a challenge.

Many make a leap and go on to assert that because of this software-based
systems can never be viable, which I disagree with.  After all, the latest
OS offerings from Cisco run a Linux kernel.  Nearly all the Ciena DWDM and
ME gear I run is built on Linux.  These companies aren't doing quite as
much with hardware acceleration as they would lead you to believe.

I think Intel DPDK will be a disruptive technology for networking.

At the end of the day, I'm pretty anxious to see the days of over-priced
routers driving up network service costs go away.




On Sun, Dec 29, 2013 at 4:10 AM, Laurent GUERBY laur...@guerby.net wrote:

 On Sun, 2013-12-29 at 03:31 +0100, Baldur Norddahl wrote:
  (...)
  The users each have a unique VLAN (Q-in-Q). The question is, what do I
put
  on those VLANs, if I do not want to put a full IPv4 subnet on each?
 
  My own answer to that is to have the users share a larger subnet, for
  example I could have a full class C sized subnet shared between 253
  users/VLANs.
 
  To allow these users to communicate with each other, and so they can
  communicate with the default gateway IP, I will need proxy arp. And in a
  non-OpenFlow solution, also the associated security functions such as
  DHCP-snooping to prevent hijacking of IP addresses.
 
  Which devices can solve this task?

 Hi Baldur,

 Assuming you manage 1.1.1.0/24 and 2001:db8:0::/48 and
 have a Linux box on both ends you can get rid of
 IPv4 and v6 interco subnets and arp proxy the following way:

 1/ on the gateway
 ip addr add 1.1.1.0/32 dev lo

 for all client VLAN NN on eth0 :
 ip -6 addr add fe80::1/64 dev eth0.NN
 ip -6 route add 2001:db8:0:NN00::/56 via fe80::1:NN dev eth0.NN

 2/ on user CPE number NN CPE WAN interface being eth0 :
 ip addr add 1.1.1.NN/32 dev eth0
 ip route add 1.1.1.0/32 dev eth0
 ip route add default via 1.1.1.0
 ip -6 addr add fe80::1:NN/64 dev eth0
 ip -6 route add default via fe80::1 dev eth0
 # ip -6 addr add  2001:db8:0:NN00::1/56 dev eth0 # optional

 Note: NN in hex for IPv6

 The trick in IPv4 is that linux by default will answer to ARP requests
 for 1.1.1.0 on all interfaces even if the adress is on the loopback.
 And in IPv6 use static link local on both ends. You can replace
 1.1.1.0 by any IPv4, but since .0 are rarely assigned to end users
 it doesn't waste anything and keep traceroute with public IPv4.

 The nice thing of this setup is that it virtualizes the routing from
 the client point of view: you can split/balance your clients on multiple
 physical gateways and not change a line to the client configuration
 while it's being moved, you just have to configure your IGP between
 gateways to properly distribute internal routes.

 We (AS197422 / tetaneutral.net) use this for virtual machines too (with
 tapNN interfaces from KVM instead of eth0.NN): it allows us to move
 virtual machines around physical machines without user reconfiguration,
 not waste any IPv4 and avoid all issues with shared L2 (rogue RA/ARP
 spoofing/whatever) since there's no shared L2 anymore between user VM.
 It also allows us to not pre split our IPv4 space in a fixed scheme,
 we manage only /32 so no waste at all.

 Of course you still have work to do on PPS tuning.

 Sincerely,

 Laurent GUERBY
 AS197422 http://tetaneutral.net peering http://as197422.net

 PS: minimum settings on a Linux router
 echo 1 > /proc/sys/net/ipv4/ip_forward
 for i in /proc/sys/net/ipv6/conf/*; do for j in autoconf accept_ra; do
echo 0 > $i/$j; done; done
 echo 1 > /proc/sys/net/ipv6/conf/all/forwarding
 echo 65536 > /proc/sys/net/ipv6/route/max_size
 for i in /proc/sys/net/ipv4/conf/*/arp_announce; do echo 2 > $i; done

 PPS: we also like to give /56 to our users in IPv6, it makes a nice /24
 IPv4 = /48 IPv6 correspondance (256 users).






--
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: The Making of a Router

2013-12-27 Thread Ray Soucy
In talking about RAMBOOT I also realized the instructions are out of date
on the website.

The ramboot boot target script was updated since I created the initial
page to generate the correct fstab, enable the root account, set a
hostname, etc., so you can actually use the OS until you create a new image.

I extracted the script from the initird to make it easier to grab:

http://ramboot.org/download/RAMBOOT/RAMBOOT-pre0.2/SYSLINUX/initrd/scripts/ramboot

Essentially, by adding a new ramboot script to
/usr/share/initramfs-tools/scripts alongside nfs and local, it
creates a new boot= target (since the init script looks for
scripts/${BOOT}).

As mentioned on the website, the ramboot process needs a more complete
version of busybox (for tar archive support) and the mke2fs tool added to
/usr/lib/initramfs-tools/bin/ so they will be available to the initrd.

Once you configure networking (see the INSTALL/setup_network script) you
can do an apt-get update and apt-get the packages you need from Ubuntu
12.04 LTS.

Example starting point:

apt-get install sudo
apt-get install nano
apt-get install ssh
apt-get install vlan
apt-get install bridge-utils



On Thu, Dec 26, 2013 at 8:27 PM, Ray Soucy r...@maine.edu wrote:

 The basic idea of RAMBOOT is typical in Embedded Linux development.

 Linux makes use of multi-stage boot process.  One of the stages involves
 using an initial ramdisk (initrd) to provide a base root filesystem which
 can be used to locate and mount the system root, then continue the boot
 process from there.

 For an in-memory OS, instead of the boot process mounting a pre-loaded
 storage device with the expected root filesystem (e.g. your installed HDD),
 you modify it to:

 1) Create and format a ramdisk.
 2) Create the expected directory structure and system files for the root
 filesystem on that ramdisk.

 The root filesystem includes the /dev directory and appropriate device
 nodes, the basic Linux filesystem and your init program.

 The easy way to do that is just to have a TAR archive that you extract to
 the ramdisk on boot, better yet use compression on it (e.g. tar.gz) so that
 the archive can be read from storage (e.g. USB flash) more quickly.

 Today, the initramfs in Linux handles a lot more than simply mounting the
 storage device.  It performs hardware discovery and loads appropriate
 modules.  As such the Debian project has a dynamic build system for
 initramfs that is run to build the initrd when a new kernel package is
 installed, it's called initramfs-tools.

 You can manually build your own initramfs using the examples on the
 RAMBOOT website, but the point of RAMBOOT is to make building an in-memory
 OS quick and simple.

 RAMBOOT instead adds configuration to initramfs-tools so that each time a
 new initrd is generated, it includes the code needed for RAMBOOT.

 The RAMBOOT setup adds handling of a new boot target called ramboot to
 the kernel arguments.  This allows the same kernel to be used for a normal
 installation and remain unaffected, but when you add the argument
 boot=ramboot as a kernel option to the bootloader, it triggers the
 RAMBOOT process described above.

 Having a common kernel between your development environment and embedded
 environment makes it much easier to test and verify functionality.

 The other part of RAMBOOT is that it makes use of Ubuntu Core.  Ubuntu
 Core is a stripped down minimal (and they really do mean minimal) root
 filesystem for Embedded Linux development.  It includes apt-get, though, so
 you can install all the packages you need from Ubuntu on the running system.

 RAMBOOT then has a development script to make a new root filesystem
 archive with the packages you've installed as a baseline.  This allows for
 you to boot a RAMBOOT system, install your desired packages and change
 system configuration files as desired, then build a persistent image of
 that install that will be used for future boots.

 I also have the start of a script to remove unused kernel modules, and
 other files (internationalization for example) which add to the OS
 footprint.

 You could build the root filesystem on your own (and compile all the
 necessary packages) but using Ubuntu Core provides a solid base and allows
 for the rapid addition of packages from the giant Ubuntu repository.

 Lastly, I make use of SYSLINUX as a bootloader because my goal was to use
 a USB stick as the bootflash on an Atom box.  Unfortunately, the Atom BIOS
 will only boot a USB device if it has a DOS boot partition, so GRUB was a
 no-go.  The upside is that since the USB uses SYSLINUX and is DOS
 formatted, it's easily mounted in Windows or Mac OS X, allowing you to copy
 new images or configuration to it easily.

 For the boot device I make use of the on-board vertical USB socket on the
 system board (typical for most system boards these days) and a low-profile
 USB stick.  I find the Verbatim Store 'n' Go 8GB USB stick ideally suited
 for this as it's less than a quarter-inch

Re: The Making of a Router

2013-12-27 Thread Ray Soucy
It seems to be a pretty hot button issue, but I feel that modern hardware
is more than capable of pushing packets.  The old wisdom that only hardware
can do it efficiently is starting to prove untrue.  10G might still be a
challenge (I haven't tested), but 1G is not even close to being an issue.
 Depending on the target for your deployment, it might make sense to
whitebox a router or firewall instead of spending $20K on it, especially if
you're working with any kind of scale.

TL;DR I think the backlash against anything but big iron routing is
becoming an old way of thinking.



On Fri, Dec 27, 2013 at 6:56 PM, Jon Sands fohdee...@gmail.com wrote:

 On 12/27/2013 4:23 PM, Matt Palmer wrote:

 There *is* a world outside of Silly Valley, you know... a world where
 money doesn't flow like a mighty cascade from the benevolent wallets of
 vulture capitalists, into the waiting arms of every crackpot with an
 elevator pitch. - Matt


 Yes, and in that world, one should probably not start up a FTTH ISP when
 one has not even budgeted for a router, among a thousand other things. And
 if you must, you should probably figure out your cost breakdown beforehand,
 not after. Baldur, you mention $200k total to move 10gb with Juniper (which
 seems insanely off to me). Look into Brocade's CER line; you can move 4x
 10gbe per chassis for under 12k.

 --
 Jon Sands




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: The Making of a Router

2013-12-27 Thread Ray Soucy
On a side note, Q-in-Q support has been added to the recent 3.10 Linux
kernel, configured using the ip command.  It will be popping up in
distributions soon [tm].  Another interesting addition is IPv6 NAT
(transparent redirect, prefix translation, etc).
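
For example (a sketch using 3.10-era iproute2 and ip6tables; device names,
VLAN IDs, and prefixes are placeholders):

# Q-in-Q: outer 802.1ad tag 100, inner 802.1q tag 200
ip link add link eth0 name eth0.100 type vlan proto 802.1ad id 100
ip link add link eth0.100 name eth0.100.200 type vlan proto 802.1q id 200

# stateless IPv6 prefix translation with the new NETMAP target
ip6tables -t nat -A POSTROUTING -s fd00::/64 -o eth0 -j NETMAP --to 2001:db8:1::/64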


On Fri, Dec 27, 2013 at 8:18 PM, Baldur Norddahl
baldur.nordd...@gmail.comwrote:

 On Sat, Dec 28, 2013 at 12:56 AM, Jon Sands fohdee...@gmail.com wrote:

  Yes, and in that world, one should probably not start up a FTTH ISP when
  one has not even budgeted for a router, among a thousand other things.
 And
  if you must, you should probably figure out your cost breakdown
 beforehand,
  not after. Baldur, you mention $200k total to move 10gb with Juniper
 (which
  seems insanely off to me). Look into Brocades CER line, you can move 4x
  10gbe per chassis for under 12k.
 

 I was saying $100k for two Juniper routers total.

 Perhaps we could get back on track, instead of trying to second guess what
 we did or did not budget for. You have absolute no information about our
 business plans.

 The Brocade BR-CER-2024F-4X-RT-AC - Brocade NetIron CER 2024F-4X goes for
 about $21k and we need two of them. That is enough to buy a full year of
 unlimited 10G internet. And even then, we would be short on 10G ports.

 It is not that we could not bring that money if that was the only way to do
 it. It is just that I have so many other things that I could spend that
 money on, that would further our business plans so much more.

 I can not even say if the Juniper or the Brocade will actually solve my
 problem. I need it to route to ten of thousands of VLANS (Q-in-Q), both
 with IPv4 and IPv6. It needs to act as IPv6 router on every VLAN, and very
 few devices seems to like having that many IP-addresses assigned. It also
 needs to do VRRP and proxy arp for every VLAN.

 The advantage of a software solution is that I can test it all before
 buying. Also to some limited degree, I am able to fix shortcomings myself.

 Regards,

 Baldur




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: The Making of a Router

2013-12-26 Thread Ray Soucy
You can build using commodity hardware and get pretty good results.

I've had really good luck with Supermicro whitebox hardware, and
Intel-based network cards.  The Hot Lava Systems cards have a nice
selection for a decent price if you're looking for SFP and SFP+ cards that
use Intel chipsets.

There might be some benefits in going with something like FreeBSD, but I
find that Linux has a lot more eyeballs on it, making it much easier to
develop for, troubleshoot, and support.  There are a few options if you
want to go the Linux route.

Option 1: Roll your own OS.  This takes quite a bit of effort, but if you
have the talent to do it you can generally get exactly what you want.

Option 2: Use an established distribution.

Vyatta doesn't seem to be doing much with its FOSS release Vyatta Core
anymore, but the community has forked the GPL parts into VyOS.  I've been
watching them pretty closely and helping out where I can; I think the
project is going to win over a lot of people over the next few years.

http://www.vyatta.org/
http://www.vyos.net/

The biggest point of failure I've experienced with Linux-based routers on
whitebox hardware has been HDD failure.  Other than that, the 100+ units
I've had deployed over the past 3+ years have been pretty much flawless.

Thankfully, they currently run an in-memory OS, so a disk failure only
affects logging.

If you want to build your own OS, I'll shamelessly plug a side project of
mine: RAMBOOT

http://ramboot.org/

RAMBOOT makes use of the Ubuntu Core rootfs, and a modified boot process
(added into initramfs-tools, so kernel updates generate the right initrd
automatically).  Essentially, I use a kernel ramdisk instead of an HDD for
the root filesystem, and / is mounted on /dev/ram1.
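
At boot that amounts to just a few steps inside the initrd (a simplified
sketch; the real script linked from ramboot.org also builds the fstab,
enables root, etc., and /bootflash stands in for wherever the image
actually lives):

mke2fs -q /dev/ram1
mount /dev/ram1 ${rootmnt}
tar -xzf /bootflash/rootfs.tar.gz -C ${rootmnt}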

The bootflash can be removed while the system is running as it's only
mounted to save system configuration or update the OS.

I haven't polished it up much, but there is enough there to get going
pretty quickly.

You'll also want to pay attention to the settings you use for the kernel.
 Linux is tuned as a desktop or server, not a router, so there are some
basics you should take care of (like disabling ICMP redirects, increasing
the ARP table size, etc).

I have some examples in: http://soucy.org/xorp/xorp-1.7-pre/TUNING
or http://soucy.org/tmp/netfilter.txt (more recent, but includes firewall
examples).
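
The short list I'd start with (illustrative values; size the neighbor
table thresholds to your own ARP/ND needs):

sysctl -w net.ipv4.conf.all.send_redirects=0
sysctl -w net.ipv4.conf.all.accept_redirects=0
sysctl -w net.ipv4.neigh.default.gc_thresh1=4096
sysctl -w net.ipv4.neigh.default.gc_thresh2=8192
sysctl -w net.ipv4.neigh.default.gc_thresh3=16384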

Also a note of caution.  I would stick with a longterm release of Linux.
 I've had good experience with 2.6.32, and 3.10.  I'm eager to use some of
the post-3.10 features, though, so I'm anxious for the next longterm branch
to be locked in.

If running a proxy server of any kind, you'll want to adjust
TCP_TIMEWAIT_LEN in the header file and re-compile the kernel, else you'll
run into ephemeral port exhaustion before you touch the limits of the CPU.
 I recommend 15 seconds (the default in Linux is 60).
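
It's a one-line change in the source tree before building; this assumes
the macro still lives in include/net/tcp.h with its default of (60*HZ),
which has been true for the kernels I've used:

sed -i 's/define TCP_TIMEWAIT_LEN (60\*HZ)/define TCP_TIMEWAIT_LEN (15*HZ)/' include/net/tcp.h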

Routing-engine-wise, I currently have a large XORP 1.6 deployment because
I have a need for multicast routing (PIM-SM), but XORP is very touchy and
takes quite a bit of operational experience to avoid problems.  Quagga has
much more active development and eyeballs.  BIRD is also very interesting.
 I like the model of BIRD a lot (more of a traditional daemon than trying
to be a Cisco or Juniper clone).  It doesn't seem to be as far along as
Quagga though.

One of the biggest advantages is that the low cost of hardware allows you
to maintain spare systems, reducing the time to service restoration in the
event of failure.  Dependability-wise, I feel that whitebox Linux systems
are pretty much at Cisco levels these days, especially if running in-memory.




On Thu, Dec 26, 2013 at 1:07 PM, jim deleskie deles...@gmail.com wrote:

 I've recently pushed a large BSD box to a load of over 300, for more than
 an hour, while under test,  some things slowed a little, but she kept on
 working!

 -jim


 On Thu, Dec 26, 2013 at 1:59 PM, Shawn Wilson ag4ve...@gmail.com wrote:

  Totally agree that a routing box should be standalone for tons of
 reasons.
  Even separating network routing and call routing.
 
  It used to be that BSD's network stack was much better than Linux's under
  load. I'm not sure if this is still the case - I've never been put in the
  situation where the Linux kernel was at its limits. FWIW
 
  Jared Mauch ja...@puck.nether.net wrote:
  Have to agree on the below. I've seen too many devices be so integrated
  they do no task well, and can't be rebooted to troubleshoot due to
  everyone using them.
  
  Jared Mauch
  
   On Dec 26, 2013, at 10:55 AM, Andrew D Kirch trel...@trelane.net
  wrote:
  
   Don't put all this in one box.
 
 
 




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: The Making of a Router

2013-12-26 Thread Ray Soucy
Chipsets and drivers matter a lot in the 1G+ range.

I've had pretty good luck with the Intel stuff because they offload a lot
in hardware and make open drivers available to the community.


On Thu, Dec 26, 2013 at 7:48 PM, Olivier Cochard-Labbé
oliv...@cochard.me wrote:

 On 26 Dec 2013 at 22:02, Nick Cameo sym...@gmail.com wrote:
 
  Any benchmarks of freebsd vs openbsd vs present day linux kern?
 
 Hi,

 Here are my own benchs using smallest packet size (sorry no Linux):
 http://dev.bsdrp.net/benchs/BSD.network.performance.TenGig.png

 My conclusion: building a line-rate gigabit router (or a few-rule ipfw
 firewall) is possible on a commodity server without problem with FreeBSD.
 Building a 10-gigabit router (this means routing about 14 Mpps) will be
 more complex at present.
 Note: The packet generator used was the high-perf netmap pkg-gen, allowing
 me to generate about 13Mpps on this same hardware (under FreeBSD), but I'm
 not aware of forwarding tools that use netmap: There are only packet
 generator and capture tools available.




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Vyatta to VyOS

2013-12-23 Thread Ray Soucy
Many here might be interested,

In response to Brocade not giving the community edition of Vyatta much
attention recently, some of the more active community members have created
a fork of the GPL code used in Vyatta.

It's called VyOS, and yesterday they released 1.0.

http://vyos.net/

I've been playing with the development builds and it seems to be every bit
as stable as the Vyatta releases.

Will be interesting to see how the project unfolds :-)

-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Meraki

2013-11-26 Thread Ray Soucy
Can confirm the current ER Lite is a plastic enclosure.
But for $100 I can definitely look past that.

Also, most of the UBNT distributors seem to be very knowledgeable about the
product line, so I'm sure they would know if you asked them :-)

We've been running XORP internally for about 100+ CPE devices (actually the
ones we were looking at Vyatta as a replacement for).  In the end I think
that moving to Quagga was a good thing for Vyatta, as XORP doesn't have a
very active developer community.  XORP releases since 1.6 have been a forked
code base that eventually became XORP 1.8.  It's very touchy, and requires
quite a bit of operational experience to know what will cause it to crash
and what won't.  The big thing you get with XORP that you don't with Quagga
is multicast routing; the big thing you get with Quagga is a more active
community.  I've been really interested in BIRD [0] as well, but haven't
had a chance to try it out.

Back to UBNT, though.  The ER makes use of a lot of non-free code (not so
great), but it's to facilitate hardware acceleration (very nice).  A lot of
functionality for IPv4 and IPv6 are both implemented in hardware, including
not just forwarding and NAT, but also regex matching for DPI.  It's how
they can get so much PPS for such a modest piece of hardware.  I believe
the chips they use are from Cavium [1], but I could be mistaken.

[0]. http://bird.network.cz/
[1]. http://www.cavium.com/


On Mon, Nov 25, 2013 at 6:47 PM, SilverTip257 silvertip...@gmail.com wrote:

 Date: Mon, 25 Nov 2013 09:32:10 -0500
 From: Ray Soucy r...@maine.edu
 To: Rob Seastrom r...@seastrom.com

 Cc: NANOG nanog@nanog.org
 Subject: Re: Meraki
 Message-ID:
 
 calftrnppbqlhrrdkmnt1nz8wi0k3b6kemt9tbgns-wfrhqs...@mail.gmail.com
 Content-Type: text/plain; charset=ISO-8859-1


 It looks like Brocade has swapped out Quagga with IP Infusion's non-free
 version, ZebOS.  They also decided to abandon the FOSS Vyatta Core
 project.


 A number of years back it was interesting to see Vyatta switch from XORP
 [0] to Quagga.  I found out quite a while after they made the move.

 Bummer.
 This move by Brocade is unfortunate.



 It's really unfortunate, as the FOSS project is the only reason I was
 interested in paying the licensing.  It was attractive to have Vyatta Core
 as a no-cost option for small things, and the subscription edition for
 higher visibility devices.  Now that they've moved away from having any
 FOSS project, I'm not really inclined to invest in the product, I'm sure
 there are others who feel the same way.


 There is a group of people who were active in the Vyatta community trying
 to get a fork of it going under the name VyOS, http://www.vyos.net/


 Thanks for pointing out VyOS.




 As far as Ubiquiti, it looks like about 2 years ago they actually hired a
 few people from Vyatta, Inc. to work on EdgeOS.  So development of EdgeOS
 has continued [and likely will continue] independently, though it looks
 like at least a few people from UBNT are interested in seeing VyOS happen
 and participating on their own time.  I know one of the early goals for
 VyOS is to get the documentation up on their Wiki and have a release of
 the
 current Vyatta Core with the name swapped out as a starting point.


 For those of you that purchased EdgeRouter Lite (ERLite-3) [2] units
 recently, do they come in a plastic enclosure or the steel enclosure like the
 EdgeRouter PoE (ERPoe-5) [3] units?  We got a few of each in at the office
 at different times (first ERL and later ERPoe).  Just curious.

 I guess I'm spoiled ... I like the metal case much better than the plastic
 ones.  Once I saw the case of the PoE model and saw the new pictures [4]
 for the ERL on Ubiquiti's site I've been holding out purchasing an ERL for
 my home.  I should bug our distributor, but I doubt they'd know since they
 aren't opening the boxes prior to shipment.

 Although a commercial alternative, Mikrotik hardware (ex: RB750GL [1]) and
 OS is attractive.  It appears all Mikrotik integrated solutions include
 some sort of enclosure (see www.routerboard.com).  The CLI takes some
 getting used to, but the syntax makes sense after a while. ;)  There's also
 a webui called webfig and a Windows client called Winbox.



 I really hope the VyOS project can get off the ground.  If any developers
 familiar with maintaining Debian-based distributions are on-list, I know
 the project is looking for people to help.


 +1
 I hope VyOS project succeeds.


 [0] http://en.wikipedia.org/wiki/XORP
 [1] http://routerboard.com/RB750GL
 [2]
 http://www.ubnt.com/media/product/edgemax/hardware-overview/edgerouter-lite-1.jpg
 [3]
 http://www.ubnt.com/media/product/edgemax/hardware-overview/edgerouter-poe-1.jpg
 [4] http://www.ubnt.com/edgemax#EdgeMAXhardware

 --
 ---~~.~~---
 Mike
 //  SilverTip257  //




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Meraki

2013-11-25 Thread Ray Soucy
It looks like Brocade has swapped out Quagga with IP Infusion's non-free
version, ZebOS.  They also decided to abandon the FOSS Vyatta Core project.

It's really unfortunate, as the FOSS project is the only reason I was
interested in paying the licensing.  It was attractive to have Vyatta Core
as a no-cost option for small things, and the subscription edition for
higher visibility devices.  Now that they've moved away from having any
FOSS project, I'm not really inclined to invest in the product, I'm sure
there are others who feel the same way.

There is a group of people who were active in the Vyatta community trying
to get a fork of it going under the name VyOS, http://www.vyos.net/

As far as Ubiquiti, it looks like about 2 years ago they actually hired a
few people from Vyatta, Inc. to work on EdgeOS.  So development of EdgeOS
has continued [and likely will continue] independently, though it looks
like at least a few people from UBNT are interested in seeing VyOS happen
and participating on their own time.  I know one of the early goals for
VyOS is to get the documentation up on their Wiki and have a release of the
current Vyatta Core with the name swapped out as a starting point.

I really hope the VyOS project can get off the ground.  If any developers
familiar with maintaining Debian-based distributions are on-list, I know
the project is looking for people to help.






On Sun, Nov 24, 2013 at 8:33 PM, Rob Seastrom r...@seastrom.com wrote:


 Ray Soucy r...@maine.edu writes:

  Pricing just popped up for the new EdgeRouter PRO last night and I was
  pretty blown away:
 
  $360
 
  For a device with 2 SFP ports, and 2M PPS.  That is music to my ears
 since
  we do a lot of dark fiber around the state even for smaller locations.
  I'm
  pretty excited to get one of these and see how they perform.

 Me too...  Will probably pony up for one as soon as I can.

 I haven't tried out their software in a more than trivial
 configuration, and haven't gotten IPv4 policy routing working (though
 I understand it's in there now).  An ER-Lite tunnels IPv6 fine,
 haven't tried it with PD...

  I feel like I'm at the risk for becoming a UBNT fanboy.  Does anyone have
  any qualified horror stories about EdgeMAX or UniFi?  Everything I've
 been
  able to find has been for nonsense configurations like complaining about
  trying to do OSPF over WiFi ... Who does that?

 They are an unmitigated disaster at hitting dates and calibrating
 expectations.  The EdgeRouter Pro has been on their web site for well
 over a year.  The EdgeRouter Carrier (with 10ge SFP+) has disappeared
 from their web site altogether.

 I'm concerned about the Vyatta CLI docs and their continuing
 availability.  Someone from Brocade should make some public statements
 to set our collective minds at ease.

 That said I've been reasonably happy with the UBNT stuff (mostly older
 outdoor gear) that I've bought over the past several years.  It's held
 up well.  Just don't count on the availability of anything that you
 can't order up from your favorite VAR and have it show up in the
 FedEx.

 -r




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Meraki

2013-11-22 Thread Ray Soucy
FWIW, I picked up a UniFi 3-pack of APs and built up a controller VM using
Ubuntu Server LTS and the beta multi-site controller code over the past
week.

I'm very impressed so far; it doesn't have all the bells and whistles of a
Cisco setup, sure, but I'm pretty shocked at the level of functionality
here and the ease of having APs use an off-site controller (they all phone
home over TCP so no VPN or port forwarding is required).
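
Getting the controller going on the VM was only a couple of steps (from
memory, so treat it as a sketch; the repo line is the one UBNT published
at the time and could change, and you'll also need to import their APT
signing key):

echo 'deb http://www.ubnt.com/downloads/unifi/debian stable ubiquiti' > /etc/apt/sources.list.d/ubnt.list
apt-get update && apt-get install unifi

After that, the APs just need to be able to resolve and reach the
controller to phone home.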

I'm interested in UniFi mainly for remote Libraries that don't have any IT
staff but need a little more than a router from Best Buy.

Also of interest is the EdgeMAX line.  I also got the EdgeRouter LITE for
testing this past week after finding out it runs a fork of Vyatta (EdgeOS)
and is developed by former Vyatta employees.  For a sub-$100 device ...
very impressive.

Pricing just popped up for the new EdgeRouter PRO last night and I was
pretty blown away:

$360

For a device with 2 SFP ports, and 2M PPS.  That is music to my ears since
we do a lot of dark fiber around the state even for smaller locations.  I'm
pretty excited to get one of these and see how they perform.

I wish I would have bothered looking at Ubiquiti sooner, really.  I'm a
little embarrassed to admit I initially wrote them off because the prices
were so low, but the more I look into these guys the more I like them.

I feel like I'm at risk of becoming a UBNT fanboy.  Does anyone have
any qualified horror stories about EdgeMAX or UniFi?  Everything I've been
able to find has been for nonsense configurations like complaining about
trying to do OSPF over WiFi ... Who does that?






On Fri, Nov 22, 2013 at 1:34 AM, Seth Mos seth@dds.nl wrote:


 On 22 Nov 2013, at 06:37, Jay Ashworth wrote the following:

  - Original Message -
  Anecdote:
 
  My local IHOP finally managed to get Wifi internet access in the
 restaurant.
 
  For reasons unknown to me, it's a Meraki box, backhauled *over T-mobile*.
 
  That's just as unpleasant as you'd think it would be, And More!
 
  Both the wifi and 3G (yes, 3G) boxes lock up on a fairly regular basis,
  requiring a power cycle, which, generally, they'll only do because I've
  been eating there for 20 years, and they trust me when I ask them to.
 
  I can't say whether this provides any illumination on the rest of their
  product line, but...

 To compound matters, I'd go as far as to say that any wireless solution on
 2.4GHz isn't really a wireless solution. It's just not feasible anymore in
 2013; there is just *so much* interference from everything using the
 unlicensed 2.4GHz band that its own success is its greatest downfall.

 Reliable wireless isn't (to use the famous war quote, "friendly fire isn't")

 For whatever reason, whoever I talk to, they all tell me that their ISP
 here sucks, and if I ask further whether they are using the wireless
 thingamabob that the ISP shipped them, they say yes. So, that's about
 right then.

 I've been using a PCengines.ch Alix router for years now (AMD Geode, x86,
 256MB ram, CF) with a cable modem in bridge mode with seperate dual band
 access points in the places where I need them (living room, attic office)
 and I can't say that my experiences with the ISP here mesh with theirs.

 Anyhow, if you are going to deploy wireless, make sure to use dual band,
 and name the 2.4GHz SSID internet and the 5GHz SSID faster-internet.
 You'll see people having a heck of a better time. Social engineering works
 :)

 When we chose the Ubiquiti wireless kit we could deploy twice as many APs
 for the price of one of the other APs. This effectively means we have
 a very dense wireless network that covers the entire building, and lots of
 kit that can actually see and use the 5GHz band.

 Setup was super easy: I added a unifi DNS name that points to my unifi
 controller host and I get an email that a new AP is ready to be put into
 service. Having a local management host instead of some cloud was a hard
 requirement. I also like that I can just apt-get update; apt-get upgrade
 the software. By using DNS, remote deployment was super easy too: send the
 unit off and let them plug it in; it then comes onto the network and
 registers itself.

 I believe every current Apple iDevice supports the 5GHz band,
 and all the Dell gear we purchase also comes ordered with it. Heck, even my
 2011 Sony Xperia T has 5GHz wireless now, as do the current Samsung Galaxy
 S3 and S4.

 Best regards,

 Seth




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Meraki

2013-11-20 Thread Ray Soucy
I'm very interested in other user experiences with Ubiquiti for smaller
deployments vs. traditional Cisco APs and WLC, especially for a collection
of rural areas.  The price point and software controller are very
attractive.

Anyone running a centralized controller for a lot of remote sites?


On Tue, Nov 19, 2013 at 1:57 PM, Seth Mos seth@dds.nl wrote:


 On 19 Nov 2013, at 18:25, Hank Disuko wrote the following:

  Hi folks,
 
  I've traditionally been a Cisco Catalyst shop for my switching gear.
 
  I am doing a significant hardware refresh in one of my offices, which
 will entail replacing about 20 access switches and a couple core devices.
  Pretty simple L3 VLAN environment with VRRP/HSRP, on the physical end I
 have 1G fibre/copper and 10G fibre.  My core switch of choice will likely
 be the Cat 4500 series.
 
  I'm considering Cisco's Meraki platform for my access layer and I'm
 looking for deployment stories of folks that have deployed Meraki in the
 past...good/bad/ugly kinda stuff.
 
  I know Meraki hardcores were upset when Cisco acquired them, but not
 exactly sure why.
 
  Anyway, any thoughts would be useful.  Thanks!

 We used to use the 3Com wireless kit before it became H3C, and then HP,
 which worked ok but the engrish in the UI was horrid.

 We've since purchased 25 Ubiquiti wireless access points, specifically the
 300N Pro access points; they work really well, pricing is competitive,
 and the management is nice.

 I've setup a Debian VM, installed their management software from their APT
 repo and just go from there. The version 3 software also supports
 multi-site which is really nice.

 It's a huge upgrade over our previous wireless though.

 Cheers,
 Seth




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: DNS and nxdomain hijacking

2013-11-05 Thread Ray Soucy
http://en.wikipedia.org/wiki/Response_policy_zone

RPZ functionality has been widely adopted in the past few years.  Also
known as DNS Firewall.
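
In BIND it's a policy zone wired into the options block (a minimal sketch;
the zone name and file paths are illustrative, and the response-policy
clause goes inside your existing options {} block as shown in the comment):

# named.conf: options { response-policy { zone "rpz.local"; }; };
cat >> /etc/bind/named.conf.local <<'EOF'
zone "rpz.local" { type master; file "/etc/bind/rpz.local.zone"; };
EOF
cat > /etc/bind/rpz.local.zone <<'EOF'
$TTL 300
@ IN SOA localhost. root.localhost. (1 3600 600 86400 300)
  IN NS localhost.
badhost.example.com IN CNAME .   ; rewrite this name to NXDOMAIN
EOF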


On Tue, Nov 5, 2013 at 10:30 PM, Andrew Sullivan asulli...@dyn.com wrote:

 On Tue, Nov 05, 2013 at 07:57:59PM -0500, Phil Bedard wrote:
 
  I think every major residential ISP in the US has been doing this for 5+
  years now.

 Comcast doesn't, because it breaks DNSSEC.

 A

 --
 Andrew Sullivan
 Dyn, Inc.
 asulli...@dyn.com
 v: +1 603 663 0448




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: latest Snowden docs show NSA intercepts all Google and Yahoo DC-to-DC traffic

2013-10-31 Thread Ray Soucy
Was the unplanned L3 DF maintenance that took place on Tuesday a frantic
removal of taps? :-)


On Wed, Oct 30, 2013 at 3:30 PM, Scott Weeks sur...@mauigateway.com wrote:

 On Wed, Oct 30, 2013 at 1:46 PM, Jacque O'Lantern 
 jacque.olant...@yandex.com wrote:

 
 http://www.washingtonpost.com/world/national-security/nsa-infiltrates-links-to-yahoo-google-data-centers-worldwide-snowden-documents-say/2013/10/30/e51d661e-4166-11e3-8b74-d89d714ca4dd_story.html


 --- brandon.galbra...@gmail.com wrote:
 From: Brandon Galbraith brandon.galbra...@gmail.com

 Google is speeding up its initiative to encrypt all DC to DC traffic, as
 this was suspected a short time ago.

 http://www.informationweek.com/security/government/nsa-fallout-google-speeds-data-encryptio/240161070
 -


 This goes back to our conversation last June:

 http://mailman.nanog.org/pipermail/nanog/2013-June/thread.html#59352

 now $189K may not seem as 'big'!  ;-)

 (http://mailman.nanog.org/pipermail/nanog/2013-June/059371.html)


 scott




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Cisco DMVPN Configuration Question

2013-08-16 Thread Ray Soucy
Don't usually poke NANOG for a second pair of eyes, but got hit with an
urgent need to get connectivity up on a small budget.

I've run into a situation where I require multiple DMVPN spokes to be
behind a single NAT IP (picture of things to come with CGN?)

The DMVPN endpoint works fine behind NAT until a 2nd is added behind the
same IP address.  At that point the hub gets confused and I start seeing
packet loss to the endpoints in a round-robin fashion.

As far as I can see Cisco documentation says pretty clearly that each DMVPN
spoke requires a unique IP address.  Is there any way around this, or do I
need to be looking at an alternative VPN solution?

Hub config:

8<----------------
 description DMVPN
 bandwidth 10
 ip address 10.231.254.1 255.255.255.0
 no ip redirects
 ip mtu 1400
 ip nhrp authentication ! removed
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 ip nhrp redirect
 ip tcp adjust-mss 1360
 tunnel source ! removed
 tunnel mode gre multipoint
 tunnel key 0
 tunnel protection ipsec profile DMVPN
8<----------------

Spoke:

8<----------------
interface Tunnel2
 description DMVPN
 bandwidth 10
 ip vrf forwarding DMVPN
 ip address 10.231.254.10 255.255.255.0
 no ip redirects
 ip mtu 1400
 ip nhrp authentication ! removed
 ip nhrp map multicast ! removed
 ip nhrp map 10.231.254.1 ! removed
 ip nhrp network-id 1
 ip nhrp nhs 10.231.254.1
 ip nhrp shortcut
 ip tcp adjust-mss 1360
 tunnel source FastEthernet0/0
 tunnel mode gre multipoint
 tunnel key 0
 tunnel protection ipsec profile DMVPN
end
8<----------------

-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: Muni fiber: L1 or L2?

2013-01-31 Thread Ray Soucy
Late to the conversation, but I'll chime in that we established a
model in Maine that is working pretty well, at least for middle-mile
fiber.

When we started building out MaineREN (our RON) we decided that having
the University own the fiber would tie it up in political red tape.
So much so that it would ultimately not be made available to the
private sector (because incumbents would accuse us of competing with
them using public funds).  We knew this because we had already spent a
year in the legislature fighting off industry lobbyists.

Obviously there are considerable investments in such infrastructure
that many private companies are unwilling or unable to make in rural
areas (ROI takes too long), so we really wanted to make sure that
future facilities would be built out in a way that would allow service
providers to expand into the state cheaply, encourage competition, and
ultimately provide better services at lower costs.

The goal was to establish geographically diverse, high strand-count
rings to reach the majority of the state, so we pitched it in a
public-private partnership to go after Recovery Act funding.

As of a few months ago the build-out is complete, and the first
networks to make use of the fiber are starting to come online
(including MaineREN).

The way we did it was to have the state government create a new public
utility designation of "Dark Fiber Provider."  There are a few rules
in place to keep things fair: mainly, they're forbidden to provide lit
services, and they're required to provide open access to anyone at
published rates.

The result is Maine Fiber Company:

http://www.mainefiberco.com/

It's still early on, but I'm anxious to see how things look in 10 years or so.

A lot of people who like the idea of what we've done aren't sure if
it's a good model to apply for last mile fiber.  Personally, I think
replicating this model to deliver dark fiber to the home (much like
electricity) is the only way we'll be able to shield providers from
having to make major investments to deliver the level of service we
really need.  By keeping it as a dark-fiber only service, you create
an environment where there is competition instead of one provider
keeping speeds low and prices high.

I initially thought having L2 separation would be good in that service
changes could be done remotely, etc.  But after giving it some
thought, I think it places way too much potential for L2 to be the
bottleneck or source of problematic service and if it's provided by a
public utility or municipality it could take very long to fix (if it
gets fixed at all) due to politics and budget hawks.  I really want
to have choice between providers even at the L2 level.




On Tue, Jan 29, 2013 at 12:54 PM, Jay Ashworth j...@baylink.com wrote:
 - Original Message -
 From: Leo Bicknell bickn...@ufp.org

 I am a big proponent of muni-owned dark fiber networks. I want to
 be 100% clear about what I advocate here:

 - Muni-owned MMR space, fiber only, no active equipment allowed. A
 big cross connect room, where the muni-fiber ends and providers are
 all allowed to colocate their fiber term on non-discriminatory terms.

 - 4-6 strands per home, home run back to the muni-owned MMR space.
 No splitters, WDM, etc, home run glass. Terminating on an optical
 handoff inside the home.

 Hmmm.  I tend to be a Layer-2-available guy, cause I think it lets smaller
 players play.  Does your position (likely more deeply thought out than
 mine) permit Layer 2 with Muni ONT and Ethernet handoff, as long as clients
 are *also* permitted to get a Layer 1 patch to a provider in the fashion you
 suggest?

 (I concur with your 3-pair delivery, which makes this more practical on an
 M-A-C basis, even if it might require some users to have multiple ONTs...)

 Cheers,
 -- jra
 --
 Jay R. Ashworth  Baylink   
 j...@baylink.com
 Designer The Things I Think   RFC 2100
 Ashworth  Associates http://baylink.pitas.com 2000 Land Rover DII
 St Petersburg FL USA   #natog  +1 727 647 1274




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



Re: Muni fiber: L1 or L2?

2013-01-31 Thread Ray Soucy
 1.  Must sell dark fiber to any purchaser.
 2.  Must sell dark fiber to all purchasers on equal terms.
 (There must be a published price list and there cannot be deviations
 from that price list. If the price list is modified, existing 
 customers
 receive the new pricing at the beginning of their next billing cycle.)
 3.  May provide value-added L2 services
 4.  If L2 services are provided, they are also subject to rule 2.
 5.  May not sell L3 or higher level services.
 6.  May not hold ownership or build any form of alliance or affiliation 
 with
 a provider of L3 or higher level services.

I think rule #3 is the kind of thing that sounds like a good idea, but
ends up being abused in practice.

My personal view is that you really want that separation in place.
You don't want a situation where the dark fiber provider gives
priority to their L2 outages and gets around to their competitors
later.

Businesses are in the business of profit.  Nothing wrong with that,
but if you want it to be a fair playing field you need to avoid this
kind of conflict of interest.

We've seen the same behavior with ILECs and small ISPs.  They were
required to open up their network to competing ISPs, but did
everything they could to make it as difficult as possible.  You really
want to create a situation where that temptation isn't even there.

We've also seen that, when left up to the private sector, even last-mile
solutions suffer from the same cherry-picking of profitable
locations to serve: an example would be an apartment complex having
fiber delivered while the house next door does not.  You
can't really blame the private sector for it, but if you want the idea
of FTTH to be a universal service, you really need to apply the public
utility model to it.




P.S. Fletcher Kittredge is the private side of the public-private
partnership that made Maine Fiber Company possible and deserves at
least 50% of the credit if not more (Google him).  Great to see him
on-list.

P.P.S. I should also note that my boss, Jeff, would be the public
side of that, and he isn't quite on board with my position on
extending FTTH as a public utility.  He still has faith in the private
sector to take care of it.  ;-)  I mostly stand on the sidelines and
provide commentary, I'm not suited for the level of political
involvement it actually takes to make the magic happen.

-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



Re: TCP time_wait and port exhaustion for servers

2012-12-07 Thread Ray Soucy
+1

Thanks for the tip, this looks very useful.

Looks like it was only introduced in 2.6.35, we're still on 2.6.32 ...
might be worth the upgrade, it just takes so long to test new kernel
versions in this application.

We ended up dropping TCP_TIMEWAIT_LEN to 30 seconds as a band-aid for
now, along with the expanded port range.
In talking to others, 20 seconds seems to be 99%+ safe, with the sweet
spot seeming to be 24 seconds or so.  So we opted to just go with 30
seconds and be cautious, even though others claim to have gone as low as
10 or 5 seconds without issue.  I'll let people know if it introduces any
problems.

In talking with the author of HAproxy, he seems to be in the camp that
using SO_LINGER of 0 might be the way to go, but is unsure of how
servers would respond to it; we'll likely try a build with that method
and see what happens at some point.
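
For illustration, here is what the SO_LINGER approach looks like at the
socket level -- a minimal Python sketch of the idea (my own, not the
actual Squid/HAproxy change): an l_onoff of 1 with an l_linger of 0
turns close() into an abortive close, so the port never enters
TIME_WAIT.

import socket
import struct

def connect_abortive(host, port):
    # SO_LINGER with l_onoff=1 and l_linger=0: close() sends RST
    # instead of FIN, so the local port skips TIME_WAIT entirely.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                 struct.pack('ii', 1, 0))
    s.connect((host, port))
    return s

s = connect_abortive('www.example.com', 80)
s.sendall(b'HEAD / HTTP/1.0\r\nHost: www.example.com\r\n\r\n')
print(s.recv(512))
s.close()  # RST here; the remote end sees a connection reset

The open question, as noted above, is how servers react to seeing the
reset instead of a clean FIN.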




On Fri, Dec 7, 2012 at 4:51 PM, Matthew Palmer mpal...@hezmatt.org wrote:
 On Thu, Dec 06, 2012 at 08:58:10AM -0500, Ray Soucy wrote:
  net.ipv4.tcp_keepalive_intvl = 15
  net.ipv4.tcp_keepalive_probes = 3
  net.ipv4.tcp_keepalive_time = 90
  net.ipv4.tcp_fin_timeout = 30

 As discussed, those do not affect TCP_TIMEWAIT_LEN.

 There is a lot of misinformation out there on this subject so please
 don't just Google for 5 min. and chime in with a solution that you
 haven't verified yourself.

 We can expand the ephemeral port range to be a full 60K (and we have
 as a band-aid), but that only delays the issue as use grows.  I can
 verify that changing it via:

 echo 1025 65535 > /proc/sys/net/ipv4/ip_local_port_range

 Does work for the full range, as a spot check shows ports as low as
 2000 and as high as 64000 being used.

 I can attest to the effectiveness of this method, however be sure and add
 any ports in that range that you use as incoming ports for services to
 /proc/sys/net/ipv4/ip_local_reserved_ports, otherwise the first time you
 restart a service that uses a high port (*cough*NRPE*cough*), its port will
 probably get snarfed for an outgoing connection and then you're in a sad,
 sad place.

 - Matt

 --
 [An ad for Microsoft] uses the musical theme of the Confutatis Maledictis
 from Mozart's Requiem. Where do you want to go today? is on the screen,
 while the chorus sings Confutatis maledictis, flammis acribus addictis,.
 Translation: The damned and accursed are convicted to the flames of hell.





-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



Re: TCP time_wait and port exhaustion for servers

2012-12-06 Thread Ray Soucy
It does require a fixed source address.  The box is also a router and
firewall, so it has many IP addresses available to it.

On Wed, Dec 5, 2012 at 5:24 PM, William Herrin b...@herrin.us wrote:
 On Wed, Dec 5, 2012 at 5:01 PM, Mark Andrews ma...@isc.org wrote:
 In message 
 CAP-guGW6oXo=UfTfg+SDiFjB4=qxpsho+yfk6vxnlkcc58p...@mail.gmail.com,
  William Herrin writes:
 The thing is, Linux doesn't behave quite that way.

 If you do an anonymous connect(), that is you socket() and then
 connect() without a bind() in the middle, then the limit applies *per
 destination IP:port pair*. So, you should be able to do 30,000
 connections to 192.168.1.1 port 80, another 30,000 connections to
 192.168.1.2 port 80, and so on.

 The socket api is missing a bind + connect call which restricts the
 source address when making the connect.  This is needed when you
 are required to use a fixed source address.

 Hi Mark,

 There are ways around this problem in Linux. For example you can mark
 a packet with iptables based on the uid of the process which created
 it and then you can NAT the source address based on the mark. Little
 messy but the tools are there.

 Anyway, Ray didn't indicate that he needed a fixed source address
 other than the one the machine would ordinarily choose for itself.

 Regards,
 Bill Herrin


 --
 William D. Herrin  her...@dirtside.com  b...@herrin.us
 3005 Crane Dr. .. Web: http://bill.herrin.us/
 Falls Church, VA 22042-3004



-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



Re: TCP time_wait and port exhaustion for servers

2012-12-06 Thread Ray Soucy
This tunes conntrack, not local TCP on the server itself.

On Wed, Dec 5, 2012 at 4:18 PM, Cyril Bouthors cy...@bouthors.org wrote:
 On  5 Dec 2012, r...@maine.edu wrote:

 Where there is no way to change this through /proc

 10:17PM lenovo:~% sudo sysctl -a |grep wait
 net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
 net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
 net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
 net.ipv4.netfilter.ip_conntrack_tcp_timeout_fin_wait = 120
 net.ipv4.netfilter.ip_conntrack_tcp_timeout_close_wait = 60
 net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 120
 10:17PM lenovo:~%

 ?

 We use this to work around the default limit on our internal load balancers.

 HIH.
 --
 Cyril Bouthors - Administration Système, Infogérance
 ISVTEC SARL, 14 avenue de l'Opéra, 75001 Paris
 1 rue Émile Zola, 69002 Lyon
 Tél : 01 84 16 16 17 - Fax : 01 77 72 57 24
 Ligne directe : 0x7B9EE3B0E



-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



Re: TCP time_wait and port exhaustion for servers

2012-12-06 Thread Ray Soucy
 net.ipv4.tcp_keepalive_intvl = 15
 net.ipv4.tcp_keepalive_probes = 3
 net.ipv4.tcp_keepalive_time = 90
 net.ipv4.tcp_fin_timeout = 30

As discussed, those do not affect TCP_TIMEWAIT_LEN.

There is a lot of misinformation out there on this subject so please
don't just Google for 5 min. and chime in with a solution that you
haven't verified yourself.

We can expand the ephemeral port range to be a full 60K (and we have
as a band-aid), but that only delays the issue as use grows.  I can
verify that changing it via:

echo 1025 65535 > /proc/sys/net/ipv4/ip_local_port_range

Does work for the full range, as a spot check shows ports as low as
2000 and as high as 64000 being used.

While this works fine for the majority of our sites, as they average
well below that, for a handful peak hours can spike above 1000
connections per second; so we would really like to be able to provide
closer to 2000 or 2500 connections a second for the amount of
bandwidth being delivered through the unit (full gigabit).

But ideally we would find a way to significantly reduce the number of
ports being chewed up for outgoing connections.

On the incoming side everything just makes use of the server port
locally so it's not an issue.

Trying to avoid using multiple source addresses for this as it would
involve a fairly large configuration change to about 100+ units; each
requiring coordination with the end-user, but it is a last resort
option.

The other issue is that this is all essentially squid, so a drastic
re-design of how it handles networking is not ideal either.




On Thu, Dec 6, 2012 at 8:25 AM, Kyrian kyr...@ore.org wrote:
 On  5 Dec 2012, r...@maine.edu wrote:

  Where there is no way to change this through /proc


 ...

 Those netfilter connection tracking tunables have nothing to do with the
 kernel's TCP socket handling.

 No, but these do...

 net.ipv4.tcp_keepalive_intvl = 15
 net.ipv4.tcp_keepalive_probes = 3
 net.ipv4.tcp_keepalive_time = 90
 net.ipv4.tcp_fin_timeout = 30

 I think the OP was wrong, and missed something.

 I'm no TCP/IP expert, but IME connections go into TIME_WAIT for a period
 pertaining to the above tuneables (X number of probes at Y interval until
 the remote end is declared likely dead and gone), and then go into FIN_WAIT
 and then IIRC FIN_WAIT2 or some other state like that before they are
 finally killed off. Those tunables certainly seem to have actually worked in
 the real world for me, whether they are right in theory or not is possibly
 another matter.

 Broadly speaking I agree with the other posters who've suggested adding
 other IP addresses and opening up the local port range available.

 I'm assuming the talk of 30k connections is because the OP's proxy has a
 'one in one out' situation going on with connections, and that's why your
 ~65k pool for connections is halved.

 K.





-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



Re: TCP time_wait and port exhaustion for servers

2012-12-06 Thread Ray Soucy
This issue is really for connections that close properly and without
any problem.

The application closes the socket and doesn't care about it; but the
OS keeps it in the TIME_WAIT state, as required by the TCP RFC, in
case stray data arrives after the connection has closed (out-of-order
transmission).

I think we're going to go with dropping it to 30 seconds instead of 60
seconds and seeing how that goes.  It seems to be the direction taken
by people who have implemented high traffic load balancers and proxy
servers.

I was hoping someone would have real data on what a realistic time
window is for keeping a socket in a TIME_WAIT state, but it doesn't
seem like anyone has collected data on it.




On Thu, Dec 6, 2012 at 11:33 AM, Jean-Francois Mezei
jfmezei_na...@vaxination.ca wrote:
 Question:

 If a TCP connection is left hanging and continues to hoard the port for
 some time before it times out, shouldn't the work to be focused on
 finding out why the connection is not properly closed instead of trying
 to support a greater number of hung connections waiting to time out ?






-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



TCP time_wait and port exhaustion for servers

2012-12-05 Thread Ray Soucy
RFC 793 arbitrarily defines 2MSL (how long to hold a socket in
TIME_WAIT state before cleaning up) as 4 min.

Linux is a little more reasonable in this and has it baked into the
source as 60 seconds in /usr/src/linux/include/net/tcp.h:
#define TCP_TIMEWAIT_LEN (60*HZ)

Where there is no way to change this through /proc (probably a good
idea to keep users from messing with it), I am considering re-building
a kernel with a lower TCP_TIMEWAIT_LEN to deal with the following
issue.

With a 60 second timeout on TIME_WAIT, local port identifiers are tied
up from being used for new outgoing connections (in this case a proxy
server).  The default local port range on Linux can easily be
adjusted; but even when bumped up to a range of 32K ports, the 60
second timeout means you can only sustain about 500 new connections
per second before you run out of ports.

There are two options to try and deal with this: tcp_tw_reuse and
tcp_tw_recycle; but both seem to be less than ideal.  With
tcp_tw_reuse, it doesn't appear to be effective in situations where
you're sustaining 500+ new connections per second rather than a small
burst.  With tcp_tw_recycle it seems like too big of a hammer and has
been reported to cause problems with NATed connections.

The best solution seems to be trying to keep TIME_WAIT in place, but
being faster about it.

30 seconds would get you to 1000 connections a second; 15 to 2000; and
10 seconds to about 3000 a second.
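
That's just the port pool divided by the TIME_WAIT interval; the same
arithmetic as a quick Python sketch:

def sustainable_rate(port_pool, timewait_seconds):
    # Each new outgoing connection holds a local port for
    # timewait_seconds after close, so in steady state the
    # sustainable rate is the pool size over the interval.
    return port_pool // timewait_seconds

for tw in (60, 30, 15, 10):
    print(tw, sustainable_rate(30000, tw))
# 60s -> 500/s, 30s -> 1000/s, 15s -> 2000/s, 10s -> 3000/s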

A few questions:

Does anyone have any data on how typical it is for TIME_WAIT to be
necessary beyond 10 seconds on a modern network?
Has anyone done some research on how low you can make TIME_WAIT safely?
Is this a terrible idea?  What alternatives are there?  Keep in mind
this is a proxy server making outgoing connections as the source of
the problem; so things like SO_REUSEADDR which work for reusing
sockets for incoming connections don't seem to do much in this
situation.

Anyone running large proxies or load balancers have this situation?
If so what is your solution?




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



Re: TCP time_wait and port exhaustion for servers

2012-12-05 Thread Ray Soucy
This would be outgoing connections sourced from the IP of the proxy,
destined to whatever remote website (so 80 or 443) requested by the
user.

Essentially it's a modified Squid service that is used to filter HTTP
for CIPA compliance (required by the government) to keep children in
public schools from stumbling onto inappropriate content.

Like most web traffic, the majority of these connections open and
close in under a second.  When we get to a point that there is enough
traffic from users behind the proxy to be generating over 500 new
outgoing connections per second, sustained, we start having users
experience an error where there are no local ports available to Squid
to use since they're all tied up in a TIME_WAIT state.

Here is an example of netstat totals on a box we're seeing the behavior on:

    10 LAST_ACK
    32 LISTEN
     5 SYN_RECV
     5 CLOSE_WAIT
   756 ESTABLISHED
    26 FIN_WAIT1
    40 FIN_WAIT2
     5 CLOSING
    10 SYN_SENT
481947 TIME_WAIT

As a band-aid we've opened up the local port range to allow up to 50K
local ports with /proc/sys/net/ipv4/ip_local_port_range, but they're
brushing up against that limit again at peak times.

It's a shame because memory and CPU-wise the box isn't breaking a sweat.

Enabling TW_REUSE doesn't seem to have any effect for this case
(/proc/sys/net/ipv4/tcp_tw_reuse)
Using TW_RECYCLE drops the TIME_WAIT count to about 10K instead of
50K, but everything I read online says to avoid using TW_RECYCLE
because it will break things horribly.

Someone responded off-list saying that TIME_WAIT is controlled by
/proc/sys/net/ipv4/tcp_fin_timeout, but that is just incorrect
information that has been parroted on a lot of blogs.  There is no
relation between fin_timeout and TCP_TIMEWAIT_LEN.

This level of use seems to translate into about 250 Mbps of traffic on
average, FWIW.




On Wed, Dec 5, 2012 at 11:56 AM, JÁKÓ András jako.and...@eik.bme.hu wrote:
  Ray,

 With a 60 second timeout on TIME_WAIT, local port identifiers are tied
 up from being used for new outgoing connections (in this case a proxy
 server).  The default local port range on Linux can easily be
 adjusted; but even when bumped up to a range of 32K ports, the 60
 second timeout means you can only sustain about 500 new connections
 per second before you run out of ports.

 Is that 500 new connections per second per {protocol, remote address,
 remote port} tuple, that's too few for your proxy? (OK, this tuple is more
 or less equivalent with only {remote address} if we talk about a web
 proxy.) Just curious.

 Regards,
 András



-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



Re: TCP time_wait and port exhaustion for servers

2012-12-05 Thread Ray Soucy
For each second that goes by, you remove X ports from the available
pool for new connections, for however long TCP_TIMEWAIT_LEN is
set to (60 seconds by default in Linux).

In this case it's making quick connections for HTTP requests (most of
which finish in less than a second).

Say you have a pool of 30,000 ports and 500 new connections per second
(typical):
1 second goes by, you now have 29500
10 seconds go by, you now have 25000
30 seconds go by, you now have 15000
at 59 seconds you're down to 500,
and at 60 the first second's 500 come back, holding steady with 29500
tied up in TIME_WAIT.  Everyone is happy.

Now say that you're seeing an average of 550 connections a second.
Suddenly there aren't any available ports to use.

So, your first option is to bump up the range of allowed local ports;
easy enough, but even if you open it up as much as you can and go from
1025 to 65535, that's still only 64000 ports; with your 60 second
TCP_TIMEWAIT_LEN, you can sustain an average of 1000 connections a
second.

Our problem is that our busy sites are easily peaking to that 1000
connection a second average, and when we enable TCP_TW_RECYCLE, we see
them go past that to 1500 or so connections per second sustained.

Unfortunately, TCP_TW_RECYCLE is a little too blunt a hammer and breaks TCP.

From what I've read and heard from others, in a high connection
environment the key is really to drop down the TCP_TIMEWAIT_LEN.

My question is basically, how low can you go?

There seems to be consensus around 20 seconds being safe, 15 being
99% OK, and 10 or less being problematic.

So if I rebuild the kernel to use a 20 second timeout, then that 30K
port pool can sustain 1500, and a 60K port pool can sustain 3000
connections per second.

The software could be re-written to round-robin through IP addresses
for outgoing requests, but we're trying to avoid that.
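
If we did go that route, the mechanics are simple -- a Python sketch of
the idea (hypothetical addresses; Squid itself is C, this just shows
the concept): bind() to an explicit source address before connect(), so
each address contributes its own ephemeral port space.

import itertools
import socket

# Hypothetical pool of local addresses configured on the box.
SOURCES = itertools.cycle(['192.0.2.10', '192.0.2.11', '192.0.2.12'])

def connect_round_robin(host, port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Port 0 lets the kernel pick the ephemeral port, but the
    # TIME_WAIT budget is now per source address instead of shared.
    s.bind((next(SOURCES), 0))
    s.connect((host, port))
    return s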




On Wed, Dec 5, 2012 at 1:58 PM, William Herrin b...@herrin.us wrote:
 On Wed, Dec 5, 2012 at 12:09 PM, Ray Soucy r...@maine.edu wrote:
 Like most web traffic, the majority of these connections open and
 close in under a second.  When we get to a point that there is enough
 traffic from users behind the proxy to be generating over 500 new
 outgoing connections per second, sustained, we start having users
 experience an error where there are no local ports available to Squid
 to use since they're all tied up in a TIME_WAIT state.

 Here is an example of netstat totals on a box we're seeing the behavior on:

 481947 TIME_WAIT

 Stupid question but how does 500 x 60 = 481947?  To have that many
 connections in TIME_WAIT on a 60 second timer, you'd need more like
 8000 connections per second, wouldn't you?

 Regards,
 Bill Herrin




 --
 William D. Herrin  her...@dirtside.com  b...@herrin.us
 3005 Crane Dr. .. Web: http://bill.herrin.us/
 Falls Church, VA 22042-3004



-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



Re: TCP time_wait and port exhaustion for servers

2012-12-05 Thread Ray Soucy
There is an extra 7 in that number; it was 48194 (I was sitting at a
different PC so I typed it instead of copy-pasting).

On Wed, Dec 5, 2012 at 1:58 PM, William Herrin b...@herrin.us wrote:
 On Wed, Dec 5, 2012 at 12:09 PM, Ray Soucy r...@maine.edu wrote:
 Like most web traffic, the majority of these connections open and
 close in under a second.  When we get to a point that there is enough
 traffic from users behind the proxy to be generating over 500 new
 outgoing connections per second, sustained, we start having users
 experience an error where there are no local ports available to Squid
 to use since they're all tied up in a TIME_WAIT state.

 Here is an example of netstat totals on a box we're seeing the behavior on:

 481947 TIME_WAIT

 Stupid question but how does 500 x 60 = 481947?  To have that many
 connections in TIME_WAIT on a 60 second timer, you'd need more like
 8000 connections per second, wouldn't you?

 Regards,
 Bill Herrin




 --
 William D. Herrin  her...@dirtside.com  b...@herrin.us
 3005 Crane Dr. .. Web: http://bill.herrin.us/
 Falls Church, VA 22042-3004



-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



Re: Programmers can't get IPv6 thus that is why they do not have IPv6 in their applications....

2012-11-30 Thread Ray Soucy
I'll see your disagree and raise you another ;-)

I would say you almost never want to store addresses as character data
unless the only thing you're using them for is logging (even then it's
questionable).  I run into people who do this all the time and it's a
nightmare.

It's easy to store a v6 address as a string, but when you want to select a
range of IPv6 addresses from a database, not having them represented as
integers means you can't do efficient numerical comparisons in your SQL
statements, and it makes indexing your table slower; to put it simply, it
doesn't scale well.

So as a general rule, if you need to do any comparison or calculation on a
v6 address, please don't store it as a string.
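
To make the range-query point concrete, a quick Python sketch
(illustrative only): comparing (hi, lo) pairs lexicographically orders
addresses exactly like comparing the full 128-bit values, which is the
property a composite index on two 64-bit columns exploits.

def split128(v6_as_int):
    # Split a 128-bit address into unsigned 64-bit high/low halves.
    return (v6_as_int >> 64, v6_as_int & 0xFFFFFFFFFFFFFFFF)

a = 0x20010db8000000000000000000000001  # 2001:db8::1
b = 0x20010db8000000000000000000000002  # 2001:db8::2

# Tuple comparison matches raw 128-bit comparison, so a range
# predicate on (hi, lo) columns can walk a composite index.
assert (split128(a) < split128(b)) == (a < b)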

From an efficiency standpoint, you want to store it in chunks of the
largest integer your DBMS supports.  If a DBMS supports 128-bit integers
and has optimized operations for them, then go for it.  Most only support
64-, or even 32-bit.  I say 64-bit because that's what the majority of
current systems actually support and I don't see anyone coming out with a
128-bit architecture ;(

For convenience I would very much love to see MySQL include inet6_aton and
inet6_ntoa, along with a 128-bit data structure that would
be implemented as either a pair of 64-bit or 4x 32-bit values depending on
the architecture.  But from a performance standpoint, I really don't want
my DBMS doing that calculation; I want the application server doing it
(because it's much easier to scale and distribute the application side than
the storage side).

Note that I'm talking about more from a database storage perspective than
an internal application perspective.

By all means, you should use the standard data structure for v6.  As
mentioned below a lot of the internal structures use 8-bit unsigned
integers (or char); but that's mainly a hold-over from when we had the
reality of 8-bit and 16-bit platforms (for compatibility).  With unions,
these structs are treated as a collection of 8-, 16-, 32-, or 64-bit
values, or as a single 128-bit variable, which makes it something the
developer doesn't need to
worry about once the libraries are written.




On Thu, Nov 29, 2012 at 9:55 AM, William Herrin b...@herrin.us wrote:

 On Thu, Nov 29, 2012 at 9:01 AM, Ray Soucy r...@maine.edu wrote:
  You should store IPv6 as a pair of 64-bit integers.  While PHP lacks
  the function set to do this on its own, it's not very difficult to do.

 Hi Ray,

 I have to disagree. In your SQL database you should store addresses as
 a fixed length character string containing a zero-padded hexadecimal
 representation of the IPv4 or IPv6 address with A through F forced to
 the consistent case of your choice. Expand :: and optionally strip the
 colons entirely. If you want to store a block of addresses, store it
 as two character strings: start and end of the range.

 Bytes are cheap and query simplicity is important. Multi-element
 indexes are messy and the code to manage an array of integers is
 messier than managing a character string in most programming
 languages. memcmp() that integer array for less or greater than? Not
 on a little endian machine!


  Here are a set of functions I wrote a while back to do just that
  (though I admit I should spend some time to try and make it more
  elegant and I'm not sure it's completely up to date compared to my
  local copy ... I would love some eyes on it to make some
  improvements).
 
  http://soucy.org/project/inet6/

 If we're plugging our code, give my public domain libeasyv6 a try. It
 eases entry into dual stack programming for anyone used to doing
 gethostbyname followed by a blocking connect(). Just do a
 connectbyname() with the hostname or textual IP address, the port, a
 timeout and null options. The library takes care of finding a working
 IPv4 or IPv6 address for the host and connecting to it in a timely
 manner.

 http://bill.herrin.us/freebies/

 Currently Linux only but if you're willing to lose timeout control on
 the DNS lookup you can replace getaddrinfo_a() with standard
 getaddrinfo() and the code should run anywhere.

 Regards,
 Bill Herrin


 --
 William D. Herrin  her...@dirtside.com  b...@herrin.us
 3005 Crane Dr. .. Web: http://bill.herrin.us/
 Falls Church, VA 22042-3004




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


Re: William was raided for running a Tor exit node. Please help if you can.

2012-11-29 Thread Ray Soucy
If you run Tor, then you should probably accept that it might be used
for activity that you don't approve of, or that is even in violation
of the law.

I'm not saying Tor is good or bad, just that if you're using it you
probably know what you're getting into.

In order to catch someone in a criminal case, most law enforcement
will certainly take whatever they think could be used as evidence,
perform forensic analysis on it, and retain it as long as they think
necessary.

Depending on how well your laws are written, you might not be
protected from them discovering other activity that is outside the
scope and bringing a separate criminal case against you directly.

Got any pirated music or movies?




On Thu, Nov 29, 2012 at 8:04 AM, Chris cal...@gmail.com wrote:
 I'm not William and a friend pasted a link on IRC to me. I'm going to
 send him a few bucks because I know how it feels to get blindsided by
 the police on one random day and your world is turned upside down.

 Source: 
 http://www.lowendtalk.com/discussion/6283/raided-for-running-a-tor-exit-accepting-donations-for-legal-expenses

 From the URL:

 Yes, it happened to me now as well - Yesterday i got raided for
 someone sharing child pornography over one of my Tor exits.
 I'm good so far, not in jail, but all my computers and hardware have
 been confiscated.
 (20 computers, 100TB+ storage, My Tablets/Consoles/Phones)

 If convicted i could face up to 6 years in jail, of course i do not
 want that and i also want to try to set a legal base for running Tor
 exit nodes in Austria or even the EU.

 Sadly we have nothing like the EFF here that could help me in this
 case by legal assistance, so i'm on my own and require a good lawyer.
 Thus i'm accepting donations for my legal expenses which i expect to
 be around 5000-10000 EUR.

 If you can i would appreciate if you could donate a bit (every amount
 helps, even the smallest) either by PayPal (any currency is ok):
 https://paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=2Q4LZNBBD7EH4

 Or by Bank Transfer (EUR only please):

 Holder: William Weber
 Bank: EasyBank AG (Vienna, Austria)
 Account: 20011351213
 Bank sort number: 14200
 IBAN: AT031420020011351213
 BIC: EASYATW1

 I will try to pay them back when i'm out of this (or even before) but
 i can obviously not guarantee this, please keep this in mind.
 This money will only be used for legal expenses related to this case.

 If you have any questions or want to donate by another way
 (MoneyBookers, Webmoney, Bitcoin, Liberty Reserve, Neteller) feel free
 to send me a mail (will...@william.si) or a PM, or contact me in LET
 IRC.

 Thanks!
 William




 --
 --C

 The dumber people think you are, the more surprised they're going to
 be when you kill them. - Sir William Clayton




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



Re: Programmers can't get IPv6 thus that is why they do not have IPv6 in their applications....

2012-11-29 Thread Ray Soucy
You should store IPv6 as a pair of 64-bit integers.  While PHP lacks
the function set to do this on its own, it's not very difficult to do.

Here are a set of functions I wrote a while back to do just that
(though I admit I should spend some time to try and make it more
elegant and I'm not sure it's completely up to date compared to my
local copy ... I would love some eyes on it to make some
improvements).

http://soucy.org/project/inet6/
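
(If you're working outside PHP, the split itself is nearly a one-liner;
a Python sketch of the same idea, not part of the library above:)

import socket

def inet6_to_pair(addr):
    # Pack to the 16-byte binary form, then split into two
    # unsigned 64-bit integers (high and low halves).
    raw = socket.inet_pton(socket.AF_INET6, addr)
    return (int.from_bytes(raw[:8], 'big'),
            int.from_bytes(raw[8:], 'big'))

print(inet6_to_pair('2001:db8::1'))  # (2306139568115548160, 1)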




I would point out that many developers don't even store IP addresses
correctly and just treat them as a string.  In regards to storing as a
pair of 64-bit integers, I would caution against the temptation of
treating one field as the prefix and the other as the host segment.
While the 64-bit boundary is most common, it is certainly not
required, so please write your IPv6 support in a way that will allow
any valid prefix-length.




While PHP hasn't been my language of choice in the past, it seems to
be something that almost everyone knows, or can learn very quickly.
I've started using it as a command line scripting language quite a bit
as a result ... pretty much a cleaner Perl; the downside is that you
don't have all the pre-written libraries that you'd find with Perl.
I've tried switching to Python for some things, but I got annoyed with
the specification being in a constant state of drastic syntax change.




But back to the topic at hand.  I think the OP was expressing that
until developers have native IPv6 access at home, they just won't care
about IPv6 support, or won't test it as well as IPv4 support.  That's
pretty much expected and I'm not even sure why it's being stated as
some revelation.  As many have pointed out, there are tunnel brokers
available to developers to test IPv6 with, but at the end of the day,
most people don't want to use a slow tunnel for anything beyond
testing, and if they don't have a lot of users asking for IPv6 they're
probably not going to give it much attention until they see a need for
it.

The majority of larger applications support IPv6 just fine because
there are enough users asking for IPv6 support.  I suspect once you
see native IPv6 for residential users become more common you'll see
the developers who have been dragging their feet give in and add IPv6
support.

As mentioned, with the shift to web applications through the browser, it's
been a lot less work.  Just throw your application on a server with
IPv6 and it will generally work.  You might need to modify a few
places that interact with IP addresses, but usually they're pretty
trivial (like logging) unless it's a network management oriented
application.
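
As a trivial example of that kind of touch-up, a Python sketch
(illustrative, not from any particular application) of making a logging
path address-family agnostic:

import ipaddress

def log_key(addr_text):
    # Accept an IPv4 or IPv6 literal and return a canonical string,
    # so '2001:DB8:0:0::1' and '2001:db8::1' log identically.
    return str(ipaddress.ip_address(addr_text))

print(log_key('192.0.2.1'))        # 192.0.2.1
print(log_key('2001:DB8:0:0::1'))  # 2001:db8::1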




On Thu, Nov 29, 2012 at 8:15 AM, Jeroen Massar jer...@unfix.org wrote:
 On 2012-11-29 13:53 ,  . wrote:
 On 29 November 2012 12:48, Dobbins, Roland rdobb...@arbor.net wrote:

 On Nov 29, 2012, at 6:47 PM, Bjørn Mork wrote:

 What's the proper term for software which happens to access the network?

 Just about anything, these days.

 ;

 'Network-enabled' or 'network-capable' software, maybe?


 Connecting a app to a network is a fundamental change, so perhaps
 change the app to become a network app.  A piece of software
 connected to a network turns from a product into a service.

 Programmers is to a wide group of people.  I am PHP programmer. How
 will ipv6 impact me? nothing, probably.

 *ahem*

 As Owen already alluded to, some programs (PHP also) actually store IP
 addresses in databases. Thus if you were storing those as 32-bit, you
 are out of luck.

 [..]
 There are two groups of programmers, a)  these that have programming
 only as a job,  and  b) these that invest time beyond that.

 Group a for you? :)

 Greets,
  Jeroen





-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



PHP library for IOS devices

2012-11-28 Thread Ray Soucy
Quick note as many on-list may find this useful.

I've maintained a PHP class to connect to IOS devices over telnet and
parse the output into something useful for various internal tools for
a few years now.  I've recently worked with the author of phpseclib to
create an SSH version of the library.

It's still in a pre-release state until I have time to clean it up,
but I've uploaded an archive of the SSH version and the modified
phpseclib for anyone who needs it in the meantime.

You can find it at the bottom of the Cisco for PHP page:

http://soucy.org/project/cisco/




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



Re: Big day for IPv6 - 1% native penetration

2012-11-20 Thread Ray Soucy
Or artificially high ...

On Tue, Nov 20, 2012 at 8:45 AM, Owen DeLong o...@delong.com wrote:
 It is entirely possible that Google's numbers are artificially low for a 
 number
 of reasons.

 Owen

 On Nov 20, 2012, at 5:31 AM, Aaron Toponce aaron.topo...@gmail.com wrote:

 On Tue, Nov 20, 2012 at 10:14:18AM +0100, Tomas Podermanski wrote:
It seems that today is a big day for IPv6. It is the very first
 time when native IPv6 on google statistics
 (http://www.google.com/intl/en/ipv6/statistics.html) reached 1%. Some
 might say it is tremendous success after 16 years of deploying IPv6 :-)

 And given the rate on that graph, we'll hit 2% before year-end 2013.

 --
 . o .   o . o   . . o   o . .   . o .
 . . o   . o o   o . o   . o o   . . o
 o o o   . o .   . o o   o o .   o o o





-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



Re: Plages d'adresses IP Orange

2012-11-19 Thread Ray Soucy
The universal translator is still a few years out, it seems.
Written that way it's borderline insulting. ;-)

2012/11/19 Jon Lewis jle...@lewis.org:
 Pourquoi demandez-vous des questions NANOG que Wanadoo peut répondre?

 Hopefully google translate hasn't butchered that too badly.


 On Mon, 19 Nov 2012, Pierre-Yves Maunier wrote:

 Hi,

 I think few people understand French on this list. You should try FRnOG.

 Pierre-Yves Maunier


 Le 19 novembre 2012 17:48, jipe foo fooj...@gmail.com a écrit :

 Bonjour à tous,

 Quelqu'un d'Orange (ou autre) pourrait-il me donner plus d'info sur les
 plages d'adresses suivantes:

 inetnum:81.253.0.0 - 81.253.95.255
 netname:ORANGE-FRANCE-HSIAB
 descr:  Orange France / Wanadoo service
 country:FR
 admin-c:AR10027-RIPE
 tech-c: ER1049-RIPE

 inetnum:90.96.0.0 - 90.96.199.255
 netname:ORANGEFRANCE-WFP
 descr:  Orange France - WFP
 country:FR
 admin-c:ER1049-RIPE
 tech-c: ER1049-RIPE

 S'agit-il de plages d'adresses de mobiles, de livebox ou de connexions
 WIFI
 partagées (au moins pour la seconde) ?

 Merci d'avance,

 --
 J




 --
 Pierre-Yves Maunier


 --
  Jon Lewis, MCP :)   |  I route
  Senior Network Engineer |  therefore you are
  Atlantic Net|
 _ http://www.lewis.org/~jlewis/pgp for PGP public key_



-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



DHCPv6 and MAC addresses

2012-11-14 Thread Ray Soucy
Saw yet another attempt at a solution pop up to try and deal with the
lack of a MAC address in DHCPv6 messages.

I've been giving this some thought about how this should be best
accomplished without requiring that host implementations of DHCPv6 be
modified.
Taking advantage of the relay-agent seems to be the most elegant solution:

1) Any DHCPv6 server on a local segment will already have access to
the MAC address; but having a DHCPv6 server on each segment is not
ideal.
2) Requests that are relayed flow through a relay-agent, which is on a
device that also has access to the MAC address of the client system.




Option A:

RFC 6422 provides for Relay-Supplied DHCP Options, currently with one
option code registered with IANA (OPTION_ERP_LOCAL_DOMAIN_NAME).
You can see the list here:

http://www.iana.org/assignments/dhcpv6-parameters/dhcpv6-parameters.xml

I think the quickest solution would be:

1) Adding an RSOO for the client MAC address
2) Get vendors on board to draft and adopt a standard for it on routers,
3) Modify DHCPv6 servers to have support for MAC identification based
on the MAC of the local request OR the MAC of the relayed request when
the option is present.




Option B:

The current RELAY-FORW message already includes the link-local address
of the client.  Perhaps there should be a modification to the privacy
extensions RFC to forbid the use of privacy addressing on the
link-local scope; at this point we could modify DHCPv6 servers to be
able to determine the MAC address for relayed requests based on their
link-local address.
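
(When the link-local address is EUI-64 derived, recovering the MAC is
mechanical; a Python sketch, assuming no privacy addressing:)

import ipaddress

def mac_from_eui64_linklocal(addr_text):
    # EUI-64 interface IDs embed the MAC with ff:fe inserted in the
    # middle and the universal/local bit of the first byte flipped.
    iid = ipaddress.IPv6Address(addr_text).packed[8:]
    if iid[3:5] != b'\xff\xfe':
        return None  # not EUI-64 derived (e.g., a privacy address)
    mac = bytes([iid[0] ^ 0x02]) + iid[1:3] + iid[5:]
    return ':'.join('%02x' % b for b in mac)

print(mac_from_eui64_linklocal('fe80::211:22ff:fe33:4455'))
# 00:11:22:33:44:55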

Unfortunately, the cat is out of the bag on this one, so it would take
time to get host implementations modified.




I might be overlooking something critical... thoughts?




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



Re: dhcpy6d - a MAC address aware DHCPv6 server

2012-11-14 Thread Ray Soucy
FWIW ISC DHCPd listens on raw sockets.

On Tue, Nov 6, 2012 at 11:12 AM, George Herbert
george.herb...@gmail.com wrote:
 Oh, horrors, part of my infrastructure needs raw socket data?

 We should ban that, for security.  Who needs those pesky switches anyways?


 George William Herbert
 Sent from my iPhone

 On Nov 6, 2012, at 5:49 AM, Stephane Bortzmeyer bortzme...@nic.fr wrote:

 On Tue, Nov 06, 2012 at 05:38:32AM -0800,
 Owen DeLong o...@delong.com wrote
 a message of 68 lines which said:

 If you're on local subnet, why not pull the MAC address out of the
 received packet?

 Because it requires access to raw sockets, which should not be
 necessary for DHCP?






-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



Re: DHCPv6 and MAC addresses

2012-11-14 Thread Ray Soucy
Well I guess someone is already working on it, +1

Since this is a relay-only message, though.  I think it would be
better as a sub-option of RFC 6422 with a requirement that
relay-agents drop the option if the client tries to source it.  But, I
guess it's splitting hairs.
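
For reference, here's roughly what a server-side consumer of that
option would do -- a minimal Python sketch (my own illustration,
assuming the TLV layout from the draft: two bytes of link-layer type
followed by the address; option code 79 is the IANA value it was
eventually assigned):

import struct

OPTION_CLIENT_LINKLAYER_ADDR = 79  # assumed, per the draft

def find_client_mac(options):
    # options: raw DHCPv6 option block from a RELAY-FORW message.
    # Every option is code (2 bytes), length (2 bytes), then data.
    i = 0
    while i + 4 <= len(options):
        code, length = struct.unpack_from('!HH', options, i)
        data = options[i + 4:i + 4 + length]
        if code == OPTION_CLIENT_LINKLAYER_ADDR and length > 2:
            lltype = struct.unpack_from('!H', data)[0]
            return lltype, ':'.join('%02x' % b for b in data[2:])
        i += 4 + length
    return None

# Hypothetical option block: type 1 (Ethernet) plus a 6-byte MAC.
blob = struct.pack('!HHH', 79, 8, 1) + bytes.fromhex('001122334455')
print(find_client_mac(blob))  # (1, '00:11:22:33:44:55')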




On Wed, Nov 14, 2012 at 1:02 PM, Tim Chown t...@ecs.soton.ac.uk wrote:
 What about

 http://tools.ietf.org/html/draft-ietf-dhc-dhcpv6-client-link-layer-addr-opt-03

 ?

 --
 Tim

 On 14 Nov 2012, at 17:46, Ray Soucy r...@maine.edu wrote:

 Saw yet another attempt at a solution pop up to try and deal with the
 lack of a MAC address in DHCPv6 messages.

 I've been giving this some thought about how this should be best
 accomplished without requiring that host implementations of DHCPv6 be
 modified.
 Taking advantage of the relay-agent seems to be the most elegant solution:

 1) Any DHCPv6 server on a local segment will already have access to
 the MAC address; but having a DHCPv6 server on each segment is not
 ideal.
 2) Requests that are relayed flow through a relay-agent, which is on a
 device that also has access to the MAC address of the client system.




 Option A:

 RFC 6422 provides for Relay-Supplied DHCP Options, currently with one
 option code registered with IANA (OPTION_ERP_LOCAL_DOMAIN_NAME).
 You can see the list here:

 http://www.iana.org/assignments/dhcpv6-parameters/dhcpv6-parameters.xml

 I think the quickest solution would be:

 1) Adding an RSOO for the client MAC address
 2) Get vendors on board to draft and adopt a standard for it on routers,
 3) Modify DHCPv6 servers to have support for MAC identification based
 on the MAC of the local request OR the MAC of the relayed request when
 the option is present.




 Option B:

 The current RELAY-FORW message already includes the link-local address
 of the client.  Perhaps there should be a modification to the privacy
 extensions RFC to forbid the use of privacy addressing on the
 link-local scope; at this point we could modify DHCPv6 servers to be
 able to determine the MAC address for relayed requests based on their
 link-local address.

 Unfortunately, the cat is out of the bag on this one, so it would take
 time to get host implementations modified.




 I might be overlooking something critical... thoughts?




 --
 Ray Patrick Soucy
 Network Engineer
 University of Maine System

 T: 207-561-3526
 F: 207-561-3531

 MaineREN, Maine's Research and Education Network
 www.maineren.net




-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



Re: IP tunnel MTU

2012-10-29 Thread Ray Soucy
The core issue here is TCP MSS.  PMTUD is a dynamic process for
discovering the usable path MTU (and thus the effective MSS), but it
requires that ICMP be permitted end to end.  The realistic
alternative, in a world that filters all ICMP traffic, is to manually
rewrite the MSS.  In IOS this can be achieved via ip tcp adjust-mss;
on Linux-based systems, netfilter can be used to adjust MSS.

Keep in mind that the MSS will be smaller than your MTU.
Consider the following example:

 ip mtu 1480
 ip tcp adjust-mss 1440
 tunnel mode ipip

IP packets have 20 bytes of overhead, leaving 1480 bytes for data.  So
for an IP-in-IP tunnel, you'd set your MTU of your tunnel interface to
1480.  Subtract another 20 bytes for the tunneled IP header and 20
bytes (typical) for your TCP header and you're left with 1440 bytes
for data in a TCP connection.  So in this case we write the MSS as
1440.

I use IP-in-IP as an example because it's simple.  GRE tunnels can be
a little more complex.  While the GRE header is typically 4 bytes, it
can grow up to 16 bytes depending on options used.

So for a typical GRE tunnel (4 byte header), you would subtract 20
bytes for the IP header and 4 bytes for the GRE header from your base
MTU of 1500.  This would mean an MTU of 1476, and a TCP MSS of 1436.

Keep in mind that a TCP header can be up to 60 bytes in length, so you
may want to subtract more than the typical 20 bytes when computing your
MSS if you're seeing problems.
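
The bookkeeping is simple subtraction; the arithmetic from the examples
above as a quick Python sketch:

def tunnel_numbers(base_mtu, encap_overhead, inner_ip=20, tcp=20):
    # Tunnel interface MTU: base MTU minus the outer encapsulation.
    tunnel_mtu = base_mtu - encap_overhead
    # TCP MSS: what's left after the inner IP and TCP headers.
    return tunnel_mtu, tunnel_mtu - inner_ip - tcp

print(tunnel_numbers(1500, 20))      # IP-in-IP: (1480, 1440)
print(tunnel_numbers(1500, 20 + 4))  # basic GRE: (1476, 1436)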




On Tue, Oct 23, 2012 at 10:07 AM, Templin, Fred L
fred.l.temp...@boeing.com wrote:
 Hi Roland,

 -Original Message-
 From: Dobbins, Roland [mailto:rdobb...@arbor.net]
 Sent: Monday, October 22, 2012 6:49 PM
 To: NANOG list
 Subject: Re: IP tunnel MTU


 On Oct 23, 2012, at 5:24 AM, Templin, Fred L wrote:

  Since tunnels always reduce the effective MTU seen by data packets due
 to the encapsulation overhead, the only two ways to accommodate
  the tunnel MTU is either through the use of path MTU discovery or
 through fragmentation and reassembly.

 Actually, you can set your tunnel MTU manually.

 For example, the typical MTU folks set for a GRE tunnel is 1476.

 Yes; I was aware of this. But, what I want to get to is
 setting the tunnel MTU to infinity.

 This isn't a new issue; it's been around ever since tunneling technologies
 have been around, and tons have been written on this topic.  Look at your
 various router/switch vendor Web sites, archives of this list and others,
 etc.

 Sure. I've written a fair amount about it too over the span
 of the last ten years. What is new is that there is now a
 solution near at hand.

 So, it's been known about, dealt with, and documented for a long time.  In
 terms of doing something about it, the answer there is a) to allow the
 requisite ICMP for PMTU-D to work to/through any networks within your span
 of administrative control and b)

 That does you no good if there is some other network further
 beyond your span of administrative control that does not allow
 the ICMP PTBs through. And, studies have shown this to be the
 case in a non-trivial number of instances.

 b) adjusting your own tunnel MTUs to
 appropriate values based upon experimentation.

 Adjust it down to what? 1280? Then, if your tunnel with the
 adjusted MTU enters another tunnel with its own adjusted MTU
 there is an MTU underflow that might not get reported if the
 ICMP PTB messages are lost. An alternative is to use IP
 fragmentation, but recent studies have shown that more and
 more operators are unconditionally dropping IPv6 fragments
 and IPv4 fragmentation is not an option due to wrapping IDs
 at high data rates.

 Nested tunnels-within-tunnels occur in operational scenarios
 more and more, and adjusting the MTU for only one tunnel in
 the nesting does you no good if there are other tunnels that
 adjust their own MTUs.

 Enterprise endpoint networks are notorious for blocking *all* ICMP (as
 well as TCP/53 DNS) at their edges due to 'security' misinformation
 propagated by Confused Information Systems Security Professionals and
 their ilk.  Be sure that your own network policies aren't part of the
 problem affecting your userbase, as well as anyone else with a need to
 communicate with properties on your network via tunnels.

 Again, all an operator can control is that which is within their
 own administrative domain. That does no good for ICMPs that are
 lost beyond their administrative domain.

 Thanks - Fred
 fred.l.temp...@boeing.com

 ---
 Roland Dobbins rdobb...@arbor.net // http://www.arbornetworks.com

 Luck is the residue of opportunity and design.

  -- John Milton






-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net


