Re: DHCPv6 PD & Routing Questions

2015-11-25 Thread Brian Knight
On Tue, Nov 24, 2015 at 6:34 PM, Baldur Norddahl
 wrote:
>
> DHCPv6-PD allows multiple PD requests. But did anyone actually implement
> that? I am not aware of any device that will hand out sub delegations on
> one interface, notice that it is out of address space and then go request
> more space from the upstream router (*).
>
> DHCPv6-PD allows size hints, but it is often ignored. Also there is no
> guidance for what prefix sizes you should ask for. Many CPEs will ask for
> /48. If you got a /48 you will give out that /48 and then not honor any
> further requests, because only one /48 per site is allowed. If you are an
> ISP that gives out /48 and your customer's CPE asks for a /56 you will still
> ignore his size hint and give him a /48.

Or, worse, the ISP's DHCPv6 server honors the new request and issues
the larger prefix, but refuses to route it.  Ran into that myself when
I replaced my home CPE router, and changed the prefix hint to ask for
a /60 block (expanded from /64) at the same time.  That made for a
frustrating few days without IPv6 service, waiting for my original
delegation to expire.  (Tech support, of course, had no clue and
blamed my router.)

In retrospect, perhaps I should have had my original CPE generate a
DHCPv6 Release message for that prefix before disconnecting it.  But I
won't be the last person to fail to generate one.

-Brian


Re: Templating/automating configuration

2017-06-07 Thread Brian Knight
 On Wed, 07 Jun 2017 04:23:33 -0500 t...@pelican.org wrote 




> Hi Brian,
>
> On Tuesday, 6 June, 2017 21:48, "Brian Knight" m...@knight-networks.com
> said:



>> Because we had different sources of truth which were written in-house, we
>> wound up rolling our own template engine in Python. It took about 3 weeks
>> to write the engine and adapt existing templates. Given a circuit ID, it
>> generates the full config for copy and paste into a terminal session. It
>> also hooks into a configuration parser tool, written in-house, that tracks
>> configured interfaces, so it is easy to see whether the template would
>> overwrite an existing interface.



> Interesting. I'm going through much the same process at the moment, due to
> similar requirements - multiple sources of truth, validation that there's
> no clash with existing configs, but also with a requirement for
> network-wide atomic operations. The latter has been a strong driver for a
> custom tool - it's now grabbing an exclusive lock on all the devices,
> making all the checks, pushing all the config, commit check everywhere,
> commit everywhere, and only once all the commits succeed, release the
> locks. If any of those steps fail anywhere, we get to roll back everywhere.
> (Obviously with appropriate timeouts / back-offs / deadlock prevention, and
> specific to platforms with sane config management - no vanilla IOS.)
>
> Did you find anything to give you a leg-up on config parsing, or did you
> have to do that completely from scratch? At the moment, I'm working with
> PyEZ (I know, vendor lock-in, but we're firmly a Juniper shop, and going in
> eyes-open to the lock-in) to build a limited model of just the parts of the
> config I'm interested in validating, and it seems to be working.





The import process to the database runs directly on our rancid server, reading 
the downloaded configs out of the appropriate directory within rancid. Most of 
our gear is Cisco, so the ciscoconfparse module for Python helps a lot with 
organizing and querying the config.  From there, the config is parsed for key 
items like interface name, description, configured bandwidth, etc., and that 
info is then added or updated as necessary in the database.



Because it's dependent on rancid, there is some lag time between when a change 
is made and when the database gets updated, so we still strongly encourage 
running the pre-config checks for new circuits.  But with PyEZ, it looks like 
you easily could, after grabbing that lock, validate the existing config before 
pushing down new config.  Lots of possibilities there.  I'm envious that you 
have a vendor-written Python module as a choice!





>> If I had a free hand and unlimited budget, I would find a single app that
>> functions as a source of truth for all circuits and products, which
>> includes a templating engine that hooks in easily.
>
> Plus the business buy-in and the resource to go back and standardise all
> the existing configs, so the application can fully parse and understand
> the network before it starts. That, and a pony :)





Or, at least, rebuild the existing configs based on the new source of truth, so 
that subsequent config parsing conforms to a single standard.



> Regards,
>
> Tim.









Cheers,

-Brian




Re: Templating/automating configuration

2017-06-06 Thread Brian Knight
Because we had different sources of truth which were written in-house, we wound 
up rolling our own template engine in Python. It took about 3 weeks to write 
the engine and adapt existing templates.  Given a circuit ID, it generates the 
full config for copy and paste into a terminal session.  It also hooks into a 
configuration parser tool, written in-house, that tracks configured interfaces, 
so it is easy to see whether the template would overwrite an existing interface.



I used the Jinja2 template engine, along with pyodbc/unixODBC/FreeTDS for 
access to a Microsoft SQL backend.
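For a sense of what such a circuit template looks like, here is a hedged Jinja2 sketch. The variable names and config lines are invented for illustration and are not the actual in-house templates:

```jinja
interface {{ interface }}
 description {{ circuit_id }} {{ customer_name }}
 bandwidth {{ bandwidth_kbps }}
{% if vlan_id %} encapsulation dot1Q {{ vlan_id }}
{% endif %} ip address {{ wan_ip }} {{ wan_netmask }}
 no shutdown
```

Rendering is then a one-liner, e.g. `jinja2.Template(src).render(interface="GigabitEthernet0/1", circuit_id="CKT-12345", ...)`, with the variable values pulled from the SQL source of truth.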



The keys for us are:

* extracting information from a source of truth
* validating the information for correctness
* making sure you don't overwrite existing config
* outputting the right templates for the circuit features


It made more sense to write a tool than it did to try to adapt something for 
our environment.



If I had a free hand and unlimited budget, I would find a single app that 
functions as a source of truth for all circuits and products, which includes a 
templating engine that hooks in easily.



-Brian





 On Tue, 06 Jun 2017 08:22:59 -0500 Graham Johnston 
johnst...@westmancom.com wrote 

Short of complete SDN, for those of you that have some degree of configuration
templating and/or automation tools, what is it that you run? I'm envisioning
some sort of tool that lets me define template snippets of configuration and
aids in their deployment to devices. I'm okay doing the heavy lifting in
defining everything; I'm just looking for the tool that stitches it together
and hopefully makes things a little less error prone for those who aren't as
adept.

Graham Johnston
Network Planner
Westman Communications Group
204.717.2829
johnst...@westmancom.com







Re: improving signal to noise ratio from centralized network syslogs

2018-02-05 Thread Brian Knight

On 2018-02-03 15:49, Scott Weeks wrote:

Then, you can watch your network in real time
like so (below is all one line):

tail -f /var/log/router.log /var/log/switch.log
| egrep -vi 'term1|term2|termN'

'egrep -v' takes out all the lines you don't
want to see while the syslog messages scroll
across the screen.


Syslog-ng can do regex filtering on messages also.  So instead of doing 
an 'egrep -v' on a huge file after it has been logged, you can put your 
filter right into the syslog-ng configuration, and have those filtered 
messages output to a file (or any other output that syslog-ng supports). 
 The result is a smaller file to search and work with.


We implemented a simple email alerter using this functionality.  In 
syslog-ng, we set up two filters.  One filter does the 'egrep -v':


filter f_email_msg {
    # filter out subinterface up/downs
    not message("%PKT_INFRA-LINEPROTO-.*[0-9/]+\\.")
    and not message("%PKT_INFRA-LINEPROTO-.*Multilink")
    and not message("%PKT_INFRA-LINEPROTO-.*Serial")
    and not message("%PKT_INFRA-LINEPROTO-.*Tunnel")
    # etc
};

A second filter restricts the matches to messages from just our core
devices:


filter f_email_sources {
host("192.0.2.1")
or host("192.0.2.2")
or host("192.0.2.3")
or host("192.0.2.4")
or host("192.0.2.5")
or host("192.0.2.6")
};

Then those are tied together in a syslog-ng rule that outputs to a file:

destination d_email_log {
    file("/var/log/syslog-ng/alert/alerts.log"
        template("$HOST:$MSG\n")
        create_dirs(yes)
    );
};

log {
    source(s_devices);
    filter(f_email_sources);
    filter(f_email_msg);
    destination(d_email_log);
};


A lightweight Python script that runs as a daemon checks that file once 
every 10 seconds, and if the file length is non-zero, it sends the 
contents of the file in an email to the admins.  A shell script run as a 
cron job would work equally as well.
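A minimal version of such a watcher can be written with only the Python standard library. This is a sketch, not our production script; the log path matches the syslog-ng destination above, while the SMTP host and mail addresses are placeholders:

```python
import os
import smtplib
import time
from email.message import EmailMessage

ALERT_LOG = "/var/log/syslog-ng/alert/alerts.log"  # matches destination d_email_log
POLL_SECONDS = 10
SMTP_HOST = "localhost"            # placeholder
MAIL_FROM = "syslog@example.net"   # placeholder
MAIL_TO = "noc@example.net"        # placeholder

def drain_alerts(path):
    """Return pending alert text and truncate the file; None if empty or missing."""
    try:
        if os.path.getsize(path) == 0:
            return None
    except OSError:
        return None
    with open(path, "r+") as f:
        body = f.read()
        f.seek(0)
        f.truncate()
    return body or None

def send_alert(body):
    """Mail the drained alert lines to the admins."""
    msg = EmailMessage()
    msg["Subject"] = "syslog alerts"
    msg["From"] = MAIL_FROM
    msg["To"] = MAIL_TO
    msg.set_content(body)
    with smtplib.SMTP(SMTP_HOST) as s:
        s.send_message(msg)

def poll_loop():
    """Check the alert file every POLL_SECONDS and mail anything found."""
    while True:
        body = drain_alerts(ALERT_LOG)
        if body:
            send_alert(body)
        time.sleep(POLL_SECONDS)
```

Run `poll_loop()` under a process supervisor. Note there is a small race window between the read and the truncate if syslog-ng writes in that instant; for an alerting (rather than archival) path, that trade-off is usually acceptable.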


(Emailed syslogs also give the admin more incentive to keep the message
filters up to date than a log file that must be examined manually;
otherwise the admin ends up with a full inbox :) )


It's very simple and stable, and has worked better than the commercial 
product we used to use for this purpose.


-Brian


Re: 60 Hudson Woes

2018-02-17 Thread Brian Knight
As the engineer working on that Cisco / IBM issue Erik mentioned... ;)

I was able to get walk-up, same-day access to the building for myself a few 
weeks ago (as a customer of DR) and didn’t get my hand slapped for it. DR just 
created the access ticket with the building and that was enough. It took about 
20 minutes start to finish.

But if a vendor tech needs access, they need a COI generated, and that must be 
sent to the building ahead of time via DR. Otherwise they will be turned away.

The COI was the biggest blocker. A 48 hour lead time for the visit didn’t seem 
to be enforced, not by Digital Realty anyway.

Also, I tried to arrange for permanent building key card access while I was 
there. But the key cards must be used at least once every 60 days, otherwise 
they are deactivated. I decided just to arrange for access ahead of time since 
I don’t visit often.

-Brian

> On Feb 16, 2018, at 1:50 PM, Erik Sundberg  wrote:
> 
> We just had an issue where cisco was going to replace a power tray in our 
> router at 60 hudson, we are also at telx.  Cisco contracts with IBM for this. 
> The building is now checking that all 3rd party vendors have an existing 
> Certificate of Insurance (COI). This takes 48 hours to get put in their 
> system... 
> 
> So now we are forced to use telx smarthands if it's under 48 hours or weekends
> 
> 
> 
> -Original Message-
> From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of Dovid Bender
> Sent: Friday, February 16, 2018 12:03 PM
> To: NANOG 
> Subject: 60 Hudson Woes
> 
> We have space with Digital Realty (aka TELX) and 60 Hudson and lately it's 
> been a nightmare getting in. The real estate management company is having us 
> reconsider our options. They are giving us the option to have ID badges for 
> our employees but for anyone else that wants access we need to request it 48 
> hours in advance to get approval. So if we plan on having an unexpected 
> outage and we need to have a have a vendor come on site (e.g. a Dell tech) we 
> will need to let them know in advance.
> 
> What are peoples experiences with 111 8th and  165 Halsey? We really like the 
> connectivity options at 60 Hudson but at some point the hassle becomes not 
> worth it.



Re: Multicast traffic % in enterprise network ?

2018-08-08 Thread Brian Knight

On 2018-08-08 13:49, Mankamana Mishra (mankamis) via NANOG wrote:

Hi Everyone,
Recently we had a good discussion about multicast use on the public
internet. In that discussion, it was pointed out that multicast is used
more within the enterprise.  I wanted to understand how much multicast
traffic is present in networks:

  *   Is there any data on what % of traffic is multicast?  And if
multicast were removed, how much unicast traffic would it add?
  *   Since this forum has people from the deployment area, I would love
to know whether there are real deployment problems, or whether it is a
pain to deploy multicast.


These questions are meant to inform work / discussion in the IETF on what
the pain points are for multicast, and how we can simplify it.



Thanks
Mankamana



Hi Mankamana,

I once worked for a financial futures broker-dealer where I implemented 
multicast, which was around 2009.  They had one main application, which 
was a trading "screen" that traders and customers used to execute 
trades.  I would guesstimate maybe 5-10% of the packets and bytes 
flowing over the network were multicast, depending on network conditions.


In terms of bandwidth savings, I'm not sure how much we saved.  We had 
nine or ten participants using that particular application.  However, 
they all worked on different desks, trading different products.  The app 
was smart enough to send only the price feeds in which the user was 
interested.  Assuming at least 50% of the users looked at the same price 
feeds 50% of the time, I'd say it saved about 25-50 meg.


We also had one major exchange distributing price feeds via multicast.  
However, that feed was not routed on our network.  Our systems plugged 
directly into exchange-provided switches for the feed.


The hurdles I had to overcome to implement multicast were:

* The learning curve for PIM.  Deciding on the deployment model was 
difficult, as were the first few support calls.  We wound up going with 
PIM-SM w/ BSR for RP selection.


* Vendor support for PIM on our gear.  These were mainly troubles with 
PIM running on firewalls in high-availability mode.


If I had to do it over again, I wouldn't have bothered with multicast.  
It was a great opportunity and we learned a lot, but the app had a 
unicast mode of operation that would have worked perfectly fine for our 
purposes.


I work for an ISP now.  We have decided not to support multicast on our 
network for now mainly because of the learning curve, and also because 
we simply don't see that much demand.  Those two or three prospective 
customers that wanted it, wanted it for multi-site video conferencing on 
an MPLS VPN.


Hope this helps,

-Brian


Re: QoS for Office365

2019-07-09 Thread Brian Knight


> On Jul 9, 2019, at 9:19 AM, Mark Tinka  wrote:
> 
> 
> 
>> On 9/Jul/19 16:18, Ross Tajvar wrote:
>> I think the difficulty lies in appropriately marking the traffic. Like
>> Joe said, the IPs are always changing.
> 
> Does anyone know if they are reasonably static in an Express Route scenario?
> 
> Mark.

Yes, the IPs used on the ExpressRoute connection are whatever is chosen for the 
internal IP scheme of the VPC. ExpressRoute is a VPN connection into that 
internal side of a VPC.  On our carrier (MegaPort), ExpressRoute looks similar 
to an Option A NNI connection. BGP is used for routing.

The source IPs on packets crossing out of the VPC onto the Azure-provided 
Internet may or may not be static, but internally they are usually static RFC 
1918 addresses.

We’ve been using ExpressRoute for our own office systems and a small handful of 
customers for about two years now. However, we don’t use diffserv on 
ExpressRoute, so can’t comment on that.

-Brian




Re: RIPE out of IPv4

2019-11-27 Thread Brian Knight

On 2019-11-26 17:11, Ca By wrote:

On Tue, Nov 26, 2019 at 12:15 AM Sabri Berisha 
wrote:

- On Nov 26, 2019, at 1:36 AM, Doug Barton do...@dougbarton.us 
wrote:




[snip]
there is no ROI at this point. In this kind of environment there needs to
be a strong case to invest the capex to support IPv6.

IPv6 must be supported on the CxO level in order to be deployed.

Thanks,

Sabri, (Badum tsss) MBA



I see... well, let me translate it into MBA-ese for you:

FANG deployed ipv6 nearly 10 years ago. Since deploying ipv6, the cohort
experienced 300% CAGR. Also, everything is mobile, and all mobile providers
in the usa offer ipv6 by default in most cases. Latency! Scale! As your
company launches its digital transformation iot 2020 virtualization
container initiatives, ipv6 will be an integral part of staying relevant on
the blockchain.  Also, FANG did it nearly 10 years ago.  Big content and
big eyeballs are on ipv6, ipv4 is a winnowing longtail of irrelevance and
iot botnets.


None of which matters a damn to almost all of my business eyeball 
customers.  They can still get from our network to 100% of all Internet 
content & services via IPv4 in 2019.  I regularly vet deals for our 
sales team, and out of the hundreds of deals we sold this year, I can 
count on one hand the number of deals where customers wanted IPv6.  We 
sold them IPv6 access, but we didn't put it on our own network, because 
we face the same internal challenges Sabri mentioned.  (SD-WAN, OTOH, 
was far more popular.  I'll give you three guesses why.  Hint - it's not 
because tunnel technology is awesome and allows us to scale our networks 
further and everyone is doing it.)


Though their participation has been key in making IPv6 more useful for 
eyeballs, content hasn't driven adoption.  The only thing eyeballs care 
about is getting to 100% of what they need and want at minimal cost.  
Until eyeball networks start charging eyeballs for IPv4, IPv4 will 
linger.  The day eyeballs start bitching on forums, opening tickets, 
complaining on Twitter, etc. because they have only IPv6 is when IPv4 
will start to lose relevance.


As an aside, I would guess that it's the corporate eyeball customers 
with servers, not resi/mobile behind CGNAT, that will bear the brunt of 
the IPv4 cost first.  But what enterprise wants to tell its non-IPv6 
customers "your Internet needs to be upgraded, come back to us when 
you're done?"  That doesn't bode well for the short-term future.


-Brian


Re: RIPE out of IPv4

2019-11-30 Thread Brian Knight

> On Nov 29, 2019, at 5:28 PM, Mike Hammett  wrote:
> 
> "So if they do care about IPv6 connectivity, they haven’t communicated that 
> to us."
> 
> Nor will they, but that doesn't mean IPv6 isn't important.

Personally, I don’t disagree. We engineers do what we can to support IPv6: We 
build it into our tooling and switch it on in our gear. Our network is dual 
stack v4/v6 and has been for quite a while. But with other tools we don’t 
control, and particularly in terms of business process, we have a ways to go, 
and it’s not a priority.

I want IPv6 to succeed, really.  But the global end game picture looks more and 
more bleak to me.

> 
> Frankly, I'm surprised anti-IPv6 people still have employment.
> 
> 
> 
> -
> Mike Hammett
> Intelligent Computing Solutions
> http://www.ics-il.com
> 
> Midwest-IX
> http://www.midwest-ix.com

-Brian

> 
> From: "Brian Knight" 
> To: "Mark Andrews" 
> Cc: "nanog" 
> Sent: Friday, November 29, 2019 10:29:17 AM
> Subject: Re: RIPE out of IPv4 
> 
> 
> > On Nov 27, 2019, at 4:04 PM, Mark Andrews  wrote:
> > 
> > 
> > 
> >> On 28 Nov 2019, at 06:08, Brian Knight  wrote:
> >> 
> >>> On 2019-11-26 17:11, Ca By wrote:
> >>> On Tue, Nov 26, 2019 at 12:15 AM Sabri Berisha 
> >>> wrote:
> >>>> - On Nov 26, 2019, at 1:36 AM, Doug Barton do...@dougbarton.us wrote:
> >> 
> >> [snip]
> >>>> there is no ROI at this point. In this kind of environment there needs to
> >>>> be a strong case to invest the capex to support IPv6.
> >>>> IPv6 must be supported on the CxO level in order to be deployed.
> >>>> Thanks,
> >>>> Sabri, (Badum tsss) MBA
> >>> I see... well, let me translate it into MBA-ese for you:
> >>> FANG deployed ipv6 nearly 10 years ago. Since deploying ipv6, the cohort
> >>> experienced 300% CAGR. Also, everything is mobile, and all mobile 
> >>> providers
> >>> in the usa offer ipv6 by default in most cases. Latency! Scale! As your
> >>> company launches its digital transformation iot 2020 virtualization
> >>> container initiatives, ipv6 will be an integral part of staying relevant 
> >>> on
> >>> the blockchain.  Also, FANG did it nearly 10 years ago.  Big content and
> >>> big eyeballs are on ipv6, ipv4 is a winnowing longtail of irrelevance and
> >>> iot botnets.
> >> 
> >> None of which matters a damn to almost all of my business eyeball 
> >> customers.  They can still get from our network to 100% of all Internet 
> >> content & services via IPv4 in 2019.
> > 
> > No you can’t.  You can’t reach the machine I’m typing on via IPv4 and it is 
> > ON THE INTERNET.  It is directly reachable via IPv6.  Selling Internet 
> > connectivity without IPv6 should be considered fraud these days.  Don’t
> > you believe in “Truth in Advertising”?
> 
> I had meant to write “They can still get from our network to 100% of all 
> Internet content and services that matter to them [our customers] via IPv4...”
> 
> 0% of my IPv4-only customers have opened tickets saying they cannot reach 
> some service that is only IPv6 accessible. So if they do care about IPv6 
> connectivity, they haven’t communicated that to us.
> 
> > Mark Andrews, ISC
> > 1 Seymour St., Dundas Valley, NSW 2117, Australia
> > PHONE: +61 2 9871 4742  INTERNET: ma...@isc.org
> > 
> 
> Thanks,
> 
> -Brian


Re: RIPE out of IPv4

2019-11-27 Thread Brian Knight


> On Nov 27, 2019, at 2:54 PM, Brandon Butterworth  
> wrote:
> 
> On Wed Nov 27, 2019 at 01:08:04PM -0600, Brian Knight wrote:
>> None of which matters a damn to almost all of my business eyeball
>> customers.  They can still get from our network to 100% of all Internet
>> content & services via IPv4 in 2019.  I regularly vet deals for our
>> sales team, and out of the hundreds of deals we sold this year, I can
>> count on one hand the number of deals where customers wanted IPv6.  We
>> sold them IPv6 access, but we didn't put it on our own network, because
>> we face the same internal challenges Sabri mentioned.  (SD-WAN, OTOH,
>> was far more popular
> 
> A few years later the customer wakes up:
> 
> "wait you sold us all those toys we didn't need but didn't include
> the basic transport capabilites everyone apparently has been saying
> for over a decade are required minimum?"
> 
> "and now you want us to pay you to rebuild it again and trust that
> you got the basics right this time?"
> 
> If you're an internet professional you are a negligent one if by
> now you are not ensuring all you build quietly includes IPv6, no
> customer should need to know to ask for it. It's not like it
> needs different kit.

Possibly some customers may react this way, but I’m thinking many more would 
ask “what does it take to enable it?”  Most are reasonable and show good faith, 
even if an equipment swap is needed.  And if the demand for IPv6 is there, the 
providers will get the work prioritized.

>> As an aside, I would guess that it's the corporate eyeball customers
>> with servers, not resi/mobile behind CGNAT, that will bear the brunt of
>> the IPv4 cost first.  But what enterprise wants to tell its non-IPv6
>> customers "your Internet needs to be upgraded, come back to us when
>> you're done?"  That doesn't bode well for the short-term future.
> 
> "all that multi natted into same address space VPN firewall 
> complicated knitting we never got right wasn't needed if you'd
> told us to use IPv6?"

IPv6 doesn’t help anyone get access to their IPv4-only customers.  (Too bad 
that it doesn’t.)

My point was that, if eyeball networks start charging a premium for IPv4, their 
likely first customers to be charged are business customers not behind CGNAT.  
Those that don’t wish to pay the IPv4 premium would have to force *their* 
customers to go IPv6. That would be a much more difficult conversation than 
simply paying the premium.  So out of all the forces at work, which gives way 
first?

> 
> brandon

Thanks,

-Brian


Re: RIPE out of IPv4

2019-11-29 Thread Brian Knight


> On Nov 27, 2019, at 4:04 PM, Mark Andrews  wrote:
> 
> 
> 
>> On 28 Nov 2019, at 06:08, Brian Knight  wrote:
>> 
>>> On 2019-11-26 17:11, Ca By wrote:
>>> On Tue, Nov 26, 2019 at 12:15 AM Sabri Berisha 
>>> wrote:
>>>> - On Nov 26, 2019, at 1:36 AM, Doug Barton do...@dougbarton.us wrote:
>> 
>> [snip]
>>>> there is no ROI at this point. In this kind of environment there needs to
>>>> be a strong case to invest the capex to support IPv6.
>>>> IPv6 must be supported on the CxO level in order to be deployed.
>>>> Thanks,
>>>> Sabri, (Badum tsss) MBA
>>> I see... well, let me translate it into MBA-ese for you:
>>> FANG deployed ipv6 nearly 10 years ago. Since deploying ipv6, the cohort
>>> experienced 300% CAGR. Also, everything is mobile, and all mobile providers
>>> in the usa offer ipv6 by default in most cases. Latency! Scale! As your
>>> company launches its digital transformation iot 2020 virtualization
>>> container initiatives, ipv6 will be an integral part of staying relevant on
>>> the blockchain.  Also, FANG did it nearly 10 years ago.  Big content and
>>> big eyeballs are on ipv6, ipv4 is a winnowing longtail of irrelevance and
>>> iot botnets.
>> 
>> None of which matters a damn to almost all of my business eyeball customers. 
>>  They can still get from our network to 100% of all Internet content & 
>> services via IPv4 in 2019.
> 
> No you can’t.  You can’t reach the machine I’m typing on via IPv4 and it is 
> ON THE INTERNET.  It is directly reachable via IPv6.  Selling Internet 
> connectivity without IPv6 should be considered fraud these days.  Don’t
> you believe in “Truth in Advertising”?

I had meant to write “They can still get from our network to 100% of all 
Internet content and services that matter to them [our customers] via IPv4...”

0% of my IPv4-only customers have opened tickets saying they cannot reach some 
service that is only IPv6 accessible. So if they do care about IPv6 
connectivity, they haven’t communicated that to us.

> Mark Andrews, ISC
> 1 Seymour St., Dundas Valley, NSW 2117, Australia
> PHONE: +61 2 9871 4742  INTERNET: ma...@isc.org
> 

Thanks,

-Brian


Re: Backup over 4G/LTE

2020-01-30 Thread Brian Knight
In the past couple of years, we deployed CradlePoint IBR650's and
IBR600's (with and without wifi respectively).  It's a configurable
mini-router that can also accept wired access.  There is an on-board SIM
slot.  Downside is that the unit is a bit expensive as a CPE. 

Lately we have been deploying Proxicast PocketPort units.  These have an
RJ45 jack at one end, USB port on the other, with no built-in cell
antenna.  So you'll need a USB 4G/LTE dongle in addition.  Those
PocketPorts are a bit more limited in terms of config, but they work for
our use case and are less expensive overall than CradlePoint. 

If you want to stick with a router from big C, I've also worked with the
newer C-8PLTEEA which has an onboard SIM slot.  We tested them but
didn't put them in production due to cost. 

HTH! 

-Brian 

On 2020-01-28 17:30, K MEKKAOUI wrote:

> Dear NANOG Community, 
> 
> Can anyone help with any device information that provides redundancy for 
> business internet access? In other words when the internet provided through 
> the cable modem fails the 4G/LTE takes over automatically to provide internet 
> access to the client. 
> 
> Thank you 
> 
> KARIM M.

Re: Ingress filtering on transits, peers, and IX ports

2020-10-14 Thread Brian Knight via NANOG
So I have put together what I think is a reasonable and complete ACL.  
From my time in the enterprise world, I know that a good ingress ACL 
filters out traffic sourcing from:


* Bogon blocks, like 0.0.0.0/8, 127.0.0.0/8, RFC1918 space, etc 
(well-documented in 
https://team-cymru.com/community-services/bogon-reference/bogon-reference-http/)

* RIR-assigned blocks I am announcing to the rest of the world

However, I recognized an SP-specific case where we could receive 
legitimate traffic sourcing from our own IP blocks: customers running 
multi-homed BGP where we have assigned PA space to them.  So I added 
"permit" statements for traffic sourcing from these blocks.


Also, we have direct peering links that are numbered within our assigned 
prefixes.  So we can use the same ACL with these peer interfaces and 
continue to have BGP work, I added "permit" statements for these 
point-to-point subnets.


So the order of the statements is:

* Permit where source is direct peer PtP networks
* Permit where source is BGP customer PA prefix
* Deny where source is bogon
* Deny where source is our advertised prefixes
* Permit all other traffic
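As a sketch, that ordering might look like the following IOS-style ACL. The prefixes are documentation addresses standing in for real ones (198.51.100.0/24 for our advertised space, 198.51.100.128/25 for a customer PA block, 203.0.113.0/31 for a peer point-to-point), and the bogon list is abbreviated; the real one from Team Cymru is much longer:

```
ip access-list extended TRANSIT-IN
 remark permit direct peer point-to-point sources
 permit ip 203.0.113.0 0.0.0.1 any
 remark permit BGP customer PA space (a /25 carved from our /24)
 permit ip 198.51.100.128 0.0.0.127 any
 remark drop bogons (RFC 1918 shown; repeat for the full bogon list)
 deny ip 10.0.0.0 0.255.255.255 any
 deny ip 172.16.0.0 0.15.255.255 any
 deny ip 192.168.0.0 0.0.255.255 any
 remark drop spoofs of our own advertised prefixes
 deny ip 198.51.100.0 0.0.0.255 any
 remark everything else is legitimate transit traffic
 permit ip any any
```

The customer PA permit must precede the deny of our own aggregate, since the /25 sits inside the /24 being denied.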

I considered BGP customer PI prefixes to be out of scope for ingress 
filtering, since the customer is likely to be multi-homing.  Should we 
consider filtering them?


The Team Cymru Secure IOS Template 
[https://www.cymru.com/Documents/secure-ios-template.html] also 
references an ICMP fragment drop entry on the ingress ACL.  I think 
that's good for an enterprise network, but as an SP, I'm very hesitant 
to include this.  Is this included in anyone else's transit / peer / IX 
ACL?


Is there anything else that I'm not thinking of?

Thanks,

-Brian


On 2020-10-14 09:25, Brian Knight via NANOG wrote:

Hi Marcos,

Thanks for your reply.  But I am looking for guidance on traffic
filtering, not BGP prefix filtering.

I have looked at BCP 84, and it's a good overview of the methods
available to an ISP.  My questions are more operational in nature and
are not covered by the document.  Of the choices presented in BCP 84,
what do folks really use?  If it's an ACL, what challenges have there
been with updates?  etc.

-Brian


On 2020-10-13 18:52, Marcos Manoni wrote:

Hi, Brian

Check RFC3704/BCP84 Ingress Filtering for Multihomed Networks (Updated
by: RFC8704 Enhanced Feasible-Path uRPF).

Ingress Access Lists require typically manual maintenance, but are
the most bulletproof when done properly; typically, ingress access
lists are best fit between the edge and the ISP when the
configuration is not too dynamic if strict RPF is not an option,
between ISPs if the number of used prefixes is low, or as an
additional layer of protection


Ingress filters Best Practices
https://www.ripe.net/support/training/material/bgp-operations-and-security-training-course/BGP-Slides-Single.pdf
*Don’t accept BOGON ASNs
*Don’t accept BOGON prefixes
*Don’t accept your own prefix
*Don’t accept default (unless you requested it)
*Don’t accept prefixes that are too specific
*Don’t accept if AS Path is too long
*Create filters based on Internet Routing Registries

And also BGP Best Current Practices by Philip Smith
http://www.bgp4all.com.au/pfs/_media/workshops/05-bgp-bcp.pdf

Regards.

On Tue, Oct 13, 2020 at 19:52, Brian Knight via NANOG
() wrote:


Hi Mel,

My understanding of uRPF is:

* Strict mode will permit a packet only if there is a route for the 
source IP in the RIB, and that route points to the interface where 
the packet was received


* Loose mode will permit a packet if there is a route for the source 
IP in the RIB.  It does not matter where the route is pointed.


Strict mode won't work for us, because with our multi-homed transits 
and IX peers, we will almost certainly drop a legitimate packet 
because the best route is through another transit.


Loose mode won't work for us, because all of our own prefixes are in 
our RIB, and thus the uRPF check on a transit would never block 
anything.


Or am I missing something?

Thanks,

-Brian

On 2020-10-13 17:22, Mel Beckman wrote:

You can also use Unicast Reverse Path Forwarding. RPF is more 
efficient than ACLs, and has the added advantage of not requiring 
maintenance. In a nutshell, if your router has a route to a prefix in 
its local RIB, then incoming packets from a border interface having a 
matching source IP will be dropped.


RPF has knobs and dials to make it work for various ISP environments. 
Implement it carefully (as in: be standing next to the router involved 
:)


Here's a Cisco brief on the topic:


https://tools.cisco.com/security/center/resources/unicast_reverse_path_forwarding





I think all router vendors support this feature. Here's a similar 
article by Juniper:


https://www.juniper.net/documentation/en_US/junos/topics/task/configuration/interfaces-configuring-unicast-rpf.html


-mel beckman

On Oct 13, 2020, at 3:15 PM, Brian Knight via NANOG  
wrote:


We recent

Re: Ingress filtering on transits, peers, and IX ports

2020-10-13 Thread Brian Knight via NANOG
Hi Mel, 

My understanding of uRPF is: 

* Strict mode will permit a packet only if there is a route for the
source IP in the RIB, and that route points to the interface where the
packet was received 

* Loose mode will permit a packet if there is a route for the source IP
in the RIB.  It does not matter where the route is pointed. 

Strict mode won't work for us, because with our multi-homed transits and
IX peers, we will almost certainly drop a legitimate packet because the
best route is through another transit. 

Loose mode won't work for us, because all of our own prefixes are in our
RIB, and thus the uRPF check on a transit would never block anything. 
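
The strict/loose distinction is easy to model in a few lines. Below is a minimal sketch in Python with a toy two-transit RIB; the prefixes and interface names are invented for illustration and are not from any real config:

```python
import ipaddress

# Toy RIB: prefix -> interface the best route points out of (illustrative only)
RIB = {
    "203.0.113.0/24": "TransitA",
    "198.51.100.0/24": "TransitB",
}

def best_route(src_ip):
    """Longest-prefix match of src_ip against the toy RIB."""
    best = None
    for prefix, iface in RIB.items():
        net = ipaddress.ip_network(prefix)
        if ipaddress.ip_address(src_ip) in net:
            if best is None or net.prefixlen > best[0].prefixlen:
                best = (net, iface)
    return best

def urpf_permits(src_ip, ingress_iface, mode="strict"):
    route = best_route(src_ip)
    if route is None:
        return False                    # no route at all: both modes drop
    if mode == "loose":
        return True                     # any route for the source is enough
    return route[1] == ingress_iface    # strict: route must point back at ingress
```

In this model a packet sourced from 203.0.113.0/24 arriving on TransitB passes loose mode but fails strict mode, which is exactly the multi-homing problem described above.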

Or am I missing something? 

Thanks, 

-Brian 

On 2020-10-13 17:22, Mel Beckman wrote:

> You can also use Unicast Reverse Path Forwarding. RPF is more efficient than 
> ACLs, and has the added advantage of not requiring maintenance. In a 
> nutshell, if your router has a route to a prefix in its local RIB, then 
> incoming packets from a border interface having a matching source IP will be 
> dropped. 
> 
> RPF has knobs and dials to make it work for various ISP environments. 
> Implement it carefully (as in, be standing next to the router involved :) 
> 
> Here's a Cisco brief on the topic: 
> 
> https://tools.cisco.com/security/center/resources/unicast_reverse_path_forwarding
>  
> 
> I think all router vendors support this feature. Here's a similar article by 
> Juniper: 
> 
> https://www.juniper.net/documentation/en_US/junos/topics/task/configuration/interfaces-configuring-unicast-rpf.html
>  
> 
> -mel beckman
> 
>> On Oct 13, 2020, at 3:15 PM, Brian Knight via NANOG  wrote:
> 
>> We recently received an email notice from a group of security researchers 
>> who are looking at the feasibility of attacks using spoofed traffic.  Their 
>> methodology, in broad strokes, was to send traffic to our DNS servers with a 
>> source IP that looked like it came from our network.  Their attacks were 
>> successful, and they included a summary of what they found.  So this message 
>> has started an internal conversation on what traffic we should be filtering 
>> and how.
> 
>> This security test was not about BCP 38 for ingress traffic from our 
>> customers, nor was it about BGP ingress filtering.  This tested our ingress 
>> filtering from the rest of the Internet.
> 
>> It seems to me like we should be filtering traffic with spoofed IPs on our 
>> transit, IX, and peering links.  I have done this filtering in the 
>> enterprise world extensively, and it's very helpful to keep out bad traffic. 
>>  BCP 84 also discusses ingress filtering for SP's briefly and seems to 
>> advocate for it.
> 
>> We have about 15 IP blocks allocated to us.  With a network as small as 
>> ours, I chose to go with a static ACL approach to start the conversation.  I 
>> crafted a static ACL, blocking all ingress traffic sourced from any of our 
>> assigned IP ranges.  I made sure to include:
> 
>> * Permit entries for P-t-P WAN subnets on peering links
> 
>> * Permit entries for IP assignments to customers running multi-homed BGP
> 
>> * The "permit ipv4 any any" at the end :)
> 
>> The questions I wanted to ask the SP community are:
> 
>> * What traffic filtering do you do on your transits, on IX ports, and your 
>> direct peering links?
> 
>> * How is it accomplished?  Through static ACL or some flavor of uRPF?
> 
>> * If you use static ACLs, what is the administrative overhead like?  What 
>> makes it easy or difficult to update?
> 
>> * How did you test your filters when they were implemented?
> 
>> Thanks a lot,
> 
>> -Brian

Ingress filtering on transits, peers, and IX ports

2020-10-13 Thread Brian Knight via NANOG
We recently received an email notice from a group of security 
researchers who are looking at the feasibility of attacks using spoofed 
traffic.  Their methodology, in broad strokes, was to send traffic to 
our DNS servers with a source IP that looked like it came from our 
network.  Their attacks were successful, and they included a summary of 
what they found.  So this message has started an internal conversation 
on what traffic we should be filtering and how.


This security test was not about BCP 38 for ingress traffic from our 
customers, nor was it about BGP ingress filtering.  This tested our 
ingress filtering from the rest of the Internet.


It seems to me like we should be filtering traffic with spoofed IPs on 
our transit, IX, and peering links.  I have done this filtering in the 
enterprise world extensively, and it's very helpful to keep out bad 
traffic.  BCP 84 also discusses ingress filtering for SP's briefly and 
seems to advocate for it.


We have about 15 IP blocks allocated to us.  With a network as small as 
ours, I chose to go with a static ACL approach to start the 
conversation.  I crafted a static ACL, blocking all ingress traffic 
sourced from any of our assigned IP ranges.  I made sure to include:


* Permit entries for P-t-P WAN subnets on peering links
* Permit entries for IP assignments to customers running multi-homed BGP
* The "permit ipv4 any any" at the end :)

The questions I wanted to ask the SP community are:

* What traffic filtering do you do on your transits, on IX ports, and 
your direct peering links?

* How is it accomplished?  Through static ACL or some flavor of uRPF?
* If you use static ACLs, what is the administrative overhead like?  
What makes it easy or difficult to update?

* How did you test your filters when they were implemented?

Thanks a lot,

-Brian


Re: Ingress filtering on transits, peers, and IX ports

2020-10-14 Thread Brian Knight via NANOG

Hi Marcos,

Thanks for your reply.  But I am looking for guidance on traffic 
filtering, not BGP prefix filtering.


I have looked at BCP 84, and it's a good overview of the methods 
available to an ISP.  My questions are more operational in nature and 
are not covered by the document.  Of the choices presented in BCP 84, 
what do folks really use?  If it's an ACL, what challenges have there 
been with updates?  etc.


-Brian


On 2020-10-13 18:52, Marcos Manoni wrote:

Hi, Brian

Check RFC3704/BCP84 Ingress Filtering for Multihomed Networks (Updated
by: RFC8704 Enhanced Feasible-Path uRPF).

Ingress Access Lists require typically manual maintenance, but are
the most bulletproof when done properly; typically, ingress access
lists are best fit between the edge and the ISP when the
configuration is not too dynamic if strict RPF is not an option,
between ISPs if the number of used prefixes is low, or as an
additional layer of protection


Ingress filters Best Practices
https://www.ripe.net/support/training/material/bgp-operations-and-security-training-course/BGP-Slides-Single.pdf
*Don’t accept BOGON ASNs
*Don’t accept BOGON prefixes
*Don’t accept your own prefix
*Don’t accept default (unless you requested it)
*Don’t accept prefixes that are too specific
*Don’t accept if AS Path is too long
*Create filters based on Internet Routing Registries

And also BGP Best Current Practices by Philip Smith
http://www.bgp4all.com.au/pfs/_media/workshops/05-bgp-bcp.pdf

Regards.


Re: Ingress filtering on transits, peers, and IX ports

2020-10-14 Thread Brian Knight via NANOG
Hi Eric, 

I shot a message over to the folks who did the testing for more info about
their test.  If I'm able to find anything useful in our logs from their
detail, I'll post it to the list. 

The message referenced DNS cache poisoning and DDOS amplification, so it
seemed fairly general and more focused on whether ASes accepted spoofed
traffic.  They also referenced the new NXNSAttack, which I did not know
about previously. 

Thanks, 

-Brian 

On 2020-10-13 20:49, Eric Kuhnke wrote:

> Aside from the BCPs currently being discussed for ingress filtering, I would 
> be very interested in seeing what this traffic looked like from the 
> perspective of your DNS servers' logs. 
> 
> I assume you're talking about customer facing recursive/caching resolvers, 
> and not authoritative-only nameservers.  
> 
> Considering that one can run an instance of an anycasted recursive 
> nameserver, under heavy load for a very large number of clients, on a $600 1U 
> server these days... I wonder what exactly the threat model is. 
> 
> Reverse amplification of DNS traffic returning to the spoofed IPs for DoS 
> purposes, such as to cause the nameserver to DoS a single /32 endpoint IP 
> being targeted, as in common online gaming disputes?  
> 
> What volume of pps or Mbps would appear as spurious traffic as a result of 
> this attack? 
> 

Re: Ingress filtering on transits, peers, and IX ports

2020-10-19 Thread Brian Knight via NANOG
Thanks to the folks who responded to my messages on and off-list.  A 
couple of folks have asked me to summarize the responses that I 
received.


* Static ACL is currently the best way to protect a multi-homed network. 
 Loose RPF may be used if bogon filtering is more important, but it does 
not provide anti-spoofing security.


* Protect your infrastructure subnets with the ingress ACL [BCP 84 sec 
3.2].  Loopbacks and point-to-point circuits can benefit from this.  In 
the draft ACL, for example, I permit ICMP and traceroute over UDP, and 
block all else.


* Do an egress ACL also, to prevent clutter from reaching the rest of 
the 'Net.  Permit only your aggregate and customer prefixes going 
outbound.


* As I worked through putting the ACLs together, I found that if one 
implements an egress ACL, then customer prefixes must be enumerated 
anyway.  Once those are in an object group, it's easy to add an entry to 
the ingress ACL permitting traffic destined to customer PI space and 
aggregate space.  Seems better than just permitting all traffic in.


Our ACLs, both v4 and v6, now look like the following:

Ingress

* Deny to and from bogon networks, where bogon is either source or dest
* Permit to and from WAN PtP subnets
* For IPv6, also permit link-local IPs (fe80::/10)
* Deny to and from multicast ranges 224.0.0.0/4 and ff00::/8
* Permit ICMP / traceroute over UDP to infrastructure
* Deny all other traffic to infrastructure
* Permit from customer PI / PA space
* Deny from originated aggregate space
* Permit all traffic to customer PI / PA space
* Permit all traffic to aggregate space
* Deny any any

Egress

* Deny to and from bogon networks
* Permit to and from WAN PtP subnets
* For IPv6, also permit link-local IPs
* Deny to and from multicast range
* Permit all traffic from customer PI / PA space
* Permit all traffic from aggregate space
* Deny any any

We have started implementing the ACLs by blocking the bogon traffic 
only.  The other deny rules are set up as permit rules for now with 
logging turned on.  I'll review matching traffic before I switch the 
rules to deny.
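
The permit-with-logging rollout above can be sketched as a small rendering helper; this is a hypothetical illustration (the group names and rule text are placeholders, not a real ACL), showing how the same rule list can be emitted first in an audit phase and later in enforcement:

```python
# Rules that will eventually become "deny", as (final_action, match) pairs.
# During the audit phase they are rendered as "permit ... log" so matching
# traffic can be reviewed before the switch to deny.
AUDITED = [
    ("deny", "ipv4 any net-group IPV4-BGP-AGG"),
    ("deny", "ipv4 any any"),
]

def render(rules, enforce=False, start=100, step=10):
    """Emit sequence-numbered ACL lines for either rollout phase."""
    out = []
    for i, (action, match) in enumerate(rules):
        seq = start + i * step
        if action == "deny" and not enforce:
            out.append(f"{seq} permit {match} log")   # audit phase
        else:
            out.append(f"{seq} {action} {match}")     # enforcement phase
    return out
```

Flipping `enforce=True` regenerates the same sequence numbers with the final deny actions, so the cutover is a predictable diff.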


Future work also includes automating the updates to the object groups 
via IRR.
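
A minimal sketch of the rendering half of that automation, in Python. The prefix list itself would come from an IRR query (e.g. via a tool like bgpq4 or irrpt); that fetch is out of scope here, so this function only turns an already-fetched list into an IOS XR-style object-group:

```python
import ipaddress

def render_object_group(name, prefixes, description=None):
    """Emit an IOS XR-style IPv4 object-group from a prefix list.

    `prefixes` is assumed to be the output of an IRR lookup done elsewhere.
    """
    lines = [f"object-group network ipv4 {name}"]
    if description:
        lines.append(f"  description {description}")
    # De-duplicate and sort numerically rather than lexically
    for prefix in sorted(set(prefixes), key=ipaddress.ip_network):
        lines.append(f"  {prefix}")
    lines.append("exit")
    return "\n".join(lines)
```

Regenerating the group from IRR data and diffing against the running config is then a straightforward cron job.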


BTW, Team Cymru didn't have any guidance around IPv6 bogons, so I put 
together the below object group based on the IANA IPv6 allocation list: 
https://www.iana.org/assignments/ipv6-unicast-address-assignments/ipv6-unicast-address-assignments.xhtml. 
 Obviously this is only for space not yet allocated to RIRs.


object-group network ipv6 IPV6-BOGON
  description Invalid IPV6 networks
  ::/3
  4000::/3
  6000::/3
  8000::/3
  a000::/3
  c000::/3
  e000::/4
  f000::/5
  f800::/6
  fc00::/7
  fe00::/9
  fec0::/10
exit
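
That group can also be cross-checked mechanically: a short Python sketch that starts from ::/0 and carves out the ranges the ACL handles elsewhere (2000::/3 allocated global unicast, fe80::/10 link-local, ff00::/8 multicast). It reproduces the same coverage as the hand-built list, though it collapses adjacent /3 pairs into /2s:

```python
import ipaddress

# Ranges deliberately NOT in the bogon group: allocated global unicast,
# link-local (permitted separately), and multicast (denied separately).
CARVE_OUT = [ipaddress.ip_network(p) for p in ("2000::/3", "fe80::/10", "ff00::/8")]

def ipv6_bogons():
    remaining = [ipaddress.ip_network("::/0")]
    for hole in CARVE_OUT:
        nxt = []
        for net in remaining:
            if hole == net:
                continue                      # exact match: drop the whole block
            elif hole.subnet_of(net):
                nxt.extend(net.address_exclude(hole))
            else:
                nxt.append(net)
        remaining = nxt
    return sorted(ipaddress.collapse_addresses(remaining))
```

Useful as a sanity check that nothing inside 2000::/3 or fe80::/10 ever lands in the deny group.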

Thanks,

-Brian



On 2020-10-14 17:43, Brian Knight wrote:

So I have put together what I think is a reasonable and complete ACL.
From my time in the enterprise world, I know that a good ingress ACL
filters out traffic sourcing from:

* Bogon blocks, like 0.0.0.0/8, 127.0.0.0/8, RFC1918 space, etc
(well-documented in
https://team-cymru.com/community-services/bogon-reference/bogon-reference-http/)
* RIR-assigned blocks I am announcing to the rest of the world

However, I recognized a SP-specific case where we could receive
legitimate traffic sourcing from our own IP blocks: customers running
multi-homed BGP where we have assigned PA space to them.  So I added
"permit" statements for traffic sourcing from these blocks.

Also, we have direct peering links that are numbered within our
assigned prefixes.  So we can use the same ACL with these peer
interfaces and continue to have BGP work, I added "permit" statements
for these point-to-point subnets.

So the order of the statements is:

* Permit where source is direct peer PtP networks
* Permit where source is BGP customer PA prefix
* Deny where source is bogon
* Deny where source is our advertised prefixes
* Permit all other traffic

I considered BGP customer PI prefixes to be out of scope for ingress
filtering, since the customer is likely to be multi-homing.  Should we
consider filtering them?

The Team Cymru Secure IOS Template
[https://www.cymru.com/Documents/secure-ios-template.html] also
references an ICMP fragment drop entry on the ingress ACL.  I think
that's good for an enterprise network, but as an SP, I'm very hesitant
to include this.  Is this included in anyone else's transit / peer /
IX ACL?

Is there anything else that I'm not thinking of?

Thanks,

-Brian



Re: Ingress filtering on transits, peers, and IX ports

2020-10-22 Thread Brian Knight via NANOG
Randy, thank you for the reminder to look also at what services (L4 
ports) should be generally blocked.


As I was implementing a similar rule for logging purposes, I discovered 
an oddity with $VENDOR_C_XR ACLs.  I created the following:


object-group port TCPUDP-BLOCKED
  eq 0
  eq sunrpc
  eq 445
  range 137 139
exit

ipv4 access-list IPV4-INET-IN
  10 remark BCP 84 for transits, IX, and peering
  101 remark *** Block bogon networks as src or dest ***
  110 deny ipv4 net-group IPV4-BOGON any
  111 deny ipv4 any net-group IPV4-BOGON
  201 remark *** Blocked protocols PERMIT FOR NOW ***
  210 permit udp any port-group TCPUDP-BLOCKED any log
  211 permit udp any any port-group TCPUDP-BLOCKED log
  212 permit tcp any port-group TCPUDP-BLOCKED any log
  213 permit tcp any any port-group TCPUDP-BLOCKED log
[snip]

ipv4 access-list IPV4-INET-OUT
  10 remark BCP 84 for transits, IX, and peering
  101 remark *** Block bogon networks as src or dest ***
  110 deny ipv4 net-group IPV4-BOGON any
  111 deny ipv4 any net-group IPV4-BOGON
  201 remark *** Blocked protocols PERMIT FOR NOW ***
  210 permit udp any port-group TCPUDP-BLOCKED any log
  211 permit udp any any port-group TCPUDP-BLOCKED log
  212 permit tcp any port-group TCPUDP-BLOCKED any log
  213 permit tcp any any port-group TCPUDP-BLOCKED log
[snip]

After I did this, logs on our syslog server started growing like crazy.  
It was full of entries like:


2020-10-21T01:47:17-05:00,info,RP/0/RSP1/CPU0:Oct 21 01:47:17.972 CDT: 
ipv4_acl_mgr[305]: %ACL-IPV4_ACL-6-IPACCESSLOGP : access-list 
IPV4-INET-OUT (210) permit udp on.net.ip.adr(0) -> off.net.ip.adr(0), 5 
packets
2020-10-21T02:01:08-05:00,info,RP/0/RSP0/CPU0:Oct 21 02:01:08.490 CDT: 
ipv4_acl_mgr[263]: %ACL-IPV4_ACL-6-IPACCESSLOGP : access-list 
IPV4-INET-IN (210) permit udp off.net.ip.adr(0) -> on.net.ip.adr(0), 58 
packets


After wondering why in the world my customers were sending so much data 
on port 0, I found a few different sources saying that port 0 is 
commonly used in place of valid information when dealing with fragments. 
 Turns out that $VENDOR_C_XR does this too.


It wasn't clear why fragments would be matching that rule until I found 
the right vendor doc.  The router will pass IP fragments with a "permit" 
ACL line as long as that fragment's layer 3 info matches the layer 3 
information in the ACL.  The router logs the packet similar to the above: 
L4 protocol with source and dest port = 0.  From the doc:


-

For an access-list entry containing Layer 3 and Layer 4 information:
• The entry is applied to non-fragmented packets and initial fragments.
• If the entry matches and is a permit statement, the packet or
fragment is permitted.
• If the entry matches and is a deny statement, the packet or fragment
is denied.

The entry is also applied to non-initial fragments in the following
manner. Because non-initial fragments contain only Layer 3 information,
only the Layer 3 portion of an access-list entry can be applied. If the
Layer 3 portion of the access-list entry matches, and
• If the entry is a permit statement, the non-initial fragment is
permitted.
• If the entry is a deny statement, the next access-list entry is
processed.
The deny statements are handled differently for non-initial
fragments versus non-fragmented or initial fragments.

-
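
The quoted behavior is easier to see as a toy decision function. This is a model only, not vendor code; the entry and packet fields (`src`, `dst`, `has_l4`, `non_initial_fragment`) are invented for illustration:

```python
def matches_l3(entry, pkt):
    """Toy L3 match: 'any' or exact string compare on src/dst."""
    return (entry["src"] in ("any", pkt["src"])
            and entry["dst"] in ("any", pkt["dst"]))

def acl_verdict(acl, pkt):
    """Model of the quoted IOS XR fragment handling.

    For non-initial fragments only L3 is compared: a matching permit
    passes the fragment, but a matching deny that carries L4 info is
    skipped, and evaluation continues with the next entry.
    """
    for entry in acl:
        if not matches_l3(entry, pkt):
            continue
        if not pkt.get("non_initial_fragment"):
            return entry["action"]        # L4 comparison elided for brevity
        if entry["action"] == "permit":
            return "permit"
        if not entry.get("has_l4"):
            return "deny"                 # pure-L3 deny still drops fragments
        # deny with L4 info: skip, process next entry
    return "deny"                         # implicit deny at end of ACL
```

This makes plain why an any-to-any permit line with L4 port matches swallows (and logs) every stray non-initial fragment: only its L3 portion is consulted, and a matching permit passes the fragment regardless of ports.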

Since my rule's L3 info was permit any source to any destination, any IP 
fragment would match the rule, be passed, and be logged.  The solution 
was to add rules explicitly permitting fragments above the layer 4 
rules:


ipv4 access-list IPV4-INET-IN
  10 remark BCP 84 for transits, IX, and peering
  101 remark *** Block bogon networks as src or dest ***
  110 deny ipv4 net-group IPV4-BOGON any
  111 deny ipv4 any net-group IPV4-BOGON
  201 remark *** Blocked protocols PERMIT FOR NOW ***
  203 permit ipv4 net-group IPV4-CUST any fragments
  204 permit ipv4 net-group IPV4-BACKDOOR-HOSTS any fragments
  205 permit ipv4 any net-group IPV4-BGP-AGG fragments
  206 permit ipv4 any net-group IPV4-CUST fragments
  210 permit udp any port-group TCPUDP-BLOCKED any log
  211 permit udp any any port-group TCPUDP-BLOCKED log
  212 permit tcp any port-group TCPUDP-BLOCKED any log
  213 permit tcp any any port-group TCPUDP-BLOCKED log

Logs are a lot calmer now in terms of new lines per minute, and far more 
relevant.  When we switch those rules to deny statements, we can 
eliminate the rules specifically permitting fragments.


Looks like $VENDOR_J makes things so much simpler for this task.

Thanks,


-Brian


On 2020-10-20 00:18, Randy Bush wrote:

term blocked-ports {
    from {
        protocol [ tcp udp ];
        first-fragment;
        destination-port [ 0 sunrpc 135 netbios-ns netbios-dgm netbios-ssn 111 445 syslog 11211 ];
    }
    then {
        sample;
        discard;
    }
}

and i block all external access to weak devices such as switches, pdus,
ipmi, ...

randy


Re: Ingress filtering on transits, peers, and IX ports

2020-11-20 Thread Brian Knight via NANOG
As a final update to this thread, we started blocking spoofed and 
invalid traffic as of early Thursday morning Nov 19th.  So far, knock on 
wood, no reports of issues from our customer base.


In addition, I've been able to verify with the security research team's 
test tool that we are no longer responding to the spoofed DNS requests.


The ACL was implemented as follows:

Ingress

* Deny to and from bogon networks, where bogon is either source or dest
* Deny invalid TCP and UDP ports (currently only port 0) [log]
* Permit to and from transit / peer / IX connected subnets
* For IPv6, also permit link-local IPs (fe80::/10)
* Deny to and from multicast ranges 224.0.0.0/4 and ff00::/8
* Permit ICMP / traceroute over UDP to infrastructure
* Deny all other traffic to infrastructure [log]
* Permit from customer PI / PA space
* Deny from originated aggregate space [log]
* Permit all traffic to customer PI / PA space
* Permit all traffic to aggregate space
* Deny any any [log]

Egress

* Deny to and from bogon networks
* Deny invalid ports [log]
* Permit to and from transit / peer / IX connected subnets
* For IPv6, also permit link-local IPs
* Deny to and from multicast range
* Permit all traffic from any source to customer PI / PA space
* Permit all traffic from customer PI / PA space
* Permit all traffic from aggregate space
* Deny any any [log]

Below I've included the specific $VENDOR_C config I implemented for the 
filtering, sans specifics on our IP blocks.  I hope folks find this 
useful as a guide to their own efforts, and constructive criticism is 
always welcome.


Future work includes:

* Tightening the rules permitting access to/from the transit / peer / IX 
connected subnets, while keeping the ACL general enough for use on all 
Internet-facing interfaces
* Automation of updates to aggregate and customer IP blocks (looking at 
using the irrpt project for this)


Once more, to those who provided valuable input, thank you very much 
indeed!


-Brian


!-

! Static ACLs for Service Provider BCP 84 Compliance
! IOS XR config

! IPv4

object-group network ipv4 IPV4-BOGON
  description Invalid IPV4 networks
  0.0.0.0/8
  10.0.0.0/8
  100.64.0.0/10
  127.0.0.0/8
  169.254.0.0/16
  172.16.0.0/12
  192.0.0.0/24
  192.0.2.0/24
  192.168.0.0/16
  198.18.0.0/15
  198.51.100.0/24
  203.0.113.0/24
  240.0.0.0/4
exit

object-group network ipv4 IPV4-TRAN-WAN
  description Transit WAN PtP subnets
  [Point to point /30's go here]
exit

object-group network ipv4 IPV4-IX
  description IX subnets
  [IX /24 and /23 subnets here]
exit

object-group network ipv4 IPV4-PEER-WAN
  description Direct peer WAN PtP subnets
  [Direct peer WAN IPs go here]
exit

object-group network ipv4 IPV4-BGP-AGG
  description ARIN IPV4 Aggregate Blocks
  [Aggregated IP blocks go here]
exit

object-group network ipv4 IPV4-INFRA
  description Infrastructure subnets to be protected
  [List of loopback blocks and backbone / core PtP /30's here]
exit

object-group network ipv4 IPV4-BACKDOOR-HOSTS
  description Hosts observed to be sending valid traffic via Internet
  [One-off hosts, active TCP or UDP traffic was observed during data collection]
exit

object-group network ipv4 IPV4-CUST
  [full list of all customer IP blocks]
  [Includes customer PI blocks, disaggregated PA from other providers,]
  [and PA assigned from your aggregate space]
exit

object-group port TCPUDP-BLOCKED
  eq 0
  [additional ports to be generally blocked, list here]
exit

ipv4 access-list IPV4-INET-IN
  10 remark BCP 84 for transits, IX, and peering
  101 remark *** Block bogon networks as src or dest ***
  110 deny ipv4 net-group IPV4-BOGON any
  111 deny ipv4 any net-group IPV4-BOGON
  201 remark *** Blocked protocols ***
  210 deny udp any port-group TCPUDP-BLOCKED any log
  211 deny udp any any port-group TCPUDP-BLOCKED log
  212 deny tcp any port-group TCPUDP-BLOCKED any log
  213 deny tcp any any port-group TCPUDP-BLOCKED log
  301 remark *** Transit, IX, peer connected networks ***
  310 permit ipv4 net-group IPV4-PEER-WAN any
  311 permit ipv4 any net-group IPV4-PEER-WAN
  312 permit ipv4 net-group IPV4-TRAN-WAN any
  313 permit ipv4 any net-group IPV4-TRAN-WAN
  314 permit ipv4 net-group IPV4-IX any
  315 permit ipv4 any net-group IPV4-IX
  401 remark *** Block multicast ***
  410 deny ipv4 224.0.0.0/4 any
  411 deny ipv4 any 224.0.0.0/4
  501 remark *** Protect infrastructure subnets ***
  510 deny icmp any net-group IPV4-INFRA fragments log
  511 permit icmp any net-group IPV4-INFRA
  512 permit udp any range 1024 65535 net-group IPV4-INFRA range 33435 33535
  513 permit udp any range 33435 33535 net-group IPV4-INFRA range 1024 65535
  515 deny ipv4 any net-group IPV4-INFRA
  601 remark *** Customer Inet BGP Announced Prefixes ***
  620 permit ipv4 net-group IPV4-CUST any
  640 permit ipv4 net-group IPV4-BACKDOOR-HOSTS any
  701 remark *** Block originated networks ***
  710 deny ipv4 net-group IPV4-BGP-AGG any log
  801 remark *** Permit 

Re: DPDK and energy efficiency

2021-03-05 Thread Brian Knight via NANOG

On 2021-03-05 12:22, Etienne-Victor Depasquale wrote:


Sure, here goes:

https://www.surveymonkey.com/results/SM-BJ9FCT6K9/


Thanks for sharing these results.  We run DPDK workloads (Cisco nee 
Viptela vEdge Cloud) on ESXI.  Fwiw, a quick survey of a few of our Dell 
R640s running mostly vEdge workloads shows the PS output wattage is 
about 60% higher than a non-vEdge workload: 420W vs 260W.  PS input 
amperage is 2.0A@208V vs 1.4A, a 42% difference.  Processor type is Xeon 
6152.  Stats obtained from the iDRAC lights-out management module.


vEdge does not do any limiting of polling by default, and afaik the 
software has no support for any kind of limiting.  It will poll the 
network driver on every core assigned to the VM for max performance, 
except for one core which is assigned to the control plane.


I'm usually more concerned about the lack of available CPU cores.  The 
CPU usage forces us not to oversubscribe the VM hosts, which means we 
must provision vEdges less densely and buy more gear sooner.  Plus, the 
increased power demand means we can fit about 12 vEdge servers per 
cabinet instead of 17.  (Power service is 30A 208V, maximum of 80% 
usage.)


OTOH, I face far fewer questions about vEdge Cloud performance problems 
than I do on other virtual platforms.




Cheers,

Etienne



Thanks again,

-Brian


Re: DPDK and energy efficiency

2021-03-05 Thread Brian Knight via NANOG

On 2021-03-05 15:40, Eric Kuhnke wrote:

For comparison purposes, I'm curious about the difference in wattage 
results between:


a) Your R640 at 420W running DPDK

b) The same R640 hardware temporarily booted from a Ubuntu server live 
USB, in which some common CPU stress and memory disk/IO benchmarks are 
being run to intentionally load the system to 100% to characterize its 
absolute maximum AC load wattage.


We've got a few more hosts waiting to be deployed that are configured 
almost identically.  I'll see what we can do.


I'm guessing those tests would pull slightly more power than the vEdge 
hosts, just because there's not much disk IO that happens on a 
networking VM.  These hosts have four SSDs for local storage.


What's the delta between the 420W and absolute maximum load the server 
is capable of pulling on the 208VAC side?


https://manpages.ubuntu.com/manpages/artful/man1/stress-ng.1.html


Server PS maximum input wattage is 900W.  Present draw of 2.0A @ 208V is 
~420W, so 420/900 = 46.67%


One possible factor is whether ESXi is configured to pass the PCIe 
devices directly through to the guest VM, or whether there is any 
abstraction in between. For non-ESXi stuff, in the world of Xen or KVM 
there are many different ways a guest domU can access a dom0's 
network devices, some of which can affect the overall steady-state 
wattage consumed by the system.


The 420W server has its interfaces routed through the ESXi kernel.  
We're moving quickly to SR-IOV on new servers.


If the greatest possible efficiency is desired for a number of 1U 
things, one thing to look at would be something like the Open Compute 
Project's centralized AC-to-DC power shelves, with servers that don't 
each have their own discrete 110-240VAC single or dual power supplies. 
In terms of cubic meters of air moved per hour vs. wattage, the fans 
found in 1U servers are really quite inefficient. As a randomly chosen 
example of a 12VDC 40mm (1U server height) fan:


https://www.shoppui.com/documents/9HV0412P3K001.pdf

If you have a single 12.0VDC fan with a maximum load of 1.52A, that's 
a possible draw of up to 18.24W for just *one* 40mm-height fan. And 
your typical high-speed dual-socket 1U server may have up to eight or 
ten of those, in the typical front-to-back wind-tunnel configuration. 
Normally fans won't be running at full speed, so each one won't be an 
18W load; more like 10-12W per fan is totally normal. Plus at least 
two more fans in the hot-swap power supplies. Under heavy load I 
would not be surprised at all if 80W to 90W of your R640's total 420W 
load is ventilation.
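
As a rough cross-check of that estimate (the fan counts and per-fan 
wattages are the post's ballpark figures, and the PSU-fan draw is an 
assumption, not a measurement):

```python
# Ventilation as a share of total draw, using ballpark numbers.
low_w = 8 * 10       # eight chassis fans at ~10 W each
high_w = 10 * 12     # ten chassis fans at ~12 W each
psu_fans_w = 2 * 5   # assumed: two small PSU fans at ~5 W each

total_low = low_w + psu_fans_w    # 90 W, near the 80-90 W estimate
total_high = high_w + psu_fans_w  # 130 W worst case at full tilt

share = total_low / 420           # ~21% of the R640's measured draw
print(total_low, total_high, f"{share:.0%}")
```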


Which is of course dependent on the environmentals.  Fan speeds on our 
two servers are 25% for the 260W vs. 29% for 420W, so not much 
difference.  Inlet temp on both is ~17C.


I checked out another R640 heavily loaded with vEdge VMs, and it's 
pulling similar power, 415W, but the fan speed is at 45%, because inlet 
temp is 22C.


The TDP for the Xeon 6152 is 140W, which seems middle-of-the-road.  From 
the quick survey I did of Dell's configurator, the R640 can take CPUs up 
to 205W.  So we have headroom in terms of cooling.


In a situation where you're running out of power before you run out of 
rack space, look at some 1.5U- and 2U-high chassis that use 60mm-height 
fans, which are much more efficient in the ratio of air moved per unit 
time vs. watts.


Or ask the colo to turn the A/C lower ;)  (that moves the power problem 
elsewhere, I know)


Thanks,

-Brian


Re: Famous operational issues

2021-02-18 Thread Brian Knight via NANOG

On 2021-02-17 13:28, John Kristoff wrote:

On Wed, 17 Feb 2021 14:07:54 -0500
John Curran  wrote:


I have no idea what outages were most memorable for others, but the
Stanford transfer switch explosion in October 1996 resulted in much
of the Internet in the Bay Area simply not being reachable for
several days.


Thanks John.

This reminds me of two I've not seen anyone mention yet.  Both,
coincidentally, in the Chicago area, and both of which I learned about
before my entry into netops full time.  One was a flood:

  

The other, at the dawn of an earlier era:

  



I wouldn't necessarily put those two in the top 3, but by some standard
for many they were certainly very significant and noteworthy.

John


Thanks for sharing these links, John.  I was personally affected by the 
Hinsdale CO fire when I was a kid.  At the time, my family lived on the 
southern border of Hinsdale, in the adjacent town of Burr Ridge.  It was 
weird, like a power outage: you're reminded of the loss of service every 
time you perform the simple act of requesting service, picking up the 
phone or toggling a light switch.  But it lasted a lot longer than any 
loss of power: it was six or seven weeks that, to this day, feel a lot 
longer.


Anytime we needed to talk to someone long-distance, we had to drive to a 
cousin's house to make the call.  To talk to anyone local, you'd have to 
physically go and show up unannounced.  At 11 years old, I was the 
bicycle messenger between our house and my great-grandmother, who lived 
about two blocks away.  My mother and father kept the cars gassed up and 
extra fuel on hand in case there was an emergency.


Dad ran a home improvement business out of the house, so new business 
ground to a halt.  Mom worked for a publishing company, so their release 
dates were impacted.  The local grocery store's scanners wouldn't work, 
so they had to punch the orders into the register by hand, using the 
paper sticker prices on the items.


I clearly remember from the local papers that they had to special-order 
the replacement 5ESS at enormous cost.  I saw the big brick building 
after the fire with the burn marks around the front door.  In late May 
and early June, the Greyhound buses with the workers were parked around 
the block, power plants outside with huge cables snaking in right 
through the wide open front door.


When we heard that dial tone at last, everyone was happier than an 
iPhone with full bars. Lol


We're spoiled for choice in telecom networks these days.  Also, 
facilities management have learned plenty of lessons since then.  Like, 
install and maintain an FM-200 fire suppression system.  But 
nevertheless, sometimes when I step into a colo, I think of that outage 
and the impact it had.


-Brian