Re: Curious Cloudflare DNS behavior

2020-05-31 Thread Joe Greco
On Sun, May 31, 2020 at 10:07:41AM -0600, Keith Medcalf wrote:
> On Saturday, 30 May, 2020 13:18, Joe Greco  wrote:
> 
> >The Internet didn't evolve in the way its designers expected.  Early
> >mistakes and errors required terrible remediation.  As an example, look
> >at the difficulty involved in running a service like e-mail or DNS.
> >E-mail requires all sorts of things to interoperate well, including
> >SPF, DKIM, SSL, DNSBL's, etc., etc., and it is a complicated service to
> >run self-hosted.  DNS is only somewhat better, with the complexity of
> >DNSSEC and other recent developments making for more difficulties in
> >maintaining self-hosted services.
> 
> I've been running my own DNS and e-mail for more than a quarter century.
> Contrary to your proposition, it hasn't gotten much more complicated over
> that time.

Really?  Because nowadays there's all this extra crap that didn't use
to exist.

From my perspective, it's gone from "configure Sendmail on your Sun
workstation and compile Elm (back in the '80s)" to something a lot more
complicated.

Now you need to sign your mail with DKIM, have SPF records, and even if
you cross all the T's and dot all the I's, you can expect your mail to be
rejected at some major mail sites because the LACK of a consistent high
volume of mail being sent by your site is actually scored against you. 
On the inbound side, you now need to be filtering your mail with 
Spamassassin and DNSBL's, and also virus scanners because it's likely
some of your users won't be.  You need to support both IMAP _and_ webmail
if you want to be able to support users, because we are now in that
"post-PC" era where people expect to be able to sit down at an arbitrary
PC and have an experience on par with that of any of the mail service
providers.
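For concreteness, here is roughly what the outbound half of that checklist looks like in DNS. The domain, selector, and key below are hypothetical; real records vary with your provider and signing setup:

```zone
; Hypothetical records for example.org (illustrative only).
; SPF: authorize only this domain's MX hosts to send its mail.
example.org.                  IN TXT "v=spf1 mx -all"
; DKIM: publish the verification key for signatures made with selector "s1".
; (public key material omitted)
s1._domainkey.example.org.    IN TXT "v=DKIM1; k=rsa; p=..."
```

Receivers look these up at delivery time, so both records have to stay in sync with whatever is actually signing and sending your mail.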

I've watched in dismay as many technically competent sysadmins, and even
whole service providers, have given up and outsourced e-mail, because
it is so difficult to do well.  Even Apple finally ditched their
OSX Server product's email services, which had for years been one of
my best examples of "it's still possible to run this yourself."

If this is your idea of "hasn't gotten much more complicated", I salute
your technical prowess.  It's not that I want this to be the status quo,
but I'm also not so blind as to deny what is going on.  :-(

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"The strain of anti-intellectualism has been a constant thread winding its way
through our political and cultural life, nurtured by the false notion that
democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov


RE: Curious Cloudflare DNS behavior

2020-05-31 Thread Keith Medcalf
On Saturday, 30 May, 2020 13:18, Joe Greco  wrote:

>The Internet didn't evolve in the way its designers expected.  Early
>mistakes and errors required terrible remediation.  As an example, look
>at the difficulty involved in running a service like e-mail or DNS.
>E-mail requires all sorts of things to interoperate well, including
>SPF, DKIM, SSL, DNSBL's, etc., etc., and it is a complicated service to
>run self-hosted.  DNS is only somewhat better, with the complexity of
>DNSSEC and other recent developments making for more difficulties in
>maintaining self-hosted services.

I've been running my own DNS and e-mail for more than a quarter century.
Contrary to your proposition, it hasn't gotten much more complicated over
that time.

--
The fact that there's a Highway to Hell but only a Stairway to Heaven
says a lot about anticipated traffic volume.






Re: Rate-limiting BCOP?

2020-05-31 Thread Saku Ytti
On Sun, 31 May 2020 at 17:37,  wrote:

> Shouldn’t the egress FIA(NPU) be issuing fabric grants (via central Arbiters) 
> to ingress FIA(NPU) for any of the VOQs all the way up till egress NPU's 
> processing capacity, i.e. till the egress NPU can still cope with the overall 
> pps rate (i.e. pps rate from fabric & pps rate from "edge" interfaces), 
> subject to ingress NPU fairness of course?

This is how it works in, say, MX. But in ASR9k the VoQs are artificially
policed, no questions asked. And as the policers are port-level, if you
subdivide a port via satellite or VLAN you'll have collateral damage.
Technically the policer is programmable, and there is CLI for it, but the
config is a binary choice between two low values, not arbitrary.

> Or in other words, shouldn't all or most of the 26Gbps end up on egress NPU, 
> since it most likely has all the necessary pps processing capacity to deal 
> with the packets at the rate they are arriving, and decide for each based on 
> local classification and queuing policy whether to enqueue the packet or drop 
> it?

No, as per the explanation given. Basically: don't subdivide ports, or
don't get attacked.
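A toy model (mine, not vendor code) of why that hurts: a single port-level policer that is QoS- and VLAN-unaware drops proportionally across everything behind the port, so an attack on one VLAN costs every VLAN sharing it. Numbers are illustrative Gbps, not real hardware rates:

```python
# Toy model (not vendor behavior): one port-level policer shared by all
# VLANs on a subdivided port. The policer sees only the aggregate rate,
# so an attack on one VLAN causes drops on every VLAN.

def police(offered_by_vlan, port_rate):
    """Return per-VLAN drop rates; drops are proportional across VLANs
    once the aggregate offered rate exceeds the port policer rate."""
    total = sum(offered_by_vlan.values())
    if total <= port_rate:
        return {vlan: 0.0 for vlan in offered_by_vlan}
    keep_fraction = port_rate / total
    return {vlan: rate * (1 - keep_fraction)
            for vlan, rate in offered_by_vlan.items()}

# VLAN 100 is in-contract at 6 Gbps; VLAN 200 receives a 20 Gbps DDoS.
drops = police({"vlan100": 6.0, "vlan200": 20.0}, port_rate=10.0)
print(drops)  # vlan100 loses traffic even though it never exceeded contract
```

The point of the sketch: nothing in `police()` knows which VLAN misbehaved, which is exactly the collateral-damage problem described above.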

-- 
  ++ytti


RE: Rate-limiting BCOP?

2020-05-31 Thread adamv0025
> Saku Ytti
> Sent: Friday, May 22, 2020 7:52 AM
> 
> On Thu, 21 May 2020 at 22:11, Bryan Holloway  wrote:
> 
> > I've done all three on some level in my travels, but in the past it's
> > also been oftentimes vendor-centric which hindered a scalable or
> > "templateable" solution. (Some things police in only one direction, or
> > only well in one direction, etc.)
> 
> Further complication, let's assume you are all-tomahawk on ASR9k.
> Let's assume TenGigE0/1/2/3/4 as a whole is pushing 6Gbps traffic across all
> VLAN, everything is in-contract, nothing is being dropped for any VLAN in any
> class. Now VLAN 200 gets DDoS attack of 20Gbps coming from single
> backbone interface. I.e. we are offering that tengig interface 26Gbps of
> traffic. What will happen is, all VLANs start dropping packets QoS-unaware;
> 12.5Gbps is being dropped by the ingress NPU, which is not aware of which
> VLAN the traffic is going to, nor of the QoS policy on the egress VLAN. 
>
Hmm, is that so?
Shouldn’t the egress FIA(NPU) be issuing fabric grants (via central Arbiters) 
to ingress FIA(NPU) for any of the VOQs all the way up till egress NPU's 
processing capacity, i.e. till the egress NPU can still cope with the overall 
pps rate (i.e. pps rate from fabric & pps rate from "edge" interfaces), subject 
to ingress NPU fairness of course?
Or in other words, shouldn't all or most of the 26Gbps end up on egress NPU, 
since it most likely has all the necessary pps processing capacity to deal with 
the packets at the rate they are arriving, and decide for each based on local 
classification and queuing policy whether to enqueue the packet or drop it?  

Looking at my notes (from discussions with Xander Thuijs and Aleksandar 
Vidakovic):
- Each 10G entity is represented by one VQI = 4 VOQs (one VOQ for each 
  priority level).
- The trigger for the back-pressure is the utilisation level of the RFD 
  buffers. RFD buffers hold the packets while the NP microcode is processing 
  them; per BRKSPG-2904, the more feature processing the packet goes through, 
  the longer it stays in the RFD buffers.
- RFD buffers are from-fabric feeder queues; fabric-side back-pressure kicks 
  in if the RFD queues are more than 60% full.

So, according to the above, should the egress NPU be powerful enough to deal 
with 26Gbps of traffic coming from the fabric in addition to whatever 
business-as-usual duties it's performing (i.e. RFD queue utilization stays 
below 60%), then no drops should occur on the ingress NPU (the one 
originating the DDoS traffic toward the fabric).
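Reduced to a one-line rule, the mechanism in those notes looks like this (simplified sketch; the 60% threshold is from my notes above, and real hardware tracks this per queue, not as one scalar):

```python
# Sketch of the back-pressure rule described above (simplified):
# fabric-side back-pressure toward ingress kicks in once the from-fabric
# RFD feeder queues exceed 60% utilization; below that, the egress NPU
# keeps accepting and no ingress drops are needed.

RFD_BACKPRESSURE_THRESHOLD = 0.60

def ingress_must_drop(rfd_utilization):
    """True when the egress NPU signals back-pressure toward ingress."""
    return rfd_utilization > RFD_BACKPRESSURE_THRESHOLD

print(ingress_must_drop(0.45))  # light feature load: no back-pressure
print(ingress_must_drop(0.75))  # heavy feature processing: ingress drops
```

Which is why heavy feature processing on egress (longer RFD residency) and ingress drops go together in this model.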
  

> So
> VLAN100 starts to see NC, AF, BE, LE drops, even though the offered rate in
> VLAN100 remains in-contract in all classes.
> To mitigate this to a degree on the backbone side of the ASR9k you need to
> set VoQ priority, you have 3 priorities. You could choose for example BE P2,
> NC+AF P1 and LE Pdefault. Then if the attack traffic to
> VLAN200 is recognised and classified as LE, then we will only see
> VLAN100 LE dropping (as well as every other VLAN LE) instead of all the
> classes.
> 
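The mitigation quoted above amounts to a class-to-VoQ-priority map, which can be sketched like this (the mapping is the example from the quote, not a recommendation):

```python
# Sketch of the suggested mapping: classes sharing a VoQ priority share
# fate under fabric congestion, so put LE in its own (default) priority
# to keep an LE-classified attack from dropping NC/AF/BE traffic.
VOQ_PRIORITY = {"NC": "P1", "AF": "P1", "BE": "P2", "LE": "Pdefault"}

def shares_fate(class_a, class_b):
    """Classes in the same VoQ priority drop together under congestion."""
    return VOQ_PRIORITY[class_a] == VOQ_PRIORITY[class_b]

print(shares_fate("LE", "BE"))  # False: LE attack no longer drops BE
print(shares_fate("NC", "AF"))  # True: NC and AF still share P1
```

With only three priorities available, the design choice is which classes you are willing to let share fate.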




Re: RFC6550 (RPL) and RFC6775 (IPv6 Neighbor Discovery for 6LoWPANs)

2020-05-31 Thread Etienne-Victor Depasquale
Pascal, thank you, the draft at
https://datatracker.ietf.org/doc/draft-thubert-6man-ipv6-over-wireless/
is very informative.

You hit the nail on the head with your suggestion of confusion between the
congruence of link and subnet.

However, I followed one of the references (RFC4903) in your draft, and
it does not help that RFC4903 points to RFC4291's assertion that:
"Currently IPv6 continues the IPv4 model that a subnet prefix is associated
with one link"

RFC4903 further states that:
"clearly, the notion of a multi-link subnet would be a change to the
existing IP model."

I confess: your assertion in the draft that:
"In Route-Over Multi-link subnets (MLSN) [RFC4903],
routers federate the links between nodes
that belong to the subnet, the subnet is not on-link and it extends
beyond any of the federated links"

is news to me.
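For what it's worth, the "route within the subnet using host routes" idea can be sketched with hypothetical addresses: the /64 is not on-link, and each node in it is reached through a /128 route via some federating router:

```python
# Illustration (hypothetical addresses) of routing *within* one subnet
# prefix via host routes, as in a route-over multi-link subnet: the /64
# is not on-link, so even a destination sharing our prefix is a routed hop.
import ipaddress

SUBNET = ipaddress.ip_network("2001:db8:1::/64")

# Next hop per destination: /128 host routes inside the subnet prefix.
host_routes = {
    ipaddress.ip_network("2001:db8:1::a/128"): "router-1",
    ipaddress.ip_network("2001:db8:1::b/128"): "router-2",
}

def next_hop(dst):
    dst = ipaddress.ip_address(dst)
    if dst not in SUBNET:
        return "default-route"  # outside the subnet entirely
    for route, nh in host_routes.items():
        if dst in route:
            return nh  # routed hop even though dst shares our prefix
    return None  # in-subnet but no host route learned (yet)

print(next_hop("2001:db8:1::b"))  # "router-2": same /64, still a routed hop
```

Note the hop does not change the prefix, which is exactly the "uncomfortable stretch" discussed below: forwarding happens inside a single subnet.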

Best regards,

Etienne





On Sat, May 30, 2020 at 1:39 PM Pascal Thubert (pthubert) <
pthub...@cisco.com> wrote:

> Hello Etienne Victor
>
> Maybe you’re confusing a link and a subnet?
>
> This is discussed at length here:
>
> https://datatracker.ietf.org/doc/draft-thubert-6man-ipv6-over-wireless/
>
> RPL can route inside a subnet using host routes. This is how a multi link
> subnet can be made to work...
>
> Please let me know if the draft above helped and whether it is clear
> enough. The best way for that discussion would be to cc 6MAN.
>
> Keep safe,
>
> Pascal
>
> On 30 May 2020, at 10:03, Etienne-Victor Depasquale wrote:
>
> 
> Thank you Carsten, and thank you Pascal. Your replies are valuable and
> packed with insight.
>
> I'll wrap up with how I interpret RPL's behaviour in terms of IP hops.
>
> On one hand, RFC6775 defines a route-over topology as follows:
> "A topology where hosts are connected to the 6LBR through the use of
> intermediate layer-3 (IP) routing.
> Here, hosts are typically multiple IP hops away from a 6LBR.
> The route-over topology typically consists of a 6LBR, a set of 6LRs, and
> hosts."
> If RPL is route-over by definition, then RFC6775 would imply that there
> are typically multiple IP hops between a leaf and the border router.
>
> On the other hand, there are at least two contradictions (which I justify
> after stating them):
> (a) RFC6550 states that "RPL also introduces the capability to bind a
> subnet together with a common prefix and to route within that subnet."
> (b) Reduction of a DODAG to a single subnet prefix, albeit only one
> parent-child relationship deep, is clearly shown at Contiki-NG's GitHub
> page (deep-dive section).
>
> The hinge on which my understanding revolves is that an IP hop traverses a
> router and ***results in a change of prefix of the link on which the packet
> travels*** :
>
> (prefix A) --> router --> (prefix B)
>
> With RPL, the "hop" would look like this:
>
> (prefix X) --> RPL router --> (prefix X)
>
> There seems to be a change in the meaning associated with "IP hop".
> I guess that I can reconcile both cases through the observation that RPL
> actually does apply to a single, NBMA link and therefore the IP prefix
> ***is*** the same.
> Then again, calling the RPL device involved in the packet forwarding by
> the name "router" feels like an uncomfortable stretch.
> Don't routers sit at the meeting point of different layer 2 links?
>
>
> Cheers,
>
> Etienne
>
> On Fri, May 29, 2020 at 10:39 PM Pascal Thubert (pthubert) <
> pthub...@cisco.com> wrote:
>
>> Hello Etienne
>>
>> You may see ND as the host to * interface for any network and RPL as the
>> router to router interface when the network is NBMA.
>>
>> Some of us cared about the interworking.
>>
>> Look at the RPL Unaware leaf I-draft and you’ll see that I’m sure.
>>
>> Keep safe,
>>
>> Pascal
>>
>> > On 29 May 2020, at 20:28, Carsten Bormann wrote:
>> >
>> > Hi Etienne,
>> >
>> > I’m also not sure many of the classical network operators assembled in
>> NANOG work with 6LoWPANs today, but I still can answer your question.
>> >
>> >> While trying to build a holistic view of LoWPANs, I'm consulting the
>> IETF's informational and standards documents.
>> >>
>> >> I'm struck by the impression that, despite the significance of
>> RFC6775's extension of Neighbor Discovery(ND) to low-power and lossy
>> networks (LLNs),
>> >> it is largely ignored by RFC6550 (RPL), with little to no reference to
>> the ontological plane created in RFC6775's terminology section.
>> >
>> > Yes, you could say that.
>> >
>> > ND (Neighbor discovery) describes interfaces between hosts and between
>> hosts and routers.
>> > 6LoWPAN-ND does not use host-to-host interfaces (different from
>> Ethernet, all traffic goes over routers, which RFC 4861 already foresaw in
>> the L — on-link — bit, which isn’t set in 6LoWPAN-ND).
>> >
>> > RFC 6550 was completed at a time when many people who came in from the
>> WSN (wireless sensor network) world thought they could get away with a
>> network that is wholly composed of routers.
>> > Even the “leaf” nodes in the RPL world were participating