Re: is ipv6 fast, was silly Redeploying

2021-11-22 Thread Owen DeLong via NANOG



> On Nov 22, 2021, at 02:45, Masataka Ohta
> wrote:
> 
> Mans Nilsson wrote:
> 
> > Not everyone is Apple, "hp"[0] or MIT, where the initial
> > allocation is still mostly sufficient.
> 
> The number of routing table entries is growing exponentially,
> not because of an increase in the number of ISPs, but because of
> multihoming.

Again, wrong. The number is growing exponentially primarily because of the
fragmentation that comes from recycling addresses.

> As such, if entities requiring IPv4 multihoming also
> require IPv6 multihoming, the number of routing table
> entries will be the same.

There are actually ways to do IPv6 multihoming that don’t require using the
same prefix with both providers. Yes, there are tradeoffs, and these mechanisms
aren’t even practical in IPv4, but they have been sufficiently widely
implemented in IPv6 to say that they are viable in some cases.

Nonetheless, multihoming isn’t creating 8-16 prefixes per ASN. Fragmentation
is.
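
(A rough sanity check, using approximate public routing-report figures from
late 2021 rather than numbers from this thread:)

    # Approximate figures, assumed for illustration only.
    ipv4_dfz_routes = 900_000   # rough size of the IPv4 global table
    visible_asns = 72_000       # rough count of ASNs originating prefixes
    print(ipv4_dfz_routes / visible_asns)   # ~12.5 prefixes per ASN, within 8-16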

>> Your reasoning is correct, but the size of the math matters more.
> 
> Indeed, with the current operational practice, the global IPv4
> routing table size is bounded below 16M. OTOH, that for
> IPv6 is unbounded.

Only by virtue of the lack of addresses available in IPv4. The other tradeoffs
associated with that limitation are rather unpalatable at best.

Owen



Re: Redeploying most of 127/8, 0/8, 240/4 and *.0 as unicast

2021-11-22 Thread Lincoln Dale
On Thu, Nov 18, 2021 at 1:21 PM John Gilmore  wrote:

> We have found no ASIC IP implementations that
> hardwire in assumptions about specific IP address ranges.  If you know
> of any, please let us know, otherwise, let's let that strawman rest.
>

There's at least one: the Marvell Prestera CX (it's either Prestera CX or DX,
I forget which). It is in the Juniper EX4500, among others. It has a
hardware-based bogon filter when doing L3 routing that cannot be disabled.


cheers,

lincoln.


Re: Class D addresses? was: Redeploying most of 127/8 as unicast public

2021-11-22 Thread Greg Skinner via NANOG

> On Nov 21, 2021, at 1:20 PM, William Herrin  wrote:
> 
> On Sun, Nov 21, 2021 at 4:16 AM Eliot Lear  
> wrote:
>> In 2008, Vince Fuller, Dave Meyer, and I put together
>> draft-fuller-240space, and we presented it to the IETF. There were
>> definitely people who thought we should just try to get to v6, but what
>> really stopped us was a point that Dave Thaler made: unintended impact
>> on non-participating devices, and in particular CPE/consumer firewall
>> gear, and at the time there were  serious concerns about some endpoint
>> systems as well.  Back then it might have been possible to use the space
>> as part of an SP interior, but no SP demonstrated any interest at the
>> time, because it would have amounted to an additional transition.
> 
> Hi Eliot,
> 
> I wasn't in the working group so I'll take your word for it. Something
> rather different happened later when folks on NANOG discovered that
> the IETF had considered and abandoned the idea. Opinion coalesced into
> two core groups:
> 
> Group 1: Shut up and use IPv6. We don't want the IETF or vendors
> distracted from that effort with improvements to IPv4. Mumble mumble
> titanic deck chairs harrumph.
> 
> Group 2: Why is the IETF being so myopic? We're likely to need more
> IPv4 addresses, 240/4 is untouched, and this sort of change has a long
> lead time. Mumble mumble heads up tailpipes harrumph.
> 
> 
> More than a decade later, the "titanic" is shockingly still afloat
> and it would be strikingly useful if there were a mostly working /4 of
> IP addresses we could argue about how best to employ.
> 
> Regards,
> Bill Herrin
> 
> 
> -- 
> William Herrin
> bill at herrin.us
> https://bill.herrin.us/
> 

I agree, generally speaking.  IMO, it’s unfortunate that these addresses are
being held in “limbo” while these debates go on.  I’m not complaining about the
debates per se, but the longer we go without resolution, the longer these
addresses can’t be put to any (documented) use.

There’s background information available that might be helpful to those who 
haven’t yet seen it:

https://datatracker.ietf.org/doc/slides-70-intarea-4/ (links to the
draft-fuller-240space slides from IETF 70)
https://datatracker.ietf.org/doc/minutes-70-intarea/ (IETF 70 INTAREA meeting
minutes)
https://mailman.nanog.org/pipermail/nanog/2007-October/thread.html (NANOG
October 2007 mail archives, containing links to the “240/4” thread)
https://puck.nether.net/pipermail/240-e/ (the 240-e archives)
https://mailarchive.ietf.org/arch/browse/int-area/ (IETF INTAREA archives,
containing comments on the 240space draft and related issues, roughly in the
same time frame as in the previous links)

—gregbo



Your opinion on security and privacy implication of CDN - a 2min survey

2021-11-22 Thread Rui Xin
Hi all,

Do any of your websites employ password-based logins? Do they also use a
CDN service? Are you concerned about the security of users' passwords? Are
there any measures to protect users' sensitive information?

Many of the websites we have investigated so far send users' account
credentials directly to their CDN providers, enabling the possibility of
passive attacks. This survey aims to gauge people's awareness of the security
and privacy implications of this issue, and we (researchers from Duke
University) would love to hear your opinion.

Please help us out by filling out this short and anonymous survey (8
multiple choice questions, <2 minutes).

Survey URL: https://duke.qualtrics.com/jfe/form/SV_6tUJE7uqzFQv1d4

Thank you so much in advance, and we look forward to reading your responses!

Best,
Rui Xin


Re: FreeBSD users of 127/8

2021-11-22 Thread Måns Nilsson
Subject: FreeBSD users of 127/8 Date: Mon, Nov 22, 2021 at 12:57:43AM -0800 
Quoting John Gilmore (g...@toad.com):
 
> If it turns out that FreeBSD usage of 127.1/16 is widespread, and the
> above analysis is incorrect or unacceptable to the FreeBSD community, we
> would be happy to modify the draft to retain default loopback behavior
> on 127.0.0.1/17 rather than 127.0.0.1/16.  That would include both
> 127.0.x.y and 127.1.x.y as default loopback addresses.  

treize:~ mansaxel$ sipcalc 127.0.0.1/17 | grep "Network range"
Network range   - 127.0.0.0 - 127.0.127.255
treize:~ mansaxel$ sipcalc 127.0.0.1/15 | grep "Network range"
Network range   - 127.0.0.0 - 127.1.255.255
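
The same check with Python's stdlib ipaddress module, for anyone without
sipcalc handy (an added illustration; nothing beyond the standard library):

    import ipaddress

    # 127.1.0.1 is NOT covered by 127.0.0.0/17, but it IS covered by /15.
    addr = ipaddress.ip_address("127.1.0.1")
    print(addr in ipaddress.ip_network("127.0.0.0/17"))   # False
    print(addr in ipaddress.ip_network("127.0.0.0/15"))   # True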


-- 
Måns Nilsson primary/secondary/besserwisser/machina
MN-1334-RIPE   SA0XLR+46 705 989668
DON'T go!!  I'm not HOWARD COSELL!!  I know POLISH JOKES ... WAIT!!
Don't go!!  I AM Howard Cosell! ... And I DON'T know Polish jokes!!




RE: Quantifying the customer support and impact of cgnat for residential ipv4

2021-11-22 Thread Graham Johnston

> We have 10,000+ customers and by default everyone is behind CGNAT. Around 25
> customers have asked for a dedicated public IP address and we usually just
> give them one free of charge. In our case, a very low percentage actually
> request one.

> Travis

Out of curiosity, based on your experience (or that of anyone else who wishes
to respond), how many public IPs are required per 1000 customers?
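
(A back-of-the-envelope sketch of one way to estimate it; the port-block size
and ports-per-address figures below are assumptions for illustration, not
anyone's deployment numbers:)

    # Rough CGNAT sizing estimate; every parameter here is an assumption.
    customers = 1000
    ports_per_public_ip = 64_000   # usable translation ports per shared IPv4
    ports_per_customer = 2_000     # assumed fixed port-block allocation
    opt_out_rate = 25 / 10_000     # dedicated-IP requests, per the figures above

    cgnat_pool = customers * ports_per_customer / ports_per_public_ip  # ~31 IPs
    dedicated = customers * opt_out_rate                               # ~2.5 IPs
    print(round(cgnat_pool + dedicated))   # ~34 public IPs per 1000 customers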


Re: is ipv6 fast, was silly Redeploying

2021-11-22 Thread Masataka Ohta

Mans Nilsson wrote:

> Not everyone is Apple, "hp"[0] or MIT, where the initial
> allocation is still mostly sufficient.

The number of routing table entries is growing exponentially,
not because of an increase in the number of ISPs, but because of
multihoming.

As such, if entities requiring IPv4 multihoming also
require IPv6 multihoming, the number of routing table
entries will be the same.

The proper solution is to have end to end multihoming:

https://tools.ietf.org/id/draft-ohta-e2e-multihoming-02.txt


> Your reasoning is correct, but the size of the math matters more.


Indeed, with the current operational practice, the global IPv4
routing table size is bounded below 16M. OTOH, that for
IPv6 is unbounded.
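
(The bound follows from the common practice of accepting nothing longer than
a /24 in the global table; a one-line check, assuming that filter:)

    # With /24 as the longest generally accepted IPv4 prefix, the table can
    # never exceed the number of distinct /24s.
    print(2 ** 24)   # 16,777,216, i.e. 16M in the power-of-two sense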

Masataka Ohta



RPKI-Based Policy Without Route Refresh

2021-11-22 Thread Mark Tinka
Randy will be presenting draft-ymbk-sidrops-rov-no-rr during RIPE-83, at 
around 1530hrs UTC:


https://datatracker.ietf.org/doc/html/draft-ymbk-sidrops-rov-no-rr-02

Most grateful if you can join, and provide some initial feedback. Thanks.

Mark.

Re: Class E addresses? 240/4 history

2021-11-22 Thread Eliot Lear

Hi John,


On 22.11.21 10:25, John Gilmore wrote:

> Eliot Lear wrote:
>
> I was not in this part of IETF in those days, so I did not participate
> in those discussions.  But I later read them on the archived mailing
> list, and reached out by email to Dave Thaler for more details about his
> concerns.  He responded with the same general issues (and a request that
> we and everyone else spend more time on IPv6).  I asked in a subsequent
> message for any details he has about such products that he thought would
> fail.  He was unable or unwilling to point out even a single operating
> system, Internet node type, or firewall product that would fail unsafely
> if it saw packets from the 240/4 range.


To be fair, you were asking him to recall a conversation that took place
quite some time earlier.



> As documented in our Internet-Draft, all such products known to us
> either accept those packets as unicast traffic, or reject such packets
> and do not let them through.  None crashes, reboots, fills logfiles with
> endless messages, falls on the floor, or otherwise fails.  No known
> firewall is letting 240/4 packets through on the theory that it's
> perfectly safe because every end-system will discard them.
>
> As far as I can tell, what Eliot says really stopped this proposal in
> 2008 was Dave's hand-wave of *potential* concern, not an actual
> documented problem with the proposal.


I wouldn't go so far as to call it a hand wave.  You have found devices 
that drop packets.  That's enough to note that this block of space would 
not be substitutable for other unicast address space.  And quite 
frankly, unless you're testing every device ever made, you simply can't 
know how this stuff will work in the wild. That's ok, though, so long as 
the use is limited to environments that can cope with it.
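
(One small illustration of that point about non-participating software, added
here and not from the original mail: even current general-purpose libraries
still special-case 240/4. Python's standard library, for example:)

    import ipaddress

    # CPython's ipaddress module still classifies 240/4 as reserved space.
    print(ipaddress.ip_address("240.0.0.1").is_reserved)    # True
    print(ipaddress.ip_address("203.0.113.1").is_reserved)  # False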



> If anyone knows an *actual* documented problem with 240/4 packets,
> please tell us!
>
> (And as I pointed out subsequently to Dave, if any nodes currently in
> service would *actually* crash if they received a 240/4 packet, that's a
> critical denial of service issue.  For reasons completely independent
> from our proposal, those machines should be rapidly identified and
> patched, rather than remaining vulnerable from 2008 thru 2021 and
> beyond.  It would be trivial for an attacker to send such
> packets-of-death from any Linux, Solaris, Android, MacOS, or iOS machine
> that they've broken into on the local LAN.  And even Windows machines
> may have ways to send raw Ethernet packets that could be crafted by
> an attacker to appear to be deadly IPv4 240/4 packets.)


Right, and indeed there are devices out there that have been known to 
stop functioning properly under certain forms of attack, regardless of 
the source address.


Eliot





Re: Class E addresses? 240/4 history

2021-11-22 Thread John Gilmore
Eliot Lear  wrote:
> In 2008, Vince Fuller, Dave Meyer, and I put together
> draft-fuller-240space, and we presented it to the IETF. There were
> definitely people who thought we should just try to get to v6, but
> what really stopped us was a point that Dave Thaler made: unintended
> impact on non-participating devices, and in particular CPE/consumer
> firewall gear, and at the time there were serious concerns about some
> endpoint systems as well.

I was not in this part of IETF in those days, so I did not participate
in those discussions.  But I later read them on the archived mailing
list, and reached out by email to Dave Thaler for more details about his
concerns.  He responded with the same general issues (and a request that
we and everyone else spend more time on IPv6).  I asked in a subsequent
message for any details he has about such products that he thought would
fail.  He was unable or unwilling to point out even a single operating
system, Internet node type, or firewall product that would fail unsafely
if it saw packets from the 240/4 range.

As documented in our Internet-Draft, all such products known to us
either accept those packets as unicast traffic, or reject such packets
and do not let them through.  None crashes, reboots, fills logfiles with
endless messages, falls on the floor, or otherwise fails.  No known
firewall is letting 240/4 packets through on the theory that it's
perfectly safe because every end-system will discard them.
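
(A quick, hedged way to check what one particular host's stack does with a
240/4 destination; this is an added sketch, not a test from the draft. A UDP
connect() sends no packet, it only asks the stack to accept the destination:)

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # Success means this stack will treat 240/4 as a unicast destination.
        # Failure is ambiguous: the stack may refuse 240/4 outright, or there
        # may simply be no route (e.g. no default route configured).
        s.connect(("240.0.0.1", 9))
        print("stack accepts 240/4 as a unicast destination")
    except OSError as e:
        print("connect() to 240.0.0.1 failed:", e)
    finally:
        s.close()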

As far as I can tell, what Eliot says really stopped this proposal in
2008 was Dave's hand-wave of *potential* concern, not an actual
documented problem with the proposal.

If anyone knows an *actual* documented problem with 240/4 packets,
please tell us!

(And as I pointed out subsequently to Dave, if any nodes currently in
service would *actually* crash if they received a 240/4 packet, that's a
critical denial of service issue.  For reasons completely independent
from our proposal, those machines should be rapidly identified and
patched, rather than remaining vulnerable from 2008 thru 2021 and
beyond.  It would be trivial for an attacker to send such
packets-of-death from any Linux, Solaris, Android, MacOS, or iOS machine
that they've broken into on the local LAN.  And even Windows machines
may have ways to send raw Ethernet packets that could be crafted by
an attacker to appear to be deadly IPv4 240/4 packets.)

John



FreeBSD users of 127/8

2021-11-22 Thread John Gilmore
J. Hellenthal wrote:
> FreeBSD operators have been using this space for quite a long time for
> many NAT'ing reasons including firewalls and other services behind
> them for jail routing and such.
> 
> https://dan.langille.org/2013/12/29/freebsd-jails-on-non-routable-ip-addresses/
> 
> That's just one example that I've seen repeated in multiple other
> ways, one of which was a jail operator with about 250 addresses out of
> that range enabling his jail-routed services.

Thank you for letting us know!  We would be happy to improve
the draft so that it has less impact on such pre-existing users.

When we surveyed publicly visible applications based on Linux,
we only found them configured to use the lowest /16.  It's true
that any system operator could configure their system in any part
of 127/8, but we focused on the default configurations of popular
software (such as systemd and Kubernetes).
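
(For the curious, a rough sketch of how such a per-host check can be done on
Linux by parsing /proc/net/tcp directly; an added illustration, not the survey
method from the draft:)

    import ipaddress, socket

    loop8 = ipaddress.ip_network("127.0.0.0/8")
    low16 = ipaddress.ip_network("127.0.0.0/16")

    # Print TCP listeners bound inside 127/8 but outside the lowest /16.
    with open("/proc/net/tcp") as f:
        next(f)                                  # skip the header line
        for line in f:
            fields = line.split()
            local, state = fields[1], fields[3]
            if state != "0A":                    # 0A == LISTEN
                continue
            hexaddr, hexport = local.split(":")
            addr = ipaddress.IPv4Address(socket.ntohl(int(hexaddr, 16)))
            if addr in loop8 and addr not in low16:
                print(addr, "port", int(hexport, 16))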

Do you know of any FreeBSD software that comes with a default
configuration in 127/8 but not in 127/16?  (It looks like the web page
you referenced is about specific manual configuration, not about
the default behavior of supplied software.)

I do not know the details of FreeBSD jail configuration, nor the precise
behavior of its loopback interface.  From my limited understanding, it
looks like the jail configured in the web page you referenced, with
address 127.1.0.128/32 on lo1, would provide loopback service regardless
of whether the default address on lo0 was 127.0.0.1/8 or 127.0.0.1/16.
That's because lo1 is a separate interface from lo0, and the "lo"
interfaces always loop back any packets sent through them, no matter
what addresses are configured on them.  (Indeed the example
configures it with a 10.80.0.128 address as well, which would not
normally be considered a loopback address.)

So, if I am right, then even if our current Internet-Draft became a
standard and FreeBSD was modified to implement it, the recommended
commands would continue to work.  The only impact would be that such a
FreeBSD machine would be unable to reach a potential global Internet
service hosted out on the Internet at address 127.1.0.128 (because a
local interface has been configured at that address, shadowing the
globally reachable address).  I anticipate that no such global services
would be created before 2026 at the very earliest (other than for
reachability testing), and likely much later in the 2020's or early
2030's.

If it turns out that FreeBSD usage of 127.1/16 is widespread, and the
above analysis is incorrect or unacceptable to the FreeBSD community, we
would be happy to modify the draft to retain default loopback behavior
on 127.0.0.1/17 rather than 127.0.0.1/16.  That would include both
127.0.x.y and 127.1.x.y as default loopback addresses.  This would
completely resolve the issue presented on the "FreeBSD jails on
non-routable IP addresses" web page, while still recovering more than 16
million addresses for global use.

The worst case might be if FreeBSD sysadmins have become accustomed to
picking "random" addresses manually from all over the 127/8 space.  If
so, it is not unreasonable to expect that, when manually configuring a node
to use "non-routable" addresses, some of them might become routable in the
future.  When upgrading any machine
to a new OS release, various small things typically need adjusting to
fit into the revised OS.  Renumbering the in-system use of up to a few
hundred non-routable addresses like 127.44.22.66 into addresses like
127.0.22.66 (in a smaller non-routable range that would still contain
65,000 or 130,000 addresses) might be one of those things that
could be easily adjusted during such an upgrade.
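
(A hypothetical helper to illustrate that adjustment; the function name and
the collapse-into-127.0.0.0/16 policy are mine, not the draft's, and any
collisions between squashed addresses would still need a manual check:)

    import ipaddress

    LOOPBACK_8 = ipaddress.ip_network("127.0.0.0/8")
    RETAINED_16 = ipaddress.ip_network("127.0.0.0/16")

    def squash(addr: str) -> str:
        """Map 127.x.y.z into the retained 127.0.y.z range; leave others alone."""
        a = ipaddress.ip_address(addr)
        if a in LOOPBACK_8 and a not in RETAINED_16:
            octets = addr.split(".")
            return f"127.0.{octets[2]}.{octets[3]}"
        return addr

    print(squash("127.44.22.66"))   # -> 127.0.22.66
    print(squash("10.80.0.128"))    # unchanged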

John