Re: Guest Column: Kentik's Doug Madory, Last Call for Upcoming ISOC Course + More

2023-09-08 Thread John Gilmore
Ryan Hamel  wrote:
> For you to say, "my privacy has been sold", is simply not true.

I agree with you somewhat about tracking links.  They only spy on a
person when that person tries to follow them.  I do find it much less
useful to read mailing lists that include references to external
resources that I decline to access, because I don't want to follow
bugged links.

But the "web bugs" that I mentioned as a second default-on Mailchimp
tracking technology ARE specifically designed to be triggered any time a
recipient reads a message in an HTML-based web browser.

Back when postal mail was the default, senders had no idea whether the
recipient opened, read, or forwarded a letter, versus tossing it into
the fireplace as kindling.  Society carried forward that expectation
when postal mail was gradually replaced by electronic mail.  Ordinary
email senders don't know if you have read their message (unless they get
social clues from your subsequent actions, just as with paper mail).
Tracking was never part of the Internet email protocols; it was glued
on by abusing HTML email features and sending each recipient unique
URLs, whose corresponding web server logs when they are accessed.
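To make the mechanism concrete, here is a minimal sketch (hypothetical
host names, URLs, and function names throughout; this is not Mailchimp's
actual code) of how a bulk mailer can rewrite each copy of a message so
that the tracking server can log both opens and clicks per recipient:

```python
import hashlib

BASE = "https://track.example.net"  # hypothetical tracking host

def instrument(html_body: str, recipient: str, campaign: str) -> str:
    """Rewrite one recipient's copy of a message: a unique token ties
    every fetch logged by the tracking web server back to that recipient."""
    token = hashlib.sha256(f"{campaign}:{recipient}".encode()).hexdigest()[:16]
    # Click tracking: the original link is replaced with a per-recipient
    # redirect URL; following it logs the click, then redirects onward.
    bugged_link = f"{BASE}/c/{token}"
    # Open tracking: an invisible 1x1 "web bug" image; merely rendering
    # the message fetches it, telling the server the mail was opened.
    web_bug = f'<img src="{BASE}/o/{token}" width="1" height="1" alt="">'
    return html_body.replace("https://example.com/article", bugged_link) + web_bug
```

Note that nothing in the delivered message reveals any of this to the
reader; the rewriting happens entirely at the sending intermediary.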

These email tracking technologies deliberately violate the social
expectation that reading a letter is a private act.  They produce
detailed records of the private, in-home or at-work activities of every
recipient.  They do all this covertly; you will not find a MailChimp
mailing list message plainly telling you, "If you want to safeguard your
privacy as an email reader, do not open these messages, because we have
filled them with spyware."  That would produce too many unsubscribes and
too much outrage.  Instead, a recipient has to be technically
sophisticated to even notice that it's happening.  (Many bulk email
senders also don't know that their emails have spyware quietly inserted
into them as they are distributed.  I have engaged on this topic with
many nonprofit CEOs and marketing executives, who really had no idea.)

Those detailed email-reading and link-clicking records are not just
accessible to the sender.  There's an agency problem.  They are kept and
stored and sold by the intermediary (MailChimp), both individually and
in bulk.  They are accessible to any government that wants to ask,
without a warrant, without probable cause, in bulk or individually,
since they are "third-party" records about you, like your banking
records or license-plate-reader records.  They are accessible to private
investigators via data brokers.  They are accessible to any business
that offers a sufficiently attractive deal to MailChimp -- places like
Google or Facebook who make billions of dollars a year from tracking
people to manipulate them with advertising.

And wouldn't you like to know just which emails your competitors'
engineers and executives are reading, and when, and where, and how many
times, and whether they forwarded the messages?  (I've often wanted the
Google Detective Agency, that I could merely pay to tell me what my wife
or my competitor or that rude guy who insulted me is searching for on
Google, what web pages they are looking at, what emails they are reading
or sending, and exactly where they are navigating in their car or on
their bike or on transit.  Google has all this information; why won't
they sell it to me?  They definitely sell it to the government, so why
not to me?  It's amazing to me that people treat Google like Santa Claus
giving them free gifts, when it's really like an NSA.gov that is
unencumbered by laws or oversight.)  MailChimp isn't as bad as Google.
Its scope is smaller, but its defaults are deliberately bad, and it's
created quite a honeypot of trillions of records about billions of
people.  The point is that besides being a gross violation of the
personal privacy of the home and office, this data also has real
commercial value.

I suggest that as a technically aware organization, NANOG.org should not
be creating detailed spy dossiers on its members who read emails, and
then letting its subcontractor MailChimp sell or trade that info out
into the world.

John Gilmore


Re: Guest Column: Kentik's Doug Madory, Last Call for Upcoming ISOC Course + More

2023-09-08 Thread John Gilmore
It is totally possible to turn off the spyware in MailChimp.  You just
need to buy an actual commercial account rather than using their
"free" service.  To save $13 or $20 per month, you are instead selling
the privacy of every recipient of your emails.  See:

  https://mailchimp.com/help/enable-and-view-click-tracking/

  "Check the Track clicks box to enable click tracking, or uncheck the
  box to disable click tracking.  ...  Mailchimp will continue to
  redirect URLs for users with free account plans to protect against
  malicious links.  ...  When a paid user turns off click tracking,
  Mailchimp will continue to redirect their URLs until certain account
  activity thresholds are met."

Don't forget to turn off the spyware 1x1 pixel "web bugs" that
MailChimp inserts by default, too:

  https://mailchimp.com/help/about-open-tracking/

John


Re: NTP Sync Issue Across Tata (Europe)

2023-08-12 Thread John Gilmore
Forrest Christian (List Account)  wrote:
> > > At some point, using publicly available NTP sources is redundant
> > > unless one wants to mitigate away the risks behind failure of the GPS
> > > system itself.

On Fri, Aug 11, 2023, 3:33 AM Masataka Ohta wrote:
> > Your assumption that public NTP servers were not GPS-derived NTP
> > servers is just wrong.

Subsequent conversation has shown that you are both right here.

Yes, many public NTP servers ARE using GPS-derived time.
Yes, some public NTP servers ARE NOT using GPS-derived time.

Up to this point, popular public NTP pools have not made these
distinctions readily configurable, though.

Sites that need to run even during a war, or a similar situation that
is likely to disrupt or distort GPS, might like to have access to NTP
servers that are completely independent of GPS.

At one point I proposed that some big NTP server pools be segregated by
names, to distinguish between GPS-derived time and national-standard
derived time.  For example, the two domain names could be:

  fromnist.pool.tick.tock
  fromgps.pool.tick.tock

If you wanted particular redundancy, say because you have a local GPS
clock and you want a non-GPS check on that, you'd use
fromnist.pool.tick.tock (or fromnict.pool.tick.tock for the Japanese
timebase, etc).

(If you were agnostic about where your time comes from, you would just
use a generic domain name like vendorname.pool.tick.tock.)
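Under such a scheme, a client wanting a non-GPS cross-check might be
configured like this (a sketch in chrony syntax; the pool names are the
hypothetical examples from this proposal, not real DNS names):

```
# Prefer time traceable to a national standard (hypothetical pool name)
pool fromnist.pool.tick.tock iburst

# A timebase-agnostic client would instead use something generic:
# pool vendorname.pool.tick.tock iburst
```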

An automated tool could periodically verify that the stratum-0 source
currently being used by each node in each of the pools is actually the
one advertised in its domain name.  Alerting the relevant system
administrators to any mismatch would let those clocks continue running
on a backup timebase, while making it more likely that a human would
work to restore their access to the correct stratum-0 source.
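A stratum-1 server advertises its reference clock in the NTP refid
field as a short ASCII code ("GPS", "NIST", "ACTS", ...), so the
comparison step of such a tool is simple.  A rough sketch of that logic
(the pool names are the hypothetical ones from this proposal, and a real
tool would obtain each node's refid by querying it over NTP, e.g. with
ntplib):

```python
# Expected stratum-1 reference identifiers per (hypothetical) pool name.
EXPECTED = {
    "fromnist.pool.tick.tock": {"NIST", "ACTS"},  # national-standard sources
    "fromgps.pool.tick.tock": {"GPS"},            # GPS-derived sources
}

def check_node(pool, refid):
    """Return True if a node's advertised refid matches its pool's claim.

    `refid` is the raw 4-byte reference ID from an NTP response; for
    stratum-1 servers it is a NUL-padded ASCII code such as b"GPS\x00".
    """
    code = refid.rstrip(b"\x00").decode("ascii", errors="replace")
    return code in EXPECTED.get(pool, set())

def audit(results):
    """Given {pool: {node_ip: refid}}, list nodes whose actual source
    disagrees with the pool name, for alerting an administrator."""
    return [
        f"{pool}: {node} advertises {refid!r}"
        for pool, nodes in results.items()
        for node, refid in nodes.items()
        if not check_node(pool, refid)
    ]
```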

So far this is just an idea.

John

PS: When we say "GPS", do we really mean any GNSS (global navigation
satellite system)?  There are now four such systems that have global
coverage, plus regionals.  While they attempt to coordinate their
time-bases and reference-frames, they are using different hardware and
systems, and are under different administration, so there are some
differences in the clock values returned by each GNSS.  These
differences and discontinuities have ranged up to 100 ns in normal
operation, and higher in the past.  See:

  Nicolini, Luca; Caporali, Alessandro (9 January 2018). "Investigation
  on Reference Frames and Time Systems in Multi-GNSS". Remote
  Sensing. 10 (1): 80. doi:10.3390/rs10010080.
  https://www.mdpi.com/2072-4292/10/1/80


Re: NTP Sync Issue Across Tata (Europe)

2023-08-08 Thread John Gilmore
> I was also speaking specifically about installing GPS antennas in
> viable places, not using a facility-provided GPS or NTP service.

Am I confused?  Getting the time over a multi-gigabit Internet from a
national time standard agency such as NIST (or your local country's
equivalent) should produce far better accuracy and stability than
relying on locally received GPS signals.  GPS uses very weak radio
signals which are regularly spoofed by all sorts of bad actors:

  https://www.gps.gov/spectrum/jamming/

for all sorts of reasons (like misleading drone navigation):

  https://en.wikipedia.org/wiki/Iran%E2%80%93U.S._RQ-170_incident

Depending on satnav systems creates a large single point of failure for
worldwide civilian infrastructure.

Jamming GPS with subtly fake time data near big data centers seems like
an easy move that would cause all sorts of distributed algorithms to
start failing in unusual ways.  And in a more serious wartime attack,
many or most GPS satellites themselves would be destroyed or disabled.
Yet digital radio modulations like FT8 or DMR rely on tight time
synchronization among different transmitters.  So do many modern
cellphone modulations -- not to mention distributed database sync
algorithms.  Depending on any of these for emergency communications when
their time comes from GPS is a recipe for having no communications
during wars or cyber-wars in which GPS satellites are attacked or
jammed.  See a longer explanation here:

  https://www.ardc.net/apply/grants/2020-grants/grant-ntpsec/

I suspect that even today, if you rely on civilian GPS time near the US
White House, Pentagon, or other military targets like bases, you will
discover "anomalies" in the local radio GPS data, compared to what you
get from an authenticated time standard over NTP.  How reliable is
civilian GPS time in Ukraine these days?

John



Re: Alternative Re: ipv4/25s and above Re: 202211201009.AYC

2022-11-27 Thread John Gilmore
John Curran  wrote:
>> https://datatracker.ietf.org/doc/draft-schoen-intarea-unicast-240/
>
> ... this Internet draft ... can't be safely deployed in the actual
> real-world Internet

The draft *has been* safely deployed in the actual real-world Internet.
It is in all Linux nodes since 2008, and all recent Cisco routers.  We
know that this step is safe, because it was already done a decade ago,
and the Internet did not descend into flames (except on mailing lists
;-).  The draft is just trying to help the paperwork catch up with the
implementations.

So it seems that John Curran was criticizing a strawman rather than the
draft above (which I co-authored).  Perhaps the confusion is that John
thought that the draft would actually allocate 240/4 addresses to
end-users for end-user purposes.  Doing that would be unsafe in today's
big Internet (though major squatters are already using these addresses
in private clouds, as RIPE and Google have documented).  Allocating
these addresses would be far more controversial than just updating the
code in IPv4 implementations.  Therefore, the draft doesn't do that.
Instead it would just clear out the cobwebs in implementations, which
would, some years later and after more work, allow a responsible party to
make such allocations.  John's suggestion that "it's unsafe to do this
safe change, because we don't yet know *when* some later change will be
safe" is simply incorrect.

Perhaps what the draft failed to explain is that "Merely implementing
unicast treatment of 240/4 addresses in routers and operating systems,
as this draft proposes, does not cause any interoperability issues.
Hundreds of millions of IPv4 nodes currently contain this unicast
treatment, and all are interoperating successfully with each other and
with non-updated nodes."  We'll add something like that to the next
version.

John Gilmore
IPv4 Unicast Extensions Project



Re: any dangers of filtering every /24 on full internet table to preserve FIB space ?

2022-10-10 Thread John Gilmore
Randy Bush  wrote:
> it is a tragedy that cidr and an open market has helped us more than
> ipv6 has.

True.

Maybe cidr and an open market for ipv6 addresses would reduce the tragedy?

John


Re: [External] Normal ARIN registration service fees for LRSA entrants after 31 Dec 2023

2022-09-19 Thread John Gilmore
John Curran  wrote:
> [challenges by legacy registrants] has been before judges and resolved
> numerous times.
> 
> We’ve actually had the matter before many judges, and have never been
> ordered to do anything other than operate the registry per the number
> resource policy as developed by this community – this has been the
> consistent outcome throughout both civil and bankruptcy
> proceedings.

Is there a public archive of these court proceedings?  Or even a list
of which cases have involved ARIN (or another RIR)?

What can the community learn from what help resource holders have asked
courts for, and what help they eventually got?

John

PS: Re another RIR: There's a short list of some of the ~50 lawsuits
against AFRINIC in its Wikipedia page:

  https://en.wikipedia.org/wiki/AFRINIC#Controversies_&_Scandals

These are mostly to do with corruption, theft, and harassment.  But an
important subtheme includes what power AFRINIC has to seize IP addresses
that were legitimately allocated to recipients.  In the "Cloud
Innovation" case, CI got addresses under standard policy, but years
later, as I recall, AFRINIC tried to retroactively impose a new "no
renting vm's" policy and a new requirement that all the addresses be
hosted in Africa, even by global customers.  After AFRINIC threatened to
immediately revoke CI's membership and take back the addresses over
this, CI sued AFRINIC to keep the status quo, keeping their business
alive, while the courts sort out whether AFRINIC has the power to do so.
Since then, it's mostly been procedural scuffling and some bad faith
negotiations.  If neither party goes bankrupt nor settles, it's possible
that the courts of Mauritius will answer the question about whether
their RIR has the power to impose new policies and then reclaim
allocated addresses for violating them.


Re: [External] Normal ARIN registration service fees for LRSA entrants after 31 Dec 2023 (was: Fwd: [arin-announce] Availability of the Legacy Fee Cap for New LRSA Entrants Ending as of 31 December 20

2022-09-16 Thread John Gilmore
John Curran  wrote:
> ... the long-term direction is to provide the same services to all
> customers under the same agreement and fees – anything else wouldn’t
> be equitable.

There are many "anything else"s that would indeed be equitable.  It is
equitable for businesses to sell yesterday's bread at a lower price than
today's bread.  Or to rent unused hotel rooms to late-night transients
for lower prices than those charged to people who want pre-booked
certainty about their overnight shelter.  ARIN could equitably charge
different prices to people in different situations; it already does.
And ARIN could equitably offer services to non-members, by charging them
transaction fees for services rendered, rather than trying to force them
into a disadvantageous long term contract.  Please don't confuse
"seeking equity" with "forcing everyone into the same procrustean bed".

As a simple example, ARIN's contract need not require its customers to
give up their resources when ceasing to pay ARIN for services.  (There's
an existence proof: RIPE's doesn't.)  Such a contract would likely
result in more-equitable sharing of costs, since it would encourage
legacy holders to pay ARIN (and legacy holders are still more than a
quarter of the total IP addresses, possibly much more).  The fact that
ARIN hasn't made this happen says nothing about equity; it's about
something else.

This whole tussle is about power.  ARIN wants the power to take away
legacy resources, while their current owners don't want that to happen.
ARIN wants to be the puppeteer who pulls all the strings for the North
American Internet.  It pursues this desire by stealth and misdirection
(e.g. "We strongly encourage all legacy resource holders who have not
yet signed an LRSA to cover their legacy resources to consider doing so
before 31 December 2023 in order to secure the most favorable fees for
their ARIN Services...")  ARIN is also trying to encourage ISPs to
demand RPKI before providing transit to IP address holders, which would
turn its optional RPKI service (that it has tied by contract into ARIN
gaining control over legacy resources) into an effectively mandatory
RPKI service.

ARIN hides its power grab behind "our policies are set by our community"
and "our board is elected by our community" misdirections.  Its voting
community consists almost entirely of those who aren't legacy holders
(by definition: if you accept their contract, your legacy resource
ownership goes away; if you don't, you can't vote).  That community
would love to confiscate some "underused" legacy IP addresses to be
handed out for free to their own "waiting list".  So this is equivalent
to putting foxes in charge of policy for a henhouse.

Now that markets exist for IP addresses, all that IP addresses need is a
deed-registry to discourage fraud, like a county real-estate registrar's
office.  IP addresses no longer need a bureaucracy for socialistic
determinations about which supplicants "deserve" addresses.  Addresses
now have prices, and if you want some, you buy them.  Deed registries
get to charge fees for transactions, but they don't get to take away
your property, nor tell you that you can't buy any more property because
they disapprove of how you managed your previous properties.  Actual
ownership of real estate is defined by contracts and courts, not by the
registry, which is just a set of pointers to help people figure out the
history and current status of each parcel.  The registry is important,
but it's not definitive.

Deed-registry is apparently not a model that ARIN wants to be operating
in.  They initially tried to refuse to record purchases of address
blocks, because it violated their model of "if you don't use your IP
addresses, you must give them back to us and receive no money for them".
They saw their job as being the power broker who hands out free favors.
But when their supply of free IP addresses dried up, they had no
remaining function other than to record ownership (be a deed registry),
and to run an occasional conference.  It dawned on them that if they
refused to record these transactions, they would not even be a reliable
deed-registry; they would have entirely outlived their usefulness.  So
they reluctantly agreed to do that job, but their policies are still
left over from their power-broker past.  They'd love to go back to it,
if only they could figure out how.  IPv6?  Sure!  RPKI maybe?  Worth a
try!

ARIN prefers to be a power broker rather than a scribe.  Who can blame
them for that?  But don't mistake their strategy for stewardship.
"Doing what the community wants" or "seeking the equitable thing" quacks
like stewardship, so of course they brand themselves that way.  But
in my opinion their power-seeking is self-serving, not community-serving.

John Gilmore



Re: Normal ARIN registration service fees for LRSA entrants after 31 Dec 2023 (was: Fwd: [arin-announce] Availability of the Legacy Fee Cap for New LRSA Entrants Ending as of 31 December 2023)

2022-09-15 Thread John Gilmore
John Curran wrote:
> > We strongly encourage all legacy resource holders who have not yet
> > signed an LRSA to cover their legacy resources to

Randy Bush  wrote:
> consult a competent lawyer before signing an LRSA

Amen to that.  ARIN's stance on legacy resources has traditionally been
that ARIN would prefer to charge you annually for them, and then
"recover" them (take them away from you) if you ever stop paying, or if
they ever decide that you are not using them wisely.  If you once agree
to an ARIN contract, your resources lose their "legacy" status and you
become just another sharecropper subject to ARIN's future benevolence or
lack thereof.

The change recently announced by John Curran will make the situation
very slightly worse, by making ARIN's annual fees for legacy resources
changeable at their option, instead of being capped by contract.  ARIN
management could have changed their offer to be better, if they wanted
to attract legacy users, but they made an explicit choice to do the
opposite.

By contrast, RIPE has developed a much more welcoming stance on legacy
resources, including:

  *  retaining the legacy status of resources after a transfer or sale
  *  allowing resources to be registered without paying annual fees to RIPE
     (merely paying a one-time transaction fee), so that later non-payment
     of annual fees can't be used as an excuse to steal the resources
  *  agreeing that RIPE members will keep all their legacy resources even
     if they later cease to be RIPE members

You are within the RIPE service area if your network touches Europe,
northern Asia, or Greenland.  This can be as simple as having a rented
or donated server located in Europe, or as complicated as running a
worldwide service provider.  If you have a presence there, you can
transfer your worldwide resources out from under ARIN policies and put
them under RIPE's jurisdiction instead.

Moving to RIPE is not an unalloyed good; Europeans invented bureaucracy,
and RIPE pursues it with vigor.  And getting the above treatment may
require firmly asserting to RIPE that you want it, rather than accepting
the defaults.  But their motives are more benevolent than ARIN's toward
legacy resource holders; RIPE honestly seems to want to gather in legacy
resource holders, either as RIPE members or not, without reducing any of
the holders' rights or abilities.  I commend them for that.

Other RIRs may have other good or bad policies about legacy resource
holders.  As Randy proposed, consult a lawyer competent in legacy
resource registration issues before making any changes.

John


Re: Serious Juniper Hardware EoL Announcements

2022-06-14 Thread John Gilmore
Matthew Petach  wrote:
> https://cacm.acm.org/news/257742-german-factory-fire-could-worsen-global-chip-shortage/fulltext
> 
> That was the *sole* supplier of extreme ultraviolet lithography machines
> for every major chip manufacturer on the planet.
> 
> Chip shortages will only get worse for the next several years.  The light
> at the end of the tunnel is unfortunately *not* coming from an ultraviolet
> lithography machine.  :(

It's quite trendy (but inaccurate) to declare that everything sucks,
human life on the planet is ending, etc.  Matthew's last paragraph seems
to be one of those unduly dire conclusions, based on subsequent news
after January.  See:

  https://www.asml.com/en/news/press-releases/2022/update-fire-incident-at-asml-berlin

  https://www.cnbc.com/2022/01/19/asml-profit-beats-despite-berlin-fire-sees-20percent-sales-growth-in-2022-.html

Those with a detailed interest in the topic can speak directly with
Monique Mols, head of media relations at ASML.com, at +31 652 844 418,
or Ryan Young, US media relations manager, +1 480 205 8659.  Ryan
confirmed to me today that the latest news is in the above links: there
is expected to be no impact from the fire on ASML's extreme UV delivery
schedule.  He says they will provide a further update at their large
annual meeting in about a month.

John



Re: Serious Juniper Hardware EoL Announcements

2022-06-14 Thread John Gilmore
Dave Taht  wrote:
> > Then it was "what can we do with what we can afford" now it's more
> > like "What can we do with what we have (or can actually get)"?
> 
> Like, working on better software...

Like, deploying the other 300 million IPv4 addresses that are currently
lying around unused.  They remain formally unused due to three
interlocking supply chain problems: at IETF, ICANN, and vendors.  IETF's
is caused by a "we must force everyone to abandon trailing edge
technology" attitude.  ICANN's is because nobody is sure how to allocate
~$15B worth of end-user value into a calcified IP address market
dominated by government-created regional monopolies doing allocation by
fiat.

Vendors have leapfrogged the IETF and ICANN processes, and most have
deployed the key one-line software patches needed to fully enable these
addresses in OS's and routers.  Microsoft is the only major vendor
seemingly committed to never doing so.  Our project continues to track
progress in this area, and to test and document compatibility.
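For illustration, the gist of such a patch is that the stack stops
treating all of 240/4 ("Class E") as unusable and reserves only the
limited-broadcast address.  A simplified sketch of the before/after
logic in Python (this models the idea, not any vendor's actual code):

```python
import ipaddress

def is_usable_unicast_before(addr: str) -> bool:
    """Old behavior: everything in 240.0.0.0/4 ("Class E") is rejected."""
    a = ipaddress.IPv4Address(addr)
    return a not in ipaddress.ip_network("240.0.0.0/4")

def is_usable_unicast_after(addr: str) -> bool:
    """Patched behavior: only 255.255.255.255 (limited broadcast)
    remains special; the rest of 240/4 is ordinary unicast."""
    a = ipaddress.IPv4Address(addr)
    return a != ipaddress.IPv4Address("255.255.255.255")
```

The actual kernel and router changes amount to narrowing exactly this
kind of address-classification check.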

John
IPv4 Unicast Extensions Project 


Re: 2749 routes AT RISK - Re: TIMELY/IMPORTANT - Approximately 40 hours until potentially significant routing changes (re: Retirement of ARIN Non-Authenticated IRR scheduled for 4 April 2022)

2022-04-04 Thread John Gilmore
Job Snijders via NANOG  wrote:
> our community also has to be cognizant about there being parts of the
> Internet which are not squatting on anyone's numbers *and* also are
> not contracted to a specific RIR.

Let's not undermine one of the few remaining widely distributed (with no
center) technical achievements behind the Internet -- the decentralized
routing system.

I'm on the board of an organization holding a large legacy allocation
that is deliberately NOT an ARIN (or other RIR) member.  And I have a
small address block of my
own, ditto.

ARIN doesn't provide authenticated RPKI entries for just anybody.  You
have to pay them for that service.  And in order to pay them, you have
to sign their contract.  And if you sign that contract, ARIN can take
away your legacy allocation -- anytime they decide it would be in their
best interest.  Whereas, if you don't sign, the courts have held that
you have a *property right* in your IP addresses and they *belong* to
you.  As a result, most legacy address holders (a large fraction of the
Internet addresses) have declined to sign such contracts, pay such
bills, and thus can't be in the ARIN authenticated routing registry.

For years, ARIN has been deliberately limiting access to the RPKI
registry as a lever to force people to sign one-sided contracts
beneficial to ARIN.  (They do the same lever thing when you sell an
address block -- at ARIN, it loses its legacy status, requiring the
recipient to pay annual rent to ARIN, and risk losing their block if
political winds shift.)

The pro-RPKI faction also seems to have completely ignored what I
consider a major concern among anti-RPKI folks.  The distributed
Internet routing system is resilient to centralized failures, and should
remain so.  Inserting five points of failure (signatures of RIRs) would
undermine that resilience.

Also, centralizing control over route acceptance can be used for
censorship.  If the RIRs succeed in convincing "enough of the net" to
reject any route that doesn't come with an RIR signature, then any
government with jurisdiction over those RIRs can force them to not sign
routes for sites that are politically incorrect.  How convenient -- for
authoritarians.  You can have all the IP addresses you want, you just
can't get 90% of the ISPs in the world to route packets to them.

There is no shortage of Horsemen of the Infopocalypse (child porn,
terrorism, sex slavery, Covid misinformation, manipulative propaganda,
war news, copyright violations, etc, etc, etc) that Absolutely Need To
Be Stamped Out Today whenever politicians decide that Something Must Be
Done.  As an example, we have regularly seen courts force centralized
domain registrars to reject perfectly good applicants for just such
reasons (e.g. SciHub).  The distributed Internet has "routed around"
their ability to censor such information via the routing table.  ISPs
should not hand governments a tool that they have abused so many times
in the past.

John



Re: Let's Focus on Moving Forward Re: V6 still not supported

2022-03-30 Thread John Gilmore
Tom Beecher  wrote:
> I'd be curious to see the data you guys have collected on what it has been
> confirmed to work on if that's available somewhere.

The Implementation Status of unicast 240/4 is in the Appendix of our draft:

  https://datatracker.ietf.org/doc/draft-schoen-intarea-unicast-240/

Enjoy -- it's a long list.

In addition to what's in the published draft, some research that I did
last night also determined that 240/4 as unicast was released in Fedora
9 in May 2008, and in Ubuntu 8.10 in October 2008.

John



Re: Let's Focus on Moving Forward Re: V6 still not supported

2022-03-28 Thread John Gilmore
Christopher Morrow  wrote:
> I think the advice in the draft, and on the quoted page of Google cloud
> docs is that you can use whatever address space you want for your VPC
> network. I think it also says that choosing poorly could make portions of
> the internet unreachable.
> 
> I don't see that the docs specify usage of 240/4 though.

Thank you for catching this!  Draft-schoen-intarea-unicast-240 links its
reference [VPC] to:

  https://cloud.google.com/vpc/docs/vpc#valid-ranges

As late as March 18, 2022, that page included a table of "Valid ranges"
that include "240.0.0.0/4", which you can see in the Internet Archive's
"Wayback Machine" copy of the page:

  http://web.archive.org/web/20220318080636/https://cloud.google.com/vpc/docs/vpc

  A subnet's primary and secondary IP address ranges are regional
  internal IP addresses. The following table describes valid ranges.  ...

  240.0.0.0/4   Reserved for future use (Class E) as noted in
                RFC 5735 and RFC 1112.

                Some operating systems do not support the
                use of this range, so verify that your OS
                supports it before creating subnets that use
                this range.

However, as of March 20, Google moved that table into a subsidiary page,
the "Subnets overview", which is now linked from the original page:

  http://web.archive.org/web/20220320031522/https://cloud.google.com/vpc/docs/vpc
  http://web.archive.org/web/20220328102630/https://cloud.google.com/vpc/docs/subnets

The same information about 240/4 is there.

Thanks again for the bug report!  We'll update the URL in the next
version of the draft.

John




Re: Let's Focus on Moving Forward Re: V6 still not supported

2022-03-26 Thread John Gilmore
Tom Beecher  wrote:
> > */writing/* and */deploying/* the code that will allow the use of 240/4 the
> > way you expect
> 
> While Mr. Chen may have considered that, he has repeatedly hand waved that
> it's 'not that big a deal.', so I don't think he adequately grasps the
> scale of that challenge.

From multiple years of patching and testing, the IPv4 Unicast Extensions
Project knows that 240/4 ALREADY WORKS in a large fraction of the
Internet.  Including all the Linux servers and desktops, all the Android
phones and tablets, all the MacOS machines, all the iOS phones, many of
the home wifi gateways.  All the Ethernet switches.  And some less
popular stuff like routers from Cisco, Juniper, and OpenWRT.  Most of
these started working A DECADE AGO.  If others grasp the scale of the
challenge better than we do, I'm happy to learn from them.

A traceroute from my machine to 240.1.2.3 goes through six routers at my
ISP before stopping (probably at the first default-route-free router).

Today Google is documenting to its cloud customers that they should use
240/4 for internal networks.  (Read draft-schoen-intarea-unicast-240 for
the citation.)  We have received inquiries from two other huge Internet
companies, which are investigating or already using 240/4 as private
IPv4 address space.

In short, we are actually making it work, and writing a spec for what
already works.  Our detractors are arguing: not that it doesn't work,
but that we should instead seek to accomplish somebody else's goals.

John

PS: Mr. Abraham Chen's effort is not related to ours.  Our drafts are
agnostic about what 240/4 should be used for after we enable it as
ordinary unicast.  His EzIP overlay network effort is one that I don't
fully understand.  What I do understand is that since his effort uses
240/4 addresses as the outer addresses in IPv4 packets, it couldn't work
without reaching our goal first: allowing any site on the Internet to
send unicast packets to or from 240.0.0.1 and having them arrive.



Re: V6 still not supported

2022-03-24 Thread John Gilmore
Pascal Thubert (pthubert) via NANOG  wrote:
> I'm personally fond of the IP-in-IP variation that was filed 20+ years
> ago as US patent 7,356,031.

No wonder -- you are listed as the co-inventor!

Just the fact that it is patented (and the patent is still unexpired)
would make it a disfavored candidate for an Internet transition technology.

It was not nice of y'all to try to get a monopoly over nesting headers
for making an overlay network that tunnels things to distant routers.
You have to certify that your work is original in order to even apply
for a patent.  So, nobody had ever thought of that before y'all did?  Really?

John



Re: BOOTP & ARP history

2022-03-19 Thread John Gilmore
Michael Thomas  wrote:
> There were tons of things that were slapped onto IP that were basically 
> experimental like ARP and bootp. CIDR didn't even exist back then.

Speaking as one of the co-designers of BOOTP (RFC 951): yes, it was
experimental.  So why was it "slapped onto" IP?  Well, in those days
the IP protocol itself (now known as IPv4) was an experiment.

The previous method of configuring each computer that was plugged into
an IP network, was to have a human-to-human conversation with your
network administrator, who would manually write down your new IP address
on the local network, and also tell you the IP address of a local
gateway.  You would type this into a file in /etc and save it on your
new computer's local hard drive, and your experimental network would all
start working.

Then it became 1985, disk drives were small and expensive, flash memory
nonexistent.  As a cost-reduction measure, Sun built some of the
first end-user computers that could operate without any nonvolatile
local storage: diskless workstations.  They needed a way to boot without
having a system administrator (or end user) type in the machine's IP
address and the address of their gateway every time it booted.  We
had a network, why didn't we communicate this over the network?

Sun designed and implemented a way of doing this, called RARP (RFC 903).
This did not use IP packets; it required accessing the local Ethernet
using packets with a unique Ethertype.  And it only worked on Ethernet,
not on any other possible LAN technology.  I was the maintainer of the
bootstrap ROMs in Sun workstations.  Bill Croft and I thought we could
do better.  We put our heads together and said, "why can't we use IP
broadcast packets for this?"  (IP support for broadcast packets was also
an experiment at around that time.  IP multicast was barely a cloud on
the horizon.)  You could even write a portable UNIX program to be the
BOOTP server or client (unlike for RARP).
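To make the fixed-format design concrete, here is a sketch (in Python, with illustrative field values) of a minimal BOOTREQUEST as RFC 951 lays it out. Every field is fixed-size, which is what made a small boot-ROM client and a portable UNIX server program practical:

```python
import struct

# RFC 951 BOOTREQUEST: every field is fixed-size, so the whole message
# is always exactly 300 octets.
fmt = "!BBBBIHH4s4s4s4s16s64s128s64s"
pkt = struct.pack(
    fmt,
    1,           # op: 1 = BOOTREQUEST
    1,           # htype: 1 = 10Mb Ethernet
    6,           # hlen: hardware address length
    0,           # hops
    0x12345678,  # xid: transaction id (illustrative value)
    0,           # secs since client began booting
    0,           # unused
    bytes(4),    # ciaddr: client IP, zero = "I don't know it yet"
    bytes(4),    # yiaddr: "your" IP, filled in by the server
    bytes(4),    # siaddr: server IP
    bytes(4),    # giaddr: gateway IP
    b"\x08\x00\x20\x01\x02\x03" + bytes(10),  # chaddr: client MAC, padded to 16
    bytes(64),   # sname: optional server host name
    bytes(128),  # file: boot file name
    bytes(64),   # vend: vendor-specific area
)
print(len(pkt))  # 300
```

The request goes out as a UDP datagram to the IP broadcast address; the reply fills in yiaddr, which is all a diskless machine needs to keep booting.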

We got the BOOTP protocol working in prototype.  Bill understood the RFC
submission process, and we got it published as an experimental RFC.

Sun didn't care.  Their products shipped using RARP.  Bill and I had
lots of other things to do, so we ignored BOOTP for years.

But Sun had a run of good luck at supporting and creating industry
standards like TCP/IP and NFS.  3000 miles away, some competitors hated
Sun's success, so they decided to standardize their products on
"anything that doesn't work like Sun's".  This was the UNIX wars, which
distracted all the UNIX vendors.  They should've been watching
Microsoft, which proceeded to steamroller the computing market and make
almost all of them irrelevant.  I think it might have been DEC that
first adopted BOOTP for actually bootstrapping their products, and
others followed suit.  I don't actually know why they picked it --
especially because I was co-author and I worked at Sun.  But they
started using it, and it caught on among that consortium of non-Sun UNIX
vendors.  And eventually, Mitch Bradley, a hardware designer at Sun with
a penchant for FORTH programming, built the boot code for the first
SPARC machines, and implemented BOOTP in it, so even Sun started using
it.

By 1993, others had built DHCP (RFC 1541) on top of BOOTP, and that went
onto the standards track at IETF.  Everybody who liked the old way was
free to continue manually typing human-assigned IP addresses into every
new computer on their network and keeping them in local nonvolatile
storage.  Plugging into an Ethernet used to require inserting an
electric drill into a fat yellow coaxial cable in your ceiling (see
https://en.wikipedia.org/wiki/10BASE5), and then cleaning out the copper
detritus that would short the cable and bring down the whole network,
and attaching a separate transceiver box with a "vampire tap" that
would touch the center conductor of the coax, then running a separate
transceiver cable to your computer.  The manual IP address configuration
was only a small portion of the pain.  But 3Com migrated Ethernet to
twist-on BNC connectors (https://en.wikipedia.org/wiki/10BASE2), and
users started migrating toward products that worked without bringing in
a technician.  BOOTP and DHCP offered a "plug and play" experience that
those users craved.  So what started as experiments eventually became
expected parts of the IP standards.

ARP was "slapped on" in 1982, long before RARP or BOOTP.  The original
IP specs required that the LAN address must fit into the low order bits
of your IP netblock.  This wasn't well thought through, but IP was an
experiment and there were very few other experiments for its designers
to learn from.  It worked ok when ARPANET was your LAN (see the original
use of 10/8), and when everybody else had Class A addresses, like the
packet radio network or 3-megabit Experimental Ethernet users.  But it
didn't scale up, and it didn't work at all for 10-megabit industry
standard Ethernet, with 48-bit addresses much longer than IP addresses.
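The mismatch is easy to show with a little arithmetic (illustrative, not from the original spec text): no class of IPv4 network leaves enough host bits to embed a 48-bit Ethernet address, which is why a separate mapping protocol like ARP became necessary.

```python
# Pre-ARP "direct mapping": the LAN hardware address had to fit into the
# host bits of the IP network.  Fine for the ARPANET's small host numbers,
# impossible for 48-bit Ethernet MACs.
ETHERNET_MAC_BITS = 48
for name, host_bits in [("class A", 24), ("class B", 16), ("class C", 8)]:
    fits = host_bits >= ETHERNET_MAC_BITS
    print(f"{name}: {host_bits} host bits, fits a 48-bit MAC: {fits}")
```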

Re: V6 still not supported

2022-03-16 Thread John Gilmore
> > Let me say that again.  Among all the reasons why IPv6 didn't take
> > over the world, NONE of them is "because we spent all our time
> > improving IPv4 standards instead".
> 
> I'll somewhat call bullshit on this conclusion from the data
> available. True, none of the reasons directly claim "IPv6 isn't good
> enough because we did X for v4 instead", yet all of them in some way
> refer back to "insufficient resources to make this the top priority."
> which means that any resources being dedicated to improving (or more
> accurately further band-aiding) IPv4 are effectively being taken away
> from solving the problems that exist with IPv6 pretty much by
> definition.

Hi, Owen.  Your reasoning proves too much.

You propose that every minute of every day that every human isn't
actively working at top priority to make IPv6 the default protocol on
the Internet is a misguided effort.  "Pretty much by definition."

"Any resources being dedicated to" eating, sleeping, going to the
bathroom, listening to music, painting a canvas, repairing cars,
steering ships, growing food, running railroads, going to school, going
to work, riding bicycles, ending homelessness, stopping wars, reforming
drug laws, band-aiding IPv4, reducing corruption in government, posting to
mailing lists (as you pointed out -- by posting a message to a mailing
list!), hopping, skipping, and jumping, "are effectively being taken
away from solving the problems that exist with IPv6."

Given the billions of people who eat and sleep for HOURS every day, I
think I am doing pretty well by just coordinating three people part-time
trying to improve IPv4 a little bit.  The eaters' and sleepers' level of
non-IPv6 effort is billions of times stronger than my level of non-IPv6
effort.  Can you forgive me?

John


Re: V6 still not supported

2022-03-16 Thread John Gilmore
It is great to see NANOG members describing some of the real barriers to
widespread IPv6 deployment.  Buggy implementations, lack of consumer
demand, too many other things to do (like rapidly deploying fiber to
customers before they switch to a competitor), lack of IPv6 expertise at
ISPs, lack of ISP demand driving lack of supplier support, and doubled
testing and qualification workload.

As Tim Howe  wrote:
>...  I do not really blame those who don't, because in order
> to get where we are I had to make it my personal mission in life to get
> to a passive FTTP configuration that would work with functional parity
> between v4 and v6...
>   For over a year I had to test gear, which requires a lot of
> time and effort and study and support and managerial latitude.  I had
> to isolate bugs and spend the time reporting them, which often means
> making a pain in the butt out of yourself and championing the issue
> with the vendor (sometimes it means committing to buying things).  I
> had to INSIST on support from vendors and refuse to buy things that
> didn't work.  I had to buy new gear I would not have otherwise needed.
> I also had to "fire" a couple of vendors and purge them from my
> network; I even sent back an entire shipment of gear to a vendor due to
> broken promises.
>   Basically I had to be extremely unreasonable.  My position is
> unique in that I was able to do these things and get away with it.  I
> can't blame anyone for not going down that road.

What struck me is how NONE of those challenges in doing IPv6 deployment
in the field had anything to do with fending off attempts to make IPv4
better.

Let me say that again.  Among all the reasons why IPv6 didn't take
over the world, NONE of them is "because we spent all our time
improving IPv4 standards instead".

John Gilmore





Re: CC: s to Non List Members (was Re: 202203080924.AYC Re: 202203071610.AYC Re: Making Use of 240/4 NetBlock)

2022-03-09 Thread John Gilmore
John Levine  wrote:
> FWIW, I also don't think that repurposing 240/4 is a good idea.  To be
> useful it would require that every host on the Internet update its
> network stack, which would take on the order of a decade...

Those network stacks were updated for 240/4 in 2008-2009 -- a decade
ago.  See the Implementation Status section of our draft:

  https://datatracker.ietf.org/doc/draft-schoen-intarea-unicast-240/

Major networks are already squatting on the space internally, because
they tried it and it works.  We have running code.  The future is now.
We are ready to update the standards.

The only major OS that doesn't support 240/4 is Microsoft Windows -- and
it comes with regular online updates.  So if IETF made the decision to
make it unicast space, most MS OS users could be updated within less
than a year.
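One way to see how entrenched the old classification still is (a quick check, using CPython's standard ipaddress module, which mirrors the IANA registries):

```python
import ipaddress

addr = ipaddress.IPv4Address("240.1.2.3")
print(addr in ipaddress.ip_network("240.0.0.0/4"))  # True
print(addr.is_reserved)                             # True: still "Class E"
print(addr.is_global)                               # False: not yet unicast
```

Flipping those registry entries is exactly the kind of paperwork the draft proposes, since the packet-forwarding code already works.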

> It's basically
> the same amount of work as getting everything to work on IPv6.

If that were true, we'd be living in IPv6 heaven by now.

It doesn't take any OS upgrades for "getting everything to work on
IPv6".  All the OS's and routers have supported IPv6 for more than a
decade.

Whatever the IPv6 transition might require, it isn't comparable to the
small effort needed to upgrade a few laggard OS's to support 240/4 and
to do some de-bogonization in the global Internet, akin to what CloudFlare
did for 1.1.1.1.

John



Re: VPN recommendations?

2022-02-10 Thread John Gilmore
Mike Lyon  wrote:
> How about running ZeroTier on those Linux boxes and call it a day?
> https://www.zerotier.com/

ZeroTier is not a free-as-in-freedom project.  Running it in Linux boxes
or network appliances to provide a VPN to paying customers may be
prohibited (at least for some customers, and before 2025) by its
convoluted license:

  https://github.com/zerotier/ZeroTierOne/blob/master/LICENSE.txt

I recommend using something that doesn't have litigious companies
nitpicking about what you can and can't use it for.

    John Gilmore


Re: questions about ARIN ipv6 allocation

2021-12-10 Thread John Gilmore
Owen DeLong via NANOG  wrote:
> The double billing (had it been present at the time) would have prevented me 
> from signing the LRSA for my IPv4 resources.

Owen, the root of your problem is that you signed an LRSA with ARIN,
rather than keeping your legacy resources un-tainted by an ARIN contract
that deliberately reduced your rights.

When ARDC transferred 44.192/10 via ARIN, the recipient lost the legacy
status of the address block.  That was an ARIN requirement, which was OK
with that particular recipient.  However, ARIN is not your only option.

It is possible to transfer legacy resources such as IPv4 address blocks
from ARIN to RIPE, having them be recognized as legacy blocks under RIPE
jurisdiction.  You can do this without signing any long term contract
with RIPE, if you like; or you can choose to become a long-term paying
RIPE member, under their fee schedule.  All you need is to have any
Internet resources in Europe -- like a virtual machine in a data center
there, or a DNS server.  I'm sure of this because I have done it; see

  https://apps.db.ripe.net/db-web-ui/lookup?source=ripe&key=209.16.159.0%20-%20209.16.159.255&type=inetnum

The short-term contract for the transfer honors and retains the legacy
status of those resources: that you own them, not the ARIN fiction that
an RIR now controls them and will steal them from you if you stop paying
them annually.

Randy Bush detailed a similar transfer process back in 2016:

  https://archive.psg.com/160524.ripe-transfer.pdf  

The process is more bureaucratic and cumbersome than you expect;
Europeans gave bureaucracy its name in the 1800s, and RIPE has raised it to a
painful art.  But once it's done, you are out from under the ARIN
anti-legacy mentality forever.

    John Gilmore

PS: If you want RPKI, which I didn't, you can sign a RIPE long term
contract, pay them annually, and (according to Randy) they will STILL
honor your ownership of your resources, unlike ARIN.


Re: private 5G networks?

2021-11-30 Thread John Gilmore
Michael Thomas  wrote:
> > What do you mean 3rd Tier?
> General Authorized Access? Taken from some random site looking it up.

  https://en.wikipedia.org/wiki/Citizens_Broadband_Radio_Service

It has 3 tiers:

* Incumbent access, primarily government and military radars, plus some
pre-existing band users.

* 3550 to 3650 MHz in 10 MHz chunks, allocated for priority users by census
tracts for up to 3 years, with up to 7 Priority Access Licenses per tract.
Competitive bidding for getting these licenses.

* General Authorized Access users can use any of those chunks that aren't
assigned for priority use, or that nobody currently is transmitting on,
plus another 50 MHz at 3650-3700 in free-for-all mode unless there are
incumbents.
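A quick sketch of the arithmetic (derived from the figures above, not from the FCC rule text): 100 MHz in 10 MHz chunks yields ten channels, and with at most 7 Priority Access Licenses per tract, at least three of those channels are always available for GAA use.

```python
# 3550-3650 MHz carved into 10 MHz Priority Access channels.
pal_channels = [(f, f + 10) for f in range(3550, 3650, 10)]
print(len(pal_channels))                       # 10 channels
max_pals_per_tract = 7
print(len(pal_channels) - max_pals_per_tract)  # 3 channels always left for GAA
```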

A local Spectrum Access System (SAS) would program the individual devices to
stay within the restrictions specified by the FCC and any licenses
issued to the operator, for a particular geography.

John

PS: The CBRS radio devices can't turn on their transmitter until they
conduct a detailed negotiation with their SAS, via HTTP over TLS 1.2 over
IPv4.  IPv6 support is optional.  None of this negotiation appears to
happen over the radio, it's all apparently on Ethernet (or assumes some
separate Internet provisioning not done in CBRS spectrum).  And there's
no discovery procedure, it's all done by manual configuration.  See:

  https://winnf.memberclicks.net/assets/CBRS/WINNF-TS-0016.pdf


Re: Class E addresses? 240/4 history

2021-11-22 Thread John Gilmore
Eliot Lear  wrote:
> In 2008, Vince Fuller, Dave Meyer, and I put together
> draft-fuller-240space, and we presented it to the IETF. There were
> definitely people who thought we should just try to get to v6, but
> what really stopped us was a point that Dave Thaler made: unintended
> impact on non-participating devices, and in particular CPE/consumer
> firewall gear, and at the time there were serious concerns about some
> endpoint systems as well.

I was not in this part of IETF in those days, so I did not participate
in those discussions.  But I later read them on the archived mailing
list, and reached out by email to Dave Thaler for more details about his
concerns.  He responded with the same general issues (and a request that
we and everyone else spend more time on IPv6).  I asked in a subsequent
message for any details he has about such products that he thought would
fail.  He was unable or unwilling to point out even a single operating
system, Internet node type, or firewall product that would fail unsafely
if it saw packets from the 240/4 range.

As documented in our Internet-Draft, all such products known to us
either accept those packets as unicast traffic, or reject such packets
and do not let them through.  None crashes, reboots, fills logfiles with
endless messages, falls on the floor, or otherwise fails.  No known
firewall is letting 240/4 packets through on the theory that it's
perfectly safe because every end-system will discard them.

As far as I can tell, what Eliot says really stopped this proposal in
2008 was Dave's hand-wave of *potential* concern, not an actual
documented problem with the proposal.

If anyone knows an *actual* documented problem with 240/4 packets,
please tell us!

(And as I pointed out subsequently to Dave, if any nodes currently in
service would *actually* crash if they received a 240/4 packet, that's a
critical denial of service issue.  For reasons completely independent
from our proposal, those machines should be rapidly identified and
patched, rather than remaining vulnerable from 2008 thru 2021 and
beyond.  It would be trivial for an attacker to send such
packets-of-death from any Linux, Solaris, Android, MacOS, or iOS machine
that they've broken into on the local LAN.  And even Windows machines
may have ways to send raw Ethernet packets that could be crafted by
an attacker to appear to be deadly IPv4 240/4 packets.)

John



FreeBSD users of 127/8

2021-11-22 Thread John Gilmore
J. Hellenthal wrote:
> FreeBSD operators have been using this space for quite a long time for
> many NAT'ing reasons including firewalls and other services behind
> them for jail routing and such.
> 
> https://dan.langille.org/2013/12/29/freebsd-jails-on-non-routable-ip-addresses/
> 
> That's just one example that I've seen repeated in multiple other
> ways. One of which a jail operator with about 250 addresses out of
> that range that enabled his jail routed services.

Thank you for letting us know!  We would be happy to improve
the draft so that it has less impact on such pre-existing users.

When we surveyed publicly visible applications based on Linux,
we only found them configured to use the lowest /16.  It's true
that any system operator could configure their system in any part
of 127/8, but we focused on the default configurations of popular
software (such as systemd and Kubernetes).
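For illustration, Python's ipaddress module (mirroring current OS behavior) treats the entire 127/8 block as loopback, not just 127.0.0.1, which is why an operator can pick addresses anywhere in it:

```python
import ipaddress

# The whole 127/8 block is loopback today, so an operator can
# configure services anywhere inside it.
for a in ("127.0.0.1", "127.1.0.128", "127.200.0.1"):
    print(a, ipaddress.ip_address(a).is_loopback)  # all True
```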

Do you know of any FreeBSD software that comes with a default
configuration in 127/8 but not in 127/16?  (It looks like the web page
you referenced is about specific manual configuration, not about
the default behavior of supplied software.)

I do not know the details of FreeBSD jail configuration, nor the precise
behavior of its loopback interface.  From my limited understanding, it
looks like the jail configured in the web page you referenced, with
address 127.1.0.128/32 on lo1, would provide loopback service regardless
of whether the default address on lo0 was 127.0.0.1/8 or 127.0.0.1/16.
That's because lo1 is a separate interface from lo0, and the "lo"
interfaces always loop back any packets sent through them, no matter
what addresses are configured on them.  (Indeed the example
configures it with a 10.80.0.128 address as well, which would not
normally be considered a loopback address.)

So, if I am right, then even if our current Internet-Draft became a
standard and FreeBSD was modified to implement it, the recommended
commands would continue to work.  The only impact would be that such a
FreeBSD machine would be unable to reach a potential global Internet
service hosted out on the Internet at address 127.1.0.128 (because a
local interface has been configured at that address, shadowing the
globally reachable address).  I anticipate that no such global services
would be created before 2026 at the very earliest (other than for
reachability testing), and likely much later in the 2020's or early
2030's.

If it turns out that FreeBSD usage of 127.1/16 is widespread, and the
above analysis is incorrect or unacceptable to the FreeBSD community, we
would be happy to modify the draft to retain default loopback behavior
on 127.0.0.1/15 rather than 127.0.0.1/16.  That would include both
127.0.x.y and 127.1.x.y as default loopback addresses.  This would
completely resolve the issue presented on the "FreeBSD jails on
non-routable IP addresses" web page, while still recovering more than 16
million addresses for global use.
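The arithmetic behind those figures, sketched in Python (assuming the retained loopback range is a /16 or a /15, where the /15 covers both 127.0.x.y and 127.1.x.y):

```python
import ipaddress

whole   = ipaddress.ip_network("127.0.0.0/8").num_addresses   # 16,777,216
keep_16 = ipaddress.ip_network("127.0.0.0/16").num_addresses  # 65,536
keep_15 = ipaddress.ip_network("127.0.0.0/15").num_addresses  # 131,072
print(keep_16, keep_15)  # the "65,000 or 130,000" figures
print(whole - keep_15)   # 16,646,144 addresses recovered in the larger case
```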

The worst case might be if FreeBSD sysadmins have become accustomed to
picking "random" addresses manually from all over the 127/8 space.  If
so, it is not unreasonable to expect that when manually configuring a
node to use "non-routable" addresses, that in the passage of time, some
of them might become routable in the future.  When upgrading any machine
to a new OS release, various small things typically need adjusting to
fit into the revised OS.  Renumbering the in-system use of up to a few
hundred non-routable addresses like 127.44.22.66 into addresses like
127.0.22.66 (in a smaller non-routable range that would still
contain 65,000 or 130,000 addresses) might be one of those things that
could be easily adjusted during such an upgrade.

John




Re: Redeploying most of 127/8, 0/8, 240/4 and *.0 as unicast

2021-11-19 Thread John Gilmore
David Conrad  wrote:
> Doesn't this presume the redeployed addresses would be allocated
> via a market rather than via the RIRs?
> 
> If so, who would receive the money?

You ask great questions.

The community can and should do the engineering to extend the IP
implementations.  If that doesn't happen, the rest of the issues are
moot.  There would be no addresses to allocate and no money to receive.

It would take multiple years post-RFC and post-implementation before
anyone could even contemplate doing allocation, of any sort.  So while
it's an entertaining sideline to think about later, at this point it is
a distraction.

John

PS:  It's conceivable that RIRs could allocate via a market.
One has already done so (APNIC).


Re: Redeploying most of 127/8, 0/8, 240/4 and *.0 as unicast

2021-11-19 Thread John Gilmore
Fred Baker  wrote:
> I tend to think that if we can somehow bless a prefix and make be
> global unicast address space, it needs to become Global Unicast
> Address Space.

Yes, I agree.  The intention is that with the passage of time, each
prefix becomes more and more reachable, til it's as close to 100% as any
other IP address.

I was just suggesting a side point, that some kinds of IP users may be
able to make earlier use of measurably less-reachable addresses.  That
could possibly enable them to get those addresses at lower prices (and
with no guarantees), compared to people who want and expect 100%
reachable IP addresses.  Having users making actual early use of them
would also encourage those users to actively work to improve their
reachability, rather than passively waiting til "somebody else" improves
their reachability.  (Indeed, some adventurous early adopters might buy
such addresses, actively improve their reachability, then 'flip' them
for higher prices, as some people do with real-estate.)  Just pointing
out a side chance for a win-win situation.  Most users would wait til
surveys show high reachability.

John



Re: Redploying most of 127/8 as unicast public

2021-11-19 Thread John Gilmore
Måns Nilsson  wrote:
> The only viable future is to convert [to IPv6].  This is not
> group-think, it is simple math.

OK.  And in the long run, we are all dead.  That is not group-think, it
is simple math.  Yet that's not a good argument for deciding not to
improve our lives today.  Nor to fail to improve them for tomorrow,
in case we live til then.

John  



Re: Redeploying most of 127/8, 0/8, 240/4 and *.0 as unicast

2021-11-19 Thread John Gilmore
Nick Hilliard wrote:
>>   consider three hosts on a broadcast domain: A, B and 
>> C.  A uses the lowest address, B accepts a lowest address, but C does 
>> not.  Then A can talk to B, B can talk to C, but C cannot talk to A.  
>> This does not seem to be addressed in the draft.

Section 3.4.  Compatibility and Interoperability.

   Many deployed systems follow older Internet standards in not allowing
   the lowest address in a network to be assigned or used as a source or
   destination address.  Assigning this address to a host may thus make
   it inaccessible by some devices on its local network segment.   [there's
   more...]

If you think that section needs improving, please send suggested text.
We're happy to explain the implications better.

Joe Maimon  wrote:
> its a local support issue only.

That's also true.  The only issues arise between your devices, on your
LAN.  Everybody else on the Internet is unaffected and can reach all
your devices, including the lowest if your LAN uses it.  Nothing forces
you to use your lowest address, and we recommend that DHCP servers be
configured by default not to hand them out (no change from how they
historically have been configured).
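As a sketch of that default behavior (using Python's ipaddress module as a stand-in for a DHCP pool calculation, not the Busybox code itself):

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")
pool = list(net.hosts())              # conventional assignable range
print(pool[0], pool[-1])              # 192.168.1.1 192.168.1.254
# .0 and .255 stay out of the default pool; the point of the patch was
# only that the exclusion should be configuration, not hardcoded.
print(net.network_address in pool)    # False
print(net.broadcast_address in pool)  # False
```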

We submitted a 6-line patch to the Busybox DHCP implementation in
February to avoid hardcoded prevention of issuing a .0 or .255 address
(which was wrong anyway, even without our proposal).  The default in the
config file continues to use a range excluding .0.  The patch was merged
upstream without trouble.  See:

  
https://github.com/schoen/unicast-extensions/blob/master/merged-patches/busybox/0001-Don-t-hardcode-refusing-to-issue-.0-or-.255-addresse.patch
  
John



Re: Redeploying most of 127/8, 0/8, 240/4 and *.0 as unicast

2021-11-19 Thread John Gilmore
Joe Maimon  wrote:
> And all that's needed to be done is to drop this ridiculous .0 for
> broadcast compatibility from standards.  Why is this even controversial?

Not to put words in his mouth, but that's how original BSD maintainer
Mike Karels seemed to feel when we raised this issue for FreeBSD.  He
was like, what?  We're wasting an address per subnet for 4.2BSD
compatibility?  Since 1986?  Let's fix that.

John


Re: Redeploying most of 127/8, 0/8, 240/4 and *.0 as unicast

2021-11-18 Thread John Gilmore
Randy Bush  wrote:
> as a measurement kinda person, i wonder if anyone has looked at how much
> progress has been made on getting hard coded dependencies on D, E, 127,
> ... out of the firmware in all networked devices.

The drafts each have an Implementation Status section that describes
what we know.  The authors would be happy to receive updates for any of
that information.

240/4 is widespread everywhere except in Microsoft products.  It works
so reliably that both Google and Amazon appear to be bootlegging it on
their internal networks.  Google even advises customers to use it:

  https://cloud.google.com/vpc/docs/vpc#valid-ranges

0/8, and the lowest address per subnet, have interoperable
implementations that don't cause problems with legacy equipment, but
they are not widely deployed.  0/8 has a few years in Linux & Android
kernels and distros.  Lowest address is in the most recent Linux and
FreeBSD kernels, but not yet in any OS distros.  In particular, OpenWRT
doesn't implement it yet, which could easily be a fast path to a free
extra IP address for anyone who has a compatible home router and a
globally routed small network like a /29.

We used RIPE Atlas to test the reachability of many addresses that end
in .0 and .0.0, and they were reachable from all but one probe that we
tried.  Amazon offers

  https://ec2-reachability.amazonaws.com/

for this purpose; try it on your own network!
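A small illustration of why .0 addresses are ordinary unicast under classless routing (the /23 below is an arbitrary example range):

```python
import ipaddress

# Inside a /23, the "middle" .0 address is an ordinary host address;
# only the subnet's own network and broadcast addresses are special.
net = ipaddress.ip_network("203.0.112.0/23")
hosts = set(net.hosts())
print(ipaddress.ip_address("203.0.113.0") in hosts)  # True: ordinary host
print(ipaddress.ip_address("203.0.112.0") in hosts)  # False: network address
```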

Some embedded TCP stacks treat 127/8 as unicast (simply because they
don't special-case much of anything), but otherwise we don't know of any
current OS distributions that implement unicast for 127/8.  The Linux
kernel has long had a non-default "route_localnet" option offering
similar but more problematic behavior.

I would be happy to fund or run a project that would announce small
global routes in each of these ranges, and do some network probing, to
actually measure how well they work on the real Internet.  We are
working with RIPE Atlas already to enable this.  I thought it would be
prudent to propose the changes in detail to IETF, before "bogus" routes
showed up in BGP and the screaming on the NANOG list started.  :-/

John



Re: Redeploying most of 127/8, 0/8, 240/4 and *.0 as unicast

2021-11-18 Thread John Gilmore
Fred Baker  wrote:
> My observation has been that people don't want to extend the life of
> IPv4 per se; people want to keep using it for another very short time
> interval and then blame someone else for the fact that the 32 bit
> integers are a finite set.

It's an attractive strawman, but knocking it down doesn't
contribute to the discussion.

I am not blaming anybody for the finity of 32-bit integers.

Nor am I trying to extend the lifetime of IPv4, for either a short or a
long time.  Personally, I think that IPv4 will already be with us for
the rest of my lifetime.  Its life will already extend beyond mine,
without any effort on my part.  It was a good design, and it will
outlive its makers.  The people who in 2008 predicted that it was
senseless to improve IPv4 because it would be dead by 2018, were
objectively wrong, because it's not dead.

IETF did what the objectors said, back in 2008: They didn't improve
IPv4, on the exact theory that effort would go into IPv6 instead.  Hmm.
13 years later, that decision did not cause IPv6 to take over the world
and obsolete IPv4.  IPv4 is still here, and the improvements that would
have been fully rolled out to the user base by now, if standardized in
2008, are still missing.  Perhaps we should reconsider that wrong
advice, rather than regurgitate the same decision every time the issue
is raised?

> If you don't think that's a true statement, I'd be very interested to
> hear what you think might be true.

IPv6 is still on a remarkable if not meteoric growth ramp; in the last
year it's gone from 30% to 34% of the Internet, according to Google.
There is no need to cut off IPv4 maintenance at the knees in order to
make IPv6 look good.  We can make both v4 and v6 better, and let people
choose which one(s) they prefer to use.

That's what I think might be true about why simple low-risk tweaks that
make IPv4 better are not an obviously stupid tactic.

As Brian Carpenter has been saying for 15+ years, IPv4 and IPv6 don't
have a transition strategy, they have a co-existence strategy.
Neglecting or abandoning IPv4 is not a useful part of that strategy.

Keeping the price of IPv4 addresses reasonable means that dual-stack
servers can continue to be deployed at reasonable cost, so that it
doesn't matter whether clients have IPv6 or IPv4.  Any company that put
its services on IPv6-only sites today would be cutting off 65% of their
potential customers.  Even if v6 had 90% of the market, why would a
company want 10% of its prospects to be unable to reach its service?

(Big companies who run massive public-facing server farms are the
biggest buyers of IPv4 addresses in the market, spending hundreds of
millions of dollars every year.  Amazon, Google, Alibaba, Microsoft,
etc.  They are already running IPv6, and all their servers already have
free IPv6 addresses from a RIR.  Why are they spending that money?
It isn't because they are stupid doodooheads who hate IPv6.)

As Fred points out, this issue has been discussed since 2008.  IETF also
faced it in 2014-2018 in the now-sunsetted "sunset4" working group.
That group wrote a bunch of drafts proposing to disable or sunset IPv4.
None of these became RFCs.  They didn't pass the sniff test.  They
weren't what users and customers wanted.

The issue keeps coming back because the wrong decision keeps getting
offered: "Just switch everybody to use IPv6, and if they won't, then
deny their proposals for IPv4."  Another way of putting it was proposed
by Dave Thaler: "Rather than changing any existing software (OS's or
apps) to allow class E addresses, the effort would be better spent
towards getting more IPv6 deployment."

This is not an objection to deploying reserved addresses in IPv4, it is
a plea for more IPv6 deployment.  It is like saying, "Do not spend your
time fixing the environment, instead we need to fix the political
system."

It is a hopeless plea, since "allowing Class E addresses" and "more IPv6
deployment" are not the only two possible goals to put forth effort on.
Merely stopping work on one will not cause the other to advance.
There must be a name for this fallacious argument...thank you, Wikipedia,
it's a "false dilemma":

  https://en.wikipedia.org/wiki/False_dilemma

The two goals can proceed in parallel, and many of the people who would
happily do Goal 1 are not in any position to affect Goal 2 -- just as
with the environment and politics.

It's pretty simple to understand why IPv6 has not taken over the world
yet.  IPv4 got rapid adoption because it was so much better than all the
alternatives available at the time.  IPv6 would have gotten equally
rapid adoption if it had been the thing so much better than all the
alternatives.  IPv6 is not getting that rapid adoption today, because it
has to compete with the already pervasive IPv4.  IPv4 is better in one
key way: everybody you want to talk with is already there.  (It's akin
to the issue of why can't people just switch to a better social network
than Facebook?  

Re: Redeploying most of 127/8, 0/8, 240/4 and *.0 as unicast

2021-11-18 Thread John Gilmore
Steven Bakker  wrote:
> The ask is to update every ip stack in the world (including validation,
> equipment retirement, reconfiguration, etc)...

This raises a great question.

Is it even *doable*?  What's the *risk*?  What will it *cost* to upgrade
every node on the Internet?  And *how long* might it take?

We succeeded in upgrading every end-node and every router in the
Internet in the late '90s and early 2000's, when we deployed CIDR.  It
was doable.  We know that because we did it!  (And if we hadn't done it,
the Internet would not have scaled to world scale.)

So today if we decide that unicast use of the 268 million addresses in
240/4 is worth doing, we can upgrade every node.  If we do, we might as
well support unicast on the other 16 million addresses in 0/8, and the
16 million in 127/8, and roughly 16 million more reserved for 4.2BSD's
pre-standardized subnet broadcast address, which nobody has used
since 1985.  And take a hard look at another hundred million addresses
in the vast empty multicast space, that have never been assigned by IANA
for anybody or anything.  Adding the address blocks around the edges
makes sense: you only have to upgrade everything once, and the 268
million addresses grow to nearly 400 million formerly wasted
addresses.  That would be worth half again as much to end users,
compared to just doing 240/4!
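For the curious, the arithmetic works out roughly like this (a quick
sketch using Python's standard ipaddress module; the multicast figure is
the rough estimate from the text above, not an IANA number):

```python
import ipaddress

# Exact sizes of the reserved blocks discussed above.
blocks = {
    "240/4 (reserved, 'Class E')": ipaddress.ip_network("240.0.0.0/4").num_addresses,
    "0/8 ('this network')":        ipaddress.ip_network("0.0.0.0/8").num_addresses,
    "127/8 (loopback)":            ipaddress.ip_network("127.0.0.0/8").num_addresses,
}

for name, count in blocks.items():
    print(f"{name}: {count:,}")

subtotal = sum(blocks.values())
print(f"subtotal: {subtotal:,}")   # 301,989,888

# Add ~16M lowest-per-subnet addresses and ~100M never-assigned
# multicast space (both rough estimates) to get near 400 million.
print(f"with edges: ~{(subtotal + 16_000_000 + 100_000_000) / 1e6:.0f} million")
```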

That may not be worth it to you.  Or to your friends.  But it would be
useful to a lot of people -- hundreds of millions of people who you may
never know.  People who didn't get IP addresses when they were free,
people outside the US and Europe, who will be able to buy and use them
in 5 or 10 years, rather than leaving them unused and rotting on the
vine.

We already know that making these one-line patches is almost risk-free.
240/4 unicast support is in billions of nodes already, without trouble.
Linux, Android, MacOS, iOS, and Solaris all started supporting unicast
use of 240/4 in 2008!  Most people -- even most people in NANOG --
didn't even notice.  0/8 unicast has been in Linux and Android kernels
for multiple years, again with no problems.  Unicast use of the lowest
address in each subnet is now in Linux and NetBSD, recently (see the
drafts for specifics).  If anyone knows of security issues that we
haven't addressed in the drafts, please tell us the details!  There has
been some arm-waving about a need to update firewalls, but most of these
addresses have been usable as unicast on LANs and private networks for
more than a decade, and nobody's reported any firewall vulnerabilities
to CERT.
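You can see the kind of hardwired special-casing the drafts propose
relaxing in ordinary userland libraries, too.  For example, Python's
standard ipaddress module still classifies these blocks per the old
registries (a sketch, illustrating classification only, not kernel
behavior):

```python
import ipaddress

# How a stock library classifies the blocks under discussion today.
for addr in ["240.0.0.1", "0.0.0.1", "127.1.2.3", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr,
          "reserved:", ip.is_reserved,    # True only for 240/4
          "loopback:", ip.is_loopback,    # True only for 127/8
          "global:",   ip.is_global)      # False for all the reserved blocks
```

Treating 240/4 as ordinary unicast is, in code terms, mostly a matter of
deleting checks like these.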

Given the low risk, the natural way for these unicast extensions to roll
out is to simply include them in new releases of the various operating
systems and router OS's that implement the Internet protocols.  It is
already happening, we're just asking that the process be adopted
universally, which is why we wrote Internet-Drafts for IETF.  Microsoft
Windows is the biggest laggard; they drop any packet whose destination
OR SOURCE address is in 240/4.  When standards said 240/4 was reserved
for what might become future arcane (variable-length, anycast, 6to4,
etc) addressing modes, that made sense.  It doesn't make sense in 2021.
IPv4 is stable and won't be inventing any new addressing modes.  The
future is here, and all it wants out of 240/4 is more unicast addresses.

By following the normal OS upgrade path, the cost of upgrading is almost
zero.  People naturally upgrade their OS's every few years.  They
replace their server or laptop with a more capable one that has the
latest OS.  Laggards might take 5 or 10 years.  Peoples' home WiFi
routers break, or are upgraded to faster models, or they change ISPs and
throw the old one out, every 3 to 5 years.  A huge proportion of
end-users get automatic over-the-net upgrades, via an infrastructure
that had not yet been built for consumers during the CIDR transition.
"Patch Tuesday" could put some or all of these extensions into billions
of systems at scale, for a one-time fixed engineering and testing cost.

We have tested major routers, and none so far require software updates
to enable most of these addresses (except on the lowest address per
subnet).  At worst, the ISP would have to turn off or reconfigure a
bogon filter with a config setting.  Also, many "Martian address" bogon
lists are centrally maintained
(e.g. https://team-cymru.com/community-services/bogon-reference/ ) and
can easily be updated.  We have found no ASIC IP implementations that
hardwire in assumptions about specific IP address ranges.  If you know
of any, please let us know; otherwise, let's let that strawman rest.
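For an operator maintaining a local bogon list, the change amounts to
pruning one prefix.  A hypothetical illustration (the list below is a
made-up excerpt, not the Team Cymru feed):

```python
import ipaddress

# Example locally maintained bogon list (illustrative, not the real feed).
bogons = ["0.0.0.0/8", "10.0.0.0/8", "127.0.0.0/8", "240.0.0.0/4"]

# Drop any entry that falls inside 240/4, the block being opened up.
class_e = ipaddress.ip_network("240.0.0.0/4")
keep = [p for p in bogons
        if not ipaddress.ip_network(p).subnet_of(class_e)]

print(keep)   # 240.0.0.0/4 removed; RFC 1918 space etc. stays filtered
```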

Our drafts don't propose to choose between public and private use of the
newly usable unicast addresses (so the prior subject line that said
"unicast public" was incorrect).  Since the kernel and router
implementation is the same in either case, we're trying to get those
fixed first.  There will be plenty of years and plenty of forums (NANOG,
IETF, and the RIRs' policy processes) to decide how the newly usable
addresses should be allocated.

Redeploying most of 127/8, 0/8, 240/4 and *.0 as unicast

2021-11-18 Thread John Gilmore
Steven Bakker  wrote:
> > ... the gain is 4 weeks of
> > extra ip address space in terms of estimated consumption.
>
> The burn rate is the best argument I've seen against the idea so far.

I'm glad you think so, since it's easy to refute.

There will be no future free-for-all that burns through 300 million
IPv4 addresses in 4 months.

When IPv4 addresses were being given away for free, of course people
were grabbing them as fast as they could.  Particularly after everyone
could see the end of the supply coming.  It took detailed administrative
bureaucracy, checking paperwork "justifications", just to slow down the
free-for-all!  (In the early '90s I got 140.174/16 for the same cost as
a /24, by sending a few emails asking for it.  By the end of the '90s,
that /16 was one of the major assets of our small ISP!)

Now that has ended, and addresses actually cost money in a real market.
Companies are only buying what they need, because they have other uses
for their money.  And other companies are selling addresses that they
once obtained and no longer plan to use.  It's like the difference
between getting free land from the government, versus having to pay for
it.  You'd rather have 100 acres or 1000 acres if it's free.  But if you
have to buy it, well, half an acre is still pretty nice, and leaves you
some money to build a house on it.

Now we have a market.  When low supply raises the price of something,
people are going to buy less of it.  The initial price rise from $0/each
to $11.25 was in 2011, after ARIN announced there would be no more free
addresses, and Microsoft bought Nortel's addresses out of bankruptcy.
The Internet world has adapted.  It's been a decade since that
adaptation.  Adding some global unicast address supply in a few years
would reduce prices, benefiting consumers, but won't take us back to the
old pre-market model.

Here's an analysis as of late 2020 of the IPv4 transfer market:

  https://circleid.com/posts/20201125-ipv4-market-and-ipv6-deployment

It shows a range of 6 million to 16 million addresses transferred (sold)
per quarter.  This roughly matches Geoff Huston's analysis of both free
RIR allocations, and purchased IPv4 transfers.  Geoff reports about
2.2 million allocated and 26 million transferred in 2020:

  https://circleid.com/posts/20210117-a-look-back-at-the-world-of-ip-addressing-in-2020

At current rates, 300 to 400 million addresses would last more than a
decade!  (Compare this to the ~50 /8s unadvertised in global routing
tables, about 838 million addresses, that are currently owned but likely
to be sold to someone who'll route them, sometime in the next decade.)
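The back-of-the-envelope math, using the figures cited above (about 26
million addresses transferred plus about 2.2 million allocated in 2020,
per Huston):

```python
# Annual consumption at 2020 rates: transfers plus free RIR allocations.
yearly_consumption = 26e6 + 2.2e6      # addresses per year

# Range of newly usable unicast space discussed in the drafts.
for recovered in (300e6, 400e6):
    years = recovered / yearly_consumption
    print(f"{recovered / 1e6:.0f} million addresses last {years:.1f} years")

# Both estimates come out above ten years at 2020 consumption rates.
```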

There will be no future free-for-all that burns through 300 million
IPv4 addresses in 4 months.

John