Re: V6 still not supported

2022-03-24 Thread Mark Delany
On 24Mar22, Pascal Thubert (pthubert) allegedly wrote:
> Hello Mark:
> 
> > Any such "transition plan" whether "working" or "straightforward" is logically impossible. Why anyone thinks such a mythical plan might yet be formulated some 20+ years after deploying any of ipv6, ipv4++ or ipv6-lite is absurd.
> 
> This is dishonest

My point is that if there were a real transition plan it would have been invented and executed by now and we'd all be on ipv6. Yet the reality is that here we are some 20 years later with no plan and no ubiquitous ipv6. How is that observation dishonest?

> considering that I just proved on this very thread that such ideas existed

I don't know why you're conflating an idea with a plan - they are about as far away from each other as is possible. Frankly, no one cares about ideas; they're a dime a dozen.

A plan is an actionable, compelling and logical set of steps towards an end result. Do you have such a thing for moving everyone on the planet to ipv6?

Here's a simple test of whether you have a plan or not. I'm connected via my legacy ipv4 ISP router, completely oblivious to ipv6. How does your plan incentivise me to upgrade my router to support ipv6?

When you have an answer to that, you might have a glimmer of a plan.


Mark.


Re: V6 still not supported

2022-03-24 Thread Mark Delany
On 24Mar22, Vasilenko Eduard allegedly wrote:
> Hence, the primary blocking entity for IPv6 adoption is Google: they do not support DHCPv6 for the most popular OS.

No. The primary "blocking entity" is that "legacy" ipv4 works just fine and adopting ipv6 or ipv4++ or ipv6-lite or ipv-magical is harder than doing nothing.

That Google/Android don't like DHCPv6 is largely irrelevant.

My five-year-old ISP router only supports ipv4. Yet I get to every site on the planet just fine. Give me one good reason to spend my hard-earned pay on an expensive new router which supports ipv6?

You have two choices: make my ipv4 router fail or make the new ipv6 router compelling.

How do you propose to do either of those?


Mark.


Re: V6 still not supported

2022-03-24 Thread Mark Delany
On 24Mar22, Greg Skinner via NANOG allegedly wrote:

> straightforward transition plan

> in-hand working transition strategy

> nor a straightforward transition

Any such "transition plan" whether "working" or "straightforward" is logically
impossible. Why anyone thinks such a mythical plan might yet be formulated some 
20+ years
after deploying any of ipv6, ipv4++ or ipv6-lite is absurd.

The logic goes: we support legacy "do nothing" ipv4 deployments forever. We also expect those same deployments to invest significant effort, cost and risk to move off their perfectly functioning network for no self-serving benefit.

There be unicorns and denial of human nature.


Mark.


Re: V6 still not supported

2022-03-24 Thread Mark Delany
On 23Mar22, Owen DeLong via NANOG allegedly wrote:

> I would not say that IPv6 has been and continues to be a failure

Even if one might ask that question, what are the realistic alternatives?

1. Drop ipv6 and replace it with ipv4++ or ipv6-lite or whatever other protocol that magically creates a better and quicker transition?

2. Drop ipv6 and extend above the network layer for the foreseeable future? By extend I mean things which only introduce ipv4-compatible changes: NATs, TURN, CDN at the edge, application overlays and other higher-layer solutions.

3. Live with ipv6 and continue to engineer simpler, better, easier and no-brainer deployments?

I'll admit it risks being a "sunk cost fallacy" argument to perpetuate ipv6, but are the alternatives so clear that we're really ready to drop ipv6?


> so much as IPv6 has not yet achieved its goal.

As someone previously mentioned, "legacy" support can have an extremely long tail which might superficially make "achieving a goal" look like a failure.

Forget ss7 and SIP, what about 100LL vs unleaded petrol or 1/2" bolts vs 13mm bolts? Both must be 50 years in the making with many more years to come. The glacial grind of displacing legacy tech is hardly unique to network protocols.

In the grand scheme of things, the goal of replacing ipv4 with ipv6 has really only had a relatively short life-time compared to many other tech transitions. Perhaps it's time to adopt the patience of the physical world?


Mark.


Re: V6 still not supported

2022-03-19 Thread Mark Delany
On 19Mar22, Matt Hoppes allegedly wrote:

> So, while it's true that a 192.168.0.1 computer couldn't connect to a 43.23.0.0.12.168.0.1 computer, without a software patch - that patch would be very simple and quick to deploy

Let's call this ipv4++

Question: How does 192.168.0.1 learn about 43.23.0.0.12.168.0.1? Is that a DNS lookup?

How does the DNS support ipv4++ addresses? Is that some extension to the A RR? It better be an extension that doesn't break packet validation rules embedded in most DNS libraries and middleware. You give 'em an A RData longer than 32 bits and they're going to drop it with prejudice. Perhaps you should invent a new ipv4++ address RR to avoid some of these issues?
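
To make "drop it with prejudice" concrete, here's a minimal sketch (in Go) of the kind of RDATA check baked into most DNS libraries; parseA is hypothetical, not lifted from any particular library:

   package main

   import (
           "fmt"
           "net/netip"
   )

   // parseA mimics the validation most DNS libraries apply: an A record
   // carries exactly 4 octets of RDATA, anything else is malformed.
   func parseA(rdata []byte) (netip.Addr, error) {
           if len(rdata) != 4 {
                   return netip.Addr{}, fmt.Errorf("malformed A RR: RDLENGTH %d != 4", len(rdata))
           }
           addr, _ := netip.AddrFromSlice(rdata) // 4 bytes always parses as ipv4
           return addr, nil
   }

   func main() {
           _, err := parseA([]byte{43, 23, 0, 0, 12, 168, 0, 1}) // an "ipv4++" address
           fmt.Println(err)                                      // dropped with prejudice
   }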

In either case, how does every program on my ipv4 computer deal with these new addresses that come back from a DNS lookup? Do you intend to modify every DNS library to hide these extensions from older programs? How do you do that exactly? What about my home-grown DNS library? Who patches that?

Here's a code fragment from my ipv4-only web browser:

   uint32 ip                             // room for exactly one 32-bit ipv4 address
   ip = dnslookup("www.rivervalleyinternet.net", TypeA)
   socket = connect(ip)                  // cannot carry an ipv4++ address

What does 'ip' contain if www.rivervalleyinternet.net is ipv4++ compliant and advertises 43.23.0.0.199.34.228.100? Do these magical concentrators sniff out DNS queries and do some form of translation? How does that work with DoH and DoT?

Or are you suggesting that www.rivervalleyinternet.net continues to advertise and listen on both 43.23.0.0.199.34.228.100 *and* good ol' 199.34.228.100 until virtually every client and network on the planet transitions to ipv4++? In short, the transition plan is to have www.rivervalleyinternet.net run dual-stacked for many years to come. Yes?

Speaking of DNS lookups. If my ipv4++ DNS server is on the same LAN as my laptop, how do I talk to it? You can't ARP for ipv4++ addresses, so you'll have to invent a new ARP protocol type or a new LAN protocol. Is that in your patch too? Make sure the patch applies to network devices running proxy ARP as well, ok?

If I connect to an ipv4++ network, how do I acquire my ipv4++ address? If it's DHCP, doesn't that require an extension to the DHCP protocol to support the larger ipv4++ addresses? So DHCP protocol extensions and changes to all DHCP servers and clients are part of the patch too, right? Or perhaps you plan to invent a new DHCP packet which better accommodates all of the ipv4++ addresses that can get returned? Still plenty of code changes.

And how do I even do DHCP? Does ipv4++ support broadcast in the same way as ipv4? What of DHCP relays? They will need to be upgraded, I presume.


So let's say we've solved all the issues with getting on a network, talking over a LAN, acquiring an ipv4++ address, finding our ipv4++ capable router and resolving ipv4++ addresses. My application is ready to send an ipv4++ packet to a remote destination.

But what does an ipv4++ packet look like on the wire? Is it an ipv4 packet with bigger address fields? An ipv4 packet with an extension? Or do you propose inventing a new IP type? Do these packets pass thru ipv4-only routers untainted or must they be "concentrated" beforehand? Won't all the firewalls and router vendors need to change their products to allow such packets to pass? Normally oddball ipv4 packets are dropped of course. As we know, vendor changes can notoriously take decades; just ask the ipv6 crowd.
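
For scale, here's a purely hypothetical sketch of the "bigger address fields" option; the layout is invented for illustration and nothing like it exists on the wire:

   package main

   import (
           "encoding/binary"
           "fmt"
   )

   // ipv4ppHeader is a hypothetical "ipv4 with bigger address fields"
   // layout. Any firewall or router that validates the classic 20-byte
   // ipv4 header - which is to say nearly all of them - will drop
   // packets shaped like this.
   type ipv4ppHeader struct {
           VersionIHL uint8 // would need a brand-new version number
           TOS        uint8
           TotalLen   uint16
           ID         uint16
           FlagsFrag  uint16
           TTL        uint8
           Protocol   uint8
           Checksum   uint16
           Src        uint64 // was uint32 in classic ipv4
           Dst        uint64 // was uint32 in classic ipv4
   }

   func main() {
           fmt.Printf("classic ipv4 header: 20 bytes; this sketch: %d bytes\n",
                   binary.Size(ipv4ppHeader{}))
   }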

Ok. We've upgraded all our infrastructure to route ipv4++ packets. The packet reached the edge of our network. But how does the edge know that the next hop (our ISP) supports ipv4++ packets? Do routers have to exchange capabilities with each other? How do they do that?

For that matter, how does my application know that the whole path thru to the destination supports ipv4++? It only takes one transit across an ipv4-only network somewhere in the path for the packet to be dropped. I think you're going to have to advertise ipv4++ reachability on a per-network basis. Perhaps using BGP? Oh, so that means your patch has to add ipv4++ capabilities to all BGP implementations. Hmm.


> However, to get back 192.168.0.1 can proxy through an IPv4.1 to IPv4.2 concentration system.

Sounds like you're proposing a global deployment of reverse CG-NATs (aka concentrators) which need to run for as long as the transition takes. Do you run concentrators on ISP borders where one party is ipv4 and the other party is ipv4++? Not sure a lot of edge/border routers have the memory footprint or CPU grunt to maintain NAT tables that might involve thousands or millions of flows. Your patch might have to include a memory upgrade. Will these address ranges fit into 32-bit CAM or will routers need to drop back to regular memory and software lookups? That's a pretty nasty performance hit which was an argument against ipv6 for quite a while.

One of the bigger problems with your "concentrators" (apart from who implements t

Re: Authoritative Resources for Public DNS Pinging

2022-02-09 Thread Mark Delany
On 09Feb22, Joe Greco allegedly wrote:

> I dunno.  I think I'd find that being unable to resolve a hostname or
> being unable to exchange packets result in a similar level of Internet
> brokenness.

Sure. The result is the same, but as a discriminator for diagnosing the problem it's quite different. If a support rep hears "cannot reach fatbook" vs. "ping 8.8.8.8 fails" they'll hopefully go down a different diagnostic route.

In large part I think of this sort of tool as a "what next" helper or a "where do I look next" helper.

> It is going to be hard to quantify all the things you might
> want to test for.  You already enumerated several.  But if it has to be
> a comprehensive "Internet is fully working" test, what do you do to be
> able to detect that your local coffee shop isn't implementing a net nanny
> filter?  Just to take it too far down the road.  ;-)

Well, comprehensive might be a bit much, but useful info for a support rep or a tech-forum helper - that's a different matter. Boiling the ocean we ain't.

In many ways the idea is simply to automate the basic steps that most of us would take to diagnose a connectivity issue:

for ip in 4, 6
  1a) Can I reach first hop - check
  1b) Can I reach ISP first hop - check

  2a) Can I reach a resolver - check

  3a) Can I resolve ISP end-points - check
  3b) Can I reach ISP end-points - check

  4a) Can I resolve ping.ripe.net - check
  4b) Can I reach ping.ripe.net end-points - check
done


Or similar. The tool bootstraps its way up rather than offering just a binary yea/nay. The output might simply be the above. I reckon if you saw that sort of output you'd be able to make a pretty informed guess as to where the problem might be - or at least what to do next. Which is the goal. We know we have a problem, we want to zero in on it.

From an automation POV it's nothing more than a bit of data-gathering and issuing pretty standard network exchanges in sequence. By way of example, on FreeBSD13, ping is 4,000+ lines of C. I'm certain the above could be implemented in far fewer ELOCs in a modern language.
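
By way of illustration, a sketch of the bootstrap sequence in Go; the resolver address and targets are placeholders, and a real tool would discover the first hop and ISP end-points rather than hard-code anything:

   package main

   import (
           "fmt"
           "net"
           "time"
   )

   // check reports whether a TCP connection to addr succeeds within 3s.
   func check(label, addr string) {
           conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
           if err != nil {
                   fmt.Printf("%-28s FAIL: %v\n", label, err)
                   return
           }
           conn.Close()
           fmt.Printf("%-28s ok\n", label)
   }

   func main() {
           // 2a) Can I reach a resolver - placeholder resolver address
           check("reach resolver", "192.0.2.53:53")

           // 4a) Can I resolve ping.ripe.net - check
           if addrs, err := net.LookupHost("ping.ripe.net"); err != nil {
                   fmt.Printf("%-28s FAIL: %v\n", "resolve ping.ripe.net", err)
           } else {
                   // 4b) Can I reach ping.ripe.net end-points - check
                   check("reach ping.ripe.net", net.JoinHostPort(addrs[0], "80"))
           }
   }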

And, if these basic steps are augmented by ISPs adopting one or two conventions such as well-defined DNS names and end-points, then such a tool could offer a lot of insights for that inevitable support call: "I ran 'ping-internet [sol.net]' and it said: ...".

As J. Hellenthal mentioned earlier, I think there's an argument here for why ISPs might encourage such a beast. What do you think?


Mark.


Re: Authoritative Resources for Public DNS Pinging

2022-02-09 Thread Mark Delany
On 09Feb22, Joe Greco allegedly wrote:

> So what people really want is to be able to "ping internet" and so far
> the easiest thing people have been able to find is "ping 8.8.8.8" or
> some other easily remembered thing.

Yes, I think "ping internet" is the most accurate description thus far. Or perhaps "reach internet".

> Does this mean that perhaps we should seriously consider having some
> TLD being named "internet"

Meaning you need to have a functioning DNS resolver first? I'm sure you see the problem with that clouding the results of a diagnostic test.

> service providers register appropriate upstream targets for their 
> customers, and then maybe also allow for some form of registration such
> that if I wanted to provide a remote ping target for AS14536, I could
> somehow register "as14536.internet" or "solnet.internet"?

Possibly. You'd want to be crystal clear on the use cases. As a starting point, maybe:

1. Do packets leave my network?
2. Do packets leave my ISP's network?
3. Mainly for IOT - is the internet reachable?

Because of 2 and 3, I don't think creative solutions such as ISPs anycasting some memorable IP or name will do the trick. And because of 1, anything relying on DNS resolution is probably a non-starter. Much as I like "ping ping.ripe.net", it alone is too intertwined with DNS resolution to be a reliable alternative.


> Fundamentally, this is a valid issue.

Yup. There are far more home-gamers and tiny network admins (the networks are tiny, not the admins) who just want to run a reachability test or add a command to a cheap network monitor cron job. Those on this list who can - or should - do something more sophisticated are numerically in the minority of people who care about reachability and are not really the target audience for a better "ping 8.8.8.8".

> and we'll end up needing a special non-ping client and some trainwreck of names and other hard-to-grok

I'm not sure the two are fundamentally intertwined, though it could easily be an unintended consequence. However, being constrained to creating a new ping target does severely limit the choices. And including ipv6 just makes that more complicated.

The other matter is that the alternative probably has to present a compelling case to cause a change in behavior. I can see an industry-standard ping target being of possible use to tests built into devices. But again, it'd have to be compelling for most manufacturers to even notice.

But for humans, I'd be surprised if you can create a compelling alternative ping target. For them, I'd be going down the path of a "ping-internet" command which answers use-cases 1. & 2. while carefully avoiding the second-system syndrome - he says with a laugh.


Mark.


Re: Authoritative Resources for Public DNS Pinging

2022-02-08 Thread Mark Delany
On 08Feb22, Mike Hammett allegedly wrote:

> Some people need a clue by four and I'm looking to build my collection of 
> them. 

> "Google services, including Google Public DNS, are not designed as ICMP 
> network testing services"

Hard to disagree with "their network, their rules", but we're talking about an entrenched, pervasive, Internet-wide behavioral issue.

My guess is that making ping/ICMP less reliable to the extent that it becomes unusable won't change fundamental behavior. Rather, it'll make said "pingers" reach for another tool that does more or less the same thing with more or less as little extra effort as possible on their part.

And what might such an alternate tool do? My guess is one which SYN/ACKs various popular TCP ports (say 22, 25, 80, 443) and maybe sends a well-formed UDP query to the DNS port (53). Let's call this command "nmap -sn" with a few tweaks, shall we?

After all, it's no big deal to the pinger if their reachability command now exchanges 10-12 packets with resource-intensive destination ports instead of a couple of packets to lightweight destinations. I'll bet most pingers will neither know nor care, especially if their next-gen ping works more consistently than the old one.

So. Question. Will making ping/ICMP mostly useless for home-gamers and lazy network admins change internet behaviour for the better? Or will it have unintended consequences such as an evolutionary adaptation by the tools resulting in yet more unwanted traffic which is even harder to eliminate?


Mark.


Re: NDAA passed: Internet and Online Streaming Services Emergency Alert Study

2021-01-03 Thread Mark Delany
On 03Jan21, Brandon Martin allegedly wrote:
> I was thinking more in the original context of this thread w.r.t. 
> potential distribution of emergency alerts.  That could, if 
> semi-centralized, easily result in 100s of million connections to juggle 
> across a single service just for the USA.  While it presumably wouldn't 
> be quite that centralized, it's a sizable problem to manage.

Indeed. But how do you know the clients are still connected? And if they aren't, there is not much a server can do beyond discarding the state. Presumably the client would need to run a fairly frequent keep-alive/reconnect strategy to ensure the connection is still functioning.

Which raises the question: how long a delay do you tolerate for an emergency alert? I think the end result is a lot of active connections and keep-alive traffic. Not really quiescent at all. In the end, probably just as cheap to poll a CDN.
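
A back-of-envelope illustration, assuming the "100s of millions" of connections mentioned in this thread and an assumed 30-second keep-alive period:

   package main

   import "fmt"

   func main() {
           const clients = 100_000_000 // "100s of millions" per the thread
           const interval = 30.0       // assumed keep-alive period, seconds
           // Each keep-alive is at least one packet each way.
           fmt.Printf("%.1fM keep-alive exchanges/sec, server-side\n",
                   clients/interval/1e6)
   }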


Mark.


Re: NDAA passed: Internet and Online Streaming Services Emergency Alert Study

2021-01-03 Thread Mark Delany
On 03Jan21, Brandon Martin allegedly wrote:
> On 1/3/21 3:11 PM, Jay R. Ashworth wrote:
> > Well, TCP means that the servers have to expect to have 100k's of open
> > connections; I remember that used to be a problem.
> 
> Out of curiosity, has anyone investigated if it's possible to hold open 
> a low-traffic, long-lived TCP session without actually storing state 
> using techniques similar to syncookies and do so in a compatible manner?

Creating quiescent sockets has certainly been discussed in the context of RSS where you might want to server-notify a large number of long-held client connections very infrequently.

While a kernel could quiesce a TCP socket down to maybe 100 bytes or so (endpoint tuples, sequence numbers, window sizes and a few other odds and sods), a big residual cost is application state - in particular TLS state.

Even with a participating application, quiescing in-memory state to something less than, say, 1KB is probably hard but might be doable with a participating TLS library. If so, a million quiescent connections could conceivably be stashed in a coupla GB of memory. And of course if you're prepared to wear a disk read to recover quiescent state, your in-memory cost could be less than 100 bytes allowing many millions of quiescent connections per server.
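
To put rough numbers on that, an illustrative sketch; the field set is a guess for the example, not a real kernel or TLS structure:

   package main

   import "fmt"

   // quiescedConn is an illustrative guess at minimal per-connection
   // state: endpoint tuples, sequence numbers, window sizes and a few
   // other odds and sods, plus an opaque blob of quiesced TLS state.
   type quiescedConn struct {
           saddr, daddr   [16]byte // endpoint addresses (v4-mapped or v6)
           sport, dport   uint16
           sndNxt, rcvNxt uint32 // sequence numbers
           sndWnd, rcvWnd uint16 // window sizes
           tls            []byte // from a participating TLS library
   }

   func main() {
           const perConn = 1 << 10 // assume ~1KB all-in, TLS included
           const conns = 1_000_000
           fmt.Printf("%d connections x %d bytes = %d MB in memory\n",
                   conns, perConn, (conns*perConn)>>20)
   }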

Having said all that, as far as I understand it, none of the DNS-over-TCP systems imply centralization; that's just how a few applications have chosen to deploy. We deploy DOH to a private self-managed server pool which consumes a measly 10-20 concurrent TCP sessions.


Mark.


Re: An appeal for more bandwidth to the Internet Archive

2020-05-13 Thread Mark Delany
On 13May20, Denys Fedoryshchenko allegedly wrote:
> What about introducing some cache offloading, like CDN doing? (Google,
> Facebook, Netflix, Akamai, etc)

> Maybe some opensource communities can help as well

Surely someone has already thought thru the idea of a community CDN?
Perhaps along the lines of pool.ntp.org? What became of that
discussion?

Maybe a TOR network could be repurposed to cover the same ground.


Mark.


Re: alternative to voip gateways

2020-05-11 Thread Mark Delany
> We need to keep battery backup requirements, and expand them to all last
> mile IP bits. The need to call 911 has not gone away.

For sure. I was merely observing that the conversion of POTS to VOIP
in Australia didn't create a nation-wide disaster as the
pearl-clutchers once predicted.

In fact, if anything, the same folk who complained about the so-called
largesse of a nationwide IP last-mile are strangely silent now that
WFH is de rigueur.

To your point, the original plan was 90+% passive optical back to
major exchanges so the infrastructure was largely invulnerable to
wide-scale power shutdowns/failures. All a residence has to do is feed
a 7W NTD to stay connected.


Mark.


Re: alternative to voip gateways

2020-05-11 Thread Mark Delany
> wasn't there a huge shit storm in australia for their new national
> broadband network making internet primary and phone secondary, a lot
> of aussies on forums I frequent bitch about its reliability, where
> even their aged copper services worked fine, not to mention prolonged
> outages due to storms and the bushfires they had recently, let's hope
> the world learns from australia's mistakes and not go down that path.

There are still a few complaints every now and again but fixed line
numbers are continuing to drop off a cliff and those residential
services which remain have almost all been converted to VOIP via home
gateway with an FXS port.

Mobile/Cell is where most people ended up. Especially since you can
get unlimited calls/txt with some data for about ten bucks a month.

It also helps that a number of the mobile providers include
wifi-calling so mobile is a viable alternative even in weak cell
coverage areas if you have internet.

Yes, everyone knows about the reliability/power-failure arguments but
in the latest set of bushfires whole exchanges, backhaul services and
power distribution cables were destroyed so 8 hours of battery backed
up POTS in a local exchange didn't help much.


Mark.


Re: Elephant in the room - Akamai

2019-12-08 Thread Mark Delany
> Have there been any fundamental change in their network architecture
> that might explain pulling these caches?

Maybe not network architecture, but what if the cache-to-content ratio
is dropping dramatically due to changes in consumer behavior and/or a
huge increase in the underlying content (such as adoption of higher
and multiple-resolution videos)?

There has to be a tipping point at which a proportionally small cache
becomes almost worthless from a traffic saving perspective.

If you run a cluster one presumes you can see what your in/out ratio
looks like and where the trend-line is headed.


Another possibility might be security. It may be that they need
additional security credentials for newer services which they are
reluctant to load into remote cache clusters they don't physically
control.


Mark.


Re: all major US carriers received text messages overnight that appear to have been sent around Valentine's Day 2019

2019-11-11 Thread Mark Delany
> What I've seen happen more often than that:
> 
> Server goes partly belly-up, queue fills up.  Backup process runs, backing up the
> queue. (Optionally here: Reboot the server and lose the queue).  Much later, the
> server hits another issue that requires recovering from backups - and they restore
> a truly ancient copy.

Particularly as mail servers tend to check for expired messages
*after* a delivery attempt. This means a restored queue containing
ancient messages will most likely be given one last delivery attempt
prior to bouncing. One real example comes from the qmail-send man
page:

   queuelifetime
          Number of seconds a message can stay in the queue.  Default:
          604800 (one week).  After this time expires, qmail-send will try
          the message once more, but it will treat any temporary delivery
          failures as permanent failures.
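
A sketch of that delivery-then-expire ordering (illustrative only, not qmail's actual code):

   package main

   import (
           "fmt"
           "time"
   )

   const queuelifetime = 7 * 24 * time.Hour // 604800 seconds, one week

   // deliverThenExpire mirrors the ordering described above - delivery
   // attempt first, age check second. A freshly-restored ancient message
   // therefore gets one real delivery attempt before any temporary
   // failure is converted to a bounce.
   func deliverThenExpire(queued time.Time, tempFail bool) string {
           // (the real delivery attempt happens here)
           switch {
           case tempFail && time.Since(queued) > queuelifetime:
                   return "bounce" // temporary failure treated as permanent
           case tempFail:
                   return "retry later"
           }
           return "delivered"
   }

   func main() {
           ancient := time.Now().Add(-2 * 365 * 24 * time.Hour)
           fmt.Println(deliverThenExpire(ancient, false)) // "delivered"!
   }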

Combine that with the fact that it's not unheard of for SMS servers to
be derived from mail servers (since they do virtually the same thing)
and an accidental queue restore or server revivification seems the most
plausible.


Mark.


Re: IPv6 on mobile networks, was Update to BCP-38?

2019-10-03 Thread Mark Delany
> Yep I see this on AT&T's post-paid network with my Pixel 3A XL as well, one
> place I really noticed it causing issues is with Facebook and Instagram
> where Facebook requires constant captchas to view any Facebook links I
> receive and embedded Instagram content in news articles and things of that
> nature often failed to load. It is very annoying.

> > Tmobile US, VZ, and Sprint all have IPv6, but only AT&T has this behavior
> > afaik.

HTTP proxies are used by some mobile carriers to down-scale media sent
thru their radio network to reduce bandwidth. They rationalise that,
e.g., an HD video can be down-scaled for a tiny screen with no real loss
of fidelity but a significant reduction in bandwidth. Similar
strategies apply to almost all compressible media: mp3, jpegs, etc.

More often used outside the US as I recall but sounds like AT&T might
be doing something similar. You could try a mobile fetch of a known
media file via HTTP and HTTPS then compare them for possible insights
(make sure to use a mobile browser to avoid browser-detects).
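
If you'd rather script that comparison than eyeball it, a rough sketch follows; the URL is a placeholder and the User-Agent string is just one plausible mobile value:

   package main

   import (
           "fmt"
           "io"
           "net/http"
   )

   // fetchSize downloads url with a mobile User-Agent and returns the
   // number of body bytes actually received.
   func fetchSize(url string) int64 {
           req, err := http.NewRequest("GET", url, nil)
           if err != nil {
                   return -1
           }
           req.Header.Set("User-Agent", "Mozilla/5.0 (Android 10; Mobile)")
           resp, err := http.DefaultClient.Do(req)
           if err != nil {
                   fmt.Println(url, "failed:", err)
                   return -1
           }
           defer resp.Body.Close()
           n, _ := io.Copy(io.Discard, resp.Body) // count actual bytes received
           return n
   }

   func main() {
           // Placeholder media file - substitute a real, known-size URL.
           plain := fetchSize("http://example.com/sample.mp4")   // a proxy can transcode this
           secure := fetchSize("https://example.com/sample.mp4") // a proxy cannot touch this
           fmt.Printf("http=%d bytes, https=%d bytes\n", plain, secure)
   }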

Such proxies are sometimes used for carrier ad-insertion as well so
one presumes they detest the widespread switch to HTTPS for at least
two reasons.


Mark.