I don't necessarily see an issue here - DNS solves this quite nicely, but...

When the prefix changes, only the top half of each address needs updating - the 
interface identifier stays the same - and it's a simple sed exercise to replace 
the old prefix in all configurations if you have it hard-coded for any reason.
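
That sed exercise might look like this - a minimal sketch with made-up 
documentation prefixes standing in for the real delegations:

```shell
# Hypothetical old and new delegated prefixes (RFC 3849 documentation
# space - substitute your real ones).
old='2001:db8:aaaa'
new='2001:db8:bbbb'

# Only the prefix half of the address changes; the interface
# identifier (the low 64 bits) is left alone.
line='listen 2001:db8:aaaa:1::10'
updated=$(printf '%s\n' "$line" | sed "s/${old}/${new}/g")
echo "$updated"   # -> listen 2001:db8:bbbb:1::10
```

Across a whole config tree the same idea is something like 
`grep -rl "$old" /etc | xargs sed -i "s/${old}/${new}/g"` (GNU sed's -i).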

One of the nice things about this is that the times I've had to renumber 
networks, it was a zero-downtime operation with a smooth transition. 

I'd go full DNS instead of the ULA approach, myself (which is what I've done), 
so renumbering doesn't impact me at all. Everything is updated and correct 
within a minute or so of an address change. 
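
A sketch of what going full DNS can look like in practice, assuming a 
nameserver that accepts TSIG-signed dynamic updates. All the names and the key 
path here are hypothetical, and this block only prints the nsupdate script 
rather than sending it:

```shell
# Hypothetical names throughout: ns1.example.net, home.example.net,
# and the nas host are stand-ins; nothing is actually sent.
new_prefix='2001:db8:bbbb'

# A short TTL (60s) is what makes everything correct within about a
# minute of an address change.
script=$(cat <<EOF
server ns1.example.net
zone home.example.net
update delete nas.home.example.net. AAAA
update add nas.home.example.net. 60 AAAA ${new_prefix}:1::10
send
EOF
)
printf '%s\n' "$script"
# In real use: printf '%s\n' "$script" | nsupdate -k /etc/ddns.key
```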

Sadly, as I've noted before, my last mile doesn't have IPv6 service, so I've 
had to tunnel it, but I've renumbered numerous times since I first started 
tunneling in 2009 without any operational impact. At most, updating dyndns 
provider entries for external services was the only thing affected. 

-----Original Message-----
From: Chris Woodfield via NANOG <[email protected]> 
Sent: Monday, December 1, 2025 5:47 PM
To: North American Network Operators Group <[email protected]>
Cc: Bryan Fields <[email protected]>; Chris Woodfield <[email protected]>
Subject: Re: IPv6 Performance (was Re: IPv4 Pricing)

I’ll chime in my personal beef with IPv6, or at least, my home ISP’s 
implementation…

Unless I want to pay $$$ for a “business-class” service for my home, my IP 
allocations, both IPv4 and IPv6, are not statically assigned. While they don’t 
change often, they have changed in the past.

Now, if I want to assign static addresses for devices within my home network, I 
don’t have a problem with v4 - everything’s RFC1918, so if the public IP 
changes, NBD, and I can even do it with DHCP client IDs. However, if my IPv6 PD 
changes and my home devices all have GUAs assigned via SLAAC, then… guess what 
- every IPv6 device address in my network just changed. Oops.
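
That renumbering effect falls directly out of how SLAAC builds an address. A 
minimal sketch with a hypothetical MAC, the classic modified EUI-64 derivation, 
and documentation prefixes (modern stacks often use stable-privacy IIDs 
instead, but those are just as stable across prefix changes, so the point is 
the same):

```shell
# Hypothetical MAC address; the classic modified EUI-64 IID flips the
# U/L bit of the first octet and inserts ff:fe in the middle.
mac='52:54:00:12:34:56'
set -- $(printf '%s' "$mac" | tr ':' ' ')
first=$(printf '%02x' $(( 0x$1 ^ 0x02 )))
iid="${first}$2:${3}ff:fe$4:$5$6"

# The IID never changes, but every GUA embeds the delegated prefix,
# so a new PD renumbers every SLAAC host at once.
echo "old PD: 2001:db8:aaaa:1:${iid}"
echo "new PD: 2001:db8:bbbb:1:${iid}"
```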

Practically, I’ve worked around this by manually assigning ULAs to the devices 
that need static v6 addresses, like my SAN and the machines that do NFS mounts 
from it. But 1. that’s more than annoyingly clunky - hardly the improved 
experience that IPv6 promised - and 2. weren’t we trying to get away from ULAs 
in the first place?
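
The ULA workaround described here can be sketched briefly; this uses random 
bytes for the 40-bit global ID, which is a common shortcut rather than RFC 
4193's exact SHA-1-over-time-and-EUI-64 recipe:

```shell
# RFC 4193 ULA: fd00::/8 plus a 40-bit global ID. Random bytes are
# the common shortcut; the RFC's exact recipe hashes time + EUI-64.
gid=$(od -An -N5 -tx1 /dev/urandom | tr -d ' \n')
ula="fd${gid:0:2}:${gid:2:4}:${gid:6:4}::/48"
echo "$ula"
```

Subnets and static host addresses are then carved out of that /48, and they 
survive any change to the delegated GUA prefix.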

-Chris

> On Dec 1, 2025, at 13:44, Bryan Fields via NANOG <[email protected]> 
> wrote:
> 
> On 12/1/25 14:22, Jared Mauch via NANOG wrote:
> 
>> I find myself having to tether off their networks when I’m on IPv4 
>> only networks to access things like my hypervisors and other assets 
>> that are IPv6-only because they have superior networking these days.
> 
> While I'll agree v6 is easy and should be deployed I have to take issue with 
> the current as-built being superior.
> 
> At least once or twice a month I'm downloading something and will find the 
> IPv4 transfer significantly faster.  Case in point: I downloaded the 
> Proxmox ISO yesterday to a colo server with 50G uplinks.  It loafed along 
> at 2.4 Mbytes/s using default wget, which of course preferred IPv6.  Adding 
> -4 to wget made that shoot up to 80 Mbytes/s.
> 
> This is ipv6 behavior I've seen time and time again.  I'm unsure where 
> problems like these lie in the network, other than it's not mine or my peers. 
> I've seen the same issues with v6 paths to the same server bounce around the 
> west coast and back, whilst IPv4 is 6 hops and 12 ms away.
> 
> This is exactly the sort of thing that holds IPv6 back by giving it a bad 
> name.
> --
> Bryan Fields
> 
> 727-409-1194 - Voice
> http://bryanfields.net
> _______________________________________________
> NANOG mailing list 
> https://lists.nanog.org/archives/list/[email protected]/message/APA2YIX47NF7U65G2HIBAPHT3X6EWRIG/

_______________________________________________
NANOG mailing list
https://lists.nanog.org/archives/list/[email protected]/message/AWW6EP3WE2D7V65Z3EHZJBYZPWX5WRBH/