Even better, there are providers (major national ones, even!) that only provide 
IPv6 to DHCP customers; if you pay extra for static IPv4, there *is* no IPv6.

Then again, that same provider only has IPv6 on ~5% of their network: 
https://stats.labs.apnic.net/ipv6/AS5650?c=US&p=1&v=1&w=30&x=1

On Dec 1, 2025, at 4:47 PM, Chris Woodfield via NANOG <[email protected]> 
wrote:

I’ll chime in with my personal beef about IPv6, or at least, my home ISP’s 
implementation…

Unless I want to pay $$$ for a “business-class” service for my home, my IP 
allocations, both IPv4 and v6, are not statically assigned. While they don’t 
change often, they have changed in the past.

Now, if I want to assign static addresses for devices within my home network, I 
don’t have a problem with v4 - everything’s RFC1918, so if the public IP 
changes, NBD, and I can even do it with DHCP client IDs. However, if my IPv6 PD 
changes and my home devices all have GUAs assigned via SLAAC, then… guess what 
- every IPv6 device address in my network just changed. Oops.

Practically, I’ve worked around this by manually assigning ULAs to the devices 
that need static v6 addresses, like my SAN and the machines that do NFS mounts 
from it. But 1. that’s more than annoyingly clunky - hardly the improved 
experience that IPv6 promised - and 2. weren’t we trying to get away from ULAs 
in the first place?
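
(As a concrete illustration of that workaround, a minimal sketch on a Linux host, 
assuming a ULA prefix generated per RFC 4193; the prefix fd12:3456:789a::/48, the 
interface name, and the NFS export path below are all made up for the example.)

  # assign a static ULA address that survives prefix-delegation changes
  ip -6 addr add fd12:3456:789a:1::10/64 dev eth0

  # point NFS mounts at the ULA rather than the SLAAC GUA,
  # so a new delegated prefix doesn't break them
  mount -t nfs4 '[fd12:3456:789a:1::10]:/export/data' /mnt/data

Making the address persistent is distro-specific (systemd-networkd, 
/etc/network/interfaces, or whatever your platform uses).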

-Chris

On Dec 1, 2025, at 13:44, Bryan Fields via NANOG <[email protected]> wrote:

On 12/1/25 14:22, Jared Mauch via NANOG wrote:

I find myself having to tether off their networks when I’m on IPv4-only 
networks to access things like my hypervisors and other assets that are 
IPv6-only because they have superior networking these days.

While I'll agree v6 is easy and should be deployed, I have to take issue with 
the current as-built being superior.

At least once or twice a month I'm downloading something and find that the IPv4 
transfer is significantly faster.  Case in point: I downloaded the Proxmox ISO 
yesterday to a colo server with 50G uplinks.  It loafed along at 2.4 MB/s using 
default wget, which of course preferred IPv6.  Adding -4 to wget made that 
shoot up to 80 MB/s.
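
(For anyone who wants to reproduce that sort of comparison, a rough sketch; the 
URL below is a placeholder rather than the actual mirror, and the transfer rate 
comes straight from wget's own progress output:)

  # default name resolution typically prefers the AAAA record when one exists
  wget -O /dev/null https://example.org/proxmox-ve.iso

  # force IPv4 only
  wget -4 -O /dev/null https://example.org/proxmox-ve.iso

  # force IPv6 only, for an apples-to-apples number
  wget -6 -O /dev/null https://example.org/proxmox-ve.iso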

This is IPv6 behavior I've seen time and time again.  I'm unsure where problems 
like these lie in the network, other than that they're not in mine or my peers'. 
I've seen v6 paths to the same server bounce around the West Coast and back, 
whilst IPv4 is 6 hops and 12 ms away.
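
(A quick way to eyeball that kind of path asymmetry, assuming Linux mtr or 
traceroute; server.example.net stands in for the actual host:)

  # compare hop count and latency over each address family
  mtr -4 --report server.example.net
  mtr -6 --report server.example.net

  # or with plain traceroute
  traceroute -4 server.example.net
  traceroute -6 server.example.net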

This is exactly the sort of thing that holds IPv6 back by giving it a bad name.
--
Bryan Fields

727-409-1194 - Voice
http://bryanfields.net
