Summary of this email: I repeat my argument that automatic NDP
proxying is the right way to handle the "road warrior" use case for
IPv6. The reasons I'm pushing this so hard are that (1) including this
functionality in iked would be much more robust than any hacky script I
could write that tries to monitor routing state changes, and (2) both
of the responses to my routing question claim that the correct way to
connect a laptop to my VPN is to negotiate with my ISP to get a larger
subnet, which just sounds bonkers when "ndp -s" solves the technical
problem so perfectly. Then I repeat my offer to fund this solution.

> While I suppose the /64 your VPS provider gives you is "enormous"
> compared to IPv4, I don't find such a comparison relevant since IPv6
> and IPv4 are entirely different protocols. In fact I actually think it
> is small. Why? RFC 6177 (https://datatracker.ietf.org/doc/html/rfc6177)
> recommends that /48 or at least /56 subnets be given to end sites, so
> your _small_ /64 violates that recommendation. Hell, even my lowly
> residential ISP, Xfinity/Comcast, gives me a /60. Unfortunately a great
> many ISPs and VPS providers violate this. Not sure if it is due to
> incompetence where they incorrectly think such allocations are
> "wasteful" or what. IPv6 not only restores end-to-end communication the
> way IPv4 initially started, but it is designed so that sites have many
> _subnets_. This brings me to the next point.

The trouble with subnets is that they have to be configured. I would
have to install a DHCPv6 client that can request that subnet through
prefix delegation. OpenBSD doesn't have one in base, so I'd have to
install the wide-dhcpv6 package. Then I have to configure it. Can a
host on my home network request a /60 from
my ISP's router? Or do I have to replace the ISP's router with my own
device in order to make that request? If I replace the ISP's router, I
have to provide all the other router functionality, like the firewall
and IPv4-style DHCP. Then there is the host on which iked is running:
it needs to advertise ownership of this subnet, so I have to configure
rad properly. This is insane! I'm not trying to sell VPN services to
customers; I just want a simple proxy for my desktop, a couple of
laptops, and maybe a friend or two. It's the simple sort of low-volume,
hobbyist thing that I *could* accomplish with a layer 2 VPN that
tunnels entire Ethernet frames to the remote server---but that would
also be a huge pain to configure because now I have to set up and
configure a bunch of extra tunnel interfaces.
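(For a sense of what "configure it" would mean in practice: just the
prefix-delegation request with wide-dhcpv6's dhcp6c looks roughly like
the sketch below. The interface names, ID numbers, and SLA lengths are
placeholders, and the details would need to be checked against
dhcp6c.conf(5).)

    # /etc/dhcp6c.conf: ask the upstream router for a delegated prefix
    # on the WAN interface em0 and carve a /64 out of it for the LAN
    # interface em1. All names and numbers here are illustrative.
    interface em0 {
            send ia-pd 0;
    };

    id-assoc pd 0 {
            prefix-interface em1 {
                    sla-id 1;
                    sla-len 4;
            };
    };

And that's before touching rad on the iked host.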

The "road warrior" VPN (I hate that name) described in the OpenBSD FAQ
is actually really simple to set up. You just have to copy the
/etc/iked/local.pub keys into the appropriate places, write a pair
of very short /etc/iked.conf files, run rcctl enable iked, and
away we go! Almost.
routing" between the gateway router and the IKEv2 responder. So for
IPv4 we also have to enable nat in /etc/pf.conf, and for IPv6 we have
to *either* switch from Vultr to a fancier VPS and do a crapton of
configuration to set up the subnets, *or* use NDP proxying to advertise
that final hop.
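To make that concrete, the responder side of the FAQ-style setup is
roughly the sketch below. The names, addresses, and tag are
placeholders, and the exact flow and pool lines should be checked
against iked.conf(5), but this is the scale of it:

    # /etc/iked.conf on the IKEv2 responder (placeholder names/addresses)
    ikev2 "roadwarrior" passive esp \
            from 0.0.0.0/0 to dynamic \
            from ::/0 to dynamic \
            local egress peer any \
            srcid vpn.example.com \
            config address 10.0.5.0/24 \
            config address 2001:db8:aaaa::/64 \
            tag "ROADW"

    # /etc/pf.conf: the IPv4 "last step" is one NAT rule for tagged traffic
    match out on egress inet tagged ROADW nat-to (egress:0)

The IPv6 equivalent of that one pf line is the NDP proxy entry
discussed next.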

NDP proxying is simple. It's also elegant. What does it take for the
proxied IP address to be globally routable? Three things have to
happen: the Internet has to deliver the packet to the IKEv2 responder's
gateway router. The gateway router has to send the packet to the IKEv2
responder. And the IKEv2 responder has to send the packet into the
IPsec tunnel. The first step (Internet to gateway router) is the
fundamental job of the Internet and requires no extra work on our part.
The last step (pushing the packet into the tunnel) is handled by iked.
All that's left is the middle step: getting the packet from a router to
a host on the same link. How do we do that? This is the problem NDP was
designed to solve. I don't think it's hacky or unclean at all. The
responder wants to tell the hosts on the local link that it is the
link-layer endpoint for a single destination address that is on the
local link's subnet. That's all NDP does: it instructs hosts on the
local link to update their local routing tables. It's perfect for this
purpose. And it's even *more* perfect because OpenBSD implements this
functionality by updating its own internal routing tables, which iked
already does. So it would be simpler still if iked set up the
NDP proxying automatically instead of forcing me to run a script every
time I bounce the connection.
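For reference, the manual version is just this pair of commands. The
address and MAC below are placeholders; the MAC is that of the
responder's own LAN-facing interface, since the responder answers
neighbor solicitations on the client's behalf:

    # when the tunnel comes up
    ndp -s 2001:db8:aaaa::101 00:00:5e:00:53:42 proxy
    # when it goes down
    ndp -d 2001:db8:aaaa::101

That's the whole job iked would have to automate.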

I would also suggest comparing the "hackiness" of NDP proxying to the
hackiness of NAT, which is how we solve this same problem in IPv4. If
we put aside all the political issues around NAT, it's still a really
janky process. It only works for TCP and UDP, so you can't send ICMP
messages or use any other fancy experimental protocols. You can't
listen on ports unless you create specific firewall rules to forward
those ports. The server has to maintain per-connection state to rewrite
addresses and port numbers. And the fact that addresses and port
numbers are being rewritten at all! That's crazy when you think about
it! The only reason
we're so quick to suggest NAT is that we have a lot of practice with it
because the IPv4 address shortage made it necessary.
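(Even the simple "listen on a port" case means carrying a rule like
the following in pf.conf, with addresses and ports as placeholders:

    # forward external port 2222 to SSH on an inside host
    pass in on egress inet proto tcp to (egress) port 2222 \
            rdr-to 10.0.5.10 port 22

With NDP proxying there is no IPv6 analogue of that rule; the client's
global address is simply reachable, pf policy permitting.)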


> You would like to rely on SLAAC for your VPN peers, but SLAAC will
> likely not work on anything smaller than /64. Why? Because the first
> 64 bits of an IPv6 address is designated as the network identifier.
> You already carved out some IPs from the /64 though which means you
> have less than /64 to use for SLAAC inside the tunnel.

To be clear: I am using exactly *one* address per tunnel. I'm not
bridging networks. I'm attaching individual clients to a network and
giving them a single IPv6 address each. Suppose I have three friends
visit my home and they simultaneously connect their phones to my WiFi.
There would be no talk of additional subnets; I can spare three IPv6
addresses. I don't need to assign a whole subnet to each phone. Now,
suppose that my friends stay at their homes but they each connect
their phones to my VPN. Suddenly I'm hearing that I need to request a
/56 from my ISP to support that use case properly. What?!

When I brought up SLAAC, I meant to invoke it *on the host network.*
It makes no sense at all to route an entire subnet down each tunnel.
The point of a subnet is that two different hosts in the same subnet
can communicate directly without their packets ever having to exit
that subnet. If there is only one host, a subnet is pointless (unless
you're using it for temporary addresses, but that's a heavyweight use of a
subnet). I could allocate a subnet specifically for "connected VPN
clients" but that would be for accounting purposes only and wouldn't
actually make sense from an Internet architecture standpoint: the
clients wouldn't be able to communicate with each other directly over
the subnet---any messages between them would have to first pass through
the IKEv2 responder, which would *not* have assigned itself an IP
address from this subnet.

Subnets make sense if I host a LAN party at my house, and my friend
hosts another LAN party at his house, and we want to encrypt
traffic between our two networks. Then I would have one subnet, my
friend would have a different subnet, and the IPsec tunnel would carry
traffic from one subnet to the other while ignoring traffic whose
source and destination addresses are in the same subnet. But that isn't
this use case at all.
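(For contrast, that network-to-network case is the one where the
iked.conf flow really does name two subnets. A placeholder sketch,
with made-up prefixes and peer address:

    # site-to-site: carry traffic between two subnets over the tunnel
    ikev2 "lanparty" active esp \
            from 10.0.1.0/24 to 10.0.2.0/24 \
            peer 203.0.113.2

Nothing like that applies to a single road-warrior client.)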


> I used to use Vultr; but when they were unwilling to provide something
> bigger than a /64 in addition to actually routing the entire block, I
> left them. If you insist on using IPv6 without relying on NAT or NDP
> proxying, then I recommend you find another provider. What you are
> trying to do is trivial when IPv6 is done properly. I have a similar
> setup myself except I use WireGuard, but I'm confident IKEv2/IPSec
> would be easy to set up as well.

I am willing to rely on NDP proxying. More specifically: I think that
NAT is the correct way to handle this use case for IPv4 and NDP proxying
is the correct way to handle it with IPv6.

What I am trying to do is trivial with NDP proxying. The suggestion to
run "ndp -s" got my connection working immediately, and conceptually it
makes sense why that should be the case: the purpose of NDP is to
advertise the route to an individual IPv6 address on a local link, and
the one remaining step to making a "configured" IPv6 VPN address usable
is to find a way to advertise it on the local link.

The only problem with NDP proxying, as it's implemented right now, is
that I have to do it manually. I can probably set up a script that
monitors /var/log/daemon for connection and disconnection events and
calls "ndp -s [...] proxy" and "ndp -d" appropriately, but I think this
is a fundamental feature that *everyone* would want to use when doing
road-warrior-style tunneling for IPv6. (I assume people are using NAT
right now because that's how they're used to doing it for IPv4.)
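To be explicit about what I'd otherwise be running: a sketch of that
watcher script is below. The log-matching patterns are placeholders
(I would have to check what iked actually logs on establish and
teardown), and the address and MAC are made up.

    #!/bin/sh
    # Hypothetical watcher: keep an NDP proxy entry in sync with the
    # tunnel. CLIENT6 is the address iked hands to the client; LLADDR
    # is the MAC of the responder's LAN-facing interface. Both are
    # placeholders.
    CLIENT6=2001:db8:aaaa::101
    LLADDR=00:00:5e:00:53:42

    tail -f /var/log/daemon | while read -r line; do
            case $line in
            *iked*established*"$CLIENT6"*)   # placeholder match
                    ndp -s "$CLIENT6" "$LLADDR" proxy
                    ;;
            *iked*closed*"$CLIENT6"*)        # placeholder match
                    ndp -d "$CLIENT6"
                    ;;
            esac
    done

It works, but it's a root-privileged daemon scraping a log file to
learn something iked already knows.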


I'm serious about being willing to pay to have the automatic NDP
proxying added to iked. I often hear stories about how open-source
developers suffer from lack of funding. I would be willing to part
with $200 for this specific feature; if that's not a reasonable price,
I'm willing to negotiate off-list. I don't know how else to make a
serious feature request. I could try to do it myself but it would take
me longer to navigate an unfamiliar code base and I probably wouldn't
do it as cleanly as an established OpenBSD developer. This way,
everyone wins: I can stop running my shell script as a daemon with root
privileges, other users (including Zack Newman, who apparently has the
same use case I do) benefit from the code updates, and some lucky
developer gets to have a really nice dinner. I also get to feel like I
contributed something useful to my favorite operating system.

Regards,
Anthony Coulter
