Thanks for the info, I will try the master branch. All the best, Rob
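[Editor's note: for anyone following along, here is a minimal sketch of building Jool from the main branch as Alberto suggests below. It assumes the repository's documented DKMS + autotools layout; the exact commands may differ slightly between versions, so treat this as a starting point rather than the official procedure.]

```shell
# Fetch the latest main branch (the 4.1.10 release does not support kernel 6.7).
git clone https://github.com/NICMx/Jool.git
cd Jool

# Kernel modules via DKMS (builds jool.ko and jool_siit.ko against the running kernel).
sudo dkms install .

# Userspace clients (jool / jool_siit) via autotools.
./autogen.sh
./configure
make
sudo make install
```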
On Mon, Dec 18, 2023 at 2:51 PM Alberto Leiva <[email protected]> wrote:
>
> > Also, I am not able to get jool compiled for my kernel at this time;
> > I am dependent on a recent kernel, as I am using/testing bcachefs:
>
> In regards to this, please note that jool.mx is now a zombie domain.
> We've been trying to recover it (or destroy it), but so far our
> efforts have been fruitless. At present, Jool's official website is
> https://nicmx.github.io/Jool/en/index.html, and the latest version is
> 4.1.10, not 4.1.7.
>
> ... That said, 4.1.10 also doesn't support kernel 6.7:
> https://nicmx.github.io/Jool/en/intro-jool.html#compatibility
>
> But I just tried Jool's latest commit (from the main branch) in kernel
> 6.7-rc6, and it compiles without issues.
>
> I realize Jool 4.1.11 is quite overdue at this point, so I will try to
> squeeze a new release into this week's schedule. But I'm not confident
> I'm going to make it in time; December is a difficult month.
>
> In any case, can you compile the latest main?
>
> On Mon, Dec 18, 2023 at 5:16 AM Ondřej Caletka via Jool-list
> <[email protected]> wrote:
> >
> > On 17/12/2023 21:08, Rob Ert via Jool-list wrote:
> > > What I need now is for the IPv6-only systemd-nspawn containerized
> > > machine instances connected over ipvlan to be able to access
> > > IPv4-only hosts (e.g. github.com).
> > >
> > > I wasn't able to get NAT64 working with my particular setup and my
> > > first tries with tayga; ping -6 github.com works on the host, but
> > > not on the IPv6-only containers, as they don't automatically have
> > > access to the host's nat64 tun device, among other things. Is there
> > > any chance jool would be easier to get working with this particular
> > > setup?
> >
> > Hello Rob,
> >
> > what I see here is that because you are using ipvlan, there is no
> > router owned by you in this setup. This makes it really tricky to
> > put NAT64 in place.
> > If your setup used a more traditional way of routing incoming
> > traffic between the upstream interface and a bridge interface with
> > a veth pair to each container, deploying NAT64 would be pretty
> > straightforward.
> >
> > The problem with an ipvlan interface is that you cannot alter the
> > routing decision: on the egress side, everything is sent either on
> > the wire or to another ipvlan interface, if that interface owns the
> > destination address. On the ingress side, the destination address
> > decides which ipvlan interface will receive the packet.
> >
> > What you need to do is route a prefix like 64:ff9b::/96 into a
> > container that would work as the NAT64. But this cannot happen with
> > ipvlan, as the ipvlan driver will not figure out where to send such
> > traffic: the destination IPv6 address will not belong to any ipvlan
> > interface, so the packet will end up forwarded to the wire.
> >
> > I don't see any easy way out of this other than changing the host
> > setup to routing instead of ipvlan, or deploying a separate NAT64
> > outside of your host.
> >
> >
> > --
> > Best regards,
> >
> > Ondřej Caletka
> >
> > _______________________________________________
> > Jool-list mailing list
> > [email protected]
> > https://mail-lists.nic.mx/listas/listinfo/jool-list
>
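[Editor's note: the routed alternative Ondřej describes could look roughly like the sketch below. The bridge name `br0` and the NAT64 container's address `2001:db8::64` are placeholders invented for illustration; the `jool instance add` invocation follows Jool's documented run example.]

```shell
# On the host: send the NAT64 well-known prefix to the container running Jool.
# br0 and 2001:db8::64 are hypothetical names for this sketch.
ip -6 route add 64:ff9b::/96 via 2001:db8::64 dev br0

# Inside the NAT64 container: enable forwarding for both address families,
# load the module, and create a stateful NAT64 instance hooked via Netfilter.
sysctl -w net.ipv6.conf.all.forwarding=1
sysctl -w net.ipv4.conf.all.forwarding=1
modprobe jool
jool instance add "example" --netfilter --pool6 64:ff9b::/96
```

With this in place, the IPv6-only containers reach IPv4 hosts through 64:ff9b::-mapped addresses, typically paired with a DNS64 resolver so names like github.com resolve to synthesized AAAA records.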
