I think I have this working by using proxyarp instead of bridging.

On the EC2 VM: leave lxdbr0 unconfigured. Then do:

sysctl net.ipv4.conf.all.forwarding=1
sysctl net.ipv4.conf.lxdbr0.proxy_arp=1
ip route add <container IP 1> dev lxdbr0
ip route add <container IP 2> dev lxdbr0
# where <container IP 1> and <container IP 2> are the IP addresses of the containers

The containers are statically configured with those IP addresses, and with the EC2 VM's address as gateway.
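For concreteness, the static configuration inside each container might look something like this (a sketch assuming a Debian-style container using /etc/network/interfaces; the placeholders stand for the container's own VPC address and the EC2 VM's address):

```
# /etc/network/interfaces inside the container (sketch)
auto eth0
iface eth0 inet static
    address <container IP>
    netmask <VPC subnet mask>
    gateway <EC2 VM IP>
```

The container ARPs for the gateway address on its veth link; because proxy_arp is enabled on lxdbr0, the host answers with its own MAC and forwards the traffic.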

This is sufficient to allow connectivity between the containers and other VMs in the same VPC - yay!

At this point, the containers *don't* have connectivity to the outside world. I can see the packets are being sent out with the correct source IP address (the container's) and MAC address (the EC2 VM), so I presume that the NAT in EC2 is only capable of working with the primary IP address - that's reasonable, if it's 1:1 NAT without overloading.

So there's also a need for iptables rules to NAT the container's address to the EC2 VM's address when talking to the outside world:

iptables -t nat -A POSTROUTING -s <container IPs> -d <VPC subnet> -j ACCEPT
iptables -t nat -A POSTROUTING -s <container IPs> -o eth0 -j MASQUERADE
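Putting the host-side pieces together, the whole setup could be scripted like this (a sketch only; the variable names and example addresses are mine, not from the original post, so substitute your own):

```
#!/bin/sh
# Hypothetical example addresses -- replace with your own.
C1=10.0.0.11        # container 1's VPC address
C2=10.0.0.12        # container 2's VPC address
VPC=10.0.0.0/24     # the VPC subnet

# Enable routing, and have the host answer ARP for the container IPs.
sysctl net.ipv4.conf.all.forwarding=1
sysctl net.ipv4.conf.lxdbr0.proxy_arp=1

# Steer the container addresses onto the bridge.
ip route add "$C1" dev lxdbr0
ip route add "$C2" dev lxdbr0

# Don't NAT traffic that stays inside the VPC...
iptables -t nat -A POSTROUTING -s "$C1" -d "$VPC" -j ACCEPT
iptables -t nat -A POSTROUTING -s "$C2" -d "$VPC" -j ACCEPT
# ...but masquerade everything else behind the VM's primary address,
# since EC2's 1:1 NAT only translates that one.
iptables -t nat -A POSTROUTING -s "$C1" -o eth0 -j MASQUERADE
iptables -t nat -A POSTROUTING -s "$C2" -o eth0 -j MASQUERADE
```

Note the ACCEPT rules must come before the MASQUERADE rules, since the first matching rule in the nat POSTROUTING chain wins.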

And hey presto: containers with connectivity, albeit fairly heavily frigged.

But this is quite a useful outcome. You can run a single EC2 VM, and run multiple containers on it for separate services, reached via separate VPC IP addresses as if they were separate VMs, albeit ones without their own public IP addresses.


lxc-users mailing list