On 09/09/2011 04:55 AM, Marco wrote:
Note that there I was capturing only ICMP traffic. If I change the capture filter to include UDP port 4500, the encrypted packets do show up (10.0.4.100 is the Shrew box's LAN address, x.x.x.x is the remote VPN concentrator):

    shrewbox# tcpdump -v -n -i any icmp or udp port 4500
    tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 96 bytes
    10:31:02.142694 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
        10.0.4.18 > 192.168.1.12: ICMP echo request, id 14367, seq 1, length 64
    10:31:02.142816 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto UDP (17), length 144)
        10.0.4.100.4500 > x.x.x.x.4500: UDP-encap: ESP(spi=0x0953125a,seq=0x85b), length 116
    10:31:02.173169 IP (tos 0x0, ttl 55, id 53038, offset 0, flags [none], proto UDP (17), length 144)
        x.x.x.x.4500 > 10.0.4.100.4500: UDP-encap: ESP(spi=0x0de90eeb,seq=0x8a2), length 116
    10:31:02.173194 IP (tos 0x48, ttl 57, id 3004, offset 0, flags [none], proto ICMP (1), length 84)
        192.168.1.12 > 192.168.10.219: ICMP echo reply, id 14367, seq 1, length 64
    ^C
    4 packets captured
    4 packets received by filter
    0 packets dropped by kernel

Capturing at the LAN gateway also shows ESP-in-UDP packets flowing in both directions, so I would say that part is fine. Furthermore, Shrew IPsec has its own set of oddities in the way cleartext traffic shows up in tcpdump; for more information see this post: http://lists.shrew.net/pipermail/vpn-help/2011-April/003658.html. But regardless, I am confident that the encryption process is working correctly; the problem is that Shrew somehow seems to prevent Linux from successfully doing stateful tracking of the connection, because (and that is my original problem) the ICMP reply packet never gets back to 10.0.4.18; it appears to be dropped by the Shrew box.
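If the suspicion is that connection tracking is losing the flow, one way to confirm it (a sketch, assuming the conntrack-tools package is installed on the Shrew box and you have root) is to watch whether the ICMP flow ever gets a conntrack entry and whether that entry leaves the unreplied state:

```shell
# Watch conntrack events live while running the ping from 10.0.4.18
# (requires conntrack-tools and root privileges).
conntrack -E -p icmp

# Or dump the current table and look for the NATted ICMP entry;
# an entry stuck at [UNREPLIED] means the returning echo reply is
# never being matched back to the original flow.
conntrack -L -p icmp | grep 10.0.4.18
```

If no entry appears at all, the request is not traversing the conntrack hooks the way a normally routed packet would, which would fit the theory that Shrew's traffic injection bypasses part of the netfilter path.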
Note that it's also possible that I'm not using the correct iptables rule (I based it on what I did on an OpenSwan machine, where the same thing worked fine). What I've tried:

    iptables -t nat -A POSTROUTING -d 192.168.1.0/24 -j MASQUERADE
    iptables -t nat -A POSTROUTING -s 10.0.4.0/24 -d 192.168.1.0/24 -j MASQUERADE
    iptables -t nat -A POSTROUTING -o tap0 -d 192.168.1.0/24 -j MASQUERADE
    iptables -t nat -A POSTROUTING -o tap0 -j MASQUERADE

None of these work. If this is the problem, suggestions are of course welcome. Thanks!
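One way to narrow down where the reply dies (a sketch on my part, assuming the tap0 interface name from the rules above and a kernel with the standard iptables LOG target) is to log ICMP at each netfilter hook and watch the kernel log; the mangle table is used here because, unlike nat, it sees every packet rather than only the first packet of a connection:

```shell
# Log ICMP at the pre-routing, forwarding, and post-routing hooks.
# The prefixes are arbitrary labels; watch the results with `dmesg`
# or in syslog while pinging from 10.0.4.18.
iptables -t mangle -I PREROUTING 1 -p icmp -j LOG --log-prefix "mangle-pre: "
iptables -I FORWARD 1 -p icmp -j LOG --log-prefix "forward: "
iptables -t mangle -I POSTROUTING 1 -p icmp -j LOG --log-prefix "mangle-post: "

# Also worth checking: reverse-path filtering can silently drop replies
# arriving on a VPN virtual adapter the kernel doesn't expect.
sysctl net.ipv4.conf.all.rp_filter
```

If the reply shows up at PREROUTING but never reaches FORWARD, the drop is happening between those hooks (routing decision, rp_filter, or conntrack invalid-state handling), which would be consistent with the stateful-tracking theory.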
Ok, it does seem that the tunnel is working and that it is the NAT/SPI that is not working. The response packet from the remote LAN does pop out of the tunnel, addressed to the Shrew client host. At this point the NAT should be undone and the response packet sent on its way to 10.0.4.18.
Unfortunately, we're reaching the end of my usefulness. I've never played with iptables and NAT, so from here I'm only guessing at how to debug this.
I'm wondering if part of the problem is that the incoming packet is NATted to the Shrew virtual adapter IP. Maybe you could try using PREROUTING and have it NATted to the Shrew box's LAN IP instead of the virtual adapter IP.
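For what it's worth, a rough sketch of that idea (the 192.168.10.219 virtual-adapter address is taken from the earlier capture, and whether DNAT is the right target here is purely a guess on my part, not something I've tested):

```shell
# Guess: rewrite replies destined for the Shrew virtual adapter IP
# (192.168.10.219, per the capture) to the box's LAN IP (10.0.4.100)
# before the routing decision, so they are handled like normal LAN
# traffic rather than traffic for the virtual adapter.
iptables -t nat -A PREROUTING -d 192.168.10.219 -j DNAT --to-destination 10.0.4.100
```

If that changes where the reply ends up (even if it still doesn't reach 10.0.4.18), it would at least confirm that the virtual-adapter addressing is the part that breaks the reverse NAT.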
_______________________________________________
vpn-help mailing list
[email protected]
http://lists.shrew.net/mailman/listinfo/vpn-help
