Your policy routing looks good.
The problem must be somewhere else, maybe where you do the NAT?
Go into the network namespace on the network node where the neutron
router does the NAT. If you tcpdump there, what do you see?
To be 100% sure about the policy routing, just go to the network node
where you do the NAT and run:
ip netns exec qrouter-<uuid> wget -O /dev/null http://10.0.16.11/
where <uuid> is the UUID of the neutron router where you are NATting.
I guess this will work.
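A minimal sketch of that namespace check, assuming the usual qg- naming for the router's external gateway port (both qrouter-<uuid> and qg-xxxxxxxx are placeholders you must fill in from your own deployment):

```shell
# List the interfaces inside the router's namespace; the external/gateway
# port is normally named qg-<something>:
ip netns exec qrouter-<uuid> ip addr

# Watch HTTP traffic crossing the gateway port while the remote client
# retries; replace qg-xxxxxxxx with the name found above:
ip netns exec qrouter-<uuid> tcpdump -ni qg-xxxxxxxx 'tcp port 80'

# Dump the NAT table the router applies, to confirm the DNAT/SNAT rules
# for the second floating IP are really there:
ip netns exec qrouter-<uuid> iptables -t nat -S
```

If the replies show up on the qr- (internal) side but never leave the qg- side, the problem is in the router's NAT or its own routing, not in your instance's ruleset.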
Oh, did you double-check the security groups?
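For example, with the neutron CLI (the instance and port IDs below are placeholders):

```shell
# Find the port that carries 10.0.16.11 on the instance:
neutron port-list --device-id <instance-uuid>

# See which security groups are attached to that port:
neutron port-show <port-uuid> -c security_groups

# Then list the rules, to confirm TCP port 80 ingress is allowed on the
# groups used by the second vNIC, not just on those of the first:
neutron security-group-rule-list
```

SSH and ICMP working while HTTP fails is exactly the pattern you get when one group allows 22 and ICMP but the group on the second port lacks a rule for 80.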
2016-12-01 15:18 GMT+01:00 Paul Browne <pf...@cam.ac.uk>:
> Hello Saverio,
> Many thanks for the reply, I'll answer your queries below;
> On 01/12/16 12:49, Saverio Proto wrote:
>> while the problem is in place, you should share the output of
>> ip rule show
>> ip route show table 1
>> It could be just a problem in your ruleset
> Of course; here are those outputs:
> root@test1:~# ip rule show
> 0: from all lookup local
> 32764: from all to 10.0.16.11 lookup rt2
> 32765: from 10.0.16.11 lookup rt2
> 32766: from all lookup main
> 32767: from all lookup default
> root@test1:~# ip route show table 1
> default via 10.0.16.1 dev eth1
> 10.0.16.0/24 dev eth1 scope link src 10.0.16.11
>> And which one is your webserver? Can you tcpdump to make sure reply
>> packets go out on the NIC with src address 10.0.16.11?
> The instance has its two vNICs with source addresses 10.0.0.11 & 10.0.16.11,
> and the web-server is listening on both.
> The HTTP packets do seem to be getting out with 10.0.16.11 as source, but
> are stopped somewhere upstream.
> I've attached two pcaps showing HTTP reply packets, one from 10.0.0.11
> (first vNIC; HTTP request and reply works to a remote client) and one from
> 10.0.16.11 (second vNIC; HTTP request is sent, reply not received by remote
> client). In the latter case, the server starts to make retransmissions to
> the remote client.
> Kind regards,
> Paul Browne
>> 2016-12-01 13:08 GMT+01:00 Paul Browne <pf...@cam.ac.uk>:
>>> Hello Operators,
>>> For reasons not yet amenable to persuasion otherwise, a customer of our
>>> ML2+OVS classic-implemented OpenStack would like to map two floating IPs,
>>> pulled from two separate external network floating IP pools, to two
>>> different vNICs on his instances.
>>> The floating IP pools correspond to one pool routable from the external
>>> Internet and another, RFC1918 pool routable from internal University
>>> networks.
>>> The tenant private networks are arranged as two RFC1918 VXLANs, each with
>>> a router to one of the two external networks.
>>> 10.0.0.0/24 -> route to -> 184.108.40.206/23
>>> 10.0.16.0/24 -> route to -> 172.24.46.0/23
>>> Mapping two floating IPs to instances isn't possible in Horizon, but is
>>> possible from the command line. This doesn't immediately work, however, as
>>> return traffic from the instance needs to be sent back through the correct
>>> router gateway interface and not the instance default gateway.
>>> I'd initially thought this would be possible by placing a second routing
>>> table on the instances to handle the return traffic;
>>> debian@test1:/etc/iproute2$ less rt_tables
>>> # reserved values
>>> 255 local
>>> 254 main
>>> 253 default
>>> 0 unspec
>>> # local
>>> #1 inr.ruhep
>>> 1 rt2
>>> debian@test1:/etc/network$ less interfaces
>>> # The loopback network interface
>>> auto lo
>>> iface lo inet loopback
>>> # The first vNIC, eth0
>>> auto eth0
>>> iface eth0 inet dhcp
>>> # The second vNIC, eth1
>>> auto eth1
>>> iface eth1 inet static
>>> address 10.0.16.11
>>> netmask 255.255.255.0
>>> post-up ip route add 10.0.16.0/24 dev eth1 src 10.0.16.11 table rt2
>>> post-up ip route add default via 10.0.16.1 dev eth1 table rt2
>>> post-up ip rule add from 10.0.16.11/32 table rt2
>>> post-up ip rule add to 10.0.16.11/32 table rt2
>>> And this works well for SSH and ICMP, but curiously not for HTTP traffic.
>>> Requests to a web-server listening on all vNICs are sent but replies not
>>> received when the requests are sent to the second mapped floating IP
>>> (requests and replies work as expected when sent to the first mapped
>>> floating IP). The requests are logged in both cases, however, so traffic
>>> is making it to the instance in both cases.
>>> I'd say this is clearly an unusual (and possibly un-natural) arrangement,
>>> but I was wondering whether anyone else on Operators had come across a
>>> similar situation in trying to map floating IPs from two different
>>> networks to an instance?
>>> Kind regards,
>>> Paul Browne
>>> Paul Browne
>>> Research Computing Platforms
>>> University Information Services
>>> Roger Needham Building
>>> JJ Thompson Avenue
>>> University of Cambridge
>>> United Kingdom
>>> E-Mail: pf...@cam.ac.uk
>>> Tel: 0044-1223-46548
>>> OpenStack-operators mailing list