On 4/8/15 2:42 AM, Martin Pieuchot wrote:
> On 07/04/15(Tue) 15:42, David Newman wrote:
>> On 3/30/15 12:54 PM, Martin Pieuchot wrote:
>>> [...] 
>> Not OK for the carp interfaces. On the production machines I'm
>> replicating here as VMs, it looks like the carp interfaces are bound to
>> themselves -- note that the last column is "carp21":
>>
>> # netstat -nr -f inet | grep 12.20.174.98
>> 12.20.174.98       12.20.174.98       UH         0    14853     -     4 carp21
> 
> Which version of OpenBSD are you running here?

5.4

> 
>> But on the similarly configured VM, the carp interface (carp221 in this
>> example) is bound to the lo0 interface:
>>
>> # netstat -nr -f inet | grep 12.220.174.98
>> 12.220.174.98      00:00:5e:00:01:dd  UHLl       0        0     -     1 lo0
> 
> This is the behavior since 5.6.

Ah, OK. Did not see this in the release notes. The production box is
still on 5.4, so that could explain the difference.

> 
>>> Now if you configure an IP address of the same subnet on the parent
>>> interface, vic1 in your case, this interface will hold the cloning
>>> route ('C' in your output) and will be used to reach any other address
>>> of the subnet.  If you don't do that, then the carp interfaces should
>>> hold the cloning route and their address will be used.
>>
>> In both cases above, the parent and carp interfaces are configured with
>> IP addresses on the same subnet.
>>
>> In the case of the physical (production) machines, other machines on
>> that subnet can ping the carp interface (the virtual IP address shared
>> by two machines with carp interfaces).
>>
>> In the case of the VMs, a machine on that subnet cannot ping the carp
>> interface. I think this is because it's bound to lo0, but I don't know why.
> 
> Can you tcpdump your traffic on the CARP node and see what happens to the
> ICMP packets?  Do you see requests on the physical interface?  On the
> carp one?  Do you see replies?

Now I do, on both CARP and physical interfaces, but for reasons
completely unrelated to OpenBSD.

The underlying VM infrastructure is VMware vSphere 5.5. During
troubleshooting I tried changing the NIC type from "Flexible" to
"E1000E" after seeing a report that 'vic' type interfaces don't work:

http://is.gd/qoG3Sm

And in a very Linux-like way, vSphere changed all the NIC assignments --
vic0 became em3, vic1 became em0, and vic3 became em2. Very annoying.

I only noticed this when running tcpdump and then comparing MAC
addresses with the VM settings. With the NICs correctly assigned again,
ping and CARP work fine, as they should.
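
If anyone else hits this, a quick way to catch the reshuffling is to
compare the lladdr OpenBSD reports for each NIC against the MAC
addresses shown in the VM's hardware settings. A rough sketch of what I
mean (em0 is just an example interface name from my setup):

# ifconfig em0 | grep lladdr

Or watch the link-level headers while pinging from another host:

# tcpdump -n -e -i em0 icmp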

Sorry for the waste of bandwidth.

For anyone else prototyping CARP on VMware:

1. Use Intel NIC drivers, either E1000 or E1000E (I used the latter),
not the vic drivers.

2. The above URL says a virtual switch (or distributed vSwitch in my
case) needs three settings set to "accept" (under security settings for
the vSwitch or distributed port group; see the esxcli sketch after this
list):

- promiscuous mode
- MAC address changes
- forged transmits

In my experience, CARP and pfsync require promiscuous mode and forged
transmits but not MAC address changes (makes sense, since CARP nodes
share the same virtual MAC and IP address, and thus the MAC address
should not change).

3. The above URL recommends setting Net.ReversePathFwdCheckPromisc to 1
on the ESXi host, then disabling and re-enabling promiscuous mode on the
vSwitch. In my experience this step is not needed, and CARP came up and
transitioned as expected without it.
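
If you'd rather script steps 2 and 3 than click through the GUI, the
ESXi shell should be able to do it for a standard vSwitch. A rough,
untested sketch only -- "vSwitch0" below is a placeholder name, and a
distributed port group like mine is edited through vCenter instead:

# esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 \
    --allow-promiscuous=true --allow-forged-transmits=true

# esxcli system settings advanced set -o /Net/ReversePathFwdCheckPromisc -i 1

Toggling promiscuous mode off and back on (the second half of step 3)
works the same way:

# esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 --allow-promiscuous=false
# esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 --allow-promiscuous=true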

Again, sorry for the false alarm, but I hope at least these tips will
help anyone else doing this on VMware.

dn

> 
>> Here again are the hostname files for the physical and carp interfaces
>> on the VM.
>>
>> # cat hostname.vic1
>> inet 12.220.174.99 255.255.255.224 12.220.174.127 up
>>
>> # backslash added for clarity -- it's 1 line in original
>> # cat hostname.carp221
>> inet 12.220.174.98 255.255.255.224 12.220.174.127 vhid 221 \
>>  carpdev vic1 advskew 1 pass ******
>>
>>
>>> Does that answer your question?
>>
>> In terms of how CARP works, yes. In terms of why it's bound to lo0 here,
>> no, sorry, I'm still missing something.
> 
> Routes to local addresses are bound to lo0 because on this particular
> machine you don't need to send the packet to the wire when you want
> to reach your own address.  Loopback interfaces are just that, a pipe
> that connects the output of your stack to the input.
> 
> But it should not matter in your case.
