On 12/01/2013 11:30 AM, Andrew Lau wrote:
I put the management and storage on separate VLANs to try to avoid the
floating IP address issue temporarily. I also bonded the two NICs, but I
don't think that should matter.
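
For reference, the bond underneath is just a plain ifcfg file along these
lines (the bonding mode/options here are only illustrative, not necessarily
what I'm running):

# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
# bonding mode below is just an example
BONDING_OPTS="mode=active-backup miimon=100"

The VLAN interfaces (bond0.2 for management, bond0.3 for storage) then sit
on top of bond0, as the vdsClient output below shows.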

The other server got brought down the other day for some maintenance, I
hope to get it back up in a few days. But I can tell you a few things I
noticed:

ip a - it'll list the floating IP on both servers even if it's only active
on one (a note on checking which node is actually MASTER follows the output
below).

I've got about 10 other networks so I've snipped out quite a bit.

# ip a
<snip>
130: bond0.2@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
     link/ether 00:10:18:2e:6a:cb brd ff:ff:ff:ff:ff:ff
     inet 172.16.0.11/24 brd 172.16.0.255 scope global bond0.2
     inet6 fe80::210:18ff:fe2e:6acb/64 scope link
        valid_lft forever preferred_lft forever
131: bond0.3@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
     link/ether 00:10:18:2e:6a:cb brd ff:ff:ff:ff:ff:ff
     inet 172.16.1.11/24 brd 172.16.1.255 scope global bond0.3
     inet 172.16.1.5/32 scope global bond0.3
     inet6 fe80::210:18ff:fe2e:6acb/64 scope link
        valid_lft forever preferred_lft forever
</snip>
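
Since ip a alone doesn't make it obvious which node is the current VRRP
MASTER, one quick way to check (assuming keepalived is logging to syslog as
it does by default on EL-family hosts) is something like:

# grep -i keepalived_vrrp /var/log/messages | tail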


# vdsClient -s 0 getVdsCaps
<snip>
            'storage_network': {'addr': '172.16.1.5',
                                'bridged': False,
                                'gateway': '172.16.1.1',
                                'iface': 'bond0.3',
                                'interface': 'bond0.3',
                                'ipv6addrs': ['fe80::210:18ff:fe2e:6acb/64'],
                                'ipv6gateway': '::',
                                'mtu': '1500',
                                'netmask': '255.255.255.255',
                                'qosInbound': '',
                                'qosOutbound': ''},
<snip>
vlans = {'bond0.2': {'addr': '172.16.0.11',
                     'cfg': {'BOOTPROTO': 'none',
                             'DEFROUTE': 'yes',
                             'DEVICE': 'bond0.2',
                             'GATEWAY': '172.16.0.1',
                             'IPADDR': '172.16.0.11',
                             'NETMASK': '255.255.255.0',
                             'NM_CONTROLLED': 'no',
                             'ONBOOT': 'yes',
                             'VLAN': 'yes'},
                     'iface': 'bond0',
                     'ipv6addrs': ['fe80::210:18ff:fe2e:6acb/64'],
                     'mtu': '1500',
                     'netmask': '255.255.255.0',
                     'vlanid': 2},
         'bond0.3': {'addr': '172.16.1.5',
                     'cfg': {'BOOTPROTO': 'none',
                             'DEFROUTE': 'no',
                             'DEVICE': 'bond0.3',
                             'IPADDR': '172.16.1.11',
                             'NETMASK': '255.255.255.0',
                             'NM_CONTROLLED': 'no',
                             'ONBOOT': 'yes',
                             'VLAN': 'yes'},
                     'iface': 'bond0',
                     'ipv6addrs': ['fe80::210:18ff:fe2e:6acb/64'],
                     'mtu': '1500',
                     'netmask': '255.255.255.255',
                     'vlanid': 3},

I hope that's enough info; if not, I'll post the full config from both
hosts when I can bring the other one back up.

Cheers,
Andrew.


On Sun, Dec 1, 2013 at 7:15 PM, Assaf Muller <amul...@redhat.com> wrote:

    Could you please attach the output of:
    "vdsClient -s 0 getVdsCaps"
    (Or without the -s, whichever works)
    And:
    "ip a"

    On both hosts?
    You seem to have made changes since the documentation on the link
    you provided, like separating the management and storage via VLANs
    on eth0. Any other changes?


    Assaf Muller, Cloud Networking Engineer
    Red Hat

    ----- Original Message -----
    From: "Andrew Lau" <and...@andrewklau.com
    <mailto:and...@andrewklau.com>>
    To: "users" <users@ovirt.org <mailto:users@ovirt.org>>
    Sent: Sunday, December 1, 2013 4:55:32 AM
    Subject: [Users] Keepalived on oVirt Hosts has engine networking issues

    Hi,

    I have a scenario where the gluster and oVirt hosts are on the same
    box. To keep the gluster volumes highly available in case a box drops,
    I'm running keepalived across the boxes and using its floating IP as
    the address for the storage domain. I documented my setup here in case
    anyone needs a little more info:
    http://www.andrewklau.com/returning-to-glusterized-ovirt-3-3/
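
    For context, the keepalived side is essentially a single VRRP instance
    along these lines (instance name, router id, priorities and interface
    here are illustrative; the full config is in the blog post above):

    vrrp_instance gluster_storage {
        state MASTER                 # BACKUP on the second host
        interface eth0.3             # storage-facing interface
        virtual_router_id 51         # illustrative value
        priority 100                 # lower on the BACKUP host
        advert_int 1
        virtual_ipaddress {
            172.16.1.5/32            # floating IP used as the storage domain address
        }
    }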

    However, the engine seems to be picking up the floating IP assigned to
    keepalived as the interface address, which interferes with the
    ovirtmgmt migration network. Migrations fail because the engine records
    the floating IP on the ovirtmgmt bridge, but that IP is only actually
    present on one host at a time, so vdsm reports destination same as
    source.

    I've since created a new VLAN interface just for storage to avoid the
    ovirtmgmt conflict, but the engine still picks up the wrong IP on the
    storage VLAN because of keepalived. This means I can't use the save
    network feature within the engine, as it saves the floating IP rather
    than the address that's already configured there. Is this a bug, or
    just the way it's designed?

    eth0.2 -> ovirtmgmt (172.16.0.11) -> management and migration
    network -> engine sees, sets and saves 172.16.0.11
    eth0.3 -> storagenetwork (172.16.1.11) -> gluster network -> engine
    sees, sets and saves 172.16.1.5 (my floating IP)

    I hope this makes sense.

    P.S. Can anyone also confirm whether gluster supports multipathing by
    default? If I'm using this keepalived method, am I bottlenecking myself
    to one host?

    Thanks,
    Andrew



Andrew - was this resolved, or are you still looking for more insight/assistance?

thanks,
   Itamar
