Could you please attach the output of:
"vdsClient -s 0 getVdsCaps"
(Or without the -s, whichever works)
And:
"ip a"

On both hosts?
You seem to have made changes since writing the documentation at the link you provided, 
such as separating the management and storage traffic via VLANs on eth0. Any other changes?


Assaf Muller, Cloud Networking Engineer 
Red Hat 

----- Original Message -----
From: "Andrew Lau" <[email protected]>
To: "users" <[email protected]>
Sent: Sunday, December 1, 2013 4:55:32 AM
Subject: [Users] Keepalived on oVirt Hosts has engine networking issues

Hi, 

I have a scenario where gluster and oVirt hosts run on the same boxes. To keep the 
gluster volumes highly available in case a box drops, I'm using keepalived across the 
boxes and using its floating IP as the address for the storage domain. I documented my 
setup here in case anyone needs a little more info: 
http://www.andrewklau.com/returning-to-glusterized-ovirt-3-3/ 
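For context, each box runs a vrrp_instance roughly like this (the virtual_router_id and 
priority values are just illustrative placeholders; 172.16.1.5 on eth0.3 is the actual 
floating IP and storage VLAN interface from my setup):

```
vrrp_instance gluster_vip {
    state BACKUP            # all nodes BACKUP; highest priority wins the VIP
    interface eth0.3        # storage VLAN interface
    virtual_router_id 51    # placeholder: any id unique on this segment
    priority 100            # placeholder: differs per host
    advert_int 1
    virtual_ipaddress {
        172.16.1.5/24       # the floating IP the storage domain points at
    }
}
```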

However, the engine seems to be picking up the keepalived floating IP as the interface 
address and messing with the ovirtmgmt migration network. Migrations are failing because 
the floating IP gets assigned to the ovirtmgmt bridge in the engine, but it isn't 
actually present on most hosts (only one), so vdsm seems to report the destination as 
the same as the source. 

I've since created a new VLAN interface just for storage to avoid the ovirtmgmt 
conflict, but the engine will still pick up the wrong IP on the storage VLAN because of 
keepalived. This means I can't use the save network feature within the engine, as it 
saves the floating IP rather than the one already there. Is this a bug, or just the way 
it's designed? 

eth0.2 -> ovirtmgmt (172.16.0.11) -> management and migration network 
         -> engine sees, sets and saves 172.16.0.11 
eth0.3 -> storagenetwork (172.16.1.11) -> gluster network 
         -> engine sees, sets and saves 172.16.1.5 (my floating IP) 

I hope this makes sense. 

P.S. Can anyone also confirm: does gluster support multipathing by default? If I'm 
using this keepalived method, am I bottlenecking myself to one host? 
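For reference, the alternative I've seen mentioned for mount-time failover is the 
gluster backup volfile server mount option, set in the storage domain's mount options 
field or on a manual mount (option name varies by gluster version, and 172.16.1.12 here 
is just a stand-in for a second host's storage IP, not from my setup):

```
backupvolfile-server=172.16.1.12
```

My understanding (unverified) is that the mount server only supplies the volfile at 
mount time; once mounted, the native fuse client talks to all bricks directly, so the 
VIP wouldn't be a per-I/O bottleneck for a replicated volume. 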

Thanks, 
Andrew 

_______________________________________________
Users mailing list
[email protected]
http://lists.ovirt.org/mailman/listinfo/users
