Interestingly enough, I literally just went through this same thing with a
slight variation.

A note on the below: I am not sure this would be considered best practice
or suitable for long-term support, but I made do with what I had.

I had 10Gb cards for my storage network but no 10Gb switch, so I
direct-connected the nodes with some fun routing and /etc/hosts settings. I
also didn't want my storage network on a routed network (there are firewalls
in the way of the VLANs), and I wanted it separate from my ovirtmgmt
network - and, as I said, I had no 10Gb switches. Here is what you need at a
bare minimum; adapt it as needed.

1 dedicated NIC on each node for ovirtmgmt. Ex: eth0

1 dedicated NIC to direct-connect node 1 and node 2 - eth1 on node1
1 dedicated NIC to direct-connect node 1 and node 3 - eth2 on node1

1 dedicated NIC to direct-connect node 2 and node 1 - eth1 on node2
1 dedicated NIC to direct-connect node 2 and node 3 - eth2 on node2

1 dedicated NIC to direct-connect node 3 and node 1 - eth1 on node3
1 dedicated NIC to direct-connect node 3 and node 2 - eth2 on node3
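
To make the examples below concrete, assume each node gets one storage IP
in a shared subnet (these addresses are my invention for illustration, not
from the original setup):

node1 - 10.10.10.1/24
node2 - 10.10.10.2/24
node3 - 10.10.10.3/24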

You'll need custom routes too:

Route to node 3 from node 1 via eth2
Route to node 3 from node 2 via eth2
Route to node 2 from node 3 via eth2
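
With the hypothetical addressing above, if the shared subnet sits on eth1,
the kernel's connected route already covers the eth1 peer, so each node only
needs an explicit host route for the peer hanging off eth2. A minimal sketch
of those three routes (plain iproute2 syntax; on CentOS 7 you would persist
them in route-eth2 files to survive reboots, and you may need to relax
rp_filter via sysctl if the return path ends up asymmetric):

# on node1 and node2: pin node3's storage IP to the eth2 link
ip route add 10.10.10.3/32 dev eth2

# on node3: pin node2's storage IP to the eth2 link
ip route add 10.10.10.2/32 dev eth2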

Finally, add entries in your /etc/hosts that match the routes above.
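
Continuing the made-up example, every node would carry the same three lines
(the *-storage hostnames are hypothetical, not from my setup):

10.10.10.1   node1-storage
10.10.10.2   node2-storage
10.10.10.3   node3-storage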

Then, advisably, a dedicated NIC per box for the VM network, but you can
leverage ovirtmgmt if you are just proofing this out.

At this point, if you can reach all of your nodes via these direct-connect
IPs, you set up gluster as you normally would, referencing your /etc/hosts
entries when you call "gluster volume create".
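
As a sketch using the hypothetical names above (the brick paths are made up
too), run from node 1 once the peers are reachable:

gluster peer probe node2-storage
gluster peer probe node3-storage
gluster volume create engine replica 3 \
    node1-storage:/gluster/engine/brick \
    node2-storage:/gluster/engine/brick \
    node3-storage:/gluster/engine/brick
gluster volume start engine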

In my setup, as I said, I had 2x 2-port PCIe 10Gb cards per server, so I
set up LACP as well, as you can see below.

This is what my Frankenstein POC looked like: http://i.imgur.com/iURL9jv.png
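
If you want to reproduce the LACP piece, each direct link becomes a
two-port 802.3ad bond between the ports that face the same peer node. A
rough CentOS 7 nmcli sketch for one such bond (the interface names and the
address are assumptions on my part, not from my actual boxes):

nmcli con add type bond ifname bond1 con-name bond1 bond.options "mode=802.3ad,miimon=100"
nmcli con add type ethernet ifname enp3s0f0 master bond1
nmcli con add type ethernet ifname enp3s0f1 master bond1
nmcli con mod bond1 ipv4.method manual ipv4.addresses 10.10.10.1/24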


You can optionally set up this network in oVirt as well (and add the
NICs to each host), but don't configure it as a VM network. Then, with some
other minor tweaks, you can also use these direct connects as the migration
network rather than ovirtmgmt or the VM network.

On Tue, Sep 12, 2017 at 9:12 AM, Tailor, Bharat <
bha...@synergysystemsindia.com> wrote:

> Hi,
>
> I am trying to deploy a 3-host hyperconverged setup.
> I am using CentOS and have installed KVM on all hosts.
>
> Host-1
> Hostname - test1.localdomain
>  eth0 - 192.168.100.15/24
> GW - 192.168.100.1
>
> Host-2
> Hostname - test2.localdomain
> eth0 - 192.168.100.16/24
> GW - 192.168.100.1
>
> Host-3
> Hostname - test3.localdomain
> eth0 - 192.168.100.17/24
> GW - 192.168.100.1
>
> I have created two gluster volumes, "engine" & "data", with replica 3.
> I have added FQDN entries in /etc/hosts on all hosts for name resolution.
>
> I want to deploy the oVirt self-hosted engine OVA to manage all the hosts and
> production VMs, and my ovirt-engine VM should have HA enabled.
>
> I found multiple docs on the internet for deploying the self-hosted engine OVA,
> but I don't know what kind of network configuration I have to do on the CentOS
> network cards & KVM. The KVM docs suggest that I have to create a bridge network
> for the pNIC-to-vNIC bridge. But if I configure a bridge br0 over eth0, I can't
> see eth0 at the NIC choice while deploying the ovirt-engine setup.
>
> Kindly help me with the correct configuration of the CentOS hosts, KVM &
> the ovirt-engine VM for an HA-enabled DC.
> Regards
> Bharat Kumar
>
> G15 - Vinayak Nagar Complex, Opp. Maa Satiya, Ayad
> Udaipur (Raj.)
> 313001
> Mob: +91-9950-9960-25
>
