[ovirt-users] First oVirt engine deploy: missing gateway on hosts

2017-09-02 Thread Mauro Tridici
Hi all,

I've just started my first oVirt Engine deployment using a dedicated (and 
separate) virtual machine.
I'm trying to create and manage a test Gluster cluster using 3 "virtual" hosts 
(hostnames glu01, glu02, glu03).
Two different networks have been defined on the hosts (192.168.213.0/24 for 
the management network and 192.168.152.0/24 for the gluster network).
The oVirt Engine deployment completed without any problems, and the hosts were 
added easily using the ovirtmgmt network (bridgeless management network) and 
ovirtgluster (bridgeless gluster network).

Everything seems to be OK with this first deployment, but I just noticed that 
the default gateway is missing on the target hosts:

[root@glu01 ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
link-local      0.0.0.0         255.255.0.0     U     1002   0        0 ens33
link-local      0.0.0.0         255.255.0.0     U     1003   0        0 ens34
192.168.152.0   0.0.0.0         255.255.255.0   U     0      0        0 ens34
192.168.213.0   0.0.0.0         255.255.255.0   U     0      0        0 ens33
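The missing entry can be spotted programmatically as well. A minimal sketch (my own helper, not oVirt tooling; it assumes `route -n`-style whitespace-separated output with the destination in the first column) that flags a host without a default route:

```python
# Sketch: detect a missing default route in `route -n`-style output.
# The SAMPLE string mirrors the table from glu01 above (header included).

SAMPLE = """\
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
link-local      0.0.0.0         255.255.0.0     U     1002   0        0 ens33
192.168.152.0   0.0.0.0         255.255.255.0   U     0      0        0 ens34
192.168.213.0   0.0.0.0         255.255.255.0   U     0      0        0 ens33
"""

def has_default_route(route_output: str) -> bool:
    """Return True if any row is the default route (0.0.0.0 or 'default')."""
    for line in route_output.splitlines()[1:]:  # skip the header row
        dest = line.split()[0]
        if dest in ("0.0.0.0", "default"):
            return True
    return False

print(has_default_route(SAMPLE))  # → False: no default route on this host
```

Running this against the tables from glu01, glu02, and glu03 would report `False` for all three, matching the symptom described below.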

[root@glu02 ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
link-local      0.0.0.0         255.255.0.0     U     1002   0        0 ens33
link-local      0.0.0.0         255.255.0.0     U     1003   0        0 ens34
192.168.152.0   0.0.0.0         255.255.255.0   U     0      0        0 ens34
192.168.213.0   0.0.0.0         255.255.255.0   U     0      0        0 ens33

[root@glu03 ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
link-local      0.0.0.0         255.255.0.0     U     1002   0        0 ens33
link-local      0.0.0.0         255.255.0.0     U     1003   0        0 ens34
192.168.152.0   0.0.0.0         255.255.255.0   U     0      0        0 ens34
192.168.213.0   0.0.0.0         255.255.255.0   U     0      0        0 ens33

Because of this problem, I cannot reach the internet through the ens33 NIC 
(management network).
I tried adding the gateway to the ifcfg-ens33 configuration file, but the 
gateway disappears after a host reboot.

[root@glu01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
# Generated by VDSM version 4.19.28-1.el7.centos
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.213.151
NETMASK=255.255.255.0
BOOTPROTO=none
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
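The key line here is DEFROUTE=no: since the file is generated by VDSM, manual edits are overwritten. A minimal sketch (my own parser, not part of VDSM) that reads an ifcfg-style key=value file and reports the default-route flag:

```python
# Sketch: parse an ifcfg-style KEY=VALUE file and inspect the DEFROUTE flag.
# The IFCFG string reproduces the VDSM-generated ifcfg-ens33 shown above.

IFCFG = """\
# Generated by VDSM version 4.19.28-1.el7.centos
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.213.151
NETMASK=255.255.255.0
BOOTPROTO=none
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
"""

def parse_ifcfg(text: str) -> dict:
    """Return the KEY=VALUE pairs, skipping comments and blank lines."""
    pairs = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        pairs[key] = value
    return pairs

cfg = parse_ifcfg(IFCFG)
print(cfg["DEFROUTE"])  # → no: with this flag, no default gateway is installed
```

With DEFROUTE=no on every interface, the kernel routing table ends up with no default entry, which matches the `route` output shown earlier.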

The oVirt Engine network configuration is the following one:

[host glu01]
ens33 -> ovirtmgmt (192.168.213.151, 255.255.255.0, 192.168.213.2)
ens34 -> ovirtgluster (192.168.152.151, 255.255.255.0)

[host glu02]
ens33 -> ovirtmgmt (192.168.213.152, 255.255.255.0, 192.168.213.2)
ens34 -> ovirtgluster (192.168.152.152, 255.255.255.0)

[host glu03]
ens33 -> ovirtmgmt (192.168.213.153, 255.255.255.0, 192.168.213.2)
ens34 -> ovirtgluster (192.168.152.153, 255.255.255.0)
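As a sanity check on the addressing above (values copied from the listing; pure stdlib `ipaddress` arithmetic), the gateway 192.168.213.2 does belong to the management subnet on every host, so the addressing itself looks consistent:

```python
import ipaddress

# Management addressing copied from the oVirt Engine configuration above.
MGMT_NET = ipaddress.ip_network("192.168.213.0/24")
GATEWAY = ipaddress.ip_address("192.168.213.2")
HOSTS = {
    "glu01": ipaddress.ip_address("192.168.213.151"),
    "glu02": ipaddress.ip_address("192.168.213.152"),
    "glu03": ipaddress.ip_address("192.168.213.153"),
}

assert GATEWAY in MGMT_NET  # the gateway lies inside ovirtmgmt
for name, addr in HOSTS.items():
    assert addr in MGMT_NET, f"{name} is outside the management network"
print("gateway and all hosts are inside", MGMT_NET)
```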

Do you know the right way to set the gateway IP on all hosts?

Just two last questions: I was able to import an existing Gluster cluster using 
oVirt Engine, but I'm not able to create a new volume because:

- I can't select a distributed-disperse volume configuration in the oVirt 
Engine volume-creation window
- I can't see the bricks to be used to create a new volume (although I can 
import an existing volume without problems).

Is there something that I can do to resolve the issues and complete my first 
experience with oVirt?

Thank you very much,
Mauro T.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vm grouping

2017-09-02 Thread Barak Korren
On 1 September 2017 at 19:01, david caughey wrote:
>
> Any help or hints would be appreciated,

oVirt does not have a built-in grouping mechanism.

You can use Ansible [1] to automate the creation of several VMs
together. Another possible solution is the Vagrant oVirt provider [2].

You need to watch out for any resources that are shared between the
different groups of VMs; you will probably share a network at the very
least, so you will need to be careful about the allocation of IPs and
host names.

You can try using the Neutron integration [3] to provide overlay
networks that are not shared. There is also built-in OVS support, but
AFAIK it's not yet mature enough.
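That IP-allocation concern can be sketched in a few lines (a hypothetical helper, not part of any oVirt tooling): carve a non-overlapping address pool for each VM group out of the shared subnet, so groups can never collide:

```python
import ipaddress

def allocate_pools(shared_net: str, group_names, pool_prefix: int) -> dict:
    """Split a shared subnet into one non-overlapping pool per VM group."""
    subnets = ipaddress.ip_network(shared_net).subnets(new_prefix=pool_prefix)
    # Hand each group the next free sub-pool, in order.
    return {name: next(subnets) for name in group_names}

# Example: three VM groups sharing 192.168.200.0/24, each getting a /27.
pools = allocate_pools("192.168.200.0/24", ["web", "db", "ci"], 27)
for name, pool in pools.items():
    print(name, pool)  # web 192.168.200.0/27, db 192.168.200.32/27, ...
```

Anything like this (or a DHCP server with per-group ranges) avoids the address clashes mentioned above when groups share one network.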

As a side note, since we were not willing to have any resources shared
between different oVirt instances in the oVirt CI system, we ended up
creating Lago [4], which keeps everything confined to a single
physical host.

[1]: 
https://www.ovirt.org/develop/release-management/features/infra/ansible_modules/
[2]: https://www.ovirt.org/blog/2017/02/using-oVirt-vagrant/
[3]: 
https://www.ovirt.org/documentation/how-to/networking/overlay-networks-with-neutron-integration/
[4]: http://lago.readthedocs.io/en/latest/

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted