Re: [ovirt-users] Deploying self-hosted engine - Waiting for VDSM host to become operational + received packet with own address as source

2018-01-08 Thread Sam McLeod
Hi Simone,

Thanks for your response.

> Could you please attach/share hosted-engine-setup, vdsm, supervdsm and 
> host-deploy (on the engine VM) logs?

I was on the IRC channel yesterday (username: protoporpoise) and user 'zipur' 
suggested that, although the self-hosted engine documentation doesn't mention 
it, oVirt may not support having its management network set up (bridged) on a 
bonded interface.

It does indeed seem like the documentation is a bit fragmented between the 
three major install options (self-hosted engine vs engine vs the appliance-like 
ISO - oVirt Live, I think it was called).

In addition, we usually remove NetworkManager as part of our normal / existing 
CentOS 7 SOE, as we believe it's not yet ready for server deployments (maybe 
it's OK in the latest Fedora); however, it appears oVirt depends on it.

I noticed it generates a big mess of ifcfg files in sysconfig/network-scripts/, 
and changing configuration via nmtui, nmcli, or by editing the files directly 
often gives inconsistent results. nmtui also adds a lot of options to bonded 
interfaces without notifying you or letting you set or change them (for 
example, with LACP you can't set lacp_rate to fast, and it sets it to slow for 
some reason), so this wasn't helping the confusion.
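
For what it's worth, this is roughly how I'd pin the bond options explicitly so 
nothing silently falls back to lacp_rate=slow - just a sketch with illustrative 
interface/connection names, either directly in the ifcfg file or via nmcli if 
NetworkManager stays in charge of the bond:

  # /etc/sysconfig/network-scripts/ifcfg-bond1 (excerpt, names illustrative)
  DEVICE=bond1
  TYPE=Bond
  BONDING_MASTER=yes
  ONBOOT=yes
  BOOTPROTO=none
  BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast"

  # or, with NetworkManager managing the bond connection:
  nmcli connection modify bond1 bond.options "mode=802.3ad,miimon=100,lacp_rate=fast"
  nmcli connection up bond1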

Today I'm going to try installing oVirt hosted-engine on CentOS 7.4 without the 
network interfaces being bonded.
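
For reference, the plan is just the standard flow on a freshly installed host 
(a sketch from memory, after the oVirt 4.2 release repository has been added):

  yum install -y ovirt-hosted-engine-setup
  hosted-engine --deploy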

> Yes, currently ovirt-hosted-engine-cleanup is not able to revert to the 
> initial network configuration, but hosted-engine-setup is supposed to be able 
> to consume an existing management bridge.

That's definitely what I experienced yesterday, and I found it easier (although 
very time consuming) to completely reinstall the entire OS each time to get to 
a 'clean' state to test the install on.
I also had to blkdiscard / zero out the iSCSI LUN, otherwise the installer 
would either crash out or complain that the LUN couldn't be used.
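
For anyone hitting the same thing, the LUN wipe was along these lines - the 
device path below is a placeholder for the multipath device backing the LUN, 
and both commands are obviously destructive:

  # discard the whole device (fast where the array supports it)
  blkdiscard /dev/mapper/example-hosted-engine-lun

  # or zero the start of the device so old metadata isn't detected
  dd if=/dev/zero of=/dev/mapper/example-hosted-engine-lun bs=1M count=1024 oflag=direct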

Wish me luck trying the install without bonded interfaces today - I'll update 
this thread with my results, and if I'm still having problems I'll compile the 
logs and provide them (off-list). Is there anyone else I should send them to 
other than yourself?

If I find inconsistent documentation, I'll try to make notes as to where and 
then submit merge requests to the site when possible.

Thanks again, I'll be on the IRC channel all day.


> --
> Sam McLeod (protoporpoise on IRC)
> https://smcleod.net 
> https://twitter.com/s_mcleod 
> 
> Words are my own opinions and do not necessarily represent those of my 
> employer or partners.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Deploying self-hosted engine - Waiting for VDSM host to become operational + received packet with own address as source

2018-01-08 Thread Simone Tiraboschi
On Mon, Jan 8, 2018 at 3:23 AM, Sam McLeod  wrote:

> Hello,
>
> I'm trying to set up a host as a self-hosted engine as per
> https://www.ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_Engine/.
> The host is configured with two bonded network interfaces:
> The host is configured with two bonded network interfaces:
>
> bond0 = management network for hypervisors, set up as active/passive.
> bond1 = network that carries VLANs for various network segments for virtual
> machines to use, set up as an LACP bond to the upstream switches.
>
> On the host, both networks are operational and work as expected.
>
> When setting up the self-hosted engine, bond0 is selected as the network to
> bridge with, and a unique IP is given to the self-hosted engine VM.
>
> During the final stages of the self-hosted engine setup, the installer
> gets stuck on 'Waiting for the VDSM host to become operational'.
> While it repeats this every minute or so, the host logs the message
> 'bond0: received packet with own address as source address', which is odd
> to me as it's an active/passive bond and I'd only expect to see this kind
> of message on XOR / load-balanced interfaces.
>
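
As a quick sanity check on the host (assuming the kernel bonding driver rather 
than teamd), it's worth confirming the bond mode and the currently active slave:

  cat /proc/net/bonding/bond0

That message usually means the host is seeing its own MAC address coming back 
in, e.g. reflected via the bridge or the switch side of the bond, so the output 
above together with the switch port configuration is worth a look.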

Could you please attach/share hosted-engine-setup, vdsm, supervdsm and
host-deploy (on the engine VM) logs?


>
> Host console screenshot: https://imgur.com/a/a2JLd
> Host OS: CentOS 7.4
> oVirt version: 4.2.0
>
> ip a on host while install is stuck waiting for VDSM:
>
> # ip a
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen
> 1000
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 2: enp2s0f0:  mtu 9000 qdisc mq
> master bond1 portid 01373031384833 state UP qlen 1000
> link/ether 78:e3:b5:10:74:88 brd ff:ff:ff:ff:ff:ff
> 3: enp2s0f1:  mtu 9000 qdisc mq
> master bond1 portid 02373031384833 state UP qlen 1000
> link/ether 78:e3:b5:10:74:88 brd ff:ff:ff:ff:ff:ff
> 4: ens1f0:  mtu 1500 qdisc mq
> master bond0 portid 01474835323543 state UP qlen 1000
> link/ether 00:9c:02:3c:49:90 brd ff:ff:ff:ff:ff:ff
> 5: ens1f1:  mtu 1500 qdisc mq
> master bond0 portid 02474835323543 state UP qlen 1000
> link/ether 00:9c:02:3c:49:90 brd ff:ff:ff:ff:ff:ff
> 6: bond1:  mtu 9000 qdisc noqueue
> state UP qlen 1000
> link/ether 78:e3:b5:10:74:88 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::7ae3:b5ff:fe10:7488/64 scope link
>valid_lft forever preferred_lft forever
> 7: storage@bond1:  mtu 9000 qdisc
> noqueue state UP qlen 1000
> link/ether 78:e3:b5:10:74:88 brd ff:ff:ff:ff:ff:ff
> inet 10.51.40.172/24 brd 10.51.40.255 scope global storage
>valid_lft forever preferred_lft forever
> inet6 fe80::7ae3:b5ff:fe10:7488/64 scope link
>valid_lft forever preferred_lft forever
> 9: ovs-system:  mtu 1500 qdisc noop state DOWN qlen
> 1000
> link/ether ae:c0:02:25:42:24 brd ff:ff:ff:ff:ff:ff
> 10: br-int:  mtu 1500 qdisc noop state DOWN qlen 1000
> link/ether be:92:5d:c3:28:4d brd ff:ff:ff:ff:ff:ff
> 30: bond0:  mtu 1500 qdisc
> noqueue master ovirtmgmt state UP qlen 1000
> link/ether 00:9c:02:3c:49:90 brd ff:ff:ff:ff:ff:ff
> 46: ovirtmgmt:  mtu 1500 qdisc noqueue
> state UP qlen 1000
> link/ether 00:9c:02:3c:49:90 brd ff:ff:ff:ff:ff:ff
> inet 10.51.14.112/24 brd 10.51.14.255 scope global ovirtmgmt
>valid_lft forever preferred_lft forever
> 47: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN qlen
> 1000
> link/ether 8e:c0:25:88:40:de brd ff:ff:ff:ff:ff:ff
>
>
> Before the self-hosted engine install was run the following did not exist:
>
> ;vdsmdummy;
> ovs-system
> br-int
> ovirtmgmt
>
> and bond0 was *not* a slave of ovirtmgmt.
>
> I'm now going to kick off a complete reinstall of CentOS 7 on the host, as
> I've since tried cleaning up the host using the ovirt-hosted-engine-cleanup
> command and removing the packages, which seems to leave the network
> configuration in a mess and doesn't actually clean up files on disk as expected.
>
>
Yes, currently ovirt-hosted-engine-cleanup is not able to revert to the initial
network configuration, but hosted-engine-setup is supposed to be able to
consume an existing management bridge.
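
If you do want to get back to a pristine network state by hand instead of 
reinstalling, something along these lines should drop the leftover bridge - a 
rough sketch only, not verified against your exact setup:

  ip link set ovirtmgmt down
  ip link delete ovirtmgmt type bridge
  # bond0 then needs its original IP configuration restored manually;
  # br-int and ovs-system belong to the openvswitch service.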


>
> --
> Sam McLeod (protoporpoise on IRC)
> https://smcleod.net
> https://twitter.com/s_mcleod
>
> Words are my own opinions and do not necessarily represent those of
> my employer or partners.
>
>