Re: [ovirt-users] Manually preconfigured network interfaces/bridges on oVirt 3.6 HC HE

2015-11-23 Thread Giuseppe Ragusa
On Mon, Nov 9, 2015, at 08:16, Sandro Bonazzola wrote:
> On Sun, Nov 8, 2015 at 9:57 PM, Giuseppe Ragusa  
> wrote:
>> On Tue, Nov 3, 2015, at 23:17, Giuseppe Ragusa wrote:
> On Tue, Nov 3, 2015, at 15:27, Simone Tiraboschi wrote:
> > On Mon, Nov 2, 2015 at 11:55 PM, Giuseppe Ragusa 
> >  wrote:
> >> On Mon, Nov 2, 2015, at 09:52, Simone Tiraboschi wrote:
> >>> On Mon, Nov 2, 2015 at 1:48 AM, Giuseppe Ragusa 
> >>>  wrote:
>  Hi all,
>  I'm stuck with the following error during the final phase of 
>  ovirt-hosted-engine-setup:
> 
>            The host hosted_engine_1 is in non-operational state.
>            Please try to activate it via the engine webadmin UI.
> 
>  If I log in to the engine administration web UI I find the corresponding 
>  message (inside NonOperational first host hosted_engine_1 Events tab):
> 
>  Host hosted_engine_1 does not comply with the cluster Default networks, 
>  the following networks are missing on host: 'ovirtmgmt'
> 
>  I'm installing with an oVirt snapshot from October the 27th on a 
>  fully-patched CentOS 7.1 host with a GlusterFS volume (3.7.5 
>  hyperconverged, replica 3, for the engine-vm) pre-created and network 
>  interfaces/bridges (ovirtmgmt and two other bridges, called nfs and lan, 
>  on underlying 802.3ad bonds or plain interfaces) manually pre-configured 
>  in /etc/sysconfig/network-scripts/ifcfg-* (using "classic" network 
>  service; NetworkManager disabled).
> 
> >>>
> >>> If you manually created the network bridges, the match between them and 
> >>> the logical network should happen by name.
> >>
> >> Hi Simone,
> >> many thanks for your help (again) :)
> >>
> >> As you may note from the above comment, the name should actually match 
> >> (it's exactly ovirtmgmt) but it doesn't get recognized.
> >>
> >>
> >>> If it doesn't for any reason (please report if you find any evidence), 
> >>> you can manually bind logical network and network interfaces editing the 
> >>> host properties from the web-ui. At that point the host should become 
> >>> active in a few seconds.
> >>
> >>
> >> Well, the most immediate evidence is the error messages already reported 
> >> (given that the bridge is actually present, with the right name and 
> >> actually working).
> >> Apart from that, I find the following past logs (I don't know whether they 
> >> are relevant or not):
> >>
> >> From /var/log/vdsm/connectivity.log:
> >
> >
> > Can you please also add the host-deploy logs?
>
> Please find a gzipped tar archive of the whole directory 
> /var/log/ovirt-engine/host-deploy/ at:
>
> https://onedrive.live.com/redir?resid=74BDE216CAA3E26F!110=!AIQUc6i-n5blQO0=file%2cgz
>>  
>> Since I suppose that there's nothing relevant in those logs, I'm planning to 
>> specify "net_persistence = ifcfg" in /etc/vdsm/vdsm.conf and restart VDSM on 
>> the host, then have the (still blocked) setup re-check.
>> 
>> Is there anything I should pay attention to before proceeding? (in 
>> particular while restarting VDSM)
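(For reference, the change above would amount to something like the following
sketch in /etc/vdsm/vdsm.conf, assuming the option belongs to the [vars]
section as in VDSM configurations of that era; please verify against your
installed vdsm.conf:

[vars]
net_persistence = ifcfg

followed by "systemctl restart vdsmd" to restart VDSM.)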
> 
> 
> ^^ Dan?

I went ahead, but unfortunately setting "net_persistence = ifcfg" in 
/etc/vdsm/vdsm.conf and restarting VDSM on the host did not solve it (same 
error as before).

While trying (always without success) all the other steps suggested by Simone 
(binding the logical network and synchronizing networks from the host), I 
found an interesting-looking libvirt network definition (autostarted too) 
named vdsm-ovirtmgmt, and this recalled some memories of past mailing list 
messages (which I still cannot find...) ;)

Long story short: aborting setup, cleaning everything up, and creating a 
libvirt network for each pre-provisioned bridge worked! ("net_persistence = 
ifcfg" has been kept for other, client-specific reasons, so I don't know 
whether it's needed too.)
Here it is, in BASH form:

# Define (and autostart) a libvirt network named "vdsm-<bridge>" for each
# pre-provisioned bridge, using a temporary XML file that is removed afterwards.
for my_bridge in ovirtmgmt bridge1 bridge2; do
cat <<- EOM > /root/my-${my_bridge}.xml
<network>
  <name>vdsm-${my_bridge}</name>
  <forward mode='bridge'/>
  <bridge name='${my_bridge}'/>
</network>
EOM
virsh -c qemu:///system net-define /root/my-${my_bridge}.xml
virsh -c qemu:///system net-autostart vdsm-${my_bridge}
rm -f /root/my-${my_bridge}.xml
done
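(To verify the result, the networks can be listed through the same connection
URI; a quick check, not from the original thread:

virsh -c qemu:///system net-list --all

Each vdsm-* network should appear as active and marked for autostart.)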

I was able to connect to libvirtd with the "virsh" commands above (libvirtd 
must be running for them to work) by removing the VDSM-added config fragment 
from /etc/libvirt/libvirtd.conf, allowing plain TCP connections and disabling 
TLS-only connections there, and finally by removing /etc/sasl2/libvirt.conf. 
All these modifications must be reverted after configuring the networks and 
stopping libvirtd, before relaunching setup.
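(For anyone retracing these steps, the temporary /etc/libvirt/libvirtd.conf
changes described above would look roughly like the following sketch; the
exact values are an assumption on my part, and the VDSM-added fragment is the
block delimited by vdsm comment markers in the same file:

listen_tls = 0        # deny TLS-only connections
listen_tcp = 1        # allow plain tcp connections
auth_tcp = "none"     # assumed, since /etc/sasl2/libvirt.conf was removed

As noted above, all of this must be reverted before relaunching setup.)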

Many thanks again for suggestions, hints etc.

Regards,
Giuseppe

>> I will report back here on the results.
>> 
>> Regards,
>> Giuseppe
>> 
>> > Many thanks again for your kind assistance.
>> >
>> > Regards,
>> > Giuseppe
>> >
>> > >> 2015-11-01 

Re: [ovirt-users] Manually preconfigured network interfaces/bridges on oVirt 3.6 HC HE

2015-11-08 Thread Giuseppe Ragusa
On Tue, Nov 3, 2015, at 23:17, Giuseppe Ragusa wrote:
> On Tue, Nov 3, 2015, at 15:27, Simone Tiraboschi wrote:
> > 
> > 
> > On Mon, Nov 2, 2015 at 11:55 PM, Giuseppe Ragusa 
> >  wrote:
> >> 
> >> On Mon, Nov 2, 2015, at 09:52, Simone Tiraboschi wrote:
> >>> 
> >>> 
> >>> On Mon, Nov 2, 2015 at 1:48 AM, Giuseppe Ragusa 
> >>>  wrote:
>  Hi all,
>  I'm stuck with the following error during the final phase of 
>  ovirt-hosted-engine-setup:
>  
>            The host hosted_engine_1 is in non-operational state.
>            Please try to activate it via the engine webadmin UI.
>  
>  If I log in to the engine administration web UI I find the corresponding 
>  message (inside NonOperational first host hosted_engine_1 Events tab):
>  
>  Host hosted_engine_1 does not comply with the cluster Default networks, 
>  the following networks are missing on host: 'ovirtmgmt'
>  
>  I'm installing with an oVirt snapshot from October the 27th on a 
>  fully-patched CentOS 7.1 host with a GlusterFS volume (3.7.5 
>  hyperconverged, replica 3, for the engine-vm) pre-created and network 
>  interfaces/bridges (ovirtmgmt and two other bridges, called nfs and lan, 
>  on underlying 802.3ad bonds or plain interfaces) manually pre-configured 
>  in /etc/sysconfig/network-scripts/ifcfg-* (using "classic" network 
>  service; NetworkManager disabled).
>  
> >>> 
> >>> If you manually created the network bridges, the match between them and 
> >>> the logical network should happen by name.
> >> 
> >> 
> >> Hi Simone,
> >> many thanks for your help (again) :)
> >> 
> >> As you may note from the above comment, the name should actually match 
> >> (it's exactly ovirtmgmt) but it doesn't get recognized.
> >> 
> >> 
> >>> If it doesn't for any reason (please report if you find any evidence), 
> >>> you can manually bind logical network and network interfaces editing the 
> >>> host properties from the web-ui. At that point the host should become 
> >>> active in a few seconds.
> >> 
> >> 
> >> Well, the most immediate evidence is the error messages already reported 
> >> (given that the bridge is actually present, with the right name and 
> >> actually working).
> >> Apart from that, I find the following past logs (I don't know whether they 
> >> are relevant or not):
> >> 
> >> From /var/log/vdsm/connectivity.log:
> > 
> > 
> > Can you please also add the host-deploy logs?
> 
> Please find a gzipped tar archive of the whole directory 
> /var/log/ovirt-engine/host-deploy/ at:
> 
> https://onedrive.live.com/redir?resid=74BDE216CAA3E26F!110=!AIQUc6i-n5blQO0=file%2cgz

Since I suppose that there's nothing relevant in those logs, I'm planning to 
specify "net_persistence = ifcfg" in /etc/vdsm/vdsm.conf and restart VDSM on 
the host, then have the (still blocked) setup re-check.

Is there anything I should pay attention to before proceeding? (in particular 
while restarting VDSM)

I will report back here on the results.

Regards,
Giuseppe

> Many thanks again for your kind assistance.
> 
> Regards,
> Giuseppe
> 
> >> 2015-11-01 21:37:21,029:DEBUG:recent_client:True
> >> 2015-11-01 21:37:51,088:DEBUG:recent_client:False
> >> 2015-11-01 21:38:21,146:DEBUG:dropped vnet0:(operstate:up speed:0 duplex:full) dropped vnet2:(operstate:up speed:0 duplex:full) dropped vnet1:(operstate:up speed:0 duplex:full)
> >> 2015-11-01 21:38:36,174:DEBUG:recent_client:True
> >> 2015-11-01 21:39:06,233:DEBUG:recent_client:False
> >> 2015-11-01 21:48:22,383:DEBUG:recent_client:True, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
> >> 2015-11-01 21:48:52,450:DEBUG:recent_client:False
> >> 2015-11-01 22:55:21,668:DEBUG:recent_client:True, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up 

Re: [ovirt-users] Manually preconfigured network interfaces/bridges on oVirt 3.6 HC HE

2015-11-08 Thread Sandro Bonazzola
On Sun, Nov 8, 2015 at 9:57 PM, Giuseppe Ragusa  wrote:

> On Tue, Nov 3, 2015, at 23:17, Giuseppe Ragusa wrote:
> > On Tue, Nov 3, 2015, at 15:27, Simone Tiraboschi wrote:
> > >
> > >
> > > On Mon, Nov 2, 2015 at 11:55 PM, Giuseppe Ragusa <
> giuseppe.rag...@hotmail.com> wrote:
> > >>
> > >> On Mon, Nov 2, 2015, at 09:52, Simone Tiraboschi wrote:
> > >>>
> > >>>
> > >>> On Mon, Nov 2, 2015 at 1:48 AM, Giuseppe Ragusa <
> giuseppe.rag...@hotmail.com> wrote:
> >  Hi all,
> >  I'm stuck with the following error during the final phase of
> ovirt-hosted-engine-setup:
> > 
> >    The host hosted_engine_1 is in non-operational state.
> >    Please try to activate it via the engine webadmin UI.
> > 
> >  If I log in to the
> corresponding message (inside NonOperational first host hosted_engine_1
> Events tab):
> > 
> >  Host hosted_engine_1 does not comply with the cluster Default
> networks, the following networks are missing on host: 'ovirtmgmt'
> > 
> >  I'm installing with an oVirt snapshot from October the 27th on a
> fully-patched CentOS 7.1 host with a GlusterFS volume (3.7.5
> hyperconverged, replica 3, for the engine-vm) pre-created and network
> interfaces/bridges (ovirtmgmt and two other bridges, called nfs and lan, on
> underlying 802.3ad bonds or plain interfaces) manually pre-configured in
> /etc/sysconfig/network-scripts/ifcfg-* (using "classic" network service;
> NetworkManager disabled).
> > 
> > >>>
> > >>> If you manually created the network bridges, the match between them
> and the logical network should happen by name.
> > >>
> > >>
> > >> Hi Simone,
> > >> many thanks for your help (again) :)
> > >>
> > >> As you may note from the above comment, the name should actually
> match (it's exactly ovirtmgmt) but it doesn't get recognized.
> > >>
> > >>
> > >>> If it doesn't for any reason (please report if you find any
> evidence), you can manually bind logical network and network interfaces
> editing the host properties from the web-ui. At that point the host should
> become active in a few seconds.
> > >>
> > >>
> > >> Well, the most immediate evidence is the error messages already
> reported (given that the bridge is actually present, with the right name
> and actually working).
> > >> Apart from that, I find the following past logs (I don't know whether
> they are relevant or not):
> > >>
> > >> From /var/log/vdsm/connectivity.log:
> > >
> > >
> > > Can you please also add the host-deploy logs?
> >
> > Please find a gzipped tar archive of the whole directory
> /var/log/ovirt-engine/host-deploy/ at:
> >
> >
> https://onedrive.live.com/redir?resid=74BDE216CAA3E26F!110=!AIQUc6i-n5blQO0=file%2cgz
>
> Since I suppose that there's nothing relevant in those logs, I'm planning
> to specify "net_persistence = ifcfg" in /etc/vdsm/vdsm.conf and restart
> VDSM on the host, then have the (still blocked) setup re-check.
>
> Is there anything I should pay attention to before proceeding? (in
> particular while restarting VDSM)
>


^^ Dan?


>
> I will report back here on the results.
>
> Regards,
> Giuseppe
>
> > Many thanks again for your kind assistance.
> >
> > Regards,
> > Giuseppe
> >
> > >> 2015-11-01 21:37:21,029:DEBUG:recent_client:True
> > >> 2015-11-01 21:37:51,088:DEBUG:recent_client:False
> > >> 2015-11-01 21:38:21,146:DEBUG:dropped vnet0:(operstate:up speed:0 duplex:full) dropped vnet2:(operstate:up speed:0 duplex:full) dropped vnet1:(operstate:up speed:0 duplex:full)
> > >> 2015-11-01 21:38:36,174:DEBUG:recent_client:True
> > >> 2015-11-01 21:39:06,233:DEBUG:recent_client:False
> > >> 2015-11-01 21:48:22,383:DEBUG:recent_client:True, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
> > >> 2015-11-01 21:48:52,450:DEBUG:recent_client:False
> > >> 2015-11-01 22:55:21,668:DEBUG:recent_client:True, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 

Re: [ovirt-users] Manually preconfigured network interfaces/bridges on oVirt 3.6 HC HE

2015-11-03 Thread Giuseppe Ragusa
On Tue, Nov 3, 2015, at 15:27, Simone Tiraboschi wrote:
> 
> 
> On Mon, Nov 2, 2015 at 11:55 PM, Giuseppe Ragusa 
>  wrote:
>> 
>> On Mon, Nov 2, 2015, at 09:52, Simone Tiraboschi wrote:
>>> 
>>> 
>>> On Mon, Nov 2, 2015 at 1:48 AM, Giuseppe Ragusa 
>>>  wrote:
 Hi all,
 I'm stuck with the following error during the final phase of 
 ovirt-hosted-engine-setup:
 
           The host hosted_engine_1 is in non-operational state.
           Please try to activate it via the engine webadmin UI.
 
 If I log in to the engine administration web UI I find the corresponding 
 message (inside NonOperational first host hosted_engine_1 Events tab):
 
 Host hosted_engine_1 does not comply with the cluster Default networks, 
 the following networks are missing on host: 'ovirtmgmt'
 
 I'm installing with an oVirt snapshot from October the 27th on a 
 fully-patched CentOS 7.1 host with a GlusterFS volume (3.7.5 
 hyperconverged, replica 3, for the engine-vm) pre-created and network 
 interfaces/bridges (ovirtmgmt and two other bridges, called nfs and lan, 
 on underlying 802.3ad bonds or plain interfaces) manually pre-configured 
 in /etc/sysconfig/network-scripts/ifcfg-* (using "classic" network 
 service; NetworkManager disabled).
 
>>> 
>>> If you manually created the network bridges, the match between them and the 
>>> logical network should happen by name.
>> 
>> 
>> Hi Simone,
>> many thanks for your help (again) :)
>> 
>> As you may note from the above comment, the name should actually match (it's 
>> exactly ovirtmgmt) but it doesn't get recognized.
>> 
>> 
>>> If it doesn't for any reason (please report if you find any evidence), you 
>>> can manually bind logical network and network interfaces editing the host 
>>> properties from the web-ui. At that point the host should become active in 
>>> a few seconds.
>> 
>> 
>> Well, the most immediate evidence is the error messages already reported 
>> (given that the bridge is actually present, with the right name and actually 
>> working).
>> Apart from that, I find the following past logs (I don't know whether they 
>> are relevant or not):
>> 
>> From /var/log/vdsm/connectivity.log:
> 
> 
> Can you please also add the host-deploy logs?

Please find a gzipped tar archive of the whole directory 
/var/log/ovirt-engine/host-deploy/ at:

https://onedrive.live.com/redir?resid=74BDE216CAA3E26F!110=!AIQUc6i-n5blQO0=file%2cgz

Many thanks again for your kind assistance.

Regards,
Giuseppe

>> 2015-11-01 21:37:21,029:DEBUG:recent_client:True
>> 2015-11-01 21:37:51,088:DEBUG:recent_client:False
>> 2015-11-01 21:38:21,146:DEBUG:dropped vnet0:(operstate:up speed:0 duplex:full) dropped vnet2:(operstate:up speed:0 duplex:full) dropped vnet1:(operstate:up speed:0 duplex:full)
>> 2015-11-01 21:38:36,174:DEBUG:recent_client:True
>> 2015-11-01 21:39:06,233:DEBUG:recent_client:False
>> 2015-11-01 21:48:22,383:DEBUG:recent_client:True, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
>> 2015-11-01 21:48:52,450:DEBUG:recent_client:False
>> 2015-11-01 22:55:21,668:DEBUG:recent_client:True, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
>> 2015-11-01 22:56:00,952:DEBUG:recent_client:False, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up 

Re: [ovirt-users] Manually preconfigured network interfaces/bridges on oVirt 3.6 HC HE

2015-11-03 Thread Simone Tiraboschi
On Mon, Nov 2, 2015 at 11:55 PM, Giuseppe Ragusa <
giuseppe.rag...@hotmail.com> wrote:

> On Mon, Nov 2, 2015, at 09:52, Simone Tiraboschi wrote:
>
>
>
> On Mon, Nov 2, 2015 at 1:48 AM, Giuseppe Ragusa <
> giuseppe.rag...@hotmail.com> wrote:
>
> Hi all,
> I'm stuck with the following error during the final phase of
> ovirt-hosted-engine-setup:
>
>   The host hosted_engine_1 is in non-operational state.
>   Please try to activate it via the engine webadmin UI.
>
> If I log in to the engine administration web UI I find the corresponding
> message (inside NonOperational first host hosted_engine_1 Events tab):
>
> Host hosted_engine_1 does not comply with the cluster Default networks,
> the following networks are missing on host: 'ovirtmgmt'
>
> I'm installing with an oVirt snapshot from October the 27th on a
> fully-patched CentOS 7.1 host with a GlusterFS volume (3.7.5
> hyperconverged, replica 3, for the engine-vm) pre-created and network
> interfaces/bridges (ovirtmgmt and two other bridges, called nfs and lan, on
> underlying 802.3ad bonds or plain interfaces) manually pre-configured in
> /etc/sysconfig/network-scripts/ifcfg-* (using "classic" network service;
> NetworkManager disabled).
>
>
>
> If you manually created the network bridges, the match between them and
> the logical network should happen by name.
>
>
> Hi Simone,
> many thanks for your help (again) :)
>
> As you may note from the above comment, the name should actually match
> (it's exactly ovirtmgmt) but it doesn't get recognized.
>
>
> If it doesn't for any reason (please report if you find any evidence),
> you can manually bind logical network and network interfaces editing the
> host properties from the web-ui. At that point the host should become
> active in a few seconds.
>
>
> Well, the most immediate evidence is the error messages already reported
> (given that the bridge is actually present, with the right name and
> actually working).
> Apart from that, I find the following past logs (I don't know whether they
> are relevant or not):
>
> From /var/log/vdsm/connectivity.log:
>


Can you please also add the host-deploy logs?


>
> 2015-11-01 21:37:21,029:DEBUG:recent_client:True
> 2015-11-01 21:37:51,088:DEBUG:recent_client:False
> 2015-11-01 21:38:21,146:DEBUG:dropped vnet0:(operstate:up speed:0 duplex:full) dropped vnet2:(operstate:up speed:0 duplex:full) dropped vnet1:(operstate:up speed:0 duplex:full)
> 2015-11-01 21:38:36,174:DEBUG:recent_client:True
> 2015-11-01 21:39:06,233:DEBUG:recent_client:False
> 2015-11-01 21:48:22,383:DEBUG:recent_client:True, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
> 2015-11-01 21:48:52,450:DEBUG:recent_client:False
> 2015-11-01 22:55:21,668:DEBUG:recent_client:True, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
> 2015-11-01 22:56:00,952:DEBUG:recent_client:False, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up
Re: [ovirt-users] Manually preconfigured network interfaces/bridges on oVirt 3.6 HC HE

2015-11-02 Thread Simone Tiraboschi
On Mon, Nov 2, 2015 at 1:48 AM, Giuseppe Ragusa  wrote:

> Hi all,
> I'm stuck with the following error during the final phase of
> ovirt-hosted-engine-setup:
>
>   The host hosted_engine_1 is in non-operational state.
>   Please try to activate it via the engine webadmin UI.
>
> If I log in to the engine administration web UI I find the corresponding
> message (inside NonOperational first host hosted_engine_1 Events tab):
>
> Host hosted_engine_1 does not comply with the cluster Default networks,
> the following networks are missing on host: 'ovirtmgmt'
>
> I'm installing with an oVirt snapshot from October the 27th on a
> fully-patched CentOS 7.1 host with a GlusterFS volume (3.7.5
> hyperconverged, replica 3, for the engine-vm) pre-created and network
> interfaces/bridges (ovirtmgmt and two other bridges, called nfs and lan, on
> underlying 802.3ad bonds or plain interfaces) manually pre-configured in
> /etc/sysconfig/network-scripts/ifcfg-* (using "classic" network service;
> NetworkManager disabled).
>
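(As an illustration of such a manual pre-configuration, a minimal sketch of
the kind of ifcfg files involved; the device names mirror the quoted setup,
while the addressing is purely hypothetical:

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- 802.3ad bond enslaved to the bridge
DEVICE=bond0
ONBOOT=yes
BONDING_OPTS="mode=802.3ad miimon=100"
BRIDGE=ovirtmgmt
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt -- the bridge itself
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.10
PREFIX=24
DELAY=0
NM_CONTROLLED=no
)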
>
If you manually created the network bridges, the match between them and the
logical network should happen by name.
If it doesn't for any reason (please report if you find any evidence), you
can manually bind logical network and network interfaces editing the host
properties from the web-ui. At that point the host should become active in
a few seconds.
Once the host becomes active, you can continue with hosted-engine-setup.


> I seem to recall that a preconfigured network setup on oVirt 3.6 would
> need something predefined on the libvirt side too (apart from usual ifcfg-*
> files), but I cannot find the relevant mailing list message anymore nor any
> other specific documentation.
>
> Does anyone have any further suggestion or clue (code/docs to read)?
>
> Many thanks in advance.
>
> Kind regards,
> Giuseppe
>
> PS: please also keep my address when replying because I'm experiencing some
> problems between Hotmail and oVirt-mailing-list
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Manually preconfigured network interfaces/bridges on oVirt 3.6 HC HE

2015-11-02 Thread Giuseppe Ragusa
On Mon, Nov 2, 2015, at 09:52, Simone Tiraboschi wrote:
>
>
> On Mon, Nov 2, 2015 at 1:48 AM, Giuseppe Ragusa
>  wrote:
>> Hi all,
>>
I'm stuck with the following error during the final phase of 
ovirt-hosted-engine-
setup:
>>
>>
The host hosted_engine_1 is in non-operational state.
>>
Please try to activate it via the engine webadmin UI.
>>
>>
If I log in to the engine administration web UI I find the corresponding
message (inside NonOperational first host hosted_engine_1 Events tab):
>>
>>
Host hosted_engine_1 does not comply with the cluster Default networks,
the following networks are missing on host: 'ovirtmgmt'
>>
>>
I'm installing with an oVirt snapshot from October the 27th on a fully-
patched CentOS 7.1 host with a GlusterFS volume (3.7.5 hyperconverged,
replica 3, for the engine-vm) pre-created and network interfaces/bridges
(ovirtmgmt and two other bridges, called nfs and lan, on underlying
802.3ad bonds or plain interfaces) manually pre-configured in 
/etc/sysconfig/network-scripts/ifcfg-
* (using "classic" network service; NetworkManager disabled).
>>
>
> If you manually created the network bridges, the match between them
> and the logical network should happen by name.

Hi Simone, many thanks for your help (again) :)

As you may note from the above comment, the name should actually match
(it's exactly ovirtmgmt) but it doesn't get recognized.

> If it doesn't for any reason (please report if you find any
> evidence), you can manually bind logical network and network
> interfaces editing the host properties from the web-ui. At that point
> the host should become active in a few seconds.

Well, the most immediate evidence is the error messages already
reported (given that the bridge is actually present, with the right name
and actually working). Apart from that, I find the following past logs
(I don't know whether they are relevant or not):

From /var/log/vdsm/connectivity.log:

2015-11-01 21:37:21,029:DEBUG:recent_client:True
2015-11-01 21:37:51,088:DEBUG:recent_client:False
2015-11-01 21:38:21,146:DEBUG:dropped vnet0:(operstate:up speed:0 duplex:full) dropped vnet2:(operstate:up speed:0 duplex:full) dropped vnet1:(operstate:up speed:0 duplex:full)
2015-11-01 21:38:36,174:DEBUG:recent_client:True
2015-11-01 21:39:06,233:DEBUG:recent_client:False
2015-11-01 21:48:22,383:DEBUG:recent_client:True, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
2015-11-01 21:48:52,450:DEBUG:recent_client:False
2015-11-01 22:55:21,668:DEBUG:recent_client:True, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
2015-11-01 22:56:00,952:DEBUG:recent_client:False, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
2015-11-01 22:58:16,215:DEBUG:new vnet0:(operstate:up speed:0 