Re: [ovirt-users] iSCSI Multipath issues

2017-07-25 Thread Maor Lipchuk
On Tue, Jul 25, 2017 at 6:25 PM, Vinícius Ferrão  wrote:
> Bug opened here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1474904


Thanks! Let's continue the discussion in the bug.

>
> Thanks,
> V.
>
> On 25 Jul 2017, at 12:08, Vinícius Ferrão  wrote:
>
> Hello Maor,
>
> Thanks for answering and looking deeper in this case. You’re welcome to
> connect to my machine since it’s reachable over the internet. I’ll be
> opening a ticket in moments. Just to feed an update here:
>
> I’ve done what you asked, but since I’m running Self Hosted Engine, I lost
> the connection to HE, here’s the CLI:
>
>
>
> Last login: Thu Jul 20 02:43:50 2017 from 172.31.2.3
>
>  node status: OK
>  See `nodectl check` for more information
>
> Admin Console: https://192.168.11.3:9090/ or https://192.168.12.3:9090/ or
> https://146.164.37.103:9090/
>
> [root@ovirt3 ~]# iscsiadm -m session -u
> Logging out of session [sid: 1, target:
> iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-he, portal: 192.168.12.14,3260]
> Logging out of session [sid: 4, target:
> iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal:
> 192.168.12.14,3260]
> Logging out of session [sid: 7, target:
> iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal:
> 192.168.12.14,3260]
> Logging out of session [sid: 5, target:
> iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal:
> 192.168.11.14,3260]
> Logging out of session [sid: 6, target:
> iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal:
> 192.168.11.14,3260]
> Logout of [sid: 1, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-he,
> portal: 192.168.12.14,3260] successful.
> Logout of [sid: 4, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio,
> portal: 192.168.12.14,3260] successful.
> Logout of [sid: 7, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio,
> portal: 192.168.12.14,3260] successful.
> Logout of [sid: 5, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio,
> portal: 192.168.11.14,3260] successful.
> Logout of [sid: 6, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio,
> portal: 192.168.11.14,3260] successful.
> [root@ovirt3 ~]# service iscsid stop
> Redirecting to /bin/systemctl stop  iscsid.service
> Warning: Stopping iscsid.service, but it can still be activated by:
>  iscsid.socket
>
> [root@ovirt3 ~]# mv /var/lib/iscsi/ifaces/* /tmp/ifaces
>
> [root@ovirt3 ~]# service iscsid start
> Redirecting to /bin/systemctl start  iscsid.service
>
> And finally:
>
> [root@ovirt3 ~]# hosted-engine --vm-status
> .
> .
> .
>
> It just hangs.
>
> Thanks,
> V.
>
> On 25 Jul 2017, at 05:54, Maor Lipchuk  wrote:
>
> Hi Vinícius,
>
> I was trying to reproduce your scenario and also encountered this
> issue, so please disregard my last comment. Can you please open a bug
> on that so we can investigate it properly?
>
> Thanks,
> Maor
>
>
> On Tue, Jul 25, 2017 at 11:26 AM, Maor Lipchuk  wrote:
>
> Hi Vinícius,
>
> For some reason it looks like your networks are both connected to the same
> IPs.
>
> based on the VDSM logs:
> u'connectionParams':[
>    {
>       u'netIfaceName':u'eno3.11',
>       u'connection':u'192.168.11.14',
>    },
>    {
>       u'netIfaceName':u'eno3.11',
>       u'connection':u'192.168.12.14',
>    },
>    {
>       u'netIfaceName':u'eno4.12',
>       u'connection':u'192.168.11.14',
>    },
>    {
>       u'netIfaceName':u'eno4.12',
>       u'connection':u'192.168.12.14',
>    }
> ],
>
> Can you try to reconnect to the iSCSI storage domain after
> re-initializing your iscsiadm on your host?
>
> 1. Move your iSCSI storage domain to maintenance in oVirt by deactivating it
>
> 2. In your VDSM host, log out from your iscsi open sessions which are
> related to this storage domain
> if that is your only iSCSI storage domain log out from all the sessions:
>  "iscsiadm -m session -u"
>
> 3. Stop the iscsid service:
>  "service iscsid stop"
>
> 4. Move your network interfaces configured in the iscsiadm to a
> temporary folder:
>   mv /var/lib/iscsi/ifaces/* /tmp/ifaces
>
> 5. Start the iscsid service
>  "service iscsid start"
>
> Regards,
> Maor and Benny
>
> On Wed, Jul 19, 2017 at 1:01 PM, Uwe Laverenz  wrote:
>
> Hi,
>
>
> Am 19.07.2017 um 04:52 schrieb Vinícius Ferrão:
>
> I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m
> trying to enable the feature without success too.
>
> Here’s what I’ve done, step-by-step.
>
> 1. Installed oVirt Node 4.1.3 with the following network settings:
>
> eno1 and eno2 on a 802.3ad (LACP) Bond, creating a bond0 interface.
> eno3 with 9216 MTU.
> eno4 with 9216 MTU.
> vlan11 on eno3 with 9216 MTU and fixed IP addresses.
> vlan12 on eno4 with 9216 MTU and fixed IP addresses.
>
> eno3 and eno4 are my iSCSI MPIO Interfaces, completely segregated, on
> different switches.
>
>
>
> This is the point: the OVirt implementation of iSCSI-Bonding assumes that
> 

Re: [ovirt-users] Ovirt Node

2017-07-25 Thread Alexander Wels
On Tuesday, July 25, 2017 3:39:12 PM EDT FERNANDO FREDIANI wrote:
> Josep, were these hosts a CentOS Minimal install or oVirt-Node-NG
> images? If they were a CentOS Minimal install you must install vdsm
> before adding the host to oVirt Engine.
> 
> Fernando
> 

If it was a minimal CentOS install then he has to enable the oVirt repository for
whatever version of oVirt he is running by calling:

yum localinstall http://resources.ovirt.org/pub/yum-repo/ovirt-releaseXX.rpm

Where XX is the version of his oVirt, for instance 41
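Under the assumption that the URL pattern above is stable across releases, the repository URL for a given version string can be assembled like this (an illustrative Python sketch, not an official oVirt tool):

```python
def ovirt_release_rpm_url(version: str) -> str:
    """Build the ovirt-release RPM URL for an oVirt version string,
    e.g. '41' for oVirt 4.1 (pattern taken from the command above)."""
    return f"http://resources.ovirt.org/pub/yum-repo/ovirt-release{version}.rpm"

# The command to run on the host would then be:
print("yum localinstall " + ovirt_release_rpm_url("41"))
```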

> On 25/07/2017 14:13, Jose Vicente Rosello Vila wrote:
> > Hello users,
> > 
> > I installed ovirt engine 4.1.3.5-1.el7.centos and I tried to install 2
> > hosts, but the result was “install failed”.
> >
> > Both nodes have been installed from the CD image.
> > 
> > What can I do?
> > 
> > Thanks,
> > 
> > 
> > 
> > 
> > Josep Vicent Roselló Vila
> > 
> > Àrea de Sistemes d’Informació i Comunicacions
> > 
> > *Universitat Politècnica de València *
> > 
> > 
> > 
> > Camí de Vera, s/n
> > 
> > 46022 VALÈNCIA
> > 
> > Edifici 4
> > 
> > 
> > 
> > Tel. +34 963 879 075 (ext.78746)
> > 
> > rose...@asic.upv.es 
> > 
> > 
> > 
> > Before printing this message, consider whether it is necessary.
> > Caring for the environment is everyone's responsibility!
> > 
> > 
> > 
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users






Re: [ovirt-users] Ovirt Node

2017-07-25 Thread FERNANDO FREDIANI
Josep, were these hosts a CentOS Minimal install or oVirt-Node-NG
images? If they were a CentOS Minimal install you must install vdsm
before adding the host to oVirt Engine.


Fernando


On 25/07/2017 14:13, Jose Vicente Rosello Vila wrote:


Hello users,

I installed ovirt engine 4.1.3.5-1.el7.centos and I tried to install 2
hosts, but the result was “install failed”.


Both nodes have been installed from the CD image.

What can I do?

Thanks,




Josep Vicent Roselló Vila

Àrea de Sistemes d’Informació i Comunicacions

*Universitat Politècnica de València *



Camí de Vera, s/n

46022 VALÈNCIA

Edifici 4




Tel. +34 963 879 075 (ext.78746)

rose...@asic.upv.es 



Before printing this message, consider whether it is necessary.
Caring for the environment is everyone's responsibility!





[ovirt-users] Ovirt Node

2017-07-25 Thread Jose Vicente Rosello Vila
Hello users,



I installed ovirt engine 4.1.3.5-1.el7.centos and I tried to install 2 hosts,
but the result was "install failed".



Both nodes have been installed from the CD image.



What can I do?











Thanks,







Josep Vicent Roselló Vila

Àrea de Sistemes d'Informació i Comunicacions

Universitat Politècnica de València



Camí de Vera, s/n

46022 VALÈNCIA

Edifici 4



Tel. +34 963 879 075 (ext.78746)

rose...@asic.upv.es




Before printing this message, consider whether it is necessary.
Caring for the environment is everyone's responsibility!





Re: [ovirt-users] oVirt and Foreman

2017-07-25 Thread Ivan Necas
Oved Ourfali  writes:

> CC-ing Ohad and Ivan from the Foreman team to take a look. 
>
> Also, by default, RHV 4.1 will use v4 of the api, so you have to use a URL in 
> Foreman that uses v3 (as Foreman doesn't support v4 yet).
>
> I assume that's not your issue, otherwise you would have encountered more 
> basic issues. 
>
> Also, can you please share your logs from both environments? 
>
> Ohad/Ivan, any clue? 
>
> Thanks, 
> Oved 
>
> On Jul 24, 2017 18:08, "Davide Ferrari"  wrote:
>
> Hello list
>
> is anybody successfully using oVirt + Foreman for VM creation + 
> provisioning?
>
> I'm using Foreman (latest version, 1.15.2) with the latest oVirt version
> (4.1.3) but I'm encountering several problems, especially related to disks.
> For example:
>
> - cannot create a VM with multiple disks through the Foreman CLI
> (hammer)

Could you send the hammer command you're using?

>
> - if I create a multidisk VM from Foreman, the second disk always
> gets the "bootable" flag and not the primary image, making the VMs
> not bootable at all.

Are the compute profiles involved in the provisioning by any chance?

/CC to ori to have more pairs of eyes to look at this.

-- Ivan

>
> Any other Foreman user sharing the pain here? Foreman's list is not so
> useful so I'm trying to ask here. How do you programmatically create virtual
> machines with oVirt and Foreman? Should I switch to
> directly using the oVirt API?
>
> Thanks in advance
>
> Davide
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI Multipath issues

2017-07-25 Thread Vinícius Ferrão
Bug opened here:
https://bugzilla.redhat.com/show_bug.cgi?id=1474904

Thanks,
V.

On 25 Jul 2017, at 12:08, Vinícius Ferrão wrote:

Hello Maor,

Thanks for answering and looking deeper in this case. You’re welcome to connect 
to my machine since it’s reachable over the internet. I’ll be opening a ticket 
in moments. Just to feed an update here:

I’ve done what you asked, but since I’m running Self Hosted Engine, I lost the 
connection to HE, here’s the CLI:



Last login: Thu Jul 20 02:43:50 2017 from 172.31.2.3

 node status: OK
 See `nodectl check` for more information

Admin Console: https://192.168.11.3:9090/ or https://192.168.12.3:9090/ or 
https://146.164.37.103:9090/

[root@ovirt3 ~]# iscsiadm -m session -u
Logging out of session [sid: 1, target: 
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-he, portal: 192.168.12.14,3260]
Logging out of session [sid: 4, target: 
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.12.14,3260]
Logging out of session [sid: 7, target: 
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.12.14,3260]
Logging out of session [sid: 5, target: 
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.11.14,3260]
Logging out of session [sid: 6, target: 
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.11.14,3260]
Logout of [sid: 1, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-he, 
portal: 192.168.12.14,3260] successful.
Logout of [sid: 4, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, 
portal: 192.168.12.14,3260] successful.
Logout of [sid: 7, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, 
portal: 192.168.12.14,3260] successful.
Logout of [sid: 5, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, 
portal: 192.168.11.14,3260] successful.
Logout of [sid: 6, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, 
portal: 192.168.11.14,3260] successful.
[root@ovirt3 ~]# service iscsid stop
Redirecting to /bin/systemctl stop  iscsid.service
Warning: Stopping iscsid.service, but it can still be activated by:
 iscsid.socket

[root@ovirt3 ~]# mv /var/lib/iscsi/ifaces/* /tmp/ifaces

[root@ovirt3 ~]# service iscsid start
Redirecting to /bin/systemctl start  iscsid.service

And finally:

[root@ovirt3 ~]# hosted-engine --vm-status
.
.
.

It just hangs.

Thanks,
V.

On 25 Jul 2017, at 05:54, Maor Lipchuk wrote:

Hi Vinícius,

I was trying to reproduce your scenario and also encountered this
issue, so please disregard my last comment. Can you please open a bug
on that so we can investigate it properly?

Thanks,
Maor


On Tue, Jul 25, 2017 at 11:26 AM, Maor Lipchuk wrote:
Hi Vinícius,

For some reason it looks like your networks are both connected to the same IPs.

based on the VDSM logs:
u'connectionParams':[
   {
      u'netIfaceName':u'eno3.11',
      u'connection':u'192.168.11.14',
   },
   {
      u'netIfaceName':u'eno3.11',
      u'connection':u'192.168.12.14',
   },
   {
      u'netIfaceName':u'eno4.12',
      u'connection':u'192.168.11.14',
   },
   {
      u'netIfaceName':u'eno4.12',
      u'connection':u'192.168.12.14',
   }
],
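Maor's observation can be checked mechanically. The following is a hypothetical Python sketch (not VDSM code) that groups the portal IPs by iSCSI interface and flags any interface that is asked to log in to portals on more than one /24 subnet, which is exactly the situation in the log above:

```python
import ipaddress
from collections import defaultdict

def find_cross_subnet_logins(connection_params, prefix=24):
    """Group portal IPs by iSCSI interface and return the interfaces
    that are asked to reach portals on more than one subnet."""
    subnets_per_iface = defaultdict(set)
    for p in connection_params:
        net = ipaddress.ip_network(f"{p['connection']}/{prefix}", strict=False)
        subnets_per_iface[p['netIfaceName']].add(net)
    return {iface: sorted(str(n) for n in nets)
            for iface, nets in subnets_per_iface.items() if len(nets) > 1}

# The four connections taken from the VDSM log excerpt above.
params = [
    {'netIfaceName': 'eno3.11', 'connection': '192.168.11.14'},
    {'netIfaceName': 'eno3.11', 'connection': '192.168.12.14'},
    {'netIfaceName': 'eno4.12', 'connection': '192.168.11.14'},
    {'netIfaceName': 'eno4.12', 'connection': '192.168.12.14'},
]
# Both interfaces end up targeting 192.168.11.0/24 and 192.168.12.0/24.
print(find_cross_subnet_logins(params))
```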

Can you try to reconnect to the iSCSI storage domain after
re-initializing your iscsiadm on your host?

1. Move your iSCSI storage domain to maintenance in oVirt by deactivating it

2. In your VDSM host, log out from your iscsi open sessions which are
related to this storage domain
if that is your only iSCSI storage domain log out from all the sessions:
 "iscsiadm -m session -u"

3. Stop the iscsid service:
 "service iscsid stop"

4. Move your network interfaces configured in the iscsiadm to a
temporary folder:
  mv /var/lib/iscsi/ifaces/* /tmp/ifaces

5. Start the iscsid service
 "service iscsid start"

Regards,
Maor and Benny

On Wed, Jul 19, 2017 at 1:01 PM, Uwe Laverenz wrote:
Hi,


Am 19.07.2017 um 04:52 schrieb Vinícius Ferrão:

I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m
trying to enable the feature without success too.

Here’s what I’ve done, step-by-step.

1. Installed oVirt Node 4.1.3 with the following network settings:

eno1 and eno2 on a 802.3ad (LACP) Bond, creating a bond0 interface.
eno3 with 9216 MTU.
eno4 with 9216 MTU.
vlan11 on eno3 with 9216 MTU and fixed IP addresses.
vlan12 on eno4 with 9216 MTU and fixed IP addresses.

eno3 and eno4 are my iSCSI MPIO Interfaces, completely segregated, on
different switches.


This is the point: the OVirt implementation of iSCSI-Bonding assumes that
all network interfaces in the bond can connect/reach all targets, including
those in the other net(s). The fact that you use separate, isolated networks
means that this is not the case in your setup (and not in mine).
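Uwe's point can be illustrated with a small reachability check. Assuming fully isolated /24 networks and taking the host's interface addresses from the Admin Console banner earlier in the thread (192.168.11.3 and 192.168.12.3), only same-subnet interface/portal pairs can ever log in (illustrative Python, not oVirt code):

```python
import ipaddress

def reachable_pairs(iface_addrs, portals, prefix=24):
    """With fully isolated iSCSI networks, an interface can only reach
    portals inside its own subnet. iface_addrs maps interface name to
    that interface's own IP; returns the reachable (iface, portal) pairs."""
    pairs = []
    for iface, addr in iface_addrs.items():
        net = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
        pairs.extend((iface, p) for p in portals
                     if ipaddress.ip_address(p) in net)
    return pairs

# Only the two same-subnet pairs survive; the cross-subnet logins that
# the iSCSI bond attempts can never succeed in this topology.
print(reachable_pairs({"eno3.11": "192.168.11.3", "eno4.12": "192.168.12.3"},
                      ["192.168.11.14", "192.168.12.14"]))
```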

I am not sure if this is a bug, a design flaw or a feature, but as 

Re: [ovirt-users] iSCSI Multipath issues

2017-07-25 Thread Vinícius Ferrão
Hello Maor,

Thanks for answering and looking deeper in this case. You’re welcome to connect 
to my machine since it’s reachable over the internet. I’ll be opening a ticket 
in moments. Just to feed an update here:

I’ve done what you asked, but since I’m running Self Hosted Engine, I lost the 
connection to HE, here’s the CLI:



Last login: Thu Jul 20 02:43:50 2017 from 172.31.2.3

  node status: OK
  See `nodectl check` for more information

Admin Console: https://192.168.11.3:9090/ or https://192.168.12.3:9090/ or 
https://146.164.37.103:9090/

[root@ovirt3 ~]# iscsiadm -m session -u
Logging out of session [sid: 1, target: 
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-he, portal: 192.168.12.14,3260]
Logging out of session [sid: 4, target: 
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.12.14,3260]
Logging out of session [sid: 7, target: 
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.12.14,3260]
Logging out of session [sid: 5, target: 
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.11.14,3260]
Logging out of session [sid: 6, target: 
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, portal: 192.168.11.14,3260]
Logout of [sid: 1, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-he, 
portal: 192.168.12.14,3260] successful.
Logout of [sid: 4, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, 
portal: 192.168.12.14,3260] successful.
Logout of [sid: 7, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, 
portal: 192.168.12.14,3260] successful.
Logout of [sid: 5, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, 
portal: 192.168.11.14,3260] successful.
Logout of [sid: 6, target: iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio, 
portal: 192.168.11.14,3260] successful.
[root@ovirt3 ~]# service iscsid stop
Redirecting to /bin/systemctl stop  iscsid.service
Warning: Stopping iscsid.service, but it can still be activated by:
  iscsid.socket

[root@ovirt3 ~]# mv /var/lib/iscsi/ifaces/* /tmp/ifaces

[root@ovirt3 ~]# service iscsid start
Redirecting to /bin/systemctl start  iscsid.service

And finally:

[root@ovirt3 ~]# hosted-engine --vm-status
.
.
.

It just hangs.

Thanks,
V.

> On 25 Jul 2017, at 05:54, Maor Lipchuk  wrote:
> 
> Hi Vinícius,
> 
> I was trying to reproduce your scenario and also encountered this
> issue, so please disregard my last comment. Can you please open a bug
> on that so we can investigate it properly?
> 
> Thanks,
> Maor
> 
> 
> On Tue, Jul 25, 2017 at 11:26 AM, Maor Lipchuk  wrote:
>> Hi Vinícius,
>> 
>> For some reason it looks like your networks are both connected to the same 
>> IPs.
>> 
>> based on the VDSM logs:
>> u'connectionParams':[
>>    {
>>       u'netIfaceName':u'eno3.11',
>>       u'connection':u'192.168.11.14',
>>    },
>>    {
>>       u'netIfaceName':u'eno3.11',
>>       u'connection':u'192.168.12.14',
>>    },
>>    {
>>       u'netIfaceName':u'eno4.12',
>>       u'connection':u'192.168.11.14',
>>    },
>>    {
>>       u'netIfaceName':u'eno4.12',
>>       u'connection':u'192.168.12.14',
>>    }
>> ],
>> 
>> Can you try to reconnect to the iSCSI storage domain after
>> re-initializing your iscsiadm on your host?
>> 
>> 1. Move your iSCSI storage domain to maintenance in oVirt by deactivating it
>> 
>> 2. In your VDSM host, log out from your iscsi open sessions which are
>> related to this storage domain
>> if that is your only iSCSI storage domain log out from all the sessions:
>>   "iscsiadm -m session -u"
>> 
>> 3. Stop the iscsid service:
>>   "service iscsid stop"
>> 
>> 4. Move your network interfaces configured in the iscsiadm to a
>> temporary folder:
>>mv /var/lib/iscsi/ifaces/* /tmp/ifaces
>> 
>> 5. Start the iscsid service
>>   "service iscsid start"
>> 
>> Regards,
>> Maor and Benny
>> 
>> On Wed, Jul 19, 2017 at 1:01 PM, Uwe Laverenz  wrote:
>>> Hi,
>>> 
>>> 
>>> Am 19.07.2017 um 04:52 schrieb Vinícius Ferrão:
>>> 
 I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m
 trying to enable the feature without success too.
 
 Here’s what I’ve done, step-by-step.
 
 1. Installed oVirt Node 4.1.3 with the following network settings:
 
 eno1 and eno2 on a 802.3ad (LACP) Bond, creating a bond0 interface.
 eno3 with 9216 MTU.
 eno4 with 9216 MTU.
 vlan11 on eno3 with 9216 MTU and fixed IP addresses.
 vlan12 on eno4 with 9216 MTU and fixed IP addresses.
 
 eno3 and eno4 are my iSCSI MPIO Interfaces, completely segregated, on
 different switches.
>>> 
>>> 
>>> This is the point: the OVirt implementation of iSCSI-Bonding assumes that
>>> all network interfaces in the bond can connect/reach all targets, including
>>> those in the other net(s). The fact that you use separate, isolated networks
>>> means that this is not the case in your setup (and not in mine).
>>> 
>>> I am 

Re: [ovirt-users] [Gluster-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-25 Thread yayo (j)
2017-07-25 11:31 GMT+02:00 Sahina Bose :

>
>> Other errors on unsync gluster elements still remain... This is a
>> production env, so, there is any chance to subscribe to RH support?
>>
>
> The unsynced entries - did you check for disconnect messages in the mount
> log as suggested by Ravi?
>
>
I have provided this (check past mails):

tail -f /var/log/glusterfs/rhev-data-center-mnt-glusterSD-dvirtgluster\:engine.log

Is that enough?

Thank you


[ovirt-users] oVirt memory quota

2017-07-25 Thread Staniforth, Paul
Hello,

When I restart the oVirt engine, all quotas show 100% usage for memory. If I
open the quota in edit mode and close it again, the memory used is updated.

I'm using 4.1.3.5-1.el7.centos


Regards,

   Paul S.

To view the terms under which this email is distributed, please go to:-
http://disclaimer.leedsbeckett.ac.uk/disclaimer/disclaimer.html


Re: [ovirt-users] Ovirt node ready for production env?

2017-07-25 Thread Yaniv Kaul
On Thu, Jul 20, 2017 at 7:42 PM, Vinícius Ferrão  wrote:

> Hello Lionel,
>
> Production ready? Definitely yes. Red Hat even sells RHV-H, which is
> the same thing as oVirt Node. But keep in mind one thing: it's an
> appliance, so modifications to the appliance aren't really supported. As far
> as I know oVirt Node is based on imgbase and updates/security are done
> through yum. But when an update is made everything is rewritten, so you
> will lose your modifications if you install additional packages on oVirt
> Node.
>
> The host is stateless, so you don't really need to back it up; the core is
> running on the hosted engine.
>
> About the other questions, I can't add anything since I'm new to oVirt
> too. Perhaps someone could complete my answer.
>

The answers above are inaccurate wrt recent oVirt node, which:
1. does allow you to install additional packages (via 'yum')
2. and does save them between upgrades.

Y.


>
> V.
>
> Sent from my iPhone
>
> > On 20 Jul 2017, at 03:59, Lionel Caignec  wrote:
> >
> > Hi,
> >
> > I did not test it myself so I prefer asking before using it (
> https://www.ovirt.org/node/).
> > Can oVirt Node be used in a production environment?
> > Is it possible to add some software on the host (e.g. backup tools, ossec,
> ...)?
> > How do security updates work? Are they managed by oVirt, or can I plug
> oVirt Node into Spacewalk/Katello?
> >
> >
> > Sorry for my "noobs question"
> >
> > Regards
> > --
> > Lionel
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


Re: [ovirt-users] oVirt and Foreman

2017-07-25 Thread Davide Ferrari



On 25/07/17 12:19, Maton, Brett wrote:
Last time I looked at creating VMs from Foreman there was a problem
with the compute resource being passed from the Foreman plugin to the
oVirt API.


Can't remember exactly what was being sent, but it didn't match any
available oVirt 'instance type', which is why it was failing to create
the machine.


Not sure if you're facing the same issue, but maybe worth looking into...

Well, actually I can create a VM both from the Foreman UI and the Hammer CLI;
the problem arises when I'm trying to add more disks to that VM.



Re: [ovirt-users] oVirt and Foreman

2017-07-25 Thread Maton, Brett
Last time I looked at creating VMs from Foreman there was a problem with
the compute resource being passed from the Foreman plugin to the oVirt API.

Can't remember exactly what was being sent, but it didn't match any
available oVirt 'instance type', which is why it was failing to create the
machine.

Not sure if you're facing the same issue, but maybe worth looking into...

On 25 July 2017 at 09:59, Davide Ferrari  wrote:

> Hello
>
> I've attached logs from:
>
> - hammer cli (debug) with the command line I've used
>
> - foreman logs
>
> - ovirt engine logs (server.log)
>
>
> Basically I was trying to create a VM from an oVirt template linked to a
> Foreman image (CentOS_73) which consists of a single disk with the OS, and
> attach 2 more disks via Hammer. In this case I get a 404 Resource Not Found
> from Foreman, and what I see in the oVirt logs is that the VM is created and
> then immediately deleted via the API.
>
>
> Thanks!
>
> On 24/07/17 20:56, Oved Ourfali wrote:
>
> CC-ing Ohad and Ivan from the Foreman team to take a look.
>
> Also, by default, RHV 4.1 will use v4 of the api, so you have to use a URL
> in Foreman that uses v3 (as Foreman doesn't support v4 yet).
>
> I assume that's not your issue, otherwise you would have encountered more
> basic issues.
>
> Also, can you please share your logs from both environments?
>
> Ohad/Ivan, any clue?
>
> Thanks,
> Oved
>
> On Jul 24, 2017 18:08, "Davide Ferrari"  wrote:
>
> Hello list
>
>
> is anybody successfully using oVirt + Foreman for VM creation +
> provisioning?
>
> I'm using Foreman (latest version, 1.15.2) with the latest oVirt version
> (4.1.3) but I'm encountering several problems, especially related to disks.
> For example:
>
> - cannot create a VM with multiple disks through the Foreman CLI (hammer)
>
> - if I create a multidisk VM from Foreman, the second disk always gets the
> "bootable" flag and not the primary image, making the VMs not bootable at
> all.
>
>
> Any other Foreman user sharing the pain here? Foreman's list is not so
> useful so I'm trying to ask here. How do you programmatically create
> virtual machines with oVirt and Foreman? Should I switch to directly using
> the oVirt API?
>
> Thanks in advance
>
> Davide
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
>


Re: [ovirt-users] iSCSI Multipath issues

2017-07-25 Thread Maor Lipchuk
Can you please try to connect to your iSCSI server using iscsiadm from
your VDSM host, for example like so:
   iscsiadm -m node -T
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio -I eno3.11 -p
192.168.12.14,3260 --login

On Tue, Jul 25, 2017 at 11:54 AM, Maor Lipchuk  wrote:
> Hi Vinícius,
>
> I was trying to reproduce your scenario and also encountered this
> issue, so please disregard my last comment. Can you please open a bug
> on that so we can investigate it properly?
>
> Thanks,
> Maor
>
>
> On Tue, Jul 25, 2017 at 11:26 AM, Maor Lipchuk  wrote:
>> Hi Vinícius,
>>
>> For some reason it looks like your networks are both connected to the same 
>> IPs.
>>
>> based on the VDSM logs:
>> u'connectionParams':[
>>    {
>>       u'netIfaceName':u'eno3.11',
>>       u'connection':u'192.168.11.14',
>>    },
>>    {
>>       u'netIfaceName':u'eno3.11',
>>       u'connection':u'192.168.12.14',
>>    },
>>    {
>>       u'netIfaceName':u'eno4.12',
>>       u'connection':u'192.168.11.14',
>>    },
>>    {
>>       u'netIfaceName':u'eno4.12',
>>       u'connection':u'192.168.12.14',
>>    }
>> ],
>>
>> Can you try to reconnect to the iSCSI storage domain after
>> re-initializing your iscsiadm on your host?
>>
>> 1. Move your iSCSI storage domain to maintenance in oVirt by deactivating it
>>
>> 2. In your VDSM host, log out from your iscsi open sessions which are
>> related to this storage domain
>> if that is your only iSCSI storage domain log out from all the sessions:
>>"iscsiadm -m session -u"
>>
>> 3. Stop the iscsid service:
>>"service iscsid stop"
>>
>> 4. Move your network interfaces configured in the iscsiadm to a
>> temporary folder:
>> mv /var/lib/iscsi/ifaces/* /tmp/ifaces
>>
>> 5. Start the iscsid service
>>"service iscsid start"
>>
>> Regards,
>> Maor and Benny
>>
>> On Wed, Jul 19, 2017 at 1:01 PM, Uwe Laverenz  wrote:
>>> Hi,
>>>
>>>
>>> Am 19.07.2017 um 04:52 schrieb Vinícius Ferrão:
>>>
 I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m
 trying to enable the feature without success too.

 Here’s what I’ve done, step-by-step.

 1. Installed oVirt Node 4.1.3 with the following network settings:

 eno1 and eno2 on a 802.3ad (LACP) Bond, creating a bond0 interface.
 eno3 with 9216 MTU.
 eno4 with 9216 MTU.
 vlan11 on eno3 with 9216 MTU and fixed IP addresses.
 vlan12 on eno4 with 9216 MTU and fixed IP addresses.

 eno3 and eno4 are my iSCSI MPIO Interfaces, completely segregated, on
 different switches.
>>>
>>>
>>> This is the point: the OVirt implementation of iSCSI-Bonding assumes that
>>> all network interfaces in the bond can connect/reach all targets, including
>>> those in the other net(s). The fact that you use separate, isolated networks
>>> means that this is not the case in your setup (and not in mine).
>>>
>>> I am not sure if this is a bug, a design flaw or a feature, but as a result
>>> of this OVirt's iSCSI-Bonding does not work for us.
>>>
>>> Please see my mail from yesterday for a workaround.
>>>
>>> cu,
>>> Uwe
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Gluster-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-25 Thread Sahina Bose
On Tue, Jul 25, 2017 at 1:45 PM, yayo (j)  wrote:

> 2017-07-25 7:42 GMT+02:00 Kasturi Narra :
>
>> These errors are because glusternw is not assigned to the correct
>> interface. Once you attach that, these errors should go away. This has
>> nothing to do with the problem you are seeing.
>>
>
> Hi,
>
> You talking  about errors like these?
>
> 2017-07-24 15:54:02,209+02 WARN  [org.ovirt.engine.core.vdsbro
> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4]
> Could not associate brick 'gdnode01:/gluster/engine/brick' of volume
> 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no gluster
> network found in cluster '0002-0002-0002-0002-017a'
>
>
> How to assign "glusternw (???)" to the correct interface?
>

https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
"Storage network" section explains this. Please make sure that gdnode01 is
resolvable from engine.



>
> Other errors on unsync gluster elements still remain... This is a
> production env, so, there is any chance to subscribe to RH support?
>

The unsynced entries - did you check for disconnect messages in the mount
log as suggested by Ravi?

For Red Hat support, the best option is to contact your local Red Hat
representative.


> Thank you
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>


Re: [ovirt-users] oVirt and Foreman

2017-07-25 Thread Davide Ferrari

Hello

I've attached logs from:

- hammer cli (debug) with the command line I've used

- foreman logs

- ovirt engine logs (server.log)


Basically I was trying to create a VM from an oVirt template linked to a
Foreman image (CentOS_73) which consists of a single disk with the OS,
and attach 2 more disks via Hammer. In this case I get a 404 Resource
Not Found from Foreman, and what I see in the oVirt logs is that the VM
is created and then immediately deleted via the API.



Thanks!


On 24/07/17 20:56, Oved Ourfali wrote:

CC-ing Ohad and Ivan from the Foreman team to take a look.

Also, by default, RHV 4.1 will use v4 of the api, so you have to use a 
URL in Foreman that uses v3 (as Foreman doesn't support v4 yet).


I assume that's not your issue, otherwise you would have encountered 
more basic issues.


Also, can you please share your logs from both environments?

Ohad/Ivan, any clue?

Thanks,
Oved

On Jul 24, 2017 18:08, "Davide Ferrari" wrote:


Hello list


is anybody successfully using oVirt + Foreman for VM creation +
provisioning?

I'm using Foreman (latest version, 1.15.2) with the latest oVirt
version (4.1.3) but I'm encountering several problems, especially
related to disks. For example:

    - cannot create a VM with multiple disks through the Foreman CLI (hammer)

    - if I create a multi-disk VM from Foreman, the second disk always
    gets the "bootable" flag instead of the primary image, making the VMs
    not bootable at all.


    Any other Foreman users sharing the pain here? Foreman's list is not
    so useful, so I'm trying to ask here. How do you programmatically
    create virtual machines with oVirt and Foreman? Should I switch to
    using the oVirt API directly?

Thanks in advance

Davide

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users


hammer-ovirt-logs.tar.gz
Description: application/gzip
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI Multipath issues

2017-07-25 Thread Maor Lipchuk
Hi Vinícius,

I was trying to reproduce your scenario and also encountered this
issue, so please disregard my last comment. Can you please open a bug
on that so we can investigate it properly?

Thanks,
Maor


On Tue, Jul 25, 2017 at 11:26 AM, Maor Lipchuk  wrote:
> Hi Vinícius,
>
> For some reason it looks like your networks are both connected to the same 
> IPs.
>
> based on the  VDSM logs:
>   u'connectionParams':[
>  {
> u'netIfaceName':u'eno3.11',
> u'connection':u'192.168.11.14',
>  },
>  {
> u'netIfaceName':u'eno3.11',
> u'connection':u'192.168.12.14',
>  },
>  {
> u'netIfaceName':u'eno4.12',
> u'connection':u'192.168.11.14',
>  },
>  {
> u'netIfaceName':u'eno4.12',
> u'connection':u'192.168.12.14',
>  }
>   ],
>
> Can you try to reconnect to the iSCSI storage domain after
> re-initializing iscsiadm on your host?
>
> 1. Move your iSCSI storage domain to maintenance in oVirt by deactivating it.
>
> 2. On your VDSM host, log out from the open iSCSI sessions which are
> related to this storage domain.
> If that is your only iSCSI storage domain, log out from all the sessions:
>    "iscsiadm -m session -u"
>
> 3. Stop the iscsid service:
>    "service iscsid stop"
>
> 4. Move the network interface files configured by iscsiadm to a
> temporary folder:
> mv /var/lib/iscsi/ifaces/* /tmp/ifaces
>
> 5. Start the iscsid service:
>    "service iscsid start"
>
> Regards,
> Maor and Benny
>
> On Wed, Jul 19, 2017 at 1:01 PM, Uwe Laverenz  wrote:
>> Hi,
>>
>>
>> Am 19.07.2017 um 04:52 schrieb Vinícius Ferrão:
>>
>>> I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m
>>> trying to enable the feature without success too.
>>>
>>> Here’s what I’ve done, step-by-step.
>>>
>>> 1. Installed oVirt Node 4.1.3 with the following network settings:
>>>
>>> eno1 and eno2 on a 802.3ad (LACP) Bond, creating a bond0 interface.
>>> eno3 with 9216 MTU.
>>> eno4 with 9216 MTU.
>>> vlan11 on eno3 with 9216 MTU and fixed IP addresses.
>>> vlan12 on eno4 with 9216 MTU and fixed IP addresses.
>>>
>>> eno3 and eno4 are my iSCSI MPIO interfaces, completely segregated, on
>>> different switches.
>>
>>
>> This is the point: the OVirt implementation of iSCSI-Bonding assumes that
>> all network interfaces in the bond can connect/reach all targets, including
>> those in the other net(s). The fact that you use separate, isolated networks
>> means that this is not the case in your setup (and not in mine).
>>
>> I am not sure if this is a bug, a design flaw or a feature, but as a result
>> of this OVirt's iSCSI-Bonding does not work for us.
>>
>> Please see my mail from yesterday for a workaround.
>>
>> cu,
>> Uwe
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI Multipath issues

2017-07-25 Thread Nicolas Ecarnot

Le 25/07/2017 à 10:26, Maor Lipchuk a écrit :

Hi Vinícius,

For some reason it looks like your networks are both connected to the same IPs.


Hi,

Sorry to jump in this thread, but I'm concerned with this issue.

Correct me if I'm wrong, but in this thread many people are using 
Equallogic SANs, which provide only one virtual IP to connect to.


--
Nicolas ECARNOT
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI Multipath issues

2017-07-25 Thread Maor Lipchuk
Hi Vinícius,

For some reason it looks like your networks are both connected to the same IPs.

based on the VDSM logs:
  u'connectionParams':[
 {
u'netIfaceName':u'eno3.11',
u'connection':u'192.168.11.14',
 },
 {
u'netIfaceName':u'eno3.11',
u'connection':u'192.168.12.14',
 },
 {
u'netIfaceName':u'eno4.12',
u'connection':u'192.168.11.14',
 },
 {
u'netIfaceName':u'eno4.12',
u'connection':u'192.168.12.14',
 }
  ],
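
The cross-wiring above can be spotted mechanically. A minimal sketch, assuming the interface naming convention used in this thread (the VLAN suffix of the iface name matches the third octet of its subnet, e.g. eno3.11 -> 192.168.11.x) — this convention is an assumption from this setup, not a general rule:

```shell
# Reads "iface target-ip" pairs on stdin and flags pairs where the target's
# third octet does not match the VLAN suffix of the interface name.
check_pairs() {
    while read -r iface ip; do
        vlan=${iface##*.}                    # "eno3.11" -> "11"
        subnet=$(echo "$ip" | cut -d. -f3)   # "192.168.12.14" -> "12"
        [ "$vlan" != "$subnet" ] && echo "MISMATCH: $iface -> $ip"
    done
    return 0
}
```

Feeding it the four iface/connection pairs from the log excerpt flags the two crossed sessions (eno3.11 -> 192.168.12.14 and eno4.12 -> 192.168.11.14), which should not exist in an isolated two-network setup.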

Can you try to reconnect to the iSCSI storage domain after
re-initializing iscsiadm on your host?

1. Move your iSCSI storage domain to maintenance in oVirt by deactivating it.

2. On your VDSM host, log out from the open iSCSI sessions which are
related to this storage domain.
If that is your only iSCSI storage domain, log out from all the sessions:
   "iscsiadm -m session -u"

3. Stop the iscsid service:
   "service iscsid stop"

4. Move the network interface files configured by iscsiadm to a
temporary folder:
   mv /var/lib/iscsi/ifaces/* /tmp/ifaces

5. Start the iscsid service:
   "service iscsid start"

Regards,
Maor and Benny
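
The host-side steps above (2-5; step 1 stays manual in the oVirt UI) can be consolidated into a small script. This is only a sketch: the service names and the /var/lib/iscsi/ifaces path are the EL7 defaults and should be verified on your host before running.

```shell
#!/bin/sh
# Sketch of the reset procedure above; review with DRY_RUN=1 before running
# for real. Paths and service names are assumptions (EL7 defaults).
iscsi_reset() {
    # With DRY_RUN=1, print each command instead of executing it.
    run() {
        if [ "${DRY_RUN:-0}" = "1" ]; then echo "would run: $*"; else "$@"; fi
    }
    run iscsiadm -m session -u                           # 2. log out of all sessions
    run service iscsid stop                              # 3. stop iscsid
    run mkdir -p /tmp/ifaces                             # 4. park stale iface files
    run sh -c 'mv /var/lib/iscsi/ifaces/* /tmp/ifaces/'
    run service iscsid start                             # 5. start iscsid again
}

# Review first, then execute:
#   DRY_RUN=1 iscsi_reset
#   iscsi_reset
```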

On Wed, Jul 19, 2017 at 1:01 PM, Uwe Laverenz  wrote:
> Hi,
>
>
> Am 19.07.2017 um 04:52 schrieb Vinícius Ferrão:
>
>> I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m
>> trying to enable the feature without success too.
>>
>> Here’s what I’ve done, step-by-step.
>>
>> 1. Installed oVirt Node 4.1.3 with the following network settings:
>>
>> eno1 and eno2 on a 802.3ad (LACP) Bond, creating a bond0 interface.
>> eno3 with 9216 MTU.
>> eno4 with 9216 MTU.
>> vlan11 on eno3 with 9216 MTU and fixed IP addresses.
>> vlan12 on eno4 with 9216 MTU and fixed IP addresses.
>>
>> eno3 and eno4 are my iSCSI MPIO interfaces, completely segregated, on
>> different switches.
>
>
> This is the point: the OVirt implementation of iSCSI-Bonding assumes that
> all network interfaces in the bond can connect/reach all targets, including
> those in the other net(s). The fact that you use separate, isolated networks
> means that this is not the case in your setup (and not in mine).
>
> I am not sure if this is a bug, a design flaw or a feature, but as a result
> of this OVirt's iSCSI-Bonding does not work for us.
>
> Please see my mail from yesterday for a workaround.
>
> cu,
> Uwe
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-25 Thread yayo (j)
2017-07-25 7:42 GMT+02:00 Kasturi Narra :

> These errors are because not having glusternw assigned to the correct
> interface. Once you attach that these errors should go away.  This has
> nothing to do with the problem you are seeing.
>

Hi,

Are you talking about errors like these?

2017-07-24 15:54:02,209+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [b7590c4] Could not associate brick
'gdnode01:/gluster/engine/brick' of volume 'd19c19e3-910d-437b-8ba7-4f2a23d17515'
with correct network as no gluster network found in cluster
'0002-0002-0002-0002-017a'
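
When there are many of these warnings, the affected bricks can be listed with a quick filter over engine.log. A sketch (the log path below is the usual engine default and may differ on your install):

```shell
# Extracts the unique brick names from "Could not associate brick" warnings.
# Reads engine.log text on stdin, e.g.:
#   bricks_without_network < /var/log/ovirt-engine/engine.log
bricks_without_network() {
    grep 'Could not associate brick' | grep -o "brick '[^']*'" | sort -u
}
```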


How do I assign "glusternw" to the correct interface?

Other errors about unsynced gluster elements still remain... This is a
production environment, so is there any chance to subscribe to RH support?

Thank you
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migration network used also as vm network

2017-07-25 Thread Luca 'remix_tj' Lorenzetto
Hello Gianluca,

As far as I know this shouldn't be a problem; you can have IPs on any
interface.

The only drawback can be migration performance, if the VMs on that
interface generate too much traffic.

Luca

On 25 Jul 2017 9:25 AM, "Gianluca Cecchi" wrote:

> Hello,
> I have a 10Gbit vlan defined as migration network and currently not
> enabled as vm network.
> When you configure a host interface, assigning a vlan that is defined as
> migration network, you must assign an ip to it on the host.
>
> Suppose I want to edit this vlan in DC so that I enable it to be also a VM
> network, are there any drawbacks having for example on a host the ip for
> this vlan (the migration ip) and also one or more running VMs with their
> vnics configured on this vlan too...?
>
> Thanks in advance,
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migration network used also as vm network

2017-07-25 Thread Michael Burman
Hello Gianluca,

This should work as expected.
The only drawback could be load on the link in case of heavy traffic:
if, for example, the VMs use most of the link's bandwidth, it may affect
migration, and vice versa.

Cheers)

On Tue, Jul 25, 2017 at 10:24 AM, Gianluca Cecchi  wrote:

> Hello,
> I have a 10Gbit vlan defined as migration network and currently not
> enabled as vm network.
> When you configure a host interface, assigning a vlan that is defined as
> migration network, you must assign an ip to it on the host.
>
> Suppose I want to edit this vlan in DC so that I enable it to be also a VM
> network, are there any drawbacks having for example on a host the ip for
> this vlan (the migration ip) and also one or more running VMs with their
> vnics configured on this vlan too...?
>
> Thanks in advance,
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Michael Burman
RedHat Israel, RHV-M Network QE

Mobile: 054-5355725
IRC: mburman
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Migration network used also as vm network

2017-07-25 Thread Gianluca Cecchi
Hello,
I have a 10Gbit vlan defined as migration network and currently not enabled
as vm network.
When you configure a host interface, assigning a vlan that is defined as
migration network, you must assign an ip to it on the host.

Suppose I want to edit this vlan in DC so that I enable it to be also a VM
network, are there any drawbacks having for example on a host the ip for
this vlan (the migration ip) and also one or more running VMs with their
vnics configured on this vlan too...?

Thanks in advance,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Gluster-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-25 Thread Sahina Bose
On Tue, Jul 25, 2017 at 11:12 AM, Kasturi Narra  wrote:

> These errors are because not having glusternw assigned to the correct
> interface. Once you attach that these errors should go away.  This has
> nothing to do with the problem you are seeing.
>
> sahina any idea about engine not showing the correct volume info ?
>

Please provide the vdsm.log (containing the gluster volume info) and
engine.log.


> On Mon, Jul 24, 2017 at 7:30 PM, yayo (j)  wrote:
>
>> Hi,
>>
>> UI refreshed, but the problem still remains...
>>
>> No specific error; I only have these errors, but I've read that this
>> kind of error is not a problem:
>>
>>
>> 2017-07-24 15:53:59,823+02 INFO  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler2)
>> [b7590c4] START, GlusterServersListVDSCommand(HostName =
>> node01.localdomain.local, VdsIdVDSCommandParametersBase:{runAsync='true',
>> hostId='4c89baa5-e8f7-4132-a4b3-af332247570c'}), log id: 29a62417
>> 2017-07-24 15:54:01,066+02 INFO  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler2)
>> [b7590c4] FINISH, GlusterServersListVDSCommand, return: 
>> [10.10.20.80/24:CONNECTED,
>> node02.localdomain.local:CONNECTED, gdnode04:CONNECTED], log id: 29a62417
>> 2017-07-24 15:54:01,076+02 INFO  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler2)
>> [b7590c4] START, GlusterVolumesListVDSCommand(HostName =
>> node01.localdomain.local, GlusterVolumesListVDSParameters:{runAsync=
>> 'true', hostId='4c89baa5-e8f7-4132-a4b3-af332247570c'}), log id: 7fce25d3
>> 2017-07-24 15:54:02,209+02 WARN  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2)
>> [b7590c4] Could not associate brick 'gdnode01:/gluster/engine/brick' of
>> volume 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no
>> gluster network found in cluster '0002-0002-0002-0002-017a'
>> 2017-07-24 15:54:02,212+02 WARN  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2)
>> [b7590c4] Could not associate brick 'gdnode02:/gluster/engine/brick' of
>> volume 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no
>> gluster network found in cluster '0002-0002-0002-0002-017a'
>> 2017-07-24 15:54:02,215+02 WARN  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2)
>> [b7590c4] Could not associate brick 'gdnode04:/gluster/engine/brick' of
>> volume 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no
>> gluster network found in cluster '0002-0002-0002-0002-017a'
>> 2017-07-24 15:54:02,218+02 WARN  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2)
>> [b7590c4] Could not associate brick 'gdnode01:/gluster/data/brick' of
>> volume 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no
>> gluster network found in cluster '0002-0002-0002-0002-017a'
>> 2017-07-24 15:54:02,221+02 WARN  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2)
>> [b7590c4] Could not associate brick 'gdnode02:/gluster/data/brick' of
>> volume 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no
>> gluster network found in cluster '0002-0002-0002-0002-017a'
>> 2017-07-24 15:54:02,224+02 WARN  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2)
>> [b7590c4] Could not associate brick 'gdnode04:/gluster/data/brick' of
>> volume 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no
>> gluster network found in cluster '0002-0002-0002-0002-017a'
>> 2017-07-24 15:54:02,224+02 INFO  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler2)
>> [b7590c4] FINISH, GlusterVolumesListVDSCommand, return: {d19c19e3-910d
>> -437b-8ba7-4f2a23d17515=org.ovirt.engine.core.
>> common.businessentities.gluster.GlusterVolumeEntity@fdc91062, c7a5dfc9
>> -3e72-4ea1-843e-c8275d4a7c2d=org.ovirt.engine.core.c
>> ommon.businessentities.gluster.GlusterVolumeEntity@999a6f23}, log id: 7
>> fce25d3
>>
>>
>> Thank you
>>
>>
>> 2017-07-24 8:12 GMT+02:00 Kasturi Narra :
>>
>>> Hi,
>>>
>>>Regarding the UI showing incorrect information about the engine and data
>>> volumes, can you please refresh the UI and see if the issue persists, plus
>>> check for any errors in the engine.log files?
>>>
>>> Thanks
>>> kasturi
>>>
>>> On Sat, Jul 22, 2017 at 11:43 AM, Ravishankar N 
>>> wrote:
>>>

 On 07/21/2017 11:41 PM, yayo (j) wrote:

 Hi,

 Sorry to follow up again, but checking the oVirt interface I've found
 that oVirt reports the "engine" volume as an "arbiter" configuration and the
 "data" volume as a fully replicated volume.

Re: [ovirt-users] Problemas with ovirtmgmt network used to connect VMs

2017-07-25 Thread Edward Haas
On Tue, Jul 25, 2017 at 12:20 AM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:

> Hello Edward, this happened again today and I was able to check more
> details.
>
> So:
>
> - The VM stopped passing any network traffic.
> - Checking 'brctl showmacs ovirtmgmt' it showed the VM's mac address
> missing.
> - I then went to oVirt Engine, under VM's 'Network Interfaces' tab,
> clicked Edit and changed the Link State to Down then to Up and it recovered
> its connectivity.
> - Another 'brctl showmacs ovirtmgmt' showed the VM's mac address learned
> again by the bridge.
>
> This Node server has the particularity of sharing ovirtmgmt with VMs.
> Could that possibly be the cause of the issue in some way?
>
There is nothing special about the ovirtmgmt bridge in this regard.
I suggest you check the VM itself to see if it tries to send traffic; the
fact that the MAC is not appearing in the bridge table is an outcome of no
traffic passing from the vNIC with that source MAC address.
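
One way to watch for this is to parse the `brctl showmacs` output periodically and report whether a given MAC is still known to the bridge, and its ageing timer. A sketch (the MAC address below is a placeholder, and the column layout is the usual bridge-utils one):

```shell
# Given "brctl showmacs <bridge>" output on stdin, print the ageing timer of
# the MAC passed as $1, or "missing" if the bridge no longer knows it.
# Column layout assumed: port no, mac addr, is local?, ageing timer.
mac_age() {
    awk -v mac="$1" '
        tolower($2) == tolower(mac) { print $4; found = 1 }
        END { if (!found) print "missing" }'
}

# Example (run from cron or watch to catch the moment the entry vanishes):
#   brctl showmacs ovirtmgmt | mac_age 52:54:00:aa:bb:cc
```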

Thanks
> Fernando
>
> On 24/07/2017 09:47, FERNANDO FREDIANI wrote:
>
> Not tried this yet Edwardh, but I will next time it happens. The
> source MAC address should be the same as the VM's. I don't see any reason
> for it to change, either from within the VM or outside.
>
> What type of things would make the bridge stop learning a given VM's MAC
> address?
>
> Fernando
>
> On 23/07/2017 07:51, Edward Haas wrote:
>
> Have you tried to use tcpdump at the VM vNIC to examine if there is
> traffic trying to get out from there? And with what source mac address?
>
> Thanks,
> Edy,
>
> On Fri, Jul 21, 2017 at 5:36 PM, FERNANDO FREDIANI <
> fernando.fredi...@upx.com> wrote:
>
>> Has anyone had problem when using the ovirtmgmt bridge to connect VMs ?
>>
>> I am still facing a bizarre problem where some VMs connected to this
>> bridge stop passing traffic. Checking further, I see the VM's MAC address
>> stops being learned by the bridge, and the problem is resolved only
>> with a VM reboot.
>>
>> When I last saw the problem I ran "brctl showmacs ovirtmgmt" and it showed
>> the VM's MAC address with ageing timer 200.19. After the VM reboot I see
>> the same MAC with ageing timer 0.00.
>> I don't see this in another environment where ovirtmgmt is not used for
>> VMs.
>>
>> Does anyone have any clue about this type of behavior ?
>>
>> Fernando
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users