[ovirt-users] Re: Bridge not forwarding frames on node.

2020-05-15 Thread Stefano Danzi
I've just rebooted the oVirt node and now it works, without any changes
to the configuration.

I can't imagine why.

On 15/05/2020 15:10, Dominik Holler wrote:



Hi! I have to check, but it is strange.
ARP replies originating from the VM itself have no problems; only ARP
replies that came from the TAP device in the VM were not forwarded to
the real LAN.


Do you have a TAP device inside the VM?


Yes! This VM acts as an L2 VPN server. Inside the VM, the TAP device is
bridged with the VM's LAN adapter.


This should work, so let me ask some detailed questions:

Does the issue reproduce if you are using a single NIC instead of a bond?

Can you please share the output of
bridge fdb show br ovirtmgmt
and
brctl showmacs ovirtmgmt
while replacing ovirtmgmt with the name of your bridge?
Which MAC addresses in the output are the relevant ones (bridge/bond,
vNIC and the TUN device)?


What is the output of
ebtables -t filter -L
?
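
For reference, the checks above combine into a quick diagnostic pass on
the node (a sketch; "ovirtmgmt" stands in for the actual bridge name):

    # dump the bridge forwarding database and the learned MACs
    bridge fdb show br ovirtmgmt
    brctl showmacs ovirtmgmt
    # list any ebtables rules that could be dropping the ARP replies
    ebtables -t filter -L
    # check whether bridged traffic is being diverted through iptables
    sysctl net.bridge.bridge-nf-call-iptables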

The thread
[ovirt-users] DHCP Client in Guest VM does not work on ovirtmgmt
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/566IC5K2B2JJV77ZQO73KGJNMRJNQ67X/#566IC5K2B2JJV77ZQO73KGJNMRJNQ67X
might be similar.



Exactly as described in bz1279161 (which is marked as solved in
bz1135347, but that one is not public and I can't read it).


Unfortunately BZ1135347 does not look helpful here.






[ovirt-users] Re: Bridge not forwarding frames on node.

2020-05-15 Thread Stefano Danzi



On 15/05/2020 14:29, Dominik Holler wrote:



On Fri, May 15, 2020 at 9:35 AM Stefano Danzi <s.da...@hawai.it> wrote:




On 14/05/2020 20:13, Strahil Nikolov wrote:
> On May 14, 2020 6:16:06 PM GMT+03:00, Stefano Danzi <s.da...@hawai.it> wrote:
>>
>> On 14/05/2020 12:50, Dominik Holler wrote:
>>>
>>> On Wed, May 13, 2020 at 9:44 PM s.danzi <s.da...@hawai.it> wrote:
>>>
>>>      Hi to all!
>>>
>>>      I'm having an issue with network bridges on an oVirt node.
>>>
>>>      It looks like this bug:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1279161
>>>
>>>      In the VM I have a bridge between a TAP device and the network
>>>      interface. On the node side the interface is bridged with bond0
>>>      VLAN 128 (bond0.128, LACP).
>>>
>>>      When I ping a host on the other side of the TAP device I see this:
>>>      the ARP request goes from my LAN to the TAP device in the VM. The
>>>      ARP reply returns from the VM's TAP device and the bridge forwards
>>>      it to the VM's network interface. Using tcpdump on the VM interface
>>>      on the node I can see the ARP reply; using tcpdump on bond0.128 or
>>>      on the bridge I can't see the ARP reply. The ARP request is
>>>      forwarded from bond0.128 to the VM network, but the ARP reply isn't
>>>      forwarded from the VM network back to bond0.128.
>>>
>>>
>>>
>>> Any chance that there is network filtering involved?
>>> Please check if the related vNIC profile has No Network Filter.
>>> If there is a Network Filter set, please shut down the VM, set No
>>> Network Filter in the vNIC profile, start the VM again, and check
>>> whether the issue is gone.
>> Hi! No Network Filter; that was my first check.


Did you power off the VM after removing the network filter from the
vNIC profile?
There is currently no indication when the running vNIC configuration
does not match the desired configuration (BZ1113630).

Yes, of course.


> Have you checked the MTU?
> You need to keep it a little bit lower on the VM, as you have a VLAN
> on the hypervisor.
>
> Best Regards,
> Strahil Nikolov
Hi! I have to check, but it is strange.
ARP replies originating from the VM itself have no problems; only ARP replies
that came from the TAP device in the VM were not forwarded to the real LAN.


Do you have a TAP device inside the VM?


Yes! This VM acts as an L2 VPN server. Inside the VM, the TAP device is
bridged with the VM's LAN adapter.



Exactly as described in bz1279161 (which is marked as solved in bz1135347,
but that one is not public and I can't read it).


Unfortunately BZ1135347 does not look helpful here.




[ovirt-users] Re: Bridge not forwarding frames on node.

2020-05-15 Thread Stefano Danzi



On 14/05/2020 20:13, Strahil Nikolov wrote:

On May 14, 2020 6:16:06 PM GMT+03:00, Stefano Danzi  wrote:


On 14/05/2020 12:50, Dominik Holler wrote:


On Wed, May 13, 2020 at 9:44 PM s.danzi <s.da...@hawai.it> wrote:

 Hi to all!

 I'm having an issue with network bridges on an oVirt node.

 It looks like this bug:
 https://bugzilla.redhat.com/show_bug.cgi?id=1279161

 In the VM I have a bridge between a TAP device and the network interface.
 On the node side the interface is bridged with bond0 VLAN 128
 (bond0.128, LACP).

 When I ping a host on the other side of the TAP device I see this:

 the ARP request goes from my LAN to the TAP device in the VM. The ARP reply
 returns from the VM's TAP device and the bridge forwards it to the VM's
 network interface. Using tcpdump on the VM interface on the node I can see
 the ARP reply; using tcpdump on bond0.128 or on the bridge I can't see the
 ARP reply. The ARP request is forwarded from bond0.128 to the VM network,
 but the ARP reply isn't forwarded from the VM network back to bond0.128.



Any chance that there is network filtering involved?
Please check if the related vNIC profile has No Network Filter.
If there is a Network Filter set, please shut down the VM, set No
Network Filter in the vNIC profile, start the VM again, and check
whether the issue is gone.

Hi! No Network Filter; that was my first check.

Have you checked the MTU?
You need to keep it a little bit lower on the VM, as you have a VLAN on the
hypervisor.

Best Regards,
Strahil Nikolov

Hi! I have to check, but it is strange.
ARP replies originating from the VM itself have no problems; only ARP replies
that came from the TAP device in the VM were not forwarded to the real LAN.
Exactly as described in bz1279161 (which is marked as solved in bz1135347,
but that one is not public and I can't read it).
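
For reference, a quick way to check for the MTU problem Strahil describes
(a minimal sketch; the interface names are the ones from this thread):

    # compare MTUs on the node side of the path
    ip -br link | grep -E 'bond0|ovirtmgmt'
    # from inside the VM, probe with a full-size, non-fragmentable packet
    # (1472 = 1500 - 28 bytes of IP/ICMP headers; adjust to your MTU)
    ping -M do -s 1472 <host-on-the-other-side-of-the-tap>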



[ovirt-users] Re: Bridge not forwarding frames on node.

2020-05-15 Thread Stefano Danzi

I already removed the network filter. This is not the problem :(

On 14/05/2020 23:20, Giulio Casella wrote:

Maybe you can check the VM network filter.
Take a look at Network -> vNIC profile ->  and
choose Edit. If "Network Filter" has the default value
"vdsm-no-mac-spoofing", it can prevent normal bridge behaviour. Maybe
"No network filter" can do the magic.


HTH.

Cheers,
Giulio

On 14/05/2020 17:16, Stefano Danzi wrote:



On 14/05/2020 12:50, Dominik Holler wrote:



On Wed, May 13, 2020 at 9:44 PM s.danzi <s.da...@hawai.it> wrote:


    Hi to all!

    I'm having an issue with network bridges on an oVirt node.

    It looks like this bug:
    https://bugzilla.redhat.com/show_bug.cgi?id=1279161

    In the VM I have a bridge between a TAP device and the network
    interface. On the node side the interface is bridged with bond0
    VLAN 128 (bond0.128, LACP).

    When I ping a host on the other side of the TAP device I see this:

    the ARP request goes from my LAN to the TAP device in the VM. The ARP
    reply returns from the VM's TAP device and the bridge forwards it to
    the VM's network interface. Using tcpdump on the VM interface on the
    node I can see the ARP reply; using tcpdump on bond0.128 or on the
    bridge I can't see the ARP reply. The ARP request is forwarded from
    bond0.128 to the VM network, but the ARP reply isn't forwarded from
    the VM network back to bond0.128.



Any chance that there is network filtering involved?
Please check if the related vNIC profile has No Network Filter.
If there is a Network Filter set, please shut down the VM, set No
Network Filter in the vNIC profile, start the VM again, and check
whether the issue is gone.


Hi! No Network Filter; that was my first check.




[ovirt-users] Re: Bridge not forwarding frames on node.

2020-05-14 Thread Stefano Danzi



On 14/05/2020 12:50, Dominik Holler wrote:



On Wed, May 13, 2020 at 9:44 PM s.danzi wrote:


Hi to all!

I'm having an issue with network bridges on an oVirt node.

It looks like this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1279161

In the VM I have a bridge between a TAP device and the network interface.
On the node side the interface is bridged with bond0 VLAN 128
(bond0.128, LACP).

When I ping a host on the other side of the TAP device I see this:

the ARP request goes from my LAN to the TAP device in the VM. The ARP reply
returns from the VM's TAP device and the bridge forwards it to the VM's
network interface. Using tcpdump on the VM interface on the node I can see
the ARP reply; using tcpdump on bond0.128 or on the bridge I can't see the
ARP reply. The ARP request is forwarded from bond0.128 to the VM network,
but the ARP reply isn't forwarded from the VM network back to bond0.128.



Any chance that there is network filtering involved?
Please check if the related vNIC profile has No Network Filter.
If there is a Network Filter set, please shut down the VM, set No
Network Filter in the vNIC profile, start the VM again, and check
whether the issue is gone.
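
A way to double-check from the host which filter, if any, is actually
applied to the running vNIC (a hedged sketch; the VM name is a
placeholder, and the read-only connection may not be allowed to list
filters on every setup):

    # list the network filters libvirt knows about on the host
    virsh -r nwfilter-list
    # see whether the running VM's interface carries a filterref
    virsh -r dumpxml <vm-name> | grep -A2 filterref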


Hi! No Network Filter; that was my first check.



[ovirt-users] Re: Recover node from partial upgrade

2019-10-04 Thread Stefano Danzi

Hi!
Booting from the older image has no effect on this issue.
After a long debugging session I figured out that after the interrupted
upgrade there were two logical volumes left over:


/dev/onn_ovirtn01/ovirt-node-ng-4.3.6-0.20190926.0
/dev/onn_ovirtn01/ovirt-node-ng-4.3.6-0.20190926.0+1

When I executed "yum reinstall ovirt-node-ng-image-update", the RPM scripts
tried to create these LVs but failed because they already existed.

So the RPM script failed with a generic error.

After removing them, the script completed successfully and the upgrade was done.
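
Roughly, the recovery steps (a sketch reconstructed from this report;
the LV names are the ones listed above):

    # list the leftover layered LVs from the interrupted upgrade
    lvs onn_ovirtn01
    # remove the half-created image LVs (the '+1' layer first, then the base)
    lvremove /dev/onn_ovirtn01/ovirt-node-ng-4.3.6-0.20190926.0+1
    lvremove /dev/onn_ovirtn01/ovirt-node-ng-4.3.6-0.20190926.0
    # retry the image installation
    yum reinstall ovirt-node-ng-image-update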

It would be nice if this situation were handled correctly by the
installation scripts.


On 04/10/2019 17:19, Strahil wrote:

You can boot from the older image (grub menu) and then a cleanup procedure
should be performed.
I'm not sure if the thin LVs will be removed when you rerun the upgrade
process, or whether they will remain the same.

Best Regards,
Strahil Nikolov

On Oct 4, 2019 14:04, Stefano Danzi wrote:

Hello!!!

I upgraded oVirt Node 4.3.5.2 to 4.3.6 using the Engine UI.
In the middle of the upgrade the Engine fenced it; I can't understand why.

Now the Engine UI and yum report that there are no updates, but the node is
still at version 4.3.5.2.
If I run "yum reinstall ovirt-node-ng-image-update" the result is as
follows, but the system remains not upgraded.

ovirt-node-ng-image-update-4.3.6-1.el7.noarch.rpm | 719 MB  00:06:31
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
   Installing : ovirt-node-ng-image-update-4.3.6-1.el7.noarch 1/1
warning: %post(ovirt-node-ng-image-update-4.3.6-1.el7.noarch) scriptlet
failed, exit status 1
Non-fatal POSTIN scriptlet failure in rpm package
ovirt-node-ng-image-update-4.3.6-1.el7.noarch
Uploading Package Profile
Cannot upload package profile. Is this client registered?
   Verifying : ovirt-node-ng-image-update-4.3.6-1.el7.noarch 1/1

Installed:
   ovirt-node-ng-image-update.noarch 0:4.3.6-1.el7

Complete!


[ovirt-users] Recover node from partial upgrade

2019-10-04 Thread Stefano Danzi

Hello!!!

I upgraded oVirt Node 4.3.5.2 to 4.3.6 using the Engine UI.
In the middle of the upgrade the Engine fenced it; I can't understand why.

Now the Engine UI and yum report that there are no updates, but the node is
still at version 4.3.5.2.
If I run "yum reinstall ovirt-node-ng-image-update" the result is as
follows, but the system remains not upgraded.


ovirt-node-ng-image-update-4.3.6-1.el7.noarch.rpm | 719 MB  00:06:31
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : ovirt-node-ng-image-update-4.3.6-1.el7.noarch 1/1
warning: %post(ovirt-node-ng-image-update-4.3.6-1.el7.noarch) scriptlet 
failed, exit status 1
Non-fatal POSTIN scriptlet failure in rpm package 
ovirt-node-ng-image-update-4.3.6-1.el7.noarch

Uploading Package Profile
Cannot upload package profile. Is this client registered?
  Verifying : ovirt-node-ng-image-update-4.3.6-1.el7.noarch 1/1

Installed:
  ovirt-node-ng-image-update.noarch 0:4.3.6-1.el7

Complete!


[ovirt-users] Re: Mix CPU

2019-07-01 Thread Stefano Danzi

oVirt doesn't permit this, but KVM supports that scenario:

https://www.linux-kvm.org/page/Migration

I don't know what performance impact this could have.

On 01/07/2019 13:54, Michal Skrivanek wrote:



On 1 Jul 2019, at 13:39, supo...@logicworks.pt 
 wrote:


Hello,

Can we mix an Intel CPU with an AMD CPU in the same cluster?


No, not in the same cluster. You can't really live-migrate workloads
between the two.


Thanks,
michal



Thanks

--

Jose Ferradeira
http://www.logicworks.pt


[ovirt-users] Re: Error virNetTLSContextLoadCertFromFile after upgrade from oVirt 4.2 to 4.3.4

2019-06-26 Thread Stefano Danzi



On 26/06/2019 11:57, Yedidyah Bar David wrote:

On Tue, Jun 25, 2019 at 8:37 PM Stefano Danzi  wrote:

On 25/06/2019 14:26, Stefano Danzi wrote:

I don't remember ever seeing a question about this during
engine-setup, but it could be.
In /etc/pki/vdsm/certs/ I can see an old cert and CA with subject:

[root@ovirt01 ~]# su - vdsm -s /bin/bash -c 'openssl x509 -in
/etc/pki/vdsm/certs/cacert.pem.20150205093608 -text'
Certificate:
   Data:
   Version: 3 (0x2)
   Serial Number: 1423056193 (0x54d21d41)
   Signature Algorithm: sha256WithRSAEncryption
   Issuer: CN=VDSM Certificate Authority
   Validity
   Not Before: Feb  4 13:23:13 2015 GMT
   Not After : Feb  4 13:23:13 2016 GMT
   Subject: CN=VDSM Certificate Authority
   Subject Public Key Info:

[CUT]

[root@ovirt01 ~]# su - vdsm -s /bin/bash -c 'openssl x509 -in
/etc/pki/vdsm/certs/vdsmcert.pem.20150205093609 -text'
Certificate:
   Data:
   Version: 3 (0x2)
   Serial Number: 1423056193 (0x54d21d41)
   Signature Algorithm: sha256WithRSAEncryption
   Issuer: CN=VDSM Certificate Authority
   Validity
   Not Before: Feb  4 13:23:13 2015 GMT
   Not After : Feb  4 13:23:13 2016 GMT
   Subject: CN=ovirt01.hawai.lan, O=VDSM Certificate
   Subject Public Key Info:
   Public Key Algorithm: rsaEncryption


I think those were certs made during the first hosted-engine installation.
Could it work if I manually create certs like this?
Just to start libvirtd, vdsm and hosted-engine.

I think it's worth a try. Just create a self-signed CA, a keypair
signed by it, and place them correctly, should work.

The engine won't be able to talk with the host, but you can then more
easily reinstall/re-enroll-certs.

Good luck,
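
A minimal sketch of such a temporary CA and host certificate (a hedged
example; the subjects and paths follow the ones quoted in this thread,
and the exact set of files the services expect may differ by version):

    # self-signed CA
    openssl req -new -x509 -nodes -days 365 \
        -subj "/CN=VDSM Certificate Authority" \
        -keyout cakey.pem -out cacert.pem
    # host keypair and a certificate signed by that CA
    openssl req -new -nodes -subj "/CN=ovirt01.hawai.lan" \
        -keyout vdsmkey.pem -out vdsm.csr
    openssl x509 -req -in vdsm.csr -CA cacert.pem -CAkey cakey.pem \
        -CAcreateserial -days 365 -out vdsmcert.pem
    # place them where libvirt/vdsm look for them
    cp cacert.pem vdsmcert.pem /etc/pki/vdsm/certs/
    cp vdsmkey.pem /etc/pki/vdsm/keys/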

This workaround works!
I have the hosted engine running!

So now I have to find out how to reinstall/re-enroll certs on the host.
From the engine UI the host status is "NonResponsive" and I can't do anything.

Status:

now the host status is "Unassigned". The engine can't reach the host due to
a "General SSLEngine problem", which is expected because the certs are
home-made.
I can't switch the host to maintenance because it's not operational.
I can't enroll the certificate because the host is not in maintenance status.

You can try to remove it. I think we do not support "force-remove"
despite being asked about this occasionally, because, generally
speaking, this is very unsafe. If you insist, you can try
using the sql function DeleteVds to delete it from the database.
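
If you do insist, that would look roughly like this (a hedged sketch;
the function's exact signature may differ by version, and the UUID is a
placeholder):

    # on the engine machine, against the engine database
    sudo -u postgres psql engine -c "SELECT DeleteVds('<host-uuid>');"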


How can I enroll the host cert manually?

You can try following what I wrote in "2. Try to manually fix" before.
Create a CSR on the host (with whatever private key you want), copy it
to the engine, run pki-enroll-request, and copy the cert back to the host.
Good luck and best regards,
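
Roughly, that flow could look like this (a hedged sketch; the request and
certificate locations follow the usual engine PKI layout and may differ
by version, and the subject is just this thread's hostname):

    # on the host: create a CSR, here reusing the existing vdsm key
    openssl req -new -key /etc/pki/vdsm/keys/vdsmkey.pem \
        -subj "/O=hawai.lan/CN=ovirt01.hawai.lan" -out ovirt01.req
    # on the engine: drop the request where the PKI tooling expects it
    cp ovirt01.req /etc/pki/ovirt-engine/requests/ovirt01.hawai.lan.req
    /usr/share/ovirt-engine/bin/pki-enroll-request.sh --name=ovirt01.hawai.lan
    # copy /etc/pki/ovirt-engine/certs/ovirt01.hawai.lan.cer back to the
    # host as /etc/pki/vdsm/certs/vdsmcert.pem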


I've just solved it using pki-enroll-request as you told me. Thanks!!
This upgrade was very, very hard!!


 


[ovirt-users] Re: Error virNetTLSContextLoadCertFromFile after upgrade from oVirt 4.2 to 4.3.4

2019-06-25 Thread Stefano Danzi

On 25/06/2019 14:26, Stefano Danzi wrote:


I don't remember ever seeing a question about this during
engine-setup, but it could be.
In /etc/pki/vdsm/certs/ I can see an old cert and CA with subject:

[root@ovirt01 ~]# su - vdsm -s /bin/bash -c 'openssl x509 -in
/etc/pki/vdsm/certs/cacert.pem.20150205093608 -text'
Certificate:
  Data:
  Version: 3 (0x2)
  Serial Number: 1423056193 (0x54d21d41)
  Signature Algorithm: sha256WithRSAEncryption
  Issuer: CN=VDSM Certificate Authority
  Validity
  Not Before: Feb  4 13:23:13 2015 GMT
  Not After : Feb  4 13:23:13 2016 GMT
  Subject: CN=VDSM Certificate Authority
  Subject Public Key Info:

[CUT]

[root@ovirt01 ~]# su - vdsm -s /bin/bash -c 'openssl x509 -in
/etc/pki/vdsm/certs/vdsmcert.pem.20150205093609 -text'
Certificate:
  Data:
  Version: 3 (0x2)
  Serial Number: 1423056193 (0x54d21d41)
  Signature Algorithm: sha256WithRSAEncryption
  Issuer: CN=VDSM Certificate Authority
  Validity
  Not Before: Feb  4 13:23:13 2015 GMT
  Not After : Feb  4 13:23:13 2016 GMT
  Subject: CN=ovirt01.hawai.lan, O=VDSM Certificate
  Subject Public Key Info:
  Public Key Algorithm: rsaEncryption


I think those were certs made during the first hosted-engine installation.
Could it work if I manually create certs like this?
Just to start libvirtd, vdsm and hosted-engine.

I think it's worth a try. Just create a self-signed CA, a keypair
signed by it, and place them correctly, should work.

The engine won't be able to talk with the host, but you can then more
easily reinstall/re-enroll-certs.

Good luck,

This workaround works!
I have the hosted engine running!

So now I have to find out how to reinstall/re-enroll certs on the host.
From the engine UI the host status is "NonResponsive" and I can't do anything.


Status:

now the host status is "Unassigned". The engine can't reach the host due to
a "General SSLEngine problem", which is expected because the certs are
home-made.

I can't switch the host to maintenance because it's not operational.
I can't enroll the certificate because the host is not in maintenance status.

How can I enroll the host cert manually?




[ovirt-users] Re: Error virNetTLSContextLoadCertFromFile after upgrade from oVirt 4.2 to 4.3.4

2019-06-25 Thread Stefano Danzi



I don't remember ever seeing a question about this during engine-setup,
but it could be.
In /etc/pki/vdsm/certs/ I can see an old cert and CA with subject:

[root@ovirt01 ~]# su - vdsm -s /bin/bash -c 'openssl x509 -in
/etc/pki/vdsm/certs/cacert.pem.20150205093608 -text'
Certificate:
  Data:
  Version: 3 (0x2)
  Serial Number: 1423056193 (0x54d21d41)
  Signature Algorithm: sha256WithRSAEncryption
  Issuer: CN=VDSM Certificate Authority
  Validity
  Not Before: Feb  4 13:23:13 2015 GMT
  Not After : Feb  4 13:23:13 2016 GMT
  Subject: CN=VDSM Certificate Authority
  Subject Public Key Info:

[CUT]

[root@ovirt01 ~]# su - vdsm -s /bin/bash -c 'openssl x509 -in
/etc/pki/vdsm/certs/vdsmcert.pem.20150205093609 -text'
Certificate:
  Data:
  Version: 3 (0x2)
  Serial Number: 1423056193 (0x54d21d41)
  Signature Algorithm: sha256WithRSAEncryption
  Issuer: CN=VDSM Certificate Authority
  Validity
  Not Before: Feb  4 13:23:13 2015 GMT
  Not After : Feb  4 13:23:13 2016 GMT
  Subject: CN=ovirt01.hawai.lan, O=VDSM Certificate
  Subject Public Key Info:
  Public Key Algorithm: rsaEncryption


I think those were certs made during the first hosted-engine installation.
Could it work if I manually create certs like this?
Just to start libvirtd, vdsm and hosted-engine.

I think it's worth a try. Just create a self-signed CA, a keypair
signed by it, and place them correctly, should work.

The engine won't be able to talk with the host, but you can then more
easily reinstall/re-enroll-certs.

Good luck,

This workaround works!
I have the hosted engine running!

So now I have to find out how to reinstall/re-enroll certs on the host.
From the engine UI the host status is "NonResponsive" and I can't do anything.



[ovirt-users] Re: Error virNetTLSContextLoadCertFromFile after upgrade from oVirt 4.2 to 4.3.4

2019-06-25 Thread Stefano Danzi



On 25/06/2019 10:08, Yedidyah Bar David wrote:

On Tue, Jun 25, 2019 at 10:26 AM Stefano Danzi  wrote:



On 25/06/2019 08:27, Yedidyah Bar David wrote:

On Mon, Jun 24, 2019 at 7:56 PM Stefano Danzi  wrote:

I've found that this issue is related to:

https://bugzilla.redhat.com/show_bug.cgi?id=1648190

Are you sure?

That bug is about an old cert, generated by an old version, likely
before we fixed bug 1210486 (even though it's not mentioned in above
bug).

Yes! Malformed "Not Before" date/time in certs


But I've no idea how to fix it

On 24/06/2019 18:19, Stefano Danzi wrote:

I've just upgraded my test environment from ovirt 4.2 to 4.3.4.

Was it installed as 4.2, or upgraded? From which first version?

I don't remember the first installed version. Maybe 4.0... I always
upgraded the original installation.


The system has only one host (CentOS 7.6.1810) and runs a self-hosted engine.

After the upgrade I'm not able to run vdsmd (and so the hosted engine).

Below is the error in the log:

   journalctl -xe

-- Unit libvirtd.service has begun starting up.
Jun 24 18:09:17 ovirt01.hawai.lan libvirtd[8176]: 2019-06-24
16:09:17.006+: 8176: info : libvirt version: 4.5.0, package:
10.el7_6.12 (CentOS BuildSystem <http://bugs.centos.org>,
2019-06-20-15:01:15, x86-01.bsys.
Jun 24 18:09:17 ovirt01.hawai.lan libvirtd[8176]: 2019-06-24
16:09:17.006+: 8176: info : hostname: ovirt01.hawai.lan
Jun 24 18:09:17 ovirt01.hawai.lan libvirtd[8176]: 2019-06-24
16:09:17.006+: 8176: error : virNetTLSContextLoadCertFromFile:513
: Unable to import server certificate /etc/pki/vdsm/certs/vdsmcert.pem

Did you check this file? Does it exist?

ls -l /etc/pki/vdsm/certs/vdsmcert.pem

Can vdsm user read it?

su - vdsm -s /bin/bash -c 'cat /etc/pki/vdsm/certs/vdsmcert.pem > /dev/null'

Please check/share output of:

openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -text

Thanks and best regards,

vdsm can read vdsmcert. The problem is the "Not Before" date:

[root@ovirt01 ~]# su - vdsm -s /bin/bash -c 'openssl x509 -in
/etc/pki/vdsm/certs/vdsmcert.pem -text'
Certificate:
  Data:
  Version: 3 (0x2)
  Serial Number: 4102 (0x1006)
  Signature Algorithm: sha1WithRSAEncryption
  Issuer: C=US, O=hawai.lan, CN=ovirtbk-sheng.hawai.lan.63272
  Validity
  Not Before: Feb  4 08:36:07 2015
  Not After : Feb  4 08:36:07 2020 GMT
[CUT]


[root@ovirt01 ~]# su - vdsm -s /bin/bash -c 'openssl x509 -in
/etc/pki/vdsm/certs/cacert.pem -text'
Certificate:
  Data:
  Version: 3 (0x2)
  Serial Number: 4096 (0x1000)
  Signature Algorithm: sha1WithRSAEncryption
  Issuer: C=US, O=hawai.lan, CN=ovirtbk-sheng.hawai.lan.63272
  Validity
  Not Before: Feb  4 00:06:25 2015
  Not After : Feb  2 00:06:25 2025 GMT


OK :-(

So it will be rather difficult to fix.

You should have been prompted by engine-setup long ago to renew PKI,
weren't you? And when you did, didn't you have to reinstall (or Re-
Enroll Certificates, in later versions) all hosts?


I don't remember ever seeing a question about this during engine-setup,
but it could be.

In /etc/pki/vdsm/certs/ I can see an old cert and CA with subject:

[root@ovirt01 ~]# su - vdsm -s /bin/bash -c 'openssl x509 -in 
/etc/pki/vdsm/certs/cacert.pem.20150205093608 -text'

Certificate:
    Data:
    Version: 3 (0x2)
    Serial Number: 1423056193 (0x54d21d41)
    Signature Algorithm: sha256WithRSAEncryption
    Issuer: CN=VDSM Certificate Authority
    Validity
    Not Before: Feb  4 13:23:13 2015 GMT
    Not After : Feb  4 13:23:13 2016 GMT
    Subject: CN=VDSM Certificate Authority
    Subject Public Key Info:

[CUT]

[root@ovirt01 ~]# su - vdsm -s /bin/bash -c 'openssl x509 -in 
/etc/pki/vdsm/certs/vdsmcert.pem.20150205093609 -text'

Certificate:
    Data:
    Version: 3 (0x2)
    Serial Number: 1423056193 (0x54d21d41)
    Signature Algorithm: sha256WithRSAEncryption
    Issuer: CN=VDSM Certificate Authority
    Validity
    Not Before: Feb  4 13:23:13 2015 GMT
    Not After : Feb  4 13:23:13 2016 GMT
    Subject: CN=ovirt01.hawai.lan, O=VDSM Certificate
    Subject Public Key Info:
    Public Key Algorithm: rsaEncryption


I think those were certs made during the first hosted-engine installation.
Could it work if I manually create certs like this?
Just to start libvirtd, vdsm and hosted-engine.


[ovirt-users] Re: Error virNetTLSContextLoadCertFromFile after upgrade from oVirt 4.2 to 4.3.4

2019-06-25 Thread Stefano Danzi



On 25/06/2019 08:27, Yedidyah Bar David wrote:

On Mon, Jun 24, 2019 at 7:56 PM Stefano Danzi  wrote:

I've found that this issue is related to:

https://bugzilla.redhat.com/show_bug.cgi?id=1648190

Are you sure?

That bug is about an old cert, generated by an old version, likely
before we fixed bug 1210486 (even though it's not mentioned in above
bug).


Yes! Malformed "Not Before" date/time in certs


But I've no idea how to fix it

On 24/06/2019 18:19, Stefano Danzi wrote:

I've just upgraded my test environment from ovirt 4.2 to 4.3.4.

Was it installed as 4.2, or upgraded? From which first version?


I don't remember the first installed version. Maybe 4.0... I always 
upgraded the original installation.



The system has only one host (CentOS 7.6.1810) and runs a self-hosted engine.

After the upgrade I'm not able to run vdsmd (and so the hosted engine).

Below is the error in the log:

  journalctl -xe

-- Unit libvirtd.service has begun starting up.
Jun 24 18:09:17 ovirt01.hawai.lan libvirtd[8176]: 2019-06-24
16:09:17.006+: 8176: info : libvirt version: 4.5.0, package:
10.el7_6.12 (CentOS BuildSystem <http://bugs.centos.org>,
2019-06-20-15:01:15, x86-01.bsys.
Jun 24 18:09:17 ovirt01.hawai.lan libvirtd[8176]: 2019-06-24
16:09:17.006+: 8176: info : hostname: ovirt01.hawai.lan
Jun 24 18:09:17 ovirt01.hawai.lan libvirtd[8176]: 2019-06-24
16:09:17.006+: 8176: error : virNetTLSContextLoadCertFromFile:513
: Unable to import server certificate /etc/pki/vdsm/certs/vdsmcert.pem

Did you check this file? Does it exist?

ls -l /etc/pki/vdsm/certs/vdsmcert.pem

Can vdsm user read it?

su - vdsm -s /bin/bash -c 'cat /etc/pki/vdsm/certs/vdsmcert.pem > /dev/null'

Please check/share output of:

openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -text

Thanks and best regards,


vdsm can read vdsmcert. The problem is the "Not Before" date:

[root@ovirt01 ~]# su - vdsm -s /bin/bash -c 'openssl x509 -in 
/etc/pki/vdsm/certs/vdsmcert.pem -text'

Certificate:
    Data:
    Version: 3 (0x2)
    Serial Number: 4102 (0x1006)
    Signature Algorithm: sha1WithRSAEncryption
    Issuer: C=US, O=hawai.lan, CN=ovirtbk-sheng.hawai.lan.63272
    Validity
    Not Before: Feb  4 08:36:07 2015
    Not After : Feb  4 08:36:07 2020 GMT
[CUT]


[root@ovirt01 ~]# su - vdsm -s /bin/bash -c 'openssl x509 -in 
/etc/pki/vdsm/certs/cacert.pem -text'

Certificate:
    Data:
    Version: 3 (0x2)
    Serial Number: 4096 (0x1000)
    Signature Algorithm: sha1WithRSAEncryption
    Issuer: C=US, O=hawai.lan, CN=ovirtbk-sheng.hawai.lan.63272
    Validity
    Not Before: Feb  4 00:06:25 2015
    Not After : Feb  2 00:06:25 2025 GMT


Jun 24 18:09:17 ovirt01.hawai.lan systemd[1]: libvirtd.service: main
process exited, code=exited, status=6/NOTCONFIGURED
Jun 24 18:09:17 ovirt01.hawai.lan systemd[1]: Failed to start
Virtualization daemon.
-- Subject: Unit libvirtd.service has failed




[ovirt-users] Re: Error virNetTLSContextLoadCertFromFile after upgrade from oVirt 4.2 to 4.3.4

2019-06-24 Thread Stefano Danzi

I've found that this issue is related to:

https://bugzilla.redhat.com/show_bug.cgi?id=1648190

But I've no idea how to fix it

On 24/06/2019 18:19, Stefano Danzi wrote:

I've just upgraded my test environment from ovirt 4.2 to 4.3.4.
The system has only one host (CentOS 7.6.1810) and runs a self-hosted engine.

After the upgrade I'm not able to run vdsmd (and so the hosted engine).

Below is the error in the log:

 journalctl -xe

-- Unit libvirtd.service has begun starting up.
Jun 24 18:09:17 ovirt01.hawai.lan libvirtd[8176]: 2019-06-24 
16:09:17.006+: 8176: info : libvirt version: 4.5.0, package: 
10.el7_6.12 (CentOS BuildSystem <http://bugs.centos.org>, 
2019-06-20-15:01:15, x86-01.bsys.
Jun 24 18:09:17 ovirt01.hawai.lan libvirtd[8176]: 2019-06-24 
16:09:17.006+: 8176: info : hostname: ovirt01.hawai.lan
Jun 24 18:09:17 ovirt01.hawai.lan libvirtd[8176]: 2019-06-24 
16:09:17.006+: 8176: error : virNetTLSContextLoadCertFromFile:513 
: Unable to import server certificate /etc/pki/vdsm/certs/vdsmcert.pem
Jun 24 18:09:17 ovirt01.hawai.lan systemd[1]: libvirtd.service: main 
process exited, code=exited, status=6/NOTCONFIGURED
Jun 24 18:09:17 ovirt01.hawai.lan systemd[1]: Failed to start 
Virtualization daemon.

-- Subject: Unit libvirtd.service has failed


[ovirt-users] Error virNetTLSContextLoadCertFromFile after upgrade from oVirt 4.2 to 4.3.4

2019-06-24 Thread Stefano Danzi

I've just upgraded my test environment from ovirt 4.2 to 4.3.4.
The system has only one host (CentOS 7.6.1810) and runs a self-hosted engine.

After the upgrade I'm not able to run vdsmd (and so the hosted engine).

Below is the error in the log:

 journalctl -xe

-- Unit libvirtd.service has begun starting up.
Jun 24 18:09:17 ovirt01.hawai.lan libvirtd[8176]: 2019-06-24 
16:09:17.006+: 8176: info : libvirt version: 4.5.0, package: 
10.el7_6.12 (CentOS BuildSystem , 
2019-06-20-15:01:15, x86-01.bsys.
Jun 24 18:09:17 ovirt01.hawai.lan libvirtd[8176]: 2019-06-24 
16:09:17.006+: 8176: info : hostname: ovirt01.hawai.lan
Jun 24 18:09:17 ovirt01.hawai.lan libvirtd[8176]: 2019-06-24 
16:09:17.006+: 8176: error : virNetTLSContextLoadCertFromFile:513 : 
Unable to import server certificate /etc/pki/vdsm/certs/vdsmcert.pem
Jun 24 18:09:17 ovirt01.hawai.lan systemd[1]: libvirtd.service: main 
process exited, code=exited, status=6/NOTCONFIGURED
Jun 24 18:09:17 ovirt01.hawai.lan systemd[1]: Failed to start 
Virtualization daemon.

-- Subject: Unit libvirtd.service has failed


[ovirt-users] Re: SSL Error adding new host

2019-06-19 Thread Stefano Danzi

Solved!

During host installation it runs yum to install some packages.
The corporate firewall was blocking internet access for this host, so the
installation failed.

Enabling internet access and re-adding the host solved the problem.

I think a more specific error entry in the Engine UI would be nice.
Something like "Yum on the host was unable to reach the servers".
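
A pre-check along those lines can be run on the host before adding it
(a sketch; the URL is just an example repository path):

    # verify the host can reach the oVirt repositories through the firewall
    curl -sSI https://resources.ovirt.org/pub/ovirt-4.3/rpm/el7/repodata/repomd.xml
    yum -q repolist enabled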

On 19/06/2019 07:27, Yedidyah Bar David wrote:

On Tue, Jun 18, 2019 at 6:55 PM Stefano Danzi  wrote:

Hello,
I have a running cluster and I have to add a new host.
The current cluster has AMD-type CPUs, the new host has an Intel CPU, so I
added a new cluster in the same datacenter (datacenter default compatibility 4.2).

The new host is running Node 4.3.4, the engine is 4.3.4.3-1.el7

The new host is attached to the new cluster, but the engine can't connect.
The engine logs tell me:

2019-06-18 13:04:21,873+02 ERROR 
[org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (SSL Stomp Reactor) [] Unable 
to process messages General SSLEngine problem
2019-06-18 13:04:21,878+02 ERROR 
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] 
(EE-ManagedThreadFactory-engineScheduled-Thread-46) [] Unable to 
RefreshCapabilities: VDSNetworkException: VDSGenericException: 
VDSNetworkException: General SSLEngine problem 4.3..4.3-1.el7

Please check/share relevant logs - from the engine:

/var/log/ovirt-engine/host-deploy/*

From the host:

/var/log/vdsm/vdsm.log

Thanks and best regards,



[ovirt-users] SSL Error adding new host

2019-06-18 Thread Stefano Danzi

Hello,
I have a running cluster and I have to add a new host.
The current cluster has AMD-type CPUs, the new host has an Intel CPU, so I
added a new cluster in the same datacenter (datacenter default
compatibility 4.2).


The new host is running Node 4.3.4, the engine is 4.3.4.3-1.el7

The new host is attached to the new cluster, but the engine can't connect.
The engine logs tell me:

2019-06-18 13:04:21,873+02 ERROR 
[org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (SSL Stomp Reactor) [] 
Unable to process messages General SSLEngine problem
2019-06-18 13:04:21,878+02 ERROR 
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] 
(EE-ManagedThreadFactory-engineScheduled-Thread-46) [] Unable to 
RefreshCapabilities: VDSNetworkException: VDSGenericException: 
VDSNetworkException: General SSLEngine problem 4.3..4.3-1.el7


[ovirt-users] oVirtNode 4.3.3 - Missing os-brick

2019-04-16 Thread Stefano Danzi

Hello,

I've just upgraded one node host to v. 4.3.3 and I can see this entry in
the logs every 10 seconds:


==> /var/log/vdsm/vdsm.log <==
2019-04-16 18:03:06,417+0200 INFO  (jsonrpc/5) [root] managedvolume not 
supported: Managed Volume Not Supported. Missing package os-brick.: 
('Cannot import os_brick',) (caps:150)
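
The message itself is informational: managed-volume support is simply
disabled while the os-brick Python library is absent. If the feature is
wanted, installing the library should silence it (a hedged sketch; the
package name is an assumption, and on CentOS 7 it comes from the
OpenStack repos):

    # hypothetical remedy, assuming the package is available in an enabled repo
    yum install python2-os-brick
    systemctl restart vdsmd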



[ovirt-users] Windows Guest timezone

2019-04-11 Thread Stefano Danzi

Hello!

I have a Windows guest VM (Server 2016), and the hardware clock in the oVirt
machine configuration is GMT+1.
We are now in the DST period, so the machine timezone is GMT+2. The Engine
Dashboard warns me about a timezone
mismatch between the configuration and the running VM. I can't see a setting
to enable DST on the emulated hardware.


Any ideas?


[ovirt-users] Re: [ANN] oVirt 4.3.2 is now generally available

2019-03-20 Thread Stefano Danzi
Hi! The documentation says to run "yum install
https://resources.ovirt.org/pub/ovirt-4.3/rpm/el7/noarch/ovirt-node-ng-image-update-4.3.2-1.el7.noarch.rpm"
to update from Node NG 4.2, but this rpm is missing.

On 19/03/2019 15:53, Sandro Bonazzola wrote:



On Tue, Mar 19, 2019 at 10:59 Sandro Bonazzola <sbona...@redhat.com> wrote:


The oVirt Project is pleased to announce the general availability
of oVirt 4.3.2, as of March 19th, 2019.
This update is the second in a series of stabilization updates to
the 4.3 series.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later

This release supports Hypervisor Hosts on x86_64 and ppc64le
architectures for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)

Experimental tech preview for x86_64 and s390x architectures for
Fedora 28 is also included.
See the release notes [1] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node is already available[2]

oVirt Node has been updated including:
- oVirt 4.3.2: http://www.ovirt.org/release/4.3.2/
- Latest CentOS updates (no relevant errata available up to now on
https://lists.centos.org/pipermail/centos-announce )


Relevant errata have been published:
CESA-2019:0512 Important CentOS 7 kernel Security Update 

CESA-2019:0483 Moderate CentOS 7 openssl Security Update 





Additional Resources:
* Read more about the oVirt 4.3.2 release
highlights: http://www.ovirt.org/release/4.3.2/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt
blog: http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.2/
[2] http://resources.ovirt.org/pub/ovirt-4.3/iso/


-- 


SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com 





[ovirt-users] Gluster messages after upgrade to 4.3.1

2019-03-01 Thread Stefano Danzi

Hello,

I've just upgraded to version 4.3.1 and I can see this message in the
gluster log of all my hosts (running oVirt Node):


The message "E [MSGID: 101191] 
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to 
dispatch handler" repeated 59 times between [2019-03-01 10:21:42.099983] 
and [2019-03-01 10:23:38.340971


Another strange thing:

A VM was running. I shut it down by mistake. I was then no longer able to
run this VM. The error was: "Bad volume specification ".
After a little investigation I noticed that the disk image was no longer
owned by vdsm.kvm but by root.root. I changed it back to the correct value
and the VM started fine.
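
The fix, roughly (a sketch; the path is an example of where a GlusterFS
storage domain keeps disk images):

    # find image files under the domain mount that are not owned by vdsm
    find /rhev/data-center/mnt/glusterSD -type f -not -user vdsm
    # restore the expected ownership on the affected image
    chown vdsm:kvm /rhev/data-center/mnt/glusterSD/<srv:_vol>/<sd-id>/images/<img-id>/*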



[ovirt-users] oVirt 4.2.8 CPU Compatibility

2019-01-22 Thread Stefano Danzi


Hello!
I'm running oVirt 4.2.7.5-1.el7 on a 3-host cluster.
The cluster CPU type is "AMD Opteron G3".

On the default cluster I can see the warning:
"Warning: The CPU type 'AMD Opteron G3' will not be supported in the
next minor version update'"


Is it still supported in version 4.2.8? I can't see any reference in the
documentation or changelog.



[ovirt-users] Re: Node, failed to deploy hosted engine

2018-10-19 Thread Stefano Danzi

Another little step:

I found an ovirtmgmt interface active on the host (from a previous failed
deployment).
After shutting down this interface I solved one error, and now the deploy
script waits. It has been waiting for an hour now.
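
For reference, the leftover-interface cleanup before retrying (a hedged
sketch; ovirt-hosted-engine-cleanup ships with the hosted-engine setup
packages):

    # take down the leftover management bridge from the failed attempt
    ip link set ovirtmgmt down
    # optionally clear all leftovers of previous deployment attempts
    ovirt-hosted-engine-cleanup

After that, the deployment progressed this far and stopped: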


[ INFO  ] TASK [Wait for ovirt-engine service to start]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Detect VLAN ID]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Set Engine public key as authorized key without 
validating the TLS/SSL certificates]

[ INFO  ] changed: [localhost]
[ INFO  ] TASK [include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Obtain SSO token using username/password credentials]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Enable GlusterFS at cluster level]
[ INFO  ] skipping: [localhost]
[ INFO  ] TASK [Set VLAN ID at datacenter level]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Force host-deploy in offline mode]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Add host]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Wait for the host to be up]


On 19/10/2018 13:47, Stefano Danzi wrote:

I've found some additional info.
The engine waits for the host to be up.
The VDSM log on the host shows "waiting for storage pool to go up", but
"hosted-engine --deploy" (and the web wizard) don't ask for a storage domain.




2018-10-19 13:36:52,206+0200 INFO  (vmrecovery) [vds] recovery: 
waiting for storage pool to go up (clientIF:707)
2018-10-19 13:36:52,529+0200 INFO  (jsonrpc/7) [api.host] START 
getStats() from=:::192.168.124.71,44704 (api:46)
2018-10-19 13:36:52,532+0200 INFO  (jsonrpc/7) [vdsm.api] START 
repoStats(domains=()) from=:::192.168.124.71,44704, 
task_id=13a3ce58-4226-4a0d-91e8-8742ffe40222 (api:46)
2018-10-19 13:36:52,533+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH 
repoStats return={} from=:::192.168.124.71,44704, 
task_id=13a3ce58-4226-4a0d-91e8-8742ffe40222 (api:52)
2018-10-19 13:36:52,534+0200 INFO  (jsonrpc/7) [vdsm.api] START 
multipath_health() from=:::192.168.124.71,44704, 
task_id=fc1f5246-d6c2-41e5-bedc-5f9a7f27e9c5 (api:46)
2018-10-19 13:36:52,535+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH 
multipath_health return={} from=:::192.168.124.71,44704, 
task_id=fc1f5246-d6c2-41e5-bedc-5f9a7f27e9c5 (api:52)
2018-10-19 13:36:52,561+0200 INFO  (jsonrpc/7) [api.host] FINISH 
getStats return={'status': {'message': 'Done', 'code': 0}, 'info': 
{'cpuStatistics': {'11': {'cpuUser': '0.13', 'nodeIndex': 1, 'cpuSys': 
'0.20', 'cpuIdle': '99.67'}, '10': {'cpuUser': '0.73', 'nodeIndex': 0, 
'cpuSys': '0.20', 'cpuIdle': '99.07'}, '1': {'cpuUser': '1.45', 
'nodeIndex': 1, 'cpuSys': '0.86', 'cpuIdle': '97.69'}, '0': 
{'cpuUser': '2.44', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': 
'97.23'}, '3': {'cpuUser': '1.25', 'nodeIndex': 1, 'cpuSys': '0.53', 
'cpuIdle': '98.22'}, '2': {'cpuUser': '1.58', 'nodeIndex': 0, 
'cpuSys': '0.46', 'cpuIdle': '97.96'}, '5': {'cpuUser': '0.13', 
'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.80'}, '4': 
{'cpuUser': '0.40', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': 
'99.47'}, '7': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.13', 
'cpuIdle': '99.80'}, '6': {'cpuUser': '1.19', 'nodeIndex': 0, 
'cpuSys': '0.33', 'cpuIdle': '98.48'}, '9': {'cpuUser': '0.40', 
'nodeIndex': 1, 'cpuSys': '0.20', 'cpuIdle': '99.40'}, '8': 
{'cpuUser': '0.86', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': 
'98.81'}}, 'numaNodeMemFree': {'1': {'memPercent': 30, 'memFree': 
'14483'}, '0': {'memPercent': 35, 'memFree': '13445'}}, 'memShared': 
446, 'thpState': 'always', 'ksmMergeAcrossNodes': True, 'vmCount': 1, 
'memUsed': '19', 'storageDomains': {}, 'incomingVmMigrations': 0, 
'network': {'glusternet': {'txErrors': '0', 'state': 'up', 
'sampleTime': 1539949006.318315, 'name': 'glusternet', 'tx': 
'10080290', 'txDropped': '0', 'rx': '6669132', 'rxErrors': '0', 
'speed': '1000', 'rxDropped': '0'}, 'enp3s0f0': {'txErrors': '0', 
'state': 'up', 'sampleTime': 1539949006.318315, 'name': 'enp3s0f0', 
'tx': '66909029', 'txDropped': '0', 'rx': '17996942', 'rxErrors': '0', 
'speed': '1000', 'rxDropped': '0'}, 'bond0': {'txErrors': '0', 
'state': 'up', 'sampleTime': 1539949006.318315, 'name': 'bond0', 'tx': 
'87998708', 'txDropped': '0', 'rx': '46881018', 'rxErrors': '0', 
'speed': '3000', 'rxDropped': '0'}, ';vdsmdummy;': {'txErrors': '0', 
'state': 'down', 'sampleTime': 1539949006.318315, 'name': 
';vdsmdummy;', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': 
'0', 'speed': '1000', 'rxDropped': '0'}, 'ovirtmgmt': {'txErrors': 
'0', 'state': 'up', 'sampleTime': 1539949006.318315, 'name': 
'ovirtmgmt', 'tx': '56421549', 'txDropped': '0', 'rx': '25448254', 
'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'lo': 
{'txErrors': '0', 'state': 'up', 'sampleTime': 1539949006.318315, 
'name': 'lo', 'tx': '41578221', 'txDropped': '0', 'rx': '41578221', 
'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'ovs-system': 
{'txErrors': '0', 'state': 'down', 'sampleTime': 1539949006.318315, 
'name': 'ovs-system', 'tx': '0', 'txDropped': '0', 'rx': '0', 
'rx

[ovirt-users] Re: Node, failed to deploy hosted engine

2018-10-19 Thread Stefano Danzi

I've found some additional info.
The engine waits for the host to be up.
The VDSM log on the host shows "waiting for storage pool to go up", but
"hosted-engine --deploy" (and the web wizard) don't ask for a storage domain.




2018-10-19 13:36:52,206+0200 INFO  (vmrecovery) [vds] recovery: waiting 
for storage pool to go up (clientIF:707)
2018-10-19 13:36:52,529+0200 INFO  (jsonrpc/7) [api.host] START 
getStats() from=:::192.168.124.71,44704 (api:46)
2018-10-19 13:36:52,532+0200 INFO  (jsonrpc/7) [vdsm.api] START 
repoStats(domains=()) from=:::192.168.124.71,44704, 
task_id=13a3ce58-4226-4a0d-91e8-8742ffe40222 (api:46)
2018-10-19 13:36:52,533+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH 
repoStats return={} from=:::192.168.124.71,44704, 
task_id=13a3ce58-4226-4a0d-91e8-8742ffe40222 (api:52)
2018-10-19 13:36:52,534+0200 INFO  (jsonrpc/7) [vdsm.api] START 
multipath_health() from=:::192.168.124.71,44704, 
task_id=fc1f5246-d6c2-41e5-bedc-5f9a7f27e9c5 (api:46)
2018-10-19 13:36:52,535+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH 
multipath_health return={} from=:::192.168.124.71,44704, 
task_id=fc1f5246-d6c2-41e5-bedc-5f9a7f27e9c5 (api:52)
2018-10-19 13:36:52,561+0200 INFO  (jsonrpc/7) [api.host] FINISH 
getStats return={'status': {'message': 'Done', 'code': 0}, 'info': 
{'cpuStatistics': {'11': {'cpuUser': '0.13', 'nodeIndex': 1, 'cpuSys': 
'0.20', 'cpuIdle': '99.67'}, '10': {'cpuUser': '0.73', 'nodeIndex': 0, 
'cpuSys': '0.20', 'cpuIdle': '99.07'}, '1': {'cpuUser': '1.45', 
'nodeIndex': 1, 'cpuSys': '0.86', 'cpuIdle': '97.69'}, '0': {'cpuUser': 
'2.44', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': '97.23'}, '3': 
{'cpuUser': '1.25', 'nodeIndex': 1, 'cpuSys': '0.53', 'cpuIdle': 
'98.22'}, '2': {'cpuUser': '1.58', 'nodeIndex': 0, 'cpuSys': '0.46', 
'cpuIdle': '97.96'}, '5': {'cpuUser': '0.13', 'nodeIndex': 1, 'cpuSys': 
'0.07', 'cpuIdle': '99.80'}, '4': {'cpuUser': '0.40', 'nodeIndex': 0, 
'cpuSys': '0.13', 'cpuIdle': '99.47'}, '7': {'cpuUser': '0.07', 
'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.80'}, '6': {'cpuUser': 
'1.19', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': '98.48'}, '9': 
{'cpuUser': '0.40', 'nodeIndex': 1, 'cpuSys': '0.20', 'cpuIdle': 
'99.40'}, '8': {'cpuUser': '0.86', 'nodeIndex': 0, 'cpuSys': '0.33', 
'cpuIdle': '98.81'}}, 'numaNodeMemFree': {'1': {'memPercent': 30, 
'memFree': '14483'}, '0': {'memPercent': 35, 'memFree': '13445'}}, 
'memShared': 446, 'thpState': 'always', 'ksmMergeAcrossNodes': True, 
'vmCount': 1, 'memUsed': '19', 'storageDomains': {}, 
'incomingVmMigrations': 0, 'network': {'glusternet': {'txErrors': '0', 
'state': 'up', 'sampleTime': 1539949006.318315, 'name': 'glusternet', 
'tx': '10080290', 'txDropped': '0', 'rx': '6669132', 'rxErrors': '0', 
'speed': '1000', 'rxDropped': '0'}, 'enp3s0f0': {'txErrors': '0', 
'state': 'up', 'sampleTime': 1539949006.318315, 'name': 'enp3s0f0', 
'tx': '66909029', 'txDropped': '0', 'rx': '17996942', 'rxErrors': '0', 
'speed': '1000', 'rxDropped': '0'}, 'bond0': {'txErrors': '0', 'state': 
'up', 'sampleTime': 1539949006.318315, 'name': 'bond0', 'tx': 
'87998708', 'txDropped': '0', 'rx': '46881018', 'rxErrors': '0', 
'speed': '3000', 'rxDropped': '0'}, ';vdsmdummy;': {'txErrors': '0', 
'state': 'down', 'sampleTime': 1539949006.318315, 'name': ';vdsmdummy;', 
'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': 
'1000', 'rxDropped': '0'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 
'sampleTime': 1539949006.318315, 'name': 'ovirtmgmt', 'tx': '56421549', 
'txDropped': '0', 'rx': '25448254', 'rxErrors': '0', 'speed': '1000', 
'rxDropped': '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': 
1539949006.318315, 'name': 'lo', 'tx': '41578221', 'txDropped': '0', 
'rx': '41578221', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 
'ovs-system': {'txErrors': '0', 'state': 'down', 'sampleTime': 
1539949006.318315, 'name': 'ovs-system', 'tx': '0', 'txDropped': '0', 
'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 
'bond0.127': {'txErrors': '0', 'state': 'up', 'sampleTime': 
1539949006.318315, 'name': 'bond0.127', 'tx': '10080290', 'txDropped': 
'0', 'rx': '6670098', 'rxErrors': '0', 'speed': '1000', 'rxDropped': 
'0'}, 'enp3s0f1': {'txErrors': '0', 'state': 'up', 'sampleTime': 
1539949006.318315, 'name': 'enp3s0f1', 'tx': '14732876', 'txDropped': 
'0', 'rx': '12258325', 'rxErrors': '0', 'speed': '1000', 'rxDropped': 
'0'}, 'bond0.1': {'txErrors': '0', 'state': 'up', 'sampleTime': 
1539949006.318315, 'name': 'bond0.1', 'tx': '56421549', 'txDropped': 
'0', 'rx': '25458727', 'rxErrors': '0', 'speed': '1000', 'rxDropped': 
'0'}, 'br-int': {'txErrors': '0', 'state': 'down', 'sampleTime': 
1539949006.318315, 'name': 'br-int', 'tx': '0', 'txDropped': '0', 'rx': 
'0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '2'}, 'enp65s0f0': 
{'txErrors': '0', 'state': 'down', 'sampleTime': 1539949006.318315, 
'name': 'enp65s0f0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': 
'0', 'speed': '1000', 'rxDropped': '0'}, 

[ovirt-users] Re: New oVirt deployment suggestions

2018-10-08 Thread Stefano Danzi

Thanks for the reply.

sda is a virtual disk because the server has a hardware array controller.
So I have to split sda into two smaller disks.

On 08/10/2018 17:24, Jayme wrote:
You should be using shared external storage or GlusterFS; if Gluster,
you should have other drives in the server to provision as Gluster
bricks during the hyperconverged deployment.


On Mon, Oct 8, 2018, 8:07 AM Stefano Danzi <s.da...@hawai.it> wrote:


Hi! It's the first time that I use Node.

I installed Node on my hosts and left auto partitioning.
Host storage now is:

[root@ovirtn01 ~]# lsblk
NAME                                                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                                             8:0    0 136,7G  0 disk
├─sda1                                                          8:1    0     1G  0 part /boot
└─sda2                                                          8:2    0 135,7G  0 part
  ├─onn_ovirtn01-pool00_tmeta                                 253:0    0     1G  0 lvm
  │ └─onn_ovirtn01-pool00-tpool                               253:2    0  94,8G  0 lvm
  │   ├─onn_ovirtn01-ovirt--node--ng--4.2.6.2--0.20181003.0+1 253:3    0  67,8G  0 lvm  /
  │   ├─onn_ovirtn01-pool00                                   253:6    0  94,8G  0 lvm
  │   ├─onn_ovirtn01-var_log_audit                            253:7    0     2G  0 lvm  /var/log/audit
  │   ├─onn_ovirtn01-var_log                                  253:8    0     8G  0 lvm  /var/log
  │   ├─onn_ovirtn01-var                                      253:9    0    15G  0 lvm  /var
  │   ├─onn_ovirtn01-tmp                                      253:10   0     1G  0 lvm  /tmp
  │   ├─onn_ovirtn01-home                                     253:11   0     1G  0 lvm  /home
  │   ├─onn_ovirtn01-root                                     253:12   0  67,8G  0 lvm
  │   └─onn_ovirtn01-var_crash                                253:13   0    10G  0 lvm  /var/crash
  ├─onn_ovirtn01-pool00_tdata                                 253:1    0  94,8G  0 lvm
  │ └─onn_ovirtn01-pool00-tpool                               253:2    0  94,8G  0 lvm
  │   ├─onn_ovirtn01-ovirt--node--ng--4.2.6.2--0.20181003.0+1 253:3    0  67,8G  0 lvm  /
  │   ├─onn_ovirtn01-pool00                                   253:6    0  94,8G  0 lvm
  │   ├─onn_ovirtn01-var_log_audit                            253:7    0     2G  0 lvm  /var/log/audit
  │   ├─onn_ovirtn01-var_log                                  253:8    0     8G  0 lvm  /var/log
  │   ├─onn_ovirtn01-var                                      253:9    0    15G  0 lvm  /var
  │   ├─onn_ovirtn01-tmp                                      253:10   0     1G  0 lvm  /tmp
  │   ├─onn_ovirtn01-home                                     253:11   0     1G  0 lvm  /home
  │   ├─onn_ovirtn01-root                                     253:12   0  67,8G  0 lvm
  │   └─onn_ovirtn01-var_crash                                253:13   0    10G  0 lvm  /var/crash
  └─onn_ovirtn01-swap                                         253:4    0  13,7G  0 lvm  [SWAP]


But I have no more space for the hosted engine (VM/data space will
be in another place).
Now I could:
- manually resize volumes
- reinstall with custom partitioning and leave 60 GB for the hosted
engine volume.

What's the better way?
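For the first option, a minimal sketch of what "manually resize volumes"
could look like, assuming the onn_ovirtn01 VG shown in the lsblk output
above (names and sizes are illustrative; this is not a tested procedure):

vgs onn_ovirtn01                  # check for free extents in the VG
lvs -a onn_ovirtn01               # inspect the volumes and the thin pool
# if enough free space exists, carve out a dedicated LV for the engine:
lvcreate -L 60G -n hosted_engine onn_ovirtn01
mkfs.xfs /dev/onn_ovirtn01/hosted_engine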

On 27/09/2018 13:46, Hesham Ahmed wrote:

Unless you have a reason to use CentOS, I suggest you use oVirt node,
it is much more optimized out of the box for oVirt



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/PKVK6EWIHWK5Q3IXMPIU75DJZUPBYGJP/



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F3ECOBSK7ZU3KT3ATB6XGEOUTNLOITA3/


[ovirt-users] Re: New oVirt deployment suggestions

2018-10-08 Thread Stefano Danzi

Hi! It's the first time that I use node.

I installed node on my hosts and left auto partitioning.
Host storage now is:

[root@ovirtn01 ~]# lsblk
NAME                                                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                                             8:0    0 136,7G  0 disk
├─sda1                                                          8:1    0     1G  0 part /boot
└─sda2                                                          8:2    0 135,7G  0 part
  ├─onn_ovirtn01-pool00_tmeta                                 253:0    0     1G  0 lvm
  │ └─onn_ovirtn01-pool00-tpool                               253:2    0  94,8G  0 lvm
  │   ├─onn_ovirtn01-ovirt--node--ng--4.2.6.2--0.20181003.0+1 253:3    0  67,8G  0 lvm  /
  │   ├─onn_ovirtn01-pool00                                   253:6    0  94,8G  0 lvm
  │   ├─onn_ovirtn01-var_log_audit                            253:7    0     2G  0 lvm  /var/log/audit
  │   ├─onn_ovirtn01-var_log                                  253:8    0     8G  0 lvm  /var/log
  │   ├─onn_ovirtn01-var                                      253:9    0    15G  0 lvm  /var
  │   ├─onn_ovirtn01-tmp                                      253:10   0     1G  0 lvm  /tmp
  │   ├─onn_ovirtn01-home                                     253:11   0     1G  0 lvm  /home
  │   ├─onn_ovirtn01-root                                     253:12   0  67,8G  0 lvm
  │   └─onn_ovirtn01-var_crash                                253:13   0    10G  0 lvm  /var/crash
  ├─onn_ovirtn01-pool00_tdata                                 253:1    0  94,8G  0 lvm
  │ └─onn_ovirtn01-pool00-tpool                               253:2    0  94,8G  0 lvm
  │   ├─onn_ovirtn01-ovirt--node--ng--4.2.6.2--0.20181003.0+1 253:3    0  67,8G  0 lvm  /
  │   ├─onn_ovirtn01-pool00                                   253:6    0  94,8G  0 lvm
  │   ├─onn_ovirtn01-var_log_audit                            253:7    0     2G  0 lvm  /var/log/audit
  │   ├─onn_ovirtn01-var_log                                  253:8    0     8G  0 lvm  /var/log
  │   ├─onn_ovirtn01-var                                      253:9    0    15G  0 lvm  /var
  │   ├─onn_ovirtn01-tmp                                      253:10   0     1G  0 lvm  /tmp
  │   ├─onn_ovirtn01-home                                     253:11   0     1G  0 lvm  /home
  │   ├─onn_ovirtn01-root                                     253:12   0  67,8G  0 lvm
  │   └─onn_ovirtn01-var_crash                                253:13   0    10G  0 lvm  /var/crash
  └─onn_ovirtn01-swap                                         253:4    0  13,7G  0 lvm  [SWAP]

But I have no more space for the hosted engine (VM/data space will be in
another place).

Now I could:
- manually resize volumes
- reinstall with custom partitioning and leave 60 GB for the hosted engine
volume.

What's the better way?

On 27/09/2018 13:46, Hesham Ahmed wrote:

Unless you have a reason to use CentOS, I suggest you use oVirt node,
it is much more optimized out of the box for oVirt


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PKVK6EWIHWK5Q3IXMPIU75DJZUPBYGJP/


[ovirt-users] Re: New oVirt deployment suggestions

2018-10-08 Thread Stefano Danzi

Hi!

But I lose all custom packages when I upgrade node. Is this correct?

On 27/09/2018 15:00, Hesham Ahmed wrote:

You can install any CentOS compatible custom software on oVirt nodes
without much trouble.
On Thu, Sep 27, 2018 at 3:06 PM Stefano Danzi  wrote:


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HI2E5W7CSXKEFQOP536RQTKM7XGCPGEZ/


[ovirt-users] Re: New oVirt deployment suggestions

2018-10-08 Thread Stefano Danzi

Thanks for the information.
The possibility of configuring teams in node confused me.

On 08/10/2018 10:44, Gianluca Cecchi wrote:
On Mon, Oct 8, 2018 at 10:28 AM Stefano Danzi <s.da...@hawai.it> wrote:


I'm looking into oVirt node.
The node management interface allows configuring network cards as a
bond or a team.
Are team interfaces supported by oVirt?


The last time I asked (for other reasons), in September, it seemed it
would not happen any time soon. See Edward's message here:
https://www.mail-archive.com/users@ovirt.org/msg50866.html
HTH,
Gianluca


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XWUXE5NIJEC53QQ6Z2KCVCL2ILYPLVF6/


[ovirt-users] Re: New oVirt deployment suggestions

2018-10-08 Thread Stefano Danzi

I'm looking into oVirt node.
The node management interface allows configuring network cards as a bond
or a team.

Are team interfaces supported by oVirt?

On 27/09/2018 15:07, Hesham Ahmed wrote:

Also, I would not create a single bond for use with both gluster and
management (even on different VLANs); gluster can saturate 4x1GbE ports
easily, especially during healing, and this would trigger fencing since
the engine would start getting timeouts when trying to reach hosts. I
would suggest creating 2 bonded interfaces (2 x 1G each), with one
dedicated to gluster, or getting a dedicated 10G NIC for gluster.
On Thu, Sep 27, 2018 at 4:00 PM Hesham Ahmed  wrote:

You can install any CentOS compatible custom software on oVirt nodes
without much trouble.
On Thu, Sep 27, 2018 at 3:06 PM Stefano Danzi  wrote:

Hi!

I need to install HP SSA and HP SHM on hosts and I don't know if this is
supported on oVirt node.

On 27/09/2018 13:46, Hesham Ahmed wrote:

Unless you have a reason to use CentOS, I suggest you use oVirt node,
it is much more optimized out of the box for oVirt


On Thu, Sep 27, 2018 at 2:25 PM Stefano Danzi  wrote:

Hello!

I'm almost ready to start with a new oVirt deployment. I will use CentOS
7, self-hosted engine and gluster storage.
I have 3 physical hosts. Each host has four NICs. My first idea is:

- configure a bond between the NICs
- configure a VLAN interface for the management network (and local LAN)
- configure a VLAN interface for the gluster network
- configure gluster for the hosted engine
- start the "hosted-engine --deploy" process

Is this enough? Do I need a dedicated physical NIC for the management network?

Bye
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DRCM7TIHMRP3DL7RN32UQBBLMNVABVE5/



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LHJFK6VQ7GE45S2MHB6BFBEPKNDZIW2T/


[ovirt-users] Re: New oVirt deployment suggestions

2018-09-27 Thread Stefano Danzi

Hi!

I need to install HP SSA and HP SHM on hosts and I don't know if this is 
supported on oVirt node.


On 27/09/2018 13:46, Hesham Ahmed wrote:

Unless you have a reason to use CentOS, I suggest you use oVirt node,
it is much more optimized out of the box for oVirt


On Thu, Sep 27, 2018 at 2:25 PM Stefano Danzi  wrote:

Hello!

I'm almost ready to start with a new oVirt deployment. I will use CentOS
7, self-hosted engine and gluster storage.
I have 3 physical hosts. Each host has four NICs. My first idea is:

- configure a bond between the NICs
- configure a VLAN interface for the management network (and local LAN)
- configure a VLAN interface for the gluster network
- configure gluster for the hosted engine
- start the "hosted-engine --deploy" process

Is this enough? Do I need a dedicated physical NIC for the management network?

Bye
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DRCM7TIHMRP3DL7RN32UQBBLMNVABVE5/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MIC4IH32HFUM5WRWBCBBEHVGNJV4GHE4/


[ovirt-users] New oVirt deployment suggestions

2018-09-27 Thread Stefano Danzi

Hello!

I'm almost ready to start with a new oVirt deployment. I will use CentOS
7, self-hosted engine and gluster storage.

I have 3 physical hosts. Each host has four NICs. My first idea is:

- configure a bond between the NICs
- configure a VLAN interface for the management network (and local LAN)
- configure a VLAN interface for the gluster network
- configure gluster for the hosted engine
- start the "hosted-engine --deploy" process

Is this enough? Do I need a dedicated physical NIC for the management network?
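For reference, the bond + VLAN part of that plan would look roughly like
this with nmcli (a sketch only: the layout mirrors the list above, but
the NIC names, bond mode and VLAN IDs are illustrative assumptions, and
oVirt normally takes over host networking once the host is added to the
engine):

nmcli con add type bond ifname bond0 bond.options "mode=802.3ad"
nmcli con add type ethernet ifname enp3s0f0 master bond0   # first NIC
nmcli con add type ethernet ifname enp3s0f1 master bond0   # second NIC
nmcli con add type vlan ifname bond0.100 dev bond0 id 100  # management
nmcli con add type vlan ifname bond0.200 dev bond0 id 200  # gluster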

Bye
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DRCM7TIHMRP3DL7RN32UQBBLMNVABVE5/


Re: [ovirt-users] Open source backup!

2018-03-05 Thread Stefano Danzi

I've never used it, but https://www.bareos.org/ could be a good product.
It isn't specific to VMs, but it can help.

On 05/03/2018 05:57, Nasrum Minallah Manzoor wrote:


HI,
Can you please suggest an open source backup solution for oVirt
virtual machines?
My backup media is an FC tape library which is directly attached to my
oVirt node. I really appreciate your help.


Regards,



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to start VM after oVirt Upgrade from 4.2.0 to 4.2.1

2018-02-13 Thread Stefano Danzi

Strange thing.

after "vdsm-client Host getCapabilities" command, cluster cpu type 
become "Intel Sandybridge Family". Same thing for all VMs.

Now I can run VMs.

On 13/02/2018 11:28, Simone Tiraboschi wrote:

Ciao Stefano,
we have to properly investigate this: thanks for the report.

Can you please attach from your host the output of
- grep cpuType /var/run/ovirt-hosted-engine-ha/vm.conf
- vdsm-client Host getCapabilities
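As an aside, the CPU-related fields can be filtered out of that rather
large capabilities dump directly; a sketch, assuming the cpuModel/cpuFlags
key names of typical vdsm capabilities output:

vdsm-client Host getCapabilities | grep -E '"cpuModel"|"cpuFlags"'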

Can you please attach also engine-setup logs from your 4.2.0 to 4.2.1 
upgrade?




On Tue, Feb 13, 2018 at 10:51 AM, Stefano Danzi <s.da...@hawai.it> wrote:


Hello!

In my test system I upgraded from 4.2.0 to 4.2.1 and I can't start
any VM.
Hosted engine starts regularly.

I have a single host with Hosted Engine.

Host cpu is a Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz

When I start any VM I get this error: "The CPU type of the cluster
is unknown. Its possible to change the cluster cpu or set a
different one per VM."

All VMs have " Guest CPU Type: N/D"

Cluster now has CPU Type "Intel Conroe Family" (I don't remember
the CPU type before the upgrade); my CPU should be Ivy Bridge but it
isn't in the dropdown list.

If I try to select a similar CPU (SandyBridge IBRS) I get an
error. I can't change the cluster CPU type when I have running hosts
with a lower CPU type.
I can't put the host in maintenance because the hosted engine is running
on it.

How can I solve this?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




--

Stefano Danzi
ICT Manager

HAWAI ITALIA S.r.l.
Via Forte Garofolo, 16
37057 S. Giovanni Lupatoto Verona Italia

P. IVA 01680700232

tel. +39/045/8266400
fax +39/045/8266401
Web www.hawai.it

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Unable to start VM after oVirt Upgrade from 4.2.0 to 4.2.1

2018-02-13 Thread Stefano Danzi

Hello!

In my test system I upgraded from 4.2.0 to 4.2.1 and I can't start any VM.
Hosted engine starts regularly.

I have a single host with Hosted Engine.

Host cpu is a Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz

When I start any VM I get this error: "The CPU type of the cluster is 
unknown. Its possible to change the cluster cpu or set a different one 
per VM."


All VMs have " Guest CPU Type: N/D"

Cluster now has CPU Type "Intel Conroe Family" (I don't remember the CPU
type before the upgrade); my CPU should be Ivy Bridge but it isn't in
the dropdown list.


If I try to select a similar CPU (SandyBridge IBRS) I get an error. I
can't change the cluster CPU type when I have running hosts with a lower
CPU type.

I can't put the host in maintenance because the hosted engine is running on it.

How can I solve this?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Multi OVF_STORE entries

2018-01-24 Thread Stefano Danzi

Hello,

I'm checking Storage -> Disks in my oVirt test site. I can find:

- 4 disks for my 4 VMs
- 1 disk for HostedEngine
- 4 OVF_STORE entries, shareable and without size.

I can't manage, move or remove the OVF_STORE entries.
I think they are something ported during some upgrade...

Does anyone have any ideas?
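For what it's worth, OVF_STORE disks are where the engine keeps the VMs'
OVF metadata, so seeing them after an upgrade is expected. On a file-based
storage domain their contents can be peeked at as a tar archive; the path
below is purely illustrative:

tar tvf /rhev/data-center/mnt/<server>:_<export>/<sd_uuid>/images/<img_uuid>/<vol_uuid>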
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Self hosted engine fails after 4.2 upgrade

2017-12-21 Thread Stefano Danzi



On 21/12/2017 16:37, Sandro Bonazzola wrote:



2017-12-21 14:26 GMT+01:00 Stefano Danzi <s.da...@hawai.it>:


Solved by installing the glusterfs-gnfs package.
Anyway, it could be nice to move the hosted engine to gluster.


Adding some gluster folks. Are we missing a dependency somewhere?
During the upgrade nfs on gluster stopped to work here and adding the 
missing dep solved.
Stefano please confirm, you were on gluster 3.8 (oVirt 4.1) and now 
you are on gluster 3.12 (ovirt 4.2)



Sandro, I confirm the versions.
Hosts are running CentOS 7.4.1708:
before the upgrade there was gluster 3.8 in oVirt 4.1,
now I have gluster 3.12 in oVirt 4.2.



On 21/12/2017 11:37, Stefano Danzi wrote:



On 21/12/2017 11:30, Simone Tiraboschi wrote:



On Thu, Dec 21, 2017 at 11:16 AM, Stefano Danzi <s.da...@hawai.it> wrote:

Hello!
I have a test system with one physical host and the hosted
engine running on it.
Storage is gluster but the hosted engine mounts it as NFS.

After the upgrade gluster no longer activates NFS.
The command "gluster volume set engine nfs.disable off"
doesn't help.

How can I re-enable NFS? Or better, how can I migrate the
self-hosted engine to native glusterfs?



Ciao Stefano,
could you please attach the output of
  gluster volume info engine

adding Kasturi here


[root@ovirt01 ~]# gluster volume info engine

Volume Name: engine
Type: Distribute
Volume ID: 565951c8-977e-4674-b6b2-b4f60551c1d8
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: ovirt01.hawai.lan:/home/glusterfs/engine/brick
Options Reconfigured:
server.event-threads: 4
client.event-threads: 4
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
nfs.disable: off
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 1
features.shard: on
user.cifs: off
features.shard-block-size: 512MB





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




--

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA <https://www.redhat.com/>

TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Self hosted engine fails after 4.2 upgrade

2017-12-21 Thread Stefano Danzi

Solved by installing the glusterfs-gnfs package.
Anyway, it could be nice to move the hosted engine to gluster.
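For anyone hitting the same thing, the sequence that resolved it here,
plus the usual sanity checks (the package name is as reported in this
thread; the checks are generic and details may differ per gluster
version):

yum install glusterfs-gnfs
gluster volume set engine nfs.disable off
gluster volume status engine     # the gluster NFS server should now be listed
showmount -e localhost           # the engine export should be visible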

On 21/12/2017 11:37, Stefano Danzi wrote:



On 21/12/2017 11:30, Simone Tiraboschi wrote:



On Thu, Dec 21, 2017 at 11:16 AM, Stefano Danzi <s.da...@hawai.it> wrote:


Hello!
I have a test system with one physical host and the hosted engine
running on it.
Storage is gluster but the hosted engine mounts it as NFS.

After the upgrade gluster no longer activates NFS.
The command "gluster volume set engine nfs.disable off" doesn't help.

How can I re-enable NFS? Or better, how can I migrate the self-hosted
engine to native glusterfs?



Ciao Stefano,
could you please attach the output of
  gluster volume info engine

adding Kasturi here


[root@ovirt01 ~]# gluster volume info engine

Volume Name: engine
Type: Distribute
Volume ID: 565951c8-977e-4674-b6b2-b4f60551c1d8
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: ovirt01.hawai.lan:/home/glusterfs/engine/brick
Options Reconfigured:
server.event-threads: 4
client.event-threads: 4
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
nfs.disable: off
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 1
features.shard: on
user.cifs: off
features.shard-block-size: 512MB





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Self hosted engine fails after 4.2 upgrade

2017-12-21 Thread Stefano Danzi



On 21/12/2017 11:30, Simone Tiraboschi wrote:



On Thu, Dec 21, 2017 at 11:16 AM, Stefano Danzi <s.da...@hawai.it> wrote:


Hello!
I have a test system with one physical host and the hosted engine
running on it.
Storage is gluster but the hosted engine mounts it as NFS.

After the upgrade gluster no longer activates NFS.
The command "gluster volume set engine nfs.disable off" doesn't help.

How can I re-enable NFS? Or better, how can I migrate the self-hosted
engine to native glusterfs?



Ciao Stefano,
could you please attach the output of
  gluster volume info engine

adding Kasturi here


[root@ovirt01 ~]# gluster volume info engine

Volume Name: engine
Type: Distribute
Volume ID: 565951c8-977e-4674-b6b2-b4f60551c1d8
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: ovirt01.hawai.lan:/home/glusterfs/engine/brick
Options Reconfigured:
server.event-threads: 4
client.event-threads: 4
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
nfs.disable: off
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 1
features.shard: on
user.cifs: off
features.shard-block-size: 512MB





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Self hosted engine fails after 4.2 upgrade

2017-12-21 Thread Stefano Danzi

Hello!
I have a test system with one physical host and the hosted engine running on it.
Storage is gluster but the hosted engine mounts it as NFS.

After the upgrade gluster no longer activates NFS.
The command "gluster volume set engine nfs.disable off" doesn't help.

How can I re-enable NFS? Or better, how can I migrate the self-hosted engine
to native glusterfs?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Self hosted engine fails after 4.2 upgrade

2017-12-21 Thread Stefano Danzi

Hello!
I have a test system with one physical host and the hosted engine running on
it.

Storage is gluster but the hosted engine mounts it as NFS.

After the upgrade gluster no longer activates NFS.
The command "gluster volume set engine nfs.disable off" doesn't help.

How can I re-enable NFS? Or better, how can I migrate the self-hosted engine
to native glusterfs?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Native Access on gluster storage domain

2017-09-11 Thread Stefano Danzi

Your suggestion solved the problem.
In the UI the related flag is still missing, but now VMs are using gfapi.

On 11/09/2017 05:23, Sahina Bose wrote:
You could try to enable the config option for the 4.1 cluster level - 
using engine-config tool from the Hosted Engine VM. This will require 
a restart of the engine service and will enable gfapi access for all 
clusters at 4.1 level though - so try this option if this is acceptable.
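Concretely, that engine-config approach looks something like the
following, run on the Hosted Engine VM (a sketch; LibgfApiSupported is
the usual name of this option, stated here as an assumption, so confirm
it against the option list first):

engine-config -l | grep -i gfapi        # confirm the exact option name
engine-config -s LibgfApiSupported=true --cver=4.1
systemctl restart ovirt-engine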


On Wed, Aug 30, 2017 at 8:02 PM, Stefano Danzi <s.da...@hawai.it> wrote:


Below are the logs.
PS: cluster compatibility level is 4.1.

engine:

2017-08-30 16:26:07,928+02 INFO
[org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-8)
[56d090c5-1097-4641-b745-74af8397d945] Lock Acquired to object
'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
2017-08-30 16:26:07,951+02 WARN
[org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-8)
[56d090c5-1097-4641-b745-74af8397d945] Validation of action
'UpdateCluster' failed for user admin@internal. Reasons:

VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,CLUSTER_CANNOT_UPDATE_SUPPORTED_FEATURES_WITH_LOWER_HOSTS
2017-08-30 16:26:07,952+02 INFO
[org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-8)
[56d090c5-1097-4641-b745-74af8397d945] Lock freed to object
'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'

vdsm:

2017-08-30 16:29:23,310+0200 INFO  (jsonrpc/0)
[jsonrpc.JsonRpcServer] RPC call GlusterHost.list succeeded in
0.15 seconds (__init__:539)
2017-08-30 16:29:23,419+0200 INFO  (jsonrpc/4)
[jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in
0.01 seconds (__init__:539)
2017-08-30 16:29:23,424+0200 INFO  (jsonrpc/3)
[jsonrpc.JsonRpcServer] RPC call Host.getAllVmIoTunePolicies
succeeded in 0.00 seconds (__init__:539)
2017-08-30 16:29:23,814+0200 INFO  (jsonrpc/5)
[jsonrpc.JsonRpcServer] RPC call GlusterHost.list succeeded in
0.15 seconds (__init__:539)
2017-08-30 16:29:24,011+0200 INFO  (Reactor thread)
[ProtocolDetector.AcceptorImpl] Accepted connection from ::1:51862
(protocoldetector:72)
2017-08-30 16:29:24,023+0200 INFO  (Reactor thread)
[ProtocolDetector.Detector] Detected protocol stomp from ::1:51862
(protocoldetector:127)
2017-08-30 16:29:24,024+0200 INFO  (Reactor thread)
[Broker.StompAdapter] Processing CONNECT request (stompreactor:103)
2017-08-30 16:29:24,031+0200 INFO  (JsonRpc (StompReactor))
[Broker.StompAdapter] Subscribe command received (stompreactor:130)
2017-08-30 16:29:24,287+0200 INFO  (jsonrpc/2)
[jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in
0.01 seconds (__init__:539)
2017-08-30 16:29:24,443+0200 INFO  (jsonrpc/7) [vdsm.api] START
getSpmStatus(spUUID=u'0002-0002-0002-0002-01ef',
options=None) from=:::192.168.1.55,46502, flow_id=1f664a9,
task_id=c856903a-0af1-4c0c-8a44-7971fee7dffa (api:46)
2017-08-30 16:29:24,446+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH
getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM',
'spmLver': 1430L}} from=:::192.168.1.55,46502,
flow_id=1f664a9, task_id=c856903a-0af1-4c0c-8a44-7971fee7dffa (api:52)
2017-08-30 16:29:24,447+0200 INFO  (jsonrpc/7)
[jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus
succeeded in 0.00 seconds (__init__:539)
2017-08-30 16:29:24,460+0200 INFO  (jsonrpc/6)
[jsonrpc.JsonRpcServer] RPC call GlusterHost.list succeeded in
0.16 seconds (__init__:539)
2017-08-30 16:29:24,467+0200 INFO  (jsonrpc/1) [vdsm.api] START
getStoragePoolInfo(spUUID=u'0002-0002-0002-0002-01ef',
options=None) from=:::192.168.1.55,46506, flow_id=1f664a9,
task_id=029ec55e-9c47-4a20-be44-8c80fd1fd5ac (api:46)


On 30/08/2017 16:06, Shani Leviim wrote:

Hi Stefano,
Can you please attach your engine and vdsm logs?

Regards,
Shani Leviim

On Wed, Aug 30, 2017 at 12:46 PM, Stefano Danzi <s.da...@hawai.it> wrote:

Hello,
I have a test environment with a single host and the self-hosted
engine running oVirt Engine 4.1.5.2-1.el7.centos.

I want to try the option "Native Access on gluster storage
domain" but I get an error because I have to put the
host in maintenance mode. I can't do that because I have a
single host, so the hosted engine can't be migrated.

Is there a way to change this option but apply it at the next
reboot?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Use

Re: [ovirt-users] Native Access on gluster storage domain

2017-09-01 Thread Stefano Danzi

On host info I can see:

Cluster compatibility level: 3.6,4.0,4.1

Could this be the problem?

On 30/08/2017 16:32, Stefano Danzi wrote:

Below are the logs.
PS: cluster compatibility level is 4.1.

engine:

2017-08-30 16:26:07,928+02 INFO 
[org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-8) 
[56d090c5-1097-4641-b745-74af8397d945] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
2017-08-30 16:26:07,951+02 WARN 
[org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-8) 
[56d090c5-1097-4641-b745-74af8397d945] Validation of action 
'UpdateCluster' failed for user admin@internal. Reasons: 
VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,CLUSTER_CANNOT_UPDATE_SUPPORTED_FEATURES_WITH_LOWER_HOSTS
2017-08-30 16:26:07,952+02 INFO 
[org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-8) 
[56d090c5-1097-4641-b745-74af8397d945] Lock freed to object 
'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'


vdsm:

2017-08-30 16:29:23,310+0200 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] 
RPC call GlusterHost.list succeeded in 0.15 seconds (__init__:539)
2017-08-30 16:29:23,419+0200 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] 
RPC call Host.getAllVmStats succeeded in 0.01 seconds (__init__:539)
2017-08-30 16:29:23,424+0200 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] 
RPC call Host.getAllVmIoTunePolicies succeeded in 0.00 seconds 
(__init__:539)
2017-08-30 16:29:23,814+0200 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] 
RPC call GlusterHost.list succeeded in 0.15 seconds (__init__:539)
2017-08-30 16:29:24,011+0200 INFO  (Reactor thread) 
[ProtocolDetector.AcceptorImpl] Accepted connection from ::1:51862 
(protocoldetector:72)
2017-08-30 16:29:24,023+0200 INFO  (Reactor thread) 
[ProtocolDetector.Detector] Detected protocol stomp from ::1:51862 
(protocoldetector:127)
2017-08-30 16:29:24,024+0200 INFO  (Reactor thread) 
[Broker.StompAdapter] Processing CONNECT request (stompreactor:103)
2017-08-30 16:29:24,031+0200 INFO  (JsonRpc (StompReactor)) 
[Broker.StompAdapter] Subscribe command received (stompreactor:130)
2017-08-30 16:29:24,287+0200 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] 
RPC call Host.getHardwareInfo succeeded in 0.01 seconds (__init__:539)
2017-08-30 16:29:24,443+0200 INFO  (jsonrpc/7) [vdsm.api] START 
getSpmStatus(spUUID=u'0002-0002-0002-0002-01ef', 
options=None) from=:::192.168.1.55,46502, flow_id=1f664a9, 
task_id=c856903a-0af1-4c0c-8a44-7971fee7dffa (api:46)
2017-08-30 16:29:24,446+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH 
getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 
'spmLver': 1430L}} from=:::192.168.1.55,46502, flow_id=1f664a9, 
task_id=c856903a-0af1-4c0c-8a44-7971fee7dffa (api:52)
2017-08-30 16:29:24,447+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] 
RPC call StoragePool.getSpmStatus succeeded in 0.00 seconds (__init__:539)
2017-08-30 16:29:24,460+0200 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] 
RPC call GlusterHost.list succeeded in 0.16 seconds (__init__:539)
2017-08-30 16:29:24,467+0200 INFO  (jsonrpc/1) [vdsm.api] START 
getStoragePoolInfo(spUUID=u'0002-0002-0002-0002-01ef', 
options=None) from=:::192.168.1.55,46506, flow_id=1f664a9, 
task_id=029ec55e-9c47-4a20-be44-8c80fd1fd5ac (api:46)


On 30/08/2017 16:06, Shani Leviim wrote:

Hi Stefano,
Can you please attach your engine and vdsm logs?

Regards,
Shani Leviim

On Wed, Aug 30, 2017 at 12:46 PM, Stefano Danzi <s.da...@hawai.it> wrote:


Hello,
I have a test environment with a single host and the self-hosted
engine running oVirt Engine 4.1.5.2-1.el7.centos.

I want to try the option "Native Access on gluster storage
domain" but I get an error because I have to put the
host in maintenance mode. I can't do that because I have a single
host, so the hosted engine can't be migrated.

Is there a way to change this option but apply it at the next reboot?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--

Stefano Danzi
Head of Information Systems

HAWAI ITALIA S.r.l.
Via Forte Garofolo, 16
37057 S. Giovanni Lupatoto Verona Italia

P. IVA 01680700232

tel. +39/045/8266400
fax +39/045/8266401
Web www.hawai.it

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Native Access on gluster storage domain

2017-08-30 Thread Stefano Danzi

Below are the logs.
PS: cluster compatibility level is 4.1.

engine:

2017-08-30 16:26:07,928+02 INFO 
[org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-8) 
[56d090c5-1097-4641-b745-74af8397d945] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
2017-08-30 16:26:07,951+02 WARN 
[org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-8) 
[56d090c5-1097-4641-b745-74af8397d945] Validation of action 
'UpdateCluster' failed for user admin@internal. Reasons: 
VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,CLUSTER_CANNOT_UPDATE_SUPPORTED_FEATURES_WITH_LOWER_HOSTS
2017-08-30 16:26:07,952+02 INFO 
[org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-8) 
[56d090c5-1097-4641-b745-74af8397d945] Lock freed to object 
'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'


vdsm:

2017-08-30 16:29:23,310+0200 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] 
RPC call GlusterHost.list succeeded in 0.15 seconds (__init__:539)
2017-08-30 16:29:23,419+0200 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] 
RPC call Host.getAllVmStats succeeded in 0.01 seconds (__init__:539)
2017-08-30 16:29:23,424+0200 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] 
RPC call Host.getAllVmIoTunePolicies succeeded in 0.00 seconds 
(__init__:539)
2017-08-30 16:29:23,814+0200 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] 
RPC call GlusterHost.list succeeded in 0.15 seconds (__init__:539)
2017-08-30 16:29:24,011+0200 INFO  (Reactor thread) 
[ProtocolDetector.AcceptorImpl] Accepted connection from ::1:51862 
(protocoldetector:72)
2017-08-30 16:29:24,023+0200 INFO  (Reactor thread) 
[ProtocolDetector.Detector] Detected protocol stomp from ::1:51862 
(protocoldetector:127)
2017-08-30 16:29:24,024+0200 INFO  (Reactor thread) 
[Broker.StompAdapter] Processing CONNECT request (stompreactor:103)
2017-08-30 16:29:24,031+0200 INFO  (JsonRpc (StompReactor)) 
[Broker.StompAdapter] Subscribe command received (stompreactor:130)
2017-08-30 16:29:24,287+0200 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] 
RPC call Host.getHardwareInfo succeeded in 0.01 seconds (__init__:539)
2017-08-30 16:29:24,443+0200 INFO  (jsonrpc/7) [vdsm.api] START 
getSpmStatus(spUUID=u'0002-0002-0002-0002-01ef', 
options=None) from=:::192.168.1.55,46502, flow_id=1f664a9, 
task_id=c856903a-0af1-4c0c-8a44-7971fee7dffa (api:46)
2017-08-30 16:29:24,446+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH 
getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 
'spmLver': 1430L}} from=:::192.168.1.55,46502, flow_id=1f664a9, 
task_id=c856903a-0af1-4c0c-8a44-7971fee7dffa (api:52)
2017-08-30 16:29:24,447+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] 
RPC call StoragePool.getSpmStatus succeeded in 0.00 seconds (__init__:539)
2017-08-30 16:29:24,460+0200 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] 
RPC call GlusterHost.list succeeded in 0.16 seconds (__init__:539)
2017-08-30 16:29:24,467+0200 INFO  (jsonrpc/1) [vdsm.api] START 
getStoragePoolInfo(spUUID=u'0002-0002-0002-0002-01ef', 
options=None) from=:::192.168.1.55,46506, flow_id=1f664a9, 
task_id=029ec55e-9c47-4a20-be44-8c80fd1fd5ac (api:46)


On 30/08/2017 16:06, Shani Leviim wrote:

Hi Stefano,
Can you please attach your engine and vdsm logs?

Regards,
Shani Leviim

On Wed, Aug 30, 2017 at 12:46 PM, Stefano Danzi <s.da...@hawai.it> wrote:


Hello,
I have a test environment with a single host and the self-hosted engine
running oVirt Engine 4.1.5.2-1.el7.centos.

I want to try the option "Native Access on gluster storage domain"
but I get an error because I have to put the
host in maintenance mode. I can't do that because I have a single
host, so the hosted engine can't be migrated.

Is there a way to change this option but apply it at the next reboot?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Native Access on gluster storage domain

2017-08-30 Thread Stefano Danzi

Hello,
I have a test environment with a single host and the self-hosted engine
running oVirt Engine 4.1.5.2-1.el7.centos.

I want to try the option "Native Access on gluster storage domain" but I
get an error because I have to put the
host in maintenance mode. I can't do that because I have a single host,
so the hosted engine can't be migrated.

Is there a way to change this option but apply it at the next reboot?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Leaving maintenace mode

2017-01-10 Thread Stefano Danzi

Hi!

You are right! But I think it isn't correct that the command fails if the
ISO domain isn't available.
It's OK if the command fails when the self-hosted engine storage is not
available, but not for the ISO domain.



On 10/01/2017 17.47, Martin Sivak wrote:

Hi,

the appliance we publish actually does not enable the ISO domain on
the VM anymore for exactly this reason. I believe we mention this in
the documentation somewhere too.

Please remove or disable all domains that are served from inside
the engine VM.

Best regards

--
Martin Sivak
SLA / oVirt

On Tue, Jan 10, 2017 at 3:41 PM, Stefano Danzi <s.da...@hawai.it> wrote:

Hi!

I have a testing oVirt installation with only one host, running the
self-hosted engine.

If I put the host in global maintenance mode:

  hosted-engine  --set-maintenance --mode=global

then ssh into the self-hosted engine VM and shut it down with 'init 0';
when I run on the host:

hosted-engine  --set-maintenance --mode=none

I have:

Traceback (most recent call last):
   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
 "__main__", fname, loader, pkg_name)
   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
 exec code in run_globals
   File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py",
line 73, in 
 if not maintenance.set_mode(sys.argv[1]):
   File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py",
line 61, in set_mode
 value=m_global,
   File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 259, in set_maintenance_mode
 str(value))
   File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 202, in set_global_md_flag
 self._configure_broker_conn(broker)
   File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 180, in _configure_broker_conn
 dom_type=dom_type)
   File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
line 176, in set_storage_domain
 .format(sd_type, options, e))
ovirt_hosted_engine_ha.lib.exceptions.RequestError: Failed to set storage
domain FilesystemBackend, options {'dom_type': 'nfs3', 'sd_uuid':
'46f55a31-f35f-465c-b3e2-df45c05e06a7'}: Connection timed out


This is because the host has the ISO domain NFS export mounted but the
self-hosted engine VM is not running, I think...
So I have to reboot the host before changing the maintenance mode.
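A reboot may be avoidable: a stale mount of an export served from the
engine VM can usually be forced off and the command retried. A sketch
only, with an illustrative mount path:

umount -f -l /rhev/data-center/mnt/<engine-vm-fqdn>:_<iso_export>
hosted-engine --set-maintenance --mode=none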


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Leaving maintenace mode

2017-01-10 Thread Stefano Danzi

Hi!

I have a testing oVirt installation with only one host, running the
self-hosted engine.

If I put the host in global maintenance mode:

 hosted-engine  --set-maintenance --mode=global

then ssh into the self-hosted engine VM and shut it down with 'init 0';
when I run on the host:

hosted-engine  --set-maintenance --mode=none

I have:

Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py", 
line 73, in 

if not maintenance.set_mode(sys.argv[1]):
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py", 
line 61, in set_mode

value=m_global,
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", 
line 259, in set_maintenance_mode

str(value))
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", 
line 202, in set_global_md_flag

self._configure_broker_conn(broker)
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", 
line 180, in _configure_broker_conn

dom_type=dom_type)
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", 
line 176, in set_storage_domain

.format(sd_type, options, e))
ovirt_hosted_engine_ha.lib.exceptions.RequestError: Failed to set 
storage domain FilesystemBackend, options {'dom_type': 'nfs3', 
'sd_uuid': '46f55a31-f35f-465c-b3e2-df45c05e06a7'}: Connection timed out



This is because the host has the ISO domain NFS export mounted but the
self-hosted engine VM is not running, I think...

So I have to reboot the host before changing the maintenance mode.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to start self hosted engine after upgrading oVirt to 4.0

2016-06-29 Thread Stefano Danzi


Hi to all!!

On 28/06/2016 17.24, Stefano Danzi wrote:

[CUT]
We have two issues here. First is that
https://gerrit.ovirt.org/gitweb?p=ovirt-hosted-engine-ha.git;a=blob;f=ovirt_hosted_engine_ha/lib/storage_backends.py;h=f2fbdc43d0e4afd7539a3a1de75de0cb07bdca9d;hb=HEAD#l271 


is still using vdscli to contact vdsm, instead of the preferred
jsonrpccli.

The second is that vdscli.connect's heuristic ends up reading the local
server address from vdsm config, where it finds the default ipv6-local
address of "::".

Please try setting

[addresses]
management_ip='0.0.0.0'

in your /etc/vdsm/vdsm.conf instead of the crontab hacks.


this solves the issue, but I still don't have a default gateway on the
ovirtmgmt interface.
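After applying that vdsm.conf change, the quick check is (a sketch;
54321 is vdsm's default port):

systemctl restart vdsmd
netstat -nltp | grep 54321   # should now show 0.0.0.0:54321 rather than :::54321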




Using this configuration everything works, but every 50 minutes I receive
3 emails from the broker:


- ovirt-hosted-engine state transition StartState-ReinitializeFSM
- ovirt-hosted-engine state transition ReinitializeFSM-EngineStarting
- ovirt-hosted-engine state transition EngineStarting-EngineUp

The engine uptime is about 15 hours, so the engine VM is really not
being rebooted.

Agent log:


MainThread::INFO::2016-06-29 08:46:10,453::hosted_engine::461::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state ReinitializeFSM (score: 0)
MainThread::INFO::2016-06-29 08:46:20,547::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Trying: notify time=1467182780.55 type=state_transition detail=ReinitializeFSM-EngineStarting hostname='ovirt01.hawai.lan'
MainThread::INFO::2016-06-29 08:46:20,732::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (ReinitializeFSM-EngineStarting) sent? sent
MainThread::INFO::2016-06-29 08:46:20,733::hosted_engine::612::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) Initializing VDSM
MainThread::INFO::2016-06-29 08:46:24,430::hosted_engine::639::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images) Connecting the storage
MainThread::INFO::2016-06-29 08:46:24,431::storage_server::218::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Connecting storage server
MainThread::INFO::2016-06-29 08:46:31,764::storage_server::225::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Connecting storage server
MainThread::INFO::2016-06-29 08:46:31,780::storage_server::232::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Refreshing the storage domain
MainThread::INFO::2016-06-29 08:46:31,945::hosted_engine::666::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images) Preparing images
MainThread::INFO::2016-06-29 08:46:31,946::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images) Preparing images
MainThread::INFO::2016-06-29 08:46:35,895::hosted_engine::669::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images) Reloading vm.conf from the shared storage domain
MainThread::INFO::2016-06-29 08:46:35,896::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::INFO::2016-06-29 08:46:39,621::ovf_store::102::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:8d07965c-a5c4-4057-912d-901f80cf246c, volUUID:ce3aa63e-e1c4-498e-bdca-9d2e9f47f0f9
MainThread::INFO::2016-06-29 08:46:39,667::ovf_store::102::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:bd9aaf0b-8435-4d78-9871-8c7a7f7fa02a, volUUID:3c477b06-063e-4f01-bd05-84c7d467742b
MainThread::INFO::2016-06-29 08:46:39,760::ovf_store::111::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2016-06-29 08:46:39,761::ovf_store::118::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7/images/bd9aaf0b-8435-4d78-9871-8c7a7f7fa02a/3c477b06-063e-4f01-bd05-84c7d467742b
MainThread::INFO::2016-06-29 08:46:39,772::config::226::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Found an OVF for HE VM, trying to convert
MainThread::INFO::2016-06-29 08:46:39,774::config::231::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Got vm.conf from OVF_STORE
MainThread::INFO::2016-06-29 08:46:43,489::hosted_engine::461::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineStarting (score: 3400)
MainThread::INFO::2016-06-29 08:46:53,605::state_decorators::88::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout cle

Re: [ovirt-users] Failed to start self hosted engine after upgrading oVirt to 4.0

2016-06-28 Thread Stefano Danzi



On 28/06/2016 15.02, Dan Kenigsberg wrote:

On Mon, Jun 27, 2016 at 10:08:33AM +0200, Stefano Danzi wrote:

Hi!


Thanks for the detailed logging!


The broker error is:

==> /var/log/ovirt-hosted-engine-ha/agent.log <==
MainThread::INFO::2016-06-27 
09:27:03,311::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Success, id 140293563619152

==> /var/log/ovirt-hosted-engine-ha/broker.log <==
Thread-25::ERROR::2016-06-27 
09:27:03,314::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle)
Error while serving connection
Traceback (most recent call last):
   File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py",
line 166, in handle
 data)
   File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py",
line 299, in _dispatch
 .set_storage_domain(client, sd_type, **options)
   File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
line 66, in set_storage_domain
 self._backends[client].connect()
   File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
line 400, in connect
 volUUID=volume.volume_uuid
   File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
line 245, in _get_volume_path
 volUUID

We have two issues here. First is that
https://gerrit.ovirt.org/gitweb?p=ovirt-hosted-engine-ha.git;a=blob;f=ovirt_hosted_engine_ha/lib/storage_backends.py;h=f2fbdc43d0e4afd7539a3a1de75de0cb07bdca9d;hb=HEAD#l271
is still using vdscli to contact vdsm, instead of the preferred
jsonrpccli.

The second is that vdscli.connect's heuristic ends up reading the local
server address from vdsm config, where it finds the default ipv6-local
address of "::".

Please try setting

[addresses]
management_ip='0.0.0.0'

in your /etc/vdsm/vdsm.conf instead of the crontab hacks.


this solves the issue, but I still don't have a default gateway on the
ovirtmgmt interface.


Would you please open a bug about the two issues
(ovirt-hosted-engine-ha and vdsm networking)?


Here: https://bugzilla.redhat.com/show_bug.cgi?id=1350883


Would you report the output of `netstat -nltp` on your host, as I do
not completely understand why no interface (not even the loopback one)
was listening on IPv6?

Here:
[root@ovirt01 ~]# netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:39373           0.0.0.0:*               LISTEN      2200/rpc.statd
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      969/rpcbind
tcp        0      0 0.0.0.0:54322           0.0.0.0:*               LISTEN      943/python
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1571/sshd
tcp        0      0 0.0.0.0:858             0.0.0.0:*               LISTEN      1946/glusterfs
tcp        0      0 0.0.0.0:49152           0.0.0.0:*               LISTEN      1929/glusterfsd
tcp        0      0 0.0.0.0:49153           0.0.0.0:*               LISTEN      1968/glusterfsd
tcp        0      0 0.0.0.0:2049            0.0.0.0:*               LISTEN      1946/glusterfs
tcp        0      0 0.0.0.0:38465           0.0.0.0:*               LISTEN      1946/glusterfs
tcp        0      0 0.0.0.0:38466           0.0.0.0:*               LISTEN      1946/glusterfs
tcp        0      0 0.0.0.0:16514           0.0.0.0:*               LISTEN      1603/libvirtd
tcp        0      0 0.0.0.0:38468           0.0.0.0:*               LISTEN      1946/glusterfs
tcp        0      0 0.0.0.0:38469           0.0.0.0:*               LISTEN      1946/glusterfs
tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN      1585/glusterd
tcp6       0      0 :::54321                :::*                    LISTEN      1893/python
tcp6       0      0 :::22                   :::*                    LISTEN      1571/sshd
tcp6       0      0 :::16514                :::*                    LISTEN      1603/libvirtd




Regards,
Dan.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to start self hosted engine after upgrading oVirt to 4.0

2016-06-27 Thread Stefano Danzi
RTIFICATE-\nMIIE"..., 4096) = 1574
[pid  3742] read(8, "", 4096)   = 0
[pid  3742] close(8)= 0
[pid  3742] munmap(0x7f98bded, 4096) = 0
[pid  3742] open("/etc/pki/vdsm/keys/vdsmkey.pem", O_RDONLY) = 8
[pid  3742] fstat(8, {st_mode=S_IFREG|0440, st_size=1675, ...}) = 0
[pid  3742] mmap(NULL, 4096, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f98bded

[pid  3742] read(8, "-BEGIN RSA PRIVATE KEY-\n"..., 4096) = 1675
[pid  3742] close(8)= 0
[pid  3742] munmap(0x7f98bded, 4096) = 0
[pid  3742] socket(PF_INET6, SOCK_STREAM, IPPROTO_TCP) = 8
[pid  3742] connect(8, {sa_family=AF_INET6, sin6_port=htons(54321), inet_pton(AF_INET6, "::", _addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 ENETUNREACH (Network is unreachable)
[pid  3742] close(8)= 0
[pid  3742] 
stat("/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", 
{st_mode=S_IFREG|0644, st_size=14078, ...}) = 0
[pid  3742] 
stat("/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", 
{st_mode=S_IFREG|0644, st_size=14078, ...}) = 0
[pid  3742] 
stat("/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", 
{st_mode=S_IFREG|0644, st_size=8885, ...}) = 0
[pid  3742] 
stat("/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", 
{st_mode=S_IFREG|0644, st_size=32723, ...}) = 0
[pid  3742] 
stat("/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", 
{st_mode=S_IFREG|0644, st_size=32723, ...}) = 0
[pid  3742] stat("/usr/lib64/python2.7/xmlrpclib.py", 
{st_mode=S_IFREG|0644, st_size=51801, ...}) = 0
[pid  3742] stat("/usr/lib64/python2.7/xmlrpclib.py", 
{st_mode=S_IFREG|0644, st_size=51801, ...}) = 0
[pid  3742] stat("/usr/lib64/python2.7/xmlrpclib.py", 
{st_mode=S_IFREG|0644, st_size=51801, ...}) = 0
[pid  3742] stat("/usr/lib64/python2.7/xmlrpclib.py", 
{st_mode=S_IFREG|0644, st_size=51801, ...}) = 0
[pid  3742] stat("/usr/lib64/python2.7/xmlrpclib.py", 
{st_mode=S_IFREG|0644, st_size=51801, ...}) = 0
[pid  3742] stat("/usr/lib64/python2.7/httplib.py", 
{st_mode=S_IFREG|0644, st_size=48234, ...}) = 0
[pid  3742] stat("/usr/lib64/python2.7/httplib.py", 
{st_mode=S_IFREG|0644, st_size=48234, ...}) = 0
[pid  3742] stat("/usr/lib64/python2.7/httplib.py", 
{st_mode=S_IFREG|0644, st_size=48234, ...}) = 0
[pid  3742] stat("/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", 
{st_mode=S_IFREG|0644, st_size=10720, ...}) = 0
[pid  3742] stat("/usr/lib64/python2.7/socket.py", 
{st_mode=S_IFREG|0644, st_size=20512, ...}) = 0
[pid  3742] sendto(3, "<11>ovirt-ha-broker ovirt_hosted"..., 1970, 0, 
NULL, 0) = 1970
[pid  3742] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2652, 
...}) = 0

[pid  3742] write(4, "Thread-49::ERROR::2016-06-27 09:"..., 2012) = 2012
[pid  3742] write(2, "ERROR:ovirt_hosted_engine_ha.bro"..., 1950) = 1950
[pid  3742] poll([{fd=6, events=POLLOUT}], 1, 700) = 1 ([{fd=6, 
revents=POLLOUT}])
[pid  3742] sendto(6, "failure \n", 31, 0, NULL, 
0) = 31

[pid  3742] fcntl(6, F_GETFL)   = 0x802 (flags O_RDWR|O_NONBLOCK)
[pid  3742] fcntl(6, F_SETFL, O_RDWR)   = 0
[pid  3742] recvfrom(6,  

Enabling IPv6 only on the ovirtmgmt interface, the broker still reports the
python exception, but in the vdsm log I see:


jsonrpc.Executor/4::DEBUG::2016-06-27 
10:01:36,697::__init__::550::jsonrpc.JsonRpcServer::(_serveRequest) 
Return 'Host.getHardwareInfo' in bridge with {'systemProductName': 'To 
be filled by O.E.M.', 'systemSerialNumber': 'To be filled by O.E.M.', 
'systemFamily': 'To be filled by O.E.M.', 'systemVersion': 'To be filled 
by O.E.M.', 'systemUUID': 'F90B3100-D83F-11DD-8DD8-40167E3684F1', 
'systemManufacturer': 'To be filled by O.E.M.'}
Reactor thread::INFO::2016-06-27 
10:01:38,703::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) 
Accepting connection from :::192.168.1.50:56228
Reactor thread::DEBUG::2016-06-27 
10:01:38,713::protocoldetector::85::ProtocolDetector.Detector::(__init__) 
Using required_size=11
Reactor thread::INFO::2016-06-27 
10:01:38,714::protocoldetector::121::ProtocolDetector.Detector::(handle_read) 
Detected protocol stomp from :::192.168.1.50:56228
Reactor thread::INFO::2016-06-27 
10:01:38,714::stompreactor::101::Broker.StompAdapter::(_cmd_connect) 
Processing CONNECT request
JsonRpc (StompReactor)::INFO::2016-06-27 
10:01:38,715::stompreactor::128::Broker.StompAdapter::(_cmd_subscribe) 
Subscribe command received
Reactor thread::DEBUG::2016-06-27 
10:01:38,716::stompreactor::482::protocoldetector.StompDetector::(handle_socket) 
Stomp detected from (':::192.168.1.50', 56228)
jsonrpc.Executor/5::DE

Re: [ovirt-users] Failed to start self hosted engine after upgrading oVirt to 4.0

2016-06-24 Thread Stefano Danzi

Hi!!

I found a workaround.

The broker process tries to connect to vdsm's IPv4 host address using an
IPv6 connection (I noticed that by stracing the process), but IPv6 is not
initialized at boot. (Why connect to an IPv4 address using IPv6?)

I added the following lines to crontab:

@reboot echo 'echo 0 > /proc/sys/net/ipv6/conf/lo/disable_ipv6' | 
/usr/bin/at now+1 minutes
@reboot echo 'echo 0 > /proc/sys/net/ipv6/conf/ovirtmgmt/disable_ipv6' | 
/usr/bin/at now+1 minutes
@reboot echo '/usr/sbin/route add default gw 192.168.1.254'  | 
/usr/bin/at now+1 minutes
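A less fragile way to get the same effect across reboots would be a
sysctl drop-in instead of at-jobs from crontab; a sketch, assuming
standard sysctl handling on CentOS 7:

cat > /etc/sysctl.d/99-enable-ipv6.conf <<'EOF'
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.ovirtmgmt.disable_ipv6 = 0
EOF
sysctl --system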




On 24/06/2016 12.36, Stefano Danzi wrote:
How can I change the self-hosted engine configuration to mount gluster
storage directly, without passing through gluster NFS?


Maybe this solves it.
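On the "mount directly" question: the hosted engine's storage definition
lives in /etc/ovirt-hosted-engine/hosted-engine.conf, so switching from
NFS to native glusterfs would involve something like the following. A
sketch only; the key names are as commonly found in that file and this
is untested here:

grep -E '^(storage|domainType)' /etc/ovirt-hosted-engine/hosted-engine.conf
# storage=ovirt01.hawai.lan:/engine
# domainType=glusterfs      # instead of nfs3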

On 24/06/2016 10.16, Stefano Danzi wrote:
After an additional yum clean all && yum update, some other rpms were
updated.

Something changed.
My setup has engine storage on gluster, but mounted with NFS.
Now the gluster daemon doesn't start automatically at boot. After starting
gluster manually the error is the same:

==> /var/log/ovirt-hosted-engine-ha/broker.log <==
Thread-19::ERROR::2016-06-24 10:10:36,758::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle) Error while serving connection
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 166, in handle
    data)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 299, in _dispatch
    .set_storage_domain(client, sd_type, **options)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 66, in set_storage_domain
    self._backends[client].connect()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 400, in connect
    volUUID=volume.volume_uuid
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 245, in _get_volume_path
    volUUID
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
    self.send_content(h, request_body)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
    self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 797, in send
    self.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect
    sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
    raise err
error: [Errno 101] Network is unreachable


VDSM.log

jsonrpc.Executor/5::DEBUG::2016-06-24 10:10:21,694::task::995::Storage.TaskManager.Task::(_decref) Task=`5c3b6f30-d3a8-431e-9dd0-8df79b171709`::ref 0 aborting False
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Following parameters ['type'] were not recognized
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Provided value "2" not defined in DiskType enum for Volume.getInfo
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter capacity is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Required property allocType is not provided when calling Volume.getInfo
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter mtime is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter ctime is not int type
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,695::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter truesize is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,695::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter apparentsize is not uint type
jsonrpc.Executor/5::DEBUG::2016-06-24 10:10:21,695::__init__::550::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain': '46f55a31-f35f-465c-b3e2-df45c05e06a7'

Re: [ovirt-users] Failed to start self hosted engine after upgrading oVirt to 4.0

2016-06-24 Thread Stefano Danzi
How can I change the self-hosted engine configuration to mount the gluster
storage directly, without passing through gluster NFS?


Maybe this solves it:

On 24/06/2016 10.16, Stefano Danzi wrote:
After an additional yum clean all && yum update, some other rpms were
updated.

Something changed.
My setup has engine storage on gluster, but mounted with NFS.
Now the gluster daemon doesn't start automatically at boot. After starting
gluster manually the error is the same:


==> /var/log/ovirt-hosted-engine-ha/broker.log <==
Thread-19::ERROR::2016-06-24 10:10:36,758::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle) Error while serving connection
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 166, in handle
    data)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 299, in _dispatch
    .set_storage_domain(client, sd_type, **options)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 66, in set_storage_domain
    self._backends[client].connect()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 400, in connect
    volUUID=volume.volume_uuid
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 245, in _get_volume_path
    volUUID
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
    self.send_content(h, request_body)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
    self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 797, in send
    self.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect
    sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
    raise err
error: [Errno 101] Network is unreachable


VDSM.log

jsonrpc.Executor/5::DEBUG::2016-06-24 10:10:21,694::task::995::Storage.TaskManager.Task::(_decref) Task=`5c3b6f30-d3a8-431e-9dd0-8df79b171709`::ref 0 aborting False
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Following parameters ['type'] were not recognized
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Provided value "2" not defined in DiskType enum for Volume.getInfo
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter capacity is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Required property allocType is not provided when calling Volume.getInfo
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter mtime is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter ctime is not int type
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,695::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter truesize is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,695::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter apparentsize is not uint type
jsonrpc.Executor/5::DEBUG::2016-06-24 10:10:21,695::__init__::550::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain': '46f55a31-f35f-465c-b3e2-df45c05e06a7', 'voltype': 'LEAF', 'description': 'hosted-engine.lockspace', 'parent': '--0000--', 'format': 'RAW', 'image': '6838c974-7656-4b40-87cc-f562ff0b2a4c', 'ctime': '1423074433', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1048576', 'children': [], 'pool': '', 'capacity': '1048576', 'uuid': u'c66a14d3-112a-4104-9025-76bb2e7ad9f1', 'truesize': '1048576', 'type': 'PREALLOCATED'}
JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:36,514::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: (104, 'Connection reset by peer')
JsonRpc (StompReactor)::WARNING:

Re: [ovirt-users] Failed to start self hosted engine after upgrading oVirt to 4.0

2016-06-24 Thread Stefano Danzi
error during sending data: (104, 'Connection reset by peer')
JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:43,959::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: bad write retry
JsonRpc (StompReactor)::WARNING::2016-06-24 10:10:43,959::betterAsyncore::154::vds.dispatcher::(log_info) unhandled close event
JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:47,859::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: (104, 'Connection reset by peer')
JsonRpc (StompReactor)::WARNING::2016-06-24 10:10:47,860::betterAsyncore::154::vds.dispatcher::(log_info) unhandled close event
JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:51,725::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: (104, 'Connection reset by peer')
JsonRpc (StompReactor)::WARNING::2016-06-24 10:10:51,726::betterAsyncore::154::vds.dispatcher::(log_info) unhandled close event
Reactor thread::INFO::2016-06-24 10:10:53,851::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from :::192.168.1.50:48554
Reactor thread::DEBUG::2016-06-24 10:10:53,860::protocoldetector::85::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2016-06-24 10:10:53,861::protocoldetector::121::ProtocolDetector.Detector::(handle_read) Detected protocol stomp from :::192.168.1.50:48554
Reactor thread::INFO::2016-06-24 10:10:53,862::stompreactor::101::Broker.StompAdapter::(_cmd_connect) Processing CONNECT request
Reactor thread::DEBUG::2016-06-24 10:10:53,862::stompreactor::482::protocoldetector.StompDetector::(handle_socket) Stomp detected from (':::192.168.1.50', 48554)




On 24/06/2016 8.18, Sandro Bonazzola wrote:



On Thu, Jun 23, 2016 at 11:46 PM, Stefano Danzi <s.da...@hawai.it> wrote:


Hi!
After cleaning the metadata, yum did update vdsm:

[root@ovirt01 ~]# rpm -qva | grep vdsm
vdsm-yajsonrpc-4.18.4.1-0.el7.centos.noarch
vdsm-infra-4.18.4.1-0.el7.centos.noarch
vdsm-cli-4.18.4.1-0.el7.centos.noarch
vdsm-python-4.18.4.1-0.el7.centos.noarch
vdsm-hook-vmfex-dev-4.18.4.1-0.el7.centos.noarch
vdsm-xmlrpc-4.18.4.1-0.el7.centos.noarch
vdsm-4.18.4.1-0.el7.centos.x86_64
vdsm-api-4.18.4.1-0.el7.centos.noarch
vdsm-gluster-4.18.4.1-0.el7.centos.noarch
vdsm-jsonrpc-4.18.4.1-0.el7.centos.noarch

But this does not solve the issue:

- The host has no default gateway after a reboot
- The self-hosted engine doesn't start.


Martin, Dan, can you please look into this?
Stefano, can you please share a full sos report from the host?


vdsm.log:
    
https://drive.google.com/file/d/0ByMG4sDqvlZcVEJ5YVI1UWxrdE0/view?usp=sharing



On 2016-06-23 21:41 Sandro Bonazzola wrote:

On Thu, Jun 23, 2016 at 6:36 PM, Stefano Danzi <s.da...@hawai.it>
wrote:

Hi!
I've just upgraded oVirt from 3.6 to 4.0 and I'm not able
to start the self-hosted engine.


Hi Stefano, can you please try "yum clean metadata" and "yum update"
again?
You should get vdsm 4.18.4.1; please let us know if this solves
your issue.




--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


--

Stefano Danzi
Responsabile sistemi informativi

HAWAI ITALIA S.r.l.
Via Forte Garofolo, 16
37057 S. Giovanni Lupatoto Verona Italia

P. IVA 01680700232

tel. +39/045/8266400
fax +39/045/8266401
Web www.hawai.it

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to start self hosted engine after upgrading oVirt to 4.0

2016-06-23 Thread Stefano Danzi

Hi!
After cleaning the metadata, yum did update vdsm:

[root@ovirt01 ~]# rpm -qva | grep vdsm
vdsm-yajsonrpc-4.18.4.1-0.el7.centos.noarch
vdsm-infra-4.18.4.1-0.el7.centos.noarch
vdsm-cli-4.18.4.1-0.el7.centos.noarch
vdsm-python-4.18.4.1-0.el7.centos.noarch
vdsm-hook-vmfex-dev-4.18.4.1-0.el7.centos.noarch
vdsm-xmlrpc-4.18.4.1-0.el7.centos.noarch
vdsm-4.18.4.1-0.el7.centos.x86_64
vdsm-api-4.18.4.1-0.el7.centos.noarch
vdsm-gluster-4.18.4.1-0.el7.centos.noarch
vdsm-jsonrpc-4.18.4.1-0.el7.centos.noarch

But this does not solve the issue:

- The host has no default gateway after a reboot
- The self-hosted engine doesn't start.

vdsm.log:
https://drive.google.com/file/d/0ByMG4sDqvlZcVEJ5YVI1UWxrdE0/view?usp=sharing

Il 2016-06-23 21:41 Sandro Bonazzola ha scritto:

On Thu, Jun 23, 2016 at 6:36 PM, Stefano Danzi <s.da...@hawai.it>
wrote:


Hi!
I've just upgraded oVirt from 3.6 to 4.0 and I'm not able to start
the self-hosted engine.


Hi Stefano, can you please try "yum clean metadata" and "yum update"
again?
You should get vdsm 4.18.4.1; please let us know if this solves your
issue.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Failed to start self hosted engine after upgrading oVirt to 4.0

2016-06-23 Thread Stefano Danzi


Hi!
I've just upgraded oVirt from 3.6 to 4.0 and I'm not able to start the self-
hosted engine.

The first thing is that the host network loses the default gateway
configuration. But this is not the problem.

Logs:

==> /var/log/ovirt-hosted-engine-ha/agent.log <==
MainThread::INFO::2016-06-23 
18:28:40,833::hosted_engine::669::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
 Reloading vm.conf from the shared storage domain
MainThread::INFO::2016-06-23 
18:28:40,833::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
 Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::INFO::2016-06-23 
18:28:44,535::ovf_store::102::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
 Found OVF_STORE: imgUUID:8d07965c-a5c4-4057-912d-901f80cf246c, 
volUUID:ce3aa63e-e1c4-498e-bdca-9d2e9f47f0f9
MainThread::INFO::2016-06-23 
18:28:44,582::ovf_store::102::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
 Found OVF_STORE: imgUUID:bd9aaf0b-8435-4d78-9871-8c7a7f7fa02a, 
volUUID:3c477b06-063e-4f01-bd05-84c7d467742b
MainThread::INFO::2016-06-23 
18:28:44,674::ovf_store::111::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
 Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2016-06-23 
18:28:44,675::ovf_store::118::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
 OVF_STORE volume path: 
/rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7/images/bd9aaf0b-8435-4d78-9871-8c7a7f7fa02a/3c477b06-063e-4f01-bd05-84c7d467742b
MainThread::INFO::2016-06-23 
18:28:44,682::config::226::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
 Found an OVF for HE VM, trying to convert
MainThread::INFO::2016-06-23 
18:28:44,684::config::231::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
 Got vm.conf from OVF_STORE
MainThread::INFO::2016-06-23 
18:28:44,684::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
 Initializing ha-broker connection
MainThread::INFO::2016-06-23 
18:28:44,685::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor ping, options {'addr': '192.168.1.254'}

==> /var/log/ovirt-hosted-engine-ha/broker.log <==
Thread-25::ERROR::2016-06-23 
18:28:44,697::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle)
 Error while serving connection
Traceback (most recent call last):
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", 
line 166, in handle
data)
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", 
line 299, in _dispatch
.set_storage_domain(client, sd_type, **options)
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
 line 66, in set_storage_domain
self._backends[client].connect()
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
 line 400, in connect
volUUID=volume.volume_uuid
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
 line 245, in _get_volume_path
volUUID
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
self.send_content(h, request_body)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
connection.endheaders(request_body)
  File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 797, in send
self.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect
sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
raise err
error: [Errno 101] Network is unreachable

==> /var/log/ovirt-hosted-engine-ha/agent.log <==
MainThread::INFO::2016-06-23 
18:28:44,697::hosted_engine::602::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
 Failed set the storage domain: 'Failed to set storage domain VdsmBackend, options 
{'hosted-engine.lockspace': 
'7B22696D6167655F75756964223A202265663131373139322D623564662D346534362D383939622D6262663862663135222C202270617468223A206E756C6C2C2022766F6C756D655F75756964223A202230613363393433652D633032392D343134372D623364342D396366353364663161356262227D',
 'sp_uuid': 

[ovirt-users] R: Re: Autostart VMS

2016-04-09 Thread Stefano Danzi
I need this feature for a similar issue. You can see my last post on this ML
(a VM that has to run on a host at a remote site).

 Original message 
From: Pavel Gashev 
Date: 09/04/2016 15:49 (GMT+01:00) 
To: users@ovirt.org, svenkie...@gmail.com 
Subject: Re: [ovirt-users] Autostart VMS 


I'd like to see the autostart feature as well. In my case I need to autostart a
virtual router VM at a remote site. The issue is that oVirt can't see the remote
host until the virtual router is started on that host. So HA is not an option.



On Sat, 2016-04-09 at 08:50 +0200, Sven Kieske wrote:

On 06.04.2016 07:46, Brett I. Holcomb wrote:

In VMware we could setup guests to autostart when the host started and
define the order.  Is that doable in oVirt?  The only thing I've seen
is the watchdog and tell it to reset but nothing that allows me to
define who starts up when and if they autostart.  I assume it's there
but I must be missing it or haven't found it in the web portal.


See this long standing bug:

https://bugzilla.redhat.com/show_bug.cgi?id=1166657

also searching this ML for "autostart" and "VM"

turns up this same discussion every few months.

sadly no dev seems to grasp the importance of this feature, so it's
always rationalised that it's not needed or that the HA feature is enough (it
isn't).

I don't know why the already implemented libvirt autostart feature
does not get simply passed through.
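
(For comparison, plain libvirt exposes this directly; the domain name below
is made up, and note that oVirt guests are transient libvirt domains created
by vdsm, so this knob cannot simply be flipped on an oVirt host:)

# mark a persistent libvirt domain to start when libvirtd starts
virsh autostart myvm
# and to turn it off again
virsh autostart --disable myvm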

This feature was at least requested by 4 or 5 different people, to no avail.

You might CC yourself to the bug report (RFE) and vote on it; maybe
that way it will get some traction.

kind regards

Sven


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] VM HA when no engine

2016-04-06 Thread Stefano Danzi

Hello!

I have an oVirt environment where one host is physically in a different
building than the other hosts. The main building and the remote building are
connected using a WiFi link.

The host in the remote building runs a VM that uses host local storage.
The VM is pinned to the host to prevent any migration.
The hosted engine VM never runs on this host.

I need this VM to start on this host even if the host restarts accidentally
and the wifi link is down (aka "cleaning lady unintentional DoS attack" ;-) ).

Is there a way to do this?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] R: Re: R: Re: Network instability after upgrade 3.6.0 -> 3.6.1

2016-02-04 Thread Stefano Danzi
I have only one switch, so the two interfaces are connected to the same switch. The
configuration on the switch is correct. I opened a ticket with the switch tech
support and the configuration was validated.
This configuration worked without problems 24/7 for one year! All problems
started after a kernel update, so something changed in the kernel.

 Original message 
From: Dan Kenigsberg <dan...@redhat.com> 
Date: 04/02/2016 22:02 (GMT+01:00) 
To: Stefano Danzi <s.da...@hawai.it>, yd...@redhat.com 
Cc: Jon Archer <j...@rosslug.org.uk>, mbur...@redhat.com, users@ovirt.org 
Subject: Re: [ovirt-users] R: Re: Network instability after upgrade 3.6.0 ->
  3.6.1 

On Thu, Feb 04, 2016 at 06:26:14PM +0100, Stefano Danzi wrote:
> 
> 
> On 04/02/2016 16.55, Dan Kenigsberg wrote:
> >On Wed, Jan 06, 2016 at 08:45:16AM +0200, Dan Kenigsberg wrote:
> >>On Mon, Jan 04, 2016 at 01:54:37PM +0200, Dan Kenigsberg wrote:
> >>>On Mon, Jan 04, 2016 at 12:31:38PM +0100, Stefano Danzi wrote:
> >>>>I did some tests:
> >>>>
> >>>>kernel-3.10.0-327.3.1.el7.x86_64 -> bond mode 4 doesn't work (if I detach
> >>>>one network cable the network is stable)
> >>>>kernel-3.10.0-229.20.1.el7.x86_64 -> bond mode 4 works fine
> >>>Would you be kind enough to file a kernel bug in bugzilla.redhat.com?
> >>>Summarize the information from this thread (e.g. your ifcfgs and in what
> >>>way mode 4 doesn't work).
> >>>
> >>>To get the bug solved quickly we'd better find a paying RHEL7 customer
> >>>subscribing to it. But I'll try to push from my direction.
> >>Stefano has been kind to open
> >>
> >> Bug 1295423 - Unstable network link using bond mode = 4
> >> https://bugzilla.redhat.com/show_bug.cgi?id=1295423
> >>
> >>which we fail to reproduce in our own lab. I'd be pleased if anybody who
> >>experiences it would add their networking config to the bug (if it is
> >>different). Can you also lay out your switch's hardware and
> >>configuration?
> >Stefano, could you share your /proc/net/bonding/* files with us?
> >I heard about similar reports where the bond slaves had mismatching
> >aggregator id. Could it be your case as well?
> >
> 
> Here:
> 
> [root@ovirt01 ~]# cat /proc/net/bonding/bond0
> Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
> 
> Bonding Mode: IEEE 802.3ad Dynamic link aggregation
> Transmit Hash Policy: layer2 (0)
> MII Status: up
> MII Polling Interval (ms): 100
> Up Delay (ms): 0
> Down Delay (ms): 0
> 
> 802.3ad info
> LACP rate: slow
> Min links: 0
> Aggregator selection policy (ad_select): stable
> Active Aggregator Info:
> Aggregator ID: 2
> Number of ports: 1
> Actor Key: 9
> Partner Key: 1
> Partner Mac Address: 00:00:00:00:00:00
> 
> Slave Interface: enp4s0
> MII Status: up
> Speed: 1000 Mbps
> Duplex: full
> Link Failure Count: 2
> Permanent HW addr: **:**:**:**:**:f1
> Slave queue ID: 0
> Aggregator ID: 1

---^^^


> Actor Churn State: churned
> Partner Churn State: churned
> Actor Churned Count: 4
> Partner Churned Count: 5
> details actor lacp pdu:
> system priority: 65535
> port key: 9
> port priority: 255
> port number: 1
> port state: 69
> details partner lacp pdu:
> system priority: 65535
> oper key: 1
> port priority: 255
> port number: 1
> port state: 1
> 
> Slave Interface: enp5s0
> MII Status: up
> Speed: 1000 Mbps
> Duplex: full
> Link Failure Count: 1
> Permanent HW addr: **:**:**:**:**:f2
> Slave queue ID: 0
> Aggregator ID: 2

---^^^


it sounds awfully familiar - mismatching aggregator IDs and an all-zero
partner MAC. Can you double-check that both your NICs are wired to the
same switch, and that it is properly configured to use LACP on these two
ports?
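
(A quick way to spot such a mismatch at a glance, as a sketch using the bond
name from this thread; every slave's Aggregator ID should match the active
one:)

grep -E 'Aggregator ID|Partner Mac' /proc/net/bonding/bond0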

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] R: Re: Network instability after upgrade 3.6.0 -> 3.6.1

2016-02-04 Thread Stefano Danzi



On 04/02/2016 16.55, Dan Kenigsberg wrote:

On Wed, Jan 06, 2016 at 08:45:16AM +0200, Dan Kenigsberg wrote:

On Mon, Jan 04, 2016 at 01:54:37PM +0200, Dan Kenigsberg wrote:

On Mon, Jan 04, 2016 at 12:31:38PM +0100, Stefano Danzi wrote:

I did some tests:

kernel-3.10.0-327.3.1.el7.x86_64 -> bond mode 4 doesn't work (if I detach
one network cable the network is stable)
kernel-3.10.0-229.20.1.el7.x86_64 -> bond mode 4 works fine

Would you be kind enough to file a kernel bug in bugzilla.redhat.com?
Summarize the information from this thread (e.g. your ifcfgs and in what
way mode 4 doesn't work).

To get the bug solved quickly we'd better find a paying RHEL7 customer
subscribing to it. But I'll try to push from my direction.

Stefano has been kind enough to open

 Bug 1295423 - Unstable network link using bond mode = 4
 https://bugzilla.redhat.com/show_bug.cgi?id=1295423

which we fail to reproduce in our own lab. I'd be pleased if anybody who
experiences it would add their networking config to the bug (if it is
different). Can you also lay out your switch's hardware and
configuration?

Stefano, could you share your /proc/net/bonding/* files with us?
I heard about similar reports where the bond slaves had mismatching
aggregator id. Could it be your case as well?



Here:

[root@ovirt01 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 2
Number of ports: 1
Actor Key: 9
Partner Key: 1
Partner Mac Address: 00:00:00:00:00:00

Slave Interface: enp4s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 2
Permanent HW addr: **:**:**:**:**:f1
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 4
Partner Churned Count: 5
details actor lacp pdu:
system priority: 65535
port key: 9
port priority: 255
port number: 1
port state: 69
details partner lacp pdu:
system priority: 65535
oper key: 1
port priority: 255
port number: 1
port state: 1

Slave Interface: enp5s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: **:**:**:**:**:f2
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 2
details actor lacp pdu:
system priority: 65535
port key: 9
port priority: 255
port number: 2
port state: 77
details partner lacp pdu:
system priority: 65535
oper key: 1
port priority: 255
port number: 1
port state: 1



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] R: Re: Network instability after upgrade 3.6.0 -> 3.6.1

2016-01-07 Thread Stefano Danzi



On 06/01/2016 7.45, Dan Kenigsberg wrote:

On Mon, Jan 04, 2016 at 01:54:37PM +0200, Dan Kenigsberg wrote:

On Mon, Jan 04, 2016 at 12:31:38PM +0100, Stefano Danzi wrote:

I did some tests:

kernel-3.10.0-327.3.1.el7.x86_64 -> bond mode 4 doesn't work (if I detach
one network cable the network is stable)
kernel-3.10.0-229.20.1.el7.x86_64 -> bond mode 4 works fine

Would you be kind enough to file a kernel bug in bugzilla.redhat.com?
Summarize the information from this thread (e.g. your ifcfgs and in what
way mode 4 doesn't work).

To get the bug solved quickly we'd better find a paying RHEL7 customer
subscribing to it. But I'll try to push from my direction.

Stefano has been kind enough to open

 Bug 1295423 - Unstable network link using bond mode = 4
 https://bugzilla.redhat.com/show_bug.cgi?id=1295423

which we fail to reproduce in our own lab. I'd be pleased if anybody who
experiences it would add their networking config to the bug (if it is
different). Can you also lay out your switch's hardware and
configuration?



I made some tests using kernel 3.10.0-327.4.4.el7.x86_64.
I did a TCP dump on the virtual interface "DMZ" (VLAN X on bond0).

When I have two network cables connected I can see ARP requests but not ARP
replies.
When I detach one network cable I can see ARP requests and ARP replies (and
networking on the VM works).

Maybe the problem isn't in the bonding config but in qemu/kvm/vhost_net.
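
(A sketch of that comparison; the VLAN id is left as a placeholder since it
is not spelled out above:)

tcpdump -eni DMZ arp           # ARP as seen on the VM-facing bridge
tcpdump -eni bond0.<vlan> arp  # ARP as seen on the VLAN device underneath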


 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster Data domain not correctly setup at boot

2016-01-05 Thread Stefano Danzi



On 04/01/2016 18.27, Nir Soffer wrote:

On Mon, Jan 4, 2016 at 6:36 PM, Stefano Danzi <s.da...@hawai.it> wrote:
This sounds like
https://bugzilla.redhat.com/1271771

This patch may fix this: https://gerrit.ovirt.org/#/c/27334/
Would you like to test it?


I patched vdsm and now the gluster SD works at boot.


To dig deeper, we need the logs:
- /var/log/vdsm/vdsm.log (the one showing this timeframe)
- /var/log/sanlock.log
- /var/log/messages
- /var/log/glusterfs/:-.log

Nir




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Ovirt 3.6.1 unable to manually start hosted engine vm

2016-01-04 Thread Stefano Danzi

Hi!
After upgrading from oVirt 3.5 to 3.6 I'm not able to manually start the
hosted-engine VM using "hosted-engine --vm-start".


I have only one host.
I put the engine in global maintenance mode, and I shut down all VMs and the
hosted engine VM.
I rebooted the host and ran "hosted-engine --vm-start", but the VM never
started (I waited for 10 minutes).
When I run "hosted-engine --set-maintenance --mode=none" the engine VM
starts immediately.
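
(The sequence, condensed; these are the stock hosted-engine CLI commands used
in this thread:)

hosted-engine --set-maintenance --mode=global
# shut down the guests and the engine VM, reboot the host, then:
hosted-engine --vm-start                       # stays in WaitForLaunch
hosted-engine --set-maintenance --mode=none    # the VM starts immediately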


[root@ovirt01 ~]#  hosted-engine --vm-start

b66ae2c5-de0f-4361-953b-f10226da7eb8
Status = WaitForLaunch
nicModel = rtl8139,pv
statusTime = 4294711929
emulatedMachine = pc
pid = 0
vmName = HostedEngine
devices = [{'index': '2', 'iface': 'ide', 'specParams': {}, 
'readonly': 'true', 'deviceId': '9866b541-20db-40c8-9512-b4f5277700fd', 
'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': 
'0', 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': 
'/home/pacchetti/iso/CentOS-7.0-1406-x86_64-Minimal.iso', 'type': 
'disk'}, {'index': '0', 'iface': 'virtio', 'format': 'raw', 'bootOrder': 
'1', 'poolID': '----', 'volumeID': 
'f89cf2a9-af3c-4c07-9e45-5f2c5fd8e155', 'imageID': 
'c0c575cf-9107-49b8-8a0f-95c8efa6ad06', 'specParams': {}, 'readonly': 
'false', 'domainID': '46f55a31-f35f-465c-b3e2-df45c05e06a7', 'optional': 
'false', 'deviceId': 'c0c575cf-9107-49b8-8a0f-95c8efa6ad06', 'address': 
{'slot': '0x06', 'bus': '0x00', 'domain': '0x', 'type': 'pci', 
'function': '0x0'}, 'device': 'disk', 'shared': 'exclusive', 
'propagateErrors': 'off', 'type': 'disk'}, {'device': 'scsi', 'model': 
'virtio-scsi', 'type': 'controller'}, {'nicModel': 'pv', 'macAddr': 
'00:16:3e:12:d5:35', 'linkActive': 'true', 'network': 'ovirtmgmt', 
'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': 
'3f76ab15-bf91-41ad-b02d-dbdb21fae828', 'address': {'slot': '0x03', 
'bus': '0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'}, 
'device': 'bridge', 'type': 'interface'}, {'device': 'console', 
'specParams': {}, 'type': 'console', 'deviceId': 
'a40e268c-bef7-4ca4-832a-28e550e9e012', 'alias': 'console0'}]

guestDiskMapping = {}
vmType = kvm
clientIp =
displaySecurePort = -1
memSize = 4096
displayPort = -1
cpuType = SandyBridge
spiceSecureChannels = 
smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir

smp = 2
displayIp = 0
display = vnc
[root@ovirt01 ~]#  hosted-engine --vm-status


!! Cluster is in GLOBAL MAINTENANCE mode !!



--== Host 1 status ==--

Status up-to-date  : False
Hostname   : ovirt01.hawai.lan
Host ID: 1
Engine status  : unknown stale-data
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : d2fb89cb
Host timestamp : 3569

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] R: Re: Network instability after upgrade 3.6.0 -> 3.6.1

2016-01-04 Thread Stefano Danzi

I did some tests:

kernel-3.10.0-327.3.1.el7.x86_64 -> bond mode 4 doesn't work (if I 
detach one network cable the network is stable)

kernel-3.10.0-229.20.1.el7.x86_64 -> bond mode 4 works fine

On 31/12/2015 9.44, Dan Kenigsberg wrote:

On Wed, Dec 30, 2015 at 09:39:12PM +0100, Stefano Danzi wrote:

Hi Dan,
some info about my network setup:

- My bond is used only for VM networking. ovirtmgmt has a dedicated ethernet
card.
- I haven't set any ethtool opts.
[cut]

I do not see anything suspicious here.

Which kernel version worked well for you?

Would it be possible to boot the machine with it, and retest bond mode
4, so that we can whole-heartedly place the blame on the kernel?
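
(On CentOS 7 a one-off boot into the older kernel can be done without
changing the default entry; a sketch, where the menu entry title must be
matched against your own grub.cfg:)

# list the installed kernel menu entries
awk -F\' '/^menuentry /{print $2}' /boot/grub2/grub.cfg
# boot the chosen entry once, falling back to the default afterwards
grub2-reboot 'CentOS Linux (3.10.0-229.20.1.el7.x86_64) 7 (Core)'
reboot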



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt 3.6.1 unable to manually start hosted engine vm

2016-01-04 Thread Stefano Danzi



On 04/01/2016 12.28, Gianluca Cecchi wrote:
On Mon, Jan 4, 2016 at 12:20 PM, Roy Golan <rgo...@redhat.com> wrote:






On Mon, Jan 4, 2016 at 1:08 PM, Stefano Danzi <s.da...@hawai.it> wrote:

Hi!
After upgrading from oVirt 3.5 to 3.6 I'm not able to
manually start the hosted-engine VM using "hosted-engine
--vm-start".

I have only one host.
I put the engine in global maintenance mode, and I shut down all VMs
and the hosted engine VM.
I rebooted the host and ran "hosted-engine --vm-start", but
the VM never started (I waited for 10 minutes).
When I run "hosted-engine --set-maintenance --mode=none"
the engine VM starts immediately.



Sorry I was quick to answer and now I saw you already tried and it
immediately started the vm.


Possibly the original question was of a different kind?
So how does one manually start the engine VM if the environment is in global
maintenance and the VM was previously shut down? Should that be possible?
For example, one wants to check it from an OS point of view and not an
oVirt point of view, before exiting maintenance...


Gianluca



Gianluca you are right!!!
oVirt 3.5 permitted starting the engine VM when the cluster was in maintenance... I did
it many times!!! Since oVirt 3.6 it is no longer possible.
The VM remains in state "WaitingForLaunch" until the cluster exits maintenance.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Gluster Data domain not correctly setup at boot

2016-01-04 Thread Stefano Danzi
I have one testing host (only one) with hosted engine and a gluster data
domain (on the same machine).

When I start the host and the engine I can see the data domain active, "up
and green", but in the event list I get:

- Invalid status on Data Center Default. Setting status to Non Responsive
- Storage Domain Data (Data Center Default) was deactivated by system
because it's not visible by any of the hosts

If I try to start a VM I get:

- Failed to run onDalesSRV on Host ovirt01
- VM onSalesSRV is down with error. Exit message: Cannot access storage file
'/rhev/data-center/0002-0002-0002-0002-01ef/f739b27a-35bf-49c7-a95b-a92ec5c10320/images..

The gluster volume is correctly mounted:

[root@ovirt01 ~]# df -h
Filesystem                                     Size  Used Avail Use% Mounted on
/dev/mapper/centos_ovirt01-root                 50G   18G   33G  35% /
devtmpfs                                       7,8G     0  7,8G   0% /dev
tmpfs                                          7,8G     0  7,8G   0% /dev/shm
tmpfs                                          7,8G   17M  7,8G   1% /run
tmpfs                                          7,8G     0  7,8G   0% /sys/fs/cgroup
/dev/mapper/centos_ovirt01-home                 10G  1,3G  8,8G  13% /home
/dev/mapper/centos_ovirt01-glusterOVEngine      50G   11G   40G  22% /home/glusterfs/engine
/dev/md0                                       494M  244M  251M  50% /boot
/dev/mapper/centos_ovirt01-glusterOVData       500G  135G  366G  27% /home/glusterfs/data
ovirt01.hawai.lan:/engine                       50G   11G   40G  22% /rhev/data-center/mnt/ovirt01.hawai.lan:_engine
tmpfs                                          1,6G     0  1,6G   0% /run/user/0
ovirtbk-sheng.hawai.lan:/var/lib/exports/iso    22G  7,6G   15G  35% /rhev/data-center/mnt/ovirtbk-sheng.hawai.lan:_var_lib_exports_iso
ovirt01.hawai.lan:/data                        500G  135G  366G  27% /rhev/data-center/mnt/glusterSD/ovirt01.hawai.lan:_data

But the link under '/rhev/data-center/0' is missing:

[root@ovirt01 ~]# ls -la /rhev/data-center/0002-0002-0002-0002-01ef/
total 0
drwxr-xr-x. 2 vdsm kvm 64  4 Jan 14.31 .
drwxr-xr-x. 4 vdsm kvm 59  4 Jan 14.31 ..
lrwxrwxrwx. 1 vdsm kvm 84  4 Jan 14.31 46f55a31-f35f-465c-b3e2-df45c05e06a7 ->
/rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7
lrwxrwxrwx. 1 vdsm kvm 84  4 Jan 14.31 mastersd ->
/rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7

If I put the data domain into maintenance mode and reactivate it, I can run the VMs.
The mounted filesystems are the same, but now I have the links in /rhev/data-center/:

[root@ovirt01 ~]# ls -la /rhev/data-center/0002-0002-0002-0002-01ef/
total 4
drwxr-xr-x. 2 vdsm kvm 4096  4 Jan 17.10 .
drwxr-xr-x. 4 vdsm kvm   59  4 Jan 17.10 ..
lrwxrwxrwx. 1 vdsm kvm   84  4 Jan 14.31 46f55a31-f35f-465c-b3e2-df45c05e06a7 ->
/rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7
lrwxrwxrwx. 1 vdsm kvm  103  4 Jan 17.10 837f-d2d4-4684-a389-ac1adb050fa8 ->
/rhev/data-center/mnt/ovirtbk-sheng.hawai.lan:_var_lib_exports_iso/837f-d2d4-4684-a389-ac1adb050fa8
lrwxrwxrwx. 1 vdsm kvm   92  4 Jan 17.10 f739b27a-35bf-49c7-a95b-a92ec5c10320 ->
/rhev/data-center/mnt/glusterSD/ovirt01.hawai.lan:_data/f739b27a-35bf-49c7-a95b-a92ec5c10320
lrwxrwxrwx. 1 vdsm kvm   84  4 Jan 14.31 mastersd ->
/rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] R: Re: R: Re: Network instability after upgrade 3.6.0 -> 3.6.1

2015-12-31 Thread Stefano Danzi


Hi Dan, I can't change the switch settings until next week. I will post a
message after further tests.

 Original message 
From: Dan Kenigsberg <dan...@redhat.com> 
Date: 31/12/2015 09:44 (GMT+01:00) 
To: Stefano Danzi <s.da...@hawai.it> 
Cc: Jon Archer <j...@rosslug.org.uk>, mbur...@redhat.com, users@ovirt.org 
Subject: Re: [ovirt-users] R: Re: Network instability after upgrade 3.6.0 ->
  3.6.1 

I do not see anything suspicious here.

Which kernel version worked well for you?

Would it be possible to boot the machine with it, and retest bond mode
4, so that we can whole-heartedly place the blame on the kernel?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] R: Re: Network instability after upgrade 3.6.0 -> 3.6.1

2015-12-30 Thread Stefano Danzi

Hi Dan,
some info about my network setup:

- My bond is used only for VM networking. ovirtmgmt has a dedicated 
ethernet card.

- I haven't set any ethtool opts.
- Nics on bond specs:
04:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network 
Connection

Subsystem: ASUSTeK Computer Inc. Motherboard
Flags: bus master, fast devsel, latency 0, IRQ 16
Memory at df20 (32-bit, non-prefetchable) [size=128K]
I/O ports at e000 [size=32]
Memory at df22 (32-bit, non-prefetchable) [size=16K]
Capabilities: [c8] Power Management version 2
Capabilities: [d0] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [e0] Express Endpoint, MSI 00
Capabilities: [a0] MSI-X: Enable+ Count=5 Masked-
Capabilities: [100] Advanced Error Reporting
Kernel driver in use: e1000e

[root@ovirt01 ~]# ifconfig
DMZ: flags=4163  mtu 1500
txqueuelen 0  (Ethernet)
RX packets 43546  bytes 2758816 (2.6 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

LAN_HAW: flags=4163  mtu 1500
txqueuelen 0  (Ethernet)
RX packets 2090262  bytes 201078292 (191.7 MiB)
RX errors 0  dropped 86  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

bond0: flags=5187  mtu 1500
txqueuelen 0  (Ethernet)
RX packets 2408059  bytes 456371629 (435.2 MiB)
RX errors 0  dropped 185  overruns 0  frame 0
TX packets 118966  bytes 14862549 (14.1 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

bond0.1: flags=4163  mtu 1500
txqueuelen 0  (Ethernet)
RX packets 2160985  bytes 210157656 (200.4 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

bond0.3: flags=4163  mtu 1500
txqueuelen 0  (Ethernet)
RX packets 151195  bytes 185253584 (176.6 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 118663  bytes 13857950 (13.2 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp4s0: flags=6211  mtu 1500
txqueuelen 1000  (Ethernet)
RX packets 708141  bytes 95034564 (90.6 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 16714  bytes 5193108 (4.9 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
device interrupt 16  memory 0xdf20-df22

enp5s0: flags=6211  mtu 1500
txqueuelen 1000  (Ethernet)
RX packets 1699934  bytes 361339105 (344.5 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 102252  bytes 9669441 (9.2 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
device interrupt 17  memory 0xdf10-df12

enp6s1: flags=4163  mtu 1500
txqueuelen 1000  (Ethernet)
RX packets 2525232  bytes 362345893 (345.5 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 388452  bytes 208145492 (198.5 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
loop  txqueuelen 0  (Local Loopback)
RX packets 116465661  bytes 1515059255942 (1.3 TiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 116465661  bytes 1515059255942 (1.3 TiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ovirtmgmt: flags=4163  mtu 1500
inet 192.168.1.50  netmask 255.255.255.0  broadcast 
192.168.1.255

txqueuelen 0  (Ethernet)
RX packets 3784298  bytes 36509 (529.8 MiB)
RX errors 0  dropped 86  overruns 0  frame 0
TX packets 1737669  bytes 1401650369 (1.3 GiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vnet0: flags=4163  mtu 1500
txqueuelen 500  (Ethernet)
RX packets 558574  bytes 107521742 (102.5 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 1316892  bytes 487764500 (465.1 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vnet1: flags=4163  mtu 1500
txqueuelen 500  (Ethernet)
RX packets 42282  bytes 7373007 (7.0 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 40498  bytes 17598215 (16.7 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vnet2: 

Re: [ovirt-users] R: Re: Network instability after upgrade 3.6.0 -> 3.6.1

2015-12-28 Thread Stefano Danzi

Problem solved!!!

The file hosted-engine.conf had a wrong fqdn.
I don't think this happened during the upgrade... maybe my
colleague did something wrong...


On 20/12/2015 14.52, Stefano Danzi wrote:
Network problems were solved after changing the bond mode (and that's
strange; I have to investigate qemu-kvm, CentOS 7.2 and the switch
firmware), but the broker problem still exists. If I turn on the host, the HA
agent starts the engine VM. When the engine VM is up, the broker starts to
send email. I don't have detailed logs here right now.



 Original message 
From: Yedidyah Bar David <d...@redhat.com>
Date: 20/12/2015 11:20 (GMT+01:00)
To: Stefano Danzi <s.da...@hawai.it>, Dan Kenigsberg <dan...@redhat.com>
Cc: users <users@ovirt.org>
Subject: Re: [ovirt-users] Network instability after upgrade 3.6.0 ->
3.6.1


On Fri, Dec 18, 2015 at 5:31 PM, Stefano Danzi <s.da...@hawai.it> wrote:
> I found this in vdsm.log and I think that could be the problem:
>
> Thread-3771::ERROR::2015-12-18
> 16:18:58,597::brokerlink::279::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(_communicate)
> Connection closed: Connection closed
> Thread-3771::ERROR::2015-12-18 16:18:58,597::API::1847::vds::(_getHaInfo)
> failed to retrieve Hosted Engine HA info
> Traceback (most recent call last):
>   File "/usr/share/vdsm/API.py", line 1827, in _getHaInfo
>     stats = instance.get_all_stats()
>   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
>     line 103, in get_all_stats
>     self._configure_broker_conn(broker)
>   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
>     line 180, in _configure_broker_conn
>     dom_type=dom_type)
>   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
>     line 176, in set_storage_domain
>     .format(sd_type, options, e))
> RequestError: Failed to set storage domain FilesystemBackend, options
> {'dom_type': 'nfs3', 'sd_uuid': '46f55a31-f35f-465c-b3e2-df45c05e06a7'}:
> Connection closed

My guess is that this is a consequence of your networking problems.

Adding Dan.

>
>
> On 17/12/2015 18.51, Stefano Danzi wrote:
>>
>> I partially solved the problem.
>>
>> My host machine has 2 network interfaces in a bond. The bond was
>> configured with mode=4 (802.3ad) and the switch was configured the same way.
>> If I remove one network cable the network becomes stable. With both cables
>> attached the network is unstable.
>>
>> I removed the link aggregation configuration from the switch and changed the
>> bond to mode=2 (balance-xor). Now the network is stable.
>> The strange thing is that the previous configuration worked fine for one
>> year... until the last upgrade.
>>
>> Now the ha-agent doesn't reboot the hosted-engine anymore, but I receive two
>> emails from the broker every 2/5 minutes.
>> First a mail with "ovirt-hosted-engine state transition
>> StartState-ReinitializeFSM" and then "ovirt-hosted-engine state transition
>> ReinitializeFSM-EngineStarting"
>>
>>
>> On 17/12/2015 10.51, Stefano Danzi wrote:
>>>
>>> Hello,
>>> I have one testing host (only one host) with self-hosted engine and 2 VMs
>>> (one Linux and one Windows).
>>>
>>> After upgrading oVirt from 3.6.0 to 3.6.1 the network connection works
>>> discontinuously.
>>> Every 10 minutes the HA agent restarts the hosted engine VM because it
>>> appears down. But the machine is UP;
>>> only the network stops working for some minutes.
>>> I activated global maintenance mode to prevent engine reboots. If I ssh to
>>> the hosted engine, sometimes the connection works and sometimes it doesn't.
>>> Using a VNC connection to the engine I see that sometimes the VM reaches
>>> the external network and sometimes it doesn't.
>>> If I do a tcpdump on the physical ethernet interface I don't see any packet
>>> when the network on the VM doesn't work.
>>>
>>> The same thing happens for the other two VMs.
>>>
>>> Before the upgrade I never had network problems.
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users



--
Didi


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 3.6.1 USB Host Device passthrough

2015-12-23 Thread Stefano Danzi

Hi!

You could change approach and use this:
http://sourceforge.net/projects/usbip/

This project shares a USB connection over TCP.

I've never tried it, but it should work.

This way you can share a USB device from the host to the guest, or from a PC
on the network to the guest.

Another benefit is that live migration remains possible.

As far as I know qemu-kvm can already use a USB device over a network
connection, but live migration is not yet supported.
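
(A sketch of the usual flow with the in-kernel usbip tools; the bus id is
made up, and the SourceForge project's older tool names may differ slightly:)

# on the machine that owns the device
usbipd -D              # start the usbip daemon
usbip list -l          # find the device's bus id, e.g. 1-1.4
usbip bind -b 1-1.4    # export the device

# inside the guest
usbip attach -r <server-ip> -b 1-1.4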


On 23/12/2015 11.52, Alexandr Krivulya wrote:

Hi,

1. Look at my previous posts:

http://lists.ovirt.org/pipermail/users/2015-November/035881.html
http://lists.ovirt.org/pipermail/users/2015-November/035890.html

2. You can try to pass a USB host controller and not the devices themselves.
But I haven't verified it.


On 23.12.15 10:28, ov...@timmi.org wrote:

Hi list,

I'm currently trying to get USB devices into a VM.
My host is running a oVirt 3.6.1.

I'm trying to use the GUI feature to add the device into the VM and 
I'm facing two issues:


1. I receive the following message while starting the VM:
Error while executing action:

:

  * Cannot run VM. There is no host that satisfies current scheduling
constraints. See below for details:
  * The host  did not satisfy internal filter HostDevice
because it does not support host device passthrough..

2. The USB device list never gets refreshed while removing or
inserting different devices.


Does anyone know what I'm doing wrong or what I need to check?

I found for example the following description for oVirt 3.5.
http://wiki.rosalab.com/en/index.php/Blog:ROSA_Planet/How_to_use_USB_device_inside_VM_under_Rosa_Virtualisation_%28oVirt_3.5%29

But this is not helping me as I will have multiple USB devices with 
the same ID in the host.


Best regards and Merry Christmas
Christoph



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] R: Re: Network instability after upgrade 3.6.0 -> 3.6.1

2015-12-20 Thread Stefano Danzi


Network problems were solved after changing the bond mode (and that's strange; I
have to investigate qemu-kvm, CentOS 7.2 and the switch firmware), but the broker
problem still exists. If I turn on the host, the HA agent starts the engine VM. When
the engine VM is up, the broker starts to send email. I don't have detailed logs here right now.

 Original message 
From: Yedidyah Bar David <d...@redhat.com> 
Date: 20/12/2015 11:20 (GMT+01:00) 
To: Stefano Danzi <s.da...@hawai.it>, Dan Kenigsberg <dan...@redhat.com> 
Cc: users <users@ovirt.org> 
Subject: Re: [ovirt-users] Network instability after upgrade 3.6.0 -> 3.6.1 

On Fri, Dec 18, 2015 at 5:31 PM, Stefano Danzi <s.da...@hawai.it> wrote:
> I found this in vdsm.log and I think that could be the problem:
>
> Thread-3771::ERROR::2015-12-18
> 16:18:58,597::brokerlink::279::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(_communicate)
> Connection closed: Connection closed
> Thread-3771::ERROR::2015-12-18 16:18:58,597::API::1847::vds::(_getHaInfo)
> failed to retrieve Hosted Engine HA info
> Traceback (most recent call last):
>   File "/usr/share/vdsm/API.py", line 1827, in _getHaInfo
> stats = instance.get_all_stats()
>   File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
> line 103, in get_all_stats
> self._configure_broker_conn(broker)
>   File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
> line 180, in _configure_broker_conn
> dom_type=dom_type)
>   File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
> line 176, in set_storage_domain
> .format(sd_type, options, e))
> RequestError: Failed to set storage domain FilesystemBackend, options
> {'dom_type': 'nfs3', 'sd_uuid': '46f55a31-f35f-465c-b3e2-df45c05e06a7'}:
> Connection closed

My guess is that this is a consequence of your networking problems.

Adding Dan.

>
>
> On 17/12/2015 18.51, Stefano Danzi wrote:
>>
>> I partially solved the problem.
>>
>> My host machine has 2 network interfaces in a bond. The bond was
>> configured with mode=4 (802.3ad) and the switch was configured the same way.
>> If I remove one network cable the network becomes stable. With both cables
>> attached the network is unstable.
>>
>> I removed the link aggregation configuration from the switch and changed the
>> bond to mode=2 (balance-xor). Now the network is stable.
>> The strange thing is that the previous configuration worked fine for one
>> year... until the last upgrade.
>>
>> Now the ha-agent doesn't reboot the hosted-engine anymore, but I receive two
>> emails from the broker every 2/5 minutes.
>> First a mail with "ovirt-hosted-engine state transition
>> StartState-ReinitializeFSM" and then "ovirt-hosted-engine state transition
>> ReinitializeFSM-EngineStarting"
>>
>>
>> On 17/12/2015 10.51, Stefano Danzi wrote:
>>>
>>> Hello,
>>> I have one testing host (only one host) with self-hosted engine and 2 VMs
>>> (one Linux and one Windows).
>>>
>>> After upgrading oVirt from 3.6.0 to 3.6.1 the network connection works
>>> discontinuously.
>>> Every 10 minutes the HA agent restarts the hosted engine VM because it
>>> appears down. But the machine is UP;
>>> only the network stops working for some minutes.
>>> I activated global maintenance mode to prevent engine reboots. If I ssh to
>>> the hosted engine, sometimes the connection works and sometimes it doesn't.
>>> Using a VNC connection to the engine I see that sometimes the VM reaches
>>> the external network and sometimes it doesn't.
>>> If I do a tcpdump on the physical ethernet interface I don't see any packet
>>> when the network on the VM doesn't work.
>>>
>>> The same thing happens for the other two VMs.
>>>
>>> Before the upgrade I never had network problems.
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users



-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Network instability after upgrade 3.6.0 -> 3.6.1

2015-12-18 Thread Stefano Danzi

I found this in vdsm.log and I think that could be the problem:

Thread-3771::ERROR::2015-12-18 16:18:58,597::brokerlink::279::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(_communicate) Connection closed: Connection closed
Thread-3771::ERROR::2015-12-18 16:18:58,597::API::1847::vds::(_getHaInfo) failed to retrieve Hosted Engine HA info
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1827, in _getHaInfo
    stats = instance.get_all_stats()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 103, in get_all_stats
    self._configure_broker_conn(broker)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 180, in _configure_broker_conn
    dom_type=dom_type)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 176, in set_storage_domain
    .format(sd_type, options, e))
RequestError: Failed to set storage domain FilesystemBackend, options {'dom_type': 'nfs3', 'sd_uuid': '46f55a31-f35f-465c-b3e2-df45c05e06a7'}: Connection closed


On 17/12/2015 18.51, Stefano Danzi wrote:

I partially solved the problem.

My host machine has 2 network interfaces in a bond. The bond was
configured with mode=4 (802.3ad) and the switch was configured the
same way.
If I remove one network cable the network becomes stable. With both
cables attached the network is unstable.


I removed the link aggregation configuration from the switch and changed
the bond to mode=2 (balance-xor). Now the network is stable.
The strange thing is that the previous configuration worked fine for one
year... until the last upgrade.


Now the ha-agent doesn't reboot the hosted-engine anymore, but I receive two
emails from the broker every 2/5 minutes.
First a mail with "ovirt-hosted-engine state transition
StartState-ReinitializeFSM" and then "ovirt-hosted-engine state
transition ReinitializeFSM-EngineStarting"



On 17/12/2015 10.51, Stefano Danzi wrote:

Hello,
I have one testing host (only one host) with self-hosted engine and 2
VMs (one Linux and one Windows).


After upgrading oVirt from 3.6.0 to 3.6.1 the network connection works
discontinuously.
Every 10 minutes the HA agent restarts the hosted engine VM because it
appears down. But the machine is UP;
only the network stops working for some minutes.
I activated global maintenance mode to prevent engine reboots. If I ssh
to the hosted engine, sometimes the connection works and sometimes it
doesn't. Using a VNC connection to the engine I see that sometimes the VM
reaches the external network and sometimes it doesn't.
If I do a tcpdump on the physical ethernet interface I don't see any
packet when the network on the VM doesn't work.


The same thing happens for the other two VMs.

Before the upgrade I never had network problems.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Network instability after upgrade 3.6.0 -> 3.6.1

2015-12-17 Thread Stefano Danzi

Hello,
I have one testing host (only one host) with a self-hosted engine and 2 VMs 
(one Linux and one Windows).

After upgrading oVirt from 3.6.0 to 3.6.1 the network connection works 
intermittently.
Every 10 minutes the HA agent restarts the hosted engine VM because it 
appears down, but the machine is up; only the network stops working for 
some minutes.
I activated global maintenance mode to prevent engine reboots. If I ssh to 
the hosted engine, sometimes the connection works and sometimes it doesn't. 
Using a VNC connection to the engine I see that sometimes the VM reaches 
the external network and sometimes it doesn't.
If I run tcpdump on the physical Ethernet interface I don't see any packets 
when the VM network doesn't work.

The same thing happens for the other two VMs.

Before the upgrade I never had network problems.
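
For anyone debugging something similar, capturing on the VM's tap/vnet 
device and on the physical interface at the same time shows exactly where 
the frames stop (a sketch; bond0 and vnet0 are example interface names, 
adjust to your host):

# capture ICMP on the physical side and on the VM side simultaneously
tcpdump -ni bond0 icmp &
tcpdump -ni vnet0 icmp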


Re: [ovirt-users] Network instability after upgrade 3.6.0 -> 3.6.1

2015-12-17 Thread Stefano Danzi

I partially solved the problem.

My host machine has two network interfaces in a bond. The bond was 
configured with mode=4 (802.3ad) and the switch was configured in the same way.
If I remove one network cable the network becomes stable. With both 
cables attached the network is unstable.

I removed the link aggregation configuration from the switch and changed the 
bond to mode=2 (balance-xor). Now the network is stable.
The strange thing is that the previous configuration worked fine for one 
year... until the last upgrade.
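
For reference, the mode a bond is actually running can be read from /proc, 
and the balance-xor change described above corresponds to a one-line ifcfg 
option (a sketch for a CentOS 7 ifcfg setup; the BONDING_OPTS values are 
examples, not the exact ones used here):

# show the mode the kernel is actually using
grep "Bonding Mode" /proc/net/bonding/bond0

# /etc/sysconfig/network-scripts/ifcfg-bond0 fragment for mode=2
BONDING_OPTS="mode=balance-xor miimon=100"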


Now ha-agent doesn't reboot the hosted engine anymore, but I receive two 
emails from the broker every 2-5 minutes:
first a mail with "ovirt-hosted-engine state transition 
StartState-ReinitializeFSM" and then "ovirt-hosted-engine state 
transition ReinitializeFSM-EngineStarting".
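
To see which states the agent is actually cycling through between those 
emails, the status command is handy (a sketch; run on the hosted-engine 
host):

hosted-engine --vm-status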



Il 17/12/2015 10.51, Stefano Danzi ha scritto:

Hello,
I have one testing host (only one host) with self hosted engine and 2 
VM (one linux and one windows).


After upgrade ovirt from 3.6.0 to 3.6.1 the network connection works 
discontinuously.
Every 10 minutes HA agent restart hosted engine VM because result 
down. But the machine is UP,

only the network stop to work for some minutes.
I activate global maintenace mode to prevent engine reboot. If I ssh 
to the hosted engine sometimes
the connection work and sometimes no.  Using VNC connection to engine 
I see that sometime VM reach external network

and sometimes no.
If I do a tcpdump on phisical ethernet interface I don't see any 
packet when network on vm don't work.


Same thing happens fo others two VM.

Before the upgrade I never had network problems.


Re: [ovirt-users] Strange issue after upgrade

2015-12-16 Thread Stefano Danzi

Yes! You are right! I forgot to perform that step.
Now everything is OK.

Il 16/12/2015 17.19, Ondra Machacek ha scritto:

Hi,

do you use oVirt 3.6? If yes and you ran 'yum update', then please also 
run 'engine-setup' again.
For more info please read: 
/usr/share/doc/ovirt-engine-extension-aaa-jdbc-1.0.4/README.admin


Ondra
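
For anyone hitting the same thing, the missing step is simply to complete 
the upgrade on the engine machine (a sketch; engine-setup re-applies the 
3.6 configuration and database changes):

yum update
engine-setup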

On 12/16/2015 11:55 AM, Stefano Danzi wrote:

Hello,
today yum upgraded my oVirt environment.

Now I have a very strange issue:

engine.log (self hosted engine) report this error:

2015-12-16 11:43:57,692 INFO 
[org.ovirt.engine.docs.utils.servlet.ContextSensitiveHelpMappingServlet] 
(default task-19) [] Context-sensitive help is not installed. Manual 
directory doesn't exist: /usr/share/ovirt-engine/manual
2015-12-16 11:43:57,696 ERROR 
[org.ovirt.engine.core.utils.servlet.ServletUtils] (default task-10) 
[] Can't read file 
'/usr/share/ovirt-engine/files/spice/SpiceVersion.txt' for request 
'/ovirt-engine/services/files/spice/SpiceVersion.txt', will send a 
404 error response.


On host vdsm.log report:

Thread-882::WARNING::2015-12-16 
11:40:06,400::lvm::375::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 
[] ['  WARNING: lvmetad is running but disabled. Restart lvmetad 
before enabling it!', '  Volume group 
"837f-d2d4-4684-a389-ac1adb050fa8" not found', '  Cannot process 
volume group 837f-d2d4-4684-a389-ac1adb050fa8']
Thread-882::DEBUG::2015-12-16 
11:40:06,401::lvm::415::Storage.OperationMutex::(_reloadvgs) 
Operation 'lvm reload operation' released the operation mutex
jsonrpc.Executor/4::DEBUG::2015-12-16 
11:40:06,420::threadPool::29::Storage.ThreadPool::(__init__) Enter - 
numThreads: 5, waitTimeout: 3, maxTasks: 500
jsonrpc.Executor/4::DEBUG::2015-12-16 
11:40:06,423::storage_mailbox::84::Storage.Misc.excCmd::(_mboxExecCmd) /usr/bin/dd 
if=/rhev/data-center/0002-0002-0002-0002-01ef/mastersd/dom_md/outbox 
iflag=direct,fullblock bs=512 count=8 skip=8 (cwd None)
Thread-882::ERROR::2015-12-16 
11:40:06,455::sdc::145::Storage.StorageDomainCache::(_findDomain) 
domain 837f-d2d4-4684-a389-ac1adb050fa8 not found

Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 173, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'837f-d2d4-4684-a389-ac1adb050fa8',)
Thread-882::ERROR::2015-12-16 
11:40:06,455::monitor::276::Storage.Monitor::(_monitorDomain) Error 
monitoring domain 837f-d2d4-4684-a389-ac1adb050fa8

Traceback (most recent call last):
  File "/usr/share/vdsm/storage/monitor.py", line 264, in _monitorDomain
    self._produceDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 767, in wrapper
    value = meth(self, *a, **kw)
  File "/usr/share/vdsm/storage/monitor.py", line 323, in _produceDomain
    self.domain = sdCache.produce(self.sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 100, in produce
    domain.getRealDomain()
  File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 124, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 173, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'837f-d2d4-4684-a389-ac1adb050fa8',)
jsonrpc.Executor/4::DEBUG::2015-12-16 
11:40:06,456::storage_mailbox::84::Storage.Misc.excCmd::(_mboxExecCmd) SUCCESS: 
<err> = '8+0 records in\n8+0 records out\n4096 bytes (4.1 kB) copied, 
0.0275067 s, 149 kB/s\n'; <rc> = 0
Thread-882::INFO::2015-12-16 
11:40:06,480::monitor::299::Storage.Monitor::(_notifyStatusChanges) 
Domain 837f-d2d4-4684-a389-ac1adb050fa8 became INVALID


I can't access the engine administration portal.
There is an empty "profile" tab. Entering admin credentials results in 
an empty error box (see attachment).







Re: [ovirt-users] Hosted engine notifications don't work after upgrading ovirt from 3.5 to 3.6

2015-12-16 Thread Stefano Danzi

I applied the patch and the issue is solved now!

Il 16/12/2015 11.43, Martin Sivak ha scritto:

Hi Stefano,

we haven't touched the code for this, but I see that also. If you are
willing to experiment just a bit (it is revertible) you can apply the
attached patch to
/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/notifications.py.
It seems to solve this for me.

diff --git a/ovirt_hosted_engine_ha/broker/notifications.py b/ovirt_hosted_engin
index 425822d..00e7e60 100644
--- a/ovirt_hosted_engine_ha/broker/notifications.py
+++ b/ovirt_hosted_engine_ha/broker/notifications.py
@@ -1,4 +1,4 @@
-from email.mime.text import MIMEText
+from email.parser import Parser
  from email.utils import formatdate
  import socket

@@ -24,7 +24,7 @@ def send_email(cfg, email_body):
  server = smtplib.SMTP(cfg["smtp-server"], port=cfg["smtp-port"])
  server.set_debuglevel(1)
  to_addresses = EMAIL_SPLIT_RE.split(cfg["destination-emails"].strip())
-message = MIMEText(email_body)
+message = Parser().parsestr(email_body)
  message["Date"] = formatdate(localtime=True)
  server.sendmail(cfg["source-email"],
  to_addresses,

Then restart the ovirt-ha-broker service.
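
A minimal way to apply and sanity-check the patch (a sketch; the log path 
assumes a default hosted-engine install):

systemctl restart ovirt-ha-broker
# watch the broker log while the next state-transition email is sent
tail -f /var/log/ovirt-hosted-engine-ha/broker.log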

In any case, please open a new bug so we can properly fix it in the
nearest 3.6 update.

Regards

Martin Sivak


On Wed, Dec 16, 2015 at 10:26 AM, Stefano Danzi <s.da...@hawai.it> wrote:

Hello, there are a way to solve this?



Il 09/11/2015 14.08, Stefano Danzi ha scritto:

Hello,
I've made no changes other than upgrading oVirt from 3.5 to 3.6.
The distro is a standard CentOS 7.1.

Python is: python-2.7.5-18.el7_1.1.x86_64

state_transition.txt hasn't an empty line as first line.



Il 09/11/2015 13.32, Martin Sivak ha scritto:

Btw, please check the template file
(/etc/ovirt-hosted-engine-ha/notifications/state_transition.txt) and
make sure it does not start with an empty line.

Martin

On Mon, Nov 9, 2015 at 1:25 PM, Martin Sivak <msi...@redhat.com> wrote:

Hi,

can you please tell us the Python version you are using? We are using
the smtplib and email.mime.text standard libraries to send emails so
this should not be our bug (unless the API changed).

Thanks

--
Martin Sivak
SLA / oVirt


On Mon, Nov 9, 2015 at 1:11 PM, Sandro Bonazzola <sbona...@redhat.com>
wrote:


On Mon, Nov 9, 2015 at 11:44 AM, Stefano Danzi <s.da...@hawai.it>
wrote:

Your trick works fine! Thanks!

Now I see that emails sent from the broker have "corrupted" headers.

At the end of the message we can see:

Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Date: Mon, 09 Nov 2015 11:33:37 +0100
Message-Id: <20151109103337.d9c7d1260...@my.server.lan>
From: mysen...@server.lan
To: undisclosed-recipients:;

From: mysen...@server.lan
To: myrecei...@server.lan
Subject: ovirt-hosted-engine state transition
EngineUp-GlobalMaintenance

The state machine changed state.



Adding Roy and Martin, looks like a separate issue




From and To are repeated twice. This causes the email client to show 
the sender correctly, but an empty recipient and an empty subject.

In the message body I see everything after "To: undisclosed-recipients:;".

Il 06/11/2015 20.01, Simone Tiraboschi ha scritto:



[ovirt-users] Strange issue after upgrade

2015-12-16 Thread Stefano Danzi

Hello,
today yum upgraded my oVirt environment.

Now I have a very strange issue:

engine.log (self hosted engine) report this error:

2015-12-16 11:43:57,692 INFO 
[org.ovirt.engine.docs.utils.servlet.ContextSensitiveHelpMappingServlet] 
(default task-19) [] Context-sensitive help is not installed. Manual 
directory doesn't exist: /usr/share/ovirt-engine/manual
2015-12-16 11:43:57,696 ERROR 
[org.ovirt.engine.core.utils.servlet.ServletUtils] (default task-10) [] 
Can't read file '/usr/share/ovirt-engine/files/spice/SpiceVersion.txt' 
for request '/ovirt-engine/services/files/spice/SpiceVersion.txt', will 
send a 404 error response.


On host vdsm.log report:

Thread-882::WARNING::2015-12-16 
11:40:06,400::lvm::375::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] 
['  WARNING: lvmetad is running but disabled. Restart lvmetad before 
enabling it!', '  Volume group "837f-d2d4-4684-a389-ac1adb050fa8" 
not found', '  Cannot process volume group 
837f-d2d4-4684-a389-ac1adb050fa8']
Thread-882::DEBUG::2015-12-16 
11:40:06,401::lvm::415::Storage.OperationMutex::(_reloadvgs) Operation 
'lvm reload operation' released the operation mutex
jsonrpc.Executor/4::DEBUG::2015-12-16 
11:40:06,420::threadPool::29::Storage.ThreadPool::(__init__) Enter - 
numThreads: 5, waitTimeout: 3, maxTasks: 500
jsonrpc.Executor/4::DEBUG::2015-12-16 
11:40:06,423::storage_mailbox::84::Storage.Misc.excCmd::(_mboxExecCmd) 
/usr/bin/dd 
if=/rhev/data-center/0002-0002-0002-0002-01ef/mastersd/dom_md/outbox 
iflag=direct,fullblock bs=512 count=8 skip=8 (cwd None)
Thread-882::ERROR::2015-12-16 
11:40:06,455::sdc::145::Storage.StorageDomainCache::(_findDomain) domain 
837f-d2d4-4684-a389-ac1adb050fa8 not found

Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 173, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'837f-d2d4-4684-a389-ac1adb050fa8',)
Thread-882::ERROR::2015-12-16 
11:40:06,455::monitor::276::Storage.Monitor::(_monitorDomain) Error 
monitoring domain 837f-d2d4-4684-a389-ac1adb050fa8

Traceback (most recent call last):
  File "/usr/share/vdsm/storage/monitor.py", line 264, in _monitorDomain
    self._produceDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 767, in wrapper
    value = meth(self, *a, **kw)
  File "/usr/share/vdsm/storage/monitor.py", line 323, in _produceDomain
    self.domain = sdCache.produce(self.sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 100, in produce
    domain.getRealDomain()
  File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 124, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 173, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'837f-d2d4-4684-a389-ac1adb050fa8',)
jsonrpc.Executor/4::DEBUG::2015-12-16 
11:40:06,456::storage_mailbox::84::Storage.Misc.excCmd::(_mboxExecCmd) 
SUCCESS: <err> = '8+0 records in\n8+0 records out\n4096 bytes (4.1 kB) 
copied, 0.0275067 s, 149 kB/s\n'; <rc> = 0
Thread-882::INFO::2015-12-16 
11:40:06,480::monitor::299::Storage.Monitor::(_notifyStatusChanges) 
Domain 837f-d2d4-4684-a389-ac1adb050fa8 became INVALID


I can't access the engine administration portal.
There is an empty "profile" tab. Entering admin credentials results in 
an empty error box (see attachment).





Re: [ovirt-users] Hosted engine notifications don't work after upgrading ovirt from 3.5 to 3.6

2015-12-16 Thread Stefano Danzi

Bug report here: https://bugzilla.redhat.com/show_bug.cgi?id=1292060

Il 16/12/2015 12.14, Martin Sivak ha scritto:

That is good to hear. Sorry it took so long. Can you please open a bug for us?

Thanks

Martin

On Wed, Dec 16, 2015 at 12:04 PM, Stefano Danzi <s.da...@hawai.it> wrote:

I applied the patch and the issue is solved now!


Il 16/12/2015 11.43, Martin Sivak ha scritto:

Hi Stefano,

we haven't touched the code for this, but I see that also. If you are
willing to experiment just a bit (it is revertible) you can apply the
attached patch to

/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/notifications.py.
It seems to solve this for me.

diff --git a/ovirt_hosted_engine_ha/broker/notifications.py
b/ovirt_hosted_engin
index 425822d..00e7e60 100644
--- a/ovirt_hosted_engine_ha/broker/notifications.py
+++ b/ovirt_hosted_engine_ha/broker/notifications.py
@@ -1,4 +1,4 @@
-from email.mime.text import MIMEText
+from email.parser import Parser
   from email.utils import formatdate
   import socket

@@ -24,7 +24,7 @@ def send_email(cfg, email_body):
   server = smtplib.SMTP(cfg["smtp-server"], port=cfg["smtp-port"])
   server.set_debuglevel(1)
   to_addresses =
EMAIL_SPLIT_RE.split(cfg["destination-emails"].strip())
-message = MIMEText(email_body)
+message = Parser().parsestr(email_body)
   message["Date"] = formatdate(localtime=True)
   server.sendmail(cfg["source-email"],
   to_addresses,

Then restart the ovirt-ha-broker service.

In any case, please open a new bug so we can properly fix it in the
nearest 3.6 update.

Regards

Martin Sivak


On Wed, Dec 16, 2015 at 10:26 AM, Stefano Danzi <s.da...@hawai.it> wrote:

Hello, there are a way to solve this?



Il 09/11/2015 14.08, Stefano Danzi ha scritto:

Hello,
I've made no changes other than upgrading oVirt from 3.5 to 3.6.
The distro is a standard CentOS 7.1.

Python is: python-2.7.5-18.el7_1.1.x86_64

state_transition.txt hasn't an empty line as first line.



Il 09/11/2015 13.32, Martin Sivak ha scritto:

Btw, please check the template file
(/etc/ovirt-hosted-engine-ha/notifications/state_transition.txt) and
make sure it does not start with an empty line.

Martin

On Mon, Nov 9, 2015 at 1:25 PM, Martin Sivak <msi...@redhat.com> wrote:

Hi,

can you please tell us the Python version you are using? We are using
the smtplib and email.mime.text standard libraries to send emails so
this should not be our bug (unless the API changed).

Thanks

--
Martin Sivak
SLA / oVirt


On Mon, Nov 9, 2015 at 1:11 PM, Sandro Bonazzola <sbona...@redhat.com>
wrote:


On Mon, Nov 9, 2015 at 11:44 AM, Stefano Danzi <s.da...@hawai.it>
wrote:

Your trick works fine! Thanks!

Now I see that emails sent from the broker have "corrupted" headers.

At the end of the message we can see:

Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Date: Mon, 09 Nov 2015 11:33:37 +0100
Message-Id: <20151109103337.d9c7d1260...@my.server.lan>
From: mysen...@server.lan
To: undisclosed-recipients:;

From: mysen...@server.lan
To: myrecei...@server.lan
Subject: ovirt-hosted-engine state transition
EngineUp-GlobalMaintenance

The state machine changed state.



Adding Roy and Martin, looks like a separate issue




From and To are repeated twice. This causes the email client to show 
the sender correctly, but an empty recipient and an empty subject.

In the message body I see everything after "To: undisclosed-recipients:;".

Il 06/11/2015 20.01, Simone Tiraboschi ha scritto:




Re: [ovirt-users] Hosted engine notifications don't work after upgrading ovirt from 3.5 to 3.6

2015-12-16 Thread Stefano Danzi

Hello, there are a way to solve this?


Il 09/11/2015 14.08, Stefano Danzi ha scritto:

Hello,
I've made no changes other than upgrading oVirt from 3.5 to 3.6.
The distro is a standard CentOS 7.1.

Python is: python-2.7.5-18.el7_1.1.x86_64

state_transition.txt hasn't an empty line as first line.



Il 09/11/2015 13.32, Martin Sivak ha scritto:

Btw, please check the template file
(/etc/ovirt-hosted-engine-ha/notifications/state_transition.txt) and
make sure it does not start with an empty line.

Martin

On Mon, Nov 9, 2015 at 1:25 PM, Martin Sivak <msi...@redhat.com> wrote:

Hi,

can you please tell us the Python version you are using? We are using
the smtplib and email.mime.text standard libraries to send emails so
this should not be our bug (unless the API changed).

Thanks

--
Martin Sivak
SLA / oVirt


On Mon, Nov 9, 2015 at 1:11 PM, Sandro Bonazzola 
<sbona...@redhat.com> wrote:


On Mon, Nov 9, 2015 at 11:44 AM, Stefano Danzi <s.da...@hawai.it> 
wrote:

Your trick works fine! Thanks!

Now I see that emails sent from the broker have "corrupted" headers.

At the end of the message we can see:

Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Date: Mon, 09 Nov 2015 11:33:37 +0100
Message-Id: <20151109103337.d9c7d1260...@my.server.lan>
From: mysen...@server.lan
To: undisclosed-recipients:;

From: mysen...@server.lan
To: myrecei...@server.lan
Subject: ovirt-hosted-engine state transition 
EngineUp-GlobalMaintenance


The state machine changed state.



Adding Roy and Martin, looks like a separate issue





From and To are repeated twice. This causes the email client to show 
the sender correctly, but an empty recipient and an empty subject.

In the message body I see everything after "To: undisclosed-recipients:;".

Il 06/11/2015 20.01, Simone Tiraboschi ha scritto:



Re: [ovirt-users] Strange issue after upgrade

2015-12-16 Thread Stefano Danzi

Maybe the cause is different.
If I try to start a guest machine using the Python API I get a 401 
"Unauthorized" status.


Engine.log report:

2015-12-16 17:10:31,883 ERROR 
[org.ovirt.engine.core.aaa.filters.BasicAuthenticationFilter] (default 
task-20) [] Cannot obtain profile for user admin@internal
2015-12-16 17:10:33,260 ERROR 
[org.ovirt.engine.core.aaa.filters.BasicAuthenticationFilter] (default 
task-22) [] Cannot obtain profile for user admin@internal




Il 16/12/2015 11.55, Stefano Danzi ha scritto:

Hello,
today yum upgraded my oVirt environment.

Now I have a very strange issue:

engine.log (self hosted engine) report this error:

2015-12-16 11:43:57,692 INFO 
[org.ovirt.engine.docs.utils.servlet.ContextSensitiveHelpMappingServlet] 
(default task-19) [] Context-sensitive help is not installed. Manual 
directory doesn't exist: /usr/share/ovirt-engine/manual
2015-12-16 11:43:57,696 ERROR 
[org.ovirt.engine.core.utils.servlet.ServletUtils] (default task-10) 
[] Can't read file 
'/usr/share/ovirt-engine/files/spice/SpiceVersion.txt' for request 
'/ovirt-engine/services/files/spice/SpiceVersion.txt', will send a 404 
error response.


On host vdsm.log report:

Thread-882::WARNING::2015-12-16 
11:40:06,400::lvm::375::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] 
['  WARNING: lvmetad is running but disabled. Restart lvmetad before 
enabling it!', '  Volume group "837f-d2d4-4684-a389-ac1adb050fa8" 
not found', '  Cannot process volume group 
837f-d2d4-4684-a389-ac1adb050fa8']
Thread-882::DEBUG::2015-12-16 
11:40:06,401::lvm::415::Storage.OperationMutex::(_reloadvgs) Operation 
'lvm reload operation' released the operation mutex
jsonrpc.Executor/4::DEBUG::2015-12-16 
11:40:06,420::threadPool::29::Storage.ThreadPool::(__init__) Enter - 
numThreads: 5, waitTimeout: 3, maxTasks: 500
jsonrpc.Executor/4::DEBUG::2015-12-16 
11:40:06,423::storage_mailbox::84::Storage.Misc.excCmd::(_mboxExecCmd) 
/usr/bin/dd 
if=/rhev/data-center/0002-0002-0002-0002-01ef/mastersd/dom_md/outbox 
iflag=direct,fullblock bs=512 count=8 skip=8 (cwd None)
Thread-882::ERROR::2015-12-16 
11:40:06,455::sdc::145::Storage.StorageDomainCache::(_findDomain) 
domain 837f-d2d4-4684-a389-ac1adb050fa8 not found

Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 173, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'837f-d2d4-4684-a389-ac1adb050fa8',)
Thread-882::ERROR::2015-12-16 
11:40:06,455::monitor::276::Storage.Monitor::(_monitorDomain) Error 
monitoring domain 837f-d2d4-4684-a389-ac1adb050fa8

Traceback (most recent call last):
  File "/usr/share/vdsm/storage/monitor.py", line 264, in _monitorDomain
    self._produceDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 767, in wrapper
    value = meth(self, *a, **kw)
  File "/usr/share/vdsm/storage/monitor.py", line 323, in _produceDomain
    self.domain = sdCache.produce(self.sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 100, in produce
    domain.getRealDomain()
  File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 124, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 173, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'837f-d2d4-4684-a389-ac1adb050fa8',)
jsonrpc.Executor/4::DEBUG::2015-12-16 
11:40:06,456::storage_mailbox::84::Storage.Misc.excCmd::(_mboxExecCmd) 
SUCCESS: <err> = '8+0 records in\n8+0 records out\n4096 bytes (4.1 kB) 
copied, 0.0275067 s, 149 kB/s\n'; <rc> = 0
Thread-882::INFO::2015-12-16 
11:40:06,480::monitor::299::Storage.Monitor::(_notifyStatusChanges) 
Domain 837f-d2d4-4684-a389-ac1adb050fa8 became INVALID


I can't access the engine administration portal.
There is an empty "profile" tab. Entering admin credentials results in 
an empty error box (see attachment).







Re: [ovirt-users] [Gluster-users] Centos 7.1 failed to start glusterd after upgrading to ovirt 3.6

2015-11-12 Thread Stefano Danzi
To temporarily fix this problem I changed the [Unit] section in the 
glusterd.service file:


[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service network-online.target 
vdsm-network.service

Before=vdsmd.service
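
An equivalent way to get the same ordering without editing the packaged 
unit file is a systemd drop-in, which survives package updates (a sketch; 
the drop-in file name is arbitrary):

mkdir -p /etc/systemd/system/glusterd.service.d
cat > /etc/systemd/system/glusterd.service.d/99-ordering.conf <<'EOF'
[Unit]
After=network-online.target vdsm-network.service
Before=vdsmd.service
EOF
systemctl daemon-reload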

Il 10/11/2015 8.02, Kaushal M ha scritto:

On Mon, Nov 9, 2015 at 9:06 PM, Stefano Danzi <s.da...@hawai.it> wrote:

Here is the output from systemd-analyze critical-chain and systemd-analyze blame.
I think that glusterd now starts too early (before networking).

You are nearly right. GlusterD did start too early. GlusterD is
configured to start after network.target. But network.target in
systemd only guarantees that the network management stack is up; it
doesn't guarantee that the network devices have been configured and
are usable (Ref [1]). This means that when GlusterD starts, the
network is still not up and hence GlusterD will fail to resolve
bricks.

While we could start GlusterD after network-online.target, it would
break GlusterFS mounts configured in /etc/fstab with _netdev option.
Systemd automatically schedules _netdev mounts to be done after
network-online.target. (Ref [1] network-online.target). This could
allow the GlusterFS mounts to be done before GlusterD is up, causing
them to fail. This can be done using systemd-220 [2] which introduced
support for `x-systemd.requires` option for fstab, which can be used
to order mounts after specific services, but is not possible with el7
which has systemd-208.

[1]: https://wiki.freedesktop.org/www/Software/systemd/NetworkTarget/
[2]: https://bugzilla.redhat.com/show_bug.cgi?id=812826
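
For completeness, on a distribution with systemd >= 220 the ordering 
described above would be expressed directly in /etc/fstab, along the lines 
of this hypothetical entry (server, volume and mount point are placeholders):

server1:/data  /mnt/gluster  glusterfs  defaults,_netdev,x-systemd.requires=glusterd.service  0 0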




Re: [ovirt-users] ISO Domain empty after upgrade oVirt from 3.5 to 3.6

2015-11-10 Thread Stefano Danzi



Il 10/11/2015 16.40, Simone Tiraboschi ha scritto:



On Tue, Nov 10, 2015 at 12:36 PM, Stefano Danzi <s.da...@hawai.it 
<mailto:s.da...@hawai.it>> wrote:


Solved!

Inside the image directory there was this link:

lrwxrwxrwx. 1 root root 54 02 nov 12.14
ovirt-tools-setup.iso ->
/usr/share/ovirt-guest-tools-iso/ovirt-tools-setup.iso

Removing the link solved everything.


Hi Stefano,
that link was created by installing the oVirt guest tools RPM.
Was that ISO storage domain exposed via NFS?
Was it exported by the engine VM/host?
thanks,
Simone




Hi!

The link was my attempt to keep ovirt-tools-setup.iso always updated 
in the oVirt CD list.
I made the link just before upgrading, so maybe it caused errors in 3.5 
as well.


The ISO storage domain was exposed via NFS. It was exported from the 
engine host (self-hosted engine).


It would be nice to have a more detailed error instead of "VDSM command 
failed: Cannot get file stats: (u'837f-d2d4-4684-a389-ac1adb050fa8',)".


I think the engine stats all files. If one stat fails (for a link, in 
this case), the whole process fails and the list remains empty.





[ovirt-users] ISO Domain empty after upgrade oVirt from 3.5 to 3.6

2015-11-09 Thread Stefano Danzi


Hello,
after upgrading oVirt my ISO domain is empty. I can upload an ISO image 
(using ovirt-iso-uploader) without errors.
The path /var/lib/exports/iso contains all old and new images, and this 
path is also correctly mounted on the host machine.
...but the ISO domain is still empty.

I didn't find any errors in the logs.



Re: [ovirt-users] Hosted engine notifications don't work after upgrading ovirt from 3.5 to 3.6

2015-11-09 Thread Stefano Danzi

Hello,
I've made no changes other than upgrading oVirt from 3.5 to 3.6.
The distro is a standard CentOS 7.1.

Python is: python-2.7.5-18.el7_1.1.x86_64

state_transition.txt hasn't an empty line as first line.



Il 09/11/2015 13.32, Martin Sivak ha scritto:

Btw, please check the template file
(/etc/ovirt-hosted-engine-ha/notifications/state_transition.txt) and
make sure it does not start with an empty line.

Martin

On Mon, Nov 9, 2015 at 1:25 PM, Martin Sivak <msi...@redhat.com> wrote:

Hi,

can you please tell us the Python version you are using? We are using
the smtplib and email.mime.text standard libraries to send emails so
this should not be our bug (unless the API changed).

Thanks

--
Martin Sivak
SLA / oVirt


On Mon, Nov 9, 2015 at 1:11 PM, Sandro Bonazzola <sbona...@redhat.com> wrote:


On Mon, Nov 9, 2015 at 11:44 AM, Stefano Danzi <s.da...@hawai.it> wrote:

Your trick works fine! Thanks!

Now I see that emails sent from the broker have "corrupted" headers.

At the end of the message we can see:

Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Date: Mon, 09 Nov 2015 11:33:37 +0100
Message-Id: <20151109103337.d9c7d1260...@my.server.lan>
From: mysen...@server.lan
To: undisclosed-recipients:;

From: mysen...@server.lan
To: myrecei...@server.lan
Subject: ovirt-hosted-engine state transition EngineUp-GlobalMaintenance

The state machine changed state.



Adding Roy and Martin, looks like a separate issue





From and To are repeated twice. This causes the email client to show 
the sender correctly, but an empty recipient and an empty subject.

In the message body I see everything after "To: undisclosed-recipients:;".

Il 06/11/2015 20.01, Simone Tiraboschi ha scritto:



On Thu, Nov 5, 2015 at 7:10 PM, Stefano Danzi <s.da...@hawai.it> wrote:




the content is:

[email]
smtp-server=localhost
smtp-port=25
destination-emails=root@localhost
source-email=root@localhost

[notify]
state_transition=maintenance|start|stop|migrate|up|down

and is the default. My conf was lost during upgrade.
If I restart ovirt-ha-broker the broker.conf is replaced with the default

If I don't restart ovirt-ha-broker, the broker.conf is silently replaced
after a while.

Looking here
http://lists.ovirt.org/pipermail/engine-commits/2015-June/022940.html
I understand that broker.conf is stored in another place and overwritten at
runtime.


The broker.conf is now on the shared storage (as other hosted-engine
related configuration files) so that in the future they'll be easily
editable from the web UI.

The issue here seems to be that the upgrade overwrote it with the default
file before copying it to the shared storage.
I'm opening a bug against that.

Let's try to fix it in your instance (please substitute
'192.168.1.115:_Virtual_ext35u36' with the mount point on your system):

dir=`mktemp -d` && cd $dir
systemctl stop ovirt-ha-broker
sdUUID_line=$(grep sdUUID /etc/ovirt-hosted-engine/hosted-engine.conf)
sdUUID=${sdUUID_line:7:36}
conf_volume_UUID_line=$(grep conf_volume_UUID
/etc/ovirt-hosted-engine/hosted-engine.conf)
conf_volume_UUID=${conf_volume_UUID_line:17:36}
conf_image_UUID_line=$(grep conf_image_UUID
/etc/ovirt-hosted-engine/hosted-engine.conf)
conf_image_UUID=${conf_image_UUID_line:16:36}
dd
if=/rhev/data-center/mnt/192.168.1.115:_Virtual_ext35u36/$sdUUID/images/$conf_image_UUID/$conf_volume_UUID
2>/dev/null| tar -xvf -
cp /etc/ovirt-hosted-engine-ha/broker.conf.rpmsave broker.conf # or edit
broker.conf as you need
tar -cO * | dd
of=/rhev/data-center/mnt/192.168.1.115:_Virtual_ext35u36/$sdUUID/images/$conf_image_UUID/$conf_volume_UUID
systemctl start ovirt-ha-broker






Il 05/11/2015 18.44, Simone Tiraboschi ha scritto:

Can you please paste here the content of
/var/lib/ovirt-hosted-engine-ha/broker.conf ?
eventually make it anonymous if you prefer



On Thu, Nov 5, 2015 at 6:42 PM, Stefano Danzi <s.da...@hawai.it> wrote:

After upgrading from 3.5 to 3.6, hosted engine notifications stopped
working.
I think that broker.conf was lost during the upgrade.

I found this: https://bugzilla.redhat.com/show_bug.cgi?id=1260757
But I don't understand how to change the configuration now.




--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com




Re: [ovirt-users] ISO Domain empty after upgrade oVirt from 3.5 to 3.6

2015-11-09 Thread Stefano Danzi

Hello,
I refreshed many times. Putting the domain into maintenance and 
reactivating it doesn't solve the issue.
Now the UI shows me this message: "VDSM command failed: Cannot get file stats: 
(u'837f-d2d4-4684-a389-ac1adb050fa8',)"


I will try to detach and import the ISO domain.



Il 09/11/2015 16.11, Sandro Bonazzola ha scritto:

Adding Nir and Allon

On Mon, Nov 9, 2015 at 4:05 PM, Simone Tiraboschi <stira...@redhat.com 
<mailto:stira...@redhat.com>> wrote:




On Mon, Nov 9, 2015 at 3:11 PM, Stefano Danzi <s.da...@hawai.it
<mailto:s.da...@hawai.it>> wrote:


Hello,
after upgrading oVirt my ISO domain is empty. If I upload an
iso image (using ovirt-iso-uploader) I can upload an image
without errors.
The path /var/lib/exports/iso contains all old and new images.
This path is also correctly mounted on host machine.
...but ISO domain still empty.


Can you please go to that storage domain in your engine
and force a refresh, if you haven't already?
If that doesn't work, put the storage domain into maintenance and
reactivate it, and if that is still not sufficient please try to
detach and import it again.


I didn't find any error in logs.





--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com <http://redhat.com>




Re: [ovirt-users] ISO Domain empty after upgrade oVirt from 3.5 to 3.6

2015-11-09 Thread Stefano Danzi

Detaching and importing the domain doesn't solve the issue.
Detaching, destroying and importing the domain doesn't solve it either.

The error is the same.

I could create a new ISO domain and re-upload the ISOs, but I think this 
case should be investigated a bit more.


Il 09/11/2015 16.21, Stefano Danzi ha scritto:

Hello,
I refreshed many times. Putting the domain into maintenance and 
reactivating it doesn't solve the issue.
Now the UI shows me this message: "VDSM command failed: Cannot get file 
stats: (u'837f-d2d4-4684-a389-ac1adb050fa8',)"


I will try to detach and import the ISO domain.



Il 09/11/2015 16.11, Sandro Bonazzola ha scritto:

Adding Nir and Allon

On Mon, Nov 9, 2015 at 4:05 PM, Simone Tiraboschi 
<stira...@redhat.com <mailto:stira...@redhat.com>> wrote:




On Mon, Nov 9, 2015 at 3:11 PM, Stefano Danzi <s.da...@hawai.it>
wrote:


Hello,
after upgrading oVirt my ISO domain is empty. I can upload an
ISO image (using ovirt-iso-uploader) without errors.
The path /var/lib/exports/iso contains all old and new
images, and this path is also correctly mounted on the host machine.
...but the ISO domain is still empty.


Can you please go to that storage domain in your engine
and force a refresh, if you haven't already?
If that doesn't work, put the storage domain into maintenance and
reactivate it, and if that is still not sufficient please try to
detach and import it again.


I didn't find any error in logs.





--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com <http://redhat.com>






Re: [ovirt-users] [Gluster-users] Centos 7.1 failed to start glusterd after upgrading to ovirt 3.6

2015-11-09 Thread Stefano Danzi

Here is the output from systemd-analyze critical-chain and systemd-analyze blame.
I think that glusterd now starts too early (before networking).

[root@ovirt01 tmp]# systemd-analyze critical-chain
The time after the unit is active or started is printed after the "@" 
character.

The time the unit takes to start is printed after the "+" character.

multi-user.target @17.148s
└─ovirt-ha-agent.service @17.021s +127ms
  └─vdsmd.service @15.871s +1.148s
└─vdsm-network.service @11.495s +4.373s
  └─libvirtd.service @11.238s +254ms
└─iscsid.service @11.228s +8ms
  └─network.target @11.226s
└─network.service @6.748s +4.476s
  └─iptables.service @6.630s +117ms
└─basic.target @6.629s
  └─paths.target @6.629s
└─brandbot.path @6.629s
  └─sysinit.target @6.615s
└─systemd-update-utmp.service @6.610s +4ms
  └─auditd.service @6.450s +157ms
└─systemd-tmpfiles-setup.service @6.369s +77ms
  └─rhel-import-state.service @6.277s +88ms
└─local-fs.target @6.275s
  └─home-glusterfs-data.mount @5.805s 
+470ms

└─home.mount @3.946s +1.836s
└─systemd-fsck@dev-mapper-centos_ovirt01\x2dhome.service @3.937s +7ms
└─dev-mapper-centos_ovirt01\x2dhome.device @3.936s



[root@ovirt01 tmp]# systemd-analyze blame
  4.476s network.service
  4.373s vdsm-network.service
  2.318s glusterd.service
  2.076s postfix.service
  1.836s home.mount
  1.651s lvm2-monitor.service
  1.258s lvm2-pvscan@9:1.service
  1.211s systemd-udev-settle.service
  1.148s vdsmd.service
  1.079s dmraid-activation.service
  1.046s boot.mount
   904ms kdump.service
   779ms multipathd.service
   657ms var-lib-nfs-rpc_pipefs.mount
   590ms 
systemd-fsck@dev-disk-by\x2duuid-e185849f\x2d2c82\x2d4eb2\x2da215\x2d97340e90c93e.service

   547ms tuned.service
   481ms kmod-static-nodes.service
   470ms home-glusterfs-data.mount
   427ms home-glusterfs-engine.mount
   422ms sys-kernel-debug.mount
   411ms dev-hugepages.mount
   411ms dev-mqueue.mount
   278ms systemd-fsck-root.service
   263ms systemd-readahead-replay.service
   254ms libvirtd.service
   243ms systemd-tmpfiles-setup-dev.service
   216ms systemd-modules-load.service
   209ms rhel-readonly.service
   195ms wdmd.service
   192ms sanlock.service
   191ms gssproxy.service
   186ms systemd-udev-trigger.service
   157ms auditd.service
   151ms plymouth-quit-wait.service
   151ms plymouth-quit.service
   132ms proc-fs-nfsd.mount
   127ms ovirt-ha-agent.service
   117ms iptables.service
   110ms ovirt-ha-broker.service
96ms avahi-daemon.service
89ms systemd-udevd.service
88ms rhel-import-state.service
77ms systemd-tmpfiles-setup.service
71ms sysstat.service
71ms microcode.service
71ms chronyd.service
69ms systemd-readahead-collect.service
68ms systemd-sysctl.service
65ms systemd-logind.service
61ms rsyslog.service
58ms systemd-remount-fs.service
46ms rpcbind.service
46ms nfs-config.service
45ms systemd-tmpfiles-clean.service
41ms rhel-dmesg.service
37ms dev-mapper-centos_ovirt01\x2dswap.swap
29ms systemd-vconsole-setup.service
26ms plymouth-read-write.service
26ms systemd-random-seed.service
24ms netcf-transaction.service
22ms mdmonitor.service
20ms systemd-machined.service
14ms plymouth-start.service
12ms systemd-update-utmp-runlevel.service
11ms 
systemd-fsck@dev-mapper-centos_ovirt01\x2dglusterOVEngine.service

 8ms iscsid.service
 7ms systemd-fsck@dev-mapper-centos_ovirt01\x2dhome.service
 7ms systemd-readahead-done.service
 7ms 
systemd-fsck@dev-mapper-centos_ovirt01\x2dglusterOVData.service

 6ms sys-fs-fuse-connections.mount
 4ms systemd-update-utmp.service
 4ms glusterfsd.service
 4ms rpc-statd-notify.service
 3ms iscsi-shutdown.service
 3ms systemd-journal-flush.service
 2ms sys-kernel-config.mount
 1ms systemd-user-sessions.service


Il 06/11/2015 9.27, Stefano Danzi ha scritto:

Hi!
I have only one node (a test system); I didn't change any IP address, 
and the entry is in /etc/hosts.

I think that glusterd now starts before networking

Re: [ovirt-users] Hosted engine notifications don't work after upgrading ovirt from 3.5 to 3.6

2015-11-09 Thread Stefano Danzi

Your trick works fine! Thanks!

Now I see that emails sent from the broker have "corrupted" headers.

At the end of the message we can see:

Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Date: Mon, 09 Nov 2015 11:33:37 +0100
Message-Id: <20151109103337.d9c7d1260...@my.server.lan>
From: mysen...@server.lan
To: undisclosed-recipients:;

From: mysen...@server.lan
To: myrecei...@server.lan
Subject: ovirt-hosted-engine state transition EngineUp-GlobalMaintenance

The state machine changed state.



From and To are repeated twice. This causes the email client to show 
the sender correctly, but an empty recipient and an empty subject.

In the message body I see everything after "To: undisclosed-recipients:;".

Il 06/11/2015 20.01, Simone Tiraboschi ha scritto:



On Thu, Nov 5, 2015 at 7:10 PM, Stefano Danzi <s.da...@hawai.it 
<mailto:s.da...@hawai.it>> wrote:






the content is:

[email]
smtp-server=localhost
smtp-port=25
destination-emails=root@localhost
source-email=root@localhost

[notify]
state_transition=maintenance|start|stop|migrate|up|down


and is the default. My conf was lost during upgrade.
If I restart ovirt-ha-broker the broker.conf is replaced with the
default

If I don't restart ovirt-ha-broker, the broker.conf is silently
replaced after a while.

Looking here
http://lists.ovirt.org/pipermail/engine-commits/2015-June/022940.html
I understand that broker.conf is stored in another place and
overwritten at runtime.


The broker.conf is now on the shared storage (as other hosted-engine 
related configuration files) so that in the future they'll be easily 
editable from the web UI.


The issue here seems to be that the upgrade overwrote it with the 
default file before copying it to the shared storage.

I'm opening a bug against that.

Let's try to fix it in your instance (please substitute 
'192.168.1.115:_Virtual_ext35u36' with the mount point on your system):


dir=`mktemp -d` && cd $dir
systemctl stop ovirt-ha-broker
sdUUID_line=$(grep sdUUID /etc/ovirt-hosted-engine/hosted-engine.conf)
sdUUID=${sdUUID_line:7:36}
conf_volume_UUID_line=$(grep conf_volume_UUID 
/etc/ovirt-hosted-engine/hosted-engine.conf)

conf_volume_UUID=${conf_volume_UUID_line:17:36}
conf_image_UUID_line=$(grep conf_image_UUID 
/etc/ovirt-hosted-engine/hosted-engine.conf)

conf_image_UUID=${conf_image_UUID_line:16:36}
dd 
if=/rhev/data-center/mnt/192.168.1.115:_Virtual_ext35u36/$sdUUID/images/$conf_image_UUID/$conf_volume_UUID 
2>/dev/null| tar -xvf -
cp /etc/ovirt-hosted-engine-ha/broker.conf.rpmsave broker.conf # or 
edit broker.conf as you need
tar -cO * | dd 
of=/rhev/data-center/mnt/192.168.1.115:_Virtual_ext35u36/$sdUUID/images/$conf_image_UUID/$conf_volume_UUID

systemctl start ovirt-ha-broker
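
Once the service is back up, a quick sanity check is to confirm the broker 
extracted the restored configuration to its runtime location (a sketch; 
the path is the runtime copy mentioned earlier in this thread):

systemctl status ovirt-ha-broker
grep smtp-server /var/lib/ovirt-hosted-engine-ha/broker.conf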





Il 05/11/2015 18.44, Simone Tiraboschi ha scritto:

Can you please paste here the content of
/var/lib/ovirt-hosted-engine-ha/broker.conf ?
eventually make it anonymous if you prefer



On Thu, Nov 5, 2015 at 6:42 PM, Stefano Danzi <s.da...@hawai.it
<mailto:s.da...@hawai.it>> wrote:

After upgrading from 3.5 to 3.6, hosted engine notifications
stopped working.
I think that broker.conf was lost during the upgrade.

I found this:
https://bugzilla.redhat.com/show_bug.cgi?id=1260757
But I don't understand how to change the configuration now.











Re: [ovirt-users] Gluster storage domain error after upgrading to 3.6

2015-11-06 Thread Stefano Danzi

I patched the code as an "emergency" fix.
I can't find how to change the configuration.

But I think that's a bug:

- Everything worked in oVirt 3.5; after the upgrade it stopped working.
- The log shows a Python exception.

One more thought:

If the configuration requirements change, I should be warned during the 
upgrade, or at least find a specific error in the log.
...removing something that doesn't exist from a list and leaving a cryptic 
Python exception in the error log isn't the best solution...


Il 06/11/2015 8.12, Nir Soffer ha scritto:



On Nov 5, 2015 at 8:18 PM, "Stefano Danzi" <s.da...@hawai.it> wrote:

>
> To temporarily solve the problem I patched storageServer.py as 
suggested in the link above.


I would not patch the code but change the configuration.

> I can't find a related issue on bugzilla.

Would you file a bug about this?

>
>
> Il 05/11/2015 11.43, Stefano Danzi ha scritto:
>>
>> My error is related to this message:
>>
>> http://lists.ovirt.org/pipermail/users/2015-August/034316.html
>>
>> Il 05/11/2015 0.28, Stefano Danzi ha scritto:
>>>
>>> Hello,
>>> I have an Ovirt installation with only 1 host and self-hosted engine.
>>> My Master Data storage domain is GlusterFS type.
>>>
>>> After upgrading to Ovirt 3.6 data storage domain and default 
dataceter are down.

>>> The error in vdsm.log is:
>>>
>>> Thread-6585::DEBUG::2015-11-04 
23:55:00,173::task::595::Storage.TaskManager.Task::(_updateState) 
Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state init -> 
state preparin

>>> g
>>> Thread-6585::INFO::2015-11-04 
23:55:00,173::logUtils::48::dispatcher::(wrapper) Run and protect: 
connectStorageServer(domType=7, 
spUUID=u'----', conLi
>>> st=[{u'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499', 
u'connection': u'ovirtbk-mount.hawai.lan:data', u'iqn': u'', u'user': 
u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password':

>>>  '', u'port': u''}], options=None)
>>> Thread-6585::DEBUG::2015-11-04 
23:55:00,235::fileUtils::143::Storage.fileUtils::(createdir) Creating 
directory: 
/rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data mode: Non

>>> e
>>> Thread-6585::WARNING::2015-11-04 
23:55:00,235::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data already 
exists
>>> Thread-6585::ERROR::2015-11-04 
23:55:00,235::hsm::2465::Storage.HSM::(connectStorageServer) Could not 
connect to storageServer

>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/storage/hsm.py", line 2462, in 
connectStorageServer

>>> conObj.connect()
>>>   File "/usr/share/vdsm/storage/storageServer.py", line 224, in 
connect

>>> self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
>>>   File "/usr/share/vdsm/storage/storageServer.py", line 323, in 
options

>>> backup_servers_option = self._get_backup_servers_option()
>>>   File "/usr/share/vdsm/storage/storageServer.py", line 340, in 
_get_backup_servers_option

>>> servers.remove(self._volfileserver)
>>> ValueError: list.remove(x): x not in list
>>> Thread-6585::DEBUG::2015-11-04 
23:55:00,235::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs: 
{46f55a31-f35f-465c-b3e2-df45c05e06a7: storage.nfsSD.findDomain}
>>> Thread-6585::INFO::2015-11-04 
23:55:00,236::logUtils::51::dispatcher::(wrapper) Run and protect: 
connectStorageServer, Return response: {'statuslist': [{'status': 100, 
'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
>>> Thread-6585::DEBUG::2015-11-04 
23:55:00,236::task::1191::Storage.TaskManager.Task::(prepare) 
Task=`86e72580-fa76-4347-b919-a73970d12682`::finished: {'statuslist': 
[{'status': 100, 'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
>>> Thread-6585::DEBUG::2015-11-04 
23:55:00,236::task::595::Storage.TaskManager.Task::(_updateState) 
Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state 
preparing -> state finished



Re: [ovirt-users] Centos 7.1 failed to start glusterd after upgrading to ovirt 3.6

2015-11-06 Thread Stefano Danzi

Hello!
It's a test environment, so I have only one node.
If I start glusterd manually a few seconds after boot I have no problems. 
This error occurs only during boot.


I think that something changed during the upgrade. Maybe glusterd now 
starts before networking or rpc.


Il 06/11/2015 5.29, Sahina Bose ha scritto:

Did you upgrade all the nodes too?
Are some of your nodes not-reachable?

Adding gluster-users for glusterd error.

On 11/06/2015 12:00 AM, Stefano Danzi wrote:


After upgrading oVirt from 3.5 to 3.6, glusterd fails to start when 
the host boots.

Manually starting the service after boot works fine.

gluster log:

[2015-11-04 13:37:55.360876] I [MSGID: 100030] 
[glusterfsd.c:2318:main] 0-/usr/sbin/glusterd: Started running 
/usr/sbin/glusterd version 3.7.5 (args: /usr/sbin/glusterd -p 
/var/run/glusterd.pid)
[2015-11-04 13:37:55.447413] I [MSGID: 106478] [glusterd.c:1350:init] 
0-management: Maximum allowed open file descriptors set to 65536
[2015-11-04 13:37:55.447477] I [MSGID: 106479] [glusterd.c:1399:init] 
0-management: Using /var/lib/glusterd as working directory
[2015-11-04 13:37:55.464540] W [MSGID: 103071] 
[rdma.c:4592:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm 
event channel creation failed [Nessun device corrisponde]
[2015-11-04 13:37:55.464559] W [MSGID: 103055] [rdma.c:4899:init] 
0-rdma.management: Failed to initialize IB Device
[2015-11-04 13:37:55.464566] W 
[rpc-transport.c:359:rpc_transport_load] 0-rpc-transport: 'rdma' 
initialization failed
[2015-11-04 13:37:55.464616] W 
[rpcsvc.c:1597:rpcsvc_transport_create] 0-rpc-service: cannot create 
listener, initing the transport failed
[2015-11-04 13:37:55.464624] E [MSGID: 106243] [glusterd.c:1623:init] 
0-management: creation of 1 listeners failed, continuing with 
succeeded transport
[2015-11-04 13:37:57.663862] I [MSGID: 106513] 
[glusterd-store.c:2036:glusterd_restore_op_version] 0-glusterd: 
retrieved op-version: 30600
[2015-11-04 13:37:58.284522] I [MSGID: 106194] 
[glusterd-store.c:3465:glusterd_store_retrieve_missed_snaps_list] 
0-management: No missed snaps list.
[2015-11-04 13:37:58.287477] E [MSGID: 106187] 
[glusterd-store.c:4243:glusterd_resolve_all_bricks] 0-glusterd: 
resolve brick failed in restore
[2015-11-04 13:37:58.287505] E [MSGID: 101019] 
[xlator.c:428:xlator_init] 0-management: Initialization of volume 
'management' failed, review your volfile again
[2015-11-04 13:37:58.287513] E [graph.c:322:glusterfs_graph_init] 
0-management: initializing translator failed
[2015-11-04 13:37:58.287518] E [graph.c:661:glusterfs_graph_activate] 
0-graph: init failed
[2015-11-04 13:37:58.287799] W [glusterfsd.c:1236:cleanup_and_exit] 
(-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x7f29b876524d] 
-->/usr/sbin/glusterd(glusterfs_process_volfp+0x126) [0x7f29b87650f6] 
-->/usr/sbin/glusterd(cleanup_and_exit+0x69) [0x7f29b87646d9] ) 0-: 
received signum (0), shutting down





Re: [ovirt-users] Gluster storage domain error after upgrading to 3.6

2015-11-06 Thread Stefano Danzi
oVirt is configured to use ovirtbk-mount.hawai.lan, but gluster uses 
ovirt01.hawai.lan.

ovirtbk-mount.hawai.lan is an alias of ovirt01, defined in /etc/hosts.
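
A quick way to see the mismatch is to compare the hostname gluster 
registered for the brick with the name oVirt uses to mount (a sketch; 
the volume name 'data' comes from the connection string in this thread):

gluster volume info data | grep -i brick
getent hosts ovirtbk-mount.hawai.lan ovirt01.hawai.lan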

Il 06/11/2015 8.01, Nir Soffer ha scritto:



On Nov 5, 2015 at 1:47 AM, "Stefano Danzi" <s.da...@hawai.it> wrote:

>
> Hello,
> I have an Ovirt installation with only 1 host and self-hosted engine.
> My Master Data storage domain is GlusterFS type.
>
> After upgrading to Ovirt 3.6 data storage domain and default 
dataceter are down.

> The error in vdsm.log is:
>
> Thread-6585::DEBUG::2015-11-04 
23:55:00,173::task::595::Storage.TaskManager.Task::(_updateState) 
Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state init -> 
state preparin

> g
> Thread-6585::INFO::2015-11-04 
23:55:00,173::logUtils::48::dispatcher::(wrapper) Run and protect: 
connectStorageServer(domType=7, 
spUUID=u'----', conLi
> st=[{u'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499', u'connection': 
u'ovirtbk-mount.hawai.lan:data', u'iqn': u'', u'user': u'', u'tpgt': 
u'1', u'vfs_type': u'glusterfs', u'password':

>  '', u'port': u''}], options=None)

The error below suggests that oVirt and gluster are not configured in 
the same way, one using a domain name and the other an IP address.


Can you share the output of
gluster volume info

on one of the bricks, or on the host (you will need to use --remote-host)?

Nir
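
For reference, from the oVirt host that would look something like the 
following sketch (the peer hostname is an example based on this thread):

gluster --remote-host=ovirt01.hawai.lan volume info data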

> Thread-6585::DEBUG::2015-11-04 
23:55:00,235::fileUtils::143::Storage.fileUtils::(createdir) Creating 
directory: 
/rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data mode: Non

> e
> Thread-6585::WARNING::2015-11-04 
23:55:00,235::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data already 
exists
> Thread-6585::ERROR::2015-11-04 
23:55:00,235::hsm::2465::Storage.HSM::(connectStorageServer) Could not 
connect to storageServer

> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/hsm.py", line 2462, in 
connectStorageServer

> conObj.connect()
>   File "/usr/share/vdsm/storage/storageServer.py", line 224, in connect
> self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
>   File "/usr/share/vdsm/storage/storageServer.py", line 323, in options
> backup_servers_option = self._get_backup_servers_option()
>   File "/usr/share/vdsm/storage/storageServer.py", line 340, in 
_get_backup_servers_option

> servers.remove(self._volfileserver)
> ValueError: list.remove(x): x not in list
> Thread-6585::DEBUG::2015-11-04 
23:55:00,235::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs: 
{46f55a31-f35f-465c-b3e2-df45c05e06a7: storage.nfsSD.findDomain}
> Thread-6585::INFO::2015-11-04 
23:55:00,236::logUtils::51::dispatcher::(wrapper) Run and protect: 
connectStorageServer, Return response: {'statuslist': [{'status': 100, 
'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
> Thread-6585::DEBUG::2015-11-04 
23:55:00,236::task::1191::Storage.TaskManager.Task::(prepare) 
Task=`86e72580-fa76-4347-b919-a73970d12682`::finished: {'statuslist': 
[{'status': 100, 'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
> Thread-6585::DEBUG::2015-11-04 
23:55:00,236::task::595::Storage.TaskManager.Task::(_updateState) 
Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state 
preparing -> state finished



[ovirt-users] Hosted engine notifications don't work after upgrading ovirt from 3.5 to 3.6

2015-11-05 Thread Stefano Danzi

After upgrading from 3.5 to 3.6, hosted engine notifications stopped working.
I think that broker.conf was lost during the upgrade.

I found this: https://bugzilla.redhat.com/show_bug.cgi?id=1260757
But I don't understand how to change the configuration now.


Re: [ovirt-users] Hosted engine notifications don't work after upgrading ovirt from 3.5 to 3.6

2015-11-05 Thread Stefano Danzi





the content is:

[email]
smtp-server=localhost
smtp-port=25
destination-emails=root@localhost
source-email=root@localhost

[notify]
state_transition=maintenance|start|stop|migrate|up|down


and is the default. My conf was lost during upgrade.
If I restart ovirt-ha-broker the broker.conf is replaced with the default

If I don't restart ovirt-ha-broker, the broker.conf is silently replaced 
after a while.


Looking here 
http://lists.ovirt.org/pipermail/engine-commits/2015-June/022940.html
I understand that broker.conf is stored in another place and overwritten 
at runtime.




Il 05/11/2015 18.44, Simone Tiraboschi ha scritto:
Can you please paste here the content of 
/var/lib/ovirt-hosted-engine-ha/broker.conf ?

eventually make it anonymous if you prefer



On Thu, Nov 5, 2015 at 6:42 PM, Stefano Danzi <s.da...@hawai.it 
<mailto:s.da...@hawai.it>> wrote:


After upgrading from 3.5 to 3.6, hosted engine notifications stopped
working.
I think that broker.conf was lost during the upgrade.

I found this: https://bugzilla.redhat.com/show_bug.cgi?id=1260757
But I don't understand how to change the configuration now.


Re: [ovirt-users] Gluster storage domain error after upgrading to 3.6

2015-11-05 Thread Stefano Danzi
To temporarily solve the problem I patched storageServer.py as suggested 
in the link above.

I can't find a related issue on Bugzilla.

Il 05/11/2015 11.43, Stefano Danzi ha scritto:

My error is related to this message:

http://lists.ovirt.org/pipermail/users/2015-August/034316.html

On 05/11/2015 at 0:28, Stefano Danzi wrote:

Hello,
I have an oVirt installation with only one host and a self-hosted engine.
My master data storage domain is of type GlusterFS.

After upgrading to oVirt 3.6, the data storage domain and the Default 
datacenter are down.

The error in vdsm.log is:

Thread-6585::DEBUG::2015-11-04 23:55:00,173::task::595::Storage.TaskManager.Task::(_updateState) Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state init -> state preparing
Thread-6585::INFO::2015-11-04 23:55:00,173::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID=u'----', conList=[{u'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499', u'connection': u'ovirtbk-mount.hawai.lan:data', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '', u'port': u''}], options=None)
Thread-6585::DEBUG::2015-11-04 23:55:00,235::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data mode: None
Thread-6585::WARNING::2015-11-04 23:55:00,235::fileUtils::152::Storage.fileUtils::(createdir) Dir /rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data already exists
Thread-6585::ERROR::2015-11-04 23:55:00,235::hsm::2465::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2462, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 224, in connect
    self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
  File "/usr/share/vdsm/storage/storageServer.py", line 323, in options
    backup_servers_option = self._get_backup_servers_option()
  File "/usr/share/vdsm/storage/storageServer.py", line 340, in _get_backup_servers_option
    servers.remove(self._volfileserver)
ValueError: list.remove(x): x not in list
Thread-6585::DEBUG::2015-11-04 23:55:00,235::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs: {46f55a31-f35f-465c-b3e2-df45c05e06a7: storage.nfsSD.findDomain}
Thread-6585::INFO::2015-11-04 23:55:00,236::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 100, 'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
Thread-6585::DEBUG::2015-11-04 23:55:00,236::task::1191::Storage.TaskManager.Task::(prepare) Task=`86e72580-fa76-4347-b919-a73970d12682`::finished: {'statuslist': [{'status': 100, 'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
Thread-6585::DEBUG::2015-11-04 23:55:00,236::task::595::Storage.TaskManager.Task::(_updateState) Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state preparing -> state finished



[ovirt-users] Centos 7.1 failed to start glusterd after upgrading to ovirt 3.6

2015-11-05 Thread Stefano Danzi


After upgrading oVirt from 3.5 to 3.6, glusterd fails to start when the 
host boots.
Starting the service manually after boot works fine.

gluster log:

[2015-11-04 13:37:55.360876] I [MSGID: 100030] [glusterfsd.c:2318:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.7.5 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid)
[2015-11-04 13:37:55.447413] I [MSGID: 106478] [glusterd.c:1350:init] 0-management: Maximum allowed open file descriptors set to 65536
[2015-11-04 13:37:55.447477] I [MSGID: 106479] [glusterd.c:1399:init] 0-management: Using /var/lib/glusterd as working directory
[2015-11-04 13:37:55.464540] W [MSGID: 103071] [rdma.c:4592:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [Nessun device corrisponde]
[2015-11-04 13:37:55.464559] W [MSGID: 103055] [rdma.c:4899:init] 0-rdma.management: Failed to initialize IB Device
[2015-11-04 13:37:55.464566] W [rpc-transport.c:359:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2015-11-04 13:37:55.464616] W [rpcsvc.c:1597:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed
[2015-11-04 13:37:55.464624] E [MSGID: 106243] [glusterd.c:1623:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2015-11-04 13:37:57.663862] I [MSGID: 106513] [glusterd-store.c:2036:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 30600
[2015-11-04 13:37:58.284522] I [MSGID: 106194] [glusterd-store.c:3465:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list.
[2015-11-04 13:37:58.287477] E [MSGID: 106187] [glusterd-store.c:4243:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore
[2015-11-04 13:37:58.287505] E [MSGID: 101019] [xlator.c:428:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
[2015-11-04 13:37:58.287513] E [graph.c:322:glusterfs_graph_init] 0-management: initializing translator failed
[2015-11-04 13:37:58.287518] E [graph.c:661:glusterfs_graph_activate] 0-graph: init failed
[2015-11-04 13:37:58.287799] W [glusterfsd.c:1236:cleanup_and_exit] (-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x7f29b876524d] -->/usr/sbin/glusterd(glusterfs_process_volfp+0x126) [0x7f29b87650f6] -->/usr/sbin/glusterd(cleanup_and_exit+0x69) [0x7f29b87646d9] ) 0-: received signum (0), shutting down
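
The telling entry is "resolve brick failed in restore": glusterd started 
before the host could resolve its brick addresses, i.e. before the network 
was fully up. A hedged workaround sketch — a systemd drop-in at an assumed 
path, delaying glusterd until the network is online (run systemctl 
daemon-reload afterwards and make sure a wait-online service is enabled):

# /etc/systemd/system/glusterd.service.d/wait-online.conf (assumed path)
# Delay glusterd start until name resolution works, so bricks can be
# resolved during restore.
[Unit]
Wants=network-online.target
After=network-online.target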




Re: [ovirt-users] liburcu dependency with glusterfs-epel on CentOS 7.1

2015-05-19 Thread Stefano Danzi

Solved:

In ovirt-3.5-dependencies.repo, the [ovirt-3.5-epel] section carries a 
list of included packages, and userspace-rcu is missing from that list.

Adding this package to the list solved the problem.
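
A sketch of the fixed stanza in /etc/yum.repos.d/ovirt-3.5-dependencies.repo; 
only userspace-rcu is the addition, the rest of the includepkgs line is 
whatever your copy already lists:

[ovirt-3.5-epel]
...
# append userspace-rcu to the existing package allow-list
includepkgs=<existing package list> userspace-rcu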

On 18/05/2015 at 17:39, Stefano Danzi wrote:

Hello,

I'm trying to do a yum update on an oVirt host machine (CentOS 7.1) and 
I get this error:


Errore: Pacchetto: glusterfs-server-3.7.0-1.el7.x86_64 
(ovirt-3.5-glusterfs-epel)

Richiede: liburcu-bp.so.1()(64bit)
Errore: Pacchetto: glusterfs-server-3.7.0-1.el7.x86_64 
(ovirt-3.5-glusterfs-epel)

Richiede: liburcu-cds.so.1()(64bit)


(Please disregard the Italian words in the error message.)


[ovirt-users] liburcu dependency with glusterfs-epel on CentOS 7.1

2015-05-18 Thread Stefano Danzi

Hello,

I'm trying to do a yum update on an oVirt host machine (CentOS 7.1) and 
I get this error:


Errore: Pacchetto: glusterfs-server-3.7.0-1.el7.x86_64 
(ovirt-3.5-glusterfs-epel)

Richiede: liburcu-bp.so.1()(64bit)
Errore: Pacchetto: glusterfs-server-3.7.0-1.el7.x86_64 
(ovirt-3.5-glusterfs-epel)

Richiede: liburcu-cds.so.1()(64bit)


(Please disregard the Italian words in the error message.)


Re: [ovirt-users] Unable to run noVNC console on recent browsers

2015-02-20 Thread Stefano Danzi

Hello!
Already done, but it didn't help.

I downloaded a portable version of Firefox 17 and noVNC works as expected.

On 20/02/2015 at 5:18, Darrell Budic wrote:

Try reimporting the ca.cert for noVNC by connecting directly to the 
websocket proxy address at port 6100. Do this by trying to connect to a 
console and then, once the 1006 error shows up, stripping off everything 
after :6100/ . I've found that somewhere in or after 3.5, restarting the 
websocket proxy causes it to generate its own new ca.cert even though it 
shouldn't.

   -Darrell


On Feb 19, 2015, at 4:09 PM, Stefano Danzi s.da...@hawai.it wrote:

Hello,

I can't make the noVNC console work on recent browsers (Chrome 40, Firefox 
35 and IE 11).

The error that I have is already explained here:

https://forge.univention.org/bugzilla/show_bug.cgi?id=33587

I tried to change the websocket settings as suggested 
(http://errata.univention.de/ucs/3.2/31.html), but it didn't help.

Does anyone know a workaround?


[ovirt-users] Unable to run noVNC console on recent browsers

2015-02-19 Thread Stefano Danzi

Hello,

I can't make the noVNC console work on recent browsers (Chrome 40, Firefox 
35 and IE 11).


The error that I have is already explained here:

https://forge.univention.org/bugzilla/show_bug.cgi?id=33587

I tried to change the websocket settings as suggested 
(http://errata.univention.de/ucs/3.2/31.html), but it didn't help.


Does anyone know a workaround?


Re: [ovirt-users] Enable Smart Array P212 thru VM

2015-02-12 Thread Stefano Danzi

You could share the tape library as an iSCSI target and connect it to the VM.
I've never done such a thing myself, but the internet gives some suggestions:

http://scst.sourceforge.net/target_iscsi.html
http://www.kraftkennedy.com/virtualizing-scsi-tape-drives-with-an-iscsi-bridge/
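
As a rough illustration of the iSCSI idea, here is a hypothetical 
LIO/targetcli session (an alternative target stack to the SCST one linked 
above) that passes a SCSI tape device through as an iSCSI LUN; the device 
path and both IQNs are placeholders, and whether Backup Exec is happy with 
a tape behind iSCSI needs testing:

# find the tape drive's SCSI generic device first, e.g. with: lsscsi -g
/backstores/pscsi create name=tape0 dev=/dev/sg3
# create a target and expose the tape device as a LUN
/iscsi create iqn.2015-02.com.example:tape
/iscsi/iqn.2015-02.com.example:tape/tpg1/luns create /backstores/pscsi/tape0
# allow the Windows VM's initiator IQN
/iscsi/iqn.2015-02.com.example:tape/tpg1/acls create iqn.1991-05.com.microsoft:backup-vm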



On 12/02/2015 at 10:40, VONDRA Alain wrote:

Hi,
I'd like to know if there is any way to pass a Smart Array P212 PCIe card 
through to a Windows VM.
I have Acronis backup software to install, and I'd prefer to put it on a 
CentOS distribution rather than on Windows 2008 R2.
Our old backup software is Symantec Backup Exec, and we have to keep the 
possibility to restore old tapes on demand.
So keeping a dormant Windows VM, with a way to enable the card in it just 
to restore a tape, would be perfect.
I've tried to add the new hardware with virt-manager, but this action just 
freezes the manager, and I need to restart the libvirtd service to unlock it.
If anybody knows a way to do that, I'd be very grateful.
Thanks in advance


Alain VONDRA
Chargé d'exploitation des Systèmes d'Information
Direction Administrative et Financière
+33 1 44 39 77 76
UNICEF France
3 rue Duguay Trouin  75006 PARIS
www.unicef.fr






Re: [ovirt-users] Self hosted engine issues

2015-02-06 Thread Stefano Danzi

This solved the issue!!!
Thanks!!

If oVirt rewrites /etc/multipath.conf, maybe it would be useful to open a 
bug. What do you all think about it?

On 05/02/2015 at 20:36, Darrell Budic wrote:

You can also add "find_multipaths yes" to /etc/multipath.conf; this keeps 
multipathd from treating non-multipath devices as multipath devices, avoids 
the error message, and keeps multipathd from binding your normal devices. I 
find it simpler than blacklisting, and it should still work if you also 
have real multipath devices.

defaults {
    find_multipaths yes
    polling_interval 5
    …
}



On Feb 5, 2015, at 1:04 PM, George Skorup geo...@mwcomm.com wrote:

I ran into this same problem after setting up my cluster on EL7. As has been 
pointed out, the hosted-engine installer modifies /etc/multipath.conf.

I appended:

blacklist {
    devnode "*"
}

to the end of the modified multipath.conf, which is what was there before the 
engine installer ran, and the errors stopped.

I think it was trying to map 253:3, which doesn't exist on my systems. I 
have a similar setup: md RAID1 and LVM+XFS for gluster.
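
To sum up the two alternatives in this thread: they address the same symptom, 
so pick one. Blacklisting every devnode disables multipath entirely and only 
fits hosts with no real multipath storage, while find_multipaths leaves real 
multipath devices working. A hypothetical side-by-side view of the relevant 
/etc/multipath.conf pieces:

# Option 1: only treat devices with more than one path as multipath
defaults {
    find_multipaths yes
}

# Option 2: blacklist everything (only for hosts with no real
# multipath devices)
blacklist {
    devnode "*"
}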


[ovirt-users] Self hosted engine issues

2015-02-05 Thread Stefano Danzi


Hello,
I'm performing a self-hosted engine installation for the first time.
The system currently has only one host.

The host machine and the engine VM run CentOS 7.

The host uses LVM.

After the oVirt installation, I see this error on the host console every 5 minutes:

[ 1823.837020] device-mapper: table: 253:4: multipath: error getting device
[ 1823.837228] device-mapper: ioctl: error adding target to table

In /dev there aren't any devices that match 253:4.
It seems that multipath tries to add /dev/dm-4, but this isn't in /dev 
(and dm-* is blacklisted in the default multipath.conf).
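
To check which device-mapper node a major:minor pair like 253:4 refers to, 
standard dmsetup/lsblk queries can help (a small sketch, nothing 
oVirt-specific):

# list device-mapper devices together with their major:minor numbers
dmsetup ls
# show all block devices with MAJ:MIN, to cross-check what exists
lsblk -o NAME,MAJ:MIN,TYPE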

Another thing is related to the engine.
In the oVirt web interface I can see the engine VM running, but I don't 
see its IP address and FQDN in the list.
If I try to change something in the VM configuration I get the message:

Cannot edit VM. This VM is not managed by the engine.

Is this correct?

Bye
 



