[ovirt-users] Re: oVirt Survey 2019 results

2019-04-02 Thread Dan Kenigsberg
On Tue, Apr 2, 2019 at 9:36 AM Sandro Bonazzola  wrote:
>
> Thanks to the 143 participants to oVirt Survey 2019!
> The survey is now closed and results are publicly available at 
> https://bit.ly/2JYlI7U
> We'll analyze collected data in order to improve oVirt thanks to your 
> feedback.
>
> As a first step after reading the results I'd like to invite the 30 persons 
> who replied they're willing to contribute code to send an email to 
> de...@ovirt.org introducing themselves: we'll be more than happy to welcome 
> them and helping them getting started.
>
> I would also like to invite the 17 people who replied they'd like to help 
> organizing oVirt events in their area to either get in touch with me or 
> introduce themselves to users@ovirt.org so we can discuss about events 
> organization.
>
> Last but not least I'd like to invite the 38 people willing to contribute 
> documentation and the one willing to contribute localization to introduce 
> themselves to de...@ovirt.org.
>
> Thanks!

and thank you, Sandro, for shepherding this survey.

It has, as usual, very interesting results. I am happily surprised to
see how many are using OvS, OVN and IPv6. I am less happy (but
unsurprised) to see that nobody responded that they were using
Fedora-based oVirt.

I know the survey is anonymous, but I would love to reach out and
obtain more information about the painful use case of whoever answered
"What is the most challenging flow in oVirt?" with "Working with networks."
I would love to hear more about your (and others'!) challenges, and
see how we developers can ease them.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LLP7KDV7WFYFQRXM24HYAMYPNSSI56V6/


[ovirt-users] Re: 4.2 / 4.3 : Moving the hosted-engine to another storage

2019-04-02 Thread andreas . elvers+ovirtforum
Thanks for your answer.

> Yes, now you can do it via backup and restore:
> take a backup of the engine with engine-backup and restore it on a new
> hosted-engine VM on a new storage domain with:
> hosted-engine --deploy --restore-from-file=mybackup.tar.gz

That is great news. Just to clear things up completely:

- Is this backup command sufficient? No further switches?
  "engine-backup --mode=backup --file=[EngineBackupFile] --log=[LogFILE]"

- Do I need to place the node to which I want to deploy the engine into maintenance
  before doing the backup? This is referred to in the documentation as:

  "If a hosted-engine host is carrying a virtual load at the time of backup 
[...] 
   then a host [...] cannot be used to deploy a restored self-hosted engine."

   Is this still true? If yes, do all the other precautions from that
documentation apply?

- Before creating the new engine on the new storage, does the old engine need to
  be un-deployed on all engine-HA hosts? I am unable to find information about
  this. Do I just un-deploy via the web UI?

- I want to deploy on a gluster volume that has been automatically created by 
the
  Node setup. The file /etc/gluster/glusterd.vol does not carry 
  "option rpc-auth-allow-insecure on" which is mentioned in the documentation.
  Do I need to follow the instructions for gluster, or are settings already 
sufficiently
  set by the automatic gluster deployment done by oVirt Node setup? I already
  have some VM images running on that gluster storage anyway.

Thanks for your help.
- Andreas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TQNWNUHROMQLO2RSPRCMW3E5J7Y4USJD/


[ovirt-users] Re: 4.2 / 4.3 : Moving the hosted-engine to another storage

2019-04-02 Thread Yedidyah Bar David
On Tue, Apr 2, 2019 at 11:24 AM Simone Tiraboschi  wrote:
>
>
>
> On Tue, Apr 2, 2019 at 10:20 AM  
> wrote:
>>
>> Thanks for your answer.
>>
>> > Yes, now you can do it via backup and restore:
>> > take a backup of the engine with engine-backup and restore it on a new
>> > hosted-engine VM on a new storage domain with:
>> > hosted-engine --deploy --restore-from-file=mybackup.tar.gz
>>
>> That is great news. Just to clear things up completely:
>>
>> - This backup command is sufficient ? No further switches ?
>>   "engine-backup --mode=backup --file=[EngineBackupFile] --log=[LogFILE]"
>
>
> Yes, right.

Just 'engine-backup' should be enough for taking a backup, in 4.3.
See also: https://bugzilla.redhat.com/show_bug.cgi?id=1530031
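
For readers following this thread, a minimal sketch of the backup-and-restore
flow being discussed (file names are placeholders):

  # on the current engine VM: take a full backup
  engine-backup --mode=backup --file=engine-backup.tar.gz --log=engine-backup.log

  # copy engine-backup.tar.gz to the host that will run the new deployment, then:
  hosted-engine --deploy --restore-from-file=engine-backup.tar.gz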

>
>>
>>
>> - Do I need to place the node to which I want to deploy engine into 
>> maintenance
>>   before doing the backup? This is referred in the stated documentation as:
>>
>>   "If a hosted-engine host is carrying a virtual load at the time of backup 
>> [...]
>>then a host [...] cannot be used to deploy a restored self-hosted engine."
>>
>>This is still true? If yes: all other precautions apply from that 
>> documentation?
>
>
> It's a good idea but it's not strictly required.
>
>
>>
>> - before creating the new engine on the new storage, the old engine need to 
>> be
>>   un-deployed on all engine-HA hosts. I am unable to find information about 
>> this
>>   issue. Just un-deploy via web ui?
>
>
> No need for that; but you will required to redeploy them from the new engine 
> to update their configuration.
>
>>
>>
>> - I want to deploy on a gluster volume that has been automatically created 
>> by the
>>   Node setup. The file /etc/gluster/glusterd.vol does not carry
>>   "option rpc-auth-allow-insecure on" which is mentioned in the 
>> documentation.
>>   Do I need to follow the instructions for gluster, or are settings already 
>> sufficiently
>>   set by the automatic gluster deployment done by oVirt Node setup? I already
>>   have some VM images running on that gluster storage anyway.
>>
>> Thanks for your help.
>> - Andreas
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TQNWNUHROMQLO2RSPRCMW3E5J7Y4USJD/
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/A3H7FUPRUU4JTOXB36SG5FLWVV4CHBU7/



-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZAQDQ76RYIB6BP7EJFWU65JQBKO2CEWO/


[ovirt-users] Re: HE - engine gluster volume - not mounted

2019-04-02 Thread Gobinda Das
Hi Leo,
 Can you please paste the output of "df -Th" and "gluster v status"?
I want to make sure the engine domain is mounted and the volumes and bricks are up.
What does the vdsm log say?
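
For reference, a minimal sketch of how that information can be gathered on the
host (the vdsm log path below is the default location):

  df -Th
  gluster volume status
  tail -n 200 /var/log/vdsm/vdsm.log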

On Tue, Apr 2, 2019 at 2:06 PM Leo David  wrote:

> Thank you very much !
> I have just installed a new fresh node,   and triggered the single
> instance hyperconverged setup. It seems it fails at the hosted engine final
> steps of deployment:
>  INFO ] TASK [ovirt.hosted_engine_setup : Get required size]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.hosted_engine_setup : Remove unsuitable storage
> domain]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [ovirt.hosted_engine_setup : Check storage domain free space]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
> "[Cannot attach Storage. There is no active Host in the Data Center.]".
> HTTP response code is 409.
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
> reason is \"Operation Failed\". Fault detail is \"[Cannot attach Storage.
> There is no active Host in the Data Center.]\". HTTP response code is 409."}
> Also,  the
> ovirt-hosted-engine-setup-ansible-create_storage_domain-201932112413-xkq6nb.log
>  throws
> the following:
>
> 2019-04-02 09:25:40,420+0100 DEBUG var changed: host "localhost" var
> "otopi_storage_domain_details" type "" value: "{
> "changed": false,
> "exception": "Traceback (most recent call last):\n  File
> \"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 664,
> in main\nstorage_domains_module.post_create_check(sd_id)\n  File
> \"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 526,
> in post_create_check\nid=storage_domain.id,\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in
> add\nreturn self._internal_add(storage_domain, headers, query, wait)\n
> File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232,
> in _internal_add\nreturn future.wait() if wait else future\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in
> wait\nreturn self._code(response)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in
> callback\nself._check_fault(response)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in
> _check_fault\nself._raise_error(response, body)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in
> _raise_error\nraise error\nError: Fault reason is \"Operation Failed\".
> Fault detail is \"[Cannot attach Storage. There is no active Host in the
> Data Center.]\". HTTP response code is 409.\n",
> "failed": true,
> "msg": "Fault reason is \"Operation Failed\". Fault detail is
> \"[Cannot attach Storage. There is no active Host in the Data Center.]\".
> HTTP response code is 409."
> }"
>
> I have used the ovirt-node-ng-installer-4.3.2-2019031908.el7.iso. So far,
> I am unable to deploy oVirt single node Hyperconverged...
> Any thoughts ?
>
>
>
> On Mon, Apr 1, 2019 at 9:46 PM Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Mon, Apr 1, 2019 at 6:14 PM Leo David  wrote:
>>
>>> Thank you Simone.
>>> I've decides to go for a new fresh install from iso, and i'll keep
>>> posted if any troubles arise. But I am still trying to understand what are
>>> the services that mount the lvms and volumes after configuration. There is
>>> nothing related in fstab, so I assume there are a couple of .mount files
>>> somewhere in the filesystem.
>>> Im just trying to understand node's underneath workflow.
>>>
>>
>> hosted-engine configuration is stored
>> in /etc/ovirt-hosted-engine/hosted-engine.conf; ovirt-ha-broker will mount
>> the hosted-engine storage domain according to that and so ovirt-ha-agent
>> will be able to start the engine VM.
>> Everything else is just in the engine DB.
>>
>>
>>>
>>> On Mon, Apr 1, 2019, 10:16 Simone Tiraboschi 
>>> wrote:
>>>
 Hi,
 to understand what's failing I'd suggest to start attaching setup logs.

 On Sun, Mar 31, 2019 at 5:06 PM Leo David  wrote:

> Hello Everyone,
> Using 4.3.2 installation, and after running through HyperConverged
> Setup,  at the last stage it fails. It seems that the previously created
> "engine" volume is not mounted under "/rhev" path, therefore the setup
> cannot finish the deployment.
> Any ideea which are the services responsible of mounting the volumes
> on oVirt Node distribution ? I'm thinking that maybe this particularly one
> failed to start for some reason...
> Thank you very much !
>
> --
> Best regards, Leo David
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> 

[ovirt-users] Re: 4.2 / 4.3 : Moving the hosted-engine to another storage

2019-04-02 Thread Simone Tiraboschi
On Tue, Apr 2, 2019 at 10:20 AM 
wrote:

> Thanks for your answer.
>
> > Yes, now you can do it via backup and restore:
> > take a backup of the engine with engine-backup and restore it on a new
> > hosted-engine VM on a new storage domain with:
> > hosted-engine --deploy --restore-from-file=mybackup.tar.gz
>
> That is great news. Just to clear things up completely:
>
> - This backup command is sufficient ? No further switches ?
>   "engine-backup --mode=backup --file=[EngineBackupFile] --log=[LogFILE]"
>

Yes, right.


>
> - Do I need to place the node to which I want to deploy engine into
> maintenance
>   before doing the backup? This is referred in the stated documentation as:
>
>   "If a hosted-engine host is carrying a virtual load at the time of
> backup [...]
>then a host [...] cannot be used to deploy a restored self-hosted
> engine."
>
>This is still true? If yes: all other precautions apply from that
> documentation?
>

It's a good idea but it's not strictly required.
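
A related precaution that is sometimes taken before backing up a hosted engine
is enabling global HA maintenance, so the HA agents don't act on the engine VM
while the backup runs; a hedged sketch:

  # on one hosted-engine host
  hosted-engine --set-maintenance --mode=global
  # ... take the backup on the engine VM ...
  hosted-engine --set-maintenance --mode=none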



> - before creating the new engine on the new storage, the old engine need
> to be
>   un-deployed on all engine-HA hosts. I am unable to find information
> about this
>   issue. Just un-deploy via web ui?
>

No need for that; but you will be required to redeploy them from the new
engine to update their configuration.


>
> - I want to deploy on a gluster volume that has been automatically created
> by the
>   Node setup. The file /etc/gluster/glusterd.vol does not carry
>   "option rpc-auth-allow-insecure on" which is mentioned in the
> documentation.
>   Do I need to follow the instructions for gluster, or are settings
> already sufficiently
>   set by the automatic gluster deployment done by oVirt Node setup? I
> already
>   have some VM images running on that gluster storage anyway.
>
> Thanks for your help.
> - Andreas
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TQNWNUHROMQLO2RSPRCMW3E5J7Y4USJD/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A3H7FUPRUU4JTOXB36SG5FLWVV4CHBU7/


[ovirt-users] oVirt Survey 2019 results

2019-04-02 Thread Sandro Bonazzola
Thanks to the 143 participants in the oVirt Survey 2019!
The survey is now closed and results are publicly available at
https://bit.ly/2JYlI7U
We'll analyze the collected data in order to improve oVirt thanks to your
feedback.

As a first step after reading the results I'd like to invite the 30 people
who replied that they're willing to contribute code to send an email to
de...@ovirt.org introducing themselves: we'll be more than happy to welcome
them and help them get started.

I would also like to invite the 17 people who replied that they'd like to help
organize oVirt events in their area to either get in touch with me or
introduce themselves to users@ovirt.org so we can discuss event
organization.

Last but not least, I'd like to invite the 38 people willing to contribute
documentation and the one person willing to contribute localization to
introduce themselves to de...@ovirt.org.

Thanks!
-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4N5DYCXY2S6ZAUI7BWD4DEKZ6JL6MSGN/


[ovirt-users] Re: HE - engine gluster volume - not mounted

2019-04-02 Thread Leo David
Thank you very much!
I have just installed a fresh new node and triggered the single-instance
hyperconverged setup. It seems it fails at the final steps of the
hosted-engine deployment:
[ INFO ] TASK [ovirt.hosted_engine_setup : Get required size]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove unsuitable storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Check storage domain free space]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
"[Cannot attach Storage. There is no active Host in the Data Center.]".
HTTP response code is 409.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
reason is \"Operation Failed\". Fault detail is \"[Cannot attach Storage.
There is no active Host in the Data Center.]\". HTTP response code is 409."}
Also,  the
ovirt-hosted-engine-setup-ansible-create_storage_domain-201932112413-xkq6nb.log
throws
the following:

2019-04-02 09:25:40,420+0100 DEBUG var changed: host "localhost" var
"otopi_storage_domain_details" type "" value: "{
"changed": false,
"exception": "Traceback (most recent call last):\n  File
\"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 664,
in main\nstorage_domains_module.post_create_check(sd_id)\n  File
\"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 526,
in post_create_check\nid=storage_domain.id,\n  File
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in
add\nreturn self._internal_add(storage_domain, headers, query, wait)\n
File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232,
in _internal_add\nreturn future.wait() if wait else future\n  File
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in
wait\nreturn self._code(response)\n  File
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in
callback\nself._check_fault(response)\n  File
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in
_check_fault\nself._raise_error(response, body)\n  File
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in
_raise_error\nraise error\nError: Fault reason is \"Operation Failed\".
Fault detail is \"[Cannot attach Storage. There is no active Host in the
Data Center.]\". HTTP response code is 409.\n",
"failed": true,
"msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Cannot
attach Storage. There is no active Host in the Data Center.]\". HTTP
response code is 409."
}"

I have used the ovirt-node-ng-installer-4.3.2-2019031908.el7.iso. So far,
I am unable to deploy oVirt single-node hyperconverged...
Any thoughts?



On Mon, Apr 1, 2019 at 9:46 PM Simone Tiraboschi 
wrote:

>
>
> On Mon, Apr 1, 2019 at 6:14 PM Leo David  wrote:
>
>> Thank you Simone.
>> I've decides to go for a new fresh install from iso, and i'll keep posted
>> if any troubles arise. But I am still trying to understand what are the
>> services that mount the lvms and volumes after configuration. There is
>> nothing related in fstab, so I assume there are a couple of .mount files
>> somewhere in the filesystem.
>> Im just trying to understand node's underneath workflow.
>>
>
> hosted-engine configuration is stored
> in /etc/ovirt-hosted-engine/hosted-engine.conf; ovirt-ha-broker will mount
> the hosted-engine storage domain according to that and so ovirt-ha-agent
> will be able to start the engine VM.
> Everything else is just in the engine DB.
>
>
>>
>> On Mon, Apr 1, 2019, 10:16 Simone Tiraboschi  wrote:
>>
>>> Hi,
>>> to understand what's failing I'd suggest to start attaching setup logs.
>>>
>>> On Sun, Mar 31, 2019 at 5:06 PM Leo David  wrote:
>>>
 Hello Everyone,
 Using 4.3.2 installation, and after running through HyperConverged
 Setup,  at the last stage it fails. It seems that the previously created
 "engine" volume is not mounted under "/rhev" path, therefore the setup
 cannot finish the deployment.
 Any ideea which are the services responsible of mounting the volumes on
 oVirt Node distribution ? I'm thinking that maybe this particularly one
 failed to start for some reason...
 Thank you very much !

 --
 Best regards, Leo David
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives:
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/PUXDAQHVNZWF4TIXZ3GIBZHSJ7IC2VHC/

>>>

-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to 

[ovirt-users] Re: Ansible hosted-engine deploy still doesnt support manually defined ovirtmgmt?

2019-04-02 Thread Simone Tiraboschi
TASK [ovirt.hosted_engine_setup : Activate storage domain]
**
...
Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP
response code is 400.

usually means that the engine failed to activate that storage domain;
unfortunately engine error messages are not always that clear (see
https://bugzilla.redhat.com/1554922), but this is often
due to the fact that the NFS share or the iSCSI LUN or whatever you used wasn't
really clean.
Are you manually cleaning it between one attempt and the next one?
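
A hedged sketch of what such cleaning could look like for an NFS export (server
name and export path are placeholders; only do this if the export holds nothing
you need):

  mount -t nfs storage.example.com:/exports/hosted_engine /mnt/he
  rm -rf /mnt/he/*
  umount /mnt/he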

On Tue, Apr 2, 2019 at 10:50 AM Callum Smith  wrote:

> Dear Simone,
>
> With no changes, we're now seeing this baffling error:
>
> TASK [ovirt.hosted_engine_setup : Parse OVF]
> 
> task path:
> /etc/ansible/playbook/ovirt-ansible-hosted-engine-setup/tasks/create_storage_domain.yml:120
>  ESTABLISH SSH CONNECTION FOR USER: root
>  SSH: EXEC ssh -C -o ControlMaster=auto
> -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o
> PreferredAuthentications=
> gssapi-with-mic,gssapi-keyex,hostbased,publickey -o
> PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o
> ControlPath=/etc/ansible/.ansible/cp/2c1e73
> 363c virthyp04.virt.in.bmrc.ox.ac.uk '/bin/sh -c '"'"'echo ~root && sleep
> 0'"'"''
>  (0, '/root\n', '')
>  ESTABLISH SSH CONNECTION FOR USER: root
>  SSH: EXEC ssh -C -o ControlMaster=auto
> -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o
> PreferredAuthentications=
> gssapi-with-mic,gssapi-keyex,hostbased,publickey -o
> PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o
> ControlPath=/etc/ansible/.ansible/cp/2c1e73
> 363c virthyp04.virt.in.bmrc.ox.ac.uk '/bin/sh -c '"'"'( umask 77 && mkdir
> -p "` echo /root/.ansible/tmp/ansible-tmp-1553937522.31-129798476242320 `"
> && echo a
> nsible-tmp-1553937522.31-129798476242320="` echo
> /root/.ansible/tmp/ansible-tmp-1553937522.31-129798476242320 `" ) && sleep
> 0'"'"''
>  (0,
> 'ansible-tmp-1553937522.31-129798476242320=/root/.ansible/tmp/ansible-tmp-1553937522.31-129798476242320\n',
> '')
> Using module file /opt/ansible/lib/ansible/modules/files/xml.py
>  PUT
> /etc/ansible/.ansible/tmp/ansible-local-32213KmUe6/tmp8wMU8o TO
> /root/.ansible/tmp/ansible-tmp-1553937522.31-12979847624
> 2320/AnsiballZ_xml.py
>  SSH: EXEC sftp -b - -C -o
> ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no
> -o PreferredAuthentica
> tions=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o
> PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o
> ControlPath=/etc/ansible/.ansible/cp/
> 2c1e73363c '[virthyp04.virt.in.bmrc.ox.ac.uk]'
>  (0, 'sftp> put
> /etc/ansible/.ansible/tmp/ansible-local-32213KmUe6/tmp8wMU8o
> /root/.ansible/tmp/ansible-tmp-1553937522.31-129
> 798476242320/AnsiballZ_xml.py\n', '')
>  ESTABLISH SSH CONNECTION FOR USER: root
>  SSH: EXEC ssh -C -o ControlMaster=auto
> -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o
> PreferredAuthentications=
> gssapi-with-mic,gssapi-keyex,hostbased,publickey -o
> PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o
> ControlPath=/etc/ansible/.ansible/cp/2c1e73
> 363c virthyp04.virt.in.bmrc.ox.ac.uk '/bin/sh -c '"'"'chmod u+x
> /root/.ansible/tmp/ansible-tmp-1553937522.31-129798476242320/
> /root/.ansible/tmp/ansible-tmp-1
> 553937522.31-129798476242320/AnsiballZ_xml.py && sleep 0'"'"''
>  (0, '', '')
>  ESTABLISH SSH CONNECTION FOR USER: root
>  SSH: EXEC ssh -C -o ControlMaster=auto
> -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o
> PreferredAuthentications=
> gssapi-with-mic,gssapi-keyex,hostbased,publickey -o
> PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o
> ControlPath=/etc/ansible/.ansible/cp/2c1e73
> 363c -tt virthyp04.virt.in.bmrc.ox.ac.uk '/bin/sh -c '"'"'/usr/bin/python
> /root/.ansible/tmp/ansible-tmp-1553937522.31-129798476242320/AnsiballZ_xml.py
> && sle
> ep 0'"'"''
>  (0, '\r\n{"count": 1, "matches":
> [{"Disk": {"{http://schemas.dmtf.org/ovf/envelope/1/}wipe-after-delete":
> "false", "{http://
> schemas.dmtf.org/ovf/envelope/1/}format": "
> http://www.vmware.com/specifications/vmdk.html#sparse;, "{
> http://schemas.dmtf.org/ovf/envelope/1/}vm_snapshot_id":
> "5f2be758-82d7-4c07-a220-9060e782dc7a", "{
> http://schemas.dmtf.org/ovf/envelope/1/}parentRef": "", "{
> http://schemas.dmtf.org/ovf/envelope/1/}fileRef": "6f76686
> b-199c-4cb3-bbbe-86fc34365745/72bc3948-5d8d-4877-bac8-7db4995045b5", "{
> http://schemas.dmtf.org/ovf/envelope/1/}actual_size": "51", "{
> http://schemas.dmtf.org/o
> vf/envelope/1/}volume-format": "COW", "{
> http://schemas.dmtf.org/ovf/envelope/1/}boot": "true", "{
> http://schemas.dmtf.org/ovf/envelope/1/}size": "51", "{http:/
> /schemas.dmtf.org/ovf/envelope/1/}volume-type": "Sparse", "{
> http://schemas.dmtf.org/ovf/envelope/1/}disk-type": "System", "{
> http://schemas.dmtf.org/ovf/envelo
> pe/1/}diskId": 

[ovirt-users] Re: HE - engine gluster volume - not mounted

2019-04-02 Thread Leo David
Hi,
I have just hit "Redeploy" and now the volume seems to be mounted:

Filesystem                                              Type            Size  Used  Avail Use% Mounted on
/dev/mapper/onn-ovirt--node--ng--4.3.2--0.20190319.0+1  ext4             57G  3.0G    51G   6% /
devtmpfs                                                devtmpfs         48G     0    48G   0% /dev
tmpfs                                                   tmpfs            48G  4.0K    48G   1% /dev/shm
tmpfs                                                   tmpfs            48G   34M    48G   1% /run
tmpfs                                                   tmpfs            48G     0    48G   0% /sys/fs/cgroup
/dev/sda1                                               ext4            976M  183M   726M  21% /boot
/dev/mapper/onn-var                                     ext4             15G  4.4G   9.5G  32% /var
/dev/mapper/onn-tmp                                     ext4            976M  3.2M   906M   1% /tmp
/dev/mapper/onn-var_log                                 ext4             17G   56M    16G   1% /var/log
/dev/mapper/onn-var_log_audit                           ext4            2.0G  8.7M   1.8G   1% /var/log/audit
/dev/mapper/onn-home                                    ext4            976M  2.6M   907M   1% /home
/dev/mapper/onn-var_crash                               ext4            9.8G   37M   9.2G   1% /var/crash
tmpfs                                                   tmpfs           9.5G     0   9.5G   0% /run/user/0
/dev/mapper/gluster_vg_sdb-gluster_lv_engine            xfs             100G   35M   100G   1% /gluster_bricks/engine
c6100-ch3-node1-gluster.internal.lab:/engine            fuse.glusterfs  100G  1.1G    99G   2% /rhev/data-center/mnt/glusterSD/c6100-ch3-node1-gluster.internal.lab:_engine

[root@c6100-ch3-node1 ovirt-hosted-engine-setup]# gluster v status
Status of volume: engine
Gluster process                                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------------------------------------
Brick c6100-ch3-node1-gluster.internal.lab:/gluster_bricks/engine/engine   49152     0          Y       25397

Task Status of Volume engine
------------------------------------------------------------------------------------------------------------
There are no active volume tasks

The problem is that the deployment is still not finishing; now the error is:

[ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[]".
HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code
is 400."}

I just do not understand anymore...



On Tue, Apr 2, 2019 at 12:11 PM Gobinda Das  wrote:

> Hi Leo,
>  Can you please paste "df -Th" and "gluster v status" out put ?
> Wanted to make sure engine mounted and volumes and bricks are up.
> What does vdsm log say?
>
> On Tue, Apr 2, 2019 at 2:06 PM Leo David  wrote:
>
>> Thank you very much !
>> I have just installed a new fresh node,   and triggered the single
>> instance hyperconverged setup. It seems it fails at the hosted engine final
>> steps of deployment:
>>  INFO ] TASK [ovirt.hosted_engine_setup : Get required size]
>> [ INFO ] ok: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Remove unsuitable storage
>> domain]
>> [ INFO ] skipping: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Check storage domain free
>> space]
>> [ INFO ] skipping: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
>> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
>> "[Cannot attach Storage. There is no active Host in the Data Center.]".
>> HTTP response code is 409.
>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
>> reason is \"Operation Failed\". Fault detail is \"[Cannot attach Storage.
>> There is no active Host in the Data Center.]\". HTTP response code is 409."}
>> Also,  the
>> ovirt-hosted-engine-setup-ansible-create_storage_domain-201932112413-xkq6nb.log
>>  throws
>> the following:
>>
>> 2019-04-02 09:25:40,420+0100 DEBUG var changed: host "localhost" var
>> "otopi_storage_domain_details" type "" value: "{
>> "changed": false,
>> "exception": "Traceback (most recent call last):\n  File
>> \"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 664,
>> in main\nstorage_domains_module.post_create_check(sd_id)\n  File
>> \"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 526,
>> in post_create_check\nid=storage_domain.id,\n  File
>> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in
>> add\nreturn self._internal_add(storage_domain, headers, query, wait)\n
>> File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232,
>> in _internal_add\nreturn future.wait() if wait else future\n  File
>> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in
>> wait\nreturn self._code(response)\n  File
>> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in
>> 

[ovirt-users] Strange storage data center failure

2019-04-02 Thread Fabrice Bacchella
I have a storage data center that I can't use. It's a local one.

When I look at vdsm.log:
2019-04-02 10:55:48,336+0200 INFO  (jsonrpc/2) [vdsm.api] FINISH 
connectStoragePool error=Cannot find master domain: 
u'spUUID=063d1217-6194-48a0-943e-3d873f2147de, 
msdUUID=49b1bd15-486a-4064-878e-8030c8108e09' from=:::X,59590, 
task_id=a56a5869-a219-4659-baa3-04f673b2ad55 (api:50)
2019-04-02 10:55:48,336+0200 ERROR (jsonrpc/2) [storage.TaskManager.Task] 
(Task='a56a5869-a219-4659-baa3-04f673b2ad55') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in 
_run
return fn(*args, **kargs)
  File "", line 2, in connectStoragePool
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1035, in 
connectStoragePool
spUUID, hostID, msdUUID, masterVersion, domainsMap)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1097, in 
_connectStoragePool
res = pool.connect(hostID, msdUUID, masterVersion)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 700, in 
connect
self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1274, in 
__rebuild
self.setMasterDomain(msdUUID, masterVersion)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1495, in 
setMasterDomain
raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
StoragePoolMasterNotFound: Cannot find master domain: 
u'spUUID=063d1217-6194-48a0-943e-3d873f2147de, 
msdUUID=49b1bd15-486a-4064-878e-8030c8108e09'
2019-04-02 10:55:48,336+0200 INFO  (jsonrpc/2) [storage.TaskManager.Task] 
(Task='a56a5869-a219-4659-baa3-04f673b2ad55') aborting: Task is aborted: 
"Cannot find master domain: u'spUUID=063d1217-6194-48a0-943e-3d873f2147de, 
msdUUID=49b1bd15-486a-4064-878e-8030c8108e09'" - code 304 (task:1181)

2019-04-02 11:44:50,862+0200 INFO  (jsonrpc/0) [vdsm.api] FINISH getSpmStatus 
error=Unknown pool id, pool not connected: 
(u'063d1217-6194-48a0-943e-3d873f2147de',) from=:::10.83.16.34,46546, 
task_id=cfb1c871-b1d4-4b1a-b2a5-f91ddfaba
54b (api:50)
2019-04-02 11:44:50,862+0200 ERROR (jsonrpc/0) [storage.TaskManager.Task] 
(Task='cfb1c871-b1d4-4b1a-b2a5-f91ddfaba54b') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in 
_run
return fn(*args, **kargs)
  File "", line 2, in getSpmStatus
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 634, in 
getSpmStatus
pool = self.getPool(spUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 350, in 
getPool
raise se.StoragePoolUnknown(spUUID)
StoragePoolUnknown: Unknown pool id, pool not connected: 
(u'063d1217-6194-48a0-943e-3d873f2147de',)


063d1217-6194-48a0-943e-3d873f2147de is indeed the datacenter id and 
49b1bd15-486a-4064-878e-8030c8108e09 the storage domain:

(the storage domain XML returned by the engine REST API was stripped by the
list archive; what survives shows storage type "fcp" and storage format "v4")

On engine.log, I'm also getting:
2019-04-02 11:43:57,531+02 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-12) [] Command 
'org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand' 
return value '
TaskStatusListReturn:{status='Status [code=654, message=Not SPM: ()]'}
'

lsblk shows that the requested volumes are here:

lsblk 
NAME                                                                                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
cciss!c0d1                                                                           104:16   0  1.9T  0 disk
|-49b1bd15--486a--4064--878e--8030c8108e09-metadata                                  253:0    0  512M  0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-outbox                                    253:1    0  128M  0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-xleases                                   253:2    0    1G  0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-leases                                    253:3    0    2G  0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-ids                                       253:4    0  128M  0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-inbox                                     253:5    0  128M  0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-master                                    253:6    0    1G  0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-6225ddc3--b600--49ef--8de4--6e53bf4cad1f  253:7    0  128M  0 lvm
`-49b1bd15--486a--4064--878e--8030c8108e09-bdac3a3a--8633--41bf--921d--db2cf31f5d1c  253:8    0  128M  0 lvm

There is no useful data on them, so I don't mind destroying everything.
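
For anyone debugging a similar StoragePoolMasterNotFound error, a read-only
sketch for inspecting what vdsm sees on disk: the VG of a block storage domain
is named after the domain UUID, and domain/pool metadata is kept in LVM tags
(assuming the LVM layout shown above):

  vgs -o vg_name,vg_tags 49b1bd15-486a-4064-878e-8030c8108e09
  lvs -o lv_name,lv_size,lv_tags 49b1bd15-486a-4064-878e-8030c8108e09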

[ovirt-users] Re: oVirt Survey 2019 results

2019-04-02 Thread Sahina Bose
On Tue, Apr 2, 2019 at 12:07 PM Sandro Bonazzola 
wrote:

> Thanks to the 143 participants to oVirt Survey 2019!
> The survey is now closed and results are publicly available at
> https://bit.ly/2JYlI7U
> We'll analyze collected data in order to improve oVirt thanks to your
> feedback.
>
> As a first step after reading the results I'd like to invite the 30
> persons who replied they're willing to contribute code to send an email to
> de...@ovirt.org introducing themselves: we'll be more than happy to
> welcome them and helping them getting started.
>
> I would also like to invite the 17 people who replied they'd like to help
> organizing oVirt events in their area to either get in touch with me or
> introduce themselves to users@ovirt.org so we can discuss about events
> organization.
>
> Last but not least I'd like to invite the 38 people willing to contribute
> documentation and the one willing to contribute localization to introduce
> themselves to de...@ovirt.org.
>

Thank you all for the feedback.
I was looking at the feedback specific to Gluster. While it's disheartening
to see "Gluster weakest link in oVirt", I can understand where the feedback
and frustration are coming from.

Over the past month and in this survey, the common themes that have come up are:

- Ensure smoother upgrades for hyperconverged deployments with GlusterFS.
The oVirt 4.3 release with the upgrade to gluster 5.3 caused disruption for
many users and we want to ensure this does not happen again. To this end, we
are working on adding upgrade tests to OST-based CI. Contributions are welcome.

- Improve performance on gluster storage domains. While we have seen promising
results with the gluster 6 release, this is an ongoing effort. Please help by
giving input on the specific workloads and use cases that you run, gathering
data and running tests.

- Deployment issues. We have worked to improve the deployment flow in 4.3 by
adding pre-checks and changing to a gluster-ansible role based deployment. We
would love to hear about the specific issues you're facing around this -
please raise bugs if you haven't already
(https://bugzilla.redhat.com/enter_bug.cgi?product=cockpit-ovirt).



> Thanks!
> --
>
> SANDRO BONAZZOLA
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4N5DYCXY2S6ZAUI7BWD4DEKZ6JL6MSGN/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BF4TKC2MIROT7QZBHZQITCAHQMQNGZ3Q/


[ovirt-users] Re: oVirt Survey 2019 results

2019-04-02 Thread Roni Eliezer
Regarding the IPv6 question:
"If you are using IPv6 for hosts, which kind of addressing you are using?
Dynamic or Static?"

I would add 'AutoConf' as well

Thx
Roni


On Tue, Apr 2, 2019 at 10:30 AM Dan Kenigsberg  wrote:

> On Tue, Apr 2, 2019 at 9:36 AM Sandro Bonazzola 
> wrote:
> >
> > Thanks to the 143 participants to oVirt Survey 2019!
> > The survey is now closed and results are publicly available at
> https://bit.ly/2JYlI7U
> > We'll analyze collected data in order to improve oVirt thanks to your
> feedback.
> >
> > As a first step after reading the results I'd like to invite the 30
> persons who replied they're willing to contribute code to send an email to
> de...@ovirt.org introducing themselves: we'll be more than happy to
> welcome them and helping them getting started.
> >
> > I would also like to invite the 17 people who replied they'd like to
> help organizing oVirt events in their area to either get in touch with me
> or introduce themselves to users@ovirt.org so we can discuss about events
> organization.
> >
> > Last but not least I'd like to invite the 38 people willing to
> contribute documentation and the one willing to contribute localization to
> introduce themselves to de...@ovirt.org.
> >
> > Thanks!
>
> and thank you, Sandro, for shepherding this survey.
>
> It has, as usual, very interesting results. I am happily surprised to
> see how many are using OvS, OVN and IPv6. I am less happy (but
> unsurprised) to see that nobody responded that they were using
> Fedora-based oVirt.
>
> I know the survey is anonymous, but I would love to reach out and
> obtain more information about the painful use case of whomever
> answered
> What is the most challenging flow in oVirt? with "Working with networks."
> I would love to hear more about your (and others'!) challenges, and
> see how we developers can ease them.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LLP7KDV7WFYFQRXM24HYAMYPNSSI56V6/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UM7YM6OTAAKQXPV772UXQWKNVQA5YCXO/


[ovirt-users] Re: 4.2 / 4.3 : Moving the hosted-engine to another storage

2019-04-02 Thread Andreas Elvers

> No need for that; but you will required to redeploy them from the new
> engine to update their configuration.

So I keep the old engine running while deploying the new engine on a different
storage domain? Curious.

I don't understand what "redeploy them [the old engine hosts] from 
the new engine to update their configuration" means. 

In fact, the current engine incarnation is running on an old cluster that
has no access to the new storage (that cluster is to go away).
The engine is also managing the new cluster to which we want to move the 
engine. 
The engine is the only piece that keeps us from shutting down the old cluster. 
That's the motivation for restoring the engine on the new cluster.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WVOFX4ZXDF4PKR5JHW4BKR45ZQRC56CS/


[ovirt-users] Re: HE - engine gluster volume - not mounted

2019-04-02 Thread Leo David
And here are the last lines of the ansible_create_storage_domain log:

2019-04-02 10:53:49,139+0100 DEBUG var changed: host "localhost" var
"otopi_storage_domain_details" type "" value: "{
"changed": false,
"exception": "Traceback (most recent call last):\n  File
\"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 664,
in main\nstorage_domains_module.post_create_check(sd_id)\n  File
\"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 526,
in post_create_check\nid=storage_domain.id,\n  File
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in
add\nreturn self._internal_add(storage_domain, headers, query, wait)\n
File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232,
in _internal_add\nreturn future.wait() if wait else future\n  File
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in
wait\nreturn self._code(response)\n  File
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in
callback\nself._check_fault(response)\n  File
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in
_check_fault\nself._raise_error(response, body)\n  File
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in
_raise_error\nraise error\nError: Fault reason is \"Operation Failed\".
Fault detail is \"[]\". HTTP response code is 400.\n",
"failed": true,
"msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\".
HTTP response code is 400."
}"
2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var
"ansible_play_hosts" type "" value: "[]"
2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var
"play_hosts" type "" value: "[]"
2019-04-02 10:53:49,142+0100 DEBUG var changed: host "localhost" var
"ansible_play_batch" type "" value: "[]"
2019-04-02 10:53:49,142+0100 ERROR ansible failed {'status': 'FAILED',
'ansible_type': 'task', 'ansible_task': u'Activate storage domain',
'ansible_result': u'type: \nstr: {\'_ansible_parsed\': True,
u\'exception\': u\'Traceback (most recent call last):\\n  File
"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py", line 664,
in main\\nstorage_domains_module.post_create_check(sd_id)\\n  File
"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py", line 526',
'task_duration': 9, 'ansible_host': u'localhost', 'ansible_playbook':
u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'}
2019-04-02 10:53:49,143+0100 DEBUG ansible on_any args
 kwargs
ignore_errors:None
2019-04-02 10:53:49,148+0100 INFO ansible stats {
"ansible_playbook":
"/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
"ansible_playbook_duration": "01:15 Minutes",
"ansible_result": "type: \nstr: {u'localhost':
{'unreachable': 0, 'skipped': 6, 'ok': 23, 'changed': 0, 'failures': 1}}",
"ansible_type": "finish",
"status": "FAILED"
}
2019-04-02 10:53:49,149+0100 INFO SUMMARY:
DurationTask Name

[ < 1 sec ] Execute just a specific set of steps
[  00:02  ] Force facts gathering
[  00:02  ] Check local VM dir stat
[  00:02  ] Obtain SSO token using username/password credentials
[  00:02  ] Fetch host facts
[  00:01  ] Fetch cluster ID
[  00:02  ] Fetch cluster facts
[  00:02  ] Fetch Datacenter facts
[  00:01  ] Fetch Datacenter ID
[  00:01  ] Fetch Datacenter name
[  00:02  ] Add glusterfs storage domain
[  00:02  ] Get storage domain details
[  00:02  ] Find the appliance OVF
[  00:02  ] Parse OVF
[  00:02  ] Get required size
[ FAILED  ] Activate storage domain

Any idea on how to escalate this issue?
It just does not make sense to not be able to install a fresh node from
scratch...

Have a nice day!

Leo


On Tue, Apr 2, 2019 at 12:11 PM Gobinda Das  wrote:

> Hi Leo,
>  Can you please paste "df -Th" and "gluster v status" out put ?
> Wanted to make sure engine mounted and volumes and bricks are up.
> What does vdsm log say?
>
> On Tue, Apr 2, 2019 at 2:06 PM Leo David  wrote:
>
>> Thank you very much !
>> I have just installed a new fresh node,   and triggered the single
>> instance hyperconverged setup. It seems it fails at the hosted engine final
>> steps of deployment:
>>  INFO ] TASK [ovirt.hosted_engine_setup : Get required size]
>> [ INFO ] ok: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Remove unsuitable storage
>> domain]
>> [ INFO ] skipping: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Check storage domain free
>> space]
>> [ INFO ] skipping: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
>> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
>> "[Cannot attach Storage. There is no active Host in the Data Center.]".
>> HTTP response code is 409.
>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
>> reason is \"Operation 

[ovirt-users] Vm status not update after update

2019-04-02 Thread Marcelo Leandro
Hi, after updating my hosts to oVirt Node 4.3.2 with vdsm version
vdsm-4.30.11-1.el7,
my VMs' status does not update. If I do anything with a VM, like shutdown or
migrate, its status does not change; only restarting vdsm on the host the VM
is running on helps.

vdsmd status:

ERROR Internal server error
   Traceback (most
recent call last):
 File
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in
_handle_request..

Thanks,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/23ERLSYUQKPXIAAPDZ6KAOBTHW7DMCSA/


[ovirt-users] Re: Ansible hosted-engine deploy still doesnt support manually defined ovirtmgmt?

2019-04-02 Thread Simone Tiraboschi
On Tue, Apr 2, 2019 at 11:18 AM Callum Smith  wrote:

> No, the NFS is full of artefacts - should i be rm -rf the whole thing
> every time?
>

Yes, right.


>
> Regards,
> Callum
>
> --
>
> Callum Smith
> Research Computing Core
> Wellcome Trust Centre for Human Genetics
> University of Oxford
> e. cal...@well.ox.ac.uk
>
> On 2 Apr 2019, at 10:09, Simone Tiraboschi  wrote:
>
> TASK [ovirt.hosted_engine_setup : Activate storage domain]
> **
> ...
> Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP
> response code is 400.
>
> usually means that the engine failed to activate that storage domain;
> unfortunately engine error messages are not always that clear (see
> https://bugzilla.redhat.com/1554922
>  ) but this is often
> due to fact the the NFS share or the iSCSI lun or whatever you used wasn't
> really clean.
> Are you manually cleaning it between one attempt and the next one?
>
> On Tue, Apr 2, 2019 at 10:50 AM Callum Smith  wrote:
>
>> Dear Simone,
>>
>> With no changes, we're now seeing this baffling error:
>>
>> TASK [ovirt.hosted_engine_setup : Parse OVF]
>> 
>> task path:
>> /etc/ansible/playbook/ovirt-ansible-hosted-engine-setup/tasks/create_storage_domain.yml:120
>>  ESTABLISH SSH CONNECTION FOR USER: root
>>  SSH: EXEC ssh -C -o ControlMaster=auto
>> -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o
>> PreferredAuthentications=
>> gssapi-with-mic,gssapi-keyex,hostbased,publickey -o
>> PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o
>> ControlPath=/etc/ansible/.ansible/cp/2c1e73
>> 363c virthyp04.virt.in.bmrc.ox.ac.uk '/bin/sh -c '"'"'echo ~root &&
>> sleep 0'"'"''
>>  (0, '/root\n', '')
>>  ESTABLISH SSH CONNECTION FOR USER: root
>>  SSH: EXEC ssh -C -o ControlMaster=auto
>> -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o
>> PreferredAuthentications=
>> gssapi-with-mic,gssapi-keyex,hostbased,publickey -o
>> PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o
>> ControlPath=/etc/ansible/.ansible/cp/2c1e73
>> 363c virthyp04.virt.in.bmrc.ox.ac.uk '/bin/sh -c '"'"'( umask 77 &&
>> mkdir -p "` echo
>> /root/.ansible/tmp/ansible-tmp-1553937522.31-129798476242320 `" && echo a
>> nsible-tmp-1553937522.31-129798476242320="` echo
>> /root/.ansible/tmp/ansible-tmp-1553937522.31-129798476242320 `" ) && sleep
>> 0'"'"''
>>  (0,
>> 'ansible-tmp-1553937522.31-129798476242320=/root/.ansible/tmp/ansible-tmp-1553937522.31-129798476242320\n',
>> '')
>> Using module file /opt/ansible/lib/ansible/modules/files/xml.py
>>  PUT
>> /etc/ansible/.ansible/tmp/ansible-local-32213KmUe6/tmp8wMU8o TO
>> /root/.ansible/tmp/ansible-tmp-1553937522.31-12979847624
>> 2320/AnsiballZ_xml.py
>>  SSH: EXEC sftp -b - -C -o
>> ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no
>> -o PreferredAuthentica
>> tions=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o
>> PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o
>> ControlPath=/etc/ansible/.ansible/cp/
>> 2c1e73363c '[virthyp04.virt.in.bmrc.ox.ac.uk]'
>>  (0, 'sftp> put
>> /etc/ansible/.ansible/tmp/ansible-local-32213KmUe6/tmp8wMU8o
>> /root/.ansible/tmp/ansible-tmp-1553937522.31-129
>> 798476242320/AnsiballZ_xml.py\n', '')
>>  ESTABLISH SSH CONNECTION FOR USER: root
>>  SSH: EXEC ssh -C -o ControlMaster=auto
>> -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o
>> PreferredAuthentications=
>> gssapi-with-mic,gssapi-keyex,hostbased,publickey -o
>> PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o
>> ControlPath=/etc/ansible/.ansible/cp/2c1e73
>> 363c virthyp04.virt.in.bmrc.ox.ac.uk '/bin/sh -c '"'"'chmod u+x
>> /root/.ansible/tmp/ansible-tmp-1553937522.31-129798476242320/
>> /root/.ansible/tmp/ansible-tmp-1
>> 553937522.31-129798476242320/AnsiballZ_xml.py && sleep 0'"'"''
>>  (0, '', '')
>>  ESTABLISH SSH CONNECTION FOR USER: root
>>  SSH: EXEC ssh -C -o ControlMaster=auto
>> -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o
>> PreferredAuthentications=
>> gssapi-with-mic,gssapi-keyex,hostbased,publickey -o
>> PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o
>> ControlPath=/etc/ansible/.ansible/cp/2c1e73
>> 363c -tt virthyp04.virt.in.bmrc.ox.ac.uk '/bin/sh -c
>> '"'"'/usr/bin/python
>> /root/.ansible/tmp/ansible-tmp-1553937522.31-129798476242320/AnsiballZ_xml.py
>> && sle
>> ep 0'"'"''
>>  (0, '\r\n{"count": 1, "matches":
>> [{"Disk": {"{http://schemas.dmtf.org/ovf/envelope/1/}wipe-after-delete":
>> "false", "{http://
>> schemas.dmtf.org/ovf/envelope/1/}format": "
>> http://www.vmware.com/specifications/vmdk.html#sparse;, "{
>> http://schemas.dmtf.org/ovf/envelope/1/}vm_snapshot_id":
>> "5f2be758-82d7-4c07-a220-9060e782dc7a", "{
>> http://schemas.dmtf.org/ovf/envelope/1/}parentRef": "", "{
>> http://schemas.dmtf.org/ovf/envelope/1/}fileRef": "6f76686
>> 

[ovirt-users] Re: HE - engine gluster volume - not mounted

2019-04-02 Thread Sahina Bose
Is it possible you have not cleared the gluster volume between installs?

What's the corresponding error in vdsm.log?
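
In case it helps, a hedged sketch of clearing the volume contents between
attempts (the server:volume name is taken from earlier in this thread; only do
this if the volume holds nothing you need):

  mount -t glusterfs c6100-ch3-node1-gluster.internal.lab:/engine /mnt/engine
  rm -rf /mnt/engine/*
  umount /mnt/engine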


On Tue, Apr 2, 2019 at 4:07 PM Leo David  wrote:
>
> And there it is the last lines on the ansible_create_storage_domain log:
>
> 2019-04-02 10:53:49,139+0100 DEBUG var changed: host "localhost" var 
> "otopi_storage_domain_details" type "" value: "{
> "changed": false,
> "exception": "Traceback (most recent call last):\n  File 
> \"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 664, 
> in main\nstorage_domains_module.post_create_check(sd_id)\n  File 
> \"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 526, 
> in post_create_check\nid=storage_domain.id,\n  File 
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in 
> add\nreturn self._internal_add(storage_domain, headers, query, wait)\n  
> File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232, 
> in _internal_add\nreturn future.wait() if wait else future\n  File 
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in 
> wait\nreturn self._code(response)\n  File 
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in 
> callback\nself._check_fault(response)\n  File 
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in 
> _check_fault\nself._raise_error(response, body)\n  File 
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in 
> _raise_error\nraise error\nError: Fault reason is \"Operation Failed\". 
> Fault detail is \"[]\". HTTP response code is 400.\n",
> "failed": true,
> "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". 
> HTTP response code is 400."
> }"
> 2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var 
> "ansible_play_hosts" type "" value: "[]"
> 2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var 
> "play_hosts" type "" value: "[]"
> 2019-04-02 10:53:49,142+0100 DEBUG var changed: host "localhost" var 
> "ansible_play_batch" type "" value: "[]"
> 2019-04-02 10:53:49,142+0100 ERROR ansible failed {'status': 'FAILED', 
> 'ansible_type': 'task', 'ansible_task': u'Activate storage domain', 
> 'ansible_result': u'type: \nstr: {\'_ansible_parsed\': True, 
> u\'exception\': u\'Traceback (most recent call last):\\n  File 
> "/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py", line 664, in 
> main\\nstorage_domains_module.post_create_check(sd_id)\\n  File 
> "/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py", line 526', 
> 'task_duration': 9, 'ansible_host': u'localhost', 'ansible_playbook': 
> u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'}
> 2019-04-02 10:53:49,143+0100 DEBUG ansible on_any args 
>  kwargs 
> ignore_errors:None
> 2019-04-02 10:53:49,148+0100 INFO ansible stats {
> "ansible_playbook": 
> "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
> "ansible_playbook_duration": "01:15 Minutes",
> "ansible_result": "type: \nstr: {u'localhost': 
> {'unreachable': 0, 'skipped': 6, 'ok': 23, 'changed': 0, 'failures': 1}}",
> "ansible_type": "finish",
> "status": "FAILED"
> }
> 2019-04-02 10:53:49,149+0100 INFO SUMMARY:
> DurationTask Name
> 
> [ < 1 sec ] Execute just a specific set of steps
> [  00:02  ] Force facts gathering
> [  00:02  ] Check local VM dir stat
> [  00:02  ] Obtain SSO token using username/password credentials
> [  00:02  ] Fetch host facts
> [  00:01  ] Fetch cluster ID
> [  00:02  ] Fetch cluster facts
> [  00:02  ] Fetch Datacenter facts
> [  00:01  ] Fetch Datacenter ID
> [  00:01  ] Fetch Datacenter name
> [  00:02  ] Add glusterfs storage domain
> [  00:02  ] Get storage domain details
> [  00:02  ] Find the appliance OVF
> [  00:02  ] Parse OVF
> [  00:02  ] Get required size
> [ FAILED  ] Activate storage domain
>
> Any ideea on how to escalate this issue ?
> It just does not make sense to not be able to install from scratch a fresh 
> node...
>
> Have a nice day  !
>
> Leo
>
>
> On Tue, Apr 2, 2019 at 12:11 PM Gobinda Das  wrote:
>>
>> Hi Leo,
>>  Can you please paste the output of "df -Th" and "gluster v status"?
>> I want to make sure the engine volume is mounted and that the volumes and bricks are up.
>> What does the vdsm log say?
>>
>> On Tue, Apr 2, 2019 at 2:06 PM Leo David  wrote:
>>>
>>> Thank you very much !
>>> I have just installed a fresh node and triggered the single-instance
>>> hyperconverged setup. It seems to fail at the final steps of the hosted
>>> engine deployment:
>>> [ INFO ] TASK [ovirt.hosted_engine_setup : Get required size]
>>> [ INFO ] ok: [localhost]
>>> [ INFO ] TASK [ovirt.hosted_engine_setup : Remove unsuitable storage domain]
>>> [ INFO ] skipping: [localhost]
>>> [ INFO ] TASK [ovirt.hosted_engine_setup : Check storage domain free space]
>>> [ 

[ovirt-users] Re: Vm status not update after update

2019-04-02 Thread Strahil Nikolov
 I think I have already seen a solution in the mailing lists. Can you check and apply 
the fix mentioned there?
Best Regards,
Strahil Nikolov

On Tuesday, April 2, 2019 at 14:39:10 GMT+3, Marcelo Leandro 
 wrote:  
 
 Hi, after updating my hosts to oVirt Node 4.3.2 with vdsm version 
vdsm-4.30.11-1.el7, my VMs' status no longer updates. If I do anything with a 
VM, like shutdown or migrate, the status does not change; only restarting vdsm 
on the host where the VM is running updates it.
vdsmd status:

 ERROR Internal server error
     Traceback (most recent call last):
       File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in
_handle_request..

Thanks,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/23ERLSYUQKPXIAAPDZ6KAOBTHW7DMCSA/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VCUPKIGKHICIJDM2QZUHGEQVYVY5HY5E/


[ovirt-users] Re: trying to use Managed Block Storage in 4.3.2 with Ceph / Authentication Keys

2019-04-02 Thread Benny Zlotnik
Glad to hear it!


On Tue, Apr 2, 2019 at 3:53 PM Matthias Leopold
 wrote:
>
> No, I didn't...
> I wasn't used to using both "rbd_user" and "rbd_keyring_conf" (I don't
> use "rbd_keyring_conf" in standalone Cinder), nevermind
>
> After fixing that and dealing with the rbd feature issues I could
> proudly start my first VM with a cinderlib provisioned disk :-)
>
> Thanks for help!
> I'll keep posting my experiences concerning cinderlib to this list.
>
> Matthias
>
> Am 01.04.19 um 16:24 schrieb Benny Zlotnik:
> > Did you pass the rbd_user when creating the storage domain?
> >
> > On Mon, Apr 1, 2019 at 5:08 PM Matthias Leopold
> >  wrote:
> >>
> >>
> >> Am 01.04.19 um 13:17 schrieb Benny Zlotnik:
>  OK, /var/log/ovirt-engine/cinderlib/cinderlib.log says:
> 
>  2019-04-01 11:14:54,925 - cinder.volume.drivers.rbd - ERROR - Error
>  connecting to ceph cluster.
>  Traceback (most recent call last):
>   File 
>  "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
>  line 337, in _do_conn
> client.connect()
>   File "rados.pyx", line 885, in rados.Rados.connect
>  (/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.11/rpm/el7/BUILD/ceph-12.2.11/build/src/pybind/rados/pyrex/rados.c:9785)
>  OSError: [errno 95] error connecting to the cluster
>  2019-04-01 11:14:54,930 - root - ERROR - Failure occurred when trying to
>  run command 'storage_stats': Bad or unexpected response from the storage
>  volume backend API: Error connecting to ceph cluster.
> 
>  I don't really know what to do with that either.
>  BTW, the cinder version on engine host is "pike"
>  (openstack-cinder-11.2.0-1.el7.noarch)
> >>> Not sure if the version is related (I know it's been tested with
> >>> pike), but you can try and install the latest rocky (that's what I use
> >>> for development)
> >>
> >> I upgraded cinder on engine and hypervisors to rocky and installed
> >> missing "ceph-common" packages on hypervisors. I set "rbd_keyring_conf"
> >> and "rbd_ceph_conf" as indicated and got as far as adding a "Managed
> >> Block Storage" domain and creating a disk (which is also visible through
> >> "rbd ls"). I used a keyring that is only authorized for the pool I
> >> specified with "rbd_pool". When I try to start the VM it fails and I see
> >> the following in supervdsm.log on hypervisor:
> >>
> >> ManagedVolumeHelperFailed: Managed Volume Helper failed.: ('Error
> >> executing helper: Command [\'/usr/libexec/vdsm/managedvolume-helper\',
> >> \'attach\'] failed with rc=1 out=\'\' err=\'oslo.privsep.daemon: Running
> >> privsep helper: [\\\'sudo\\\', \\\'privsep-helper\\\',
> >> \\\'--privsep_context\\\', \\\'os_brick.privileged.default\\\',
> >> \\\'--privsep_sock_path\\\',
> >> \\\'/tmp/tmp5S8zZV/privsep.sock\\\']\\noslo.privsep.daemon: Spawned new
> >> privsep daemon via rootwrap\\noslo.privsep.daemon: privsep daemon
> >> starting\\noslo.privsep.daemon: privsep process running with uid/gid:
> >> 0/0\\noslo.privsep.daemon: privsep process running with capabilities
> >> (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none\\noslo.privsep.daemon:
> >> privsep daemon running as pid 15944\\nTraceback (most recent call
> >> last):\\n  File "/usr/libexec/vdsm/managedvolume-helper", line 154, in
> >> \\nsys.exit(main(sys.argv[1:]))\\n  File
> >> "/usr/libexec/vdsm/managedvolume-helper", line 77, in main\\n
> >> args.command(args)\\n  File "/usr/libexec/vdsm/managedvolume-helper",
> >> line 137, in attach\\nattachment =
> >> conn.connect_volume(conn_info[\\\'data\\\'])\\n  File
> >> "/usr/lib/python2.7/site-packages/vdsm/storage/nos_brick.py", line 96,
> >> in connect_volume\\nrun_as_root=True)\\n  File
> >> "/usr/lib/python2.7/site-packages/os_brick/executor.py", line 52, in
> >> _execute\\nresult = self.__execute(*args, **kwargs)\\n  File
> >> "/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line
> >> 169, in execute\\nreturn execute_root(*cmd, **kwargs)\\n  File
> >> "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line
> >> 207, in _wrap\\nreturn self.channel.remote_call(name, args,
> >> kwargs)\\n  File
> >> "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 202, in
> >> remote_call\\nraise
> >> exc_type(*result[2])\\noslo_concurrency.processutils.ProcessExecutionError:
> >> Unexpected error while running command.\\nCommand: rbd map
> >> volume-36f5eb75-329e-4bd2-88d0-6f0bfe5d1040 --pool ovirt-test --conf
> >> /tmp/brickrbd_RmBvxA --id None --mon_host xxx.xxx.216.45:6789 --mon_host
> >> xxx.xxx.216.54:6789 --mon_host xxx.xxx.216.55:6789\\nExit code:
> >> 22\\nStdout: u\\\'In some cases useful info is found in syslog - try
> >> "dmesg | tail".n\\\'\\nStderr: u"2019-04-01 15:27:30.743196
> >> 7fe0b4632d40 -1 auth: unable to find a keyring on
> >> 

[ovirt-users] Re: Vm status not update after update

2019-04-02 Thread Marcelo Leandro
Sorry, I can't find this.

On Tue, Apr 2, 2019 at 09:49, Strahil Nikolov 
wrote:

> I think I already met a solution in the mail lists. Can you check and
> apply the fix mentioned there ?
>
> Best Regards,
> Strahil Nikolov
>
> On Tuesday, April 2, 2019 at 14:39:10 GMT+3, Marcelo Leandro <
> marcelol...@gmail.com> wrote:
>
>
> Hi, After update my hosts to ovirt node 4.3.2 with vdsm  version 
> vdsm-4.30.11-1.el7
> my vms status not update, if I do anything with vm like shutdown, migrate
> this status not change , only a restart the vdsm the host that vm is runnig.
>
> vdmd status :
>
> ERROR Internal server error
>Traceback (most
> recent call last):
>  File
> "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in
> _handle_request..
>
> Thanks,
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/23ERLSYUQKPXIAAPDZ6KAOBTHW7DMCSA/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QVCTHCSDNEII2NMO76C7AU422TQQBMPW/


[ovirt-users] Fwd: Re: HE - engine gluster volume - not mounted

2019-04-02 Thread Leo David
-- Forwarded message -
From: Leo David 
Date: Tue, Apr 2, 2019, 15:10
Subject: Re: [ovirt-users] Re: HE - engine gluster volume - not mounted
To: Sahina Bose 


I have deleted everything in the engine gluster mount path, unmounted the
engine gluster volume (not the volume itself), and started the wizard
with "Use already configured storage". I pointed it to this gluster
volume; the volume gets mounted under the correct path, but the
installation still fails:

[ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[]".
HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code
is 400."}

On the node's vdsm.log I can continuously see:
2019-04-02 13:02:18,832+0100 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
call Host.getStats succeeded in 0.03 seconds (__init__:312)
2019-04-02 13:02:19,906+0100 INFO  (vmrecovery) [vdsm.api] START
getConnectedStoragePoolsList(options=None) from=internal,
task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:48)
2019-04-02 13:02:19,907+0100 INFO  (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:54)
2019-04-02 13:02:19,907+0100 INFO  (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:709)
2019-04-02 13:02:21,737+0100 INFO  (periodic/2) [vdsm.api] START
repoStats(domains=()) from=internal,
task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09 (api:48)
2019-04-02 13:02:21,738+0100 INFO  (periodic/2) [vdsm.api] FINISH repoStats
return={} from=internal, task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09
(api:54)

Should I perform an "engine-cleanup",  delete lvms from Cockpit and start
it all over ?
Did anyone successfully used this particular iso image
"ovirt-node-ng-installer-4.3.2-2019031908.el7.iso" for a single node
installation ?
Thank you !
Leo


On Tue, Apr 2, 2019 at 1:45 PM Sahina Bose  wrote:

> Is it possible you have not cleared the gluster volume between installs?
>
> What's the corresponding error in vdsm.log?
>
>
> On Tue, Apr 2, 2019 at 4:07 PM Leo David  wrote:
> >
> > And there it is the last lines on the ansible_create_storage_domain log:
> >
> > 2019-04-02 10:53:49,139+0100 DEBUG var changed: host "localhost" var
> "otopi_storage_domain_details" type "" value: "{
> > "changed": false,
> > "exception": "Traceback (most recent call last):\n  File
> \"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 664,
> in main\nstorage_domains_module.post_create_check(sd_id)\n  File
> \"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 526,
> in post_create_check\nid=storage_domain.id,\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in
> add\nreturn self._internal_add(storage_domain, headers, query, wait)\n
> File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232,
> in _internal_add\nreturn future.wait() if wait else future\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in
> wait\nreturn self._code(response)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in
> callback\nself._check_fault(response)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in
> _check_fault\nself._raise_error(response, body)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in
> _raise_error\nraise error\nError: Fault reason is \"Operation Failed\".
> Fault detail is \"[]\". HTTP response code is 400.\n",
> > "failed": true,
> > "msg": "Fault reason is \"Operation Failed\". Fault detail is
> \"[]\". HTTP response code is 400."
> > }"
> > 2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var
> "ansible_play_hosts" type "" value: "[]"
> > 2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var
> "play_hosts" type "" value: "[]"
> > 2019-04-02 10:53:49,142+0100 DEBUG var changed: host "localhost" var
> "ansible_play_batch" type "" value: "[]"
> > 2019-04-02 10:53:49,142+0100 ERROR ansible failed {'status': 'FAILED',
> 'ansible_type': 'task', 'ansible_task': u'Activate storage domain',
> 'ansible_result': u'type: \nstr: {\'_ansible_parsed\': True,
> u\'exception\': u\'Traceback (most recent call last):\\n  File
> "/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py", line 664,
> in main\\nstorage_domains_module.post_create_check(sd_id)\\n  File
> "/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py", line 526',
> 'task_duration': 9, 'ansible_host': u'localhost', 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'}
> > 2019-04-02 10:53:49,143+0100 DEBUG ansible on_any args
>  kwargs
> ignore_errors:None
> > 2019-04-02 10:53:49,148+0100 INFO 

[ovirt-users] Re: [Gluster-devel] oVirt Survey 2019 results

2019-04-02 Thread Atin Mukherjee
Thanks Sahina for including Gluster community mailing lists.

As Sahina already mentioned, we had a strong focus on the upgrade testing path
before releasing glusterfs-6. We conducted a test day and, along with the
functional pieces, tested upgrade paths from 3.12, 4 and 5 to release-6. We
encountered problems, but we fixed them before releasing glusterfs-6. So
overall this experience should definitely improve with glusterfs-6.

On Tue, 2 Apr 2019 at 15:16, Sahina Bose  wrote:

>
>
> On Tue, Apr 2, 2019 at 12:07 PM Sandro Bonazzola 
> wrote:
>
>> Thanks to the 143 participants to oVirt Survey 2019!
>> The survey is now closed and results are publicly available at
>> https://bit.ly/2JYlI7U
>> We'll analyze collected data in order to improve oVirt thanks to your
>> feedback.
>>
>> As a first step after reading the results I'd like to invite the 30
>> persons who replied they're willing to contribute code to send an email to
>> de...@ovirt.org introducing themselves: we'll be more than happy to
>> welcome them and helping them getting started.
>>
>> I would also like to invite the 17 people who replied they'd like to help
>> organizing oVirt events in their area to either get in touch with me or
>> introduce themselves to users@ovirt.org so we can discuss about events
>> organization.
>>
>> Last but not least I'd like to invite the 38 people willing to contribute
>> documentation and the one willing to contribute localization to introduce
>> themselves to de...@ovirt.org.
>>
>
> Thank you all for the feedback.
> I was looking at the feedback specific to Gluster. While it's
> disheartening to see "Gluster weakest link in oVirt", I can understand
> where the feedback and frustration is coming from.
>
> Over the past month and in this survey, the common themes that have come up
> - Ensure smoother upgrades for the hyperconverged deployments with
> GlusterFS.  The oVirt 4.3 release with upgrade to gluster 5.3 caused
> disruption for many users and we want to ensure this does not happen again.
> To this end, we are working on adding upgrade tests to OST based CI .
> Contributions are welcome.
>
> - improve performance on gluster storage domain. While we have seen
> promising results with gluster 6 release this is an ongoing effort. Please
> help this effort with inputs on the specific workloads and usecases that
> you run, gathering data and running tests.
>
> - deployment issues. We have worked to improve the deployment flow in 4.3
> by adding pre-checks and changing to gluster-ansible role based deployment.
> We would love to hear specific issues that you're facing around this -
> please raise bugs if you haven't already (
> https://bugzilla.redhat.com/enter_bug.cgi?product=cockpit-ovirt)
>
>
>
>> Thanks!
>> --
>>
>> SANDRO BONAZZOLA
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA 
>>
>> sbona...@redhat.com
>> 
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4N5DYCXY2S6ZAUI7BWD4DEKZ6JL6MSGN/
>>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel

-- 
--Atin
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/P5QM4H6IWFK2ISWU4DEJV7KPVRXWLAJR/


[ovirt-users] Re: Actual size bigger than virtual size

2019-04-02 Thread Tal Nisan
On Fri, Mar 29, 2019 at 4:58 PM  wrote:

> Hi,
>
> The engine GUI shows an actual size of 209GiB and a virtual size of 150GiB
> on a thin provisioned disk.
> 30.9 GB of used space is what I see on the Windows machine when I remote
> desktop to it.
>
Is it possible that you have big snapshots in the chain?
Just as an example: if you have a VM with 150GB used space in the guest,
you take a snapshot and then delete 100GB within the guest, it will appear as
if the used space within the guest is 50GB, but the actual size of the snapshot
chain will be more like 250GB.
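
If it helps, here is a minimal python-sdk sketch (illustration only; the
engine URL, credentials and VM name are placeholders for your setup) to list
the VM's snapshots and compare provisioned vs. actual disk size from the API
side:

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder engine URL
    username='admin@internal',                          # placeholder credentials
    password='password',
    ca_file='ca.pem',
)
system = connection.system_service()
vms_service = system.vms_service()
vm = vms_service.list(search='name=myvm')[0]            # placeholder VM name
vm_service = vms_service.vm_service(vm.id)

# Snapshots in the chain (large or old snapshots inflate the actual size)
for snap in vm_service.snapshots_service().list():
    print(snap.date, snap.description, snap.snapshot_status)

# Provisioned (virtual) vs. actual size of each attached disk
disks_service = system.disks_service()
for att in vm_service.disk_attachments_service().list():
    disk = disks_service.disk_service(att.disk.id).get()
    print(disk.name, disk.provisioned_size, disk.actual_size)

connection.close()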

>
> This VM is slower than the others mainly when we reboot the machine it
> takes around 2 hours.
>
> Thanks
>
> José
>
> --
> *De: *"Sahina Bose" 
> *Para: *"suporte" 
> *Cc: *"users" 
> *Enviadas: *Sexta-feira, 29 De Março de 2019 12:35:23
> *Assunto: *Re: [ovirt-users] Re: Actual size bigger than virtual size
>
> On Fri, Mar 29, 2019 at 6:02 PM  wrote:
> >
> > Hi,
> >
> > Any help?
> >
> > Thanks
> >
> > José
> >
> > 
> > From: supo...@logicworks.pt
> > To: "users" 
> > Sent: Wednesday, March 27, 2019 11:21:41 AM
> > Subject: Actual size bigger than virtual size
> >
> > Hi,
> >
> > I have an all in one ovirt 4.2.2 with gluster storage and a couple of
> windows 2012 VMs.
> > One w2012 is showing actual size 209GiB and Virtual Size 150 GiB on a
> thin provision disk. The vm shows 30,9 GB of used space.
>
> I'm not sure I understand. Could you provide output of commands you
> ran to clarify? What do you mean when you say vm shows 30,9 GB space -
> is that in the UI?
>
> >
> > This VM is slower than the others mainly when we reboot the machine it
> takes around 2 hours.
> >
> > Any idea?
> >
> > Thanks
> >
> >
> > --
> > 
> > Jose Ferradeira
> > http://www.logicworks.pt
> >
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XD7HTZCSF3JIAGDBVR5N3DDQSEZJAXIA/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C672PK4W4AMTJF3KMYPMDETC4GD65TVX/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F4NHNHHJLRTAJ2F7SSNJEOQLNALLW7JJ/


[ovirt-users] Re: HE - engine gluster volume - not mounted

2019-04-02 Thread Leo David
Just to loop in, I forgot to hit "Reply all".

I have deleted everything in the engine gluster mount path, unmounted the
engine gluster volume (not the volume itself), and started the wizard
with "Use already configured storage". I pointed it to this gluster
volume; the volume gets mounted under the correct path, but the
installation still fails:

[ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[]".
HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code
is 400."}

On the node's vdsm.log I can continuously see:
2019-04-02 13:02:18,832+0100 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
call Host.getStats succeeded in 0.03 seconds (__init__:312)
2019-04-02 13:02:19,906+0100 INFO  (vmrecovery) [vdsm.api] START
getConnectedStoragePoolsList(options=None) from=internal,
task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:48)
2019-04-02 13:02:19,907+0100 INFO  (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:54)
2019-04-02 13:02:19,907+0100 INFO  (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:709)
2019-04-02 13:02:21,737+0100 INFO  (periodic/2) [vdsm.api] START
repoStats(domains=()) from=internal,
task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09 (api:48)
2019-04-02 13:02:21,738+0100 INFO  (periodic/2) [vdsm.api] FINISH repoStats
return={} from=internal, task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09
(api:54)

Should I perform an "engine-cleanup",  delete lvms from Cockpit and start
it all over ?
Did anyone successfully used this particular iso image
"ovirt-node-ng-installer-4.3.2-2019031908.el7.iso" for a single node
installation ?
Thank you !
Leo

On Tue, Apr 2, 2019 at 1:45 PM Sahina Bose  wrote:

> Is it possible you have not cleared the gluster volume between installs?
>
> What's the corresponding error in vdsm.log?
>
>
> On Tue, Apr 2, 2019 at 4:07 PM Leo David  wrote:
> >
> > And there it is the last lines on the ansible_create_storage_domain log:
> >
> > 2019-04-02 10:53:49,139+0100 DEBUG var changed: host "localhost" var
> "otopi_storage_domain_details" type "" value: "{
> > "changed": false,
> > "exception": "Traceback (most recent call last):\n  File
> \"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 664,
> in main\nstorage_domains_module.post_create_check(sd_id)\n  File
> \"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 526,
> in post_create_check\nid=storage_domain.id,\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in
> add\nreturn self._internal_add(storage_domain, headers, query, wait)\n
> File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232,
> in _internal_add\nreturn future.wait() if wait else future\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in
> wait\nreturn self._code(response)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in
> callback\nself._check_fault(response)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in
> _check_fault\nself._raise_error(response, body)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in
> _raise_error\nraise error\nError: Fault reason is \"Operation Failed\".
> Fault detail is \"[]\". HTTP response code is 400.\n",
> > "failed": true,
> > "msg": "Fault reason is \"Operation Failed\". Fault detail is
> \"[]\". HTTP response code is 400."
> > }"
> > 2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var
> "ansible_play_hosts" type "" value: "[]"
> > 2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var
> "play_hosts" type "" value: "[]"
> > 2019-04-02 10:53:49,142+0100 DEBUG var changed: host "localhost" var
> "ansible_play_batch" type "" value: "[]"
> > 2019-04-02 10:53:49,142+0100 ERROR ansible failed {'status': 'FAILED',
> 'ansible_type': 'task', 'ansible_task': u'Activate storage domain',
> 'ansible_result': u'type: \nstr: {\'_ansible_parsed\': True,
> u\'exception\': u\'Traceback (most recent call last):\\n  File
> "/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py", line 664,
> in main\\nstorage_domains_module.post_create_check(sd_id)\\n  File
> "/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py", line 526',
> 'task_duration': 9, 'ansible_host': u'localhost', 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'}
> > 2019-04-02 10:53:49,143+0100 DEBUG ansible on_any args
>  kwargs
> ignore_errors:None
> > 2019-04-02 10:53:49,148+0100 INFO ansible stats {
> > "ansible_playbook":
> "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
> > 

[ovirt-users] Re: trying to use Managed Block Storage in 4.3.2 with Ceph / Authentication Keys

2019-04-02 Thread Matthias Leopold

No, I didn't...
I wasn't used to using both "rbd_user" and "rbd_keyring_conf" (I don't 
use "rbd_keyring_conf" in standalone Cinder), nevermind


After fixing that and dealing with the rbd feature issues I could 
proudly start my first VM with a cinderlib provisioned disk :-)


Thanks for help!
I'll keep posting my experiences concerning cinderlib to this list.

Matthias

Am 01.04.19 um 16:24 schrieb Benny Zlotnik:

Did you pass the rbd_user when creating the storage domain?

On Mon, Apr 1, 2019 at 5:08 PM Matthias Leopold
 wrote:



Am 01.04.19 um 13:17 schrieb Benny Zlotnik:

OK, /var/log/ovirt-engine/cinderlib/cinderlib.log says:

2019-04-01 11:14:54,925 - cinder.volume.drivers.rbd - ERROR - Error
connecting to ceph cluster.
Traceback (most recent call last):
 File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
line 337, in _do_conn
   client.connect()
 File "rados.pyx", line 885, in rados.Rados.connect
(/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.11/rpm/el7/BUILD/ceph-12.2.11/build/src/pybind/rados/pyrex/rados.c:9785)
OSError: [errno 95] error connecting to the cluster
2019-04-01 11:14:54,930 - root - ERROR - Failure occurred when trying to
run command 'storage_stats': Bad or unexpected response from the storage
volume backend API: Error connecting to ceph cluster.

I don't really know what to do with that either.
BTW, the cinder version on engine host is "pike"
(openstack-cinder-11.2.0-1.el7.noarch)

Not sure if the version is related (I know it's been tested with
pike), but you can try and install the latest rocky (that's what I use
for development)


I upgraded cinder on engine and hypervisors to rocky and installed
missing "ceph-common" packages on hypervisors. I set "rbd_keyring_conf"
and "rbd_ceph_conf" as indicated and got as far as adding a "Managed
Block Storage" domain and creating a disk (which is also visible through
"rbd ls"). I used a keyring that is only authorized for the pool I
specified with "rbd_pool". When I try to start the VM it fails and I see
the following in supervdsm.log on hypervisor:

ManagedVolumeHelperFailed: Managed Volume Helper failed.: ('Error
executing helper: Command [\'/usr/libexec/vdsm/managedvolume-helper\',
\'attach\'] failed with rc=1 out=\'\' err=\'oslo.privsep.daemon: Running
privsep helper: [\\\'sudo\\\', \\\'privsep-helper\\\',
\\\'--privsep_context\\\', \\\'os_brick.privileged.default\\\',
\\\'--privsep_sock_path\\\',
\\\'/tmp/tmp5S8zZV/privsep.sock\\\']\\noslo.privsep.daemon: Spawned new
privsep daemon via rootwrap\\noslo.privsep.daemon: privsep daemon
starting\\noslo.privsep.daemon: privsep process running with uid/gid:
0/0\\noslo.privsep.daemon: privsep process running with capabilities
(eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none\\noslo.privsep.daemon:
privsep daemon running as pid 15944\\nTraceback (most recent call
last):\\n  File "/usr/libexec/vdsm/managedvolume-helper", line 154, in
\\nsys.exit(main(sys.argv[1:]))\\n  File
"/usr/libexec/vdsm/managedvolume-helper", line 77, in main\\n
args.command(args)\\n  File "/usr/libexec/vdsm/managedvolume-helper",
line 137, in attach\\nattachment =
conn.connect_volume(conn_info[\\\'data\\\'])\\n  File
"/usr/lib/python2.7/site-packages/vdsm/storage/nos_brick.py", line 96,
in connect_volume\\nrun_as_root=True)\\n  File
"/usr/lib/python2.7/site-packages/os_brick/executor.py", line 52, in
_execute\\nresult = self.__execute(*args, **kwargs)\\n  File
"/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line
169, in execute\\nreturn execute_root(*cmd, **kwargs)\\n  File
"/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line
207, in _wrap\\nreturn self.channel.remote_call(name, args,
kwargs)\\n  File
"/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 202, in
remote_call\\nraise
exc_type(*result[2])\\noslo_concurrency.processutils.ProcessExecutionError:
Unexpected error while running command.\\nCommand: rbd map
volume-36f5eb75-329e-4bd2-88d0-6f0bfe5d1040 --pool ovirt-test --conf
/tmp/brickrbd_RmBvxA --id None --mon_host xxx.xxx.216.45:6789 --mon_host
xxx.xxx.216.54:6789 --mon_host xxx.xxx.216.55:6789\\nExit code:
22\\nStdout: u\\\'In some cases useful info is found in syslog - try
"dmesg | tail".n\\\'\\nStderr: u"2019-04-01 15:27:30.743196
7fe0b4632d40 -1 auth: unable to find a keyring on
/etc/ceph/ceph.client.None.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,:
(2) No such file or directorynrbd: sysfs write failedn2019-04-01
15:27:30.746987 7fe0b4632d40 -1 auth: unable to find a keyring on
/etc/ceph/ceph.client.None.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,:
(2) No such file or directoryn2019-04-01 15:27:30.747896
7fe0b4632d40 -1 monclient: authenticate NOTE: no keyring found; disabled
cephx authenticationn2019-04-01 15:27:30.747903 

[ovirt-users] Re: oVirt Survey 2019 results

2019-04-02 Thread Laura Wright
Thank you for organizing this, Sandro! Surveys like this are always great
for helping to inform and improve the user experience of the application,
and for learning more about the users who are using it.

On Tue, Apr 2, 2019 at 6:31 AM Sahina Bose  wrote:

>
>
> On Tue, Apr 2, 2019 at 12:07 PM Sandro Bonazzola 
> wrote:
>
>> Thanks to the 143 participants to oVirt Survey 2019!
>> The survey is now closed and results are publicly available at
>> https://bit.ly/2JYlI7U
>> We'll analyze collected data in order to improve oVirt thanks to your
>> feedback.
>>
>> As a first step after reading the results I'd like to invite the 30
>> persons who replied they're willing to contribute code to send an email to
>> de...@ovirt.org introducing themselves: we'll be more than happy to
>> welcome them and helping them getting started.
>>
>> I would also like to invite the 17 people who replied they'd like to help
>> organizing oVirt events in their area to either get in touch with me or
>> introduce themselves to users@ovirt.org so we can discuss about events
>> organization.
>>
>> Last but not least I'd like to invite the 38 people willing to contribute
>> documentation and the one willing to contribute localization to introduce
>> themselves to de...@ovirt.org.
>>
>
> Thank you all for the feedback.
> I was looking at the feedback specific to Gluster. While it's
> disheartening to see "Gluster weakest link in oVirt", I can understand
> where the feedback and frustration is coming from.
>
> Over the past month and in this survey, the common themes that have come up
> - Ensure smoother upgrades for the hyperconverged deployments with
> GlusterFS.  The oVirt 4.3 release with upgrade to gluster 5.3 caused
> disruption for many users and we want to ensure this does not happen again.
> To this end, we are working on adding upgrade tests to OST based CI .
> Contributions are welcome.
>
> - improve performance on gluster storage domain. While we have seen
> promising results with gluster 6 release this is an ongoing effort. Please
> help this effort with inputs on the specific workloads and usecases that
> you run, gathering data and running tests.
>
> - deployment issues. We have worked to improve the deployment flow in 4.3
> by adding pre-checks and changing to gluster-ansible role based deployment.
> We would love to hear specific issues that you're facing around this -
> please raise bugs if you haven't already (
> https://bugzilla.redhat.com/enter_bug.cgi?product=cockpit-ovirt)
>
>
>
>> Thanks!
>> --
>>
>> SANDRO BONAZZOLA
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA 
>>
>> sbona...@redhat.com
>> 
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4N5DYCXY2S6ZAUI7BWD4DEKZ6JL6MSGN/
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BF4TKC2MIROT7QZBHZQITCAHQMQNGZ3Q/
>


-- 

LAURA WRIGHT

ASSOCIATE INTERACTION DESIGNER, UXD TEAM

Red Hat Massachusetts 

314 Littleton Rd

lwri...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F6EQNV35SY42QAHF52HVRRKKLJUBFOPL/


[ovirt-users] Re: HE - engine gluster volume - not mounted

2019-04-02 Thread Sahina Bose
On Tue, Apr 2, 2019 at 8:14 PM Leo David  wrote:
>
> Just to loop in,  i've forgot to hit "Reply all"
>
> I have deleted everything in the engine gluster mount path, unmounted the 
> engine gluster volume ( not deleted the volume ) ,  and started the wizard 
> with "Use already configured storage". I have pointed to use this gluster 
> volume,  volume gets mounted under the correct path, but installation still 
> fails:
>
> [ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[]". 
> HTTP response code is 400.
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault 
> reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 
> 400."}

And I guess we don't have the engine logs to look at this?
Is there any way you can access the engine console to check?

>
> On the node's vdsm.log I can continuously see:
> 2019-04-02 13:02:18,832+0100 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC 
> call Host.getStats succeeded in 0.03 seconds (__init__:312)
> 2019-04-02 13:02:19,906+0100 INFO  (vmrecovery) [vdsm.api] START 
> getConnectedStoragePoolsList(options=None) from=internal, 
> task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:48)
> 2019-04-02 13:02:19,907+0100 INFO  (vmrecovery) [vdsm.api] FINISH 
> getConnectedStoragePoolsList return={'poollist': []} from=internal, 
> task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:54)
> 2019-04-02 13:02:19,907+0100 INFO  (vmrecovery) [vds] recovery: waiting for 
> storage pool to go up (clientIF:709)
> 2019-04-02 13:02:21,737+0100 INFO  (periodic/2) [vdsm.api] START 
> repoStats(domains=()) from=internal, 
> task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09 (api:48)
> 2019-04-02 13:02:21,738+0100 INFO  (periodic/2) [vdsm.api] FINISH repoStats 
> return={} from=internal, task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09 (api:54)
>

Any calls to "START connectStorageServer" in vdsm.log?

> Should I perform an "engine-cleanup",  delete lvms from Cockpit and start it 
> all over ?

I doubt that would resolve the issue, since you did clean up the files from the mount.

> Did anyone successfully used this particular iso image 
> "ovirt-node-ng-installer-4.3.2-2019031908.el7.iso" for a single node 
> installation ?
Sorry, don't know.

> Thank you !
> Leo
>
> On Tue, Apr 2, 2019 at 1:45 PM Sahina Bose  wrote:
>>
>> Is it possible you have not cleared the gluster volume between installs?
>>
>> What's the corresponding error in vdsm.log?
>>
>>
>> On Tue, Apr 2, 2019 at 4:07 PM Leo David  wrote:
>> >
>> > And there it is the last lines on the ansible_create_storage_domain log:
>> >
>> > 2019-04-02 10:53:49,139+0100 DEBUG var changed: host "localhost" var 
>> > "otopi_storage_domain_details" type "" value: "{
>> > "changed": false,
>> > "exception": "Traceback (most recent call last):\n  File 
>> > \"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 
>> > 664, in main\nstorage_domains_module.post_create_check(sd_id)\n  File 
>> > \"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 
>> > 526, in post_create_check\nid=storage_domain.id,\n  File 
>> > \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, 
>> > in add\nreturn self._internal_add(storage_domain, headers, query, 
>> > wait)\n  File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", 
>> > line 232, in _internal_add\nreturn future.wait() if wait else future\n 
>> >  File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 
>> > 55, in wait\nreturn self._code(response)\n  File 
>> > \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in 
>> > callback\nself._check_fault(response)\n  File 
>> > \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in 
>> > _check_fault\nself._raise_error(response, body)\n  File 
>> > \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in 
>> > _raise_error\nraise error\nError: Fault reason is \"Operation 
>> > Failed\". Fault detail is \"[]\". HTTP response code is 400.\n",
>> > "failed": true,
>> > "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". 
>> > HTTP response code is 400."
>> > }"
>> > 2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var 
>> > "ansible_play_hosts" type "" value: "[]"
>> > 2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var 
>> > "play_hosts" type "" value: "[]"
>> > 2019-04-02 10:53:49,142+0100 DEBUG var changed: host "localhost" var 
>> > "ansible_play_batch" type "" value: "[]"
>> > 2019-04-02 10:53:49,142+0100 ERROR ansible failed {'status': 'FAILED', 
>> > 'ansible_type': 'task', 'ansible_task': u'Activate storage domain', 
>> > 'ansible_result': u'type: \nstr: {\'_ansible_parsed\': 
>> > True, u\'exception\': u\'Traceback (most recent call last):\\n  File 
>> > "/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py", 

[ovirt-users] Re: HE - engine gluster volume - not mounted

2019-04-02 Thread Leo David
Hi,
Started from scratch...
And things became even stranger. First of all, after adding FQDN names for
both the management and gluster interfaces in /etc/hosts (specifying IP
addresses for gluster nodes is not possible because of a known bug), and
although I had proper DNS resolution for the gluster FQDN address, the
installation went almost to the finish:

[ INFO ] TASK [ovirt.hosted_engine_setup : Wait for the local bootstrap VM
to be down at engine eyes]
[ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_vms":
[{"affinity_labels": [], "applications": [], "bios": {"boot_menu":
{"enabled": false}, "type": "i440fx_sea_bios"}, "cdroms": [], "cluster":
{"href": "/ovirt-engine/api/clusters/b4eb4bba-5564-11e9-82f1-00163e41da1e",
"id": "b4eb4bba-5564-11e9-82f1-00163e41da1e"}, "comment": "", "cpu":
{"architecture": "x86_64", "topology": {"cores": 1, "sockets": 8,
"threads": 1}}, "cpu_profile": {"href":
"/ovirt-engine/api/cpuprofiles/58ca604e-01a7-003f-01de-0250", "id":
"58ca604e-01a7-003f-01de-0250"}, "cpu_shares": 0, "creation_time":
"2019-04-02 17:42:48.463000+01:00", "delete_protected": false,
"description": "", "disk_attachments": [], "display": {"address":
"127.0.0.1", "allow_override": false, "certificate": {"content":
"-BEGIN
CERTIFICATE-\nMIID9DCCAtygAwIBAgICEAAwDQYJKoZIhvcNAQELBQAwWDELMAkGA1UEBhMCVVMxFTATBgNVBAoM\nDHN5bmNyYXN5LmxhYjEyMDAGA1UEAwwpdmlydHVhbGlzYXRpb24tc2FuZGJveC5zeW5jcmFzeS5s\nYWIuNDk5NjcwHhcNMTkwNDAxMTYzMDA5WhcNMjkwMzMwMTYzMDA5WjBYMQswCQYDVQQGEwJVUzEV\nMBMGA1UECgwMc3luY3Jhc3kubGFiMTIwMAYDVQQDDCl2aXJ0dWFsaXNhdGlvbi1zYW5kYm94LnN5\nbmNyYXN5LmxhYi40OTk2NzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANdcj83LBAsU\nLUS18TIKmFG4pFj0a3VR1r3gfA9+FBVzm60dmIs7zmFR843xQjNTe4n6+uJCbQ09XdOSUyRpWi+9\nq4T5nL4kHbEnPbMUnQ9TDf3bX3S6SQXN678JELobeBDRaV89kGMCsjb7boQUofs3ScMduK77Fmvf\nyhCBVomo2nS8R9FQsv7KnR+3UXPQ1LQ30gv0hRs22vRWUB8ljCh1BCEDBMh1xdDLRI+jhf3mqMZc\n3Sb6qeLyslB9p1kmb/s2wxvdrjrsvpNSpQeZbi7r0FhbkH1GMgsi8V9NGaX3zKwPDgdYt18H2k5K\niRGpF2dWBxxeBPY9R7P+5tKIflcCAwEAAaOBxzCBxDAdBgNVHQ4EFgQUyKAePwI5dLdXIpWuqDDY\njS5S0dMwgYEGA1UdIwR6MHiAFMigHj8COXS3VyKVrqgw2I0uUtHToVykWjBYMQswCQYDVQQGEwJV\nUzEVMBMGA1UECgwMc3luY3Jhc3kubGFiMTIwMAYDVQQDDCl2aXJ0dWFsaXNhdGlvbi1zYW5kYm94\nLnN5bmNyYXN5LmxhYi40OTk2N4ICEAAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYw\nDQYJKoZIhvcNAQELBQADggEBAElAlZvQZHep9ujnvJ3cOGe1bHeRpvFyThAb3YEpG9LRx91jYl+N\ndd6YmIa/wbUt9/SIwlsB5lOzbwI47yFK9zRjjIfR1nDuv5aDL+ZQhoU0zTypa3dx6OZekx11VGyF\ndFBMFSYVM2uiSaKzLB9clQjCMiLpiT00zfpCBDrORrpIJjWNWyC5AJiq0CXPQzTUq5Lylafe6fhH\nJab3bxrCDkREgb3eZN9uuT12BxrVtJkF4QaonTn2o/62hEOyVy6v8vyC66r4lz7AGwVIkuxa2bXU\nQvIhfhm1mC4ZFzKPMcJzpW0ze+OCoFPYaQFDmiO210j7prZaPobvq7JCBh1GleM=\n-END
CERTIFICATE-\n", "organization": "internal.lab", "subject":
"O=internal.lab,CN=c6100-ch3-node1.internal.lab"}, "copy_paste_enabled":
true, "disconnect_action": "LOCK_SCREEN", "file_transfer_enabled": true,
"monitors": 1, "port": 5900, "single_qxl_pci": false, "smartcard_enabled":
false, "type": "vnc"}, "fqdn": "virtualisation-sandbox.internal.lab",
"graphics_consoles": [], "guest_operating_system": {"architecture":
"x86_64", "codename": "", "distribution": "CentOS Linux", "family":
"Linux", "kernel": {"version": {"build": 0, "full_version":
"3.10.0-957.10.1.el7.x86_64", "major": 3, "minor": 10, "revision": 957}},
"version": {"full_version": "7", "major": 7}}, "guest_time_zone": {"name":
"BST", "utc_offset": "+01:00"}, "high_availability": {"enabled": false,
"priority": 0}, "host": {"href":
"/ovirt-engine/api/hosts/740c07ae-504a-49b5-967c-676fd6ca16c3", "id":
"740c07ae-504a-49b5-967c-676fd6ca16c3"}, "host_devices": [], "href":
"/ovirt-engine/api/vms/780c584b-28fa-4bde-af02-99b296522d17", "id":
"780c584b-28fa-4bde-af02-99b296522d17", "io": {"threads": 1},
"katello_errata": [], "large_icon": {"href":
"/ovirt-engine/api/icons/defaf775-731c-4e75-8c51-9119ac6dc689", "id":
"defaf775-731c-4e75-8c51-9119ac6dc689"}, "memory": 34359738368,
"memory_policy": {"guaranteed": 34359738368, "max": 34359738368},
"migration": {"auto_converge": "inherit", "compressed": "inherit"},
"migration_downtime": -1, "multi_queues_enabled": true, "name":
"external-HostedEngineLocal", "next_run_configuration_exists": false,
"nics": [], "numa_nodes": [], "numa_tune_mode": "interleave", "origin":
"external", "original_template": {"href":
"/ovirt-engine/api/templates/----", "id":
"----"}, "os": {"boot": {"devices":
["hd"]}, "type": "other"}, "permissions": [], "placement_policy":
{"affinity": "migratable"}, "quota": {"id":
"d27a97ee-5564-11e9-bba0-00163e41da1e"}, "reported_devices": [],
"run_once": false, "sessions": [], "small_icon": {"href":
"/ovirt-engine/api/icons/a29967f4-53e5-4acc-92d8-4a971b54e655", "id":
"a29967f4-53e5-4acc-92d8-4a971b54e655"}, "snapshots": [], "sso":
{"methods": [{"id": "guest_agent"}]}, "start_paused": false, "stateless":
false, "statistics": [], "status": "up", "storage_error_resume_behaviour":

[ovirt-users] Re: Backup VMs to external USB Disk

2019-04-02 Thread daniel94 . oeller
Thanks Arik Hadas for your reply.

And how can I do this regularly and automatically every day?

Thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6FKO5OFTDA4HDOXG37TP2URI762TE3HW/


[ovirt-users] Re: Ansible hosted-engine deploy still doesnt support manually defined ovirtmgmt?

2019-04-02 Thread Simone Tiraboschi
On Tue, Apr 2, 2019 at 4:57 PM Callum Smith  wrote:

> Re-running same config sorted this error... Though we're back here:
>
> - Clean NFS
> - Task run as normal user
> - name: Install oVirt Hosted Engine
>   hosts: virthyp04.virt.in.bmrc.ox.ac.uk
>   roles:
> - ovirt.hosted_engine_setup
> - No overrides in ansible.cfg
> - ansible_user=root set inside /etc/ansible/hosts
>
> I can't see the command actually attempting any sudo for the
> `dd` - but the playbook clearly says it should be running the command as
> `vdsm` - is there an obvious next step?
>
>

I tried isolating it and, at least with ansible 2.7.8, everything works
exactly as expected: become at the task level wins over the playbook- or
role-level one.

Honestly I've no idea why it fails on your side.
Do you have any customizations to that role?


[stirabos@ansiblec ~]$ ansible --version
ansible 2.7.8
  config file = /etc/ansible/ansible.cfg
  configured module search path =
[u'/home/stirabos/.ansible/plugins/modules',
u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Oct 30 2018, 23:45:53) [GCC 4.8.5
20150623 (Red Hat 4.8.5-36)]
[stirabos@ansiblec ~]$ cat roles/test_role/tasks/main.yml
---
- name: test become behaviour
  command: whoami
  become: true
  become_user: vdsm
  become_method: sudo
  register: whoamiout
- debug: var=whoamiout.stdout
[stirabos@ansiblec ~]$ cat test1.yml
---
- name: Test role
  hosts: c76he20190321h1.localdomain
  become: yes
  become_user: root
  roles:
- role: test_role
[stirabos@ansiblec ~]$ cat test2.yml
---
- name: Test role
  hosts: c76he20190321h1.localdomain
  roles:
- role: test_role
  become: yes
  become_user: root
[stirabos@ansiblec ~]$ ansible-playbook -i c76he20190321h1.localdomain,
test1.yml

PLAY [Test role]
*

TASK [Gathering Facts]
***
ok: [c76he20190321h1.localdomain]

TASK [test_role : test become behaviour]
*
changed: [c76he20190321h1.localdomain]

TASK [test_role : debug]
*
ok: [c76he20190321h1.localdomain] => {
"whoamiout.stdout": "vdsm"
}

PLAY RECAP
***
c76he20190321h1.localdomain : ok=3changed=1unreachable=0
failed=0

[stirabos@ansiblec ~]$ ansible-playbook -i c76he20190321h1.localdomain,
test2.yml

PLAY [Test role]
*

TASK [Gathering Facts]
***
ok: [c76he20190321h1.localdomain]

TASK [test_role : test become behaviour]
*
changed: [c76he20190321h1.localdomain]

TASK [test_role : debug]
*
ok: [c76he20190321h1.localdomain] => {
"whoamiout.stdout": "vdsm"
}

PLAY RECAP

[ovirt-users] Re: Backup VMs to external USB Disk

2019-04-02 Thread Arik Hadas
On Tue, Apr 2, 2019 at 7:56 PM  wrote:

> Thanks Arik Hadas for your reply.
>
> And how can i do this regulary and automaticly every day?
>

oVirt does not provide an integrated way to define periodic tasks like that.
So you need, e.g., to set up a cron job that executes a script that
triggers the export_to_path_on_host API call - the script can do that via
the python-sdk as you can find at [1]

[1]
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/export_vm_as_ova.py
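
For completeness, a minimal sketch along the lines of the linked example (the
engine URL, credentials, host name, VM name and target directory below are
placeholders for your environment); you could then schedule it with a crontab
entry such as "0 2 * * * python /usr/local/bin/export_vm.py":

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder engine URL
    username='admin@internal',                          # placeholder credentials
    password='password',
    ca_file='ca.pem',
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]            # placeholder VM name
vm_service = vms_service.vm_service(vm.id)

# Export the VM as an OVA to a directory on the chosen host
# (e.g. the path where the external USB disk is mounted on that host).
vm_service.export_to_path_on_host(
    host=types.Host(name='myhost'),                     # placeholder host name
    directory='/mnt/usb-backup',                        # placeholder target path
    filename='myvm-backup.ova',
    wait=True,
)

connection.close()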


>
> Thanks
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6FKO5OFTDA4HDOXG37TP2URI762TE3HW/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C7CXVOVZRBCVTBW5BFPVJQXMIXWPKFKW/


[ovirt-users] Re: Vagrant Plugin

2019-04-02 Thread Staniforth, Paul
Hi Jeremy,

I've just had a quick word with Cliffe and will forward the 
email to him.


Regards,

Paul S.


From: Jeremy Tourville 
Sent: 30 March 2019 18:50
To: users@ovirt.org
Cc: Luca 'remix_tj' Lorenzetto
Subject: [ovirt-users] Re: Vagrant Plugin

Thanks for your reply Luca,
I confirmed the cluster name; it is "Default". I even tried to run the script 
again and made sure the D in Default was upper case, because Linux is case 
sensitive. It still fails in the same way as before.



From: Luca 'remix_tj' Lorenzetto 
Sent: Saturday, March 30, 2019 3:34 AM
To: Jeremy Tourville
Subject: Re: [ovirt-users] Vagrant Plugin




On Fri, Mar 29, 2019 at 19:12 Jeremy Tourville 
<jeremy_tourvi...@hotmail.com> wrote:
I am having some trouble getting the Ovirt Vagrant plugin working.  I was able 
to get Vagrant installed and could even run the example scenario listed in the 
blog. 
https://www.ovirt.org/blog/2017/02/using-oVirt-vagrant.html

My real issue is getting a vm generated by the SecGen project 
https://github.com/SecGen/SecGen
  to come up.  If I use the VirtualBox provider everything works as expected 
and I can launch the vm with vagrant up.  If I try to run using Ovirt provider 
it fails.

I had originally posted this over in Google groups /  Vagrant forums and it was 
suggested to take it to Ovirt.  Hopefully, somebody here has some insights.

The process fails quickly with the following output.  Can anyone give some 
suggestions on how to fix the issue?  I have also included a copy of my 
vagrantfile below. Thanks in advance for your assistance!

***Output***

Bringing machine 'escalation' up with 'ovirt4' provider...
==> escalation: Creating VM with the following settings...
==> escalation:  -- Name:  SecGen-default-scenario-escalation
==> escalation:  -- Cluster:   default
==> escalation:  -- Template:  debian_stretch_server_291118
==> escalation:  -- Console Type:  spice
==> escalation:  -- Memory:
==> escalation:   Memory:  512 MB
==> escalation:   Maximum: 512 MB
==> escalation:   Guaranteed:  512 MB
==> escalation:  -- Cpu:
==> escalation:   Cores:   1
==> escalation:   Sockets: 1
==> escalation:   Threads: 1
==> escalation:  -- Cloud-Init:false
==> escalation: An error occured. Recovering..
==> escalation: VM is not created. Please run `vagrant up` first.
/home/secgenadmin/.vagrant.d/gems/2.4.4/gems/ovirt-engine-sdk-4.0.12/lib/ovirtsdk4/service.rb:52:in
 `raise_error': Fault reason is "Operation Failed". Fault detail is "Entity not 
found: Cluster: name=default". HTTP response code is 404.

Hello Jeremy,

Looks like you have no cluster called "default" in your setup. Edit your 
Vagrantfile according to your setup.

Luca
To view the terms under which this email is distributed, please go to:-
http://leedsbeckett.ac.uk/disclaimer/email/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LSSL3IHEHFEPLC6BPAZURV7QOYLPCD3R/


[ovirt-users] Re: oVirt Performance (Horrific)

2019-04-02 Thread Drew Rash
Sorry about the delay. We did confirm the jumbo frames. We dropped iSCSI
and switched to NFS on FreeNAS before I got your reply. That seems to have
gotten rid of any hiccups, and we like being able to see the files better
than with iSCSI anyway.

So we decided we were very confident it's oVirt (the reason I'm here on the
oVirt forum) and took a complete shit box: a 4-core i5 from several years ago
with 16GB RAM (unknown speed) and a 1TB hard disk drive running Fedora 27,
mounted the FreeNAS NFS share, ran a VM using libvirt/kvm directly over the
10Gb NIC, and got 400-900 MBps. We also tested Proxmox and got the slow
speeds again (faster than oVirt) at around 90MBps.

So there is definitely something in Proxmox that is slower than plain KVM, and
something in oVirt that is slower than Proxmox. Each is a step up in cool
features... and a large step down in performance.

So we're dropping all this. We've spent way too much time on it. We're going
with FreeNAS and KVM via virt-manager. Lots of people online talk about oVirt
actually working well, but then don't back it up with any info. There's no
clear-cut way to create an oVirt setup that actually performs, so we're
junking it. Thanks for the help guys, I think you've set us on the right
course for the next decade.




On Thu, Mar 14, 2019 at 8:12 AM Karli Sjöberg  wrote:

>
> On 2019-03-13 05:20, Drew Rash wrote:
>
> Pictures and speeds are the latest. Which seems to be the best performance
> we've ever gotten so far. Still seems like the hardware is sitting idling
> by not doing much after an initial burst.
>
> Took a picture of a file copy using the latest setup. You can see it
> transfer like 25% of a 7gig file at some where around 1GBps or 600MBps ish
> (it disappears quickly) down to 40MBps
> The left vm "MikeWin10:1" is freeNAS'd and achieves much higher highs.
> Still crawls down to the lows and has pause and weird stuff.
> The right vm "MikeWin10_Drew:1" is a gluster fs mount. We tried nfs and
> decided to try gluster again but with a "negative-timeout=1" option
> set...appears to have made it faster by 4x.
> *https://imgur.com/a/R2w6IcO *
>
> *4 Boxes:*
> (2)Two are c9x299-PG300F super micro boards with 14c (28thread) i9's
> 128GB 3200MHz Ram
> (1)FreeNAS is our weakest of all 4 boxes - 6 core, 64GB ram i7 extreme
> version.
>
> Heyo!
>
> Not that the thread is about ZFS, but I find this "stop and go" behavior
> interesting.
>
> FreeNAS is a excellent NAS platform, I mean, it's in the name, right? ;)
> However, the ZFS filesystem and how you configure the system does impact
> performance. First of all, how have you configured the drives in the zpool?
> RAIDZ is not recommended for virtualization, because its random IOPS
> performance is limited to that of 1 HDD per vdev. If we assume a SATA drive has 150 random
> IOPS and you create an 8 x 6 TB RAIDZ2 vdev, that entire pool only has 150
> random IOPS total. Can you do a "zpool status" and post the output?
>
> Second, it's worth mentioning that block sizes still matter. Most drives
> still lie to the OS that they are 512 byte sectors while really being 4k,
> just so that older OS'es don't freak out because they don't know drives can
> have any else than 512. I don't know if FreeNAS solves this issue for you
> but it's something I always take care of, either by "sysctl
> vfs.zfs.min_auto_ashift=12" or trick ZFS into thinking the drives are true
> 4k disks with "gnop". A way to check is "zdb | grep ashift"; it should be
> 12. If 9, you may have worse performance than you should have, but not way
> worse. Still... Then there's alignment that I also think that FreeNAS takes
> care of, probably... Most systems place the partition start at 1 MiB which
> makes it OK for any disk regardless. Your disks should be called "adaX",
> run "camcontrol devlist" to get a list of all of them, then pick one disk
> to check the partitioning on with "gpart show adaX". The "freebsd-zfs"
> partition should start at something evenly divisible by 4096 (4k). Most of
> the time they're at 2048, because 512*2048=1048576(1MiB) and that divided
> by 4k is (1048576/4096=256), which is a beautifully even number.
>
> Third and maybe most important, ZFS _does_ listen to "sync" calls, which
> is about everything over iSCSI (with ctld) or NFS. That means, since your
> hosts are connecting to it over one of the two, for _every_ write, the NAS
> stops and waits for it to be actually written safely to disk before doing
> another write, it's sooo slow (but super awesome, because it saves you from
> data corruption). What you do with ZFS to mitigate that is to add a so
> called SLOG (separate log) disk, typically a hella-fast SSD or NVME that
> only does that and nothing else, so that the fast disk takes all the
> random, small writes and turns them into big streaming writes that the
> HDD's can take. You can partition just a bit of an SSD and use that as a
> SLOG, typically not more than the bandwidth you could maximally take, times
> the interval between