Re: [ovirt-devel] dynamic ownership changes

2018-05-07 Thread Michal Skrivanek
Hi Elad,
why did you install vdsm-hook-allocate_net?

adding Dan as I think the hook is not supposed to fail this badly in any case
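
For context: judging by the traceback below, the hook reads its list of candidate
networks from an 'equivnets' environment variable (presumably populated from an
engine-side custom property) and crashes with KeyError when that property was never
configured. A minimal sketch of the pattern, with names taken from the traceback; the
defensive variant is only an illustration of how the hook could fail more gracefully,
not the actual fix:

    import os
    import sys

    AVAIL_NETS_KEY = 'equivnets'  # env variable the hook expects

    def _parse_nets():
        # behaviour shown in the traceback: raises KeyError when 'equivnets'
        # is absent from the hook's environment
        return os.environ[AVAIL_NETS_KEY].split()

    def _parse_nets_defensive():
        # hypothetical defensive variant: treat a missing property as
        # "no networks to allocate" instead of aborting the VM start
        value = os.environ.get(AVAIL_NETS_KEY, '')
        if not value:
            sys.stderr.write('allocate_net: %s not set, skipping\n' % AVAIL_NETS_KEY)
            return []
        return value.split()

With vdsm-hook-allocate_net installed and no 'equivnets' property defined, VM start
goes through this code path, which appears to be exactly what happened here.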

Thanks,
michal

> On 5 May 2018, at 19:22, Elad Ben Aharon  wrote:
> 
> Start VM fails on:
> 
> 2018-05-05 17:53:27,399+0300 INFO  (vm/e6ce66ce) [virt.vm]
> (vmId='e6ce66ce-852f-48c5-9997-5d2959432a27') drive 'vda' path:
> 'dev=/rhev/data-center/mnt/blockSD/db5a6696-d907-4938-9a78-bdd13a843c62/images/6cdabfe5-d1ca-40af-ae63-9834f235d1c8/7ef97445-30e6-4435-8425-f35a01928211' ->
> u'*dev=/rhev/data-center/mnt/blockSD/db5a6696-d907-4938-9a78-bdd13a843c62/images/6cdabfe5-d1ca-40af-ae63-9834f235d1c8/7ef97445-30e6-4435-8425-f35a01928211' (storagexml:334)
> 2018-05-05 17:53:27,888+0300 INFO  (jsonrpc/1) [vdsm.api] START
> getSpmStatus(spUUID='940fe6f3-b0c6-4d0c-a921-198e7819c1cc', options=None)
> from=:::10.35.161.127,53512, task_id=c70ace39-dbfe-4f5c-ae49-a1e3a82c2758 (api:46)
> 2018-05-05 17:53:27,909+0300 INFO  (vm/e6ce66ce) [root] 
> /usr/libexec/vdsm/hooks/before_device_create/10_allocate_net: rc=2 err=vm net 
> allocation hook: [unexpected error]: Traceback (most recent call last): 
>  File "/usr/libexec/vdsm/hooks/before_device_create/10_allocate_net", line 
> 105, in  
>main() 
>  File "/usr/libexec/vdsm/hooks/before_device_create/10_allocate_net", line 
> 93, in main 
>allocate_random_network(device_xml) 
>  File "/usr/libexec/vdsm/hooks/before_device_create/10_allocate_net", line 
> 62, in allocate_random_network 
>net = _get_random_network() 
>  File "/usr/libexec/vdsm/hooks/before_device_create/10_allocate_net", line 
> 50, in _get_random_network 
>available_nets = _parse_nets() 
>  File "/usr/libexec/vdsm/hooks/before_device_create/10_allocate_net", line 
> 46, in _parse_nets 
>return [net for net in os.environ[AVAIL_NETS_KEY].split()] 
>  File "/usr/lib64/python2.7/UserDict.py", line 23, in __getitem__ 
>raise KeyError(key) 
> KeyError: 'equivnets' 
> 
> 
> (hooks:110) 
> 2018-05-05 17:53:27,915+0300 ERROR (vm/e6ce66ce) [virt.vm] 
> (vmId='e6ce66ce-852f-48c5-9997-5d2959432a27') The vm start process failed 
> (vm:943) 
> Traceback (most recent call last): 
>  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 872, in 
> _startUnderlyingVm 
>self._run() 
>  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2861, in _run 
>domxml = hooks.before_vm_start(self._buildDomainXML(), 
>  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2254, in 
> _buildDomainXML 
>dom, self.id , self._custom['custom']) 
>  File "/usr/lib/python2.7/site-packages/vdsm/virt/domxml_preprocess.py", line 
> 240, in replace_device_xml_with_hooks_xml 
>dev_custom) 
>  File "/usr/lib/python2.7/site-packages/vdsm/common/hooks.py", line 134, in 
> before_device_create 
>params=customProperties) 
>  File "/usr/lib/python2.7/site-packages/vdsm/common/hooks.py", line 120, in 
> _runHooksDir 
>raise exception.HookError(err) 
> HookError: Hook Error: ('vm net allocation hook: [unexpected error]: 
> Traceback (most recent call last):\n  File 
> "/usr/libexec/vdsm/hooks/before_device_create/10_allocate_net", line 105, in 
> \nmain()\n
>  File "/usr/libexec/vdsm/hooks/before_device_create/10_allocate_net", line 
> 93, in main\nallocate_random_network(device_xml)\n  File 
> "/usr/libexec/vdsm/hooks/before_device_create/10_allocate_net", line 62, i
> n allocate_random_network\nnet = _get_random_network()\n  File 
> "/usr/libexec/vdsm/hooks/before_device_create/10_allocate_net", line 50, in 
> _get_random_network\navailable_nets = _parse_nets()\n  File "/us
> r/libexec/vdsm/hooks/before_device_create/10_allocate_net", line 46, in 
> _parse_nets\nreturn [net for net in os.environ[AVAIL_NETS_KEY].split()]\n 
>  File "/usr/lib64/python2.7/UserDict.py", line 23, in __getit
> em__\nraise KeyError(key)\nKeyError: \'equivnets\'\n\n\n',)
> 
> 
> 
> Hence, the success rate was 28%, compared to 100% when running with the
> downstream (d/s) build. If needed, I'll compare against the latest master, but
> I think the d/s comparison already gives the picture.
> 
> vdsm-4.20.27-3.gitfee7810.el7.centos.x86_64 
> libvirt-3.9.0-14.el7_5.3.x86_64 
> qemu-kvm-rhev-2.10.0-21.el7_5.2.x86_64 
> kernel 3.10.0-862.el7.x86_64
> rhel7.5
> 
> 
> Logs attached
> 
> On Sat, May 5, 2018 at 1:26 PM, Elad Ben Aharon  > wrote:
> nvm, found gluster 3.12 repo, managed to install vdsm
> 
> On Sat, May 5, 2018 at 1:12 PM, Elad Ben Aharon  > wrote:
> No, vdsm requires it:
> 
> Error: Package: vdsm-4.20.27-3.gitfee7810.el7.centos.x86_64 
> (/vdsm-4.20.27-3.gitfee7810.el7.centos.x86_64) 
>   Requires: glusterfs-fuse >= 3.12 
>   Installed: glusterfs-fuse-3.8.4-54.8.el7.x86_64 (@rhv-4.2.3)
> 
> Therefore, the vdsm package is skipped when forcing the installation.
> 
> On Sat, May 5, 2018 at 11:42 AM, Michal 

Re: [ovirt-devel] dynamic ownership changes

2018-05-04 Thread Michal Skrivanek
Hi Elad,
to make it easier to compare, Martin backported the change to 4.2 so it is 
actually comparable with a run without that patch. Would you please try that 
out? 
It would be best to have 4.2 upstream and this[1] run to really minimize the 
noise.

Thanks,
michal

[1] 
http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/28/ 


> On 27 Apr 2018, at 09:23, Martin Polednik  wrote:
> 
> On 24/04/18 00:37 +0300, Elad Ben Aharon wrote:
>> I will update with the results of the next tier1 execution on latest 4.2.3
> 
> That isn't master but old branch though. Could you run it against
> *current* VDSM master?
> 
>> On Mon, Apr 23, 2018 at 3:56 PM, Martin Polednik 
>> wrote:
>> 
>>> On 23/04/18 01:23 +0300, Elad Ben Aharon wrote:
>>> 
 Hi, I've triggered another execution [1] due to some issues I saw in the
 first which are not related to the patch.
 
 The success rate is 78% which is low compared to tier1 executions with
 code from downstream builds (95-100% success rates) [2].
 
>>> 
>>> Could you run the current master (without the dynamic_ownership patch)
>>> so that we have a viable comparison?
>>> 
>>> From what I could see so far, there is an issue with move and copy
 operations to and from Gluster domains. For example [3].
 
 The logs are attached.
 
 
 [1]
 *https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/rhv
 -4.2-ge-runner-tier1-after-upgrade/7/testReport/
 *
 
 
 
 [2]
 https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/
 
 rhv-4.2-ge-runner-tier1-after-upgrade/7/
 
 
 
 [3]
 2018-04-22 13:06:28,316+0300 INFO  (jsonrpc/7) [vdsm.api] FINISH
 deleteImage error=Image does not exist in domain:
 'image=cabb8846-7a4b-4244-9835-5f603e682f33,
 domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4'
 from=:
 ::10.35.161.182,40936, flow_id=disks_syncAction_ba6b2630-5976-4935,
 task_id=3d5f2a8a-881c-409e-93e9-aaa643c10e42 (api:51)
 2018-04-22 13:06:28,317+0300 ERROR (jsonrpc/7) [storage.TaskManager.Task]
 (Task='3d5f2a8a-881c-409e-93e9-aaa643c10e42') Unexpected error (task:875)
 Traceback (most recent call last):
 File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
 in
 _run
  return fn(*args, **kargs)
 File "", line 2, in deleteImage
 File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 49, in
 method
  ret = func(*args, **kwargs)
 File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1503,
 in
 deleteImage
  raise se.ImageDoesNotExistInSD(imgUUID, sdUUID)
 ImageDoesNotExistInSD: Image does not exist in domain:
 'image=cabb8846-7a4b-4244-9835-5f603e682f33,
 domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4'
 
 2018-04-22 13:06:28,317+0300 INFO  (jsonrpc/7) [storage.TaskManager.Task]
 (Task='3d5f2a8a-881c-409e-93e9-aaa643c10e42') aborting: Task is aborted:
 "Image does not exist in domain: 'image=cabb8846-7a4b-4244-9835-
 5f603e682f33, domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4'" - code 268
 (task:1181)
 2018-04-22 13:06:28,318+0300 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH
 deleteImage error=Image does not exist in domain:
 'image=cabb8846-7a4b-4244-9835-5f603e682f33,
 domain=e5fd29c8-52ba-467e-be09
 -ca40ff054d
 d4' (dispatcher:82)
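
A note for readers skimming the log: ImageDoesNotExistInSD is raised up front, so the
failure means the image UUID simply is not present in that storage domain (already
removed, or never created there), not that the deletion itself blew up halfway through.
A purely illustrative sketch of that kind of guard, with hypothetical helpers rather
than vdsm's real code:

    class ImageDoesNotExistInSD(Exception):
        pass

    def delete_image(domain, img_uuid):
        # fail fast when the domain has no trace of the image, mirroring the
        # raise at hsm.py line 1503 in the traceback above
        if img_uuid not in domain.list_image_uuids():   # hypothetical helper
            raise ImageDoesNotExistInSD(img_uuid)
        domain.remove_image(img_uuid)                    # hypothetical helper

So for the Gluster move/copy failures, the interesting question is why the image was
already missing from the domain at that point, rather than the delete call itself.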
 
 
 
 On Thu, Apr 19, 2018 at 5:34 PM, Elad Ben Aharon 
 wrote:
 
 Triggered a sanity tier1 execution [1] using [2], which covers all the
> requested areas, on iSCSI, NFS and Gluster.
> I'll update with the results.
> 
> [1]
> https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/4.2
> _dev/job/rhv-4.2-ge-flow-storage/1161/
> 
> [2]
> https://gerrit.ovirt.org/#/c/89830/
> vdsm-4.30.0-291.git77aef9a.el7.x86_64
> 
> 
> 
> On Thu, Apr 19, 2018 at 3:07 PM, Martin Polednik 
> wrote:
> 
> On 19/04/18 14:54 +0300, Elad Ben Aharon wrote:
>> 
>> Hi Martin,
>>> 
>>> I see [1] requires a rebase, can you please take care?
>>> 
>>> 
>> Should be rebased.
>> 
>> At the moment, our automation is stable only on iSCSI, NFS, Gluster and
>> 
>>> FC.
>>> Ceph is not supported and Cinder will be stabilized soon, AFAIR, it's
>>> not
>>> stable enough at the moment.
>>> 
>>> 
>> That is still pretty good.
>> 
>> 
>> [1] https://gerrit.ovirt.org/#/c/89830/
>> 
>>> 
>>> 
>>> Thanks
>>> 
>>> On Wed, Apr 18, 2018 at 2:17 PM, Martin Polednik >> >
>>> wrote:
>>> 
>>> On 

Re: [ovirt-devel] dynamic ownership changes

2018-04-27 Thread Martin Polednik

On 24/04/18 00:37 +0300, Elad Ben Aharon wrote:

I will update with the results of the next tier1 execution on latest 4.2.3


That isn't master but old branch though. Could you run it against
*current* VDSM master?


On Mon, Apr 23, 2018 at 3:56 PM, Martin Polednik 
wrote:


On 23/04/18 01:23 +0300, Elad Ben Aharon wrote:


Hi, I've triggered another execution [1] due to some issues I saw in the
first which are not related to the patch.

The success rate is 78% which is low compared to tier1 executions with
code from downstream builds (95-100% success rates) [2].



Could you run the current master (without the dynamic_ownership patch)
so that we have a viable comparison?

From what I could see so far, there is an issue with move and copy

operations to and from Gluster domains. For example [3].

The logs are attached.


[1]
*https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/rhv
-4.2-ge-runner-tier1-after-upgrade/7/testReport/
*



[2]
https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/

rhv-4.2-ge-runner-tier1-after-upgrade/7/



[3]
2018-04-22 13:06:28,316+0300 INFO  (jsonrpc/7) [vdsm.api] FINISH
deleteImage error=Image does not exist in domain:
'image=cabb8846-7a4b-4244-9835-5f603e682f33,
domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4'
from=:
::10.35.161.182,40936, flow_id=disks_syncAction_ba6b2630-5976-4935,
task_id=3d5f2a8a-881c-409e-93e9-aaa643c10e42 (api:51)
2018-04-22 13:06:28,317+0300 ERROR (jsonrpc/7) [storage.TaskManager.Task]
(Task='3d5f2a8a-881c-409e-93e9-aaa643c10e42') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
in
_run
  return fn(*args, **kargs)
File "", line 2, in deleteImage
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 49, in
method
  ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1503,
in
deleteImage
  raise se.ImageDoesNotExistInSD(imgUUID, sdUUID)
ImageDoesNotExistInSD: Image does not exist in domain:
'image=cabb8846-7a4b-4244-9835-5f603e682f33,
domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4'

2018-04-22 13:06:28,317+0300 INFO  (jsonrpc/7) [storage.TaskManager.Task]
(Task='3d5f2a8a-881c-409e-93e9-aaa643c10e42') aborting: Task is aborted:
"Image does not exist in domain: 'image=cabb8846-7a4b-4244-9835-
5f603e682f33, domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4'" - code 268
(task:1181)
2018-04-22 13:06:28,318+0300 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH
deleteImage error=Image does not exist in domain:
'image=cabb8846-7a4b-4244-9835-5f603e682f33,
domain=e5fd29c8-52ba-467e-be09
-ca40ff054d
d4' (dispatcher:82)



On Thu, Apr 19, 2018 at 5:34 PM, Elad Ben Aharon 
wrote:

Triggered a sanity tier1 execution [1] using [2], which covers all the

requested areas, on iSCSI, NFS and Gluster.
I'll update with the results.

[1]
https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/4.2
_dev/job/rhv-4.2-ge-flow-storage/1161/

[2]
https://gerrit.ovirt.org/#/c/89830/
vdsm-4.30.0-291.git77aef9a.el7.x86_64



On Thu, Apr 19, 2018 at 3:07 PM, Martin Polednik 
wrote:

On 19/04/18 14:54 +0300, Elad Ben Aharon wrote:


Hi Martin,


I see [1] requires a rebase, can you please take care?



Should be rebased.

At the moment, our automation is stable only on iSCSI, NFS, Gluster and


FC.
Ceph is not supported and Cinder will be stabilized soon, AFAIR, it's
not
stable enough at the moment.



That is still pretty good.


[1] https://gerrit.ovirt.org/#/c/89830/




Thanks

On Wed, Apr 18, 2018 at 2:17 PM, Martin Polednik 
wrote:

On 18/04/18 11:37 +0300, Elad Ben Aharon wrote:



Hi, sorry if I misunderstood, I waited for more input regarding what


areas
have to be tested here.


I'd say that you have quite a bit of freedom in this regard.

GlusterFS
should be covered by Dennis, so iSCSI/NFS/ceph/cinder with some suite
that covers basic operations (start & stop VM, migrate it), snapshots
and merging them, and whatever else would be important for storage
sanity.

mpolednik


On Wed, Apr 18, 2018 at 11:16 AM, Martin Polednik <
mpoled...@redhat.com
>

wrote:


On 11/04/18 16:52 +0300, Elad Ben Aharon wrote:



We can test this on iSCSI, NFS and GlusterFS. As for ceph and
cinder,

will

have to check, since usually, we don't execute our automation on
them.


Any update on this? I believe the gluster tests were successful,
OST


passes fine and unit tests pass fine, that makes the storage
backends
test the last required piece.


On Wed, Apr 11, 2018 at 4:38 PM, Raz Tamir 
wrote:


+Elad



On Wed, Apr 11, 2018 at 4:28 PM, Dan Kenigsberg 
wrote:

On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer 
wrote:


On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri 

wrote:


Please make sure to 

Re: [ovirt-devel] dynamic ownership changes

2018-04-25 Thread Elad Ben Aharon
Here it is - https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/
rhv-4.2-ge-runner-tier1/122/

This was executed over iSCSI, NFS, Gluster and FCP. It ended up with a 98.43%
success rate.

Two storage-related test cases failed and their failures, at first glance,
don't seem to be bugs.




On Tue, Apr 24, 2018 at 12:37 AM, Elad Ben Aharon 
wrote:

> I will update with the results of the next tier1 execution on latest
> 4.2.3
>
> On Mon, Apr 23, 2018 at 3:56 PM, Martin Polednik 
> wrote:
>
>> On 23/04/18 01:23 +0300, Elad Ben Aharon wrote:
>>
>>> Hi, I've triggered another execution [1] due to some issues I saw in the
>>> first which are not related to the patch.
>>>
>>> The success rate is 78% which is low compared to tier1 executions with
>>> code from downstream builds (95-100% success rates) [2].
>>>
>>
>> Could you run the current master (without the dynamic_ownership patch)
>> so that we have a viable comparison?
>>
>> From what I could see so far, there is an issue with move and copy
>>> operations to and from Gluster domains. For example [3].
>>>
>>> The logs are attached.
>>>
>>>
>>> [1]
>>> *https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/rhv
>>> -4.2-ge-runner-tier1-after-upgrade/7/testReport/
>>> >> -4.2-ge-runner-tier1-after-upgrade/7/testReport/>*
>>>
>>>
>>>
>>> [2]
>>> https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/
>>>
>>> rhv-4.2-ge-runner-tier1-after-upgrade/7/
>>>
>>>
>>>
>>> [3]
>>> 2018-04-22 13:06:28,316+0300 INFO  (jsonrpc/7) [vdsm.api] FINISH
>>> deleteImage error=Image does not exist in domain:
>>> 'image=cabb8846-7a4b-4244-9835-5f603e682f33,
>>> domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4'
>>> from=:
>>> ::10.35.161.182,40936, flow_id=disks_syncAction_ba6b2630-5976-4935,
>>> task_id=3d5f2a8a-881c-409e-93e9-aaa643c10e42 (api:51)
>>> 2018-04-22 13:06:28,317+0300 ERROR (jsonrpc/7) [storage.TaskManager.Task]
>>> (Task='3d5f2a8a-881c-409e-93e9-aaa643c10e42') Unexpected error
>>> (task:875)
>>> Traceback (most recent call last):
>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
>>> in
>>> _run
>>>   return fn(*args, **kargs)
>>> File "", line 2, in deleteImage
>>> File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 49, in
>>> method
>>>   ret = func(*args, **kwargs)
>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1503,
>>> in
>>> deleteImage
>>>   raise se.ImageDoesNotExistInSD(imgUUID, sdUUID)
>>> ImageDoesNotExistInSD: Image does not exist in domain:
>>> 'image=cabb8846-7a4b-4244-9835-5f603e682f33,
>>> domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4'
>>>
>>> 2018-04-22 13:06:28,317+0300 INFO  (jsonrpc/7) [storage.TaskManager.Task]
>>> (Task='3d5f2a8a-881c-409e-93e9-aaa643c10e42') aborting: Task is aborted:
>>> "Image does not exist in domain: 'image=cabb8846-7a4b-4244-9835-
>>> 5f603e682f33, domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4'" - code 268
>>> (task:1181)
>>> 2018-04-22 13:06:28,318+0300 ERROR (jsonrpc/7) [storage.Dispatcher]
>>> FINISH
>>> deleteImage error=Image does not exist in domain:
>>> 'image=cabb8846-7a4b-4244-9835-5f603e682f33,
>>> domain=e5fd29c8-52ba-467e-be09
>>> -ca40ff054d
>>> d4' (dispatcher:82)
>>>
>>>
>>>
>>> On Thu, Apr 19, 2018 at 5:34 PM, Elad Ben Aharon 
>>> wrote:
>>>
>>> Triggered a sanity tier1 execution [1] using [2], which covers all the
 requested areas, on iSCSI, NFS and Gluster.
 I'll update with the results.

 [1]
 https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/4.2
 _dev/job/rhv-4.2-ge-flow-storage/1161/

 [2]
 https://gerrit.ovirt.org/#/c/89830/
 vdsm-4.30.0-291.git77aef9a.el7.x86_64



 On Thu, Apr 19, 2018 at 3:07 PM, Martin Polednik 
 wrote:

 On 19/04/18 14:54 +0300, Elad Ben Aharon wrote:
>
> Hi Martin,
>>
>> I see [1] requires a rebase, can you please take care?
>>
>>
> Should be rebased.
>
> At the moment, our automation is stable only on iSCSI, NFS, Gluster and
>
>> FC.
>> Ceph is not supported and Cinder will be stabilized soon, AFAIR, it's
>> not
>> stable enough at the moment.
>>
>>
> That is still pretty good.
>
>
> [1] https://gerrit.ovirt.org/#/c/89830/
>
>>
>>
>> Thanks
>>
>> On Wed, Apr 18, 2018 at 2:17 PM, Martin Polednik <
>> mpoled...@redhat.com>
>> wrote:
>>
>> On 18/04/18 11:37 +0300, Elad Ben Aharon wrote:
>>
>>>
>>> Hi, sorry if I misunderstood, I waited for more input regarding what
>>>
 areas
 have to be tested here.


 I'd say that you have quite a bit of freedom in this regard.
>>> GlusterFS
>>> should be covered by Dennis, so iSCSI/NFS/ceph/cinder with some suite
>>> that covers basic operations (start & stop VM, migrate it), 

Re: [ovirt-devel] dynamic ownership changes

2018-04-23 Thread Elad Ben Aharon
I will update with the results of the next tier1 execution on latest 4.2.3

On Mon, Apr 23, 2018 at 3:56 PM, Martin Polednik 
wrote:

> On 23/04/18 01:23 +0300, Elad Ben Aharon wrote:
>
>> Hi, I've triggered another execution [1] due to some issues I saw in the
>> first which are not related to the patch.
>>
>> The success rate is 78% which is low compared to tier1 executions with
>> code from downstream builds (95-100% success rates) [2].
>>
>
> Could you run the current master (without the dynamic_ownership patch)
> so that we have a viable comparison?
>
> From what I could see so far, there is an issue with move and copy
>> operations to and from Gluster domains. For example [3].
>>
>> The logs are attached.
>>
>>
>> [1]
>> *https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/rhv
>> -4.2-ge-runner-tier1-after-upgrade/7/testReport/
>> > -4.2-ge-runner-tier1-after-upgrade/7/testReport/>*
>>
>>
>>
>> [2]
>> https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/
>>
>> rhv-4.2-ge-runner-tier1-after-upgrade/7/
>>
>>
>>
>> [3]
>> 2018-04-22 13:06:28,316+0300 INFO  (jsonrpc/7) [vdsm.api] FINISH
>> deleteImage error=Image does not exist in domain:
>> 'image=cabb8846-7a4b-4244-9835-5f603e682f33,
>> domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4'
>> from=:
>> ::10.35.161.182,40936, flow_id=disks_syncAction_ba6b2630-5976-4935,
>> task_id=3d5f2a8a-881c-409e-93e9-aaa643c10e42 (api:51)
>> 2018-04-22 13:06:28,317+0300 ERROR (jsonrpc/7) [storage.TaskManager.Task]
>> (Task='3d5f2a8a-881c-409e-93e9-aaa643c10e42') Unexpected error (task:875)
>> Traceback (most recent call last):
>> File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
>> in
>> _run
>>   return fn(*args, **kargs)
>> File "", line 2, in deleteImage
>> File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 49, in
>> method
>>   ret = func(*args, **kwargs)
>> File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1503,
>> in
>> deleteImage
>>   raise se.ImageDoesNotExistInSD(imgUUID, sdUUID)
>> ImageDoesNotExistInSD: Image does not exist in domain:
>> 'image=cabb8846-7a4b-4244-9835-5f603e682f33,
>> domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4'
>>
>> 2018-04-22 13:06:28,317+0300 INFO  (jsonrpc/7) [storage.TaskManager.Task]
>> (Task='3d5f2a8a-881c-409e-93e9-aaa643c10e42') aborting: Task is aborted:
>> "Image does not exist in domain: 'image=cabb8846-7a4b-4244-9835-
>> 5f603e682f33, domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4'" - code 268
>> (task:1181)
>> 2018-04-22 13:06:28,318+0300 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH
>> deleteImage error=Image does not exist in domain:
>> 'image=cabb8846-7a4b-4244-9835-5f603e682f33,
>> domain=e5fd29c8-52ba-467e-be09
>> -ca40ff054d
>> d4' (dispatcher:82)
>>
>>
>>
>> On Thu, Apr 19, 2018 at 5:34 PM, Elad Ben Aharon 
>> wrote:
>>
>> Triggered a sanity tier1 execution [1] using [2], which covers all the
>>> requested areas, on iSCSI, NFS and Gluster.
>>> I'll update with the results.
>>>
>>> [1]
>>> https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/4.2
>>> _dev/job/rhv-4.2-ge-flow-storage/1161/
>>>
>>> [2]
>>> https://gerrit.ovirt.org/#/c/89830/
>>> vdsm-4.30.0-291.git77aef9a.el7.x86_64
>>>
>>>
>>>
>>> On Thu, Apr 19, 2018 at 3:07 PM, Martin Polednik 
>>> wrote:
>>>
>>> On 19/04/18 14:54 +0300, Elad Ben Aharon wrote:

 Hi Martin,
>
> I see [1] requires a rebase, can you please take care?
>
>
 Should be rebased.

 At the moment, our automation is stable only on iSCSI, NFS, Gluster and

> FC.
> Ceph is not supported and Cinder will be stabilized soon, AFAIR, it's
> not
> stable enough at the moment.
>
>
 That is still pretty good.


 [1] https://gerrit.ovirt.org/#/c/89830/

>
>
> Thanks
>
> On Wed, Apr 18, 2018 at 2:17 PM, Martin Polednik  >
> wrote:
>
> On 18/04/18 11:37 +0300, Elad Ben Aharon wrote:
>
>>
>> Hi, sorry if I misunderstood, I waited for more input regarding what
>>
>>> areas
>>> have to be tested here.
>>>
>>>
>>> I'd say that you have quite a bit of freedom in this regard.
>> GlusterFS
>> should be covered by Dennis, so iSCSI/NFS/ceph/cinder with some suite
>> that covers basic operations (start & stop VM, migrate it), snapshots
>> and merging them, and whatever else would be important for storage
>> sanity.
>>
>> mpolednik
>>
>>
>> On Wed, Apr 18, 2018 at 11:16 AM, Martin Polednik <
>> mpoled...@redhat.com
>> >
>>
>> wrote:
>>>
>>> On 11/04/18 16:52 +0300, Elad Ben Aharon wrote:
>>>
>>>
 We can test this on iSCSI, NFS and GlusterFS. As for ceph and
 cinder,

 will
> have to check, since usually, we don't execute our automation on
> 

Re: [ovirt-devel] dynamic ownership changes

2018-04-23 Thread Martin Polednik

On 23/04/18 01:23 +0300, Elad Ben Aharon wrote:

Hi, I've triggered another execution [1] due to some issues I saw in the
first which are not related to the patch.

The success rate is 78% which is low compared to tier1 executions with
code from downstream builds (95-100% success rates) [2].


Could you run the current master (without the dynamic_ownership patch)
so that we have a viable comparison?


From what I could see so far, there is an issue with move and copy
operations to and from Gluster domains. For example [3].

The logs are attached.


[1]
*https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/rhv-4.2-ge-runner-tier1-after-upgrade/7/testReport/
*



[2]
https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/
rhv-4.2-ge-runner-tier1-after-upgrade/7/



[3]
2018-04-22 13:06:28,316+0300 INFO  (jsonrpc/7) [vdsm.api] FINISH
deleteImage error=Image does not exist in domain:
'image=cabb8846-7a4b-4244-9835-5f603e682f33,
domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4'
from=:
::10.35.161.182,40936, flow_id=disks_syncAction_ba6b2630-5976-4935,
task_id=3d5f2a8a-881c-409e-93e9-aaa643c10e42 (api:51)
2018-04-22 13:06:28,317+0300 ERROR (jsonrpc/7) [storage.TaskManager.Task]
(Task='3d5f2a8a-881c-409e-93e9-aaa643c10e42') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in
_run
  return fn(*args, **kargs)
File "", line 2, in deleteImage
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 49, in
method
  ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1503, in
deleteImage
  raise se.ImageDoesNotExistInSD(imgUUID, sdUUID)
ImageDoesNotExistInSD: Image does not exist in domain:
'image=cabb8846-7a4b-4244-9835-5f603e682f33,
domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4'

2018-04-22 13:06:28,317+0300 INFO  (jsonrpc/7) [storage.TaskManager.Task]
(Task='3d5f2a8a-881c-409e-93e9-aaa643c10e42') aborting: Task is aborted:
"Image does not exist in domain: 'image=cabb8846-7a4b-4244-9835-
5f603e682f33, domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4'" - code 268
(task:1181)
2018-04-22 13:06:28,318+0300 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH
deleteImage error=Image does not exist in domain:
'image=cabb8846-7a4b-4244-9835-5f603e682f33, domain=e5fd29c8-52ba-467e-be09
-ca40ff054d
d4' (dispatcher:82)



On Thu, Apr 19, 2018 at 5:34 PM, Elad Ben Aharon 
wrote:


Triggered a sanity tier1 execution [1] using [2], which covers all the
requested areas, on iSCSI, NFS and Gluster.
I'll update with the results.

[1]
https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/4.2
_dev/job/rhv-4.2-ge-flow-storage/1161/

[2]
https://gerrit.ovirt.org/#/c/89830/
vdsm-4.30.0-291.git77aef9a.el7.x86_64



On Thu, Apr 19, 2018 at 3:07 PM, Martin Polednik 
wrote:


On 19/04/18 14:54 +0300, Elad Ben Aharon wrote:


Hi Martin,

I see [1] requires a rebase, can you please take care?



Should be rebased.

At the moment, our automation is stable only on iSCSI, NFS, Gluster and

FC.
Ceph is not supported and Cinder will be stabilized soon, AFAIR, it's not
stable enough at the moment.



That is still pretty good.


[1] https://gerrit.ovirt.org/#/c/89830/



Thanks

On Wed, Apr 18, 2018 at 2:17 PM, Martin Polednik 
wrote:

On 18/04/18 11:37 +0300, Elad Ben Aharon wrote:


Hi, sorry if I misunderstood, I waited for more input regarding what

areas
have to be tested here.



I'd say that you have quite a bit of freedom in this regard. GlusterFS
should be covered by Dennis, so iSCSI/NFS/ceph/cinder with some suite
that covers basic operations (start & stop VM, migrate it), snapshots
and merging them, and whatever else would be important for storage
sanity.

mpolednik


On Wed, Apr 18, 2018 at 11:16 AM, Martin Polednik 


wrote:

On 11/04/18 16:52 +0300, Elad Ben Aharon wrote:



We can test this on iSCSI, NFS and GlusterFS. As for ceph and cinder,


will
have to check, since usually, we don't execute our automation on
them.


Any update on this? I believe the gluster tests were successful, OST

passes fine and unit tests pass fine, that makes the storage backends
test the last required piece.


On Wed, Apr 11, 2018 at 4:38 PM, Raz Tamir 
wrote:



+Elad



On Wed, Apr 11, 2018 at 4:28 PM, Dan Kenigsberg 
wrote:

On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer 
wrote:



On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri 
wrote:



Please make sure to run as much OST suites on this patch as
possible

before merging ( using 'ci please build' )



But note that OST is not a way to verify the patch.



Such changes require testing with all storage types we support.

Nir

On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik <
mpoled...@redhat.com
>

wrote:



Hey,

Re: [ovirt-devel] dynamic ownership changes

2018-04-22 Thread Elad Ben Aharon
Also, snapshot preview failed (2nd snapshot):

2018-04-22 18:01:06,253+0300 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
call Volume.create succeeded in 0.84 seconds (__init__:311)
2018-04-22 18:01:06,261+0300 INFO  (tasks/6)
[storage.ThreadPool.WorkerThread] START task
6823d724-cb1b-4706-a58a-83428363cce5 (cmd=>, args=None) (threadPool:208)
2018-04-22 18:01:06,906+0300 WARN  (check/loop) [storage.asyncutils] Call
> delayed by 0.51 seconds
(asyncutils:138)
2018-04-22 18:01:07,082+0300 WARN  (tasks/6) [storage.ResourceManager]
Resource factory failed to create resource
'01_img_7df9d2b2-52b5-4ac2-a9f0-a1d1e93eb6d2.095ad9d6-3154-449c-868c-f975dcdcb729'.
Canceling request. (resourceManager:543
)
Traceback (most recent call last):
 File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py",
line 539, in registerResource
   obj = namespaceObj.factory.createResource(name, lockType)
 File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py",
line 193, in createResource
   lockType)
 File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py",
line 122, in __getResourceCandidatesList
   imgUUID=resourceName)
 File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 198,
in getChain
   uuidlist = volclass.getImageVolumes(sdUUID, imgUUID)
 File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 1537,
in getImageVolumes
   return cls.manifestClass.getImageVolumes(sdUUID, imgUUID)
 File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line
337, in getImageVolumes
   if (sd.produceVolume(imgUUID, volid).getImage() == imgUUID):
 File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 438, in
produceVolume
   volUUID)
 File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line
69, in __init__
   volUUID)
 File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 86,
in __init__
   self.validate()
 File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 112,
in validate
   self.validateVolumePath()
 File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line
129, in validateVolumePath
   raise se.VolumeDoesNotExist(self.volUUID)
VolumeDoesNotExist: Volume does not exist:
(u'a404bfc9-57ef-4dcc-9f1b-458dfb08ad74',)
2018-04-22 18:01:07,083+0300 WARN  (tasks/6)
[storage.ResourceManager.Request]
(ResName='01_img_7df9d2b2-52b5-4ac2-a9f0-a1d1e93eb6d2.095ad9d6-3154-449c-868c-f975dcdcb729',
ReqID='79c96e70-7334-4402-a390-dc87f939b7d2') Tried to cancel a p
rocessed request (resourceManager:187)
2018-04-22 18:01:07,084+0300 ERROR (tasks/6) [storage.TaskManager.Task]
(Task='6823d724-cb1b-4706-a58a-83428363cce5') Unexpected error (task:875)
Traceback (most recent call last):
 File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in
_run
   return fn(*args, **kargs)
 File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 336, in
run
   return self.cmd(*self.argslist, **self.argsdict)
 File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line
79, in wrapper
   return method(self, *args, **kwargs)
 File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1939, in
createVolume
   with rm.acquireResource(img_ns, imgUUID, rm.EXCLUSIVE):
 File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py",
line 1025, in acquireResource
   return _manager.acquireResource(namespace, name, lockType,
timeout=timeout)
 File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py",
line 475, in acquireResource
   raise se.ResourceAcqusitionFailed()
ResourceAcqusitionFailed: Could not acquire resource. Probably resource
factory threw an exception.: ()
2018-04-22 18:01:07,735+0300 INFO  (tasks/6)
[storage.ThreadPool.WorkerThread] FINISH task
6823d724-cb1b-4706-a58a-83428363cce5 (threadPool:210)
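
Reading the two tracebacks together: createVolume tried to take an exclusive lock on the
image, the resource factory walked the image's volume chain while building that resource,
hit a volume (a404bfc9-57ef-4dcc-9f1b-458dfb08ad74) whose path no longer exists, and that
VolumeDoesNotExist is what surfaces to the caller as the generic ResourceAcqusitionFailed
("Probably resource factory threw an exception"). A condensed sketch of the flow, with
made-up helper names, just to show where the original error gets swallowed:

    # illustrative only -- not actual vdsm code
    class ResourceAcqusitionFailed(Exception):
        pass

    def acquire_image_resource(factory, img_uuid, lock_type):
        try:
            # the factory builds the image's volume chain here; this is where
            # VolumeDoesNotExist was raised in the WARN traceback above
            return factory.create_resource(img_uuid, lock_type)
        except Exception:
            # the factory failure is converted into a generic error, which is
            # all that the ERROR traceback shows to createVolume
            raise ResourceAcqusitionFailed()

So the real question for the snapshot-preview failure is why volume
a404bfc9-57ef-4dcc-9f1b-458dfb08ad74 was missing when the chain was walked.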



Steps from [1]:

2018-04-22 17:54:41,574 INFO   Test Setup   2: Creating VM vm_TestCase11660_2217544157
2018-04-22 17:54:55,593 INFO   049: storage/rhevmtests.storage.storage_snapshots.test_live_snapshot.TestCase11660.test_live_snapshot[glusterfs]
2018-04-22 17:54:55,593 INFO   Create a snapshot while VM is running
2018-04-22 17:54:55,593 INFO   STORAGE: GLUSTERFS
2018-04-22 17:58:04,761 INFO   Test Step   3: Start writing continuously on VM vm_TestCase11660_2217544157 via dd
2018-04-22 17:58:35,334 INFO   Test Step   4: Creating live snapshot on a VM vm_TestCase11660_2217544157
2018-04-22 17:58:35,334 INFO   Test Step   5: Adding new snapshot to VM vm_TestCase11660_2217544157 with all disks
2018-04-22 17:58:35,337 INFO   Test Step   6: Add snapshot to VM vm_TestCase11660_2217544157 with {'description': 'snap_TestCase11660_2217545559', 'wait': True}
2018-04-22 17:59:26,179 INFO   Test Step   7: Writing files to VM's vm_TestCase11660_2217544157 disk
2018-04-22 18:00:33,117 INFO   Test Step   8: Shutdown vm 

Re: [ovirt-devel] dynamic ownership changes

2018-04-22 Thread Elad Ben Aharon
Sorry, this is the new execution link:
https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/rhv-4.2-ge-runner-storage/1048/testReport/

On Mon, Apr 23, 2018 at 1:23 AM, Elad Ben Aharon 
wrote:

> Hi, I've triggered another execution [1] due to some issues I saw in the
> first which are not related to the patch.
>
> The success rate is 78% which is low compared to tier1 executions with
> code from downstream builds (95-100% success rates) [2].
>
> From what I could see so far, there is an issue with move and copy
> operations to and from Gluster domains. For example [3].
>
> The logs are attached.
>
>
> [1]
> *https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/rhv-4.2-ge-runner-tier1-after-upgrade/7/testReport/
> *
>
>
>
> [2]
> https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/rhv-
> 4.2-ge-runner-tier1-after-upgrade/7/
>
>
>
> [3]
> 2018-04-22 13:06:28,316+0300 INFO  (jsonrpc/7) [vdsm.api] FINISH
> deleteImage error=Image does not exist in domain:
> 'image=cabb8846-7a4b-4244-9835-5f603e682f33,
> domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4' from=:
> ::10.35.161.182,40936, flow_id=disks_syncAction_ba6b2630-5976-4935,
> task_id=3d5f2a8a-881c-409e-93e9-aaa643c10e42 (api:51)
> 2018-04-22 13:06:28,317+0300 ERROR (jsonrpc/7) [storage.TaskManager.Task]
> (Task='3d5f2a8a-881c-409e-93e9-aaa643c10e42') Unexpected error (task:875)
> Traceback (most recent call last):
>  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run
>return fn(*args, **kargs)
>  File "", line 2, in deleteImage
>  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 49, in
> method
>ret = func(*args, **kwargs)
>  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1503,
> in deleteImage
>raise se.ImageDoesNotExistInSD(imgUUID, sdUUID)
> ImageDoesNotExistInSD: Image does not exist in domain:
> 'image=cabb8846-7a4b-4244-9835-5f603e682f33,
> domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4'
> 2018-04-22 13:06:28,317+0300 INFO  (jsonrpc/7) [storage.TaskManager.Task]
> (Task='3d5f2a8a-881c-409e-93e9-aaa643c10e42') aborting: Task is aborted:
> "Image does not exist in domain: 'image=cabb8846-7a4b-4244-9835-
> 5f603e682f33, domain=e5fd29c8-52ba-467e-be09-ca40ff054dd4'" - code 268
> (task:1181)
> 2018-04-22 13:06:28,318+0300 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH
> deleteImage error=Image does not exist in domain:
> 'image=cabb8846-7a4b-4244-9835-5f603e682f33,
> domain=e5fd29c8-52ba-467e-be09-ca40ff054d
> d4' (dispatcher:82)
>
>
>
> On Thu, Apr 19, 2018 at 5:34 PM, Elad Ben Aharon 
> wrote:
>
>> Triggered a sanity tier1 execution [1] using [2], which covers all the
>> requested areas, on iSCSI, NFS and Gluster.
>> I'll update with the results.
>>
>> [1]
>> https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/4.2
>> _dev/job/rhv-4.2-ge-flow-storage/1161/
>>
>> [2]
>> https://gerrit.ovirt.org/#/c/89830/
>> vdsm-4.30.0-291.git77aef9a.el7.x86_64
>>
>>
>>
>> On Thu, Apr 19, 2018 at 3:07 PM, Martin Polednik 
>> wrote:
>>
>>> On 19/04/18 14:54 +0300, Elad Ben Aharon wrote:
>>>
 Hi Martin,

 I see [1] requires a rebase, can you please take care?

>>>
>>> Should be rebased.
>>>
>>> At the moment, our automation is stable only on iSCSI, NFS, Gluster and
 FC.
 Ceph is not supported and Cinder will be stabilized soon, AFAIR, it's
 not
 stable enough at the moment.

>>>
>>> That is still pretty good.
>>>
>>>
>>> [1] https://gerrit.ovirt.org/#/c/89830/


 Thanks

 On Wed, Apr 18, 2018 at 2:17 PM, Martin Polednik 
 wrote:

 On 18/04/18 11:37 +0300, Elad Ben Aharon wrote:
>
> Hi, sorry if I misunderstood, I waited for more input regarding what
>> areas
>> have to be tested here.
>>
>>
> I'd say that you have quite a bit of freedom in this regard. GlusterFS
> should be covered by Dennis, so iSCSI/NFS/ceph/cinder with some suite
> that covers basic operations (start & stop VM, migrate it), snapshots
> and merging them, and whatever else would be important for storage
> sanity.
>
> mpolednik
>
>
> On Wed, Apr 18, 2018 at 11:16 AM, Martin Polednik <
> mpoled...@redhat.com>
>
>> wrote:
>>
>> On 11/04/18 16:52 +0300, Elad Ben Aharon wrote:
>>
>>>
>>> We can test this on iSCSI, NFS and GlusterFS. As for ceph and cinder,
>>>
 will
 have to check, since usually, we don't execute our automation on
 them.


 Any update on this? I believe the gluster tests were successful, OST
>>> passes fine and unit tests pass fine, that makes the storage backends
>>> test the last required piece.
>>>
>>>
>>> On Wed, Apr 11, 2018 at 4:38 PM, Raz Tamir 
>>> 

Re: [ovirt-devel] dynamic ownership changes

2018-04-22 Thread Raz Tamir
+Elad

On Wed, Apr 11, 2018 at 4:28 PM, Dan Kenigsberg  wrote:

> On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer  wrote:
>
>> On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:
>>
>>> Please make sure to run as much OST suites on this patch as possible
>>> before merging ( using 'ci please build' )
>>>
>>
>> But note that OST is not a way to verify the patch.
>>
>> Such changes require testing with all storage types we support.
>>
>> Nir
>>
>> On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 
>>> wrote:
>>>
 Hey,

 I've created a patch[0] that is finally able to activate libvirt's
 dynamic_ownership for VDSM while not negatively affecting
 functionality of our storage code.

 That of course comes with quite a bit of code removal, mostly in the
 area of host devices, hwrng and anything that touches devices; bunch
 of test changes and one XML generation caveat (storage is handled by
 VDSM, therefore disk relabelling needs to be disabled on the VDSM
 level).

 Because of the scope of the patch, I welcome storage/virt/network
 people to review the code and consider the implication this change has
 on current/future features.

 [0] https://gerrit.ovirt.org/#/c/89830/

>>>
> In particular:  dynamic_ownership was set to 0 prehistorically (as part of
> https://bugzilla.redhat.com/show_bug.cgi?id=554961 ) because libvirt,
> running as root, was not able to play properly with root-squash nfs mounts.
>
> Have you attempted this use case?
>
> I join to Nir's request to run this with storage QE.
>



-- 


Raz Tamir
Manager, RHV QE

Re: [ovirt-devel] dynamic ownership changes

2018-04-19 Thread Elad Ben Aharon
Triggered a sanity tier1 execution [1] using [2], which covers all the
requested areas, on iSCSI, NFS and Gluster.
I'll update with the results.

[1]
https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/4.
2_dev/job/rhv-4.2-ge-flow-storage/1161/

[2]
https://gerrit.ovirt.org/#/c/89830/
vdsm-4.30.0-291.git77aef9a.el7.x86_64



On Thu, Apr 19, 2018 at 3:07 PM, Martin Polednik 
wrote:

> On 19/04/18 14:54 +0300, Elad Ben Aharon wrote:
>
>> Hi Martin,
>>
>> I see [1] requires a rebase, can you please take care?
>>
>
> Should be rebased.
>
> At the moment, our automation is stable only on iSCSI, NFS, Gluster and FC.
>> Ceph is not supported and Cinder will be stabilized soon, AFAIR, it's not
>> stable enough at the moment.
>>
>
> That is still pretty good.
>
>
> [1] https://gerrit.ovirt.org/#/c/89830/
>>
>>
>> Thanks
>>
>> On Wed, Apr 18, 2018 at 2:17 PM, Martin Polednik 
>> wrote:
>>
>> On 18/04/18 11:37 +0300, Elad Ben Aharon wrote:
>>>
>>> Hi, sorry if I misunderstood, I waited for more input regarding what
 areas
 have to be tested here.


>>> I'd say that you have quite a bit of freedom in this regard. GlusterFS
>>> should be covered by Dennis, so iSCSI/NFS/ceph/cinder with some suite
>>> that covers basic operations (start & stop VM, migrate it), snapshots
>>> and merging them, and whatever else would be important for storage
>>> sanity.
>>>
>>> mpolednik
>>>
>>>
>>> On Wed, Apr 18, 2018 at 11:16 AM, Martin Polednik 
>>>
 wrote:

 On 11/04/18 16:52 +0300, Elad Ben Aharon wrote:

>
> We can test this on iSCSI, NFS and GlusterFS. As for ceph and cinder,
>
>> will
>> have to check, since usually, we don't execute our automation on them.
>>
>>
>> Any update on this? I believe the gluster tests were successful, OST
> passes fine and unit tests pass fine, that makes the storage backends
> test the last required piece.
>
>
> On Wed, Apr 11, 2018 at 4:38 PM, Raz Tamir  wrote:
>
>
>> +Elad
>>
>>
>>> On Wed, Apr 11, 2018 at 4:28 PM, Dan Kenigsberg 
>>> wrote:
>>>
>>> On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer 
>>> wrote:
>>>
>>>
 On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri 
 wrote:


> Please make sure to run as much OST suites on this patch as
> possible
>
> before merging ( using 'ci please build' )
>>
>>
>> But note that OST is not a way to verify the patch.
>>
>
> Such changes require testing with all storage types we support.
>
> Nir
>
> On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik <
> mpoled...@redhat.com
> >
>
> wrote:
>
>>
>> Hey,
>>
>>
>>> I've created a patch[0] that is finally able to activate
>>> libvirt's
>>> dynamic_ownership for VDSM while not negatively affecting
>>> functionality of our storage code.
>>>
>>> That of course comes with quite a bit of code removal, mostly in
>>> the
>>> area of host devices, hwrng and anything that touches devices;
>>> bunch
>>> of test changes and one XML generation caveat (storage is handled
>>> by
>>> VDSM, therefore disk relabelling needs to be disabled on the VDSM
>>> level).
>>>
>>> Because of the scope of the patch, I welcome storage/virt/network
>>> people to review the code and consider the implication this
>>> change
>>> has
>>> on current/future features.
>>>
>>> [0] https://gerrit.ovirt.org/#/c/89830/
>>>
>>>
>>> In particular:  dynamic_ownership was set to 0 prehistorically
>>> (as
>>>
>>
>> part
>
 of https://bugzilla.redhat.com/show_bug.cgi?id=554961 ) because
 libvirt,
 running as root, was not able to play properly with root-squash nfs
 mounts.

 Have you attempted this use case?

 I join to Nir's request to run this with storage QE.




>>> --
>>>
>>>
>>> Raz Tamir
>>> Manager, RHV QE
>>>
>>>
>>>
>>>

Re: [ovirt-devel] dynamic ownership changes

2018-04-19 Thread Martin Polednik

On 19/04/18 14:54 +0300, Elad Ben Aharon wrote:

Hi Martin,

I see [1] requires a rebase, can you please take care?


Should be rebased.


At the moment, our automation is stable only on iSCSI, NFS, Gluster and FC.
Ceph is not supported and Cinder will be stabilized soon, AFAIR, it's not
stable enough at the moment.


That is still pretty good.


[1] https://gerrit.ovirt.org/#/c/89830/


Thanks

On Wed, Apr 18, 2018 at 2:17 PM, Martin Polednik 
wrote:


On 18/04/18 11:37 +0300, Elad Ben Aharon wrote:


Hi, sorry if I misunderstood, I waited for more input regarding what areas
have to be tested here.



I'd say that you have quite a bit of freedom in this regard. GlusterFS
should be covered by Dennis, so iSCSI/NFS/ceph/cinder with some suite
that covers basic operations (start & stop VM, migrate it), snapshots
and merging them, and whatever else would be important for storage
sanity.

mpolednik


On Wed, Apr 18, 2018 at 11:16 AM, Martin Polednik 

wrote:

On 11/04/18 16:52 +0300, Elad Ben Aharon wrote:


We can test this on iSCSI, NFS and GlusterFS. As for ceph and cinder,

will
have to check, since usually, we don't execute our automation on them.



Any update on this? I believe the gluster tests were successful, OST
passes fine and unit tests pass fine, that makes the storage backends
test the last required piece.


On Wed, Apr 11, 2018 at 4:38 PM, Raz Tamir  wrote:



+Elad



On Wed, Apr 11, 2018 at 4:28 PM, Dan Kenigsberg 
wrote:

On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer 
wrote:



On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:



Please make sure to run as much OST suites on this patch as possible


before merging ( using 'ci please build' )


But note that OST is not a way to verify the patch.


Such changes require testing with all storage types we support.

Nir

On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik <
mpoled...@redhat.com
>

wrote:


Hey,



I've created a patch[0] that is finally able to activate libvirt's
dynamic_ownership for VDSM while not negatively affecting
functionality of our storage code.

That of course comes with quite a bit of code removal, mostly in
the
area of host devices, hwrng and anything that touches devices;
bunch
of test changes and one XML generation caveat (storage is handled
by
VDSM, therefore disk relabelling needs to be disabled on the VDSM
level).

Because of the scope of the patch, I welcome storage/virt/network
people to review the code and consider the implication this change
has
on current/future features.

[0] https://gerrit.ovirt.org/#/c/89830/


In particular:  dynamic_ownership was set to 0 prehistorically (as



part

of https://bugzilla.redhat.com/show_bug.cgi?id=554961 ) because
libvirt,
running as root, was not able to play properly with root-squash nfs
mounts.

Have you attempted this use case?

I join to Nir's request to run this with storage QE.





--


Raz Tamir
Manager, RHV QE






Re: [ovirt-devel] dynamic ownership changes

2018-04-19 Thread Elad Ben Aharon
Hi Martin,

I see [1] requires a rebase, can you please take care?

At the moment, our automation is stable only on iSCSI, NFS, Gluster and FC.
Ceph is not supported and Cinder will be stabilized soon, AFAIR, it's not
stable enough at the moment.


[1] https://gerrit.ovirt.org/#/c/89830/


Thanks

On Wed, Apr 18, 2018 at 2:17 PM, Martin Polednik 
wrote:

> On 18/04/18 11:37 +0300, Elad Ben Aharon wrote:
>
>> Hi, sorry if I misunderstood, I waited for more input regarding what areas
>> have to be tested here.
>>
>
> I'd say that you have quite a bit of freedom in this regard. GlusterFS
> should be covered by Dennis, so iSCSI/NFS/ceph/cinder with some suite
> that covers basic operations (start & stop VM, migrate it), snapshots
> and merging them, and whatever else would be important for storage
> sanity.
>
> mpolednik
>
>
> On Wed, Apr 18, 2018 at 11:16 AM, Martin Polednik 
>> wrote:
>>
>> On 11/04/18 16:52 +0300, Elad Ben Aharon wrote:
>>>
>>> We can test this on iSCSI, NFS and GlusterFS. As for ceph and cinder,
 will
 have to check, since usually, we don't execute our automation on them.


>>> Any update on this? I believe the gluster tests were successful, OST
>>> passes fine and unit tests pass fine, that makes the storage backends
>>> test the last required piece.
>>>
>>>
>>> On Wed, Apr 11, 2018 at 4:38 PM, Raz Tamir  wrote:
>>>

 +Elad

>
> On Wed, Apr 11, 2018 at 4:28 PM, Dan Kenigsberg 
> wrote:
>
> On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer 
> wrote:
>
>>
>> On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:
>>
>>>
>>> Please make sure to run as much OST suites on this patch as possible
>>>
 before merging ( using 'ci please build' )


 But note that OST is not a way to verify the patch.
>>>
>>> Such changes require testing with all storage types we support.
>>>
>>> Nir
>>>
>>> On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik <
>>> mpoled...@redhat.com
>>> >
>>>
>>> wrote:

 Hey,

>
> I've created a patch[0] that is finally able to activate libvirt's
> dynamic_ownership for VDSM while not negatively affecting
> functionality of our storage code.
>
> That of course comes with quite a bit of code removal, mostly in
> the
> area of host devices, hwrng and anything that touches devices;
> bunch
> of test changes and one XML generation caveat (storage is handled
> by
> VDSM, therefore disk relabelling needs to be disabled on the VDSM
> level).
>
> Because of the scope of the patch, I welcome storage/virt/network
> people to review the code and consider the implication this change
> has
> on current/future features.
>
> [0] https://gerrit.ovirt.org/#/c/89830/
>
>
> In particular:  dynamic_ownership was set to 0 prehistorically (as

>>> part
>> of https://bugzilla.redhat.com/show_bug.cgi?id=554961 ) because
>> libvirt,
>> running as root, was not able to play properly with root-squash nfs
>> mounts.
>>
>> Have you attempted this use case?
>>
>> I join to Nir's request to run this with storage QE.
>>
>>
>>
>
> --
>
>
> Raz Tamir
> Manager, RHV QE
>
>
>

Re: [ovirt-devel] dynamic ownership changes

2018-04-18 Thread Martin Polednik

On 18/04/18 11:37 +0300, Elad Ben Aharon wrote:

Hi, sorry if I misunderstood, I waited for more input regarding what areas
have to be tested here.


I'd say that you have quite a bit of freedom in this regard. GlusterFS
should be covered by Dennis, so iSCSI/NFS/ceph/cinder with some suite
that covers basic operations (start & stop VM, migrate it), snapshots
and merging them, and whatever else would be important for storage
sanity.

mpolednik


On Wed, Apr 18, 2018 at 11:16 AM, Martin Polednik 
wrote:


On 11/04/18 16:52 +0300, Elad Ben Aharon wrote:


We can test this on iSCSI, NFS and GlusterFS. As for ceph and cinder, will
have to check, since usually, we don't execute our automation on them.



Any update on this? I believe the gluster tests were successful, OST
passes fine and unit tests pass fine, that makes the storage backends
test the last required piece.


On Wed, Apr 11, 2018 at 4:38 PM, Raz Tamir  wrote:


+Elad


On Wed, Apr 11, 2018 at 4:28 PM, Dan Kenigsberg 
wrote:

On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer  wrote:


On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:


Please make sure to run as much OST suites on this patch as possible

before merging ( using 'ci please build' )



But note that OST is not a way to verify the patch.

Such changes require testing with all storage types we support.

Nir

On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 


wrote:

Hey,


I've created a patch[0] that is finally able to activate libvirt's
dynamic_ownership for VDSM while not negatively affecting
functionality of our storage code.

That of course comes with quite a bit of code removal, mostly in the
area of host devices, hwrng and anything that touches devices; bunch
of test changes and one XML generation caveat (storage is handled by
VDSM, therefore disk relabelling needs to be disabled on the VDSM
level).

Because of the scope of the patch, I welcome storage/virt/network
people to review the code and consider the implication this change
has
on current/future features.

[0] https://gerrit.ovirt.org/#/c/89830/



In particular:  dynamic_ownership was set to 0 prehistorically (as

part
of https://bugzilla.redhat.com/show_bug.cgi?id=554961 ) because
libvirt,
running as root, was not able to play properly with root-squash nfs
mounts.

Have you attempted this use case?

I join to Nir's request to run this with storage QE.





--


Raz Tamir
Manager, RHV QE





Re: [ovirt-devel] dynamic ownership changes

2018-04-18 Thread Elad Ben Aharon
Hi, sorry if I misunderstood, I waited for more input regarding what areas
have to be tested here.

On Wed, Apr 18, 2018 at 11:16 AM, Martin Polednik 
wrote:

> On 11/04/18 16:52 +0300, Elad Ben Aharon wrote:
>
>> We can test this on iSCSI, NFS and GlusterFS. As for ceph and cinder, will
>> have to check, since usually, we don't execute our automation on them.
>>
>
> Any update on this? I believe the gluster tests were successful, OST
> passes fine and unit tests pass fine, that makes the storage backends
> test the last required piece.
>
>
> On Wed, Apr 11, 2018 at 4:38 PM, Raz Tamir  wrote:
>>
>> +Elad
>>>
>>> On Wed, Apr 11, 2018 at 4:28 PM, Dan Kenigsberg 
>>> wrote:
>>>
>>> On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer  wrote:

 On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:
>
> Please make sure to run as much OST suites on this patch as possible
>> before merging ( using 'ci please build' )
>>
>>
> But note that OST is not a way to verify the patch.
>
> Such changes require testing with all storage types we support.
>
> Nir
>
> On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik  >
>
>> wrote:
>>
>> Hey,
>>>
>>> I've created a patch[0] that is finally able to activate libvirt's
>>> dynamic_ownership for VDSM while not negatively affecting
>>> functionality of our storage code.
>>>
>>> That of course comes with quite a bit of code removal, mostly in the
>>> area of host devices, hwrng and anything that touches devices; bunch
>>> of test changes and one XML generation caveat (storage is handled by
>>> VDSM, therefore disk relabelling needs to be disabled on the VDSM
>>> level).
>>>
>>> Because of the scope of the patch, I welcome storage/virt/network
>>> people to review the code and consider the implication this change
>>> has
>>> on current/future features.
>>>
>>> [0] https://gerrit.ovirt.org/#/c/89830/
>>>
>>>
>> In particular:  dynamic_ownership was set to 0 prehistorically (as
 part
 of https://bugzilla.redhat.com/show_bug.cgi?id=554961 ) because
 libvirt,
 running as root, was not able to play properly with root-squash nfs
 mounts.

 Have you attempted this use case?

 I join to Nir's request to run this with storage QE.


>>>
>>>
>>> --
>>>
>>>
>>> Raz Tamir
>>> Manager, RHV QE
>>>
>>>

Re: [ovirt-devel] dynamic ownership changes

2018-04-18 Thread Martin Polednik

On 11/04/18 16:52 +0300, Elad Ben Aharon wrote:

We can test this on iSCSI, NFS and GlusterFS. As for ceph and cinder, will
have to check, since usually, we don't execute our automation on them.


Any update on this? I believe the gluster tests were successful, OST
passes fine and unit tests pass fine, that makes the storage backends
test the last required piece.


On Wed, Apr 11, 2018 at 4:38 PM, Raz Tamir  wrote:


+Elad

On Wed, Apr 11, 2018 at 4:28 PM, Dan Kenigsberg  wrote:


On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer  wrote:


On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:


Please make sure to run as much OST suites on this patch as possible
before merging ( using 'ci please build' )



But note that OST is not a way to verify the patch.

Such changes require testing with all storage types we support.

Nir

On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 

wrote:


Hey,

I've created a patch[0] that is finally able to activate libvirt's
dynamic_ownership for VDSM while not negatively affecting
functionality of our storage code.

That of course comes with quite a bit of code removal, mostly in the
area of host devices, hwrng and anything that touches devices; bunch
of test changes and one XML generation caveat (storage is handled by
VDSM, therefore disk relabelling needs to be disabled on the VDSM
level).

Because of the scope of the patch, I welcome storage/virt/network
people to review the code and consider the implication this change has
on current/future features.

[0] https://gerrit.ovirt.org/#/c/89830/




In particular:  dynamic_ownership was set to 0 prehistorically (as part
of https://bugzilla.redhat.com/show_bug.cgi?id=554961 ) because libvirt,
running as root, was not able to play properly with root-squash nfs mounts.

Have you attempted this use case?

I join to Nir's request to run this with storage QE.





--


Raz Tamir
Manager, RHV QE




Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Elad Ben Aharon
We can test this on iSCSI, NFS and GlusterFS. As for ceph and cinder, will
have to check, since usually, we don't execute our automation on them.

On Wed, Apr 11, 2018 at 4:38 PM, Raz Tamir  wrote:

> +Elad
>
> On Wed, Apr 11, 2018 at 4:28 PM, Dan Kenigsberg  wrote:
>
>> On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer  wrote:
>>
>>> On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:
>>>
 Please make sure to run as much OST suites on this patch as possible
 before merging ( using 'ci please build' )

>>>
>>> But note that OST is not a way to verify the patch.
>>>
>>> Such changes require testing with all storage types we support.
>>>
>>> Nir
>>>
>>> On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 
 wrote:

> Hey,
>
> I've created a patch[0] that is finally able to activate libvirt's
> dynamic_ownership for VDSM while not negatively affecting
> functionality of our storage code.
>
> That of course comes with quite a bit of code removal, mostly in the
> area of host devices, hwrng and anything that touches devices; bunch
> of test changes and one XML generation caveat (storage is handled by
> VDSM, therefore disk relabelling needs to be disabled on the VDSM
> level).
>
> Because of the scope of the patch, I welcome storage/virt/network
> people to review the code and consider the implication this change has
> on current/future features.
>
> [0] https://gerrit.ovirt.org/#/c/89830/
>

>> In particular:  dynamic_ownership was set to 0 prehistorically (as part
>> of https://bugzilla.redhat.com/show_bug.cgi?id=554961 ) because libvirt,
>> running as root, was not able to play properly with root-squash nfs mounts.
>>
>> Have you attempted this use case?
>>
>> I join to Nir's request to run this with storage QE.
>>
>
>
>
> --
>
>
> Raz Tamir
> Manager, RHV QE
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Martin Polednik

On 11/04/18 16:28 +0300, Dan Kenigsberg wrote:

On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer  wrote:


On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:


Please make sure to run as many OST suites on this patch as possible
before merging (using 'ci please build')



But note that OST is not a way to verify the patch.

Such changes require testing with all storage types we support.

Nir

On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 

wrote:


Hey,

I've created a patch[0] that is finally able to activate libvirt's
dynamic_ownership for VDSM while not negatively affecting
functionality of our storage code.

That of course comes with quite a bit of code removal, mostly in the
area of host devices, hwrng and anything that touches devices; bunch
of test changes and one XML generation caveat (storage is handled by
VDSM, therefore disk relabelling needs to be disabled on the VDSM
level).

Because of the scope of the patch, I welcome storage/virt/network
people to review the code and consider the implication this change has
on current/future features.

[0] https://gerrit.ovirt.org/#/c/89830/




In particular:  dynamic_ownership was set to 0 prehistorically (as part of
https://bugzilla.redhat.com/show_bug.cgi?id=554961 ) because libvirt,
running as root, was not able to play properly with root-squash nfs mounts.

Have you attempted this use case?


I have not. Added this to my to-do list.

The important part to note about this patch (compared to my previous
attempts) is that it explicitly disables dynamic_ownership
for FILE/BLOCK-backed disks. That means, unless `seclabel` is broken
on the libvirt side, the behavior would be unchanged for storage.
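
To make that mechanism concrete, here is a minimal, illustrative sketch (not
code from the patch under review): libvirt allows a per-device override,
<seclabel model='dac' relabel='no'/> under the disk's <source>, which keeps
the DAC driver away from that file or block device even when
dynamic_ownership is enabled. The helper name below is hypothetical.

# Illustrative sketch only, not code from the patch. It shows the
# libvirt-level knob referred to above: a per-disk
# <seclabel model='dac' relabel='no'/> under <source> opts that disk out of
# dynamic_ownership relabelling.
import xml.etree.ElementTree as ET

def disable_dac_relabel(disk_xml):
    """Append <seclabel model='dac' relabel='no'/> to the disk's <source>."""
    disk = ET.fromstring(disk_xml)
    source = disk.find('source')
    if source is None:  # e.g. an empty cdrom has nothing to relabel
        return disk_xml
    seclabel = ET.SubElement(source, 'seclabel')
    seclabel.set('model', 'dac')
    seclabel.set('relabel', 'no')
    return ET.tostring(disk, encoding='unicode')

if __name__ == '__main__':
    DISK = ("<disk type='file' device='disk'>"
            "<source file='/rhev/data-center/mnt/example/disk.img'/>"
            "<target dev='vda' bus='virtio'/>"
            "</disk>")
    print(disable_dac_relabel(DISK))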


I join Nir's request to run this with storage QE.

___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Dan Kenigsberg
On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer  wrote:

> On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:
>
>> Please make sure to run as much OST suites on this patch as possible
>> before merging ( using 'ci please build' )
>>
>
> But note that OST is not a way to verify the patch.
>
> Such changes require testing with all storage types we support.
>
> Nir
>
> On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 
>> wrote:
>>
>>> Hey,
>>>
>>> I've created a patch[0] that is finally able to activate libvirt's
>>> dynamic_ownership for VDSM while not negatively affecting
>>> functionality of our storage code.
>>>
>>> That of course comes with quite a bit of code removal, mostly in the
>>> area of host devices, hwrng and anything that touches devices; bunch
>>> of test changes and one XML generation caveat (storage is handled by
>>> VDSM, therefore disk relabelling needs to be disabled on the VDSM
>>> level).
>>>
>>> Because of the scope of the patch, I welcome storage/virt/network
>>> people to review the code and consider the implication this change has
>>> on current/future features.
>>>
>>> [0] https://gerrit.ovirt.org/#/c/89830/
>>>
>>
In particular:  dynamic_ownership was set to 0 prehistorically (as part of
https://bugzilla.redhat.com/show_bug.cgi?id=554961 ) because libvirt,
running as root, was not able to play properly with root-squash nfs mounts.

Have you attempted this use case?

I join Nir's request to run this with storage QE.
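
For context on why root-squash NFS is the awkward case: with root_squash in
the export options, the client's uid 0 is mapped to the anonymous user on the
server, so a root process such as libvirtd cannot chown files on that mount.
A toy reproduction, using hypothetical paths and a typical EL qemu uid, could
look like this:

# Toy illustration only (hypothetical paths). Assume the server exports the
# directory with root squashing, e.g. in /etc/exports:
#     /srv/images  *(rw,sync,root_squash)
# and the client mounts it at /mnt/rootsquashed. With dynamic_ownership
# enabled, libvirtd (running as root) would chown the image to the qemu user
# before starting the VM; under root_squash that chown is refused, which is
# the failure mode the old bug was about.
import os

IMAGE = '/mnt/rootsquashed/disk.img'   # hypothetical path
QEMU_UID = QEMU_GID = 107              # qemu user/group id on a typical EL host

try:
    os.chown(IMAGE, QEMU_UID, QEMU_GID)
except OSError as exc:
    print('chown failed, as it would for libvirtd on this mount: %s' % exc)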
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Yaniv Kaul
On Wed, Apr 11, 2018 at 3:27 PM, Nir Soffer  wrote:

> On Wed, Apr 11, 2018 at 12:38 PM Eyal Edri  wrote:
>
>> On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer  wrote:
>>
>>> On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:
>>>
 Please make sure to run as much OST suites on this patch as possible
 before merging ( using 'ci please build' )

>>>
>>> But note that OST is not a way to verify the patch.
>>>
>>> Such changes require testing with all storage types we support.
>>>
>>
>> Well, we already have HE suite that runs on ISCSI, so at least we have
>> NFS+ISCSI on nested,
>> for real storage testing, you'll have to do it manually
>>
>
> We need glusterfs (both native and fuse based), and cinder/ceph storage.
>

We have Gluster in o-s-t as well, as part of the HC suite. It doesn't use
Fuse though.


>
> But we cannot practically test all flows with all types of storage for
> every patch.
>

Indeed. But we could easily do some, and we should at least execute the
minimal set that we are able to run easily via o-s-t.
Y.

>
> Nir
>
>
>>
>>
>>>
>>> Nir
>>>
>>> On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 
 wrote:

> Hey,
>
> I've created a patch[0] that is finally able to activate libvirt's
> dynamic_ownership for VDSM while not negatively affecting
> functionality of our storage code.
>
> That of course comes with quite a bit of code removal, mostly in the
> area of host devices, hwrng and anything that touches devices; bunch
> of test changes and one XML generation caveat (storage is handled by
> VDSM, therefore disk relabelling needs to be disabled on the VDSM
> level).
>
> Because of the scope of the patch, I welcome storage/virt/network
> people to review the code and consider the implication this change has
> on current/future features.
>
> [0] https://gerrit.ovirt.org/#/c/89830/
>
> mpolednik
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



 --

 Eyal edri


 MANAGER

 RHV DevOps

 EMEA VIRTUALIZATION R


 Red Hat EMEA 
  TRIED. TESTED. TRUSTED.
 
 phone: +972-9-7692018 <+972%209-769-2018>
 irc: eedri (on #tlv #rhev-dev #rhev-integ)
 ___
 Devel mailing list
 Devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>>
>>
>>
>> --
>>
>> Eyal edri
>>
>>
>> MANAGER
>>
>> RHV DevOps
>>
>> EMEA VIRTUALIZATION R
>>
>>
>> Red Hat EMEA 
>>  TRIED. TESTED. TRUSTED. 
>> phone: +972-9-7692018 <+972%209-769-2018>
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Nir Soffer
On Wed, Apr 11, 2018 at 3:30 PM Martin Polednik 
wrote:

> On 11/04/18 12:27 +, Nir Soffer wrote:
> >On Wed, Apr 11, 2018 at 12:38 PM Eyal Edri  wrote:
> >
> >> On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer 
> wrote:
> >>
> >>> On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:
> >>>
>  Please make sure to run as much OST suites on this patch as possible
>  before merging ( using 'ci please build' )
> 
> >>>
> >>> But note that OST is not a way to verify the patch.
> >>>
> >>> Such changes require testing with all storage types we support.
> >>>
> >>
> >> Well, we already have HE suite that runs on ISCSI, so at least we have
> >> NFS+ISCSI on nested,
> >> for real storage testing, you'll have to do it manually
> >>
> >
> >We need glusterfs (both native and fuse based), and cinder/ceph storage.
> >
> >But we cannot practically test all flows with all types of storage for
> >every patch.
>
> That leads to a question - how do I go around verifying such patch
> without sufficient environment? Is there someone from storage QA that
> could assist with this?
>

Good question!

I hope Denis can help with verifying the glusterfs changes.

With cinder/ceph, maybe Elad can provide a setup for testing, or run some
automation tests on the patch?

Elad also has other automated tests for NFS/iSCSI that are worth running
before we merge such changes.

Nir


>
> >Nir
> >
> >
> >>
> >>
> >>>
> >>> Nir
> >>>
> >>> On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik  >
>  wrote:
> 
> > Hey,
> >
> > I've created a patch[0] that is finally able to activate libvirt's
> > dynamic_ownership for VDSM while not negatively affecting
> > functionality of our storage code.
> >
> > That of course comes with quite a bit of code removal, mostly in the
> > area of host devices, hwrng and anything that touches devices; bunch
> > of test changes and one XML generation caveat (storage is handled by
> > VDSM, therefore disk relabelling needs to be disabled on the VDSM
> > level).
> >
> > Because of the scope of the patch, I welcome storage/virt/network
> > people to review the code and consider the implication this change
> has
> > on current/future features.
> >
> > [0] https://gerrit.ovirt.org/#/c/89830/
> >
> > mpolednik
> > ___
> > Devel mailing list
> > Devel@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/devel
> >
> 
> 
> 
>  --
> 
>  Eyal edri
> 
> 
>  MANAGER
> 
>  RHV DevOps
> 
>  EMEA VIRTUALIZATION R
> 
> 
>  Red Hat EMEA 
>   TRIED. TESTED. TRUSTED.
>  
>  phone: +972-9-7692018 <+972%209-769-2018> <+972%209-769-2018>
>  irc: eedri (on #tlv #rhev-dev #rhev-integ)
>  ___
>  Devel mailing list
>  Devel@ovirt.org
>  http://lists.ovirt.org/mailman/listinfo/devel
> >>>
> >>>
> >>
> >>
> >> --
> >>
> >> Eyal edri
> >>
> >>
> >> MANAGER
> >>
> >> RHV DevOps
> >>
> >> EMEA VIRTUALIZATION R
> >>
> >>
> >> Red Hat EMEA 
> >>  TRIED. TESTED. TRUSTED. <
> https://redhat.com/trusted>
> >> phone: +972-9-7692018 <+972%209-769-2018> <+972%209-769-2018>
> >> irc: eedri (on #tlv #rhev-dev #rhev-integ)
> >>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Martin Polednik

On 11/04/18 12:27 +, Nir Soffer wrote:

On Wed, Apr 11, 2018 at 12:38 PM Eyal Edri  wrote:


On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer  wrote:


On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:


Please make sure to run as many OST suites on this patch as possible
before merging (using 'ci please build')



But note that OST is not a way to verify the patch.

Such changes require testing with all storage types we support.



Well, we already have the HE suite that runs on iSCSI, so at least we have
NFS+iSCSI on nested; for real storage testing, you'll have to do it manually



We need glusterfs (both native and fuse based), and cinder/ceph storage.

But we cannot practically test all flows with all types of storage for
every patch.


That leads to a question: how do I go about verifying such a patch
without a sufficient environment? Is there someone from storage QA who
could assist with this?


Nir







Nir

On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 

wrote:


Hey,

I've created a patch[0] that is finally able to activate libvirt's
dynamic_ownership for VDSM while not negatively affecting
functionality of our storage code.

That of course comes with quite a bit of code removal, mostly in the
area of host devices, hwrng and anything that touches devices; bunch
of test changes and one XML generation caveat (storage is handled by
VDSM, therefore disk relabelling needs to be disabled on the VDSM
level).

Because of the scope of the patch, I welcome storage/virt/network
people to review the code and consider the implication this change has
on current/future features.

[0] https://gerrit.ovirt.org/#/c/89830/

mpolednik
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel





--

Eyal edri


MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA 
 TRIED. TESTED. TRUSTED.

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel






--

Eyal edri


MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA 
 TRIED. TESTED. TRUSTED. 
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)


___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Nir Soffer
On Wed, Apr 11, 2018 at 12:38 PM Eyal Edri  wrote:

> On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer  wrote:
>
>> On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:
>>
>>> Please make sure to run as much OST suites on this patch as possible
>>> before merging ( using 'ci please build' )
>>>
>>
>> But note that OST is not a way to verify the patch.
>>
>> Such changes require testing with all storage types we support.
>>
>
> Well, we already have HE suite that runs on ISCSI, so at least we have
> NFS+ISCSI on nested,
> for real storage testing, you'll have to do it manually
>

We need glusterfs (both native and fuse based), and cinder/ceph storage.

But we cannot practically test all flows with all types of storage for
every patch.

Nir


>
>
>>
>> Nir
>>
>> On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 
>>> wrote:
>>>
 Hey,

 I've created a patch[0] that is finally able to activate libvirt's
 dynamic_ownership for VDSM while not negatively affecting
 functionality of our storage code.

 That of course comes with quite a bit of code removal, mostly in the
 area of host devices, hwrng and anything that touches devices; bunch
 of test changes and one XML generation caveat (storage is handled by
 VDSM, therefore disk relabelling needs to be disabled on the VDSM
 level).

 Because of the scope of the patch, I welcome storage/virt/network
 people to review the code and consider the implication this change has
 on current/future features.

 [0] https://gerrit.ovirt.org/#/c/89830/

 mpolednik
 ___
 Devel mailing list
 Devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/devel

>>>
>>>
>>>
>>> --
>>>
>>> Eyal edri
>>>
>>>
>>> MANAGER
>>>
>>> RHV DevOps
>>>
>>> EMEA VIRTUALIZATION R
>>>
>>>
>>> Red Hat EMEA 
>>>  TRIED. TESTED. TRUSTED.
>>> 
>>> phone: +972-9-7692018 <+972%209-769-2018>
>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>>
>
>
> --
>
> Eyal edri
>
>
> MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R
>
>
> Red Hat EMEA 
>  TRIED. TESTED. TRUSTED. 
> phone: +972-9-7692018 <+972%209-769-2018>
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Eyal Edri
On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer  wrote:

> On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:
>
>> Please make sure to run as much OST suites on this patch as possible
>> before merging ( using 'ci please build' )
>>
>
> But note that OST is not a way to verify the patch.
>
> Such changes require testing with all storage types we support.
>

Well, we already have the HE suite that runs on iSCSI, so at least we have
NFS+iSCSI on nested; for real storage testing, you'll have to do it manually


>
> Nir
>
> On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 
>> wrote:
>>
>>> Hey,
>>>
>>> I've created a patch[0] that is finally able to activate libvirt's
>>> dynamic_ownership for VDSM while not negatively affecting
>>> functionality of our storage code.
>>>
>>> That of course comes with quite a bit of code removal, mostly in the
>>> area of host devices, hwrng and anything that touches devices; bunch
>>> of test changes and one XML generation caveat (storage is handled by
>>> VDSM, therefore disk relabelling needs to be disabled on the VDSM
>>> level).
>>>
>>> Because of the scope of the patch, I welcome storage/virt/network
>>> people to review the code and consider the implication this change has
>>> on current/future features.
>>>
>>> [0] https://gerrit.ovirt.org/#/c/89830/
>>>
>>> mpolednik
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>
>>
>>
>> --
>>
>> Eyal edri
>>
>>
>> MANAGER
>>
>> RHV DevOps
>>
>> EMEA VIRTUALIZATION R
>>
>>
>> Red Hat EMEA 
>>  TRIED. TESTED. TRUSTED. 
>> phone: +972-9-7692018 <+972%209-769-2018>
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>
>


-- 

Eyal edri


MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA 
 TRIED. TESTED. TRUSTED. 
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Nir Soffer
On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:

> Please make sure to run as much OST suites on this patch as possible
> before merging ( using 'ci please build' )
>

But note that OST is not a way to verify the patch.

Such changes require testing with all storage types we support.

Nir

On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 
> wrote:
>
>> Hey,
>>
>> I've created a patch[0] that is finally able to activate libvirt's
>> dynamic_ownership for VDSM while not negatively affecting
>> functionality of our storage code.
>>
>> That of course comes with quite a bit of code removal, mostly in the
>> area of host devices, hwrng and anything that touches devices; bunch
>> of test changes and one XML generation caveat (storage is handled by
>> VDSM, therefore disk relabelling needs to be disabled on the VDSM
>> level).
>>
>> Because of the scope of the patch, I welcome storage/virt/network
>> people to review the code and consider the implication this change has
>> on current/future features.
>>
>> [0] https://gerrit.ovirt.org/#/c/89830/
>>
>> mpolednik
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
>
> --
>
> Eyal edri
>
>
> MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R
>
>
> Red Hat EMEA 
>  TRIED. TESTED. TRUSTED. 
> phone: +972-9-7692018 <+972%209-769-2018>
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Eyal Edri
Please make sure to run as many OST suites on this patch as possible before
merging (using 'ci please build')

On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 
wrote:

> Hey,
>
> I've created a patch[0] that is finally able to activate libvirt's
> dynamic_ownership for VDSM while not negatively affecting
> functionality of our storage code.
>
> That of course comes with quite a bit of code removal, mostly in the
> area of host devices, hwrng and anything that touches devices; bunch
> of test changes and one XML generation caveat (storage is handled by
> VDSM, therefore disk relabelling needs to be disabled on the VDSM
> level).
>
> Because of the scope of the patch, I welcome storage/virt/network
> people to review the code and consider the implication this change has
> on current/future features.
>
> [0] https://gerrit.ovirt.org/#/c/89830/
>
> mpolednik
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 

Eyal edri


MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA 
 TRIED. TESTED. TRUSTED. 
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] dynamic ownership changes

2018-04-10 Thread Martin Polednik

Hey,

I've created a patch[0] that is finally able to activate libvirt's
dynamic_ownership for VDSM while not negatively affecting
functionality of our storage code.

That of course comes with quite a bit of code removal, mostly in the
area of host devices, hwrng and anything that touches devices; bunch
of test changes and one XML generation caveat (storage is handled by
VDSM, therefore disk relabelling needs to be disabled on the VDSM
level).

Because of the scope of the patch, I welcome storage/virt/network
people to review the code and consider the implication this change has
on current/future features.

[0] https://gerrit.ovirt.org/#/c/89830/

mpolednik
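
For anyone unfamiliar with the knob being discussed: dynamic_ownership lives
in libvirt's /etc/libvirt/qemu.conf; when set to 1, libvirt's DAC driver
changes ownership of disks, host devices, hwrng sources and similar paths to
the qemu user for the lifetime of the VM, which oVirt has historically avoided
by shipping it as 0 and letting VDSM manage ownership itself. A small,
illustrative sketch (the parsing helper is an assumption, not part of the
patch) that reports how a host is configured:

# Illustrative sketch, not part of the patch: report whether the host's
# /etc/libvirt/qemu.conf explicitly enables dynamic_ownership.
import re

QEMU_CONF = '/etc/libvirt/qemu.conf'

def dynamic_ownership_setting(path=QEMU_CONF):
    """Return True/False for an explicit setting, or None if commented out."""
    pattern = re.compile(r'^\s*dynamic_ownership\s*=\s*(\d+)')
    try:
        with open(path) as conf:
            for line in conf:
                match = pattern.match(line)
                if match:
                    return match.group(1) != '0'
    except IOError:  # not a libvirt host, or unreadable config
        return None
    return None

if __name__ == '__main__':
    print('dynamic_ownership enabled in qemu.conf: %s'
          % dynamic_ownership_setting())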
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel