[ovirt-devel] Re: VDSM-Gluster python 3 compatibility

2018-11-13 Thread Kaustav Majumder
Hi Dan,
Thanks for your input. Parsing the output of ansible modules is indeed
cumbersome. I think I will stick to parsing lsblk output.
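
For readers following along, option 1 could look roughly like the sketch below. This is a hedged illustration, not vdsm code: the `--json`/`-b`/`-o` flags of lsblk are standard, but the canned SAMPLE string stands in for output that a real implementation would capture via subprocess.

```python
import json

# Canned example of `lsblk --json -b -o NAME,SIZE,TYPE,MOUNTPOINT` output;
# a real implementation would obtain this via subprocess.check_output().
SAMPLE = """
{"blockdevices": [
  {"name": "sda", "size": "500107862016", "type": "disk", "mountpoint": null,
   "children": [
     {"name": "sda1", "size": "524288000", "type": "part", "mountpoint": "/boot"}
   ]}
]}
"""

def flatten(devices, parent=None):
    """Walk the lsblk device tree, yielding one flat record per device."""
    for dev in devices:
        yield {
            "name": dev["name"],
            "parent": parent,
            "type": dev["type"],
            "size": int(dev["size"]),
            "mountpoint": dev["mountpoint"],
        }
        # Recurse into partitions/LVs listed under "children".
        for rec in flatten(dev.get("children", []), parent=dev["name"]):
            yield rec

devices = list(flatten(json.loads(SAMPLE)["blockdevices"]))
```

The same parsing code runs unchanged on Python 2 and 3, which is the attraction of lsblk over the split blivet packages.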

On Wed, Nov 14, 2018 at 1:05 PM Dan Kenigsberg  wrote:

> On Wed, Nov 14, 2018 at 9:08 AM Kaustav Majumder 
> wrote:
> >
> > Hi all,
> > I have been working on making vdsm-gluster python 3 compatible.
> > I had some queries regarding the tool used for getting storage devices
> info. Till now vdsm uses python blivet for getting the device tree
> information.
>
> python3-blivet  supports only py3; python2-blivet1 supports only py2.
> You may re-implement vdsm.gluster.storagedev with python3-blivet if
> six.PY3 and keep the current code if six.PY2.
>
> >
> > I am not sure if blivet is compatible with python 3 so I had two
> propositions which I want some advice on.
> >
> > 1) Using lsblk and parsing the output
>
> I'd do this if parsing lsblk is simpler than calling python3-blivet
>
>
> > 2) Using ansible modules locally in the vdsm hosts.
>
> Do you plan to parse their output? How are they implemented? This
> seems like the most cumbersome option.
>
> >
> > Although I feel the 2nd way will add more dependencies to the project
> but it can help us in the long run to add more functionality easily.
> >
> > Thanks,
> > Kaustav
>


-- 

KAUSTAV MAJUMDER

ASSOCIATE SOFTWARE ENGINEER

Red Hat India PVT LTD. 

kmajum...@redhat.com  M: 08981884037  IM: IRC: kmajumder


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/UKVUY5IHXVH6Y2RFPDXQ7MWKFWP6GU25/


[ovirt-devel] Re: VDSM-Gluster python 3 compatibility

2018-11-13 Thread Dan Kenigsberg
On Wed, Nov 14, 2018 at 9:08 AM Kaustav Majumder  wrote:
>
> Hi all,
> I have been working on making vdsm-gluster python 3 compatible.
> I had some queries regarding the tool used for getting storage devices info. 
> Till now vdsm uses python blivet for getting the device tree information.

python3-blivet supports only py3; python2-blivet1 supports only py2.
You may re-implement vdsm.gluster.storagedev with python3-blivet if
six.PY3 and keep the current code if six.PY2.
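
The suggested split can be sketched as a simple interpreter-version switch. The module names below are illustrative placeholders, not the real vdsm layout, and `sys.version_info` is used instead of `six` only to keep the snippet dependency-free (`six.PY3` is the same check):

```python
import sys

PY3 = sys.version_info[0] == 3  # equivalent to six.PY3

def load_storagedev_backend():
    """Pick the blivet binding matching the running interpreter.

    The returned strings stand in for importing python3-blivet vs.
    python2-blivet1 in the real vdsm.gluster.storagedev module.
    """
    if PY3:
        return "python3-blivet"
    return "python2-blivet1"
```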

>
> I am not sure if blivet is compatible with python 3 so I had two propositions 
> which I want some advice on.
>
> 1) Using lsblk and parsing the output

I'd do this if parsing lsblk is simpler than calling python3-blivet


> 2) Using ansible modules locally in the vdsm hosts.

Do you plan to parse their output? How are they implemented? This
seems like the most cumbersome option.
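
As an illustration of why this is cumbersome: a local module run such as `ansible localhost -m setup` prints a `host | STATUS =>` prefix ahead of the JSON payload, so a caller must strip it before parsing. A sketch with a canned sample (the exact prefix format is an assumption and varies across ansible versions and output settings):

```python
import json

# Canned sample of what `ansible localhost -m setup` prints to stdout.
SAMPLE = """localhost | SUCCESS => {
    "ansible_facts": {
        "ansible_devices": {"sda": {"size": "465.76 GB"}}
    },
    "changed": false
}"""

def parse_module_output(text):
    """Strip the 'host | STATUS =>' prefix and parse the JSON body."""
    start = text.index("{")  # the JSON payload begins at the first brace
    return json.loads(text[start:])

result = parse_module_output(SAMPLE)
facts = result["ansible_facts"]
```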

>
> Although I feel the 2nd way will add more dependencies to the project but it 
> can help us in the long run to add more functionality easily.
>
> Thanks,
> Kaustav
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/EQW44E6E6BIESCCFSF24PMB2QPUAP72E/


[ovirt-devel] Proposing Shmuel Melamud as a Virt backend maintainer

2018-11-13 Thread Michal Skrivanek
Hi all,
I’d like to propose Shmuel as a backend maintainer for the virt area. He’s a 
good candidate with his 200+ patches and insight into complex parts of VM 
pools, templates, q35 support, v2v, sparsify...
Feel free to bother him more with review requests;)

Thanks,
michal
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/TBYGMRTCF3BUMZ4ZC46DI6BQONNUMWOV/


[ovirt-devel] VDSM-Gluster python 3 compatibility

2018-11-13 Thread Kaustav Majumder
Hi all,
I have been working on making vdsm-gluster python 3 compatible. I had some
queries regarding the tool used for getting storage devices info. Till now
vdsm uses python blivet for getting the device tree information.

I am not sure if blivet is compatible with Python 3, so I have two
proposals on which I want some advice.

1) Using lsblk and parsing the output
2) Using ansible modules locally in the vdsm hosts.

Although the 2nd way will add more dependencies to the project, I feel it
can help us in the long run to add more functionality easily.

Thanks,
Kaustav
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/DJORSIZMTZ3HSAV7G7N6ASQ3SWDDMCFH/


[ovirt-devel] Re: [EXTERNAL] Re: List of Queries related to RHV

2018-11-13 Thread Pavan Chavva
On Fri, Nov 2, 2018 at 10:42 AM Nir Soffer  wrote:

> On Fri, Nov 2, 2018 at 3:22 PM Mahesh Falmari 
> wrote:
>
>> Thanks Nir for the answers.
>>
>> Just a follow up question to your answer on Q.3>
>> [Nir] Regarding backup, I think you need to store the vm configuration at
>> the time of the backup regardless of having a version or not. The amount of
>> data is very small.
>>
>> [Mahesh] We are looking for VM version information to be stored during
>> backup for different reasons. One of the reason is in order to determine
>> the compatibility of VM backed up from the latest version of RHV server and
>> getting restored to the older versions.
>>
>
> Trying to restore a VM on an older version sounds fragile. Even if this works,
> I don't think we test such scenarios, so this is likely to break in future
> versions.
>
>
>> Also, when you say vm configuration, what specific APIs could be
>> leveraged to get it?
>> Still looking for answer to whether VM version information will be
>> available or not.
>>
>
> I hope that Ryan can help with this.
>

@Ryan Barry, can you provide more feedback about this?

Thanks,
Pavan.


>
>
>>
>>
>> Thanks & Regards,
>> Mahesh Falmari
>>
>>
>>
>> *From:* Nir Soffer 
>> *Sent:* Friday, November 2, 2018 1:01 AM
>> *To:* Mahesh Falmari ; Arik Hadas <
>> aha...@redhat.com>; Martin Perina 
>> *Cc:* Yaniv Lavi (Dary) ; Daniel Erez ;
>> Nisan, Tal ; Pavan Chavva ; devel
>> ; James Olson ; Navin Tah <
>> navin@veritas.com>; Sudhakar Paulzagade <
>> sudhakar.paulzag...@veritas.com>; Abhay Marode ;
>> Suchitra Herwadkar ; Nirmalya Sirkar <
>> nirmalya.sir...@veritas.com>; Abhijeet Barve 
>> *Subject:* Re: [EXTERNAL] Re: List of Queries related to RHV
>>
>>
>>
>> On Wed, Oct 17, 2018 at 5:03 PM Mahesh Falmari <
>> mahesh.falm...@veritas.com> wrote:
>>
>> Thanks for the prompt response on these queries. We have few follow-up
>> queries mentioned inline.
>>
>>
>>
>> Thanks & Regards,
>> Mahesh Falmari
>>
>>
>>
>> *From:* Yaniv Lavi 
>> *Sent:* Tuesday, October 16, 2018 7:19 PM
>> *To:* Mahesh Falmari 
>> *Cc:* Nir Soffer ; Erez, Daniel ;
>> Tal Nisan ; Pavan Chavva ; devel <
>> devel@ovirt.org>; James Olson ; Navin Tah <
>> navin@veritas.com>; Sudhakar Paulzagade <
>> sudhakar.paulzag...@veritas.com>; Abhay Marode ;
>> Suchitra Herwadkar ; Nirmalya Sirkar <
>> nirmalya.sir...@veritas.com>; Abhijeet Barve 
>> *Subject:* [EXTERNAL] Re: List of Queries related to RHV
>>
>>
>>
>>
>> *YANIV LAVI*
>>
>> SENIOR TECHNICAL PRODUCT MANAGER
>>
>> Red Hat Israel Ltd. 
>>
>> 34 Jerusalem Road, Building A, 1st floor
>>
>> Ra'anana, Israel 4350109
>>
>> yl...@redhat.comT: +972-9-7692306/8272306 F: +972-9-7692223
>>  IM: ylavi
>>
>> 
>>
>> *TRIED. TESTED. TRUSTED.* 
>>
>> @redhatnews    Red Hat 
>>    Red Hat 
>> 
>>
>>
>>
>> On Tue, Oct 16, 2018 at 4:35 PM Mahesh Falmari <
>> mahesh.falm...@veritas.com> wrote:
>>
>> Hi Nir,
>>
>> We have few queries with respect to RHV which we would like to understand
>> from you.
>>
>>
>>
>> *1. Does RHV maintain the virtual machine configuration file in the back
>> end?*
>>
>> Just like we have configuration files for other hypervisors (for VMware
>> it is .vmx and for Hyper-V it is .vmcx) which capture most of the virtual
>> machine configuration information, does RHV also maintain such a file? If
>> not, what is the other way to get all the virtual machine configuration
>> information from a single API?
>>
>>
>>
>> There is an OVF store, but this is not meant for consumption.
>>
>>
>>
>> Right, this is only for internal use.
>>
>>
>>
>> ...
>>
>> *3. Do we have any version associated with the virtual machine?*
>>
>> Just like we have a hardware version in the case of VMware and a virtual
>> machine version in the case of Hyper-V, does RHV also associate any such
>> version with a virtual machine?
>>
>>
>>
>> The HW version is based on the VM machine type.
>>
>>  [Mahesh] Can you please elaborate more on this? How is the VM machine
>> type alone going to determine its version?
>>
>>
>>
>> Arik, can you answer this?
>>
>>
>>
>> Regarding backup, I think you need to store the vm configuration at the
>> time of the
>>
>> backup regardless of having a version or not. The amount of data is very
>> small.
>>
>> *4. Is it possible to create virtual machines with QCOW2 as base disks
>> instead of RAW?*
>>
>> We would like to understand if there are any use cases customers prefer
>> creating virtual machines from QCOW2 as base disks instead of RAW ones.
>>
>>
>>
>> That is a possibility in cases of thin disk on file storage.
>>
>>   [Mahesh] Can you please elaborate more on this?
>>
>>
>>
>> Using the UI you can use qcow2 format only for thin disks on block
>> storage.
>>
>>
>>
>> Using the SDK you can also create qcow2 image on thin file based storage.
>>
>>
>>
>> You can 

[ovirt-devel] Unassigned bugs for targeted releases

2018-11-13 Thread Sandro Bonazzola
Hi,
we have 32 bugs targeted to a release but without an assignee:
https://bugzilla.redhat.com/buglist.cgi?quicksearch=classification%3Aovirt%20assignee%3Anobody%40redhat.com%20-target_milestone%3A%27---%27

Can you please review the list and ensure someone will work on them or drop
the target if they're not going to be worked on?

Thanks
-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/C5GVWZKMLT2RM7SUEKJK4BDPR3Y6RVLD/


[ovirt-devel] Re: [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-13 Thread Martin Perina
On Tue, Nov 13, 2018 at 12:49 PM Michal Skrivanek 
wrote:

>
>
> On 13 Nov 2018, at 12:20, Dominik Holler  wrote:
>
> On Tue, 13 Nov 2018 11:56:37 +0100
> Martin Perina  wrote:
>
> On Tue, Nov 13, 2018 at 11:02 AM Dafna Ron  wrote:
>
> Martin? can you please look at the patch that Dominik sent?
> We need to resolve this as we have not had an engine build for the last 11
> days
>
>
> Yesterday I've merged Dominik's revert patch
> https://gerrit.ovirt.org/95377
> which should switch cluster level back to 4.2. Below mentioned change
> https://gerrit.ovirt.org/95310 is relevant only to cluster level 4.3, am I
> right Michal?
>
> The build mentioned
>
> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11121/
> is from yesterday. Are we sure that it was executed only after #95377 was
> merged? I'd like to see the results from latest
>
> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11127/
> but unfortunately it already waits more than an hour for available hosts
> ...
>
>
>
>
>
> https://gerrit.ovirt.org/#/c/95283/ results in
>
> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8071/
> which is used in
>
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3489/parameters/
> results in run_vms succeeding.
>
> The next merged change
> https://gerrit.ovirt.org/#/c/95310/ results in
>
> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8072/
> which is used in
>
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/parameters/
> results in run_vms failing with
> 2018-11-12 17:35:10,109-05 INFO
>  [org.ovirt.engine.core.bll.RunVmOnceCommand] (default task-1)
> [6930b632-5593-4481-bf2a-a1d8b14a583a] Running command: RunVmOnceCommand
> internal: false. Entities affected :  ID:
> d10aa133-b9b6-455d-8137-ab822d1c1971 Type: VMAction group RUN_VM with role
> type USER
> 2018-11-12 17:35:10,113-05 DEBUG
> [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method:
> getVmManager, params: [d10aa133-b9b6-455d-8137-ab822d1c1971], timeElapsed:
> 4ms
> 2018-11-12 17:35:10,128-05 DEBUG
> [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method:
> getAllForClusterWithStatus, params: [2ca9ccd8-61f0-470c-ba3f-07766202f260,
> Up], timeElapsed: 7ms
> 2018-11-12 17:35:10,129-05 INFO
>  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1)
> [6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host
> 'lago-basic-suite-master-host-1' ('282860ab-8873-4702-a2be-100a6da111af')
> was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level'
> (correlation id: 6930b632-5593-4481-bf2a-a1d8b14a583a)
> 2018-11-12 17:35:10,129-05 INFO
>  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1)
> [6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host
> 'lago-basic-suite-master-host-0' ('c48eca36-ea98-46b2-8473-f184833e68a8')
> was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level'
> (correlation id: 6930b632-5593-4481-bf2a-a1d8b14a583a)
> 2018-11-12 17:35:10,130-05 ERROR [org.ovirt.engine.core.bll.RunVmCommand]
> (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] Can't find VDS to
> run the VM 'd10aa133-b9b6-455d-8137-ab822d1c1971' on, so this VM will not
> be run.
> in
>
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/artifact/exported-artifacts/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log/*view*/
>
> Is this helpful for you?
>
>
>
> actually, there are two issues:
> 1) cluster is still 4.3 even after Martin’s revert.
>

https://gerrit.ovirt.org/#/c/95409/ should align cluster level with dc level

2) the patch is wrong too, as in HandleVdsCpuFlagsOrClusterChangedCommand
> it just goes ahead and sets the cluster cpu to whatever the host reported
> regardless of whether it is valid. Steven, please fix that (line 96 in
> backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/HandleVdsCpuFlagsOrClusterChangedCommand.java).
> It needs to pass the validation or we need some other solution.
> 3) regardless, we should make 4.3 work too, I tried to play with it a bit
> in https://gerrit.ovirt.org/#/c/95407/, let’s see…
>
> Thanks,
> michal
>
>
>
> On Mon, Nov 12, 2018 at 3:58 PM Dominik Holler  wrote:
>
> On Mon, 12 Nov 2018 13:45:54 +0100
> Martin Perina  wrote:
>
> On Mon, Nov 12, 2018 at 12:58 PM Dominik Holler 
>
> wrote:
>
>
> On Mon, 12 Nov 2018 12:29:17 +0100
> Martin Perina  wrote:
>
> On Mon, Nov 12, 2018 at 12:20 PM Dafna Ron  wrote:
>
> There are currently two issues failing ovirt-engine on CQ ovirt
>
> master:
>
>
> 1. edit vm pool is causing failure in different tests. it has a
>
> patch
>
> *waiting
>
> to 

[ovirt-devel] Re: [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-13 Thread Michal Skrivanek


> On 13 Nov 2018, at 12:20, Dominik Holler  wrote:
> 
> On Tue, 13 Nov 2018 11:56:37 +0100
> Martin Perina  wrote:
> 
>> On Tue, Nov 13, 2018 at 11:02 AM Dafna Ron  wrote:
>> 
>>> Martin? can you please look at the patch that Dominik sent?
>>> We need to resolve this as we have not had an engine build for the last 11
>>> days
>>> 
>> 
>> Yesterday I've merged Dominik's revert patch https://gerrit.ovirt.org/95377
>> which should switch cluster level back to 4.2. Below mentioned change
>> https://gerrit.ovirt.org/95310 is relevant only to cluster level 4.3, am I
>> right Michal?
>> 
>> The build mentioned
>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11121/
>> is from yesterday. Are we sure that it was executed only after #95377 was
>> merged? I'd like to see the results from latest
>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11127/
>> but unfortunately it already waits more than an hour for available hosts ...
>> 
> 
> 
> 
> 
> https://gerrit.ovirt.org/#/c/95283/ results in 
> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8071/
> which is used in
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3489/parameters/
> results in run_vms succeeding.
> 
> The next merged change
> https://gerrit.ovirt.org/#/c/95310/ results in
> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8072/
> which is used in
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/parameters/
> results in run_vms failing with
> 2018-11-12 17:35:10,109-05 INFO  [org.ovirt.engine.core.bll.RunVmOnceCommand] 
> (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] Running command: 
> RunVmOnceCommand internal: false. Entities affected :  ID: 
> d10aa133-b9b6-455d-8137-ab822d1c1971 Type: VMAction group RUN_VM with role 
> type USER
> 2018-11-12 17:35:10,113-05 DEBUG 
> [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] 
> (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method: getVmManager, 
> params: [d10aa133-b9b6-455d-8137-ab822d1c1971], timeElapsed: 4ms
> 2018-11-12 17:35:10,128-05 DEBUG 
> [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] 
> (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method: 
> getAllForClusterWithStatus, params: [2ca9ccd8-61f0-470c-ba3f-07766202f260, 
> Up], timeElapsed: 7ms
> 2018-11-12 17:35:10,129-05 INFO  
> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1) 
> [6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host 
> 'lago-basic-suite-master-host-1' ('282860ab-8873-4702-a2be-100a6da111af') was 
> filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level' (correlation 
> id: 6930b632-5593-4481-bf2a-a1d8b14a583a)
> 2018-11-12 17:35:10,129-05 INFO  
> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1) 
> [6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host 
> 'lago-basic-suite-master-host-0' ('c48eca36-ea98-46b2-8473-f184833e68a8') was 
> filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level' (correlation 
> id: 6930b632-5593-4481-bf2a-a1d8b14a583a)
> 2018-11-12 17:35:10,130-05 ERROR [org.ovirt.engine.core.bll.RunVmCommand] 
> (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] Can't find VDS to run 
> the VM 'd10aa133-b9b6-455d-8137-ab822d1c1971' on, so this VM will not be run.
> in
> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/artifact/exported-artifacts/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log/*view*/
> 
> Is this helpful for you?


actually, there are two issues:
1) cluster is still 4.3 even after Martin’s revert. 
2) the patch is wrong too: HandleVdsCpuFlagsOrClusterChangedCommand just goes 
ahead and sets the cluster cpu to whatever the host reported, regardless of 
whether it is valid. Steven, please fix that (line 96 in 
backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/HandleVdsCpuFlagsOrClusterChangedCommand.java).
It needs to pass the validation or we need some other solution.
3) regardless, we should make 4.3 work too, I tried to play with it a bit in 
https://gerrit.ovirt.org/#/c/95407/, let’s see…
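
The guard being asked for amounts to refusing to persist a host-reported CPU that fails validation. A minimal Python sketch of that logic (the real fix belongs in the Java command class named above; all names here are illustrative, not the ovirt-engine API):

```python
def choose_cluster_cpu(reported_cpu, current_cpu, supported_cpus):
    """Persist the host-reported CPU only if it passes validation;
    otherwise keep the cluster's current CPU unchanged.

    Illustrative stand-in for the validation missing around line 96 of
    HandleVdsCpuFlagsOrClusterChangedCommand.
    """
    if reported_cpu in supported_cpus:
        return reported_cpu
    return current_cpu
```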

Thanks,
michal

> 
>> 
>>> On Mon, Nov 12, 2018 at 3:58 PM Dominik Holler  wrote:
>>> 
 On Mon, 12 Nov 2018 13:45:54 +0100
 Martin Perina  wrote:
 
> On Mon, Nov 12, 2018 at 12:58 PM Dominik Holler   
 wrote:  
> 
>> On Mon, 12 Nov 2018 12:29:17 +0100
>> Martin Perina  wrote:
>> 
>>> On Mon, Nov 12, 2018 at 12:20 PM Dafna Ron  wrote:
>>> 
 There are currently two issues failing ovirt-engine on CQ ovirt  
 master:  
 
 1. edit vm pool is causing failure in different tests. it has a  
 patch  
>> *waiting  

[ovirt-devel] Re: [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-13 Thread Dominik Holler
On Tue, 13 Nov 2018 11:56:37 +0100
Martin Perina  wrote:

> On Tue, Nov 13, 2018 at 11:02 AM Dafna Ron  wrote:
> 
> > Martin? can you please look at the patch that Dominik sent?
> > We need to resolve this as we have not had an engine build for the last 11
> > days
> >  
> 
> Yesterday I've merged Dominik's revert patch https://gerrit.ovirt.org/95377
> which should switch cluster level back to 4.2. Below mentioned change
> https://gerrit.ovirt.org/95310 is relevant only to cluster level 4.3, am I
> right Michal?
> 
> The build mentioned
> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11121/
> is from yesterday. Are we sure that it was executed only after #95377 was
> merged? I'd like to see the results from latest
> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11127/
> but unfortunately it already waits more than an hour for available hosts ...
> 




https://gerrit.ovirt.org/#/c/95283/ results in 
http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8071/
which is used in
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3489/parameters/
results in run_vms succeeding.

The next merged change
https://gerrit.ovirt.org/#/c/95310/ results in
http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/8072/
which is used in
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/parameters/
results in run_vms failing with
2018-11-12 17:35:10,109-05 INFO  [org.ovirt.engine.core.bll.RunVmOnceCommand] 
(default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] Running command: 
RunVmOnceCommand internal: false. Entities affected :  ID: 
d10aa133-b9b6-455d-8137-ab822d1c1971 Type: VMAction group RUN_VM with role type 
USER
2018-11-12 17:35:10,113-05 DEBUG 
[org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (default 
task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method: getVmManager, params: 
[d10aa133-b9b6-455d-8137-ab822d1c1971], timeElapsed: 4ms
2018-11-12 17:35:10,128-05 DEBUG 
[org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (default 
task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method: 
getAllForClusterWithStatus, params: [2ca9ccd8-61f0-470c-ba3f-07766202f260, Up], 
timeElapsed: 7ms
2018-11-12 17:35:10,129-05 INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1) 
[6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host 
'lago-basic-suite-master-host-1' ('282860ab-8873-4702-a2be-100a6da111af') was 
filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level' (correlation id: 
6930b632-5593-4481-bf2a-a1d8b14a583a)
2018-11-12 17:35:10,129-05 INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1) 
[6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host 
'lago-basic-suite-master-host-0' ('c48eca36-ea98-46b2-8473-f184833e68a8') was 
filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level' (correlation id: 
6930b632-5593-4481-bf2a-a1d8b14a583a)
2018-11-12 17:35:10,130-05 ERROR [org.ovirt.engine.core.bll.RunVmCommand] 
(default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] Can't find VDS to run 
the VM 'd10aa133-b9b6-455d-8137-ab822d1c1971' on, so this VM will not be run.
in
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3490/artifact/exported-artifacts/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log/*view*/

Is this helpful for you?

> 
> > On Mon, Nov 12, 2018 at 3:58 PM Dominik Holler  wrote:
> >  
> >> On Mon, 12 Nov 2018 13:45:54 +0100
> >> Martin Perina  wrote:
> >>  
> >> > On Mon, Nov 12, 2018 at 12:58 PM Dominik Holler   
> >> wrote:  
> >> >  
> >> > > On Mon, 12 Nov 2018 12:29:17 +0100
> >> > > Martin Perina  wrote:
> >> > >  
> >> > > > On Mon, Nov 12, 2018 at 12:20 PM Dafna Ron  wrote:
> >> > > >  
> >> > > > > There are currently two issues failing ovirt-engine on CQ ovirt  
> >> master:  
> >> > > > >
> >> > > > > 1. edit vm pool is causing failure in different tests. it has a  
> >> patch  
> >> > > *waiting  
> >> > > > > to be merged*: https://gerrit.ovirt.org/#/c/95354/
> >> > > > >  
> >> > > >
> >> > > > Merged
> >> > > >  
> >> > > > >
> >> > > > > 2. we have a failure in upgrade suite as well to run vm but this  
> >> seems  
> >> > > to  
> >> > > > > be related to the tests as well:
> >> > > > > 2018-11-12 05:41:07,831-05 WARN
> >> > > > > [org.ovirt.engine.core.bll.validator.VirtIoRngValidator]  
> >> (default  
> >> > > task-1)  
> >> > > > > [] Random number source URANDOM is not supported in cluster  
> >> > > 'test-cluster'  
> >> > > > > compatibility version 4.0.
> >> > > > >
> >> > > > > here is the full error from the upgrade suite failure in run vm:
> >> > > > > https://pastebin.com/XLHtWGGx
> >> > > > >
> >> > > > > Here is the latest failure:
> >> > > > >  
> >> > >  
> >> 

[ovirt-devel] Re: [CQ ovirt master] [ovirt-engine] - not passing for 10 days

2018-11-13 Thread Martin Perina
On Tue, Nov 13, 2018 at 11:02 AM Dafna Ron  wrote:

> Martin? can you please look at the patch that Dominik sent?
> We need to resolve this as we have not had an engine build for the last 11
> days
>

Yesterday I merged Dominik's revert patch https://gerrit.ovirt.org/95377
which should switch cluster level back to 4.2. Below mentioned change
https://gerrit.ovirt.org/95310 is relevant only to cluster level 4.3, am I
right Michal?

The build mentioned
https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11121/
is from yesterday. Are we sure that it was executed only after #95377 was
merged? I'd like to see the results from latest
https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11127/
but unfortunately it already waits more than an hour for available hosts ...


> On Mon, Nov 12, 2018 at 3:58 PM Dominik Holler  wrote:
>
>> On Mon, 12 Nov 2018 13:45:54 +0100
>> Martin Perina  wrote:
>>
>> > On Mon, Nov 12, 2018 at 12:58 PM Dominik Holler 
>> wrote:
>> >
>> > > On Mon, 12 Nov 2018 12:29:17 +0100
>> > > Martin Perina  wrote:
>> > >
>> > > > On Mon, Nov 12, 2018 at 12:20 PM Dafna Ron  wrote:
>> > > >
>> > > > > There are currently two issues failing ovirt-engine on CQ ovirt
>> master:
>> > > > >
>> > > > > 1. edit vm pool is causing failure in different tests. it has a
>> patch
>> > > *waiting
>> > > > > to be merged*: https://gerrit.ovirt.org/#/c/95354/
>> > > > >
>> > > >
>> > > > Merged
>> > > >
>> > > > >
>> > > > > 2. we have a failure in upgrade suite as well to run vm but this
>> seems
>> > > to
>> > > > > be related to the tests as well:
>> > > > > 2018-11-12 05:41:07,831-05 WARN
>> > > > > [org.ovirt.engine.core.bll.validator.VirtIoRngValidator]
>> (default
>> > > task-1)
>> > > > > [] Random number source URANDOM is not supported in cluster
>> > > 'test-cluster'
>> > > > > compatibility version 4.0.
>> > > > >
>> > > > > here is the full error from the upgrade suite failure in run vm:
>> > > > > https://pastebin.com/XLHtWGGx
>> > > > >
>> > > > > Here is the latest failure:
>> > > > >
>> > >
>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/8/
>>
>> > > > >
>> > > >
>> > > > I will try to take a look later today
>> > > >
>> > >
>> > > I have the idea that this might be related to
>> > > https://gerrit.ovirt.org/#/c/95377/ , and I check in
>> > >
>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/3485/console
>> > > , but I have to stop now, if not solved I can go on later today.
>> > >
>> >
>> > OK, both CI and above manual OST job went fine, so I've just merged the
>> > revert patch. I will take a look at it later in detail, we should
>> really be
>> > testing 4.3 on master and not 4.2
>> >
>>
>> Ack.
>>
>> Now
>>
>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/11121/
>> is failing on
>> File
>> "/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/004_basic_sanity.py",
>> line 698, in run_vms
>> api.vms.get(VM0_NAME).start(start_params)
>> status: 400
>> reason: Bad Request
>>
>> 2018-11-12 10:06:30,722-05 INFO
>> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-3)
>> [b8d11cb0-5be9-4b7e-b45a-c95fa1f18681] Candidate host
>> 'lago-basic-suite-master-host-1' ('dbfe1b0c-f940-4dba-8fb1-0cfe5ca7ddfc')
>> was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level'
>> (correlation id: b8d11cb0-5be9-4b7e-b45a-c95fa1f18681)
>> 2018-11-12 10:06:30,722-05 INFO
>> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-3)
>> [b8d11cb0-5be9-4b7e-b45a-c95fa1f18681] Candidate host
>> 'lago-basic-suite-master-host-0' ('e83a63ca-381e-40db-acb2-65a3e7953e11')
>> was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level'
>> (correlation id: b8d11cb0-5be9-4b7e-b45a-c95fa1f18681)
>> 2018-11-12 10:06:30,723-05 ERROR [org.ovirt.engine.core.bll.RunVmCommand]
>> (default task-3) [b8d11cb0-5be9-4b7e-b45a-c95fa1f18681] Can't find VDS to
>> run the VM '57a66eff-8cbf-4643-b045-43d4dda80c66' on, so this VM will not
>> be run.
>>
>> Is this related to
>> https://gerrit.ovirt.org/#/c/95310/
>> ?
>>
>>
>>
>> > >
>> > > > >
>> > > > >
>> > > > > Thanks,
>> > > > > Dafna
>> > > > >
>> > > > >
>> > > > >
>> > > > >
>> > > > > On Mon, Nov 12, 2018 at 9:23 AM Dominik Holler <
>> dhol...@redhat.com>
>> > > wrote:
>> > > > >
>> > > > >> On Sun, 11 Nov 2018 19:04:40 +0200
>> > > > >> Dan Kenigsberg  wrote:
>> > > > >>
>> > > > >> > On Sun, Nov 11, 2018 at 5:27 PM Eyal Edri 
>> > > wrote:
>> > > > >> > >
>> > > > >> > >
>> > > > >> > >
>> > > > >> > > On Sun, Nov 11, 2018 at 5:24 PM Eyal Edri 
>>
>> > > wrote:
>> > > > >> > >>
>> > > > >> > >>
>> > > > >> > >>
>> > > > >> > >> On Sun, Nov 11, 2018 at 5:20 PM Dan Kenigsberg <
>> > > dan...@redhat.com>
>> > > > >> wrote:
>> > > > >> > >>>
>> > > > >> > >>> On Sun, Nov 11, 2018 

[ovirt-devel] Re: [CentOS-devel] SIGs: Possibility to drop EOL content at 7.6.1810 release time

2018-11-13 Thread Sandro Bonazzola
On Tue, Nov 13, 2018 at 11:24 Niels de Vos  wrote:

> On Fri, Nov 02, 2018 at 12:23:38PM +, Niels de Vos wrote:
> > On Fri, Nov 02, 2018 at 12:52:59PM +0100, Sandro Bonazzola wrote:
> > > On Fri, Nov 2, 2018 at 12:33 Niels de Vos  wrote:
> > >
> > > > On Thu, Nov 01, 2018 at 11:01:51AM +0200, Anssi Johansson wrote:
> > > > ...
> > > > > For reference and inspiration, here are some directories from
> > > > > mirror.centos.org, including both up-to-date content and
> potentially EOL
> > > > > content. SIGs should review the list to make sure these
> directories can
> > > > > be copied over to 7.6.1810 when that time comes. Making the
> decisions
> > > > > now would save a bit of time at 7.6.1810 release time.
> > > > >
> > > > ..
> > > >
> > > > > storage/x86_64/gluster-3.10
> > > > > storage/x86_64/gluster-3.12
> > > >
> > > > The above two can be dropped (also for CentOS 6). This includes the
> > > > centos-release-gluster310 and centos-release-gluster-312 packages in
> > > > Extras.
> > > >
> > > >
> > > Can we keep 3.12? It's still consumed by oVirt 4.2.
> > > Adding oVirt devel team, just in case we'll need urgently to rebase on
> > > newer Gluster on CentOS 7.6 GA or block upgrade to CentOS 7.6 from
> oVirt
> > > side till we figure out how to handle.
> >
> > 3.12 does not get any updates anymore... Ideally users do not consume
> > that version but have moved on to 4.0 or 4.1 already. Gluster 5 is
> > available and should get announced soon (not sure if that can happen
> > with the CentOS 7.6 release).
>
> Is it possible for us to remove the centos-release-gluster312 package
> from CentOS Extras, but keep the gluster-3.12 repository available for
> you to consume with oVirt 4.2? That would prevent users from installing
> an unmaintained version, but you can still download the packages.
>
> We also forcefully disable the unmaintained repositories (as they will
> get deleted later), but I can wait with that if it helps. This is done
> through
> https://github.com/CentOS-Storage-SIG/centos-release-gluster-legacy in
> case you're interested.
>
> So, what I need from you, is an estimation of dates of when the
> following actions can be done:
>
> 1. delete centos-release-gluster312 from CentOS Extras
>

I think this can be removed; we don't have a direct requirement on that
package.


> 2. remove the gluster-3.12 repository from the mirrors
>

I have started a discussion with the oVirt development team about switching to
a newer Gluster release; I need to get their feedback.


> 3. update centos-release-gluster-legacy with Gluster 3.12 deprecation
>

I think this can be updated while removing centos-release-gluster312 from
CentOS Extras.


>
> Thanks,
> Niels
>


-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/YQTD2TLQPV2F6UZZHB2NKOF37WUDGZTR/


[ovirt-devel] Re: [CentOS-devel] SIGs: Possibility to drop EOL content at 7.6.1810 release time

2018-11-13 Thread Niels de Vos
On Fri, Nov 02, 2018 at 12:23:38PM +, Niels de Vos wrote:
> On Fri, Nov 02, 2018 at 12:52:59PM +0100, Sandro Bonazzola wrote:
> > On Fri, Nov 2, 2018 at 12:33 Niels de Vos  wrote:
> > 
> > > On Thu, Nov 01, 2018 at 11:01:51AM +0200, Anssi Johansson wrote:
> > > ...
> > > > For reference and inspiration, here are some directories from
> > > > mirror.centos.org, including both up-to-date content and potentially EOL
> > > > content. SIGs should review the list to make sure these directories can
> > > > be copied over to 7.6.1810 when that time comes. Making the decisions
> > > > now would save a bit of time at 7.6.1810 release time.
> > > >
> > > ..
> > >
> > > > storage/x86_64/gluster-3.10
> > > > storage/x86_64/gluster-3.12
> > >
> > > The above two can be dropped (also for CentOS 6). This includes the
> > > centos-release-gluster310 and centos-release-gluster-312 packages in
> > > Extras.
> > >
> > >
> > Can we keep 3.12? It's still consumed by oVirt 4.2.
> > Adding oVirt devel team, just in case we'll need urgently to rebase on
> > newer Gluster on CentOS 7.6 GA or block upgrade to CentOS 7.6 from oVirt
> > side till we figure out how to handle.
> 
> 3.12 does not get any updates anymore... Ideally users do not consume
> that version but have moved on to 4.0 or 4.1 already. Gluster 5 is
> available and should get announced soon (not sure if that can happen
> with the CentOS 7.6 release).

Is it possible for us to remove the centos-release-gluster312 package
from CentOS Extras, but keep the gluster-3.12 repository available for
you to consume with oVirt 4.2? That would prevent users from installing
an unmaintained version, but you can still download the packages.

We also forcefully disable the unmaintained repositories (as they will
get deleted later), but I can hold off on that if it helps. This is done
through
https://github.com/CentOS-Storage-SIG/centos-release-gluster-legacy in
case you're interested.

So, what I need from you is an estimate of when the following actions
can be done:

1. delete centos-release-gluster312 from CentOS Extras
2. remove the gluster-3.12 repository from the mirrors
3. update centos-release-gluster-legacy with Gluster 3.12 deprecation

Thanks,
Niels
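If the centos-release-gluster312 package disappears from Extras while the repository stays on the mirrors, oVirt would need to ship or document a repo file by hand. A minimal sketch, assuming the baseurl follows the storage/x86_64/gluster-3.12 directory layout mentioned above; the file name, gpgcheck setting, and key handling are all assumptions to verify before use:

```ini
# /etc/yum.repos.d/centos-gluster312-legacy.repo  (hypothetical file name)
[centos-gluster312-legacy]
name=CentOS-7 - Gluster 3.12 (legacy, unmaintained)
baseurl=http://mirror.centos.org/centos/7/storage/$basearch/gluster-3.12/
enabled=1
# Key location not confirmed here; verify before enabling gpgcheck.
gpgcheck=0
```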
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/MPR3VQHGCXKLC4ST26RITLLETWGRUQUN/


[ovirt-devel] Re: GlusterFS rebase for oVirt 4.3

2018-11-13 Thread Niels de Vos
On Tue, Nov 13, 2018 at 10:01:15AM +0100, Sandro Bonazzola wrote:
> According to Sahina, we dropped the dependency on gluster-gnfs and it's safe
> to move on from the unsupported 3.12 version to a newer one, 4.1 or 5.0.
> 
> This is for getting an agreement on what we should require in oVirt 4.3:
> CentOS Storage SIG provides both 4.1 and 5 for x86_64[1] and ppc64le[2]
> 
> If I understood correctly, 5 ships glusterd2, which requires a
> significant effort to support, while 4.1 is still on glusterd, which
> should work with the current oVirt code.

Gluster 5 also still offers the traditional glusterd service. glusterd2
is available for both 4.1 and 5, but it is still opt-in.

HTH,
Niels


PS: Gluster 5 is not announced for CentOS yet; the
centos-release-gluster5 package is not yet publicly available.


> Sahina, can you give directions on what we should move to?
> 
> Thanks.
> 
> [1] http://mirror.centos.org/centos/7/storage/x86_64/
> [2] http://mirror.centos.org/altarch/7/storage/ppc64le/
> 
> -- 
> 
> SANDRO BONAZZOLA
> 
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> 
> Red Hat EMEA 
> 
> sbona...@redhat.com
> 

> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/IQYAWSN5KLO5OQJ4TT75BYB3GZFECVE2/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/GS6I45244RJAEQNDCHXDEXZYBPR6CMY7/


[ovirt-devel] Please Stop merging to ovirt-engine master branch

2018-11-13 Thread Dafna Ron
Hi,

Please stop merging to the ovirt-engine master branch, as we have been failing
on different regressions for the last 11 days.

I do not want any other regressions to be introduced until we can properly
catch them, which means we need to fix the current issues before merging
more changes.

Thanks,
Dafna
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/PNSIBWR6POPFCOCQDXQ4ZQT6K2TALMZ2/


[ovirt-devel] Re: Upstream package missing release suffix

2018-11-13 Thread Dan Kenigsberg
On Mon, Nov 12, 2018 at 2:46 PM Ales Musil  wrote:
>
> Hello,
>
> ovirt-lldp-labeler is missing the release suffix in the nightly build [1].
> This was fixed some time ago in [2]. Any idea what could be wrong?
>
> Thank you.
> Regards,
> Ales.
>
> [1] 
> https://plain.resources.ovirt.org/pub/ovirt-master-snapshot/rpm/el7/noarch/
> [2]  https://gerrit.ovirt.org/c/94616/

Congratulations, "ci re-merge" seems to have helped, [3] is pretty.

[3] 
https://plain.resources.ovirt.org/pub/ovirt-master-snapshot/rpm/el7/noarch/ovirt-lldp-labeler-1.0.1-0.20181112144328.git5de44ea.el7.noarch.rpm
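The filename in [3] shows the snapshot Release convention the fix restored: `0.<build timestamp>.git<short hash>` instead of a bare release number. A quick sanity check for that pattern could look like this (a sketch; the regex encodes only the convention visible in the filename above, not an official rule):

```python
import re

# Snapshot release suffix as seen in [3]: "-0." + 14-digit timestamp
# + ".git" + abbreviated commit hash, e.g. -0.20181112144328.git5de44ea.
SNAPSHOT_RELEASE = re.compile(r"-0\.\d{14}\.git[0-9a-f]+\.")

def has_snapshot_suffix(rpm_filename):
    """Return True if the RPM filename carries the snapshot release suffix."""
    return bool(SNAPSHOT_RELEASE.search(rpm_filename))

fixed = "ovirt-lldp-labeler-1.0.1-0.20181112144328.git5de44ea.el7.noarch.rpm"
print(has_snapshot_suffix(fixed))  # True
```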

>
> --
>
> ALES MUSIL
>
> Associate Software Engineer - rhv network
>
> Red Hat EMEA
>
>
> amu...@redhat.com   IM: amusil
>
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/XACU2LFOMFXCFQKMF6FS6YTF5F3QDL4M/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ZD7QTTKDFINTIWF3XJMPJCOMORYSFCL4/