[ovirt-devel] Re: [VDSM] Fedora rawhide status

2018-08-09 Thread Nir Soffer
On Thu, Aug 9, 2018 at 11:09 PM Nir Soffer  wrote:

> We did not update the Fedora rawhide for a few months because a package
> was missing. Now that this issue was solved, we have new errors:
>
>
> 1. async is a keyword in Python 3.7
>
>
> Compiling './tests/storage/fakesanlock.py'...
> ***   File "./tests/storage/fakesanlock.py", line 65
> async=False):
> ^
> SyntaxError: invalid syntax
>
>
> We have many of these. The issue is that the sanlock API uses the kwarg
> "async", and Python 3.7 makes this invalid syntax.
>
> $ ./python
> Python 3.7.0+ (heads/3.7:426135b674, Aug  9 2018, 22:50:16)
> [GCC 8.1.1 20180712 (Red Hat 8.1.1-5)] on linux
> Type "help", "copyright", "credits" or "license" for more information.
> >>> def foo(async=False):
>   File "", line 1
> def foo(async=False):
> ^
> SyntaxError: invalid syntax
> >>> async = True
>   File "", line 1
> async = True
>   ^
> SyntaxError: invalid syntax
>
> Thank you python developers for making our life more interesting :-)
>
> So we will have to change the sanlock Python binding to replace "async"
> with something else.
>
> I'll file a sanlock bug for this.
>
>
> 2. test_sourceroute_add_remove_and_read fails
>
> No idea why it fails, hopefully Dan or Edward have a clue.
>
>
> FAIL: test_sourceroute_add_remove_and_read 
> (network.sourceroute_test.TestSourceRoute)
> --
> Traceback (most recent call last):
>   File "/vdsm/tests/testValidation.py", line 193, in wrapper
> return f(*args, **kwargs)
>   File "/vdsm/tests/network/sourceroute_test.py", line 80, in 
> test_sourceroute_add_remove_and_read
> self.assertEqual(2, len(routes), routes)
> AssertionError: 2 != 0
>  >> begin captured logging << 
> 2018-08-09 19:36:31,446 DEBUG (MainThread) [root] /sbin/ip link add name dummy_UTpMy type dummy (cwd None) (cmdutils:151)
> 2018-08-09 19:36:31,455 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:159)
> 2018-08-09 19:36:31,456 DEBUG (netlink/events) [root] START thread <Thread(netlink/events, ...)> (func=<bound method Monitor._scan of <Monitor object at 0x7fa1b18dc6d0>>, args=(), kwargs={}) (concurrent:193)
> 2018-08-09 19:36:31,456 DEBUG (MainThread) [root] /sbin/ip link set dev dummy_UTpMy up (cwd None) (cmdutils:151)
> 2018-08-09 19:36:31,463 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:159)
> 2018-08-09 19:36:31,464 DEBUG (netlink/events) [root] FINISH thread <Thread(netlink/events, ...)> (concurrent:196)
> 2018-08-09 19:36:31,471 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:159)
> 2018-08-09 19:36:31,471 DEBUG (MainThread) [root] Adding source route for device dummy_UTpMy (sourceroute:195)
> 2018-08-09 19:36:31,472 DEBUG (MainThread) [root] /sbin/ip -4 route add 0.0.0.0/0 via 192.168.99.2 dev dummy_UTpMy table 3232260865 (cwd None) (cmdutils:151)
> 2018-08-09 19:36:31,478 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:159)
> 2018-08-09 19:36:31,479 DEBUG (MainThread) [root] /sbin/ip -4 route add 192.168.99.0/29 via 192.168.99.1 dev dummy_UTpMy table 3232260865 (cwd None) (cmdutils:151)
> 2018-08-09 19:36:31,485 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:159)
> 2018-08-09 19:36:31,485 DEBUG (MainThread) [root] /sbin/ip rule add from 192.168.99.0/29 prio 32000 table 3232260865 (cwd None) (cmdutils:151)
> 2018-08-09 19:36:31,492 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:159)
> 2018-08-09 19:36:31,492 DEBUG (MainThread) [root] /sbin/ip rule add from all to 192.168.99.0/29 dev dummy_UTpMy prio 32000 table 3232260865 (cwd None) (cmdutils:151)
> 2018-08-09 19:36:31,498 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:159)
> 2018-08-09 19:36:31,499 DEBUG (MainThread) [root] /sbin/ip rule (cwd None) (cmdutils:151)
> 2018-08-09 19:36:31,505 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:159)
> 2018-08-09 19:36:31,505 WARNING (MainThread) [root] Could not parse rule 32000: from all to 192.168.99.0 /29 iif dummy_d3SHQ [detached] lookup 3232260865 (iproute2:60)
> 2018-08-09 19:36:31,505 WARNING (MainThread) [root] Could not parse rule 32000: from all to 192.168.99.0 /29 iif dummy_d3SHQ [detached] lookup 3232260865 (iproute2:60)
> 2018-08-09 19:36:31,505 WARNING (MainThread) [root] Could not parse rule 32000: from all to 192.168.99.0 /29 iif dummy_UTpMy lookup 3232260865 (iproute2:60)
> 2018-08-09 19:36:31,506 DEBUG (MainThread) [root] /sbin/ip rule (cwd None) (cmdutils:151)
> 2018-08-09 19:36:31,512 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:159)
> 2018-08-09 19:36:31,512 WARNING (MainThread) [root] Could not parse rule 32000: from all to 192.168.99.0 /29 iif dummy_d3SHQ [detached] lookup 3232260865 (iproute2:60)
> 2018-08-09 19:36:31,512 WARNING (MainThread) [root] Could not parse rule 32000: from all to

[ovirt-devel] [VDSM] Fedora rawhide status

2018-08-09 Thread Nir Soffer
We did not update the Fedora rawhide for a few months because a package
was missing. Now that this issue was solved, we have new errors:


1. async is a keyword in Python 3.7


Compiling './tests/storage/fakesanlock.py'...
***   File "./tests/storage/fakesanlock.py", line 65
async=False):
^
SyntaxError: invalid syntax


We have many of these. The issue is that the sanlock API uses the kwarg
"async", and Python 3.7 makes this invalid syntax.

$ ./python
Python 3.7.0+ (heads/3.7:426135b674, Aug  9 2018, 22:50:16)
[GCC 8.1.1 20180712 (Red Hat 8.1.1-5)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> def foo(async=False):
  File "", line 1
def foo(async=False):
^
SyntaxError: invalid syntax
>>> async = True
  File "", line 1
async = True
  ^
SyntaxError: invalid syntax

Thank you python developers for making our life more interesting :-)

So we will have to change the sanlock Python binding to replace "async"
with something else.

I'll file a sanlock bug for this.
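
One possible shape of that change, as a hedged sketch (the new kwarg name
"wait" is an assumption, not the actual sanlock fix): accept the legacy
kwarg only through **kwargs, so the identifier "async" never appears in a
signature and the module still parses under Python 3.7:

    # Hypothetical sketch of a renamed binding API; "wait" is an assumed name.
    def add_lockspace(lockspace, host_id, path, offset=0, wait=True, **kwargs):
        # "async" is now a reserved word, but it is still a legal dict key,
        # so legacy callers can keep working by passing **{"async": ...}.
        if "async" in kwargs:
            wait = not kwargs.pop("async")
        if kwargs:
            raise TypeError("unexpected kwargs: %r" % kwargs)
        return "add_lockspace(wait=%s)" % wait

    # Old-style call, spelled so it still parses on Python 3.7:
    print(add_lockspace("ls", 1, "/rhev/leases", **{"async": True}))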


2. test_sourceroute_add_remove_and_read fails

No idea why it fails, hopefully Dan or Edward have a clue.


FAIL: test_sourceroute_add_remove_and_read
(network.sourceroute_test.TestSourceRoute)
--
Traceback (most recent call last):
  File "/vdsm/tests/testValidation.py", line 193, in wrapper
return f(*args, **kwargs)
  File "/vdsm/tests/network/sourceroute_test.py", line 80, in
test_sourceroute_add_remove_and_read
self.assertEqual(2, len(routes), routes)
AssertionError: 2 != 0
 >> begin captured logging << 
2018-08-09 19:36:31,446 DEBUG (MainThread) [root] /sbin/ip link add name dummy_UTpMy type dummy (cwd None) (cmdutils:151)
2018-08-09 19:36:31,455 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,456 DEBUG (netlink/events) [root] START thread <Thread(netlink/events, ...)> (func=<bound method Monitor._scan of <Monitor object at 0x7fa1b18dc6d0>>, args=(), kwargs={}) (concurrent:193)
2018-08-09 19:36:31,456 DEBUG (MainThread) [root] /sbin/ip link set dev dummy_UTpMy up (cwd None) (cmdutils:151)
2018-08-09 19:36:31,463 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,464 DEBUG (netlink/events) [root] FINISH thread <Thread(netlink/events, ...)> (concurrent:196)
2018-08-09 19:36:31,471 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,471 DEBUG (MainThread) [root] Adding source route for device dummy_UTpMy (sourceroute:195)
2018-08-09 19:36:31,472 DEBUG (MainThread) [root] /sbin/ip -4 route add 0.0.0.0/0 via 192.168.99.2 dev dummy_UTpMy table 3232260865 (cwd None) (cmdutils:151)
2018-08-09 19:36:31,478 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,479 DEBUG (MainThread) [root] /sbin/ip -4 route add 192.168.99.0/29 via 192.168.99.1 dev dummy_UTpMy table 3232260865 (cwd None) (cmdutils:151)
2018-08-09 19:36:31,485 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,485 DEBUG (MainThread) [root] /sbin/ip rule add from 192.168.99.0/29 prio 32000 table 3232260865 (cwd None) (cmdutils:151)
2018-08-09 19:36:31,492 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,492 DEBUG (MainThread) [root] /sbin/ip rule add from all to 192.168.99.0/29 dev dummy_UTpMy prio 32000 table 3232260865 (cwd None) (cmdutils:151)
2018-08-09 19:36:31,498 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,499 DEBUG (MainThread) [root] /sbin/ip rule (cwd None) (cmdutils:151)
2018-08-09 19:36:31,505 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,505 WARNING (MainThread) [root] Could not parse rule 32000: from all to 192.168.99.0 /29 iif dummy_d3SHQ [detached] lookup 3232260865 (iproute2:60)
2018-08-09 19:36:31,505 WARNING (MainThread) [root] Could not parse rule 32000: from all to 192.168.99.0 /29 iif dummy_d3SHQ [detached] lookup 3232260865 (iproute2:60)
2018-08-09 19:36:31,505 WARNING (MainThread) [root] Could not parse rule 32000: from all to 192.168.99.0 /29 iif dummy_UTpMy lookup 3232260865 (iproute2:60)
2018-08-09 19:36:31,506 DEBUG (MainThread) [root] /sbin/ip rule (cwd None) (cmdutils:151)
2018-08-09 19:36:31,512 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,512 WARNING (MainThread) [root] Could not parse rule 32000: from all to 192.168.99.0 /29 iif dummy_d3SHQ [detached] lookup 3232260865 (iproute2:60)
2018-08-09 19:36:31,512 WARNING (MainThread) [root] Could not parse rule 32000: from all to 192.168.99.0 /29 iif dummy_d3SHQ [detached] lookup 3232260865 (iproute2:60)
2018-08-09 19:36:31,513 WARNING (MainThread) [root] Could not parse rule 32000: from all to 192.168.99.0 /29 iif dummy_UTpMy lookup 3232260865 (iproute2:60)
2018-08-09 19:36:31,513 DEBUG (MainThread) [root] Removing source route for device dummy_UTpMy (sourceroute:215)
2018-08-09 19:36:31,513 DEBUG (MainThread) [root] /sbin/ip link del dev
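
One hint from the captured log above: the "Could not parse rule" warnings
from iproute2:60 show the prefix printed with a space ("192.168.99.0 /29"),
so a whitespace-split parser would see the network and the prefix length as
separate tokens and skip the rule, which might explain why the test reads
back nothing. A minimal sketch of tolerating that output (hypothetical, not
VDSM's actual iproute2 helper):

    # Hypothetical sketch: re-join a bare "/NN" token with the address
    # before it, so "192.168.99.0 /29" parses like "192.168.99.0/29".
    def parse_rule(line):
        prio, rest = line.split(":", 1)
        tokens = []
        for tok in rest.split():
            if tok.startswith("/") and tokens:
                tokens[-1] += tok
            else:
                tokens.append(tok)
        return int(prio), tokens

    print(parse_rule(
        "32000: from all to 192.168.99.0 /29 iif dummy_UTpMy lookup 3232260865"))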

[ovirt-devel] Re: repoman glitch?

2018-08-09 Thread Nir Soffer
On Thu, Aug 9, 2018 at 10:10 PM Dan Kenigsberg  wrote:

> Thanks, Barak, for finding out my PEBCAK, and sorry for the noise.
>
> This might be a very good moment to request a "ci please ost" command which
> would build a project and, on success, shoot it into OST. This is a
> very typical use case, which does not seem terribly hard to implement
> with a jenkins job, and would make life easier for me and the other
> developers who love and depend on oVirt CI.
>

Yes please!


>
> On Thu, Aug 9, 2018 at 1:36 PM, Barak Korren  wrote:
> > I found out what happened here in the 1st run.
> >
> > The build job started at 18:41:26 and finished at 18:48:19, while
> > artifacts were archived at 18:48:18.
> >
> > The test job started at 18:44:13 and finished at 19:24:10; it had
> > reached the point of trying to download the RPMs at 18:46:36, almost
> > two minutes before they actually became available.
> >
> > (All times are in UTC)
> >
> > Dan, you need to wait for the build job to finish before you can launch
> the
> > test job...
> >
> >
> >
> > On 9 August 2018 at 12:46, Dan Kenigsberg  wrote:
> >>
> >> On Thu, Aug 9, 2018 at 12:41 PM, Anton Marchukov 
> >> wrote:
> >> > Hello Barak, Dan.
> >> >
> >> > Repoman indeed expects a link to the jenkins job only and cannot work
> >> > with a specific artifact path. So I think the last rerun [1] with just
> >> >
> >> >
> http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
> >> > worked on the repoman side. As I see from the lago log, the artifacts were
> >> > detected and downloaded:
> >> >
> >> > 2018-08-08 19:58:14,067::INFO::root::Saving
> >> >
> >> >
> /dev/shm/ost/deployment-network-suite-4.2/default/internal_repo/default/el7/x86_64/vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm
> >> > 2018-08-08 19:58:14,068::INFO::root::Saving
> >> >
> >> >
> /dev/shm/ost/deployment-network-suite-4.2/default/internal_repo/default/el7/noarch/vdsm-api-4.20.36-11.git9f9bbcc.el7.noarch.rpm
> >> > …
> >> >
> >> > That matches the artifact names produced by the job Dan passed as the
> >> > parameter:
> >> >
> >> >
> >> > vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm
> >> > vdsm-api-4.20.36-11.git9f9bbcc.el7.noarch.rpm
> >> > ...
> >> >
> >> >
> >> > [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/3054/
> >>
> >> Darn, you are right. The second job did take the correct vdsm. It
> >> failed due to a production bug that we need to fix.
> >>
> >> >
> >> >
> >> > On 9 August 2018 at 09:25:40, Dan Kenigsberg (dan...@redhat.com)
> wrote:
> >> >> On Thu, Aug 9, 2018 at 8:29 AM, Barak Korren wrote:
> >> >> >
> >> >> >
> >> >> > On 8 August 2018 at 22:53, Dan Kenigsberg wrote:
> >> >> >>
> >> >> >> I've executed
> >> >> >>
> >> >> >>
> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/3053/parameters/
> >> >> >> using
> >> >> >>
> >> >> >>
> http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/artifact/exported-artifacts/
> >> >> as custom repo.
> >> >> >>
> >> >> >> The custom repo has vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm
> which
> >> >> >> I
> >> >> >> expected would be pulled onto ost hosts. However
> >> >> >>
> >> >> >>
> >> >> >>
> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/3053/artifact/exported-artifacts/tests.test_vm_operations/lago-network-suite-4-2-host-0/_var_log/yum.log
> >> >> >> shows that this was not the case.
> >> >> >>
> >> >> >> Any idea why is that?
> >> >> >
> >> >> >
> >> >> >
> >> >> > I can see the following in lago.log (in the section that includes
> the
> >> >> > repoman log):
> >> >> >
> >> >> > 2018-08-08 18:47:02,357::INFO::repoman.common.repo::Resolving
> >> >> > artifact
> >> >> > source
> >> >> >
> >> >> >
> http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
> >> >> > 2018-08-08
> >> >> > 18:47:02,493::INFO::repoman.common.sources.jenkins::Parsing
> >> >> > jenkins URL:
> >> >> >
> >> >> >
> http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
> >> >> > 2018-08-08 18:47:02,493::WARNING::root:: No artifacts found
> >> >> > 2018-08-08 18:47:02,493::INFO::root:: Done
> >> >> >
> >> >> >
> >> >> > The fact that the log says 'Parsing jenkins URL' means that repoman
> >> >> > properly
> >> >> > detects that it is a URL to a Jenkins build, additionally when I
> run
> >> >> > the
> >> >> > following locally it seems to download the packages just fine:
> >> >> >
> >> >> > repoman ~/tmp/repo add
> >> >> >
> >> >> >
> http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
> >> >> >
> >> >> > So this looks like a repoman bug. Adding Anton.
> >> >> >
> >> >> > @Dan - can you just retry?
> >> >>
> >> >> I did try again, in
> >> >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/3054 which
> >> >> failed again.
> >> >> However, this time it has an empty lago.log.
> >> >>
> >> >> >
> >> >> >
> >> >> >>
> >> >> >> ___
> >> >> >> Devel mailing list -- devel@ovirt.org
> >> >> >> To unsubscribe 

[ovirt-devel] Re: repoman glitch?

2018-08-09 Thread Dan Kenigsberg
Thanks, Barak, for finding out my PEBCAK, and sorry for the noise.

This might be a very good moment to request a "ci please ost" command which
would build a project and, on success, shoot it into OST. This is a
very typical use case, which does not seem terribly hard to implement
with a jenkins job, and would make life easier for me and the other
developers who love and depend on oVirt CI.

On Thu, Aug 9, 2018 at 1:36 PM, Barak Korren  wrote:
> I found out what happened here in the 1st run.
>
> The build job started at 18:41:26 and finished at 18:48:19, while artifacts
> were archived at 18:48:18.
>
> The test job started at 18:44:13 and finished at 19:24:10; it had reached
> the point of trying to download the RPMs at 18:46:36, almost two minutes
> before they actually became available.
>
> (All times are in UTC)
>
> Dan, you need to wait for the build job to finish before you can launch the
> test job...
>
>
>
> On 9 August 2018 at 12:46, Dan Kenigsberg  wrote:
>>
>> On Thu, Aug 9, 2018 at 12:41 PM, Anton Marchukov 
>> wrote:
>> > Hello Barak, Dan.
>> >
>> > Repoman indeed expects a link to the jenkins job only and cannot work
>> > with a specific artifact path. So I think the last rerun [1] with just
>> >
>> > http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
>> > worked on the repoman side. As I see from the lago log, the artifacts were
>> > detected and downloaded:
>> >
>> > 2018-08-08 19:58:14,067::INFO::root::Saving
>> >
>> > /dev/shm/ost/deployment-network-suite-4.2/default/internal_repo/default/el7/x86_64/vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm
>> > 2018-08-08 19:58:14,068::INFO::root::Saving
>> >
>> > /dev/shm/ost/deployment-network-suite-4.2/default/internal_repo/default/el7/noarch/vdsm-api-4.20.36-11.git9f9bbcc.el7.noarch.rpm
>> > …
>> >
>> > That matches the artifact names produced by the job Dan passed as the
>> > parameter:
>> >
>> >
>> > vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm
>> > vdsm-api-4.20.36-11.git9f9bbcc.el7.noarch.rpm
>> > ...
>> >
>> >
>> > [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/3054/
>>
>> Darn, you are right. The second job did take the correct vdsm. It
>> failed due to a production bug that we need to fix.
>>
>> >
>> >
>> > On 9 August 2018 at 09:25:40, Dan Kenigsberg (dan...@redhat.com) wrote:
>> >> On Thu, Aug 9, 2018 at 8:29 AM, Barak Korren wrote:
>> >> >
>> >> >
>> >> > On 8 August 2018 at 22:53, Dan Kenigsberg wrote:
>> >> >>
>> >> >> I've executed
>> >> >>
>> >> >> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/3053/parameters/
>> >> >> using
>> >> >>
>> >> >> http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/artifact/exported-artifacts/
>> >> >> as custom repo.
>> >> >>
>> >> >> The custom repo has vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm which
>> >> >> I
>> >> >> expected would be pulled onto ost hosts. However
>> >> >>
>> >> >>
>> >> >> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/3053/artifact/exported-artifacts/tests.test_vm_operations/lago-network-suite-4-2-host-0/_var_log/yum.log
>> >> >> shows that this was not the case.
>> >> >>
>> >> >> Any idea why is that?
>> >> >
>> >> >
>> >> >
>> >> > I can see the following in lago.log (in the section that includes the
>> >> > repoman log):
>> >> >
>> >> > 2018-08-08 18:47:02,357::INFO::repoman.common.repo::Resolving
>> >> > artifact
>> >> > source
>> >> >
>> >> > http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
>> >> > 2018-08-08
>> >> > 18:47:02,493::INFO::repoman.common.sources.jenkins::Parsing
>> >> > jenkins URL:
>> >> >
>> >> > http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
>> >> > 2018-08-08 18:47:02,493::WARNING::root:: No artifacts found
>> >> > 2018-08-08 18:47:02,493::INFO::root:: Done
>> >> >
>> >> >
>> >> > The fact that the log says 'Parsing jenkins URL' means that repoman
>> >> > properly
>> >> > detects that it is a URL to a Jenkins build, additionally when I run
>> >> > the
>> >> > following locally it seems to download the packages just fine:
>> >> >
>> >> > repoman ~/tmp/repo add
>> >> >
>> >> > http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
>> >> >
>> >> > So this looks like a repoman bug. Adding Anton.
>> >> >
>> >> > @Dan - can you just retry?
>> >>
>> >> I did try again, in
>> >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/3054 which
>> >> failed again.
>> >> However, this time it has an empty lago.log.
>> >>
>> >> >
>> >> >
>> >> >>
>> >> >> ___
>> >> >> Devel mailing list -- devel@ovirt.org
>> >> >> To unsubscribe send an email to devel-le...@ovirt.org
>> >> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> >> >> oVirt Code of Conduct:
>> >> >> https://www.ovirt.org/community/about/community-guidelines/
>> >> >> List Archives:
>> >> >>
>> >> >> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/PQHTXDZ6SLWI53FRHIOE5HDUI5ZBM4Z6/
>> >> >
>> >> >

[ovirt-devel] Re: [EXTERNAL] Re: Veritas: Image Transfer Finalize Call Failure

2018-08-09 Thread Ketan Pachpande
Thanks Nir,
I figured out that there was a mismatch in the disk format while uploading.
I corrected it and it started working as expected. The finalize call is
successful now.

About using the SDK: for now we are bound to using curl from C++, because of
our own limitations around using Python on the NetBackup server.
But we will definitely consider your input about curl being less efficient
and less secure.
Also, we will use transfer_url instead of proxy_url.

Thanks,
Ketan Pachpande

From: Nir Soffer 
Sent: 09 August 2018 23:24
To: Pavan Chavva 
Cc: devel ; Adelino Barbosa ; Navin Tah 
; Mahesh Falmari ; Yaniv 
Lavi (Dary) ; Sudhakar Paulzagade 
; Ketan Pachpande 
; Suchitra Herwadkar 
; Abhay Marode ; 
Daniel Erez 
Subject: [EXTERNAL] Re: [ovirt-devel] Veritas: Image Transfer Finalize Call 
Failure

On Thu, Aug 9, 2018 at 6:02 PM Pavan Chavva wrote:
I have a question regarding the imagetransfer finalize call.
After the imagetransfer upload, when I call finalize on the transfer, I am
getting a Finalize Failure error.

I am following these steps to upload a disk via rest API.

  1.  Create disk on a storage domain (POST 
https:///ovirt-engine/api/disks)
Why not use the SDK?


  2.  Initiate imagetransfer and get the proxy_url and signed_ticket
(https:///ovirt-engine/api/imagetransfers)
Why are you using proxy_url? The recommended way is to use transfer_url, so
you upload directly to the host. The only reason for using the proxy URL is
not being able to access the host from the machine doing the upload.


  3.  Upload data using curl (to the proxy URL)
[image003.jpg]
curl is less efficient than imageio's upload_disk.py from the SDK. Also,
using curl you are exposing the transfer id on the command line, so every
process running on the machine doing the upload can steal that id and access
the image. The proxy_url and transfer_url are sensitive and should not be
exposed on the command line.
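
For reference, a minimal sketch of the recommended SDK flow (not a drop-in
script: the authoritative version is the upload_disk.py example shipped with
the Python SDK, and the engine URL, credentials, disk id and file name below
are placeholders):

    import time
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types
    import requests

    conn = sdk.Connection(url="https://engine.example.com/ovirt-engine/api",
                          username="admin@internal", password="secret",
                          ca_file="ca.pem")
    transfers = conn.system_service().image_transfers_service()
    # Start a transfer for an existing disk (created via disks_service().add()).
    transfer = transfers.add(types.ImageTransfer(image=types.Image(id="disk-uuid")))
    ts = transfers.image_transfer_service(transfer.id)
    while transfer.phase == types.ImageTransferPhase.INITIALIZING:
        time.sleep(1)
        transfer = ts.get()
    # transfer_url points directly at the imageio daemon on the host;
    # this assumes the host certificate is signed by the engine CA.
    with open("disk.raw", "rb") as f:
        requests.put(transfer.transfer_url, data=f, verify="ca.pem")
    ts.finalize()
    conn.close()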


  4.  Finalize the transfer:
[image005.jpg]
[image009.jpg]

After that the disk is getting deleted automatically.
Sequence of events in the events tab:
[image010.png]

Is this the expected behaviour when the imagetransfer finalize fails?

Yes, if there was an issue with the upload.

A possible issue may be a mismatch between the created disk and the actual
data uploaded.
For example, an incorrect backing file in qcow2 format, or an incorrect
format, like uploading a raw file to a qcow2 image.

If so, how to troubleshoot and get the reason for the finalize failure?

Without engine, vdsm, proxy and daemon logs we cannot tell.

Please share:
engine host:
/var/log/ovirt-engine/engine.log
/var/log/ovirt-imageio-proxy/image-proxy.log

host performing the upload:
/var/log/vdsm/vdsm.log
/var/log/ovirt-imageio-daemon/daemon.log

Make sure that all logs include the transfer id - you can find it in the
proxy_url and transfer_url:
https://server:port/images/transfer-id

Nir
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/2KHDCFIL4KW5IG7WVDNA7WNZJNDT2PUG/


[ovirt-devel] Re: [EXTERNAL] Re: Veritas: Image Transfer Finalize Call Failure

2018-08-09 Thread Nir Soffer
On Thu, Aug 9, 2018 at 9:10 PM Ketan Pachpande 
wrote:

> Thanks Nir,
>
> I figured out that there was a mismatch in the disk format while uploading.
> I corrected it and it started working as expected. The finalize call is
> successful now.
>
>
>
> About using the SDK: for now we are bound to using curl from C++, because
> of our own limitations around using Python on the NetBackup server.
>

Maybe use libcurl?
https://curl.haxx.se/libcurl/c/example.html


> But we will definitely consider your input about curl being less
> efficient and less secure.
>
> Also, we will use transfer_url instead of proxy_url.
>
>
>
> Thanks,
>
> Ketan Pachpande
>
>
>
> *From:* Nir Soffer 
> *Sent:* 09 August 2018 23:24
> *To:* Pavan Chavva 
> *Cc:* devel ; Adelino Barbosa ;
> Navin Tah ; Mahesh Falmari <
> mahesh.falm...@veritas.com>; Yaniv Lavi (Dary) ;
> Sudhakar Paulzagade ; Ketan Pachpande <
> ketan.pachpa...@veritas.com>; Suchitra Herwadkar <
> suchitra.herwad...@veritas.com>; Abhay Marode ;
> Daniel Erez 
> *Subject:* [EXTERNAL] Re: [ovirt-devel] Veritas: Image Transfer Finalize
> Call Failure
>
>
>
> On Thu, Aug 9, 2018 at 6:02 PM Pavan Chavva  wrote:
>
> I have a question regarding the imagetransfer finalize call.
>
> After the imagetransfer upload, when I call finalize on the transfer, I am
> getting a Finalize Failure error.
>
>
>
> I am following these steps to upload a disk via rest API.
>
>1. Create disk on a storage domain (POST
>https:///ovirt-engine/api/disks)
>
> Why not use the SDK?
>
>
>
>
>    2. Initiate imagetransfer and get the proxy_url and signed_ticket
>(https://  /ovirt-engine/api/imagetransfers)
>
> Why are you using proxy_url? The recommended way is to use transfer_url,
> so you upload directly to the host. The only reason for using the proxy
> URL is not being able to access the host from the machine doing the upload.
>
>
>
>
>    3. Upload data using curl (to the proxy URL)
>
> [image: image003.jpg]
>
> curl is less efficient than imageio's upload_disk.py from the SDK. Also,
> using curl you are exposing the transfer id on the command line, so every
> process running on the machine doing the upload can steal that id and
> access the image. The proxy_url and transfer_url are sensitive and should
> not be exposed on the command line.
>
>
>
>
>    4. Finalize the transfer:
>
> [image: image001.jpg]
>
> [image: image002.jpg]
>
> After that the disk is getting deleted automatically.
>
> Sequence of events in the events tab:
>
> [image: image003.png]
>
> Is this the expected behaviour when the imagetransfer finalize fails?
>
>
>
> Yes, if there was an issue with the upload.
>
>
>
> A possible issue may be a mismatch between the created disk and the actual
> data uploaded.
>
> For example, an incorrect backing file in qcow2 format, or an incorrect
> format, like uploading a raw file to a qcow2 image.
>
>
>
> If so, how to troubleshoot and get the reason for the finalize failure?
>
>
>
> Without engine, vdsm, proxy and daemon logs we cannot tell.
>
>
>
> Please share:
>
> engine host:
>
> /var/log/ovirt-engine/engine.log
>
> /var/log/ovirt-imageio-proxy/image-proxy.log
>
>
>
> host performing the upload:
>
> /var/log/vdsm/vdsm.log
>
> /var/log/ovirt-imageio-daemon/daemon.log
>
>
>
> Make sure that all logs include the transfer id - you can find it in the
> proxy_url and transfer_url:
>
> https://server:port/images/transfer-id
>
>
>
> Nir
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/64MLY7XEIESXAYKDTFYVGBL3AZTC5C6G/


[ovirt-devel] Re: Veritas: Image Transfer Finalize Call Failure

2018-08-09 Thread Nir Soffer
On Thu, Aug 9, 2018 at 6:02 PM Pavan Chavva  wrote:

> I have a question regarding the imagetransfer finalize call.
>
> After the imagetransfer upload, when I call finalize on the transfer, I am
> getting a Finalize Failure error.
>
>
>
> I am following these steps to upload a disk via rest API.
>
>1. Create disk on a storage domain (POST https://
>/ovirt-engine/api/disks)
>
> Why not use the SDK?


>
>    2. Initiate imagetransfer and get the proxy_url and signed_ticket
>(https://  /ovirt-engine/api/imagetransfers)
>
Why are you using proxy_url? The recommended way is to use transfer_url, so
you upload directly to the host. The only reason for using the proxy URL is
not being able to access the host from the machine doing the upload.


>
>    3. Upload data using curl (to the proxy URL)
>
> [image: image003.jpg]
>
curl is less efficient than imageio's upload_disk.py from the SDK. Also,
using curl you are exposing the transfer id on the command line, so every
process running on the machine doing the upload can steal that id and access
the image. The proxy_url and transfer_url are sensitive and should not be
exposed on the command line.


>
>    4. Finalize the transfer:
>
> [image: image005.jpg]
>
> [image: image009.jpg]
>
> After that the disk is getting deleted automatically.
>
> Sequence of events in the events tab:
>
> [image: image010.png]
>
> Is this the expected behaviour when the imagetransfer finalize fails?
>

Yes, if there was an issue with the upload.

A possible issue may be a mismatch between the created disk and the actual
data uploaded.
For example, an incorrect backing file in qcow2 format, or an incorrect
format, like uploading a raw file to a qcow2 image.


> If so, how to troubleshoot and get the reason for the finalize failure?
>

Without engine, vdsm, proxy and daemon logs we cannot tell.

Please share:
engine host:
/var/log/ovirt-engine/engine.log
/var/log/ovirt-imageio-proxy/image-proxy.log

host performing the upload:
/var/log/vdsm/vdsm.log
/var/log/ovirt-imageio-daemon/daemon.log

Make sure that all logs include the transfer id - you can find it in the
proxy_url and transfer_url:
https://server:port/images/transfer-id

Nir
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ECVQZY3LC7KYM47E2VZMR575LHQA3E33/


[ovirt-devel] Veritas: Image Transfer Finalize Call Failure

2018-08-09 Thread Pavan Chavva
Hi Team,

Can anyone help answer this question?

Best,
Pavan.

-- Forwarded message -
From: Ketan Pachpande 
Date: Thu, Aug 9, 2018 at 9:44 AM
Subject: RE: [EXTERNAL] Updated invitation: RHV- Veritas Netbackup Weekly
Sync (Tentative) @ Weekly from 10am to 10:30am on Thursday (EDT) (
ketan.pachpa...@veritas.com)
To: pcha...@redhat.com , Abhay Marode <
abhay.mar...@veritas.com>, Suchitra Herwadkar <
suchitra.herwad...@veritas.com>, Mahesh Falmari ,
Sudhakar Paulzagade , Navin Tah <
navin@veritas.com>
Cc: yd...@redhat.com , adbar...@redhat.com <
adbar...@redhat.com>


Hi Pavan,

I have a question regarding the imagetransfer finalize call.

After the imagetransfer upload, when I call finalize on the transfer, I am
getting a Finalize Failure error.



I am following these steps to upload a disk via rest API.

   1. Create disk on a storage domain (POST https://
   /ovirt-engine/api/disks)
   2. Initiate imagetransfer and get the proxy_url and signed_ticket
   (https://  /ovirt-engine/api/imagetransfers)
   3. Upload data using curl (to proxy URL)


   4. Finalize the transfer:

After that the disk is getting deleted automatically.



Sequence of events in events tab:



Is this the expected behaviour when the imagetransfer finalize fails?

If so, how to troubleshoot and get the reason for the finalize failure?





Thanks,

Ketan Pachpande
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/2IVQBTYIGDLH5ICY4DGPXE5IRP2SKE52/


[ovirt-devel] ovirt-engine has been tagged (ovirt-engine-4.2.5.3)

2018-08-09 Thread Sandro Bonazzola
-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/4Z3U4WLANYFQLX5WCGRQJDD76KEJHIAA/


[ovirt-devel] Re: repoman glitch?

2018-08-09 Thread Barak Korren
I found out what happened here in the 1st run.

The build job started at *18:41:26* and finished at *18:48:19*, while
artifacts were archived at *18:48:18*.

The test job started at *18:44:13* and finished at *19:24:10*; it had
reached the point of trying to download the RPMs at *18:46:36*, almost
two minutes before they actually became available.

(All times are in UTC)

Dan, you need to wait for the build job to finish before you can launch the
test job...
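
If you want to script that wait, the standard Jenkins JSON API is enough.
A minimal sketch (a hypothetical helper, not an existing CI command):

    import time
    import requests

    BUILD = "http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/"

    def wait_for_build(build_url, poll=30):
        # Poll Jenkins until the build stops running, then return its result.
        while True:
            info = requests.get(build_url + "api/json").json()
            if not info["building"]:
                return info["result"]  # "SUCCESS", "FAILURE", ...
            time.sleep(poll)

    if wait_for_build(BUILD) == "SUCCESS":
        print("safe to pass this URL to ovirt-system-tests_manual")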



On 9 August 2018 at 12:46, Dan Kenigsberg  wrote:

> On Thu, Aug 9, 2018 at 12:41 PM, Anton Marchukov 
> wrote:
> > Hello Barak, Dan.
> >
> > Repoman indeed expects a link to the jenkins job only and cannot work
> > with a specific artifact path. So I think the last rerun [1] with just
> > http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
> > worked on the repoman side. As I see from the lago log, the artifacts were
> > detected and downloaded:
> >
> > 2018-08-08 19:58:14,067::INFO::root::Saving
> > /dev/shm/ost/deployment-network-suite-4.2/default/internal_repo/default/el7/x86_64/vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm
> > 2018-08-08 19:58:14,068::INFO::root::Saving
> > /dev/shm/ost/deployment-network-suite-4.2/default/internal_repo/default/el7/noarch/vdsm-api-4.20.36-11.git9f9bbcc.el7.noarch.rpm
> > …
> >
> > That matches the artifact names produced by the job Dan passed as the
> > parameter:
> >
> >
> > vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm
> > vdsm-api-4.20.36-11.git9f9bbcc.el7.noarch.rpm
> > ...
> >
> >
> > [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/3054/
>
> Darn, you are right. The second job did take the correct vdsm. It
> failed due to a production bug that we need to fix.
>
> >
> >
> > On 9 August 2018 at 09:25:40, Dan Kenigsberg (dan...@redhat.com) wrote:
> >> On Thu, Aug 9, 2018 at 8:29 AM, Barak Korren wrote:
> >> >
> >> >
> >> > On 8 August 2018 at 22:53, Dan Kenigsberg wrote:
> >> >>
> >> >> I've executed
> >> >> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/3053/parameters/
> >> >> using
> >> >> http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/artifact/exported-artifacts/
> >> >> as custom repo.
> >> >>
> >> >> The custom repo has vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm which I
> >> >> expected would be pulled onto ost hosts. However
> >> >>
> >> >> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/3053/artifact/exported-artifacts/tests.test_vm_operations/lago-network-suite-4-2-host-0/_var_log/yum.log
> >> >> shows that this was not the case.
> >> >>
> >> >> Any idea why is that?
> >> >
> >> >
> >> >
> >> > I can see the following in lago.log (in the section that includes the
> >> > repoman log):
> >> >
> >> > 2018-08-08 18:47:02,357::INFO::repoman.common.repo::Resolving artifact source
> >> > http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
> >> > 2018-08-08 18:47:02,493::INFO::repoman.common.sources.jenkins::Parsing jenkins URL:
> >> > http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
> >> > 2018-08-08 18:47:02,493::WARNING::root:: No artifacts found
> >> > 2018-08-08 18:47:02,493::INFO::root:: Done
> >> >
> >> >
> >> > The fact that the log says 'Parsing jenkins URL' means that repoman
> >> > properly detects that it is a URL to a Jenkins build; additionally,
> >> > when I run the following locally it seems to download the packages just fine:
> >> >
> >> > repoman ~/tmp/repo add
> >> > http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
> >> >
> >> > So this looks like a repoman bug. Adding Anton.
> >> >
> >> > @Dan - can you just retry?
> >>
> >> I did try again, in
> >> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/3054 which
> >> failed again.
> >> However, this time it has an empty lago.log.
> >>
> >> >
> >> >
> >> >>
> >> >> ___
> >> >> Devel mailing list -- devel@ovirt.org
> >> >> To unsubscribe send an email to devel-le...@ovirt.org
> >> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >> >> oVirt Code of Conduct:
> >> >> https://www.ovirt.org/community/about/community-guidelines/
> >> >> List Archives:
> >> >> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/PQHTXDZ6SLWI53FRHIOE5HDUI5ZBM4Z6/
> >> >
> >> >
> >> >
> >> >
> >> > --
> >> > Barak Korren
> >> > RHV DevOps team , RHCE, RHCi
> >> > Red Hat EMEA
> >> > redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
> >>
>



-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-devel] Re: [ OST Failure Report ] [ oVirt 4.2 (imgbased) ] [ 09-08-2018 ] [ 004_basic_sanity.check_snapshot_with_memory ]

2018-08-09 Thread Yuval Turgeman
Imgbased runs on ovirt-node-ng, which is not part of the basic-suite
afaik...

On Thu, Aug 9, 2018, 09:25 Dafna Ron  wrote:

> Hi,
>
> We have a failure in 4.2 which I think may be related to the patch itself.
> Jira opened with additional info:
> https://ovirt-jira.atlassian.net/browse/OVIRT-2418
>
> Link and headline of suspected patches:
> We failed patch https://gerrit.ovirt.org/#/c/93545/ - core: remove lvs
> after a failed upgrade on ovirt-4.2
>
> Link to Job:
> https://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/2819/
>
> Link to all logs:
> https://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/2819/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-4.2/post-004_basic_sanity.py/
>
> (Relevant) error snippet from the log:
> Error Message
>
> Fault reason is "Operation Failed". Fault detail is "[Cannot run VM. Low disk 
> space on Storage Domain iscsi.]". HTTP response code is 409.
>
> Stacktrace
>
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
> testMethod()
>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
> self.test(*self.arg)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 142, in 
> wrapped_test
> test()
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in 
> wrapper
> return func(get_test_prefix(), *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 79, in 
> wrapper
> prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
>   File 
> "/home/jenkins/workspace/ovirt-4.2_change-queue-tester/ovirt-system-tests/basic-suite-4.2/test-scenarios/004_basic_sanity.py",
>  line 589, in check_snapshot_with_memory
> vm_service.start()
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line 
> 30074, in start
> return self._internal_action(action, 'start', None, headers, query, wait)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 299, 
> in _internal_action
> return future.wait() if wait else future
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 55, in 
> wait
> return self._code(response)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 296, 
> in callback
> self._check_fault(response)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 134, 
> in _check_fault
> self._raise_error(response, body.fault)
>   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 118, 
> in _raise_error
> raise error
> Error: Fault reason is "Operation Failed". Fault detail is "[Cannot run VM. 
> Low disk space on Storage Domain iscsi.]". HTTP response code is 409.
>
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/HBUDCEWRNTLZ5IOVXGLFFV55VJD3XX4S/


[ovirt-devel] Re: repoman glitch?

2018-08-09 Thread Dan Kenigsberg
On Thu, Aug 9, 2018 at 12:41 PM, Anton Marchukov  wrote:
> Hello Barak, Dan.
>
> Repoman indeed expects a link to the jenkins job only and cannot work
> with a specific artifact path. So I think the last rerun [1] with just
> http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
> worked on the repoman side. As I see from the lago log, the artifacts were
> detected and downloaded:
>
> 2018-08-08 19:58:14,067::INFO::root::Saving
> /dev/shm/ost/deployment-network-suite-4.2/default/internal_repo/default/el7/x86_64/vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm
> 2018-08-08 19:58:14,068::INFO::root::Saving
> /dev/shm/ost/deployment-network-suite-4.2/default/internal_repo/default/el7/noarch/vdsm-api-4.20.36-11.git9f9bbcc.el7.noarch.rpm
> …
>
> That matches the artifact names produced by the job Dan passed as the parameter:
>
>
> vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm
> vdsm-api-4.20.36-11.git9f9bbcc.el7.noarch.rpm
> ...
>
>
> [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/3054/

Darn, you are right. The second job did take the correct vdsm. It
failed due to a production bug that we need to fix.

>
>
> On 9 August 2018 at 09:25:40, Dan Kenigsberg (dan...@redhat.com) wrote:
>> On Thu, Aug 9, 2018 at 8:29 AM, Barak Korren wrote:
>> >
>> >
>> > On 8 August 2018 at 22:53, Dan Kenigsberg wrote:
>> >>
>> >> I've executed
>> >> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/3053/parameters/
>> >> using
>> >> http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/artifact/exported-artifacts/
>> >> as custom repo.
>> >>
>> >> The custom repo has vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm which I
>> >> expected would be pulled onto ost hosts. However
>> >>
>> >> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/3053/artifact/exported-artifacts/tests.test_vm_operations/lago-network-suite-4-2-host-0/_var_log/yum.log
>> >> shows that this was not the case.
>> >>
>> >> Any idea why is that?
>> >
>> >
>> >
>> > I can see the following in lago.log (in the section that includes the
>> > repoman log):
>> >
>> > 2018-08-08 18:47:02,357::INFO::repoman.common.repo::Resolving artifact
>> > source
>> > http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
>> > 2018-08-08 18:47:02,493::INFO::repoman.common.sources.jenkins::Parsing
>> > jenkins URL:
>> > http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
>> > 2018-08-08 18:47:02,493::WARNING::root:: No artifacts found
>> > 2018-08-08 18:47:02,493::INFO::root:: Done
>> >
>> >
>> > The fact that the log says 'Parsing jenkins URL' means that repoman 
>> > properly
>> > detects that it is a URL to a Jenkins build, additionally when I run the
>> > following locally it seems to download the packages just fine:
>> >
>> > repoman ~/tmp/repo add
>> > http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
>> >
>> > So this looks like a repoman bug. Adding Anton.
>> >
>> > @Dan - can you just retry?
>>
>> I did try again, in
>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/3054 which
>> failed again.
>> However, this time it has an empty lago.log.
>>
>> >
>> >
>> >>
>> >> ___
>> >> Devel mailing list -- devel@ovirt.org
>> >> To unsubscribe send an email to devel-le...@ovirt.org
>> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> >> oVirt Code of Conduct:
>> >> https://www.ovirt.org/community/about/community-guidelines/
>> >> List Archives:
>> >> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/PQHTXDZ6SLWI53FRHIOE5HDUI5ZBM4Z6/
>> >
>> >
>> >
>> >
>> > --
>> > Barak Korren
>> > RHV DevOps team , RHCE, RHCi
>> > Red Hat EMEA
>> > redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/4ZCFK274LKXUGIUFW55AUBX6D6PFC5FI/


[ovirt-devel] Re: repoman glitch?

2018-08-09 Thread Anton Marchukov
Hello Barak, Dan.

Repoman indeed expects a link to the jenkins job only and cannot work
with a specific artifact path. So I think the last rerun [1] with just
http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
worked on the repoman side. As I see from the lago log, the artifacts were
detected and downloaded:

2018-08-08 19:58:14,067::INFO::root::Saving
/dev/shm/ost/deployment-network-suite-4.2/default/internal_repo/default/el7/x86_64/vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm
2018-08-08 19:58:14,068::INFO::root::Saving
/dev/shm/ost/deployment-network-suite-4.2/default/internal_repo/default/el7/noarch/vdsm-api-4.20.36-11.git9f9bbcc.el7.noarch.rpm
…

That matches the artifact names produced by the job Dan passed as the parameter:


vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm
vdsm-api-4.20.36-11.git9f9bbcc.el7.noarch.rpm
...


[1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/3054/


On 9 August 2018 at 09:25:40, Dan Kenigsberg (dan...@redhat.com) wrote:
> On Thu, Aug 9, 2018 at 8:29 AM, Barak Korren wrote:
> >
> >
> > On 8 August 2018 at 22:53, Dan Kenigsberg wrote:
> >>
> >> I've executed
> >> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/3053/parameters/
> >> using
> >> http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/artifact/exported-artifacts/
> >> as custom repo.
> >>
> >> The custom repo has vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm which I
> >> expected would be pulled onto ost hosts. However
> >>
> >> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/3053/artifact/exported-artifacts/tests.test_vm_operations/lago-network-suite-4-2-host-0/_var_log/yum.log
> >> shows that this was not the case.
> >>
> >> Any idea why is that?
> >
> >
> >
> > I can see the following in lago.log (in the section that includes the
> > repoman log):
> >
> > 2018-08-08 18:47:02,357::INFO::repoman.common.repo::Resolving artifact
> > source
> > http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
> > 2018-08-08 18:47:02,493::INFO::repoman.common.sources.jenkins::Parsing
> > jenkins URL:
> > http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
> > 2018-08-08 18:47:02,493::WARNING::root:: No artifacts found
> > 2018-08-08 18:47:02,493::INFO::root:: Done
> >
> >
> > The fact that the log says 'Parsing jenkins URL' means that repoman properly
> > detects that it is a URL to a Jenkins build, additionally when I run the
> > following locally it seems to download the packages just fine:
> >
> > repoman ~/tmp/repo add
> > http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
> >
> > So this looks like a repoman bug. Adding Anton.
> >
> > @Dan - can you just retry?
>
> I did try again, in
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/3054 which
> failed again.
> However, this time it has an empty lago.log.
>
> >
> >
> >>
> >> ___
> >> Devel mailing list -- devel@ovirt.org
> >> To unsubscribe send an email to devel-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >> oVirt Code of Conduct:
> >> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives:
> >> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/PQHTXDZ6SLWI53FRHIOE5HDUI5ZBM4Z6/
> >
> >
> >
> >
> > --
> > Barak Korren
> > RHV DevOps team , RHCE, RHCi
> > Red Hat EMEA
> > redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/F3FIX5RA3YCQDBVWNCMBS7A57ZWYJXTI/


[ovirt-devel] Re: repoman glitch?

2018-08-09 Thread Eyal Edri
Actually adding Anton this time.

On Thu, Aug 9, 2018 at 8:30 AM Barak Korren  wrote:

>
>
> On 8 August 2018 at 22:53, Dan Kenigsberg  wrote:
>
>> I've executed
>> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/3053/parameters/
>> using
>> http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/artifact/exported-artifacts/
>> as custom repo.
>>
>> The custom repo has vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm which I
>> expected would be pulled onto ost hosts. However
>>
>> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/3053/artifact/exported-artifacts/tests.test_vm_operations/lago-network-suite-4-2-host-0/_var_log/yum.log
>> shows that this was not the case.
>>
>> Any idea why is that?
>>
>
>
> I can see the following in lago.log (in the section that includes the
> repoman log):
>
> 2018-08-08 18:47:02,357::INFO::repoman.common.repo::Resolving artifact source
> http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
> 2018-08-08 18:47:02,493::INFO::repoman.common.sources.jenkins::Parsing jenkins URL:
> http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
> 2018-08-08 18:47:02,493::WARNING::root::No artifacts found
> 2018-08-08 18:47:02,493::INFO::root::Done
>
>
> The fact that the log says 'Parsing jenkins URL' means that repoman
> properly detects that it is a URL to a Jenkins build, additionally when I
> run the following locally it seems to download the packages just fine:
>
> repoman ~/tmp/repo add
> http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
>
> So this looks like a repoman bug. Adding Anton.
>
> @Dan - can you just retry?
>
>
>
>> ___
>> Devel mailing list -- devel@ovirt.org
>> To unsubscribe send an email to devel-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/PQHTXDZ6SLWI53FRHIOE5HDUI5ZBM4Z6/
>>
>
>
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/2F7XQSVQZDD76WOVEJ3TSHGJY37I6SXG/
>


-- 

Eyal edri


MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA 
 TRIED. TESTED. TRUSTED. 
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/QVAWPGLIFZDZIGNC2ULFC6BBUOEMXB2Q/


[ovirt-devel] Re: [ OST Failure Report ] [ oVirt 4.2 (ovirt-engine) ] [ 07-08-2018 ] [ 004_basic_sanity.update_template_version ]

2018-08-09 Thread Dafna Ron
Thanks, Martin.
I can see the change ran:
https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-queue-tester/2820/

and there is a run for ovirt-engine now:
https://rhv-devops-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/rhv-4.2_change-queue-tester/404/

I will update once ovirt-engine run finishes.

On Wed, Aug 8, 2018 at 1:08 PM, Martin Perina  wrote:

>
>
> On Wed, Aug 8, 2018 at 11:28 AM, Dafna Ron  wrote:
>
>> Eli, Any updates?
>>
>
> Below patch should fix the issue:
> https://gerrit.ovirt.org/#/c/93568/
>
>>
>> On Tue, Aug 7, 2018 at 4:51 PM, Eli Mesika  wrote:
>>
>>> Looking 
>>>
>>> On Tue, Aug 7, 2018 at 2:42 PM, Dafna Ron  wrote:
>>>
 Hi,

 We are failing ovirt 4.2 on project ovirt-engine on test
 004_basic_sanity.update_template_version.

 I believe the reported patch from CQ may have indeed caused the issue.

 Eli, can you please check this issue?

 Link and headline of suspected patches:
 https://gerrit.ovirt.org/#/c/93501/ - core: make search string fields not
 null and empty

 Link to Job:
 https://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/2800

 Link to all logs:
 https://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/2800/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-4.2/post-004_basic_sanity.py/

 (Relevant) error snippet from the log:

 <JsonRpcRequest id: "0aec9ef2-5e1f-4cb6-bf75-f5826d2ae135", method: Volume.getInfo, params:
 {storagepoolID=fe6f6819-4791-4624-aa56-c82e49b0eaf3,
 storagedomainID=2a3af2d0-c00e-4ac7-a162-fa08d33c173f,
 imageID=2d2f61d4-2347-4200-9c1f-0ee376104ef0,
 volumeID=21ea717f-e3a1-4c36-8101-ba746bd78c40}>2018-08-07 06:12:46,522-04
 INFO  [org.ovirt.engine.core.bll.AddVmTemplateCommand] (default task-4)
 [6db41b0d-0d11-4b75-94f9-4a478e6fb3dc] Running command:
 AddVmTemplateCommand internal: false. Entities affected :  ID:
 fe6f6819-4791-4624-aa56-c82e49b0eaf3 Type: StoragePoolAction group
 CREATE_TEMPLATE with role type USER2018-08-07 06:12:46,525-04 INFO
 [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (default task-4)
 [6db41b0d-0d11-4b75-94f9-4a478e6fb3dc] START, SetVmStatusVDSCommand(
 SetVmStatusVDSCommandParameters:{vmId='64293490-e128-48b7-9e23-0491b48d9a1f',
 status='ImageLocked', exitStatus='Normal'}), log id: eca02a92018-08-07
 06:12:46,527-04 INFO
 [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (default task-4)
 [6db41b0d-0d11-4b75-94f9-4a478e6fb3dc] FINISH, SetVmStatusVDSCommand, log
 id: eca02a92018-08-07 06:12:46,527-04 DEBUG
 [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
 (default task-4) [6db41b0d-0d11-4b75-94f9-4a478e6fb3dc] method:
 runVdsCommand, params: [SetVmStatus,
 SetVmStatusVDSCommandParameters:{vmId='64293490-e128-48b7-9e23-0491b48d9a1f',
 status='ImageLocked', exitStatus='Normal'}], timeElapsed: 3ms2018-08-07
 06:12:46,537-04 DEBUG
 [org.ovirt.engine.core.dal.dbbroker.CustomSQLErrorCodeSQLExceptionTranslator]
 (default task-4) [6db41b0d-0d11-4b75-94f9-4a478e6fb3dc] Translating
 SQLException with SQL state '23502', error code '0', message [ERROR: null
 value in column "description" violates not-null constraint  Detail: Failing
 row contains (ce532690-2131-49a8-b2a0-183936727092,
 CirrOS_0.4.0_for_x86_64_glance_template, 512,
 5b1f874d-dc92-43ed-86ef-a9c2d6bfc9a3, 0, null,
 fe7292a4-c998-4f6c-897c-fa7525911a16, 2018-08-07 06:12:46.529-04, 1, null,
 f, 1, 1, 1, Etc/GMT, t, f, 2018-08-07 06:12:46.530974-04, null, null, f, 1,
 0, 0, 1, 0, , 3, null, null, null, 0, , , 256, TEMPLATE, 0, 1,
 31a8e1fa-1fad-456c-8b8a-aa11551cae9d, f, null, f, f, 1, f, f, f,
 d0d66980-9a26-11e8-b2f3-5452c0a8c802, null, , f, 0, null, null, null,
 guest_agent, null, null, null, 2, null, 2, 12345678, f, interleave, t, t,
 deb3f53e-13c5-4aea-bf90-0339eba39fed, null, null, null, null,
 22716173-2816-a109-1d2f-c44d945e92dd, 3b3f239c-d8bb-423f-a39c-e2b905473b83,
 null, 1, LOCK_SCREEN, 2, null, null, 2048, null, AUTO_RESUME, t).  Where:
 SQL statement "INSERTINTO vm_static(child_count,
 creation_date,description,free_text_comment,
 mem_size_mb,max_memory_size_mb,num_of_io_threads,
 vm_name,num_of_sockets,cpu_per_socket,
 threads_per_cpu,os,vm_guid,cluster_id,
 num_of_monitors,

[ovirt-devel] Re: repoman glitch?

2018-08-09 Thread Dan Kenigsberg
On Thu, Aug 9, 2018 at 8:29 AM, Barak Korren  wrote:
>
>
> On 8 August 2018 at 22:53, Dan Kenigsberg  wrote:
>>
>> I've executed
>> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/3053/parameters/
>> using
>> http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/artifact/exported-artifacts/
>> as custom repo.
>>
>> The custom repo has vdsm-4.20.36-11.git9f9bbcc.el7.x86_64.rpm which I
>> expected would be pulled onto ost hosts. However
>>
>> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/3053/artifact/exported-artifacts/tests.test_vm_operations/lago-network-suite-4-2-host-0/_var_log/yum.log
>> shows that this was not the case.
>>
>> Any idea why is that?
>
>
>
> I can see the following in lago.log (in the section that includes the
> repoman log):
>
> 2018-08-08 18:47:02,357::INFO::repoman.common.repo::Resolving artifact
> source
> http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
> 2018-08-08 18:47:02,493::INFO::repoman.common.sources.jenkins::Parsing
> jenkins URL:
> http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
> 2018-08-08 18:47:02,493::WARNING::root::No artifacts found
> 2018-08-08 18:47:02,493::INFO::root::Done
>
>
> The fact that the log says 'Parsing jenkins URL' means that repoman properly
> detects that it is a URL to a Jenkins build, additionally when I run the
> following locally it seems to download the packages just fine:
>
> repoman ~/tmp/repo add
> http://jenkins.ovirt.org/job/vdsm_4.2_build-artifacts-on-demand-el7-x86_64/44/
>
> So this looks like a repoman bug. Adding Anton.
>
> @Dan - can you just retry?

I did try again, in
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/3054 which
failed again.
However, this time it has an empty lago.log.

>
>
>>
>> ___
>> Devel mailing list -- devel@ovirt.org
>> To unsubscribe send an email to devel-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/PQHTXDZ6SLWI53FRHIOE5HDUI5ZBM4Z6/
>
>
>
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/UBIKDPKA2PFF7EFSNCK5XZSC5UMHLMP3/


[ovirt-devel] [ OST Failure Report ] [ oVirt 4.2 (imgbased) ] [ 09-08-2018 ] [ 004_basic_sanity.check_snapshot_with_memory ]

2018-08-09 Thread Dafna Ron
Hi,

We have a failure in 4.2 which I think may be related to the patch itself.
Jira opened with additional info:
https://ovirt-jira.atlassian.net/browse/OVIRT-2418

Link and headline of suspected patches:
We failed patch https://gerrit.ovirt.org/#/c/93545/ - core: remove lvs
after a failed upgrade on ovirt-4.2

Link to Job:
https://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/2819/

Link to all logs:
https://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/2819/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-4.2/post-004_basic_sanity.py/

(Relevant) error snippet from the log:
Error Message

Fault reason is "Operation Failed". Fault detail is "[Cannot run VM.
Low disk space on Storage Domain iscsi.]". HTTP response code is 409.

Stacktrace

Traceback (most recent call last):
  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
testMethod()
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
142, in wrapped_test
test()
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
60, in wrapper
return func(get_test_prefix(), *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
79, in wrapper
prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
  File 
"/home/jenkins/workspace/ovirt-4.2_change-queue-tester/ovirt-system-tests/basic-suite-4.2/test-scenarios/004_basic_sanity.py",
line 589, in check_snapshot_with_memory
vm_service.start()
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py",
line 30074, in start
return self._internal_action(action, 'start', None, headers, query, wait)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
299, in _internal_action
return future.wait() if wait else future
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
55, in wait
return self._code(response)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
296, in callback
self._check_fault(response)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
134, in _check_fault
self._raise_error(response, body.fault)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line
118, in _raise_error
raise error
Error: Fault reason is "Operation Failed". Fault detail is "[Cannot
run VM. Low disk space on Storage Domain iscsi.]". HTTP response code
is 409.

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/WLQQPD6HONCT3YHSQDEGT5KA67JL6KHK/