We have not updated Fedora rawhide for a few months because a package
was missing. Now that this issue is solved, we have new errors:


1. async is a keyword in python 3.7


Compiling './tests/storage/fakesanlock.py'...
***   File "./tests/storage/fakesanlock.py", line 65
    async=False):
        ^
SyntaxError: invalid syntax


We have many of these. The issue is that the sanlock API uses the kwarg
"async", which python 3.7 made invalid syntax.

$ ./python
Python 3.7.0+ (heads/3.7:426135b674, Aug  9 2018, 22:50:16)
[GCC 8.1.1 20180712 (Red Hat 8.1.1-5)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> def foo(async=False):
  File "<stdin>", line 1
    def foo(async=False):
                ^
SyntaxError: invalid syntax
>>> async = True
  File "<stdin>", line 1
    async = True
          ^
SyntaxError: invalid syntax

Thank you python developers for making our life more interesting :-)

So we will have to change the sanlock python binding to replace "async"
with something else.

I'll file a sanlock bug for this.
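Until the binding is fixed, a renamed parameter can keep old callers
working by accepting the legacy keyword through **kwargs, since "async"
is still a valid dict key even though it can no longer be a parameter
name. This is only a sketch of the idea; the function name and
semantics here are hypothetical, not the real sanlock API:

```python
def add_lockspace(lockspace, host_id, path, wait=True, **kwargs):
    # "async" cannot be a parameter name in Python 3.7+, but legacy
    # callers passing it via a kwargs dict can still be supported:
    # async=True meant "do not wait", i.e. wait=False.
    if "async" in kwargs:
        wait = not kwargs.pop("async")
    if kwargs:
        raise TypeError("unexpected keyword arguments: %r" % kwargs)
    return wait  # stand-in for the real sanlock call
```

New code would pass wait=False directly; old code passing the keyword
through a dict keeps working during the transition.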


2. test_sourceroute_add_remove_and_read fails

No idea why it fails; hopefully Dan or Edward have a clue.


FAIL: test_sourceroute_add_remove_and_read
(network.sourceroute_test.TestSourceRoute)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/vdsm/tests/testValidation.py", line 193, in wrapper
    return f(*args, **kwargs)
  File "/vdsm/tests/network/sourceroute_test.py", line 80, in
test_sourceroute_add_remove_and_read
    self.assertEqual(2, len(routes), routes)
AssertionError: 2 != 0
-------------------- >> begin captured logging << --------------------
2018-08-09 19:36:31,446 DEBUG (MainThread) [root] /sbin/ip link add
name dummy_UTpMy type dummy (cwd None) (cmdutils:151)
2018-08-09 19:36:31,455 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,456 DEBUG (netlink/events) [root] START thread
<Thread(netlink/events, started daemon 140332374083328)> (func=<bound
method Monitor._scan of <vdsm.network.netlink.monitor.Monitor object
at 0x7fa1b18dc6d0>>, args=(), kwargs={}) (concurrent:193)
2018-08-09 19:36:31,456 DEBUG (MainThread) [root] /sbin/ip link set
dev dummy_UTpMy up (cwd None) (cmdutils:151)
2018-08-09 19:36:31,463 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,464 DEBUG (netlink/events) [root] FINISH thread
<Thread(netlink/events, started daemon 140332374083328)>
(concurrent:196)
2018-08-09 19:36:31,471 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,471 DEBUG (MainThread) [root] Adding source route
for device dummy_UTpMy (sourceroute:195)
2018-08-09 19:36:31,472 DEBUG (MainThread) [root] /sbin/ip -4 route
add 0.0.0.0/0 via 192.168.99.2 dev dummy_UTpMy table 3232260865 (cwd
None) (cmdutils:151)
2018-08-09 19:36:31,478 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,479 DEBUG (MainThread) [root] /sbin/ip -4 route
add 192.168.99.0/29 via 192.168.99.1 dev dummy_UTpMy table 3232260865
(cwd None) (cmdutils:151)
2018-08-09 19:36:31,485 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,485 DEBUG (MainThread) [root] /sbin/ip rule add
from 192.168.99.0/29 prio 32000 table 3232260865 (cwd None)
(cmdutils:151)
2018-08-09 19:36:31,492 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,492 DEBUG (MainThread) [root] /sbin/ip rule add
from all to 192.168.99.0/29 dev dummy_UTpMy prio 32000 table
3232260865 (cwd None) (cmdutils:151)
2018-08-09 19:36:31,498 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,499 DEBUG (MainThread) [root] /sbin/ip rule (cwd
None) (cmdutils:151)
2018-08-09 19:36:31,505 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,505 WARNING (MainThread) [root] Could not parse
rule 32000:     from all to 192.168.99.0 /29 iif dummy_d3SHQ [detached]
lookup 3232260865  (iproute2:60)
2018-08-09 19:36:31,505 WARNING (MainThread) [root] Could not parse
rule 32000:     from all to 192.168.99.0 /29 iif dummy_d3SHQ [detached]
lookup 3232260865  (iproute2:60)
2018-08-09 19:36:31,505 WARNING (MainThread) [root] Could not parse
rule 32000:     from all to 192.168.99.0 /29 iif dummy_UTpMy lookup
3232260865  (iproute2:60)
2018-08-09 19:36:31,506 DEBUG (MainThread) [root] /sbin/ip rule (cwd
None) (cmdutils:151)
2018-08-09 19:36:31,512 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,512 WARNING (MainThread) [root] Could not parse
rule 32000:     from all to 192.168.99.0 /29 iif dummy_d3SHQ [detached]
lookup 3232260865  (iproute2:60)
2018-08-09 19:36:31,512 WARNING (MainThread) [root] Could not parse
rule 32000:     from all to 192.168.99.0 /29 iif dummy_d3SHQ [detached]
lookup 3232260865  (iproute2:60)
2018-08-09 19:36:31,513 WARNING (MainThread) [root] Could not parse
rule 32000:     from all to 192.168.99.0 /29 iif dummy_UTpMy lookup
3232260865  (iproute2:60)
2018-08-09 19:36:31,513 DEBUG (MainThread) [root] Removing source
route for device dummy_UTpMy (sourceroute:215)
2018-08-09 19:36:31,513 DEBUG (MainThread) [root] /sbin/ip link del
dev dummy_UTpMy (cwd None) (cmdutils:151)
2018-08-09 19:36:31,532 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (cmdutils:159)
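The "Could not parse rule" warnings above suggest the parser trips on
the stray space in "192.168.99.0 /29" (and possibly on the "[detached]"
flag), so the rules never make it into the result and len(routes) ends
up 0. A tolerant parser could normalize the whitespace before
splitting; this is only a sketch of that idea, not vdsm's actual
iproute2 parsing code:

```python
import re

def parse_rule(line):
    # Join "192.168.99.0 /29" back into "192.168.99.0/29" before
    # tokenizing, so the prefix survives as one token.
    line = re.sub(r"\s+/", "/", line)
    prio, _, rest = line.partition(":")
    rule = {"priority": int(prio)}
    tokens = rest.split()
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok in ("from", "to", "iif", "oif", "lookup", "table"):
            rule[tok] = tokens[i + 1]
            i += 2
        else:
            i += 1  # skip flags such as "[detached]"
    return rule
```

Fed the failing line from the log above, this yields the expected
fields instead of a parse warning.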


3. qemu-io now returns a non-zero code on failures, breaking our tests

This command used to return 0 on failures, and the only way to detect
failures was to check the output. Now it has been fixed to return a
non-zero code, and this breaks our tests.

I think this should be an easy fix.

Here is an example:


_____________________________ test_no_match[qcow2] _____________________________
tmpdir = local('/var/tmp/vdsm/test_no_match_qcow2_0'), image_format = 'qcow2'
    def test_no_match(tmpdir, image_format):
        path = str(tmpdir.join('test.' + image_format))
        op = qemuimg.create(path, '1m', image_format)
        op.run()
        qemuio.write_pattern(path, image_format, pattern=2)
        with pytest.raises(qemuio.VerificationError):
>           qemuio.verify_pattern(path, image_format, pattern=4)
storage/qemuio_test.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
path = '/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2', format = 'qcow2'
offset = 512, len = 1024, pattern = 4
    def verify_pattern(path, format, offset=512, len=1024, pattern=5):
        read_cmd = 'read -P %d -s 0 -l %d %d %d' % (pattern, len, offset, len)
        cmd = ['qemu-io', '-f', format, '-c', read_cmd, path]
        rc, out, err = commands.execCmd(cmd, raw=True)
        if rc != 0 or err != b"":
>           raise cmdutils.Error(cmd, rc, out, err)
E           Error: Command ['qemu-io', '-f', 'qcow2', '-c', 'read -P 4
-s 0 -l 1024 512 1024',
'/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2'] failed with rc=1
out='Pattern verification failed at offset 512, 1024 bytes\nread
1024/1024 bytes at offset 512\n1 KiB, 1 ops; 0.0001 sec (7.570 MiB/sec
and 7751.9380 ops/sec)\n' err=''
storage/qemuio.py:50: Error
----------------------------- Captured stderr call -----------------------------
2018-08-09 19:37:32,126 DEBUG (MainThread) [storage.operation]
/usr/bin/taskset --cpu-list 0-1 /usr/bin/nice -n 19 /usr/bin/ionice -c
3 /usr/bin/qemu-img create -f qcow2 -o compat=0.10
/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2 1m (cwd None)
(operation:150)
2018-08-09 19:37:32,148 DEBUG (MainThread) [storage.operation]
SUCCESS: <err> = ''; <rc> = 0 (operation:169)
2018-08-09 19:37:32,148 DEBUG (MainThread) [root] /usr/bin/taskset
--cpu-list 0-1 qemu-io -f qcow2 -c 'write -P 2 512 1024'
/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2 (cwd None)
(commands:66)
2018-08-09 19:37:32,170 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (commands:87)
2018-08-09 19:37:32,171 DEBUG (MainThread) [root] /usr/bin/taskset
--cpu-list 0-1 qemu-io -f qcow2 -c 'read -P 4 -s 0 -l 1024 512 1024'
/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2 (cwd None)
(commands:66)
2018-08-09 19:37:32,181 DEBUG (MainThread) [root] FAILED: <err> = '';
<rc> = 1 (commands:87)
------------------------------ Captured log call -------------------------------
operation.py               150 DEBUG    /usr/bin/taskset --cpu-list
0-1 /usr/bin/nice -n 19 /usr/bin/ionice -c 3 /usr/bin/qemu-img create
-f qcow2 -o compat=0.10 /var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2
1m (cwd None)
operation.py               169 DEBUG    SUCCESS: <err> = ''; <rc> = 0
commands.py                 66 DEBUG    /usr/bin/taskset --cpu-list
0-1 qemu-io -f qcow2 -c 'write -P 2 512 1024'
/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2 (cwd None)
commands.py                 87 DEBUG    SUCCESS: <err> = ''; <rc> = 0
commands.py                 66 DEBUG    /usr/bin/taskset --cpu-list
0-1 qemu-io -f qcow2 -c 'read -P 4 -s 0 -l 1024 512 1024'
/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2 (cwd None)
commands.py                 87 DEBUG    FAILED: <err> = ''; <rc> = 1
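One easy fix would be to classify the result by the output first, so
the check works with both old (rc=0, error only in stdout) and new
(rc!=0) qemu-io behavior. A sketch, assuming the VerificationError
class from the traceback above (the helper name here is made up):

```python
class VerificationError(Exception):
    pass

def check_qemuio_result(rc, out, err):
    # Pattern mismatch is reported in stdout by both old and new
    # qemu-io; checking for it before rc keeps the test working
    # across versions.
    if b"Pattern verification failed" in out:
        raise VerificationError(out.decode())
    if rc != 0 or err != b"":
        raise RuntimeError("qemu-io failed: rc=%d err=%r" % (rc, err))
```

verify_pattern() could then call this instead of raising
cmdutils.Error directly on rc != 0.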


4. pywatch tests still fail

Looks like our on_fedora() helper is broken, so this test is not marked as
xfail.
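A more robust detection could parse /etc/os-release instead of
whatever the current helper does. A sketch, assuming nothing about the
real on_fedora() implementation:

```python
def parse_os_release(text):
    # /etc/os-release is KEY=value lines, values optionally quoted.
    info = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            info[key] = value.strip('"')
    return info

def on_fedora(os_release_text):
    return parse_os_release(os_release_text).get("ID") == "fedora"
```

In the real helper the text would come from reading /etc/os-release;
it is passed in here to keep the sketch testable.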


______________________ TestPyWatch.test_timeout_backtrace ______________________
self = <pywatch_test.TestPyWatch object at 0x7f5cf42899d0>
    @pytest.mark.xfail(on_fedora(), reason="py-bt is broken on Fedora 27")
    @pytest.mark.xfail(on_ovirt_ci(),
                       reason="py-bt randomly unavailable on EL7 nodes")
    def test_timeout_backtrace(self):
        script = '''
import time
def outer():
    inner()
def inner():
    time.sleep(10)
outer()
'''
        rc, out, err = exec_cmd(['./py-watch', '0.1', 'python', '-c', script])
>       assert b'in inner ()' in out
E       AssertionError: assert 'in inner ()' in
'=============================================================\n=
           Watched process timed out           ...         Terminating
watched process
=\n=============================================================\n'
pywatch_test.py:74: AssertionError
------------------------------ Captured log call -------------------------------
cmdutils.py                151 DEBUG    ./py-watch 0.1 python -c '
import time
def outer():
    inner()
def inner():
    time.sleep(10)
outer()
' (cwd None)
cmdutils.py                159 DEBUG    FAILED: <err> = '\nwarning:
Loadable section ".note.gnu.property" outside of ELF
segments\n\nwarning: Loadable section ".note.gnu.property" outside of
ELF segments\n'; <rc> = 143


5. ovs tests that pass on Fedora 28 fail on rawhide

Not sure why the tests pass on Fedora 28 but fail on CentOS and Fedora
rawhide.


________ ERROR at setup of TestOvsApiBase.test_execute_a_single_command ________
    @pytest.fixture(scope='session', autouse=True)
    def ovs_service():
        service = OvsService()
        with xfail_when_running_on_travis_with_centos():
>           service.setup()
network/integration/ovs/conftest.py:37:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <network.ovsnettestlib.OvsService object at 0x7f79737589d0>
    def setup(self):
        if not self.ovs_init_state_is_up:
            cmd.exec_sync([OVS_CTL, '--system-id=random', 'start'])
>       assert self.is_service_running()
E       AssertionError
network/ovsnettestlib.py:39: AssertionError
------------------------------ Captured log setup ------------------------------
cmdutils.py                151 DEBUG
/usr/share/openvswitch/scripts/ovs-ctl status (cwd None)
cmdutils.py                159 DEBUG    FAILED: <err> = ''; <rc> = 1
cmdutils.py                151 DEBUG
/usr/share/openvswitch/scripts/ovs-ctl --system-id=random start (cwd
None)
cmdutils.py                159 DEBUG    FAILED: <err> = 'modprobe:
FATAL: Module openvswitch not found in directory
/lib/modules/4.4.0-104-generic\nrmmod: ERROR: Module bridge is in use
by: br_netfilter\n'; <rc> = 1
cmdutils.py                151 DEBUG
/usr/share/openvswitch/scripts/ovs-ctl status (cwd None)
cmdutils.py                159 DEBUG    FAILED: <err> = ''; <rc> = 1


Otherwise all tests pass, so we are in pretty good shape :-)

Nir
_______________________________________________
Devel mailing list -- [email protected]