Sounds alright then.  Thanks for the heads up.

Acked-by: Jose Fonseca <[email protected]>

Jose

On 05/12/15 23:26, Dylan Baker wrote:
According to the schema it's allowed. The schema is in the piglit repo
at "framework/tests/schema/junit-7.xsd"

On Sat, Dec 5, 2015 at 4:36 AM, Jose Fonseca <[email protected]> wrote:

    On 05/12/15 00:21, [email protected] wrote:

        From: Dylan Baker <[email protected]>

        Currently the JUnit backend has no way to represent subtests in such a
        way that they can be understood by jenkins and by the summary tools.

        Mark, Nanley, and I discussed this and came up with several approaches,
        each with serious drawbacks:
        1. Print the subtest statuses into stdout or stderr nodes in the JUnit.
           This has the advantage of being simple, but the statuses can be
           shadowed by an expected-<status>: if subtest A fails, an expected
           fail can be entered, and if subtest B also starts failing, no one
           will notice. This won't work.
        2. Treat each subtest as a full test, and the test as a group. I have
           two reservations about this approach: it differs from the JSON
           representation, and there isn't a good way to turn the JUnit back
           into piglit's internal representation, which would make running
           JUnit results through the piglit status tools difficult. It would
           also massively inflate the size of the JSON results, and that's
           already becoming a problem for us.
        3. Create a main test entry, plus subtest entries that point back to
           the original test as their parent in their stdout. This also has
           shadowing problems, and would still make the XML very large.

        The final approach, suggested by Nanley, turns tests with subtests
        into a testsuite element, which represents the shared values (stdout,
        stderr) and holds an individual testcase element for each subtest.
        This solves the shadowing issue and increases the file size less than
        the other ideas floated.
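
        Roughly, a result with subtests then comes out shaped like the sketch
        below (the names and values are borrowed from the new unit tests and
        are only illustrative, not verbatim backend output):

        <testsuites>
          <testsuite name="piglit">
            <testsuite name="piglit.a.group.test1" time="1.2345"
                       tests="3" failures="1" skipped="1" errors="1">
              <system-out>command and stdout shared by the subtests</system-out>
              <system-err>stderr shared by the subtests</system-err>
              <testcase name="foo" status="skip">
                <skipped type="skip"/>
              </testcase>
              <testcase name="bar" status="fail">
                <failure type="fail"/>
              </testcase>
              <testcase name="oink" status="crash">
                <error type="crash"/>
              </testcase>
            </testsuite>
          </testsuite>
        </testsuites>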


    Having the sub-tests become a testsuite is OK, but I'm not sure that
    nesting a testsuite inside a testsuite is reliable.

    I don't know whether JUnit produced by other frameworks ever does that,
    nor what exactly the semantics for names are in that case.

    I wonder if it wouldn't be better to ensure that the `testsuite` tag
    is always a direct child of the root `testsuites` tag.
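
    I.e. something shaped like the following (just a sketch of what I mean,
    using the same made-up names as above):

    <testsuites>
      <testsuite name="piglit" tests="1">
        <testcase classname="piglit.foo.bar" name="a-test" status="pass"/>
      </testsuite>
      <testsuite name="piglit.a.group.test1" tests="3" failures="1"
                 skipped="1" errors="1">
        <testcase name="foo" status="skip"/>
        <testcase name="bar" status="fail"/>
        <testcase name="oink" status="crash"/>
      </testsuite>
    </testsuites>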

    Jose



        Also adds tests for the new feature.

        cc: [email protected] <mailto:[email protected]>
        cc: [email protected] <mailto:[email protected]>
        Signed-off-by: Dylan Baker <[email protected]
        <mailto:[email protected]>>
        ---
           framework/backends/junit.py             | 174 +++++++++++++++++++++------
           framework/tests/junit_backends_tests.py | 206 +++++++++++++++++++++++++++++++-
           2 files changed, 343 insertions(+), 37 deletions(-)

        diff --git a/framework/backends/junit.py b/framework/backends/junit.py
        index 34df300..329fc4b 100644
        --- a/framework/backends/junit.py
        +++ b/framework/backends/junit.py
        @@ -117,8 +117,6 @@ class JUnitBackend(FileBackend):

                   shutil.rmtree(os.path.join(self._dest, 'tests'))

        -
        -
               def _write(self, f, name, data):
                 # Split the name of the test and the group (what junit refers to as
                 # classname), and replace piglits '/' separated groups with '.', after
        @@ -135,11 +133,41 @@ class JUnitBackend(FileBackend):
                   # set different root names.
                   classname = 'piglit.' + classname

        -        element = self.__make_case(testname, classname, data)
        +        if data.subtests:
        +            # If there are subtests treat the test as a suite instead of a
        +            # test, set system-out, system-err, and time on the suite rather
        +            # than on the testcase
        +            name='{}.{}'.format(classname, testname)
        +            element = etree.Element(
        +                'testsuite',
        +                name=name,
        +                time=str(data.time.total))
        +
        +            out = etree.SubElement(element, 'system-out')
        +            out.text = data.command + '\n' + data.out
        +            err = etree.SubElement(element, 'system-err')
        +            err.text = data.err
        +            err.text += '\n\nstart time: {}\nend time: {}\n'.format(
        +                data.time.start, data.time.end)
        +
        +            for name, result in data.subtests.iteritems():
        +                sub = self.__make_subcase(name, result, err)
        +                out = etree.SubElement(sub, 'system-out')
        +                out.text = 'I am a subtest of {}'.format(name)
        +                element.append(sub)
        +
        +            for attrib, xpath in [('failures', './/testcase/failure'),
        +                                  ('errors', './/testcase/error'),
        +                                  ('skipped', './/testcase/skipped'),
        +                                  ('tests', './/testcase')]:
        +                element.attrib[attrib] = str(len(element.findall(xpath)))
        +
        +        else:
        +            element = self.__make_case(testname, classname, data)
        +
                   f.write(etree.tostring(element))

        -    def __make_case(self, testname, classname, data):
        -        """Create a test case element and return it."""
        +    def __make_name(self, testname):
                 # Jenkins will display special pages when the test has certain names,
                   # so add '_' so the tests don't match those names
                   # https://jenkins-ci.org/issue/18062
        @@ -148,17 +176,68 @@ class JUnitBackend(FileBackend):
                   if full_test_name in _JUNIT_SPECIAL_NAMES:
                       testname += '_'
                       full_test_name = testname + self._test_suffix
        +        return full_test_name
        +
        +    def __make_subcase(self, testname, result, err):
        +        """Create a <testcase> element for subtests.
        +
        +        This method is used to create a <testcase> element to nest inside of a
        +        <testsuite> element when that element represents a test with subtests.
        +        This differs from __make_case in that it doesn't add as much metadata
        +        to the <testcase>, since that was attached to the <testsuite> by
        +        _write, and that it doesn't handle incomplete cases, since subtests
        +        cannot have incomplete as a status (though that could change).
        +
        +        """
        +        full_test_name = self.__make_name(testname)
        +        element = etree.Element('testcase',
        +                                name=full_test_name,
        +                                status=str(result))
        +
        +        # replace special characters and make case insensitive
        +        lname = self.__normalize_name(testname)
        +
        +        expected_result = "pass"
        +
        +        if lname in self._expected_failures:
        +            expected_result = "failure"
        +            # a test can either fail or crash, but not both
        +            assert lname not in self._expected_crashes
        +
        +        if lname in self._expected_crashes:
        +            expected_result = "error"
        +
        +        self.__add_result(element, result, err, expected_result)
        +
        +        return element
        +
        +    def __make_case(self, testname, classname, data):
        +        """Create a <testcase> element and return it.
        +
        +        Specifically, this is used to create a "normal" test case, one that
        +        doesn't contain any subtests. __make_subcase is used to create a
        +        <testcase> which belongs inside a nested <testsuite> node.
        +
        +        Arguments:
        +        testname -- the name of the test
        +        classname -- the name of the group (to use piglit terminology)
        +        data -- A TestResult instance
        +
        +        """
        +        full_test_name = self.__make_name(testname)

                   # Create the root element
        -        element = etree.Element('testcase', name=full_test_name,
        +        element = etree.Element('testcase',
        +                                name=full_test_name,
                                           classname=classname,
        -                                # Incomplete will not have a time.
                                           time=str(data.time.total),
                                           status=str(data.result))

                 # If this is an incomplete status then none of these values will be
                   # available, nor
                   if data.result != 'incomplete':
        +            expected_result = "pass"
        +
                       # Add stdout
                       out = etree.SubElement(element, 'system-out')
                       out.text = data.out
        @@ -171,7 +250,8 @@ class JUnitBackend(FileBackend):
                       err.text = data.err
                     err.text += '\n\nstart time: {}\nend time: {}\n'.format(
                           data.time.start, data.time.end)
        -            expected_result = "pass"
        +
        +            element.extend([err, out])

                     # replace special characters and make case insensitive
                       lname = self.__normalize_name(classname, testname)
        @@ -184,29 +264,34 @@ class JUnitBackend(FileBackend):
                       if lname in self._expected_crashes:
                           expected_result = "error"

        -            self.__add_result(element, data, err, expected_result)
        +            self.__add_result(element, data.result, err, expected_result)
                   else:
                     etree.SubElement(element, 'failure', message='Incomplete run.')

                   return element

               @staticmethod
        -    def __normalize_name(classname, testname):
        -        name = (classname + "." + testname).lower()
        +    def __normalize_name(testname, classname=None):
        +        """Nomralize the test name to what is stored in the
        expected statuses.
        +        """
        +        if classname is not None:
        +            name = (classname + "." + testname).lower()
        +        else:
        +            name = testname.lower()
                   name = name.replace("=", ".")
                   name = name.replace(":", ".")
                   return name

               @staticmethod
        -    def __add_result(element, data, err, expected_result):
        -        """Add a <skipped>, <failure>, or <error> if necissary."""
        +    def __add_result(element, result, err, expected_result):
        +        """Add a <skipped>, <failure>, or <error> if necessary."""
                   res = None
                 # Add relevant result value, if the result is pass then it doesn't
                   # need one of these statuses
        -        if data.result == 'skip':
        +        if result == 'skip':
                       res = etree.SubElement(element, 'skipped')

        -        elif data.result in ['fail', 'dmesg-warn', 'dmesg-fail']:
        +        elif result in ['fail', 'dmesg-warn', 'dmesg-fail']:
                       if expected_result == "failure":
                           err.text += "\n\nWARN: passing test as an
        expected failure"
                           res = etree.SubElement(element, 'skipped',
        @@ -214,7 +299,7 @@ class JUnitBackend(FileBackend):
                       else:
                           res = etree.SubElement(element, 'failure')

        -        elif data.result == 'crash':
        +        elif result == 'crash':
                       if expected_result == "error":
                           err.text += "\n\nWARN: passing test as an
        expected crash"
                           res = etree.SubElement(element, 'skipped',
        @@ -229,7 +314,7 @@ class JUnitBackend(FileBackend):

                   # Add the piglit type to the failure result
                   if res is not None:
        -            res.attrib['type'] = str(data.result)
        +            res.attrib['type'] = str(result)


           def _load(results_file):
        @@ -242,6 +327,27 @@ def _load(results_file):
               JUnit document.

               """
        +    def populate_result(result, test):
        +        # This is the fallback path, we'll try to overwrite this with the value
        +        # in stderr
        +        result.time = results.TimeAttribute(end=float(test.attrib['time']))
        +        result.err = test.find('system-err').text
        +
        +        # The command is prepended to system-out, so we need to separate those
        +        # into two separate elements
        +        out = test.find('system-out').text.split('\n')
        +        result.command = out[0]
        +        result.out = '\n'.join(out[1:])
        +
        +        # Try to get the values in stderr for time
        +        if 'time start' in result.err:
        +            for line in result.err.split('\n'):
        +                if line.startswith('time start:'):
        +                    result.time.start = float(line[len('time start: '):])
        +                elif line.startswith('time end:'):
        +                    result.time.end = float(line[len('time end: '):])
        +                    break
        +
               run_result = results.TestrunResult()

             splitpath = os.path.splitext(results_file)[0].split(os.path.sep)
        @@ -267,25 +373,25 @@ def _load(results_file):

                   result.result = test.attrib['status']

        -        # This is the fallback path, we'll try to overwrite this with the value
        -        # in stderr
        -        result.time = results.TimeAttribute(end=float(test.attrib['time']))
        -        result.err = test.find('system-err').text
        +        populate_result(result, test)

        -        # The command is prepended to system-out, so we need to separate those
        -        # into two separate elements
        -        out = test.find('system-out').text.split('\n')
        -        result.command = out[0]
        -        result.out = '\n'.join(out[1:])
        +        run_result.tests[name] = result

        -        # Try to get the values in stderr for time
        -        if 'time start' in result.err:
        -            for line in result.err.split('\n'):
        -                if line.startswith('time start:'):
        -                    result.time.start = float(line[len('time start: '):])
        -                elif line.startswith('time end:'):
        -                    result.time.end = float(line[len('time end: '):])
        -                    break
        +    for test in tree.iterfind('testsuite'):
        +        result = results.TestResult()
        +        # Take the class name minus the 'piglit.' element, replace junit's '.'
        +        # separator with piglit's separator, and join the group and test names
        +        name = test.attrib['name'].split('.', 1)[1]
        +        name = name.replace('.', grouptools.SEPARATOR)
        +
        +        # Remove the trailing _ if they were added (such as to api and search)
        +        if name.endswith('_'):
        +            name = name[:-1]
        +
        +        populate_result(result, test)
        +
        +        for subtest in test.iterfind('testcase'):
        +            result.subtests[subtest.attrib['name']] = subtest.attrib['status']

                   run_result.tests[name] = result

        diff --git a/framework/tests/junit_backends_tests.py b/framework/tests/junit_backends_tests.py
        index 7d5a3fc..ae18e3e 100644
        --- a/framework/tests/junit_backends_tests.py
        +++ b/framework/tests/junit_backends_tests.py
        @@ -29,6 +29,7 @@ try:
               from lxml import etree
           except ImportError:
               import xml.etree.cElementTree as etree
        +import mock
           import nose.tools as nt
           from nose.plugins.skip import SkipTest

        @@ -44,7 +45,7 @@ doc_formatter = utils.DocFormatter({'separator': grouptools.SEPARATOR})
           _XML = """\
           <?xml version='1.0' encoding='utf-8'?>
             <testsuites>
        -    <testsuite name="piglit" tests="1">
        +    <testsuite name="piglit" tests="5">
                 <testcase classname="piglit.foo.bar" name="a-test"
        status="pass" time="1.12345">
                   <system-out>this/is/a/command\nThis is
        stdout</system-out>
                   <system-err>this is stderr
        @@ -53,6 +54,24 @@ time start: 1.0
           time end: 4.5
                   </system-err>
                 </testcase>
        +      <testsuite name="piglit.bar" time="1.234" tests="4"
        failures="1" skipped="1" errors="1">
        +        <system-err>this is stderr
        +
        +time start: 1.0
        +time end: 4.5
        +</system-err>
        +        <system-out>this/is/a/command\nThis is stdout</system-out>
        +        <testcase name="subtest1" status="pass"/>
        +        <testcase name="subtest2" status="fail">
        +          <failed/>
        +        </testcase>
        +        <testcase name="subtest3" status="crash">
        +          <error/>
        +        </testcase>
        +        <testcase name="subtest4" status="skip">
        +          <skipped/>
        +        </testcase>
        +      </testsuite>
               </testsuite>
             </testsuites>
           """
        @@ -203,11 +222,12 @@ class TestJUnitLoad(utils.StaticDirectory):
               def setup_class(cls):
                   super(TestJUnitLoad, cls).setup_class()
                   cls.xml_file = os.path.join(cls.tdir, 'results.xml')
        -
        +
                   with open(cls.xml_file, 'w') as f:
                       f.write(_XML)

                   cls.testname = grouptools.join('foo', 'bar', 'a-test')
        +        cls.subtestname = 'bar'

               @classmethod
               def xml(cls):
        @@ -270,7 +290,6 @@ class TestJUnitLoad(utils.StaticDirectory):
                   """backends.junit._load: Totals are calculated."""
                   nt.ok_(bool(self.xml()))

        -
               @utils.no_error
               def test_load_file(self):
                   """backends.junit.load: Loads a file directly"""
        @@ -281,6 +300,48 @@ class TestJUnitLoad(utils.StaticDirectory):
                   """backends.junit.load: Loads a directory"""
                   backends.junit.REGISTRY.load(self.tdir, 'none')

        +    def test_subtest_added(self):
        +        """backends.junit._load: turns secondlevel <testsuite>
        into test with stubtests"""
        +        xml = self.xml()
        +        nt.assert_in(self.subtestname, xml.tests)
        +
        +    def test_subtest_time(self):
        +        """backends.junit._load: handles time from subtest"""
        +        time = self.xml().tests[self.subtestname].time
        +        nt.assert_is_instance(time, results.TimeAttribute)
        +        nt.eq_(time.start, 1.0)
        +        nt.eq_(time.end, 4.5)
        +
        +    def test_subtest_out(self):
        +        """backends.junit._load: subtest stderr is loaded
        correctly"""
        +        test = self.xml().tests[self.subtestname].out
        +        nt.eq_(test, 'This is stdout')
        +
        +    def test_subtest_err(self):
        +        """backends.junit._load: stderr is loaded correctly."""
        +        test = self.xml().tests[self.subtestname].err
        +        nt.eq_(test, 'this is stderr\n\ntime start: 1.0\ntime end: 4.5\n')
        +
        +    def test_subtest_statuses(self):
        +        """backends.juint._load: subtest statuses are restored
        correctly
        +
        +        This is not implemented as separate tests or a
        generator becuase while
        +        it asserts multiple values, it is testing one peice of
        funcitonality:
        +        whether the subtests are restored correctly.
        +
        +        """
        +        test = self.xml().tests[self.subtestname]
        +
        +        subtests = [
        +            ('subtest1', 'pass'),
        +            ('subtest2', 'fail'),
        +            ('subtest3', 'crash'),
        +            ('subtest4', 'skip'),
        +        ]
        +
        +        for name, value in subtests:
        +            nt.eq_(test.subtests[name], value)
        +

           def test_load_file_name():
               """backends.junit._load: uses the filename for name if
        filename != 'results'
        @@ -319,3 +380,142 @@ def test_load_default_name():
                   test = backends.junit.REGISTRY.load(filename, 'none')

             nt.assert_equal(test.name, 'junit result')
        +
        +
        +class TestJunitSubtestWriting(object):
        +    """Tests for Junit subtest writing.
        +
        +    Junit needs to write out subtests as full tests, so jenkins will consume
        +    them correctly.
        +
        +    """
        +    __patchers = [
        +        mock.patch('framework.backends.abstract.shutil.move', mock.Mock()),
        +    ]
        +
        +    @staticmethod
        +    def _make_result():
        +        result = results.TestResult()
        +        result.time.end = 1.2345
        +        result.result = 'pass'
        +        result.out = 'this is stdout'
        +        result.err = 'this is stderr'
        +        result.command = 'foo'
        +        result.subtests['foo'] = 'skip'
        +        result.subtests['bar'] = 'fail'
        +        result.subtests['oink'] = 'crash'
        +
        +        test = backends.junit.JUnitBackend('foo')
        +        mock_open = mock.mock_open()
        +        with mock.patch('framework.backends.abstract.open', mock_open):
        +            with test.write_test(grouptools.join('a', 'group', 'test1')) as t:
        +                t(result)
        +
        +        # Return an xml object
        +        # This seems pretty fragile, but I don't see a better way to get what
        +        # we want
        +        return etree.fromstring(mock_open.mock_calls[-3][1][0])
        +
        +    @classmethod
        +    def setup_class(cls):
        +        for p in cls.__patchers:
        +            p.start()
        +
        +        cls.output = cls._make_result()
        +
        +    @classmethod
        +    def teardown_class(cls):
        +        for p in cls.__patchers:
        +            p.stop()
        +
        +    def test_suite(self):
        +        """backends.junit.JUnitBackend.write_test: wraps the
        cases in a suite"""
        +        nt.eq_(self.output.tag, 'testsuite')
        +
        +    def test_cases(self):
        +        """backends.junit.JUnitBackend.write_test: has one
        <testcase> per subtest"""
        +        nt.eq_(len(self.output.findall('testcase')), 3)
        +
        +    @utils.nose_generator
        +    def test_metadata(self):
        +        """backends.junit.JUnitBackend.write_test: metadata
        written into the
        +        suite
        +
        +        """
        +        def test(actual, expected):
        +            nt.eq_(expected, actual)
        +
        +        description = ('backends.junit.JUnitBackend.write_test: '
        +                      '{} is written into the suite')
        +
        +        if self.output.tag != 'testsuite':
        +            raise Exception('Could not find a testsuite!')
        +
        +        tests = [
        +            (self.output.find('system-out').text, 'this is stdout',
        +             'system-out'),
        +            (self.output.find('system-err').text,
        +             'this is stderr\n\nstart time: 0.0\nend time: 1.2345\n',
        +             'system-err'),
        +            (self.output.attrib.get('name'), 'piglit.a.group.test1', 'name'),
        +            (self.output.attrib.get('time'), '1.2345', 'timestamp'),
        +            (self.output.attrib.get('failures'), '1', 'failures'),
        +            (self.output.attrib.get('skipped'), '1', 'skipped'),
        +            (self.output.attrib.get('errors'), '1', 'errors'),
        +            (self.output.attrib.get('tests'), '3', 'tests'),
        +        ]
        +
        +        for actual, expected, name in tests:
        +            test.description = description.format(name)
        +            yield test, actual, expected
        +
        +    def test_testname(self):
        +        """backends.junit.JUnitBackend.write_test: the testname
        should be the subtest name"""
        +        nt.ok_(self.output.find('testcase[@name="foo"]') is not
        None)
        +
        +    def test_fail(self):
        +        """Backends.junit.JUnitBackend.write_test: add <failure> if the subtest failed"""
        +        nt.eq_(len(self.output.find('testcase[@name="bar"]').findall('failure')), 1)
        +
        +    def test_skip(self):
        +        """Backends.junit.JUnitBackend.write_test: add <skipped> if the subtest skipped"""
        +        nt.eq_(len(self.output.find('testcase[@name="foo"]').findall('skipped')), 1)
        +
        +    def test_error(self):
        +        """Backends.junit.JUnitBackend.write_test: add <error> if the subtest crashed"""
        +        nt.eq_(len(self.output.find('testcase[@name="oink"]').findall('error')), 1)
        +
        +
        +class TestJunitSubtestFinalize(utils.StaticDirectory):
        +    @classmethod
        +    def setup_class(cls):
        +        super(TestJunitSubtestFinalize, cls).setup_class()
        +
        +        result = results.TestResult()
        +        result.time.end = 1.2345
        +        result.result = 'pass'
        +        result.out = 'this is stdout'
        +        result.err = 'this is stderr'
        +        result.command = 'foo'
        +        result.subtests['foo'] = 'pass'
        +        result.subtests['bar'] = 'fail'
        +
        +        test = backends.junit.JUnitBackend(cls.tdir)
        +        test.initialize(BACKEND_INITIAL_META)
        +        with test.write_test(grouptools.join('a', 'test', 'group', 'test1')) as t:
        +            t(result)
        +        test.finalize()
        +
        +    @utils.not_raises(etree.ParseError)
        +    def test_valid_xml(self):
        +        """backends.jUnit.JunitBackend.finalize: produces valid
        xml with subtests"""
        +        etree.parse(os.path.join(self.tdir, 'results.xml'))
        +
        +    def test_valid_junit(self):
        +        """backends.jUnit.JunitBackend.finalize: prodives valid
        junit with subtests"""
        +        if etree.__name__ != 'lxml.etree':
        +            raise SkipTest('Test requires lxml features')
        +
        +        schema = etree.XMLSchema(file=JUNIT_SCHEMA)
        +        xml = etree.parse(os.path.join(self.tdir, 'results.xml'))
        +        nt.ok_(schema.validate(xml), msg='xml is not valid')



