On Sat, 2020-06-13 at 12:19 +0200, Konrad Weihmann wrote:
> Add new decorator which behaves like OEHasPackage, but
> fails the testcase if a dependency isn't met.
> 
> This helps to identify missing packages in the image
> under test when using static test suite lists, otherwise
> a missing package won't fail the overall test suite and
> errors might slip through unnoticed
> 
> Signed-off-by: Konrad Weihmann <[email protected]>
> ---
>  meta/lib/oeqa/runtime/decorator/package.py | 50 ++++++++++++++++++++++
>  1 file changed, 50 insertions(+)
> 
> diff --git a/meta/lib/oeqa/runtime/decorator/package.py b/meta/lib/oeqa/runtime/decorator/package.py
> index 4c5ca198b0..b3d3fdbec2 100644
> --- a/meta/lib/oeqa/runtime/decorator/package.py
> +++ b/meta/lib/oeqa/runtime/decorator/package.py
> @@ -54,3 +54,53 @@ class OEHasPackage(OETestDecorator):
>              if self.case.tc.image_packages.isdisjoint(need_pkgs):
>                  msg = "Test requires %s to be installed" % ', or'.join(need_pkgs)
>                  self.case.skipTest(msg)
> +
> +@registerDecorator
> +class OERequirePackage(OETestDecorator):
> +    """
> +        Checks if image has packages (un)installed.
> +        It is almost the same as OEHasPackage, but if dependencies are missing
> +        the test case fails.
> +
> +        The argument must be a string, set, or list of packages that must be
> +        installed or not present in the image.
> +
> +        The way to tell a package must not be in an image is using an
> +        exclamation point ('!') before the name of the package.
> +
> +        If test depends on pkg1 or pkg2 you need to use:
> +        @OERequirePackage({'pkg1', 'pkg2'})
> +
> +        If test depends on pkg1 and pkg2 you need to use:
> +        @OERequirePackage('pkg1')
> +        @OERequirePackage('pkg2')
> +
> +        If test depends on pkg1 but pkg2 must not be present use:
> +        @OERequirePackage({'pkg1', '!pkg2'})
> +    """
> +
> +    attrs = ('need_pkgs',)
> +
> +    def setUpDecorator(self):
> +        need_pkgs = set()
> +        unneed_pkgs = set()
> +        pkgs = strToSet(self.need_pkgs)
> +        for pkg in pkgs:
> +            if pkg.startswith('!'):
> +                unneed_pkgs.add(pkg[1:])
> +            else:
> +                need_pkgs.add(pkg)
> +
> +        if unneed_pkgs:
> +            msg = 'Checking if %s is not installed' % ', '.join(unneed_pkgs)
> +            self.logger.debug(msg)
> +            if not self.case.tc.image_packages.isdisjoint(unneed_pkgs):
> +                msg = "Test can't run with %s installed" % ', or'.join(unneed_pkgs)
> +                self.case.fail(msg)
> +
> +        if need_pkgs:
> +            msg = 'Checking if at least one of %s is installed' % ', '.join(need_pkgs)
> +            self.logger.debug(msg)
> +            if self.case.tc.image_packages.isdisjoint(need_pkgs):
> +                msg = "Test requires %s to be installed" % ', or'.join(need_pkgs)
> +                self.case.fail(msg)
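
[Editor's note: for reference, the '!'-handling loop in setUpDecorator above behaves like this small standalone sketch. strToSet is re-implemented here as a stand-in for the oeqa helper of the same name, under the assumption that it normalises a string, list, or set argument to a set; split_pkgs is a hypothetical name used only for illustration.]

```python
def strToSet(val):
    # Stand-in for oeqa's strToSet helper (assumption: the real helper
    # behaves equivalently for str/list/set inputs).
    return {val} if isinstance(val, str) else set(val)

def split_pkgs(spec):
    # Mirrors the loop in setUpDecorator: a leading '!' marks a package
    # that must NOT be installed; all other packages are required.
    need_pkgs, unneed_pkgs = set(), set()
    for pkg in strToSet(spec):
        if pkg.startswith('!'):
            unneed_pkgs.add(pkg[1:])
        else:
            need_pkgs.add(pkg)
    return need_pkgs, unneed_pkgs

# From the docstring example: pkg1 must be present, pkg2 must not be.
need, unneed = split_pkgs({'pkg1', '!pkg2'})
```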

I can see the use case; I'm a bit torn on whether we should fail in
these cases, or whether we should encourage people to check that the
tests they expected to run really did run.

With the complexity on the autobuilder we've had to rely on the latter,
comparing that all tests which ran previously still run.

With regard to the patch itself, I'm worried about the code duplication
with the other skip decorator; is there a way to reduce that and make
things a bit neater?
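
[Editor's note: one way to cut the duplication might be to keep all of the checking logic in OEHasPackage and have OERequirePackage override only the reaction to an unmet requirement. The sketch below is standalone and illustrative, not the actual oeqa code: the decorator registration, attrs plumbing, and strToSet are omitted, and fail_or_skip is a hypothetical hook name.]

```python
class OEHasPackage:
    """Sketch: skips the test case when package requirements are unmet."""
    def __init__(self, need_pkgs, case):
        self.need_pkgs = need_pkgs
        self.case = case

    def fail_or_skip(self, msg):
        # Subclasses override this to change how an unmet requirement
        # is reported: skip here, hard failure in OERequirePackage.
        self.case.skipTest(msg)

    def setUpDecorator(self):
        # Split the argument into required and forbidden ('!'-prefixed)
        # package sets, then check both against the image contents.
        pkgs = {self.need_pkgs} if isinstance(self.need_pkgs, str) \
            else set(self.need_pkgs)
        need = {p for p in pkgs if not p.startswith('!')}
        unneed = {p[1:] for p in pkgs if p.startswith('!')}
        installed = self.case.tc.image_packages
        if unneed and not installed.isdisjoint(unneed):
            self.fail_or_skip("Test can't run with %s installed"
                              % ', or'.join(unneed))
        elif need and installed.isdisjoint(need):
            self.fail_or_skip("Test requires %s to be installed"
                              % ', or'.join(need))

class OERequirePackage(OEHasPackage):
    """Sketch: identical checks, but a miss fails instead of skipping."""
    def fail_or_skip(self, msg):
        self.case.fail(msg)
```

This keeps a single copy of the set logic, so a later fix to the package matching only has to happen in one place.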

Cheers,

Richard





View/Reply Online (#139506): https://lists.openembedded.org/g/openembedded-core/message/139506