Hey, gang! As I read through the standard interface and tried the ansiblized branch of libtaskotron, I found things that were not exactly clear to me, and I have some questions. My summer afternoon schedule involves feeding rabbits (true story!) and I keep missing people on IRC, hence this email.
= Test output and its format =

The standard test interface specifies [1] that:

1) "test system must examine the exit code of the playbook. A zero exit code is successful test result, non-zero is failure"
2) "test suite must treat the file test.log in the artifacts folder as the main readable output of the test"

Ad 1) Examining the exit code is pretty straightforward. The mapping to outcome would be zero to PASSED and non-zero to FAILED. Currently we use more than these two outcomes, namely INFO and NEEDS_INSPECTION. Are we still going to use them, and if so, what would be the cases? The playbook can also fail by itself (e.g. with an error like "command not found" or "permission denied"), but I presume this failure would be reported to ExecDB, not to ResultsDB. Any thoughts on this?

Ad 2) The standard interface does not specify the format of the test output, just that test.log must be readable. Does this mean that the output can be in any arbitrary format, and that parsing it would be left to the people who care, i.e. packagers? Wouldn't this be a problem if, for example, Bodhi wanted to extract/parse this information from ResultsDB and show it on the update page?

= Triggering generic tasks =

The standard interface is centered around dist-git style tasks and doesn't cover generic tasks like rpmlint or rpmdeplint. As these tasks are Fedora QA specific, are we going to create a custom extension to the standard interface, used only by our team, to be able to run generic tasks?

= Reporting to ResultsDB =

The gating requirements for CI and CD contain [2]: "It must be possible to represent CI test results in resultsdb." However, the standard interface does not mention ResultsDB at all. Does this mean that a task playbook won't contain anything like a ResultsDB module (in contrast to the resultsdb directive in formulae), since the task playbook should be agnostic of the system it runs in, and the reporting will be done by our code in runtask?
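If the reporting does end up in runtask, I picture the exit-code handling along these lines. This is just a sketch of my understanding, not anything from the interface: the function names, the testcase string, and the payload keys are all made up, only the "zero is success, non-zero is failure" rule comes from the spec.

```python
# Hypothetical sketch of runtask translating an ansible-playbook exit
# code into a ResultsDB-style result. Only the exit-code rule is from
# the standard interface; all names and keys here are illustrative.

def outcome_from_exit_code(exit_code):
    """The interface's exit-code rule can only yield two outcomes.
    INFO or NEEDS_INSPECTION would need some other channel."""
    return 'PASSED' if exit_code == 0 else 'FAILED'

def build_result(exit_code, testcase, item):
    """Assemble a payload runtask could later submit to ResultsDB."""
    return {
        'testcase': testcase,
        'outcome': outcome_from_exit_code(exit_code),
        'data': {'item': item},
    }

# e.g. a playbook that exited with 0 for some build under test:
result = build_result(0, 'dist.sometest', 'foo-1.0-1.fc27')
```

Note that nothing in this sketch can distinguish a test that legitimately FAILED from a playbook that crashed (command not found, permission denied), which is exactly my ExecDB vs. ResultsDB question above.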
= Output of runtask =

Libtaskotron's output is nice and readable, but the output of the parts now handled by ansible is not. My knowledge of ansible is still limited, but as far as my experience goes, debugging ansible playbooks or even ansible modules is kind of a PITA. Are we going to address this in some way, or just bite the bullet and move along?

= Params of runtask =

When I tried the ansiblized branch of libtaskotron, I ran into issues such as unsupported params: ansible told me to run it with the "-vvv" param, which runtask does not understand. Is there a plan for how we are going to forward such parameters (--ansible-opts=, or just forward any params we don't understand)? At the moment, runtask maps our params to ansible-playbook params and those defined by the standard interface. Are we going to stick with this, or change our params to match the ones of ansible-playbook and the standard interface (e.g. item would become subject, etc.)?

= Future of runtask =

For now, runtask is the user-facing part of Taskotron. However, the standard interface is designed in such a way that authors of task playbooks shouldn't care about Taskotron (or any other system that will run their code). They can develop their tasks by simply using ansible-playbook. Does this mean that runtask will become a convenience script for us that parses arguments and spins up a VM? Because everything else is in the wrapping ansible playbook...

Lukas

[1] https://fedoraproject.org/wiki/Changes/InvokingTests
[2] https://fedoraproject.org/wiki/Fedora_requirements_for_CI_and_CD
_______________________________________________
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org