Hello!

A comment in JIRA about running unit tests sparked a debate in
TIVI-1748 about whether running automated tests as part of the OBS
build of Tizen packages is feasible and/or desirable. This list is a
better place to discuss that.

Traditionally, formal QA testing was (and still is) decoupled from
development and packaging: QA testing happens only after a package has
been tested by developers and release engineers and included in images.
At each step of the process, different people use different tests:
developers maintain unit tests, release engineers have manual (?)
checklists, and QA maintains yet another set of tests.

Developers have to maintain their own, project-specific tests because
otherwise they cannot reliably ensure that the code they commit does
not cause regressions. Doing QA after packaging can't be a replacement
for that; it would happen much too late in the development cycle.

Obviously this often causes duplicated effort in maintaining tests. In
the past, before Tizen, I tried to get QA engineers to contribute tests
to SyncEvolution's original set of tests, without success. I also
packaged these tests and provided instructions to QA on how to use
them, which worked better, until the QA engineer got reassigned.

The proposal in TIVI-1748 is about that second approach.

There are two different, complementary options. The first is to run
"make check" (or something equivalent) as part of the compile rules in
the .spec file.
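
For illustration, a minimal sketch of what that could look like in a
spec file (the %check section is standard RPM, runs after %install, and
aborts the build on a non-zero exit status):

    %build
    %configure
    make %{?_smp_mflags}

    %check
    # a failing test suite aborts the build here, so test
    # failures surface as ordinary OBS build failures
    make check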

The advantage is that it works automatically and in the same way for
all packages; there is no need to provide instructions on how to run
the tests.

The main downside is that thorough checking can easily take longer
than the actual compilation. The code might also have to be compiled
twice: once with embedded unit tests enabled and once in release mode.
Is such a slowdown something that we can and/or want to accept?
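
As a sketch of the double-compilation case (the --enable-unit-tests
switch is hypothetical; the actual flag depends on the project):

    %build
    # first pass: instrumented build, used only for running the tests
    %configure --enable-unit-tests
    make %{?_smp_mflags}
    make check
    # second pass: clean release build that actually gets packaged
    make distclean
    %configure
    make %{?_smp_mflags}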

How can we ensure that failed tests will be recognized and handled
efficiently?

The other option is installing and packaging tests in a separate .rpm
for later use by QA or developers on a device.
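
A rough sketch of what such a subpackage could look like (the
install-tests target and the installation path are assumptions, not an
existing convention):

    %package tests
    Summary: Automated test suite for %{name}
    Requires: %{name} = %{version}-%{release}

    %description tests
    Test programs and data for running the test suite on a device,
    outside of the build tree.

    %install
    %make_install
    # hypothetical upstream make target that installs the tests
    make install-tests DESTDIR=%{buildroot}

    %files tests
    %{_libexecdir}/%{name}/tests/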

This is non-standard and thus would require extra effort from
developers (making tests runnable outside of the build tree) and from
packagers. I think it is only worth the effort if the resulting test
package really gets used. The bluez-test package is a good, albeit
limited, example: it includes additional tools that can be used for
testing, but no automated test suite.

In both cases, the distro maintainers have to commit to providing the
tools required for unit testing. For example, Tizen currently lacks
CPPUnit, and thus SyncEvolution has to be compiled without tests.
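
In spec file terms, the commitment is that a build dependency like the
following must be resolvable in OBS (cppunit-devel is the package name
commonly used in other distros; the exact Tizen name is an assumption):

    # the build fails at dependency resolution time
    # if the distro does not provide this package
    BuildRequires: cppunit-devel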

Any comments or suggestions?

-- 
Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.


