Hello,

On 12 June 2014 10:07, Patrick Ohly <[email protected]> wrote:
> Hello!
>
> A comment in JIRA about running unit testing sparked a debate in
> TIVI-1748 whether running automated tests as part of the OBS build of
> Tizen packages is feasible and/or desirable. This list here is a better
> place to discuss this.
>
> Traditionally, formal QA testing was (and still is) decoupled from
> development and packaging:
I suppose that "Traditionally" refers to Tizen only =) But even that is
not fully correct.

> QA testing happens only after a package was
> tested by developers and release engineers and included in images.

Right now there is pre-integration testing deployed on a small scale
for Tizen:IVI, on IA, running on production servers in tizen.org. The
Jenkins code/XMLs are already published as part of the jenkins-script
package, AFAIK. The other, HW-specific part should be available soon,
after I clear the legal obligations. IVI is acting as lead vertical,
but Common will also soon receive similar treatment.

The testing currently performed acts as a smoke test for IVI. The
automation consists of:

* notification to the tester that there is an image ready
* preparation of the device: flashing the image and optionally
  installing components needed for testing (ad-hoc keys for
  passwordless login, extra packages, etc.)
* execution of the test cases
* labeling of the image with the result of the testing

> At
> each step of the process, the different people use different tests:
> developers maintain unit tests, release engineers have manual (?)
> checklists, QA maintains yet another set of tests.
>
> Developers have to maintain their own, project specific tests because
> otherwise they cannot ensure reliably that the code that they are
> committing is not causing regressions. Doing QA after packaging can't be
> a replacement for that, it would happen much too late in the development
> cycle.

Hmm, I suspect that here you mean "packaging" as "integrating into the
repo of accepted components and delivered as a released image" rather
than "put in a .rpm/.deb file". Right?

> Obviously this is often causing a duplication of effort on maintaining
> tests. In the past before Tizen, I tried to get QA engineers to
> contribute tests to SyncEvolution's original set of tests, without
> success.
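To make the four automation steps above concrete, here is a rough
sketch of that loop as a script. All the commands are echo stubs with
invented names (notify_tester, prepare_device, run_smoke_tests,
label_image), not the actual tizen.org/Jenkins tooling:

```shell
#!/bin/sh
# Sketch of the pre-integration smoke-test loop; every command here is
# a placeholder stub, not the real flashing/labeling tooling.
set -e

IMAGE="${1:-tizen-ivi-snapshot.img}"

notify_tester() {           # step 1: tell the tester an image is ready
    echo "image ready: $IMAGE"
}

prepare_device() {          # step 2: flash, install ad-hoc keys, extras
    echo "flashing $IMAGE"
    echo "installing passwordless-login keys and extra test packages"
}

run_smoke_tests() {         # step 3: execute the test cases
    echo "running smoke tests"
    true                    # placeholder verdict
}

label_image() {             # step 4: record the result on the image
    echo "label: $1"
}

notify_tester
prepare_device
if run_smoke_tests; then
    label_image PASS
else
    label_image FAIL
fi
```

The point of the shape is only that the verdict of step 3 feeds the
label in step 4, which is what lets Jenkins mark an image as tested.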
> I also packaged these tests and provided instructions to QA on
> how to use them, which worked better until the QA engineer got
> reassigned.

Personally I would prefer to make the job simple enough that the
developer can completely own the test cases used for their own
development. Like you did. There are, right now, means to do so,
e.g. fMBT [https://01.org/fmbt]; this is what I'm relying on right
now. I'll leave further commenting to Antti (added to this thread).

So I'm leaving the text below intact ...

> The proposal in TIVI-1748 is about that second approach.
>
> There are two different complementary options: first, run "make
> check" (or something equivalent) as part of the compile rules in
> the .spec file.
>
> The advantage is that it works automatically the same way for all
> packages, there is no need to provide instructions on how to run the
> tests.
>
> The main downside of this is that thorough checking can easily take
> longer than the actual compilation. The code might also have to be
> compiled twice, once with embedded unit tests enabled and once in
> release mode. Is such a slowdown something that we can and/or want to
> accept?
>
> How can we ensure that failed tests will be recognized and handled
> efficiently?
>
> The other option is installing and packaging tests in a separate .rpm
> for later use by QA or developers on a device.
>
> This is non-standard and thus would require extra effort from developers
> (making tests runnable outside of the build tree) and packagers. I think
> it is only worth the effort if the resulting test package really gets
> used. The bluez-test package is a good, albeit also limited example: it
> includes additional tools that can be used for testing, but no automated
> test suite.
>
> In both cases, there has to be a commitment from the distro maintainers
> that tools required for unit testing are provided by the distro.
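For reference, a rough sketch of what the two options above could look
like in a spec file; the bcond name, subpackage contents, and install
path are invented for illustration and would need adapting per package:

```spec
# Option 1: run the unit tests during the OBS build.
# A bcond lets OBS disable them ("--without tests") when
# build time is a concern.
%bcond_without tests

%check
%if %{with tests}
make check
%endif

# Option 2: ship the tests in a separate subpackage for later
# on-device use by QA or developers.
%package tests
Summary:  Automated test suite for %{name}
Requires: %{name} = %{version}-%{release}

%description tests
Test programs and data for %{name}, runnable outside of the
build tree.

%files tests
%{_libdir}/%{name}/tests/
```

A failed %check aborts the build, so OBS would flag the package
automatically; the subpackage variant instead leaves running the tests
to whoever installs it.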
> For
> example, Tizen currently lacks CPPUnit and thus SyncEvolution has to be
> compiled without tests.

--
cheers, igor

_______________________________________________
Dev mailing list
[email protected]
https://lists.tizen.org/listinfo/dev
