On Thu, Feb 11, 2016 at 08:33:39AM -0500, Cleber Rosa wrote:
> 
> ----- Original Message -----
> > From: "Lukasz Majewski" <[email protected]>
> > To: [email protected]
> > Sent: Thursday, February 11, 2016 6:27:22 AM
> > Subject: [Autotest] Feasibility study - issues clarification
> > 
> > Dear all,
> > 
> > I'd be grateful for clarification of a few issues regarding Autotest.
> > 
> > I have the following setup:
> > 1. Custom HW interface to connect Target to Host
> > 2. Target board with Linux
> > 3. Host PC - Debian/Ubuntu
> > 
> > I would like to unify the test setup, and it seems that the Autotest
> > test framework has all the features that I would need:
> > 
> > - Extensible Host class (other interfaces, e.g. USB, can be used for
> > communication)
> > - SSH support for sending client tests from Host to Target
> > - Control of test execution on Target from Host and gathering of results
> > - Standardized test results format
> > - Autotest host and client test results are aggregated and
> > displayed as HTML
> > - Possibility to easily reuse other tests (like LTP, Linaro's PM-QA)
> > - Scheduling, HTML visualization (if needed)
> > 
> > To begin with, I would like to use the test harness (server + client) to
> > run tests and gather results in a structured way.
> > 
> > However, I have a few questions (please correct me if I'm wrong):
> > 
> > - In several presentations it was mentioned that the Avocado project is
> > a successor of Autotest. However, it seems that Avocado is missing the
> > client + server approach from Autotest.
> 
> Right. It's something that is being worked on at this very moment:
> 
> https://trello.com/c/AnoH6vhP/530-experiment-multiple-machine-support-for-tests
> 
> > - What is the future of Autotest? Will it be gradually replaced by
> > Avocado?
> 
> Autotest has been mostly in maintenance mode for the last 20 months or
> so. Most of the energy of the Autotest maintainers has shifted towards
> Avocado. So, while no Open Source project can be killed (nor should it
> be), yes, Autotest users should start looking into Avocado.
> 
> > - It seems that there are only two statuses returned from a simple
> > test (like sleeptest), namely "PASS" and "FAIL". How can I indicate
> > that the test ended because the environment was not ready to run
> > the test (something similar to LTP's "BROK" code, or exit codes
> > complying with POSIX 1003.1)?
> 
> I reckon this is a question about Autotest test result status, so I'll
> try to answer in that context. First, the framework itself intentionally
> gives you a limited set of test result statuses. If you want to save
> additional information about your test, including, say, the mapping to
> POSIX 1003.1 codes, you can use the test's "keyval" store for that. The
> "keyval" is both saved to a local file and to the server's database
> (when that is used).

You're probably referring to the whiteboard:
http://avocado-framework.readthedocs.org/en/latest/WritingTests.html#saving-test-generated-custom-data
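
For what it's worth, a rough (untested) sketch of saving custom data
through the whiteboard in an Avocado test -- the class name and keys
below are just placeholders:

    import json

    from avocado import Test
    from avocado import main


    class CustomDataTest(Test):

        """
        Hypothetical test that records extra result metadata.
        """

        def test(self):
            # Whatever string is stored in self.whiteboard is saved to
            # the 'whiteboard' file in the test's result directory.
            self.whiteboard = json.dumps({'env_ready': True,
                                          'posix_exit_code': 0})


    if __name__ == "__main__":
        main()

The whiteboard is just a string, so anything structured (JSON in the
sketch above) has to be serialized by the test itself.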

Thanks.
   - Ademar

> 
> Avocado INSTRUMENTED tests, though, have a better separation of test
> setup and execution, and a test can be SKIPPED during the setup phase.
> A few pointers:
> 
>  * https://github.com/avocado-framework/avocado/blob/master/examples/tests/skiponsetup.py
>  * http://avocado-framework.readthedocs.org/en/latest/api/core/avocado.core.html#avocado.core.test.Test.skip
> 
> > - Is there any roadmap for Autotest development? I'm wondering if
> > Avocado's features (like per-test SHA1 generation) would be ported to
> > Autotest?
> 
> Not really. Avocado's roadmap, though, is accessible here:
> 
> https://trello.com/b/WbqPNl2S/avocado

-- 
Ademar Reis
Red Hat

_______________________________________________
Autotest-kernel mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/autotest-kernel
