On 03/26/2015 01:33 PM, Tyler Baker wrote:
> On 26 March 2015 at 09:29, John Stultz <[email protected]> wrote:
>> On Thu, Mar 26, 2015 at 4:31 AM, Prarit Bhargava <[email protected]> wrote:
>>> On 03/25/2015 07:44 PM, John Stultz wrote:
>>>> +	printf("%-22s %s missing CAP_WAKE_ALARM?    : [UNSUPPORTED]\n",
>>>> +			clockstring(clock_id),
>>>> +			flags ? "ABSTIME" : "RELTIME");
>>>
>>> Something to think about: Do you want to write these tests to be more
>>> human readable or machine readable?  In theory with awk I guess it
>>> doesn't matter too much, however, it is something that we should think
>>> about moving forward.
>>
>> So this came up at ELC in a few discussions.  Right now there isn't any
>> established output format, but there's some nice and simple
>> infrastructure for counting pass/fails.
>>
>> However, in talking to Tyler, I know he has started looking at how to
>> integrate the selftests into our automated infrastructure and was
>> interested in how we improve the output parsing for reports.  So there
>> is interest in improving this, and I'm open to whatever changes might
>> be needed (adding extra arguments to the test to put them into "easy
>> parse" mode or whatever).
>
> Thanks for looping me in, John.  My interest in kselftest stems from my
> involvement with kernelci.org, a community service focused on upstream
> kernel validation across multiple architectures.  In its current form it
> merely build- and boot-tests boards, but we are at a point where we'd
> like to start running some tests.  The automation framework (LAVA) used
> to execute these tests essentially uses a regular expression to parse
> the test's standard output.  This is advantageous because a test can be
> written in any language, as long as it produces sane, uniform output.
>
> Ideally, we would like to perform the kernel builds as we do today,
> build all the kselftests present in the tree, and insert them into a
> 'testing' ramdisk for deployment.  Once we successfully boot the
> platform, we execute all the kselftests, parse standard out, and report
> the results.  The benefit of this approach is that a developer writing a
> test does not have to do anything 'special' to get his/her test to run
> once it has been applied to an upstream tree.  I'll explain below some
> concerns I have about accomplishing this.
>
> Currently, we have had to write wrappers[1][2] for some kselftests to be
> able to parse their output.  If we can agree on a standard output
> format, all of this complexity goes away and we can run kselftests
> dynamically.  No integration work will be needed for new tests, since
> they will all produce output in a standard way.  I've taken a look at
> the wiki page for standardizing output[3], and TAP looks like a good
> format IMO.
>
> Also, for arch != x86 there are some barriers to overcome to get all the
> kselftests cross compiling, which would be nice to have as well.
>
> I realize this may be a good amount of work, so I'd like to help out.
> Perhaps working with John to convert his timer tests to use TAP output
> would be a good starting point?
John, I could probably do that for you.  I'm always willing to give it a
shot.

>>
>> thanks
>> -john
>
> [1] https://git.linaro.org/qa/test-definitions.git/blob/HEAD:/common/scripts/kselftest-runner.sh
> [2] https://git.linaro.org/qa/test-definitions.git/blob/HEAD:/common/scripts/kselftest-mqueue.sh
> [3] https://kselftest.wiki.kernel.org/standardize_the_test_output
>
> Cheers,
>
> Tyler

I'll go off and look at this and wait for the current patchset(s) to make
it into Linus' tree before posting or suggesting patches.

P.
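P.S. To make the TAP idea concrete, here is a rough sketch of how a timer
test could emit TAP output instead of the bracketed [UNSUPPORTED]/[OK]
strings: a "1..N" plan line up front, then one "ok"/"not ok" line per
test, with the missing-CAP_WAKE_ALARM case reported as a SKIP.  This is
not a real patch; the clock names, test count, and the tap_result()
helper are made up purely to illustrate the output format.

/* Rough sketch only -- test names and results below are invented to
 * show the TAP output format, not taken from the real timer tests. */
#include <stdio.h>

static int test_num;

/* Print one TAP result line: "ok N - name" or "not ok N - name".
 * An optional directive (e.g. "SKIP ...") is appended after " # ". */
static void tap_result(int ok, const char *name, const char *directive)
{
	printf("%s %d - %s%s%s\n",
	       ok ? "ok" : "not ok", ++test_num, name,
	       directive ? " # " : "", directive ? directive : "");
}

int main(void)
{
	printf("1..3\n");	/* TAP plan: three tests follow */

	tap_result(1, "CLOCK_MONOTONIC relative timer", NULL);
	/* What is [UNSUPPORTED] today becomes a TAP SKIP directive: */
	tap_result(1, "CLOCK_BOOTTIME_ALARM absolute timer",
		   "SKIP missing CAP_WAKE_ALARM");
	tap_result(0, "CLOCK_REALTIME set-time latency", NULL);

	return 0;
}

The output of the above would be:

1..3
ok 1 - CLOCK_MONOTONIC relative timer
ok 2 - CLOCK_BOOTTIME_ALARM absolute timer # SKIP missing CAP_WAKE_ALARM
not ok 3 - CLOCK_REALTIME set-time latency

which a TAP consumer (or the LAVA regular expression) should be able to
parse for pass/fail/skip counts without any per-test wrapper scripts.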

