Re: [PATCH v18 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-10-07 Thread Steven Rostedt
On Sun, 6 Oct 2019 10:18:11 -0700
Linus Torvalds  wrote:

> On Sun, Oct 6, 2019 at 9:55 AM Theodore Y. Ts'o  wrote:
> >
> > Well, one thing we *can* do is (a) if we can create a kselftest
> > branch which we know is stable and won't change, and (b) we can get
> > assurances that Linus *will* accept that branch during the next merge
> > window, then those subsystems which want to use kselftest can simply
> > pull it into their tree.
> 
> Yes.
> 
> At the same time, I don't think it needs to be even that fancy. Even
> if it's not a stable branch that gets shared between different
> developers, it would be good to just have people do a "let's try this"
> throw-away branch to use the kunit functionality and verify that
> "yeah, this is fairly convenient for ext4".
> 
> It doesn't have to be merged in that form, but just confirmation that
> the infrastructure is helpful before it gets merged would be good.

Can't you just create an ext4 branch that has the kselftest-next branch
merged into it, build on top of that, and push it out after the KUnit
series is merged?

In the past I've had to rely on other branches in next, and would just
hold two branches myself. One with everything not dependent on the other
developer's branch, and one with the work that was. At the merge
window, I would either merge the two or just send two pull requests
with the two branches.

-- Steve


Re: [PATCH v18 00/19] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-10-04 Thread Brendan Higgins
On Fri, Oct 04, 2019 at 03:59:10PM -0600, shuah wrote:
> On 10/4/19 3:42 PM, Linus Torvalds wrote:
> > On Fri, Oct 4, 2019 at 2:39 PM Theodore Y. Ts'o  wrote:
> > > 
> > > This question is primarily directed at Shuah and Linus
> > > 
> > > What's the current status of the kunit series now that Brendan has
> > > moved it out of the top-level kunit directory as Linus has requested?
> > 
> 
> The move happened smack in the middle of the merge window and landed in
> linux-next towards the end of the merge window.
> 
> > We seemed to decide to just wait for 5.5, but there is nothing that
> > looks to block that. And I encouraged Shuah to find more kunit cases
> > for when it _does_ get merged.
> > 
> 
> Right. I communicated to Brendan that we could work on adding more
> KUnit-based tests, which would help get more mileage on KUnit.
> 
> > So if the kunit branch is stable, and people want to start using it
> > for their unit tests, then I think that would be a good idea, and then
> > during the 5.5 merge window we'll not just get the infrastructure,
> > we'll get a few more users too and not just examples.

I was planning on holding off on accepting more tests/changes until
KUnit is in torvalds/master. As much as I would like to go around
promoting it, I don't really want to promote too much complexity in a
non-upstream branch before getting it upstream because I don't want to
risk adding something that might cause it to get rejected again.

To be clear, I can understand from your perspective why getting more
tests/usage before accepting it is a good thing. The more people that
play around with it, the more likely that someone will find an issue
with it, and more likely that what is accepted into torvalds/master is
of high quality.

However, if I encourage arbitrary tests/improvements into my KUnit
branch, it diverges further from torvalds/master, and it becomes more
likely that there will be a merge conflict, or an issue unrelated to the
core KUnit changes, that will cause the whole thing to be rejected again
for v5.5.

I don't know. I guess we could maybe address that situation by splitting
up the pull request into features and tests when we go to send it in,
but that seems to invite a lot of unnecessary complexity. I actually
already had some other tests/changes ready to send for review, but was
holding off until the initial set of patches made it in.

Looking forward to hearing other people's thoughts.


Re: [PATCH v13 00/18] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-08-14 Thread Stephen Boyd
Quoting Brendan Higgins (2019-08-14 03:03:47)
> On Tue, Aug 13, 2019 at 10:52 PM Brendan Higgins
>  wrote:
> >
> > ## TL;DR
> >
> > This revision addresses comments from Stephen and Bjorn Helgaas. Most
> > changes are pretty minor stuff that doesn't affect the API in any way.
> > One significant change, however, is that I added support for freeing
> > kunit_resource managed resources before the test case is finished via
> > kunit_resource_destroy(). Additionally, Bjorn pointed out that I broke
> > KUnit on certain configurations (like the default one for x86, whoops).
> >
> > Based on Stephen's feedback on the previous change, I think we are
> > pretty close. I am not expecting any significant changes from here on
> > out.
> 
> Stephen, it looks like you have just replied with "Reviewed-bys" on
> all the remaining emails that you looked at. Is there anything else
> that we are missing? Or is this ready for Shuah to apply?
> 

I think it's good to go! Thanks for the persistence.



Re: [PATCH v5 00/18] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-06-21 Thread Theodore Ts'o
On Fri, Jun 21, 2019 at 08:59:48AM -0600, shuah wrote:
> > > ### But wait! Doesn't kselftest support in kernel testing?!
> > >
> > > 
> 
> I think I commented on this before. I agree with the statement that
> there is no overlap between Kselftest and KUnit. I would like to see this
> removed. Kselftest module support covers use cases KUnit won't be able
> to. I can build a kernel with Kselftest test modules and use it in the
> field to load and run tests if I need to debug a problem and get data
> from a system. I can't do that with KUnit.
> 
> In my mind, I am not viewing this as a question of which is better.
> Kselftest and KUnit both have their place in the kernel development
> process. It isn't productive or necessary to compare Kselftest and KUnit
> without a good understanding of the problem spaces for each of them.
>
> I would strongly recommend not making reference to Kselftest and instead
> talking about what KUnit offers.

Shuah,

Just to recall the history, this section of the FAQ was added to rebut
the ***very*** strong statements that Frank made that there was
overlap between Kselftest and Kunit, and that having too many ways for
kernel developers to do the identical thing was harmful (he said it
was too much of a burden on a kernel developer) --- and this was an
argument for not including Kunit in the upstream kernel.

If we're past that objection, then perhaps this section can be
dropped, but there's a very good reason why it was there.  I wouldn't
want Brendan to be accused of ignoring feedback from those who reviewed
his patches.   :-)

- Ted


Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-23 Thread Daniel Vetter
On Wed, May 22, 2019 at 02:38:48PM -0700, Brendan Higgins wrote:
> +Bjorn Helgaas
> 
> On Wed, May 15, 2019 at 12:41 AM Daniel Vetter  wrote:
> >
> > On Tue, May 14, 2019 at 11:36:18AM -0700, Brendan Higgins wrote:
> > > On Tue, May 14, 2019 at 02:05:05PM +0200, Daniel Vetter wrote:
> > > > On Tue, May 14, 2019 at 8:04 AM Brendan Higgins
> > > >  wrote:
> > > > >
> > > > > On Mon, May 13, 2019 at 04:44:51PM +0200, Daniel Vetter wrote:
> > > > > > On Sat, May 11, 2019 at 01:33:44PM -0400, Theodore Ts'o wrote:
> > > > > > > On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> > > > > > > > However, the reply is incorrect.  Kselftest in-kernel tests (which
> > > > > > > > is the context here) can be configured as built in instead of as
> > > > > > > > a module, and built in a UML kernel.  The UML kernel can boot,
> > > > > > > > running the in-kernel tests before UML attempts to invoke the
> > > > > > > > init process.
> > > > > > >
> > > > > > > Um, Citation needed?
> > > > > > >
> > > > > > > I don't see any evidence for this in the kselftest documentation, nor
> > > > > > > do I see any evidence of this in the kselftest Makefiles.
> > > > > > >
> > > > > > > There exist test modules in the kernel that run before the init
> > > > > > > scripts run --- but that's not strictly speaking part of kselftests,
> > > > > > > and do not have any kind of infrastructure.  As noted, the
> > > > > > > kselftests_harness header file fundamentally assumes that you are
> > > > > > > running test code in userspace.
> > > > > >
> > > > > > Yeah I really like the "no userspace required at all" design of kunit,
> > > > > > while still collecting results in a well-defined way (unlike the current
> > > > > > self-tests that just run when you load the module, with maybe some
> > > > > > kselftest ad-hoc wrapper around to collect the results).
> > > > > >
> > > > > > What I want to do long-term is to run these kernel unit tests as part of
> > > > > > the build-testing, most likely in gitlab (sooner or later, for drm.git
> > > > >
> > > > > Totally! This is part of the reason I have been insisting on a minimum
> > > > > of UML compatibility for all unit tests. If you can sufficiently constrain
> > > > > the environment that is required for tests to run in, it makes it much
> > > > > easier not only for a human to run your tests, but it also makes it a
> > > > > lot easier for an automated service to be able to run your tests.
> > > > >
> > > > > I actually have a prototype presubmit already working on my
> > > > > "stable/non-upstream" branch. You can checkout what presubmit results
> > > > > look like here[1][2].
> > > >
> > > > ug gerrit :-)
> > >
> > > Yeah, yeah, I know, but it is a lot easier for me to get a project set
> > > up here using Gerrit, when we already use that for a lot of other
> > > projects.
> > >
> > > Also, Gerrit has gotten a lot better over the last two years or so. Two
> > > years ago, I wouldn't touch it with a ten foot pole. It's not so bad
> > > anymore, at least if you are used to using a web UI to review code.
> >
> > I was somewhat joking, I'm just not used to gerrit ... And seems to indeed
> > be a lot more polished than last time I looked at it seriously.
> 
> I mean, it is still not perfect, but I think it has finally gotten to
> the point where I prefer it over reviewing by email for high context
> patches where you don't expect a lot of deep discussion.
> 
> Still not great for patches where you want to have a lot of discussion.
> 
> > > > > > only ofc). So that people get their pull requests (and patch series, we
> > > > > > have some ideas to tie this into patchwork) automatically tested for this
> > > > >
> > > > > Might that be Snowpatch[3]? I talked to Russell, the creator of Snowpatch,
> > > > > and he seemed pretty open to collaboration.
> > > > >
> > > > > Before I heard about Snowpatch, I had an intern write a translation
> > > > > layer that made Prow (the presubmit service that I used in the prototype
> > > > > above) work with LKML[4].
> > > >
> > > > There's about 3-4 forks/clones of patchwork. snowpatch is one, we have
> > > > a different one on freedesktop.org. It's a bit of a mess :-/
> 
> I think Snowpatch is an ozlabs project; at least the maintainer works at IBM.
> 
> Patchwork originally was an ozlabs project, right?

So there are two patchworks (snowpatch makes the 3rd): the one on
freedesktop is another fork.

> Has any discussion taken place trying to consolidate some of the forks?

Yup, but it didn't lead anywhere unfortunately :-/ At least between patchwork
and patchwork-fdo; I think snowpatch happened in parallel and once it was
done it was kinda too late. The trouble is that they all have slightly
different REST APIs and functionality, so for CI integration you can't just
switch one for the other.

> Presubmit 

Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-15 Thread Daniel Vetter
On Tue, May 14, 2019 at 11:36:18AM -0700, Brendan Higgins wrote:
> On Tue, May 14, 2019 at 02:05:05PM +0200, Daniel Vetter wrote:
> > On Tue, May 14, 2019 at 8:04 AM Brendan Higgins
> >  wrote:
> > >
> > > On Mon, May 13, 2019 at 04:44:51PM +0200, Daniel Vetter wrote:
> > > > On Sat, May 11, 2019 at 01:33:44PM -0400, Theodore Ts'o wrote:
> > > > > On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> > > > > > However, the reply is incorrect.  Kselftest in-kernel tests (which
> > > > > > is the context here) can be configured as built in instead of as
> > > > > > a module, and built in a UML kernel.  The UML kernel can boot,
> > > > > > running the in-kernel tests before UML attempts to invoke the
> > > > > > init process.
> > > > >
> > > > > Um, Citation needed?
> > > > >
> > > > > I don't see any evidence for this in the kselftest documentation, nor
> > > > > do I see any evidence of this in the kselftest Makefiles.
> > > > >
> > > > > There exist test modules in the kernel that run before the init
> > > > > scripts run --- but that's not strictly speaking part of kselftests,
> > > > > and do not have any kind of infrastructure.  As noted, the
> > > > > kselftests_harness header file fundamentally assumes that you are
> > > > > running test code in userspace.
> > > >
> > > > Yeah I really like the "no userspace required at all" design of kunit,
> > > > while still collecting results in a well-defined way (unlike the current
> > > > self-tests that just run when you load the module, with maybe some
> > > > kselftest ad-hoc wrapper around to collect the results).
> > > >
> > > > What I want to do long-term is to run these kernel unit tests as part of
> > > > the build-testing, most likely in gitlab (sooner or later, for drm.git
> > >
> > > Totally! This is part of the reason I have been insisting on a minimum
> > > of UML compatibility for all unit tests. If you can sufficiently constrain
> > > the environment that is required for tests to run in, it makes it much
> > > easier not only for a human to run your tests, but it also makes it a
> > > lot easier for an automated service to be able to run your tests.
> > >
> > > I actually have a prototype presubmit already working on my
> > > "stable/non-upstream" branch. You can checkout what presubmit results
> > > look like here[1][2].
> > 
> > ug gerrit :-)
> 
> Yeah, yeah, I know, but it is a lot easier for me to get a project set
> up here using Gerrit, when we already use that for a lot of other
> projects.
> 
> Also, Gerrit has gotten a lot better over the last two years or so. Two
> years ago, I wouldn't touch it with a ten foot pole. It's not so bad
> anymore, at least if you are used to using a web UI to review code.

I was somewhat joking, I'm just not used to gerrit ... And seems to indeed
be a lot more polished than last time I looked at it seriously.

> > > > only ofc). So that people get their pull requests (and patch series, we
> > > > have some ideas to tie this into patchwork) automatically tested for 
> > > > this
> > >
> > > Might that be Snowpatch[3]? I talked to Russell, the creator of Snowpatch,
> > > and he seemed pretty open to collaboration.
> > >
> > > Before I heard about Snowpatch, I had an intern write a translation
> > > layer that made Prow (the presubmit service that I used in the prototype
> > > above) work with LKML[4].
> > 
> > There's about 3-4 forks/clones of patchwork. snowpatch is one, we have
> > a different one on freedesktop.org. It's a bit of a mess :-/
> 
> Oh, I didn't realize that. I found your patchwork instance here[5], but
> do you have a place where I can see the changes you have added to
> support presubmit?

OK, here are a few links. Aside from the usual patch view we've also added a
series view:

https://patchwork.freedesktop.org/project/intel-gfx/series/?ordering=-last_updated

This ties the patches + cover letter together, and it even (tries to at
least) track revisions. Here's an example which is currently at revision
9:

https://patchwork.freedesktop.org/series/57232/

Below the patch list for each revision we also have the test result list.
If you click on the grey bar it'll expand with the summary from CI, and the
"See full logs" link goes to the full results from our CI. This is driven
by a REST API from our Jenkins.

Patchwork also sends out mails for these results.

Source is on gitlab: https://gitlab.freedesktop.org/patchwork-fdo
 
> > > I am not married to either approach, but I think between the two of
> > > them, most of the initial legwork has been done to make presubmit on
> > > LKML a reality.
> > 
> > We do have presubmit CI working already with our freedesktop.org
> > patchwork. The missing glue is just tying that into gitlab CI somehow
> > (since we want to unify build testing more and make it easier for
> > contributors to adjust things).
> 
> I checked out a couple of your projects on your patchwork instance: AMD
> X.Org drivers, DRI devel, and Wayland. I saw the tab you added for
> 

Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-14 Thread Brendan Higgins
rness.h *
> > Documentation/dev-tools/kselftest.rst
> > MAINTAINERS
> > tools/testing/selftests/kselftest_harness.h
> > tools/testing/selftests/net/tls.c
> > tools/testing/selftests/rtc/rtctest.c
> > tools/testing/selftests/seccomp/Makefile
> > tools/testing/selftests/seccomp/seccomp_bpf.c
> > tools/testing/selftests/uevent/Makefile
> > tools/testing/selftests/uevent/uevent_filtering.c
> 
> 
> > Thus, I can personally see a lot of value in integrating the kunit
> > test framework with this kselftest harness. There's only a small
> > number of users of the kselftest harness today, so one way or another
> > it seems like getting this integrated early would be a good idea.
> > Letting Kunit and Kselftests progress independently for a few years
> > will only make this worse and may become something we end up
> > regretting.
> 
> Yes, this I agree with.

I think I agree with this point. I cannot see any reason not to have
KUnit tests able to be run from the kselftest harness.

Conceptually, I think we are mostly in agreement that kselftest and
KUnit are distinct things. Like Shuah said, kselftest is a black box
regression test framework, KUnit is a white box unit testing framework.
So making kselftest the only interface to use KUnit would be a mistake
in my opinion (and I think others on this thread would agree).

That being said, when you go to run kselftest, I think there is an
expectation that you run all your tests. Or at least that kselftest
should make that possible. From my experience, usually when someone
wants to run all the end-to-end tests, *they really just want to run all
the tests*. This would imply that all your KUnit tests get run too.

Another added benefit of making it possible for the kselftest harness to
run KUnit tests would be that it would somewhat guarantee that the
interfaces between the two remain compatible, meaning that test
automation tools like CI and presubmit systems would be easier to
integrate with each and less likely to break for either.

Would anyone object if I explore this in a follow-up patchset? I have an
idea of how I might start, but I think it would be easiest to explore in
its own patchset. I don't expect it to be a trivial amount of work.

Cheers!

[1] 
https://elixir.bootlin.com/linux/v5.1.2/source/tools/testing/selftests/kselftest_harness.h#L329
[2] 
https://elixir.bootlin.com/linux/v5.1.2/source/tools/testing/selftests/kselftest_harness.h#L681


Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-14 Thread Brendan Higgins
On Sat, May 11, 2019 at 08:43:23AM +0200, Knut Omang wrote:
> On Fri, 2019-05-10 at 14:59 -0700, Frank Rowand wrote:
> > On 5/10/19 3:23 AM, Brendan Higgins wrote:
> > >> On Fri, May 10, 2019 at 7:49 AM Knut Omang  wrote:
> > >>>
> > >>> On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> >  On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> > >
> > >
> > > On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> > >> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> > >>>
> > >>> The second item, arguably, does have significant overlap with kselftest.
> > >>> Whether you are running short tests in a light weight UML environment or
> > >>> higher level tests in a heavier VM, the two could be using the same
> > >>> framework for writing or defining in-kernel tests. It *may* also be valuable
> > >>> for some people to be able to run all the UML tests in the heavy VM
> > >>> environment along side other higher level tests.
> > >>>
> > >>> Looking at the selftests tree in the repo, we already have similar items to
> > >>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> > >>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> > >>> the new KUNIT_EXPECT_* and KUNIT_ASSERT_* macros.
> > >>>
> > >>> However, the number of users of this harness appears to be quite small. Most
> > >>> of the code in the selftests tree seems to be a random mishmash of scripts
> > >>> and userspace code so it's not hard to see it as something completely
> > >>> different from the new Kunit:
> > >>>
> > >>> $ git grep --files-with-matches kselftest_harness.h *
> > >>
> > >> To the extent that we can unify how tests are written, I agree that
> > >> this would be a good thing.  However, you should note that
> > >> kselftest_harness.h currently assumes that it will be included in
> > >> userspace programs.  This is most obviously seen if you look closely
> > >> at the functions defined in the header files which make calls to
> > >> fork(), abort() and fprintf().
> > >
> > > Ah, yes. I obviously did not dig deep enough. Using kunit for
> > > in-kernel tests and kselftest_harness for userspace tests seems like
> > > a sensible line to draw to me. Trying to unify kernel and userspace
> > > here sounds like it could be difficult so it's probably not worth
> > > forcing the issue unless someone wants to do some really fancy work
> > > to get it done.
> > >
> > > Based on some of the other commenters, I was under the impression
> > > that kselftests had in-kernel tests but I'm not sure where or if they
> > > exist.
> > 
> >  YES, kselftest has in-kernel tests.  (Excuse the shouting...)
> > 
> >  Here is a likely list of them in the kernel source tree:
> > 
> >  $ grep module_init lib/test_*.c
> >  lib/test_bitfield.c:module_init(test_bitfields)
> >  lib/test_bitmap.c:module_init(test_bitmap_init);
> >  lib/test_bpf.c:module_init(test_bpf_init);
> >  lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> >  lib/test_firmware.c:module_init(test_firmware_init);
> >  lib/test_hash.c:module_init(test_hash_init);  /* Does everything */
> >  lib/test_hexdump.c:module_init(test_hexdump_init);
> >  lib/test_ida.c:module_init(ida_checks);
> >  lib/test_kasan.c:module_init(kmalloc_tests_init);
> >  lib/test_list_sort.c:module_init(list_sort_test);
> >  lib/test_memcat_p.c:module_init(test_memcat_p_init);
> >  lib/test_module.c:static int __init test_module_init(void)
> >  lib/test_module.c:module_init(test_module_init);
> >  lib/test_objagg.c:module_init(test_objagg_init);
> >  lib/test_overflow.c:static int __init test_module_init(void)
> >  lib/test_overflow.c:module_init(test_module_init);
> >  lib/test_parman.c:module_init(test_parman_init);
> >  lib/test_printf.c:module_init(test_printf_init);
> >  lib/test_rhashtable.c:module_init(test_rht_init);
> >  lib/test_siphash.c:module_init(siphash_test_init);
> >  lib/test_sort.c:module_init(test_sort_init);
> >  lib/test_stackinit.c:module_init(test_stackinit_init);
> >  lib/test_static_key_base.c:module_init(test_static_key_base_init);
> >  lib/test_static_keys.c:module_init(test_static_key_init);
> >  lib/test_string.c:module_init(string_selftest_init);
> >  lib/test_ubsan.c:module_init(test_ubsan_init);
> >  lib/test_user_copy.c:module_init(test_user_copy_init);
> >  lib/test_uuid.c:module_init(test_uuid_init);
> >  lib/test_vmalloc.c:module_init(vmalloc_test_init)
> >  lib/test_xarray.c:module_init(xarray_checks);
> > 
> > 
> > > If they do exist, it seems like it would make sense to
> > > 

Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-13 Thread Daniel Vetter
On Sat, May 11, 2019 at 01:33:44PM -0400, Theodore Ts'o wrote:
> On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> > However, the reply is incorrect.  Kselftest in-kernel tests (which
> > is the context here) can be configured as built in instead of as
> > a module, and built in a UML kernel.  The UML kernel can boot,
> > running the in-kernel tests before UML attempts to invoke the
> > init process.
> 
> Um, Citation needed?
> 
> I don't see any evidence for this in the kselftest documentation, nor
> do I see any evidence of this in the kselftest Makefiles.
> 
> There exist test modules in the kernel that run before the init
> scripts run --- but that's not strictly speaking part of kselftests,
> and do not have any kind of infrastructure.  As noted, the
> kselftests_harness header file fundamentally assumes that you are
> running test code in userspace.

Yeah I really like the "no userspace required at all" design of kunit,
while still collecting results in a well-defined way (unlike the current
self-tests that just run when you load the module, with maybe some
kselftest ad-hoc wrapper around to collect the results).

What I want to do long-term is to run these kernel unit tests as part of
the build-testing, most likely in gitlab (sooner or later, for drm.git
only ofc). So that people get their pull requests (and patch series, we
have some ideas to tie this into patchwork) automatically tested for this
super basic stuff.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-10 Thread Frank Rowand
On 5/9/19 3:20 PM, Logan Gunthorpe wrote:
> 
> 
> On 2019-05-09 3:42 p.m., Theodore Ts'o wrote:
>> On Thu, May 09, 2019 at 11:12:12AM -0700, Frank Rowand wrote:
>>>
>>>     "My understanding is that the intent of KUnit is to avoid booting a kernel on
>>>     real hardware or in a virtual machine.  That seems to be a matter of semantics
>>>     to me because isn't invoking a UML Linux just running the Linux kernel in
>>>     a different form of virtualization?
>>>
>>>     So I do not understand why KUnit is an improvement over kselftest.
>>>
>>>     ...
>>> 
>>> What am I missing?"
>> 
>> One major difference: kselftest requires a userspace environment;
>> it starts systemd, requires a root file system from which you can
>> load modules, etc.  Kunit doesn't require a root file system;
>> doesn't require that you start systemd; doesn't allow you to run
>> arbitrary perl, python, bash, etc. scripts.  As such, it's much
>> lighter weight than kselftest, and will have much less overhead
>> before you can start running tests.  So it's not really the same
>> kind of virtualization.

I'm back to reply to this subthread, after a delay, as promised.


> I largely agree with everything Ted has said in this thread, but I
> wonder if we are conflating two different ideas, which is causing an
> impasse. From what I see, Kunit actually provides two different
> things:

> 1) An execution environment that can be run very quickly in userspace
> on tests in the kernel source. This speeds up the tests and gives a
> lot of benefit to developers using those tests because they can get
> feedback on their code changes a *lot* quicker.

kselftest in-kernel tests provide exactly the same when the tests are
configured as "built-in" code instead of as modules.
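
(For concreteness: what Frank is pointing at here is the lib/test_*.c
pattern enumerated elsewhere in this thread via "grep module_init
lib/test_*.c". A minimal sketch of such an in-kernel test might look like
the following; the file name, symbols, and checks are invented for
illustration and are not an existing test in the tree. When such a test is
built in rather than built as a module, its module_init() hook runs during
boot, before init starts, so even a UML kernel can report the results on
its console without any userspace.)

/* Hypothetical lib/test_example.c, in the style of the lib/test_*.c tests. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/printk.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/errno.h>

static unsigned int total_tests, failed_tests;

static void __init check(bool cond, const char *what)
{
        total_tests++;
        if (!cond) {
                failed_tests++;
                pr_err("test_example: FAIL: %s\n", what);
        }
}

static int __init test_example_init(void)
{
        check(1 + 1 == 2, "basic arithmetic");
        check(strlen("kselftest") == 9, "string length");

        if (failed_tests) {
                pr_err("test_example: %u of %u tests failed\n",
                       failed_tests, total_tests);
                return -EINVAL;
        }
        pr_info("test_example: all %u tests passed\n", total_tests);
        return 0;
}

static void __exit test_example_exit(void)
{
}

module_init(test_example_init);
module_exit(test_example_exit);
MODULE_LICENSE("GPL");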


> 2) A framework to write unit tests that provides a lot of the same
> facilities as other common unit testing frameworks from userspace
> (ie. a runner that runs a list of tests and a bunch of helpers such
> as KUNIT_EXPECT_* to simplify test passes and failures).

> The first item from Kunit is novel and I see absolutely no overlap
> with anything kselftest does. It's also the valuable thing I'd like
> to see merged and grow.

The first item exists in kselftest.


> The second item, arguably, does have significant overlap with
> kselftest. Whether you are running short tests in a light weight UML
> environment or higher level tests in an heavier VM the two could be
> using the same framework for writing or defining in-kernel tests. It
> *may* also be valuable for some people to be able to run all the UML
> tests in the heavy VM environment along side other higher level
> tests.
> 
> Looking at the selftests tree in the repo, we already have similar
> items to what Kunit is adding as I described in point (2) above.
> kselftest_harness.h contains macros like EXPECT_* and ASSERT_* with
> very similar intentions to the new KUNIT_EXPECT_* and KUNIT_ASSERT_*
> macros.

I might be wrong here because I have not dug deeply enough into the
code!!!  Does this framework apply to the userspace tests, the
in-kernel tests, or both?  My "not having dug enough GUESS" is that
these are for the user space tests (although if so, they could be
extended for in-kernel use also).

So I think this one maybe does not have an overlap between KUnit
and kselftest.
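
(Ted confirms elsewhere in the thread that the harness assumes userspace:
it makes calls to fork(), abort() and fprintf(). To make that concrete, a
test written against tools/testing/selftests/kselftest_harness.h looks
roughly like the sketch below. The test body is invented for illustration,
but TEST(), the EXPECT_*/ASSERT_* macros and TEST_HARNESS_MAIN are what the
harness provides, and TEST_HARNESS_MAIN expands to an ordinary main().)

/* Hypothetical selftest built as a normal userspace program. */
#include <string.h>

/* Include path assumes a subdirectory of tools/testing/selftests/. */
#include "../kselftest_harness.h"

TEST(string_compare)
{
        /* EXPECT_* records a failure but lets the test case continue. */
        EXPECT_EQ(0, strcmp("kunit", "kunit"));

        /* ASSERT_* aborts the current test case on failure. */
        ASSERT_NE(0, strcmp("kunit", "kselftest"));
}

TEST_HARNESS_MAIN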


> However, the number of users of this harness appears to be quite
> small. Most of the code in the selftests tree seems to be a random
> mishmash of scripts and userspace code so it's not hard to see it as
> something completely different from the new Kunit:
> $ git grep --files-with-matches kselftest_harness.h *
> Documentation/dev-tools/kselftest.rst
> MAINTAINERS
> tools/testing/selftests/kselftest_harness.h
> tools/testing/selftests/net/tls.c
> tools/testing/selftests/rtc/rtctest.c
> tools/testing/selftests/seccomp/Makefile
> tools/testing/selftests/seccomp/seccomp_bpf.c
> tools/testing/selftests/uevent/Makefile
> tools/testing/selftests/uevent/uevent_filtering.c


> Thus, I can personally see a lot of value in integrating the kunit
> test framework with this kselftest harness. There's only a small
> number of users of the kselftest harness today, so one way or another
> it seems like getting this integrated early would be a good idea.
> Letting Kunit and Kselftests progress independently for a few years
> will only make this worse and may become something we end up
> regretting.

Yes, this I agree with.

-Frank

> 
> Logan
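
(For comparison with the userspace harness sketch earlier in this message,
the same kind of check written as an in-kernel KUnit test looks roughly
like the following. The names follow the framework as it was eventually
merged -- struct kunit_suite, KUNIT_CASE(), kunit_test_suite() -- while the
patchset revisions under discussion used slightly different names, and the
test itself is invented for illustration.)

#include <kunit/test.h>
#include <linux/string.h>

/* Hypothetical test case: each case receives a struct kunit context. */
static void example_add_test(struct kunit *test)
{
        /* KUNIT_EXPECT_* reports a failure but keeps the case running. */
        KUNIT_EXPECT_EQ(test, 3, 1 + 2);

        /* KUNIT_ASSERT_* bails out of the case on failure instead. */
        KUNIT_ASSERT_NE(test, 0, strcmp("kunit", "kselftest"));
}

static struct kunit_case example_test_cases[] = {
        KUNIT_CASE(example_add_test),
        {}
};

static struct kunit_suite example_test_suite = {
        .name = "example",
        .test_cases = example_test_cases,
};
kunit_test_suite(example_test_suite);

(Run under UML, for instance via the Python wrapper added later in the
series, the suite reports per-case pass/fail without booting into
userspace.)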


Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-10 Thread Brendan Higgins
> On Thu, May 2, 2019 at 8:02 AM Brendan Higgins
>  wrote:
> >
> > ## TLDR
> >
> > I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
> > 5.2.
> >
> > Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
> > we would merge through your tree when the time came? Am I remembering
> > correctly?
> >
> > ## Background
> >
> > This patch set proposes KUnit, a lightweight unit testing and mocking
> > framework for the Linux kernel.
> >
> > Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> > it does not require installing the kernel on a test machine or in a VM
> > and does not require tests to be written in userspace running on a host
> > kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> > can run several dozen tests in under a second. Currently, the entire
> > KUnit test suite for KUnit runs in under a second from the initial
> > invocation (build time excluded).
> >
> > KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> > Googletest/Googlemock for C++. KUnit provides facilities for defining
> > unit test cases, grouping related test cases into test suites, providing
> > common infrastructure for running tests, mocking, spying, and much more.
> >
> > ## What's so special about unit testing?
> >
> > A unit test is supposed to test a single unit of code in isolation,
> > hence the name. There should be no dependencies outside the control of
> > the test; this means no external dependencies, which makes tests orders
> > of magnitudes faster. Likewise, since there are no external dependencies,
> > there are no hoops to jump through to run the tests. Additionally, this
> > makes unit tests deterministic: a failing unit test always indicates a
> > problem. Finally, because unit tests necessarily have finer granularity,
> > they are able to test all code paths easily solving the classic problem
> > of difficulty in exercising error handling code.
> >
> > ## Is KUnit trying to replace other testing frameworks for the kernel?
> >
> > No. Most existing tests for the Linux kernel are end-to-end tests, which
> > have their place. A well tested system has lots of unit tests, a
> > reasonable number of integration tests, and some end-to-end tests. KUnit
> > is just trying to address the unit test space which is currently not
> > being addressed.
> >
> > ## More information on KUnit
> >
> > There is a bunch of documentation near the end of this patch set that
> > describes how to use KUnit and best practices for writing unit tests.
> > For convenience I am hosting the compiled docs here:
> > https://google.github.io/kunit-docs/third_party/kernel/docs/
> > Additionally for convenience, I have applied these patches to a branch:
> > https://kunit.googlesource.com/linux/+/kunit/rfc/v5.1-rc7/v1
> > The repo may be cloned with:
> > git clone https://kunit.googlesource.com/linux
> > This patchset is on the kunit/rfc/v5.1-rc7/v1 branch.
> >
> > ## Changes Since Last Version
> >
> > None. I just rebased the last patchset on v5.1-rc7.
> >
> > --
> > 2.21.0.593.g511ec345e18-goog
> >
>
> The following is the log of 'git am' of this series.
> I see several 'new blank line at EOF' warnings.
>
>
>
> masahiro@pug:~/workspace/bsp/linux$ git am ~/Downloads/*.patch
> Applying: kunit: test: add KUnit test runner core
> Applying: kunit: test: add test resource management API
> Applying: kunit: test: add string_stream a std::stream like string builder
> .git/rebase-apply/patch:223: new blank line at EOF.
> +
> warning: 1 line adds whitespace errors.
> Applying: kunit: test: add kunit_stream a std::stream like logger
> Applying: kunit: test: add the concept of expectations
> .git/rebase-apply/patch:475: new blank line at EOF.
> +
> warning: 1 line adds whitespace errors.
> Applying: kbuild: enable building KUnit
> Applying: kunit: test: add initial tests
> .git/rebase-apply/patch:203: new blank line at EOF.
> +
> warning: 1 line adds whitespace errors.
> Applying: kunit: test: add support for test abort
> .git/rebase-apply/patch:453: new blank line at EOF.
> +
> warning: 1 line adds whitespace errors.
> Applying: kunit: test: add tests for kunit test abort
> Applying: kunit: test: add the concept of assertions
> .git/rebase-apply/patch:518: new blank line at EOF.
> +
> warning: 1 line adds whitespace errors.
> Applying: kunit: test: add test managed resource tests
> Applying: kunit: tool: add Python wrappers for running KUnit tests
> .gi

Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-10 Thread Daniel Vetter
On Fri, May 10, 2019 at 7:49 AM Knut Omang  wrote:
>
> On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> > On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> > >
> > >
> > > On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> > >> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> > >>>
> > >>> The second item, arguably, does have significant overlap with kselftest.
> > >>> Whether you are running short tests in a light weight UML environment or
> > >>> higher level tests in a heavier VM, the two could be using the same
> > >>> framework for writing or defining in-kernel tests. It *may* also be valuable
> > >>> for some people to be able to run all the UML tests in the heavy VM
> > >>> environment along side other higher level tests.
> > >>>
> > >>> Looking at the selftests tree in the repo, we already have similar items to
> > >>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> > >>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> > >>> the new KUNIT_EXPECT_* and KUNIT_ASSERT_* macros.
> > >>>
> > >>> However, the number of users of this harness appears to be quite small. Most
> > >>> of the code in the selftests tree seems to be a random mishmash of scripts
> > >>> and userspace code so it's not hard to see it as something completely
> > >>> different from the new Kunit:
> > >>>
> > >>> $ git grep --files-with-matches kselftest_harness.h *
> > >>
> > >> To the extent that we can unify how tests are written, I agree that
> > >> this would be a good thing.  However, you should note that
> > >> kselftest_harness.h currently assumes that it will be included in
> > >> userspace programs.  This is most obviously seen if you look closely
> > >> at the functions defined in the header files which make calls to
> > >> fork(), abort() and fprintf().
> > >
> > > Ah, yes. I obviously did not dig deep enough. Using kunit for
> > > in-kernel tests and kselftest_harness for userspace tests seems like
> > > a sensible line to draw to me. Trying to unify kernel and userspace
> > > here sounds like it could be difficult so it's probably not worth
> > > forcing the issue unless someone wants to do some really fancy work
> > > to get it done.
> > >
> > > Based on some of the other commenters, I was under the impression
> > > that kselftests had in-kernel tests but I'm not sure where or if they
> > > exist.
> >
> > YES, kselftest has in-kernel tests.  (Excuse the shouting...)
> >
> > Here is a likely list of them in the kernel source tree:
> >
> > $ grep module_init lib/test_*.c
> > lib/test_bitfield.c:module_init(test_bitfields)
> > lib/test_bitmap.c:module_init(test_bitmap_init);
> > lib/test_bpf.c:module_init(test_bpf_init);
> > lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> > lib/test_firmware.c:module_init(test_firmware_init);
> > lib/test_hash.c:module_init(test_hash_init);  /* Does everything */
> > lib/test_hexdump.c:module_init(test_hexdump_init);
> > lib/test_ida.c:module_init(ida_checks);
> > lib/test_kasan.c:module_init(kmalloc_tests_init);
> > lib/test_list_sort.c:module_init(list_sort_test);
> > lib/test_memcat_p.c:module_init(test_memcat_p_init);
> > lib/test_module.c:static int __init test_module_init(void)
> > lib/test_module.c:module_init(test_module_init);
> > lib/test_objagg.c:module_init(test_objagg_init);
> > lib/test_overflow.c:static int __init test_module_init(void)
> > lib/test_overflow.c:module_init(test_module_init);
> > lib/test_parman.c:module_init(test_parman_init);
> > lib/test_printf.c:module_init(test_printf_init);
> > lib/test_rhashtable.c:module_init(test_rht_init);
> > lib/test_siphash.c:module_init(siphash_test_init);
> > lib/test_sort.c:module_init(test_sort_init);
> > lib/test_stackinit.c:module_init(test_stackinit_init);
> > lib/test_static_key_base.c:module_init(test_static_key_base_init);
> > lib/test_static_keys.c:module_init(test_static_key_init);
> > lib/test_string.c:module_init(string_selftest_init);
> > lib/test_ubsan.c:module_init(test_ubsan_init);
> > lib/test_user_copy.c:module_init(test_user_copy_init);
> > lib/test_uuid.c:module_init(test_uuid_init);
> > lib/test_vmalloc.c:module_init(vmalloc_test_init)
> > lib/test_xarray.c:module_init(xarray_checks);
> >
> >
> > > If they do exist, it seems like it would make sense to
> > > convert those to kunit and have Kunit tests run-able in a VM or
> > > baremetal instance.
> >
> > They already run in a VM.
> >
> > They already run on bare metal.
> >
> > They already run in UML.
> >
> > This is not to say that KUnit does not make sense.  But I'm still trying
> > to get a better description of the KUnit features (and there are
> > some).
>
> FYI, I have a master's student who is looking at converting some of these to
> KTF, such as, for instance, the XArray tests, which lent themselves quite
> well to a semi-automated conversion.
>
> The result is also somewhat more compact code as well 

Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-09 Thread Daniel Vetter
On Thu, May 9, 2019 at 7:00 PM  wrote:
> > -Original Message-
> > From: Theodore Ts'o
> >
> > On Thu, May 09, 2019 at 01:52:15PM +0200, Knut Omang wrote:
> > > 1) Tests that exercise typically algorithmic or intricate, complex
> > >code with relatively few outside dependencies, or where the dependencies
> > >are considered worth mocking, such as the basics of container data
> > >structures or page table code. If I get you right, Ted, the tests
> > >you refer to in this thread are such tests. I believe covering this space
> > >is the goal Brendan has in mind for KUnit.
> >
> > Yes, that's correct.  I'd also add that one of the key differences is
> > that it sounds like Frank and you are coming from the perspective of
> > testing *device drivers* where in general there isn't a lot of
> > complex code which is hardware independent.
>
> Ummm.  Not to speak for Frank, but he's representing the device tree
> layer, which I'd argue sits exactly at the intersection of testing device drivers
> AND lots of complex code which is hardware independent.  So maybe his
> case is special.

Jumping in with a pure device driver hat: We already have ad-hoc unit
tests in drivers/gpu, which somewhat shoddily integrate into
kselftests and our own gpu test suite from userspace. We'd like to do
a lot more in this area (there's enormous amounts of code in a gpu
driver that's worth testing on its own, or against a mocked model of a
part of the real hw), and I think a unit test framework for the entire
kernel would be great. Plus gpu/drm isn't the only subsystem by far
that already has a home-grown solution. So it's actually worse than
what Ted said: We don't just have a multitude of test frameworks
already, we have a multitude of ad-hoc unit test frameworks, each with
their own way to run tests, write tests and mock parts of the system.
Kunit hopefully helps us to standardize more in this area. I do plan
to look into converting all the drm selftests we have already as soon
as this lands (and as soon as I find some time ...).

Cheers, Daniel

>
> > After all, the vast
> > majority of device drivers are primarily interface code to hardware,
> > with as much as possible abstracted away to common code.  (Take, for
> > example, the model of the SCSI layer; or all of the kobject code.)
> >
> > > 2) Tests that exercise interaction between a module under test and other
> > >parts of the kernel, such as testing intricacies of the interaction of
> > >a driver or file system with the rest of the kernel, and with hardware,
> > >whether that is real hardware or a model/emulation.
> > >Using your testing needs as example again, Ted, from my shallow understanding,
> > >you have such needs within the context of xfstests (https://github.com/tytso/xfstests)
> >
> > Well, upstream for xfstests is
> > git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
> >
> > The test framework where I can run 20 hours worth of xfstests
> > (multiple file system features enabled, multiple mount options, etc.)
> > in 3 hours of wall clock time using multiple cloud VM is something
> > called gce-xfstests.
> >
> > I also have kvm-xfstests, which optimizes low test latency, where I
> > want to run a one or a small number of tests with a minimum of
> > overhead --- gce startup and shutdown is around 2 minutes, where as
> > kvm startup and shutdown is about 7 seconds.  As far as I'm concerned,
> > 7 seconds is still too slow, but that's the best I've been able to do
> > given all of the other things I want a test framework to do, including
> > archiving test results, parsing the test results so it's easy to
> > interpret, etc.  Both kvm-xfstests and gce-xfstests are located at:
> >
> >   git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
> >
> > So if Frank's primary argument is "too many frameworks", it's already
> > too late.  The block layer has a separate framework,
> > called blktests --- and yeah, it's a bit painful to launch or learn
> > how to set things up.
> >
> > That's why I added support to run blktests using gce-xfstests and
> > kvm-xfstests, so that "gce-xfstests --blktests" or "kvm-xfstests
> > --xfstests" will pluck a kernel from your build tree, and launch a
> > test appliance VM using that kernel and run the block layer tests.
> >
> > The point is we *already* have multiple test frameworks, which are
> > optimized for testing different parts of the kernel.  And if you plan
> > to do a lot of work in these parts of the kernel, you're going to have
> > to learn how to use some other test framework other than kselftest.
> > Sorry, that's just the way it goes.
> >
> > Of course, I'll accept trivial patches that haven't been tested using
> > xfstests --- but that's because I can trivially run the smoke test for
> > you.  Of course, if I get a lot of patches from a contributor which
> > cause test regressions, I'll treat them much like someone who
> > 

Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-09 Thread Masahiro Yamada
On Thu, May 2, 2019 at 8:02 AM Brendan Higgins
 wrote:
>
> ## TLDR
>
> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
> 5.2.
>
> Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
> we would merge through your tree when the time came? Am I remembering
> correctly?
>
> ## Background
>
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
>
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> can run several dozen tests in under a second. Currently, the entire
> KUnit test suite for KUnit runs in under a second from the initial
> invocation (build time excluded).
>
> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, providing
> common infrastructure for running tests, mocking, spying, and much more.
>
> ## What's so special about unit testing?
>
> A unit test is supposed to test a single unit of code in isolation,
> hence the name. There should be no dependencies outside the control of
> the test; this means no external dependencies, which makes tests orders
> of magnitude faster. Likewise, since there are no external dependencies,
> there are no hoops to jump through to run the tests. Additionally, this
> makes unit tests deterministic: a failing unit test always indicates a
> problem. Finally, because unit tests necessarily have finer granularity,
> they are able to test all code paths easily solving the classic problem
> of difficulty in exercising error handling code.
>
> ## Is KUnit trying to replace other testing frameworks for the kernel?
>
> No. Most existing tests for the Linux kernel are end-to-end tests, which
> have their place. A well tested system has lots of unit tests, a
> reasonable number of integration tests, and some end-to-end tests. KUnit
> is just trying to address the unit test space which is currently not
> being addressed.
>
> ## More information on KUnit
>
> There is a bunch of documentation near the end of this patch set that
> describes how to use KUnit and best practices for writing unit tests.
> For convenience I am hosting the compiled docs here:
> https://google.github.io/kunit-docs/third_party/kernel/docs/
> Additionally for convenience, I have applied these patches to a branch:
> https://kunit.googlesource.com/linux/+/kunit/rfc/v5.1-rc7/v1
> The repo may be cloned with:
> git clone https://kunit.googlesource.com/linux
> This patchset is on the kunit/rfc/v5.1-rc7/v1 branch.
>
> ## Changes Since Last Version
>
> None. I just rebased the last patchset on v5.1-rc7.
>
> --
> 2.21.0.593.g511ec345e18-goog
>

The following is the log of 'git am' of this series.
I see several 'new blank line at EOF' warnings.



masahiro@pug:~/workspace/bsp/linux$ git am ~/Downloads/*.patch
Applying: kunit: test: add KUnit test runner core
Applying: kunit: test: add test resource management API
Applying: kunit: test: add string_stream a std::stream like string builder
.git/rebase-apply/patch:223: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
Applying: kunit: test: add kunit_stream a std::stream like logger
Applying: kunit: test: add the concept of expectations
.git/rebase-apply/patch:475: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
Applying: kbuild: enable building KUnit
Applying: kunit: test: add initial tests
.git/rebase-apply/patch:203: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
Applying: kunit: test: add support for test abort
.git/rebase-apply/patch:453: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
Applying: kunit: test: add tests for kunit test abort
Applying: kunit: test: add the concept of assertions
.git/rebase-apply/patch:518: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
Applying: kunit: test: add test managed resource tests
Applying: kunit: tool: add Python wrappers for running KUnit tests
.git/rebase-apply/patch:457: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
Applying: kunit: defconfig: add defconfigs for building KUnit tests
Applying: Documentation: kunit: add documentation for KUnit
.git/rebase-apply/patch:71: new blank line at EOF.
+
.git/rebase-apply/patch:209: new blank line at EOF.
+
.git/rebase-apply/patch:848: new blank line at EOF.
+
warning: 3 lines add whitespace errors.
Applying: MAINTAINERS: add entry for KUnit the unit testing framework
Applying: kernel/sysctl-test: Add null pointer test for sysctl.c:proc_dointvec()
Applying: MAINTAINERS: add proc sysctl KUnit test to PROC SYSCTL section






--
Best Regards
Masahiro Yamada


Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-09 Thread Knut Omang
On Wed, 2019-05-08 at 23:20 -0400, Theodore Ts'o wrote:
> On Wed, May 08, 2019 at 07:13:59PM -0700, Frank Rowand wrote:
> > > If you want to use vice grips as a hammer, screwdriver, monkey wrench,
> > > etc.  there's nothing stopping you from doing that.  But it's not fair
> > > to object to other people who might want to use better tools.
> > > 
> > > The reality is that we have a lot of testing tools.  It's not just
> > > kselftests.  There is xfstests for file system code, blktests for
> > > block layer tests, etc.   We use the right tool for the right job.
> > 
> > More specious arguments.
> 
> Well, *I* don't think they are specious; so I think we're going to
> have to agree to disagree.

Looking at both Frank's and Ted's arguments here, I don't think you
really disagree, I just think you have different classes of tests in mind.

In my view it's useful to think in terms of two main categories of 
interesting unit tests for kernel code (using the term "unit test" 
pragmatically):

1) Tests that exercise typically algorithmic or intricate, complex
   code with relatively few outside dependencies, or where the dependencies 
   are considered worth mocking, such as the basics of container data 
   structures or page table code. If I get you right, Ted, the tests 
   you refer to in this thread are such tests. I believe covering this space 
   is the goal Brendan has in mind for KUnit.

2) Tests that exercise interaction between a module under test and other 
   parts of the kernel, such as testing intricacies of the interaction of 
   a driver or file system with the rest of the kernel, and with hardware, 
   whether that is real hardware or a model/emulation. 
   Using your testing needs as example again, Ted, from my shallow understanding,
   you have such needs within the context of xfstests (https://github.com/tytso/xfstests)

To 1) I agree with Frank in that the problem with using UML is that you
still have to relate to the complexity of a kernel run time system, while
what you really want for these types of tests is just to compile a couple
of kernel source files in a normal user land context, to allow the use of
Valgrind and other user space tools on the code. The challenge is to get
the code compiled in such an environment as it usually relies on subtle
kernel macros and definitions, which is why UML seems like such an
attractive solution. Like Frank I really see no big difference from a
testing and debugging perspective of UML versus running inside a Qemu/KVM
process, and I think I have an idea for a better solution:

In the early phases of the SIF project, which I mention below, I did a lot
of experimentation around this. My biggest challenge then was to test the
driver implementation of the page table handling of an Intel page table
compatible on-device MMU, using a mix of page sizes, but with a few subtle
limitations in the hardware. With some effort on code generation and heavy
automated use of compiler feedback, I was able to do that to great
satisfaction, as it probably saved the project a lot of time in debugging,
and myself a lot of pain :)

To 2) most of the current xfstests (if not all?) are user space tests that
do not use extra test specific kernel code, or test specific changes to the
modules under test (am I right, Ted?) and I believe that's just as it
should be: if something can be exercised well enough from user space, then
that's the easier approach.

However sometimes the test cannot be made easily without interacting
directly with internal kernel interfaces, or having such interaction would
greatly simplify or increase the precision of the test. This need was the
initial motivation for us to make KTF (https://github.com/oracle/ktf,
http://heim.ifi.uio.no/~knuto/ktf/index.html) which we are working on to
adapt to fit naturally and in the right way as a kernel patch set.

We developed the SIF infiniband HCA driver
(https://github.com/oracle/linux-uek/tree/uek4/qu7/drivers/infiniband/hw/sif)
and associated user level libraries in what I like to call a "pragmatically
test driven" way. At the end of the project we had quite a few unit tests,
but only a small fraction of them were KTF tests; most of the testing needs
were covered by user land unit tests and higher level application testing.

To you Frank, and your concern about having to learn yet another tool with
its own set of syntax, I completely agree with you. We definitely would
want to minimize the need to learn new ways, which is why I think it is
important to see the whole complex of unit testing together, and at least
make sure it works in a unified and efficient way, both syntactically and
operationally.

With KTF we focus on trying to make kernel testing as similar to and
integrated with user space tests as possible, using similar test macros,
and also to not reinvent more wheels than necessary by basing reporting and
test execution on existing user land tools.
KTF 

Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-08 Thread Frank Rowand
On 5/8/19 6:44 PM, Theodore Ts'o wrote:
> On Wed, May 08, 2019 at 05:58:49PM -0700, Frank Rowand wrote:
>>
>> If KUnit is added to the kernel, and a subsystem that I am submitting
>> code for has chosen to use KUnit instead of kselftest, then yes, I do
>> *have* to use KUnit if my submission needs to contain a test for the
>> code unless I want to convince the maintainer that somehow my case
>> is special and I prefer to use kselftest instead of KUnittest.
> 
> That's going to be between you and the maintainer.  Today, if you want
> to submit a substantive change to xfs or ext4, you're going to be
> asked to create test for that new feature using xfstests.  It doesn't
> matter that xfstests isn't in the kernel --- if that's what is
> required by the maintainer.

Yes, that is exactly what I was saying.

Please do not cut the pertinent parts of context that I am replying to.


>>> supposed to be a simple way to run a large number of small tests
>>> for specific small components in a system.
>>
>> kselftest also supports running a subset of tests.  That subset of tests
>> can also be a large number of small tests.  There is nothing inherent
>> in KUnit vs kselftest in this regard, as far as I am aware.


> The big difference is that kselftests are driven by a C program that
> runs in userspace.  Take a look at 
> tools/testing/selftests/filesystem/dnotify_test.c
> it has a main(int argc, char *argv) function.
> 
> In contrast, KUnit are fragments of C code which run in the kernel;
> not in userspace.  This allows us to test internal functions inside
> complex file system (such as the block allocator in ext4) directly.
> This makes it *fundamentally* different from kselftest.

No, totally incorrect.  kselftest also supports in-kernel modules, as
I mention in another reply to this patch.
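
For readers following along, the usual kselftest pattern for that is a small
in-kernel test module plus a shell wrapper under tools/testing/selftests/
that loads the module and checks the result in the kernel log. A minimal
sketch (purely illustrative, not copied from any existing test):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/printk.h>

/*
 * Illustrative in-kernel test module in the kselftest style: the checks run
 * at module load time and report via the kernel log; a shell script under
 * tools/testing/selftests/ insmods the module and greps dmesg for the result.
 */
static int __init foo_selftest_init(void)
{
	int failures = 0;

	if (1 + 1 != 2)		/* stand-in for a real check of kernel-internal state */
		failures++;

	pr_info("foo_selftest: %s\n", failures ? "FAIL" : "PASS");
	return 0;
}

static void __exit foo_selftest_exit(void)
{
}

module_init(foo_selftest_init);
module_exit(foo_selftest_exit);
MODULE_LICENSE("GPL");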

This is talking past each other a little bit, because your next reply
is a reply to my email about modules.

-Frank

> 
> Cheers,
> 
>   - Ted
> 



Re: [PATCH v2 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-05-08 Thread Brendan Higgins
> On Tue, May 07, 2019 at 10:01:19AM +0200, Greg KH wrote:
> > > My understanding is that the intent of KUnit is to avoid booting a kernel 
> > > on
> > > real hardware or in a virtual machine.  That seems to be a matter of 
> > > semantics
> > > to me because isn't invoking a UML Linux just running the Linux kernel in
> > > a different form of virtualization?
> > >
> > > So I do not understand why KUnit is an improvement over kselftest.
> > >
> > > It seems to me that KUnit is just another piece of infrastructure that I
> > > am going to have to be familiar with as a kernel developer.  More 
> > > overhead,
> > > more information to stuff into my tiny little brain.
> > >
> > > I would guess that some developers will focus on just one of the two test
> > > environments (and some will focus on both), splitting the development
> > > resources instead of pooling them on a common infrastructure.
> > >
> > > What am I missing?
> >
> > kselftest provides no in-kernel framework for testing kernel code
> > specifically.  That should be what kunit provides, an "easy" way to
> > write in-kernel tests for things.
> >
> > Brendan, did I get it right?
>
> Yes, that's basically right.  You don't *have* to use KUnit.  It's
> supposed to be a simple way to run a large number of small tests for
> specific small components in a system.
>
> For example, I currently use xfstests using KVM and GCE to test all of
> ext4.  These tests require using multiple 5 GB and 20GB virtual disks,
> and it works by mounting ext4 file systems and exercising ext4 through
> the system call interfaces, using userspace tools such as fsstress,
> fsx, fio, etc.  It requires time overhead to start the VM, create and
> allocate virtual disks, etc.  For example, to run a single 3 seconds
> xfstest (generic/001), it requires a full 10 seconds to run it via
> kvm-xfstests.
>
> KUnit is something else; it's specifically intended to allow you to
> create lightweight tests quickly and easily, and by reducing the
> effort needed to write and run unit tests, hopefully we'll have a lot
> more of them and thus improve kernel quality.
>
> As an example, I have a volunteer working on developing KUnit tests
> for ext4.  We're going to start by testing the ext4 extent status
> tree.  The source code is at fs/ext4/extent_status.c; it's
> approximately 1800 LOC.  The KUnit tests for the extent status tree
> will exercise all of the corner cases for the various extent status
> tree functions --- e.g., ext4_es_insert_delayed_block(),
> ext4_es_remove_extent(), ext4_es_cache_extent(), etc.  And it will do
> this in isolation without our needing to create a test file system or
> using a test block device.
>
> Next we'll test the ext4 block allocator, again in isolation.  To test
> the block allocator we will have to write "mock functions" which
> simulate reading allocation bitmaps from disk.  Again, this will allow
> the test writer to explicitly construct corner cases and validate that
> the block allocator works as expected without having to reverse
> engineer file system data structures which will force a particular
> code path to be executed.
>
> So this is why it's largely irrelevant to me that KUnit uses UML.  In
> fact, it's a feature.  We're not testing device drivers, or the
> scheduler, or anything else architecture-specific.  UML is not about
> virtualization.  What it's about in this context is allowing us to
> start running test code as quickly as possible.  Booting KVM takes
> about 3-4 seconds, and this includes initializing virtio_scsi and
> other device drivers.  If by using UML we can hold the amount of
> unnecessary kernel subsystem initialization down to the absolute
> minimum, and if it means that we can communicate to the test
> framework via a userspace "printf" from UML/KUnit code, as opposed to
> via a virtual serial port to KVM's virtual console, it all makes for
> lighter weight testing.
>
> Why did I go looking for a volunteer to write KUnit tests for ext4?
> Well, I have a plan to make some changes in restructing how ext4's
> write path works, in order to support things like copy-on-write, a
> more efficient delayed allocation system, etc.  This will require
> making changes to the extent status tree, and by having unit tests for
> the extent status tree, we'll be able to detect any bugs that we might
> accidentally introduce in the es tree far more quickly than if we
> didn't have those tests available.  Google has long found that having
> these sorts of unit tests is a real win for developer velocity for any
> non-trivial code module (or C++ class), even when you take into
> account the time it takes to create the unit tests.
>
> - Ted
>
> P.S.  Many thanks to Brendan for finding such a volunteer for me; the
> person in question is a SRE from Switzerland who is interested in
> getting involved with kernel testing, and this is going to be their
> 20% project.  :-)
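
For readers wondering what the "mock functions" Ted mentions would look like,
the general shape is to route the I/O that the unit under test performs
through an ops structure so the test can substitute an in-memory fake. The
sketch below is purely hypothetical: the struct and function names are made
up for illustration and are not the real ext4 interfaces, and the macro names
follow the KUnit API as it stands in recent revisions of the patch set.

#include <kunit/test.h>

/* Hypothetical: the code under test reads allocation bitmaps through this */
struct bitmap_io_ops {
	int (*read_bitmap)(void *ctx, unsigned long group, unsigned long *bitmap);
};

/* Test-provided fake that "reads" a canned bitmap from memory, not disk */
static int fake_read_bitmap(void *ctx, unsigned long group, unsigned long *bitmap)
{
	*bitmap = *(unsigned long *)ctx;
	return 0;
}

static void block_alloc_sees_free_bit(struct kunit *test)
{
	/* bit 0 clear (free), all other bits set (in use) */
	unsigned long canned = ~1UL;
	struct bitmap_io_ops ops = { .read_bitmap = fake_read_bitmap };
	unsigned long bitmap;

	KUNIT_ASSERT_EQ(test, ops.read_bitmap(&canned, 0, &bitmap), 0);
	/*
	 * A real test would hand &ops to the allocator and check its choice;
	 * here we only show that the fake injects the expected state.
	 */
	KUNIT_EXPECT_EQ(test, bitmap & 1UL, 0UL);
}

static struct kunit_case block_alloc_test_cases[] = {
	KUNIT_CASE(block_alloc_sees_free_bit),
	{}
};

static struct kunit_suite block_alloc_test_suite = {
	.name = "block-alloc-example",
	.test_cases = block_alloc_test_cases,
};
kunit_test_suite(block_alloc_test_suite);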

Thanks Ted, I really 

Re: [PATCH v2 15/17] MAINTAINERS: add entry for KUnit the unit testing framework

2019-05-06 Thread Brendan Higgins
On Fri, May 3, 2019 at 7:38 AM shuah  wrote:
>
> On 5/1/19 5:01 PM, Brendan Higgins wrote:
> > Add myself as maintainer of KUnit, the Linux kernel's unit testing
> > framework.
> >
> > Signed-off-by: Brendan Higgins 
> > ---
> >   MAINTAINERS | 10 ++
> >   1 file changed, 10 insertions(+)
> >
> > diff --git a/MAINTAINERS b/MAINTAINERS
> > index 5c38f21aee787..c78ae95c56b80 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -8448,6 +8448,16 @@ S: Maintained
> >   F:  tools/testing/selftests/
> >   F:  Documentation/dev-tools/kselftest*
> >
> > +KERNEL UNIT TESTING FRAMEWORK (KUnit)
> > +M:   Brendan Higgins 
> > +L:   kunit-...@googlegroups.com
> > +W:   https://google.github.io/kunit-docs/third_party/kernel/docs/
> > +S:   Maintained
> > +F:   Documentation/kunit/
> > +F:   include/kunit/
> > +F:   kunit/
> > +F:   tools/testing/kunit/
> > +
>
> Please add kselftest mailing list to this entry, based on our
> conversation on taking these patches through kselftest tree.

Will do.

Thanks!


Re: [PATCH v2 15/17] MAINTAINERS: add entry for KUnit the unit testing framework

2019-05-03 Thread shuah

On 5/1/19 5:01 PM, Brendan Higgins wrote:

Add myself as maintainer of KUnit, the Linux kernel's unit testing
framework.

Signed-off-by: Brendan Higgins 
---
  MAINTAINERS | 10 ++
  1 file changed, 10 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 5c38f21aee787..c78ae95c56b80 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8448,6 +8448,16 @@ S:   Maintained
  F:tools/testing/selftests/
  F:Documentation/dev-tools/kselftest*
  
+KERNEL UNIT TESTING FRAMEWORK (KUnit)

+M: Brendan Higgins 
+L: kunit-...@googlegroups.com
+W: https://google.github.io/kunit-docs/third_party/kernel/docs/
+S: Maintained
+F: Documentation/kunit/
+F: include/kunit/
+F: kunit/
+F: tools/testing/kunit/
+


Please add kselftest mailing list to this entry, based on our
conversation on taking these patches through kselftest tree.

thanks,
-- Shuah


Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-03-25 Thread Brendan Higgins
On Thu, Mar 21, 2019 at 6:23 PM Frank Rowand  wrote:
>
> On 3/4/19 3:01 PM, Brendan Higgins wrote:
> > On Thu, Feb 14, 2019 at 1:38 PM Brendan Higgins
< snip >
> > Someone suggested I should send the next revision out as "PATCH"
> > instead of "RFC" since there seems to be general consensus about
> > everything at a high level, with a couple exceptions.
> >
> > At this time I am planning on sending the next revision out as "[PATCH
> > v1 00/NN] kunit: introduce KUnit, the Linux kernel unit testing
> > framework". Initially I wasn't sure if the next revision should be
> > "[PATCH v1 ...]" or "[PATCH v5 ...]". Please let me know if you have a
> > strong objection to the former.
> >
> > In the next revision, I will be dropping the last two of three patches
> > for the DT unit tests as there doesn't seem to be enough features
> > currently available to justify the heavy refactoring I did; however, I
>
> Thank you.
>
>
> > will still include the patch that just converts everything over to
> > KUnit without restructuring the test cases:
> > https://lkml.org/lkml/2019/2/14/1133
>
> The link doesn't work for me (don't worry about that), so I'm assuming
> this is:
>
>[RFC v4 15/17] of: unittest: migrate tests to run on KUnit

That's correct.

>
> The conversation on that patch ended after:
>
>>> After adding patch 15, there are a lot of "unittest internal error" 
>>> messages.
>>
>> Yeah, I meant to ask you about that. I thought it was due to a change
>> you made, but after further examination, just now, I found it was my
>> fault. Sorry for not mentioning that anywhere. I will fix it in v5.
>
> It is not worth my time to look at patch 15 when it is that broken.  So I
> have not done any review of it.

Right, I didn't expect you to; we were still discussing things on RFC
v3 at the time. I think I got your comments on v3 in a very short time
frame around sending out v4, which is why your comments were not
addressed.

>
> So no, I think you are still in the RFC stage unless you drop patch 15.

Noted. I might split that out into a separate RFC then.

>
> >
> > I should have the next revision out in a week or so.
> >
>

Cheers!


Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-03-21 Thread Logan Gunthorpe



On 2019-03-20 11:23 p.m., Knut Omang wrote:
> Testing drivers, hardware and firmware within production kernels was the use
> case that inspired KTF (Kernel Test Framework). Currently KTF is available
> as a standalone git repository. That's been the most efficient form for us so far,
> as we typically want tests to be developed once but deployed on many different
> kernel versions simultaneously, as part of continuous integration.

Interesting. It seems like it's really in direct competition with KUnit.
I didn't really go into it in too much detail but these are my thoughts:

From a developer perspective I think KTF not being in the kernel tree is
a huge negative. I want minimal effort to include my tests in a patch
series and minimal effort for other developers to be able to use them.
Needing to submit these tests to another project or use another project
to run them is too much friction.

Also I think the goal of having tests that run on any kernel version is
a pipe dream. You'd absolutely need a way to encode which kernel
versions a test is expected to pass on because the tests might not make
sense until a feature is finished in upstream. And this makes it even
harder to develop these tests because, when we write them, we might not
even know which kernel version the feature will be added to. Similarly,
if a feature is removed or substantially changed, someone will have to
do a patch to disable the test for subsequent kernel versions and create
a new test for changed features. So, IMO, tests absolutely have to be
part of the kernel tree so they can be changed with the respective
features they test.

KUnit's ability to run without having to build and run the entire kernel
is also a huge plus (assuming there's a way to get around the build
dependency issues). Because of this, it can be very quick to run these
tests, which makes development a *lot* easier since you don't have to
reboot a machine every time you want to test a fix.

Logan


Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-03-04 Thread Brendan Higgins
On Thu, Feb 14, 2019 at 1:38 PM Brendan Higgins
 wrote:
>
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
>



> ## More information on KUnit
>
> There is a bunch of documentation near the end of this patch set that
> describes how to use KUnit and best practices for writing unit tests.
> For convenience I am hosting the compiled docs here:
> https://google.github.io/kunit-docs/third_party/kernel/docs/
> Additionally for convenience, I have applied these patches to a branch:
> https://kunit.googlesource.com/linux/+/kunit/rfc/5.0-rc5/v4
> The repo may be cloned with:
> git clone https://kunit.googlesource.com/linux
> This patchset is on the kunit/rfc/5.0-rc5/v4 branch.
>
> ## Changes Since Last Version
>
>  - Got KUnit working on (hypothetically) all architectures (tested on
>x86), as per Rob's (and other's) request
>  - Punting all KUnit features/patches depending on UML for now.
>  - Broke out UML specific support into arch/um/* as per "[RFC v3 01/19]
>kunit: test: add KUnit test runner core", as requested by Luis.
>  - Added support to kunit_tool to allow it to build kernels in external
>directories, as suggested by Kieran.
>  - Added a UML defconfig, and a config fragment for KUnit as suggested
>by Kieran and Luis.
>  - Cleaned up, and reformatted a bunch of stuff.
>
> --
> 2.21.0.rc0.258.g878e2cd30e-goog
>

Someone suggested I should send the next revision out as "PATCH"
instead of "RFC" since there seems to be general consensus about
everything at a high level, with a couple exceptions.

At this time I am planning on sending the next revision out as "[PATCH
v1 00/NN] kunit: introduce KUnit, the Linux kernel unit testing
framework". Initially I wasn't sure if the next revision should be
"[PATCH v1 ...]" or "[PATCH v5 ...]". Please let me know if you have a
strong objection to the former.

In the next revision, I will be dropping the last two of three patches
for the DT unit tests as there doesn't seem to be enough features
currently available to justify the heavy refactoring I did; however, I
will still include the patch that just converts everything over to
KUnit without restructuring the test cases:
https://lkml.org/lkml/2019/2/14/1133

I should have the next revision out in a week or so.


Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-02-27 Thread Brendan Higgins
On Fri, Feb 22, 2019 at 12:53 PM Thiago Jung Bauermann
 wrote:
>
>
> Frank Rowand  writes:
>
> > On 2/19/19 10:34 PM, Brendan Higgins wrote:
> >> On Mon, Feb 18, 2019 at 12:02 PM Frank Rowand  
> >> wrote:
> >> 
> >>> I have not read through the patches in any detail.  I have read some of
> >>> the code to try to understand the patches to the devicetree unit tests.
> >>> So that may limit how valid my comments below are.
> >>
> >> No problem.
> >>
> >>>
> >>> I found the code difficult to read in places where it should have been
> >>> much simpler to read.  Structuring the code in a pseudo object oriented
> >>> style meant that everywhere in a code path that I encountered a dynamic
> >>> function call, I had to go find where that dynamic function call was
> >>> initialized (and being the cautious person that I am, verify that
> >>> no where else was the value of that dynamic function call).  With
> >>> primitive vi and tags, that search would have instead just been a
> >>> simple key press (or at worst a few keys) if hard coded function
> >>> calls were done instead of dynamic function calls.  In the code paths
> >>> that I looked at, I did not see any case of a dynamic function being
> >>> anything other than the value it was originally initialized as.
> >>> There may be such cases, I did not read the entire patch set.  There
> >>> may also be cases envisioned in the architects mind of how this
> >>> flexibility may be of future value.  Dunno.
> >>
> >> Yeah, a lot of it is intended to make architecture specific
> >> implementations and some other future work easier. Some of it is also
> >> for testing purposes. Admittedly some is for neither reason, but given
> >> the heavy usage elsewhere, I figured there was no harm since it was
> >> all private internal usage anyway.
> >>
> >
> > Increasing the cost for me (and all the other potential code readers)
> > to read the code is harm.
>
> Dynamic function calls aren't necessary for arch-specific
> implementations either. See for example arch_kexec_image_load() in
> kernel/kexec_file.c, which uses a weak symbol that is overridden by
> arch-specific code. Not everybody likes weak symbols, so another
> alternative (which admittedly not everybody likes either) is to use a
> macro with the name of the arch-specific function, as used by
> arch_kexec_post_alloc_pages() in  for instance.

I personally have a strong preference for dynamic function calls over
weak symbols or macros, but I can change it if it really makes
anyone's eyes bleed.


Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-02-27 Thread Brendan Higgins
On Tue, Feb 19, 2019 at 10:46 PM Frank Rowand  wrote:
>
> On 2/19/19 10:34 PM, Brendan Higgins wrote:
> > On Mon, Feb 18, 2019 at 12:02 PM Frank Rowand  
> > wrote:
> > 
> >> I have not read through the patches in any detail.  I have read some of
> >> the code to try to understand the patches to the devicetree unit tests.
> >> So that may limit how valid my comments below are.
> >
> > No problem.
> >
> >>
> >> I found the code difficult to read in places where it should have been
> >> much simpler to read.  Structuring the code in a pseudo object oriented
> >> style meant that everywhere in a code path that I encountered a dynamic
> >> function call, I had to go find where that dynamic function call was
> >> initialized (and being the cautious person that I am, verify that
> >> no where else was the value of that dynamic function call).  With
> >> primitive vi and tags, that search would have instead just been a
> >> simple key press (or at worst a few keys) if hard coded function
> >> calls were done instead of dynamic function calls.  In the code paths
> >> that I looked at, I did not see any case of a dynamic function being
> >> anything other than the value it was originally initialized as.
> >> There may be such cases, I did not read the entire patch set.  There
> >> may also be cases envisioned in the architects mind of how this
> >> flexibility may be of future value.  Dunno.
> >
> > Yeah, a lot of it is intended to make architecture specific
> > implementations and some other future work easier. Some of it is also
> > for testing purposes. Admittedly some is for neither reason, but given
> > the heavy usage elsewhere, I figured there was no harm since it was
> > all private internal usage anyway.
> >
>
> Increasing the cost for me (and all the other potential code readers)
> to read the code is harm.

You are right. I like the object oriented C style; I didn't think it
hurt readability.

In any case, I will go through and replace instances where I am not
using it for one of the above reasons.


Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-02-22 Thread Thiago Jung Bauermann


Frank Rowand  writes:

> On 2/19/19 10:34 PM, Brendan Higgins wrote:
>> On Mon, Feb 18, 2019 at 12:02 PM Frank Rowand  wrote:
>> 
>>> I have not read through the patches in any detail.  I have read some of
>>> the code to try to understand the patches to the devicetree unit tests.
>>> So that may limit how valid my comments below are.
>>
>> No problem.
>>
>>>
>>> I found the code difficult to read in places where it should have been
>>> much simpler to read.  Structuring the code in a pseudo object oriented
>>> style meant that everywhere in a code path that I encountered a dynamic
>>> function call, I had to go find where that dynamic function call was
>>> initialized (and being the cautious person that I am, verify that
>>> no where else was the value of that dynamic function call).  With
>>> primitive vi and tags, that search would have instead just been a
>>> simple key press (or at worst a few keys) if hard coded function
>>> calls were done instead of dynamic function calls.  In the code paths
>>> that I looked at, I did not see any case of a dynamic function being
>>> anything other than the value it was originally initialized as.
>>> There may be such cases, I did not read the entire patch set.  There
>>> may also be cases envisioned in the architects mind of how this
>>> flexibility may be of future value.  Dunno.
>>
>> Yeah, a lot of it is intended to make architecture specific
>> implementations and some other future work easier. Some of it is also
>> for testing purposes. Admittedly some is for neither reason, but given
>> the heavy usage elsewhere, I figured there was no harm since it was
>> all private internal usage anyway.
>>
>
> Increasing the cost for me (and all the other potential code readers)
> to read the code is harm.

Dynamic function calls aren't necessary for arch-specific
implementations either. See for example arch_kexec_image_load() in
kernel/kexec_file.c, which uses a weak symbol that is overriden by
arch-specific code. Not everybody likes weak symbols, so another
alternative (which admitedly not everybody likes either) is to use a
macro with the name of the arch-specific function, as used by
arch_kexec_post_alloc_pages() in  for instance.
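
For illustration, the weak symbol pattern looks roughly like this (a
simplified sketch with made-up names spread over two files, not the actual
kexec code):

#include <linux/compiler.h>
#include <linux/printk.h>

/* common code, e.g. kernel/foo.c: default used unless an arch overrides it */
int __weak arch_foo_prepare(void)
{
	return 0;
}

void foo_run(void)
{
	/*
	 * Callers always go through the same symbol; the linker picks the
	 * strong arch-specific definition when one exists.
	 */
	if (arch_foo_prepare())
		pr_warn("foo: arch preparation failed\n");
}

/* arch code, e.g. arch/x86/kernel/foo.c: a strong definition replaces the
 * weak default above */
int arch_foo_prepare(void)
{
	/* architecture-specific setup would go here */
	return 0;
}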

--
Thiago Jung Bauermann
IBM Linux Technology Center



Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-02-19 Thread Frank Rowand
On 2/19/19 10:34 PM, Brendan Higgins wrote:
> On Mon, Feb 18, 2019 at 12:02 PM Frank Rowand  wrote:
> 
>> I have not read through the patches in any detail.  I have read some of
>> the code to try to understand the patches to the devicetree unit tests.
>> So that may limit how valid my comments below are.
> 
> No problem.
> 
>>
>> I found the code difficult to read in places where it should have been
>> much simpler to read.  Structuring the code in a pseudo object oriented
>> style meant that everywhere in a code path that I encountered a dynamic
>> function call, I had to go find where that dynamic function call was
>> initialized (and being the cautious person that I am, verify that
>> no where else was the value of that dynamic function call).  With
>> primitive vi and tags, that search would have instead just been a
>> simple key press (or at worst a few keys) if hard coded function
>> calls were done instead of dynamic function calls.  In the code paths
>> that I looked at, I did not see any case of a dynamic function being
>> anything other than the value it was originally initialized as.
>> There may be such cases, I did not read the entire patch set.  There
>> may also be cases envisioned in the architects mind of how this
>> flexibility may be of future value.  Dunno.
> 
> Yeah, a lot of it is intended to make architecture specific
> implementations and some other future work easier. Some of it is also
> for testing purposes. Admittedly some is for neither reason, but given
> the heavy usage elsewhere, I figured there was no harm since it was
> all private internal usage anyway.
> 

Increasing the cost for me (and all the other potential code readers)
to read the code is harm.


Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-02-19 Thread Brendan Higgins
On Mon, Feb 18, 2019 at 12:02 PM Frank Rowand  wrote:

> I have not read through the patches in any detail.  I have read some of
> the code to try to understand the patches to the devicetree unit tests.
> So that may limit how valid my comments below are.

No problem.

>
> I found the code difficult to read in places where it should have been
> much simpler to read.  Structuring the code in a pseudo object oriented
> style meant that everywhere in a code path that I encountered a dynamic
> function call, I had to go find where that dynamic function call was
> initialized (and being the cautious person that I am, verify that
> no where else was the value of that dynamic function call).  With
> primitive vi and tags, that search would have instead just been a
> simple key press (or at worst a few keys) if hard coded function
> calls were done instead of dynamic function calls.  In the code paths
> that I looked at, I did not see any case of a dynamic function being
> anything other than the value it was originally initialized as.
> There may be such cases, I did not read the entire patch set.  There
> may also be cases envisioned in the architects mind of how this
> flexibility may be of future value.  Dunno.

Yeah, a lot of it is intended to make architecture specific
implementations and some other future work easier. Some of it is also
for testing purposes. Admittedly some is for neither reason, but given
the heavy usage elsewhere, I figured there was no harm since it was
all private internal usage anyway.


Re: [RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-02-18 Thread Frank Rowand
On 2/14/19 1:37 PM, Brendan Higgins wrote:
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
> 
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> can run several dozen tests in under a second. Currently, the entire
> KUnit test suite for KUnit runs in under a second from the initial
> invocation (build time excluded).
> 
> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, providing
> common infrastructure for running tests, mocking, spying, and much more.
> 
> ## What's so special about unit testing?
> 
> A unit test is supposed to test a single unit of code in isolation,
> hence the name. There should be no dependencies outside the control of
> the test; this means no external dependencies, which makes tests orders
> of magnitude faster. Likewise, since there are no external dependencies,
> there are no hoops to jump through to run the tests. Additionally, this
> makes unit tests deterministic: a failing unit test always indicates a
> problem. Finally, because unit tests necessarily have finer granularity,
> they are able to test all code paths easily solving the classic problem
> of difficulty in exercising error handling code.
> 
> ## Is KUnit trying to replace other testing frameworks for the kernel?
> 
> No. Most existing tests for the Linux kernel are end-to-end tests, which
> have their place. A well tested system has lots of unit tests, a
> reasonable number of integration tests, and some end-to-end tests. KUnit
> is just trying to address the unit test space which is currently not
> being addressed.
> 
> ## More information on KUnit
> 
> There is a bunch of documentation near the end of this patch set that
> describes how to use KUnit and best practices for writing unit tests.
> For convenience I am hosting the compiled docs here:
> https://google.github.io/kunit-docs/third_party/kernel/docs/
> Additionally for convenience, I have applied these patches to a branch:
> https://kunit.googlesource.com/linux/+/kunit/rfc/5.0-rc5/v4
> The repo may be cloned with:
> git clone https://kunit.googlesource.com/linux
> This patchset is on the kunit/rfc/5.0-rc5/v4 branch.
> 
> ## Changes Since Last Version
> 
>  - Got KUnit working on (hypothetically) all architectures (tested on
>x86), as per Rob's (and other's) request
>  - Punting all KUnit features/patches depending on UML for now.
>  - Broke out UML specific support into arch/um/* as per "[RFC v3 01/19]
>kunit: test: add KUnit test runner core", as requested by Luis.
>  - Added support to kunit_tool to allow it to build kernels in external
>directories, as suggested by Kieran.
>  - Added a UML defconfig, and a config fragment for KUnit as suggested
>by Kieran and Luis.
>  - Cleaned up, and reformatted a bunch of stuff.
> 

I have not read through the patches in any detail.  I have read some of
the code to try to understand the patches to the devicetree unit tests.
So that may limit how valid my comments below are.

I found the code difficult to read in places where it should have been
much simpler to read.  Structuring the code in a pseudo object oriented
style meant that everywhere in a code path that I encountered a dynamic
function call, I had to go find where that dynamic function call was
initialized (and being the cautious person that I am, verify that
no where else was the value of that dynamic function call).  With
primitive vi and tags, that search would have instead just been a
simple key press (or at worst a few keys) if hard coded function
calls were done instead of dynamic function calls.  In the code paths
that I looked at, I did not see any case of a dynamic function being
anything other than the value it was originally initialized as.
There may be such cases, I did not read the entire patch set.  There
may also be cases envisioned in the architects mind of how this
flexibility may be of future value.  Dunno.

-Frank


[RFC v4 14/17] MAINTAINERS: add entry for KUnit the unit testing framework

2019-02-14 Thread Brendan Higgins
Add myself as maintainer of KUnit, the Linux kernel's unit testing
framework.

Signed-off-by: Brendan Higgins 
---
 MAINTAINERS | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 8c68de3cfd80e..ff2cc9fcb49ad 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8267,6 +8267,16 @@ S:   Maintained
 F: tools/testing/selftests/
 F: Documentation/dev-tools/kselftest*
 
+KERNEL UNIT TESTING FRAMEWORK (KUnit)
+M: Brendan Higgins 
+L: kunit-...@googlegroups.com
+W: https://google.github.io/kunit-docs/third_party/kernel/docs/
+S: Maintained
+F: Documentation/kunit/
+F: include/kunit/
+F: kunit/
+F: tools/testing/kunit/
+
 KERNEL USERMODE HELPER
 M: Luis Chamberlain 
 L: linux-kernel@vger.kernel.org
-- 
2.21.0.rc0.258.g878e2cd30e-goog



[RFC v4 00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

2019-02-14 Thread Brendan Higgins
This patch set proposes KUnit, a lightweight unit testing and mocking
framework for the Linux kernel.

Unlike Autotest and kselftest, KUnit is a true unit testing framework;
it does not require installing the kernel on a test machine or in a VM
and does not require tests to be written in userspace running on a host
kernel. Additionally, KUnit is fast: From invocation to completion KUnit
can run several dozen tests in under a second. Currently, the entire
KUnit test suite for KUnit runs in under a second from the initial
invocation (build time excluded).

KUnit is heavily inspired by JUnit, Python's unittest.mock, and
Googletest/Googlemock for C++. KUnit provides facilities for defining
unit test cases, grouping related test cases into test suites, providing
common infrastructure for running tests, mocking, spying, and much more.

## What's so special about unit testing?

A unit test is supposed to test a single unit of code in isolation,
hence the name. There should be no dependencies outside the control of
the test; this means no external dependencies, which makes tests orders
of magnitude faster. Likewise, since there are no external dependencies,
there are no hoops to jump through to run the tests. Additionally, this
makes unit tests deterministic: a failing unit test always indicates a
problem. Finally, because unit tests necessarily have finer granularity,
they are able to test all code paths easily solving the classic problem
of difficulty in exercising error handling code.

## Is KUnit trying to replace other testing frameworks for the kernel?

No. Most existing tests for the Linux kernel are end-to-end tests, which
have their place. A well tested system has lots of unit tests, a
reasonable number of integration tests, and some end-to-end tests. KUnit
is just trying to address the unit test space which is currently not
being addressed.

## More information on KUnit

There is a bunch of documentation near the end of this patch set that
describes how to use KUnit and best practices for writing unit tests.
For convenience I am hosting the compiled docs here:
https://google.github.io/kunit-docs/third_party/kernel/docs/
Additionally for convenience, I have applied these patches to a branch:
https://kunit.googlesource.com/linux/+/kunit/rfc/5.0-rc5/v4
The repo may be cloned with:
git clone https://kunit.googlesource.com/linux
This patchset is on the kunit/rfc/5.0-rc5/v4 branch.

## Changes Since Last Version

 - Got KUnit working on (hypothetically) all architectures (tested on
   x86), as per Rob's (and other's) request
 - Punting all KUnit features/patches depending on UML for now.
 - Broke out UML specific support into arch/um/* as per "[RFC v3 01/19]
   kunit: test: add KUnit test runner core", as requested by Luis.
 - Added support to kunit_tool to allow it to build kernels in external
   directories, as suggested by Kieran.
 - Added a UML defconfig, and a config fragment for KUnit as suggested
   by Kieran and Luis.
 - Cleaned up, and reformatted a bunch of stuff.

-- 
2.21.0.rc0.258.g878e2cd30e-goog



Re: [RFC v1 00/31] kunit: Introducing KUnit, the Linux kernel unit testing framework

2018-10-17 Thread Brendan Higgins
On Wed, Oct 17, 2018 at 4:12 PM Randy Dunlap  wrote:
>
> On 10/16/18 4:50 PM, Brendan Higgins wrote:
> > This patch set proposes KUnit, a lightweight unit testing and mocking
> > framework for the Linux kernel.
>
> Hi,
>
> Just a general comment:
>
> Documentation/process/submitting-patches.rst says:
> <<Describe your changes in imperative mood, e.g. "make xyzzy do frotz"
> instead of "[This patch] makes xyzzy do frotz" or "[I] changed xyzzy
> to do frotz", as if you are giving orders to the codebase to change
> its behaviour.>>
>

Thanks! I will fix this in the next revision.


Re: [RFC v1 00/31] kunit: Introducing KUnit, the Linux kernel unit testing framework

2018-10-17 Thread Randy Dunlap
On 10/16/18 4:50 PM, Brendan Higgins wrote:
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.

Hi,

Just a general comment:

Documentation/process/submitting-patches.rst says:
<<Describe your changes in imperative mood, e.g. "make xyzzy do frotz"
instead of "[This patch] makes xyzzy do frotz" or "[I] changed xyzzy
to do frotz", as if you are giving orders to the codebase to change
its behaviour.>>

That also means saying things like:
... test: add
instead of ... test: added
and "enable" instead of "enabled"
and "improve" instead of "improved"
and "implement" instead of "implemented".



thanks.
-- 
~Randy


Re: [RFC v1 00/31] kunit: Introducing KUnit, the Linux kernel unit testing framework

2018-10-17 Thread Brendan Higgins
On Wed, Oct 17, 2018 at 10:49 AM  wrote:
>
> > -Original Message-
> > From: Brendan Higgins
> >
> > This patch set proposes KUnit, a lightweight unit testing and mocking
> > framework for the Linux kernel.
>
> I'm interested in this, and think the kernel might benefit from this,
> but I have lots of questions.
>
Awesome!

> > Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> > it does not require installing the kernel on a test machine or in a VM
> > and does not require tests to be written in userspace running on a host
> > kernel.
>
>
> This is stated here and a few places in the documentation.  Just to clarify,
> KUnit works by compiling the unit under test, along with the test code
> itself, and then runs it on the machine where the compilation took
> place?  Is this right?  How does cross-compiling enter into the equation?
> If not what I described, then what exactly is happening?
>
Yep, that's exactly right!

The test and the code under test are linked together in the same
binary and are compiled under Kbuild. Right now I am linking
everything into a UML kernel, but I would ultimately like to make
tests compile into completely independent test binaries. So each test
file would get compiled into its own test binary and would link
against only the code needed to run the test, but we are a bit of a
ways off from that.

For now, tests compile as part of a UML kernel and a test script boots
the UML kernel, tests run as part of the boot process, and the script
extracts test results and reports them.

I intentionally made it so the KUnit test libraries could be
relatively easily ported to other architectures, but in the long term,
tests that depend on being built into a real kernel that boots on real
hardware would be a lot more difficult to maintain and we would never
be able to provide the kind of resources and infrastructure as we
could for tests that run as normal user space binaries.

Does that answer your question?

> Sorry - I haven't had time to look through the patches in detail.
>
> Another issue is, what requirements does this place on the tested
> code?  Is extra instrumentation required?  I didn't see any, but I
> didn't look exhaustively at the code.
>
Nope, no special instrumentation. As long as the code under test can
be compiled under COMPILE_TEST for the host architecture, you should
be able to use KUnit.

> Are all unit tests stored separately from the unit-under-test, or are
> they expected to be in the same directory?  Who is expected to
> maintain the unit tests?  How often are they expected to change?
> (Would it be every time the unit-under-test changed?)
>
Tests are in the same directory as the code under test. For example,
if I have a driver drivers/i2c/busses/i2c-aspeed.c, I would write a
test drivers/i2c/busses/i2c-aspeed-test.c (that's my opinion anyway).
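
To make that concrete, such a test file would contain something along these
lines (purely illustrative: the assertion is a placeholder rather than a real
i2c-aspeed check, and the exact macro and struct names have shifted a bit
between revisions of this patch set):

#include <kunit/test.h>

/* Placeholder test case; a real one would exercise i2c-aspeed internals */
static void i2c_aspeed_example_test(struct kunit *test)
{
	KUNIT_EXPECT_EQ(test, 2 + 2, 4);
}

static struct kunit_case i2c_aspeed_test_cases[] = {
	KUNIT_CASE(i2c_aspeed_example_test),
	{}
};

static struct kunit_suite i2c_aspeed_test_suite = {
	.name = "i2c-aspeed",
	.test_cases = i2c_aspeed_test_cases,
};
kunit_test_suite(i2c_aspeed_test_suite);

The test file is then hooked into Kbuild next to the driver, e.g. with an
obj-$(CONFIG_I2C_ASPEED_KUNIT_TEST) += i2c-aspeed-test.o line in the Makefile
(that config symbol is made up for illustration), so it only gets built when
someone asks for it.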

Unit tests should be the responsibility of the person who is
responsible for the code. So one way to do this would be that unit
tests should be the responsibility of the maintainer who would in turn
require that new tests are written for any new code added, and that
all tests should pass for every patch sent for review.

A well written unit test tests public interfaces (by public I just
mean functions exported outside of a .c file, so non-static functions
and functions which are shared as a member of a struct) so a unit test
should change at a slower rate than the code under test, but you would
likely have to change the test anytime the public interface changes
(intended behavior changes, function signature changes, new public
feature added, etc). More succinctly: if the contract that your code
provides changes, your test should probably change; if the contract
doesn't change, your test probably shouldn't change. Does that make
sense?

> Does the test code require the same level of expertise to write
> and maintain as the unit-under-test code?  That is, could this be
> a new opportunity for additional developers (especially relative
> newcomers) to add value to the kernel by writing and maintaining
> test code, or does this add to the already large burden of code
> maintenance for our existing maintainers.

So, a couple of things: in order to write a unit test, the person who
writes the test must understand what the code they are testing is
supposed to do. To some extent that will probably require someone with
some expertise to ensure that the test makes sense, and indeed a
change that breaks a test should be accompanied by an update to the
test.

On the other hand, I think understanding what pre-existing code does
and is supposed to do is much easier than writing new code from
scratch, and probably doesn't require too much expertise. I actually
did a bit of an experiment internally on this: I had some people with
no prior knowledge of the kernel write some t


Re: [RFC v1 00/31] kunit: Introducing KUnit, the Linux kernel unit testing framework

2018-10-17 Thread Rob Herring
On Tue, Oct 16, 2018 at 6:53 PM Brendan Higgins
 wrote:
>
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
>
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> can run several dozen tests in under a second. Currently, the entire
> KUnit test suite for KUnit runs in under a second from the initial
> invocation (build time excluded).
>
> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, providing
> common infrastructure for running tests, mocking, spying, and much more.

I very much like this. The DT code too has unit tests with our own,
simple infrastructure. They too can run under UML (and every other
arch).

Rob


RE: [RFC v1 00/31] kunit: Introducing KUnit, the Linux kernel unit testing framework

2018-10-17 Thread Tim.Bird
> -Original Message-
> From: Brendan Higgins
> 
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.

I'm interested in this, and think the kernel might benefit from this,
but I have lots of questions.

> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel.


This is stated here and a few places in the documentation.  Just to clarify,
KUnit works by compiling the unit under test, along with the test code
itself, and then runs it on the machine where the compilation took
place?  Is this right?  How does cross-compiling enter into the equation?
If not what I described, then what exactly is happening?

Sorry - I haven't had time to look through the patches in detail.

Another issue is, what requirements does this place on the tested
code?  Is extra instrumentation required?  I didn't see any, but I
didn't look exhaustively at the code.

Are all unit tests stored separately from the unit-under-test, or are
they expected to be in the same directory?  Who is expected to
maintain the unit tests?  How often are they expected to change?
(Would it be every time the unit-under-test changed?)

Does the test code require the same level of expertise to write
and maintain as the unit-under-test code?  That is, could this be
a new opportunity for additional developers (especially relative
newcomers) to add value to the kernel by writing and maintaining
test code, or does this add to the already large burden of code
maintenance for our existing maintainers.

Thanks,
 -- Tim

...



[RFC v1 31/31] MAINTAINERS: add entry for KUnit the unit testing framework

2018-10-16 Thread Brendan Higgins
Added myself as maintainer of KUnit, the Linux kernel's unit testing
framework.

Signed-off-by: Brendan Higgins 
---
 MAINTAINERS | 15 +++
 1 file changed, 15 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 544cac829cf44..9c3d34f0062ad 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -7801,6 +7801,21 @@ S:   Maintained
 F: tools/testing/selftests/
 F: Documentation/dev-tools/kselftest*
 
+KERNEL UNIT TESTING FRAMEWORK (KUnit)
+M: Brendan Higgins 
+L: kunit-...@googlegroups.com
+W: https://google.github.io/kunit-docs/third_party/kernel/docs/
+S: Maintained
+F: Documentation/kunit/
+F: arch/um/include/asm/io-mock-shared.h
+F: arch/um/include/asm/io-mock.h
+F: arch/um/kernel/io-mock.c
+F: drivers/base/platform-mock.c
+F: include/linux/platform_device_mock.h
+F: include/kunit/
+F: kunit/
+F: tools/testing/kunit/
+
 KERNEL USERMODE HELPER
 M: "Luis R. Rodriguez" 
 L: linux-kernel@vger.kernel.org
-- 
2.19.1.331.ge82ca0e54c-goog



[RFC v1 00/31] kunit: Introducing KUnit, the Linux kernel unit testing framework

2018-10-16 Thread Brendan Higgins
This patch set proposes KUnit, a lightweight unit testing and mocking
framework for the Linux kernel.

Unlike Autotest and kselftest, KUnit is a true unit testing framework;
it does not require installing the kernel on a test machine or in a VM
and does not require tests to be written in userspace running on a host
kernel. Additionally, KUnit is fast: From invocation to completion KUnit
can run several dozen tests in under a second. Currently, the entire
KUnit test suite for KUnit runs in under a second from the initial
invocation (build time excluded).

KUnit is heavily inspired by JUnit, Python's unittest.mock, and
Googletest/Googlemock for C++. KUnit provides facilities for defining
unit test cases, grouping related test cases into test suites, providing
common infrastructure for running tests, mocking, spying, and much more.

## What's so special about unit testing?

A unit test is supposed to test a single unit of code in isolation,
hence the name. There should be no dependencies outside the control of
the test; this means no external dependencies, which makes tests orders
of magnitude faster. Likewise, since there are no external dependencies,
there are no hoops to jump through to run the tests. Additionally, this
makes unit tests deterministic: a failing unit test always indicates a
problem. Finally, because unit tests necessarily have finer granularity,
they are able to test all code paths easily solving the classic problem
of difficulty in exercising error handling code.

## Is KUnit trying to replace other testing frameworks for the kernel?

No. Most existing tests for the Linux kernel are end-to-end tests, which
have their place. A well tested system has lots of unit tests, a
reasonable number of integration tests, and some end-to-end tests. KUnit
is just trying to address the unit test space which is currently not
being addressed.

## More information on KUnit

There is a bunch of documentation near the end of this patch set that
describes how to use KUnit and best practices for writing unit tests.
For convenience I am hosting the compiled docs here:
https://google.github.io/kunit-docs/third_party/kernel/docs/

-- 
2.19.1.331.ge82ca0e54c-goog



Re: [ANNOUNCE] fsperf: a simple fs/block performance testing framework

2017-10-10 Thread Mel Gorman
On Tue, Oct 10, 2017 at 08:09:20AM +1100, Dave Chinner wrote:
> > I'd _like_ to expand fio for cases we come up with that aren't possible, as
> > there's already a ton of measurements that are taken, especially around
> > latencies.
> 
> To be properly useful it needs to support more than just fio to run
> tests. Indeed, it's largely useless to me if that's all it can do or
> it's a major pain to add support for different tools like fsmark.
> 
> e.g.  my typical perf regression test that you see as the concurrent
> fsmark create workload is actually a lot more. It does:
> 
>   fsmark to create 50m zero length files
>   umount,
>   run parallel xfs_repair (excellent mmap_sem/page fault punisher)
>   mount
>   run parallel find -ctime (readdir + lookup traversal)
>   unmount, mount
>   run parallel ls -R (readdir + dtype traversal)
>   unmount, mount
>   parallel rm -rf of 50m files
> 
> I have variants that use small 4k files or large files rather than
> empty files, that use different fsync patterns to stress the
> log, use grep -R to traverse the data as well as
> the directory/inode structure instead of find, etc.
> 

FWIW, this is partially implemented in mmtests as
configs/config-global-dhp__io-xfsrepair. It covers the fsmark and
xfs_repair part and an example report is

http://beta.suse.com/private/mgorman/results/home/marvin/openSUSE-LEAP-42.2/global-dhp__io-xfsrepair-xfs/delboy/#xfsrepair

(ignore 4.12.603, it's 4.12.3-stable with some additional patches that were
pending for -stable at the time the test was executed). That config was
added after a discussion with you a few years ago and I've kept it since as
it has been useful in a number of contexts. Adding additional tests to cover
parallel find, parallel ls and parallel rm would be relatively trivial but
it's not there. This is a test that doesn't have proper graphing support
but it could be added in 10-15 minutes as xfsrepair is the primary metric
and it's simply reported as elapsed time.

fsmark is also covered albeit not necessarily with parameters everyone wants
as configs/config-global-dhp__io-metadata in mmtests.  Example report is

http://beta.suse.com/private/mgorman/results/home/marvin/openSUSE-LEAP-42.2/global-dhp__io-metadata-xfs/delboy/#fsmark-threaded

mmtests has been modified multiple times as methodologies were improved,
and it's far from perfect, but it seems to me that fsperf
is going to end up reimplementing a lot of it.

It's not perfect as there are multiple quality-of-implementation issues
as it often takes the shortest path to being able to collect data but
it improves over time. When a test is found to be flawed, it's fixed and
historical data is discarded. It doesn't store data in sqlite or anything
fancy, just the raw logs are preserved and reports generated as required. In
terms of tools required, the core is just bash scripts. Some of the tests
require a number of packages to be installed but not all of them. It uses a
tool to install packages if they are missing but the naming is all based on
opensuse. It *can* map opensuse package names to fedora and debian but the
mappings are not up-to-date as I do not personally run those distributions.

Even with the quality-of-implementation issues, it seems to me that it
covers a lot of the requirements that fsperf aims for.

-- 
Mel Gorman
SUSE Labs


Re: [ANNOUNCE] fsperf: a simple fs/block performance testing framework

2017-10-09 Thread Josef Bacik
On Tue, Oct 10, 2017 at 08:09:20AM +1100, Dave Chinner wrote:
> On Mon, Oct 09, 2017 at 09:00:51AM -0400, Josef Bacik wrote:
> > On Mon, Oct 09, 2017 at 04:17:31PM +1100, Dave Chinner wrote:
> > > On Sun, Oct 08, 2017 at 10:25:10PM -0400, Josef Bacik wrote:
> > > > > Integrating into fstests means it will be immediately available to
> > > > > all fs developers, it'll run on everything that everyone already has
> > > > > setup for filesystem testing, and it will have familiar mkfs/mount
> > > > > option setup behaviour so there's no new hoops for everyone to jump
> > > > > through to run it...
> > > > > 
> > > > 
> > > > TBF I specifically made it as easy as possible because I know we all 
> > > > hate trying
> > > > to learn new shit.
> > > 
> > > Yeah, it's also hard to get people to change their workflows to add
> > > a whole new test harness into them. It's easy if it's just a new
> > > command to an existing workflow :P
> > > 
> > 
> > Agreed, so if you probably won't run this outside of fstests then I'll add 
> > it to
> > xfstests.  I envision this tool as being run by maintainers to verify their 
> > pull
> > requests haven't regressed since the last set of patches, as well as by 
> > anybody
> > trying to fix performance problems.  So it's way more important to me that 
> > you,
> > Ted, and all the various btrfs maintainers will run it than anybody else.
> > 
> > > > I figured this was different enough to warrant a separate
> > > > project, especially since I'm going to add block device jobs so Jens 
> > > > can test
> > > > block layer things.  If we all agree we'd rather see this in fstests 
> > > > then I'm
> > > > happy to do that too.  Thanks,
> > > 
> > > I'm not fussed either way - it's a good discussion to have, though.
> > > 
> > > If I want to add tests (e.g. my time-honoured fsmark tests), where
> > > should I send patches?
> > > 
> > 
> > I beat you to that!  I wanted to avoid adding fs_mark to the suite because 
> > it
> > means parsing another different set of outputs, so I added a new ioengine 
> > to fio
> > for this
> > 
> > http://www.spinics.net/lists/fio/msg06367.html
> > 
> > and added a fio job to do 500k files
> > 
> > https://github.com/josefbacik/fsperf/blob/master/tests/500kemptyfiles.fio
> > 
> > The test is disabled by default for now because obviously the fio support 
> > hasn't
> > landed yet.
> 
> That seems  misguided. fio is good, but it's not a universal
> solution.
> 
> > I'd _like_ to expand fio for cases we come up with that aren't possible, as
> > there's already a ton of measurements that are taken, especially around
> > latencies.
> 
> To be properly useful it needs to support more than just fio to run
> tests. Indeed, it's largely useless to me if that's all it can do or
> it's a major pain to add support for different tools like fsmark.
> 
> e.g.  my typical perf regression test that you see as the concurrent
> fsmark create workload is actually a lot more. It does:
> 
>   fsmark to create 50m zero length files
>   umount,
>   run parallel xfs_repair (excellent mmap_sem/page fault punisher)
>   mount
>   run parallel find -ctime (readdir + lookup traversal)
>   unmount, mount
>   run parallel ls -R (readdir + dtype traversal)
>   unmount, mount
>   parallel rm -rf of 50m files
> 
> I have variants that use small 4k files or large files rather than
> empty files, that use different fsync patterns to stress the
> log, use grep -R to traverse the data as well as
> the directory/inode structure instead of find, etc.
> 
> > That said I'm not opposed to throwing new stuff in there, it just
> > means we have to add stuff to parse the output and store it in the database 
> > in a
> > consistent way, which seems like more of a pain than just making fio do 
> > what we
> > need it to.  Thanks,
> 
> fio is not going to be able to replace the sort of perf tests I run
> from week to week. If that's all it's going to do then it's not
> directly useful to me...
> 

Agreed, I'm just going to add this stuff to fstests since I'd like to be able to
use the _require stuff to make sure we only run stuff we have support for.  I'll
wire that up this week and send patches along.  Thanks,

Josef


Re: [ANNOUNCE] fsperf: a simple fs/block performance testing framework

2017-10-09 Thread Dave Chinner
On Mon, Oct 09, 2017 at 09:00:51AM -0400, Josef Bacik wrote:
> On Mon, Oct 09, 2017 at 04:17:31PM +1100, Dave Chinner wrote:
> > On Sun, Oct 08, 2017 at 10:25:10PM -0400, Josef Bacik wrote:
> > > > Integrating into fstests means it will be immediately available to
> > > > all fs developers, it'll run on everything that everyone already has
> > > > setup for filesystem testing, and it will have familiar mkfs/mount
> > > > option setup behaviour so there's no new hoops for everyone to jump
> > > > through to run it...
> > > > 
> > > 
> > > TBF I specifically made it as easy as possible because I know we all hate 
> > > trying
> > > to learn new shit.
> > 
> > Yeah, it's also hard to get people to change their workflows to add
> > a whole new test harness into them. It's easy if it's just a new
> > command to an existing workflow :P
> > 
> 
> Agreed, so if you probably won't run this outside of fstests then I'll add it 
> to
> xfstests.  I envision this tool as being run by maintainers to verify their 
> pull
> requests haven't regressed since the last set of patches, as well as by 
> anybody
> trying to fix performance problems.  So it's way more important to me that 
> you,
> Ted, and all the various btrfs maintainers will run it than anybody else.
> 
> > > I figured this was different enough to warrant a separate
> > > project, especially since I'm going to add block device jobs so Jens can 
> > > test
> > > block layer things.  If we all agree we'd rather see this in fstests then 
> > > I'm
> > > happy to do that too.  Thanks,
> > 
> > I'm not fussed either way - it's a good discussion to have, though.
> > 
> > If I want to add tests (e.g. my time-honoured fsmark tests), where
> > should I send patches?
> > 
> 
> I beat you to that!  I wanted to avoid adding fs_mark to the suite because it
> means parsing another different set of outputs, so I added a new ioengine to 
> fio
> for this
> 
> http://www.spinics.net/lists/fio/msg06367.html
> 
> and added a fio job to do 500k files
> 
> https://github.com/josefbacik/fsperf/blob/master/tests/500kemptyfiles.fio
> 
> The test is disabled by default for now because obviously the fio support 
> hasn't
> landed yet.

That seems  misguided. fio is good, but it's not a universal
solution.

> I'd _like_ to expand fio for cases we come up with that aren't possible, as
> there's already a ton of measurements that are taken, especially around
> latencies.

To be properly useful it needs to support more than just fio to run
tests. Indeed, it's largely useless to me if that's all it can do or
it's a major pain to add support for different tools like fsmark.

e.g.  my typical perf regression test that you see as the concurrent
fsmark create workload is actually a lot more. It does:

fsmark to create 50m zero length files
umount,
run parallel xfs_repair (excellent mmap_sem/page fault punisher)
mount
run parallel find -ctime (readdir + lookup traversal)
unmount, mount
run parallel ls -R (readdir + dtype traversal)
unmount, mount
parallel rm -rf of 50m files

I have variants that use small 4k files or large files rather than
empty files, that use different fsync patterns to stress the
log, use grep -R to traverse the data as well as
the directory/inode structure instead of find, etc.
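
A rough sketch of what a harness has to be able to drive to reproduce a
sequence like the one above; the commands, flags and the /mnt/scratch and
/dev/sdb paths are placeholders, and the parallel fan-out is elided:

  import subprocess, time

  PHASES = [
      ("create", "fs_mark -d /mnt/scratch -n 50000000 -s 0 -t 16"),
      ("repair", "umount /mnt/scratch && xfs_repair /dev/sdb"
                 " && mount /dev/sdb /mnt/scratch"),
      ("find",   "find /mnt/scratch -ctime +1 | wc -l"),
      ("ls",     "umount /mnt/scratch && mount /dev/sdb /mnt/scratch"
                 " && ls -R /mnt/scratch | wc -l"),
      ("rm",     "umount /mnt/scratch && mount /dev/sdb /mnt/scratch"
                 " && rm -rf /mnt/scratch/*"),
  ]

  for name, cmd in PHASES:
      start = time.time()
      subprocess.run(cmd, shell=True, check=True)  # each phase timed separately
      print("%-8s %6.1fs" % (name, time.time() - start))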

> That said I'm not opposed to throwing new stuff in there, it just
> means we have to add stuff to parse the output and store it in the database 
> in a
> consistent way, which seems like more of a pain than just making fio do what 
> we
> need it to.  Thanks,

fio is not going to be able to replace the sort of perf tests I run
from week to week. If that's all it's going to do then it's not
directly useful to me...

Cheers,

Dave.
-- 
Dave Chinner
da...@fromorbit.com


Re: [ANNOUNCE] fsperf: a simple fs/block performance testing framework

2017-10-09 Thread Theodore Ts'o
On Mon, Oct 09, 2017 at 08:54:16AM -0400, Josef Bacik wrote:
> I purposefully used as little as possible, just json and sqlite, and I tried 
> to
> use as little python3 isms as possible.  Any rpm based systems should have 
> these
> libraries already installed, I agree that using any of the PyPI stuff is a 
> pain
> and is a non-starter for me as I want to make it as easy as possible to get
> running.
> 
> Where do you fall on the including it in xfstests question?  I expect that the
> perf stuff would be run more as maintainers put their pull requests together 
> to
> make sure things haven't regressed.  To that end I was going to wire up
> xfstests-bld to run this as well.  Whatever you and Dave prefer is what I'll 
> do,
> I'll use it wherever it ends up so you two are the ones that get to decide.

I'm currently using Python 2.7 mainly because the LTM subsystem in
gce-xfstests was implemented using that version of Python, and my
initial efforts to port it to Python 3 were... not smooth.  (Because
it was doing authentication, I got bit by the Python 2 vs Python 3
"bytes vs. strings vs. unicode" transition especially hard.)

So I'm going to be annoyed by needing to package Python 2.7 and Python
3.5 in my test VM's, but I can deal, and this will probably be the
forcing function for me to figure out how to make that jump.  To be
honest, the bigger issue I'm going to have to figure out is how to
manage the state in the sqlite database across disposable VM's running
in parallel.  And the assumption being made with having a static
sqlite database on a test machine is that the hardware capabilities
of the machine are static, and that's not true with a VM, whether it's running
via Qemu or in some cloud environment.

I'm not going to care that much about Python 3 not being available on
enterprise distro's, but I could see that being annoying for some
folks.  I'll let them worry about that.

The main thing I think we'll need to worry about once we let Python
into xfstests is to be pretty careful about specifying what version of
Python the scripts need to be portable against (Python 3.3?  3.4?
3.5?) and what versions of python packages get imported.

The bottom line is that I'm personally supportive of adding Python and
fsperf to xfstests.  We just need to be careful about the portability
concerns, not just now, but any time we check in new Python code.  And
having some documented Python style guidelines, and adding unit tests
so we can notice potential portability breakages would be a good idea.

 - Ted


Re: [ANNOUNCE] fsperf: a simple fs/block performance testing framework

2017-10-09 Thread David Sterba
On Fri, Oct 06, 2017 at 05:09:57PM -0400, Josef Bacik wrote:
> One thing that comes up a lot every LSF is the fact that we have no general 
> way
> that we do performance testing.  Every fs developer has a set of scripts or
> things that they run with varying degrees of consistency, but nothing central
> that we all use.  I for one am getting tired of finding regressions when we 
> are
> deploying new kernels internally, so I wired this thing up to try and address
> this need.
> 
> We all hate convoluted setups, the more brain power we have to put in to 
> setting
> something up the less likely we are to use it, so I took the xfstests approach
> of making it relatively simple to get running and relatively easy to add new
> tests.  For right now the only thing this framework does is run fio scripts.  
> I
> chose fio because it already gathers loads of performance data about its 
> runs.
> We have everything we need there, latency, bandwidth, cpu time, and all broken
> down by reads, writes, and trims.  I figure most of us are familiar enough 
> with
> fio and how it works to make it relatively easy to add new tests to the
> framework.
> 
> I've posted my code up on github, you can get it here
> 
> https://github.com/josefbacik/fsperf

Let me propose an existing framework that is capable of what is in
fsperf, and much more. It's Mel Gorman's mmtests
http://github.com/gormanm/mmtests .

I've been using it for a year or so, built a few scripts on top of that to
help me set up configs for specific machines or run tests in sequences.

What are the capabilities regarding filesystem tests:

* create and mount filesystems (based on configs)
* start various workloads, that are possibly adapted to the machine
  (cpu, memory), there are many types, we'd be interested in those
  touching filesystems
* gather system statistics - cpu, memory, IO, latency; there are scripts
  that understand the output of various benchmarking tools (fio, dbench,
  ffsb, tiobench, bonnie, fs_mark, iozone, blogbench, ...)
* export the results into plain text or html, with tables and graphs
* it is already able to compare results of several runs, with
  statistical indicators

The testsuite is actively used and maintained, which means that the
efforts are mostly on the configuration side. From the users' perspective
this means spending time on the setup, and the rest will work as
expected. Ie. you don't have to start debugging the suite because there
are some version mismatches.

> All (well most) of the results from fio are stored in a local sqlite database.
> Right now the comparison stuff is very crude, it simply checks against the
> previous run and it only checks a few of the keys by default.  You can check
> latency if you want, but while writing this stuff up it seemed that latency 
> was
> too variable from run to run to be useful in a "did my thing regress or 
> improve"
> sort of way.
> 
> The configuration is brain dead simple, the README has examples.  All you need
> to do is make your local.cfg, run ./setup and then run ./fsperf and you are 
> good
> to go.
> 
> The plan is to add lots of workloads as we discover regressions and such.  We
> don't want anything that takes too long to run otherwise people won't run 
> this,
> so the existing tests don't take much longer than a few minutes each.

Sorry, this is IMO the wrong approach to benchmarking. Can you exercise
the filesystem enough in a few minutes? Can you write at least 2 times
memory size of data to the filesystem? Everything works fine when it's
from caches and the filesystem is fresh. With that you can simply start
using phoronix-test-suite and be done, with the same quality of results
we all roll eyes about.

Targeted tests using fio are fine and I understand the need to keep it
minimal. mmtests have support for fio and any jobfile can be used,
internally implemented with the 'fio --cmdline' option that will
transform it to a shell variable that's passed to fio in the end.

As proposed in the thread, why not use xfstests? It could be suitable
for the configs, mkfs/mount and running but I think it would need a lot
of work to enhance the result gathering and presentation. Essentially
duplicating mmtests from that side.

I was positively surprised by various performance monitors that I was
not primarily interested in, like memory allocations or context
switches.  This gives deeper insights into the system and may help
analyze the benchmark results.

Side note: you can run xfstests from mmtests, ie. the machine/options
configuration is shared.

I'm willing to write more about the actual usage of mmtests, but at this
point I'm proposing the whole framework for consideration.


Re: [ANNOUNCE] fsperf: a simple fs/block performance testing framework

2017-10-09 Thread Josef Bacik
On Mon, Oct 09, 2017 at 04:17:31PM +1100, Dave Chinner wrote:
> On Sun, Oct 08, 2017 at 10:25:10PM -0400, Josef Bacik wrote:
> > On Mon, Oct 09, 2017 at 11:51:37AM +1100, Dave Chinner wrote:
> > > On Fri, Oct 06, 2017 at 05:09:57PM -0400, Josef Bacik wrote:
> > > > Hello,
> > > > 
> > > > One thing that comes up a lot every LSF is the fact that we have no 
> > > > general way
> > > > that we do performance testing.  Every fs developer has a set of 
> > > > scripts or
> > > > things that they run with varying degrees of consistency, but nothing 
> > > > central
> > > > that we all use.  I for one am getting tired of finding regressions 
> > > > when we are
> > > > deploying new kernels internally, so I wired this thing up to try and 
> > > > address
> > > > this need.
> > > > 
> > > > We all hate convoluted setups, the more brain power we have to put in 
> > > > to setting
> > > > something up the less likely we are to use it, so I took the xfstests 
> > > > approach
> > > > of making it relatively simple to get running and relatively easy to 
> > > > add new
> > > > tests.  For right now the only thing this framework does is run fio 
> > > > scripts.  I
> > > > chose fio because it already gathers loads of performance data about 
> > > > its runs.
> > > > We have everything we need there, latency, bandwidth, cpu time, and all 
> > > > broken
> > > > down by reads, writes, and trims.  I figure most of us are familiar 
> > > > enough with
> > > > fio and how it works to make it relatively easy to add new tests to the
> > > > framework.
> > > > 
> > > > I've posted my code up on github, you can get it here
> > > > 
> > > > https://github.com/josefbacik/fsperf
> > > > 
> > > > All (well most) of the results from fio are stored in a local sqlite 
> > > > database.
> > > > Right now the comparison stuff is very crude, it simply checks against 
> > > > the
> > > > previous run and it only checks a few of the keys by default.  You can 
> > > > check
> > > > latency if you want, but while writing this stuff up it seemed that 
> > > > latency was
> > > > too variable from run to run to be useful in a "did my thing regress or 
> > > > improve"
> > > > sort of way.
> > > > 
> > > > The configuration is brain dead simple, the README has examples.  All 
> > > > you need
> > > > to do is make your local.cfg, run ./setup and then run ./fsperf and you 
> > > > are good
> > > > to go.
> > > 
> > > Why re-invent the test infrastructure? Why not just make it a
> > > tests/perf subdir in fstests?
> > > 
> > 
> > Probably should have led with that shouldn't I have?  There's nothing 
> > keeping me
> > from doing it, but I didn't want to try and shoehorn in a python thing into
> > fstests.  I need python to do the sqlite and the json parsing to dump into 
> > the
> > sqlite database.
> > 
> > Now if you (and others) are not opposed to this being dropped into 
> > tests/perf
> > then I'll work that up.  But it's definitely going to need to be done in 
> > python.
> > I know you yourself have said you aren't opposed to using python in the 
> > past, so
> > if that's still the case then I can definitely wire it all up.
> 
> I have no problems with people using python for stuff like this but,
> OTOH, I'm not the fstests maintainer anymore :P
> 
> > > > The plan is to add lots of workloads as we discover regressions and 
> > > > such.  We
> > > > don't want anything that takes too long to run otherwise people won't 
> > > > run this,
> > > > so the existing tests don't take much longer than a few minutes each.  
> > > > I will be
> > > > adding some more comparison options so you can compare against averages 
> > > > of all
> > > > previous runs and such.
> > > 
> > > Yup, that fits exactly into what fstests is for... :P
> > > 
> > > Integrating into fstests means it will be immediately available to
> > > all fs developers, it'll run on everything that everyone already has
> > > setup for filesystem testing, and it will have familiar mkfs/mount
> > > option setup behaviour so there's no new hoops for everyone to jump
> > > through to run it...
> > > 
> > 
> > TBF I specifically made it as easy as possible because I know we all hate 
> > trying
> > to learn new shit.
> 
> Yeah, it's also hard to get people to change their workflows to add
> a whole new test harness into them. It's easy if it's just a new
> command to an existing workflow :P
> 

Agreed, so if you probably won't run this outside of fstests then I'll add it to
xfstests.  I envision this tool as being run by maintainers to verify their pull
requests haven't regressed since the last set of patches, as well as by anybody
trying to fix performance problems.  So it's way more important to me that you,
Ted, and all the various btrfs maintainers will run it than anybody else.

> > I figured this was different enough to warrant a separate
> > project, especially since I'm going to add block device jobs so Jens can 
> > test
> > block layer things.  If we all agree we'd rather 

Re: [ANNOUNCE] fsperf: a simple fs/block performance testing framework

2017-10-09 Thread Josef Bacik
On Sun, Oct 08, 2017 at 11:43:35PM -0400, Theodore Ts'o wrote:
> On Sun, Oct 08, 2017 at 10:25:10PM -0400, Josef Bacik wrote:
> > 
> > Probably should have led with that shouldn't I have?  There's nothing 
> > keeping me
> > from doing it, but I didn't want to try and shoehorn in a python thing into
> > fstests.  I need python to do the sqlite and the json parsing to dump into 
> > the
> > sqlite database.
> 
> What version of python are you using?  From inspection it looks like
> some variant of python 3.x (you're using print as a function w/o using
> "from __future import print_function") but it's not immediately
> obvious from the top-level fsperf shell script what version of python
> your scripts are dependent upon.
> 
> This could potentially be a bit of a portability issue across various
> distributions --- RHEL doesn't ship with Python 3.x at all, and on
> Debian you need to use python3 to get python 3.x, since
> /usr/bin/python still points at Python 2.7 by default.  So I could see
> this as a potential issue for xfstests.
> 
> I'm currently using Python 2.7 in my wrapper scripts for, among other
> things, to parse xUnit XML format and create nice summaries like this:
> 
> ext4/4k: 337 tests, 6 failures, 21 skipped, 3814 seconds
>   Failures: generic/232 generic/233 generic/361 generic/388
> generic/451 generic/459
> 
> So I'm not opposed to python, but I will note that if you are using
> modules from the Python Package Index, and they are modules which are
> not packaged by your distribution (so you're using pip or easy_install
> to pull them off the network), it does make doing hermetic builds from
> trusted sources to be a bit trickier.
> 
> If you have a secops team who wants to know the provenance of software
> which get thrown in production data centers (and automated downloading
> from random external sources w/o any code review makes them break out
> in hives), use of PyPI adds a new wrinkle.  It's not impossible to
> solve, by any means, but it's something to consider.
>

I purposefully used as little as possible, just json and sqlite, and I tried to
use as little python3 isms as possible.  Any rpm based systems should have these
libraries already installed, I agree that using any of the PyPI stuff is a pain
and is a non-starter for me as I want to make it as easy as possible to get
running.
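
For anyone curious what that json-plus-sqlite plumbing amounts to, here is a
minimal sketch; the table layout and the fio JSON fields used are
illustrative, not fsperf's actual schema:

  import json, sqlite3, sys

  db = sqlite3.connect("results.db")
  db.execute("CREATE TABLE IF NOT EXISTS results "
             "(test TEXT, read_bw REAL, read_iops REAL)")

  # fio run with --output-format=json; take the first job's read-side numbers.
  with open(sys.argv[1]) as f:
      job = json.load(f)["jobs"][0]
  db.execute("INSERT INTO results VALUES (?, ?, ?)",
             (job["jobname"], job["read"]["bw"], job["read"]["iops"]))
  db.commit()

  # Crude check against the previous run, in the spirit described above.
  rows = db.execute("SELECT read_bw FROM results "
                    "ORDER BY rowid DESC LIMIT 2").fetchall()
  if len(rows) == 2 and rows[0][0] < 0.95 * rows[1][0]:
      print("possible regression: %.0f KB/s vs %.0f KB/s" % (rows[0][0], rows[1][0]))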

Where do you fall on the including it in xfstests question?  I expect that the
perf stuff would be run more as maintainers put their pull requests together to
make sure things haven't regressed.  To that end I was going to wire up
xfstests-bld to run this as well.  Whatever you and Dave prefer is what I'll do,
I'll use it wherever it ends up so you two are the ones that get to decide.
Thanks,

Josef


Re: [ANNOUNCE] fsperf: a simple fs/block performance testing framework

2017-10-09 Thread Theodore Ts'o
On Mon, Oct 09, 2017 at 02:54:34PM +0800, Eryu Guan wrote:
> I have no problem either if python is really needed, after all this is a
> very useful infrastructure improvement. But the python version problem
> brought up by Ted made me a bit nervous, we need to work that round
> carefully.
> 
> OTOH, I'm just curious, what is the specific reason that python is a
> hard requirement? If we can use perl, that'll be much easier for
> fstests.

Note that perl has its own versioning issues (but it's been easier
given that Perl has been frozen so long waiting for Godot^H^H^H^H^H
Perl6).  It's for similar reasons that Python 2.7 is nice and stable.
Python 2 is frozen while Python 3 caught up.  The only downside is
that Python 2.7 is now deprecated, and will stop getting security
updates from Python Core in 2020.

I'll note that Python 3.6 has some nice features that aren't in Python
3.5 --- but Debian Stable only has Python 3.5, and Debian Unstable has
Python 3.6.  Which is why I said, "it looks like you're using some
variant of Python 3.x", but it wasn't obvious what version exactly was
required by fsperf.  This version instability in Python and Perl is
why Larry McVoy ended up using Tcl for Bitkeeper, by the way.  That
was the only thing that was guaranteed to work everywhere, exactly the
same, without random changes added by Perl/Python innovations...

It's a little easier for me with gce/kvm-xfstests, since I'm using
Debian Stable images/chroots, and I'm not even going to _pretend_ that
I care about cross-distro portability for the Test Appliance VM's that
xfstests-bld creates.  But I suspect this will be more of an issue
with xfstests.

Also, the same issues around versioning / "DLL hell" and provenance of
various high-level packages/modules exist with Perl just as much as with
Python.  Just substitute CPAN for PyPI.  And again, some of the
popular Perl packages have been packaged by the distros to solve the
versioning / provenance problem, but exactly _which_ packages /
modules are packaged varies from distro to distro.  (Hopefully the
most popular ones will be packaged by Red Hat, SuSE and Debian
derivatives, but you'll have to check for each package / module you
want to use.)

One way of solving the problem is just including those Perl / Python
package modules in the sources of xfstests itself; that way you're not
depending on which version of a particular module / package is
available on a distro, and you're also not randomly downloading
software over the network and hoping it works / hasn't been taken over
by some hostile state power.  (I'd be much happier if PyPI or CPAN
used SHA checksums of what you expect to be downloaded; even if you
use Easy_Install's requirements.txt, you're still trusting PyPI to
give you what it thinks is junitparser version 1.0.0.)
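
The checking itself is trivial once you have a pinned digest you trust; here's
a minimal sketch, where the file name and the digest are made-up placeholders:

import hashlib
import sys

# Hypothetical pinned digest for a vendored module tarball; the path and
# the hash below are placeholders, not real values.
EXPECTED_SHA256 = "0" * 64

def verify(path, expected=EXPECTED_SHA256):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    if h.hexdigest() != expected:
        sys.exit("checksum mismatch for %s" % path)

verify("vendor/junitparser-1.0.0.tar.gz")

The hard part is establishing the provenance of the digest in the first place,
not the code.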

This is why I've dropped my own copy of junitparser into the git repo
of xfstests-bld.  It's the "ship your own version of the DLL" solution
to the "DLL hell" problem, but it was the best I could come up with,
especially since Debian hadn't packaged the Junitparser python module.
I also figured it was much better at lowering the blood pressure of
the friendly local secops types.  :-)

  - Ted


Re: [ANNOUNCE] fsperf: a simple fs/block performance testing framework

2017-10-09 Thread Eryu Guan
On Mon, Oct 09, 2017 at 04:17:31PM +1100, Dave Chinner wrote:
> On Sun, Oct 08, 2017 at 10:25:10PM -0400, Josef Bacik wrote:
> > On Mon, Oct 09, 2017 at 11:51:37AM +1100, Dave Chinner wrote:
> > > On Fri, Oct 06, 2017 at 05:09:57PM -0400, Josef Bacik wrote:
> > > > Hello,
> > > > 
> > > > One thing that comes up a lot every LSF is the fact that we have no 
> > > > general way
> > > > that we do performance testing.  Every fs developer has a set of 
> > > > scripts or
> > > > things that they run with varying degrees of consistency, but nothing 
> > > > central
> > > > that we all use.  I for one am getting tired of finding regressions 
> > > > when we are
> > > > deploying new kernels internally, so I wired this thing up to try and 
> > > > address
> > > > this need.
> > > > 
> > > > We all hate convoluted setups, the more brain power we have to put in 
> > > > to setting
> > > > something up the less likely we are to use it, so I took the xfstests 
> > > > approach
> > > > of making it relatively simple to get running and relatively easy to 
> > > > add new
> > > > tests.  For right now the only thing this framework does is run fio 
> > > > scripts.  I
> > > > chose fio because it already gathers loads of performance data about 
> > > > it's runs.
> > > > We have everything we need there, latency, bandwidth, cpu time, and all 
> > > > broken
> > > > down by reads, writes, and trims.  I figure most of us are familiar 
> > > > enough with
> > > > fio and how it works to make it relatively easy to add new tests to the
> > > > framework.
> > > > 
> > > > I've posted my code up on github, you can get it here
> > > > 
> > > > https://github.com/josefbacik/fsperf
> > > > 
> > > > All (well most) of the results from fio are stored in a local sqlite 
> > > > database.
> > > > Right now the comparison stuff is very crude, it simply checks against 
> > > > the
> > > > previous run and it only checks a few of the keys by default.  You can 
> > > > check
> > > > latency if you want, but while writing this stuff up it seemed that 
> > > > latency was
> > > > too variable from run to run to be useful in a "did my thing regress or 
> > > > improve"
> > > > sort of way.
> > > > 
> > > > The configuration is brain dead simple, the README has examples.  All 
> > > > you need
> > > > to do is make your local.cfg, run ./setup and then run ./fsperf and you 
> > > > are good
> > > > to go.
> > > 
> > > Why re-invent the test infrastructure? Why not just make it a
> > > tests/perf subdir in fstests?
> > > 
> > 
> > Probably should have led with that shouldn't I have?  There's nothing
> > keeping me from doing it, but I didn't want to try and shoehorn in a
> > python thing into fstests.  I need python to do the sqlite and the json
> > parsing to dump into the sqlite database.
> > 
> > Now if you (and others) are not opposed to this being dropped into
> > tests/perf then I'll work that up.  But it's definitely going to need to
> > be done in python.  I know you yourself have said you aren't opposed to
> > using python in the past, so if that's still the case then I can
> > definitely wire it all up.
> 
> I have no problems with people using python for stuff like this but,
> OTOH, I'm not the fstests maintainer anymore :P

I have no problem either if python is really needed; after all, this is a
very useful infrastructure improvement. But the python version problem
brought up by Ted made me a bit nervous, so we need to work around that
carefully.

OTOH, I'm just curious, what is the specific reason that python is a
hard requirement? If we can use perl, that'll be much easier for
fstests.

BTW, opinions from key fs developers/fstests users, like you, are also
very important and welcomed :)

Thanks,
Eryu

> 
> > > > The plan is to add lots of workloads as we discover regressions and 
> > > > such.  We
> > > > don't want anything that takes too long to run otherwise people won't 
> > > > run this,
> > > > so the existing tests don't take much longer than a few minutes each.  
> > > > I will be
> > > > adding some more comparison options so you can compare against averages 
> > > > of all
> > > > previous runs and such.
> > > 
> > > Yup, that fits exactly into what fstests is for... :P
> > > 
> > > Integrating into fstests means it will be immediately available to
> > > all fs developers, it'll run on everything that everyone already has
> > > setup for filesystem testing, and it will have familiar mkfs/mount
> > > option setup behaviour so there's no new hoops for everyone to jump
> > > through to run it...
> > > 
> > 
> > TBF I specifically made it as easy as possible because I know we all hate 
> > trying
> > to learn new shit.
> 
> Yeah, it's also hard to get people to change their workflows to add
> a whole new test harness into them. It's easy if it's just a new
> command to an existing workflow :P
> 
> > I figured this was different enough to warrant a separate
> > project, especially since I'm going to add block 

Re: [ANNOUNCE] fsperf: a simple fs/block performance testing framework

2017-10-08 Thread Dave Chinner
On Sun, Oct 08, 2017 at 10:25:10PM -0400, Josef Bacik wrote:
> On Mon, Oct 09, 2017 at 11:51:37AM +1100, Dave Chinner wrote:
> > On Fri, Oct 06, 2017 at 05:09:57PM -0400, Josef Bacik wrote:
> > > Hello,
> > > 
> > > One thing that comes up a lot every LSF is the fact that we have no 
> > > general way
> > > that we do performance testing.  Every fs developer has a set of scripts 
> > > or
> > > things that they run with varying degrees of consistency, but nothing 
> > > central
> > > that we all use.  I for one am getting tired of finding regressions when 
> > > we are
> > > deploying new kernels internally, so I wired this thing up to try and 
> > > address
> > > this need.
> > > 
> > > We all hate convoluted setups, the more brain power we have to put in to 
> > > setting
> > > something up the less likely we are to use it, so I took the xfstests 
> > > approach
> > > of making it relatively simple to get running and relatively easy to add 
> > > new
> > > tests.  For right now the only thing this framework does is run fio 
> > > scripts.  I
> > > chose fio because it already gathers loads of performance data about it's 
> > > runs.
> > > We have everything we need there, latency, bandwidth, cpu time, and all 
> > > broken
> > > down by reads, writes, and trims.  I figure most of us are familiar 
> > > enough with
> > > fio and how it works to make it relatively easy to add new tests to the
> > > framework.
> > > 
> > > I've posted my code up on github, you can get it here
> > > 
> > > https://github.com/josefbacik/fsperf
> > > 
> > > All (well most) of the results from fio are stored in a local sqlite 
> > > database.
> > > Right now the comparison stuff is very crude, it simply checks against the
> > > previous run and it only checks a few of the keys by default.  You can 
> > > check
> > > latency if you want, but while writing this stuff up it seemed that 
> > > latency was
> > > too variable from run to run to be useful in a "did my thing regress or 
> > > improve"
> > > sort of way.
> > > 
> > > The configuration is brain dead simple, the README has examples.  All you 
> > > need
> > > to do is make your local.cfg, run ./setup and then run ./fsperf and you 
> > > are good
> > > to go.
> > 
> > Why re-invent the test infrastructure? Why not just make it a
> > tests/perf subdir in fstests?
> > 
> 
> Probably should have led with that shouldn't I have?  There's nothing
> keeping me from doing it, but I didn't want to try and shoehorn in a python
> thing into fstests.  I need python to do the sqlite and the json parsing to
> dump into the sqlite database.
> 
> Now if you (and others) are not opposed to this being dropped into tests/perf
> then I'll work that up.  But it's definitely going to need to be done in
> python.  I know you yourself have said you aren't opposed to using python in
> the past, so if that's still the case then I can definitely wire it all up.

I have no problems with people using python for stuff like this but,
OTOH, I'm not the fstests maintainer anymore :P

> > > The plan is to add lots of workloads as we discover regressions and such. 
> > >  We
> > > don't want anything that takes too long to run otherwise people won't run 
> > > this,
> > > so the existing tests don't take much longer than a few minutes each.  I 
> > > will be
> > > adding some more comparison options so you can compare against averages 
> > > of all
> > > previous runs and such.
> > 
> > Yup, that fits exactly into what fstests is for... :P
> > 
> > Integrating into fstests means it will be immediately available to
> > all fs developers, it'll run on everything that everyone already has
> > setup for filesystem testing, and it will have familiar mkfs/mount
> > option setup behaviour so there's no new hoops for everyone to jump
> > through to run it...
> > 
> 
> TBF I specifically made it as easy as possible because I know we all hate 
> trying
> to learn new shit.

Yeah, it's also hard to get people to change their workflows to add
a whole new test harness into them. It's easy if it's just a new
command to an existing workflow :P

> I figured this was different enough to warrant a separate
> project, especially since I'm going to add block device jobs so Jens can test
> block layer things.  If we all agree we'd rather see this in fstests then I'm
> happy to do that too.  Thanks,

I'm not fussed either way - it's a good discussion to have, though.

If I want to add tests (e.g. my time-honoured fsmark tests), where
should I send patches?

Cheers,

Dave.
-- 
Dave Chinner
da...@fromorbit.com


Re: [ANNOUNCE] fsperf: a simple fs/block performance testing framework

2017-10-08 Thread Theodore Ts'o
On Sun, Oct 08, 2017 at 10:25:10PM -0400, Josef Bacik wrote:
> 
> Probably should have led with that shouldn't I have?  There's nothing
> keeping me from doing it, but I didn't want to try and shoehorn in a python
> thing into fstests.  I need python to do the sqlite and the json parsing to
> dump into the sqlite database.

What version of python are you using?  From inspection it looks like
some variant of python 3.x (you're using print as a function w/o using
"from __future import print_function") but it's not immediately
obvious from the top-level fsperf shell script what version of python
your scripts are dependant upon.

This could potentially be a bit of a portability issue across various
distributions --- RHEL doesn't ship with Python 3.x at all, and on
Debian you need to use python3 to get python 3.x, since
/usr/bin/python still points at Python 2.7 by default.  So I could see
this as a potential issue for xfstests.
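
If the scripts do end up requiring 3.x, one cheap mitigation is to fail loudly
up front rather than partway through -- a minimal sketch, where the (3, 4)
floor is just an example, not fsperf's actual requirement:

import sys

# Bail out early with a clear message instead of failing later on a
# missing py3 feature; the version floor here is only illustrative.
if sys.version_info < (3, 4):
    sys.exit("this script requires python 3.4 or newer, found %d.%d"
             % sys.version_info[:2])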

I'm currently using Python 2.7 in my wrapper scripts to, among other
things, parse the xUnit XML format and create nice summaries like this:

ext4/4k: 337 tests, 6 failures, 21 skipped, 3814 seconds
  Failures: generic/232 generic/233 generic/361 generic/388
generic/451 generic/459
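
The parsing side of that is not much code; a rough, stdlib-only sketch (not my
actual wrapper script), assuming the usual xUnit attribute names:

import xml.etree.ElementTree as ET

def summarize(xml_path, label):
    root = ET.parse(xml_path).getroot()
    # A results file may be a single <testsuite> or a <testsuites>
    # wrapper element; handle both.
    suites = [root] if root.tag == "testsuite" else root.findall("testsuite")
    tests = failures = skipped = 0
    seconds = 0.0
    failed = []
    for suite in suites:
        tests += int(suite.get("tests", 0))
        failures += int(suite.get("failures", 0))
        skipped += int(suite.get("skipped", 0))
        seconds += float(suite.get("time", 0))
        for case in suite.iter("testcase"):
            if case.find("failure") is not None:
                failed.append(case.get("name", "?"))
    print("%s: %d tests, %d failures, %d skipped, %d seconds"
          % (label, tests, failures, skipped, int(seconds)))
    if failed:
        print("  Failures: %s" % " ".join(failed))

summarize("results.xml", "ext4/4k")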

So I'm not opposed to python, but I will note that if you are using
modules from the Python Package Index, and they are modules which are
not packaged by your distribution (so you're using pip or easy_install
to pull them off the network), it does make doing hermetic builds from
trusted sources to be a bit trickier.

If you have a secops team who wants to know the provenance of software
which gets thrown into production data centers (and automated downloading
from random external sources w/o any code review makes them break out
in hives), use of PyPI adds a new wrinkle.  It's not impossible to
solve, by any means, but it's something to consider.

- Ted


Re: [ANNOUNCE] fsperf: a simple fs/block performance testing framework

2017-10-08 Thread Josef Bacik
On Mon, Oct 09, 2017 at 11:51:37AM +1100, Dave Chinner wrote:
> On Fri, Oct 06, 2017 at 05:09:57PM -0400, Josef Bacik wrote:
> > Hello,
> > 
> > One thing that comes up a lot every LSF is the fact that we have no general 
> > way
> > that we do performance testing.  Every fs developer has a set of scripts or
> > things that they run with varying degrees of consistency, but nothing 
> > central
> > that we all use.  I for one am getting tired of finding regressions when we 
> > are
> > deploying new kernels internally, so I wired this thing up to try and 
> > address
> > this need.
> > 
> > We all hate convoluted setups, the more brain power we have to put in to 
> > setting
> > something up the less likely we are to use it, so I took the xfstests 
> > approach
> > of making it relatively simple to get running and relatively easy to add new
> > tests.  For right now the only thing this framework does is run fio 
> > scripts.  I
> > chose fio because it already gathers loads of performance data about it's 
> > runs.
> > We have everything we need there, latency, bandwidth, cpu time, and all 
> > broken
> > down by reads, writes, and trims.  I figure most of us are familiar enough 
> > with
> > fio and how it works to make it relatively easy to add new tests to the
> > framework.
> > 
> > I've posted my code up on github, you can get it here
> > 
> > https://github.com/josefbacik/fsperf
> > 
> > All (well most) of the results from fio are stored in a local sqlite 
> > database.
> > Right now the comparison stuff is very crude, it simply checks against the
> > previous run and it only checks a few of the keys by default.  You can check
> > latency if you want, but while writing this stuff up it seemed that latency 
> > was
> > too variable from run to run to be useful in a "did my thing regress or 
> > improve"
> > sort of way.
> > 
> > The configuration is brain dead simple, the README has examples.  All you 
> > need
> > to do is make your local.cfg, run ./setup and then run ./fsperf and you are 
> > good
> > to go.
> 
> Why re-invent the test infrastructure? Why not just make it a
> tests/perf subdir in fstests?
> 

Probably should have led with that shouldn't I have?  There's nothing keeping me
from doing it, but I didn't want to try and shoehorn in a python thing into
fstests.  I need python to do the sqlite and the json parsing to dump into the
sqlite database.

Now if you (and others) are not opposed to this being dropped into tests/perf
then I'll work that up.  But it's definitely going to need to be done in python.
I know you yourself have said you aren't opposed to using python in the past, so
if that's still the case then I can definitely wire it all up.

> > The plan is to add lots of workloads as we discover regressions and such.  
> > We
> > don't want anything that takes too long to run otherwise people won't run 
> > this,
> > so the existing tests don't take much longer than a few minutes each.  I 
> > will be
> > adding some more comparison options so you can compare against averages of 
> > all
> > previous runs and such.
> 
> Yup, that fits exactly into what fstests is for... :P
> 
> Integrating into fstests means it will be immediately available to
> all fs developers, it'll run on everything that everyone already has
> setup for filesystem testing, and it will have familiar mkfs/mount
> option setup behaviour so there's no new hoops for everyone to jump
> through to run it...
> 

TBF I specifically made it as easy as possible because I know we all hate trying
to learn new shit.  I figured this was different enough to warrant a separate
project, especially since I'm going to add block device jobs so Jens can test
block layer things.  If we all agree we'd rather see this in fstests then I'm
happy to do that too.  Thanks,

Josef


Re: [ANNOUNCE] fsperf: a simple fs/block performance testing framework

2017-10-08 Thread Dave Chinner
On Fri, Oct 06, 2017 at 05:09:57PM -0400, Josef Bacik wrote:
> Hello,
> 
> One thing that comes up a lot every LSF is the fact that we have no general 
> way
> that we do performance testing.  Every fs developer has a set of scripts or
> things that they run with varying degrees of consistency, but nothing central
> that we all use.  I for one am getting tired of finding regressions when we 
> are
> deploying new kernels internally, so I wired this thing up to try and address
> this need.
> 
> We all hate convoluted setups, the more brain power we have to put in to 
> setting
> something up the less likely we are to use it, so I took the xfstests approach
> of making it relatively simple to get running and relatively easy to add new
> tests.  For right now the only thing this framework does is run fio scripts.  
> I
> chose fio because it already gathers loads of performance data about it's 
> runs.
> We have everything we need there, latency, bandwidth, cpu time, and all broken
> down by reads, writes, and trims.  I figure most of us are familiar enough 
> with
> fio and how it works to make it relatively easy to add new tests to the
> framework.
> 
> I've posted my code up on github, you can get it here
> 
> https://github.com/josefbacik/fsperf
> 
> All (well most) of the results from fio are stored in a local sqlite database.
> Right now the comparison stuff is very crude, it simply checks against the
> previous run and it only checks a few of the keys by default.  You can check
> latency if you want, but while writing this stuff up it seemed that latency 
> was
> too variable from run to run to be useful in a "did my thing regress or 
> improve"
> sort of way.
> 
> The configuration is brain dead simple, the README has examples.  All you need
> to do is make your local.cfg, run ./setup and then run ./fsperf and you are 
> good
> to go.

Why re-invent the test infrastructure? Why not just make it a
tests/perf subdir in fstests?

> The plan is to add lots of workloads as we discover regressions and such.  We
> don't want anything that takes too long to run otherwise people won't run 
> this,
> so the existing tests don't take much longer than a few minutes each.  I will 
> be
> adding some more comparison options so you can compare against averages of all
> previous runs and such.

Yup, that fits exactly into what fstests is for... :P

Integrating into fstests means it will be immediately available to
all fs developers, it'll run on everything that everyone already has
setup for filesystem testing, and it will have familiar mkfs/mount
option setup behaviour so there's no new hoops for everyone to jump
through to run it...

Cheers,

Dave.
-- 
Dave Chinner
da...@fromorbit.com


[ANNOUNCE] fsperf: a simple fs/block performance testing framework

2017-10-06 Thread Josef Bacik
Hello,

One thing that comes up a lot every LSF is the fact that we have no general way
of doing performance testing.  Every fs developer has a set of scripts or
things that they run with varying degrees of consistency, but nothing central
that we all use.  I for one am getting tired of finding regressions when we are
deploying new kernels internally, so I wired this thing up to try and address
this need.

We all hate convoluted setups: the more brain power we have to put into setting
something up, the less likely we are to use it, so I took the xfstests approach
of making it relatively simple to get running and relatively easy to add new
tests.  For right now the only thing this framework does is run fio scripts.  I
chose fio because it already gathers loads of performance data about its runs.
We have everything we need there, latency, bandwidth, cpu time, and all broken
down by reads, writes, and trims.  I figure most of us are familiar enough with
fio and how it works to make it relatively easy to add new tests to the
framework.

I've posted my code up on github, you can get it here

https://github.com/josefbacik/fsperf

All (well most) of the results from fio are stored in a local sqlite database.
Right now the comparison stuff is very crude: it simply checks against the
previous run, and it only checks a few of the keys by default.  You can check
latency if you want, but while writing this stuff up it seemed that latency was
too variable from run to run to be useful in a "did my thing regress or improve"
sort of way.
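
To give a feel for what the comparison amounts to, here's a rough sketch of the
idea (the table and column names are illustrative, not the real schema):

import sqlite3

def regressed(db_path, test, threshold=0.05):
    # Fetch the two most recent bandwidth numbers recorded for this test
    # and flag a regression if the newest one dropped by more than the
    # threshold.
    db = sqlite3.connect(db_path)
    rows = db.execute("SELECT bw FROM results WHERE test = ? "
                      "ORDER BY rowid DESC LIMIT 2", (test,)).fetchall()
    db.close()
    if len(rows) < 2:
        return False          # nothing to compare against yet
    current, previous = rows[0][0], rows[1][0]
    if previous and (previous - current) / float(previous) > threshold:
        print("%s: bw regressed %.1f%% (%s -> %s)"
              % (test, 100.0 * (previous - current) / previous,
                 previous, current))
        return True
    return False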

The configuration is brain dead simple, the README has examples.  All you need
to do is make your local.cfg, run ./setup and then run ./fsperf and you are good
to go.

The plan is to add lots of workloads as we discover regressions and such.  We
don't want anything that takes too long to run otherwise people won't run this,
so the existing tests don't take much longer than a few minutes each.  I will be
adding some more comparison options so you can compare against averages of all
previous runs and such.

Another future goal is to parse the sqlite database and generate graphs of all
runs for the tests so we can visualize changes over time.  This is where the
latency measurements will be more useful so we can spot patterns rather than
worrying about test to test variances.
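
The graphing itself can stay out of the tool; dumping the per-run numbers is
enough for whatever plotting setup people already have.  A rough sketch, again
with an illustrative schema:

import csv
import sqlite3

def dump_series(db_path, test, out_path):
    # One row per recorded run, oldest first, so the series can be
    # graphed with any external plotting tool.
    db = sqlite3.connect(db_path)
    rows = db.execute("SELECT rowid, bw, iops FROM results "
                      "WHERE test = ? ORDER BY rowid", (test,)).fetchall()
    db.close()
    with open(out_path, "w") as f:
        writer = csv.writer(f)
        writer.writerow(["run", "bw", "iops"])
        writer.writerows(rows)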

Please let me know if you have any feedback.  I'll take github pull requests for
people who like that workflow, but emailed patches work as well.  Thanks,

Josef


[PATCH 2/3 RFC v2] selftests/net: update socket test to use new testing framework

2013-04-25 Thread Alexandru Copot
Signed-off-by: Alexandru Copot 
Cc: Daniel Baluta 
---
 tools/testing/selftests/net/Makefile |  14 +++--
 tools/testing/selftests/net/socket.c | 108 +--
 2 files changed, 88 insertions(+), 34 deletions(-)

diff --git a/tools/testing/selftests/net/Makefile 
b/tools/testing/selftests/net/Makefile
index 750512b..7984f80 100644
--- a/tools/testing/selftests/net/Makefile
+++ b/tools/testing/selftests/net/Makefile
@@ -3,17 +3,23 @@
 CC = $(CROSS_COMPILE)gcc
 CFLAGS = -Wall -O2 -g
 
-CFLAGS += -I../../../../usr/include/
+CFLAGS += -I../../../../usr/include/ -I../lib
 
 NET_PROGS = socket psock_fanout psock_tpacket
 
+LIB = ../lib/selftests.a
+
 all: $(NET_PROGS)
-%: %.c
-   $(CC) $(CFLAGS) -o $@ $^
+
+socket: socket.o $(LIB)
+
+%.o: %.c
+   $(CC) $(CFLAGS) -c -o $@ $^
+
 
 run_tests: all
@/bin/sh ./run_netsocktests || echo "sockettests: [FAIL]"
@/bin/sh ./run_afpackettests || echo "afpackettests: [FAIL]"
 
 clean:
-   $(RM) $(NET_PROGS)
+   $(RM) $(NET_PROGS) *.o
diff --git a/tools/testing/selftests/net/socket.c 
b/tools/testing/selftests/net/socket.c
index 0f227f2..198a57b 100644
--- a/tools/testing/selftests/net/socket.c
+++ b/tools/testing/selftests/net/socket.c
@@ -6,6 +6,8 @@
 #include 
 #include 
 
+#include "selftests.h"
+
 struct socket_testcase {
int domain;
int type;
@@ -22,7 +24,7 @@ struct socket_testcase {
int nosupport_ok;
 };
 
-static struct socket_testcase tests[] = {
+static struct socket_testcase tests_inet[] = {
{ AF_MAX,  0,   0,   -EAFNOSUPPORT,0 },
{ AF_INET, SOCK_STREAM, IPPROTO_TCP, 0,1  },
{ AF_INET, SOCK_DGRAM,  IPPROTO_TCP, -EPROTONOSUPPORT, 1  },
@@ -30,58 +32,104 @@ static struct socket_testcase tests[] = {
{ AF_INET, SOCK_STREAM, IPPROTO_UDP, -EPROTONOSUPPORT, 1  },
 };
 
+static struct socket_testcase tests_inet6[] = {
+   { AF_INET6, SOCK_STREAM, IPPROTO_TCP, 0,1  },
+   { AF_INET6, SOCK_DGRAM,  IPPROTO_TCP, -EPROTONOSUPPORT, 1  },
+   { AF_INET6, SOCK_DGRAM,  IPPROTO_UDP, 0,1  },
+   { AF_INET6, SOCK_STREAM, IPPROTO_UDP, -EPROTONOSUPPORT, 1  },
+};
+
+static struct socket_testcase tests_unix[] = {
+   { AF_UNIX, SOCK_STREAM,0,   0,1  },
+   { AF_UNIX, SOCK_DGRAM, 0,   0,1  },
+   { AF_UNIX, SOCK_SEQPACKET, 0,   0,1  },
+   { AF_UNIX, SOCK_STREAM,PF_UNIX, 0,1  },
+   { AF_UNIX, SOCK_DGRAM, PF_UNIX, 0,1  },
+   { AF_UNIX, SOCK_SEQPACKET, PF_UNIX, 0,1  },
+   { AF_UNIX, SOCK_STREAM,IPPROTO_TCP, -EPROTONOSUPPORT, 1  },
+   { AF_UNIX, SOCK_STREAM,IPPROTO_UDP, -EPROTONOSUPPORT, 1  },
+   { AF_UNIX, SOCK_DCCP,  0,   -ESOCKTNOSUPPORT, 1  },
+};
+
 #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
 #define ERR_STRING_SZ  64
 
-static int run_tests(void)
+static test_result_t my_run_testcase(void *arg)
 {
+   struct socket_testcase *s = (struct socket_testcase*)arg;
+   int fd;
char err_string1[ERR_STRING_SZ];
char err_string2[ERR_STRING_SZ];
-   int i, err;
-
-   err = 0;
-   for (i = 0; i < ARRAY_SIZE(tests); i++) {
-   struct socket_testcase *s = &tests[i];
-   int fd;
+   test_result_t ret;
 
-   fd = socket(s->domain, s->type, s->protocol);
-   if (fd < 0) {
-   if (s->nosupport_ok &&
-   errno == EAFNOSUPPORT)
-   continue;
+   ret = TEST_PASS;
+   fd = socket(s->domain, s->type, s->protocol);
+   if (fd < 0) {
+   pass_if(s->nosupport_ok && errno == EAFNOSUPPORT, out, ret);
 
-   if (s->expect < 0 &&
-   errno == -s->expect)
-   continue;
+   pass_if(s->expect < 0 && errno == -s->expect, out, ret);
 
-   strerror_r(-s->expect, err_string1, ERR_STRING_SZ);
-   strerror_r(errno, err_string2, ERR_STRING_SZ);
+   strerror_r(-s->expect, err_string1, ERR_STRING_SZ);
+   strerror_r(errno, err_string2, ERR_STRING_SZ);
 
-   fprintf(stderr, "socket(%d, %d, %d) expected "
+   fprintf(stderr, "socket(%d, %d, %d) expected "
"err (%s) got (%s)\n",
s->domain, s->type, s->protocol,
err_string1, err_string2);
 
-   err = -1;
-   break;
-   } else {
-   close(fd);
+   return TEST_FAIL;
+   } else {
+   close(fd);
 
-   if (s->expect < 0) {
-   strerror_r(errno, err_string1, ERR_STRING_SZ);
+ 

Re: Testing framework

2007-04-28 Thread Pavel Machek
Hi!

> For some time I had been working on this file system test framework.
> Now I have a implementation for the same and below is the explanation.
> Any comments are welcome.
> 
> Introduction:
> The testing tools and benchmarks available around do not take into
> account the repair and recovery aspects of file systems. The test
> framework described here focuses on repair and recovery capabilities
> of file systems. Since most file systems use 'fsck' to recover from
> file system inconsistencies, the test framework characterizes file
> systems based on outcomes of running 'fsck'.

Thanks for your work.

> Putting it all together:
> The Corruption, Repair and Comparison phases could be 
> repeated a
> number of times (each repetition is called an iteration) 
> before the
> summary of that test run is prepared.
> 
> TODO:
> Account for files in the lost+found directory during the 
> comparison phase.
> Support for other file systems (only ext2 is supported 
> currently)

Yes, please. ext2 does really well in the fsck area; unfortunately, some
other filesystems (vfat, reiserfs) do not work that well.

Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Testing framework

2007-04-25 Thread Karuna sagar K

On 4/23/07, Avishay Traeger <[EMAIL PROTECTED]> wrote:

On Mon, 2007-04-23 at 02:16 +0530, Karuna sagar K wrote:


You may want to check out the paper "EXPLODE: A Lightweight, General
System for Finding Serious Storage System Errors" from OSDI 2006 (if you
haven't already).  The idea sounds very similar to me, although I
haven't read all the details of your proposal.


EXPLODE is more of a generic tool, i.e. it is used to find a larger set
of errors/bugs in file systems, whereas the Test framework focuses
on the repair of file systems.

The Test framework is focused on the repairability of file
systems: it doesn't use the model-checking concept, it uses a replayable
corruption mechanism, and it is a user-space implementation. That is the
reason why it is not similar to EXPLODE.



Avishay




Thanks,
Karuna
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Testing framework

2007-04-23 Thread Ric Wheeler

Avishay Traeger wrote:

On Mon, 2007-04-23 at 02:16 +0530, Karuna sagar K wrote:

For some time I had been working on this file system test framework.
Now I have a implementation for the same and below is the explanation.
Any comments are welcome.




You may want to check out the paper "EXPLODE: A Lightweight, General
System for Finding Serious Storage System Errors" from OSDI 2006 (if you
haven't already).  The idea sounds very similar to me, although I
haven't read all the details of your proposal.

Avishay



It would also be interesting to use the disk error injection patches 
that Mark Lord sent out recently to introduce real sector level 
corruption.  When your file systems are large enough and old enough, 
getting bad sectors and IO errors during an fsck stresses things in 
interesting ways ;-)


ric
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Testing framework

2007-04-23 Thread Avishay Traeger
On Mon, 2007-04-23 at 02:16 +0530, Karuna sagar K wrote:
> For some time I had been working on this file system test framework.
> Now I have a implementation for the same and below is the explanation.
> Any comments are welcome.



You may want to check out the paper "EXPLODE: A Lightweight, General
System for Finding Serious Storage System Errors" from OSDI 2006 (if you
haven't already).  The idea sounds very similar to me, although I
haven't read all the details of your proposal.

Avishay

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Testing framework

2007-04-23 Thread Karuna sagar K

On 4/23/07, Kalpak Shah <[EMAIL PROTECTED]> wrote:

On Mon, 2007-04-23 at 02:16 +0530, Karuna sagar K wrote:


.

The file system is looked upon as a set of blocks (more precisely
metadata blocks). We randomly choose from this set of blocks to
corrupt. Hence we would be able to overcome the deficiency of the
previous approach. However this approach makes it difficult to have a
replayable corruption. Further thought about this approach has to be
given.


Fill a test filesystem with data and save it. Corrupt it by copying a
chunk of data from random locations A to B. Save positions A and B so
that you can reproduce the corruption.



Hey, that's a nice idea :). But this wouldn't reproduce the same
corruption, right? Because, say, on the first run of the tool there is
metadata stored at locations A and B, and then on the second run there
may be user data present there; I mean, the allocation may be different.


Or corrupt random bits (ideally in metadata blocks) and maintain the
list of the bit numbers for reproducing the corruption.



.

The corrupted file system is repaired and recovered with 'fsck' or any
other tools; this phase considers the repair and recovery action on
the file system as a black box. The time taken to repair by the tool
is measured


I see that you are running fsck just once on the test filesystem. It
might be a good idea to run it twice and if second fsck does not find
the filesystem to be completely clean that means it is a bug in fsck.


You are right. Will modify that.







..

State of the either file system is stored, which may be huge, time
consuming and not necessary. So, we could have better ways of storing
the state.


Also, people may want to test with different mount options, so something
like "mount -t $fstype -o loop,$MOUNT_OPTIONS $imgname $mountpt" may be
useful. Similarly it may also be useful to have MKFS_OPTIONS while
formatting the filesystem.



Right. I didn't think of that. Will look into it.


Thanks,
Kalpak.

>
> Comments are welcome!!
>
> Thanks,
> Karuna




Thanks,
Karuna
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Testing framework

2007-04-23 Thread Kalpak Shah
On Mon, 2007-04-23 at 02:16 +0530, Karuna sagar K wrote:
> Hi,
> 
> For some time I had been working on this file system test framework.
> Now I have a implementation for the same and below is the explanation.
> Any comments are welcome.
> 
> Introduction:
> The testing tools and benchmarks available around do not take into
> account the repair and recovery aspects of file systems. The test
> framework described here focuses on repair and recovery capabilities
> of file systems. Since most file systems use 'fsck' to recover from
> file system inconsistencies, the test framework characterizes file
> systems based on outcomes of running 'fsck'.



> Higher level perspective/approach:
> In this approach the file system is viewed as a tree of nodes, where
> nodes are either files or directories. The metadata information
> corresponding to some randomly chosen nodes of the tree are corrupted.
> Nodes which are corrupted are marked or recorded to be able to replay
> later. This file system is called source file system while the file
> system on which we need to replay the corruption is called target file
> system. The assumption is that the target file system contains a set
> of files and directories which is a superset of that in the source
> file system. Hence to replay the corruption we need point out which
> nodes in the source file system were corrupted in the source file
> system and corrupt the corresponding nodes in the target file system.
> 
> A major disadvantage with this approach is that on-disk structures
> (like superblocks, block group descriptors, etc.) are not considered
> for corruption.
> 
> Lower level perspective/approach:
> The file system is looked upon as a set of blocks (more precisely
> metadata blocks). We randomly choose from this set of blocks to
> corrupt. Hence we would be able to overcome the deficiency of the
> previous approach. However this approach makes it difficult to have a
> replayable corruption. Further thought about this approach has to be
> given.
> 

Fill a test filesystem with data and save it. Corrupt it by copying a
chunk of data from random locations A to B. Save positions A and B so
that you can reproduce the corruption. 

Or corrupt random bits (ideally in metadata blocks) and maintain the
list of the bit numbers for reproducing the corruption.
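
A minimal sketch of this record-and-replay idea (the image path, log format and helper name below are illustrative assumptions, not part of the proposal):

/* Copy 'len' bytes from offset 'from' to offset 'to' inside the filesystem
 * image and append the triple to a log, so that exactly the same corruption
 * can be applied again on a later run.
 */
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static int corrupt_and_log(const char *image, const char *logfile,
			   off_t from, off_t to, size_t len)
{
	char *buf = malloc(len);
	int fd = open(image, O_RDWR);
	FILE *log = fopen(logfile, "a");
	int ret = -1;

	if (!buf || fd < 0 || !log)
		goto out;
	if (pread(fd, buf, len, from) != (ssize_t)len)
		goto out;
	if (pwrite(fd, buf, len, to) != (ssize_t)len)
		goto out;

	/* Record (from, to, len) so the corruption is replayable. */
	fprintf(log, "%lld %lld %zu\n", (long long)from, (long long)to, len);
	ret = 0;
out:
	if (log)
		fclose(log);
	if (fd >= 0)
		close(fd);
	free(buf);
	return ret;
}

A replay pass would simply read the triples back from the log and repeat the same pread()/pwrite() pairs on a freshly prepared image.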

> We could have a blend of both the approaches in the program to
> compromise between corruption and replayability.
> 
> Repair Phase:
> The corrupted file system is repaired and recovered with 'fsck' or any
> other tools; this phase considers the repair and recovery action on
> the file system as a black box. The time taken to repair by the tool
> is measured.

I see that you are running fsck just once on the test filesystem. It
might be a good idea to run it twice, and if the second fsck does not
find the filesystem to be completely clean, that means there is a bug
in fsck.



> Summary Phase:
> This is the final phase in the model. A report file is prepared which
> summarizes the result of this test run. The summary contains:
> 
> Average time taken for recovery
> Number of files lost at the end of each iteration
> Number of files with metadata corruption at the end of each iteration
> Number of files with data corruption at the end of each iteration
> Number of files lost and found at the end of each iteration
> 
> Putting it all together:
> The Corruption, Repair and Comparison phases could be repeated a
> number of times (each repetition is called an iteration) before the
> summary of that test run is prepared.
> 
> TODO:
> Account for files in the lost+found directory during the comparison phase.
> Support for other file systems (only ext2 is supported currently)
> State of the either file system is stored, which may be huge, time
> consuming and not necessary. So, we could have better ways of storing
> the state.

Also, people may want to test with different mount options, so something
like "mount -t $fstype -o loop,$MOUNT_OPTIONS $imgname $mountpt" may be
useful. Similarly it may also be useful to have MKFS_OPTIONS while
formatting the filesystem.

Thanks,
Kalpak.

> 
> Comments are welcome!!
> 
> Thanks,
> Karuna

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Testing framework

2007-04-22 Thread Karuna sagar K

Hi,

For some time I have been working on this file system test framework.
Now I have an implementation of it, and below is the explanation.
Any comments are welcome.

Introduction:
The testing tools and benchmarks currently available do not take into
account the repair and recovery aspects of file systems. The test
framework described here focuses on the repair and recovery capabilities
of file systems. Since most file systems use 'fsck' to recover from
file system inconsistencies, the framework characterizes file
systems based on the outcomes of running 'fsck'.

Overview:
The model can be described briefly as: prepare a file system, record
its state, corrupt it, run repair and recovery tools, and finally
compare and report the status of the recovered file system against its
initial state.

Prepare Phase:
This is the first phase in the model. Here we prepare a file system on
which the subsequent phases are carried out. A new file system image is
created with the specified name, 'mkfs' is run on this image, and the
file system is then populated sufficiently and aged. This state of
the file system is considered the ideal state.

Corruption Phase:
The file system prepared in the prepare phase is corrupted to simulate
a system crash or, more generally, an inconsistency in the file system.
Obviously we are more interested in corrupting the metadata. Random
corruption would provide results like those of fs_mutator or fs_fuzzer.
However, the corruption would then vary between test runs, which would
make comparisons between file systems unfair and tedious. So we would
like a mechanism where the corruption is replayable, ensuring that
almost the same corruption is reproduced across test runs. The
techniques for corruption are:

Higher level perspective/approach:
In this approach the file system is viewed as a tree of nodes, where
nodes are either files or directories. The metadata information
corresponding to some randomly chosen nodes of the tree is corrupted.
Nodes which are corrupted are marked or recorded so that they can be
replayed later. This file system is called the source file system,
while the file system on which we need to replay the corruption is
called the target file system. The assumption is that the target file
system contains a set
of files and directories which is a superset of that in the source
file system. Hence, to replay the corruption, we need to point out
which nodes were corrupted in the source file system and corrupt the
corresponding nodes in the target file system.

A major disadvantage with this approach is that on-disk structures
(like superblocks, block group descriptors, etc.) are not considered
for corruption.

Lower level perspective/approach:
The file system is looked upon as a set of blocks (more precisely
metadata blocks). We randomly choose from this set of blocks to
corrupt. Hence we would be able to overcome the deficiency of the
previous approach. However, this approach makes it difficult to have
replayable corruption. Further thought needs to be given to this
approach.

We could blend both approaches in the program to strike a compromise
between corruption coverage and replayability.

Repair Phase:
The corrupted file system is repaired and recovered with 'fsck' or any
other tool; this phase treats the repair and recovery action on the
file system as a black box. The time the tool takes to repair the file
system is measured.
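
As a rough sketch of this phase (assuming e2fsck as the repair tool and a loopback image file; the names are illustrative only), the repair time can be measured around a forced fsck run, with a second run afterwards to confirm the filesystem is really clean, following the suggestion made elsewhere in the thread to run fsck twice:

/* Time one repair run of e2fsck on the image, then run it again: a second
 * run that is not clean points at a bug in fsck itself.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static int run_fsck(const char *image)
{
	char cmd[256];

	snprintf(cmd, sizeof(cmd), "e2fsck -f -y %s", image);
	return system(cmd);
}

int main(void)
{
	const char *image = "test.img";	/* hypothetical image name */
	struct timespec start, end;
	double secs;

	clock_gettime(CLOCK_MONOTONIC, &start);
	run_fsck(image);			/* timed repair run */
	clock_gettime(CLOCK_MONOTONIC, &end);

	secs = (end.tv_sec - start.tv_sec) +
	       (end.tv_nsec - start.tv_nsec) / 1e9;
	printf("repair took %.3f seconds\n", secs);

	/* A clean filesystem should make the second run exit with status 0. */
	if (run_fsck(image) != 0)
		printf("second fsck was not clean: possible fsck bug\n");

	return 0;
}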

Comparison Phase:
The current state of the file system is compared with the ideal state.
The metadata of the file system is checked against that of the ideal
file system, and the outcome is noted in the summary for this test run.
If the repair tool used is 100% effective, then the current state of
the file system should be exactly the same as the ideal state. Simply
checking for equality wouldn't be right, because it doesn't take
lost+found files into account. Hence we need to check, node by node,
each node in the ideal state of the file system.

State Record:
The comparison phase requires that the ideal state of the file system
be known. Replicating the whole file system would eat up a lot of disk
space, and storing the state of the file system in memory would be
costly in the case of huge file systems. So we need to store the state
of the file system on disk in a way that doesn't take up a lot of
space. We record the metadata information and store it in a file.
One approach is to replicate the metadata blocks of the source file
system and store the replica blocks in a single file called the state
file. Additional metadata, such as checksums of the data blocks, can be
stored in the same state file. However, this may store some unnecessary
metadata in the state file and hence swell it up for huge source file
systems. So, instead of storing the metadata blocks themselves, we
would summarize the information in them before storing it in the state 

Testing framework

2006-12-28 Thread Karuna sagar k

Hi,

I am working on a testing framework for file systems focusing on
repair and recovery areas. Right now, I have been timing fsck and
trying to determine the effectiveness of fsck. The idea that I have is
below.

In abstract terms, I create a file system (ideal state), corrupt it,
run fsck on it and compare the recovered state with the ideal one. So
the whole process is divided into phases:

1. Prepare Phase - a new file system is created, populated and aged.
This state of the file system is consistent and is considered the ideal
state. This state is to be recorded for later comparisons (during the
comparison phase).

2. Corruption Phase - the file system on the disk is corrupted. We
should be able to reproduce such corruption on different test runs
(probably not very accurately). This way we would be able to compare
the file systems in a better way.

3. Repair Phase - fsck is run to repair and recover the disk to a
consistent state. The time taken by fsck is determined here.

4. Comparison Phase - the current state (recovered state) is compared
(a logical comparison) with the ideal state. The comparison tells what
fsck recovered, and how close the two states are.

Apart from this, we also require a component to record the state of
the file system. For this we construct a tree (which represents the
state of the file system) where each node stores some info about a
file or directory in the file system, and the tree structure records
the parent-child relationships among the files and directories.
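
For illustration only, one possible shape for such a node (the exact fields are an assumption; the node only needs to hold whatever the comparison phase will check later):

#include <sys/types.h>

/* Sketch of a state-tree node: one node per file or directory, holding just
 * enough metadata for the later comparison phase.
 */
struct fs_state_node {
	char *name;			/* path component */
	mode_t mode;			/* file type and permissions */
	uid_t uid;
	gid_t gid;
	off_t size;
	unsigned int data_csum;		/* checksum over the file's data */
	struct fs_state_node *parent;
	struct fs_state_node *children;	/* first child */
	struct fs_state_node *next;	/* next sibling */
};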

Currently I am focusing on the corruption and comparison phases:

Corruption Phase:
Focus - the corruption should be reproducible. This gives control
over the comparison between test runs and between file systems. I am
assuming here that different test runs would deal with the same files.

I have come across two approaches here:

Approach 1 -
1. Among the files present on the disk, we randomly choose a few files.
2. For each such file, we then find the metadata block info
(inode block).
3. We seek to such blocks and corrupt them (the corruption is done by
writing random data or some predetermined pattern to the block).
4. Such files are noted so that similar corruption can be reproduced.

The above approach looks at the file system from a very abstract view.
But there may be file-system-specific data structures on the disk (e.g.
group descriptors in the case of ext2). These are not handled directly
by this approach.

Approach 2 - We pick up metadata blocks from the disk and form a
logical disk structure containing just these metadata blocks. The
blocks on the logical disk map onto the physical disk. We pick blocks
from this set at random and corrupt them.

Comparison Phase:
We determine the logical equivalence between the recovered and ideal
states of the disk. For example, if a file was lost during corruption
and fsck recovered it into the lost+found directory, we have to
recognize this as logically equivalent and not report that the file is
lost.
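
A sketch of that node-by-node check, reusing the fs_state_node shape from the sketch above (lookup_path(), find_in_lost_found() and same_metadata() are hypothetical helpers):

/* Classify one node of the ideal tree against the recovered tree.  A file
 * that is missing at its original path but present in lost+found (which
 * find_in_lost_found() might match by size and data checksum) counts as
 * recovered into lost+found, not as lost.
 */
struct fs_state_node;	/* defined in the earlier state-tree sketch */

struct compare_stats {
	unsigned long lost;
	unsigned long lost_and_found;
	unsigned long meta_corrupt;
};

extern struct fs_state_node *lookup_path(struct fs_state_node *root,
					 const struct fs_state_node *ideal);
extern struct fs_state_node *find_in_lost_found(struct fs_state_node *root,
						const struct fs_state_node *ideal);
extern int same_metadata(const struct fs_state_node *a,
			 const struct fs_state_node *b);

static void compare_node(struct fs_state_node *recovered_root,
			 const struct fs_state_node *ideal,
			 struct compare_stats *st)
{
	struct fs_state_node *at_path = lookup_path(recovered_root, ideal);
	struct fs_state_node *found = at_path;

	if (!found)
		found = find_in_lost_found(recovered_root, ideal);

	if (!found)
		st->lost++;			/* really gone */
	else if (!at_path)
		st->lost_and_found++;		/* recovered into lost+found */
	else if (!same_metadata(found, ideal))
		st->meta_corrupt++;
}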

Right now I have a basic implementation of the above with random
corruption (using fsfuzzer logic).

Any suggestions?

Thanks,
Karuna
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

