On Wed, Mar 29, 2017 at 2:04 PM, Jeff Moyer <[email protected]> wrote:
> Dan Williams <[email protected]> writes:
>
>>>>>>> Can we stop with this kernel version checking, please?  Test to see if
>>>>>>> you can create a device dax instance.  If not, skip the test.  If so,
>>>>>>> and if you have a kernel that isn't fixed, so be it, you'll get
>>>>>>> failures.
>>>>>>
>>>>>> I'd rather not. It helps me keep track of what went in where. If you
>>>>>> want to run all the tests on a random kernel just do:
>>>>>>
>>>>>>     KVER="4.11.0" make check
>>>>>
>>>>> This, of course, breaks completely with distro kernels.
>>>>
>>>> Why does this break distro kernels? The KVER variable overrides "uname -r"
>>>
>>> Because some features may not exist in the distro kernels.  It's the
>>> same problem you outline with xfstests, below.
>>>
>>
>> Right, but you started off with suggesting that just running the test
>> and failing was ok, and that's the behavior you get with KVER=.
>
> Well, the goal was to be somewhat smart about it, by not even trying to
> test things that aren't implemented or configured into the current
> kernel (such as device dax, for example).  Upon thinking about it
> further, I don't think that gets us very far.  However, that does raise
> a use case that is not distro-specific.  If you don't enable device dax,
> your test suite will still try to run those tests.  ;-)

The other part of the equation is that I'm lazy and don't want to do
the extra work of validating the environment for each test. So we just
do a quick version check, and if a test fails you get to figure out
which configuration you failed to enable. The most common case is
failing to install the nfit_test modules, and for that one we do have
a capability check.
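To make that concrete, here's a rough sketch of the pattern (the
version threshold, function names, and exit-code handling below are
illustrative, not ndctl's actual test harness code):

```shell
#!/bin/sh
# Illustrative sketch of the "version gate plus one capability check"
# pattern discussed above. Names and the 4.7.0 threshold are made up.

# true if kernel version $1 >= $2 (relies on GNU "sort -V")
kver_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# KVER overrides "uname -r" to force-run tests on a random kernel.
KVER="${KVER:-$(uname -r)}"

check_prereqs() {
    # Version gate: skip tests for fixes this kernel doesn't have.
    if ! kver_ge "$KVER" "4.7.0"; then
        echo "SKIP: test requires kernel >= 4.7.0 (have $KVER)"
        return 77   # automake's "skipped test" exit code
    fi
    # Capability check for the one common misconfiguration:
    # nfit_test modules not installed.
    if ! modprobe -n nfit_test >/dev/null 2>&1; then
        echo "SKIP: nfit_test module not available"
        return 77
    fi
    return 0
}

check_prereqs || echo "prerequisites not met, would skip"
```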

>>>>> You don't see this kind of checking in xfstests, for example.  git
>>>>> helps you keep track of what changes went in where (see git describe
>>>>> --contains).  It's weird to track that via your test harness.  So, I
>>>>> would definitely prefer to move to a model where we check for
>>>>> features instead of kernel versions.
>>>>
>>>> I see this as a deficiency of xfstests. We have had to go through and
>>>> qualify and track each xfstest and why it may fail with random
>>>> combinations of kernel, xfsprogs, or e2fsprogs versions. I'd much
>>>> prefer that upstream xfstests track the minimum versions of components
>>>> to make a given test pass so we can stop doing it out of tree.
>>>
>>> Yes, that's a good point.  I can't think of a good compromise, either.
>>
>> Maybe we can at least get our annotated blacklist upstream so other
>> people can start contributing to it?
>
> Are you referring to xfstests?  Yeah, that's a good idea.  Right now I
> just add tests to the dangerous group as I encounter known issues.  ;-)
> So, my list probably isn't very helpful in its current form.

Yes, xfstests. We have entries in our blacklist like this:

# needs xfsprogs fix
# c8dc42356142 xfs_db: fix the 'source' command when passed as a -c option
# Last checked:
# - xfsprogs-4.9.0-1.fc25.x86_64
xfs/138

# needs xfsprogs fix
# 3297e0caa25a xfs: replace xfs_mode_to_ftype table with switch statement
# Last checked:
# - xfsprogs-4.9.0-1.fc25.x86_64
xfs/348

# see "[BUG] generic/232 hangs on XFS DAX mount" thread on xfs mailing
# list
generic/232

# failed on emulated pmem without dax, may be impacted by the same fix
# as the one for generic/270. The generic/270 failure is tracked in this
# thread on the xfs mailing list: "XFS kernel BUG during generic/270
# with v4.10". Re-test on v4.11 with fa7f138 ("xfs: clear delalloc and
# cache on buffered write failure")
generic/269
generic/270
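FWIW, a file in this format can be consumed mechanically: strip the
comment annotations to get the active test names, and hand the file to
xfstests' check script with -E. A sketch (the helper function and the
paths in the usage comment are assumptions, not xfstests code):

```shell
#!/bin/sh
# Sketch: list the tests an annotated blacklist actually excludes,
# ignoring comment and blank lines. Illustrative helper only.
blacklist_tests() {
    grep -Ev '^[[:space:]]*(#|$)' "$1"
}

# Typical usage against xfstests (install path is an assumption):
#   cd /var/lib/xfstests
#   ./check -E my-blacklist.exclude -g auto
```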
_______________________________________________
Linux-nvdimm mailing list
[email protected]
https://lists.01.org/mailman/listinfo/linux-nvdimm
