On Mon, Sep 17, 2012 at 1:15 PM, Troy Benjegerdes <[email protected]> wrote:
> I'm looking to get all the low-hanging fruit with unskilled testing.
> Particularly with regressions like this:
>
> hozer@six:~/src/openafs-fuse-git/tests/fuse$ 
> /home/hozer/src/openafs-fuse-git/tests/fuse/../../src/afsd/afsd.fuse -dynroot 
> -fakestat -d -confdir /home/hozer/src/openafs-fuse-git/tests/fuse/conf 
> -cachedir /home/hozer/src/openafs-fuse-git/tests/fuse/vcache -mountdir 
> /home/hozer/src/openafs-fuse-git/tests/fuse/mntdir
> FUSE library version: 2.8.6
> nullpath_ok: 0
> unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
> INIT: 7.17
> flags=0x0000047b
> max_readahead=0x00020000
> Starting AFS cache scan...found 0 non-empty cache files (0%).
> afsd: All AFS daemons started.
> Segmentation fault
>
>
> I am pretty sure this is related to the work Simon is doing on Libtool,
> and there's a 90% probability it's a 30-second 'aha', followed by a
> two-line fix, and we're back to working again.
>

I'd bet not. However....

> The code is so complicated it will take me half a day to track down what
> that two-line fix is, or work in my own isolated fork and not get updates
> as quickly. That's where unskilled smoke testing and/or automated runs
> get a LOT of mileage.

Not really. Build with debugging and get a real backtrace. That said,
since fuse is not *required* functionality in a build, yes, it's
undertested. This is why we've generally avoided code which doesn't
always build. Or at least tried to.

> It also gives people who want to learn about the codebase something simple
> and meaningful they can do, instead of waiting around for someone else to
> come up with a test plan.

>
> On Mon, Sep 17, 2012 at 11:25:36AM -0500, David Boyes wrote:
>> > How about an effort to get nightly builds of master available on as many
>> > platforms as possible, and getting thousands of bored college students to
>> > download, install, and test them?
>>
>> I think that's still overly optimistic. There are a lot of moving parts
>> here; you can't just install a package and have it do something useful. You
>> need a lot of surrounding infrastructure that involves real control of a
>> fair amount of stuff that random college students won't have. 'make
>> check' on a single machine will never give you useful testing results beyond
>> finding packaging or "smoke test" errors, which aren't all that helpful
>> overall.
>>
>> > Wouldn't that massive crowdsourced testing effort be worth the time of a
>> > single developer to make sure *some* sort of package, even if it's
>> > half-assed, gets distributed? I can't think of much of anything else that
>> > has a bigger resource multiplication factor than a 'one click install',
>> > along with some defaults to use a 'test.openafs.org' cell.
>>
>> As others have commented, unskilled testing performed without a detailed
>> test plan on software systems this complex is probably less helpful than
>> it might otherwise appear. GIGO applies here. An uncoordinated test process
>> is unlikely to produce anything useful, since there has to be a sequence of
>> coordinated tests, replacing one component at a time in a known order. I
>> can't see how crowdsourcing would help here.
>> _______________________________________________
>> OpenAFS-info mailing list
>> [email protected]
>> https://lists.openafs.org/mailman/listinfo/openafs-info



-- 
Derrick