Thanks for reading through my wall of text, Adam. :)

Adam Kennedy wrote:
Salve J. Nilsen wrote:
Let's say Joe Sysadmin wants to install the author's (a.k.a. "your") module Useful::Example, and during the test phase one of the POD tests fails.

Joe Sysadmin doesn't use modules; let's try the following.

"Joe Sysadmin wants to install the JSAN client, because his HTML/JavaScript
guys want to use some of the JavaScript modules. Joe Sysadmin doesn't know
Perl. He does not know what POD is, and has never heard of CPANTS. He will
never need to read the documentation for any dependencies of the JSAN
client."

Ok, let's try.


1) Joe's POD-analyzing module has a different/early/buggy notion of what the POD syntax is supposed to look like. This can be fixed by you in several possible ways:

Joe Sysadmin runs "sudo cpan --install JSAN". 10,000 lines of text scroll
down the screen for about 10 minutes. 9 minutes and 8,500 lines in, the POD
tests in a utility module 6 layers of recursive dependencies up the chain
fail.

Installation of that module fails, and as the CPAN recursion unwinds,
another 5 modules recursively fail.

The final summary lists 6 modules which did not install. The original reason
is 1,500 lines above this summary, at the top of many, many details about
failed tests due to the missing dependencies.

Joe Sysadmin has no idea why the installation failed. He scrolls up through
the last 1,000 lines of output before giving up and just running "sudo cpan
--install JSAN" again. It still fails, with 2,000 lines of output.

At this point, MOST people who are not Perl developers are utterly lost.
I've seen it several times in the JSAN IRC channels, as quite competent
JavaScript, Python, Ruby and ASP.Net coders come in to ask for help because
their CPAN installation fails.

Ok, I see you're describing several bugs (other than the one breaking the
install chain):

Bug #1: The module build output text is too verbose. (Hiding the detailed
        output would be useful.)
Bug #2: The module build output isn't stored anywhere accessible, or at all.
        (Keeping the module build output in a Build.log would be useful.)
Bug #3: If the build output IS stored somewhere, there's nothing telling Joe
        about this fact. (Saying at the end of the build where the Build.log
        can be found may help: "TESTS FAILED! SEE /tmp/Build.log FOR DETAILS")
Bug #4: There isn't a sufficiently clear test output summary telling Joe which
        module broke the dependency chain - so he can't look into it himself.
        (Visualizing the dependencies and showing where the chain broke may
        help. Maybe displaying the relevant dependencies the way tree(1) does?)
Bug #5: There's no simple way available to Joe to report/post the failed test
        to someone who cares. (It may help to ask whether the test failures
        should be reported, possibly resulting in the installation of
        Test::Reporter and it picking up the previous Build.log files. See the
        sketch below.)
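
To illustrate bug #5: a rough sketch of what that reporting step could look
like, using Test::Reporter's documented interface. The distribution name and
log path here are made up for the example:

    use Test::Reporter;

    # Hypothetical post-failure step: grade the build and attach the
    # saved build log as the report body.
    my $reporter = Test::Reporter->new();
    $reporter->grade('fail');
    $reporter->distribution('Useful-Example-0.01');  # made-up dist name
    $reporter->comments(
        do { local (@ARGV, $/) = '/tmp/Build.log'; <> }  # slurp the log
    );
    $reporter->send() or die $reporter->errstr();

The point isn't the exact interface, but that the user should only have to
answer "yes" once to get a useful report on its way to someone who cares.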


The author has no idea it has failed for the user, because the user does not
know how to report the fault.

This ought to be something the authors (and the community) can improve (see
bug #5).


Likewise, not only does the user not know HOW to blame the pod analyzer, but
often does not even know what POD is.

He doesn't have to know what POD is, just that there has been an error, and how
to report it. :)


But even if the author's influence over Joe Sysadmin's installation is
rather limited, it's still the author's duty to make sure Joe can know (as
best as possible) that the module is in good shape.

Surely the best way to do this is simply to not have failing tests for things that aren't related to the actual functionality of the module.

Well, in some ways, I agree with you. But sadly, no module is an island. By
running all the tests (even the ones that don't directly concern the module's
functionality), we can learn about other things too. Things like "Does
Test::Pod understand my documentation syntax?" or "Does Test::Pod::Coverage
give the results I expect?" or "Are my tests set up correctly?" or "Have I
kept my dependency requirements up to date?" or even secondary concerns like
"Does the module I use for testing POD function correctly?" or "Is the syntax
I use to describe my documentation powerful enough?"

By letting the end-user run these tests, you get a much earlier warning about
these questions (and therefore an earlier chance to find an answer to them),
but at the cost of some annoyance for the user. Because of this, I think the
feedback one can get from such tests easily outweighs any concerns from the
user about "non-essential tests failing"...

But this isn't a binary yes/no-to-POD-tests issue. There's no reason to make
this into an all-or-nothing situation. We can still let the end-user be the
master of her own world, by allowing her to run the "less essential tests"
only when she explicitly asks for it, e.g. by setting
$ENV{PERL_AUTHOR_TESTING}, or by asking during setup whether she wants to run
the author tests (with the default answer being "no".)


Now if Joe doesn't care enough about Useful::Module to send a bug report
(or send a failure report to CPANTS), one can still hope someone else does
it.

Joe has no idea what the problem is, and after screwing around for half an
hour, gives up in frustration and just manually unzips the JSAN modules.
gives up in frustration and just manually unzips the JSAN modules.

Well, if we give Joe some relevant pointers to work with, he might screw
around less. See bugs #1, #3 and #4.


---- The super-short version :) ----

Turning off syntax checking of your POD is comparable to not turning on warnings in your code. Now would you publish code developed without "use warnings;"?

Yes. Absolutely.

Every single module I write ships with use warnings disabled in the module.

I've had a number of instances in the past where spurious warnings in a web
application overflowed the log files. Nobody ever knew the warnings were
happening, and there was very little they could do about it anyway.

I worked at a place once where the previous developers had turned off all
warnings for exactly the same reason - the logfiles were filling up. This of
course resulted in several other bugs going unnoticed for quite a while, and
when I was trying to find those bugs in the code, it was definitely more
difficult to spot the relevant warnings in that stream of other warnings.

Read about "Fixing Broken Windows" for a pointer on this:

   http://en.wikipedia.org/wiki/Fixing_Broken_Windows

However, I ALWAYS run all the tests with warnings enabled and where possible
with warnings fatal.

This is a very good practice to follow.
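
For completeness - making warnings fatal in a test file is just a pragma
away:

    use strict;
    # Promote every warning to a fatal error, so a stray warning makes
    # the test fail instead of scrolling past unseen.
    use warnings FATAL => 'all';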


2) "Failures" in POD have any bearing on the use of the distribution, especially if an end-user has installed the distribution merely as a dependency and not as a developer

If that was the only kind of end-user, I'd agree - the bearing on usage would
be negligible. But it's not the only kind of end-user. I'll even be brash and
postulate that the non-developer end-user is one of the LESS important ones,
since he most likely won't be interested in, or capable of, taking part in
the development community for that module.

But that doesn't mean the author should make it difficult to install their
modules cleanly... :)

Unfortunately, this is exactly what having end users run the POD (and
similar) tests does. It makes it more difficult to install their modules
cleanly.

Yes, but it's a useful sort of difficulty. It forces us to look at the root
cause of the problem instead of hiding it. But this can only work if the
developers are informed of this difficulty. See bug #5.



3) False negatives are EVER acceptable in tests

If there's a false negative in a test, that's still a sign of a bug somewhere. Maybe in the test itself? Or the dependencies? Or the build system? Or some third-party module? Wherever it is, it's the author's job
to fix it, but he can't do that unless he first learns about it. :)

So it comes down to this. If the user can't use a module because of a non-critical bug, and the user can't report that bug to the author because
they don't know how to diagnose the problem, you have a catch-22.

Yes. This is true. That's why I postulate that the developer end-user should
get preferential treatment (access to enough information to get out of the
catch-22, so she can help the author remove the bug that caused it all) over
the naïve end-user (who is only interested in installing the software and
nothing more.)

But for this to work, one has to lower the "barrier of entry" for giving
feedback, and raise the quality of the base level of feedback offered.

            Developer
             /     __
            /     |\
           /        \
Creates,  /          \
fixes... /            \
       (A)             \ Reports failures to...
       /               (C)       (#5)
      /                  \
    |/_                   \
                           \
  Module-----(B)-------->  User
           Warns about
          its state to...
       (#1, #2, #3 and #4)

The cool thing here is that the author/developer is in a position to affect
all of A, B and C. I say it's much better to improve things in B and C than
to remove parts of B because some disgruntled user has decided that feedback
about secondary issues is too much for him to handle.


Additionally, it's fallacious to assume that authors are either helpful,
proactive, competent, or even (unfortunately in some cases) alive.

This is true, and I'm actually assuming that the authors ARE all these things
(or at the very least _want_ to be these things.)

I'm sure this assumption will cause me (and anyone else making the same mistake) quite a lot of pain and anguish, but for some reason I think that world looks better than the one where I assume all authors are rude, reactive, incompetent deadheads who might be more useful dead than alive. :-\


It's extremely common for bugs to sit in RT queues for years, especially for
minor things like a POD nit.

In the mean time, the user still can't use the module, or anything that depends on it.

Yeah, I know...

Would it make sense to find a way to make it easier to "take over" projects
that are obviously standing still?


Oh well. Hope my rantings don't offend too many people. :)


- Salve

--
Salve J. Nilsen <salvejn at met dot no> / Systems Developer
Norwegian Meteorological Institute           http://met.no/
Information Technology Department / Section for Development
