On 1/22/2013 9:00 AM, Stephen Farrell wrote:

> Hi Joe,
>
> On 01/22/2013 04:39 PM, Joe Touch wrote:
> ...
>> This is a silly idea.
>
> So you're in two minds about it eh :-)


>> First, running code should already be considered as part of the context
>> of review.
>>
>> Second, running code is not correlated to correctness, appropriateness,
>> or safety. See Linux for numerous examples.
>>
>> Third, running code doesn't mean the doc is sufficient for multiple
>> parties to generate interoperable instances. It's merely the sound of
>> one hand clapping ;-)
>
> Your second and third points seem opposed to your first.
> The latter ones imply that running code is useless; the first
> one says it's not.

I never said "useless"; I explained several ways in which it alone is not correlated to any of the issues relevant to speeding up the review process.

Multiple interoperable implementations help ensure a doc sufficiently describes a protocol - nothing less, but also NOTHING MORE.

> I don't believe any of us have any quantitative basis on which to
> base assertions that this will improve or dis-improve our processes
> or output, or be neutral. (Hence proposing it as an experiment.)

It takes more than an "unknown" to make an experiment. There has to be a hypothesis. Near as I can tell, yours is "running code means it's OK to run concurrent review at multiple levels".

Please explain why you think that is true. I gave multiple reasons why it is not.

>> Finally, NOTHING should circumvent the multi-tiered review process. That
>> process helps reduce the burden on the community at large via the
>> presumption that smaller groups with more context have already reviewed
>> proposals before they get to the broader community.
>
> I disagree with the shouted "NOTHING" - if there are non-silly
> ways in which we figure we can improve our processes then we
> ought to be open to trying 'em out. You may or may not be right
> that this is silly, but merely asserting that it is doesn't
> make it so.
>
> Being stuck with current processes or only ever adding more
> review tiers would IMO be sillier than this proposal. But
> that seems to be where we're mostly at.

OK, so let's try an experiment where authors with the first name "Stephen" pay everyone $1,000 to review their docs. It certainly hasn't been tried, so - by your metric - it's worth considering?

Some things are simply not worth considering.

>> This is a bad idea even as an experiment.
>
> Sorry, I don't get the "bad" aspect - rhetoric aside, in what
> way do you see running this experiment doing harm?

It puts more work on the community at large to review an idea that could have been either rejected or significantly improved in a smaller community, before wasting the larger community's time.

This document is a prime example of such.

Joe
