Civileme,
Thanks for the illumination; I venture to expand upon it slightly,
inline in square brackets below.
By way of general background, I would add that Linus Torvalds is on
record as saying "The best management is no management", and
presumably he manages the kernel by that dictum. Does it work?
The jury is still out. There are many kernel bugs which would not
be there if it were managed by some all-knowing, all-seeing,
all-aware person. In the case of distributed voluntary development
over the internet, using people of the highest calibre and
responsibility (i.e. with the professional mindset - other than in
the matter of compensation), I agree that it has the potential to
produce the best result.
But there still remains the question of testing, and I do not know
how Linus manages (aha!) that.
Civileme wrote:
> No, it isn't that simple. The whole idea of alpha and beta
> testing in software development [gamma testing is critical as
> well] was a MADE UP thing, created from the minds of academics
> as their best GUESS at what process fitted a certain situation.
[The concept has long since drifted from academia into productive
use in commercial development, and, as my personal experience on
many such projects verifies, it has reaped wonderful dividends,
indicating that it is at least proximate to the ideal process]
> I am not trying to denigrate their efforts at finding some form
> in the chaos, just trying to place them in perspective. Now we
> have two differences that must be considered...
> Alpha, Beta, etc. were created as concepts at a time when this
> open testing process was unthinkable except as a castle in the
> air. And they were designed for a COMMERCIAL software development
> process, the type that Microsoft does, in fact.
>
> I have said it before, and I will say it again here. We have no
> data to support the claim that one process produces faster,
> better code than another.
[But we do have ample evidence that some such process produces
better code than no process]
> In God we trust; all others bring data. Arguments
> from logic (authority) or "this is the way it's done" (tradition)
> mean little when we should be comparing the results and timelines
> of the various processes, calculating process capabilities, and
> examining special causes. But we aren't even there yet. First
> we would have to agree on what to measure. That could be
> achieved through a process called imagineering--designing a
> system as if things were perfect, then engineering it back to
> earth.
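[By "calculating process capabilities" I take you to mean something
like the Cp/Cpk indices from statistical process control. A minimal
sketch in Python, using purely hypothetical data and specification
limits - say, defects per thousand lines with agreed upper and
lower limits - just to show what the calculation would look like:

    # Hypothetical illustration: process capability indices for a
    # software process, from per-release defect densities.
    from statistics import mean, stdev

    defects_per_kloc = [2.1, 1.8, 2.5, 2.0, 1.7, 2.3]  # made-up data
    usl, lsl = 3.0, 0.0  # made-up upper/lower specification limits

    mu, sigma = mean(defects_per_kloc), stdev(defects_per_kloc)
    cp = (usl - lsl) / (6 * sigma)               # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # actual, centring included

    print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")

None of which means anything until we agree on what to measure,
which is exactly your point.]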
>
> Obviously one criterion would be/should be the comfort level of
> beta testers, including you. (That is, if we discover that an
> equivalent of a beta cycle is a desirable feature of a system.)
>
> What I am trying to say, Ron, is that we have never, as a human
> species, taken a scientific approach to optimizing software
> development. We have operated on theories propounded from logic,
> authority and tradition without evidence, [I don't agree] mainly
> on the basis that order is better than chaos, and educed a theory
> that fits a model of development we aren't even following in
> several of its precursor conditions. [agreed]
>
> I will support an open forum to find something better than what
> we do now, but I am not willing to concede at this point that
> anyone has a good handle on what that might be. [which is not a
> reason not to proceed with the best that we yet know]
--
Regards,
Ron. [AU, Mandrake Linux].