On 10/4/2012 8:54 PM, Russell Standish wrote:
> On Thu, Oct 04, 2012 at 07:48:01PM -0700, meekerdb wrote:
>>> If it is crucially different, then that difference ought to be
>>> measurable. Got any ideas?
>> Sure: the ratio of the number of new designs built that didn't work
>> to the number that did.  It's a difference of process.  It doesn't
>> have to show up in the successful designs.
>>
>> Brent
> That would be a rather large figure in both cases. After all, it is
> rare for a changeset to just work without any debugging.

But I'll bet they didn't try to fix the bug by making random changes either.
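
Just to put a toy number on that difference of process (my own
illustrative sketch, nothing more): treat a design as a 100-bit string
with exactly one wrong bit. A debugger who can see which bit fails
repairs it in one change; a blind mutator flipping a random bit each
try needs about 100 attempts on average, and the gap widens as designs
grow. A minimal Python sketch:

    import random

    DESIGN_SIZE = 100  # a "design" as a bit string with exactly one wrong bit

    def blind_repair_attempts(rng):
        # Flip a uniformly random bit per attempt until the broken one is hit.
        broken = rng.randrange(DESIGN_SIZE)
        attempts = 1
        while rng.randrange(DESIGN_SIZE) != broken:
            attempts += 1
        return attempts

    rng = random.Random(42)
    trials = [blind_repair_attempts(rng) for _ in range(10_000)]
    print("directed fix: 1 attempt (the debugger sees the failing bit)")
    print("blind mutation: ~%.0f attempts on average" % (sum(trials) / len(trials)))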

> In the three
> years I worked on the SCATS project, I think it happened once (and I
> remarked on it to my colleagues, because it was so rare), out of many
> hundreds of changesets.

> Another difficulty with your measure is that the evidence of failed
> designs is usually erased almost instantly, both in the source code
> repository and in the fossil record.

Aren't there estimates of the mutation rate in DNA? Of course most failures don't even make it into the fossil record. Instead of one success out of a hundred, I'd guess the success rate of random biological mutations is a couple of orders of magnitude smaller.
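
For what it's worth, here's a back-of-envelope version of that guess.
The mutation rate and genome size below are commonly quoted rough
figures for humans; the beneficial fraction is purely my assumption,
chosen to show what "a couple of orders of magnitude smaller" than
one-in-a-hundred looks like:

    # Rough, commonly quoted figures; BENEFICIAL_FRACTION is assumed for illustration.
    MUTATION_RATE = 1e-8        # mutations per base pair per generation (human germline)
    GENOME_SIZE = 3e9           # base pairs in a haploid human genome
    BENEFICIAL_FRACTION = 1e-4  # assumed; two orders of magnitude below one-in-a-hundred

    new_per_person = MUTATION_RATE * GENOME_SIZE * 2  # two parental genome copies
    print("de novo mutations per person per generation: ~%.0f" % new_per_person)
    print("beneficial ones per generation: ~%.4f" % (new_per_person * BENEFICIAL_FRACTION))

On those numbers each person carries about 60 new mutations, of which
about 0.006 would be beneficial: a success rate of 1e-4 against the
1e-2 benchmark.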

Brent
