On Thu, Oct 04, 2012 at 07:48:01PM -0700, meekerdb wrote:
> >If it is crucially different, then that difference ought to be
> >measurable. Got any ideas?
> 
> Sure, the ratio of the number of new designs built that didn't work
> compared to those that did.  It's a difference of process.  It
> doesn't have to show up in the successful designs.
> 
> Brent

That would be a rather large figure in both cases. After all, it is
rare for a changeset to just work without any debugging. In the three
years I worked on the SCATS project, I think it happened once out of
many hundreds of changesets (and I remarked on it to my colleagues,
because it was so rare).

Another difficulty with your measure is that the evidence of failed
designs is usually erased almost instantly - both in the source code
repository and in the fossil record. The measurement might be
achievable in an ALife simulation, but how would one do the
comparison?
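
As a purely hypothetical sketch (nothing to do with SCATS or any real
repository - every name and parameter below is invented), here is how
one might instrument a toy ALife-style hill climber, in Python, to
record the ratio Brent proposes: counting every mutant "design" that
fails against those that succeed, before selection erases the
failures.

import random

random.seed(42)

GENOME_LEN = 16
TARGET = [1] * GENOME_LEN          # stand-in fitness criterion

def fitness(genome):
    # number of loci matching the target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # flip a single randomly chosen bit
    child = genome[:]
    i = random.randrange(GENOME_LEN)
    child[i] ^= 1
    return child

parent = [random.randint(0, 1) for _ in range(GENOME_LEN)]
failed = succeeded = 0

for _ in range(10_000):
    child = mutate(parent)
    if fitness(child) > fitness(parent):   # a "design" that works
        parent = child
        succeeded += 1
    else:                                  # a "design" that doesn't
        failed += 1

print(f"failed: {failed}, succeeded: {succeeded}, "
      f"ratio failed/succeeded: {failed / max(succeeded, 1):.1f}")

The awkward part remains deciding what counts as an equivalent
"design attempt" on the engineering side, so that the two ratios can
be compared at all.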

-- 

----------------------------------------------------------------------------
Prof Russell Standish                  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics      hpco...@hpcoders.com.au
University of New South Wales          http://www.hpcoders.com.au
----------------------------------------------------------------------------
