Casey,

Let's split this email you wrote into two ideas.

On Thu, Oct 14, 2010 at 9:20 PM, Casey Ransberger
<casey.obrie...@gmail.com> wrote:

> The previous thread about testing got me thinking about this again. One of
> the biggest problems I have in the large with getting developers to write
> tests is the burden of maintaining the tests when the code changes.
>
> I have this wacky idea that we need the tests more than the dev code; it
> makes me wish I had some time to study prolog.
>
>

Why don't you have time to study Prolog?  What does the word "study" even
mean here?  Be deconstructive and assess what you actually think that
entails, since that is the first step to getting genuinely useful feedback.
You have to vocalize your thoughts; nobody is going to read your mind or
infer for you what you want to know.  Just as you don't have time to
study, nobody has time to give you answers if you can't formalize the
question.



> I wonder: what if all we did was write the tests? What if we threw some
> kind of genetic algorithm or neural network at the task of making the tests
> pass?
>
>
You should read Robert Binder's book on testing.  Automated testing systems
can be very sophisticated, but even techniques like concolic testing have
been shown to miss test cases that experts would probably catch.  For some
(but not all) of the tests in a compiler like C#'s, I am not sure how you
would auto-generate them; even specifying them with Design by Contract (DbC)
would be difficult.  Writing an integration test or unit test by hand would
probably be more legible, and that's what matters.  Binder's book is a good
starting point for learning about the testability of systems.  It is a
mammoth book, though: over 1,000 pages.  It is extremely thorough, and the
author basically gathered all the research up to about a year before it was
published.  It's almost 15 years old now, but it's still the best place to
start.

Other books on testing tend to be pretty naive, just focused on teaching
obstinate programmers why unit testing is good.  Binder covers topics other
authors don't, such as the Orthogonal Array Testing Strategy.  Other
authors, like Brian Marick, have basically learned they can sell more books
by focusing on unit testing alone, so each new iteration of their testing
books contains less intellectual content and more practical, tutorial-like
examples of the very basics.  It's like Hooked on Phonics.  That's not meant
to disparage Marick's books, either: many people need to learn how to "read"
tests (and code in general) before they can write them.
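
To make the Orthogonal Array Testing idea concrete, here is a toy Python
sketch of greedy all-pairs selection, a close cousin of OAT (my own
illustration, not from Binder, and the parameter names are made up):
instead of running the full cross product of configurations, it picks a
small suite in which every pair of parameter values still shows up at
least once.

from itertools import product

# Toy greedy all-pairs selection (parameter names are made up).
params = {
    "browser": ["firefox", "chrome", "ie"],
    "os":      ["linux", "windows", "osx"],
    "locale":  ["en", "de", "ja"],
}

names = list(params)
all_cases = [dict(zip(names, vals)) for vals in product(*params.values())]

def pairs(case):
    ks = sorted(case)
    return {((a, case[a]), (b, case[b]))
            for i, a in enumerate(ks) for b in ks[i + 1:]}

uncovered = set().union(*(pairs(c) for c in all_cases))
suite = []
while uncovered:
    # Greedily take the case that covers the most still-uncovered pairs.
    best = max(all_cases, key=lambda c: len(pairs(c) & uncovered))
    suite.append(best)
    uncovered -= pairs(best)

print(len(all_cases), "exhaustive cases vs.", len(suite), "pairwise cases")

For three parameters with three values each, this covers every pair with
roughly nine cases instead of 27; a true orthogonal array would also
balance how often each pair appears.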

I don't understand why the people you talk to say it's not possible.  My
guess is that this attitude dominates companies like Google, which use their
compute cloud for more practical tests, like automated acceptance testing of
web applications across many browsers.  I've seen Google TechTalks by Google
employees on the importance of such tests, but I've never seen Google talk
about more advanced testing.  They do have a guy who got his Ph.D. from
Stanford testing web security with techniques like fuzzing, and they have
some ad hoc static analysis tools that check for exploits.  But that appears
to be the extent to which Google uses its cloud power.
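
For anyone who hasn't seen it, fuzzing is just throwing piles of random or
malformed input at a program and recording whatever makes it crash.  A
bare-bones sketch (the function under test is made up, and this resembles
nothing of Google's actual tooling):

import random
import string

def parse_age(s):
    # Made-up function under test.
    n = int(s)
    if n < 0:
        raise ValueError("negative age")
    return n

def fuzz(fn, trials=10000):
    # Hammer fn with short random strings; keep inputs that raise.
    crashes = []
    for _ in range(trials):
        s = "".join(random.choices(string.printable,
                                   k=random.randint(0, 8)))
        try:
            fn(s)
        except Exception as e:
            crashes.append((repr(s), type(e).__name__))
    return crashes[:5]  # a sample of the failing inputs

print(fuzz(parse_age))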


> I realize that there are some challenges with the idea: what's the DNA of a
> computer program look like? Compiled methods? Pure functions? Abstract
> syntax trees? Objects? Classes? Prototypes? Source code fragments? How are
> these things composed, inherited, and mutated?
>
>
Well, DNA is only one property of a living organism.  Some argue that DNA is
not even really our source code (see the RNA world hypothesis [1]);
messenger RNA serves as the object template for protein synthesis.

[1] http://en.wikipedia.org/wiki/RNA_world_hypothesis
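
To make the genetic-algorithm question concrete, here is a minimal
genetic-programming sketch in Python (my own toy, assuming the simplest
answer to "what's the DNA?": the genome is an expression tree, i.e. an AST
fragment, and the only specification is the test suite):

import random

OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

# The whole "spec": test cases for a target function f(x) = x*x + 1.
TESTS = [(x, x * x + 1) for x in range(-5, 6)]

def rand_tree(depth=3):
    # A genome is "x", a small integer, or (op, left, right).
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.randint(-2, 2)])
    return (random.choice(list(OPS)),
            rand_tree(depth - 1), rand_tree(depth - 1))

def run(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](run(left, x), run(right, x))

def fitness(tree):
    # Lower is better: total error across the test suite.
    return sum(abs(run(tree, x) - want) for x, want in TESTS)

def mutate(tree):
    # Replace a random subtree; crossover is omitted for brevity.
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return rand_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

pop = [rand_tree() for _ in range(200)]
for gen in range(100):
    pop.sort(key=fitness)
    if fitness(pop[0]) == 0:  # all tests pass
        break
    pop = pop[:50] + [mutate(random.choice(pop[:50])) for _ in range(150)]

print("best:", pop[0], "error:", fitness(pop[0]))

Real genetic programming adds crossover between trees and much smarter
selection, but even this mutation-only toy can often rediscover x*x + 1
from the tests alone.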