On 15/10/2010, at 12:20 PM, Casey Ransberger wrote:

> The previous thread about testing got me thinking about this again. One of
> the biggest problems I have in the large with getting developers to write
> tests is the burden of maintaining the tests when the code changes.
>
> I have this wacky idea that we need the tests more than the dev code; it
> makes me wish I had some time to study Prolog.
>
> I wonder: what if all we did was write the tests? What if we threw some kind
> of genetic algorithm or neural network at the task of making the tests pass?
>
> I realize that there are some challenges with the idea: what does the DNA of
> a computer program look like? Compiled methods? Pure functions? Abstract
> syntax trees? Objects? Classes? Prototypes? Source code fragments? How are
> these things composed, inherited, and mutated?
>
> I've pitched the idea over beer before; the only objections I've heard have
> been of the form "that's computationally expensive" and "no one knows how to
> do that."
>
> Computation is usually cheaper than developer time these days, so without
> knowing exactly *how* expensive, it's hard to buy that objection. And if no
> one knows how to do it, it could be that there aren't enough of us trying :)
>
> Does anyone know of any cool research in this area?
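[A minimal sketch of the quoted idea, for concreteness: write only the tests, then let a search process find a "program" that passes them. Here the program is just a pair of integers (a, b) read as f(x) = a*x + b, so the "DNA" question is dodged entirely; real genetic programming would evolve syntax trees or method bodies. All names and parameters below are hypothetical, not from the thread.]

```python
import random

# Toy genetic search: evolve (a, b) until f(x) = a*x + b passes
# every test.  The test suite *is* the specification.
TESTS = [(0, 3), (1, 5), (2, 7)]            # each pair asserts f(x) == y

def fitness(genome):
    """Total error over the test suite; 0 means every test passes."""
    a, b = genome
    return sum(abs((a * x + b) - y) for x, y in TESTS)

def mutate(genome):
    """Nudge one gene by +/-1."""
    a, b = genome
    delta = random.choice([-1, 1])
    return (a + delta, b) if random.random() < 0.5 else (a, b + delta)

def evolve(pop_size=30, generations=500, seed=0):
    random.seed(seed)
    pop = [(random.randint(-10, 10), random.randint(-10, 10))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)               # best genomes first
        if fitness(pop[0]) == 0:
            return pop[0]                   # all tests pass: done
        survivors = pop[:pop_size // 2]     # truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=fitness)

best = evolve()
```

[The hard part Casey raises is exactly what this sketch sidesteps: once the genome is an AST or a compiled method rather than two integers, the mutation and crossover operators, and the cost of evaluating each candidate against the suite, become the whole problem.]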
This is quite interesting to me, too. If you think of how we've built programming libraries and frameworks (in granular, small-chunked ways), why not have libraries of requirements and tests as well? If we match these two up, then we don't even need any form of neural network to build code for us: the corresponding, matching dev code has already been written many times before. This is what libraries and frameworks ARE, is it not? Surely, if the point of a library is to provide reproducible, similar code that we can map to any problem domain of a similar nature, then the corresponding requirements suites would come along for the ride as similar tests, if this were baked in at the compiler or interpreter level.

I have a similar problem with maintenance, and this is the biggest drawback to behaviour-driven or test-driven development: it requires writing more code. The really good bit, though, is that you don't spend as much time hunting down and fixing bugs, most importantly regression errors. This is a big win, but it's preventative and therefore at least somewhat un-measurable, therefore un-marketable. As we are in a marketing-based society (which I am noticing is slowly changing), anything that can't be measured often takes a sad-faced back seat. It's a pity, because I'd wager that the most important things we have as humans more often than not can't be measured.

However, the trouble with starting with requirements first is that it doesn't provide humans with that instant or near-instant feedback, the happy emotional reaction that says "I'm making progress". I'm wagering that the reason for this is that we start too late. If writing behaviour requirements and tests before writing dev code is simply part of the programming process, then I wager it will be much easier, if learnt from the ground up. For example, when learning C, one has to decide what type a variable is before one uses it.
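[The contrast Julian is drawing can be seen in Python, which, like Smalltalk, dispatches on whether an object responds to a message rather than on a declared type. The classes and method names here are hypothetical illustrations; `print_to_screen` stands in for the Smalltalk selector `printToScreen` he mentions.]

```python
# Duck typing: only behaviour matters, not declared type.
class Invoice:
    def print_to_screen(self):
        return "Invoice #42"

class LogEntry:
    def print_to_screen(self):
        return "2010-10-15 12:20 started"

def display(doc):
    # No type declaration required, as in Smalltalk: any object that
    # responds to print_to_screen is acceptable here.
    return doc.print_to_screen()

print(display(Invoice()))
print(display(LogEntry()))
```

[In C, by contrast, `display` would have to name a type for `doc` up front, and changing that type later means revisiting every declaration and call site that mentions it.]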
This isn't a requirement in Smalltalk, where you merely have to know about behaviours (i.e. if it responds to the message "printToScreen", then it's fine). Thus, one of the requirements of writing C is that you are limited by the language forcing you to make decisions about type. This has fairly obvious refactoring ramifications, and yet, because it's part of the language, it's simply taken on board. If you want to change the type of some fairly ubiquitous variable, it requires a LOT of work. People don't see these differences very clearly or cleanly.

Julian.

_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
