Re: [fonc] Growing Objects?

2010-10-16 Thread spir
On Fri, 15 Oct 2010 20:10:05 -0400
Chris Gahan ch...@ill-logic.com wrote:

 I think so too. Genetic Programming always struck me as quite wasteful
 because your algorithm doesn't care about *why* any of your offspring
 succeeded or failed.
 
 Intelligence has been a huge evolutionary boon because it allows
 critters to learn from the mistakes of others, and then over time
 those learnings become baked into the genome. (See Hinton and Nowlan's
 "How Learning Can Guide Evolution":
 http://htpprints.yorku.ca/archive/0172/01/hinton-nowlan.htm )
 
 Evolution is really, really slow... which isn't a problem when you're
 an insect, and the cost of forking a child process is eating a little
 bit of garbage juice.
 
 When you're trying to program with expensive and limited computing
 resources, however, it makes more sense to encode some knowledge into
 the solution-generating system.

What about making the test / evaluation / evolution-driving code itself
evolutionary? It leads to "who judges the judge?", yes ;-) but still, there
can be some human evaluators at the end of the chain.
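A toy sketch of this co-evolution idea in Python (every name below is invented
for illustration): candidate programs are ranked by evolved evaluators, and
the evaluators are themselves ranked by agreement with a small set of
human-blessed anchor cases, so the humans stay at the end of the chain.

    import random

    ANCHORS = [(2, 4), (3, 9), (5, 25)]   # human-blessed (input, output) pairs

    def random_program():
        a = random.randint(-3, 3)
        return lambda x, a=a: x * x + a    # tiny genome: one integer offset

    def true_score(prog):
        return sum(prog(i) == o for i, o in ANCHORS)

    def random_evaluator():
        # An evolved judge: scores a program on a random subset of anchors.
        subset = random.sample(ANCHORS, random.randint(1, len(ANCHORS)))
        return lambda prog, s=subset: sum(prog(i) == o for i, o in s)

    def evaluator_fitness(ev):
        # Judges are themselves judged: agreement with the human anchors,
        # measured on a sample of probe programs.
        probes = [random_program() for _ in range(20)]
        return -sum(abs(ev(p) - true_score(p)) for p in probes)

    programs = [random_program() for _ in range(40)]
    evaluators = [random_evaluator() for _ in range(10)]
    for gen in range(25):
        evaluators.sort(key=evaluator_fitness, reverse=True)
        judge = evaluators[0]                   # current best judge
        programs.sort(key=judge, reverse=True)
        programs = programs[:20] + [random_program() for _ in range(20)]
        evaluators = evaluators[:5] + [random_evaluator() for _ in range(5)]

    print(true_score(max(programs, key=true_score)))   # ideally 3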

Denis
-- -- -- -- -- -- --
vit esse estrany ☣

spir.wikidot.com




Re: [fonc] Growing Objects?

2010-10-15 Thread Murat Girgin
Cunningham's Extreme Genetic Programming might be of interest:
http://www.neocoretechs.com/.

Murat

On Fri, Oct 15, 2010 at 1:33 AM, John Nilsson j...@milsson.nu wrote:

 On Fri, Oct 15, 2010 at 3:20 AM, Casey Ransberger
 casey.obrie...@gmail.com wrote:
  I wonder: what if all we did was write the tests? What if we threw some
 kind of genetic algorithm or neural network at the task of making the tests
 pass?

 I've been having a similar thought for a while now, but it's not really
 the test as such, it is more the declarative nature of the test. What
 would the programming model look like if the system were derived from
 formalized requirements (tests)? How would the system be derived
 (genetic algorithm)?


 My thinking is focused more on the programming model, and on how to divide
 the artifact development among the right people, than on actual
 algorithms for automatic derivation. F.ex. architectures could be
 expressed as libraries; a constraint solver or genetic algorithm could
 be fed the high-level requirements and mine the architecture libraries
 to generate a basic architecture. The generated architecture concepts
 could then be referenced in new requirements to derive functions. (A toy
 sketch of this idea follows below.)

 Now the trick, I believe, is in stratifying the requirements when
 formalizing them. Low-level requirements are often dependent on
 solutions picked for high-level requirements. I.e. "the color of the
 navigation menu should be red" is not at the same level as "the system
 presents a webshop". Still, the dependency between the requirements is
 interesting to focus on: were one to revisit the choice of webshop,
 maybe there would be no navigation menu that could be red.

 I anticipate that the problem in developing and maintaining such a
 system is keeping referential integrity between requirements.
 Navigating Java in a modern IDE, f.ex., makes it easy to find all
 references to an identifier, which is vital when assessing the impact of
 a change. In a similar style, high-level requirements that affect
 lower-level requirements must be easy to trace.


 To achieve such a system I have been thinking of implementing a
 meta-language system in which languages can be declared, mixed and
 analyzed together. By declaring transformations between languages, the
 system would allow derived concepts in one language to depend on
 declared expressions in another language and assert referential
 integrity.


 BR,
 John
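A toy sketch of John's architecture-mining idea above, in Python (the library,
its components and the requirement strings are all invented): a genetic
algorithm evolves a set of components drawn from an architecture library so as
to cover a set of high-level requirements, with a mild parsimony pressure.

    import random

    # Hypothetical architecture library: each component advertises which
    # high-level requirements it can satisfy.
    LIBRARY = {
        "webshop-frontend": {"presents a webshop"},
        "catalog-service":  {"lists products"},
        "static-site":      {"presents a webshop"},
        "payment-gateway":  {"accepts payments"},
        "cart-service":     {"tracks a shopping cart"},
    }
    REQUIREMENTS = {"presents a webshop", "lists products", "accepts payments"}

    def fitness(arch):
        covered = set().union(*(LIBRARY[c] for c in arch)) if arch else set()
        # Reward requirement coverage; lightly penalize architecture size.
        return len(covered & REQUIREMENTS) - 0.1 * len(arch)

    def mutate(arch):
        arch = set(arch)
        arch.symmetric_difference_update({random.choice(list(LIBRARY))})
        return frozenset(arch)   # one component flipped in or out

    population = [frozenset(random.sample(list(LIBRARY), 2))
                  for _ in range(20)]
    for gen in range(50):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(10)]

    best = max(population, key=fitness)
    print(sorted(best))   # e.g. catalog-service, payment-gateway, static-site

The stratification John describes would show up here as requirements that only
become meaningful once a component is chosen: there is no navigation menu to
make red unless some webshop frontend is in the architecture.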



Re: [fonc] Growing Objects?

2010-10-15 Thread Max OrHai
Also, some interesting research along these lines by Stephanie Forrest of
the University of New Mexico:

http://genprog.adaptive.cs.unm.edu/

-- Max

On Fri, Oct 15, 2010 at 11:04 AM, Murat Girgin gir...@gmail.com wrote:

 Cunningham's Extreme Genetic Programming might be of interest:
 http://www.neocoretechs.com/.

 Murat



Re: [fonc] Growing Objects?

2010-10-15 Thread frank


I think genetic programming is probably most interesting if you don't
write a mere battery of black-box tests for the fitness evaluation.

It would be much cooler to have code that reasons about the candidate
solutions and sees how much of each is provably correct, IMHO. =)
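A toy sketch of such white-box fitness in Python (the properties and names are
invented): instead of counting passed spot tests, each candidate earns credit
per property it provably satisfies over a small bounded domain, a crude
stand-in for genuine reasoning about candidates.

    # Bounded verification as a fitness signal: a property only counts if
    # it holds on the entire domain, not merely on sampled inputs.
    DOMAIN = range(-50, 51)

    PROPERTIES = [
        ("non-negative",  lambda f: all(f(x) >= 0 for x in DOMAIN)),
        ("even function", lambda f: all(f(x) == f(-x) for x in DOMAIN)),
        ("f(3) == 9",     lambda f: f(3) == 9),
    ]

    def fitness(candidate):
        return sum(1 for _, check in PROPERTIES if check(candidate))

    for c in (lambda x: x * x, lambda x: abs(x), lambda x: x + 6):
        print(fitness(c))   # x*x proves all 3; abs proves 2; x+6 proves 1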

Casey Ransberger wrote:
The previous thread about testing got me thinking about this again. One of the biggest problems I have in the large with getting developers to write tests is the burden of maintaining the tests when the code changes. 

I have this wacky idea that we need the tests more than the dev code; it makes me wish I had some time to study Prolog.


I wonder: what if all we did was write the tests? What if we threw some kind of 
genetic algorithm or neural network at the task of making the tests pass?

I realize that there are some challenges with the idea: what does the DNA of a
computer program look like? Compiled methods? Pure functions? Abstract syntax 
trees? Objects? Classes? Prototypes? Source code fragments? How are these 
things composed, inherited, and mutated?

I've pitched the idea over beer before; the only objections I've heard have been of the form
"that's computationally expensive" and "no one knows how to do that".

Computation is usually less expensive than developer time these days,
so without knowing exactly *how* expensive, it's hard to buy that. And if no
one knows how to do it, it could be that there aren't enough of us trying :)

Does anyone know of any cool research in this area?
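Casey's question above about the DNA of a program has a classic answer from
genetic programming: expression trees (small ASTs) as the genome, with subtree
mutation and crossover. A toy sketch in Python (grammar, operators and target
all invented for illustration):

    import random

    OPS = ["+", "-", "*"]

    def random_tree(depth=3):
        if depth == 0 or random.random() < 0.3:
            return random.choice(["x", random.randint(0, 5)])
        return [random.choice(OPS),
                random_tree(depth - 1), random_tree(depth - 1)]

    def evaluate(t, x):
        if t == "x":
            return x
        if isinstance(t, int):
            return t
        op, a, b = t
        a, b = evaluate(a, x), evaluate(b, x)
        return a + b if op == "+" else a - b if op == "-" else a * b

    def random_subtree(t):
        if not isinstance(t, list) or random.random() < 0.5:
            return t
        return random_subtree(random.choice(t[1:]))

    def mutate(t, rate=0.1):
        if random.random() < rate:
            return random_tree(2)             # point mutation: fresh subtree
        if not isinstance(t, list):
            return t
        op, a, b = t
        return [op, mutate(a, rate), mutate(b, rate)]

    def crossover(t1, t2):
        # Graft a random subtree of t2 somewhere into t1.
        if not isinstance(t1, list) or random.random() < 0.3:
            return random_subtree(t2)
        op, a, b = t1
        if random.random() < 0.5:
            return [op, crossover(a, t2), b]
        return [op, a, crossover(b, t2)]

    target = lambda x: x * x + 1              # the behaviour the tests demand
    def fitness(t):
        return -sum(abs(evaluate(t, x) - target(x)) for x in range(-5, 6))

    pop = [random_tree() for _ in range(100)]
    for gen in range(100):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:20]
        pop = parents + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(80)]
    best = max(pop, key=fitness)
    print(best, fitness(best))   # fitness 0 means all test points pass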


Re: [fonc] Growing Objects?

2010-10-15 Thread Chris Gahan
I think so too. Genetic Programming always struck me as quite wasteful
because your algorithm doesn't care about *why* any of your offspring
succeeded or failed.

Intelligence has been a huge evolutionary boon because it allows
critters to learn from the mistakes of others, and then over time
those learnings become baked into the genome. (See Hinton and Nowlan's
"How Learning Can Guide Evolution":
http://htpprints.yorku.ca/archive/0172/01/hinton-nowlan.htm )

Evolution is really, really slow... which isn't a problem when you're
an insect, and the cost of forking a child process is eating a little
bit of garbage juice.

When you're trying to program with expensive and limited computing
resources, however, it makes more sense to encode some knowledge into
the solution-generating system.
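A simplified sketch of the Hinton and Nowlan experiment from the linked paper,
in Python (the 20 genes, 1000 learning trials, allele frequencies and fitness
formula follow the paper; population size and generation count are scaled
down): alleles are 0, 1 or '?', and '?' genes can be set by lifetime learning,
which smooths an otherwise needle-in-a-haystack fitness landscape.

    import random

    L, TRIALS, POP = 20, 1000, 200
    TARGET = [1] * L

    def fitness(genome):
        # A wrong hard-wired allele can never be fixed by learning.
        if any(g != "?" and g != t for g, t in zip(genome, TARGET)):
            return 1.0
        p = 0.5 ** genome.count("?")   # chance one trial guesses all '?'s
        for trial in range(TRIALS):
            if random.random() < p:
                # The sooner learning succeeds, the higher the fitness.
                return 1.0 + 19.0 * (TRIALS - trial) / TRIALS
        return 1.0

    def child(p1, p2):
        cut = random.randrange(L)      # single-point crossover
        return p1[:cut] + p2[cut:]

    # Initial allele frequencies as in the paper: 25% 0, 25% 1, 50% '?'.
    pop = [[random.choice([0, 1, "?", "?"]) for _ in range(L)]
           for _ in range(POP)]
    for gen in range(30):
        weights = [fitness(g) for g in pop]
        pop = [child(*random.choices(pop, weights=weights, k=2))
               for _ in range(POP)]
    print(pop[0])   # typically mostly hard-wired 1s, a few residual '?'s

Without the '?' alleles (pure evolution, no learning), the same setup almost
never finds the target: only the exact all-1s genome scores above baseline.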

On Friday, October 15, 2010, frank fr...@frankhirsch.net wrote:

 I think genetic programming is probably most interesting if you don't
 write a mere battery of black-box tests for the fitness evaluation.

 It would be much cooler to have code that reasons about the candidate
 solutions and sees how much of each is provably correct, IMHO. =)



Re: [fonc] Growing Objects?

2010-10-14 Thread Josh McDonald
I'd say the biggest problem is more in the selection than in the generation /
mutation. In the natural world it's easy to determine the winner - he passes on
more of his genes. But if we've got two potential solutions, neither of
which actually passes the test, how do we select which to continue mutating
and which to let die? And when you've got two that *do* pass the test, how
do you select between them? Size? Running time?
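One possible answer, sketched in Python (the tests, candidates and sizes are
invented): rank candidates lexicographically - tests passed first, then size,
then running time - so a population where nothing yet passes still has a
selection gradient, and ties among passing candidates break on parsimony.

    import time

    TESTS = [((2,), 4), ((3,), 9), ((5,), 25)]

    def score(candidate, size):
        passed, elapsed = 0, 0.0
        for args, expected in TESTS:
            t0 = time.perf_counter()
            try:
                ok = candidate(*args) == expected
            except Exception:
                ok = False               # crashing is just a failed test
            elapsed += time.perf_counter() - t0
            passed += ok
        # Higher is better: tests dominate, then smaller, then faster.
        return (passed, -size, -elapsed)

    candidates = [
        (lambda x: x * x, 5),        # passes all three, small
        (lambda x: x ** 2 + 0, 9),   # passes all three, bigger
        (lambda x: x + 2, 4),        # passes only the first test
    ]
    ranked = sorted(candidates, key=lambda cs: score(*cs), reverse=True)
    print([size for _, size in ranked])   # [5, 9, 4]

Partial credit per test (e.g. distance from the expected output) would give an
even smoother gradient when nothing passes at all.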

It would make an interesting project for a year or two, though :)

-Josh

On 15 October 2010 11:20, Casey Ransberger casey.obrie...@gmail.com wrote:

 The previous thread about testing got me thinking about this again. One of
 the biggest problems I have in the large with getting developers to write
 tests is the burden of maintaining the tests when the code changes.

 I have this wacky idea that we need the tests more than the dev code; it
 makes me wish I had some time to study Prolog.

 I wonder: what if all we did was write the tests? What if we threw some
 kind of genetic algorithm or neural network at the task of making the tests
 pass?

 I realize that there are some challenges with the idea: what does the DNA of a
 computer program look like? Compiled methods? Pure functions? Abstract
 syntax trees? Objects? Classes? Prototypes? Source code fragments? How are
 these things composed, inherited, and mutated?

 I've pitched the idea over beer before; the only objections I've heard have
 been of the form "that's computationally expensive" and "no one knows how to
 do that".

 Computation is usually less expensive than developer time these
 days, so without knowing exactly *how* expensive, it's hard to buy that. And
 if no one knows how to do it, it could be that there aren't enough of us
 trying :)

 Does anyone know of any cool research in this area?




-- 
Therefore, send not to know / For whom the bell tolls. / It tolls for thee.

Josh 'G-Funk' McDonald
   -  j...@joshmcdonald.info
   -  http://twitter.com/sophistifunk
   -  http://flex.joshmcdonald.info/


Re: [fonc] Growing Objects?

2010-10-14 Thread Julian Leviston

On 15/10/2010, at 12:20 PM, Casey Ransberger wrote:

 The previous thread about testing got me thinking about this again. One of 
 the biggest problems I have in the large with getting developers to write 
 tests is the burden of maintaining the tests when the code changes. 
 
 I have this wacky idea that we need the tests more than the dev code; it 
 makes me wish I had some time to study Prolog.
 
 I wonder: what if all we did was write the tests? What if we threw some kind 
 of genetic algorithm or neural network at the task of making the tests pass?
 
 I realize that there are some challenges with the idea: what does the DNA of a
 computer program look like? Compiled methods? Pure functions? Abstract syntax 
 trees? Objects? Classes? Prototypes? Source code fragments? How are these 
 things composed, inherited, and mutated?
 
 I've pitched the idea over beer before; the only objections I've heard have
 been of the form "that's computationally expensive" and "no one knows how to
 do that".
 
 Computation is usually less expensive than developer time these
 days, so without knowing exactly *how* expensive, it's hard to buy that. And
 if no one knows how to do it, it could be that there aren't enough of us
 trying :)
 
 Does anyone know of any cool research in this area?


This is quite interesting to me, too, because if you think of how we've built
programming libraries and frameworks - in granular, small-chunked ways - why not
have libraries of requirements and tests and such? If we match these two up,
then we don't even need any form of neural network to build code for us - the
corresponding and matching dev code has already been written many times before.
(This is what libraries and frameworks ARE, is it not?)

Surely, if the point of a library is to provide a set of reproducible, similar
code that we can map onto any problem domain of a similar nature, then the
corresponding requirements suites would have to come along for the ride in the
form of similar tests - if this were baked in at the compiler or interpreter
level.
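A toy sketch of that matching in Python (the registry and its entries are
invented): if library code ships with the formalized requirements it already
satisfies, finding the dev code becomes a lookup rather than a search.

    REGISTRY = [
        # (implementation, requirements it is known to satisfy)
        (sorted,            {"orders a sequence", "is stable"}),
        (lambda s: s[::-1], {"reverses a sequence", "works on strings"}),
        (min,               {"finds the smallest element"}),
    ]

    def find_implementation(required):
        """Return known library code whose shipped requirements cover ours."""
        for impl, satisfies in REGISTRY:
            if required <= satisfies:
                return impl
        return None   # only now fall back to search, or to evolution

    impl = find_implementation({"reverses a sequence", "works on strings"})
    print(impl("growing objects"))   # -> 'stcejbo gniworg'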

I have a similar problem with maintenance... and this is the biggest drawback
to behaviour-driven or test-driven development... it requires writing more
code... The really good bit, though, is that you don't spend as much time
hunting down and fixing bugs, most importantly regression errors. This is a big
win, but it's preventative, and therefore at least somewhat un-measurable, and
therefore un-marketable. As we are in a marketing-based society (which I am
noticing is slowly changing), anything that can't be measured often takes a
sad-faced back-seat. It's a pity, because I'd wager that the most important
things we have as humans more often than not can't be measured.

However - the trouble with starting with requirements first is that it doesn't
provide humans with that instant or near-instant feedback which gives the happy
emotional reaction that says "I'm making progress". I'm wagering that the
reason for this is that we start too late. If writing behaviour requirements
and tests before writing dev code were simply part of the programming process,
then I wager it would be much easier... if learnt from the ground up.

For example, when learning C, one has to decide what type a variable is before
one uses it. This isn't a requirement in Smalltalk, where you merely have to
know about behaviours (i.e. does it respond to the message printToScreen? Then
it's fine). Thus, one of the constraints of writing C is that you are limited
by the language forcing you to make type decisions up front. This has fairly
obvious refactoring ramifications, and yet, because it's part of the language,
it's simply taken on board. If you want to change the type of some fairly
ubiquitous variable, it requires a LOT of work. People don't see these
differences very clearly or cleanly.
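A tiny illustration of the behavioural view, sketched in duck-typed Python
(printToScreen is just the invented message name from above): nothing upstream
declares a type, so swapping implementations requires no refactoring.

    class Invoice:
        def printToScreen(self):
            print("invoice #42")

    class DebugProxy:
        def printToScreen(self):
            print("<proxy>")

    def show(thing):
        # No declared type: anything responding to printToScreen is fine.
        thing.printToScreen()

    for obj in (Invoice(), DebugProxy()):
        show(obj)

In C, show would have to name a concrete type (or hand-roll a table of
function pointers), and changing that type ripples through every declaration
that mentions it - which is Julian's refactoring point.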

Julian.


Re: [fonc] Growing Objects?

2010-10-14 Thread Faré
On 14 October 2010 21:20, Casey Ransberger casey.obrie...@gmail.com wrote:
 The previous thread about testing got me thinking about this again. One of 
 the biggest problems I have in the large with getting developers to write 
 tests is the burden of maintaining the tests when the code changes.

 I have this wacky idea that we need the tests more than the dev code; it 
 makes me wish I had some time to study Prolog.

 I wonder: what if all we did was write the tests? What if we threw some kind 
 of genetic algorithm or neural network at the task of making the tests pass?

 I realize that there are some challenges with the idea: what does the DNA of a
 computer program look like? Compiled methods? Pure functions? Abstract syntax 
 trees? Objects? Classes? Prototypes? Source code fragments? How are these 
 things composed, inherited, and mutated?

 I've pitched the idea over beer before; the only objections I've heard have
 been of the form "that's computationally expensive" and "no one knows how to
 do that".

 Computation is usually less expensive than developer time these
 days, so without knowing exactly *how* expensive, it's hard to buy that. And
 if no one knows how to do it, it could be that there aren't enough of us
 trying :)

 Does anyone know of any cool research in this area?

For the low-hanging fruit, see "programming by example".

For the general case, search for (Solomonoff) Induction, and see
notably recent work by Noah Goodman.
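A minimal programming-by-example sketch in Python (the grammar and the
examples are invented): enumerate small expression trees until one matches
every given input/output pair. Real PBE systems prune this search aggressively
with types and deduction; Solomonoff induction is the theoretical limit of
weighting such hypotheses by simplicity.

    import itertools

    EXAMPLES = [(1, 3), (2, 5), (10, 21)]   # target behaviour: f(x) = 2x + 1

    LEAVES = ["x", 1, 2]

    def exprs(depth):
        if depth == 0:
            yield from LEAVES
            return
        yield from exprs(depth - 1)
        for op in ("+", "*"):
            for a, b in itertools.product(exprs(depth - 1), repeat=2):
                yield (op, a, b)

    def run(e, x):
        if e == "x":
            return x
        if isinstance(e, int):
            return e
        op, a, b = e
        return run(a, x) + run(b, x) if op == "+" else run(a, x) * run(b, x)

    for e in exprs(2):
        if all(run(e, x) == y for x, y in EXAMPLES):
            print(e)   # finds e.g. ('+', 'x', ('+', 'x', 1)), i.e. 2x + 1
            break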

If you're going to explore this area, be sure to come back and tell us
about your experiences.

[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
I discovered a few years ago that happiness was something you put into life,
not something you get out of it — and I was transformed.
