Moshe Looks writes:
> This is not quite correct; it really depends on the complexity of the
> programs one is evolving and the structure of the fitness function. For
> simple cases, it can really rock; see
> http://www.cs.ucl.ac.uk/staff/W.Langdon/
 
That's interesting work, thanks for the link!  It's not immediately obvious, 
but the particular example there evolves a population of programs that 
estimate pi, each with up to 8 leaves drawn from an "alphabet" of six tokens 
(2 constants and the 4 basic arithmetic operations).
 
The strategy used for parallelization is to run all programs currently waiting 
on a '+' operation, then all programs waiting on a '-' operation, and so on.  
If there are N distinct operations (4 in this case), the population runs at 
roughly 1/N speed, since the SIMD nature of the hardware makes the other 
lanes wait.  So you're right: for a simple case like this one it only wastes 
75% of the available processing power.  It doesn't seem like it will scale 
very well, though.  Even on this simple task I'd bet a quad-core CPU is 
competitive with the GPU hardware.
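To make the 1/N argument concrete, here's a toy simulation of that 
opcode-at-a-time scheme (my own sketch, not Langdon's actual code): each 
SIMD pass executes one opcode class, and every lane whose program is waiting 
on a different opcode simply stalls.  The population size, program length, 
and round-robin scheduling are all assumptions for illustration.

```python
# Toy model of SIMD interpretation of a GP population (illustrative only,
# not Langdon's implementation).  Each pass executes one opcode class;
# lanes whose current opcode differs, or whose program has finished, idle.
import random

random.seed(0)
OPS = ['+', '-', '*', '/']            # the 4 arithmetic operations
N = 1000                              # population size (one lane per program)
programs = [[random.choice(OPS) for _ in range(8)] for _ in range(N)]
pcs = [0] * N                         # per-program instruction pointers

passes = 0                            # SIMD passes issued
useful = 0                            # lane-steps that did real work
op_idx = 0                            # round-robin over opcode classes
while any(pc < len(p) for pc, p in zip(pcs, programs)):
    op = OPS[op_idx % len(OPS)]
    op_idx += 1
    # Lanes whose next instruction matches this pass's opcode actually run.
    active = [i for i in range(N)
              if pcs[i] < len(programs[i]) and programs[i][pcs[i]] == op]
    passes += 1
    useful += len(active)
    for i in active:
        pcs[i] += 1

utilization = useful / (passes * N)
print(f"lane utilization: {utilization:.2f}")
```

With 4 opcode classes and uniformly random programs, the measured lane 
utilization comes out near 1/4, matching the "wastes 75%" estimate above; 
a richer instruction set would push it lower still.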
 
It does point out, though, that some things that are not intuitively 
data-parallel can be executed effectively on a GPU.
 
Do you personally think MOSES will run well on a GPU?

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e