Definitely interesting!

Weaknesses, as has been noted, tend to show up mostly on code with complex
logic - the sort that has lots of conditional branches and pointer chasing.
(Not that such code escapes a speed penalty on a conventional CPU - but
CPUs spend a _lot_ of resources reducing that penalty, e.g. branch
prediction and large caches.)

Here's my take on GPUs for evolutionary computation and other "search for a
program that does what I want" workloads:

If the performance element - the program being searched for - is simple and
constrained enough that it can be expressed as something like a weight
vector, then it might be run directly in parallel on the GPU, evaluating
many individuals at once.
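To make that concrete, here's a minimal sketch of the "individuals as weight vectors" case. Everything here is hypothetical - a population of linear-scorer weight vectors scored on toy data - and numpy stands in for a GPU array library (in practice you'd swap in something like cupy), so one batched matrix product evaluates the whole population at once:

```python
import numpy as np

# "xp" would be a GPU array module (e.g. cupy) in practice;
# numpy is used here as a stand-in so the sketch runs anywhere.
xp = np

rng = np.random.default_rng(0)
pop_size, n_features, n_samples = 64, 10, 200

# Hypothetical setup: each individual is a weight vector for a
# linear classifier; fitness is accuracy on some sample data.
population = xp.asarray(rng.normal(size=(pop_size, n_features)))
X = xp.asarray(rng.normal(size=(n_samples, n_features)))
y = xp.asarray(rng.integers(0, 2, size=n_samples))

# One batched matrix product scores every individual on every
# sample at once - the whole population in a single GPU-sized op.
scores = X @ population.T                      # (n_samples, pop_size)
preds = (scores > 0).astype(int)
fitness = (preds == y[:, None]).mean(axis=0)   # accuracy per individual

best = int(xp.argmax(fitness))
```

The point is just that when individuals are fixed-shape numeric vectors, "evaluate the population" collapses into a few large array operations, which is exactly what GPUs are good at.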

In cases with a more complex performance element, it may still be worth
looking at whether parts of it can be run on the GPU. Suppose, for example,
you're trying to evolve a program for a complex image processing task, one
that involves a significant amount of logic but also some data-parallel
elements. Then, for each individual to be evaluated, you could run the
logic on the CPU but let it call the GPU for the data-parallel subroutines.
In other words, you'd be evaluating individuals one at a time (per
machine), but making each evaluation (hopefully) run a lot faster by
parallelizing the subtasks. (Obviously the representation language would
need to support short/natural representations for GPU-able tasks.)
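A toy sketch of that split, under invented assumptions: the evolved "program" is a list of named ops with a conditional branch, interpreted on the CPU, while each op body is a whole-array operation that would dispatch to the GPU (numpy again stands in for a GPU array library):

```python
import numpy as np

xp = np  # stand-in for a GPU array module such as cupy


def blur(img):
    # Data-parallel subtask: a 3x3 box blur expressed as whole-array
    # ops, i.e. the kind of thing that runs as one GPU kernel.
    padded = xp.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(padded[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0


def evaluate(program, img, target):
    # Hypothetical evolved individual: a sequence of ops with
    # branching logic, interpreted step by step on the CPU...
    for op in program:
        if op == "blur":
            img = blur(img)          # ...calling GPU-sized subroutines
        elif op == "invert":
            img = 1.0 - img
        elif op == "threshold" and img.mean() > 0.5:  # conditional branch
            img = (img > 0.5).astype(img.dtype)
    # Fitness: negative mean pixel error against the target image.
    return float(-xp.abs(img - target).mean())


rng = np.random.default_rng(1)
img = rng.random((32, 32))
target = blur(img)                   # toy task: reproduce a blurred image
fit = evaluate(["blur"], img, target)  # the "right" program scores 0.0
```

Here the interpreter loop is serial and branchy (CPU territory), but nearly all the actual arithmetic lives inside the array ops, which is where the hoped-for speedup comes from.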

-----
This list is sponsored by AGIRI: http://www.agiri.org/email