Ben Goertzel writes:
> http://www.nvidia.com/page/home.html
>
> Anyone know what are the weaknesses of these GPU's as opposed to
> ordinary processors?
>
> They are good at linear algebra and number crunching, obviously.
>
> Is there some reason they would be bad at, say, MOSES learning?
These parallel hardware innovations are indeed very exciting.  I recently 
purchased a PC with two of these GPUs in it to play with.  Like JoSH, I think 
that "number crunching" is The Way To Go.
 
Unfortunately, these will be spectacularly bad at evaluating individuals for 
genetic programming.  First, although they can do standard logic, program flow, 
and integer operations, that doesn't make very good use of the transistor count 
since the bulk of the silicon is dedicated to floating point arithmetic.  
Second, and more important, the programming model is SIMD, which means that the 
processors all have to be running the same program.  If, for example, an "if" 
statement's condition is satisfied on one processor but not the others, the 
others have to wait for the code inside to finish so they can all synchronize 
again.  That would be terrible for evaluating heterogeneous program trees.
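To make the divergence point concrete, here is a toy sketch in plain Python (not actual GPU code, and the function names are my own invention) of how SIMD lanes handle a branch: every lane steps through both sides of the "if", serialized, and a lane whose condition didn't hold simply masks out the result it computed.

```python
# Toy model of SIMD lock-step branching.  All "lanes" execute the union
# of both branch paths; masking decides which result each lane keeps.
# The wall-clock cost is the SUM of both paths, not whichever one a
# given lane actually needed.

def simd_if(conditions, then_fn, else_fn, values):
    """Evaluate a branch across all lanes, simulating lock-step SIMD."""
    # Pass 1: every lane runs the 'then' path (lanes with a false
    # condition are along for the ride, doing no useful work).
    then_results = [then_fn(v) for v in values]
    # Pass 2: every lane runs the 'else' path, same story reversed.
    else_results = [else_fn(v) for v in values]
    # Masking: each lane keeps only the result of the path it took.
    return [t if c else e
            for c, t, e in zip(conditions, then_results, else_results)]

values = [1.0, 2.0, 3.0, 4.0]
conds = [v > 2.0 for v in values]
out = simd_if(conds, lambda v: v * 10.0, lambda v: -v, values)
print(out)  # → [-1.0, -2.0, 30.0, 40.0]
```

With heterogeneous GP individuals, nearly every lane wants a different branch path at nearly every step, so this serialization happens constantly and the hardware degenerates toward one useful lane at a time.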
 
You're going to get your speedup over the coming years on that task from 
multicore CPUs that can run heterogeneous threads.
 
However, intuitively I think this massively parallel SIMD type of hardware 
might work rather well for propagation through your Probabilistic Logic 
Networks, depending on the details.
 
 

-----
This list is sponsored by AGIRI: http://www.agiri.org/email