Derek Zahn wrote:
Ben Goertzel writes:
> http://www.nvidia.com/page/home.html
>
> Anyone know what the weaknesses of these GPUs are, as opposed to
> ordinary processors?
>
> They are good at linear algebra and number crunching, obviously.
>
> Is there some reason they would be bad at, say, MOSES learning?
These parallel hardware innovations are indeed very exciting. I
recently purchased a PC with two of these GPUs in it to play with. Like
JoSH, I think that "number crunching" is The Way To Go.
Unfortunately, these will be spectacularly bad at evaluating individuals
for genetic programming.
This is not quite correct; it really depends on the complexity of the
programs one is evolving and the structure of the fitness function. For
simple cases, it can really rock; see
http://www.cs.ucl.ac.uk/staff/W.Langdon/
"The nVidia GPU SIMD architecture allows the whole GP population to be
run in parallel. The animation is sped up, but the GeForce 8800 GTX
ran 204800 programs in less than a second."
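To make the "whole population in parallel" idea concrete, here is a
minimal sketch of SIMD-style batch evaluation of a GP population,
emulated on the CPU with NumPy (the quadratic representation, the toy
target function, and all array names are invented for this example, not
taken from Langdon's work):

```python
import numpy as np

# Toy setup: each "program" is a quadratic a*x^2 + b*x + c, so an
# individual is just a coefficient triple (a, b, c).
rng = np.random.default_rng(0)
pop_size, n_cases = 1024, 64

population = rng.uniform(-1, 1, size=(pop_size, 3))   # (pop, 3)
x = np.linspace(-1, 1, n_cases)                       # fitness cases
target = 0.5 * x**2 - 0.2 * x + 0.1                   # function to fit

# One broadcasted expression evaluates the WHOLE population on ALL
# fitness cases at once -- the same data-parallel pattern a GPU's SIMD
# units execute, with no per-individual branching.
a, b, c = population[:, 0:1], population[:, 1:2], population[:, 2:3]
outputs = a * x**2 + b * x + c                        # (pop, cases)
fitness = np.mean((outputs - target) ** 2, axis=1)    # lower is better

best = np.argmin(fitness)
print("best individual:", best, "mse:", fitness[best])
```

The catch, and roughly Derek's point, is that this only flies when all
individuals execute the same instruction stream: once evolved programs
contain heavy, divergent branching, SIMD lanes stall waiting on each
other and the speedup shrinks.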
Cheers,
Moshe
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e