With a little reorganization and forethought, you can even have your own
mini-supercomputer using banks of GPU cards to crunch vectors and matrices.
See Nvidia's CUDA development platform and its Tesla line of GPU computing systems.

- Ken 

> -----Original Message-----
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Jochen Fromm
> Sent: Sunday, July 20, 2008 12:52 PM
> To: The Friday Morning Applied Complexity Coffee Group
> Subject: Re: [FRIAM] REPOST: The meaning of "inner".
> 
> Yes, an impressive supercomputer. I think it is much more 
> difficult to use a supercomputer with a trillion operations 
> per second than a huge cluster of ordinary computers, such 
> as those found in Google's data centers.
> 
> -J.
> 
> ----- Original Message -----
> From: "Marcus G. Daniels" <[EMAIL PROTECTED]>
> To: "The Friday Morning Applied Complexity Coffee Group" 
> <[email protected]>
> Sent: Sunday, July 20, 2008 7:49 PM
> Subject: Re: [FRIAM] REPOST: The meaning of "inner".
> 
> 
> > For comparison, LANL Roadrunner has about 5 trillion transistors 
> > for the CPUs (~13000 PowerXCell 8i processors and ~6500 dual core 
> > Opterons) and another 800 trillion for RAM (~100 TB).
> >
> 
> 
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> lectures, archives, unsubscribe, maps at http://www.friam.org

