Jochen Fromm wrote:
> I think it is much more difficult to use a supercomputer with a
> trillion operations per second than a huge cluster of ordinary
> computers, as you can find them in Google's data centers.

One code for investigating synthetic cognition is called PetaVision. This code was adapted to Roadrunner and, like LINPACK, exceeded 1000 trillion floating-point operations per second in recent benchmarks. Another project is the Blue Brain project at EPFL.
Codes like this usually use MPI (message passing) and are often latency limited (i.e., transaction speed is ultimately bounded by the speed of light). For such applications, computers connected with ordinary networking just won't scale. To say it is more difficult to build systems and software to cope with that is really just to say these are hard problems.

The main limitations of silicon systems are heat and distance. Although there are multiple layers of circuitry on modern microprocessors (~10), there is nothing like the 3D integration that exists in the brain.

Marcus
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
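A quick back-of-envelope sketch of the latency point above. All of the figures here (cable length, clock rate, per-network round-trip times) are assumed ballpark values for illustration, not measurements of any particular machine:

```python
# Why latency-limited MPI codes need tight interconnects: compare the
# physics-imposed lower bound on a message round trip with typical
# network latencies, expressed in wasted CPU clock cycles.
# All figures below are illustrative assumptions, not measurements.

C_FIBER = 2.0e8          # m/s, ~speed of light in optical fiber (assumed)
CABLE_LENGTH = 100.0     # m, assumed span across a machine room
CPU_HZ = 2.0e9           # Hz, assumed clock rate

# Hard lower bound set by the speed of light: one round trip over the cable.
light_rtt = 2 * CABLE_LENGTH / C_FIBER          # seconds

# Typical small-message round-trip latencies (assumed ballpark figures).
ethernet_rtt = 100e-6    # commodity Ethernet through a TCP stack
infiniband_rtt = 2e-6    # supercomputer-class interconnect

def cycles(seconds):
    """CPU clock cycles spent waiting out one round trip."""
    return seconds * CPU_HZ

print(f"speed-of-light bound: {light_rtt * 1e6:6.1f} us "
      f"({cycles(light_rtt):.0f} cycles)")
print(f"Ethernet:             {ethernet_rtt * 1e6:6.1f} us "
      f"({cycles(ethernet_rtt):.0f} cycles)")
print(f"InfiniBand:           {infiniband_rtt * 1e6:6.1f} us "
      f"({cycles(infiniband_rtt):.0f} cycles)")
```

Even a perfect interconnect pays thousands of clock cycles per message just to the speed of light, and commodity networking pays two orders of magnitude more, which is why tightly coupled codes stall on latency rather than bandwidth.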
