Owen Densmore wrote:
> We should be thinking thousands of processors with at least mesh memory, if not hypercube.
That's fine if it's all on a single piece of silicon, or at least on the same board. Across sites, latency will be on the order of milliseconds just from the speed of light; compare that to nanoseconds for intra-chip communication. Even state-of-the-art interconnects in installations like Encanto are still only on the order of microseconds within the facility.
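
To put rough numbers on that, here's a back-of-the-envelope sketch in TypeScript. The distances are illustrative assumptions, not measurements of any particular system, and the results are lower bounds from light travel time alone:

    // Lower-bound one-way latencies from the speed of light alone.
    // Distances below are illustrative assumptions, not measured values.
    const C = 299_792_458; // speed of light in vacuum, m/s

    function lightTime(meters: number): number {
      return meters / C; // seconds, ignoring switching/serialization overhead
    }

    function fmt(seconds: number): string {
      if (seconds >= 1e-3) return `${(seconds * 1e3).toFixed(1)} ms`;
      if (seconds >= 1e-6) return `${(seconds * 1e6).toFixed(1)} us`;
      return `${(seconds * 1e9).toFixed(2)} ns`;
    }

    const cases: [string, number][] = [
      ["intra-chip (~2 cm)", 0.02],        // real latency is ns; gate/wire delays dominate
      ["within a facility (~300 m)", 300],  // ~1 us, the interconnect regime
      ["across sites (~3000 km)", 3e6],     // ~10 ms one way, before any protocol overhead
    ];

    for (const [label, d] of cases) {
      console.log(`${label}: ${fmt(lightTime(d))}`);
    }

The cross-site figure comes out around 10 ms one way, before counting routers, serialization, or protocol overhead, which only push it higher.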
> Google will eventually beat all these efforts because they are thinking plumbing/networking with scalable data stores (NoSQL).
Hmm, I think their application toolkits will get good at use cases that involve medium-to-high latencies and modest per-connection bandwidth, e.g. JavaScript running in a web browser with the delays of communication over the internet. And they'll get better and better at managing millions of such workloads. That's a completely different problem from high-performance computing and scientific workloads.
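
For flavor, here's a minimal sketch of the latency-tolerant style those toolkits push you toward: overlap the network waits instead of paying them serially. The endpoints are hypothetical, and fetch is assumed available (modern browsers, Node 18+):

    // With ~100 ms of round-trip latency, 50 serial fetches cost ~5 s,
    // but 50 concurrent ones complete in roughly one round trip plus
    // processing time, since Promise.all overlaps the waits.
    async function fetchAll(urls: string[]): Promise<string[]> {
      const responses = await Promise.all(urls.map((u) => fetch(u)));
      return Promise.all(responses.map((r) => r.text()));
    }

    // Hypothetical usage:
    // const pages = await fetchAll(["https://example.com/a", "https://example.com/b"]);

None of that helps a tightly coupled scientific code, where every step waits on its neighbors.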

Marcus

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
