Quoting "Gregory K. Ruiz-Ade" <[EMAIL PROTECTED]>:

I don't get this, either.  Intel seems to stupidly hold on to the bus
design for all CPU communications, whereas AMD's HyperTransport is so
much more flexible and allows non-shared communications paths between
components.

HyperTransport also makes things like NUMA interconnects possible. You can get systems like blades from HP (unfortunately you have to use the more expensive 8xx-series CPUs) that can be configured to run independently, or be linked together into larger single systems. The HP blade chassis basically just turns on a new HT interconnect between blades to make them act as one host. Of course, the latency for accessing memory between blades rises (I forget the exact numbers, but each HT hop taken is a bump in latency), so using tools to lock processes onto particular processors and keep the memory they use local can help a lot.
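The usual admin tool for that is numactl (e.g. `numactl --cpunodebind=0 --membind=0 ./app`), but the pinning half can also be done from the kernel's scheduler-affinity API directly. A minimal Python sketch, assuming a Linux box where CPU 0 is usable:

```python
import os

# Pin the current process (pid 0 = self) to CPU 0. Under Linux's default
# first-touch NUMA policy, memory this process then allocates tends to land
# on the local node, avoiding the extra HT-hop latency described above.
os.sched_setaffinity(0, {0})

# Confirm the new affinity mask.
print(sorted(os.sched_getaffinity(0)))  # -> [0]
```

numactl goes further by also binding the memory policy (`--membind`); the sketch above only constrains which CPU the scheduler may use.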

I keep thinking to myself, how long until core system architecture
interconnects become more complicated than high-end ethernet routing
switches?  Or are we already at that point?

I wouldn't be surprised if they were already there. Plus there's talk now of how they're hitting limits on CPUs, so companies like AMD are talking about mixing general-purpose CPUs with special-purpose chips. That scares me in the sense that it could create splits between AMD-based and Intel-based designs that would break the gains we get today from having compatible x86(_64) systems from competing vendors. I'd hate to have to support even more completely different architectures in our compute farms.

--
Mike Marion-Unix/Linux Admin-http://www.miguelito.org
A nerd is someone whose life revolves around computers and technology.
A geek is someone whose life revolves around computers and technology... and
likes it!  - Stolen from a /. post.



--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-list