    I'm impressed with the different views everyone has. I don't know how

I still don't quite understand what differences you're referring to.

    many of you would agree with me: a multicore processor, let's say a
    quad, is 4 nodes in one. Could one say it like that?

no. a multicore processor is just an SMP, which has been around for a
long time and is familiar. calling each core a node is merely
misleading.  call them cores ('processor' would be OK, but people often
conflate processor with chip).  anyone interested in clear communication
also talks about sockets - meaning whatever cores/chips in a socket
share a single system/memory interface.
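
fwiw, here's a rough sketch of the distinction in code (Linux-specific;
it assumes the usual /proc/cpuinfo fields, where 'physical id' names the
socket and 'core id' the core within it):

# count sockets, cores and hardware threads from /proc/cpuinfo
# (a sketch, not a definitive tool - field names vary a bit across
#  architectures)
from collections import defaultdict

sockets = defaultdict(set)   # physical id (socket) -> set of core ids
threads = 0

for block in open("/proc/cpuinfo").read().strip().split("\n\n"):
    fields = dict((k.strip(), v.strip())
                  for k, v in (line.split(":", 1)
                               for line in block.splitlines() if ":" in line))
    if "processor" in fields:
        threads += 1
        sockets[fields.get("physical id", "0")].add(
            fields.get("core id", fields["processor"]))

cores = sum(len(c) for c in sockets.values())
print("sockets:", len(sockets), "cores:", cores, "hw threads:", threads)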

I would not.  To me a node is a physical thing.

I would disagree, slightly.  I would say that a node is a single system
image. That is, it's one image of the operating system. The physical
boundary is a good rule of thumb, but doesn't always work.

a node is a memory domain.  NUMAlink is something of a special case,
since it exists to extend the memory-sharing domain across nodes -
when discussing Origin/Altix, I usually refer to its components as
"bricks" (indeed, this term has appeared in SGI literature).

administered it as a single 8-way system. Using the more general
definition of node, I would call that system a single node.

right - memory sharing domain.
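
if you want to see what memory domains the kernel thinks it has, a
minimal sketch (again Linux-specific, using the sysfs layout under
/sys/devices/system/node):

# list the NUMA nodes (memory domains) and which cpus sit in each one
import glob, os

for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    cpus = open(os.path.join(node, "cpulist")).read().strip()
    meminfo = open(os.path.join(node, "meminfo")).read()
    mem_kb = next(int(l.split()[3])
                  for l in meminfo.splitlines() if "MemTotal" in l)
    print("%s: cpus %s, %d MB" % (os.path.basename(node), cpus, mem_kb // 1024))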

ScaleMP, which makes multiple systems connected by IB behave as a single
system image by means of a BIOS overlay (if that's the right term), also
blurs the physical boundary when defining a node.

well, it's software shared memory, which has been done many times before.
I don't think it's much of a terminological challenge, since the nodes
are not "really" sharing memory (i.e., with cacheline coherency).

SiCortex blurs the line in the other direction. Their deskside system
has 72 of their RISC processors in it, but has 6 "nodes", each with 12
cores running a separate instance of the operating system. And then
there's the "master" node (the one that provides the UI and
cross-compiler), which runs on AMD64 processor(s).

again, the touchstone should be memory domains - trying to use the OS
structure seems unhelpful to me.  you may care about nodes because they
represent a unit with shared properties that you may need to pay
attention to, even in an MPI job: if a node crashes, all the ranks on
that node go away (and eventually the rest of the job, of course); if
you run out of memory on that node, all the ranks there are affected;
ranks may be bound to cores or may jostle around; they may run into
memory bandwidth limits; and they probably share a single interconnect
port.
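
this is also visible from inside an MPI job.  a minimal mpi4py sketch
(assuming mpi4py and an MPI-3 library, which provide Split_type and
COMM_TYPE_SHARED) that groups ranks by the shared-memory domain they
landed on - everything in one of these sub-communicators crashes,
swaps and contends for bandwidth together:

# split COMM_WORLD into per-node communicators: ranks that can share
# memory (i.e. sit in the same memory domain) end up together.
# run with something like: mpirun -np 16 python nodecomm.py
from mpi4py import MPI

world = MPI.COMM_WORLD
node = world.Split_type(MPI.COMM_TYPE_SHARED)

print("world rank %d/%d -> node-local rank %d/%d on %s"
      % (world.rank, world.size, node.rank, node.size,
         MPI.Get_processor_name()))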
