Luck, Tony wrote:
I remember this was discussed some months ago, but it still seems that on 2.6.10 SD_NODES_PER_DOMAIN is statically defined to the value 6. This is not what we expect on Bull ia64 platforms, which are built from modules of 4 bricks of 4 CPUs each.


In that case, yes, you would be better off with a different value for SD_NODES_PER_DOMAIN, maybe 16? It is really something you want to be able to set in sub-architecture-specific code. You'd really have to test and find out.
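
For what it's worth, here is a minimal sketch of what a sub-architecture override could look like. In 2.6.10 the value is just a compile-time #define, and the CONFIG_ symbol below is purely hypothetical, used only for illustration:

/*
 * Sketch only, not actual kernel code.  SD_NODES_PER_DOMAIN is a
 * plain compile-time constant in 2.6.10; a sub-architecture could
 * in principle select its own value like this.
 * CONFIG_IA64_BULL_NOVASCALE is a hypothetical config symbol.
 */
#ifdef CONFIG_IA64_BULL_NOVASCALE
#define SD_NODES_PER_DOMAIN	16	/* nodes per top-level scheduling domain */
#else
#define SD_NODES_PER_DOMAIN	6	/* current default in 2.6.10 */
#endif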

Although, in general I don't think our multiprocessor scheduling is very efficient at the moment, which is what I'm working on now - so any change I make might unfortunately invalidate your testing.


I guess I still don't understand how defining the number of nodes per domain gets the *right* nodes assigned to a domain. Does this rely on node discovery code assigning logical node numbers in such a way that nodes 0, 1, 2, 3 belong to one domain, and nodes 4, 5, 6, 7 belong to the next domain (for a system where SD_NODES_PER_DOMAIN=4)?


It uses node_distance, which IIRC is implemented to use SLIT on ia64.


What if we have a system where node numbers are effectively randomly assigned by firmware at power-on? Then nodes 0, 3, 6, 7 might make up a super-node, but we'll create a couple of domains that have a jumbled mix of nodes from each super-node.


If node_distance is random then yeah that could happen.
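
To make that concrete, here is a throwaway user-space toy (not the kernel's actual domain-building code) that groups nodes purely by node_distance over a made-up SLIT. The point is that the grouping follows the distances, not the logical node numbers, so it only degenerates if the SLIT itself is meaningless:

/*
 * Toy illustration of distance-based node grouping.  The matrix is
 * an invented SLIT for 8 nodes forming two "super-nodes" {0,3,6,7}
 * and {1,2,4,5}: local distance 10, same super-node 20, remote 40.
 */
#include <stdio.h>

#define NR_NODES            8
#define SD_NODES_PER_DOMAIN 4

static const int slit[NR_NODES][NR_NODES] = {
	/*        0   1   2   3   4   5   6   7 */
	/* 0 */ {10, 40, 40, 20, 40, 40, 20, 20},
	/* 1 */ {40, 10, 20, 40, 20, 20, 40, 40},
	/* 2 */ {40, 20, 10, 40, 20, 20, 40, 40},
	/* 3 */ {20, 40, 40, 10, 40, 40, 20, 20},
	/* 4 */ {40, 20, 20, 40, 10, 20, 40, 40},
	/* 5 */ {40, 20, 20, 40, 20, 10, 40, 40},
	/* 6 */ {20, 40, 40, 20, 40, 40, 10, 20},
	/* 7 */ {20, 40, 40, 20, 40, 40, 20, 10},
};

static int node_distance(int a, int b)
{
	return slit[a][b];
}

int main(void)
{
	/* For each node, greedily pick its SD_NODES_PER_DOMAIN nearest nodes. */
	for (int node = 0; node < NR_NODES; node++) {
		int used[NR_NODES] = {0};

		printf("node %d spans:", node);
		for (int i = 0; i < SD_NODES_PER_DOMAIN; i++) {
			int best = -1;

			for (int n = 0; n < NR_NODES; n++) {
				if (used[n])
					continue;
				if (best < 0 ||
				    node_distance(node, n) < node_distance(node, best))
					best = n;
			}
			used[best] = 1;
			printf(" %d", best);
		}
		printf("\n");
	}
	return 0;
}

With this matrix the span for each node comes out as {0, 3, 6, 7} or {1, 2, 4, 5}, no matter how firmware happened to order the logical node numbers; if the distances really were random, though, you'd get the jumbled mix described above.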


