On 27/08/2013 05:07, Christopher Samuel wrote:
> On 27/08/13 00:07, Brice Goglin wrote:
>
> > But there's a more general problem here, some people may want
> > something similar for other cases. I need to think about it.
>
> Something like a sort order perhaps, combined with some method to
> exclude or weight PUs based on some metrics (including a user defined
> weight)?

Excluding is already supported with --restrict.
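
For instance (with an arbitrary count of 8 jobs), you can already do something like

   hwloc-distrib --restrict $(hwloc-calc all ~core:0) 8

to distribute 8 jobs among everything but the first core of the machine.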


If you want to avoid core 0 on each socket if possible, and at least
avoid core 0 of the entire machine, you'd need a command-line like this:

   hwloc-distrib --weight 1 socket:all.core:0 --weight 2 core:0 ...

Instead of doing

   if [ $(hwloc-calc -N pu all ~socket:all.core:0) -ge $jobs ]; then
      hwloc-distrib --restrict $(hwloc-calc all ~socket:all.core:0) ...
   elif [ $(hwloc-calc -N pu all ~core:0) -ge $jobs ]; then
      hwloc-distrib --restrict $(hwloc-calc all ~core:0) ...
   else
      hwloc-distrib ...
   fi


Another solution would be to have hwloc-distrib error out when there are
not enough objects to distribute the jobs. You'd then do:

   hwloc-distrib --new-option --restrict $(hwloc-calc all ~socket:all.core:0) ... \
   || hwloc-distrib --new-option --restrict $(hwloc-calc all ~core:0) ... \
   || hwloc-distrib ...


And if you want to use entire cores instead of individual PUs, you can
still use "--to core" to stop distributing once you reach the core level.
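
For example (again with an arbitrary count of 8 jobs), something like

   hwloc-distrib --to core 8

should give you 8 locations that each cover entire cores instead of
single PUs.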


> I had a quick poke around looking at /proc/irq/*/ and it would appear
> you can gather info about which CPUs are eligible to handle IRQs from
> the smp_affinity bitmask (or smp_affinity_list).

Unfortunately, smp_affinity_list is only accessible to root; that's why
we never used it in hwloc.
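
For reference, reading the bitmask looks something like this (arbitrary
IRQ number, machine-dependent output):

   $ cat /proc/irq/16/smp_affinity
   ff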

> The node file there just "shows the node to which the device using the
> IRQ reports itself as being attached. This hardware locality
> information does not include information about any possible driver
> locality preference."

Ah, I missed the addition of the "node" file. This one is world-accessible.
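
For instance, as a normal user (with an arbitrary IRQ number):

   $ cat /proc/irq/16/node
   0

The value is the NUMA node that the device reports being attached to.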

Brice
