Wow! You are awesome!

This works great!

Thanks a bunch.

James

On 2/3/17, Brice Goglin <brice.gog...@inria.fr> wrote:
> On 03/02/2017 23:01, James Elliott wrote:
>> On 2/3/17, Brice Goglin <brice.gog...@inria.fr> wrote:
>>> What do you mean by shaded? Red or green? Red means unavailable,
>>> and showing it requires --whole-system everywhere. Green means that's
>>> where the process is bound. But XML doesn't store the information about
>>> where the process is bound, so you may only get green in (2).
>> This is exactly what I am attempting to do (and finding that it does
>> not work). I would like to have a figure with green shading so that I
>> have a visual representation of where my MPI process lived on the machine.
>
> Try this:
>
> lstopo --whole-system --no-io -f hwloc-${rank}.xml
> for pu in $(hwloc-calc --whole-system -H PU --sep " " $(hwloc-bind --get)); do
>     hwloc-annotate hwloc-${rank}.xml hwloc-${rank}.xml $pu info lstopoStyle Background=#00ff00
> done
> ...
> <display with lstopo -i hwloc-${rank}.xml>
>
> How it works:
> * hwloc-bind --get retrieves the current binding as a bitmask
> * hwloc-calc converts this bitmask into a space-separated list of PU
> indexes (other outputs are possible if needed, such as cores, or the
> largest objects included in the binding)
> * the for loop iterates over these objects, and hwloc-annotate adds an
> info attribute "lstopoStyle Background=#00ff00" to each of them
> * lstopo then uses this attribute to change the background color of
> these PU boxes in the graphical output (see the wrapper sketch below)
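>
> Putting these steps together, a per-rank launcher script might look
> like this (untested sketch; OMPI_COMM_WORLD_RANK is Open MPI's rank
> variable, adjust for your MPI):
>
>   #!/bin/sh
>   # derive a per-rank filename (assumes Open MPI; other MPIs use other variables)
>   rank=${OMPI_COMM_WORLD_RANK:-0}
>   # dump this rank's view of the machine
>   lstopo --whole-system --no-io -f hwloc-${rank}.xml
>   # paint every PU in the current binding green
>   for pu in $(hwloc-calc --whole-system -H PU --sep " " $(hwloc-bind --get)); do
>       hwloc-annotate hwloc-${rank}.xml hwloc-${rank}.xml $pu info lstopoStyle Background=#00ff00
>   done
>   # run the real application with the binding untouched
>   exec "$@"
>
> Launch with something like "mpirun -n 4 ./wrapper.sh ./myapp", then
> render each rank's picture with "lstopo -i hwloc-${rank}.xml rank${rank}.png".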
>
> Make sure you have hwloc >= 1.11.1 for this to work.
>
> Brice
>
>>
>> I currently have a function (in C) that I use in my codes to inspect
>> affinities, but when I discuss app performance with others, I would
>> like to be able to show (graphically) exactly how their app uses the
>> resources.  I work mostly with hybrid MPI/OpenMP codes, developed by
>> smart scientists who are not familiar with things like affinity.
>>
>>>> To test without MPI, you would just need to set a process's affinity
>>>> and then use its PID instead.
>>>>
>>>> What I see is that the XML generated in (1) is identical for all MPI
>>>> processes, even though they have different PIDs and different cpusets.
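>>>>
>>>> For example (untested sketch, no MPI involved): bind a dummy process
>>>> and point lstopo at its PID:
>>>>
>>>>   hwloc-bind core:2 -- sleep 60 &
>>>>   lstopo --whole-system --pid $! binding.png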
>>> Are you talking about different MPI runs, or different MPI ranks within
>>> the same run?
>>>
>>> My feeling is that you think you should be seeing different cpusets for
>>> each process, but they actually have the same cpuset with different
>>> bindings. Cores outside the cpuset are shown red with --whole-system,
>>> or ignored entirely otherwise.
>>>
>>> In (2), you don't have --whole-system, so there are no red cores. But
>>> you have --pid, so you get one green core per process: its binding.
>>> That's why you get a different image for each process.
>>> In (3), you inherit the missing --whole-system from (1) through the XML,
>>> so no red cores either. But XML doesn't save the process binding, so no
>>> green cores either. Hence the same image for each process.
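>>> (Presumably the three commands look something like
>>>   (1) lstopo -f hwloc-${rank}.xml
>>>   (2) lstopo --pid <pid> rank.png
>>>   (3) lstopo -i hwloc-${rank}.xml rank.png
>>> based on the descriptions above.)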
>>>
>>>
>>> Do you care about process binding (what mpirun applies to each rank) or
>>> about cpusets (what the batch scheduler applies to the entire job before
>>> mpirun)?
>>>
>>> If cpusets, just add --whole-system everywhere; that should be enough.
>>> If binding, there's no direct way with lstopo (but we have a way to save
>>> custom colors for individual objects in the XML).
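>>> (With the commands guessed above, that would be e.g.
>>>   lstopo --whole-system -f hwloc-${rank}.xml        in (1)
>>>   lstopo --whole-system -i hwloc-${rank}.xml rank.png   in (3).)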
>>>
>>> Brice
>>>
>>>
_______________________________________________
hwloc-users mailing list
hwloc-users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/hwloc-users
