Brice,

Thanks for your comments. I have worked with this some, but it is still not working.
My goal is to generate images of the cpusets in use when I run a parallel code using mpirun, aprun, srun, etc. The compute nodes lack the libraries needed to generate graphical formats, so I can only extract XML on the nodes. I am testing this locally on a dual-socket, 2-NUMA workstation with 14 cores per socket (28 cores total). I can use OpenMPI to easily spawn/bind processes, e.g.,

  mpirun --map-by ppr:2:NUMA:pe=7 ./hwloc_plot_mpi.sh

hwloc_plot_mpi.sh is very simple:

> #!/bin/bash
>
> pid="$$"
> rank=${OMPI_COMM_WORLD_RANK}
>
> lstopo --pid ${pid} --no-io -f hwloc-${rank}.xml
> lstopo --pid ${pid} --no-io --append-legend "Rank: ${rank}" -f hwloc-${rank}-orig.png
> lstopo --append-legend "Rank: ${rank}" --whole-system --input hwloc-${rank}.xml -f hwloc-${rank}.png

To test things, I
1) write the XML,
2) use the same command to write a PNG directly, and
3) use the generated XML to generate a PNG.

If I am doing things correctly, (2) and (3) should produce the same image. The images from (2) are unique for each process, showing 7 *different* cores shaded in each figure (4 images are generated since I spawn 4 processes). The images from (3) are all identical, with no shading.

To test without MPI, you would just need to set a process's affinity and then use its PID instead; I included a rough sketch after the quoted thread below.

What I see is that the XML generated in (1) is identical for all MPI processes, even though they have different PIDs and different cpusets. I hate to show the MPI stuff here, but it is a convenient way to bind processes.

I hope that I have been clear.

James

On 1/31/17, James Elliott <jjell...@ncsu.edu> wrote:
> Thanks for the info!
>
> On 1/31/2017 11:01 PM, Brice Goglin wrote:
>> shade/highlight is included in the "cpuset" and "allowed_cpuset" fields
>> inside the XML (even when not using --pid).
>>
>> By default, only what's "available" is displayed. If you want
>> "disallowed" things to appear (in different colors), add --whole-system
>> when drawing (in the second command line).
>>
>> Brice
>>
>> On 01/02/2017 06:56, James wrote:
>>> Thanks Brice,
>>>
>>> I believe I am rebuilding it as you say, but I can retry tomorrow at my desk.
>>> I looked in the XML and can see the taskset data, but since I cannot
>>> do --pid ###, it seems to not shade/highlight the tasksets.
>>>
>>> I'll drop the redundant args and try the exact form you list.
>>>
>>> James
>>>
>>> On 1/31/2017 10:52 PM, Brice Goglin wrote:
>>>> On 01/02/2017 00:19, James Elliott wrote:
>>>>> Hi,
>>>>>
>>>>> I seem to be stuck. What I would like to do is use lstopo to generate
>>>>> files that I can plot on another system (the nodes lack the necessary
>>>>> libraries for graphical output).
>>>>>
>>>>> That is, I would like to see something like
>>>>>
>>>>> lstopo --only core --pid ${pid} --taskset --no-io --no-bridges
>>>>>   --append-legend "PID: ${pid}" -f hwloc-${pid}.png
>>>>>
>>>>> But I need to output to XML instead, and plot on another machine, e.g.
>>>>>
>>>>> lstopo --only core --pid ${pid} --taskset --no-io --no-bridges
>>>>>   --append-legend "PID: ${pid}" -f hwloc-${pid}.xml
>>>>> ...
>>>>> Then on another machine,
>>>>>
>>>>> lstopo --input hwloc-<number>.xml output.png
>>>>>
>>>>> where the --pid shading of cpusets is produced in output.png.
>>>>> This does not seem to work. I am fairly new to lstopo; is it possible
>>>>> to achieve this functionality? (I would also like to preserve the
>>>>> append-legend stuff, but I could work out a way to do that on the
>>>>> other host.)
>>>> Hello
>>>>
>>>> My guess is that you would need to export to XML like this:
>>>>
>>>> lstopo --pid ${pid} --no-io -f foo.xml
>>>>
>>>> and reload/draw on the other host like this:
>>>>
>>>> lstopo --input foo.xml --only core --taskset --append-legend "PID:
>>>> ${pid}" -f output.png
>>>>
>>>> Random comments:
>>>> * --no-bridges is implied by --no-io
>>>> * --only and --taskset only apply to the textual output, while you seem
>>>>   to want graphical output as PNG
>>>> * --append-legend only applies to the graphical output
>>>>
>>>> Brice
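P.S. Here is a rough sketch of the non-MPI test I mentioned above. It assumes taskset is available and uses an arbitrary core range and illustrative file names; any other way of binding a PID would work just as well:

  # bind a dummy process to a few cores and grab its PID
  taskset -c 0-6 sleep 60 &
  pid=$!

  # (1) export the topology as seen by that PID to XML (on the node)
  lstopo --pid ${pid} --no-io -f test.xml
  # (2) render a PNG directly from the live process, for reference
  lstopo --pid ${pid} --no-io -f test-orig.png
  # (3) render a PNG from the exported XML (what I would do on the other host)
  lstopo --whole-system --input test.xml -f test.png

  # quick look at what the XML actually recorded for the cpusets
  grep -o 'cpuset="[^"]*"' test.xml | sort | uniq -c

  kill ${pid}

If (2) shows cores 0-6 shaded but (3) shows no shading, that is the same behavior I see with the MPI runs.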
_______________________________________________
hwloc-users mailing list
hwloc-users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/hwloc-users