Wait, I missed the "io node" part of your first mail. The BGQ support is for compute nodes running CNK. Are the I/O nodes running Linux on the same hardware as the compute nodes?
I have an account on vesta. Where should I log on to have a look?

Brice

On 25 March 2014 08:12:58 UTC+01:00, "Biddiscombe, John A." <biddi...@cscs.ch> wrote:
>Brice,
>
>lstopo --whole-system
>
>gives the same output, and setting the env var BG_THREADMODEL=2 does not appear to make any visible difference.
>
>My configure command for compiling hwloc had no special options:
>./configure --prefix=/gpfs/bbp.cscs.ch/home/biddisco/apps/clang/hwloc-1.8.1
>
>Should I rerun with something set?
>
>Thanks
>
>JB
>
>
>From: hwloc-users [mailto:hwloc-users-boun...@open-mpi.org] On Behalf Of Brice Goglin
>Sent: 25 March 2014 08:04
>To: Hardware locality user list
>Subject: Re: [hwloc-users] BGQ question.
>
>On 25/03/2014 07:51, Biddiscombe, John A. wrote:
>I'm compiling hwloc with clang (bgclang++11 from ANL) to run on the I/O nodes of a BGQ. It seems to have compiled OK, and when I run lstopo I get an output like this (below), which looks reasonable, but there are 15 sockets instead of 16. I'm a little worried because the first time I compiled, apps would report an error from hwloc on start and tell me to set HWLOC_FORCE_BGQ=1. When I did set this env var, it would then report that the "topology became empty" and the app would segfault, presumably due to the unexpected return from hwloc.
>
>Can you give a bit more detail on what you did there? I'd like to check whether that case should be better supported or not.
>
>
>I wiped everything and recompiled (not sure what I did differently), and now it behaves more sensibly, but with 15 instead of 16 sockets.
>
>Should I be worried?
>
>The topology detection is hardwired, so you shouldn't worry about the hardware side.
>The problem could be related to how you reserved resources before running lstopo.
>Does lstopo --whole-system see more sockets?
>Does BG_THREADMODEL=2 help?
>
>Brice
>
>
>------------------------------------------------------------------------
>
>_______________________________________________
>hwloc-users mailing list
>hwloc-us...@open-mpi.org
>http://www.open-mpi.org/mailman/listinfo.cgi/hwloc-users
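[Editor's note: a minimal sketch that does programmatically what the thread does with lstopo, i.e. count the sockets hwloc detects with and without the whole-system flag. It assumes a hwloc 1.x install such as the 1.8.1 build above; the file name count_sockets.c and the compile line are illustrative only, not taken from the thread.]

/* count_sockets.c -- report how many sockets hwloc sees,
 * once with default flags and once with the whole-system flag,
 * mirroring the plain lstopo vs. lstopo --whole-system comparison above. */
#include <stdio.h>
#include <hwloc.h>

static int count_sockets(unsigned long flags)
{
    hwloc_topology_t topology;
    int nsockets;

    hwloc_topology_init(&topology);
    hwloc_topology_set_flags(topology, flags);   /* must be set before load */
    hwloc_topology_load(topology);
    nsockets = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_SOCKET);
    hwloc_topology_destroy(topology);
    return nsockets;
}

int main(void)
{
    printf("default:      %d sockets\n", count_sockets(0));
    printf("whole-system: %d sockets\n",
           count_sockets(HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM));
    return 0;
}

Compiled against the install above (e.g. cc count_sockets.c -I$PREFIX/include -L$PREFIX/lib -lhwloc), it could also be run with and without HWLOC_FORCE_BGQ=1 to see which of the two behaviours described in the thread shows up on the I/O nodes.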