Re: [OMPI users] [mpich-discuss] cleaning up old ROMIO (MPI-IO) drivers

2016-01-05 Thread Rob Latham
On 01/05/2016 11:43 AM, Gus Correa wrote: "Hi Rob, your email says you'll keep PVFS2. However, on your blog PVFS2 is not mentioned (on the "Keep" list). I suppose it will be kept, right?" Right. An oversight on my part. PVFS2 will stay. ==rob "Thank you, Gus Correa" On 01/05/2016 12:31 PM,

Re: [OMPI users] cleaning up old ROMIO (MPI-IO) drivers

2016-01-05 Thread Gus Correa
Hi Rob, your email says you'll keep PVFS2. However, on your blog PVFS2 is not mentioned (on the "Keep" list). I suppose it will be kept, right? Thank you, Gus Correa. On 01/05/2016 12:31 PM, Rob Latham wrote: "I'm itching to discard some of the little-used file system drivers in ROMIO, an MPI-IO

[OMPI users] cleaning up old ROMIO (MPI-IO) drivers

2016-01-05 Thread Rob Latham
I'm itching to discard some of the little-used file system drivers in ROMIO, an MPI-IO implementation used by, well, everyone. I've got more details in this ROMIO blog post: http://press3.mcs.anl.gov/romio/2016/01/05/cleaning-out-old-romio-file-system-drivers/ Right now the plan is to keep

Re: [hwloc-users] error from the operating system - Solaris 11.3 - SOLVED

2016-01-05 Thread Brice Goglin
Hello. So processor sets are not taken into account when Solaris reports topology information in kstat etc. Do you know whether hwloc can query processor sets from the C interface? If so, we could apply the processor-set mask to hwloc object cpusets during discovery to avoid your error. Brice Le

Re: [hwloc-users] error from the operating system - Solaris 11.3 - SOLVED

2016-01-05 Thread Karl Behler
There was a processor set defined (command psrset) on this machine. Having removed the psrset, hwloc-info produces a result without error messages:

hwloc-info -v
depth 0: 1 Machine (type #1)
depth 1: 2 NUMANode (type #2)
depth 2: 2 Package (type #3)
depth 3: 12 Core