Re: [OMPI users] [EXTERNAL] Re: How to find MPI ranks located in remote nodes?

2014-12-19 Thread George Bosilca
On Thu, Dec 18, 2014 at 2:27 PM, Jeff Squyres (jsquyres) wrote:
> On Dec 17, 2014, at 9:52 PM, George Bosilca wrote:
>
> >> I don't understand how MPIX_ is better.
> >>
> >> Given that there is *zero* commonality between any MPI extension implemented

Re: [OMPI users] [EXTERNAL] Re: How to find MPI ranks located in remote nodes?

2014-12-19 Thread Nick Papior Andersen
I have been following this with great interest, so I will create a PR for my branch. To be clear, I had already made the OMPI change before this discussion came up, so that will be the one submitted; however, changing to another naming scheme is easy.

2014-12-19 7:48 GMT+00:00 George Bosilca

Re: [OMPI users] [EXTERNAL] Re: How to find MPI ranks located in remote nodes?

2014-12-19 Thread Jeff Squyres (jsquyres)
On Dec 19, 2014, at 2:48 AM, George Bosilca wrote:
> We made little progress over the last couple of [extremely long] emails, and the original topic diverged and got diluted. Let's put our discussion on hold here and let Nick, Keita and the others go ahead and complete their
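
As a side note for readers of this thread: below is a minimal sketch of the portable way to discover which ranks share a node, assuming an MPI-3 library. It uses the standard MPI_Comm_split_type with MPI_COMM_TYPE_SHARED rather than the Open MPI extension under discussion; the name node_comm is illustrative.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Split MPI_COMM_WORLD into one sub-communicator per shared-memory node. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int node_rank, node_size;
    MPI_Comm_rank(node_comm, &node_rank);
    MPI_Comm_size(node_comm, &node_size);

    /* Ranks that ended up in the same node_comm are node-local;
     * every other rank in MPI_COMM_WORLD lives on a remote node. */
    printf("world rank %d/%d is local rank %d/%d on its node\n",
           world_rank, world_size, node_rank, node_size);

    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}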

Re: [OMPI users] Deadlock in OpenMPI 1.8.3 and PETSc 3.4.5

2014-12-19 Thread Jeff Squyres (jsquyres)
George:

(I'm not a member of petsc-maint; I have no idea whether my mail will actually go through to that list)

TL;DR: I do not think that George's change was correct. PETSc is relying on undefined behavior in the MPI standard and should probably update to use a different scheme. More

Re: [OMPI users] Deadlock in OpenMPI 1.8.3 and PETSc 3.4.5

2014-12-19 Thread Jeff Squyres (jsquyres)
On Dec 19, 2014, at 8:58 AM, Jeff Squyres (jsquyres) wrote:
> More specifically, George's change can lead to inconsistency/incorrectness in the presence of multiple threads simultaneously executing attribute actions on a single entity.

Actually -- it's worse than I

Re: [OMPI users] Deadlock in OpenMPI 1.8.3 and PETSc 3.4.5

2014-12-19 Thread George Bosilca
On Fri, Dec 19, 2014 at 8:58 AM, Jeff Squyres (jsquyres) wrote:
> George:
>
> (I'm not a member of petsc-maint; I have no idea whether my mail will actually go through to that list)
>
> TL;DR: I do not think that George's change was correct. PETSc is relying on undefined

Re: [OMPI users] Deadlock in OpenMPI 1.8.3 and PETSc 3.4.5

2014-12-19 Thread Jeff Squyres (jsquyres)
On Dec 19, 2014, at 10:44 AM, George Bosilca wrote:
> Regarding your second point, while I do tend to agree that such an issue is better addressed in the MPI Forum, the last attempt to fix this was certainly not a resounding success.

Yeah, fair enough -- but it wasn't a
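
For readers unfamiliar with the machinery being debated in this thread: below is a minimal sketch of MPI's communicator attribute/keyval interface with a delete callback, assuming a single-threaded caller. This is neither PETSc's code nor George's patch; cleanup_attr and the stored value are illustrative. The thread-safety concern above is about several threads running callbacks like this concurrently on the same object.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Delete callback: invoked when the attribute is deleted/overwritten
 * or the communicator is freed. */
static int cleanup_attr(MPI_Comm comm, int keyval, void *attr_val, void *extra)
{
    free(attr_val);
    return MPI_SUCCESS;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int keyval;
    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, cleanup_attr,
                           &keyval, NULL);

    /* Attach a heap-allocated value to MPI_COMM_SELF. */
    int *state = malloc(sizeof(*state));
    *state = 42;
    MPI_Comm_set_attr(MPI_COMM_SELF, keyval, state);

    int *fetched, flag;
    MPI_Comm_get_attr(MPI_COMM_SELF, keyval, &fetched, &flag);
    if (flag)
        printf("attribute value: %d\n", *fetched);

    /* Deleting the attribute triggers cleanup_attr. */
    MPI_Comm_delete_attr(MPI_COMM_SELF, keyval);
    MPI_Comm_free_keyval(&keyval);

    MPI_Finalize();
    return 0;
}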

[OMPI users] Hwloc error with Openmpi 1.8.3 on AMD 64

2014-12-19 Thread Sergio Manzetti
Dear all, when trying to run NWChem with Open MPI, I get this error:

* Hwloc has encountered what looks like an error from the operating system.
*
* object intersection without inclusion!
* Error occurred in

Re: [OMPI users] Hwloc error with Openmpi 1.8.3 on AMD 64

2014-12-19 Thread Brice Goglin
Hello,

The rationale is to read the message and do what it says :) Have a look at www.open-mpi.org/projects/hwloc/doc/v1.10.0/a00028.php#faq_os_error

Try upgrading your BIOS and kernel. Otherwise, install hwloc and send the output (tarball) of hwloc-gather-topology to hwloc-users (not to
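
For context, the warning quoted above is emitted while hwloc discovers the machine topology. A minimal C sketch, assuming hwloc's headers and library are installed (compile with -lhwloc), that exercises the same discovery path; the program itself is illustrative and not part of the thread:

#include <hwloc.h>
#include <stdio.h>

int main(void)
{
    hwloc_topology_t topology;

    /* Topology discovery: this is where hwloc prints warnings such as
     * "object intersection without inclusion" when the OS reports
     * inconsistent cache/core/NUMA information. */
    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    int depth  = (int) hwloc_topology_get_depth(topology);
    int ncores = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE);
    printf("topology depth: %d, cores detected: %d\n", depth, ncores);

    hwloc_topology_destroy(topology);
    return 0;
}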

[OMPI users] best function to send data

2014-12-19 Thread Diego Avesani
Dear all, I am new to the MPI world. I would like to know the best choice among the different functions, and what each one means. In my program I would like each process to send a vector of data to all the other processes. What do you suggest? Is MPI_Bcast correct, or am I missing something? Thanks
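
A short sketch of the collective usually suggested for this pattern, assuming every rank contributes a fixed-size vector (N, local and all are illustrative names): MPI_Bcast only distributes one root's data to everyone, whereas MPI_Allgather lets every rank receive every other rank's vector.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 4   /* elements contributed by each rank (illustrative) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank fills its own vector. */
    double local[N];
    for (int i = 0; i < N; i++)
        local[i] = rank * 100.0 + i;

    /* Every rank receives every rank's vector, concatenated in rank order. */
    double *all = malloc((size_t)size * N * sizeof(double));
    MPI_Allgather(local, N, MPI_DOUBLE, all, N, MPI_DOUBLE, MPI_COMM_WORLD);

    if (rank == 0)
        printf("rank 0 received %d values in total\n", size * N);

    free(all);
    MPI_Finalize();
    return 0;
}

If each rank contributes a different number of elements, MPI_Allgatherv is the variable-count variant of the same collective.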