Re: [OMPI users] openmpi-1.7.3 still does not accept cpus-per-proc

2013-10-31 Thread tmishima
Thank you, Nathan. I'll wait until the 1.7.4 release. Regards, Tetsuya Mishima > Looks like it is fixed in the development trunk but not 1.7.3. We can fix this in 1.7.4. -Nathan Hjelm, HPC-3, LANL. On Thu, Oct 31, 2013 at 04:17:30PM +0900, tmish...@jcity.maeda.co.jp wrote:

Re: [OMPI users] Prototypes for Fortran MPI_ commands using 64-bit indexing

2013-10-31 Thread Jeff Hammond
Stupid question: Why not just make your first level internal API equivalent to the MPI public API except for s/int/size_t/g and have the Fortran bindings drop directly into that? Going through the C int-erface seems like a recipe for endless pain... Jeff On Thu, Oct 31, 2013 at 4:05 PM, Jeff

Re: [OMPI users] Prototypes for Fortran MPI_ commands using 64-bit indexing

2013-10-31 Thread Jeff Squyres (jsquyres)
For giggles, try using MPI_STATUS_IGNORE (assuming you don't need to look at the status at all) and see if that works for you. Meaning: I wonder if we're computing the status size for Fortran incorrectly in the -i8 case... On Oct 31, 2013, at 1:58 PM, Jim Parker
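A minimal sketch of the workaround being suggested (names, counts, and buffer sizes here are illustrative, not taken from the thread; it assumes an Open MPI build whose Fortran bindings match the application's -i8 flags):

  program recv_ignore
    use mpi
    implicit none
    integer :: ierr, rank
    integer :: buf(4)            ! default INTEGER is 8 bytes under -i8

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

    if (rank == 0) then
       buf = 42
       call MPI_SEND(buf, 4, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, ierr)
    else if (rank == 1) then
       ! No status array to overrun: nothing is written back
       ! beyond the receive buffer itself.
       call MPI_RECV(buf, 4, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, &
                     MPI_STATUS_IGNORE, ierr)
    end if

    call MPI_FINALIZE(ierr)
  end program recv_ignore

If the corruption disappears once MPI_STATUS_IGNORE is used, that would point at the Fortran status size rather than at the receive itself.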

Re: [OMPI users] Prototypes for Fortran MPI_ commands using 64-bit indexing

2013-10-31 Thread Jeff Squyres (jsquyres)
On Oct 30, 2013, at 11:55 PM, Jim Parker wrote: > Perhaps I should start with the most pressing issue for me. I need 64-bit indexing. @Martin, you indicated that even if I get this up and running, the MPI library still uses signed 32-bit ints to count

Re: [OMPI users] Prototypes for Fortran MPI_ commands using 64-bit indexing

2013-10-31 Thread Jim Parker
Some additional info that may jog some solutions. Calls to MPI_SEND do not cause memory corruption, only calls to MPI_RECV. Since the main difference is that MPI_RECV needs a "status" array and MPI_SEND does not, this seems to indicate to me that something is wrong with status. Also, I can run
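For reference, the receive-side pattern the suspicion centers on looks roughly like this (a sketch with made-up names, not code from the thread); the status array is the only output argument MPI_RECV has that MPI_SEND does not:

  subroutine recv_with_status(buf, n, src, comm)
    use mpi
    implicit none
    integer, intent(in)    :: n, src, comm
    integer, intent(inout) :: buf(n)
    integer :: ierr
    integer :: status(MPI_STATUS_SIZE)  ! length taken from the library header

    call MPI_RECV(buf, n, MPI_INTEGER, src, 0, comm, status, ierr)
    ! status(MPI_SOURCE) and status(MPI_TAG) are only safe to read if the
    ! library's idea of the status layout matches the -i8 application's.
  end subroutine recv_with_status

If MPI_STATUS_SIZE was computed for one integer width but the application reserves the array with another, the library can write past the end of status, which would match corruption on MPI_RECV but not on MPI_SEND.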

[OMPI users] SIGSEGV in opal_hwloc152_hwloc_bitmap_or.A // Bug in 'hwloc'?

2013-10-31 Thread Paul Kapinos
Hello all, using 1.7.x (1.7.2 and 1.7.3 tested), we get SIGSEGV from somewhere deep inside the 'hwloc' library - see the attached screenshot. Because the error is strongly tied to just one single node, which in turn is a kinda special one (see output of 'lstopo -'), it smells like an error in

Re: [OMPI users] (no subject)

2013-10-31 Thread Ralph Castain
Yes, though the degree of impact obviously depends on the messaging pattern of the app. On Oct 31, 2013, at 2:50 AM, MM wrote: > Of course, by this you mean, with the same total number of processes, e.g. 64 processes on 1 node using shared mem, vs 64 processes spread

Re: [OMPI users] openmpi-1.7.3 still does not accept cpus-per-proc

2013-10-31 Thread Nathan Hjelm
Looks like it is fixed in the development trunk but not 1.7.3. We can fix this in 1.7.4. -Nathan Hjelm HPC-3, LANL On Thu, Oct 31, 2013 at 04:17:30PM +0900, tmish...@jcity.maeda.co.jp wrote: > Hello, I asked Ralph to re-enable cpus-per-proc in openmpi-1.7.x one year ago. According to

Re: [OMPI users] (no subject)

2013-10-31 Thread MM
Of course, by this you mean, with the same total number of processes, e.g. 64 processes on 1 node using shared mem vs 64 processes spread over 2 nodes (e.g. 32 each)? On 29 October 2013 14:37, Ralph Castain wrote: > As someone previously noted, apps will always run slower

[OMPI users] Fwd: Prototypes for Fortran MPI_ commands using 64-bit indexing

2013-10-31 Thread Jim Parker
Ok, all, where to begin... Perhaps I should start with the most pressing issue for me: I need 64-bit indexing. @Martin, you indicated that even if I get this up and running, the MPI library still uses signed 32-bit ints to count (your term), or index (my term), the recvbuffer lengths. More
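A rough sketch of the count limit being discussed (subroutine name and the guard are illustrative, and it assumes the application is built with -i8 as in the thread): the application-side count is an 8-byte INTEGER, but a single receive is still capped by the signed 32-bit int the library counts with.

  subroutine recv_big(recvbuffer, n, src, comm)
    use mpi
    implicit none
    integer, intent(in)  :: n, src, comm    ! 8-byte INTEGERs under -i8
    real(8), intent(out) :: recvbuffer(n)
    integer :: ierr

    if (n > 2147483647_8) then
       ! The count cannot survive the trip through a C "int"; the
       ! transfer has to be split, or described with a derived datatype.
       call MPI_ABORT(MPI_COMM_WORLD, 1, ierr)
    end if

    call MPI_RECV(recvbuffer, n, MPI_REAL8, src, 0, comm, &
                  MPI_STATUS_IGNORE, ierr)
  end subroutine recv_big

So -i8 changes what the Fortran side can express, but not how many elements one MPI call can describe.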