Re: [OMPI users] Problem building OpenMPI with PGI compilers

2009-12-11 Thread David Turner
Jeff, On Thu, 10 Dec 2009, Jeff Squyres wrote: > Actually, I was wrong. You *can't* just take the SVN trunk's autogen.sh and use it with a
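For context, building Open MPI from an SVN checkout means running autogen.sh before configure, with the PGI compiler drivers passed in explicitly. A minimal sketch (pgcc/pgCC/pgf77/pgf90 are the standard PGI driver names; the install prefix is illustrative):

    # regenerate the configure machinery, then build with PGI
    ./autogen.sh
    ./configure CC=pgcc CXX=pgCC F77=pgf77 FC=pgf90 --prefix=$HOME/openmpi-pgi
    make all install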

Re: [OMPI users] OpenMPI 1.4 RPM Spec file problem

2009-12-11 Thread Jeff Squyres
On Dec 9, 2009, at 4:47 PM, Jim Kusznir wrote: > One (on gcc only): the D_FORTIFY_SOURCE build failure. I've had to move the if test "$using_gcc" = 0; then line down to after the RPM_OPT_FLAGS= that includes D_FORTIFY_SOURCE; otherwise the compile blows up. Hmm. Can you explain why /
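A sketch of the reordering Jim describes, assuming the spec file's shell fragment looks roughly like this (the variable names come from the quote above; the sed expression is illustrative, not the actual spec file contents):

    # RPM_OPT_FLAGS typically carries -D_FORTIFY_SOURCE on Fedora/RHEL
    CFLAGS="$RPM_OPT_FLAGS"
    # the test must come *after* CFLAGS is set, so non-gcc builds
    # can strip the gcc-specific flag before it reaches the compiler
    if test "$using_gcc" = 0; then
        CFLAGS=`echo $CFLAGS | sed -e 's@-D_FORTIFY_SOURCE[^ ]*@@g'`
    fi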

Re: [OMPI users] Problem building OpenMPI with PGI compilers

2009-12-11 Thread Jeff Squyres
Sorry -- I neglected to update the list yesterday: I got the RM approval and committed the fix to the v1.4 branch. So the PGI fix should be in last night's 1.4 snapshot. Could someone out in the wild give it a whirl and let me know if it works for you? (it works for *me*) On Dec 10, 2009,
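For anyone willing to give it that whirl, a sketch of fetching and building a v1.4 nightly with PGI (the tarball name here is a placeholder; check the nightly download page for the current snapshot):

    # illustrative filename -- substitute the current snapshot
    wget http://www.open-mpi.org/nightly/v1.4/openmpi-1.4-latest.tar.bz2
    tar xjf openmpi-1.4-latest.tar.bz2 && cd openmpi-1.4-*
    ./configure CC=pgcc CXX=pgCC F77=pgf77 FC=pgf90 --prefix=$HOME/ompi-nightly
    make -j4 all install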

Re: [OMPI users] mpirun only works when -np <4 (Gus Correa)

2009-12-11 Thread Matthew MacManes
On my system, mpirun -np 8 -mca btl_sm_num_fifos 7 is much slower (and appeared to hang after several thousand iterations) than -mca btl ^sm. Is there another, better way I should be modifying the FIFOs to get better performance? Matt On Dec 11, 2009, at 4:04 AM, Terry Dontje wrote:
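For reference, the three runs under discussion, with ./a.out standing in for the actual application:

    # default: shared-memory BTL with its default FIFO count
    mpirun -np 8 ./a.out
    # one sm FIFO per peer process (the slow/hanging case above)
    mpirun -np 8 -mca btl_sm_num_fifos 7 ./a.out
    # exclude the sm BTL entirely; on-node peers fall back to another BTL (e.g. tcp)
    mpirun -np 8 -mca btl ^sm ./a.out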

Re: [OMPI users] checkpoint opempi-1.3.3+sge62

2009-12-11 Thread Sergio Díaz
Hi Josh, Here is the file. I will try to apply the trunk, but I think I broke my openmpi installation doing "something" and I don't know what :-( . I was modifying the mca parameters... When I send a job, the orted daemon spawned on the SLAVE host is launched in a loop till they
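To track down a stray MCA setting, something like the following should show every parameter Open MPI recognizes and the usual files where a persistent value may have been left behind (assuming a standard install layout; $prefix is your install prefix):

    # dump all recognized MCA parameters and their current values
    ompi_info --param all all
    # per-user and system-wide parameter files
    cat $HOME/.openmpi/mca-params.conf
    cat $prefix/etc/openmpi-mca-params.conf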

Re: [OMPI users] mpirun only works when -np <4 (Gus Correa)

2009-12-11 Thread Terry Dontje
On Thu, 10 Dec 2009 17:57:27 -0500, Jeff Squyres wrote: On Dec 10, 2009, at 5:53 PM, Gus Correa wrote: > How does the efficiency of loopback (let's say, over TCP and over IB) compare with "sm"? Definitely not as good; that's why we have sm. :-) I don't
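One way to measure the difference Gus asks about is to force each transport in turn for an on-node run; any ping-pong benchmark will do (./pingpong is a placeholder):

    # shared memory between on-node peers
    mpirun -np 2 -mca btl sm,self ./pingpong
    # TCP loopback between the same peers
    mpirun -np 2 -mca btl tcp,self ./pingpong
    # IB loopback, where an HCA is present
    mpirun -np 2 -mca btl openib,self ./pingpong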