Re: [OMPI users] users Digest, Vol 1911, Issue 4

2011-05-20 Thread Jason Mackay
"MPI can get through your firewall, right?" As far as I can tell the firewall is not the problem - have tried it with firewalls disabled, automatic fw polices based on port requests from MPI, and with manual exception policies. > From: users-requ...@open-mpi.org > Subject: users Digest, Vol

Re: [OMPI users] v1.5.3-x64 does not work on Windows 7 workgroup

2011-05-20 Thread Damien
MPI can get through your firewall, right? Damien On 20/05/2011 12:53 PM, Jason Mackay wrote: I have verified that disabling UAC does not fix the problem. xhlp.exe starts, threads spin up on both machines, CPU usage is at 80-90% but no progress is ever made. >From this state, Ctrl-break on

Re: [OMPI users] v1.5.3-x64 does not work on Windows 7 workgroup

2011-05-20 Thread Jason Mackay
I have verified that disabling UAC does not fix the problem. xhlp.exe starts, threads spin up on both machines, CPU usage is at 80-90% but no progress is ever made. From this state, Ctrl-break on the head node yields the following output: [REMOTEMACHINE:02032] [[20816,1],0]-[[20816,0],0]

Re: [OMPI users] openmpi (1.2.8 or above) and Intel composer XE 2011 (aka 12.0)

2011-05-20 Thread Gus Correa
Hi Salvatore Just in case ... You say you have problems when you use "--mca btl openib,self". Is this a typo in your email? I guess this will disable the shared memory btl intra-node, whereas your other choice "--mca btl_tcp_if_include ib0" will not. Could this be the problem? Here we use
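For reference, the two settings being compared do different things: an explicit btl list that omits `sm` disables the shared-memory transport between ranks on the same node, while `btl_tcp_if_include` only restricts which interface the TCP BTL uses. A sketch of the distinction (these are standard Open MPI MCA parameters; executable and process counts are illustrative):

```
# Explicit BTL list: only InfiniBand (openib) and loopback (self);
# intra-node ranks cannot use the shared-memory BTL.
mpirun --mca btl openib,self -np 8 ./a.out

# Adding sm restores shared memory for ranks on the same node.
mpirun --mca btl openib,sm,self -np 8 ./a.out

# This variant instead restricts the TCP BTL to the IP-over-InfiniBand
# interface, leaving BTL selection otherwise at its defaults.
mpirun --mca btl_tcp_if_include ib0 -np 8 ./a.out
```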

Re: [OMPI users] Openib with > 32 cores per node

2011-05-20 Thread Jeff Squyres
If you're using QLogic, you might want to try the native PSM Open MPI support rather than the verbs support. QLogic cards only "sorta" support verbs in order to say that they're OFED-compliant; their native PSM interface is more performant than verbs for MPI. Assuming you built OMPI with PSM

Re: [OMPI users] Openib with > 32 cores per node

2011-05-20 Thread Robert Horton
Hi, Thanks for getting back to me (and thanks to Jeff for the explanation too). On Thu, 2011-05-19 at 09:59 -0600, Samuel K. Gutierrez wrote: > Hi, > > On May 19, 2011, at 9:37 AM, Robert Horton wrote > > > On Thu, 2011-05-19 at 08:27 -0600, Samuel K. Gutierrez wrote: > >> Hi, > >> > >> Try

Re: [OMPI users] openmpi (1.2.8 or above) and Intel composer XE 2011 (aka 12.0)

2011-05-20 Thread Salvatore Podda
We are still struggling with these problems. Actually the new version of the Intel compilers does not seem to be the real issue. We run into the same errors with the `gcc' compilers as well. We succeeded in building an openmpi-1.2.8 (with different compiler flavours) rpm from the installation of

Re: [OMPI users] TotalView Memory debugging and OpenMPI

2011-05-20 Thread Peter Thompson
Thanks Ralph. I've seen the messages generated in b...@open-mpi.org, so I figured something was up! I was going to provide the unified diff, but then ran into another issue in testing where we immediately hit a seg fault, even with this fix. It turns out that prepending /lib64

[OMPI users] Issue with mpicc --showme in windows

2011-05-20 Thread AMARNATH, Balachandar
Hello, Here on my Windows machine, if I run mpicc --showme, I get erroneous output like the one below: C:\>C:\Users\BAAMARNA5617\Programs\mpi\OpenMPI_v1.5.3-win32\bin\mpicc.exe --showme Cannot open configuration file C:/Users/hpcfan/Documents/OpenMPI/openmpi-1.5.3/i
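For context, `mpicc --showme` is only supposed to print the underlying compiler command line that the wrapper would invoke; the error above suggests the wrapper is reading a configuration data file from a path baked in on the build machine (`C:/Users/hpcfan/...`) rather than from the install tree. On a working installation the output is a single compiler command, roughly of this shape (paths and flags illustrative, not taken from the thread):

```
$ mpicc --showme
gcc -I/opt/openmpi/include -L/opt/openmpi/lib -lmpi
```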

Re: [OMPI users] Trouble with MPI-IO

2011-05-20 Thread Jeff Squyres
On May 20, 2011, at 6:23 AM, Jeff Squyres wrote: > Shouldn't ijlena and ijdisp be 1D arrays, not 2D arrays? Ok, if I convert ijlena and ijdisp to 1D arrays, I don't get the compile error (even though they're allocatable -- so allocate was a red herring, sorry). That's all that "use mpi" is

Re: [OMPI users] MPI_ERR_TRUNCATE with MPI_Allreduce() error, but only sometimes...

2011-05-20 Thread Jeff Squyres
Sorry for the super-late reply. :-\ Yes, ERR_TRUNCATE means that the receiver didn't have a large enough buffer. Have you tried upgrading to a newer version of Open MPI? 1.4.3 is the current stable release (I have a very dim and not guaranteed to be correct recollection that we fixed

Re: [OMPI users] MPI_Alltoallv function crashes when np > 100

2011-05-20 Thread Jeff Squyres
I missed this email in my INBOX, sorry. Can you be more specific about what exact error is occurring? You just say that the application crashes...? Please send all the information listed here: http://www.open-mpi.org/community/help/ On Apr 26, 2011, at 10:51 PM, 孟宪军 wrote: > It seems

Re: [OMPI users] Trouble with MPI-IO

2011-05-20 Thread Jeff Squyres
On May 19, 2011, at 11:24 PM, Tom Rosmond wrote: > What fortran compiler did you use? gfortran. > In the original script my Intel compile used the -132 option, > allowing up to that many columns per line. Gotcha. >> x.f90:99.77: >> >>call

Re: [OMPI users] Problem with MPI_Request, MPI_Isend/recv and MPI_Wait/Test

2011-05-20 Thread David Büttner
Hello, thanks for the quick answer. I am sorry that I forgot to mention this: I did compile OpenMPI with MPI_THREAD_MULTIPLE support and test if required == provided after the MPI_Thread_init call. I do not see any mechanism for protecting the accesses to the requests to a single thread?

Re: [OMPI users] Trouble with MPI-IO

2011-05-20 Thread Tom Rosmond
Thanks for looking at my problem. Sounds like you did reproduce my problem. I have added some comments below On Thu, 2011-05-19 at 22:30 -0400, Jeff Squyres wrote: > Props for that testio script. I think you win the award for "most easy to > reproduce test case." :-) > > I notice that some