Re: [OMPI users] AlphaServers & OpenMPI

2007-05-14 Thread Brian Barrett
On May 13, 2007, at 6:23 AM, Bert Wesarg wrote: Even better: is there a patch available to fix this in the 1.2.1 tarball, so that I can set the full path again with CC? The patch is quite trivial, but it requires a rebuild of the build system (autoheader, autoconf, automake, ...); see here:

Re: [OMPI users] newbie question

2007-05-14 Thread Brian Barrett
I fixed the OOB. I also mucked some things up with it interface-wise that I need to undo :). Anyway, I'll have a look at fixing up the TCP component in the next day or two. Brian On May 10, 2007, at 6:07 PM, Jeff Squyres wrote: Brian -- Didn't you add something to fix exactly this

Re: [OMPI users] MPI_TYPE_STRUCT Not

2007-05-14 Thread Brian Barrett
On May 14, 2007, at 10:21 AM, Nym wrote: I am trying to use MPI_TYPE_STRUCT in a 64-bit Fortran 90 program. I'm using the Intel Fortran Compiler 9.1.040 (and C/C++ compilers 9.1.045). If I try to call MPI_TYPE_STRUCT with the array of displacements that are of type

Re: [OMPI users] multiple MPI_Reduce

2007-05-14 Thread Adrian Knoth
On Mon, May 14, 2007 at 11:59:18PM +0530, Jayanta Roy wrote: > if(myrank = 0 || myrank == 1) > if(myrank = 2 || myrank == 3) Just to be clear that we're not talking about a typo: do you mean assignment or comparison? For comparisons, it is better to put the constant value on the left, so if (2 =

[OMPI users] multiple MPI_Reduce

2007-05-14 Thread Jayanta Roy
Hi, In my 4-node cluster I want to run two MPI_Reduce operations on two communicators (one using Node1 and Node2, the other using Node3 and Node4). Now to create the communicators I used ... MPI_Comm MPI_COMM_G1, MPI_COMM_G2; MPI_Group g0, g1, g2; MPI_Comm_group(MPI_COMM_WORLD, &g0); MPI_Group_incl(g0, g_size, _array[0], &g1);

[OMPI users] MPI_TYPE_STRUCT Not

2007-05-14 Thread Nym
Hi, I am trying to use MPI_TYPE_STRUCT in a 64-bit Fortran 90 program. I'm using the Intel Fortran Compiler 9.1.040 (and C/C++ compilers 9.1.045). If I try to call MPI_TYPE_STRUCT with the array of displacements that are of type INTEGER(KIND=MPI_ADDRESS_KIND), then I get a compilation error:

Re: [OMPI users] profiling MPI - getting the number of send and receive requests made by the MPI library

2007-05-14 Thread Sefa Arslan
There is also a profiler called "Vampir" (I think it was bought by Intel). It creates a very detailed profile of MPI applications and MPI communication. It is very useful. I think it is a library; you compile your program with the Vampir option to be able to use it. It also has a graphical

Re: [OMPI users] profiling MPI - getting the number of send and receive requests made by the MPI library

2007-05-14 Thread Rainer Keller
Hello, On Monday 14 May 2007 14:59, Jeff Squyres wrote: > It doesn't give you stats about the underlying transport, though > (E.g., TCP-level stats). For that, you would need to use PERUSE. > Rainer -- can you comment on how much info the tcp BTL reports via > PERUSE? > > On May 13, 2007, at 5:14

Re: [OMPI users] Problem running hpcc with a threaded BLAS

2007-05-14 Thread Götz Waschk
Hi, here's the result of my examination. The stack size limit set by Gridengine is the culprit. Somehow, the h_vmem limit I gave to my Gridengine job translated into setting the stack size limit to this value (ulimit -s). I've edited /etc/security/limits.conf on all my nodes, adding a hard stack

Re: [OMPI users] profiling MPI - getting the number of send and receive requests made by the MPI library

2007-05-14 Thread Jeff Squyres
Have a look at mpiP: http://mpip.sf.net/ It will give you simple stats on what MPI functions were invoked. Quite handy. It doesn't give you stats about the underlying transport, though (e.g., TCP-level stats). For that, you would need to use PERUSE. Rainer -- can you comment on how