Re: [OMPI users] mpi functions are slow when first called and become normal afterwards

2009-10-29 Thread Brock Palen
When MPI_Bcast and MPI_Reduce are called for the first time, they are very slow. But after that, they run at normal and stable speed. Is there anybody out there who has encountered such a problem? If you need any other information, please let me know and I'll provide it. Thanks in advance.

[OMPI users] mpi functions are slow when first called and become normal afterwards

2009-10-29 Thread rightcfd
We installed a Linux cluster recently. The OS is Ubuntu 8.04. The network is InfiniBand. We run a simple MPI program to compute the value of pi. The program has three stages: MPI_Bcast, computation, and MPI_Reduce. We measure the elapsed time of the computation and communication,
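A minimal sketch of the kind of program described (the interval count, loop bounds, and timing layout are assumptions, not the poster's actual code):

  /* Sketch: broadcast the interval count, compute a partial sum for pi,
   * reduce to rank 0, and time each stage with MPI_Wtime().
   * All constants are placeholders. */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size, n = 100000000;   /* number of intervals (placeholder) */
      double sum = 0.0, pi = 0.0;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      double t0 = MPI_Wtime();
      MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);       /* stage 1: broadcast */
      double t1 = MPI_Wtime();

      double h = 1.0 / (double)n;
      for (int i = rank; i < n; i += size) {              /* stage 2: computation */
          double x = h * ((double)i + 0.5);
          sum += 4.0 / (1.0 + x * x);
      }
      double t2 = MPI_Wtime();

      MPI_Reduce(&sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);  /* stage 3: reduce */
      double t3 = MPI_Wtime();

      if (rank == 0)
          printf("pi ~= %.12f  bcast %.6f s  compute %.6f s  reduce %.6f s\n",
                 pi * h, t1 - t0, t2 - t1, t3 - t2);

      MPI_Finalize();
      return 0;
  }

Timing the very first iteration separately from later ones makes the warm-up effect described in this thread visible.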

Re: [OMPI users] MPI-Send for entire entire matrix when allocating memory dynamically

2009-10-29 Thread Justin Luitjens
Why not do something like this: double **A = new double*[N]; double *A_data = new double[N*N]; for(int i = 0; i < N; ++i) A[i] = &A_data[i*N];
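Spelled out as a hedged sketch (function and variable names are illustrative, not from the original mail): because the row pointers all alias one contiguous buffer, the entire N x N matrix can then be sent with a single MPI call.

  // Sketch: allocate an N x N matrix as one contiguous block plus row
  // pointers, so it can be sent (and received) with a single MPI call.
  // N, dest, tag, and comm are placeholders.
  #include <mpi.h>

  void send_matrix(int N, int dest, int tag, MPI_Comm comm)
  {
      double **A      = new double*[N];     // row pointers
      double  *A_data = new double[N * N];  // contiguous storage
      for (int i = 0; i < N; ++i)
          A[i] = &A_data[i * N];            // row i starts at offset i*N

      // ... fill A[i][j] as usual ...

      // One send covers the entire matrix, since A_data is contiguous.
      MPI_Send(A_data, N * N, MPI_DOUBLE, dest, tag, comm);

      delete[] A;
      delete[] A_data;
  }

The receiving side can post a matching MPI_Recv of N*N doubles into its own contiguous buffer.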

Re: [OMPI users] Disabling tuned collectives in OMPI 1.3.3

2009-10-29 Thread Ralph Castain
Hi Dave, I believe you can turn it "off" by setting "-mca coll ^tuned". This will tell the system to consider all collective modules -except- tuned. HTH, Ralph. On Thu, Oct 29, 2009 at 12:13 PM, David Gunter wrote: > We have a user who's hitting a hang in MPI_Allgather that
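For reference, a complete invocation along those lines might look like this (the process count and executable name are placeholders, not from the thread):

  # Exclude the "tuned" collective component; Open MPI then falls back
  # to its other collective implementations.
  mpirun -mca coll ^tuned -np 16 ./my_app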

Re: [OMPI users] MPI-Send for entire entire matrix when allocating memory dynamically

2009-10-29 Thread Natarajan CS
Hi, thanks for the quick response. Yes, that is what I meant. I thought there was no other way around what I am doing, but it is always good to ask an expert rather than assume! Cheers, C.S.N On Thu, Oct 29, 2009 at 11:25 AM, Eugene Loh wrote: > Natarajan CS wrote: > >

[OMPI users] Disabling tuned collectives in OMPI 1.3.3

2009-10-29 Thread David Gunter
We have a user who's hitting a hang in MPI_Allgather that TotalView shows is inside a tuned collective operation. How do we disable the use of tuned collectives? We have tried setting the priority to 0, but maybe that wasn't the correct way: mpirun -mca coll_tuned_priority 0 ... Should

Re: [OMPI users] MPI-Send for entire entire matrix when allocating memory dynamically

2009-10-29 Thread Eugene Loh
Natarajan CS wrote: Hello all, firstly, my apologies for a duplicate post to the LAM/MPI list. I have the following simple MPI code. I was wondering if there was a workaround for sending a dynamically allocated 2-D matrix? Currently I can send the matrix row by row; however, since rows

Re: [OMPI users] collective communications broken on more than 4 cores

2009-10-29 Thread John R. Cary
This also appears to fix a bug I had reported that did not involve collective calls. The code is appended. When run on a 64-bit architecture with iter.cary$ gcc --version gcc (GCC) 4.4.0 20090506 (Red Hat 4.4.0-4) Copyright (C) 2009 Free Software Foundation, Inc. This is free software; see the

Re: [OMPI users] collective communications broken on more than 4 cores

2009-10-29 Thread Vincent Loechner
> >>> It seems that the calls to collective communication are not > >>> returning for some MPI processes, when the number of processes is > >>> greater than or equal to 5. It's reproducible, on two different > >>> architectures, with two different versions of OpenMPI (1.3.2 and > >>> 1.3.3). It was

Re: [OMPI users] collective communications broken on more than 4 cores

2009-10-29 Thread Jonathan Dursi
On 2009-10-29, at 10:21AM, Vincent Loechner wrote: It seems that the calls to collective communication are not returning for some MPI processes, when the number of processes is greater than or equal to 5. It's reproducible, on two different architectures, with two different versions of OpenMPI

Re: [OMPI users] collective communications broken on more than 4 cores

2009-10-29 Thread Vincent Loechner
> > It seems that the calls to collective communication are not > > returning for some MPI processes, when the number of processes is > > greater than or equal to 5. It's reproducible, on two different > > architectures, with two different versions of OpenMPI (1.3.2 and > > 1.3.3). It was working

Re: [OMPI users] collective communications broken on more than 4 cores

2009-10-29 Thread Jonathan Dursi
On 2009-10-29, at 9:57AM, Vincent Loechner wrote: [...] It seems that the calls to collective communication are not returning for some MPI processes, when the number of processes is greater than or equal to 5. It's reproducible, on two different architectures, with two different versions of

[OMPI users] collective communications broken on more than 4 cores

2009-10-29 Thread Vincent Loechner
Hello to the list, I ran into a problem running a simple program with collective communications on a 6-core processor (6 local MPI processes). It seems that the calls to collective communication are not returning for some MPI processes when the number of processes is greater than or equal to 5.

Re: [OMPI users] problem calling mpirun from script invoked

2009-10-29 Thread Ralph Castain
Please see my earlier response. This proposed solution will work, but may be unstable as it (a) removes all of OMPI's internal variables, some of which are required; and (b) also removes all the variables that might be needed by your system. For example, envars directing the use of specific

[OMPI users] problem calling mpirun from script invoked

2009-10-29 Thread Per Madsen
Could your problem be related to the MCA parameter "contamination" problem, where the child MPI process inherits MCA environment variables from the parent process? That problem still exists. Back in 2007 I was implementing a program that solves two large interrelated systems of equations (+200.000.000

[OMPI users] MPI-Send for entire entire matrix when allocating memory dynamically

2009-10-29 Thread Natarajan CS
Hello all, firstly, my apologies for a duplicate post to the LAM/MPI list. I have the following simple MPI code. I was wondering if there was a workaround for sending a dynamically allocated 2-D matrix? Currently I can send the matrix row by row; however, since rows are not contiguous I cannot