[OMPI users] Hide Abort output

2010-03-31 Thread Yves Caniou
information (the stack). Is there a way to avoid the printing of the note (other than the 2>/dev/null trick)? Or to delay this printing? Thank you. .Yves. -- Yves Caniou Associate Professor at Université Lyon 1, Member of the team project INRIA GRAAL in the LIP ENS-Lyon, Délégation CNRS in Ja

Re: [OMPI users] Hide Abort output

2010-04-01 Thread Yves Caniou
t() message in help queries and fail to > look for the application error message about the root cause. A short > MPI_Abort() message that said "look elsewhere for the real error message" > would be useful. > > Cheers, > David > > On 03/31/2010 07:58 PM, Yves Canio
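A minimal sketch of the workaround this thread converges on, assuming only standard MPI: print the application's own diagnostic on stderr immediately before MPI_Abort(), so the root cause is not lost behind the runtime's abort note. The helper name abort_with_reason() is hypothetical.

#include <stdio.h>
#include "mpi.h"

/* Hypothetical helper: print the real error before aborting, so readers do
   not stop at the generic MPI_Abort() note emitted by the runtime. */
static void abort_with_reason(MPI_Comm comm, int code, const char *why)
{
    int rank;
    MPI_Comm_rank(comm, &rank);
    fprintf(stderr, "rank %d: fatal: %s (the MPI_Abort note below is secondary)\n", rank, why);
    fflush(stderr);
    MPI_Abort(comm, code);
}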

Re: [OMPI users] Hide Abort output

2010-04-05 Thread Yves Caniou
> Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601 > Tele (845) 433-7846 Fax (845) 433-8363 I don't understand how your question is related to mine, since in my case, the application ends correctly and I don't want any output. :? -- Yves Caniou Associate Professor at

[OMPI users] About the correct use of DIET_Finalize()

2010-05-07 Thread Yves Caniou
the call to MPI_Finalize() and obtain an execution without error messages? Thank you for any help. .Yves. -- Yves Caniou Associate Professor at Université Lyon 1, Member of the team project INRIA GRAAL in the LIP ENS-Lyon, Délégation CNRS in Japan French Laboratory of Informatics (JFLI

Re: [OMPI users] About the correct use of DIET_Finalize()

2010-05-09 Thread Yves Caniou
> > FWIW, you should also be able to invoke the MPI_Finalized function to see > if MPI_Finalize has already been invoked. > > On May 7, 2010, at 12:54 AM, Yves Caniou wrote: > > Dear All, > > > > My parallel application ends when each process receives a msg, done in
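A minimal sketch of the MPI_Finalized() suggestion above, assuming a library shutdown (such as DIET_Finalize()) may already have finalized MPI; the wrapper name finalize_mpi_once() is hypothetical.

#include "mpi.h"

/* Only call MPI_Finalize() if it has not already been invoked elsewhere. */
void finalize_mpi_once(void)
{
    int finalized = 0;
    MPI_Finalized(&finalized);
    if (!finalized) {
        MPI_Finalize();
    }
}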

[OMPI users] Execution doesn't go back to shell after MPI_Finalize()

2010-05-19 Thread Yves Caniou
Dear all, I use the following code: #include "stdlib.h" #include "stdio.h" #include "mpi.h" #include "math.h" #include "unistd.h" /* sleep */ int my_num, mpi_size ; int main(int argc, char *argv[]) { MPI_Init(&argc, &argv) ; MPI_Comm_rank(MPI_COMM_WORLD, &my_num); MPI_Comm_size(MPI_COMM_WORLD,

[OMPI users] Program does not finish after MPI_Finalize()

2010-05-24 Thread Yves Caniou
int flag ; MPI_Init(&argc, &argv) ; MPI_Comm_rank(MPI_COMM_WORLD, &my_num); printf("%d calls MPI_Finalize()\n\n\n", my_num) ; MPI_Finalize() ; MPI_Finalized(&flag) ; printf("MPI finalized: %d\n", flag) ; return 0 ; } --- -- Yves Caniou Associate Professor at Université

Re: [OMPI users] Program does not finish after MPI_Finalize()

2010-05-24 Thread Yves Caniou
; and that your environment is pointing to the right place. > > On May 24, 2010, at 12:15 AM, Yves Caniou wrote: > > Dear All, > > (follows a previous mail) > > > > I don't understand the strange behavior of this small code: sometimes it > > ends, sometimes not. The

Re: [OMPI users] Program does not finish after MPI_Finalize()

2010-05-24 Thread Yves Caniou
May 24, 2010, at 2:53 AM, Yves Caniou wrote: > > I rechecked, but didn't see anything wrong. > > Here is how I set my environment. Tkx. > > > > $>mpicc --v > > Using built-in specs. > > COLLECT_GCC=//home/p10015/gcc/bin/x86_64-unknown-linux-gnu-gcc-4.5.0

[OMPI users] About the necessity of cancelation of pending communication and the use of buffer

2010-05-25 Thread Yves Caniou
? Thank you! .Yves. -- Yves Caniou Associate Professor at Université Lyon 1, Member of the team project INRIA GRAAL in the LIP ENS-Lyon, Délégation CNRS in Japan French Laboratory of Informatics (JFLI), * in Information Technology Center, The University of Tokyo, 2-11-16 Yayoi, Bunkyo-ku, Tokyo
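A minimal sketch of one way to handle a pending nonblocking receive before shutdown, assuming the request comes from an earlier MPI_Irecv that may never be matched; the helper name drop_pending_recv() is hypothetical.

#include "mpi.h"

/* If the receive is still pending, cancel it and complete the cancelled
   request before calling MPI_Finalize(). */
void drop_pending_recv(MPI_Request *req)
{
    int done = 0;
    MPI_Status status;
    MPI_Test(req, &done, &status);
    if (!done) {
        MPI_Cancel(req);
        MPI_Wait(req, &status);   /* completes, whether cancelled or matched */
    }
}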

[OMPI users] Bugs in MPI_Abort() -- MPI_Finalize()?

2010-06-02 Thread Yves Caniou
2) MCA ess: tool (MCA v2.0, API v2.0, Component v1.4.2) MCA grpcomm: bad (MCA v2.0, API v2.0, Component v1.4.2) MCA grpcomm: basic (MCA v2.0, API v2.0, Component v1.4.2) -- Yves Caniou Associate Professor at Université Lyon 1, Member of the team project

Re: [OMPI users] Bugs in MPI_Abort() -- MPI_Finalize()?

2010-06-02 Thread Yves Caniou
I forgot the list... - On Wednesday 02 June 2010 14:59:46, you wrote: > On Jun 2, 2010, at 8:03 AM, Ralph Castain wrote: > > I built it with gcc 4.2.1, though - I know we have a problem with shared > > memory hanging when built with gcc 4.4.x, so I wonder if the issue here > > is

[OMPI users] OpenMPI providing rank?

2010-07-28 Thread Yves Caniou
Hi, I have some performance issues on a parallel machine composed of nodes of 16 procs each. The application is launched on multiples of 16 procs for given numbers of nodes. I was told by people using MX MPI with this machine to attach a script to mpiexec that runs 'numactl', in order to

Re: [OMPI users] OpenMPI providing rank?

2010-07-28 Thread Yves Caniou
't answer to my question. .Yves. > --Nysal > > On Wed, Jul 28, 2010 at 9:04 AM, Yves Caniou <yves.can...@ens-lyon.fr>wrote: > > Hi, > > > > I have some performance issue on a parallel machine composed of nodes of > > 16 procs each. The application is launched

Re: [OMPI users] OpenMPI providing rank?

2010-07-28 Thread Yves Caniou
On Wednesday 28 July 2010 11:34:13, Ralph Castain wrote: > On Jul 27, 2010, at 11:18 PM, Yves Caniou wrote: > > On Wednesday 28 July 2010 06:03:21, Nysal Jan wrote: > >> OMPI_COMM_WORLD_RANK can be used to get the MPI rank. For other > >> enviro
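A minimal sketch of using the OMPI_COMM_WORLD_RANK environment variable mentioned above: Open MPI's launcher exports it to each launched process (and to any wrapper around it) before MPI_Init, so it can drive per-rank placement decisions such as the 'numactl' wrapper discussed in this thread. The program below is illustrative only.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Rank as exported by Open MPI's launcher; -1 if not started by mpirun/mpiexec. */
    const char *s = getenv("OMPI_COMM_WORLD_RANK");
    int rank = s ? atoi(s) : -1;
    printf("launcher-provided rank: %d\n", rank);
    return 0;
}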

Re: [OMPI users] OpenMPI providing rank?

2010-07-28 Thread Yves Caniou
ouldn't > do, and doesn't do it as well. You would be far better off just adding > --bind-to-core to the mpirun cmd line. "mpirun -h" says that it is the default, so there is nothing more to do? I don't even have to add "--mca mpi_paffinity_alone 1" ? .Yves. > On Ju