information (the stack).
Is there a way to avoid the printing of the note (apart from the 2>/dev/null
trick)? Or to delay this printing?
Thank you.
.Yves.
--
Yves Caniou
Associate Professor at Université Lyon 1,
Member of the team project INRIA GRAAL in the LIP ENS-Lyon,
Délégation CNRS in Japan French Laboratory of Informatics (JFLI),
* in Information Technology Center, The University of Tokyo,
2-11-16 Yayoi, Bunkyo-ku, Tokyo
t() message in help queries and fail to
> look for the application error message about the root cause. A short
> MPI_Abort() message that said "look elsewhere for the real error message"
> would be useful.
>
> Cheers,
> David
>
> On 03/31/2010 07:58 PM, Yves Caniou wrote:
> Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
> Tele (845) 433-7846 Fax (845) 433-8363
I don't understand how your question is related to mine, since in my case, the
application ends correctly and I don't want any output. :?
the call to
MPI_Finalize() and obtain an execution without error messages?
Thank you for any help.
.Yves.
>
> FWIW, you should also be able to invoke the MPI_Finalized function to see
> if MPI_Finalize has already been invoked.
>
> On May 7, 2010, at 12:54 AM, Yves Caniou wrote:
> > Dear All,
> >
> > My parallel application ends when each process receives a msg, done in
Dear all,
I use the following code:
#include <stdlib.h>
#include <stdio.h>
#include <mpi.h>
#include <math.h>
#include <unistd.h> /* sleep */

int my_num, mpi_size ;

int
main(int argc, char *argv[])
{
  int flag ;

  MPI_Init(&argc, &argv) ;
  MPI_Comm_rank(MPI_COMM_WORLD, &my_num) ;
  MPI_Comm_size(MPI_COMM_WORLD, &mpi_size) ;

  printf("%d calls MPI_Finalize()\n\n\n", my_num) ;
  MPI_Finalize() ;
  MPI_Finalized(&flag) ;
  printf("MPI finalized: %d\n", flag) ;

  return 0 ;
}
---
; and that your environment is pointing to the right place.
>
> On May 24, 2010, at 12:15 AM, Yves Caniou wrote:
> > Dear All,
> > (follows a previous mail)
> >
> > I don't understand the strange behavior of this small code: sometimes it
> > ends, sometimes not. The
On May 24, 2010, at 2:53 AM, Yves Caniou wrote:
> > I rechecked, but didn't see anything wrong.
> > Here is how I set my environment. Tkx.
> >
> > $>mpicc --v
> > Using built-in specs.
> > COLLECT_GCC=//home/p10015/gcc/bin/x86_64-unknown-linux-gnu-gcc-4.5.0
?
Thank you!
.Yves.
MCA ess: tool (MCA v2.0, API v2.0, Component v1.4.2)
MCA grpcomm: bad (MCA v2.0, API v2.0, Component v1.4.2)
MCA grpcomm: basic (MCA v2.0, API v2.0, Component v1.4.2)
I forgot the list...
-
On Wednesday 02 June 2010 14:59:46, you wrote:
> On Jun 2, 2010, at 8:03 AM, Ralph Castain wrote:
> > I built it with gcc 4.2.1, though - I know we have a problem with shared
> > memory hanging when built with gcc 4.4.x, so I wonder if the issue here
> > is
Hi,
I have a performance issue on a parallel machine composed of nodes of 16
procs each. The application is launched on multiples of 16 procs for given
numbers of nodes.
I was told by people using MX MPI with this machine to attach a script to
mpiexec, which runs 'numactl', in order to
It doesn't answer my question.
.Yves.
> --Nysal
>
> On Wed, Jul 28, 2010 at 9:04 AM, Yves Caniou <yves.can...@ens-lyon.fr> wrote:
> > Hi,
> >
> > I have some performance issue on a parallel machine composed of nodes of
> > 16 procs each. The application is launched
On Wednesday 28 July 2010 11:34:13, Ralph Castain wrote:
> On Jul 27, 2010, at 11:18 PM, Yves Caniou wrote:
> > On Wednesday 28 July 2010 06:03:21, Nysal Jan wrote:
> >> OMPI_COMM_WORLD_RANK can be used to get the MPI rank. For other
> >> enviro
ouldn't
> do, and doesn't do it as well. You would be far better off just adding
> --bind-to-core to the mpirun cmd line.
"mpirun -h" says that it is the default, so there is not even something to do?
I don't even have to add "--mca mpi_paffinity_alone 1" ?
.Yves.
> On Ju