> Please post this to the mpi4py list and we will see what we can do.
>
> Brian
>
> On Dec 17, 2007 8:25 AM, de Almeida, Valmor F. <dealmei...@ornl.gov>
> wrote:
Hello,
I am getting these messages (below) when running mpi4py python codes.
Always one message per mpi process. The codes seem to run correctly. Any
ideas why this is happening and how to avoid it?
Thanks,
--
Valmor de Almeida
>mpirun -np 2 python helloworld.py
[xeon0:05998] mca: base:
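For reference, helloworld.py here is just the usual mpi4py starter, along
these lines (a minimal sketch; the actual file was not shown in the
thread):

    # helloworld.py -- minimal mpi4py smoke test (sketch)
    from mpi4py import MPI

    comm = MPI.COMM_WORLD              # the world communicator
    rank = comm.Get_rank()             # rank of this process
    size = comm.Get_size()             # total number of processes
    print("Hello, World! I am process %d of %d." % (rank, size))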
Eric,
I see you are using a Gentoo distro like me. My version uses the vanilla
kernel 2.6.22.9 and gcc-4.1.2. I have the following intel compiler
versions installed:
10.0.026 10.1.008 10.1.011 9.1.052
None of them are able to build a functional version of openmpi-1.2.4.
I've been posting about this, and so far nothing I have tried will
build on that platform (including Open MPI).
The install script of the 10.1.008 suite lists the supported platforms.
That includes kernel 2.6 and glibc 2.6. I guess there are some loose
ends.
--
Valmor
>
>
>
> On Dec 5, 2007, at 9:59 AM, de Almeida, Valmor F. wrote:
>
> Can you send all the information that you can (obviously, ompi_info won't
> run) from http://www.open-mpi.org/community/help/ ?
>
>
>
> On Dec 4, 2007, at 4:26 PM, de Almeida, Valmor F. wrote:
>
Hello,
What is the suggested intel compiler version to compile openmpi-1.2.4?
I tried versions 10.1.008 and 9.1.052 and had no luck getting a working
library. In both cases I get:
->mpic++ --showme
Segmentation fault
->ompi_info
Segmentation fault
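(For reference: on a working build, --showme does not compile anything; it
just echoes the underlying command the wrapper would run, something like
the line below, with compiler name, paths, and flags purely illustrative:

    icpc -I/opt/openmpi/include -L/opt/openmpi/lib -lmpi_cxx -lmpi -lopen-rte -lopen-pal -ldl

so a segfault here means the wrapper itself is broken, not the code being
compiled.)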
Thanks for your help.
--
Valmor de Almeida
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
>
> On Tue, 2007-12-04 at 09:33 +0100, Åke Sandgren wrote:
> > On Sun, 2007-12-02 at 21:27 -0500, de Almeida, Valmor F. wrote:
> >
> > Run an nm on opal/mca/m
Hello,
After compiling ompi-1.2.4 with the intel compiler suite 10.1.008, I get
->mpicxx --showme
Segmentation fault
->ompi_info
Segmentation fault
10.1.008 is the only version I know of that officially supports the Linux
kernel 2.6 and glibc-2.6 that I have on my system.
The config.log file is attached.
> [...] principle. But if you are only ever going to use the particular
> compilation with Myrinet anyway, I suppose it does not matter.
>
> I guess this is a long way of saying that it is just personal preference.
>
> Hope this helps,
>
> Tim
Hello,
I am getting the warnings after an upgrade to
mx-1.2.4 and openmpi-1.2.4.
Either using the env variable setting MX_RCACHE=2, or linking the
application with -lmyriexpress removes the warnings.
Is either one of them the preferred way of doing it?
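Concretely, I mean either exporting the variable at launch time or adding
the library when linking, along these lines (myapp is just a placeholder
name):

    ->mpirun -x MX_RCACHE=2 -np 4 ./myapp
    ->mpic++ -o myapp myapp.o -lmyriexpress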
Thanks,
--
Valmor
Hello list,
I would appreciate recommendations on what to use for developing mpi
python codes. I've seen several packages in the public domain: mympi,
pypar, mpi python, mpi4py; it would be helpful to start off in the right
direction.
Thanks,
--
Valmor de Almeida
ORNL
PS. I apologize if this has been asked before.
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Brian Powell
>
> using gcc and ifort (see the attached config.log) with a variety of
One recommendation I received from this list was not to mix compiler
suites. So use the C/C++ and Fortran compilers from the same suite.
Hello,
Is there a way to get detailed information on what this error may be?
[x1:17287] mca_btl_tcp_frag_send: writev failed with errno=104
mpirun noticed that job rank 0 with PID 17287 on node x1 exited on
signal 15 (Terminated).
15 additional processes aborted (not shown)
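(Decoding the errno at least gives the generic meaning; e.g. from python:

    ->python -c "import os; print(os.strerror(104))"
    Connection reset by peer

i.e. the remote end of the TCP connection was closed, presumably by the
process that died.)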
Thanks,
--
Valmor
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Jeff Squyres
>
> Bummer. FWIW, we have some internal testing OMPI Fortran codes that
> are difficult to get to compile uniformly across all Fortran
> compilers (the most difficult
> At this point we should gracefully move on, but there is a bug in Open MPI
> 1.2 which causes a segmentation fault in case of this type of error. This
> will be fixed in 1.2.1, and the fix is available now in the 1.2 nightly
> tarballs.
>
> Hope this helps,
>
> Tim
>
> On Friday 30 March 2007 05:06 pm, de Almeida, Valmor F. wrote:
Hello,
I would be interested in hearing folks' experiences with gfortran and
ompi-1.2. Is gfortran good enough for prime time?
I have built ompi-1.2 with gfortran-4.1.1, but had no luck testing it
because my application of interest (legacy fortran code) will not
compile with gfortran.
Hello,
I am getting this error any time the number of processes requested per
machine is greater than the number of cpus. I suspect it is something in
the configuration of mx/ompi that I am missing, since another machine I
have without mx installed runs ompi correctly with oversubscription.
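For reference, this is the kind of launch that triggers it (hostfile
contents and application name are placeholders for a 2-cpu node):

    # hostfile
    xeon0 slots=2

    ->mpirun -np 4 --hostfile hostfile ./myapp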
Hello,
I am using mpic++ to create a program that combines c++ and f90
libraries. The libraries are created with mpic++ and mpif90. OpenMPI-1.2
was built using gcc-4.1.1. (The output of ompi_info follows below.) The
final linking stage takes quite a long time compared to the creation of
the
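For concreteness, the link step is along these lines (library and file
names are placeholders; with gcc-4.1 the Fortran runtime usually has to
be added explicitly when the C++ wrapper drives the final link):

    ->mpic++ -o myapp main.o -L. -lmycxxstuff -lmyf90stuff -lgfortran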