John,
by "user", I meant an MPI user error (e.g. in your application), as opposed to
an "Open MPI developer's bug".
As previously explained, the coll/tuned module is known to be broken in
some cases (e.g. different but compatible signatures), so the --mca coll ^tuned
option simply disables this module.
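For reference, the option is typically passed on the mpirun command line like
this (the executable name and process count below are placeholders, not from
this thread); setting the equivalent OMPI_MCA_ environment variable does the
same thing:

```shell
# Disable the coll/tuned collective component; Open MPI then falls back
# to its other collective implementations for MPI_Bcast and friends.
mpirun --mca coll '^tuned' -np 4 ./your_app

# Equivalent via an environment variable (same MCA parameter):
export OMPI_MCA_coll='^tuned'
mpirun -np 4 ./your_app
```

The leading `^` means "every coll component except the ones listed", so it is
worth quoting to keep the shell from interpreting it.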
Thanks, Jeff.
I'll let you know what happens.
Best, John
On 2/16/16 10:19 AM, Jeff Squyres (jsquyres) wrote:
-Original Message-
From: JR Cary
Reply: Open MPI Users
Date: February 16, 2016 at 9:39:23 AM
To: us...@open-mpi.org
Subject: Re: [OMPI users] readv failed How to debug?
> Thanks, Gilles,
>
> Yes, this binary was built a few
Which one is producing correct (or at least reasonable) results? Are
both results correct? Do you have a way of assessing the correctness of your
results?
On February 16, 2016 at 5:19:16 AM, Diego Avesani (diego.aves...@gmail.com)
wrote:
Dear all,
I have written a Fortran MPI code.
Hi Jeff,
Thanks. Did you see my followup?
The code is all written the same, e.g. SPMD.
And it happens only when I use a binary built at my institution
and run at another.
Thx, John
On 2/16/16 6:35 AM, Jeff Squyres (jsquyres) wrote:
John --
+1 on what Gilles said.
The initial error says
Thanks, Gilles,
Yes, this binary was built a few years ago.
You mention a user error, but do you mean developer error? I.e., it
would have to be in the code?
What does "--mca coll ^tuned" do?
Thx, John
On 2/15/16 4:03 PM, Gilles Gouaillardet wrote:
John,
the readv error is likely a
John --
+1 on what Gilles said.
The initial error says that a broadcast message was truncated. This likely
indicates that someone is calling MPI_Bcast with a different size than its
peers (it *could* indicate what Gilles mentioned about
different-but-supposed-to-be-compatible-datatypes, but
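To make the size-mismatch scenario concrete, here is a minimal C sketch (not
John's or Diego's actual code) in which the root broadcasts more elements than
the other ranks expect; Open MPI typically reports this on the receiving ranks
as a "message truncated" / MPI_ERR_TRUNCATE error:

```c
#include <mpi.h>

/* Minimal sketch of a mismatched MPI_Bcast: the root sends 4 ints,
 * but every other rank posts a buffer for only 2.  The receivers'
 * buffers are too small for the incoming broadcast, which Open MPI
 * typically reports as a truncation error. */
int main(int argc, char **argv)
{
    int rank;
    int buf[4] = {0, 0, 0, 0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Mismatch: count is 4 on the root, 2 everywhere else. */
    int count = (rank == 0) ? 4 : 2;
    MPI_Bcast(buf, count, MPI_INT, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```

Build with mpicc and run with mpirun -np 2 (or more). The MPI standard
requires the broadcast's type signature to match on all ranks, so which error
(if any) you see, and on which rank, depends on the collective implementation
in use, which is why disabling coll/tuned can change the symptom.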
On February 16, 2016 at 5:19:16 AM, Diego Avesani (diego.aves...@gmail.com)
wrote:
> Dear all,
>
> I have written a Fortran MPI code.
> Usually, I compile it with MPI or with Open MPI, depending on the cluster where
> it runs.
> Unfortunately, I get a completely different result and I do not know
Dear all,
I have written a Fortran MPI code.
Usually, I compile it with MPI or with Open MPI, depending on the cluster where
it runs.
Unfortunately, I get a completely different result and I do not know why.
Where could I look? Do you know why?
Thanks
Diego