The short answer is that OMPI currently does not remap ranks during
MPI_CART_CREATE, even if you pass reorder==1. :-\
The reason is that we've had very few requests to do so.
However, we did have the foresight (if I do say so myself ;-) ) to
make the MPI topology system a plugin
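Since the topology system is built as a set of MCA components, ompi_info can show which topo components a given install actually has. A minimal sketch, assuming an Open MPI install on the PATH (the grep pattern is an assumption about ompi_info's output format, which varies by version):

```shell
# List all MCA components and keep only the "topo" framework,
# which holds the Cartesian/graph topology plugins discussed above.
ompi_info --all | grep "MCA topo"
```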
Sorry for the delay in answering. More below.
On Oct 23, 2009, at 4:02 AM, Francesco Pietra wrote:
I have also put the 1.3.3 version (gfortran) on the path:
#For openmpi-1.2.6 Intel compiler
if [ "$LD_LIBRARY_PATH" ] ; then
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib"
else
export LD_LIBRARY_PATH="/usr/local/lib"
fi
On Mon, 2009-10-26 at 16:21 -0400, Jeff Squyres wrote:
> there's a tiny/small amount of overhead inserted by OMPI telling Valgrind "this
> memory region is ok", but we live in an intensely competitive HPC
> environment.
I may be wrong, but I seem to remember Julian saying the overhead is tiny.
Jeff Squyres wrote:
> Verbs and Open MPI don't have these options on by default because a)
> you need to compile against Valgrind's header files to get them to
> work, and b) there's a tiny/small amount of overhead inserted by OMPI
> telling Valgrind "this memory region is ok", but we live in an
> intensely competitive HPC environment.
Hi Brock,
On Monday 26 October 2009 03:23:42 pm Brock Palen wrote:
> Is there a large overhead for --enable-debug --enable-memchecker?
>
> reading:
> http://www.open-mpi.org/faq/?category=debugging
>
> It sounds like there is and there isn't, what should I expect if we
> build all of our mpi libraries with those options
There's a whole class of valgrind warnings that are generated when you
use OS-bypass networks like OpenFabrics. The verbs library and Open
MPI can be configured and compiled with additional instructions that
tell Valgrind where the "problematic" spots are, and that the memory
is actually OK.
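A sketch of how such a build might be configured. The flag names are Open MPI's own configure options; the paths are placeholders and the Valgrind install location is an assumption:

```shell
# Rebuild Open MPI with Valgrind annotations compiled in.
# /opt/valgrind is a placeholder for wherever valgrind's headers live.
./configure --prefix=/opt/openmpi-memchk \
    --enable-memchecker \
    --with-valgrind=/opt/valgrind
make all install
```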
Samuel K. Gutierrez wrote:
> Hi Jed,
>
> I'm not sure if this will help, but it's worth a try. Turn off OMPI's
> memory wrapper and see what happens.
>
> c-like shell
> setenv OMPI_MCA_memory_ptmalloc2_disable 1
>
> bash-like shell
> export OMPI_MCA_memory_ptmalloc2_disable=1
>
> Also add the following MCA parameter to your run command.
Hi Jed,
I'm not sure if this will help, but it's worth a try. Turn off OMPI's
memory wrapper and see what happens.
c-like shell
setenv OMPI_MCA_memory_ptmalloc2_disable 1
bash-like shell
export OMPI_MCA_memory_ptmalloc2_disable=1
Also add the following MCA parameter to your run command.
--
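In general, any MCA parameter can be set either in the environment (as the setenv/export lines above do) or on the mpirun command line. A minimal sketch using the parameter named in this post; `./myexe` is a placeholder application:

```shell
# Environment form: OMPI_MCA_<param>=<value>
export OMPI_MCA_memory_ptmalloc2_disable=1

# Equivalent launch-time form (shown as a comment, not run here):
#   mpirun --mca memory_ptmalloc2_disable 1 ./myexe

echo "$OMPI_MCA_memory_ptmalloc2_disable"
```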
Jeff Squyres wrote:
> Using --enable-debug adds in a whole pile of developer-level run-time
> checking and whatnot. You probably don't want that on production runs.
I have found that --enable-debug --enable-memchecker actually produces
more valgrind noise than leaving them off. Are there options
On Oct 26, 2009, at 3:29 PM, Jeff Squyres wrote:
On Oct 26, 2009, at 3:23 PM, Brock Palen wrote:
Is there a large overhead for
--enable-debug --enable-memchecker?
--enable-debug, yes, there is a pretty large penalty. --enable-debug
is really only intended for Open MPI developers. If you
On Oct 26, 2009, at 3:23 PM, Brock Palen wrote:
Is there a large overhead for
--enable-debug --enable-memchecker?
--enable-debug, yes, there is a pretty large penalty. --enable-debug
is really only intended for Open MPI developers. If you just want an
OMPI that was compiled with debugging
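If the goal is just debugging symbols without --enable-debug's developer-level checks, one commonly suggested approach is to pass -g to the compilers at configure time. A sketch only; the prefix is a placeholder:

```shell
# Debugging symbols only, without --enable-debug's extra run-time checks.
./configure --prefix=/opt/openmpi \
    CFLAGS=-g CXXFLAGS=-g FFLAGS=-g FCFLAGS=-g
make all install
```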
On Oct 16, 2009, at 1:55 PM, nam kim wrote:
Our school has a cluster running over CISCO based Infiniband cards
and switch.
Recently, we purchased more computing nodes with Mellanox cards, since
Cisco has stopped making IB cards.
Sorry for the delay in replying; my INBOX has grown totally out of control.
Is there a large overhead for
--enable-debug --enable-memchecker?
reading:
http://www.open-mpi.org/faq/?category=debugging
It sounds like there is and there isn't, what should I expect if we
build all of our mpi libraries with those options, when we run normally:
mpirun ./myexe
vs using a l
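For comparison, a sketch of the two invocations. The suppression-file path is an assumption about where an Open MPI install places it, and `./myexe` is a placeholder:

```shell
# Normal run: memchecker support compiled into the library costs little
# unless the job actually executes under Valgrind.
mpirun -np 4 ./myexe

# Debugging run: launch every rank under Valgrind, using the
# suppression file shipped with Open MPI (path may differ per install).
mpirun -np 4 valgrind \
    --suppressions=/usr/local/share/openmpi/openmpi-valgrind.supp \
    ./myexe
```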
On Oct 15, 2009, at 2:14 AM, Sangamesh B wrote:
I've run ibpingpong tests. They are working fine.
Sorry for the delay in replying.
Good.
Are there any additional tests available which will make sure that
"there is no problem with IB software and Open MPI. The problem is
with Application".
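Beyond ibpingpong, a few standard OFED-level checks can help separate fabric problems from application problems. A sketch, assuming the usual OFED diagnostic tools are installed; node names are placeholders:

```shell
# Check HCA state and link status on each node.
ibstat
ibv_devinfo

# Verbs-level ping-pong between two nodes:
# start the server side on node A, then point node B at it.
ibv_rc_pingpong          # on node A (server)
ibv_rc_pingpong nodeA    # on node B (client)
```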
Dear list members
I am using openmpi 1.3.3 with OFED on a HP cluster with redhatLinux.
Occasionally (not always) I get a crash with the following message:
[hydra11:09312] *** Process received signal ***
[hydra11:09312] Signal: Segmentation fault (11)
[hydra11:09312] Signal code: Address not mapped
On Oct 25, 2009, at 11:38 PM, Steve Kargl wrote:
There is currently a semi-heated debate in comp.lang.fortran
concerning co-arrays and the upcoming Fortran 2008. Don't
waste your time trying to decipher the thread; however, there
appear to be a few knowledgeable MPI Fortraners hanging out lately.
W
I can confirm that it is fixed on the trunk and will be included
in the upcoming 1.3.4 release. The code now reads:
re_order = (0 == reorder) ? false : true;
Thanks for the heads-up!
On Oct 26, 2009, at 6:40 AM, Kiril Dichev wrote:
Hi David,
I believe this particular bug was fixed
Hi David,
I believe this particular bug was fixed in the trunk some weeks ago
shortly before your post.
Regards,
Kiril
On Tue, 2009-10-13 at 17:54 +1100, David Singleton wrote:
> Looking back through the archives, a lot of people have hit error
> messages like
>
> > [bl302:26556] *** An error