Re: [OMPI users] [FEniCS] Question about MPI barriers

2014-10-17 Thread Jed Brown
Martin Sandve Alnæs writes: > Thanks, but ibarrier doesn't seem to be in the stable version of openmpi: > http://www.open-mpi.org/doc/v1.8/ > Otherwise mpi_ibarrier+mpi_test+homemade time/sleep loop would do the trick. MPI_Ibarrier is there (since 1.7), just missing a man page.
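
A minimal sketch of the ibarrier+test+sleep pattern mentioned above (my own illustration, untested; assumes an MPI-3 implementation such as Open MPI >= 1.7):

    #include <mpi.h>
    #include <unistd.h>

    /* Enter a non-blocking barrier, then poll it with MPI_Test,
     * sleeping between polls so the rank does not spin at 100% CPU. */
    static void barrier_with_sleep(MPI_Comm comm)
    {
        MPI_Request req;
        int done = 0;
        MPI_Ibarrier(comm, &req);
        while (!done) {
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
            if (!done) usleep(1000);  /* 1 ms between polls */
        }
    }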

Re: [OMPI users] [petsc-maint] Deadlock in OpenMPI 1.8.3 and PETSc 3.4.5

2015-02-23 Thread Jed Brown
"Jeff Squyres (jsquyres)" writes: > This is, unfortunately, an undefined area of the MPI specification. I > do believe that our previous behavior was *correct* -- it just > deadlocks with PETSC because PETSC is relying on undefined behavior. Jeff, can you clarify where in

[OMPI users] "C++ compiler absolute"

2013-06-01 Thread Jed Brown
I built from trunk a couple days ago and notice that mpicxx has an erroneous path: $ ~/usr/ompi/bin/mpicxx -show no -I/homes/jedbrown/usr/ompi/include -pthread -Wl,-rpath -Wl,/homes/jedbrown/usr/ompi/lib -Wl,--enable-new-dtags -L/homes/jedbrown/usr/ompi/lib -lmpi The C compiler is fine $

Re: [OMPI users] MPI process hangs if OpenMPI is compiled with --enable-thread-multiple

2013-11-24 Thread Jed Brown
Pierre Jolivet writes: > It looks like you are compiling Open MPI with Homebrew. The flags they use in > the formula for --enable-mpi-thread-multiple are wrong. > c.f. a similar problem with MacPorts >

Re: [OMPI users] MPI process hangs if OpenMPI is compiled with --enable-thread-multiple

2013-11-24 Thread Jed Brown
Dominique Orban writes: > My question originates from a hang similar to the one I described in > my first message in the PETSc tests. They still hang after I corrected > the OpenMPI compile flags. I'm in touch with the PETSc folks as well > about this. Do you have an

Re: [OMPI users] MPI process hangs if OpenMPI is compiled with --enable-thread-multiple

2013-11-24 Thread Jed Brown
Ralph Castain writes: > Given that we have no idea what Homebrew uses, I don't know how we > could clarify/respond. Pierre provided a link to MacPorts saying that all of the following options were needed to properly enable threads. --enable-event-thread-support

[OMPI users] Regression: Fortran derived types with newer MPI module

2014-01-06 Thread Jed Brown
The attached code is from the example on pages 629-630 (17.1.15 Fortran Derived Types) of MPI-3. This compiles cleanly with MPICH and with OMPI 1.6.5, but not with the latest OMPI. Arrays higher than rank 4 would have a similar problem since they are not enumerated. Did someone decide that a

Re: [OMPI users] Regression: Fortran derived types with newer MPI module

2014-01-07 Thread Jed Brown
"Jeff Squyres (jsquyres)" writes: > Yes, I can explain what's going on here. The short version is that a > change was made with the intent to provide maximum Fortran code > safety, but with a possible backwards compatibility issue. If this > change is causing real problems,

Re: [OMPI users] Regression: Fortran derived types with newer MPI module

2014-01-08 Thread Jed Brown
"Jeff Squyres (jsquyres)" writes: > As I mentioned Craig and I debated long and hard to change that > default, but, in summary, we apparently missed this clause on p610. > I'll change it back. Okay, thanks. > I'll be happy when gfortran 4.9 is released that supports ignore

Re: [OMPI users] MPI stats argument in Fortran mpi module

2014-01-08 Thread Jed Brown
"Jeff Squyres (jsquyres)" writes: >> Totally superficial, just passing "status(1)" instead of "status" or >> "status(1:MPI_STATUS_SIZE)". > > That's a different type (INTEGER scalar vs. INTEGER array). So the > compiler complaining about that is actually correct. Yes,

[OMPI users] CXX=no in config.status, breaks mpic++ wrapper

2014-01-14 Thread Jed Brown
With ompi-git from Monday (7e023a4ebf1aeaa530f79027d00c1bdc16b215fd), configure is putting "compiler=no" in ompi/tools/wrappers/mpic++-wrapper-data.txt: # There can be multiple blocks of configuration data, chosen by # compiler flags (using the compiler_args key to chose which block # should be

Re: [OMPI users] latest stable and win7/msvc2013

2014-07-17 Thread Jed Brown
Rob Latham writes: > Well, I (and dgoodell and jsquyers and probably a few others of you) can > say from observing disc...@mpich.org traffic that we get one message > about Windows support every month -- probably more often. Seems to average at least once a week. We also see

Re: [OMPI users] latest stable and win7/msvc2013

2014-07-17 Thread Jed Brown
t on Microsoft's intentions regarding MPI and C99/C11 (just dreaming now). > On 2014-07-17 11:42 AM, Jed Brown wrote: >> Rob Latham <r...@mcs.anl.gov> writes: >>> Well, I (and dgoodell and jsquyers and probably a few others of you) can >>> say from observing disc...

Re: [OMPI users] latest stable and win7/msvc2013

2014-07-17 Thread Jed Brown
Rob Latham writes: > hey, (almost all of) c99 support is in place in visual studio 2013 > http://blogs.msdn.com/b/vcblog/archive/2013/07/19/c99-library-support-in-visual-studio-2013.aspx This talks about the standard library, but not whether the C frontend has acquired these

Re: [OMPI users] latest stable and win7/msvc2013

2014-07-17 Thread Jed Brown
Ralph Castain writes: > Yeah, but I'm cheap and get the Intel compilers for free :-) Fine for you, but not for the people trying to integrate your library in a stack developed using MSVC.

Re: [OMPI users] latest stable and win7/msvc2013

2014-07-17 Thread Jed Brown
Damien writes: > Visual Studio can link libs compiled with Intel. The headers also need to fall within the language subset implemented by MSVC, but this is easier to ensure and the Windows ecosystem seems to be happy with binary distribution.

Re: [OMPI users] OpenMPI Run-Time "Freedom" Question

2010-08-12 Thread Jed Brown
Or OMPI_CC=icc-xx.y mpicc ... Jed On Aug 12, 2010 5:18 PM, "Ralph Castain" wrote: On Aug 12, 2010, at 7:04 PM, Michael E. Thomadakis wrote: > On 08/12/10 18:59, Tim Prince wrote: >>... The "easy" way to accomplish this would be to: (a) build OMPI with whatever compiler

[OMPI users] Build failure with OMPI-1.5 (clang-2.8, gcc-4.5.1 with debug options)

2010-10-11 Thread Jed Brown
Note that this is an out-of-source build. $ ../configure --enable-debug --enable-mem-debug --prefix=/home/jed/usr/ompi-1.5-clang CC=clang CXX=clang++ $ make [...] CXXLD vtunify-mpi vtunify_mpi-vt_unify_mpi.o: In function `VTUnify_MPI_Abort':

Re: [OMPI users] Build failure with OMPI-1.5 (clang-2.8, gcc-4.5.1 with debug options)

2010-10-14 Thread Jed Brown
On Thu, Oct 14, 2010 at 22:36, Jeff Squyres <jsquy...@cisco.com> wrote: > On Oct 11, 2010, at 4:50 PM, Jed Brown wrote: > > > Note that this is an out-of-source build. > > > > $ ../configure --enable-debug --enable-mem-debug > --prefix=/home/jed/usr/ompi-1.5-clan

Re: [OMPI users] Build failure with OMPI-1.5 (clang-2.8, gcc-4.5.1 with debug options)

2010-10-14 Thread Jed Brown
On Thu, Oct 14, 2010 at 23:31, Jeff Squyres wrote: > Strange, because I see > /home/jed/src/openmpi-1.5/bclang/ompi/contrib/vt/vt/../../../.libs/libmpi.so > explicitly listed in the link line, which should contain MPI_Abort. Can you > nm on that file and ensure that it is

Re: [OMPI users] Build failure with OMPI-1.5 (clang-2.8, gcc-4.5.1 with debug options)

2010-10-14 Thread Jed Brown
On Fri, Oct 15, 2010 at 01:26, Jed Brown <j...@59a2.org> wrote: > I'll report the bug http://llvm.org/bugs/show_bug.cgi?id=8383

Re: [OMPI users] Open MPI program cannot complete

2010-10-25 Thread Jed Brown
On Mon, Oct 25, 2010 at 19:07, Jack Bryan wrote: > I need to use #PBS parallel job script to submit a job on MPI cluster. Is it not possible to reproduce locally? Most clusters have a way to submit an interactive job (which would let you start this thing and then

Re: [OMPI users] Open MPI program cannot complete

2010-10-25 Thread Jed Brown
On Mon, Oct 25, 2010 at 19:35, Jack Bryan wrote: > I have to use #PBS to submit any jobs in my cluster. > I cannot use command line to hang a job on my cluster. You don't need a cluster to run MPI jobs; can you run the job on whatever your development machine is? Does

[OMPI users] Open MPI 1.5 is not detecting oversubscription

2010-11-06 Thread Jed Brown
Previous versions would set mpi_yield_when_idle automatically when oversubscribing a node. I assume this behavior was not intentionally changed, but the parameter is not being set in cases of oversubscription, with or without an explicit hostfile. Jed

Re: [OMPI users] Open MPI data transfer error

2010-11-06 Thread Jed Brown
On Sat, Nov 6, 2010 at 18:00, Jack Bryan wrote: > Thanks, > > About my MPI program bugs: > > I used GDB and got the error: > > Program received signal SIGSEGV, Segmentation fault. > #0 0x003a31c62184 in fwrite () from /lib64/libc.so.6 Clearly fwrite was called

Re: [OMPI users] memcpy overlap in ompi_ddt_copy_content_same_ddt and glibc 2.12

2010-11-10 Thread Jed Brown
On Wed, Nov 10, 2010 at 18:11, Number Cruncher wrote: > Just some observations from a concerned user with a temperamental Open MPI > program (1.4.3): > > Fedora 14 (just released) includes glibc-2.12 which has optimized versions > of memcpy, including a copy
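
For readers following along, a tiny self-contained illustration (mine, not from the thread) of the semantics at issue:

    #include <string.h>

    static void shift_right_two(char buf[16])
    {
        /* Source and destination overlap here, so ISO C requires
         * memmove; memcpy on overlapping ranges is undefined, which
         * the backwards-copying glibc-2.12 memcpy exposed. */
        memmove(buf + 2, buf, 8);
        /* memcpy(buf + 2, buf, 8);  -- undefined behavior */
    }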

Re: [OMPI users] memcpy overlap in ompi_ddt_copy_content_same_ddt and glibc 2.12

2010-11-10 Thread Jed Brown
On Wed, Nov 10, 2010 at 18:25, Jed Brown <j...@59a2.org> wrote: > Is the memcpy-back code ever executed when called as memcpy()? I can't > imagine why it would be, but it would make plenty of sense to use it inside > memmove when the destination is at a higher address

Re: [OMPI users] memcpy overlap in ompi_ddt_copy_content_same_ddt and glibc 2.12

2010-11-10 Thread Jed Brown
On Wed, Nov 10, 2010 at 22:08, e-mail number.cruncher < number.crunc...@ntlworld.com> wrote: > In short, someone from Intel submitted a glibc patch that does faster > memcpy's on e.g. Intel i7, respects the ISO C definition, but does > things backwards. However, the commit message and mailing

Re: [OMPI users] memcpy overlap in ompi_ddt_copy_content_same_ddt and glibc 2.12

2010-11-11 Thread Jed Brown
On Thu, Nov 11, 2010 at 12:36, Number Cruncher wrote: > However as commented here: > https://bugzilla.redhat.com/show_bug.cgi?id=638477#c86 the valgrind memcpy > implementation is overlap-safe. Yes, of course. That's how the bug in Open MPI was originally

Re: [OMPI users] SpMV Benchmarks

2011-05-06 Thread Jed Brown
On Thu, May 5, 2011 at 23:15, Paul Monday (Parallel Scientific) < paul.mon...@parsci.com> wrote: > Hi, I'm hoping someone can help me locate a SpMV benchmark that runs w/ > Open MPI so I can benchmark how my systems are interacting with the network > as I add nodes / cores to the pool of systems.

[OMPI users] One-sided bugs

2011-12-22 Thread Jed Brown
I wrote a new communication layer that we are evaluating for use in mesh management and PDE solvers, but it is based on MPI-2 one-sided operations (and will eventually benefit from some of the MPI-3 one-sided proposals, especially MPI_Fetch_and_op() and dynamic windows). All the basic
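
For context, the MPI-2 one-sided pattern this layer builds on looks roughly like the following (a generic sketch, not the layer itself):

    #include <mpi.h>

    /* Expose a local array in a window and read one element from a
     * peer rank using active-target (fence) synchronization. */
    static double get_remote(double *base, int n, int target, MPI_Aint disp)
    {
        double val;
        MPI_Win win;
        MPI_Win_create(base, n * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);
        MPI_Win_fence(0, win);
        MPI_Get(&val, 1, MPI_DOUBLE, target, disp, 1, MPI_DOUBLE, win);
        MPI_Win_fence(0, win);
        MPI_Win_free(&win);
        return val;
    }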

Re: [OMPI users] One-sided bugs

2011-12-22 Thread Jed Brown
[Forgot the attachment.] On Thu, Dec 22, 2011 at 15:16, Jed Brown <j...@59a2.org> wrote: > I wrote a new communication layer that we are evaluating for use in mesh > management and PDE solvers, but it is based on MPI-2 one-sided operations > (and will eventually benefit from so

Re: [OMPI users] parallelising ADI

2012-03-06 Thread Jed Brown
On Tue, Mar 6, 2012 at 16:23, Tim Prince wrote: > On 03/06/2012 03:59 PM, Kharche, Sanjay wrote: > >> Hi >> >> I am working on a 3D ADI solver for the heat equation. I have implemented >> it as serial. Would anybody be able to indicate the best and more >> straightforward way to

Re: [OMPI users] [EXTERNAL] Re: mpicc link shouldn't add -ldl and -lhwloc

2012-05-27 Thread Jed Brown
On Wed, May 23, 2012 at 8:29 AM, Barrett, Brian W wrote: > >I should add the caveat that they are needed when linking statically, but > >not when using shared libraries. > > And therein lies the problem. We have a number of users who build Open > MPI statically and even some

Re: [OMPI users] [EXTERNAL] Re: mpicc link shouldn't add -ldl and -lhwloc

2012-05-29 Thread Jed Brown
On Tue, May 29, 2012 at 9:05 AM, Jeff Squyres wrote: > > > We've tossed around ideas such as having the wrappers always assume > dynamic linking (e.g., only include a minimum of libraries), and then add > another wrapper option like --wrapper:static (or whatever) to know when

Re: [OMPI users] [EXTERNAL] Re: mpicc link shouldn't add -ldl and -lhwloc

2012-05-31 Thread Jed Brown
On Thu, May 31, 2012 at 6:20 AM, Jeff Squyres <jsquy...@cisco.com> wrote: > On May 29, 2012, at 11:42 AM, Jed Brown wrote: > > > The pkg-config approach is to use pkg-config --static if you want to > link that library statically. > > Do the OMPI pkg-config files not

[OMPI users] Setting RPATH for Open MPI libraries

2012-09-08 Thread Jed Brown
Is there a way to configure Open MPI to use RPATH without needing to manually specify --with-wrapper-ldflags=-Wl,-rpath,${prefix}/lib (and similar for non-GNU-compatible compilers)?

Re: [OMPI users] Setting RPATH for Open MPI libraries

2012-09-11 Thread Jed Brown
a different library using LD_LIBRARY_PATH). On Sep 8, 2012 2:48 PM, "Reuti" <re...@staff.uni-marburg.de> wrote: > Hi, > > Am 08.09.2012 um 14:46 schrieb Jed Brown: > > > Is there a way to configure Open MPI to use RPATH without needing to > manually specify --with-wra

Re: [OMPI users] Setting RPATH for Open MPI libraries

2012-09-11 Thread Jed Brown
On Tue, Sep 11, 2012 at 2:29 PM, Reuti wrote: > With "user" you mean someone compiling Open MPI? Yes

Re: [OMPI users] One-sided bugs

2012-09-11 Thread Jed Brown
*Bump* There doesn't seem to have been any progress on this. Can you at least have an error message saying that Open MPI one-sided does not work with datatypes instead of silently causing wanton corruption and deadlock? On Thu, Dec 22, 2011 at 4:17 PM, Jed Brown <j...@59a2.org> wrote: >

Re: [OMPI users] Setting RPATH for Open MPI libraries

2012-09-12 Thread Jed Brown
On Wed, Sep 12, 2012 at 10:20 AM, Jeff Squyres wrote: > We have a long-standing philosophy that OMPI should add the bare minimum > number of preprocessor/compiler/linker flags to its wrapper compilers, and > let the user/administrator customize from there. In general, I

Re: [OMPI users] One-sided bugs

2012-12-30 Thread Jed Brown
in this known broken case instead of silently stomping all over the user's memory. On Tue, Sep 11, 2012 at 2:23 PM, Jed Brown <j...@59a2.org> wrote: > *Bump* > > There doesn't seem to have been any progress on this. Can you at least > have an error message saying that Open

[OMPI users] MPI_Barrier called late within ompi_mpi_finalize when MPIIO fd not closed

2009-07-20 Thread Jed Brown
This helped me track down a leaked file descriptor, but I think the order of events is not desirable. If an MPIIO file descriptor is not closed before MPI_Finalize, I get the following. *** An error occurred in MPI_Barrier *** after MPI was finalized *** MPI_ERRORS_ARE_FATAL (your MPI job will
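
On the application side the fix is simply to close the handle first; a minimal sketch (file name hypothetical):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        MPI_Init(&argc, &argv);
        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        /* ... collective writes ... */
        MPI_File_close(&fh);  /* leaking fh defers a barrier into MPI_Finalize */
        MPI_Finalize();
        return 0;
    }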

Re: [OMPI users] Question about OpenMPI performance vs. MVAPICH2

2009-09-20 Thread Jed Brown
Brian Powell wrote: > I ran a final test which I find very strange: I ran the same test case > on 1 cpu. The MVAPICH2 case was 23% faster!?!? This makes little sense > to me. Both are using ifort as the mpif90 compiler using *identical* > optimization flags, etc. I don't understand how the results

Re: [OMPI users] memchecker overhead?

2009-10-26 Thread Jed Brown
Jeff Squyres wrote: > Using --enable-debug adds in a whole pile of developer-level run-time > checking and whatnot. You probably don't want that on production runs. I have found that --enable-debug --enable-memchecker actually produces more valgrind noise than leaving them off. Are there

Re: [OMPI users] memchecker overhead?

2009-10-26 Thread Jed Brown
Samuel K. Gutierrez wrote: > Hi Jed, > > I'm not sure if this will help, but it's worth a try. Turn off OMPI's > memory wrapper and see what happens. > > c-like shell > setenv OMPI_MCA_memory_ptmalloc2_disable 1 > > bash-like shell > export OMPI_MCA_memory_ptmalloc2_disable=1 > > Also add the

Re: [OMPI users] memchecker overhead?

2009-10-26 Thread Jed Brown
Jeff Squyres wrote: > Verbs and Open MPI don't have these options on by default because a) > you need to compile against Valgrind's header files to get them to > work, and b) there's a tiny/small amount of overhead inserted by OMPI > telling Valgrind "this memory region is ok", but we live in an

Re: [OMPI users] segmentation fault: Address not mapped

2009-11-23 Thread Jed Brown
On Mon, 23 Nov 2009 10:39:28 -0800, George Bosilca wrote: > In the case of Open MPI we use pointers, which are different than int > in most cases. I just want to comment that Open MPI's opaque (to the user) pointers are significantly better than int because they offer type checking.
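
A contrived example of the type-safety point (my illustration; assumes Open MPI's distinct pointer-typed handles):

    #include <mpi.h>

    static void demo(void)
    {
        MPI_Comm comm = MPI_COMM_WORLD;
        int size;
        MPI_Comm_size(comm, &size);   /* correct */
        /* MPI_Type_size(comm, &size);   with pointer-typed handles this
         * is a compile-time error; with int handles it compiles
         * silently and fails at runtime. */
    }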

Re: [OMPI users] Program deadlocks, on simple send/recv loop

2009-12-03 Thread Jed Brown
On Thu, 3 Dec 2009 12:21:50 -0500, Jeff Squyres wrote: > On Dec 3, 2009, at 10:56 AM, Brock Palen wrote: > > > The allocation statement is ok: > > allocate(vec(vec_size,vec_per_proc*(size-1))) > > > > This allocates memory vec(32768, 2350). It's easier to translate to C

[OMPI users] Wrappers should put include path *after* user args

2009-12-04 Thread Jed Brown
Open MPI is installed by the distro with headers in /usr/include $ mpif90 -showme:compile -I/some/special/path -I/usr/include -pthread -I/usr/lib/openmpi -I/some/special/path Here's why it's a problem: HDF5 is also installed in /usr with modules at /usr/include/h5*.mod. A new HDF5 cannot

Re: [OMPI users] MPI debugger

2010-01-11 Thread Jed Brown
On Sun, 10 Jan 2010 19:29:18 +, Ashley Pittman wrote: > It'll show you parallel stack traces but won't let you single step for > example. Two lightweight options if you want stepping, breakpoints, watchpoints, etc. * Use serial debuggers on some interesting processes,
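
The usual idiom for attaching a serial debugger to selected ranks (a common trick, not specific to this thread):

    #include <stdio.h>
    #include <unistd.h>

    /* Park one rank in a loop so gdb can attach by PID; from the
     * debugger, `set var hold = 0` releases it. */
    static void wait_for_debugger(int rank, int target_rank)
    {
        volatile int hold = 1;
        if (rank != target_rank) return;
        fprintf(stderr, "rank %d: pid %d waiting for debugger\n",
                rank, (int)getpid());
        while (hold) sleep(1);
    }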

Re: [OMPI users] More NetBSD fixes

2010-01-15 Thread Jed Brown
On Thu, 14 Jan 2010 21:55:06 -0500, Jeff Squyres wrote: > That being said, you could sign up on it and then set your membership to > receive no mail...? This is especially dangerous because the Open MPI lists munge the Reply-To header, which is a bad thing

Re: [OMPI users] ABI stabilization/versioning

2010-01-25 Thread Jed Brown
On Mon, 25 Jan 2010 09:09:47 -0500, Jeff Squyres wrote: > The short version is that the possibility of static linking really > fouls up the scheme, and we haven't figured out a good way around this > yet. :-( So pkg-config addresses this with its Libs.private field and an

Re: [OMPI users] ABI stabilization/versioning

2010-01-25 Thread Jed Brown
On Mon, 25 Jan 2010 15:10:12 -0500, Jeff Squyres wrote: > Indeed. Our wrapper compilers currently explicitly list all 3 > libraries (-lmpi -lopen-rte -lopen-pal) because we don't know if those > libraries will be static or shared at link time. I am suggesting that it is

Re: [OMPI users] ABI stabilization/versioning

2010-01-26 Thread Jed Brown
On Tue, 26 Jan 2010 11:15:45 +0000, Dave Love wrote: > > Versions were bumped to 0.0.1 for libmpi which has no > > effect for dynamic linking. > > I've forgotten the rules on this, but the point is that it needs to > affect dynamic linking to avoid running with earlier

Re: [OMPI users] speed up this problem by MPI

2010-01-29 Thread Jed Brown
On Fri, 29 Jan 2010 11:25:09 -0500, Richard Treumann wrote: > Any support for automatic serialization of C++ objects would need to be in > some sophisticated utility that is not part of MPI. There may be such > utilities but I do not think anyone who has been involved in

Re: [OMPI users] [mpich-discuss] problem with MPI_Get_count() for very long (but legal length) messages.

2010-02-06 Thread Jed Brown
On Fri, 5 Feb 2010 14:28:40 -0600, Barry Smith wrote: > To cheer you up, when I run with openMPI it runs forever sucking down > 100% CPU trying to send the messages :-) On my test box (x86 with 8GB memory), Open MPI (1.4.1) does complete after several seconds, but still

Re: [OMPI users] Difficulty with MPI_Unpack

2010-02-08 Thread Jed Brown
On Sun, 07 Feb 2010 22:40:55 -0500, Prentice Bisbal wrote: > Hello, everyone. I'm having trouble packing/unpacking this structure: > > typedef struct{ > int index; > int* coords; > }point; > > The size of the coords array is not known a priori, so it needs to be a >
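
One way to handle the variable-length member with MPI_Pack (an illustrative sketch, not necessarily what the thread settled on): send the index, the length n, and the coords data in one MPI_PACKED message so the receiver knows how much to unpack.

    #include <mpi.h>
    #include <stdlib.h>

    typedef struct { int index; int *coords; } point;

    static void point_send(point *p, int n, int dest, MPI_Comm comm)
    {
        int bytes, pos = 0;
        char *buf;
        MPI_Pack_size(2 + n, MPI_INT, comm, &bytes);  /* index + n + coords */
        buf = malloc(bytes);
        MPI_Pack(&p->index, 1, MPI_INT, buf, bytes, &pos, comm);
        MPI_Pack(&n, 1, MPI_INT, buf, bytes, &pos, comm);
        MPI_Pack(p->coords, n, MPI_INT, buf, bytes, &pos, comm);
        MPI_Send(buf, pos, MPI_PACKED, dest, 0, comm);
        free(buf);
    }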

Re: [OMPI users] Similar question about MPI_Create_type

2010-02-08 Thread Jed Brown
On Mon, 08 Feb 2010 13:54:10 -0500, Prentice Bisbal wrote: > but I don't have that book handy. The standard has lots of examples. http://www.mpi-forum.org/docs/docs.html You can do this, but for small structures, you're better off just packing buffers. For large structures

Re: [OMPI users] Similar question about MPI_Create_type

2010-02-08 Thread Jed Brown
On Mon, 08 Feb 2010 14:42:15 -0500, Prentice Bisbal wrote: > I'll give that a try, too. IMHO, MPI_Pack/Unpack looks easier and less > error prone, but Pacheco advocates using derived types over > MPI_Pack/Unpack. I would recommend using derived types for big structures, or

Re: [OMPI users] MPi Abort verbosity

2010-02-24 Thread Jed Brown
On Wed, 24 Feb 2010 14:21:02 +0100, Gabriele Fatigati wrote: > Yes, of course, > > but i would like to know if there is any way to do that with openmpi See the error handler docs, e.g. MPI_Comm_set_errhandler. Jed
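
A short sketch of that approach (standard MPI only): install MPI_ERRORS_RETURN and handle error codes yourself instead of relying on the default fatal handler.

    #include <mpi.h>
    #include <stdio.h>

    static void check(int err)
    {
        if (err != MPI_SUCCESS) {
            char msg[MPI_MAX_ERROR_STRING];
            int len;
            MPI_Error_string(err, msg, &len);
            fprintf(stderr, "MPI error: %s\n", msg);
            MPI_Abort(MPI_COMM_WORLD, err);
        }
    }

    /* usage: MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
     *        check(MPI_Send(buf, 1, MPI_INT, dest, 0, MPI_COMM_WORLD)); */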

Re: [OMPI users] 3D domain decomposition with MPI

2010-03-11 Thread Jed Brown
On Wed, 10 Mar 2010 22:25:43 -0500, Gus Correa wrote: > Ocean dynamics equations, at least in the codes I've seen, > normally use "pencil" decomposition, and are probably harder to > handle using 3D "chunk" decomposition (due to the asymmetry imposed by > gravity). There

Re: [OMPI users] 3D domain decomposition with MPI

2010-03-13 Thread Jed Brown
On Fri, 12 Mar 2010 15:06:33 -0500, Gus Correa wrote: > Hi Cole, Jed > > I don't have much direct experience with PETSc. Disclaimer: I've been using PETSc for several years and also work on the library itself. > I mostly troubleshooted other people's PETSc programs, >

Re: [OMPI users] How to debug Open MPI programs with gdb

2010-04-22 Thread Jed Brown
On Thu, 22 Apr 2010 13:11:49 +0200, "Немања Илић (Nemanja Ilic)" wrote: > On the contrary when I debug with "mpirun -np 4 xterm -e gdb > my_mpi_application" the four debugger windows are started with > separate thread each, just as it

Re: [OMPI users] How to "guess" the incoming data type ?

2010-04-26 Thread Jed Brown
On Sun, 25 Apr 2010 20:38:54 -0700, Eugene Loh wrote: > Could you encode it into the tag? This sounds dangerous. > Or, append a data type to the front of each message? This is the idea, unfortunately this still requires multiple messages for collectives (because you

Re: [OMPI users] Solving SVD Using Lanczos Method Implementation

2010-04-26 Thread Jed Brown
On Mon, 26 Apr 2010 22:30:15 +0700, long thai wrote: > Hi all. > > I'm trying to develop MPI program to solve SVD using Lanczos algorithms. > However, I have no idea how to do that. Somebody suggested to take a look at > http://www.netlib.org/scalapack/ but I cannot

[OMPI users] Highly variable performance

2010-06-02 Thread Jed Brown
I'm investigating some very large performance variation and have reduced the issue to a very simple MPI_Allreduce benchmark. The variability does not occur for serial jobs, but it does occur within single nodes. I'm not at all convinced that this is an Open MPI-specific issue (in fact the same
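
The benchmark is essentially this shape (a reconstruction of the idea, not the actual code):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, i, reps = 1000;
        double in = 1.0, out, t0, t1;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < reps; i++)
            MPI_Allreduce(&in, &out, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        t1 = MPI_Wtime();
        if (rank == 0)
            printf("mean MPI_Allreduce time: %g s\n", (t1 - t0) / reps);
        MPI_Finalize();
        return 0;
    }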

Re: [OMPI users] Address not mapped segmentation fault with 1.4.2 ...

2010-06-10 Thread Jed Brown
Just a guess, but you could try the updated patch here https://svn.open-mpi.org/trac/ompi/ticket/2431 Jed

Re: [OMPI users] Highly variable performance

2010-06-23 Thread Jed Brown
Following up on this, I have partial resolution. The primary culprit appears to be stale files in a ramdisk non-uniformly distributed across the sockets, thus interacting poorly with NUMA. The slow runs invariably have high numa_miss and numa_foreign counts. I still have trouble making it

Re: [OMPI users] EXTERNAL: Re: MPI_GET beyond 2 GB displacement

2010-07-07 Thread Jed Brown
On Wed, 07 Jul 2010 15:51:41 -0600, "Price, Brian M (N-KCI)" wrote: > Jeff, > > I understand what you've said about 32-bit signed INTs, but in my program, > the displacement variable that I use for the MPI_GET call is a 64-bit INT > (KIND = 8). The MPI Fortran

Re: [OMPI users] EXTERNAL: Re: MPI_GET beyond 2 GB displacement

2010-07-07 Thread Jed Brown
On Wed, 07 Jul 2010 17:34:44 -0600, "Price, Brian M (N-KCI)" wrote: > Jed, > > The IBM P5 I'm working on is big endian. Sorry, that didn't register. The displ argument is MPI_Aint which is 8 bytes (at least on LP64, probably also on LLP64), so your use of kind=8 for
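
In C terms the point is that the displacement parameter of MPI_Get is an MPI_Aint, not an int; a sketch (values illustrative):

    #include <mpi.h>

    static void get_far(double *origin, int target, MPI_Win win)
    {
        /* MPI_Aint is 8 bytes on LP64/LLP64, so displacements beyond
         * the 2 GB limit of a 32-bit int are representable; Fortran's
         * INTEGER(KIND=8) matches it on such platforms. */
        MPI_Aint disp = (MPI_Aint)1 << 32;  /* too big for a 32-bit int */
        MPI_Get(origin, 1, MPI_DOUBLE, target, disp, 1, MPI_DOUBLE, win);
    }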

Re: [OMPI users] EXTERNAL: Re: MPI_GET beyond 2 GB displacement

2010-07-08 Thread Jed Brown
On Thu, 8 Jul 2010 09:53:11 -0400, Jeff Squyres wrote: > > Do you "use mpi" or the F77 interface? > > It shouldn't matter; both the Fortran module and mpif.h interfaces are the > same. Yes, but only the F90 version can do type checking; the function prototypes are not

Re: [OMPI users] Highly variable performance

2010-07-15 Thread Jed Brown
On Thu, 15 Jul 2010 09:36:18 -0400, Jeff Squyres wrote: > Per my other disclaimer, I'm trolling through my disastrous inbox and > finding some orphaned / never-answered emails. Sorry for the delay! No problem, I should have followed up on this with further explanation. >

Re: [OMPI users] Highly variable performance

2010-07-15 Thread Jed Brown
On Thu, 15 Jul 2010 13:03:31 -0400, Jeff Squyres wrote: > Given the oversubscription on the existing HT links, could contention > account for the difference? (I have no idea how HT's contention > management works) Meaning: if the stars line up in a given run, you > could end

Re: [OMPI users] openmpi v1.5?

2010-07-19 Thread Jed Brown
On Mon, 19 Jul 2010 15:16:59 -0400, Michael Di Domenico wrote: > Since I am a SVN neophyte can anyone tell me when openmpi 1.5 is > scheduled for release? https://svn.open-mpi.org/trac/ompi/milestone/Open%20MPI%201.5 > And whether the Slurm srun changes are going to

Re: [OMPI users] Ok, I've got OpenMPI set up, now what?!

2010-07-19 Thread Jed Brown
On Mon, 19 Jul 2010 13:33:01 -0600, Damien Hocking wrote: > It does. The big difference is that MUMPS is a 3-minute compile, and > PETSc, erm, isn't. It's..longer... FWIW, PETSc takes less than 3 minutes to build (after configuration) for me (I build it every day).

Re: [OMPI users] openmpi v1.5?

2010-07-21 Thread Jed Brown
On Mon, 19 Jul 2010 15:24:32 -0400, Jeff Squyres wrote: > I'm actually waiting for *1* more bug fix before we consider 1.5 "complete". I see this going through, but would it be possible to change the size of the _count field in ompi_status_public_t now so that this bug can be

Re: [OMPI users] Do MPI calls ever sleep?

2010-07-21 Thread Jed Brown
On Wed, 21 Jul 2010 14:10:53 -0400, David Ronis wrote: > Is there another MPI routine that polls for data and then gives up its > time-slice? You're probably looking for the runtime option -mca yield_when_idle 1. This will slightly increase latency, but allows other

Re: [OMPI users] Do MPI calls ever sleep?

2010-07-21 Thread Jed Brown
On Wed, 21 Jul 2010 15:20:24 -0400, David Ronis wrote: > Hi Jed, > > Thanks for the reply and suggestion. I tried adding -mca > yield_when_idle 1 (and later mpi_yield_when_idle 1 which is what > ompi_info reports the variable as) but it seems to have had 0 effect. > My

Re: [OMPI users] where is mpif.h ?

2008-09-23 Thread Jed Brown
On Tue 2008-09-23 08:50, Simon Hammond wrote: > Yes, it should be there. Shouldn't the path be automatically included by the mpif77 wrapper? I ran into this problem when building BLACS (my default OpenMPI 1.2.7 lives in /usr, MPICH2 is at /opt/mpich2). The build tries $ /usr/bin/mpif90 -c

Re: [OMPI users] Execution in multicore machines

2008-09-29 Thread Jed Brown
On Mon 2008-09-29 20:30, Leonardo Fialho wrote: > 1) If I use one node (8 cores) the "user" % is around 100% per core. The > execution time is around 430 seconds. > > 2) If I use 2 nodes (4 cores in each node) the "user" % is around 95% > per core and the "sys" % is 5%. The execution time is

Re: [OMPI users] compilation error about Open Macro when building the code with OpenMPI on Mac OS 10.5.5

2008-10-08 Thread Jed Brown
On Wed, Oct 8, 2008 at 21:19, Sudhakar Mahalingam wrote: > I am having a problem about "Open" Macro's number of arguments, when I try > to build a C++ code with the openmpi-1.2.7 on my Mac OS 10.5.5 machine. The > error message is given below. When I look at the file.h and

[OMPI users] on SEEK_*

2008-10-16 Thread Jed Brown
I've just run into this chunk of code.

    /* MPICH2 will fail if SEEK_* macros are defined
     * because they are also C++ enums. Undefine them
     * when including mpi.h and then redefine them
     * for sanity. */
    #ifdef SEEK_SET
    #define MB_SEEK_SET SEEK_SET
    #define MB_SEEK_CUR SEEK_CUR
    #

Re: [OMPI users] on SEEK_*

2008-10-16 Thread Jed Brown
On Thu 2008-10-16 07:43, Jeff Squyres wrote: > On Oct 16, 2008, at 6:29 AM, Jed Brown wrote: > > Open MPI doesn't require undef'ing of anything. It should also not > require any special ordering of include files. Specifically, the > following codes both compile fine fo

Re: [OMPI users] on SEEK_*

2008-10-16 Thread Jed Brown
On Thu 2008-10-16 08:21, Jeff Squyres wrote: > FWIW: https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/20 is a > placemarker for discussion for the upcoming MPI Forum meeting (next > week). > > Also, be aware that OMPI's 1.2.7 solution isn't perfect, either. You > can see from ticket 20

Re: [OMPI users] OpenMPI runtime-specific environment variable?

2008-10-22 Thread Jed Brown
On Wed 2008-10-22 00:40, Reuti wrote: > > Okay, now I see. Why not just call MPI_Comm_size(MPI_COMM_WORLD, > &nprocs)? When nprocs is 1, it's a serial run. It can also be executed > when not running within mpirun AFAICS. This is absolutely NOT okay. You cannot call any MPI functions before MPI_Init
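
The narrow exception worth knowing: MPI_Initialized may be called at any time, so a library can test it instead of blindly calling communicator functions (a sketch):

    #include <mpi.h>

    /* MPI_Initialized is one of the few MPI calls that is legal before
     * MPI_Init; note it only reports whether MPI_Init has run, not
     * whether the process was launched by mpirun. */
    static int mpi_is_up(void)
    {
        int initialized;
        MPI_Initialized(&initialized);
        return initialized;
    }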

Re: [OMPI users] OMPI users] Fortran vs C reductions

2016-02-09 Thread Jed Brown
George Bosilca writes: > Now we can argue if DOUBLE PRECISION in Fortran is a double in C. As these > languages are interoperable, and there is no explicit conversion function, > it is safe to assume this is the case. Thus, it seems to me absolutely > legal to provide the

Re: [OMPI users] OMPI users] Fortran vs C reductions

2016-02-10 Thread Jed Brown
Gilles Gouaillardet writes: >> implementation. Must I compile in support for being called with >> MPI_DOUBLE_COMPLEX? >> > does that really matter ? Possibly. For example, if the library needed to define some static data, its setup might involve communicating values before
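
For reference, the kind of user-defined reduction under discussion looks like this in C (a generic sketch; the question is whether such an op must also be prepared for Fortran types like MPI_DOUBLE_COMPLEX):

    #include <mpi.h>

    /* A commutative user-defined sum over doubles. */
    static void my_sum(void *in, void *inout, int *len, MPI_Datatype *dtype)
    {
        double *a = (double *)in, *b = (double *)inout;
        for (int i = 0; i < *len; i++) b[i] += a[i];
    }

    /* usage: MPI_Op op; MPI_Op_create(my_sum, 1, &op); */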