It looks like boost::mpi and/or your Python "mpi" module might be creating a bogus
argv array and passing it to OMPI's MPI_Init routine. Note that argv is
required by C99 to be terminated with a NULL pointer (that is,
(argv[argc]==NULL) must hold). See http://stackoverflow.com/a/3772826/158513.
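For illustration only (not from the original thread), here is a minimal sketch of
handing MPI_Init a synthetic argv with the required NULL terminator; the program
name "myprog" and variable names are made up:

    #include <mpi.h>

    int main(void)
    {
        /* argc + 1 entries: the extra slot is the required NULL terminator,
         * so fake_argv[fake_argc] == NULL holds. */
        static char *fake_argv[] = { "myprog", NULL };
        int    fake_argc = 1;
        char **argv_ptr  = fake_argv;   /* MPI_Init takes char ***; it may adjust this */

        MPI_Init(&fake_argc, &argv_ptr);
        /* ... rest of the application ... */
        MPI_Finalize();
        return 0;
    }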
On Nov 24, 2014, at 12:06 AM, George Bosilca wrote:
> https://github.com/open-mpi/ompi/pull/285 is a potential answer. I would like
> to hear Dave Goodell comment on this before pushing it upstream.
>
> George.
I'll take a look at it today. My notification settings
On Jan 9, 2015, at 7:46 AM, Jeff Squyres (jsquyres) wrote:
> Yes, I know examples 3.8/3.9 are blocking examples.
>
> But it's morally the same as:
>
> MPI_WAITALL(send_requests...)
> MPI_WAITALL(recv_requests...)
>
> Strictly speaking, that can deadlock, too.
>
> It
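Not part of the quoted thread, but for illustration, a hedged sketch of the
pattern that sidesteps the ordering concern entirely: post all the nonblocking
receives and sends, then complete them with a single MPI_Waitall over both sets
of requests (the peer rank, count, and datatype here are made up):

    #include <mpi.h>

    /* Sketch: exchange with one peer; one MPI_Waitall covers both the receive
     * and the send request, so completion does not depend on the order in
     * which the two operations finish. */
    void exchange(int peer, double *sendbuf, double *recvbuf, int count)
    {
        MPI_Request reqs[2];

        MPI_Irecv(recvbuf, count, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(sendbuf, count, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    }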
On Jun 5, 2015, at 8:47 PM, Gilles Gouaillardet wrote:
> I did not use the term "pure" properly.
>
> please read instead "posix_memalign is a function that does not modify any
> user variable"
> that assumption is correct when there is no wrapper, and incorrect
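For reference only (not part of the quoted exchange), a minimal call to
posix_memalign; its documented effect is to store the address of the new
allocation through its first argument, and the sizes below are arbitrary:

    #define _POSIX_C_SOURCE 200112L
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        void *buf = NULL;

        /* On success, a 64-byte-aligned address is stored through &buf. */
        if (posix_memalign(&buf, 64, 4096) != 0) {
            fprintf(stderr, "posix_memalign failed\n");
            return 1;
        }

        printf("aligned buffer at %p\n", buf);
        free(buf);
        return 0;
    }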
Perhaps there's an RPATH issue here? I don't fully understand the structure of
Rmpi, but are there both an app and a library (or two separate libraries)
linking against MPI?
I.e., what we want is:
    app ------------> ~ross/OMPI
     \                     ^
      \                    |
       +-----> library ----+
But what
On Apr 1, 2014, at 10:26 AM, "Blosch, Edwin L" wrote:
> I am getting some errors building 1.8 on RHEL6. I tried autoreconf as
> suggested, but it failed for the same reason. Is there a minimum version of
> m4 required that is newer than that provided by RHEL6?
Don't
On Apr 1, 2014, at 12:13 PM, Filippo Spiga wrote:
> Dear Ralph, Dear Jeff,
>
> I've just recompiled the latest Open MPI 1.8. I added
> "--enable-mca-no-build=btl-usnic" to configure but the message still appear.
> Here the output of "--mca btl_base_verbose 100"
On Apr 2, 2014, at 12:57 PM, Filippo Spiga wrote:
> I still do not understand why this keeps appearing...
>
> srun: cluster configuration lacks support for cpu binding
>
> Any clue?
I don't know what causes that message. Ralph, any thoughts here?
-Dave
On Apr 14, 2014, at 12:15 PM, Djordje Romanic wrote:
> When I start wrf with mpirun -np 4 ./wrf.exe, I get this:
> -
> starting wrf task 0 of 1
> starting wrf task 0 of 1
>
I don't know of any workaround. I've created a ticket to track this, but it
probably won't be very high priority in the short term:
https://svn.open-mpi.org/trac/ompi/ticket/4575
-Dave
On Apr 25, 2014, at 3:27 PM, Jamil Appa wrote:
>
> Hi
>
> The following
On Jun 27, 2014, at 8:53 AM, Brock Palen wrote:
> Is there a way to import/map memory from a process (data acquisition) such
> that an MPI program could 'take' or see that memory?
>
> We have a need to do data acquisition at the rate of 0.7 TB/s and need to do
> some
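Not from the thread, but one hedged sketch of the POSIX-shared-memory route: the
acquisition process exports a segment, an MPI rank maps it, and the mapped region
is then an ordinary buffer as far as MPI calls are concerned. The segment name
"/acq_buf", the destination rank, and the length are all hypothetical:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <mpi.h>

    /* Sketch: map a shared memory segment created by another process and send
     * its contents to a peer rank over MPI. Error handling is minimal. */
    void send_acquired_data(int dest, size_t len)
    {
        int fd = shm_open("/acq_buf", O_RDONLY, 0);
        if (fd < 0)
            return;

        void *buf = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED) {
            close(fd);
            return;
        }

        /* The mapped region is ordinary memory as far as MPI is concerned;
         * len is assumed to fit in an int for this sketch. */
        MPI_Send(buf, (int)len, MPI_BYTE, dest, 0, MPI_COMM_WORLD);

        munmap(buf, len);
        close(fd);
    }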
On Sep 27, 2015, at 1:38 PM, marcin.krotkiewski wrote:
>
> Hello, everyone
>
> I am struggling a bit with IB performance when sending data from a POSIX
> shared memory region (/dev/shm). The memory is shared among many MPI
> processes within the same compute
Lachlan mentioned that he has "M Series" hardware, which, to the best of my
knowledge, does not officially support usNIC. It may not be possible to even
configure the relevant usNIC adapter policy in UCSM for M Series
modules/chassis.
Using the TCP BTL may be the only realistic option here.