Hello,
I wasn't able to find anything about that in the standard. Given this situation:
char portName[MPI_MAX_PORT_NAME];
MPI_Open_port(MPI_INFO_NULL, portName);
MPI_Comm communicator;
MPI_Comm_accept(portName, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &communicator);
MPI_Close_port(portName);
// can I still use communicator here?
// can I still use communicator here?
Does
Hello,
I used to set oversubscription as a default using the environment variable
OMPI_MCA_rmaps_base_oversubscribe. However, recently, probably since 4.0.0,
that no longer seems to work.
% echo $OMPI_MCA_rmaps_base_oversubscribe
1
% mpirun --version
mpirun (Open MPI) 4.0.0
Hello,
I have this piece of code:
MPI_Comm icomm;
INFO << "Accepting connection on " << portName;
MPI_Comm_accept(portName.c_str(), MPI_INFO_NULL, 0, MPI_COMM_SELF, &icomm);
and sometimes (like in 1 of 5 runs), I get:
[helium:33883] [[32673,1],0] ORTE_ERROR_LOG: Data unpack would read past end of
Hello,
I can use --oversubscribe to enable oversubscribing. What is the Open MPI way
to set this as a default, e.g. through a config file option or an environment
variable?
Thanks,
Florian
___
users mailing list
users@lists.open-mpi.org
n Hjelm <hje...@me.com> wrote:
>>
>> Try using MPI_Comm_remote_size. As this is an intercommunicator, that will
>> give the number of ranks for send/recv.
>>
>>> On Apr 13, 2018, at 7:34 AM, Florian Lindner <mailingli...@xgm.de> wrote:
Hello,
I have this piece of code
PtrRequest MPICommunication::aSend(double *itemsToSend, int size, int rankReceiver)
{
  rankReceiver = rankReceiver - _rankOffset;
  int comsize = -1;
  MPI_Comm_size(communicator(rankReceiver), &comsize);
  TRACE(size, rank(rankReceiver), comsize);
  MPI_Request
Am 29.03.2018 um 09:58 schrieb Florian Lindner:
> #define MPI_BOOL MPI_Select_unsigned_integer_datatype<sizeof(bool)>::datatype
>
>
> It redefines MPI_BOOL based on the size of bool. I wonder if this is needed
> and why?
>
> I was speculating that the compile
Hello,
in a code that I am currently reading I have found that code:
template <int size>
struct MPI_Select_unsigned_integer_datatype;

template <>
struct MPI_Select_unsigned_integer_datatype<1> {
  static MPI_Datatype datatype;
};

MPI_Datatype MPI_Select_unsigned_integer_datatype<1>::datatype =
for (int i = 0; i < eventsSize; ++i) {
  MPI_Request req;
  eventSendBuf.at(i).size = 5;
  cout << "Isending event " << i << endl;
  MPI_Isend(&eventSendBuf[i], 1, MPI_EVENTDATA, 0, 0, MPI_COMM_WORLD, &req);
  requests.push_back(req);
}
Best,
Florian
Am 19.02.2018 um 10:14 schrieb Flor
Hello,
I am having problems understanding an error valgrind gives me. I tried to boil
down the program as much as possible. The
original program as well as the test example both work fine, but when I link
the created library to another application
I get segfaults. I think that this piece of code
_COMM_WORLD
[asaru:30337] *** MPI_ERR_TRUNCATE: message truncated
[asaru:30337] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now
abort,
[asaru:30337] *** and potentially your MPI job)
Best and thanks,
Florian
>
>
> Cheers,
>
> Gilles
>
> Florian Lindner <mailingli...@xg
Hello,
I have this example code:
#include <mpi.h>
#include <vector>

int main(int argc, char *argv[])
{
  MPI_Init(&argc, &argv);
  {
    MPI_Request req1, req2;
    std::vector<int> vec = {1, 2, 3};
    int packSize = sizeof(int) * vec.size();
    int position = 0;
    std::vector<char> packSendBuf(packSize);
    int vecSize =
Hello,
I have a custom datatype MPI_EVENTDATA (created with MPI_Type_create_struct)
which is a struct with some fixed size fields and a variable sized array of
ints (data). I want to collect a variable number of these types (Events) from
all ranks at rank 0. My current version is working for a
> Regardless, you can have Open MPI use hyperthreads by default (instead of
> cores) with the mpirun option --use-hwthread-cpus.
>
>
>
>> On Jan 10, 2018, at 10:48 AM, r...@open-mpi.org wrote:
>>
>> Set the MCA param “rmaps_base_oversubscribe=1” in your default MCA p
Hello,
a recent openmpi update on my Arch machine seems to have enabled
--nooversubscribe, as described in the manpage. Since I
regularly test on my laptop with just 2 physical cores, I want to set
--oversubscribe by default.
How can I do that?
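One way, assuming a per-user MCA parameter file is read by your installation, is to put the parameter into $HOME/.openmpi/mca-params.conf:

```
# $HOME/.openmpi/mca-params.conf
rmaps_base_oversubscribe = 1
```

Equivalently, MCA parameters can be set through the environment by prefixing the name with OMPI_MCA_, e.g. export OMPI_MCA_rmaps_base_oversubscribe=1 in the shell that launches mpirun.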
I am also a bit surprised that openmpi takes
>> The MPI Ports functionality (chapter 10.4 of MPI 3.1), mainly consisting of
>> MPI_Open_port, MPI_Comm_accept and
>> MPI_Comm_connect, is not usable without running an ompi-server as a third
>> process?
>
> Yes, that’s correct. The reason for moving in that direction is that the
> resource
Am 05.11.2017 um 20:57 schrieb r...@open-mpi.org:
>
>> On Nov 5, 2017, at 6:48 AM, Florian Lindner <mailingli...@xgm.de
>> <mailto:mailingli...@xgm.de>> wrote:
>>
>> Am 04.11.2017 um 00:05 schrieb r...@open-mpi.org <mailto:r...@open-mpi.org>:
and Lookup_name or exchanging the
information using files (or command line or stdin) shouldn't have any
impact on the connection (Connect / Accept) itself.
Best,
Florian
> Ralph
>
>> On Nov 3, 2017, at 11:23 AM, Florian Lindner <mailingli...@xgm.de> wrote:
>>
>>
Am 03.11.2017 um 16:18 schrieb r...@open-mpi.org:
> What version of OMPI are you using?
2.1.1 @ Arch Linux.
Best,
Florian
Hello,
I'm working on a sample program to connect two MPI communicators launched with
mpirun using Ports.
Firstly, I use MPI_Open_port to obtain a name and write that to a file:
if (options.participant == A) { // A publishes the port
if (options.commType == single and rank == 0)
e.
Best and thanks!
Florian
>
> Cheers,
>
> Gilles
>
> On Thursday, March 3, 2016, Florian Lindner <mailingli...@xgm.de> wrote:
>
> > I am still getting errors, even with your script.
> >
> > I will also try to modified build of openmpi that Jef
LDFLAGS="$LDFLAGS -Wl,-z,noexecstack"
make
see
https://projects.archlinux.org/svntogit/packages.git/tree/trunk/PKGBUILD?h=packages/openmpi
Any more ideas?
Best,
Florian
>
>
> > On Mar 2, 2016, at 9:51 AM, Florian Lindner <mailingli...@xgm.de> wrote:
> >
in 0 blocks
==5324== Rerun with --leak-check=full to see details of leaked memory
==5324==
==5324== For counts of detected and suppressed errors, rerun with: -v
==5324== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
Am Donnerstag, 3. März 2016, 14:53:24 CET schrieb Gilles Go
or did I misunderstand?
Best,
Florian
> Cheers,
>
> Gilles
>
> PS if it works, do not jump to the erroneous conclusion valgrind likes
> French and dislikes German ;-)
>
> On Wednesday, March 2, 2016, Florian Lindner <mailingli...@xgm.de> wrote:
>
>
Hello,
using OpenMPI 1.10.2 and valgrind 3.11.0 I try to use the code below to
send a C++ string.
It works fine, but running through valgrind gives a lot of memory errors,
invalid read of size...
What is going wrong there?
Valgrind output, see below.
Thanks!
Florian
// Compile with: mpicxx