[OMPI users] Does Close_port invalidate the communicator?

2019-03-01 Thread Florian Lindner
Hello,

I wasn't able to find anything about that in the standard. Given this situation:


  MPI_Open_port(MPI_INFO_NULL, portName);
  MPI_Comm communicator;
  MPI_Comm_accept(portName, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &communicator);

  MPI_Close_port(portName);

  // can I still use communicator here?

Does closing the port invalidate the communicator? Or does it only mean that the port is
closed to new incoming connections, i.e., Comm_accept won't work on it anymore?
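Archive note: my reading of the standard (a sketch, not an authoritative answer) is that
MPI_Close_port only affects future connection attempts, while a communicator already
returned by MPI_Comm_accept stays usable until it is disconnected or freed. A minimal
sketch of that pattern, assuming a single accept:

  char portName[MPI_MAX_PORT_NAME];
  MPI_Comm communicator;

  MPI_Open_port(MPI_INFO_NULL, portName);
  MPI_Comm_accept(portName, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &communicator);
  MPI_Close_port(portName);   // no further MPI_Comm_accept on this port name

  int remoteSize;
  MPI_Comm_remote_size(communicator, &remoteSize);  // the intercommunicator still works
  // ... Send/Recv over communicator ...
  MPI_Comm_disconnect(&communicator);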

Thanks,
Florian


[OMPI users] Set oversubscribe as default

2019-02-15 Thread Florian Lindner
Hello,

I used to have oversubscribe set as the default using the environment variable
OMPI_MCA_rmaps_base_oversubscribe. However, recently, probably since 4.0.0, that
doesn't seem to work anymore.

% echo $OMPI_MCA_rmaps_base_oversubscribe 
1
% mpirun --version
mpirun (Open MPI) 4.0.0

Report bugs to http://www.open-mpi.org/community/help/
lindnefn@asaru ~ % mpirun -n 4 ls
--
There are not enough slots available in the system to satisfy the 4 slots
that were requested by the application:
  ls

Either request fewer slots for your application, or make more slots available
for use.
--
% mpirun --oversubscribe -n 4 ls
[works]

How can I enable oversubscribe as the default again?
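Archive note: the mechanism suggested in the 2018-01-10 thread further down is to put the
MCA parameter into a default parameter file rather than the environment. A sketch,
assuming the parameter name is unchanged in your release:

  # ~/.openmpi/mca-params.conf  (or <prefix>/etc/openmpi-mca-params.conf)
  rmaps_base_oversubscribe = 1

  # one-off alternatives on the command line / in the environment
  export OMPI_MCA_rmaps_base_oversubscribe=1
  mpirun --oversubscribe -n 4 ./app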

Thanks,
Florian


[OMPI users] Comm_connect: Data unpack would read past end of buffer

2018-08-03 Thread Florian Lindner
Hello,

I have this piece of code:

MPI_Comm icomm;
INFO << "Accepting connection on " << portName;
MPI_Comm_accept(portName.c_str(), MPI_INFO_NULL, 0, MPI_COMM_SELF, &icomm);

and sometimes (like in 1 of 5 runs), I get:

[helium:33883] [[32673,1],0] ORTE_ERROR_LOG: Data unpack would read past end of 
buffer in file dpm_orte.c at line 406
[helium:33883] *** An error occurred in MPI_Comm_accept
[helium:33883] *** reported by process [2141257729,0]
[helium:33883] *** on communicator MPI_COMM_SELF
[helium:33883] *** MPI_ERR_UNKNOWN: unknown error
[helium:33883] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will 
now abort,
[helium:33883] ***and potentially your MPI job)
[helium:33883] [0] func:/usr/lib/libopen-pal.so.13(opal_backtrace_buffer+0x33) 
[0x7fc1ad0ac6e3]
[helium:33883] [1] func:/usr/lib/libmpi.so.12(ompi_mpi_abort+0x365) 
[0x7fc1af4955e5]
[helium:33883] [2] 
func:/usr/lib/libmpi.so.12(ompi_mpi_errors_are_fatal_comm_handler+0xe2) 
[0x7fc1af487e72]
[helium:33883] [3] func:/usr/lib/libmpi.so.12(ompi_errhandler_invoke+0x145) 
[0x7fc1af4874b5]
[helium:33883] [4] func:/usr/lib/libmpi.so.12(MPI_Comm_accept+0x262) 
[0x7fc1af4a90e2]
[helium:33883] [5] func:./mpiports() [0x41e43d]
[helium:33883] [6] func:/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) 
[0x7fc1ad7a1830]
[helium:33883] [7] func:./mpiports() [0x41b249]


Before that I check for the length of portName

  DEBUG << "COMM ACCEPT portName.size() = " << portName.size();
  DEBUG << "MPI_MAX_PORT_NAME = " << MPI_MAX_PORT_NAME;

which both return 1024.

I am completely puzzled how I can get a buffer issue here, unless something is faulty
with the std::string portName.

Any clues?

Launch command: mpirun -n 4 -mca opal_abort_print_stack 1 
OpenMPI 1.10.2 @ Ubuntu 16.
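Archive note: one way to rule out the string handling (a sketch only, not from the
original post) is to pass the port name through a fixed-size, null-terminated char
buffer instead of std::string::c_str():

  char port[MPI_MAX_PORT_NAME] = {0};                        // needs <cstring> for strncpy
  std::strncpy(port, portName.c_str(), MPI_MAX_PORT_NAME - 1);
  MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &icomm);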

Thanks,
Florian


[OMPI users] Settings oversubscribe as default?

2018-08-03 Thread Florian Lindner
Hello,

I can use --oversubscribe to enable oversubscribing. What is the Open MPI way to set
this as a default, e.g. through a config file option or an environment variable?

Thanks,
Florian


Re: [OMPI users] Invalid rank despite com size large enough

2018-04-13 Thread Florian Lindner
Am 13.04.2018 um 15:41 schrieb Nathan Hjelm:
> Err. MPI_Comm_remote_size.

Ah, thanks! I thought that MPI_Comm_size returns the number of remote ranks.
remote_size returns 1, so now at least
the error message is consistent.
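Archive note for readers hitting the same error: on an inter-communicator,
MPI_Comm_size and MPI_Comm_rank describe the local group, while destination ranks in
point-to-point calls refer to the remote group, whose size comes from
MPI_Comm_remote_size. A minimal sketch (intercomm, buf, count, dest, and request are
assumed to be declared elsewhere):

  int localSize, remoteSize;
  MPI_Comm_size(intercomm, &localSize);          // size of my own (local) group
  MPI_Comm_remote_size(intercomm, &remoteSize);  // valid destinations are 0 .. remoteSize-1

  if (dest < remoteSize)
    MPI_Isend(buf, count, MPI_DOUBLE, dest, 0, intercomm, &request);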

Best,
Florian

> 
>> On Apr 13, 2018, at 7:41 AM, Nathan Hjelm <hje...@me.com> wrote:
>>
>> Try using MPI_Comm_remotr_size. As this is an intercommunicator that will 
>> give the number of ranks for send/recv. 
>>
>>> On Apr 13, 2018, at 7:34 AM, Florian Lindner <mailingli...@xgm.de> wrote:
>>>
>>> Hello,
>>>
>>> I have this piece of code
>>>
>>> PtrRequest MPICommunication::aSend(double *itemsToSend, int size, int 
>>> rankReceiver)
>>> {
>>> rankReceiver = rankReceiver - _rankOffset;
>>> int comsize = -1;
>>> MPI_Comm_size(communicator(rankReceiver), &comsize);
>>> TRACE(size, rank(rankReceiver), comsize);
>>>
>>>
>>> MPI_Request request;
>>> MPI_Isend(itemsToSend,
>>>   size,
>>>   MPI_DOUBLE,
>>>   rank(rankReceiver),
>>>   0,
>>>   communicator(rankReceiver),
>>>   &request);
>>>
>>> return PtrRequest(new MPIRequest(request));
>>> }
>>>
>>> While there are quite some calls you don't know, it's basically a wrapper 
>>> around Isend.
>>>
>>> The communicator returned by communicator(rankReceiver) is an 
>>> inter-communicator!
>>>
>>> The TRACE call prints:
>>>
>>> [1,1]:(1) 14:30:04 [com::MPICommunication]:104 in aSend: Entering 
>>> aSend
>>> [1,1]:  Argument 0: size == 50
>>> [1,1]:  Argument 1: rank(rankReceiver) == 1
>>> [1,1]:  Argument 2: comsize == 2
>>> [1,0]:(0) 14:30:04 [com::MPICommunication]:104 in aSend: Entering 
>>> aSend
>>> [1,0]:  Argument 0: size == 48
>>> [1,0]:  Argument 1: rank(rankReceiver) == 0
>>> [1,0]:  Argument 2: comsize == 2
>>>
>>> So, on rank 1 we send to rank = 1 on a communicator with size = 2.
>>>
>>> Still, rank 1 crashes with:
>>>
>>> [neon:80361] *** An error occurred in MPI_Isend
>>> [neon:80361] *** reported by process [1052966913,1]
>>> [neon:80361] *** on communicator
>>> [neon:80361] *** MPI_ERR_RANK: invalid rank
>>> [neon:80361] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will 
>>> now abort,
>>> [neon:80361] ***and potentially your MPI job)
>>>
>>> Your colleagues from MPICH print:
>>>
>>> [1] Fatal error in PMPI_Isend: Invalid rank, error stack:
>>> [1] PMPI_Isend(149): MPI_Isend(buf=0x560ddeb02100, count=49, MPI_DOUBLE, 
>>> dest=1, tag=0, comm=0x8405,
>>> request=0x7ffd528989c0) failed
>>> [1] PMPI_Isend(97).: Invalid rank has value 1 but must be nonnegative and 
>>> less than 1
>>>
>>> [0] Fatal error in PMPI_Isend: Invalid rank, error stack:
>>> [0] PMPI_Isend(149): MPI_Isend(buf=0x564b74c9edd8, count=1, MPI_DOUBLE, 
>>> dest=1, tag=0, comm=0x8406,
>>> request=0x7ffe5848d9f0) failed
>>> [0] PMPI_Isend(97).: Invalid rank has value 1 but must be nonnegative and 
>>> less than 1
>>>
>>> but MPI_Comm_size also returns 2.
>>>
>>> Do you have any idea where to look to find out what is going wrong here? 
>>> Esp. with the communicator being an inter-com.
>>>
>>> Best Thanks,
>>> Florian


[OMPI users] Invalid rank despite com size large enough

2018-04-13 Thread Florian Lindner
Hello,

I have this piece of code

PtrRequest MPICommunication::aSend(double *itemsToSend, int size, int 
rankReceiver)
{
  rankReceiver = rankReceiver - _rankOffset;
  int comsize = -1;
  MPI_Comm_size(communicator(rankReceiver), &comsize);
  TRACE(size, rank(rankReceiver), comsize);


  MPI_Request request;
  MPI_Isend(itemsToSend,
size,
MPI_DOUBLE,
rank(rankReceiver),
0,
communicator(rankReceiver),
&request);

  return PtrRequest(new MPIRequest(request));
}

While there are quite a few calls you won't know, it's basically a wrapper
around Isend.

The communicator returned by communicator(rankReceiver) is an 
inter-communicator!

The TRACE call prints:

[1,1]:(1) 14:30:04 [com::MPICommunication]:104 in aSend: Entering aSend
[1,1]:  Argument 0: size == 50
[1,1]:  Argument 1: rank(rankReceiver) == 1
[1,1]:  Argument 2: comsize == 2
[1,0]:(0) 14:30:04 [com::MPICommunication]:104 in aSend: Entering aSend
[1,0]:  Argument 0: size == 48
[1,0]:  Argument 1: rank(rankReceiver) == 0
[1,0]:  Argument 2: comsize == 2

So, on rank 1 we send to rank = 1 on a communicator with size = 2.

Still, rank 1 crashes with:

[neon:80361] *** An error occurred in MPI_Isend
[neon:80361] *** reported by process [1052966913,1]
[neon:80361] *** on communicator
[neon:80361] *** MPI_ERR_RANK: invalid rank
[neon:80361] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now 
abort,
[neon:80361] ***and potentially your MPI job)

Your colleagues from MPICH print:

[1] Fatal error in PMPI_Isend: Invalid rank, error stack:
[1] PMPI_Isend(149): MPI_Isend(buf=0x560ddeb02100, count=49, MPI_DOUBLE, 
dest=1, tag=0, comm=0x8405,
request=0x7ffd528989c0) failed
[1] PMPI_Isend(97).: Invalid rank has value 1 but must be nonnegative and less 
than 1

[0] Fatal error in PMPI_Isend: Invalid rank, error stack:
[0] PMPI_Isend(149): MPI_Isend(buf=0x564b74c9edd8, count=1, MPI_DOUBLE, dest=1, 
tag=0, comm=0x8406,
request=0x7ffe5848d9f0) failed
[0] PMPI_Isend(97).: Invalid rank has value 1 but must be nonnegative and less 
than 1

but MPI_Comm_size also returns 2.

Do you have any idea where to look to find out what is going wrong here? Especially
with the communicator being an inter-communicator.

Best Thanks,
Florian


Re: [OMPI users] Redefining MPI_BOOL depending of sizeof(bool)

2018-03-29 Thread Florian Lindner


Am 29.03.2018 um 09:58 schrieb Florian Lindner:
> #define MPI_BOOL MPI_Select_unsigned_integer_datatype<sizeof(bool)>::datatype
> 
> 
> It redefines MPI_BOOL based on the size of bool. I wonder if this is needed 
> and why?
> 
> I was speculating that the compiler could pack multiple bools in one word, 
> when used as a array. But the code above is a
> compile time specialization and won't help there.

Ok, I just discovered that MPI has no MPI_BOOL type. According to the standard,
there are MPI_CXX_BOOL and MPI_C_BOOL
(pp. 26).

Since I use C++ I tend to use MPI_CXX_BOOL, which also seems to be present when 
no CXX bindings are compiled.
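Archive note: a minimal usage sketch of MPI_CXX_BOOL (assuming an MPI-3 implementation
that provides the datatype, as observed above; rank is assumed to come from
MPI_Comm_rank):

  bool flag = true;
  if (rank == 0)
    MPI_Send(&flag, 1, MPI_CXX_BOOL, 1, 0, MPI_COMM_WORLD);
  else if (rank == 1)
    MPI_Recv(&flag, 1, MPI_CXX_BOOL, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);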

Best,
Florian


[OMPI users] Redefining MPI_BOOL depending of sizeof(bool)

2018-03-29 Thread Florian Lindner
Hello,

in a code that I am currently reading I have found that code:


template <std::size_t>
struct MPI_Select_unsigned_integer_datatype;

template <>
struct MPI_Select_unsigned_integer_datatype<1> {
  static MPI_Datatype datatype;
};
MPI_Datatype MPI_Select_unsigned_integer_datatype<1>::datatype = 
MPI_UNSIGNED_CHAR;

template <>
struct MPI_Select_unsigned_integer_datatype<2> {
  static MPI_Datatype datatype;
};
MPI_Datatype MPI_Select_unsigned_integer_datatype<2>::datatype = 
MPI_UNSIGNED_SHORT;

template <>
struct MPI_Select_unsigned_integer_datatype<4> {
  static MPI_Datatype datatype;
};
MPI_Datatype MPI_Select_unsigned_integer_datatype<4>::datatype = MPI_UNSIGNED;

template <>
struct MPI_Select_unsigned_integer_datatype<8> {
  static MPI_Datatype datatype;
};
MPI_Datatype MPI_Select_unsigned_integer_datatype<8>::datatype = 
MPI_UNSIGNED_LONG;

#define MPI_BOOL MPI_Select_unsigned_integer_datatype<sizeof(bool)>::datatype


It redefines MPI_BOOL based on the size of bool. I wonder if this is needed and 
why?

I was speculating that the compiler could pack multiple bools into one word when they
are used as an array. But the code above is a compile-time specialization and won't
help there.

Best Thanks,
Florian


Re: [OMPI users] Help debugging invalid read

2018-02-19 Thread Florian Lindner
Ok, I think I have found the problem:

During std::vector::push_back or emplace_back a reallocation can happen, and thus the
memory locations that I gave to MPI_Isend become invalid.

My loop now reads:

  std::vector<MPI_EventData> eventSendBuf(eventsSize); // Buffer to hold the MPI_EventData objects

  for (int i = 0; i < eventsSize; ++i) {
MPI_Request req;

eventSendBuf.at(i).size = 5;

cout << "Isending event " << i << endl;
MPI_Isend(&eventSendBuf[i], 1, MPI_EVENTDATA, 0, 0, MPI_COMM_WORLD, &req);
requests.push_back(req);
  }
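Archive note: an equivalent alternative (a sketch of the same idea, not from the
original mail): keep push_back but reserve the final capacity up front, so no
reallocation can move the elements whose addresses were handed to MPI_Isend:

  std::vector<MPI_EventData> eventSendBuf;
  eventSendBuf.reserve(eventsSize);   // capacity is fixed, no reallocation below

  for (int i = 0; i < eventsSize; ++i) {
    MPI_Request req;
    eventSendBuf.push_back(MPI_EventData{});
    eventSendBuf.back().size = 5;
    MPI_Isend(&eventSendBuf.back(), 1, MPI_EVENTDATA, 0, 0, MPI_COMM_WORLD, &req);
    requests.push_back(req);
  }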

Best,
Florian


Am 19.02.2018 um 10:14 schrieb Florian Lindner:
> Hello,
> 
> I am having problems understanding an error valgrind gives me. I tried to boil
> down the program as much as possible. The
> original program as well as the test example both work fine, but when I link 
> the created library to another application
> I get segfaults. I think that this piece of code is to blame. I run valgrind 
> on it and get an invalid read.
> 
> The code can be seen at 
> https://gist.github.com/floli/d62d16ce7cabb4522e2ae7e6b3cfda43 or below.
> 
> It's about 60 lines of C/C++ code.
> 
> I have also attached the valgrind report below the code.
> 
> The code registers a custom MPI datatype and sends it using an Isend. It
> does not crash or produce invalid data, but
> I fear that the invalid read message from valgrind is a hint of an existing
> memory corruption.
> 
> But I have no idea where that could happen.
> 
> OpenMPI 3.0.0 @ Arch
> 
> I am very thankful for any hints whatsoever!
> 
> Florian
> 
> 
> 
> 
> 
> // Compile and test with: mpicxx -std=c++11 -g -O0 mpitest.cpp  &&
> LD_PRELOAD=/usr/lib/valgrind/libmpiwrap-amd64-linux.so mpirun -n 1 valgrind 
> --read-var-info=yes --leak-check=full ./a.out
> 
> #include <iostream>
> #include <vector>
> 
> #include <mpi.h>
> 
> using namespace std;
> 
> struct MPI_EventData
> {
>   int size;
> };
> 
> 
> void collect()
> {
>   // Register MPI datatype
>   MPI_Datatype MPI_EVENTDATA;
>   int blocklengths[] = {1};
>   MPI_Aint displacements[] = {offsetof(MPI_EventData, size) };
>   MPI_Datatype types[] = {MPI_INT};
>   MPI_Type_create_struct(1, blocklengths, displacements, types, &MPI_EVENTDATA);
>   MPI_Type_commit(&MPI_EVENTDATA);
> 
>   int rank, MPIsize;
>   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>   MPI_Comm_size(MPI_COMM_WORLD, &MPIsize);
> 
>   std::vector<MPI_Request> requests;
>   std::vector<int> eventsPerRank(MPIsize);
>   size_t eventsSize = 3; // each rank sends three events, invalid read 
> happens only if eventsSize > 1
>   MPI_Gather(&eventsSize, 1, MPI_INT, eventsPerRank.data(), 1, MPI_INT, 0, MPI_COMM_WORLD);
> 
>   std::vector<MPI_EventData> eventSendBuf; // Buffer to hold the MPI_EventData objects
> 
>   for (int i = 0; i < eventsSize; ++i) {
> MPI_EventData eventdata;
> MPI_Request req;
> 
> eventdata.size = 5;
> eventSendBuf.push_back(eventdata);
> 
> cout << "Isending event " << i << endl;
> MPI_Isend(&eventSendBuf.back(), 1, MPI_EVENTDATA, 0, 0, MPI_COMM_WORLD, &req);
> requests.push_back(req);
>   }
> 
>   if (rank == 0) {
> for (int i = 0; i < MPIsize; ++i) {
>   for (int j = 0; j < eventsPerRank[i]; ++j) {
> MPI_EventData ev;
> MPI_Recv(&ev, 1, MPI_EVENTDATA, i, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
> 
> cout << "Received Size = " << ev.size << endl;
>   }
> }
>   }
>   MPI_Waitall(requests.size(), requests.data(), MPI_STATUSES_IGNORE);
>   MPI_Type_free(&MPI_EVENTDATA);
> }
> 
> 
> int main(int argc, char *argv[])
> {
>   MPI_Init(&argc, &argv);
> 
>   collect();
> 
>   MPI_Finalize();
> }
> 
> 
> /*
> 
>  % mpicxx -std=c++11 -g -O0 mpitest.cpp  && 
> LD_PRELOAD=/usr/lib/valgrind/libmpiwrap-amd64-linux.so mpirun -n 1 valgrind
> --read-var-info=yes --leak-check=full ./a.out
> ==13584== Memcheck, a memory error detector
> ==13584== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
> ==13584== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
> ==13584== Command: ./a.out
> ==13584==
> valgrind MPI wrappers 13584: Active for pid 13584
> valgrind MPI wrappers 13584: Try MPIWRAP_DEBUG=help for possible options
> ==13584== Thread 3:
> ==13584== Syscall param epoll_pwait(sigmask) points to unaddressable byte(s)
> ==13584==at 0x61A0FE6: epoll_pwait (in /usr/lib/libc-2.26.so)
> ==13584==by 0x677CDDC: ??? (in /usr/lib/openmpi/libopen-pal.so.40.0.0)
> ==13584==by 0x6780EDA: opal_libevent2022_event_base_loop (in 
> /usr/lib/openmpi/libopen-pal.so.40.0.0)
> ==13584==by 0x93100CE: ??? (in 
> /usr/lib/openmpi/openmpi/mc

[OMPI users] Help debugging invalid read

2018-02-19 Thread Florian Lindner
Hello,

I am having problems understanding an error valgrind gives me. I tried to boil
down the program as much as possible. The
original program as well as the test example both work fine, but when I link 
the created library to another application
I get segfaults. I think that this piece of code is to blame. I run valgrind on 
it and get an invalid read.

The code can be seen at 
https://gist.github.com/floli/d62d16ce7cabb4522e2ae7e6b3cfda43 or below.

It's about 60 lines of C/C++ code.

I have also attached the valgrind report below the code.

The code registers a custom MPI datatype and sends it using an Isend. It does
not crash or produce invalid data, but
I fear that the invalid read message from valgrind is a hint of an existing
memory corruption.

But I have no idea where that could happen.

OpenMPI 3.0.0 @ Arch

I am very thankful for any hints whatsoever!

Florian





// Compile and test with: mpicxx -std=c++11 -g -O0 mpitest.cpp  &&
LD_PRELOAD=/usr/lib/valgrind/libmpiwrap-amd64-linux.so mpirun -n 1 valgrind 
--read-var-info=yes --leak-check=full ./a.out

#include <iostream>
#include <vector>

#include <mpi.h>

using namespace std;

struct MPI_EventData
{
  int size;
};


void collect()
{
  // Register MPI datatype
  MPI_Datatype MPI_EVENTDATA;
  int blocklengths[] = {1};
  MPI_Aint displacements[] = {offsetof(MPI_EventData, size) };
  MPI_Datatype types[] = {MPI_INT};
  MPI_Type_create_struct(1, blocklengths, displacements, types, &MPI_EVENTDATA);
  MPI_Type_commit(&MPI_EVENTDATA);

  int rank, MPIsize;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &MPIsize);

  std::vector<MPI_Request> requests;
  std::vector<int> eventsPerRank(MPIsize);
  size_t eventsSize = 3; // each rank sends three events, invalid read happens 
only if eventsSize > 1
  MPI_Gather(&eventsSize, 1, MPI_INT, eventsPerRank.data(), 1, MPI_INT, 0, MPI_COMM_WORLD);

  std::vector<MPI_EventData> eventSendBuf; // Buffer to hold the MPI_EventData objects

  for (int i = 0; i < eventsSize; ++i) {
MPI_EventData eventdata;
MPI_Request req;

eventdata.size = 5;
eventSendBuf.push_back(eventdata);

cout << "Isending event " << i << endl;
MPI_Isend(&eventSendBuf.back(), 1, MPI_EVENTDATA, 0, 0, MPI_COMM_WORLD, &req);
requests.push_back(req);
  }

  if (rank == 0) {
for (int i = 0; i < MPIsize; ++i) {
  for (int j = 0; j < eventsPerRank[i]; ++j) {
MPI_EventData ev;
MPI_Recv(&ev, 1, MPI_EVENTDATA, i, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

cout << "Received Size = " << ev.size << endl;
  }
}
  }
  MPI_Waitall(requests.size(), requests.data(), MPI_STATUSES_IGNORE);
  MPI_Type_free(&MPI_EVENTDATA);
}


int main(int argc, char *argv[])
{
  MPI_Init(&argc, &argv);

  collect();

  MPI_Finalize();
}


/*

 % mpicxx -std=c++11 -g -O0 mpitest.cpp  && 
LD_PRELOAD=/usr/lib/valgrind/libmpiwrap-amd64-linux.so mpirun -n 1 valgrind
--read-var-info=yes --leak-check=full ./a.out
==13584== Memcheck, a memory error detector
==13584== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==13584== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==13584== Command: ./a.out
==13584==
valgrind MPI wrappers 13584: Active for pid 13584
valgrind MPI wrappers 13584: Try MPIWRAP_DEBUG=help for possible options
==13584== Thread 3:
==13584== Syscall param epoll_pwait(sigmask) points to unaddressable byte(s)
==13584==at 0x61A0FE6: epoll_pwait (in /usr/lib/libc-2.26.so)
==13584==by 0x677CDDC: ??? (in /usr/lib/openmpi/libopen-pal.so.40.0.0)
==13584==by 0x6780EDA: opal_libevent2022_event_base_loop (in 
/usr/lib/openmpi/libopen-pal.so.40.0.0)
==13584==by 0x93100CE: ??? (in /usr/lib/openmpi/openmpi/mca_pmix_pmix2x.so)
==13584==by 0x5E9408B: start_thread (in /usr/lib/libpthread-2.26.so)
==13584==by 0x61A0E7E: clone (in /usr/lib/libc-2.26.so)
==13584==  Address 0x0 is not stack'd, malloc'd or (recently) free'd
==13584==
Isending event 0
==13584== Thread 1:
==13584== Invalid read of size 2
==13584==at 0x4C33B20: memmove (vg_replace_strmem.c:1258)
==13584==by 0x11A7BB: MPI_EventData* std::__copy_move::__copy_m(MPI_EventData const*, 
MPI_EventData const*, MPI_EventData*)
(stl_algobase.h:368)
==13584==by 0x11A70B: MPI_EventData* std::__copy_move_a(MPI_EventData*,
MPI_EventData*, MPI_EventData*) (stl_algobase.h:386)
==13584==by 0x11A62B: MPI_EventData* std::__copy_move_a2(MPI_EventData*,
MPI_EventData*, MPI_EventData*) (stl_algobase.h:424)
==13584==by 0x11A567: MPI_EventData* 
std::copy,
MPI_EventData*>(std::move_iterator, 
std::move_iterator, MPI_EventData*) (stl_algobase.h:456)
==13584==by 0x11A478: MPI_EventData*
std::__uninitialized_copy::__uninit_copy,
MPI_EventData*>(std::move_iterator, 
std::move_iterator, MPI_EventData*)
(stl_uninitialized.h:101)
==13584==by 

Re: [OMPI users] ERR_TRUNCATE with MPI_Pack

2018-02-14 Thread Florian Lindner
Hi Gilles,


Am 14.02.2018 um 11:46 schrieb Gilles Gouaillardet:
> Florian,
> 
> You send position=0 MPI_PACKED instead of estimatedPackSize, so it is very 
> odd you see get_count = 12
> 
> Can you please double check that part ?

https://gist.github.com/floli/310980790d5d76caac0b19a937e2a502

You mean in line 22:

 MPI_Isend(packSendBuf.data(), position, MPI_PACKED, 0, 0, MPI_COMM_WORLD, &req2);

but position was incremented (to 12, see output below) by the preceding 
MPI_Pack call.

> 
> Also, who returns MPI_ERR_TRUNCATE ? MPI_Recv ? MPI_Unpack ?

Sorry, forgot to include that crucial part:

% mpirun -n 1 ./a.out
packSize = 12, estimatedPackSize = 12
position after pack = 12
packSize from get_count = 12
[asaru:30337] *** An error occurred in MPI_Unpack
[asaru:30337] *** reported by process [4237492225,0]
[asaru:30337] *** on communicator MPI_COMM_WORLD
[asaru:30337] *** MPI_ERR_TRUNCATE: message truncated
[asaru:30337] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now 
abort,
[asaru:30337] ***and potentially your MPI job)

Best Thanks,
Florian

> 
> 
> Cheers,
> 
> Gilles
> 
> Florian Lindner <mailingli...@xgm.de> wrote:
>> Hello,
>>
>> I have this example code:
>>
>> #include <mpi.h>
>> #include <iostream>
>> #include <vector>
>>
>> int main(int argc, char *argv[])
>> {
>>  MPI_Init(&argc, &argv);
>>  {
>>MPI_Request req1, req2;
>>std::vector<int> vec = {1, 2, 3};
>>int packSize = sizeof(int) * vec.size();
>>int position = 0;
>>std::vector<char> packSendBuf(packSize);
>>int vecSize = vec.size();
>>MPI_Pack(vec.data(), vec.size(), MPI_INT, packSendBuf.data(), packSize,
>> &position, MPI_COMM_WORLD);
>>
>>int estimatedPackSize = 0;
>>MPI_Pack_size(vec.size(), MPI_INT, MPI_COMM_WORLD, &estimatedPackSize);
>>std::cout << "packSize = " << packSize << ", estimatedPackSize = " << 
>> estimatedPackSize << std::endl;
>>
>>MPI_Isend(&vecSize, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req1);
>>MPI_Isend(packSendBuf.data(), position, MPI_PACKED, 0, 0, MPI_COMM_WORLD,
>> &req2);
>>  }
>>  {
>>int vecSize, msgSize;
>>int packSize = 0, position = 0;
>>
>>MPI_Recv(&vecSize, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD,
>> MPI_STATUS_IGNORE);
>>
>>MPI_Status status;
>>MPI_Probe(0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
>>MPI_Get_count(&status, MPI_PACKED, &packSize);
>>char packBuffer[packSize];
>>std::cout << "packSize from get_count = " << packSize << std::endl;
>>
>>std::vector<int> vec(vecSize);
>>MPI_Recv(packBuffer, packSize, MPI_PACKED, 0, MPI_ANY_TAG, 
>> MPI_COMM_WORLD, MPI_STATUS_IGNORE);
>>MPI_Unpack(packBuffer, vecSize, &position, vec.data(), vecSize, MPI_INT,
>> MPI_COMM_WORLD);
>>  }
>>  MPI_Finalize();
>> }
>>
>>
>> Which gives an MPI_ERR_TRUNCATE even when running on 1 rank only. Background
>> is that I want to send multiple differently
>> sized objects, also with more complex types that do not map to MPI_* types, for
>> which I plan to use MPI_BYTE. I plan to pack
>> them into one stream and unpack them one after the other.
>>
>> I suspect I got something wrong with the sizes. The lines
>>
>>int estimatedPackSize = 0;
>>MPI_Pack_size(vec.size(), MPI_INT, MPI_COMM_WORLD, &estimatedPackSize);
>>std::cout << "packSize = " << packSize << ", estimatedPackSize = " << 
>> estimatedPackSize << std::endl;
>>
>> Return the same number, that is 12; the packSize from get_count is also 12.
>>
>> Could you give a hint what the problem is here?
>>
>> OpenMPI 3.0.0 @ Arch or OpenMPI 1.1.0.2 @ Ubuntu 16.04
>>
>> Thanks,
>> Florian
>>
>>


[OMPI users] ERR_TRUNCATE with MPI_Pack

2018-02-14 Thread Florian Lindner
Hello,

I have this example code:

#include <mpi.h>
#include <iostream>
#include <vector>

int main(int argc, char *argv[])
{
  MPI_Init(&argc, &argv);
  {
MPI_Request req1, req2;
std::vector<int> vec = {1, 2, 3};
int packSize = sizeof(int) * vec.size();
int position = 0;
std::vector<char> packSendBuf(packSize);
int vecSize = vec.size();
MPI_Pack(vec.data(), vec.size(), MPI_INT, packSendBuf.data(), packSize,
&position, MPI_COMM_WORLD);

int estimatedPackSize = 0;
MPI_Pack_size(vec.size(), MPI_INT, MPI_COMM_WORLD, &estimatedPackSize);
std::cout << "packSize = " << packSize << ", estimatedPackSize = " << 
estimatedPackSize << std::endl;

MPI_Isend(&vecSize, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req1);
MPI_Isend(packSendBuf.data(), position, MPI_PACKED, 0, 0, MPI_COMM_WORLD,
&req2);
  }
  {
int vecSize, msgSize;
int packSize = 0, position = 0;

MPI_Recv(&vecSize, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD,
MPI_STATUS_IGNORE);

MPI_Status status;
MPI_Probe(0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
MPI_Get_count(&status, MPI_PACKED, &packSize);
char packBuffer[packSize];
std::cout << "packSize from get_count = " << packSize << std::endl;

std::vector<int> vec(vecSize);
MPI_Recv(packBuffer, packSize, MPI_PACKED, 0, MPI_ANY_TAG, MPI_COMM_WORLD, 
MPI_STATUS_IGNORE);
MPI_Unpack(packBuffer, vecSize, &position, vec.data(), vecSize, MPI_INT,
MPI_COMM_WORLD);
  }
  MPI_Finalize();
}


Which gives an MPI_ERR_TRUNCATE even when running on 1 rank only. Background is
that I want to send multiple differently
sized objects, also with more complex types that do not map to MPI_* types, for which
I plan to use MPI_BYTE. I plan to pack
them into one stream and unpack them one after the other.

I suspect I got something wrong with the sizes. The lines

int estimatedPackSize = 0;
MPI_Pack_size(vec.size(), MPI_INT, MPI_COMM_WORLD, &estimatedPackSize);
std::cout << "packSize = " << packSize << ", estimatedPackSize = " << 
estimatedPackSize << std::endl;

Return the same number, that is 12; the packSize from get_count is also 12.

Could you give a hint what the problem is here?

OpenMPI 3.0.0 @ Arch or OpenMPI 1.1.0.2 @ Ubuntu 16.04
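Archive note (a hedged observation, not a confirmed resolution from this thread): the
second argument of MPI_Unpack is the size of the packed input buffer in bytes, while the
code above passes vecSize (the element count, 3) there, which is smaller than the 12
packed bytes. A sketch of the call with the buffer size instead:

  int position = 0;
  MPI_Unpack(packBuffer, packSize, &position, vec.data(), vecSize, MPI_INT,
             MPI_COMM_WORLD);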

Thanks,
Florian




[OMPI users] Custom datatype with variable length array

2018-01-14 Thread Florian Lindner
Hello,

I have a custom datatype MPI_EVENTDATA (created with MPI_Type_create_struct)
which is a struct with some fixed-size fields and a variable-sized array of
ints (data). I want to collect a variable number of these Events from
all ranks at rank 0. My current version is working for a fixed-size custom
datatype:

void collect()
{
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  
  size_t globalSize = events.size();
  // Get total number of Events that are to be received
  MPI_Allreduce(MPI_IN_PLACE, &globalSize, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
  std::vector<MPI_Request> requests(globalSize);
  std::vector<MPI_EventData> recvEvents(globalSize);

  if (rank == 0) {
for (size_t i = 0; i < globalSize; i++) {
  MPI_Irecv(&recvEvents[i], 1, MPI_EVENTDATA, MPI_ANY_SOURCE, MPI_ANY_TAG,
MPI_COMM_WORLD, &requests[i]);
}
  }
  for (const auto & ev : events) {
MPI_EventData eventdata;
assert(ev.first.size() < 255);
strcpy(eventdata.name, ev.first.c_str());
eventdata.rank = rank;
eventdata.dataSize = ev.second.data.size();
MPI_Send(&eventdata, 1, MPI_EVENTDATA, 0, 0, MPI_COMM_WORLD);
  }
  if (rank == 0) {
MPI_Waitall(globalSize, requests.data(), MPI_STATUSES_IGNORE);
for (const auto & evdata : recvEvents) {
  // Save in a std::multimap with evdata.name as key
  globalEvents.emplace(std::piecewise_construct, std::forward_as_tuple(evdata.name),
                       std::forward_as_tuple(evdata.name, evdata.rank));
}

  }
}

Obviously, the next step would be to allocate a buffer of size evdata.dataSize,
receive it, add it to the globalEvents multimap, and be happy. Questions I
have:

* How do I correlate the Events received in the first step with the data vectors
received in the second step?
* Is there a way to use a variable-sized component inside a custom MPI datatype?
* Or dump the custom datatype and use MPI_Pack instead?
* Or somehow group two succeeding messages together (see the sketch below)?

I'm open to any good and elegant suggestions!
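Archive note, a possible pattern for the last option (a sketch only; blocking sends,
tag 0 as in the code above; evdata and ev are reused from the snippet, the rest is
illustrative): send the fixed-size header first and the int array right after it, and
on rank 0 receive the payload with the source taken from the header's status, relying
on MPI's per-sender message ordering to keep header and payload paired:

  // sender: header first, then the variable-length payload
  MPI_Send(&eventdata, 1, MPI_EVENTDATA, 0, 0, MPI_COMM_WORLD);
  MPI_Send(ev.second.data.data(), eventdata.dataSize, MPI_INT, 0, 0, MPI_COMM_WORLD);

  // receiver (rank 0): match the payload to its header via status.MPI_SOURCE
  MPI_Status status;
  MPI_EventData evdata;
  MPI_Recv(&evdata, 1, MPI_EVENTDATA, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
  std::vector<int> data(evdata.dataSize);
  MPI_Recv(data.data(), evdata.dataSize, MPI_INT, status.MPI_SOURCE, 0,
           MPI_COMM_WORLD, MPI_STATUS_IGNORE);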

Thanks,
Florian







Re: [OMPI users] Setting mpirun default parameters in a file

2018-01-11 Thread Florian Lindner

Am 10.01.2018 um 16:55 schrieb Jeff Squyres (jsquyres):
> See https://www.open-mpi.org/faq/?category=tuning#setting-mca-params for a 
> little more info on how to set MCA params.

Thanks!

> In terms of physical vs. logical -- are you talking about hyperthreading?

Yes.

> If so, Open MPI uses the number of *cores* (by default), because that's what 
> "most" HPC users want (I put "most" in quotes because this can quickly turn 
> into a religious debate -- the "cores vs. hyperthreads" discussion has come 
> up on this list a few times over the years, and "most" HPC-related workloads 
> still tend to benefit from using a whole core vs. a hyperthread).

Not wanting to open that debate again ;-). I was just surprised because most 
applications take the logical
(hyperthreading) number of CPUs.

Best,
Florian

> 
> Regardless, you can have Open MPI use hyperthreads by default (instead of 
> cores) with the mpirun option --use-hwthread-cpus.
> 
> 
> 
>> On Jan 10, 2018, at 10:48 AM, r...@open-mpi.org wrote:
>>
>> Set the MCA param “rmaps_base_oversubscribe=1” in your default MCA param 
>> file, or in your environment
>>
>>> On Jan 10, 2018, at 4:42 AM, Florian Lindner <mailingli...@xgm.de> wrote:
>>>
>>> Hello,
>>>
>>> a recent openmpi update on my Arch machine seems to have enabled 
>>> --nooversubscribe, as described in the manpage. Since I
>>> regularly test on my laptop with just 2 physical cores, I want to set 
>>> --oversubscribe by default.
>>>
>>> How can I do that?
>>>
>>> I am also a bit surprised, that openmpi takes the physical number of cores 
>>> into account, not the logical (which is 4 on
>>> my machine).
>>>
>>> Thanks,
>>> Florian

[OMPI users] Setting mpirun default parameters in a file

2018-01-10 Thread Florian Lindner
Hello,

a recent openmpi update on my Arch machine seems to have enabled 
--nooversubscribe, as described in the manpage. Since I
regularly test on my laptop with just 2 physical cores, I want to set 
--oversubscribe by default.

How can I do that?

I am also a bit surprised that Open MPI takes the physical number of cores into
account, not the logical one (which is 4 on
my machine).

Thanks,
Florian


Re: [OMPI users] Can't connect using MPI Ports

2017-11-09 Thread Florian Lindner
>> The MPI Ports functionality (chapter 10.4 of MPI 3.1), mainly consisting of
>> MPI_Open_port, MPI_Comm_accept and
>> MPI_Comm_connect, is not usable without running an ompi-server as a third
>> process?
> 
> Yes, that’s correct. The reason for moving in that direction is that the 
> resource managers, as they continue to
> integrate PMIx into them, are going to be providing that third party. This 
> will make connect/accept much easier to use,
> and a great deal more scalable.
> 
> See https://github.com/pmix/RFCs/blob/master/RFC0003.md for an explanation.


Ok, thanks for that input. I haven't heard of pmix so far (only as part of some 
ompi error messages).

Using ompi-server -d -r 'ompi.connect' I was able to publish and retrieve the 
port name, however, still no connection
could be established.

% mpirun -n 1 --ompi-server "file:ompi.connect" ./a.out A
Published port 3044605953.0:664448538

% mpirun -n 1 --ompi-server "file:ompi.connect" ./a.out B
Looked up port 3044605953.0:664448538


at this point, both processes hang.

The code is:

#include <mpi.h>
#include <cstdio>
#include <string>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
  std::string a(argv[1]);
  char p[MPI_MAX_PORT_NAME];
  MPI_Comm icomm;

  if (a == "A") {
MPI_Open_port(MPI_INFO_NULL, p);
MPI_Publish_name("foobar", MPI_INFO_NULL, p);
printf("Published port %s\n", p);
MPI_Comm_accept(p, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &icomm);
  }
  if (a == "B") {
MPI_Lookup_name("foobar", MPI_INFO_NULL, p);
printf("Looked up port %s\n", p);
MPI_Comm_connect(p, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &icomm);
  }

  MPI_Finalize();

  return 0;
}



Do you have any idea?

Best,
Florian

Re: [OMPI users] Can't connect using MPI Ports

2017-11-06 Thread Florian Lindner
Am 05.11.2017 um 20:57 schrieb r...@open-mpi.org:
> 
>> On Nov 5, 2017, at 6:48 AM, Florian Lindner <mailingli...@xgm.de 
>> <mailto:mailingli...@xgm.de>> wrote:
>>
>> Am 04.11.2017 um 00:05 schrieb r...@open-mpi.org <mailto:r...@open-mpi.org>:
>>> Yeah, there isn’t any way that is going to work in the 2.x series. I’m not 
>>> sure it was ever fixed, but you might try
>>> the latest 3.0, the 3.1rc, and even master.
>>>
>>> The only methods that are known to work are:
>>>
>>> * connecting processes within the same mpirun - e.g., using comm_spawn
>>
>> That is not an option for our application.
>>
>>> * connecting processes across different mpiruns, with the ompi-server 
>>> daemon as the rendezvous point
>>>
>>> The old command line method (i.e., what you are trying to use) hasn’t been 
>>> much on the radar. I don’t know if someone
>>> else has picked it up or not...
>>
>> What do you mean with "the old command line method”.
>>
>> Isn't the ompi-server just another means of exchanging port names, i.e. the 
>> same I do using files?
> 
> No, it isn’t - there is a handshake that ompi-server facilitates.
> 
>>
>> In my understanding, using Publish_name and Lookup_name or exchanging the 
>> information using files (or command line or
>> stdin) shouldn't have any
>> impact on the connection (Connect / Accept) itself.
> 
> Depends on the implementation underneath connect/accept.
> 
> The initial MPI standard authors had fixed in their minds that the 
> connect/accept handshake would take place over a TCP
> socket, and so no intermediate rendezvous broker was involved. That isn’t how 
> we’ve chosen to implement it this time
> around, and so you do need the intermediary. If/when some developer wants to 
> add another method, they are welcome to do
> so - but the general opinion was that the broker requirement was fine.

Ok. Just to make sure I understood correctly:

The MPI Ports functionality (chapter 10.4 of MPI 3.1), mainly consisting of
MPI_Open_port, MPI_Comm_accept and
MPI_Comm_connect, is not usable without running an ompi-server as a third
process?

Thanks again,
Florian

Re: [OMPI users] Can't connect using MPI Ports

2017-11-05 Thread Florian Lindner
Am 04.11.2017 um 00:05 schrieb r...@open-mpi.org:
> Yeah, there isn’t any way that is going to work in the 2.x series. I’m not 
> sure it was ever fixed, but you might try the latest 3.0, the 3.1rc, and even 
> master.
> 
> The only methods that are known to work are:
> 
> * connecting processes within the same mpirun - e.g., using comm_spawn

That is not an option for our application.

> * connecting processes across different mpiruns, with the ompi-server daemon 
> as the rendezvous point
> 
> The old command line method (i.e., what you are trying to use) hasn’t been 
> much on the radar. I don’t know if someone else has picked it up or not...

What do you mean by "the old command line method"?

Isn't the ompi-server just another means of exchanging port names, i.e., the
same as what I do using files?

In my understanding, using Publish_name and Lookup_name or exchanging the 
information using files (or command line or stdin) shouldn't have any
impact on the connection (Connect / Accept) itself.

Best,
Florian


> Ralph
> 
>> On Nov 3, 2017, at 11:23 AM, Florian Lindner <mailingli...@xgm.de> wrote:
>>
>>
>> Am 03.11.2017 um 16:18 schrieb r...@open-mpi.org:
>>> What version of OMPI are you using?
>>
>> 2.1.1 @ Arch Linux.
>>
>> Best,
>> Florian

Re: [OMPI users] Can't connect using MPI Ports

2017-11-03 Thread Florian Lindner

Am 03.11.2017 um 16:18 schrieb r...@open-mpi.org:
> What version of OMPI are you using?

2.1.1 @ Arch Linux.

Best,
Florian


[OMPI users] Can't connect using MPI Ports

2017-11-03 Thread Florian Lindner
Hello,

I'm working on a sample program to connect two MPI jobs, each launched with its own
mpirun, using Ports.

Firstly, I use MPI_Open_port to obtain a name and write that to a file:

  if (options.participant == A) { // A publishes the port
if (options.commType == single and rank == 0)
  openPublishPort(options);

if (options.commType == many)
  openPublishPort(options);
  }
  MPI_Barrier(MPI_COMM_WORLD);

participant is a command line argument and defines the role of A as server. B 
is the client.

void openPublishPort(Options options)
{
  using namespace boost::filesystem;
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  char p[MPI_MAX_PORT_NAME];
  MPI_Open_port(MPI_INFO_NULL, p);
  std::string portName(p);

  create_directory(options.publishDirectory);
  std::string filename;
  if (options.commType == many)
filename = "A-" + std::to_string(rank) + ".address";
  if (options.commType == single)
filename = "intercomm.address";

  auto path = options.publishDirectory / filename;
  DEBUG << "Writing address " << portName << " to " << path;
  std::ofstream ofs(path.string(), std::ofstream::out);
  ofs << portName;
}

This works fine as far as I see. Next, I try to connect:

  MPI_Comm icomm;
  std::string portName;
  if (options.participant == A) { // receives connections
if (options.commType == single) {
  if (rank == 0)
portName = readPort(options);
  INFO << "Accepting connection on " << portName;
  MPI_Comm_accept(portName.c_str(), MPI_INFO_NULL, 0, MPI_COMM_WORLD, &icomm);
  INFO << "Received connection";
}
  }

  if (options.participant == B) { // connects to the intercomms
if (options.commType == single) {
  if (rank == 0)
portName = readPort(options);
  INFO << "Trying to connect to " << portName;
  MPI_Comm_connect(portName.c_str(), MPI_INFO_NULL, 0, MPI_COMM_WORLD, &icomm);
  INFO << "Connected";
}
  }


options.single says that I want to use a single communicator that contains all 
ranks on both participants, A and B.
readPort reads the port name from the file that was written before.

Now, when I first launch A and, in another terminal, B, nothing happens until a 
timeout occurs.

% mpirun -n 1 ./mpiports --commType="single" --participant="A"
[2017-11-03 15:29:55.469891] [debug]   Writing address 3048013825.0:1069313090 
to "./publish/intercomm.address"
[2017-11-03 15:29:55.470169] [debug]   Read address 3048013825.0:1069313090 
from "./publish/intercomm.address"
[2017-11-03 15:29:55.470185] [info]Accepting connection on 
3048013825.0:1069313090
[asaru:16199] OPAL ERROR: Timeout in file base/pmix_base_fns.c at line 195
[...]

and on the other site:

% mpirun -n 1 ./mpiports --commType="single" --participant="B"
[2017-11-03 15:29:59.698921] [debug]   Read address 3048013825.0:1069313090 
from "./publish/intercomm.address"
[2017-11-03 15:29:59.698947] [info]Trying to connect to 
3048013825.0:1069313090
[asaru:16238] OPAL ERROR: Timeout in file base/pmix_base_fns.c at line 195
[...]

The complete code, including cmake build script can be downloaded at:

https://www.dropbox.com/s/azo5ti4kjg12zjy/MPI_Ports.tar.gz?dl=0

Why is the connection not working?

Thanks a lot,
Florian




Re: [OMPI users] Sending string causes memory errors

2016-03-03 Thread Florian Lindner
Hey,

Am Donnerstag, 3. März 2016, 17:50:59 CET schrieb Gilles Gouaillardet:
> Florian,
> 
> which distro are you running on ?

I'm running Arch.

> if you are not using stock gcc and valgrind, can you tell which version you
> are running ?
> last but not least, how did you configure openmpi ?

openmpi 1.10.2
https://www.archlinux.org/packages/extra/x86_64/openmpi/
https://projects.archlinux.org/svntogit/packages.git/tree/trunk/PKGBUILD?h=packages/openmpi
   ./configure --prefix=/usr \
   --sysconfdir=/etc/${pkgname} \
   --enable-mpi-fortran=all \
   --libdir=/usr/lib/${pkgname} \
   --with-threads=posix \
   --enable-smp-locks \
   --with-valgrind \
   --enable-memchecker \
   --enable-pretty-print-stacktrace \
   --without-slurm \
   --with-hwloc=/usr \
   --with-libltdl=/usr  \
   FC=/usr/bin/gfortran \
   LDFLAGS="$LDFLAGS -Wl,-z,noexecstack"

There is also a patch applied:
https://projects.archlinux.org/svntogit/packages.git/tree/trunk/system_ltdl.patch?h=packages/openmpi

valgrind 3.11.0
https://www.archlinux.org/packages/extra/x86_64/valgrind/
https://projects.archlinux.org/svntogit/packages.git/tree/trunk/PKGBUILD?h=packages/valgrind
./configure --prefix=/usr --mandir=/usr/share/man --with-mpicc=mpic

gcc 5.3.0
https://www.archlinux.org/packages/core/x86_64/gcc/
https://projects.archlinux.org/svntogit/packages.git/tree/trunk/PKGBUILD?h=packages/gcc
  ${srcdir}/${_basedir}/configure --prefix=/usr \
  --libdir=/usr/lib --libexecdir=/usr/lib \
  --mandir=/usr/share/man --infodir=/usr/share/info \
  --with-bugurl=https://bugs.archlinux.org/ \
  --enable-languages=c,c++,ada,fortran,go,lto,objc,obj-c++ \
  --enable-shared --enable-threads=posix --enable-libmpx \
  --with-system-zlib --with-isl --enable-__cxa_atexit \
  --disable-libunwind-exceptions --enable-clocale=gnu \
  --disable-libstdcxx-pch --disable-libssp \
  --enable-gnu-unique-object --enable-linker-build-id \
  --enable-lto --enable-plugin --enable-install-libiberty \
  --with-linker-hash-style=gnu --enable-gnu-indirect-function \
  --disable-multilib --disable-werror \
  --enable-checking=release

  make

The PKGBUILD files I linked to contain the build recipe.

Best and thanks!
Florian


> 
> Cheers,
> 
> Gilles
> 
> On Thursday, March 3, 2016, Florian Lindner <mailingli...@xgm.de> wrote:
> 
> > I am still getting errors, even with your script.
> >
> > I will also try to modified build of openmpi that Jeff suggested.
> >
> > Best,
> > Florian
> >
> > % mpicxx -std=c++11 -g -O0 -Wall -Wextra -fno-builtin-strlen
> > mpi_gilles.cpp && mpirun -n 2 ./a.out
> > Stringlength = 64
> > 123456789012345678901234567890123456789012345678901234567890123
> >
> > % LD_PRELOAD=/usr/lib/valgrind/libmpiwrap-amd64-linux.so mpirun -n 2
> > valgrind ./a.out
> > ==5324== Memcheck, a memory error detector
> > ==5324== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
> > ==5324== Using Valgrind-3.12.0.SVN and LibVEX; rerun with -h for copyright
> > info
> > ==5324== Command: ./a.out
> > ==5324==
> > ==5325== Memcheck, a memory error detector
> > ==5325== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
> > ==5325== Using Valgrind-3.12.0.SVN and LibVEX; rerun with -h for copyright
> > info
> > ==5325== Command: ./a.out
> > ==5325==
> > valgrind MPI wrappers  5324: Active for pid 5324
> > valgrind MPI wrappers  5324: Try MPIWRAP_DEBUG=help for possible options
> > valgrind MPI wrappers  5325: Active for pid 5325
> > valgrind MPI wrappers  5325: Try MPIWRAP_DEBUG=help for possible options
> > Stringlength = 64
> > ==5325== Invalid read of size 1
> > ==5325==at 0x4C2D992: strlen (in
> > /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
> > ==5325==by 0x56852D8: length (char_traits.h:267)
> > ==5325==by 0x56852D8: std::basic_ostream<char, std::char_traits
> > >& std::operator<< <std::char_traits >(std::basic_ostream<char,
> > std::char_traits >&, char const*) (ostream:562)
> > ==5325==by 0x408A45: receive() (mpi_gilles.cpp:22)
> > ==5325==by 0x408B88: main (mpi_gilles.cpp:44)
> > ==5325==  Address 0xffefff800 is on thread 1's stack
> > ==5325==  in frame #2, created by receive() (mpi_gilles.cpp:8)
> > ==5325==
> > ==5325== Invalid read of size 1
> > ==5325==at 0x4C2D9A4: strlen (in
> > /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
> > ==5325==by 0x56852D8: length (char_traits.h:267)
> > ==5325==by 0x568

Re: [OMPI users] Sending string causes memory errors

2016-03-03 Thread Florian Lindner
Am Mittwoch, 2. März 2016, 16:23:02 CET schrieb Jeff Squyres (jsquyres):
> There's a bunch of places in OMPI where we don't initialize memory because we 
> know it doesn't matter (e.g., in padding between unaligned struct members), 
> but then that memory is accessed when writing the entire struct down a file 
> descriptor or memcpy'ed elsewhere in memory...etc.  It gets even worse with 
> OS-bypass networks, because valgrind doesn't see the origination of various 
> buffers, and therefore thinks they're uninitialized (but they *are* actually 
> initialized).  
> 
> If you want to remove spurious valgrind warnings, build Open MPI with the 
> --enable-memchecker configure option.  There's a (slight) performance 
> penalty, which is why it is not the default.

Hey Jeff,

I am using the arch build of openmpi 
(https://www.archlinux.org/packages/extra/x86_64/openmpi/) and it's already 
built with enable-memchecker:

   ./autogen.pl
   ./configure --prefix=/usr \
   --sysconfdir=/etc/${pkgname} \
   --enable-mpi-fortran=all \
   --libdir=/usr/lib/${pkgname} \
   --with-threads=posix \
   --enable-smp-locks \
   --with-valgrind \
   --enable-memchecker \
   --enable-pretty-print-stacktrace \
   --without-slurm \
   --with-hwloc=/usr \
   --with-libltdl=/usr  \
   FC=/usr/bin/gfortran \
   LDFLAGS="$LDFLAGS -Wl,-z,noexecstack"

   make

see 
https://projects.archlinux.org/svntogit/packages.git/tree/trunk/PKGBUILD?h=packages/openmpi

Any more ideas?

Best,
Florian

> 
> 
> > On Mar 2, 2016, at 9:51 AM, Florian Lindner <mailingli...@xgm.de> wrote:
> > 
> > Hello Gilles,
> > 
> > Am Mittwoch, 2. März 2016, 23:36:56 CET schrieb Gilles Gouaillardet:
> >> Florian,
> >> 
> >> under the hood, strlen() can use vector instructions, and then read memory
> >> above the end of the string. valgrind is extremely picky and does warn
> >> about that.
> >> iirc, there are some filter options not to issue these warnings, but I
> >> forgot the details.
> > 
> > Ok, i'll try to research in that direction.
> > 
> >> 
> >> can you try to send "Bonjour" instead of "Halo" and see if the warning
> >> disappear ?
> > 
> > They are still there. But, was this meant as a joke or didn't I understand?
> > 
> > Best,
> > Florian
> > 
> >> Cheers,
> >> 
> >> Gilles
> >> 
> >> PS if it works, do not jump to the erroneous conclusion valgrind likes
> >> French and dislikes German ;-)
> >> 
> >> On Wednesday, March 2, 2016, Florian Lindner <mailingli...@xgm.de> wrote:
> >> 
> >>> Hello,
> >>> 
> >>> using OpenMPI 1.10.2 and valgrind 3.11.0 I try to use the code below to
> >>> send a c++ string.
> >>> 
> >>> It works fine, but running through valgrind gives a lot of memory errors,
> >>> invalid read of size...
> >>> 
> >>> What is going wrong there?
> >>> 
> >>> Valgrind output, see below.
> >>> 
> >>> Thanks!
> >>> Florian
> >>> 
> >>> 
> >>> // Compile with: mpicxx -std=c++11 -g -O0 -Wall -Wextra mpi.cpp
> >>> #include <mpi.h>
> >>> #include <iostream>
> >>> #include <string>
> >>> 
> >>> using namespace std;
> >>> 
> >>> 
> >>> void receive() {
> >>>  int length = 0;
> >>>  MPI_Status status;
> >>>  MPI_Probe(MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
> >>>  MPI_Get_count(&status, MPI_CHAR, &length);
> >>>  cout << "Stringlength = " << length << endl;
> >>>  char cstr[length];
> >>>  MPI_Recv(cstr,
> >>>   length,
> >>>   MPI_CHAR,
> >>>   MPI_ANY_SOURCE,
> >>>   MPI_ANY_TAG,
> >>>   MPI_COMM_WORLD,
> >>>   MPI_STATUS_IGNORE);
> >>>  cout << cstr << endl;
> >>> }
> >>> 
> >>> void send(int rankReceiver) {
> >>>  std::string s = "Hallo";
> >>>  MPI_Send(s.c_str(),
> >>>   s.size()+1,
> >>>   MPI_CHAR,
> >>>   rankReceiver,
> >>>   0,
> >>>   MPI_COMM_WORLD);
> >>> }
> >>> 
> >>> int main(int argc, char* argv[])
> >>> {
> >>>  int ra

Re: [OMPI users] Sending string causes memory errors

2016-03-03 Thread Florian Lindner
use at exit: 96,351 bytes in 247 blocks
==5325==   total heap usage: 15,007 allocs, 14,760 frees, 13,362,050 bytes 
allocated
==5325== 
==5325== LEAK SUMMARY:
==5325==definitely lost: 9,154 bytes in 39 blocks
==5325==indirectly lost: 4,008 bytes in 22 blocks
==5325==  possibly lost: 0 bytes in 0 blocks
==5325==still reachable: 83,189 bytes in 186 blocks
==5325== suppressed: 0 bytes in 0 blocks
==5325== Rerun with --leak-check=full to see details of leaked memory
==5325== 
==5325== For counts of detected and suppressed errors, rerun with: -v
==5325== ERROR SUMMARY: 138 errors from 9 contexts (suppressed: 0 from 0)
==5324== 
==5324== HEAP SUMMARY:
==5324== in use at exit: 96,351 bytes in 247 blocks
==5324==   total heap usage: 15,028 allocs, 14,781 frees, 13,370,286 bytes 
allocated
==5324== 
==5324== LEAK SUMMARY:
==5324==definitely lost: 9,154 bytes in 39 blocks
==5324==indirectly lost: 4,008 bytes in 22 blocks
==5324==  possibly lost: 0 bytes in 0 blocks
==5324==still reachable: 83,189 bytes in 186 blocks
==5324== suppressed: 0 bytes in 0 blocks
==5324== Rerun with --leak-check=full to see details of leaked memory
==5324== 
==5324== For counts of detected and suppressed errors, rerun with: -v
==5324== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)


Am Donnerstag, 3. März 2016, 14:53:24 CET schrieb Gilles Gouaillardet:
> I was unable to reproduce this in my environment.
> 
> here is a slightly modified version of your test program.
> buffers are 64 bytes aligned and the string (including the null 
> terminator) is 64 bytes long,
> hopefully, strlen will not complain any more.
> 
> Cheers,
> 
> Gilles
> 
> On 3/3/2016 12:51 AM, Florian Lindner wrote:
> > Hello Gilles,
> >
> > Am Mittwoch, 2. März 2016, 23:36:56 CET schrieb Gilles Gouaillardet:
> >> Florian,
> >>
> >> under the hood, strlen() can use vector instructions, and then read memory
> >> above the end of the string. valgrind is extremely picky and does warn
> >> about that.
> >> iirc, there are some filter options not to issue these warnings, but I
> >> forgot the details.
> > Ok, i'll try to research in that direction.
> >
> >> can you try to send "Bonjour" instead of "Halo" and see if the warning
> >> disappear ?
> > They are still there. But, was this meant as a joke or didn't I understand?
> >
> > Best,
> > Florian
> >   
> >> Cheers,
> >>
> >> Gilles
> >>
> >> PS if it works, do not jump to the erroneous conclusion valgrind likes
> >> French and dislikes German ;-)
> >>
> >> On Wednesday, March 2, 2016, Florian Lindner <mailingli...@xgm.de> wrote:
> >>
> >>> Hello,
> >>>
> >>> using OpenMPI 1.10.2 and valgrind 3.11.0 I try to use the code below to
> >>> send a c++ string.
> >>>
> >>> It works fine, but running through valgrind gives a lot of memory errors,
> >>> invalid read of size...
> >>>
> >>> What is going wrong there?
> >>>
> >>> Valgrind output, see below.
> >>>
> >>> Thanks!
> >>> Florian
> >>>
> >>>
> >>> // Compile with: mpicxx -std=c++11 -g -O0 -Wall -Wextra mpi.cpp
> >>> #include <mpi.h>
> >>> #include <iostream>
> >>> #include <string>
> >>>
> >>> using namespace std;
> >>>
> >>>
> >>> void receive() {
> >>>int length = 0;
> >>>MPI_Status status;
> >>>MPI_Probe(MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
> >>>MPI_Get_count(&status, MPI_CHAR, &length);
> >>>cout << "Stringlength = " << length << endl;
> >>>char cstr[length];
> >>>MPI_Recv(cstr,
> >>> length,
> >>> MPI_CHAR,
> >>> MPI_ANY_SOURCE,
> >>> MPI_ANY_TAG,
> >>> MPI_COMM_WORLD,
> >>> MPI_STATUS_IGNORE);
> >>>cout << cstr << endl;
> >>> }
> >>>
> >>> void send(int rankReceiver) {
> >>>std::string s = "Hallo";
> >>>MPI_Send(s.c_str(),
> >>> s.size()+1,
> >>> MPI_CHAR,
> >>> rankReceiver,
> >>> 0,
> >>> MPI_COMM_WORLD);
> >>> }
> >>>
> >>> int main(int argc, char* argv[])
> >>> {
> >>>int rank;
> >>>MPI_Init

Re: [OMPI users] Sending string causes memory errors

2016-03-02 Thread Florian Lindner
Hello Gilles,

Am Mittwoch, 2. März 2016, 23:36:56 CET schrieb Gilles Gouaillardet:
> Florian,
> 
> under the hood, strlen() can use vector instructions, and then read memory
> above the end of the string. valgrind is extremely picky and does warn
> about that.
> iirc, there are some filter options not to issue these warnings, but I
> forgot the details.

Ok, i'll try to research in that direction.

> 
> can you try to send "Bonjour" instead of "Halo" and see if the warning
> disappear ?

They are still there. But, was this meant as a joke or didn't I understand?

Best,
Florian
 
> Cheers,
> 
> Gilles
> 
> PS if it works, do not jump to the erroneous conclusion valgrind likes
> French and dislikes German ;-)
> 
> On Wednesday, March 2, 2016, Florian Lindner <mailingli...@xgm.de> wrote:
> 
> > Hello,
> >
> > using OpenMPI 1.10.2 and valgrind 3.11.0 I try to use the code below to
> > send a c++ string.
> >
> > It works fine, but running through valgrind gives a lot of memory errors,
> > invalid read of size...
> >
> > What is going wrong there?
> >
> > Valgrind output, see below.
> >
> > Thanks!
> > Florian
> >
> >
> > // Compile with: mpicxx -std=c++11 -g -O0 -Wall -Wextra mpi.cpp
> > #include <mpi.h>
> > #include <iostream>
> > #include <string>
> >
> > using namespace std;
> >
> >
> > void receive() {
> >   int length = 0;
> >   MPI_Status status;
> >   MPI_Probe(MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
> >   MPI_Get_count(&status, MPI_CHAR, &length);
> >   cout << "Stringlength = " << length << endl;
> >   char cstr[length];
> >   MPI_Recv(cstr,
> >length,
> >MPI_CHAR,
> >MPI_ANY_SOURCE,
> >MPI_ANY_TAG,
> >MPI_COMM_WORLD,
> >MPI_STATUS_IGNORE);
> >   cout << cstr << endl;
> > }
> >
> > void send(int rankReceiver) {
> >   std::string s = "Hallo";
> >   MPI_Send(s.c_str(),
> >s.size()+1,
> >MPI_CHAR,
> >rankReceiver,
> >0,
> >MPI_COMM_WORLD);
> > }
> >
> > int main(int argc, char* argv[])
> > {
> >   int rank;
> >   MPI_Init(&argc, &argv);
> >
> >   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> >   if (rank == 0)
> > send(1);
> >   else {
> > receive();
> >   }
> >   MPI_Finalize();
> >   return 0;
> > }
> >
> >
> > VALGRIND OUTPUT
> >
> > % mpicxx -std=c++11 -g -O0 -Wall -Wextra mpi.cpp && mpirun -n 2 ./a.out
> > Stringlength = 6
> > Hallo
> > florian@asaru ~/scratch (git)-[master] %
> > LD_PRELOAD=/usr/lib/valgrind/libmpiwrap-amd64-linux.so mpirun -n 2 valgrind
> > ./a.out
> > ==9290== Memcheck, a memory error detector
> > ==9290== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
> > ==9290== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
> > ==9290== Command: ./a.out
> > ==9290==
> > ==9291== Memcheck, a memory error detector
> > ==9291== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
> > ==9291== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
> > ==9291== Command: ./a.out
> > ==9291==
> > valgrind MPI wrappers  9290: Active for pid 9290
> > valgrind MPI wrappers  9291: Active for pid 9291
> > valgrind MPI wrappers  9290: Try MPIWRAP_DEBUG=help for possible options
> > valgrind MPI wrappers  9291: Try MPIWRAP_DEBUG=help for possible options
> > Stringlength = 6
> > ==9291== Invalid read of size 1
> > ==9291==at 0x4C2DBA2: strlen (in
> > /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
> > ==9291==by 0x56852D8: length (char_traits.h:267)
> > ==9291==by 0x56852D8: std::basic_ostream<char, std::char_traits
> > >& std::operator<< <std::char_traits >(std::basic_ostream<char,
> > std::char_traits >&, char const*) (ostream:562)
> > ==9291==by 0x408A39: receive() (mpi.cpp:22)
> > ==9291==by 0x408B61: main (mpi.cpp:46)
> > ==9291==  Address 0xffefff870 is on thread 1's stack
> > ==9291==  in frame #2, created by receive() (mpi.cpp:8)
> > ==9291==
> > ==9291== Invalid read of size 1
> > ==9291==at 0x4C2DBB4: strlen (in
> > /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
> > ==9291==by 0x56852D8: length (char_traits.h:267)
> > ==9291==by 0x56852D8: std::basic_ostream<char, std::char_traits
> > >& std::oper

[OMPI users] Sending string causes memory errors

2016-03-02 Thread Florian Lindner
Hello,

using OpenMPI 1.10.2 and valgrind 3.11.0 I try to use the code below to
send a c++ string.

It works fine, but running through valgrind gives a lot of memory errors, 
invalid read of size...

What is going wrong there?

Valgrind output, see below.

Thanks!
Florian


// Compile with: mpicxx -std=c++11 -g -O0 -Wall -Wextra mpi.cpp
#include <mpi.h>
#include <iostream>
#include <string>

using namespace std;


void receive() {
  int length = 0;
  MPI_Status status;
  MPI_Probe(MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
  MPI_Get_count(&status, MPI_CHAR, &length);
  cout << "Stringlength = " << length << endl;
  char cstr[length];
  MPI_Recv(cstr,
   length,
   MPI_CHAR,
   MPI_ANY_SOURCE,
   MPI_ANY_TAG,
   MPI_COMM_WORLD,
   MPI_STATUS_IGNORE);
  cout << cstr << endl;
}

void send(int rankReceiver) {
  std::string s = "Hallo";
  MPI_Send(s.c_str(),
   s.size()+1,
   MPI_CHAR,
   rankReceiver,
   0,
   MPI_COMM_WORLD);
}

int main(int argc, char* argv[])
{
  int rank;
  MPI_Init(&argc, &argv);

  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  if (rank == 0)
send(1);
  else {
receive();
  }
  MPI_Finalize();
  return 0;
}
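Archive note: a receive variant that avoids the C-style variable-length array, as a
sketch only (not from the original post): size a std::string from MPI_Get_count and
receive directly into it; printing a std::string uses its stored length, so no strlen
is involved on the receive side.

  void receive() {
    int length = 0;
    MPI_Status status;
    MPI_Probe(MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
    MPI_Get_count(&status, MPI_CHAR, &length);
    std::string s(length, '\0');            // includes the sender's trailing '\0'
    MPI_Recv(&s[0], length, MPI_CHAR, status.MPI_SOURCE, status.MPI_TAG,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    cout << "Stringlength = " << length << endl;
    cout << s << endl;
  }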


VALGRIND OUTPUT

% mpicxx -std=c++11 -g -O0 -Wall -Wextra mpi.cpp && mpirun -n 2 ./a.out 
  
Stringlength = 6
Hallo
florian@asaru ~/scratch (git)-[master] % 
LD_PRELOAD=/usr/lib/valgrind/libmpiwrap-amd64-linux.so mpirun -n 2 valgrind 
./a.out
==9290== Memcheck, a memory error detector
==9290== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==9290== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==9290== Command: ./a.out
==9290== 
==9291== Memcheck, a memory error detector
==9291== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==9291== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==9291== Command: ./a.out
==9291== 
valgrind MPI wrappers  9290: Active for pid 9290
valgrind MPI wrappers  9291: Active for pid 9291
valgrind MPI wrappers  9290: Try MPIWRAP_DEBUG=help for possible options
valgrind MPI wrappers  9291: Try MPIWRAP_DEBUG=help for possible options
Stringlength = 6
==9291== Invalid read of size 1
==9291==at 0x4C2DBA2: strlen (in 
/usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9291==by 0x56852D8: length (char_traits.h:267)
==9291==by 0x56852D8: std::basic_ostream& 
std::operator<< (std::basic_ostream&, char const*) (ostream:562)
==9291==by 0x408A39: receive() (mpi.cpp:22)
==9291==by 0x408B61: main (mpi.cpp:46)
==9291==  Address 0xffefff870 is on thread 1's stack
==9291==  in frame #2, created by receive() (mpi.cpp:8)
==9291== 
==9291== Invalid read of size 1
==9291==at 0x4C2DBB4: strlen (in 
/usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9291==by 0x56852D8: length (char_traits.h:267)
==9291==by 0x56852D8: std::basic_ostream& 
std::operator<< (std::basic_ostream&, char const*) (ostream:562)
==9291==by 0x408A39: receive() (mpi.cpp:22)
==9291==by 0x408B61: main (mpi.cpp:46)
==9291==  Address 0xffefff871 is on thread 1's stack
==9291==  in frame #2, created by receive() (mpi.cpp:8)
==9291== 
==9291== Invalid read of size 1
==9291==at 0x60A0FF1: _IO_file_xsputn@@GLIBC_2.2.5 (in 
/usr/lib/libc-2.23.so)
==9291==by 0x6096D1A: fwrite (in /usr/lib/libc-2.23.so)
==9291==by 0x5684F75: sputn (streambuf:451)
==9291==by 0x5684F75: __ostream_write 
(ostream_insert.h:50)
==9291==by 0x5684F75: std::basic_ostream& 
std::__ostream_insert(std::basic_ostream&, char const*, long) (ostream_insert.h:101)
==9291==by 0x56852E6: std::basic_ostream& 
std::operator<< (std::basic_ostream&, char const*) (ostream:561)
==9291==by 0x408A39: receive() (mpi.cpp:22)
==9291==by 0x408B61: main (mpi.cpp:46)
==9291==  Address 0xffefff874 is on thread 1's stack
==9291==  in frame #4, created by receive() (mpi.cpp:8)
==9291== 
==9291== Invalid read of size 1
==9291==at 0x60A100D: _IO_file_xsputn@@GLIBC_2.2.5 (in 
/usr/lib/libc-2.23.so)
==9291==by 0x6096D1A: fwrite (in /usr/lib/libc-2.23.so)
==9291==by 0x5684F75: sputn (streambuf:451)
==9291==by 0x5684F75: __ostream_write 
(ostream_insert.h:50)
==9291==by 0x5684F75: std::basic_ostream& 
std::__ostream_insert(std::basic_ostream&, char const*, long) (ostream_insert.h:101)
==9291==by 0x56852E6: std::basic_ostream& 
std::operator<< (std::basic_ostream&, char const*) (ostream:561)
==9291==by 0x408A39: receive() (mpi.cpp:22)
==9291==by 0x408B61: main (mpi.cpp:46)