Re: [OMPI users] Collective MPI-IO + MPI_THREAD_MULTIPLE

2016-03-29 Thread Sebastian Rettenberger
Cheers, Gilles On 3/24/2016 11:30 PM, Sebastian Rettenberger wrote: Hi, I tested this on my desktop machine, thus one node, two tasks. The deadlock appears on the local file system and on the NFS mount. The MPICH version I tested was 3.2. However, as far as I know, locking is part of the MPI
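
The attached reproducer itself is not shown in the archive; the following is only a hypothetical sketch of the pattern under discussion (file names, buffer layout and the use of two pthreads are assumptions): two threads under MPI_THREAD_MULTIPLE, each doing a collective write on its own duplicated communicator and its own file.

#include <mpi.h>
#include <pthread.h>

struct job {
    MPI_Comm comm;   /* private communicator for this thread's collectives */
    char *filename;  /* private file for this thread */
};

static void *write_file(void *arg)
{
    struct job *job = arg;
    int rank;
    MPI_Comm_rank(job->comm, &rank);

    MPI_File fh;
    MPI_File_open(job->comm, job->filename,
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Collective write: every rank of this communicator participates. */
    int value = rank;
    MPI_File_write_at_all(fh, (MPI_Offset)rank * sizeof(int), &value, 1,
                          MPI_INT, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);
    return NULL;
}

int main(int argc, char *argv[])
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    /* One communicator (and one file) per thread, so the two concurrent
     * collective I/O calls never share a communicator. */
    struct job jobs[2] = { { MPI_COMM_NULL, "thread0.dat" },
                           { MPI_COMM_NULL, "thread1.dat" } };
    MPI_Comm_dup(MPI_COMM_WORLD, &jobs[0].comm);
    MPI_Comm_dup(MPI_COMM_WORLD, &jobs[1].comm);

    pthread_t threads[2];
    pthread_create(&threads[0], NULL, write_file, &jobs[0]);
    pthread_create(&threads[1], NULL, write_file, &jobs[1]);
    pthread_join(threads[0], NULL);
    pthread_join(threads[1], NULL);

    MPI_Comm_free(&jobs[0].comm);
    MPI_Comm_free(&jobs[1].comm);
    MPI_Finalize();
    return 0;
}

Whether such a program hangs then depends on how thread-safe the I/O component underneath is, which is exactly the question raised in the thread.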

Re: [OMPI users] Collective MPI-IO + MPI_THREAD_MULTIPLE

2016-03-24 Thread Sebastian Rettenberger
PM, Gilles Gouaillardet wrote: Sebastian, in Open MPI 1.10, the default I/O component is ROMIO from MPICH 3.0.4. How many tasks, how many nodes, and which file system are you running on? Cheers, Gilles On Thursday, March 24, 2016, Sebastian Rettenberger wrote: Hi, I tried to run the attached

[OMPI users] Collective MPI-IO + MPI_THREAD_MULTIPLE

2016-03-24 Thread Sebastian Rettenberger
to do collective I/O. Any idea how one can get around this issue? Best regards, Sebastian -- Sebastian Rettenberger, M.Sc. Technische Universität München Department of Informatics Chair of Scientific Computing Boltzmannstrasse 3, 85748 Garching, Germany http://www5.in.tum.de/ #include #include

Re: [OMPI users] MPI_Win_lock with MPI_MODE_NOCHECK

2015-10-21 Thread Sebastian Rettenberger
The title was actually not correct. I first thought that it only happens when using multiple tasks/threads, but I could reproduce this with one task and one thread as well. Sebastian On 10/20/2015 04:21 PM, Sebastian Rettenberger wrote: Hi, there seems to be a bug in MPI_Win_lock/MPI_Win_unlock in

[OMPI users] Multiple MPI_Win_lock with MPI_MODE_NOCHECK

2015-10-20 Thread Sebastian Rettenberger
d of error message *** -- mpiexec noticed that process rank 0 with PID 29012 on node hpcsccs4 exited on signal 11 (Segmentation fault). Best regards, Sebastian -- Sebastian Rettenberger, M.Sc. Technische Universität München Department of Informatics Chair of Scientific Computing
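
The original test case is only available as an attachment; below is a hypothetical sketch of the pattern named in the subject (window size, loop count and lock type are assumptions): repeated MPI_Win_lock/MPI_Win_unlock epochs with the MPI_MODE_NOCHECK assertion targeting the local rank, which also runs with a single task as noted in the follow-up.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int win_buf = 0, value = 42;
    MPI_Win win;
    MPI_Win_create(&win_buf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    /* Repeated passive-target epochs on the local rank. MPI_MODE_NOCHECK
     * asserts that no other process holds or tries to acquire a
     * conflicting lock while this one is held. */
    for (int i = 0; i < 1000; i++) {
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, rank, MPI_MODE_NOCHECK, win);
        MPI_Put(&value, 1, MPI_INT, rank, 0, 1, MPI_INT, win);
        MPI_Win_unlock(rank, win);
    }

    printf("rank %d: win_buf = %d\n", rank, win_buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}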

Re: [OMPI users] Allgather in OpenMPI 1.4.3

2014-10-30 Thread Sebastian Rettenberger
ies. Note that you can always install Open MPI as a normal/non-root user (e.g., install it into your $HOME, or some such). On Oct 28, 2014, at 12:08 PM, Sebastian Rettenberger wrote: Hi, I know 1.4.3 is really old, but I am currently stuck with it. However, there seems to be a bug in Allgathe

[OMPI users] Allgather in OpenMPI 1.4.3

2014-10-28 Thread Sebastian Rettenberger
. Does anybody know in which version? Best regards, Sebastian -- Sebastian Rettenberger, M.Sc. Technische Universität München Department of Informatics Chair of Scientific Computing Boltzmannstrasse 3, 85748 Garching, Germany http://www5.in.tum.de/ #include #include int main(int argc, char
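
The attached program is not reproduced in the archive; this is merely a generic sketch of an Allgather call of the kind being discussed (one int contributed per rank is an assumption):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *recv = malloc(size * sizeof(int));
    int send = rank;

    /* Every rank contributes one value; every rank receives all of them. */
    MPI_Allgather(&send, 1, MPI_INT, recv, 1, MPI_INT, MPI_COMM_WORLD);

    if (rank == 0)
        for (int i = 0; i < size; i++)
            printf("recv[%d] = %d\n", i, recv[i]);

    free(recv);
    MPI_Finalize();
    return 0;
}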

Re: [OMPI users] Strange "All-to-All" behavior

2013-04-28 Thread Sebastian Rettenberger
Hi, do you see the problem only with Open MPI, or also with other MPI libraries (e.g. MPICH2)? Otherwise you could also try whether you can get the all-to-all working with collectives, e.g. Scatter or Gather. Best regards, Sebastian > Hi, > > I have encountered really bad performance when all
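
As a sketch of that suggestion (buffer sizes and data are made up for illustration), the same exchange can be written either with MPI_Alltoall, the collective designed for this pattern, or with a loop of MPI_Gather calls, one root per iteration:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *sendbuf = malloc(size * sizeof(int));
    int *recvbuf = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++)
        sendbuf[i] = rank * size + i;   /* one element destined for each rank */

    /* Variant 1: the dedicated collective. */
    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    /* Variant 2: the same exchange built from MPI_Gather, one root at a time. */
    for (int root = 0; root < size; root++)
        MPI_Gather(&sendbuf[root], 1, MPI_INT,
                   recvbuf, 1, MPI_INT, root, MPI_COMM_WORLD);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

The Gather loop trades one call for size calls, so MPI_Alltoall is normally the better starting point.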

Re: [OMPI users] Sharing (not copying) data with OpenMPI?

2012-04-17 Thread Sebastian Rettenberger
Hi, RMA operations have existed since MPI 2.0. There are some new functions in MPI 3.0, but I don't think you will need them. I'm currently working on a library that provides access to large grids. It uses RMA and it works quite well with MPI 2.0. Best regards, Sebastian > Hi > > Thank You all for y
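
A minimal sketch of the kind of passive-target access described, assuming a simple block-per-rank grid layout (this is not the library's code): each rank exposes its block through a window, and any rank can read a remote cell with MPI_Get under MPI_Win_lock/MPI_Win_unlock, without the target's participation.

#include <mpi.h>
#include <stdio.h>

#define LOCAL_CELLS 1024   /* assumed block size per rank */

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank's part of the grid, exposed through a window. */
    double local[LOCAL_CELLS];
    for (int i = 0; i < LOCAL_CELLS; i++)
        local[i] = rank * LOCAL_CELLS + i;

    MPI_Win win;
    MPI_Win_create(local, LOCAL_CELLS * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Barrier(MPI_COMM_WORLD);   /* all windows exist and are initialized */

    /* Read cell 7 from the next rank's block. */
    int target = (rank + 1) % size;
    double cell;
    MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
    MPI_Get(&cell, 1, MPI_DOUBLE, target, 7, 1, MPI_DOUBLE, win);
    MPI_Win_unlock(target, win);

    printf("rank %d read %.0f from rank %d\n", rank, cell, target);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}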

Re: [OMPI users] [EXTERNAL] Using One-sided communication with lock/unlock

2012-04-03 Thread Sebastian Rettenberger
Thank you for the hint. I thought that "the same process" refers to the locked window, not to the calling process. Maybe I can work around this restriction with a dummy window for synchronization ... Thanks again, Sebastian > On 4/3/12 12:01 PM, "Sebastian Rettenberger"

[OMPI users] Using One-sided communication with lock/unlock

2012-04-03 Thread Sebastian Rettenberger
Hello, I posted the bug report a week ago, but unfortunately I didn't get any response: https://svn.open-mpi.org/trac/ompi/ticket/3067 The example (see bug report) is very simple; however, it still fails. Other MPI implementations work fine (e.g. Intel MPI). This is a real show stopper for me. Any hel