Cheers,
Gilles
On 3/24/2016 11:30 PM, Sebastian Rettenberger wrote:
Hi,
I tested this on my desktop machine, so one node and two tasks.
The deadlock appears both on the local file system and on the NFS mount.
The MPICH version I tested was 3.2.
However, as far as I know, locking is part of the MPI
PM, Gilles Gouaillardet wrote:
Sebastian,
In Open MPI 1.10, the default io component is ROMIO from MPICH 3.0.4.
How many tasks, how many nodes, and which file system are you running on?
Cheers,
Gilles
On Thursday, March 24, 2016, Sebastian Rettenberger wrote:
Hi,
I tried to run the attached example to do collective I/O, but it deadlocks.
Any idea how one can get around this issue?
Best regards,
Sebastian
--
Sebastian Rettenberger, M.Sc.
Technische Universität München
Department of Informatics
Chair of Scientific Computing
Boltzmannstrasse 3, 85748 Garching, Germany
http://www5.in.tum.de/
#include
#include
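A minimal sketch of the kind of collective write discussed in this thread (not the original attachment; the file name, offsets, and sizes are made up):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank writes one int at its own offset with a collective call. */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "testfile.bin",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    int value = rank;
    MPI_File_write_at_all(fh, (MPI_Offset)(rank * sizeof(int)), &value, 1,
                          MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

With ROMIO, the write path may take file locks on some file systems, which is where a hang like the one described would show up.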
The title was actually not correct. I first thought that this happens only when
using multiple tasks/threads, but I could reproduce it with one task
and one thread as well.
Sebastian
On 10/20/2015 04:21 PM, Sebastian Rettenberger wrote:
Hi,
there seems to be a bug in MPI_Win_lock/MPI_Win_unlock in
*** End of error message ***
--
mpiexec noticed that process rank 0 with PID 29012 on node hpcsccs4 exited on
signal 11 (Segmentation fault).
Best regards,
Sebastian
--
Sebastian Rettenberger, M.Sc.
Technische Universität München
Department of Informatics
Chair of Scientific Computing
Boltzmannstrasse 3, 85748 Garching, Germany
http://www5.in.tum.de/
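For reference, a minimal passive-target sketch of the lock/put/unlock sequence being discussed (the window layout, target rank, and values are made up; this is not the reproducer from the report):

#include <mpi.h>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Every rank exposes one int through a window. */
    int local = rank;
    MPI_Win win;
    MPI_Win_create(&local, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* Rank 0 writes its value into rank 1's window under an exclusive lock. */
    if (rank == 0 && size > 1) {
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 1, 0, win);
        MPI_Put(&local, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        MPI_Win_unlock(1, win);
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}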
Note that you can always install Open MPI as a normal/non-root user (e.g.,
install it into your $HOME, or some such).
On Oct 28, 2014, at 12:08 PM, Sebastian Rettenberger wrote:
Hi,
I know 1.4.3 is really old, but I am currently stuck with it. However, there
seems to be a bug in Allgather.
Does anybody know in which version this was fixed?
Best regards,
Sebastian
--
Sebastian Rettenberger, M.Sc.
Technische Universität München
Department of Informatics
Chair of Scientific Computing
Boltzmannstrasse 3, 85748 Garching, Germany
http://www5.in.tum.de/
#include
#include
int main(int argc, char
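A minimal MPI_Allgather example along the lines such a reproducer would take (buffer sizes and contents are made up, not the original attachment):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes one int; every rank receives all contributions. */
    int sendval = rank;
    int* recvbuf = malloc(size * sizeof(int));

    MPI_Allgather(&sendval, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    if (rank == 0)
        printf("recvbuf[%d] = %d\n", size - 1, recvbuf[size - 1]);

    free(recvbuf);
    MPI_Finalize();
    return 0;
}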
Hi,
do you see the problem only with OpenMPI, or also with other MPI libraries
(e.g. MPICH2)?
Otherwise you could also try whether you can do the all-to-all with collectives,
e.g. Scatter or Gather.
Best regards,
Sebastian
> Hi,
>
> I have encountered really bad performance when all
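A sketch of the suggestion above; the exchange could also be built from one MPI_Scatter per root, but the single MPI_Alltoall call shown here is the most direct collective form (counts and buffer contents are made up):

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* One int per destination rank; the collective does the full exchange. */
    int* sendbuf = malloc(size * sizeof(int));
    int* recvbuf = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++)
        sendbuf[i] = rank * size + i;

    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}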
Hi
RMA operations have existed since MPI 2.0. There are some new functions in MPI 3.0,
but I don't think you will need them.
I'm currently working on a library that provides access to large grids. It
uses RMA and it works quite well with MPI 2.0.
Best regards,
Sebastian
> Hi
>
> Thank You all for y
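A small sketch of MPI 2.0-level RMA of the kind mentioned above, using active-target synchronization with MPI_Win_fence (the block layout and neighbour access are made up, not the actual grid library):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank exposes a small block of the "grid" in a window. */
    int block[4] = { rank, rank, rank, rank };
    MPI_Win win;
    MPI_Win_create(block, sizeof(block), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* Everyone reads one element from the next rank's block. */
    int remote = -1;
    MPI_Win_fence(0, win);
    MPI_Get(&remote, 1, MPI_INT, (rank + 1) % size, 0, 1, MPI_INT, win);
    MPI_Win_fence(0, win);

    printf("rank %d read %d\n", rank, remote);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}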
Thank you for the hint. I thought that "the same process" refers to the locked
window, not to the calling process.
Maybe I can work around this restriction with a dummy window for
synchronization ...
Thanks again,
Sebastian
> On 4/3/12 12:01 PM, "Sebastian Rettenberger" wrote:
Hello,
I posted the bug report a week ago, but unfortunately I didn't get any
response:
https://svn.open-mpi.org/trac/ompi/ticket/3067
The example (see the bug report) is very simple, but it still fails. Other MPI
implementations work fine (e.g. Intel MPI).
This is a real showstopper for me. Any help would be appreciated.