> locations in the RPM?
>
> Thank you for any help.
--
Dipl.-Inf. Kiril Dichev
Tel.: +49 711 685 60492
E-mail: dic...@hl
?
>
> David
--
Dipl.-Inf. Kiril Dichev
Tel.: +49 711 685 60492
E-mail: dic...@hlrs.de
High Performance Computing Center Stuttgart (HLRS)
Universität Stuttgart
70550 Stuttgart
Germany
Hi,
I’m doing some research on message logging protocols. It seems that Vprotocol
in Open MPI can wrap around communication calls and log messages, if enabled.
Unfortunately, when I try to use it with Open MPI 4.0.0, I get an error:
mpirun --mca vprotocol pessimist -mca vprotocol_pessimis
Thanks for the quick reply Aurelien.
I tried initialising MPI from the benchmark via “call
mpi_init_thread(MPI_THREAD_SINGLE, provided, ierror)” (it’s in Fortran), but
nothing changed there. I still get the same “threads are enabled” warning and
vprotocol doesn’t seem to be used. I also tri
e ?
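For reference, a stripped-down standalone test that does the same initialisation looks
roughly like this (just a minimal sketch: the program name and the check on "provided"
are my own additions, the benchmark itself is of course larger):

      program init_single
        use mpi
        implicit none
        integer :: provided, ierror

        ! ask for the lowest thread level, as the benchmark does
        call mpi_init_thread(MPI_THREAD_SINGLE, provided, ierror)

        ! report what the library actually granted, to see whether a higher
        ! thread level (and hence the "threads are enabled" warning) is forced on us
        if (provided /= MPI_THREAD_SINGLE) then
          print *, 'requested MPI_THREAD_SINGLE, got level ', provided
        end if

        call mpi_finalize(ierror)
      end program init_single

That at least shows whether the thread level is being upgraded independently of vprotocol.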
Thanks,
Kiril
--
Dipl.-Inf. Kiril Dichev
Tel.: +49 711 685 60492
E-mail: dic...@hlrs.de
High Performance Computing Center Stuttgart (HLRS)
Universität Stuttgart
70550 Stuttgart
Germany
> though, you'll have to make the Torque
> libs available on the backend nodes.
>
> Ralph
>
>
> On Jan 29, 2009, at 8:32 AM, Kiril Dichev wrote:
>
> > Hi,
> >
> > I am trying to run with Open MPI 1.3 on a cluster using PBS Pro:
> >
> > pbs_version =
Hi guys,
sorry for the long e-mail.
I have been trying for some time now to run VampirServer with shared
libs for Open MPI 1.3.
First of all: The "--enable-static --disable-shared" version works.
Also, the 1.2 series worked fine with the shared libs.
But here is the story for the shared librari
> full of details about Vampir that most people probably don't care
> about; they're working on a small example to send to me that
> replicates the problem -- will post back here when we have some kind
> of solution...)
>
> We now return you to your regularly schedu