OpenMPI-1.6.1 is installed on a Rocks-5.5 Linux cluster with Intel
compilers and OFED-1.5.3. A sample Helloworld MPI program gives the
following warning message:
/mpi/openmpi/1.6.1/intel/bin/mpirun -np 4 ./mpi
--
WARNING: It
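For context, a Helloworld MPI program of the kind being run here is only a
few lines. The following is a generic sketch, not the poster's actual
source (which was not included):

    /* Generic MPI hello-world sketch; compile with mpicc, run with mpirun. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }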
I have tried with these flags (I use gcc 4.7 and Open MPI 1.6), but
the program doesn't crash; one node goes down and the rest of them keep
waiting for a signal (there is an ALLREDUCE in the code).
Anyway, yesterday some processes died (without a log) on node 10,
and I logged in almost immediately on the n
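That hang pattern fits a blocking collective: MPI_Allreduce cannot complete
until every rank contributes, so if one node dies silently the surviving
ranks wait forever. A minimal sketch (hypothetical data, not the poster's
code):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        double local, global;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        local = (double)rank;  /* stand-in for each rank's partial result */

        /* Blocking collective: every rank must reach this call. If one
         * node has died without an error, the others block here. */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        printf("rank %d: sum = %f\n", rank, global);
        MPI_Finalize();
        return 0;
    }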
Hi,
On 05.09.2012 at 06:42, San B wrote:
> OpenMPI-1.6.1 is installed on a Rocks-5.5 Linux cluster with Intel
> compilers and OFED-1.5.3. A sample Helloworld MPI program gives the
> following warning message:
>
>
> /mpi/openmpi/1.6.1/intel/bin/mpirun -np 4 ./mpi
> --
On 9/4/2012 7:21 PM, Yong Qin wrote:
> On Tue, Sep 4, 2012 at 5:42 AM, Yevgeny Kliteynik
> wrote:
>> On 8/30/2012 10:28 PM, Yong Qin wrote:
>>> On Thu, Aug 30, 2012 at 5:12 AM, Jeff Squyres wrote:
On Aug 29, 2012, at 2:25 PM, Yong Qin wrote:
> This issue has been observed on OMPI
Hi,
I'm new to rankfiles, so I played a little bit with different
options. I thought that the following entry would be similar to an
entry in an appfile, and that MPI could place the process with rank 0
on any core of any processor.
rank 0=tyr.informatik.hs-fulda.de
Unfortunately it's not all
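For comparison, the documented rankfile form pins each rank to an explicit
slot. A sketch with a hypothetical second rank and slot numbers (socket 0,
cores 0 and 1):

    rank 0=tyr.informatik.hs-fulda.de slot=0:0
    rank 1=tyr.informatik.hs-fulda.de slot=0:1

launched with something like (my_rankfile and ./mpi_prog are placeholder
names):

    mpirun -np 2 --rankfile my_rankfile ./mpi_prog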
I couldn't really say for certain - I don't see anything obviously wrong with
your syntax, and the code appears to be working, or else it would fail on the
other nodes as well. The fact that it fails solely on that machine seems
suspect.
Set aside the rankfile for the moment and try to just bind
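For example, with the binding flags of the Open MPI 1.6 era (a sketch;
./mpi_prog is a placeholder name):

    mpirun -np 4 --bind-to-core --report-bindings ./mpi_prog

--report-bindings prints where each rank actually landed, which helps rule
out binding problems without involving a rankfile at all.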
Yevgeny,
we at RZ Aachen also see problems very similar to those described in the
initial posting by Yong Qin, on VASP with Open MPI 1.5.3.
We're currently looking for a data set that can reproduce this. I'll write an
email once we have one.
Best,
Paul
On 09/05/12 13:52, Yevgeny Kliteynik wrote:
I'm
Hi Shiqing,
> Could you try setting the OPENMPI_HOME env var to the root of the Open MPI dir?
> This env var is a backup option for the registry.
It solves one problem, but there is a new problem now :-((
Without OPENMPI_HOME: wrong pathname to the help files.
D:\...\prog\mpi\small_prog>mpiexec init_finalize.
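For reference, setting the variable in cmd.exe looks like this (the install
path and program name are placeholders, to be adjusted to the actual
layout):

    rem OPENMPI_HOME should point at the Open MPI installation root.
    set OPENMPI_HOME=C:\path\to\openmpi
    mpiexec -np 2 your_prog.exe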
Yes, so far this has only been observed with VASP and one specific dataset.
Thanks,
On Wed, Sep 5, 2012 at 4:52 AM, Yevgeny Kliteynik
wrote:
> On 9/4/2012 7:21 PM, Yong Qin wrote:
>> On Tue, Sep 4, 2012 at 5:42 AM, Yevgeny Kliteynik
>> wrote:
>>> On 8/30/2012 10:28 PM, Yong Qin wrote:
On Thu,
Hi,
I am learning pthreads and trying to implement pthreads in my
quicksort program.
My problem is I am unable to understand how to implement pthreads on the
data received at a node from the master. (In detail: in my program the
master will divide the data and send it to the slaves, and each slave will do
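The message is cut off here, but as a starting point, here is a minimal
sketch of the thread part (hypothetical data and names, with qsort()
standing in for a hand-written quicksort): each worker thread sorts the
chunk it was handed, and the main thread joins them before merging.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct chunk { int *data; int len; };

    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    /* Worker: sort one chunk of the data this node received. */
    static void *sort_chunk(void *arg)
    {
        struct chunk *c = arg;
        qsort(c->data, c->len, sizeof(int), cmp_int);
        return NULL;
    }

    int main(void)
    {
        int data[8] = { 7, 3, 5, 1, 8, 2, 6, 4 };
        struct chunk halves[2] = { { data, 4 }, { data + 4, 4 } };
        pthread_t tid[2];
        int i;

        for (i = 0; i < 2; i++)        /* one thread per chunk */
            pthread_create(&tid[i], NULL, sort_chunk, &halves[i]);
        for (i = 0; i < 2; i++)
            pthread_join(tid[i], NULL);

        /* Both halves are now sorted; a merge step would follow. */
        for (i = 0; i < 8; i++)
            printf("%d ", data[i]);
        printf("\n");
        return 0;
    }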
Andrea,
As suggested by the previous answers, I guess the size of your problem is too
large for the memory available on the nodes. I can run ZeusMP without any
issues up to 64 processes, both over Ethernet and InfiniBand. I tried 1.6
and the current trunk, and both perform as expected.
Wha