On Feb 20, 2014, at 7:05 PM, Suraj Prabhakaran
wrote:
> Thanks Ralph!
>
> I thought I had mentioned it, though: without the Torque environment, spawning with
> ssh works fine, but under the Torque environment it does not.
Ah, no - you forgot to mention that point.
>
> I
Thanks Ralph!
I thought I had mentioned it, though: without the Torque environment, spawning with ssh
works fine, but under the Torque environment it does not.
I started simple_spawn with 3 processes and spawned 9 processes (3 per node
on 3 nodes).
There is no problem with the Torque environment because
Hmmm...I don't see anything immediately glaring. What do you mean by "doesn't
work"? Is there some specific behavior you see?
You might try the attached program. It's a simple spawn test we use - 1.7.4
seems happy with it.
simple_spawn.c
Description: Binary data
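The attachment itself isn't preserved in the archive. As a rough sketch of what a minimal
spawn test of this kind looks like (an assumption on my part, not the actual simple_spawn.c),
the parent job re-launches its own binary and synchronizes with the children over the
resulting intercommunicator:

#include <stdio.h>
#include <mpi.h>

/* Hypothetical minimal spawn test: the parent run re-executes this binary
 * as the child job and synchronizes over the resulting intercommunicator. */
int main(int argc, char *argv[])
{
    MPI_Comm parent, intercomm;
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (MPI_COMM_NULL == parent) {
        /* Parent side: spawn 3 copies of ourselves. */
        printf("parent %d of %d spawning children\n", rank, size);
        MPI_Comm_spawn(argv[0], MPI_ARGV_NULL, 3, MPI_INFO_NULL, 0,
                       MPI_COMM_WORLD, &intercomm, MPI_ERRCODES_IGNORE);
        MPI_Barrier(intercomm);
        MPI_Comm_disconnect(&intercomm);
    } else {
        /* Child side: report in and synchronize with the parents. */
        printf("child %d of %d checking in\n", rank, size);
        MPI_Barrier(parent);
        MPI_Comm_disconnect(&parent);
    }

    MPI_Finalize();
    return 0;
}

Running something like "mpirun -np 3 ./simple_spawn" inside the Torque allocation should
exercise the same spawn path the report is about.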
On Feb 20, 2014, at 10:14
Creating nightly hwloc snapshot git tarball was a success.
Snapshot: hwloc dev-100-g8145438
Start time: Thu Feb 20 21:01:01 EST 2014
End time: Thu Feb 20 21:03:12 EST 2014
Your friendly daemon,
Cyrador
On 15:45 Thu 20 Feb , julia.dudascik.contrac...@unnpp.gov wrote:
> Please take me off distribution.
http://www.open-mpi.org/mailman/listinfo.cgi/devel
HTH
-Andreas
--
==
Andreas Schäfer
HPC and Grid Computing
Chair of Computer
Please take me off distribution.
-----Original Message-----
From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of Suraj
Prabhakaran
Sent: Thursday, February 20, 2014 1:14 PM
To: Open MPI Developers
Subject: Re: [OMPI devel] MPI_Comm_spawn under Torque
I am using 1.7.4!
On Feb 20, 2014,
I am using 1.7.4!
On Feb 20, 2014, at 7:00 PM, Ralph Castain wrote:
> What OMPI version are you using?
>
> On Feb 20, 2014, at 7:56 AM, Suraj Prabhakaran
> wrote:
>
>> Hello!
>>
>> I am having a problem using MPI_Comm_spawn under Torque. It doesn't work when
>>
What OMPI version are you using?
On Feb 20, 2014, at 7:56 AM, Suraj Prabhakaran
wrote:
> Hello!
>
> I am having a problem using MPI_Comm_spawn under Torque. It doesn't work when
> spawning more than 12 processes across multiple nodes. To be more precise,
> "sometimes"
On Feb 20, 2014, at 7:10 AM, Jeff Squyres (jsquyres) wrote:
> For all of these, I'm using the openshmem test suite that is now committed to
> the ompi-svn SVN repo. I don't know if the errors are with the tests or with
> oshmem itself.
>
> 1. I'm running the oshmem test
Hello!
I am having a problem using MPI_Comm_spawn under Torque. It doesn't work when
spawning more than 12 processes across multiple nodes. To be more precise,
"sometimes" it works and "sometimes" it doesn't!
Here is my case: I obtain 5 nodes with 3 cores per node, and my $PBS_NODEFILE looks
like the listing below.
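The actual listing didn't survive in this snippet. Purely as an illustration of the format,
with hypothetical hostnames, a Torque nodefile for 5 nodes with 3 slots each simply repeats
every hostname once per slot:

node01
node01
node01
node02
node02
node02
...
node05
node05
node05

so the allocation presents 15 slots in total.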
On Feb 20, 2014, at 10:44 AM, "Jeff Squyres (jsquyres)"
wrote:
> Yes, I've added them to my Cisco MTT ini files in the ompi-svn repo.
Err... I meant ompi-tests SVN repo. :-)
> Look in cisco/mtt/usnic/usnic-trunk.ini and usnic-v1.7.ini.
>
> All relevant sections have
Yes, I've added them to my Cisco MTT ini files in the ompi-svn repo. Look in
cisco/mtt/usnic/usnic-trunk.ini and usnic-v1.7.ini.
All relevant sections have "oshmem" in them.
Most are copied from the Mellanox examples, but I made a few
tweaks/improvements here and there. I also anticipate
Could you send along the relevant mtt .ini sections?
On Feb 20, 2014, at 7:10 AM, Jeff Squyres (jsquyres) wrote:
> For all of these, I'm using the openshmem test suite that is now committed to
> the ompi-svn SVN repo. I don't know if the errors are with the tests or with
Thanks for the report; I filed https://svn.open-mpi.org/trac/ompi/ticket/4290.
On Feb 20, 2014, at 4:34 AM, Brice Goglin wrote:
> Hello,
>
> We're setting up a new cluster here. Open MPI 1.7.4 was hanging at
> startup without any error message. The issue appears to be
>
For all of these, I'm using the openshmem test suite that is now committed to
the ompi-svn SVN repo. I don't know if the errors are with the tests or with
oshmem itself.
1. I'm running the oshmem test suite at 32 processes across 2 16-core servers.
I'm seeing a segv in
I took the liberty of committing the openshmem test suite 1.0d to the
ompi-tests SVN, mainly because there are some post-release patches that are
necessary to get it to compile/run properly.
Mellanox put some clever workarounds in the MTT ini file for a first round of
patches, but I'm finding
Was just fixed in https://svn.open-mpi.org/trac/ompi/changeset/30780.
On Feb 20, 2014, at 7:11 AM, Mike Dubman wrote:
> Hi,
> This commit caused the failure:
> • Comments about 'db' arguments.
> • Fixes #4205: ensure sizeof(MPI_Count) <= sizeof(size_t)
>
Hi,
This commit caused the failure:
1. Comments about 'db' arguments.
2. Fixes #4205: ensure sizeof(MPI_Count) <= sizeof(size_t)
13:28:24   CC       ompi_datatype_args.lo
13:28:24 In file included from ../../ompi/datatype/ompi_datatype.h:43,
13:28:24                  from
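For context, the #4205 fix is about a compile-time guarantee that an MPI_Count value always
fits in a size_t. A minimal sketch of that kind of check, with an illustrative macro name
rather than OMPI's actual one and assuming the classic negative-array-size trick:

#include <stddef.h>
#include <mpi.h>

/* Hypothetical compile-time assertion: the build fails if MPI_Count is
 * wider than size_t, so counts can safely flow into size_t-based code. */
#define MY_STATIC_ASSERT(cond, name) typedef char name[(cond) ? 1 : -1]

MY_STATIC_ASSERT(sizeof(MPI_Count) <= sizeof(size_t),
                 mpi_count_fits_in_size_t);

With a check like this in a header the datatype code includes, a platform where MPI_Count is
wider than size_t breaks at compile time instead of silently truncating counts.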
Hello,
We're setting up a new cluster here. Open MPI 1.7.4 was hanging at
startup without any error message. The issue appears to be
udcm_component_query() hanging in finalize() on the sched_yield() loop
when the memlock limit isn't set to unlimited, as usual.
Unfortunately the hangs occur before we
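For reference, the memlock limit is usually raised for all users via /etc/security/limits.conf,
along the lines of:

* soft memlock unlimited
* hard memlock unlimited

and, when jobs are launched through a resource manager, the limit also needs to be raised in the
daemon's own environment (e.g. the pbs_mom init script), since processes it starts inherit its
limits rather than the login-shell ones.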