Thanks, this works. I have now removed my change to oob_tcp_peer.c.
--Bob Soliday
Ralph Castain wrote:
If you wanted it to use eth1, your other option would be to simply tell it
to do so using the mca param. I believe it is something like -mca
oob_tcp_if_include eth1 -mca oob_tcp_if_exclude eth0
No, unfortunately there is no way to do that. In fact, each set of child
processes which you spawn has its own MPI_COMM_WORLD. MPI_COMM_WORLD is
static and there is no way to change it at runtime...
Edgar
Rajesh Sudarsan wrote:
Hi,
I have a simple MPI program that uses MPI_Comm_spawn to create additional
child processes.
If you wanted it to use eth1, your other option would be to simply tell it
to do so using the mca param. I believe it is something like -mca
oob_tcp_if_include eth1 -mca oob_tcp_if_exclude eth0
You may only need the latter since you only have the two interfaces.
Ralph
On 11/29/07 9:47 AM, "Jeff
Hi,
I have a simple MPI program that uses MPI_Comm_spawn to create additional
child processes.
Using MPI_Intercomm_merge, I merge the child and parent communicators,
resulting in a single expanded user-defined intracommunicator. I know
MPI_COMM_WORLD is a constant which is statically initialized
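The spawn-and-merge pattern described here can be sketched as follows. This is a minimal illustration, not code from the thread; the child executable path "./spawned_child" and the spawn count are placeholders, and error handling is omitted:

```c
/* Minimal sketch: spawn children, then merge parent and children into one
   user-defined intracommunicator. "./spawned_child" is a placeholder path. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    MPI_Comm intercomm, merged;

    /* Parent side: spawn 2 children; the result is an intercommunicator
       connecting the parent group to the child group. */
    MPI_Comm_spawn("./spawned_child", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &intercomm, MPI_ERRCODES_IGNORE);

    /* Merge both groups into a single intracommunicator. The parent passes
       high = 0; the children (which obtain the intercomm from
       MPI_Comm_get_parent) would pass high = 1 so they rank after the
       parents. */
    MPI_Intercomm_merge(intercomm, 0, &merged);

    int rank, size;
    MPI_Comm_rank(merged, &rank);
    MPI_Comm_size(merged, &size);
    printf("rank %d of %d in merged communicator\n", rank, size);

    MPI_Comm_free(&merged);
    MPI_Comm_free(&intercomm);
    MPI_Finalize();
    return 0;
}
```

Note that the merged communicator is a brand-new intracommunicator; MPI_COMM_WORLD itself is never changed, which is exactly Edgar's point.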
Jeff Squyres (jsquyres) wrote:
Interesting. Would you mind sharing your patch?
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Bob Soliday
Sent: Thursday, November 29, 2007 11:35 AM
To: Ralph H Castain
Cc: Open MPI Users
Subject: Re: [OMPI users] mca_oob_tcp_peer_t
I solved the problem by making a change to orte/mca/oob/tcp/oob_tcp_peer.c.
I have read that on Linux 2.6, after a failed connect system call, the next
call to connect can immediately return ECONNABORTED without actually
attempting to connect; the call after that will then work. So I changed
mca_oob_t
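The retry idea described here can be sketched as follows. This is a minimal illustration of the workaround, not the actual Open MPI patch; `try_connect` is a hypothetical stand-in for the real connect() call site:

```c
/* Sketch of the retry idea: if a connect attempt fails with ECONNABORTED,
   retry once, since on Linux 2.6 that error may merely report the previous
   failed attempt rather than the current one. The try_connect callback is
   a stand-in for the real connect() call site. */
#include <errno.h>

static int connect_with_retry(int (*try_connect)(void)) {
    int rc = try_connect();
    if (rc < 0 && errno == ECONNABORTED) {
        rc = try_connect();  /* second attempt actually tries to connect */
    }
    return rc;
}
```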
Hi Bob
I'm afraid the person most familiar with the oob subsystem recently left the
project, so we are somewhat hampered at the moment. I don't recognize the
"Software caused connection abort" error message - it doesn't appear to be
one of ours (at least, I couldn't find it anywhere in our code base
On Nov 29, 2007, at 12:08 AM, Keshetti Mahesh wrote:
There is work starting literally right about now to allow Open MPI to
use the RDMA CM and/or the IBCM for creating OpenFabrics connections
(IB or iWARP).
When is this expected to be completed?
It is not planned to be released until the
On Nov 29, 2007, at 2:09 AM, Madireddy Samuel Vijaykumar wrote:
A non-MPI application does run without any issues. Could you elaborate on
what you mean by doing mpirun "hostname"? You mean I just do an
'mpirun lynx' in my case?
No, I mean
mpirun --hostfile hostname
This should run the "hostname"
Hi,
Am 29.11.2007 um 00:02 schrieb Henry Adolfo Lambis Miranda:
This is my first post to the mail list.
I have installed Open MPI 1.2.4 on an x86_64 dual-processor AMD system with
SuSE Linux.
In principle, the installation was successful, with ifort 10.X.
But when I run any code (mpirun -np 2 a.out),
A non-MPI application does run without any issues. Could you elaborate on
what you mean by doing mpirun "hostname"? You mean I just do an
'mpirun lynx' in my case?
On Nov 28, 2007 9:57 PM, Jeff Squyres wrote:
> Well, that's odd.
>
> What happens if you try to mpirun "hostname" (i.e., a non-MPI
> application
Hi Terry,
Thanks for your reply. The ARRAY of LOGICAL problem is gone when I
used the --disable-mpi-f77 option, but now I am getting the following error:
configure: error: Cannot support Fortran MPI_ADDRESS_KIND!
The option string I am using is as follows:
./configure --disable-mpi-f77 --with-devel-headers
Hi George,
Thanks for your reply. I passed the --disable-mpi-f77 option to
the configure script, but now configure failed with the following error:
configure: error: Cannot support Fortran MPI_ADDRESS_KIND!
Can you please let me know how to get rid of this problem (i.e., what option
t
> There is work starting literally right about now to allow Open MPI to
> use the RDMA CM and/or the IBCM for creating OpenFabrics connections
> (IB or iWARP).
When is this expected to be completed?
-Mahesh
Hi Guys,
An alternative for the THREAD_MULTIPLE problem is to pass --mca
mpi_leave_pinned 1 to mpirun. This ensures a single RDMA operation instead
of splitting the data into chunks of the maximum RDMA size (default 1 MB).
If your data size is small, say below 1 MB, the program will run well with
THREAD_MULTIPLE. P