Intel processors E7330, E5345, E5530 and E5620
CentOS 5.3, CentOS 5.5.
Intel Composer XE 2011
gcc 4.1.2
pgi 10.2-1
Regards
Salvatore Podda
ENEA UTICT-HPC
Department for Computer Science Development and ICT
Facilities Laboratory for Science and High Performance Computing
C.R. Frascati
Via E. Fermi, 45
PoBox 65
in the FAQ and on the
open-mpi.org documentation,
but could you kindly explain the meaning of this flag?
Thanks
Salvatore Podda
On 20 May 2011, at 03:37, Jeff Squyres wrote:
Sorry for the late reply.
Other users have seen something similar but we have never been able
to reproduce it.
the shared memory btl intra-node,
whereas your other choice "--mca btl_tcp_if_include ib0" will not.
Could this be the problem?
Here we use "--mca btl openib,self,sm",
to enable the shared memory btl intra-node as well,
and it works just fine on programs that do use collective calls.
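For reference, a minimal sketch of the two invocations being compared (the
process count and the ./my_app executable name are placeholders, not from
the original mails):

  # InfiniBand plus loopback plus shared-memory BTLs (Open MPI 1.x syntax)
  mpirun -np 16 --mca btl openib,self,sm ./my_app

  # TCP over the IPoIB interface instead; note that sm is not listed here
  mpirun -np 16 --mca btl tcp,self --mca btl_tcp_if_include ib0 ./my_app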
Apologies, I forgot to edit the subject line.
I am sending it again with a sensible subject.
Salvatore
Begin forwarded message:
From: Salvatore Podda
Date: 24 May 2011 12:46:17 GMT+02:00
To: g...@ldeo.columbia.edu
Cc: users open-mpi
Subject: Re: users Digest, Vol 1911, Issue 3
Sorry for the
compilation phase
Salvatore Podda
On 20 May 2011, at 03:37, Jeff Squyres wrote:
Sorry for the late reply.
Other users have seen something similar but we have never been able
to reproduce it. Is this only when using IB? If you use "mpirun
--mca btl_openib_cpc_if_include rdmacm", does
positively definitely sure to use the specific BTL.
assume that the `sm' flag is even included by default.
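Spelled out, the suggested rdmacm test would look something like this (a
sketch only; the process count and binary name are placeholders):

  # ask the openib BTL to use the RDMA connection manager for connection setup
  mpirun -np 2 --mca btl_openib_cpc_if_include rdmacm ./my_app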
Finally, just out of curiosity: as most applications appear to
work with only the
"openib,self" flags, what "physically" happens to the "intra-node",
of the eth interfaces.
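One way to check this empirically (my own sketch, not a command from the
thread) is to raise the BTL verbosity and watch which components get
selected at startup:

  # report BTL component selection; output format varies across versions
  mpirun -np 2 --mca btl openib,self --mca btl_base_verbose 30 ./my_app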
Regards
Salvatore Podda
ENEA UTICT-HPC
Department for Computer Science Development and ICT
Facilities Laboratory for Science and High Performance Computing
C.R. Frascati
Via E. Fermi, 45
PoBox 65
00044 Frascati (Rome)
Italy
Tel: +39 06 9400 5342
Fax: +39 06 9400 5551
Thanks for the prompt reply!
On Sep 27, 2011, at 6:35 AM, Salvatore Podda wrote:
We would like to know if the ethernet interfaces play any role in
the startup phase of an Open MPI job using InfiniBand.
If so, where can we find some literature on this topic?
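For context on the mechanism being asked about: the ORTE out-of-band
channel does run over TCP during startup even when the MPI traffic itself
uses InfiniBand, and it can be pinned to a given interface (a sketch;
./my_app is a placeholder):

  # restrict the out-of-band startup/wireup TCP traffic to eth0
  mpirun -np 4 --mca oob_tcp_if_include eth0 ./my_app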
Unfortunately, there
Hi all,
we have a computational infrastructure composed of front-end and
worker nodes
which differ slightly in the architectures on board (I mean the same
processor but a different socket).
OFED and Open MPI have been compiled and built on both systems at the same
version level.
As users
Pasha, thanks for your comment.
I have added my comments inline.
> Salvatore,
>
> Please see my comment inline.
>
>>
>> More generally, in the case of front-end nodes with processors
>> definitely different from the
>> worker nodes (same vendor, i.e. Intel), can Open MPI applications compiled on one
>>
Apologies for the delay, but I missed your post.
> Hi,
>
> On 22.02.2012 at 14:21, Salvatore Podda wrote:
>
>> we have a computational infrastructure composed of front-end and
>> worker nodes
>> which differ slightly in the architectures on bo
Dear all,
in OpenMPI 1.2.8 it was possible to not daemonize the orted using the
MCA parameter:
MCA orte: parameter "orte_no_daemonize"
Is there an equivalent in later versions, and if so, which one?
I note that, starting at least from OpenMPI 1.4.2, there is support to
daemonize (o
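For reference, the 1.2-era behaviour could be requested like this (a sketch
from memory; please verify the parameter name against your own build with
ompi_info):

  # list daemonize-related ORTE parameters known to this installation
  ompi_info --param orte all | grep -i daemon

  # Open MPI 1.2.x: do not daemonize the orted
  mpirun -np 4 --mca orte_no_daemonize 1 ./my_app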