to is that
>> their license manager is blocking you from running, albeit without a really
>> nice error message. I’m sure that’s something they are working on.
>>
>> If you really want to use Spectrum MPI, I suggest you contact them about
>> purchasing it.
>>
>
; Gilles
>
> On 5/19/2017 4:28 PM, Gabriele Fatigati wrote:
>
>> Using:
>>
>> mpirun --mca pml ^pami --mca pml_base_verbose 100 -n 2 ./prova_mpi
>>
>> I attach the output
>>
>> 2017-05-19 9:16 GMT+02:00 John Hearns via users:
>
>
> Cheers,
>
>
> Gilles
>
>
>
> On 5/19/2017 4:23 PM, Gabriele Fatigati wrote:
>
>> Oh no, by using two procs:
>>
>>
>> findActiveDevices Error
>> We found
e the physical interface cards in
>>> these systems, but you do not have the correct drivers or
>>> libraries loaded.
>>>
>>> I have had similar messages when using Infiniband on x86 systems -
>>> which did not have libibverbs installed
parameter "orte_base_help_aggregate" to 0 to see
all help / error messages
[openpower:88867] 1 more process has sent help message help-mpi-runtime.txt
/ mpi_init:startup:pml-add-procs-fail
2017-05-19 9:22 GMT+02:00 Gabriele Fatigati :
> Hi Gilles,
>
> using your command with one MPI proc
t work, can run and post the logs)
>
> mpirun --mca pml ^pami --mca pml_base_verbose 100 ...
>
>
> Cheers,
>
>
> Gilles
>
>
> On 5/19/2017 4:01 PM, Gabriele Fatigati wrote:
>
>> Hi John,
>> InfiniBand is not used; there is a single node on this machine
-- Forwarded message --
From: Gabriele Fatigati
Date: 2017-05-19 9:07 GMT+02:00
Subject: Re: [OMPI users] IBM Spectrum MPI problem
To: John Hearns
If I understand correctly, when I launch mpirun it tries by default to use
InfiniBand, but because there is no InfiniBand module the run
correct drivers or libraries loaded.
>
> I have had similar messages when using Infiniband on x86 systems - which
> did not have libibverbs installed.
>
>
> On 19 May 2017 at 08:41, Gabriele Fatigati wrote:
>
>> Hi Gilles, using your command:
>>
>> [openpower:88
he output ?
>
>
> Cheers,
>
> Gilles
>
> On 5/18/2017 10:41 PM, Gabriele Fatigati wrote:
>
>> Hi Gilles, attached the requested info
>>
>> 2017-05-18 15:04 GMT+02:00 Gilles Gouaillardet <gilles.gouaillar...@gmail.com>:
l
> for example
> ldd a.out
> should only point to IBM libraries
>
> Cheers,
>
> Gilles
>
>
> On Thursday, May 18, 2017, Gabriele Fatigati wrote:
>
>> Dear OpenMPI users and developers, I'm using IBM Spectrum MPI 10.1.0 based
>> on OpenMPI, so I hop
t;
>
> On 18 May 2017 at 14:10, Reuti wrote:
>
>> Hi,
>>
>> > On 18.05.2017 at 14:02, Gabriele Fatigati wrote:
>> >
>> > Dear OpenMPI users and developers, I'm using IBM Spectrum MPI 10.1.0
>>
>> I noticed this on IBM'
Hi Reuti, I think it is freely available. I also posted on the IBM Spectrum
forum; I'm waiting for a reply.
2017-05-18 14:10 GMT+02:00 Reuti :
> Hi,
>
> > On 18.05.2017 at 14:02, Gabriele Fatigati wrote:
> >
> > Dear OpenMPI users and developers, I'm using IBM Spec
and potentially your MPI job)
My sysadmin used the official IBM Spectrum packages to install MPI, so it's
quite strange that some components (pami) are missing. Any help?
Thanks
--
Ing. Gabriele Fatigati
HPC specialist
SuperComputing Applications and Innovation Department
Via Magnanel
Ok Jeff, thanks very much for your support!
Regards,
2012/1/31 Jeff Squyres
> On Jan 31, 2012, at 3:59 AM, Gabriele Fatigati wrote:
>
> > I have very interesting news. I recompiled OpenMPI 1.4.4 enabling the
> memchecker.
> >
> > Now the warning on strcmp is disap
28, 2012, at 5:22 AM, Gabriele Fatigati wrote:
>
> > I had the same idea, so in my simple code I have already done calloc and
> memset.
> >
> > The same warning still appears using strncmp, which should exclude
> uninitialized bytes in hostname_recv_buf :(
>
> Bummer.
>
and then alerting you later when you access
> those secondary uninitialized bytes.
>
> If I'm right, you can memset the local_hostname buffer (or use calloc),
> and then valgrind warnings will go away.
>
>
>
> On Jan 27, 2012, at 8:21 AM, Gabriele Fatigati wrote:
>
To eliminate the warning, you should memset hostname_recv_buf to 0 so it has a
> guaranteed value.
>
> On Jan 27, 2012, at 6:21 AM, Gabriele Fatigati wrote:
>
> Hi Jeff,
>
> yes, a very stupid bug in the code, but even with the correction the problem
> with Valgrind in strcmp rema
MPI is not looking for \0's; you gave it the
> explicit length of the buffer), but if they weren't filled with \0's, then
> the receiver's printf will have problems handling it.
>
>
>
> On Jan 27, 2012, at 4:03 AM, Gabriele Fatigati wrote:
>
> > Sorry,
>
Sorry,
this is the right code.
2012/1/27 Gabriele Fatigati
> Hi Jeff,
>
> The problem is that when I use strcmp on the Allgather buffer, Valgrind
> raises a warning.
>
> Please check if the attached code is right, where size(local_hostname) is
> very small.
>
> Valg
MPI_Comm_size(MPI_COMM_WORLD, &size);
>
>gethostname(hostname, MAX_LEN - 1);
>where_null(hostname, MAX_LEN, rank);
>
>hostname_recv_buf = calloc(size * (MAX_LEN), (sizeof(char)));
>MPI_Allgather(hostname, MAX_LEN, MPI_CHAR,
> hostname_recv_buf, MAX_
Dear OpenMPI users/developers,
can anybody help with this problem?
2012/1/13 Gabriele Fatigati
> Dear OpenMPI,
>
> using MPI_Allgather with the MPI_CHAR type, I have a doubt about the
> null-terminated character. Imagine I want to collect node names where my
> progra
==19931== Conditional jump or move depends on uninitialised value(s)
==19931==    at 0x4A06E5C: strcmp (mc_replace_strmem.c:412)
The same warning is not present if I use MAX_STRING_LEN+1 in MPI_Allgather.
Thanks in advance.
--
Ing. Gabriele Fatigati
HPC specialist
SuperComputing Applications and Innovation Dep
More in detail,
is it possible to use the mmap() function from an MPI process and share this
memory with other processes?
2011/10/13 Gabriele Fatigati
> Dear OpenMPI users and developers,
>
> is there some limitation or issues to use memory mapped memory into MPI
> processes? I w
Dear OpenMPI users and developers,
are there any limitations or issues in using memory-mapped memory in MPI
processes? I would like to share some memory within a node without using OpenMP.
Thanks a lot.
--
Ing. Gabriele Fatigati
HPC specialist
SuperComputing Applications and Innovation Department
r portability issue. :-\
>
> On Aug 23, 2011, at 5:19 AM, Gabriele Fatigati wrote:
>
> > Dear OpenMPi users,
> >
> > is there some portable MPI macro to check if a code is compiled with MPI
> compiler? Something like _OPENMP for OpenMP codes:
> >
> > #ifdef _O
Dear OpenMPi users,
is there some portable MPI macro to check whether a code is compiled with an MPI
compiler? Something like _OPENMP for OpenMP codes:
#ifdef _OPENMP
#endif
Does something like this exist?
#ifdef MPI
#endif
Thanks
--
Ing. Gabriele Fatigati
HPC specialist
SuperComputing Applications and
with Totalview, the problem appears at line 188 of
ompi/mca/io/romio/romio/adio/ad_nfs/ad_nfs_read.c:
MPI_Type_size(fd->filetype, &filetype_size);
here there is an explicit cast to int that can cause the problem.
Can someone help me?
Thanks in advance.
--
Ing. Gabriele Fatiga
Good!
Thanks for your support!
Regards.
2011/1/28 Jeff Squyres
> Thanks for the confirmation.
>
> I committed the fix to the trunk as of r24322 and filed CMR's for v1.4 and
> v1.5.
>
>
>
> On Jan 28, 2011, at 2:50 AM, Gabriele Fatigati wrote:
>
> > Hi
u try the attached
> patch to a trunk nightly tarball and see if that works for you?
>
> If it does, I can provide patches for v1.4 and v1.5 (the code moved a bit
> between these 3 versions, so I would need to adapt the patches a little).
>
>
>
> On Jan 27, 2011, at 9:06
>
> > Just start your debugged job with "totalview mpirun ..." and it should
> work fine.
> >
> > On Jan 27, 2011, at 3:00 AM, Gabriele Fatigati wrote:
> >
> >> The problem is how mpirun scan input parameters when Totalview is
> invoked.
> >
es are lost in the process.
>
> Just start your debugged job with "totalview mpirun ..." and it should work
> fine.
>
> On Jan 27, 2011, at 3:00 AM, Gabriele Fatigati wrote:
>
> The problem is how mpirun scan input parameters when Totalview is invoked.
>
> The
The problem is how mpirun scans input parameters when Totalview is invoked.
There is some wrong behaviour in the middle :(
2011/1/27 Reuti
> Am 27.01.2011 um 10:32 schrieb Gabriele Fatigati:
>
> > Mm,
> >
> > doing as you suggest the output is:
> >
> > a
Mm,
doing as you suggest the output is:
a
b
"c
d"
and not:
a
b
"c d"
2011/1/27 Reuti
> Hi,
>
> Am 27.01.2011 um 09:48 schrieb Gabriele Fatigati:
>
> > Dear OpenMPI users and developers,
> >
> > i'm using OpenMPI 1.4.3 and Intel compil
/a.out a b "c d"
Argument parsing doesn't work well. Arguments passed are:
a b c d
and not
a b "c d"
I think there is an issue in parsing the arguments when invoking Totalview. Is
this a bug in mpirun, or do I need to do it another way?
Thanks in advance.
--
Ing. Gabrie
finity? This is my question..
2010/9/27 Tim Prince
>
> On 9/27/2010 9:01 AM, Gabriele Fatigati wrote:
>>
>> if OpenMPI is numa-compiled, memory affinity is enabled by default? Because
>> I didn't find memory affinity alone ( similar) parameter to set at 1.
Sorry,
I mean: is memory affinity enabled by default by setting mprocessor_affinity=1
in a NUMA-built OpenMPI?
2010/9/27 Gabriele Fatigati
> Dear OpenMPI users,
>
> if OpenMPI is numa-compiled, memory affinity is enabled by default? Because
> I didn't find memory affinity alone ( similar) para
Dear OpenMPI users,
if OpenMPI is NUMA-compiled, is memory affinity enabled by default? I ask because
I didn't find a standalone memory-affinity (or similar) parameter to set to 1.
Thanks a lot.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomput
> From: Jeff Squyres
> To: Open MPI Users
> Date: 09/23/2010 10:13 AM
> Subject: Re: [OMPI users] Question about Asynchronous collectives
> --
>
>
>
> On Sep 23, 2010, at 10:00 AM, Gabriele Fatigati wrote:
>
> &
ocess
MPI_IBcast(MPI_COMM_WORLD, request_1) // second Bcast for another process
Because the first Bcast of the second process would match the first Bcast of the
first process, which is wrong.
Is that right?
2010/9/23 Jeff Squyres
> On Sep 23, 2010, at 6:28 AM, Gabriele Fatigati wrote:
>
>
collective more time on one
communicator, but is it possible with different collectives?
Thanks a lot.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.it
Thanks Jeff,
and what about RDMA? Does it work only with point-to-point, or also with
collectives?
2010/9/22 Jeff Squyres
> On Sep 22, 2010, at 3:46 AM, Gabriele Fatigati wrote:
>
> > i'm tuning collectives of OpenMPI 1.4.2 with OTPO. I have a little
> question about BTL. Th
collective routine, performances can have very different
behaviour.
Thanks a lot.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.itTel: +39 051 61717
> Yes, however, it seems Gabriele is saying the total execution time
> *drops* by ~500 s when the barrier is put *in*. (Is that the right way
> around, Gabriele?)
>
> That's harder to explain as a sync issue.
>
>
>
> > On Sep 9, 2010, at 1:14 AM, Gabriele Fatigat
>
> On Sep 9, 2010, at 1:14 AM, Gabriele Fatigati wrote:
>
> More in depth,
>
> total execution time without Barrier is about 1 sec.
>
> Total execution time with Barrier+Reduce is 9453, with 128 procs.
>
> 2010/9/9 Terry Frankcombe
>
>> Gabriele,
>>
National University
> Ph: (+61) 0417 163 509Skype: terry.frankcombe
>
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA S
> Ph: (+61) 0417 163 509  Skype: terry.frankcombe
>
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomp
enMPI. I suspect the
same for other collective communications. Can someone explain to me why
MPI_Reduce has this strange behaviour?
Thanks in advance.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casale
y! :D
2010/7/15 Jeff Squyres
> We don't have any kind of logic language like that for the params files.
>
> Got any suggestions / patches?
>
>
> On Jul 15, 2010, at 8:37 AM, Gabriele Fatigati wrote:
>
> > Dear OpenMPI users,
> >
> > is it possible to
dvance.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.itTel: +39 051 6171722
g.fatigati [AT] cineca.it
ynamic_decision = 1; \
> EXECUTE; \
>
>
>
>
>
> On Jul 4, 2010, at 8:12 AM, Gabriele Fatigati wrote:
>
> > Dear OpenMPI user,
> >
> > i'm trying to use collective dynamic rules with OpenM
;"
ofa-v2-ipath0-2 u2.0 nonthreadsafe default libdaploscm.so.2 dapl.2.0 "ipath0
2" ""
ofa-v2-ehca0-1 u2.0 nonthreadsafe default libdaploscm.so.2 dapl.2.0 "ehca0
1" ""
ofa-v2-iwarp u2.0 nonthreadsafe default libdaplofa.so.2 dapl.2.0 "eth2 0" ""
It works only if I use the 1.2 interface, not with the 2.0 version.
Thanks in advance.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.itTel: +39 051 6171722
g.fatigati [AT] cineca.it
03ib0 exited
on signal 11 (Segmentation fault).
--
The same happens using other Bcast algorithms. Disabling dynamic rules, it works
well. Maybe I'm using some wrong parameter setup?
Thanks in advance.
--
Ing. Gabriele Fatigati
y.
2010/5/11 Gijsbert Wiesenekker
>
> On May 11, 2010, at 9:29 , Gabriele Fatigati wrote:
>
> Dear Gijsbert,
>
>
> >Ideally I would like to check how many MPI_Isend messages have not been
> processed yet, so that I can stop >sending messages if there are 't
'
> waiting. Is there a way to do this?
>
> Regards,
> Gijsbert
>
>
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Sy
Ok Jeff,
I have understood. Thanks very much for your help!
Regards.
2010/4/13 Jeff Squyres
> On Apr 13, 2010, at 9:17 AM, Gabriele Fatigati wrote:
>
> > My actual configuration is:
> >
> > btl = ^tcp
> > btl_tcp_if_exclude = eth0,ib0,ib1
> > oob_tcp_inclu
s/openmpi/1.3.3/intel--11.1--binary/etc/openmpi-mca-params.conf])
My actual configuration is:
btl = ^tcp
btl_tcp_if_exclude = eth0,ib0,ib1
oob_tcp_include = eth1,lo
But is it right? I have some doubt..
2010/4/13 Jeff Squyres
> On Apr 13, 2010, at 9:03 AM, Gabriele Fatigati wrote:
>
> &
e OMPI
> plugins got slurped up into their respective libraries (e.g., libmpi.a).
>
> If you run ompi_info --param btl tcp, do you see anything at all? If not,
> that would indicate that the TCP BTL wasn't built. IF so, can you send your
> build logs/etc.? (please compress!)
&
o file (and probably a .la file
> as well). If the .so is not there, then the BTL TCP plugin is not installed
> (which would be darn weird, to be honest...).
>
>
> On Apr 13, 2010, at 8:23 AM, Gabriele Fatigati wrote:
>
> > Hi Jeff,
> >
> > thaks for your reply
quot; or "
> 192.168.0.0/16,10.1.4.0/24"). Mutually exclusive with btl_tcp_if_include.
> mca:btl:tcp:param:btl_tcp_if_exclude:deprecated:no
> $
>
>
> Did your TCP BTL plugin somehow not get built / installed?
>
>
> On Apr 13, 2010, at 6:06 AM, Gabriele Fatigati wrote:
>
> > Dear OpenMPI users and
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.itTel: +39 051 6171722
g.fatigati [AT] cineca.it
You could try adding the --quiet option to your mpirun cmd line. This will
> help eliminate some (maybe not all) of the verbage.
>
>
> On Feb 24, 2010, at 6:36 AM, Jed Brown wrote:
>
> > On Wed, 24 Feb 2010 14:21:02 +0100, Gabriele Fatigati <
> g.fatig...@cineca.it> wrote
Yes, of course,
but I would like to know if there is any way to do that with OpenMPI.
2010/2/24 jody
> Hi Gabriele
> you could always pipe your output through grep
>
> my_app | grep "MPI_ABORT was invoked"
>
> jody
>
> On Wed, Feb 24, 2010 at 11:28 AM, Gabrie
can upgrade my OpenMPI if necessary.
Thanks.
2010/2/24 Nadia Derbey
> On Wed, 2010-02-24 at 09:55 +0100, Gabriele Fatigati wrote:
> >
> > Dear Openmpi users and developers,
> >
> > i have a question about MPI_Abort error message. I have a program
> > written in C+
.. But I'm interested in just the calling rank. Is that possible?
Thanks in advance.
I'm using openmpi 1.2.2
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.it
.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.itTel: +39 051 6171722
g.fatigati [AT] cineca.it
d by LSF and OpenMPI. I
have launched 255 procs but there are 161 tasks... very, very strange.
Any idea?
Thanks in advance.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.i
M,
> RSS/Atom, DNS, and Bonjour.
> - Opportunistic integration with Conflicker in order to utilize free
> resources distributed world-wide.
> - Support for all Fortran versions prior to Fortran 2020 has been
> dropped.
>
> Make today an Open MPI day!
>
>
ll also start much faster, as a bonus.
>
> HTH
> Ralph
>
>
> On Apr 1, 2009, at 3:58 AM, Gabriele Fatigati wrote:
>
>> Dear OpenMPI developers, m
>> i have a strange problem during running my application ( 2000
>> processors). I'm using openmpi 1.2.22 over
Thanks in advance.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.itTel: +39 051 6171722
g.fatigati [AT] cineca.it
recvcount, recvtype, i, 0,
> comm, &array_request[i-1]);
>
>
> In practice, the extent of a datatype should be equal to the size as
> reported by sizeof(datatype).
> Using MPI_Type_get_extent() is the portable way of doing this using MPI.
>
> Regards,
> Massimo
>
>
&
I_DOUBLE etc. Of course I would not create nor
> recommend to create new communicators for this purpose only.
>
> Kind regards,
> Massimo
>
> On 30/mar/09, at 17:43, Gabriele Fatigati wrote:
>
>> Dear OpenMPI developers,
>> i'm writing an MPI_Gather wrappe
I think isn't
portable. Is there an MPI function that does this check?
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.itTel: +39 051 6171722
g.fatigati [AT] cineca.it
line. This will force
> mpirun to map all procs on other nodes. If my analysis is correct, the job
> should run.
>
> Ralph
>
> On Feb 20, 2009, at 6:46 AM, Gabriele Fatigati wrote:
>
>> Dear OpenMPi developers,
>> i'm running my MPI code compiled with OpenMPI 1.3 ov
Dear OpenMPI developers,
I'm running my MPI code compiled with OpenMPI 1.3 over InfiniBand and the
LSF scheduler, but I got the attached error. I suppose that process spawning
doesn't work well. The same program under OpenMPI 1.2.5 works well.
Could you help me?
Thanks in advance.
--
Ing
ssh works well, but the problem is still here.
2009/2/17 jody :
> I got this ssh message when my workstation wasn't allowed access because of
> the
> settings in the files /etc/hosts.allow and /etc/hosts.deny on your ssh server.
> Jody
>
> On Mon, Feb 16, 2009 at 10:3
I have set LD_LIBRARY_PATH, but it still doesn't work.
Could you help me? Thanks in advance.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.it
(buffer_send, bufferLen, MPI_INT, send_to, tag, MPI_COMM_WORLD);
>
> to
>
>MPI_Send(buffer_send, bufferLen, MPI_INT, send_to, tag, MPI_COMM_WORLD);
>MPI_Irecv(buffer_recv, bufferLen, MPI_INT, recv_to, tag,
> MPI_COMM_WORLD, &request);
>
> ?
> Jody
>
>
communication is
already finished?
In attach you have my simple C test program.
Thanks in advance.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.it
the cycle is finished, are the a and b memory regions unregistered?
Thanks in advance.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.itTel: +39 051
per
>> node.
>> I had a problem with IMB when I was not able to run to completion Alltoall
>> on N=128, ppn=8 on our cluster with 16 GB per node. You'd think 16 GB is
>> quite a lot but when you do the maths:
>> 2* 4 MB * 128 procs * 8 procs/node = 8 GB/node plus
ging, it sounds like an Open MPI bug. You should never need to add an
> MPI_Barrier to make an MPI program correct.
>
>
>
> On Jan 23, 2009, at 8:09 AM, Gabriele Fatigati wrote:
>
>> Hi Igor,
>> My message size is 4096kb and i have 4 procs per core.
>> There isn
Hi Igor,
My message size is 4096kb and i have 4 procs per core.
There isn't any difference using different algorithms..
2009/1/23 Igor Kozin :
> what is your message size and the number of cores per node?
> is there any difference using different algorithms?
>
> 2009/1/23
with this strange problem, I think there is a
strange interaction between InfiniBand and OpenMPI that causes it.
2009/1/23 Jeff Squyres :
> On Jan 23, 2009, at 6:32 AM, Gabriele Fatigati wrote:
>
>> I've noted that OpenMPI has an asynchronous behaviour in the collective
>> c
er to lock all processes in the collective
call until it is finished? Otherwise I have to insert many MPI_Barrier calls
in my code, which is very tedious and strange.
Thanks in advance.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via
or an application problem? How can i solve it?
Thanks in advance.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.itTel: +39 051 6171722
g.fatigati [AT] cineca.it
[node0862:29190] [0,0,0] ORTE_ERROR_LOG: Out of resource in file
rmgr_urm.c at line 372
[node0862:29190] mpirun: spawn failed with errno=-2
2009/1/8 Gabriele Fatigati :
> Dear OpenMPI Developers,
> i'm running my jobs under OpenMPI 1.2.5 Intel compiled. Our cluster
> has Infiniba
> rmgr_urm.c at line 372
> [node0862:29190] mpirun: spawn failed with errno=-2
I don't understand whether the problem depends on OpenMPI, InfiniBand, or
something else. Any idea?
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanel
3rc3r20130
openmpi-1.3rc3r20107
openmpi-1.3rc3r20092
openmpi-1.3rc2r20084
Thanks in advance.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.it
Well,
thanks very much for your support.
Regards.
2008/12/15 Jeff Squyres :
> On Dec 15, 2008, at 7:59 AM, Gabriele Fatigati wrote:
>
>> Ok,
>> thanks a lot,
>> as soon as possible, I'll install the 1.3 version.
>>
>> But i don't understand
To see why you should switch, read the FAQ entries about Memchecker
> that start here:
> http://www.open-mpi.org/faq/?category=debugging#memchecker_what
>
> On Mon, Dec 15, 2008 at 9:34 AM, Gabriele Fatigati
> wrote:
>> PS: i'm using openmpi 1.2.5..
>>
>>
PS: i'm using openmpi 1.2.5..
2008/12/15 Gabriele Fatigati :
> Hi Jeff,
> I recompiled libibverbs and libmthca with the valgrind flags, but for
> strange reasons, only the warnings over MPI_Send disappear, while the
> warnings over MPI_Recv remain!
>
> 2008/12/15 Jeff Squyres :
>
libibverbs and libmthca with the valgrind flag,
> and use the enhanced memchecker support in v1.3.
>
> I have not personally verified that all the warnings disappear in this
> configuration (I was hoping to verify this somewhere during the v1.3
> series).
>
>
>
>>
>&
Hi Jeff,
I recompiled libmthca with the --with-valgrind flag, and modified the
environment variables, but the warnings don't disappear.
2008/12/14 Jeff Squyres :
> On Dec 14, 2008, at 8:21 AM, Gabriele Fatigati wrote:
>
>> I have a strange problem with OpenMPI 1.2.5, Intel compiled, when I
e.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.itTel: +39 051 6171722
g.fatig...@cineca.it
#include
#include
#include
#include
int main(
Dear OpenMPI developers,
I would like to know how I can export some environment variables by default,
without using the "-x" option after mpirun.
Is it possible to add some flags in openmpi-mca-params.conf?
Thanks.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems &am
t;>> if you look at recent versions of libnuma, there are two functions called
>>> numa_run_on_node() and numa_run_on_node_mask(), which allow thread-based
>>> assignments to CPUs
>>>
>>> Thanks
>>> Edgar
>>>
>>> Gabrie
Thanks
> Edgar
>
> Gabriele Fatigati wrote:
>>
>> Is there a way to assign one thread to one core? Also from code, not
>> necessary with OpenMPI option.
>>
>> Thanks.
>>
>> 2008/11/19 Stephen Wornom :
>>>
>>> Gabriele Fatigati wro
Is there a way to assign one thread to one core? Also from code, not
necessarily with an OpenMPI option.
Thanks.
2008/11/19 Stephen Wornom :
> Gabriele Fatigati wrote:
>>
>> Ok,
>> but in Ompi 1.3 how can i enable it?
>>
>
> This may not be relevant, but I could no
Ok,
but in OMPI 1.3 how can I enable it?
2008/11/18 Ralph Castain :
> I am afraid it is only available in 1.3 - we didn't backport it to the 1.2
> series
>
>
> On Nov 18, 2008, at 10:06 AM, Gabriele Fatigati wrote:
>
>> Hi,
>> how can I set "slot mapping"
his will ensure that the threads for
> that process have exclusive access to those cores, but will not bind a
> particular thread to one core - the threads can "move around" across the
> specified set of cores. Your threads will then be able to run without
> interfering with ea