On 31/01/12 20:51, Samuel Thibault wrote:
> Could you try the attached patch?
Doesn't appear to apply cleanly to the 1.4 release; it doesn't
like the comment change. The other hunk does fix the build issue,
though. Thanks!
On Jan 31, 2012, at 3:20 PM, Brice Goglin wrote:
> In 1.5.x, cache info doesn't matter as far as I know.
>
> In trunk, the affinity code has been reworked. I think you can bind
> processes to caches there. Binding to L2 wouldn't work as expected (it
> would bind to one core instead of two). hwloc
On 31/01/2012 19:02, Dave Love wrote:
>> FWIW, the Linux kernel (at least up to 3.2) still reports wrong L2 and
>> L1i cache information on AMD Bulldozer. Kernel bug reported at
>> https://bugzilla.kernel.org/show_bug.cgi?id=42607
> I assume that isn't relevant for Open MPI, just for other things.
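A quick way to see what hwloc itself believes about the caches on such a
machine is to walk the cache objects with the hwloc 1.x C API. A minimal
sketch (illustrative only, not a tested program; compile with -lhwloc):

    #include <hwloc.h>
    #include <stdio.h>

    int main(void)
    {
        hwloc_topology_t topo;
        hwloc_topology_init(&topo);
        hwloc_topology_load(topo);

        /* Print the level, size, and coverage of every cache object. */
        int n = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_CACHE);
        for (int i = 0; i < n; i++) {
            hwloc_obj_t c = hwloc_get_obj_by_type(topo, HWLOC_OBJ_CACHE, i);
            printf("L%u cache: %llu KB, covers %d PU(s)\n",
                   c->attr->cache.depth,
                   (unsigned long long)(c->attr->cache.size / 1024),
                   hwloc_bitmap_weight(c->cpuset));
        }

        hwloc_topology_destroy(topo);
        return 0;
    }

On a Bulldozer box hit by the kernel bug above, the L2 and L1i lines will
reflect whatever (wrong) information the kernel exports.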
On 31.01.2012 at 20:38, Ralph Castain wrote:
> Not sure I fully grok this thread, but will try to provide an answer.
>
> When you start a singleton, it spawns off a daemon that is the equivalent of
> "mpirun". This daemon is created for the express purpose of allowing the
> singleton to use
Not sure I fully grok this thread, but will try to provide an answer.
When you start a singleton, it spawns off a daemon that is the equivalent of
"mpirun". This daemon is created for the express purpose of allowing the
singleton to use MPI dynamics like comm_spawn - without it, the singleton
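For concreteness, a singleton started as plain ./parent can still spawn
children this way, because the daemon provides the mpirun-like services. A
minimal sketch of such a parent (the "worker" executable name is made up
for illustration):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm children;
        int errs[4];

        MPI_Init(&argc, &argv);

        /* Spawn 4 copies of a hypothetical "worker" executable; the
         * resulting intercommunicator links parent and children. */
        MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL, 0,
                       MPI_COMM_SELF, &children, errs);

        /* ... communicate with the children over "children" ... */

        MPI_Comm_disconnect(&children);
        MPI_Finalize();
        return 0;
    }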
We have heard reports of failures with the Intel 12.1 compilers.
Can you try with rc4 (that was literally just released) with the
--without-memory-manager configure option?
On Jan 31, 2012, at 2:19 PM, Daniel Milroy wrote:
> Hello,
>
> I have built Open MPI 1.4.5rc2 with Intel 12.1 compilers
Hello,
I have built Open MPI 1.4.5rc2 with the Intel 12.1 compilers in an HPC
environment. We are running RHEL 5, kernel 2.6.18-238, with Intel Xeon
X5660 CPUs. You can find my build options below. In an effort to
test the Open MPI build, I compiled "Hello world" with an MPI_Init call
in C and
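For reference, a minimal MPI "Hello world" of the kind described (a
reconstruction, not Daniel's exact code) looks like this:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("Hello world from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }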
On 31.01.2012 at 20:12, Jeff Squyres wrote:
> I only noticed after the fact that Tom is also here at Cisco (it's a big
> company, after all :-) ).
>
> I've contacted him using our proprietary super-secret Cisco handshake (i.e.,
> the internal phone network); I'll see if I can figure out the
I only noticed after the fact that Tom is also here at Cisco (it's a big
company, after all :-) ).
I've contacted him using our proprietary super-secret Cisco handshake (i.e.,
the internal phone network); I'll see if I can figure out the issues off-list.
On Jan 31, 2012, at 1:08 PM, Dave Love
Reuti writes:
> Maybe it's a side effect of the tight integration that it starts on
> the correct nodes (but I see an incorrect allocation of slots and an
> error message at the end if started without mpiexec), as in this case
> it has no command-line option for
Brice Goglin writes:
> Note that Magny-Cours processors are OK; cores are "normal" there.
Apologies for the bad guess about the architecture, and thanks for the
info.
> FWIW, the Linux kernel (at least up to 3.2) still reports wrong L2 and
> L1i cache information on AMD
Götz,
Sorry, I was in a rush and missed that.
Here is some further information on the compiler options I used
for the 1.5.5 build:
[richard.walsh@bob linux]$ pwd
/share/apps/openmpi-intel/1.5.5/build/opal/mca/memory/linux
[richard.walsh@bob linux]$ make -n malloc.o
echo " CC"
Hi All,
I'm having this weird problem when running a very simple Open MPI application.
The application sends an integer from the rank 0 process to the rank 1 process.
The sequence of code that I use to accomplish this is the following:
if (rank == 0)
{
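The snippet is cut off above; a self-contained version of the pattern
described (rank 0 sends one integer to rank 1; reconstructed for
illustration, not the poster's exact code) would be:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value = 42;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Rank 0 sends one integer to rank 1, tag 0. */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }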
Ok Jeff, thanks very much for your support!
Regards,
2012/1/31 Jeff Squyres
> On Jan 31, 2012, at 3:59 AM, Gabriele Fatigati wrote:
>
> > I have very interesting news. I recompiled Open MPI 1.4.4 with the
> > memchecker enabled.
> >
> > Now the warning on strcmp has disappeared
Thanks, Samuel. This patch fixes the build issue.
-Devendar
On Tue, Jan 31, 2012 at 4:51 AM, Samuel Thibault wrote:
> Could you try the attached patch?
>
> Samuel
>
On 31.01.2012 at 05:33, Tom Bryan wrote:
>> Suppose you want to start 4 additional tasks; you would need 5 in total from
>> SGE.
>
> OK, thanks. I'll try other values.
BTW: there is a setting in the PE definition to allow one additional task:
$ qconf -sp openmpi
...
job_is_first_task FALSE
On Jan 31, 2012, at 8:49 AM, Brice Goglin wrote:
> Unless I am mistaken, OMPI 1.5.4 has hwloc 1.2
Correct.
> while 1.5.5 will have
> 1.2.2 or even 1.3.1. So don't use core binding on Interlagos with
> OMPI<=1.5.4.
OMPI 1.5.5rc1 has hwloc 1.3.1 + a few SVN commits past it.
Per some off-list
On 31/01/2012 14:24, Jeff Squyres wrote:
> On Jan 31, 2012, at 6:18 AM, Dave Love wrote:
>
>> Core binding is broken on Interlagos with Open MPI 1.5.4. I guess it
>> also bites on Magny-Cours, but all our systems are currently busy and I
>> can't check.
>>
>> It does work, at least basically,
On Jan 31, 2012, at 6:18 AM, Dave Love wrote:
> Core binding is broken on Interlagos with Open MPI 1.5.4. I guess it
> also bites on Magny-Cours, but all our systems are currently busy and I
> can't check.
>
> It does work, at least basically, in 1.5.5rc1, but the release notes for
> that don't
On Jan 31, 2012, at 3:59 AM, Gabriele Fatigati wrote:
> I have very interesting news. I recompiled Open MPI 1.4.4 with the
> memchecker enabled.
>
> Now the warning on strcmp has disappeared, even without initializing the
> buffers with memset!
>
> So is the warning a false positive? My simple code
This is to help anyone else having this problem, as it doesn't seem to
be mentioned anywhere I can find, rather surprisingly.
Core binding is broken on Interlagos with Open MPI 1.5.4. I guess it
also bites on Magny-Cours, but all our systems are currently busy and I
can't check.
It does work,
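One way to double-check where a process actually ended up, independent of
mpirun's --report-bindings output, is to ask hwloc from inside the process.
A sketch against the hwloc 1.x API (illustrative only):

    #include <hwloc.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        hwloc_topology_t topo;
        hwloc_bitmap_t set = hwloc_bitmap_alloc();
        char *s;

        hwloc_topology_init(&topo);
        hwloc_topology_load(topo);

        /* Which PUs is the current process bound to? */
        hwloc_get_cpubind(topo, set, HWLOC_CPUBIND_PROCESS);
        hwloc_bitmap_asprintf(&s, set);
        printf("bound to cpuset %s\n", s);

        free(s);
        hwloc_bitmap_free(set);
        hwloc_topology_destroy(topo);
        return 0;
    }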
On 31.01.2012 at 06:33, Rayson Ho wrote:
> On Mon, Jan 30, 2012 at 11:33 PM, Tom Bryan wrote:
>> For our use, yes, spawn_multiple makes sense. We won't be spawning lots and
> > lots of jobs in quick succession. We're using MPI as a robust way to get
>> IPC as we spawn
Dear Jeff,
I have very interesting news. I recompiled Open MPI 1.4.4 with the
memchecker enabled.
Now the warning on strcmp has disappeared, even without initializing the
buffers with memset!
So is the warning a false positive? Is my simple code safe?
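For reference, the pattern in question is roughly the following (a sketch
for illustration; the buffer name and sizes are made up). Without the
memchecker, valgrind may consider the bytes written by MPI_Recv undefined
and flag the strcmp; with --enable-memchecker, Open MPI tells valgrind the
received bytes are defined, which is consistent with the warning going away:

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        char buf[16];   /* deliberately not cleared with memset() */
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            strcpy(buf, "hello");
            MPI_Send(buf, 6, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(buf, 6, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            /* valgrind may flag this strcmp as a read of uninitialized
             * memory unless the received bytes are marked defined. */
            if (strcmp(buf, "hello") == 0)
                printf("match\n");
        }

        MPI_Finalize();
        return 0;
    }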
Thanks.
2012/1/28 Jeff Squyres
On Mon, Jan 30, 2012 at 5:11 PM, Richard Walsh wrote:
> I have not seen this mpirun error with the Open MPI version I have built
> with Intel 12.1 and the mpicc fix:
> openmpi-1.5.5rc1.tar.bz2
Hi,
I haven't tried that version yet. I was trying to build a
supplementary
On Mon, Jan 30, 2012 at 11:33 PM, Tom Bryan wrote:
> For our use, yes, spawn_multiple makes sense. We won't be spawning lots and
> lots of jobs in quick succession. We're using MPI as a robust way to get
> IPC as we spawn multiple child processes while using SGE to help us
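For what it's worth, spawn_multiple differs from plain spawn in taking
arrays of commands and counts; a minimal sketch (the executable names are
made up for illustration):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        /* Launch two different (hypothetical) child executables at once. */
        char *cmds[2] = { "worker_a", "worker_b" };
        int procs[2] = { 2, 2 };
        MPI_Info infos[2] = { MPI_INFO_NULL, MPI_INFO_NULL };
        MPI_Comm children;

        MPI_Init(&argc, &argv);
        MPI_Comm_spawn_multiple(2, cmds, MPI_ARGVS_NULL, procs, infos, 0,
                                MPI_COMM_SELF, &children,
                                MPI_ERRCODES_IGNORE);
        MPI_Comm_disconnect(&children);
        MPI_Finalize();
        return 0;
    }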