On Jun 8, 2010, at 2:44 PM, Eugene Loh wrote:
> What does "disabling threads" mean?
I was specifically referring to --disable-opal-multi-threads and
--disable-mpi-thread-multiple (couldn't remember the names of the options when
I typed this up).
One thing I just noticed -- these warnings onl
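For reference, the build being described would be configured along these lines (treat the exact flag set as a sketch; availability varies by Open MPI version):

    ./configure --disable-opal-multi-threads --disable-mpi-thread-multiple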
As stated on the conf call, I did some performance testing on a 32-core
node.
So, here is a graph showing 500 timings of an allreduce operation (each
repeated 15,000 times for good timing) with sysv, mmap on /dev/shm, and
mmap on /tmp.
What it shows:
- sysv has the best performance;
- having
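A minimal C sketch of the kind of timing loop described above. The datatype, count, and reduction op are assumptions on my part; the message size behind the graph isn't stated:

    #include <mpi.h>
    #include <stdio.h>

    /* 500 samples, each timing 15,000 back-to-back allreduces. */
    #define NSAMPLES 500
    #define REPS     15000

    int main(int argc, char **argv)
    {
        double in = 1.0, out = 0.0;
        int rank, s, r;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (s = 0; s < NSAMPLES; ++s) {
            MPI_Barrier(MPI_COMM_WORLD);
            double t0 = MPI_Wtime();
            for (r = 0; r < REPS; ++r) {
                MPI_Allreduce(&in, &out, 1, MPI_DOUBLE, MPI_SUM,
                              MPI_COMM_WORLD);
            }
            double t1 = MPI_Wtime();
            if (0 == rank) {
                printf("%d %g\n", s, (t1 - t0) / REPS);
            }
        }

        MPI_Finalize();
        return 0;
    }

Which shared-memory backing gets used (sysv, mmap on /dev/shm, mmap on /tmp) is an Open MPI configuration detail; the benchmark itself doesn't change.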
Thanks Sylvain!
--
Samuel K. Gutierrez
Los Alamos National Laboratory
On Jun 9, 2010, at 9:58 AM, Sylvain Jeaugey wrote:
As stated on the conf call, I did some performance testing on a 32-core
node.
So, here is a graph showing 500 timings of an allreduce operation (each
repeated 15,000 times for good timing) with sysv, mmap on /dev/shm, and
mmap on /tmp.
Now in the trunk (see r23260).
Thanks everyone!
--
Samuel K. Gutierrez
Los Alamos National Laboratory
On Jun 1, 2010, at 11:08 AM, Samuel K. Gutierrez wrote:
WHAT: New System V shared memory component.
WHY: https://svn.open-mpi.org/trac/ompi/ticket/1320
WHERE:
M ompi/mca/btl/sm/btl_sm.
Iiinteresting.
This, of course, raises the question of whether we should use sysv shmem or not.
It seems like the order of preference should be:
- sysv
- mmap in a tmpfs
- mmap in a "regular" (but not networked) fs
The big downer, of course, is the whole "what happens if the job crashes?"
issue.
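To make the mmap alternatives concrete, here is a minimal C sketch of the mmap-in-a-tmpfs approach (the file name is hypothetical and error handling is trimmed); the "regular fs" variant is identical except the backing file lives under /tmp:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/dev/shm/ompi-sm-example";  /* hypothetical name */
        size_t size = 4096;

        /* Create and size the backing file in a tmpfs. */
        int fd = open(path, O_CREAT | O_RDWR, 0600);
        if (fd < 0 || ftruncate(fd, (off_t) size) != 0) {
            perror("backing file");
            return 1;
        }

        /* Map it shared; peers open() + mmap() the same path. */
        char *seg = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                         fd, 0);
        if (MAP_FAILED == (void *) seg) {
            perror("mmap");
            return 1;
        }
        strcpy(seg, "hello");

        /* The crash concern above: if the job dies before unlink(),
         * the file lingers in /dev/shm. */
        munmap(seg, size);
        close(fd);
        unlink(path);
        return 0;
    }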
If anyone is up for it, another interesting performance comparison could
be start-up time. That is, consider a fat node with many on-node
processes and a large shared-memory area. How long does it take for all
that shared memory to be set up? Arguably, start-up time is a
"second-order effect
On Jun 4, 2010, at 5:02 PM, Peter Thompson wrote:
> It was suggested by our CTO that if these files were compiled so as to
> produce STABS debug info, rather than DWARF, then the debug info would
> be copied into the executables and shared libraries, and we would then
> be able to debug with Open MPI
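For context, with GCC that would mean building with -gstabs (or -gstabs+) rather than a DWARF flavor such as -gdwarf-2; whether a given toolchain still handles STABS well is another question.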
On Jun 9, 2010, at 11:57 AM, Jeff Squyres wrote:
Iiinteresting.
This, of course, raises the question of whether we should use sysv
shmem or not. It seems like the order of preference should be:
- sysv
- mmap in a tmpfs
- mmap in a "regular" (but not networked) fs
The big downer, of course, is the whole "what happens if the job
crashes?" issue.
On Jun 9, 2010, at 3:26 PM, Samuel K. Gutierrez wrote:
> System V shared memory cleanup is a concern only if a process dies in
> between shmat and shmctl IPC_RMID. Shared memory segment cleanup
> should happen automagically in most cases, including abnormal process
> termination.
Umm... right.
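The lifecycle Samuel is describing, as a minimal C sketch (error handling trimmed; a real component would hand the segment id to its peers out of band rather than use IPC_PRIVATE like this):

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
        if (id < 0) { perror("shmget"); return 1; }

        void *seg = shmat(id, NULL, 0);
        if ((void *) -1 == seg) { perror("shmat"); return 1; }

        /* Mark for removal immediately: the kernel now reclaims the
         * segment once the last attached process detaches or dies,
         * abnormal termination included.  The only leak window is a
         * crash between shmat() and this call. */
        if (shmctl(id, IPC_RMID, NULL) != 0) { perror("shmctl"); return 1; }

        /* ... use seg ... */
        shmdt(seg);
        return 0;
    }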
Jeff Squyres wrote:
On Jun 9, 2010, at 3:26 PM, Samuel K. Gutierrez wrote:
System V shared memory cleanup is a concern only if a process dies in
between shmat and shmctl IPC_RMID. Shared memory segment cleanup
should happen automagically in most cases, including abnormal process
termination.
Jeff Squyres wrote:
On Jun 4, 2010, at 5:02 PM, Peter Thompson wrote:
It was suggested by our CTO that if these files were compiled so as to
produce STABS debug info, rather than DWARF, then the debug info would
be copied into the executables and shared libraries, and we would then
be able to debug with Open MPI