Brian,
There seems to be a real problem in the one-sided implementation.
Using the block-indexed type from the test application, the following
call succeeds:
MPI_Put(mem, 1, mpit, 1, 0, 4, mpi_double3, win);
However, the call:
MPI_Put(mem, 4, mpi_double3, 1, 0, 1, mpit, win);
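For context, a minimal self-contained version of the scenario might look like
the following; the datatype construction, displacements, and buffer sizes are
assumptions on my part, since the test application isn't reproduced here
(run with at least two ranks):

  /* Sketch only: plausible definitions of mpi_double3 and mpit. */
  #include <mpi.h>

  int main(int argc, char **argv)
  {
      MPI_Datatype mpi_double3, mpit;
      MPI_Win      win;
      double       mem[64];
      int          displs[4] = {0, 1, 2, 3};   /* placeholder displacements */
      int          rank;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      /* a contiguous triple of doubles */
      MPI_Type_contiguous(3, MPI_DOUBLE, &mpi_double3);
      MPI_Type_commit(&mpi_double3);

      /* block-indexed type: 4 blocks of one mpi_double3 each */
      MPI_Type_create_indexed_block(4, 1, displs, mpi_double3, &mpit);
      MPI_Type_commit(&mpit);

      MPI_Win_create(mem, sizeof(mem), sizeof(double),
                     MPI_INFO_NULL, MPI_COMM_WORLD, &win);

      MPI_Win_fence(0, win);
      if (rank == 0)   /* the call that succeeds: one mpit origin, four mpi_double3 target */
          MPI_Put(mem, 1, mpit, 1, 0, 4, mpi_double3, win);
      MPI_Win_fence(0, win);
      if (rank == 0)   /* the reverse pairing, reported as problematic above */
          MPI_Put(mem, 4, mpi_double3, 1, 0, 1, mpit, win);
      MPI_Win_fence(0, win);

      MPI_Win_free(&win);
      MPI_Type_free(&mpit);
      MPI_Type_free(&mpi_double3);
      MPI_Finalize();
      return 0;
  }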
Richard Graham wrote:
Re: [OMPI devel] shared-memory allocations
The memory allocation is intended to take
into account that two separate procs may be touching the same memory,
so the intent is to reduce cache conflicts (false sharing)
Got it. I'm totally fine with that. Separate cachelines
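Just to spell out the idea being agreed on here, it is the usual padding trick;
the 64-byte line size, the struct layout, and the array size below are
illustrative assumptions, not the actual sm allocator code:

  /* Keep each proc's control data on its own cache line so two procs
   * updating neighboring entries don't falsely share a line. */
  #define CACHE_LINE_SIZE 64
  #define MAX_LOCAL_PROCS 16

  typedef struct {
      volatile int head;   /* fields that different procs poll and update */
      volatile int tail;
      char pad[CACHE_LINE_SIZE - 2 * sizeof(int)];  /* pad out to a full line */
  } padded_ctl_t;

  /* one entry per local proc; each entry starts on a fresh cache line
   * (the aligned attribute is GCC/Clang syntax) */
  static padded_ctl_t ctl[MAX_LOCAL_PROCS]
      __attribute__((aligned(CACHE_LINE_SIZE)));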
Eugene,
As you noticed, add_procs only adds processes to the list of available
processes without trying to set up any connections to them. As a result,
when we return from add_procs, it is very unlikely that we will be able
to accurately detect any connection problems.
The connections are established
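Presumably that means lazily, on first use; conceptually something like the
sketch below, which is only a schematic of the behavior, not the real BTL
interface:

  #include <stddef.h>

  typedef struct { int connected; /* ... endpoint state ... */ } endpoint_t;

  /* hypothetical helpers standing in for the real wire-up and send paths */
  extern int establish_connection(endpoint_t *peer);
  extern int do_send(endpoint_t *peer, const void *buf, size_t len);

  int add_procs(endpoint_t *peers, int npeers)
  {
      for (int i = 0; i < npeers; i++)
          peers[i].connected = 0;   /* no wire-up here, so nothing can fail yet */
      return 0;                     /* "success" even if a peer is unreachable */
  }

  int send_to(endpoint_t *peer, const void *buf, size_t len)
  {
      if (!peer->connected) {
          /* connection problems only surface here, long after add_procs returned */
          if (establish_connection(peer) != 0)
              return -1;
          peer->connected = 1;
      }
      return do_send(peer, buf, len);
  }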
WHAT: Add new tool to retrieve/monitor process stats
WHY: Several of us have had user requests to provide a
convenient way of obtaining reports
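To give a concrete idea of the kind of data such a tool would pull, on Linux
the raw numbers would presumably come from /proc; the snippet below is only a
sketch of that source, not the proposed tool or its interface:

  /* Print the virtual size and resident set of one process from /proc (Linux). */
  #include <stdio.h>
  #include <sys/types.h>
  #include <unistd.h>

  static int print_statm(pid_t pid)
  {
      char path[64];
      unsigned long pages_total, pages_resident;
      FILE *fp;

      snprintf(path, sizeof(path), "/proc/%d/statm", (int)pid);
      fp = fopen(path, "r");
      if (fp == NULL)
          return -1;
      if (fscanf(fp, "%lu %lu", &pages_total, &pages_resident) != 2) {
          fclose(fp);
          return -1;
      }
      fclose(fp);
      printf("pid %d: %lu pages total, %lu resident\n",
             (int)pid, pages_total, pages_resident);
      return 0;
  }

  int main(void)
  {
      return print_statm(getpid());
  }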
It has been a long time since I wrote the original code, and things have
changed a fair amount since that time, so bear this in mind.
The memory allocation is intended to take into account that two separate
procs may be touching the same memory, so the intent is to reduce cache
conflicts (false sharing).
Hi Ralph,
The Mac OS X affinity stuff doesn't work like Linux, etc.
The document I pointed to is sparse in details, but basically
they didn't leave a way for unrelated processes to affect
how they are scheduled relative to each other. Only
processes/threads that have a fork/thread-spawn ancestor
c
But we don't want the child to inherit affinity from the orted anyway,
so I don't see why the exec call is an issue for us. The MPI proc sets
its own affinity during MPI_Init using the paffinity framework, so it
looks to me like the only thing missing is the correct set_affinity
code in the
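For comparison, on Linux the "bind yourself during MPI_Init" step boils down
to roughly the following (a sketch, not the actual paffinity component code):

  /* Bind the calling process to a single core on Linux. */
  #define _GNU_SOURCE
  #include <sched.h>

  int bind_self_to_core(int core)
  {
      cpu_set_t mask;
      CPU_ZERO(&mask);
      CPU_SET(core, &mask);
      return sched_setaffinity(0, sizeof(mask), &mask);  /* pid 0 = calling process */
  }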
Hello,
I just ran across this document from Apple that describes
the Thread affinity scheme that was added in Leopard.
http://developer.apple.com/releasenotes/Performance/RN-AffinityAPI/
In its current form, and given how orteds start the MPI ranks with exec,
we can't use this, AFAIK. However, if someone
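For reference, the API in that document amounts to a thread tagging itself
with an affinity set; threads sharing a tag are hinted to share an L2 cache,
and only threads within one fork/spawn family can coordinate this way. An
untested sketch:

  #include <mach/mach.h>
  #include <mach/thread_policy.h>

  /* Tag the calling thread with an affinity set (Leopard and later). */
  kern_return_t set_affinity_tag(int tag)
  {
      thread_affinity_policy_data_t policy = { tag };
      return thread_policy_set(mach_thread_self(),
                               THREAD_AFFINITY_POLICY,
                               (thread_policy_t)&policy,
                               THREAD_AFFINITY_POLICY_COUNT);
  }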
From our perspective, it would be good if it could default to the old
behavior (in 1.3 if possible).
Thanks,
Greg
On Dec 8, 2008, at 11:42 AM, Ralph Castain wrote:
I don't think there was any overt thought given to it, at least not
on my part. I suspect it came about because (a) the wiki de
Yes, this is a problem, but not quite in the way you describe (I think
a hodgepodge of BTLs for final connectivity is fine).
I found similar issues a while ago if the openib BTL opens properly
but then fails in add_procs() for some reason. Check out these
tickets -- 1434 points to some dis
On Dec 10, 2008, at 1:11 PM, Eugene Loh wrote:
For shared memory communications, each on-node connection (non-self,
sender-receiver pair) gets a circular buffer during MPI_Init().
Each CB requires the following allocations:
*) ompi_cb_fifo_wrapper_t (roughly 64 bytes)
*) ompi_cb_fifo_ctl_t
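(Back-of-the-envelope: with n procs on a node that is n*(n-1) sender-receiver
pairs, so the wrappers alone scale quadratically. Only the roughly 64-byte
wrapper figure comes from the list above; every other per-CB cost is left out
here.)

  /* Rough per-node circular-buffer overhead; other per-CB costs unknown here. */
  #include <stdio.h>

  int main(void)
  {
      const unsigned long wrapper_bytes = 64;   /* ompi_cb_fifo_wrapper_t, per above */

      for (int n = 2; n <= 64; n *= 2) {
          unsigned long pairs = (unsigned long)n * (n - 1);  /* non-self, directed */
          printf("%2d procs/node: %lu CBs, >= %lu bytes just in wrappers\n",
                 n, pairs, pairs * wrapper_bytes);
      }
      return 0;
  }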