To expand slightly on Patrick's last comment:
> Cache prefetching is slightly more efficient on the local socket, so
> closer to the reader may be a bit better.
Ideally one polls from cache, but in the event that the line is evicted,
the next poll after the eviction will pay a lower cost if the memory is
on the reader's local socket.
Richard Graham wrote:
Yes - it is polling volatile memory, so it has to load from memory on
every read.
Actually, it will poll in cache, and only load from memory when the
cache coherency protocol invalidates the cache line. The volatile
semantics only prevent compiler optimizations; they do not make the
hardware bypass the cache and reload the line on every read.
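As a minimal sketch of the point (a hedged illustration, not OMPI's
actual sm BTL code; the mailbox layout is assumed):

#include <stdint.h>

/* One flag that a peer process sets through shared memory. */
typedef struct {
    volatile uint32_t flag;    /* written by the producer process */
} sm_mailbox_t;

static void poll_for_message(sm_mailbox_t *mb)
{
    /* "volatile" only stops the compiler from hoisting this load out
     * of the loop.  The hardware still serves each read from the local
     * cache; only after the writer's store invalidates the line does
     * one re-load pay a miss. */
    while (mb->flag == 0) {
        ;   /* spin; a pause/yield hint could go here */
    }
}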
No problem-o.
George -- can you please file a bug?
On Dec 13, 2008, at 3:11 PM, Brian Barrett wrote:
Sorry, I really won't have time to look until after Christmas. I'll
put it on the to-do list, but that's as soon as it has a prayer of
reaching the top.
Brian
On Dec 13, 2008, at 1:02 PM, George Bosilca wrote:
Brian,
I found a second problem with rebuilding the datatype on the remote.
Originally, the displacements were wrongly computed. This is now fixed.
However, the data at the end of the fence is still not correct on the
remote.
I can confirm that the packed message contains only 0s instead of the
expected data.
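The failing test itself is not in the thread; purely as a hedged
illustration, a non-contiguous datatype whose remote rebuild depends on
the transmitted displacements could look like this (the offsets are
made up):

#include <mpi.h>

/* Two ints at byte offsets 0 and 16: if the receiver reconstructs the
 * type with wrong displacements, unpacking reads the zero-filled gaps
 * instead of the payload. */
void build_example_type(MPI_Datatype *newtype)
{
    int          blocklens[2] = { 1, 1 };
    MPI_Aint     disps[2]     = { 0, 16 };
    MPI_Datatype types[2]     = { MPI_INT, MPI_INT };

    MPI_Type_create_struct(2, blocklens, disps, types, newtype);
    MPI_Type_commit(newtype);
}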
> On 12/12/08 8:21 PM, "Eugene Loh" wrote:
>
> Richard Graham wrote:
> The memory allocation is intended to take into account that two separate
procs may be touching the same memory, so the intent is to reduce cache
conflicts (false sharing).
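A minimal sketch of that idea, assuming a 64-byte cache line and C11
alignment (real code should query the line size at runtime):

#include <stdalign.h>
#include <stdint.h>

#define CACHE_LINE_SIZE 64    /* assumption; query the platform in real code */

/* One slot per process, padded and aligned to a full cache line so two
 * procs writing their own slots never touch the same line. */
typedef struct {
    alignas(CACHE_LINE_SIZE) volatile uint64_t head;  /* written by its owner */
    char pad[CACHE_LINE_SIZE - sizeof(uint64_t)];     /* keeps the next slot off this line */
} per_proc_slot_t;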
This works for me. LAM had a similar tool to query daemons and find the
current state of running MPI procs (although it didn't get top-like
statistics of the apps).
On Dec 12, 2008, at 3:20 PM, Ralph Castain wrote:
---