Before this thread on "cache coherence" and "memory consistency" goes
any further, I'd like to suggest a time-out to read something like
http://www-ece.rice.edu/~sarita/Publications/models_tutorial.ps.
A lot of what I'm reading has a grain of truth but isn't quite
right. This paper appeared as a tutorial article in IEEE Computer
(December 1996).
On Mon, Jul 12, 1999 at 10:38:03PM -0700, Mike Smith wrote:
I said:
than indirect function calls on some architectures: inline
branched code. So you still have a global variable selecting
locked/non-locked, but it's a boolean, rather than a pointer.
Your atomic macros then test that boolean and branch to either
the locked or the non-locked instruction sequence.
Doug Rabson wrote:
On Mon, 12 Jul 1999, Peter Jeremy wrote:
Mike Haertel [EMAIL PROTECTED] wrote:
Um. FYI on x86, even if the compiler generates the RMW
form "addl $1, foo", it's not atomic. If you want it to
be atomic you have to precede the opcode with a LOCK
prefix 0xF0.
Although function calls are more expensive than inline code,
they aren't necessarily a lot more so, and function calls to
non-locked RMW operations are certainly much cheaper than
inline locked RMW operations.
This is a fairly key statement in context, and an opinion here would
count for a lot; are function calls likely to become more or less
expensive in time?
:
: Although function calls are more expensive than inline code,
: they aren't necessarily a lot more so, and function calls to
: non-locked RMW operations are certainly much cheaper than
: inline locked RMW operations.
:
:This is a fairly key statement in context, and an opinion here would
:count for a lot.
:I assumed too much in asking the question; I was specifically
:interested in indirect function calls, since this has a direct impact
:on method-style implementations.
Branch prediction caches are typically PC-sensitive. An indirect method
call will never be as fast as a direct call, because the CPU has to
predict the target rather than decode it straight out of the
instruction.
Mike Smith [EMAIL PROTECTED] wrote:
Although function calls are more expensive than inline code,
they aren't necessarily a lot more so, and function calls to
non-locked RMW operations are certainly much cheaper than
inline locked RMW operations.
This is a fairly key statement in context, and an opinion here would
count for a lot; are function calls likely to become more or less
expensive in time?
:
:I'm not sure there's any reason why you shouldn't. If you changed the
:semantics of a stack segment so that memory addresses below the stack
:pointer were irrelevant, you could implement a small, 0-cycle, on-chip
:stack (that overflowed into memory). I don't know whether this
:semantic has ever been implemented in real hardware, though.
:
:Based on general computer architecture principles, I'd say that a lock
:prefix is likely to become more expensive[1], whilst a function call
:will become cheaper[2] over time.
:...
:
:[1] A locked instruction implies a synchronous RMW cycle. In order
:to meet write-ordering guarantees, the processor has to stall until
:the locked read-modify-write completes, and that relative cost grows
:as CPU cores get faster.
On Mon, Jul 12, 1999 at 07:09:58PM -0700, Mike Smith wrote:
Although function calls are more expensive than inline code,
they aren't necessarily a lot more so, and function calls to
non-locked RMW operations are certainly much cheaper than
inline locked RMW operations.
This is a fairly key statement in context, and an opinion here would
count for a lot; are function calls likely to become more or less
expensive in time?
:...
I would also like to add a few more notes regarding write pipelines.
Write pipelines are not used any more, at least not long ones. The
reason is simply the cache coherency issue again. Until the data is
actually written into the L1 cache, it is not coherent with the other
CPUs' view of memory.
Second answer: in the real world, we're nearly always hitting the
cache on stack operations associated with calls and argument passing,
but less often on operations in the procedure body. So, in ...
This is a fairly key statement in context, and an opinion here would
count for a lot; are function calls likely to become more or less
expensive in time?
Ambiguous question.
First answer: assume we're hitting the cache, taking no branch
mispredicts, and everything is generally going at full pipeline speed.