On Thu, Jul 29, 2010 at 9:32 AM, William Leslie <[email protected]> wrote:
> On 29 July 2010 17:27, Maciej Fijalkowski <[email protected]> wrote:
>> On Thu, Jul 29, 2010 at 7:18 AM, William Leslie
>> <[email protected]> wrote:
>>> When an object is mutable, it must be visible to at most one thread.
>>> This means it can participate in return values, arguments and queues,
>>> but the sender cannot keep a reference to an object it sends, because
>>> if the receiver mutates the object, this will need to be reflected in
>>> the sender's thread to ensure internal consistency. Well, you could
>>> ignore internal consistency, require explicit locking, and have it
>>> segfault when the change to the length of your list has propagated but
>>> not the element you have added, but that wouldn't be much fun. The
>>> alternative, implicitly writing updates back to memory as soon as
>>> possible and reading them out of memory every time, can be hundreds or
>>> more times slower. So you really can't have two tasks sharing mutable
>>> objects, ever.
>>>
>>> --
>>> William Leslie
>>
>> Hi.
>>
>> Do you have any data points supporting your claim?
>
> About the performance of programs that involve a cache miss on every
> memory access, or internal consistency?
>
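To make the hazard concrete, here is a minimal sketch, not from the thread itself, using a hypothetical TinyList class whose append publishes the new length in one Python-level step and writes the element in another. In a free-threaded runtime, and occasionally even under the GIL when a thread switch lands between the two steps, a reader can observe the length advanced before the element exists - the internal inconsistency described above.

import threading

class TinyList:
    """Hypothetical toy container: append publishes the new length in one
    step and writes the element in another, so the two can be observed
    out of sync by another thread."""

    def __init__(self, capacity):
        self._items = [None] * capacity
        self._length = 0

    def append(self, value):
        i = self._length
        self._length = i + 1        # step 1: length moves forward...
        self._items[i] = value      # step 2: ...before the element is written

    def snapshot(self):
        # Unsynchronized read of both fields, as a sender/receiver pair
        # sharing the object without locks would do.
        n = self._length
        return n, (self._items[n - 1] if n else None)


N = 200_000
shared = TinyList(N)
torn = []

def sender():
    # The sender keeps its reference to `shared` and keeps mutating it.
    for value in range(N):
        shared.append(value)

def receiver():
    for _ in range(N):
        length, last = shared.snapshot()
        if length and last is None:   # length advanced, element not yet visible
            torn.append(length)

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t1.start(); t2.start()
t1.join(); t2.join()
print("inconsistent snapshots observed:", len(torn))

Whether the count comes out nonzero on any given run depends on scheduling; the point is the pattern, not the number.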
I think I lost an implication here. Did I get you right - you claim that per-object locking, in the case where threads share objects, is very expensive? If not, I completely misunderstood you and my question makes no sense; please explain. If yes, why does it imply a cache miss on every read/write?

Cheers,
fijal
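For reference, "per-object locking" in the sense asked about could look like the hypothetical sketch below: every access to the shared object goes through that object's own lock. A single-threaded timing like this only shows the cost of the locking protocol itself; it says nothing about the cross-core cache traffic the question is really about, which only appears when threads on different cores bounce the lock word and the object's data between their caches.

import threading
import timeit

class LockedList:
    """Hypothetical per-object locking: every access takes this object's own lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self._items = []

    def append(self, value):
        with self._lock:
            self._items.append(value)

    def __len__(self):
        with self._lock:
            return len(self._items)

plain = []
locked = LockedList()

# Uncontended, single-threaded cost of taking the object's lock on every
# write; contended multi-core numbers would look different.
t_plain = timeit.timeit(lambda: plain.append(1), number=1_000_000)
t_locked = timeit.timeit(lambda: locked.append(1), number=1_000_000)
print("plain append:  %.3fs" % t_plain)
print("locked append: %.3fs (%.1fx)" % (t_locked, t_locked / t_plain))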
