Ulrich,

Ulrich Hertlein wrote:
> Hi Cory,
>
> On 13/10/09 3:47 PM, Cory Riddell wrote:
>> As I understand it, basically every core has its own cache. So, if some
>> data is shared between multiple threads, it may be loaded into multiple
>> caches all at once. If one thread makes a change, the value cached by
>> the other core/thread is now stale. Synchronizing flushes the caches, so
>> any pending writes are made to the main store. The next time anybody
>> tries to read that value, it won't be in their cache and the data will
>> be read from the main store.
>
> Just to clarify:
>
> Yes, CPUs usually have their own caches and data can be different in
> the different caches.  However, this isn't what synchronization is
> solving.  This problem (cache coherency) is handled by the hardware.
>
> The reason you need to do locking is because two threads (which may or
> may not run on separate cores) might alter the same data in ways so
> that the result is no longer sane.
>
You might be right. If you want to rely on that, then you really need to
know what hardware you are running on and what the memory model is. I
don't believe C++ has a standard memory model.

The last time I really dug into this was a couple of years ago. If I
remember correctly, the Intel chips of that era promised only a very
weak memory model on paper, but in practice implemented a much stronger
one. In fact, the only CPU I know of that had a genuinely weak memory
model was the Alpha, and those haven't been around for a while now.

I thought cache coherency depended on cues from the software. For
example, acquiring or releasing a mutex forces any pending writes to
complete and then flushes the caches, marking a value as volatile
prevents any caching, etc. Is this wrong?

Cory

_______________________________________________
osg-users mailing list
[email protected]
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
