On Tue, Jun 28, 2011 at 9:59 AM, Michael Schnell <mschn...@lumino.de> wrote:
> But the original claim was that the implementation with Critical sections
> failed on a multi core engine and interlocked instructions helped.
>
> This is why I suggested that there is some kind of bug.

LOL, I don't recall you ever asking for specifics on my implementation. You have indeed made gross assumptions regardless of the design. "The engine" doesn't have cores: it's an application written in Lazarus, proudly compiled with FPC, and it worked flawlessly on Windows/Ubuntu 32/64 until I upgraded from a 3-core to a 6-core machine. During stress tests I watched execution graphically under each core, using AWN widgets (one display per core): code execution went from core to core instead of staying locked to one.

While my linked list was headed by a critical section, that head was there only by design for general use, and was vestigial in my particular instance. Keep in mind there was no re-entrancy with this particular linked-list instance. I had headed it with a critical section for two reasons:

1) It was a vestige of a non-adapted, general-purpose linked list, and access was always granted because the same thread was doing the accessing.

2) If I ever wanted to re-use the component, I would still need that thread barrier to block re-entrance.

So, to bring this to a conclusion: the critical section did not ensure the order of code execution when run on the multi-core system. By using InterlockedExchange we can be assured that variables are valid to other cores when a core switch occurs, which resolved the stale-values problem I was experiencing.

--
_______________________________________________
Lazarus mailing list
Lazarus@lists.lazarus.freepascal.org
http://lists.lazarus.freepascal.org/mailman/listinfo/lazarus