* Fabio Kaminski ([email protected]) wrote:
> Hi,
> 
> I'm playing with Urcu, and the first thing I did was try the tests and
> read their source.
> 
> Read throughput is very impressive.. really unbelievable.. :)
> 
> So first of all, thanks for this amazing initiative to create this
> user-level library!
> 
> As RCU theoretically mostly uses spinlocks instead of mutexes, I thought
> I'd give it a try, and changed test_urcu to use spinlocks (the same ones
> provided by the pthread library), keeping a copy with the original mutex
> lock.
Please note that the mutex used in test_urcu.c is not related to RCU at all. It simply protects the home-made memory allocation.

In this implementation, the RCU pointer update is done with "rcu_xchg_pointer()", which atomically exchanges the new pointer with the old one, so neither a mutex nor a spinlock is needed there (especially if you don't care about reading the content you are replacing). Mutexes or spinlocks can be used to protect writers from one another.

Mutexes are typically implemented as adaptive spinlocks that turn into mutexes after a few loops, so there should not be much difference between the spinlocks and the mutexes you are trying to compare (other than implementation differences).

> In my own tests, the writes, with low hit rates, almost doubled their
> throughput, while reads degraded just a bit (I particularly liked this
> version :)).
> 
> So my question is whether anyone has tried this, and what the
> impressions are.

An impact on read throughput caused by a change in the memory-allocation locking scheme is quite unexpected. You might want to continue experimenting to find out why it caused this change in performance.

Thanks,

Mathieu

-- 
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com

_______________________________________________
ltt-dev mailing list
[email protected]
http://lists.casi.polymtl.ca/cgi-bin/mailman/listinfo/ltt-dev
