Hi Christoph,

Thanks for the patches.
I pushed some changes, together with your load-tester fixes, to the atomic-ref branch [1] of our repository. I did not apply your original patch, however, mainly because of the following:

> + count = this->half_open_count;

Depending on the architecture this could be problematic. On many common architectures it will probably work as expected, thanks to cache coherency protocols and atomic int loads. But there might be architectures for which this is not the case. And if we ever decided to increase the size of refcount_t to 64 bits, loads would no longer be atomic on e.g. x86.

Therefore, I decided to introduce the ref_cur() macro, which atomically returns the current value. To make sure it is atomic (and we don't have to care how to make it so), it resolves to __sync_fetch_and_add(ref, 0), if available.

Additionally, I pushed a commit that implements the ref counter macros with the newer __atomic GCC built-ins, which support the C++11 memory models. This allows us to use the __ATOMIC_RELAXED memory model for them, which does not impose any memory barriers (as the __sync functions do) and should therefore be more efficient.

Regards,
Tobias

[1] http://git.strongswan.org/?p=strongswan.git;a=shortlog;h=refs/heads/atomic-ref

_______________________________________________
Dev mailing list
[email protected]
https://lists.strongswan.org/mailman/listinfo/dev
