Hi.

While running benchmarks and profiles for backing block storage (iSCSI, etc.) with ZFS files with recordsize=4K on a 24-core FreeBSD machine, I've noticed significant lock contention on the ZFS SPA config locks inside spa_config_enter() and spa_config_exit(). The source is the numerous bp_get_dsize() calls, which acquire and drop the SCL_VDEV reader lock for every written block. Even though the lock scope is very small, so many acquisitions predictably cause contention, and the more CPUs the system has, the heavier the contention gets. And since these locks are adaptive on FreeBSD, I am seeing heavy lock spinning, burning up to half of the CPU time.

Is this problem known on other platforms?

I've made a patch that replaces the mutexes there with rwlocks, acquiring them for read when the SPA config lock itself is requested for read. In my tests this change doubles the benchmark results and completely removes the contention on the SPA config locks, since write acquisitions there are rare.

On FreeBSD, due to a difference in memory allocation semantics, both the Solaris mutex and rwlock primitives are currently emulated with the same sx lock primitive, so my patch just uses functionality that is already there anyway. On illumos I suppose the two primitives do differ, but I expect the patch should still be beneficial there, since these accesses really can be shared, and that is what shared locks are for.

Any comments about: http://people.freebsd.org/~mav/spa_shared.patch ?

--
Alexander Motin
_______________________________________________
developer mailing list
[email protected]
http://lists.open-zfs.org/mailman/listinfo/developer