On 04.06.2014 21:56, Matthew Ahrens wrote:
Great idea -- I've also implemented something like this, and will upstream to illumos soon.
Thanks! Shall I just wait, or would some testing help? PS: Another cool thing hanging around and not committed. Again. ;)
On Wed, Jun 4, 2014 at 11:53 AM, Alexander Motin <[email protected]> wrote:

While doing high-IOPS ZFS benchmarking and profiling on a 40-core FreeBSD system with a bunch of SSDs, I noticed significant lock contention spinning on the spa->spa_async_zio_root ZIO lock. I found it was caused by multiple concurrently starting and completing prefetch ZIOs fighting for the same lock. To mitigate the problem, I've replaced the single async root ZIO with 16, as you may see in the attached patch.

As a result, on a concurrent strided read with 4K blocks and a 4K recordsize from 256 threads (iozone -i 5 -w -r 4k -t 256 -s 256M), I've practically doubled the test result, rising from the original 150K IOPS to 300K IOPS with the patch, while total CPU load dropped from 100% to ~60%.

So I would like to hear people's opinions on this patch. Won't having multiple root ZIOs and making multiple zio_wait() calls cause any problems on pool export?
--
Alexander Motin
_______________________________________________
developer mailing list
[email protected]
http://lists.open-zfs.org/mailman/listinfo/developer
