On Feb 7, 2014, at 10:41 AM, Prakash Surya <[email protected]> wrote:
And here's a snippet from the pull request description with a summary of the benefits this patch stack has shown in my testing (go check out the pull request for more info on the tests run and results gathered):

    Improve ARC hit rate with metadata heavy workloads

    This stack of patches has been empirically shown to drastically
    improve the hit rate of the ARC for certain workloads. As a result,
    fewer reads to disk are required, which is generally a good thing and
    can drastically improve performance if the workload is disk limited.

    For the impatient, I'll summarize the results of the tests performed:

    * Test 1 - Creating many empty directories. This test saw 99.9% fewer
      reads and 12.8% more inodes created when running *with* these changes.

    * Test 2 - Creating many empty files. This test saw 4% fewer reads and
      0% more inodes created when running *with* these changes.

    * Test 3 - Creating many 4 KiB files. This test saw 96.7% fewer reads
      and 4.9% more inodes created when running *with* these changes.

    * Test 4 - Creating many 4096 KiB files. This test saw 99.4% fewer
      reads and 0% more inodes created (but took 6.9% fewer seconds to
      complete) when running *with* these changes.

    * Test 5 - Rsync'ing a dataset with many empty directories. This test
      saw 36.2% fewer reads and 66.2% more inodes created when running
      *with* these changes.

    * Test 6 - Rsync'ing a dataset with many empty files. This test saw
      30.9% fewer reads and 0% more inodes created (but took 24.3% fewer
      seconds to complete) when running *with* these changes.

    * Test 7 - Rsync'ing a dataset with many 4 KiB files. This test saw
      30.8% fewer reads and 173.3% more inodes created when running
      *with* these changes.
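For anyone wanting to reproduce these hit-rate measurements, the ARC hit rate can be derived from the `hits` and `misses` kstat counters that ZFS on Linux exposes in /proc/spl/kstat/zfs/arcstats. Here's a minimal sketch; the helper names are my own, and it assumes the usual three-column `<name> <type> <value>` kstat layout:

```python
# Minimal sketch: compute the ARC hit rate from the ZFS on Linux
# kstat file /proc/spl/kstat/zfs/arcstats. Helper names are
# illustrative, not part of any ZFS tool.

def parse_arcstats(text):
    """Parse arcstats kstat output into a {name: value} dict.

    Data lines have the form: <name> <type> <value>; the header
    lines are skipped because their third column is not an integer
    (or they have a different column count).
    """
    stats = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[2].isdigit():
            stats[parts[0]] = int(parts[2])
    return stats

def arc_hit_rate(stats):
    """Return hits / (hits + misses), or None if there were no lookups."""
    hits = stats.get("hits", 0)
    misses = stats.get("misses", 0)
    total = hits + misses
    return hits / total if total else None

if __name__ == "__main__":
    with open("/proc/spl/kstat/zfs/arcstats") as f:
        stats = parse_arcstats(f.read())
    rate = arc_hit_rate(stats)
    if rate is not None:
        print("ARC hit rate: {:.2%}".format(rate))
```

Sampling the counters before and after a test run, and differencing them, gives the hit rate for that run alone rather than since module load.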
I think it is also necessary to show that other workloads do not suffer from this change. Nearly every program can be optimized for a specific workload (here, empty files and directories), but the usual outcome is that such an optimization hurts everyone else. Especially given such impressive claims, the numbers may be the result of over-optimizing, which could have drastic effects on ordinary everyday workloads, or on heavy filesystem operations with non-empty file contents as is common for HPC applications. This needs testing under usage schemes other than the ones it was designed for.
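Generating the kind of non-empty-content workloads suggested above is straightforward to script. A rough sketch along these lines could be used; the file counts, sizes, and paths are illustrative assumptions, not the values used in the pull request's tests:

```python
# Sketch of a workload generator for re-running the benchmarks with
# nonzero file contents. Sizes/counts/paths below are illustrative.
import os

def create_files(root, count, size_bytes):
    """Create `count` files of `size_bytes` bytes each under `root`."""
    os.makedirs(root, exist_ok=True)
    # Random payload so the data isn't trivially compressible; note all
    # files share one payload, so dedup-enabled pools would behave
    # differently with per-file random data.
    payload = os.urandom(size_bytes) if size_bytes else b""
    for i in range(count):
        with open(os.path.join(root, "f%06d" % i), "wb") as f:
            f.write(payload)

# Example: sweep a spread of sizes beyond the empty/4 KiB cases,
# e.g. on a scratch dataset (path is hypothetical):
# for size in (0, 4 * 1024, 128 * 1024, 4096 * 1024):
#     create_files("/tank/bench/size_%d" % size, 10000, size)
```

Running the same sweep with and without the patch stack, and comparing read counts and elapsed time per size, would show whether the gains hold once file contents are nonzero.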
_______________________________________________
developer mailing list
[email protected]
http://lists.open-zfs.org/mailman/listinfo/developer
