Meta-data test: time find over 11 million inodes, designed so that the
filesystem meta-data blows out main memory.  A sketch of the setup is
below, followed by the actual runs:
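(For reference, a minimal sketch of how a run like this can be set up and
repeated, assuming DragonFly's swapcache(8) knobs.  The vm.swapcache.*
sysctl names are assumptions on my part rather than something shown in the
output; the swap device matches the pstat listing further down.)

    #!/bin/sh
    # Sketch only: enable the SSD swap cache for meta-data, then re-run
    # the timed scan until the run times level off.  Sysctl names assumed
    # from swapcache(8).
    swapon /dev/da1s1b                  # SSD swap partition (see pstat below)
    sysctl vm.swapcache.meta_enable=1   # cache filesystem meta-data pages
    sysctl vm.swapcache.read_enable=1   # serve cached pages back from the SSD

    df -i /build                        # confirm the ~11 million inode count

    while :; do                         # interrupt once the times stop improving
        time find /build > /dev/null
        pstat -s                        # watch swap cache usage climb
    done

Remounting /build between the two phases, as the run below does, clears the
VM page caches so the swap-cache phase starts cold.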
    time find /build > /dev/null

    df -i /build
    Filesystem  1K-blocks      Used     Avail Capacity    iused
    BUILDX      897966080 236821248 661144832    26%   11383321

Without swap cache:

    4.843u 59.522s 14:10.33  7.5% 15+65k 0+0io  0pf+0w
    4.417u 56.161s 11:20.23  8.9% 16+66k 0+0io  8pf+0w   (steady state)
    4.565u 56.035s 11:27.37  8.8% 16+66k 2+0io 36pf+0w   (steady state)

Remount to clear VM page caches, then enable swap cache:

    4.472u 50.983s 11:52.69  7.7% 15+64k 0+0io  0pf+0w
    4.537u 50.566s  7:27.07 12.3% 16+66k 2+0io  8pf+0w
    4.613u 49.218s  6:42.33 13.3% 16+66k 2+0io  2pf+0w
    4.412u 50.181s  6:00.40 15.1% 16+66k 2+0io  2pf+0w
    4.217u 49.751s  5:26.99 16.5% 16+67k 2+0io 16pf+0w
    4.435u 48.651s  4:54.43 18.0% 16+67k 2+0io 14pf+0w
    4.801u 48.382s  4:27.48 19.8% 16+66k 2+0io 14pf+0w
    4.824u 49.083s  4:05.44 21.9% 15+64k 4+0io 12pf+0w
    4.684u 48.422s  3:41.55 23.9% 16+67k 4+0io 14pf+0w
    4.239u 48.682s  3:24.90 25.8% 16+68k 2+0io 16pf+0w   (reset burst parameters)
    4.911u 48.598s  3:19.98 26.7% 16+66k 0+0io  0pf+0w
    4.655u 48.334s  2:14.90 39.2% 15+65k 2+0io 12pf+0w
        (hard drive activity almost gone)
        (SSD activity ~30-50MB/sec, mostly reads)
    4.515u 48.598s  1:49.73 48.3% 15+65k 2+0io 10pf+0w   (steady state)
    4.233u 49.028s  1:49.59 48.5% 16+66k 0+0io  0pf+0w   (steady state)
    4.507u 49.059s  1:50.05 48.6% 16+66k 2+0io  8pf+0w   (steady state)
    4.663u 48.912s  1:49.35 48.9% 16+66k 2+0io  8pf+0w   (steady state)
        (basically no HD activity, SSD 30-60MB/sec all reads)

Steady state reached w/ approximately 4.7GB in the swap cache.

    test28:/root# pstat -s
    Device          1K-blocks     Used    Avail Capacity  Type
    /dev/da1s1b      16777088  4719420 12057668    28%    Interleaved

iostat shows SSD is about 55% busy.  The cpu is 50% busy, so the lack of
meta-data read-ahead is an issue (maybe something I can work on for HAMMER).

-Matt