:The good news is there was no obvious stability issue; the tests completed
:successfully.
:But there was no real performance change either. blogbench results were
:624 / 176300, in line with previous ones.

    I'll commit the change.

:>     In any case, I'm not sure it's even the bottleneck for the earlier tests
:>     since blogbench didn't run long enough (with default options) to use
:>     up 32G of RAM.
:
:On the contrary, shouldn't performance be higher if the disks are not touched?
:
:Looking at the iostat history, newfs pushed almost 8000 IOPS to the RAID
:volume. blogbench was much lower at first, and after some time the number of
:IOPS dropped by almost a factor of ten.

    Not necessarily, because the I/O's are going to be mostly asynchronous
    writes and not synchronous reads.

    Linear write IOPS is mostly irrelevant.  That's just the platter write
    bandwidth divided by the I/O size.
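
    (For example, assuming the columns in the newfs trace quoted further
    down are the usual iostat KB/t, tps and MB/s fields: 16 KB transfers
    at roughly 115 MB/s work out to about 7400 write IOPS, which is what
    the tps column shows.)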

:I've attached a log of iostat /dev/da0 1
:
:Two activity periods are shown in the file:
:- newfs_hammer -L RAID_VOL /dev/da0
:- blogbench -d /mnt/blogbench
:
:-- 
:Francois Tigeot
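
A capture like the quoted one can be reproduced with something along these
lines (assuming a standard sh shell; tee(1) and the log file name are only
illustrative):

    # terminal 1: log one-second iostat samples for the RAID volume
    iostat /dev/da0 1 | tee iostat-da0.log

    # terminal 2: the two activity periods from the quoted message
    newfs_hammer -L RAID_VOL /dev/da0
    blogbench -d /mnt/blogbench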

    Try blogbench --iterations=200 ...  and you really need to include
    the actual blogbench output.  The final results alone are worthless.
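
    For example, something like this (the tee log file name is only an
    illustration, and -d is taken from the earlier message):

        blogbench --iterations=200 -d /mnt/blogbench | tee blogbench-200.out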

:   0   17 16.00 5922 92.51   3  0  2  0 95
:   0  144 16.00 7439 116.24   3  0  2  0 94
:   0    6 16.00 7672 119.88   3  0  2  0 95
:   0    6 16.00 7316 114.32   3  0  3  0 94

    Linear write activity.  I wonder why it isn't
    clustering the I/O's, though.

:   0    6 15.99 4762 74.38   5  0 95  0  0
:   0    5 15.99 2615 40.82   4  0 84  0 12
:   0    5 15.98 7683 119.86   3  0 73  0 24
:   0    6 15.64 1741 26.59   3  0 84  0 13
:   0    5 15.65 1310 20.01   3  0 70  0 27
:   0    5 15.81 1108 17.11   4  0 90  0  6

    This isn't too good for a RAID volume.  The TPS is there but it
    should not be doing any mixed reading activity that early in the
    blogbench test.

    Try setting vfs.hammer.double_buffer=1.
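
    For example (sysctl(8) sets it at runtime; adding the same line to
    /etc/sysctl.conf would make it persistent across reboots):

        sysctl vfs.hammer.double_buffer=1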

:   0    5 15.75  252  3.87   2  0 80  0 18
:   0   11 15.80  240  3.71   1  0 84  0 15
:   0    5 15.74  310  4.76   1  0 86  0 13
:   0    5 15.79  231  3.56   1  0 83  0 16
:...
:   0    3 15.94  288  4.48   1  0 85  0 14
:   0    3 16.00  620  9.69   1  0 79  0 20
:   0    3 16.00  352  5.50   1  0 84  0 15

    This doesn't look good either.  It shouldn't degrade that much,
    though it's a bit hard to tell with iostat because it doesn't print
    the disk busy %.

                                        -Matt
                                        Matthew Dillon 
                                        <dil...@backplane.com>
