Still, I would argue that if you do not use a write size larger than the amount of physical memory, buffering in RAM is going to play a role....
I think you misread the details here, Willem.
Sorry about that, if that is the case.
Original values:
  Write: 150Mb/s  Read: 50Mb/s
Current values after tweaking the RAID stripe size, vfs.read_max and MAXPHYS (needs more testing now due to Scott's warning):
  Write: 150Mb/s  Read: 200Mb/s
Note: The test size was upped to 10Gb to avoid caching issues.
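For anyone wanting to try the same tuning: vfs.read_max is a sysctl and MAXPHYS is a kernel build option on FreeBSD. The values below are illustrative placeholders only, not the settings used in the results above:

```
# /etc/sysctl.conf (illustrative value):
# vfs.read_max controls cluster read-ahead, in filesystem blocks
vfs.read_max=32

# Kernel configuration file (illustrative; changing MAXPHYS
# requires rebuilding the kernel):
# MAXPHYS caps the size of a single physical I/O transfer
options MAXPHYS=(1024*1024)
```

As Scott's warning implied, raising MAXPHYS is not risk-free, so re-test before trusting the numbers.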
That would certainly negate my assumption; 10G is enough to regularly flush the buffer.
Other than that, I find 50Mb/s a reasonably high value, IMHO, for writes on a RAID5. But confirming that would require substantially more organised testing; dd is nothing more than a very crude indication of what to expect in real life.
dd was used because it is a good, quick indication of baseline sequential file access
speed, and as such it highlighted a serious issue with the original performance.
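A sequential dd baseline of the kind described here might look like the following sketch. The target path and sizes are illustrative; for a meaningful result the test file must exceed physical RAM (the 10Gb used above) so the numbers reflect the disks rather than the buffer cache:

```shell
# Illustrative dd baseline; path and sizes are placeholders, not the
# poster's actual commands. bs is given in bytes (1048576 = 1MB) so the
# same line works with both BSD and GNU dd.
TESTFILE=/tmp/dd_seq_test

# Sequential write: dd prints throughput on stderr when it completes.
dd if=/dev/zero of="$TESTFILE" bs=1048576 count=64

# Sequential read of the same file back.
dd if="$TESTFILE" of=/dev/null bs=1048576
```

On a real array you would point TESTFILE at the RAID volume and raise count until the file is comfortably larger than RAM.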
That is well-phrased English for what I was trying to say. I'm glad to see that it worked for you. And I'm certainly impressed by the numbers...
This is on a 4-disk RAID5 with one hot spare???
--WjW
_______________________________________________
[email protected] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to "[EMAIL PROTECTED]"

