Pete French wrote:
I've tried all the possible stripe sizes (128k gives the best performance)
but still I only get the above speeds. Just one of the 15k drives on its
own performs better than this! I would expect the RAID-0 to give me at
least some speedup, or in the worst case be the same.
- Is the controller cache enabled?
Yes - split 50% read, 50% write.
- Do you have the battery for it and is write cache enabled? (You won't
make full use of the cache without the battery)
yes - battery is attached and fully charged
- How does your performance compare when using dd on the raw devices (in
order: da0, da0s1, da0s1a...) vs when using it on the file system? (Poor
performance might indicate FS vs stripe alignment issues)
You might be able to speed up the read by playing with the vfs.read_max
sysctl (try 16 or 32).
Wow! That makes a huge difference, thanks. Should this not be in 'man tuning' ?
-pete.
___
freebsd-stable@freebsd.org mailing list
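For concreteness, the vfs.read_max experiment suggested above looks like this from a FreeBSD root shell (the values are the ones tried in this thread, not universal recommendations):

```shell
# Show the current read-ahead limit (the baseline in this thread was 8)
sysctl vfs.read_max

# Try larger values, re-running the read benchmark after each change
sysctl vfs.read_max=16
sysctl vfs.read_max=32

# Persist whichever value wins across reboots
echo 'vfs.read_max=32' >> /etc/sysctl.conf
```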
Pete French wrote:
- How does your performance compare when using dd on the raw devices (in
order: da0, da0s1, da0s1a...) vs when using it on the file system? (Poor
performance might indicate FS vs stripe alignment issues)
Raw dd gives 50 meg/second
On /dev/da1, with a reasonable block size (1m)?
Block size is 2 meg. I was using da1s1 and da1s2, which were giving
me 50 and 47 meg/second respectively - if I switch to da1 on its
own I get 59 meg/second.
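The raw-vs-filesystem comparison described above is done with dd along these lines (the device name is from this thread; the test-file path is illustrative):

```shell
# Raw device read: large block size, fixed amount of data
dd if=/dev/da1 of=/dev/null bs=1m count=2048

# Same read back through the filesystem; the file should be larger
# than RAM so the buffer cache cannot satisfy the reads
dd if=/data/testfile of=/dev/null bs=1m
```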
Pete French wrote:
reading from the filesystem with the vfs.read_max set to 64 I now get
112 meg/second though?! How can the filesystem give me better performance
than the raw device? I do not think this is a caching issue, as I am using
a test file nearly twice the size of the RAM in the machine.
It would be interesting for you to track iostat (i.e. run iostat 1)
with and without modified vfs.read_max and see if there's a difference.
On the file: KB/t is about 127.5 with both settings. Rate is 39 with
read_max set to 8, but 115 with read_max set to 64.
On the raw device: KB/t is
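The iostat comparison suggested above can be run like this (device name and file path are illustrative; KB/t is kilobytes per transfer, MB/s the throughput):

```shell
# Start the sequential read in the background...
dd if=/data/testfile of=/dev/null bs=1m &

# ...and watch per-second statistics for the drive while it runs
iostat da1 1
```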
Pete French wrote:
You might be able to speed up the read by playing with the vfs.read_max
sysctl (try 16 or 32).
Wow! That makes a huge difference, thanks. Should this not be in 'man tuning' ?
AFAIK vfs.read_max will only influence sequential reading - it's the
readahead size. Also, it's
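If read_max is indeed a read-ahead limit counted in filesystem blocks (an assumption here; 16 KB is a common UFS block size), the values tried in this thread scale the read-ahead window like so:

```shell
# read-ahead window = vfs.read_max * filesystem block size
# (16 KB UFS blocks assumed, for illustration only)
bs_kb=16
for rm in 8 16 32 64; do
  echo "read_max=$rm -> $((rm * bs_kb)) KB read-ahead"
done
```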
Pete French wrote:
You might be able to speed up the read by playing with the vfs.read_max
sysctl (try 16 or 32).
Wow! That makes a huge difference, thanks. Should this not be in 'man tuning' ?
Yeah, I believe I've seen it mentioned *somewhere* with respect to
working with RAID (of course,
I recently overhauled my RAID array - I now have 4 drives arranged
as RAID 0+1, all being 15K 147 gig Fujitsus, and split across two
buses, which are actively terminated to give U160 speeds (and I have
verified this). The card is a 5304 (128M cache) in a PCI-X slot.
This replaces a set of 6 7200
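As a back-of-envelope check using the numbers reported elsewhere in this thread: a 4-drive RAID 0+1 reads a stripe across two spindles, so sequential throughput should be roughly twice the single-drive figure:

```shell
single_mb=59      # MB/s measured from one drive (raw dd on da1, above)
stripe_width=2    # 4 drives in RAID 0+1 = a 2-way stripe of mirrors
echo "expected sequential read ~ $((single_mb * stripe_width)) MB/s"
```

That rough ~118 MB/s estimate is close to the 112 meg/second observed through the filesystem once read_max was raised.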