First off, avoid using mfi; it's a RAID card, which adds an extra
layer you don't want even in JBOD mode.
What's the mfi max_cmds value set to? If you haven't already, set it
to -1 in /boot/loader.conf (hw.mfi.max_cmds=-1), which means use the
controller maximum.
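i.e. add the following line to /boot/loader.conf and reboot so the
driver picks it up at attach:

hw.mfi.max_cmds="-1"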
Next, avoid 4GB of RAM on a machine if possible; that's what's
disabling prefetch by default. I'd recommend a minimum of 8GB, more
if possible.
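You can confirm that's what's happening by checking the sysctl
(0 = prefetch enabled, 1 = disabled):

sysctl vfs.zfs.prefetch_disable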
When you "add" a disk you're creating a stripe, which is bad news if
any single disk fails, so avoid that at all costs.
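If you need the pool to grow, use redundant vdevs instead, e.g. a
mirror (a sketch using your device names):

zpool create vol0 mirror /dev/mfisyspd0 /dev/mfisyspd1

or a raidz vdev across three or more disks.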
Finally, your issue with read performance is likely your block size:
you set it to 1MB for the write (which is reasonable) but you didn't
for the read, so it defaults to 512 bytes, which will likely CPU-limit
the dd process instead of bottlenecking on the disk / FS performance.
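Retest the read with a matching block size, e.g.:

dd if=/vol0/test of=/dev/null bs=1m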
Regards
Steve
----- Original Message -----
From: "markham breitbach" <[email protected]>
To: <[email protected]>
Sent: Wednesday, January 29, 2014 12:02 AM
Subject: ZFS read performance
Hi,
I'm trying to figure out a ZFS read performance issue that I am seeing
on FreeBSD 9.2 (amd64).
CPU: Intel(R) Xeon(R) CPU E5405 @ 2.00GHz (2000.11-MHz
K8-class CPU)
real memory = 4294967296 (4096 MB)
avail memory = 4059762688 (3871 MB)
I have an LSI 9240-8i controller (8 ports @ 6Gb/s each, x8 PCIe 2.0)
with the drives installed as JBOD. These are intended for use as a
storage pool. The main system drives are using the onboard SATA
controller for the system.
With a single drive formatted UFS, I can achieve ~ 175MBps Read and
Write speeds using dd as follows:
WRITE:
dd if=/dev/zero of=/vol0/test bs=1m count=10000
READ
dd if=/vol0/test of=/dev/null
When I create a zfs pool with a single drive:
zpool create vol0 /dev/mfisyspd0
using dd again, I can write ~175MBps, but my read speed is only about
60MBps.
I have set
vfs.zfs.prefetch_disable = 0
and this improves reads to about 70MBps, but not much beyond that.
Now, this gets even more interesting.
If I add a drive to the pool:
zpool add vol0 /dev/mfisyspd1
My write speeds increase to ~280MBps, but my read speed is still about
60MBps.
Using gstat, I can see that the load is shared pretty equally amongst
the drives.
This trend continues with the addition of a 3rd and 4th drive with
write speeds ramping up to about 500MBps (I didn't test past that), but
read speeds stuck around 60MBps.
I have done similar testing under 9.1, making sure to align 4K sectors
using the gnop trick (I am using WD Red 3TB AF drives), with no change.
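(For reference, by the gnop trick I mean something like:

gnop create -S 4096 /dev/mfisyspd0
zpool create vol0 /dev/mfisyspd0.nop
zpool export vol0
gnop destroy /dev/mfisyspd0.nop
zpool import vol0

to force 4K alignment via the .nop provider.)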
I have looked through the wiki and handbook for tuning and done some
googling around, but I'm pretty much out of ideas now. Everything I
have read seems to indicate that ZFS should run fine on 4GB without any
tuning, and at this point I haven't even begun any serious stress
testing.
Any ideas of where to go from here would be greatly appreciated.
Thanks,
-Markham
---
Markham Breitbach
Network Operations
SSi People, Ideas, Technology
- - - - - - - - - - - - - - - - - - - - -
+1 867 669 7500 work
+1 867 669 7510 fax
[2][email protected]
[3]www.ssimicro.com
356B Old Airport Road
Yellowknife, NT X1A 3T4
Canada
- - - - - - - - - - - - - - - - - - - - -
Visit some of our other networks
www.qiniq.com & www.airware.ca
================================================
This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it.
In the event of misdirection, illegible or incomplete transmission please
telephone +44 845 868 1337
or return the E.mail to [email protected].
_______________________________________________
[email protected] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-performance
To unsubscribe, send any mail to "[email protected]"