Ross,

I don't know if it's my network setup or what at this point, but during my benchmarks I am unable to get more than 20-23MB/s doing 4k sequential reads with 4 outstanding I/Os. The result is the same whether the target is a ZVOL or a soft partition.

My setup:

Server: Dell PE 2950, 2 quad-core Xeons, 4GB memory, 2 onboard bnx NICs (management network only), and an Intel quad-port igb running LAG to a layer 3 switch with flow control and jumbo frames enabled; Solaris 10 update 7 with all the latest patches installed. Storage is 14 15k SAS drives in an MD1000 enclosure hooked up to a Dell PERC 6/E with 512MB of battery-backed write-back cache. The disks are set up as individual RAID0 disks, then put in a zpool as 2 raidz2 sets of 7 drives each. I have write-back enabled on each disk for ZIL performance, since I don't have an SSD for ZIL logging.
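(For reference, a pool with that layout would be created roughly like this; the pool and disk names below are placeholders, not my actual devices:

  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
)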

Client: ESX Windows 2003 R2 guest running the MS iSCSI initiator 2.08.

I believe the switch and the ESX host are set up properly; tests against a Linux host running IET with hardware RAID6 show the path is capable of around 30-40MB/s.

Any feedback on this would be appreciated, as I am getting a little burnt out trying to diagnose this.

-Ross

Getting 20-23MB/s doing 4k sequential reads works out to between 5K and 6K IOPS (20MB/s / 4KB = 5,120 IOPS; 23MB/s / 4KB = 5,888 IOPS). Since the numbers are constant whether you go through a ZVOL or a soft partition, I would surmise that CPU performance is likely one of your limiting factors. What is the CPU cost of the iSCSI Target Daemon (iscsitgtd) while driving this I/O load?
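A quick way to see that on the Solaris side is prstat with microstate accounting; something along these lines, assuming the stock iscsitgtd daemon name:

  # per-thread CPU usage and microstates for the target daemon,
  # refreshed every 5 seconds while the benchmark runs
  prstat -mL -p `pgrep iscsitgtd` 5

If the daemon's threads show high USR/SYS time or sit pegged near a full core, that points at CPU as the ceiling.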

If you are measuring 20-23MB/s on the iSCSI client side, what is the iSCSI target's I/O performance when accessing the ZVOLs or soft partitions directly? And when comparing iSCSI client and target performance, does the workload's performance change as the I/O size changes?
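While the benchmark runs, watching the pool and the underlying devices should show whether the disks or the target path is the bottleneck; the pool name here is a placeholder:

  # per-vdev bandwidth and IOPS at 5-second intervals
  zpool iostat -v tank 5

  # per-device service times and %busy
  iostat -xn 5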

- Jim





_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss

