Chris Du wrote, On 42-12-23 02:59 PM:
Compression helps when you don't export volumes through NFS. SSD really helps 
increase write speed and reduce latency for NFS.

This filebench run was on the storage server itself; I haven't had a chance to
run it from a client yet. From a client, I think the results will be even more
positive for small random I/O: if you check the iSCSI LUN properties, writeback
is enabled by default, so the storage server's memory is used as a writeback cache.
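For reference, here is a rough sketch of how to check that on a COMSTAR host; the LUN GUID is a placeholder, and the exact property output may differ by build:

```shell
# List LUN properties and look for the write-back cache flag
stmfadm list-lu -v
#   ...
#   Writeback Cache : Enabled

# To force write-through instead (safer on a box without battery-backed
# cache, at a latency cost), set the write-cache-disable property.
# <LU-GUID> is a placeholder for the LUN's GUID.
stmfadm modify-lu -p wcd=true <LU-GUID>
```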

I'm still testing the environment. The server is a dual Opteron 254 with 8 GB of
memory, dual Broadcom 5704 NICs, and a QLogic 2460. Disks are 3*Maxtor Atlas 10K5
for now, but later I'll move to a 2*147 GB mirror for the OS and 6*Atlas 10K5 in
RAID10, plus a Supermicro storage shelf with 24 SAS disks and 2*Intel X25-E SSDs.
This environment is for my VMware ESX test lab.

Actually, I see very good performance inside the VMware guest for small random I/O.
With or without compression, there is something strange with NFS on build 111b, so I've given up on it for now. I'll try to test it on build 117, but the performance was so bad, with no obvious reason (like James described above), that it's now hard to even consider ZFS+NFS for production.
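One thing worth trying before writing NFS off entirely: NFS clients issue mostly synchronous writes, which ZFS commits through the intent log, and a dedicated SSD slog often recovers much of that performance. A minimal sketch, assuming a pool named tank and a placeholder SSD device name:

```shell
# Add an SSD as a separate ZFS intent log (slog) to absorb the
# synchronous writes NFS generates; device name is an example only.
zpool add tank log c2t0d0

# Verify: the SSD should now appear under a "logs" section
zpool status tank
```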

But iSCSI or FC can still work out for me.

I'd like to ask you a couple more questions: which chassis model are you going to get? How will the storage be connected to your VMware server? Do you use any tuning, like jumbo frames or the delayed-ACK registry fix on Windows? I also see you have a QLogic FC card; how does it perform? Have you had a chance to test COMSTAR with it?
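To be concrete about the tuning I mean, here are rough sketches of both knobs; the interface name and NIC GUID are placeholders, and jumbo-frame support depends on the driver:

```shell
# Jumbo frames on the Solaris side (driver permitting):
ifconfig bge0 mtu 9000

# On the Windows initiator, the delayed-ACK tweak is a per-interface
# registry value; <NIC-GUID> is a placeholder for the adapter's GUID.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<NIC-GUID>" /v TcpAckFrequency /t REG_DWORD /d 1
```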

--
Regards,
Roman
_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss
