Compression helps when you don't export the volumes over NFS. An SSD really
helps increase write speed and reduce latency for NFS.
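
In case the commands are useful, this is roughly what that looks like; the
pool, dataset, and device names below are just placeholders:

  # enable compression on the dataset backing the volumes
  zfs set compression=on tank/vols

  # add the SSD as a separate intent log (slog); this is what speeds up
  # the mostly-synchronous writes coming in over NFS
  zpool add tank log c3t0d0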

This filebench run was done on the storage server itself; I haven't had a
chance to run it from a client yet. From a client I think the results will be
even better for small random I/O: if you check the iSCSI LUN properties,
writeback is enabled by default, so the storage server's memory is used as a
writeback cache.
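
If you want to check that yourself and you're using COMSTAR, stmfadm will show
it. This is just a sketch, not output pasted from my box, and the wcd property
name is from memory:

  # list logical units and their properties; look for the
  # "Writeback Cache Disabled" line
  stmfadm list-lu -v

  # writeback can be turned off per LUN if needed
  # (wcd = writeback cache disable, if I remember the property right)
  stmfadm modify-lu -p wcd=true <lu-GUID>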

I'm still testing the environment. The server is a dual Opteron 254 with 8 GB
of memory, dual Broadcom 5704 NICs, and a QLogic 2460 HBA. The disks are
3 * Maxtor Atlas 10K5 for now; later I'll use 2 * 147 GB drives for an OS
mirror and 6 * Atlas 10K5 in RAID 10, plus a Supermicro storage shelf with
24 SAS disks and 2 * Intel X25-E SSDs. This environment is for my VMware ESX
test lab.
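
For what it's worth, on the ZFS side the 6-disk RAID 10 would just be a stripe
of mirrors; a quick sketch with made-up pool and device names:

  # three 2-way mirrors striped together == RAID 10
  zpool create tank \
      mirror c1t2d0 c1t3d0 \
      mirror c1t4d0 c1t5d0 \
      mirror c1t6d0 c1t7d0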

Actually, I see very good performance inside a VMware guest for small random I/O.