On Oct 15, 2011, at 6:14 AM, Edward Ned Harvey wrote:

>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Tim Cook
>> In my example - probably not a completely clustered FS.
>> A clustered ZFS pool with datasets individually owned by
>> specific nodes at any given time would suffice for such
>> VM farms. This would give users the benefits of ZFS
>> (resilience, snapshots and clones, shared free space)
>> combined with the speed of direct disk access, rather
>> than the lag of going through a storage server to reach
>> those disks.
> I think I see a couple of points of disconnect.
> #1 - You seem to be assuming storage is slower when it's on a remote storage
> server as opposed to a local disk.  While this is typically true over
> Ethernet, it's not necessarily true over InfiniBand or Fibre Channel.

Ethernet has *always* been faster than an HDD. Even back when we had 3/180s,
10Mbps Ethernet was faster than the 30ms average access time of the disks of
the day. I tested a simple server the other day, and the round trip for 4KB of
data on a busy 1GbE switch was 0.2ms. Can you show me an HDD that fast? Indeed,
many SSDs have trouble reaching that rate under load.
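
If you want to reproduce that kind of number yourself, here is a minimal
round-trip timer (a sketch, not a benchmark suite; the address, port, and
round count are placeholders to adjust for your own LAN):

    # echo_rtt.py -- time a 4KB request/response round trip over TCP.
    # Run "python3 echo_rtt.py server" on one host, then run the client
    # (no argument) on another host across the switch.
    import socket, sys, time

    PAYLOAD = b"x" * 4096                 # 4KB, matching the test above
    HOST, PORT = "192.168.1.10", 9000     # placeholder address and port
    ROUNDS = 1000

    def server():
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                while True:
                    data = conn.recv(65536)
                    if not data:
                        break
                    conn.sendall(data)    # echo back whatever arrived

    def client():
        with socket.create_connection((HOST, PORT)) as s:
            # Disable Nagle so each small write goes out immediately.
            s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            t0 = time.perf_counter()
            for _ in range(ROUNDS):
                s.sendall(PAYLOAD)
                got = 0
                while got < len(PAYLOAD):   # reassemble the 4KB echo
                    got += len(s.recv(65536))
            dt = time.perf_counter() - t0
            print("avg round trip: %.3f ms" % (dt / ROUNDS * 1000))

    if __name__ == "__main__":
        server() if sys.argv[1:] == ["server"] else client()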

Many people today are deploying 10GbE, and it is relatively easy to get wire
speed for bandwidth and < 0.1ms average access time for storage.
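
The back-of-envelope arithmetic bears that out (the HDD latency figures below
are typical values, not measurements):

    # Wire time for a 4KB transfer, ignoring protocol overhead.
    payload_bits = 4096 * 8
    for name, rate_bps in [("1GbE", 1e9), ("10GbE", 10e9)]:
        print("%5s: %5.1f us on the wire" % (name, payload_bits / rate_bps * 1e6))
    # 1GbE : 32.8 us -- so a 0.2ms round trip is mostly switch and stack latency
    # 10GbE:  3.3 us -- the wire all but vanishes from the latency budget
    # A 7200rpm HDD pays roughly 4ms average seek plus ~4.2ms rotational
    # latency before any data moves: tens of times the whole network round trip.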

Today, HDDs aren't fast, and are not getting faster.
 -- richard


ZFS and performance consulting
VMworld Copenhagen, October 17-20
OpenStorage Summit, San Jose, CA, October 24-27
LISA '11, Boston, MA, December 4-9 
