On Oct 15, 2011, at 12:31 PM, Toby Thain wrote:
> On 15/10/11 2:43 PM, Richard Elling wrote:
>> On Oct 15, 2011, at 6:14 AM, Edward Ned Harvey wrote:
>>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>>> boun...@opensolaris.org] On Behalf Of Tim Cook
>>>> In my example - probably not a completely clustered FS.
>>>> A clustered ZFS pool with datasets individually owned by
>>>> specific nodes at any given time would suffice for such
>>>> VM farms. This would give users the benefits of ZFS
>>>> (resilience, snapshots and clones, shared free space)
>>>> merged with the speed of direct disk access instead of
>>>> lagging through a storage server accessing these disks.
>>> I think I see a couple of points of disconnect.
>>> #1 - You seem to be assuming storage is slower when it's on a remote storage
>>> server as opposed to a local disk. While this is typically true over
>>> Ethernet, it's not necessarily true over InfiniBand or Fibre Channel.
>> Ethernet has *always* been faster than an HDD. Even back in the Sun-3/180
>> days, 10Mbps Ethernet was faster than the 30ms average access time of the
>> disks of the day. I tested a simple server the other day, and the round trip
>> for 4KB of data on a busy 1GbE switch was 0.2ms. Can you show an HDD that
>> fast? Indeed, many SSDs have trouble reaching that rate under load.
> Hmm, of course the *latency* of Ethernet has always been much less, but I did
> not see it reaching the *throughput* of a single direct attached disk until
> much later.
In practice, there are very, very, very few disk workloads that do not involve
seeks. Just one seek kills your bandwidth. But we do not define "fast" as
"bandwidth" alone; latency matters at least as much.
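
To put numbers on that, here is a back-of-envelope sketch in Python. The 0.2ms
1GbE round trip is the measurement quoted above; the 8ms HDD access time and
4KB I/O size are assumed nominal figures, not measurements:

    # Back-of-envelope: why "one seek kills your bandwidth".
    # The 0.2ms 1GbE round trip is the measurement quoted above;
    # the HDD access time and I/O size are assumed nominal figures.

    GBE_BITS_PER_SEC = 1e9      # 1GbE line rate
    RTT_1GBE = 0.2e-3           # measured round trip for 4KB, in seconds
    HDD_ACCESS = 8e-3           # assumed avg seek + rotational latency, seconds
    IO_SIZE = 4 * 1024          # bytes per random I/O

    # Wire time for 4KB on 1GbE, ignoring protocol overhead:
    wire_time = IO_SIZE * 8 / GBE_BITS_PER_SEC      # ~33 microseconds

    # Effective throughput when every 4KB I/O pays one access / round trip:
    hdd_tput = IO_SIZE / HDD_ACCESS                 # ~0.5 MB/s
    net_tput = IO_SIZE / RTT_1GBE                   # ~20 MB/s

    print("4KB wire time on 1GbE: %.0f us" % (wire_time * 1e6))
    print("HDD, one seek per 4KB I/O: %.2f MB/s" % (hdd_tput / 1e6))
    print("1GbE, one round trip per 4KB I/O: %.2f MB/s" % (net_tput / 1e6))

Seek-bound, the disk delivers well under 1 MB/s of effective throughput; even a
fully latency-bound 1GbE link is an order of magnitude faster.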
> I'm pretty sure direct attached disk throughput in the Sun-3 era was much
> better than 10Mbit Ethernet could manage. IIRC, from my own experiments, NFS
> on a Sun-3 running NetBSD over 10B2 was only *just* capable of streaming MP3,
> with tweaking (I ran 10B2 at home until 2004; hey, it was good enough!).
The max memory you could put into a Sun-3/280 was 32MB. There is no possible way
for such a system to handle 100 Mbps Ethernet; you could exhaust all of main
memory in about 3 seconds :-)
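
As a quick sanity check on that arithmetic (Python; the 32MB and 100 Mbps
figures are taken from the text above):

    # How long would 100 Mbps Ethernet take to fill a maxed-out Sun-3/280?
    MAIN_MEMORY = 32 * 1024 * 1024   # 32MB, the Sun-3/280 maximum, in bytes
    LINE_RATE = 100e6 / 8            # 100 Mbps expressed in bytes/second

    print("%.1f seconds" % (MAIN_MEMORY / LINE_RATE))   # ~2.7 seconds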