On 15/10/11 2:43 PM, Richard Elling wrote:
On Oct 15, 2011, at 6:14 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tim Cook
In my example - probably not a completely clustered FS.
A clustered ZFS pool with datasets individually owned by
specific nodes at any given time would suffice for such
VM farms. This would give users the benefits of ZFS
(resilience, snapshots and clones, shared free space)
merged with the speed of direct disk access instead of
lagging through a storage server accessing these disks.
I think I see a couple of points of disconnect.
#1 - You seem to be assuming storage is slower when it's on a remote storage
server as opposed to a local disk. While this is typically true over
ethernet, it's not necessarily true over infiniband or fibre channel.
Ethernet has *always* been faster than a HDD. Even back when we had Sun 3/180s,
10Mbps Ethernet was faster than the 30ms average access time of the disks of
the day. I tested a simple server the other day and the round trip for 4KB of data
through a busy 1GbE switch was 0.2ms. Can you show a HDD that fast? Indeed, many SSDs
have trouble reaching that rate under load.
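The latency comparison above is easy to sanity-check with back-of-envelope arithmetic. The sketch below uses the 0.2ms round trip and 30ms access time from the thread; the modern-HDD figure is an assumption (typical 7200rpm seek plus rotational latency), not a measurement:

```python
# Back-of-envelope latency comparison. The 0.2 ms round trip and the
# 30 ms access time are figures from the thread; the modern-HDD number
# is an assumed typical value, not a benchmark.

payload_bits = 4 * 1024 * 8  # the 4 KB request mentioned in the thread

gbe_wire_time_ms = payload_bits / 1e9 * 1e3  # serialization time on 1GbE
gbe_round_trip_ms = 0.2                      # measured figure from the thread
hdd_access_old_ms = 30.0                     # Sun-3-era average access time
hdd_access_modern_ms = 8.0                   # assumption: typical 7200rpm HDD

print(f"1GbE serialization of 4KB:  {gbe_wire_time_ms:.3f} ms")
print(f"1GbE round trip (measured): {gbe_round_trip_ms} ms")
print(f"HDD access, then: {hdd_access_old_ms} ms "
      f"({hdd_access_old_ms / gbe_round_trip_ms:.0f}x the round trip)")
print(f"HDD access, now:  {hdd_access_modern_ms} ms "
      f"({hdd_access_modern_ms / gbe_round_trip_ms:.0f}x the round trip)")
```

Even granting a generous modern seek time, a single HDD access costs tens of network round trips, which is the point being made.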
Hmm, of course the *latency* of Ethernet has always been much less, but
I did not see it reaching the *throughput* of a single direct attached
disk until gigabit.
I'm pretty sure direct-attached disk throughput in the Sun 3 era was
much better than 10Mbit Ethernet could manage. IIRC, NFS on a Sun 3
running NetBSD over 10B2 was only *just* capable of streaming MP3, with
tweaking, from my own experiments (I ran 10B2 at home until 2004; hey,
it was good enough!)
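The throughput side of the argument also reduces to simple arithmetic. In the sketch below, the link rates are nominal line rates (before protocol overhead), and the Sun-3-era disk rate is a rough assumption for a SCSI disk of that vintage, not a cited spec:

```python
# Throughput comparison for the latency-vs-throughput point above.
# Link rates are nominal line rates; the Sun-3-era disk figure is an
# assumption, not a measured or documented number.

def mbytes_per_sec(bits_per_sec):
    """Convert a line rate in bits/s to MB/s."""
    return bits_per_sec / 8 / 1e6

eth_10m   = mbytes_per_sec(10e6)   # 10Mbps Ethernet: 1.25 MB/s line rate
eth_1g    = mbytes_per_sec(1e9)    # gigabit Ethernet: 125 MB/s line rate
disk_sun3 = 1.5                    # MB/s, assumed Sun-3-era SCSI sustained rate
mp3_192   = mbytes_per_sec(192e3)  # a 192 kbps MP3 stream: 0.024 MB/s

print(f"10Mbps Ethernet:          {eth_10m:.2f} MB/s")
print(f"Sun-3-era disk (assumed): {disk_sun3:.2f} MB/s")
print(f"Gigabit Ethernet:         {eth_1g:.1f} MB/s")
print(f"192kbps MP3 stream needs: {mp3_192:.3f} MB/s")
```

An MP3 stream fits in 10Mbps Ethernet with room to spare in theory, so barely managing it over 10B2 with NFS says more about protocol overhead than raw link speed; and a disk sustaining even 1-2 MB/s would indeed outrun the 1.25 MB/s link, consistent with the claim that throughput parity only arrived with gigabit.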
Many people today are deploying 10GbE, and it is relatively easy to get wire
speed for bandwidth and < 0.1 ms average access time for storage.
Today, HDDs aren't fast, and are not getting faster.
zfs-discuss mailing list