On Wed, Apr 23, 2008 at 7:29 PM, erik quanstrom <[EMAIL PROTECTED]> wrote:
> >>  just put it up on a tee: why not use aoe?
>  >>
>  >
>  > The problems of disk I/O are largely a focus issue -- all this stuff
>  > is pretty new and they focused on the network mechanisms first because
>  > those were the ones where the competition has published the most
>  > compelling benchmarks.  The disk stuff will get tuned out and will
>  > likely outperform network for I/O.  As an example, 9P directly over
>  > virtio beats NFS/TCP/virtio-net by 70% without caching or
>  > optimization in 9P (which is usually the opposite case on
>  > unvirtualized hardware due to caching and what not).
>
>  i wouldn't think that you could tune out rotational latency.  8.4ms is
>  pretty much forever when you're counting nanoseconds.
>
>  since aoe can do wirespeed (120MB/s) on typical physical gige
>  chipsets i would think it would have no trouble keeping up with
>  spinning media.  especially when not handicapped by having to
>  actually stuff bits through a phy.
>
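A quick back-of-envelope check of the numbers quoted above (the line rate, framing overhead, and rotation speed used here are standard GigE/7200 RPM figures, not taken from the thread):

```python
# Sanity-check the "wirespeed" and rotational-latency figures above.

GIGE_BITS_PER_S = 1_000_000_000            # gigabit Ethernet line rate
raw_mb_s = GIGE_BITS_PER_S / 8 / 1e6       # 125.0 MB/s before framing

# Per-frame Ethernet overhead at a 1500-byte MTU:
# preamble (8) + header (14) + FCS (4) + interframe gap (12) = 38 bytes
payload_mb_s = raw_mb_s * 1500 / (1500 + 38)   # ~121.9 MB/s usable

# One full revolution of a 7200 RPM disk -- the worst-case
# rotational latency erik is calling "pretty much forever":
rev_ms = 60.0 / 7200 * 1000                # ~8.33 ms

print(f"usable GigE payload: {payload_mb_s:.1f} MB/s")
print(f"full-revolution latency at 7200 RPM: {rev_ms:.2f} ms")
```

So "120MB/s" is indeed about what a GigE link can carry after framing, which is in the same ballpark as sequential throughput on spinning media of the era.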

You're focused on the wrong portion of the problem -- the disk solution
they have is effectively AOV (ATA over Virtio), you aren't going to do better by
putting a virtual network driver in between.  They just have to tune
their userspace gateway for disk access -- they put a lot of work into
making the virtio<->tun/tap gateway really efficient and I think they
are just using the crappy Qemu block device at the moment.  Once they
short-out the gateway between the guest-virtio channel and the
in-kernel block driver it'll be much faster than tunneling AOE over
the network device to the host.

Now - if you are talking about supporting an off-server CORAID storage
array -- then you should absolutely go AOE, but I think he was talking
about communication between guest and host partitions on his laptop, in
which case you are adding extra layers for nothing.

         -eric
