>>>>> "sw" == Saxon, Will <will.sa...@sage.com> writes:

    sw> It was and may still be common to use RDM for VMs that need
    sw> very high IO performance. It also used to be the only
    sw> supported way to get thin provisioning for an individual VM
    sw> disk. However, VMware regularly makes a lot of noise about how
    sw> VMFS does not hurt performance enough to outweigh its benefits

What's the performance of configuring the guest to boot off iSCSI or
NFS directly using its own initiator/client, through the virtual
network adapter?  Is it better or worse, and does it interfere with
migration?
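For reference, the guest-side setup I mean is just the ordinary open-iscsi flow run inside the VM; a minimal sketch, where the portal address and IQN are made up for illustration:

```shell
# Discover targets on the storage host (address is hypothetical)
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260

# Log in to a discovered target (IQN invented for the example)
iscsiadm -m node -T iqn.2010-01.org.example:tank-vol1 \
         -p 192.168.1.50:3260 --login

# The LUN then appears as a normal block device (e.g. /dev/sdb),
# reached over the guest's virtual NIC rather than through a
# virtual storage adapter.
```

Actually *booting* that way additionally needs iBFT/PXE support or an initramfs that brings up the NIC and initiator, which is where the migration question gets interesting.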

Or is this difficult enough that nobody using vmware does it, and anyone
who would bother is already on Xen with in-house scripts instead of
vmware black-box proprietary crap?


It seems to me a native NFS guest would go much easier on the DDT (the
ZFS dedup table).  I found it frustrating that I could not change the
block size of XFS: it is locked at 4kB.
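My reasoning: with the guest mounting NFS straight from the ZFS box, the dedup unit is the dataset recordsize, which is tunable, instead of a guest filesystem block size fixed at format time inside a zvol or VMDK. A sketch with the standard zfs commands (pool and dataset names are made up):

```shell
# Dataset exported over NFS to the guests (names hypothetical)
zfs create tank/vmdata

# Larger, aligned records mean fewer, bigger DDT entries once dedup is on
zfs set recordsize=128K tank/vmdata
zfs set dedup=on tank/vmdata
zfs set sharenfs=on tank/vmdata
```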

I would guess there is still no vIB adapter, so if you want to use SRP
you are stuck presenting IB storage to guests with the vmware virtual
SCSI card.

but I don't know whether the common wisdom, ``TOE is a worthless
gimmick; modern network cards and TCP stacks are as good as iSCSI
cards,'' still applies when the adapters are virtual ones, so I'd be
interested to hear from someone running guests this way (without
virtual storage adapters).
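On that point, one way to see what a (virtual) NIC actually claims to offload is ethtool inside the guest; the interface name here is an assumption:

```shell
# Show offload features the NIC reports; checksum and segmentation
# offload are the ones that matter for iSCSI/NFS-over-TCP traffic
ethtool -k eth0 | grep -i 'segmentation\|checksum'
```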

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss