I strongly disagree with that proposal; it goes against the core meaning of 
domain types.
Domain types are the abstraction between VDSM and storage backends.
The posixfs domain uses, as its name suggests, only POSIX file operations, so 
it can work on any POSIX-compliant backend.
Adding another abstraction on top of that just makes the type irrelevant.
This, by the way, is unrelated to internal code reuse, as two domain types can 
use the same internal object (see posixfs, which actually uses the nfsSD code).
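To sketch the distinction in Python — class names here are purely illustrative, not the actual VDSM classes:

```python
# Hypothetical sketch: two user-visible domain *types* backed by one
# internal implementation. Class names are illustrative only.

class NfsStorageDomain(object):
    """Shared file-based domain implementation (mount + POSIX file ops)."""
    def __init__(self, mount_point):
        self.mount_point = mount_point

    def volume_path(self, name):
        return "%s/%s" % (self.mount_point, name)

# posixfs is a distinct domain type for the user, but internally it can
# reuse the NFS implementation. If the implementations diverge later,
# only this mapping changes; the user-visible type does not.
class PosixFsStorageDomain(NfsStorageDomain):
    pass

DOMAIN_TYPES = {
    "nfs": NfsStorageDomain,
    "posixfs": PosixFsStorageDomain,
}
```

The point is that the mapping from type name to implementation is an internal detail the user never sees.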

The major difference is that in the future, when we separate the NFS 
implementation from the posixfs implementation, the user will not have to know 
that the internal objects changed.
Whereas with what you are suggesting, the user would have to migrate from 
posixfs to the new specialized type.

As a side note, domains already have too many properties (class, role) that we 
are removing. Adding more properties should be done only as a last resort.

----- Original Message -----
> From: "Deepak C Shetty" <deepa...@linux.vnet.ibm.com>
> To: "VDSM Project Development" <vdsm-devel@lists.fedorahosted.org>, "Bharata 
> B Rao" <bhar...@linux.vnet.ibm.com>
> Sent: Friday, July 20, 2012 8:29:06 AM
> Subject: [vdsm] RFC: Proposal to support network disk type in PosixFS
> Hello,
>      I am proposing a method for VDSM to exploit disks of 'network' type
> under PosixFS.
> Although I am taking Gluster as the storage backend example, it should
> apply to any other backends (that support network disk type) as well.
> Currently under PosixFS, the design is to mount the 'server:/export'
> and use that as the storage domain.
> The libvirt XML generated for such a disk is something like below...
> <disk device="disk" snapshot="no" type="file">
> <source
> file="/rhev/data-center/8fe261ea-43c2-4635-a08a-ccbafe0cde0e/4f31ea5c-c01e-4578-8353-8897b2d691b4/images/c94c9cf2-fa1c-4e43-8c77-f222dbfb032d/eff4db09-1fde-43cd-a75b-34054a64182b"/>
> <target bus="ide" dev="hda"/>
> <serial>c94c9cf2-fa1c-4e43-8c77-f222dbfb032d</serial>
> <driver cache="none" error_policy="stop" io="threads" name="qemu"
> type="raw"/>
> </disk>
> This works well, but does not help exploit the gluster block backend
> of
> QEMU, since the QEMU cmdline generated is -drive
> file='/rhev/data-center/....'
> Gluster fits as a network block device in QEMU, similar to the ceph and
> sheepdog backends that QEMU already has.
> The proposed libvirt XML for Gluster based disks is ... (WIP)
> <disk type='network' device='disk'>
>        <driver name='qemu' type='raw'/>
>        <source protocol="gluster" name="volname:imgname">
>            <host name='server' port='xxx'/>
>        </source>
>        <target dev='vda' bus='virtio'/>
> </disk>
> This causes libvirt to generate a QEMU cmdline like: -drive
> file=gluster:server@port:volname:imgname. The imgname is relative to the
> gluster mount point.
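To make the mapping explicit, here is a tiny sketch of how the proposed XML fields translate into that -drive argument. build_drive_arg() is a hypothetical helper, not part of libvirt or VDSM:

```python
# Illustrative only: how the fields of the proposed network-disk XML map
# onto the QEMU -drive argument described above.

def build_drive_arg(protocol, host, port, name):
    # <source protocol="gluster" name="volname:imgname">
    #     <host name='server' port='xxx'/>
    # </source>
    # becomes: file=gluster:server@port:volname:imgname
    return "file=%s:%s@%s:%s" % (protocol, host, port, name)

print(build_drive_arg("gluster", "server", "24007", "volname:imgname"))
# -> file=gluster:server@24007:volname:imgname
```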
> I am proposing the below to help VDSM exploit disks as network devices
> under PosixFS.
> Here is a code snippet (taken from a vdsm standalone script) of how a
> storage domain & VM are created in VDSM....
> # When storage domain is mounted
> gluster_conn = "kvmfs01-hs22:dpkvol" # gluster_server:volume_name
> vdsOK(s.connectStorageServer(SHAREDFS_DOMAIN, "my gluster mount",
> [dict(id=1, connection=gluster_conn, vfs_type="glusterfs",
> mnt_options="")]))
> # do other things...createStoragePool, SPM start etc...
> ...
> ...
> # Now create a VM
> vmId = str(uuid.uuid4())
> vdsOK(
>      s.create(dict(vmId=vmId,
>                    drives=[dict(poolID=spUUID, domainID=sdUUID,
> imageID=imgUUID, volumeID=volUUID, disk_type="network",
> protocol="gluster", connection=gluster_conn)], # Proposed way
>                    #drives=[dict(poolID=spUUID, domainID=sdUUID,
> imageID=imgUUID, volumeID=volUUID)], # Existing way
>                    memSize=256,
>                    display="vnc",
>                    vmName="vm-backed-by-gluster",
>                   )
>              )
> )
> 1) User (engine in ovirt case) passes disk_type, protocol &
> connection
> keywords as depicted above. NOTE: disk_type is used instead of just
> type
> to avoid confusion with driver_type
>      -- protocol and connection are already available to the User, as
> he/she used them as part of connectStorageServer ( connection and
> vfs_type )
>      -- disk_type is something that User chooses instead of default
> (which is file type)
> 2) Based on these extra keywords passed, the getXML() of 'class
> Drive'
> in libvirtvm.py can be modified to generate <disk type='network'...>
> as
> shown above.
> Some parsing would be needed to extract the server and volname. The
> imgname relative to the gluster mount point can be extracted from
> drive['path'], which holds the fully qualified path.
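A minimal sketch of that parsing step, assuming a connection string of the form "server:volname" and a drive path under the mount point (parse_network_drive is a hypothetical helper, and the paths below are made-up examples):

```python
# Hypothetical helper sketching the parsing step described above: split
# the connection string ("server:volname") and derive the imgname
# relative to the gluster mount point from drive['path'].

def parse_network_drive(connection, drive_path, mount_point):
    server, volname = connection.split(":", 1)
    # drive_path holds the fully qualified path; strip the mount point
    # prefix to get the image name relative to the mount.
    imgname = drive_path[len(mount_point):].lstrip("/")
    return server, volname, imgname

server, volname, imgname = parse_network_drive(
    "kvmfs01-hs22:dpkvol",
    "/rhev/data-center/mnt/kvmfs01-hs22:_dpkvol/images/img1",
    "/rhev/data-center/mnt/kvmfs01-hs22:_dpkvol")
# server == "kvmfs01-hs22", volname == "dpkvol", imgname == "images/img1"
```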
> 3) Since these keywords are drive specific, the User can choose which
> drives should use the network protocol vs. file. Not passing these
> keywords defaults to file, which is what happens today.
> This approach would help VDSM support network disk types under PosixFS
> and give the User the ability to choose file or network disk types on a
> per-drive basis.
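To make step 2 concrete, here is a rough sketch of what a modified Drive.getXML() might emit for a network-type drive. It uses plain xml.dom.minidom for illustration; the actual libvirtvm.py builds XML with its own helpers, and the dev/bus values are just examples:

```python
# Rough illustration of the <disk type='network'> XML from step 2.
# Uses xml.dom.minidom for the sketch; not the actual vdsm code.
import xml.dom.minidom

def network_disk_xml(protocol, host, port, name):
    doc = xml.dom.minidom.Document()
    disk = doc.createElement("disk")
    disk.setAttribute("type", "network")
    disk.setAttribute("device", "disk")

    driver = doc.createElement("driver")
    driver.setAttribute("name", "qemu")
    driver.setAttribute("type", "raw")
    disk.appendChild(driver)

    source = doc.createElement("source")
    source.setAttribute("protocol", protocol)
    source.setAttribute("name", name)  # "volname:imgname"
    host_el = doc.createElement("host")
    host_el.setAttribute("name", host)
    host_el.setAttribute("port", port)
    source.appendChild(host_el)
    disk.appendChild(source)

    target = doc.createElement("target")
    target.setAttribute("dev", "vda")
    target.setAttribute("bus", "virtio")
    disk.appendChild(target)
    return disk.toxml()

print(network_disk_xml("gluster", "server", "24007", "volname:imgname"))
```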
> I will post an RFC patch soon ( awaiting libvirt changes ); comments
> welcome.
> thanx,
> deepak
> _______________________________________________
> vdsm-devel mailing list
> vdsm-devel@lists.fedorahosted.org
> https://fedorahosted.org/mailman/listinfo/vdsm-devel