(top posting)

Hello All,
I am posting a VDSM hook example that helps exploit the QEMU-GlusterFS native integration from VDSM.


Since the last time I posted on this thread, there have been some changes to the GlusterFS-based image/file specification for QEMU. This was done based on discussions with the GlusterFS folks. Bharata (in CC) is primarily working on this.

The latest QEMU way of specifying an image/file served by GlusterFS is as below...
    -drive file=gluster:server@port:volname:imagename,format=gluster

Here it takes volname (instead of volumefile) and server@port as additional parameters.
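To make the format concrete, here is a throwaway sketch (not part of VDSM; the server/port/volname/image values below are just placeholders) of how the -drive value is composed:

```python
def gluster_drive_spec(server, port, volname, imagename, fmt="gluster"):
    # New-style spec: gluster:server@port:volname:imagename, plus format=
    return "file=gluster:%s@%s:%s:%s,format=%s" % (
        server, port, volname, imagename, fmt)

# e.g. the kind of value my hook (attached below) ends up generating:
spec = gluster_drive_spec("127.0.0.1", 0, "dpkvol", "/images/img/vol")
```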

I have been able to write a sample VDSM stand-alone script and a VDSM hook, which work together to create a VM that exploits QEMU's native GlusterFS options (as depicted above).

(see attached: glusterfs_strg_domain.py & 55_qemu_gluster.py)

A few important points to note...

1) Quite a few things in the attached example .py's are hardcoded for my env, but they show that things work from a VDSM perspective.

2) Pre-req: the vdsmd service is started and a gluster volume is set up and started. The gluster volume used in the example is `kvmfs01-hs22:dpkvol`, where `kvmfs01-hs22` is the hostname and `dpkvol` is the GlusterFS volname.

3) Copy 55_qemu_gluster.py to the /usr/libexec/vdsm/hooks/before_vm_start/ directory.

4) Run `python glusterfs_strg_domain.py` -- This should create a blank vmdisk on the gluster mount point and create a VM that boots from that blank vmdisk using the -drive qemu option as depicted above, thus exploiting QEMU's gluster block backend support.

4a) While creating the VM, I pass a custom arg (`use_qemu_gluster` in this case), which causes my VDSM hook to be invoked.

4b) The hook replaces the existing <disk> xml tag (generated as a normal file path pointing to the gluster mount point) with `-drive file=gluster:server@port:volname:imagename,format=gluster`, using the <qemu:commandline> tag support of libvirt.

4c) It also adds an <emulator> tag pointing to my custom qemu, which has gluster block backend support.

4d) libvirt native support for GlusterFS is not there yet; once it is present, the hook can be changed/modified to exploit the right libvirt tags for the same.

5) If all goes fine :), one should be able to see the VM getting created, and from VNC it should be stuck at "No boot device found", which is expected, since the VDSM stand-alone script creates a new volume (a file in this case) as the vmdisk, which is a blank disk.

6) I have tried extending the hook to add -cdrom <path/to/iso>, boot from the cdrom and install the OS on the gluster-based vmdisk as part of the VM execution, which also works fine.

7) Since the scenario works fine from a VDSM stand-alone script, it should work from the oVirt side as well, provided the steps necessary to register the custom arg (`use_qemu_gluster` in this case) with oVirt and to supply that custom arg as part of the VM create step are followed.
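The XML surgery described in 4b/4c boils down to the following (a stripped-down sketch against a toy domain document; the attached hook does the same against the full VDSM-generated XML, with my env's values hardcoded -- the host/volume/path values here are placeholders):

```python
from xml.dom import minidom

# Toy domain XML standing in for the VDSM-generated one.
domxml = minidom.parseString(
    '<domain type="kvm"><devices>'
    '<disk device="disk" type="file">'
    '<source file="/rhev/data-center/mnt/host:vol/sd/images/img/vol"/>'
    '</disk></devices></domain>')

domain = domxml.getElementsByTagName('domain')[0]
devices = domxml.getElementsByTagName('devices')[0]
disk = devices.getElementsByTagName('disk')[0]

# Enable libvirt's qemu namespace, append <qemu:commandline> with the
# -drive gluster option, then drop the original <disk> element.
domain.setAttribute('xmlns:qemu',
                    'http://libvirt.org/schemas/domain/qemu/1.0')
cmdline = domxml.createElement('qemu:commandline')
for val in ('-drive',
            'file=gluster:host@0:vol:/images/img/vol,format=gluster'):
    arg = domxml.createElement('qemu:arg')
    arg.setAttribute('value', val)
    cmdline.appendChild(arg)
domain.appendChild(cmdline)
devices.removeChild(disk)

new_xml = domxml.toxml()
```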

I would like comments/feedback on the VDSM hook approach, and suggestions on how to improve the hook implementation, especially for some of the stuff that is hardcoded.

I am sure a VDSM hook is not the ideal way to add this functionality in VDSM, so I would request inputs from the experts on this list on what would be a better way in VDSM to exploit QEMU-GlusterFS native integration. Ideally, based on the storage domain type and options used, there should be a way in VDSM to modify the libvirt XML formed.

Appreciate feedback/suggestions.

thanx,
deepak



On 07/05/2012 05:24 PM, Deepak C Shetty wrote:
Hello All,
    Any updates/comments on this mail, anybody?

More comments/questions inline below....
Would appreciate any response that can help me here.

thanx,
deepak

On 06/27/2012 06:44 PM, Deepak C Shetty wrote:
Hello,
Recently there were patches posted in qemu-devel to support gluster as a block backend for qemu.

This introduced a new way of specifying the drive location to qemu as ...
-drive file=gluster:<volumefile>:<image name>

where...
volumefile is the gluster volume file name ( say gluster volume is pre-configured on the host )
    image name is the name of the image file on the gluster mount point

I wrote a vdsm standalone script using SHAREDFS ( which maps to PosixFs ) taking cues from http://www.ovirt.org/wiki/Vdsm_Standalone
The conndict passed to connectStorageServer is as below...
[dict(id=1, connection="kvmfs01-hs22:dpkvol", vfs_type="glusterfs", mnt_options="")]

Here note that 'dpkvol' is the name of the gluster volume

I am able to create and invoke a VM backed by an image file residing on the gluster mount.

But since this is the SHAREDFS way, the qemu -drive cmdline generated via VDSM is ... -drive file=/rhev/datacentre/mnt/.... -- which eventually softlinks to the image file on the gluster mount point.

I was looking to write a vdsm hook to be able to change the above to ....
-drive file=gluster:<volumefile>:<image name>

which means I would need access to some of the conndict params inside the hook, esp. the 'connection' to extract the volume name.

1) Looking at the current VDSM code, I don't see a way for the hook to know anything about the storage domain setup. So the only way is to have the user pass a custom param which provides the path to the volumefile & image, and use it in the hook. Is there a better way? Can I use the vdsm gluster plugin support inside the hook to determine the volfile from the volname, assuming I only take the volname as the custom param, and determine the imagename from the existing <source file=...> tag (the basename is the image name)? Wouldn't it be better to provide a way for hooks to access (read-only) storage domain parameters, so that they can use them to implement the hook logic in a saner way?

2) In talking to Eduardo, it seems there are discussions going on to see how prepareVolumePath and prepareImage could be exploited to fit gluster-based (and in future other types of) images. I am not very clear on the image and volume code of vdsm; frankly, it's very complex and hard to understand due to the lack of comments.
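For (1) above, the derivation I have in mind, assuming the hook could somehow see the conndict 'connection' string and the existing <source file=...> value, is roughly this (illustrative only; the path below is a made-up example in the usual VDSM layout):

```python
import os

def derive_gluster_params(connection, source_file):
    # connection is "host:volname", as passed to connectStorageServer;
    # source_file is the value of the <source file=...> disk attribute.
    host, volname = connection.split(":", 1)
    # The basename of the source file is the volume/image file name.
    imagename = os.path.basename(source_file)
    return host, volname, imagename

host, vol, img = derive_gluster_params(
    "kvmfs01-hs22:dpkvol",
    "/rhev/data-center/mnt/kvmfs01-hs22:dpkvol/sd/images/img/volUUID")
```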

I would appreciate it if someone could guide me on the best way to achieve my goal (-drive file=gluster:<volumefile>:<image name>) here. Any short-term solutions, if not a perfect solution, are also appreciated, so that I can at least have a working setup where I just run my VDSM stand-alone script and my qemu cmdline using gluster:... is generated.

Currently I am using the <qemu:commandline> tag facility of libvirt to inject the needed qemu options, hardcoding the volname and imagename, but I would like to do this based on the conndict passed by the user when creating the SHAREDFS domain.


I am using a VDSM hook to customise the libvirt xml to add the -drive file=gluster:.... cmdline option, but am facing issues as below... NOTE: I am using libvirt's generic qemu:commandline tag facility to add my needed qemu options.

1) I replace the existing <disk> tag with my new qemu:commandline tag to introduce -drive file=gluster:....

This is what I add in my vdsm hook...
<qemu:commandline>
<qemu:arg value="-drive"/>
<qemu:arg value="file=gluster:/var/lib/glusterd/vols/dpkvol/dpkvol-qemu.vol:/d536ca42-9dd2-40a2-bd45-7e5c67751698/images/e9d31bc2-9fb6-4803-aa88-5563229aad41/1c3463aa-be2c-4405-8888-7283b166e981,format=gluster"/>
</qemu:commandline></domain>

In this case the qemu process is created (as seen from ps aux) but the VM is in a stopped state; vdsm does not start it, and using virsh I cannot either -- it says "unable to acquire some lock".
There is no way I can force-start it from the vdscli cmdline either.
From the vdsm.log, all I can see is up to the point where vdsm dumps the libvirt xml... then nothing happens.

In other cases (when I am not using this custom cmdline and the standard <disk> tag is present), I see the below msgs in vdsm.log after it dumps the libvirt xml...

libvirtEventLoop::DEBUG::2012-07-05 13:52:17,780::libvirtvm::2409::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`1eb2b3f7-a319-44fe-8263-fd6e770db983`::event Started detail 0 opaque None
Thread-49::DEBUG::2012-07-05 13:52:17,819::utils::329::vm.Vm::(start) vmId=`1eb2b3f7-a319-44fe-8263-fd6e770db983`::Start statistics collection
Thread-51::DEBUG::2012-07-05 13:52:17,819::utils::358::vm.Vm::(run) vmId=`1eb2b3f7-a319-44fe-8263-fd6e770db983`::Stats thread started
Thread-51::DEBUG::2012-07-05 13:52:17,821::task::588::TaskManager.Task::(_updateState) Task=`f66ac43a-1528-491c-bdee-37112dac536c`::moving from state init -> state preparing
Thread-51::INFO::2012-07-05 13:52:17,822::logUtils::37::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID='a75b80f8-eb6d-4a01-b57c-66d62db2d867', spUUID='763d7ee5-de1e-4cd3-8af8-654865b2476d', imgUUID='57956795-dae6-4895-ab4f-bf6a95af9bf5', volUUID='ab4532d2-fc9f-4a1e-931a-fb901b7648e3', options=None)
Thread-51::DEBUG::2012-07-05 13:52:17,822::resourceManager::175::ResourceManager.Request::(__init__) ResName=`Storage.a75b80f8-eb6d-4a01-b57c-66d62db2d867`ReqID=`e7e10625-1d49-49f1-8bdb-fc649069fd88`::Request was made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at 'registerResource'
Thread-51::DEBUG::2012-07-05 13:52:17,823::resourceManager::486::ResourceManager::(registerResource) Trying to register resource 'Storage.a75b80f8-eb6d-4a01-b57c-66d62db2d867' for lock type 'shared'
Thread-49::DEBUG::2012-07-05 13:52:17,823::vmChannels::144::vds::(register) Add fileno 15 to listener's channels.
Thread-51::DEBUG::2012-07-05 13:52:17,823::resourceManager::528::ResourceManager::(registerResource) Resource 'Storage.a75b80f8-eb6d-4a01-b57c-66d62db2d867' is free. Now locking as 'shared' (1 active user)
Thread-51::DEBUG::2012-07-05 13:52:17,824::resourceManager::212::ResourceManager.Request::(grant) ResName=`Storage.a75b80f8-eb6d-4a01-b57c-66d62db2d867`ReqID=`e7e10625-1d49-49f1-8bdb-fc649069fd88`::Granted request
Thread-51::DEBUG::2012-07-05 13:52:17,825::task::817::TaskManager.Task::(resourceAcquired) Task=`f66ac43a-1528-491c-bdee-37112dac536c`::_resourcesAcquired: Storage.a75b80f8-eb6d-4a01-b57c-66d62db2d867 (shared)
Thread-51::DEBUG::2012-07-05 13:52:17,825::task::978::TaskManager.Task::(_decref) Task=`f66ac43a-1528-491c-bdee-37112dac536c`::ref 1 aborting False
Thread-49::WARNING::2012-07-05 13:52:17,826::libvirtvm::1547::vm.Vm::(_readPauseCode) vmId=`1eb2b3f7-a319-44fe-8263-fd6e770db983`::_readPauseCode unsupported by libvirt vm
Thread-51::DEBUG::2012-07-05 13:52:17,827::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for ab4532d2-fc9f-4a1e-931a-fb901b7648e3
Thread-51::DEBUG::2012-07-05 13:52:17,829::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for ab4532d2-fc9f-4a1e-931a-fb901b7648e3

So somehow the above stuff does not happen when I add my custom qemu cmdline tag from the VDSM hook. Any hints on why this could happen?

2) I replaced the existing <disk> tag with a new <disk> tag as below
(notice the file=.... stuff):

<disk device="disk" snapshot="no" type="file">
<source file="gluster:/var/lib/glusterd/vols/dpkvol/dpkvol-qemu.vol:/cc849437-9350-4cf6-93c8-a99a94dec3f0/images/0ddc6f2a-81fd-41ca-a9d1-7eb063d04f52/f3a37d01-769e-485f-928c-6885d80a58e2,format=gluster"/>
<target bus="ide" dev="hda"/>
<serial>0ddc6f2a-81fd-41ca-a9d1-7eb063d04f52</serial>
<driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
</disk>

Here I get the error as seen in vdsm.log...

Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 570, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/libvirtvm.py", line 1364, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 82, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2420, in createXML
    if ret is None: raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: Unable to allow access for disk path gluster:/var/lib/glusterd/vols/dpkvol/dpkvol-qemu.vol:/cc849437-9350-4cf6-93c8-a99a94dec3f0/images/0ddc6f2a-81fd-41ca-a9d1-7eb063d04f52/f3a37d01-769e-485f-928c-6885d80a58e2,format=gluster: No such file or directory

Thread-49::DEBUG::2012-07-05 16:37:16,919::vm::920::vm.Vm::(setDownStatus) vmId=`c399f851-c23f-4f64-a323-559d0a66b9cc`::Changed state to Down: Unable to allow access for disk path gluster:/var/lib/glusterd/vols/dpkvol/dpkvol-qemu.vol:/cc849437-9350-4cf6-93c8-a99a94dec3f0/images/0ddc6f2a-81fd-41ca-a9d1-7eb063d04f52/f3a37d01-769e-485f-928c-6885d80a58e2,format=gluster: No such file or directory

3) I then tried replacing the existing <disk> with qemu:commandline & adding a cdrom as the boot element...

<disk device="cdrom" snapshot="no" type="file">
<source file="/home/deepakcs/Fedora-16-x86_64-Live-Desktop.iso" startupPolicy="optional"/>
<target bus="ide" dev="hdc"/>
<serial/>
</disk>

<qemu:commandline>
<qemu:arg value="-drive"/>
<qemu:arg value="file=gluster:/var/lib/glusterd/vols/dpkvol/dpkvol-qemu.vol:/d536ca42-9dd2-40a2-bd45-7e5c67751698/images/e9d31bc2-9fb6-4803-aa88-5563229aad41/1c3463aa-be2c-4405-8888-7283b166e981,format=gluster"/>
</qemu:commandline></domain>

I get an error from vdsm as below...

Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 570, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/libvirtvm.py", line 1364, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 82, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2420, in createXML
    if ret is None: raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error unsupported configuration: Readonly leases are not supported






#!/usr/bin/python
# 55_qemu_gluster.py -- before_vm_start hook: replace the <disk> element
# with a qemu:commandline -drive file=gluster:... option.

import os
import sys
import hooking

if 'use_qemu_gluster' in os.environ:
    sys.stderr.write(' QEMU-GLUSTER HOOK CALLED -- Entering\n')

    domxml = hooking.read_domxml()
    domain = domxml.getElementsByTagName('domain')[0]

    # Replace disk with qemu_cmdline, since we need our stuff in place of
    # disk. Before removing it, extract the gluster-mount-relative path of
    # the image file.
    device = domxml.getElementsByTagName('devices')[0]
    disk = device.getElementsByTagName('disk')[0]
    disk_src = disk.getElementsByTagName('source')[0]
    disk_src_img_file = disk_src.getAttribute('file')

    # Strip the leading path components, based on the knowledge of how vdsm
    # forms the path, as a pre-req.
    img_file_path_list = disk_src_img_file.split("/")
    img_file_rel_path = "/".join(img_file_path_list[4:])

    # Generate the qemu gluster option using libvirt's qemu ns support.
    domain.setAttribute('xmlns:qemu',
                        'http://libvirt.org/schemas/domain/qemu/1.0')

    qemu_cmdline = domxml.createElement('qemu:commandline')

    qemu_arg1 = domxml.createElement('qemu:arg')
    qemu_arg1.setAttribute('value', '-drive')

    qemu_arg2 = domxml.createElement('qemu:arg')
    # NOTE: server@port and volname are hardcoded for my env.
    qemu_attr_file = ("file=gluster:127.0.0.1@0:dpkvol:/" +
                      img_file_rel_path + ",format=gluster")
    qemu_arg2.setAttribute('value', qemu_attr_file)

    qemu_cmdline.appendChild(qemu_arg1)
    qemu_cmdline.appendChild(qemu_arg2)

    domain.appendChild(qemu_cmdline)

    # Now remove the disk element.
    device.removeChild(disk)

    # Set <emulator> to point at my custom qemu, which has gluster block
    # backend support (path hardcoded for my env).
    emulator = domxml.createElement('emulator')
    emulator.appendChild(domxml.createTextNode(
        '/home/bharata/qemu/x86_64-softmmu/qemu-system-x86_64'))
    device.appendChild(emulator)

    hooking.write_domxml(domxml)

    sys.stderr.write(' QEMU-GLUSTER HOOK CALLED -- Leaving\n')

#!/usr/bin/python

import sys
import uuid
import time

sys.path.append('/usr/share/vdsm')

import vdscli
from storage.sd import SHAREDFS_DOMAIN, DATA_DOMAIN, ISO_DOMAIN
from storage.volume import RAW_FORMAT, COW_FORMAT, PREALLOCATED_VOL, SPARSE_VOL, LEAF_VOL, BLANK_UUID

spUUID = str(uuid.uuid4())
sdUUID = str(uuid.uuid4())
imgUUID = str(uuid.uuid4())
volUUID = str(uuid.uuid4())

print "spUUID = %s"%spUUID
print "sdUUID = %s"%sdUUID
print "imgUUID = %s"%imgUUID
print "volUUID = %s"%volUUID

gluster_conn = "kvmfs01-hs22:dpkvol"

s = vdscli.connect()

masterVersion = 1
hostID = 1

def vdsOK(d):
    print d
    if d['status']['code']:
        raise Exception(str(d))
    return d

def waitTask(s, taskid):
    while vdsOK(s.getTaskStatus(taskid))['taskStatus']['taskState'] != 'finished':
        time.sleep(3)
    vdsOK(s.clearTask(taskid))

vdsOK(s.connectStorageServer(SHAREDFS_DOMAIN, "my gluster mount", [dict(id=1, connection=gluster_conn, vfs_type="glusterfs", mnt_options="")]))

vdsOK(s.createStorageDomain(SHAREDFS_DOMAIN, sdUUID, "my gluster domain", gluster_conn, DATA_DOMAIN, 0))

vdsOK(s.createStoragePool(SHAREDFS_DOMAIN, spUUID, "my gluster pool", sdUUID, [sdUUID], masterVersion))

# connect to an existing pool, and become pool manager.
vdsOK(s.connectStoragePool(spUUID, hostID, "scsikey", sdUUID, masterVersion))
tid = vdsOK(s.spmStart(spUUID, -1, -1, -1, 0))['uuid']
waitTask(s, tid)

sizeGiB = 8000000

tid = vdsOK(s.createVolume(sdUUID, spUUID, imgUUID, sizeGiB,
                           RAW_FORMAT, SPARSE_VOL, LEAF_VOL,
                           volUUID, "glustervol",
                           BLANK_UUID, BLANK_UUID))['uuid']
waitTask(s, tid)

vmId = str(uuid.uuid4())

vdsOK(
    s.create(dict(vmId=vmId,
                  drives=[dict(poolID=spUUID, domainID=sdUUID, imageID=imgUUID, volumeID=volUUID)],
                  memSize=256,
                  display="vnc",
                  vmName="vm-backed-by-gluster",
                  custom={"use_qemu_gluster":1},
                 )
            )
)

_______________________________________________
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel
