Hello All,
    Any updates/comments on this mail, anybody?

More comments/questions inline below.
I would appreciate any response that can help me here.

thanx,
deepak

On 06/27/2012 06:44 PM, Deepak C Shetty wrote:
Hello,
Recently, patches were posted on qemu-devel to support gluster as a block backend for qemu.

This introduced a new way of specifying the drive location to qemu as ...
-drive file=gluster:<volumefile>:<image name>

where...
    volumefile is the gluster volume file name (assuming the gluster volume is pre-configured on the host)
    image name is the name of the image file on the gluster mount point

I wrote a vdsm standalone script using SHAREDFS (which maps to PosixFs), taking cues from http://www.ovirt.org/wiki/Vdsm_Standalone
The conndict passed to connectStorageServer is as below...
[dict(id=1, connection="kvmfs01-hs22:dpkvol", vfs_type="glusterfs", mnt_options="")]

Here note that 'dpkvol' is the name of the gluster volume
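
For reference, the relevant part of my standalone script looks roughly like the below (a simplified sketch assuming the vdscli module shipped with vdsm; the domain type constant and the spUUID are placeholders from my setup, so read it as illustrative rather than exact):

    import uuid
    from vdsm import vdscli

    POSIXFS_DOMAIN = 6              # SHAREDFS/PosixFs domain type constant in my vdsm tree
    spUUID = str(uuid.uuid4())      # storage pool UUID used by the rest of the script

    conn_list = [dict(id=1,
                      connection="kvmfs01-hs22:dpkvol",   # <server>:<gluster volume name>
                      vfs_type="glusterfs",
                      mnt_options="")]

    s = vdscli.connect()            # XMLRPC proxy to the local vdsmd
    print s.connectStorageServer(POSIXFS_DOMAIN, spUUID, conn_list)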

I am able to create and start a VM backed by an image file residing on the gluster mount.

But since this is the SHAREDFS way, the qemu -drive cmdline generated via VDSM is ... -drive file=/rhev/datacentre/mnt/.... -- which eventually resolves (via symlink) to the image file on the gluster mount point.

I was looking to write a vdsm hook to be able to change the above to ....
-drive file=gluster:<volumefile>:<image name>

which means I would need access to some of the conndict params inside the hook, esp. the 'connection' to extract the volume name.

1) Looking at the current VDSM code, I don't see a way for the hook to know anything about the storage domain setup. So the only way is to have the user pass a custom param which provides the path to the volumefile & image, and use that inside the hook (a sketch of what I mean follows below). Is there a better way? Can I use the vdsm gluster plugin support inside the hook to determine the volfile from the volname, assuming I only take the volname as the custom param, and determine the image name from the existing <source file=...> tag (the basename is the image name)? Wouldn't it be better to provide a way for hooks to access (read-only) the storage domain parameters, so that they can use them to implement the hook logic in a saner way?
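
To make (1) concrete, the kind of hook I have in mind is sketched below. It assumes a hypothetical 'gluster_volname' custom property (vdsm exports custom VM properties to hooks as environment variables) and a hardcoded volfile location, which is exactly the part I would like to derive from the storage domain / conndict instead:

    #!/usr/bin/python
    # before_vm_start hook -- minimal sketch of the idea, not a working implementation.
    import os
    import hooking

    domxml = hooking.read_domxml()

    # Hypothetical custom property; the user would pass gluster_volname=dpkvol
    # at VM creation time and it shows up here as an environment variable.
    volname = os.environ.get('gluster_volname')
    if volname:
        disk = domxml.getElementsByTagName('disk')[0]
        source = disk.getElementsByTagName('source')[0]
        image = source.getAttribute('file')   # the /rhev/... path prepared by vdsm

        # This is exactly the information I wish the hook could get from the
        # storage domain parameters instead of hardcoding the glusterd layout.
        volfile = '/var/lib/glusterd/vols/%s/%s-qemu.vol' % (volname, volname)

        # Point the drive at the image via the gluster block backend, either by
        # rewriting <source file=...> as below or by injecting a
        # <qemu:commandline> block (both attempts are described further down).
        source.setAttribute('file', 'gluster:%s:%s' % (volfile, image))
        hooking.write_domxml(domxml)

The 'gluster_volname' property name and the volfile path above are placeholders for illustration; deriving them cleanly is the point of question (1).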

2) In talking to Eduardo, it seems there are discussions going on to see how prepareVolumePath and prepareImage could be exploited to fit gluster (and, in future, other types of) images. I am not very clear on the image and volume code of vdsm; frankly, it's very complex and hard to understand due to the lack of comments.

I would appreciate it if someone could guide me on the best way to achieve my goal (-drive file=gluster:<volumefile>:<image name>) here. Any short-term solutions, even if not perfect, are also appreciated, so that I can at least have a working setup where I just run my VDSM standalone script and my qemu cmdline using gluster:... is generated.

Currently I am using the <qemu:commandline> tag facility of libvirt to inject the needed qemu options, hardcoding the volname and imagename, but I would like to do this based on the conndict passed by the user when creating the SHAREDFS domain.


I am using a VDSM hook to customise the libvirt xml and add the -drive file=gluster:.... cmdline option, but I am facing the issues below. NOTE: I am using libvirt's generic qemu:commandline tag facility to add the qemu options I need.

1) I replace the existing <disk> tag with my new qemu:commandline tag to introduce -drive file=gluster:...

This is what I add in my vdsm hook:
<qemu:commandline>
<qemu:arg value="-drive"/>
<qemu:arg value="file=gluster:/var/lib/glusterd/vols/dpkvol/dpkvol-qemu.vol:/d536ca42-9dd2-40a2-bd45-7e5c67751698/images/e9d31bc2-9fb6-4803-aa88-5563229aad41/1c3463aa-be2c-4405-8888-7283b166e981,format=gluster"/>
</qemu:commandline></domain>
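
For reference, the hook builds that block roughly as follows (a minidom sketch; note the qemu namespace declaration that the qemu:arg elements need on the <domain> element):

    # Sketch of how the hook injects the above block (after hooking.read_domxml()).
    # 'drive_spec' is the full file=gluster:...,format=gluster string shown above.
    domain = domxml.getElementsByTagName('domain')[0]
    domain.setAttribute('xmlns:qemu', 'http://libvirt.org/schemas/domain/qemu/1.0')

    cmdline = domxml.createElement('qemu:commandline')
    for value in ('-drive', drive_spec):
        arg = domxml.createElement('qemu:arg')
        arg.setAttribute('value', value)
        cmdline.appendChild(arg)
    domain.appendChild(cmdline)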

In this case the qemu process is created (as seen from ps aux) but the VM is in the stopped state; vdsm does not start it, and I cannot start it using virsh either, it says 'unable to acquire some lock'.
There is no way I can force-start it from the vdscli cmdline either.
From vdsm.log, all I can see is up to the point where vdsm dumps the libvirt xml... then nothing happens.

In other cases (when I am not using this custom cmdline and the standard <disk> tag is present), I see the below msgs in vdsm.log after it dumps the libvirt xml:

libvirtEventLoop::DEBUG::2012-07-05 13:52:17,780::libvirtvm::2409::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`1eb2b3f7-a319-44fe-8263-fd6e770db983`::event Started detail 0 opaque None
Thread-49::DEBUG::2012-07-05 13:52:17,819::utils::329::vm.Vm::(start) vmId=`1eb2b3f7-a319-44fe-8263-fd6e770db983`::Start statistics collection
Thread-51::DEBUG::2012-07-05 13:52:17,819::utils::358::vm.Vm::(run) vmId=`1eb2b3f7-a319-44fe-8263-fd6e770db983`::Stats thread started
Thread-51::DEBUG::2012-07-05 13:52:17,821::task::588::TaskManager.Task::(_updateState) Task=`f66ac43a-1528-491c-bdee-37112dac536c`::moving from state init -> state preparing
Thread-51::INFO::2012-07-05 13:52:17,822::logUtils::37::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID='a75b80f8-eb6d-4a01-b57c-66d62db2d867', spUUID='763d7ee5-de1e-4cd3-8af8-654865b2476d', imgUUID='57956795-dae6-4895-ab4f-bf6a95af9bf5', volUUID='ab4532d2-fc9f-4a1e-931a-fb901b7648e3', options=None)
Thread-51::DEBUG::2012-07-05 13:52:17,822::resourceManager::175::ResourceManager.Request::(__init__) ResName=`Storage.a75b80f8-eb6d-4a01-b57c-66d62db2d867`ReqID=`e7e10625-1d49-49f1-8bdb-fc649069fd88`::Request was made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at 'registerResource'
Thread-51::DEBUG::2012-07-05 13:52:17,823::resourceManager::486::ResourceManager::(registerResource) Trying to register resource 'Storage.a75b80f8-eb6d-4a01-b57c-66d62db2d867' for lock type 'shared'
Thread-49::DEBUG::2012-07-05 13:52:17,823::vmChannels::144::vds::(register) Add fileno 15 to listener's channels.
Thread-51::DEBUG::2012-07-05 13:52:17,823::resourceManager::528::ResourceManager::(registerResource) Resource 'Storage.a75b80f8-eb6d-4a01-b57c-66d62db2d867' is free. Now locking as 'shared' (1 active user)
Thread-51::DEBUG::2012-07-05 13:52:17,824::resourceManager::212::ResourceManager.Request::(grant) ResName=`Storage.a75b80f8-eb6d-4a01-b57c-66d62db2d867`ReqID=`e7e10625-1d49-49f1-8bdb-fc649069fd88`::Granted request
Thread-51::DEBUG::2012-07-05 13:52:17,825::task::817::TaskManager.Task::(resourceAcquired) Task=`f66ac43a-1528-491c-bdee-37112dac536c`::_resourcesAcquired: Storage.a75b80f8-eb6d-4a01-b57c-66d62db2d867 (shared)
Thread-51::DEBUG::2012-07-05 13:52:17,825::task::978::TaskManager.Task::(_decref) Task=`f66ac43a-1528-491c-bdee-37112dac536c`::ref 1 aborting False
Thread-49::WARNING::2012-07-05 13:52:17,826::libvirtvm::1547::vm.Vm::(_readPauseCode) vmId=`1eb2b3f7-a319-44fe-8263-fd6e770db983`::_readPauseCode unsupported by libvirt vm
Thread-51::DEBUG::2012-07-05 13:52:17,827::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for ab4532d2-fc9f-4a1e-931a-fb901b7648e3
Thread-51::DEBUG::2012-07-05 13:52:17,829::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for ab4532d2-fc9f-4a1e-931a-fb901b7648e3

So somehow the above stuff does not happen when I add my custom qemu cmdline tag from the VDSM hook. Any hints on why this could happen?

2) I replaced the existing <disk> tag with a new <disk> tag as below
(notice the file=... value).

<disk device="disk" snapshot="no" type="file">
<source file="gluster:/var/lib/glusterd/vols/dpkvol/dpkvol-qemu.vol:/cc849437-9350-4cf6-93c8-a99a94dec3f0/images/0ddc6f2a-81fd-41ca-a9d1-7eb063d04f52/f3a37d01-769e-485f-928c-6885d80a58e2,format=gluster"/>
<target bus="ide" dev="hda"/>
<serial>0ddc6f2a-81fd-41ca-a9d1-7eb063d04f52</serial>
<driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
</disk>

Here I get the error below in vdsm.log...

Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 570, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/libvirtvm.py", line 1364, in _run
    self._connection.createXML(domxml, flags),
File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 82, in wrapper
    ret = f(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2420, in createXML if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self) libvirtError: Unable to allow access for disk path gluster:/var/lib/glusterd/vols/dpkvol/dpkvol-qemu.vol:/cc849437-9350-4cf6-93c8-a99a94dec3f0/images/0ddc6f2a-81fd-41ca-a9d1-7eb063d04f52/f3a37d01-769e-485f-928c-6885d80a58e2,format=gluster: No such file or directory Thread-49::DEBUG::2012-07-05 16:37:16,919::vm::920::vm.Vm::(setDownStatus) vmId=`c399f851-c23f-4f64-a323-559d0a66b9cc`::Changed state to Down: Unable to allow access for disk path gluster:/var/lib/glusterd/vols/dpkvol/dpkvol-qemu.vol:/cc849437-9350-4cf6-93c8-a99a94dec3f0/images/0ddc6f2a-81fd-41ca-a9d1-7eb063d04f52/f3a37d01-769e-485f-928c-6885d80a58e2,format=gluster: No such file or directory

3) I then tried replacing the existing <disk> with qemu:commandline and adding a cdrom as the boot element...

<disk device="cdrom" snapshot="no" type="file">
<source file="/home/deepakcs/Fedora-16-x86_64-Live-Desktop.iso" startupPolicy="optional"/>
<target bus="ide" dev="hdc"/>
<serial/>
</disk>

<qemu:commandline>
<qemu:arg value="-drive"/>
<qemu:arg value="file=gluster:/var/lib/glusterd/vols/dpkvol/dpkvol-qemu.vol:/d536ca42-9dd2-40a2-bd45-7e5c67751698/images/e9d31bc2-9fb6-4803-aa88-5563229aad41/1c3463aa-be2c-4405-8888-7283b166e981,format=gluster"/>
</qemu:commandline></domain>

I get the error from vdsm as below...

Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 570, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/libvirtvm.py", line 1364, in _run
    self._connection.createXML(domxml, flags),
File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 82, in wrapper
    ret = f(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2420, in createXML if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self) libvirtError: internal error unsupported configuration: Readonly leases are not supported




