Re: [vdsm] Using vdsm hook to exploit gluster backend of qemu

2012-08-07 Thread Deepak C Shetty

On 07/29/2012 03:47 PM, Dan Kenigsberg wrote:

Deepak,

I know that I am not relating to your main issue (sorry...), but...
I like the idea of a hook mangling the libvirt XML.
Could you (or someone else) contribute such a hook to upstream vdsm?
I'm sure many would be thankful for a hook that accepts a general qemu
command line as a custom property and passes it through to the qemu
command line.




Dan, done.
Please see:

http://gerrit.ovirt.org/6969
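
For reference, a minimal sketch of what such a before_vm_start hook could
look like (assuming vdsm's `hooking` helper module and a custom property
named `qemu_cmdline`; these names are illustrative, and the actual change
under review at the gerrit link may differ):

#!/usr/bin/python
# before_vm_start hook sketch: inject extra qemu arguments passed via a
# 'qemu_cmdline' custom property into the domain XML as <qemu:commandline>.
# Illustrative only; see the gerrit change above for the real patch.
import os
import hooking  # vdsm's hook helper module (reads/writes the domain XML)

QEMU_NS = 'http://libvirt.org/schemas/domain/qemu/1.0'

if 'qemu_cmdline' in os.environ:
    domxml = hooking.read_domxml()
    domain = domxml.getElementsByTagName('domain')[0]
    # libvirt only honours <qemu:commandline> if the namespace is declared
    domain.setAttribute('xmlns:qemu', QEMU_NS)
    cmdline = domxml.createElement('qemu:commandline')
    # naive whitespace split; arguments containing spaces would need quoting
    for token in os.environ['qemu_cmdline'].split():
        arg = domxml.createElement('qemu:arg')
        arg.setAttribute('value', token)
        cmdline.appendChild(arg)
    domain.appendChild(cmdline)
    hooking.write_domxml(domxml)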



Re: [vdsm] Using vdsm hook to exploit gluster backend of qemu

2012-07-30 Thread Deepak C Shetty

On 07/29/2012 05:16 PM, Itamar Heim wrote:

On 07/16/2012 04:07 PM, Deepak C Shetty wrote:


I am sure a VDSM hook is not the ideal way to add this functionality to
VDSM; I would request input from the experts on this list on
what would be a better way in VDSM to exploit the QEMU-GlusterFS native
integration. Ideally, based on the Storage Domain type
and options used, there should be a way in VDSM to modify the libvirt
XML that is formed.


from your discussion with saggi, the recommended approach was a 
gluster storage domain.


Yes, I am working on it. The hooks approach was taken as an intermediate 
step, just to verify from VDSM that the libvirt/qemu side of things works 
fine, and also to help validate the gluster block backend of qemu via VDSM 
(the consumability aspect).


do i understand correctly that there are two ways to consume the images 
via qemu: block based or file based?


From libvirt's perspective, -drive file=/rhev/data-center/.. maps to 
<disk type='file'>, and -drive file=gluster:...:...:<image> maps to 
<disk type='network'>, so there is nothing block based here. I think 
the confusion might arise from the fact that gluster fits in as a 
new block backend in qemu, but from a user perspective it maps to 
either a file-based drive or a network-based drive, depending on how we 
want to use it. Under PosixFS we would end up using the file-based qemu 
drive option; under the Gluster domain (WIP) I plan to add support for 
the network-based qemu drive option.
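
To illustrate the mapping: the file-based form below matches the kind of
<disk> element vdsm generates today, while the network-based form is only
a sketch of what a native gluster disk could look like, modeled on
libvirt's existing network disks (nbd/rbd/sheepdog), since libvirt's
native gluster support is not there yet (paths and names illustrative):

<disk type='file' device='disk'>
  <source file='/rhev/data-center/mnt/kvmfs01-hs22:dpkvol/.../<imagename>'/>
  <target dev='vda' bus='virtio'/>
</disk>

<disk type='network' device='disk'>
  <source protocol='gluster' name='dpkvol/<imagename>'>
    <host name='kvmfs01-hs22' port='0'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>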



would there also be a difference in how these images are provisioned 
(i.e., would this imply gluster_fs and gluster_block storage 
domains, which sounds somewhat like overkill, unless there are very 
good, distinct use cases for this)?


Currently no. For both the PosixFS domain and the GlusterFS domain, the 
pre-req is to have the gluster volume already set up. The input to VDSM is 
the gluster volfile server and image name (and a few others, as being 
decided by the qemu/gluster community). Once the engine has support for 
provisioning Gluster volumes (which I believe is on the way?), the pre-req 
can be met from the engine UI itself.


I am currently working on creating a new GlusterFS domain in VDSM that 
re-uses the PosixFS core and provides the ability to exploit the network 
disk type option of qemu. Going ahead, the implementation of the GlusterFS 
domain can be changed/modified/improved by exploiting the vdsm gluster 
plugin and/or repo engines for a more native implementation, if need be.
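
A rough sketch of the shape this could take (the module and class names
below are guesses for illustration; vdsm's actual file-domain code may be
organised differently):

# glusterSD.py (sketch): a GlusterFS domain reusing the PosixFS/NFS core.
import nfsSD  # vdsm's existing file-based storage domain implementation

GLUSTERFS_DOMAIN = 7  # hypothetical domain-type constant, for illustration

class GlusterStorageDomain(nfsSD.NfsStorageDomain):
    # Same on-disk layout as the PosixFS domain; only the way qemu
    # consumes the image changes (network drive instead of a file path).
    @classmethod
    def getDomainType(cls):
        return GLUSTERFS_DOMAIN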


thanx,
deepak



Re: [vdsm] Using vdsm hook to exploit gluster backend of qemu

2012-07-29 Thread Deepak C Shetty

On 07/29/2012 03:47 PM, Dan Kenigsberg wrote:

Deepak,

I know that I am not relating to your main issue (sorry...), but...
I like the idea of a hook mangling the libvirt XML.
Could you (or someone else) contribute such a hook to upstream vdsm?
I'm sure many would be thankful for a hook that accepts a general qemu
command line as a custom property and passes it through to the qemu
command line.


Dan,
Sure, I remember you asking for this on IRC. It's on my TODO list, 
and I will get to it soon. My priority is the VDSM gluster integration for 
exploiting the gluster backend of qemu, and I am trying all the different 
options possible, hooks being one of them.


thanx,
deepak



Re: [vdsm] Using vdsm hook to exploit gluster backend of qemu

2012-07-29 Thread Itamar Heim

On 07/16/2012 04:07 PM, Deepak C Shetty wrote:


I am sure a VDSM hook is not the ideal way to add this functionality to
VDSM; I would request input from the experts on this list on
what would be a better way in VDSM to exploit the QEMU-GlusterFS native
integration. Ideally, based on the Storage Domain type
and options used, there should be a way in VDSM to modify the libvirt
XML that is formed.


from your discussion with saggi, the recommended approach was a gluster 
storage domain.
do i understand correctly that there are two ways to consume the images 
via qemu: block based or file based?
would there also be a difference in how these images are provisioned 
(i.e., would this imply gluster_fs and gluster_block storage 
domains, which sounds somewhat like overkill, unless there are very 
good, distinct use cases for this)?





Re: [vdsm] Using vdsm hook to exploit gluster backend of qemu

2012-07-29 Thread Dan Kenigsberg
Deepak,

I know that I am not relating to your main issue (sorry...), but...
I like the idea of a hook mangling the libvirt XML.
Could you (or someone else) contribute such a hook to upstream vdsm?
I'm sure many would be thankful for a hook that accepts a general qemu
command line as a custom property and passes it through to the qemu
command line.




Re: [vdsm] Using vdsm hook to exploit gluster backend of qemu

2012-07-16 Thread Deepak C Shetty

(top posting)

Hello All,
I am posting a VDSM hook example that helps exploit the 
QEMU-GlusterFS native integration from VDSM.


Since the last time I posted on this thread, there have been some changes 
to the GlusterFS-based image/file specification for QEMU.
This was done based on discussions with the GlusterFS folks. Bharata (in 
CC) is primarily working on this.


The latest QEMU way of specifying an image/file served by GlusterFS is as 
below...

-drive file=gluster:server@port:volname:imagename,format=gluster

Here it takes volname (instead of volumefile) and server@port as 
additional parameters.
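
For example (illustrative only; the exact image-path syntax was still
being settled at the time), a drive on the volume used later in this mail
could be specified as:

-drive file=gluster:kvmfs01-hs22@0:dpkvol:/path/to/vmdisk.img,format=gluster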


I have been able to successfully write a sample VDSM stand-alone script 
and a VDSM hook which works along with the stand-alone script
to create a VM that exploits QEMU's native GlusterFS options (as 
depicted above).


( see attached: glusterfs_strg_domain.py & 55_qemu_gluster.py)

A few important points to note...

1) Quite a few things in the attached example .py files are hardcoded for 
my env, but they show that things work from a VDSM perspective.


2) Pre-req: the vdsmd service is started, and a gluster volume is set up 
and started. The gluster volume used in the example is
`kvmfs01-hs22:dpkvol`, where `kvmfs01-hs22` is the hostname and `dpkvol` 
is the GlusterFS volname.


3) Copy 55_qemu_gluster.py to the /usr/libexec/vdsm/hooks/before_vm_start/ 
directory.


4) Run `python glusterfs_strg_domain.py` -- this should create a blank 
vmdisk on the gluster mount point and create a VM that boots
from the blank vmdisk using the -drive qemu option as depicted above, 
thus exploiting QEMU's gluster block backend support.


4a) While creating the VM, I pass a custom arg (`use_qemu_gluster` 
in this case), which causes my VDSM hook to be invoked.


4b) The hook replaces the existing <disk> xml tag (generated as a 
normal file path pointing to the gluster mount point)
with the `-drive 
file=gluster:server@port:volname:imagename,format=gluster` option, using 
the <qemu:commandline> tag support of libvirt (see the sketch after this 
list).


4c) It also adds an <emulator> tag to point to my custom qemu, which 
has gluster block backend support.


4d) Currently libvirt native support for GlusterFS is not yet 
there; once it is present, the hook can be changed/modified to
exploit the right libvirt tags for the same.

5) If all goes fine :), one should be able to see the VM getting created, 
and from VNC it should be stuck at "No boot device found",
which is expected, since the VDSM standalone script creates a new Volume 
(a file in this case) as the vmdisk, which is a blank disk.


6) I have tried extending the hook to add -cdrom <iso> and boot 
from the cdrom and install the OS on the Gluster-based vmdisk
as part of the VM execution, which also works fine.

7) Since the scenario works fine from a VDSM standalone script, it 
should work from the oVirt side as well, provided the steps
necessary to register the custom arg (`use_qemu_gluster` in this case) 
with oVirt and supply the custom arg as part
of the VM create step are followed.
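
To make 4b/4c concrete, here is a minimal sketch of what the
disk-replacement logic in a hook like 55_qemu_gluster.py could look like,
assuming vdsm's `hooking` helper module; the volfile server, volume name
and emulator path below are illustrative placeholders, not the exact
contents of the attached script:

#!/usr/bin/python
# Sketch of a before_vm_start hook: swap the file-based <disk> vdsm
# generated for a <qemu:commandline> -drive that uses qemu's gluster
# backend. Server, volume and emulator path are placeholders.
import os
import hooking  # vdsm's hook helper module

QEMU_NS = 'http://libvirt.org/schemas/domain/qemu/1.0'

if 'use_qemu_gluster' in os.environ:
    domxml = hooking.read_domxml()
    domain = domxml.getElementsByTagName('domain')[0]
    devices = domxml.getElementsByTagName('devices')[0]

    # Drop the generated file-based <disk> (it points at the image on the
    # gluster mount under /rhev/...), keeping its basename as the image name.
    disk = domxml.getElementsByTagName('disk')[0]
    src = disk.getElementsByTagName('source')[0]
    image = os.path.basename(src.getAttribute('file'))
    devices.removeChild(disk)

    # Point <emulator> at a qemu build that has the gluster block backend.
    emulator = domxml.getElementsByTagName('emulator')[0]
    emulator.firstChild.data = '/usr/local/bin/qemu-system-x86_64'

    # Re-add the disk as a native gluster drive on the qemu command line.
    domain.setAttribute('xmlns:qemu', QEMU_NS)
    cmdline = domxml.createElement('qemu:commandline')
    drive = 'file=gluster:kvmfs01-hs22@0:dpkvol:%s,format=gluster' % image
    for value in ('-drive', drive):
        arg = domxml.createElement('qemu:arg')
        arg.setAttribute('value', value)
        cmdline.appendChild(arg)
    domain.appendChild(cmdline)
    hooking.write_domxml(domxml)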

I would like comments/feedback on the VDSM hook approach and 
suggestions on how to improve the hook implementation,
especially for some of the stuff that is hardcoded.

I am sure a VDSM hook is not the ideal way to add this functionality to 
VDSM; I would request input from the experts on this list on
what would be a better way in VDSM to exploit the QEMU-GlusterFS native 
integration. Ideally, based on the Storage Domain type
and options used, there should be a way in VDSM to modify the libvirt 
XML that is formed.


Appreciate feedback/suggestions.

thanx,
deepak



On 07/05/2012 05:24 PM, Deepak C Shetty wrote:

Hello All,
Any updates/comments on this mail, anybody?

More comments/questions inline below.
I would appreciate responses which can help me here.

thanx,
deepak

On 06/27/2012 06:44 PM, Deepak C Shetty wrote:

Hello,
Recently there were patches posted in qemu-devel to support 
gluster as a block backend for qemu.


This introduced a new way of specifying the drive location to qemu as ...
-drive file=gluster:<volumefile>:<imagename>

where...
volumefile is the gluster volume file name (say the gluster volume 
is pre-configured on the host), and

imagename is the name of the image file on the gluster mount point.

I wrote a vdsm standalone script using SHAREDFS (which maps to 
PosixFs), taking cues from http://www.ovirt.org/wiki/Vdsm_Standalone

The conndict passed to connectStorageServer is as below...
[dict(id=1, connection="kvmfs01-hs22:dpkvol", vfs_type="glusterfs", 
mnt_options="")]


Here note that 'dpkvol' is the name of the gluster volume

I am able to create and invoke a VM backed by an image file 
residing on the gluster mount.


But since this is the SHAREDFS way, the qemu -drive cmdline generated via 
VDSM is ...
-drive file=/rhev/datacentre/mnt/<mntpoint>/<image> -- which eventually 
softlinks to the image file on the gluster mount point.


I was looking to write a vdsm hook to be able to change the above to ...

-drive file=gluster:<volumefile>:<imagename>

which means I would need access to some of the conndict params inside 
the hook, esp. the 'connection' to extract the volume name.

Re: [vdsm] Using vdsm hook to exploit gluster backend of qemu

2012-07-05 Thread Deepak C Shetty

Hello All,
Any updates/comments on this mail, anybody?

More comments/questions inline below.
I would appreciate responses which can help me here.

thanx,
deepak

On 06/27/2012 06:44 PM, Deepak C Shetty wrote:

Hello,
Recently there were patches posted in qemu-devel to support 
gluster as a block backend for qemu.


This introduced a new way of specifying the drive location to qemu as ...
-drive file=gluster:<volumefile>:<imagename>

where...
volumefile is the gluster volume file name (say the gluster volume is 
pre-configured on the host), and

imagename is the name of the image file on the gluster mount point.

I wrote a vdsm standalone script using SHAREDFS (which maps to 
PosixFs), taking cues from http://www.ovirt.org/wiki/Vdsm_Standalone

The conndict passed to connectStorageServer is as below...
[dict(id=1, connection="kvmfs01-hs22:dpkvol", vfs_type="glusterfs", 
mnt_options="")]


Here note that 'dpkvol' is the name of the gluster volume
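
For context, the connection step in the standalone script looks roughly
like this (a sketch: the vdscli import path, the SHAREDFS domain-type code
and the pool UUID are assumptions based on the vdsm version I am using,
not a definitive recipe):

# Sketch of the standalone flow up to connectStorageServer.
# The domain-type code and UUIDs are illustrative.
from vdsm import vdscli  # import path may differ between vdsm versions

SHAREDFS_DOMAIN = 6  # assumed code for the SHAREDFS/PosixFs domain type
spUUID = '763d7ee5-de1e-4cd3-8af8-654865b2476d'  # example pool UUID

s = vdscli.connect()  # XML-RPC connection to the local vdsmd
conlist = [dict(id=1, connection='kvmfs01-hs22:dpkvol',
                vfs_type='glusterfs', mnt_options='')]
res = s.connectStorageServer(SHAREDFS_DOMAIN, spUUID, conlist)
print res['status']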

I am able to create and invoke a VM backed by an image file 
residing on the gluster mount.


But since this is the SHAREDFS way, the qemu -drive cmdline generated via 
VDSM is ...
-drive file=/rhev/datacentre/mnt/<mntpoint>/<image> -- which eventually 
softlinks to the image file on the gluster mount point.


I was looking to write a vdsm hook to be able to change the above to ...
-drive file=gluster:<volumefile>:<imagename>

which means I would need access to some of the conndict params inside 
the hook, esp. the 'connection' to extract the volume name.


1) Looking at the current VDSM code, I don't see a way for the hook 
to know anything about the storage domain setup. So the only
way is to have the user pass a custom param which provides the path to 
the volumefile & image, and use that in the hook. Is there
a better way? Can I use the vdsm gluster plugin support inside the 
hook to determine the volfile from the volname, assuming I
only take the volname as the custom param, and determine the imagename 
from the existing <disk> tag (the basename of its source file is the
image name)? Wouldn't it be better to provide a way for hooks to 
access (read-only) storage domain parameters, so that they can
use that to implement the hook logic in a saner way? (See the sketch 
after these two points.)

2) Talking to Eduardo, it seems there are discussions going on to 
see how prepareVolumePath and prepareImage could be exploited
to fit gluster-based (and, in future, other types of) images. I am not 
very clear on the image and volume code of vdsm; frankly, it is very
complex and hard to understand due to the lack of comments.
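
As an illustration of point 1), the interim approach could look like this
inside the hook ('gluster_volname' is a hypothetical custom property name,
and the <source file=...> lookup assumes the usual vdsm-generated disk
XML):

# Sketch: take only the volume name as a custom property and recover
# the image name from the <source file=...> path vdsm already generated.
# 'gluster_volname' is a hypothetical property name.
import os
import hooking

domxml = hooking.read_domxml()
volname = os.environ.get('gluster_volname')            # e.g. 'dpkvol'
source = domxml.getElementsByTagName('source')[0]
image = os.path.basename(source.getAttribute('file'))  # basename == image
drive_spec = 'gluster:%s:%s' % (volname, image)
# ... then inject drive_spec via <qemu:commandline> as in the other sketch.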

I would appreciate it if someone could guide me on the best way to 
achieve my goal (-drive file=gluster:<volumefile>:<imagename>)
here. Short-term solutions, even if not perfect, are also 
appreciated, so that I can at least have a working setup where I just
run my VDSM standalone script and my qemu cmdline using gluster:... is 
generated.


Currently I am using the <qemu:commandline> tag facility of libvirt to 
inject the needed qemu options, hardcoding the volname and imagename,
but I would like to do this based on the conndict passed by the user 
when creating the SHAREDFS domain.




I am using a VDSM hook to customise the libvirt xml to add the -drive 
file=gluster:... cmdline option, but am facing issues as below...
NOTE: I am using libvirt's generic qemu:commandline tag facility to 
add my needed qemu options.


1) I replace the existing <disk> tag with my new qemu:commandline tag to 
introduce -drive file=gluster:...


This is what I add in my vdsm hook...

<qemu:commandline>
  <qemu:arg value="-drive"/>
  <qemu:arg value="file=gluster:/var/lib/glusterd/vols/dpkvol/dpkvol-qemu.vol:/d536ca42-9dd2-40a2-bd45-7e5c67751698/images/e9d31bc2-9fb6-4803-aa88-5563229aad41/1c3463aa-be2c-4405--7283b166e981,format=gluster"/>
</qemu:commandline>



In this case the qemu process is created (as seen from ps aux), but the 
VM is in the stopped state; vdsm does not start it, and using virsh I 
cannot start it either -- it says "unable to acquire some lock".

There is no way I can force-start it from the vdscli cmdline either.
From the vdsm.log, all I can see is up to the point where vdsm dumps the 
libvirt xml... then nothing happens.


In other cases (when I am not using this custom cmdline and the 
standard <disk> tag is present), I see the below msgs in vdsm.log 
after it dumps the libvirt xml...


libvirtEventLoop::DEBUG::2012-07-05 
13:52:17,780::libvirtvm::2409::vm.Vm::(_onLibvirtLifecycleEvent) 
vmId=`1eb2b3f7-a319-44fe-8263-fd6e770db983`::event Started detail 0 
opaque None
Thread-49::DEBUG::2012-07-05 13:52:17,819::utils::329::vm.Vm::(start) 
vmId=`1eb2b3f7-a319-44fe-8263-fd6e770db983`::Start statistics collection
Thread-51::DEBUG::2012-07-05 13:52:17,819::utils::358::vm.Vm::(run) 
vmId=`1eb2b3f7-a319-44fe-8263-fd6e770db983`::Stats thread started
Thread-51::DEBUG::2012-07-05 
13:52:17,821::task::588::TaskManager.Task::(_updateState) 
Task=`f66ac43a-1528-491c-bdee-37112dac536c`::moving from state init -> 
state preparing
Thread-51::INFO::2012-07-05 
13:52:17,822::logUtils::37::dispatcher::(wrapper) Run and protect: 
getVolumeSize(sdUUID='a75b80f8-eb6d-4a01-b57c-66d62db2d867', 
spUUID='763d7ee5-de1e-4cd3-8af8-654865b2476d', 
imgUUID='579567