Re: [vdsm] error when run vdsClient

2012-05-09 Thread ShaoHe Feng

On 05/09/2012 05:46 AM, Adam Litke wrote:

On Tue, May 08, 2012 at 11:51:02PM +0300, Dan Kenigsberg wrote:

On Wed, May 09, 2012 at 01:42:45AM +0800, ShaoHe Feng wrote:

$ sudo ./autobuild.sh
built vdsm, and all tests passed.

Then I installed the RPM package,

and started vdsm:
$ sudo systemctl start vdsmd.service

But when I run vdsClient, it fails with an error:

  File "/usr/share/vdsm/vdsClient.py", line 28, in <module>
 from vdsm import vdscli
ImportError: cannot import name vdscli

But if I switch to root, vdsClient works.

I have also noticed this problem.  I have found that changing out of the vdsm
source directory 'fixes' it as well.


$ ls /usr/lib/python2.7/site-packages/vdsm/vdscli.py -al
-rw-r--r--. 1 root root 4113 May  9 01:20
/usr/lib/python2.7/site-packages/vdsm/vdscli.py

What's your $PWD? Maybe you have some vdsm module/package in your
PYTHONPATH that hides the one in site-packages.

Yes.
It is the vdsm build directory.
When I change the working directory, it works.

Adding the following two lines to vdsClient.py also makes it work:
sys.path.remove(os.path.abspath('.'))
sys.path.remove('')

But if I remove only the current path, it does not work.
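
A slightly more defensive variant of the same idea (a sketch only, not the
actual vdsClient.py patch; note that list.remove() raises ValueError when the
entry is absent, so filtering sys.path avoids that):

# Sketch: drop both the empty-string entry and the current directory from
# sys.path before importing, so a local ./vdsm build tree cannot shadow the
# copy installed in site-packages.
import os
import sys

cwd = os.path.abspath('.')
sys.path = [p for p in sys.path if p not in ('', cwd)]

from vdsm import vdscli  # now resolves to the installed package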

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] [node-devel] Still not able to migrate to node

2012-05-09 Thread Mike Burns
On Mon, 2012-05-07 at 17:07 +0200, Michel van Horssen wrote:
 Some extra info:
 
 If I go into virsh on the engine/vdsm and type connect 
 qemu+tls://192.168.10.79/system I get:
 
 error: unable to connect to server at '192.168.10.79:16514': Connection 
 refused
 
 (192.168.10.79 is the nodes IP address)
 
 If I open virsh on the node and type connect qemu+tls://192.168.10.31/system
 there is no error and I have a connection.
 
 (192.168.10.31 is the engine/vdsm's IP address)
  
 This explains why I can migrate towards the engine/vdsm but not towards the 
 node.
 
 Not sure, but it seems to be a certificate issue.
 
 Any pointers where to start looking?
 
 Michel

This strikes me as more of a vdsm problem than ovirt-node directly.
Once node is registered to engine, we hand over all control of libvirt
and networking (among other things) to the engine to manage.  If
migration is failing due to some setting on the node, then vdsm should
probably be changing that setting when it comes online.  
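
One quick way to narrow this down is to probe the node's libvirt TLS port
directly (a sketch only, assuming the default TLS port 16514 and the node
address quoted above); connection refused at this level points at libvirtd
not listening for TLS (listen_tls / --listen) rather than at the certificates:

# Sketch: check whether anything is listening on the libvirt TLS port.
import socket

NODE = '192.168.10.79'   # node IP taken from this thread
PORT = 16514             # default libvirt TLS port

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(5)
try:
    s.connect((NODE, PORT))
    print('port %d is open on %s - check the certificates next' % (PORT, NODE))
except socket.error as e:
    print('cannot reach %s:%d (%s) - is libvirtd listening for TLS?' % (NODE, PORT, e))
finally:
    s.close()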

I'm adding the vdsm-devel list to get them to look at this issue.  

Mike

 
 
 - Original Message -
  From: Michel van Horssen mvanhors...@vluchtelingenwerk.nl
  To: node-de...@ovirt.org
  Sent: Monday, 7 May 2012 16:34:26
  Subject: [node-devel] Still not able to migrate to node
  
  Hi,
  
  I've re-installed my system with a separate DNS just for my test.
  Time on the servers is correct.
  Still migrating a VM from the VDSM on my engine towards a node gives
  me problems.
  
  The vdsm.log on the engine says:
  
  ---
  Thread-203040::ERROR::2012-05-07 15:51:36,794::vm::170::vm.Vm::(_recover) vmId=`95fac60f-49f5-4dcb-8c2d-2dfd926c781b`::operation failed: Failed to connect to remote libvirt URI qemu+tls://192.168.10.79/system
  Thread-203041::DEBUG::2012-05-07 15:51:36,795::libvirtvm::329::vm.Vm::(run) vmId=`95fac60f-49f5-4dcb-8c2d-2dfd926c781b`::migration downtime thread exiting
  Thread-203040::ERROR::2012-05-07 15:51:37,045::vm::234::vm.Vm::(run) vmId=`95fac60f-49f5-4dcb-8c2d-2dfd926c781b`::Traceback (most recent call last):
    File "/usr/share/vdsm/vm.py", line 217, in run
      self._startUnderlyingMigration()
    File "/usr/share/vdsm/libvirtvm.py", line 443, in _startUnderlyingMigration
      None, maxBandwidth)
    File "/usr/share/vdsm/libvirtvm.py", line 483, in f
      ret = attr(*args, **kwargs)
    File "/usr/share/vdsm/libvirtconnection.py", line 79, in wrapper
      ret = f(*args, **kwargs)
    File "/usr/lib64/python2.7/site-packages/libvirt.py", line 971, in migrateToURI2
      if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', dom=self)
  libvirtError: operation failed: Failed to connect to remote libvirt URI qemu+tls://192.168.10.79/system
  ---
  
  Can someone give me a pointer where to look?
  
  Regards,
  Michel
  ___
  node-devel mailing list
  node-de...@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/node-devel
  
 ___
 node-devel mailing list
 node-de...@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/node-devel


___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Storage Device Management in VDSM and oVirt

2012-05-09 Thread Ayal Baron


- Original Message -
 
  This seems interesting.
 
  I am interested in pursuing this further and helping contribute to the
  vdsm-lsm integration. lsm is still in its early stages, but I feel it's the
  right time to start influencing it so that vdsm integration can be smooth.
  My interest mainly lies in how external storage arrays can be integrated
  into oVirt/VDSM and in helping oVirt exploit the array offload features as
  part of the virtualization stack.
 
  I didn't find any oVirt wiki page on this topic, though there is an old
  mailing list thread on vdsm-lsm integration which, when read, brings up
  more issues to discuss :)
  How do the storage repo engine and the possible vdsm services framework
  (I learned about these in a brief chat with Saggi some time back) play a
  role here?
  Maybe Saggi could elaborate here.
 
  Can Provisioning Storage itself be a high-level service, with gluster and
  lsm exposing storage services which vdsm can enumerate and send to the
  oVirt GUI? Is that the idea?
  I'm not sure Provisioning Storage is a clear enough definition, as it
  could cover a lot of possibly unrelated things, but I'd need to understand
  more about what you mean to really be able to comment properly.
 
 
 Well, I was envisioning oVirt as being able to both provision and consume
 storage going forward.
 Provisioning would be through the vdsm-libstoragemgmt (lsm) integration: an
 oVirt user should be able to carve out LUNs and to associate LUN visibility
 with the host(s) of an oVirt cluster, all via libstoragemgmt interfaces.

 With gluster being integrated into vdsm, an oVirt user will soon be able to
 provision and manage gluster volumes, which also falls under provisioning
 storage. Hence I was wondering whether there would be a new tab in oVirt
 for provisioning storage, where gluster (in the near future) and external
 arrays/LUNs (via the vdsm-lsm integration) could be provisioned.


Ok, now that I understand a little more, in general I agree.
First, upstream oVirt already has the ability to provision Gluster (albeit
still in a limited way), and we will definitely need more provisioning
capabilities, including for example setting up LIO on a host and exposing
LUNs that would be available to other hosts/VMs (for one, live storage
migration without shared disks would need this).
Specifically wrt the Provisioning Storage tab, that's more of a design
question, as there are going to be many services we will need to provision,
not all of them specifically around storage, and I'm not sure we'd want a
new tab for every type.


 
 
  Is there any wiki page on this topic which lists the todos on this front,
  which I can start looking at?
  Unfortunately there is not, as we haven't sat down to plan it in depth,
  but you're more than welcome to start it.

  Generally, the idea is as follows:
  Currently vdsm has storage virtualization capabilities, i.e. we've
  implemented a form of thin provisioning, we provide snapshots using qcow,
  etc., without relying on the hardware.  Using lsm we could have feature
  negotiation and, whenever we can offload, do so.  E.g. we could know
  whether a storage array supports thin cloning, whether it supports thick
  cloning, whether a LUN supports thin provisioning, etc.
  In the last example (thin provisioning), when we create a VG on top of a
  thin-provisioned LUN we should create all disk images (LVs) 'preallocated'
  and avoid vdsm's thin provisioning implementation (as it is not needed).
 
 
 I was thinking libstoragemgmt's 'query capability' or a similar interface
 should help vdsm learn the array's capabilities.

that is correct.
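
To make that concrete, here is a rough sketch of how capability-driven
offload decisions might look on the vdsm side. The query function and
capability names below are hypothetical placeholders for illustration, not
the real libstoragemgmt API:

# Sketch only: choose between array offload and vdsm's software (qcow)
# implementation based on a hypothetical capability query.

SOFTWARE = 'qcow'            # vdsm's own implementation
OFFLOAD = 'array-offload'    # let the array do the work

def query_capabilities(array):
    # Placeholder for an lsm-style 'query capability' call; a real
    # integration would ask libstoragemgmt about the given array.
    return {'thin_clone', 'thick_clone', 'thin_provisioning'}

def choose_clone_strategy(array, disabled=frozenset()):
    # Offload when the array supports cloning and the capability is not
    # disabled at the domain level (e.g. its snapshots perform worse than qcow).
    caps = query_capabilities(array) - set(disabled)
    if 'thin_clone' in caps or 'thick_clone' in caps:
        return OFFLOAD
    return SOFTWARE

def lv_allocation_policy(array):
    # On a thin-provisioned LUN, create LVs 'preallocated' and skip vdsm's
    # own thin provisioning, as discussed above.
    if 'thin_provisioning' in query_capabilities(array):
        return 'preallocated'
    return 'sparse'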

 I agree that if the backing LUN is already thin-provisioned, then vdsm
 should not add its own thin provisioning on top of it.  Such use cases need
 to be thought through from the vdsm perspective, and eventually they should
 influence the libstoragemgmt interfaces.

I don't see how it would influence the lsm interfaces.

 
  However, we'd need a mechanism at domain level to 'disable' some of
  the capabilities, so for example if we know that on a specific
  array snapshots are limited or provide poor performance (worse
  than qcow) or whatever, we'd be able to fall back to vdsm's
  software implementation.
 
 
 I was thinking that it's for the user to decide; I'm not sure we can
 auto-detect and automate this.  But I feel this falls under the 'advanced
 use case' category :)
 We can always think about this later, right?

Correct, the mechanism is there to allow the user to decide.

 
 
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


[vdsm] RESTful VM creation

2012-05-09 Thread Adam Litke
I would like to discuss a problem that is going to affect VM creation in the new
REST API.  This topic has come up previously and I want to revive that
discussion because it is blocking a proper implementation of VM.create().

Consider a RESTful VM creation sequence:
  POST /api/vms/define - Define a new VM in the system
  POST /api/vms/<id>/disks/add - Add a new disk to the VM
  POST /api/vms/<id>/cdroms/add - Add a cdrom
  POST /api/vms/<id>/nics/add - Add a NIC
  PUT /api/vms/<id> - Change boot sequence
  POST /api/vms/<id>/start - Boot the VM

Unfortunately this is not possible today with vdsm because a VM must be
fully-specified at the time of creation and it will be started immediately.

As I see it there are two ways forward:

1.) Deviate from a REST model and require a VM resource definition to include
all sub-collections inline.
-- or --
2.) Support storage of VM definitions so that powered-off VMs can be manipulated
by the API.

My preference would be #2 because: it makes the API more closely follow RESTful
principles, it maintains parity with the cluster-level VM manipulation API, and
it makes the API easier to use in standalone mode.

Here is my idea on how this could be accomplished without committing to stateful
host storage.  In the past we have discussed adding an API for storing arbitrary
metadata blobs on the master storage domain.  If this API were available we
could use it to create a transient VM construction site.  Let's walk through
the above RESTful sequence again and see how my idea would work in practice:

* POST /api/vms/define - Define a new VM in the system
A new VM definition would be written to the master storage domain metadata area.

* GET /api/vms/<new-uuid>
The normal 'list' API is consulted as usual.  The VM will not be found there
because it is not yet created.  Next, the metadata area is consulted.  The VM is
found there and will be returned.  The VM state will be 'New'.

* POST /api/vms/<id>/disks/add - Add a new disk to the VM
For 'New' VMs, this will update the VM metadata blob with the new disk
information.  Otherwise, this will call the hotplugDisk API.

* POST /api/vms/<id>/cdroms/add - Add a cdrom
For 'New' VMs, this will update the VM metadata blob with the new cdrom
information.  If we want to support hotplugged CDROMs we can call that API
later.

* POST /api/vms/<id>/nics/add - Add a NIC
For 'New' VMs, this will update the VM metadata blob with the new nic
information.  Otherwise it triggers the hotplugNic API.

* PUT /api/vms/<id> - Change boot sequence
Only valid for 'New' VMs.  Updates the metadata blob according to the parameters
specified.

* POST /api/vms/<id>/start - Boot the VM
Load the metadata from the master storage domain metadata area.  Call the
VM.create() API.  Remove the metadata from the master storage domain.

VDSM will automatically purge old metadata from the master storage domain.  This
could be done whenever a domain is attached as master or deactivated, and also
periodically.
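
For illustration, here is roughly what the client side of that sequence could
look like (a sketch only; the base URL and payload fields are made up, and the
paths simply follow the proposal above):

# Sketch of the proposed RESTful creation flow from a client's point of view.
import json
import requests

BASE = 'http://localhost:8080/api'          # assumed API endpoint
HDRS = {'Content-Type': 'application/json'}

def post(path, payload):
    r = requests.post(BASE + path, data=json.dumps(payload), headers=HDRS)
    r.raise_for_status()
    return r.json()

vm = post('/vms/define', {'name': 'test-vm', 'memory': 2048})   # state: 'New'
vm_id = vm['id']

post('/vms/%s/disks/add' % vm_id, {'size': 10, 'format': 'qcow2'})
post('/vms/%s/cdroms/add' % vm_id, {'iso': 'Fedora-16.iso'})
post('/vms/%s/nics/add' % vm_id, {'network': 'ovirtmgmt'})

requests.put('%s/vms/%s' % (BASE, vm_id),
             data=json.dumps({'boot': ['cdrom', 'disk']}),
             headers=HDRS)                                      # boot sequence

post('/vms/%s/start' % vm_id, {})   # metadata loaded, VM.create() called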

How does this idea sound?  I am certain that it can be improved by those of you
with more experience and different viewpoints.  Thoughts and comments?

-- 
Adam Litke a...@us.ibm.com
IBM Linux Technology Center

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] Need I set Verified when submitting a patch ?

2012-05-09 Thread Ryan Harper
* Mark Wu wu...@linux.vnet.ibm.com [2012-05-09 04:58]:
 Hi Guys,
   I think people always test their patches before submitting, so explicitly
 setting Verified is not necessary.  More importantly, Verified set by the
 committer alone is not convincing enough for code quality assurance.
 What's your opinion?

I believe the Verified check is there to have someone with a different
environment apply the patch and test it.  This gives us some additional
testing to ensure that the patch isn't going to break something before
applying it.

I'm fine with the setting; even without gerrit there are maintainers who
won't take patches that don't have a 'Tested-by:' tag.

 
  Thanks!
  Mark.

 ___
 vdsm-devel mailing list
 vdsm-devel@lists.fedorahosted.org
 https://fedorahosted.org/mailman/listinfo/vdsm-devel


-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ry...@us.ibm.com

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


[vdsm] Re-code /etc/init.d/functions script with Python and move it to vdsm-tool

2012-05-09 Thread Wenyi Gao

Hi All,

I am working on moving the vdsm.init script to vdsm-tool.  But the vdsm.init
script uses some functions from /etc/init.d/functions, so I plan to re-code
/etc/init.d/functions, or the part of it we need, in Python and also move it
into vdsm-tool.  Is that okay?
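
For illustration, a minimal sketch of what a Python replacement for one of
those shell helpers (a pidfile-based status check, similar in spirit to the
'status' function in /etc/init.d/functions) could look like; the names and the
pidfile path are placeholders, not an agreed vdsm-tool interface:

# Sketch only: a Python stand-in for the shell 'status' helper, checking a
# daemon via its pidfile and mirroring the LSB status exit codes.
import errno
import os

VDSM_PIDFILE = '/var/run/vdsm/vdsmd.pid'   # illustrative path

def pid_from_file(pidfile):
    # Return the PID recorded in pidfile, or None if missing/empty/garbled.
    try:
        with open(pidfile) as f:
            data = f.read().strip()
        return int(data) if data else None
    except (IOError, ValueError):
        return None

def daemon_status(pidfile=VDSM_PIDFILE):
    # 0: running, 1: pidfile exists but process is dead, 3: not running.
    pid = pid_from_file(pidfile)
    if pid is None:
        return 3
    try:
        os.kill(pid, 0)               # signal 0: existence check only
    except OSError as e:
        if e.errno == errno.ESRCH:
            return 1                  # stale pidfile
        if e.errno == errno.EPERM:
            return 0                  # exists, but owned by another user
        raise
    return 0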




BR.
Wenyi

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel