Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-13 Thread Dan Kenigsberg
On Mon, Mar 11, 2013 at 12:34:51PM +0200, Dan Kenigsberg wrote:
 On Mon, Mar 11, 2013 at 06:09:56AM -0400, Balamurugan Arumugam wrote:
Rob,

It seems that a bug in vdsm code is hiding the real issue.
Could you do a

 sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py

restart vdsmd, and retry?

Bala, would you send a patch fixing the ParseError issue (and
adding a

Ok, both issues have fixes which are in the ovirt-3.2 git branch.
I believe this deserves a respin of vdsm, as having an undeclared
requirement is impolite.

Federico, Mike, would you take care of that?

Dan.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-13 Thread Federico Simoncelli
----- Original Message -----
 From: Dan Kenigsberg dan...@redhat.com
 To: Balamurugan Arumugam barum...@redhat.com, Federico Simoncelli 
 fsimo...@redhat.com, Mike Burns
 mbu...@redhat.com
 Cc: Rob Zwissler r...@zwissler.org, users@ovirt.org, a...@ovirt.org, 
 Aravinda VK avish...@redhat.com, Ayal
 Baron aba...@redhat.com
 Sent: Wednesday, March 13, 2013 9:03:39 PM
 Subject: Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3
 
 On Mon, Mar 11, 2013 at 12:34:51PM +0200, Dan Kenigsberg wrote:
  On Mon, Mar 11, 2013 at 06:09:56AM -0400, Balamurugan Arumugam
  wrote:
 Rob,
 
 It seems that a bug in vdsm code is hiding the real issue.
 Could you do a
 
  sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py
 
 restart vdsmd, and retry?
 
 Bala, would you send a patch fixing the ParseError issue
 (and
 adding a
 
 Ok, both issues have fixes which are in the ovirt-3.2 git branch.
 I believe this deserves a respin of vdsm, as having an undeclared
 requirement is impolite.
 
 Federico, Mike, would you take care of that?

Since we're at it... I have the feeling that this might be important
enough to be backported to 3.2 too:

http://gerrit.ovirt.org/#/c/12178/

-- 
Federico


Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-13 Thread Dan Kenigsberg
On Wed, Mar 13, 2013 at 04:10:56PM -0400, Federico Simoncelli wrote:
 ----- Original Message -----
  From: Dan Kenigsberg dan...@redhat.com
  To: Balamurugan Arumugam barum...@redhat.com, Federico Simoncelli 
  fsimo...@redhat.com, Mike Burns
  mbu...@redhat.com
  Cc: Rob Zwissler r...@zwissler.org, users@ovirt.org, a...@ovirt.org, 
  Aravinda VK avish...@redhat.com, Ayal
  Baron aba...@redhat.com
  Sent: Wednesday, March 13, 2013 9:03:39 PM
  Subject: Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3
  
  On Mon, Mar 11, 2013 at 12:34:51PM +0200, Dan Kenigsberg wrote:
   On Mon, Mar 11, 2013 at 06:09:56AM -0400, Balamurugan Arumugam
   wrote:
  Rob,
  
  It seems that a bug in vdsm code is hiding the real issue.
  Could you do a
  
   sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py
  
  restart vdsmd, and retry?
  
  Bala, would you send a patch fixing the ParseError issue
  (and
  adding a
  
  Ok, both issues have fixes which are in the ovirt-3.2 git branch.
  I believe this deserves a respin of vdsm, as having an undeclared
  requirement is impolite.
  
  Federico, Mike, would you take care of that?
 
 Since we're at it... I have the feeling that this might be important
 enough to be backported to 3.2 too:
 
 http://gerrit.ovirt.org/#/c/12178/

Yes, it is quite horrible. Could you include that, too?


Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-09 Thread Dan Kenigsberg
On Wed, Mar 06, 2013 at 02:34:10PM +0530, Balamurugan Arumugam wrote:
 On 03/06/2013 01:20 PM, Dan Kenigsberg wrote:
 On Tue, Mar 05, 2013 at 10:08:48AM -0800, Rob Zwissler wrote:
 On Mon, Mar 4, 2013 at 11:46 PM, Dan Kenigsberg dan...@redhat.com wrote:
 Rob,
 
 It seems that a bug in vdsm code is hiding the real issue.
 Could you do a
 
  sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py
 
 restart vdsmd, and retry?
 
 Bala, would you send a patch fixing the ParseError issue (and adding a
 unit test that would have caught it on time)?
 
 Traceback (most recent call last):
File /usr/share/vdsm/BindingXMLRPC.py, line 918, in wrapper
  res = f(*args, **kwargs)
File /usr/share/vdsm/gluster/api.py, line 32, in wrapper
  rv = func(*args, **kwargs)
File /usr/share/vdsm/gluster/api.py, line 56, in volumesList
  return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)}
File /usr/share/vdsm/supervdsm.py, line 81, in __call__
  return callMethod()
File /usr/share/vdsm/supervdsm.py, line 72, in lambda
  **kwargs)
File string, line 2, in glusterVolumeInfo
File /usr/lib64/python2.6/multiprocessing/managers.py, line 740,
 in _callmethod
  raise convert_to_error(kind, result)
 AttributeError: class ElementTree has no attribute 'ParseError'
 
 My guess has led us nowhere, since etree.ParseError is simply missing
 from python 2.6. It is to be seen only in python 2.7!
 
 That's sad, but something *else* is problematic, since we got to this
 error-handling code.
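Dan's point can be demonstrated with a small sketch (illustrative names only,
not actual vdsm code): the tuple in an `except` clause is evaluated lazily,
only when an exception actually propagates out of the `try` body, so a missing
attribute in that tuple stays hidden until some *other* error occurs -- and
then it masks that error with an AttributeError.

```python
class FakeEtree(object):
    """Stands in for Python 2.6's xml.etree, which lacks ParseError."""
    pass

def guarded(fn):
    # The except tuple below is only evaluated if fn() raises. Then the
    # missing FakeEtree.ParseError attribute turns whatever error occurred
    # into an AttributeError, hiding the real problem.
    try:
        return fn()
    except (FakeEtree.ParseError, ValueError):
        return None

guarded(lambda: 42)          # fine: the except clause is never evaluated
# guarded(lambda: int('x'))  # ValueError gets masked by an AttributeError
```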
 
 Could you make another try and temporarily replace ParseError with
 Exception?
 
  sed -i s/etree.ParseError/Exception/ /usr/share/vdsm/gluster/cli.py
 
 (this sed is relative to the original code).
 
 
 A more specific sed is:
 sed -i s/etree.ParseError/SyntaxError/ /usr/share/vdsm/gluster/cli.py

Bala, Aravinda, I have not seen a vdsm patch adding an explicit
dependency on the correct gluster-cli version. Only a change for
this ParseError issue: http://gerrit.ovirt.org/#/c/12829/
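For an explicit runtime guard, something like the following could work (a
sketch only: the helper names are hypothetical, not vdsm API, and the 3.4.0
floor and the `gluster --version` banner format are assumptions taken from
this thread):

```python
import re

MIN_VERSION = (3, 4, 0)  # the glusterfs version this thread says oVirt 3.2 needs

def parse_gluster_version(banner):
    """Parse a 'glusterfs X.Y.Z ...' banner such as `gluster --version` prints."""
    m = re.search(r'glusterfs\s+(\d+)\.(\d+)\.(\d+)', banner)
    if m is None:
        raise ValueError('cannot parse gluster version from %r' % banner)
    return tuple(int(x) for x in m.groups())

def gluster_new_enough(banner, minimum=MIN_VERSION):
    # Tuple comparison gives correct ordering: (3, 3, 1) < (3, 4, 0)
    return parse_gluster_version(banner) >= minimum
```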

Is there anything blocking this? I would really like to clear this
hurdle quickly.

Dan.


Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-07 Thread Dave Neary

Hi Rob,

On 03/06/2013 05:59 PM, Rob Zwissler wrote:

On one hand I like oVirt, I think you guys have done a good job with
this, and it is free software so I don't want to complain.

But on the other hand, if you release a major/stable release (ie:
oVirt 3.2), but it relies on a major/critical component (clustering
filesystem server) that is in alpha, not even beta, but alpha
prerelease form, you really should be up front and communicative about
this.  My searches turned up nothing except an offhand statement from
a GlusterFS developer, nothing from the oVirt team until now.

It is not acceptable to expect people to run something as critical as
a cluster filesystem server in alpha form on anything short of a
development test setup.  Are any other components of oVirt 3.2
dependent on non-stable general release packages?

What is the latest release of oVirt considered to be stable and
considered safe for use on production systems?


It seems like there has been conflation of two things here - I may be 
wrong with what I say, but having checked, I do not believe so.


With oVirt 3.2/Gluster 3.4, you will be able to manage Gluster clusters 
using the oVirt engine. This is a completely new integration, which is 
still not in a production Gluster release.


However, it is still completely fine to use Gluster as storage for an 
oVirt 3.1 or 3.2 managed cluster. The ability to use Gluster easily as a 
storage back-end was added in oVirt 3.1, and as far as I know, there is 
no problem using glusterfs 3.3 as a POSIX storage filesystem for oVirt 3.2.


Vijay, Shireesh, Ayal, is my understanding correct? I am worried that 
we've been giving people the wrong impression here.


Thanks!
Dave.

--
Dave Neary - Community Action and Impact
Open Source and Standards, Red Hat - http://community.redhat.com
Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13


Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-07 Thread Vijay Bellur

On 03/07/2013 04:36 PM, Dave Neary wrote:

Hi Rob,

On 03/06/2013 05:59 PM, Rob Zwissler wrote:

On one hand I like oVirt, I think you guys have done a good job with
this, and it is free software so I don't want to complain.

But on the other hand, if you release a major/stable release (ie:
oVirt 3.2), but it relies on a major/critical component (clustering
filesystem server) that is in alpha, not even beta, but alpha
prerelease form, you really should be up front and communicative about
this.  My searches turned up nothing except an offhand statement from
a GlusterFS developer, nothing from the oVirt team until now.

It is not acceptable to expect people to run something as critical as
a cluster filesystem server in alpha form on anything short of a
development test setup.  Are any other components of oVirt 3.2
dependent on non-stable general release packages?

What is the latest release of oVirt considered to be stable and
considered safe for use on production systems?


It seems like there has been conflation of two things here - I may be
wrong with what I say, but having checked, I do not believe so.

With oVirt 3.2/Gluster 3.4, you will be able to manage Gluster clusters
using the oVirt engine. This is a completely new integration, which is
still not in a production Gluster release.

However, it is still completely fine to use Gluster as storage for an
oVirt 3.1 or 3.2 managed cluster. The ability to use Gluster easily as a
storage back-end was added in oVirt 3.1, and as far as I know, there is
no problem using glusterfs 3.3 as a POSIX storage filesystem for oVirt 3.2.

Vijay, Shireesh, Ayal, is my understanding correct? I am worried that
we've been giving people the wrong impression here.



Yes, your description is right.

Thanks,
Vijay



Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-06 Thread Shireesh Anjal

On 03/05/2013 06:08 AM, Rob Zwissler wrote:

Running CentOS 6.3 with the following VDSM packages from dre's repo:

vdsm-xmlrpc-4.10.3-0.30.19.el6.noarch
vdsm-gluster-4.10.3-0.30.19.el6.noarch
vdsm-python-4.10.3-0.30.19.el6.x86_64
vdsm-4.10.3-0.30.19.el6.x86_64
vdsm-cli-4.10.3-0.30.19.el6.noarch

And the following gluster packages from the gluster repo:

glusterfs-3.3.1-1.el6.x86_64
glusterfs-fuse-3.3.1-1.el6.x86_64
glusterfs-vim-3.2.7-1.el6.x86_64
glusterfs-server-3.3.1-1.el6.x86_64


oVirt 3.2 needs a newer (3.4.0) version of glusterfs, which is currently 
in alpha and hence not available in stable repositories.

http://bits.gluster.org/pub/gluster/glusterfs/3.4.0alpha/

This issue has been reported multiple times now, and I think it needs an 
update to the oVirt 3.2 release notes. Have added a note to this effect at:

http://www.ovirt.org/OVirt_3.2_release_notes#Storage


I get the following errors in vdsm.log:

Thread-1483::DEBUG::2013-03-04
16:35:27,427::BindingXMLRPC::913::vds::(wrapper) client
[10.33.9.73]::call volumesList with () {}
MainProcess|Thread-1483::DEBUG::2013-03-04
16:35:27,429::misc::84::Storage.Misc.excCmd::(lambda)
'/usr/sbin/gluster --mode=script volume info --xml' (cwd None)
MainProcess|Thread-1483::DEBUG::2013-03-04
16:35:27,480::misc::84::Storage.Misc.excCmd::(lambda) SUCCESS: err
= ''; rc = 0
MainProcess|Thread-1483::ERROR::2013-03-04
16:35:27,480::supervdsmServer::80::SuperVdsm.ServerCallback::(wrapper)
Error in wrapper
Traceback (most recent call last):
   File /usr/share/vdsm/supervdsmServer.py, line 78, in wrapper
 return func(*args, **kwargs)
   File /usr/share/vdsm/supervdsmServer.py, line 352, in wrapper
 return func(*args, **kwargs)
   File /usr/share/vdsm/gluster/cli.py, line 45, in wrapper
 return func(*args, **kwargs)
   File /usr/share/vdsm/gluster/cli.py, line 430, in volumeInfo
 except (etree.ParseError, AttributeError, ValueError):
AttributeError: 'module' object has no attribute 'ParseError'
Thread-1483::ERROR::2013-03-04
16:35:27,481::BindingXMLRPC::932::vds::(wrapper) unexpected error
Traceback (most recent call last):
   File /usr/share/vdsm/BindingXMLRPC.py, line 918, in wrapper
 res = f(*args, **kwargs)
   File /usr/share/vdsm/gluster/api.py, line 32, in wrapper
 rv = func(*args, **kwargs)
   File /usr/share/vdsm/gluster/api.py, line 56, in volumesList
 return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)}
   File /usr/share/vdsm/supervdsm.py, line 81, in __call__
 return callMethod()
   File /usr/share/vdsm/supervdsm.py, line 72, in lambda
 **kwargs)
   File string, line 2, in glusterVolumeInfo
   File /usr/lib64/python2.6/multiprocessing/managers.py, line 740,
in _callmethod
 raise convert_to_error(kind, result)
AttributeError: 'module' object has no attribute 'ParseError'

Which corresponds to the following in the engine.log:

2013-03-04 16:34:46,231 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(QuartzScheduler_Worker-86) START,
GlusterVolumesListVDSCommand(HostName = xor-q-virt01, HostId =
b342bf4d-d9e9-4055-b662-462dc2e6bf50), log id: 987aef3
2013-03-04 16:34:46,365 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(QuartzScheduler_Worker-86) Failed in GlusterVolumesListVDS method
2013-03-04 16:34:46,366 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(QuartzScheduler_Worker-86) Error code unexpected and error message
VDSGenericException: VDSErrorException: Failed to
GlusterVolumesListVDS, error = Unexpected exception
2013-03-04 16:34:46,367 ERROR
[org.ovirt.engine.core.vdsbroker.VDSCommandBase]
(QuartzScheduler_Worker-86) Command GlusterVolumesListVDS execution
failed. Exception: VDSErrorException: VDSGenericException:
VDSErrorException: Failed to GlusterVolumesListVDS, error = Unexpected
exception
2013-03-04 16:34:46,369 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(QuartzScheduler_Worker-86) FINISH, GlusterVolumesListVDSCommand, log
id: 987aef3
2013-03-04 16:34:46,370 ERROR
[org.ovirt.engine.core.bll.gluster.GlusterManager]
(QuartzScheduler_Worker-86) Error while refreshing Gluster lightweight
data of cluster qa-cluster1!:
org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
GlusterVolumesListVDS, error = Unexpected exception
at 
org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:168)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.gluster.GlusterManager.runVdsCommand(GlusterManager.java:258)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.gluster.GlusterManager.fetchVolumes(GlusterManager.java:454)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.gluster.GlusterManager.fetchVolumes(GlusterManager.java:440)

Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-06 Thread Balamurugan Arumugam

On 03/05/2013 01:16 PM, Dan Kenigsberg wrote:

On Mon, Mar 04, 2013 at 04:38:50PM -0800, Rob Zwissler wrote:

Running CentOS 6.3 with the following VDSM packages from dre's repo:

vdsm-xmlrpc-4.10.3-0.30.19.el6.noarch
vdsm-gluster-4.10.3-0.30.19.el6.noarch
vdsm-python-4.10.3-0.30.19.el6.x86_64
vdsm-4.10.3-0.30.19.el6.x86_64
vdsm-cli-4.10.3-0.30.19.el6.noarch

And the following gluster packages from the gluster repo:

glusterfs-3.3.1-1.el6.x86_64
glusterfs-fuse-3.3.1-1.el6.x86_64
glusterfs-vim-3.2.7-1.el6.x86_64
glusterfs-server-3.3.1-1.el6.x86_64

I get the following errors in vdsm.log:

Thread-1483::DEBUG::2013-03-04
16:35:27,427::BindingXMLRPC::913::vds::(wrapper) client
[10.33.9.73]::call volumesList with () {}
MainProcess|Thread-1483::DEBUG::2013-03-04
16:35:27,429::misc::84::Storage.Misc.excCmd::(lambda)
'/usr/sbin/gluster --mode=script volume info --xml' (cwd None)
MainProcess|Thread-1483::DEBUG::2013-03-04
16:35:27,480::misc::84::Storage.Misc.excCmd::(lambda) SUCCESS: err
= ''; rc = 0
MainProcess|Thread-1483::ERROR::2013-03-04
16:35:27,480::supervdsmServer::80::SuperVdsm.ServerCallback::(wrapper)
Error in wrapper
Traceback (most recent call last):
   File /usr/share/vdsm/supervdsmServer.py, line 78, in wrapper
 return func(*args, **kwargs)
   File /usr/share/vdsm/supervdsmServer.py, line 352, in wrapper
 return func(*args, **kwargs)
   File /usr/share/vdsm/gluster/cli.py, line 45, in wrapper
 return func(*args, **kwargs)
   File /usr/share/vdsm/gluster/cli.py, line 430, in volumeInfo
 except (etree.ParseError, AttributeError, ValueError):
AttributeError: 'module' object has no attribute 'ParseError'
Thread-1483::ERROR::2013-03-04
16:35:27,481::BindingXMLRPC::932::vds::(wrapper) unexpected error
Traceback (most recent call last):
   File /usr/share/vdsm/BindingXMLRPC.py, line 918, in wrapper
 res = f(*args, **kwargs)
   File /usr/share/vdsm/gluster/api.py, line 32, in wrapper
 rv = func(*args, **kwargs)
   File /usr/share/vdsm/gluster/api.py, line 56, in volumesList
 return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)}
   File /usr/share/vdsm/supervdsm.py, line 81, in __call__
 return callMethod()
   File /usr/share/vdsm/supervdsm.py, line 72, in lambda
 **kwargs)
   File string, line 2, in glusterVolumeInfo
   File /usr/lib64/python2.6/multiprocessing/managers.py, line 740,
in _callmethod
 raise convert_to_error(kind, result)
AttributeError: 'module' object has no attribute 'ParseError'



Rob,

It seems that a bug in vdsm code is hiding the real issue.
Could you do a

 sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py

restart vdsmd, and retry?

Bala, would you send a patch fixing the ParseError issue (and adding a
unit test that would have caught it on time)?



python 2.7 throws ParseError whereas python 2.6 throws SyntaxError. 
Aravinda is sending a fix for it.
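A version-tolerant way to write that except clause, consistent with Bala's
observation, might look like this (a sketch, not necessarily Aravinda's
actual patch): on Python 2.7+, etree.ParseError subclasses SyntaxError, so
falling back to SyntaxError covers both interpreters.

```python
import xml.etree.ElementTree as etree

# Python >= 2.7 has etree.ParseError (a SyntaxError subclass);
# Python 2.6 does not, so fall back to SyntaxError as suggested above.
ParseError = getattr(etree, 'ParseError', SyntaxError)

def volume_info(xml_text):
    try:
        return etree.fromstring(xml_text)
    except ParseError:
        return None  # malformed XML from the gluster CLI
```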


Regards,
Bala



Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-06 Thread Balamurugan Arumugam

On 03/06/2013 01:20 PM, Dan Kenigsberg wrote:

On Tue, Mar 05, 2013 at 10:08:48AM -0800, Rob Zwissler wrote:

On Mon, Mar 4, 2013 at 11:46 PM, Dan Kenigsberg dan...@redhat.com wrote:

Rob,

It seems that a bug in vdsm code is hiding the real issue.
Could you do a

 sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py

restart vdsmd, and retry?

Bala, would you send a patch fixing the ParseError issue (and adding a
unit test that would have caught it on time)?



Traceback (most recent call last):
   File /usr/share/vdsm/BindingXMLRPC.py, line 918, in wrapper
 res = f(*args, **kwargs)
   File /usr/share/vdsm/gluster/api.py, line 32, in wrapper
 rv = func(*args, **kwargs)
   File /usr/share/vdsm/gluster/api.py, line 56, in volumesList
 return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)}
   File /usr/share/vdsm/supervdsm.py, line 81, in __call__
 return callMethod()
   File /usr/share/vdsm/supervdsm.py, line 72, in lambda
 **kwargs)
   File string, line 2, in glusterVolumeInfo
   File /usr/lib64/python2.6/multiprocessing/managers.py, line 740,
in _callmethod
 raise convert_to_error(kind, result)
AttributeError: class ElementTree has no attribute 'ParseError'


My guess has led us nowhere, since etree.ParseError is simply missing
from python 2.6. It is to be seen only in python 2.7!

That's sad, but something *else* is problematic, since we got to this
error-handling code.

Could you make another try and temporarily replace ParseError with
Exception?

 sed -i s/etree.ParseError/Exception/ /usr/share/vdsm/gluster/cli.py

(this sed is relative to the original code).



A more specific sed is:
sed -i s/etree.ParseError/SyntaxError/ /usr/share/vdsm/gluster/cli.py

Regards,
Bala



Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-06 Thread Dan Kenigsberg
On Wed, Mar 06, 2013 at 02:04:29PM +0530, Shireesh Anjal wrote:
 On 03/05/2013 06:08 AM, Rob Zwissler wrote:
 Running CentOS 6.3 with the following VDSM packages from dre's repo:
 
 vdsm-xmlrpc-4.10.3-0.30.19.el6.noarch
 vdsm-gluster-4.10.3-0.30.19.el6.noarch
 vdsm-python-4.10.3-0.30.19.el6.x86_64
 vdsm-4.10.3-0.30.19.el6.x86_64
 vdsm-cli-4.10.3-0.30.19.el6.noarch
 
 And the following gluster packages from the gluster repo:
 
 glusterfs-3.3.1-1.el6.x86_64
 glusterfs-fuse-3.3.1-1.el6.x86_64
 glusterfs-vim-3.2.7-1.el6.x86_64
 glusterfs-server-3.3.1-1.el6.x86_64
 
 oVirt 3.2 needs a newer (3.4.0) version of glusterfs, which is
 currently in alpha and hence not available in stable repositories.
 http://bits.gluster.org/pub/gluster/glusterfs/3.4.0alpha/

Shireesh, this should be specified in vdsm.spec - please patch both
master and ovirt-3.2 branches.
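A hedged sketch of what such a dependency might look like in vdsm.spec (the
exact package name and version floor are assumptions based on this thread,
not the actual patch):

```
# Hypothetical fragment for the vdsm-gluster subpackage in vdsm.spec
%package gluster
Summary: Gluster Plugin for VDSM
Requires: glusterfs-server >= 3.4.0
```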

Beyond that, there's a problem of Python 2.6 missing ParseError.

 
 This issue has been reported multiple times now, and I think it
 needs an update to the oVirt 3.2 release notes. Have added a note to
 this effect at:
 http://www.ovirt.org/OVirt_3.2_release_notes#Storage
 
 I get the following errors in vdsm.log:
 
 Thread-1483::DEBUG::2013-03-04
 16:35:27,427::BindingXMLRPC::913::vds::(wrapper) client
 [10.33.9.73]::call volumesList with () {}
 MainProcess|Thread-1483::DEBUG::2013-03-04
 16:35:27,429::misc::84::Storage.Misc.excCmd::(lambda)
 '/usr/sbin/gluster --mode=script volume info --xml' (cwd None)
 MainProcess|Thread-1483::DEBUG::2013-03-04
 16:35:27,480::misc::84::Storage.Misc.excCmd::(lambda) SUCCESS: err
 = ''; rc = 0
 MainProcess|Thread-1483::ERROR::2013-03-04
 16:35:27,480::supervdsmServer::80::SuperVdsm.ServerCallback::(wrapper)
 Error in wrapper
 Traceback (most recent call last):
File /usr/share/vdsm/supervdsmServer.py, line 78, in wrapper
  return func(*args, **kwargs)
File /usr/share/vdsm/supervdsmServer.py, line 352, in wrapper
  return func(*args, **kwargs)
File /usr/share/vdsm/gluster/cli.py, line 45, in wrapper
  return func(*args, **kwargs)
File /usr/share/vdsm/gluster/cli.py, line 430, in volumeInfo
  except (etree.ParseError, AttributeError, ValueError):
 AttributeError: 'module' object has no attribute 'ParseError'
 Thread-1483::ERROR::2013-03-04
 16:35:27,481::BindingXMLRPC::932::vds::(wrapper) unexpected error
 Traceback (most recent call last):
File /usr/share/vdsm/BindingXMLRPC.py, line 918, in wrapper
  res = f(*args, **kwargs)
File /usr/share/vdsm/gluster/api.py, line 32, in wrapper
  rv = func(*args, **kwargs)
File /usr/share/vdsm/gluster/api.py, line 56, in volumesList
  return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)}
File /usr/share/vdsm/supervdsm.py, line 81, in __call__
  return callMethod()
File /usr/share/vdsm/supervdsm.py, line 72, in lambda
  **kwargs)
File string, line 2, in glusterVolumeInfo
File /usr/lib64/python2.6/multiprocessing/managers.py, line 740,
 in _callmethod
  raise convert_to_error(kind, result)
 AttributeError: 'module' object has no attribute 'ParseError'
 
 Which corresponds to the following in the engine.log:
 
 2013-03-04 16:34:46,231 INFO
 [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
 (QuartzScheduler_Worker-86) START,
 GlusterVolumesListVDSCommand(HostName = xor-q-virt01, HostId =
 b342bf4d-d9e9-4055-b662-462dc2e6bf50), log id: 987aef3
 2013-03-04 16:34:46,365 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
 (QuartzScheduler_Worker-86) Failed in GlusterVolumesListVDS method
 2013-03-04 16:34:46,366 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
 (QuartzScheduler_Worker-86) Error code unexpected and error message
 VDSGenericException: VDSErrorException: Failed to
 GlusterVolumesListVDS, error = Unexpected exception
 2013-03-04 16:34:46,367 ERROR
 [org.ovirt.engine.core.vdsbroker.VDSCommandBase]
 (QuartzScheduler_Worker-86) Command GlusterVolumesListVDS execution
 failed. Exception: VDSErrorException: VDSGenericException:
 VDSErrorException: Failed to GlusterVolumesListVDS, error = Unexpected
 exception
 2013-03-04 16:34:46,369 INFO
 [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
 (QuartzScheduler_Worker-86) FINISH, GlusterVolumesListVDSCommand, log
 id: 987aef3
 2013-03-04 16:34:46,370 ERROR
 [org.ovirt.engine.core.bll.gluster.GlusterManager]
 (QuartzScheduler_Worker-86) Error while refreshing Gluster lightweight
 data of cluster qa-cluster1!:
 org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException:
 org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
 VDSGenericException: VDSErrorException: Failed to
 GlusterVolumesListVDS, error = Unexpected exception
  at 
  org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:168)
 [engine-bll.jar:]
  at 
  org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33)
 [engine-bll.jar:]
  at 

Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-06 Thread Aravinda

Hi,

Sent a patch to handle ParseError attribute issue. vdsm still depends on 
newer(3.4) version of glusterfs, but Python ParseError is fixed.

http://gerrit.ovirt.org/#/c/12752/

--
regards
Aravinda

On 03/06/2013 01:20 PM, Dan Kenigsberg wrote:

On Tue, Mar 05, 2013 at 10:08:48AM -0800, Rob Zwissler wrote:

On Mon, Mar 4, 2013 at 11:46 PM, Dan Kenigsberg dan...@redhat.com wrote:

Rob,

It seems that a bug in vdsm code is hiding the real issue.
Could you do a

 sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py

restart vdsmd, and retry?

Bala, would you send a patch fixing the ParseError issue (and adding a
unit test that would have caught it on time)?

Traceback (most recent call last):
   File /usr/share/vdsm/BindingXMLRPC.py, line 918, in wrapper
 res = f(*args, **kwargs)
   File /usr/share/vdsm/gluster/api.py, line 32, in wrapper
 rv = func(*args, **kwargs)
   File /usr/share/vdsm/gluster/api.py, line 56, in volumesList
 return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)}
   File /usr/share/vdsm/supervdsm.py, line 81, in __call__
 return callMethod()
   File /usr/share/vdsm/supervdsm.py, line 72, in lambda
 **kwargs)
   File string, line 2, in glusterVolumeInfo
   File /usr/lib64/python2.6/multiprocessing/managers.py, line 740,
in _callmethod
 raise convert_to_error(kind, result)
AttributeError: class ElementTree has no attribute 'ParseError'

My guess has led us nowhere, since etree.ParseError is simply missing
from python 2.6. It is to be seen only in python 2.7!

That's sad, but something *else* is problematic, since we got to this
error-handling code.

Could you make another try and temporarily replace ParseError with
Exception?

 sed -i s/etree.ParseError/Exception/ /usr/share/vdsm/gluster/cli.py

(this sed is relative to the original code).

Dan.




Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-06 Thread Rob Zwissler
On Wed, Mar 6, 2013 at 12:34 AM, Shireesh Anjal san...@redhat.com wrote:

 oVirt 3.2 needs a newer (3.4.0) version of glusterfs, which is currently in
 alpha and hence not available in stable repositories.
 http://bits.gluster.org/pub/gluster/glusterfs/3.4.0alpha/

 This issue has been reported multiple times now, and I think it needs an
 update to the oVirt 3.2 release notes. Have added a note to this effect at:
 http://www.ovirt.org/OVirt_3.2_release_notes#Storage


On one hand I like oVirt, I think you guys have done a good job with
this, and it is free software so I don't want to complain.

But on the other hand, if you release a major/stable release (ie:
oVirt 3.2), but it relies on a major/critical component (clustering
filesystem server) that is in alpha, not even beta, but alpha
prerelease form, you really should be up front and communicative about
this.  My searches turned up nothing except an offhand statement from
a GlusterFS developer, nothing from the oVirt team until now.

It is not acceptable to expect people to run something as critical as
a cluster filesystem server in alpha form on anything short of a
development test setup.  Are any other components of oVirt 3.2
dependent on non-stable general release packages?

What is the latest release of oVirt considered to be stable and
considered safe for use on production systems?

Rob


Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-06 Thread Shireesh Anjal

On 03/06/2013 10:29 PM, Rob Zwissler wrote:

On Wed, Mar 6, 2013 at 12:34 AM, Shireesh Anjal san...@redhat.com wrote:

oVirt 3.2 needs a newer (3.4.0) version of glusterfs, which is currently in
alpha and hence not available in stable repositories.
http://bits.gluster.org/pub/gluster/glusterfs/3.4.0alpha/

This issue has been reported multiple times now, and I think it needs an
update to the oVirt 3.2 release notes. Have added a note to this effect at:
http://www.ovirt.org/OVirt_3.2_release_notes#Storage


On one hand I like oVirt, I think you guys have done a good job with
this, and it is free software so I don't want to complain.

But on the other hand, if you release a major/stable release (ie:
oVirt 3.2), but it relies on a major/critical component (clustering
filesystem server) that is in alpha, not even beta, but alpha
prerelease form, you really should be up front and communicative about
this.  My searches turned up nothing except an offhand statement from
a GlusterFS developer, nothing from the oVirt team until now.

It is not acceptable to expect people to run something as critical as
a cluster filesystem server in alpha form on anything short of a
development test setup.  Are any other components of oVirt 3.2
dependent on non-stable general release packages?

What is the latest release of oVirt considered to be stable and
considered safe for use on production systems?


Hi Rob,

Your points are completely valid, and it's my fault (not the oVirt
release team's) for not mentioning this important information when
providing details of gluster-related features to be included in the
oVirt 3.2 release notes. Genuine apologies for the same.


Having said this, I believe the stable release of glusterfs 3.4.0 should 
be coming out very soon (some time this month if I'm correct), which 
will provide some relief.


Regards,
Shireesh



Rob




Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-05 Thread Rob Zwissler
On Mon, Mar 4, 2013 at 11:46 PM, Dan Kenigsberg dan...@redhat.com wrote:
 Rob,

 It seems that a bug in vdsm code is hiding the real issue.
 Could you do a

 sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py

 restart vdsmd, and retry?

 Bala, would you send a patch fixing the ParseError issue (and adding a
 unit test that would have caught it on time)?


 Regards,
 Dan.

Hi Dan, thanks for the quick response.  I did that, and here's what I
get now from the vdsm.log:

MainProcess|Thread-51::DEBUG::2013-03-05
10:03:40,723::misc::84::Storage.Misc.excCmd::(lambda)
'/usr/sbin/gluster --mode=script volume info --xml' (cwd None)
Thread-52::DEBUG::2013-03-05
10:03:40,731::task::568::TaskManager.Task::(_updateState)
Task=`aa1990a1-8016-4337-a8cd-1b62976032a4`::moving from state init ->
state preparing
Thread-52::INFO::2013-03-05
10:03:40,732::logUtils::41::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-52::INFO::2013-03-05
10:03:40,732::logUtils::44::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {'4af726ea-e502-4e79-a47c-6c8558ca96ad':
{'delay': '0.00584101676941', 'lastCheck': '0.2', 'code': 0, 'valid':
True}, 'fc0d44ec-528f-4bf9-8913-fa7043daf43b': {'delay':
'0.0503160953522', 'lastCheck': '0.2', 'code': 0, 'valid': True}}
Thread-52::DEBUG::2013-03-05
10:03:40,732::task::1151::TaskManager.Task::(prepare)
Task=`aa1990a1-8016-4337-a8cd-1b62976032a4`::finished:
{'4af726ea-e502-4e79-a47c-6c8558ca96ad': {'delay': '0.00584101676941',
'lastCheck': '0.2', 'code': 0, 'valid': True},
'fc0d44ec-528f-4bf9-8913-fa7043daf43b': {'delay': '0.0503160953522',
'lastCheck': '0.2', 'code': 0, 'valid': True}}
Thread-52::DEBUG::2013-03-05
10:03:40,732::task::568::TaskManager.Task::(_updateState)
Task=`aa1990a1-8016-4337-a8cd-1b62976032a4`::moving from state
preparing -> state finished
Thread-52::DEBUG::2013-03-05
10:03:40,732::resourceManager::830::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-52::DEBUG::2013-03-05
10:03:40,733::resourceManager::864::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-52::DEBUG::2013-03-05
10:03:40,733::task::957::TaskManager.Task::(_decref)
Task=`aa1990a1-8016-4337-a8cd-1b62976032a4`::ref 0 aborting False
Thread-53::DEBUG::2013-03-05
10:03:40,742::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk hdc stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,742::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk vda stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,742::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk hdc latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,742::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk vda latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,743::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk hdc stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,743::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk vda stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,743::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk hdc latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,743::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk vda latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,744::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk hdc stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,744::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk vda stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,744::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk hdc latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,744::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk vda latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,745::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk hdc stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,745::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk vda stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,745::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk hdc latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,745::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk vda latency not
available
GuestMonitor-xor-q-nis02::DEBUG::2013-03-05
10:03:40,750::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk hdc stats not
available

Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-05 Thread Dan Kenigsberg
On Tue, Mar 05, 2013 at 10:08:48AM -0800, Rob Zwissler wrote:
 On Mon, Mar 4, 2013 at 11:46 PM, Dan Kenigsberg dan...@redhat.com wrote:
  Rob,
 
  It seems that a bug in vdsm code is hiding the real issue.
  Could you do a
 
  sed -i 's/ParseError/ElementTree.ParseError/' /usr/share/vdsm/gluster/cli.py
 
  restart vdsmd, and retry?
 
  Bala, would you send a patch fixing the ParseError issue (and adding a
  unit test that would have caught it on time)?

 Traceback (most recent call last):
   File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
 res = f(*args, **kwargs)
   File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper
 rv = func(*args, **kwargs)
   File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList
 return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)}
   File "/usr/share/vdsm/supervdsm.py", line 81, in __call__
 return callMethod()
   File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda>
 **kwargs)
   File "<string>", line 2, in glusterVolumeInfo
   File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740,
 in _callmethod
 raise convert_to_error(kind, result)
 AttributeError: class ElementTree has no attribute 'ParseError'

My guess has led us nowhere, since etree.ParseError is simply missing
from Python 2.6; it exists only in Python 2.7!

That's sad, but something *else* is problematic, since we got to this
error-handling code.

Could you make another try and temporarily replace ParseError with
Exception?

sed -i s/etree.ParseError/Exception/ /usr/share/vdsm/gluster/cli.py

(this sed is relative to the original code).
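
For reference, a version-tolerant way to handle this situation (a sketch only, not the actual vdsm patch; `parse_volume_xml` is a made-up helper name) is to resolve the parse-error class once at import time, falling back to expat's ExpatError on Python 2.6:

```python
import xml.etree.ElementTree as etree

try:
    ParseError = etree.ParseError              # Python 2.7+ (and 3.x)
except AttributeError:
    # Python 2.6: ElementTree surfaces expat's ExpatError instead
    from xml.parsers.expat import ExpatError as ParseError


def parse_volume_xml(text):
    """Parse gluster CLI XML output; return None on malformed input."""
    try:
        return etree.fromstring(text)
    except (ParseError, AttributeError, ValueError):
        return None
```

With the class resolved up front, the `except` tuple never raises AttributeError itself at exception-handling time.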

Dan.


[Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-04 Thread Rob Zwissler
Running CentOS 6.3 with the following VDSM packages from dre's repo:

vdsm-xmlrpc-4.10.3-0.30.19.el6.noarch
vdsm-gluster-4.10.3-0.30.19.el6.noarch
vdsm-python-4.10.3-0.30.19.el6.x86_64
vdsm-4.10.3-0.30.19.el6.x86_64
vdsm-cli-4.10.3-0.30.19.el6.noarch

And the following gluster packages from the gluster repo:

glusterfs-3.3.1-1.el6.x86_64
glusterfs-fuse-3.3.1-1.el6.x86_64
glusterfs-vim-3.2.7-1.el6.x86_64
glusterfs-server-3.3.1-1.el6.x86_64

I get the following errors in vdsm.log:

Thread-1483::DEBUG::2013-03-04
16:35:27,427::BindingXMLRPC::913::vds::(wrapper) client
[10.33.9.73]::call volumesList with () {}
MainProcess|Thread-1483::DEBUG::2013-03-04
16:35:27,429::misc::84::Storage.Misc.excCmd::(lambda)
'/usr/sbin/gluster --mode=script volume info --xml' (cwd None)
MainProcess|Thread-1483::DEBUG::2013-03-04
16:35:27,480::misc::84::Storage.Misc.excCmd::(lambda) SUCCESS: err
= ''; rc = 0
MainProcess|Thread-1483::ERROR::2013-03-04
16:35:27,480::supervdsmServer::80::SuperVdsm.ServerCallback::(wrapper)
Error in wrapper
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer.py", line 78, in wrapper
return func(*args, **kwargs)
  File "/usr/share/vdsm/supervdsmServer.py", line 352, in wrapper
return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 45, in wrapper
return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 430, in volumeInfo
except (etree.ParseError, AttributeError, ValueError):
AttributeError: 'module' object has no attribute 'ParseError'
Thread-1483::ERROR::2013-03-04
16:35:27,481::BindingXMLRPC::932::vds::(wrapper) unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper
rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList
return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)}
  File "/usr/share/vdsm/supervdsm.py", line 81, in __call__
return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda>
**kwargs)
  File "<string>", line 2, in glusterVolumeInfo
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740,
in _callmethod
raise convert_to_error(kind, result)
AttributeError: 'module' object has no attribute 'ParseError'

Which corresponds to the following in the engine.log:

2013-03-04 16:34:46,231 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(QuartzScheduler_Worker-86) START,
GlusterVolumesListVDSCommand(HostName = xor-q-virt01, HostId =
b342bf4d-d9e9-4055-b662-462dc2e6bf50), log id: 987aef3
2013-03-04 16:34:46,365 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(QuartzScheduler_Worker-86) Failed in GlusterVolumesListVDS method
2013-03-04 16:34:46,366 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(QuartzScheduler_Worker-86) Error code unexpected and error message
VDSGenericException: VDSErrorException: Failed to
GlusterVolumesListVDS, error = Unexpected exception
2013-03-04 16:34:46,367 ERROR
[org.ovirt.engine.core.vdsbroker.VDSCommandBase]
(QuartzScheduler_Worker-86) Command GlusterVolumesListVDS execution
failed. Exception: VDSErrorException: VDSGenericException:
VDSErrorException: Failed to GlusterVolumesListVDS, error = Unexpected
exception
2013-03-04 16:34:46,369 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(QuartzScheduler_Worker-86) FINISH, GlusterVolumesListVDSCommand, log
id: 987aef3
2013-03-04 16:34:46,370 ERROR
[org.ovirt.engine.core.bll.gluster.GlusterManager]
(QuartzScheduler_Worker-86) Error while refreshing Gluster lightweight
data of cluster qa-cluster1!:
org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
GlusterVolumesListVDS, error = Unexpected exception
at 
org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:168)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.gluster.GlusterManager.runVdsCommand(GlusterManager.java:258)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.gluster.GlusterManager.fetchVolumes(GlusterManager.java:454)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.gluster.GlusterManager.fetchVolumes(GlusterManager.java:440)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.gluster.GlusterManager.refreshVolumeData(GlusterManager.java:411)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.gluster.GlusterManager.refreshClusterData(GlusterManager.java:191)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.gluster.GlusterManager.refreshLightWeightData(GlusterManager.java:170)
[engine-bll.jar:]
at sun.reflect.GeneratedMethodAccessor73.invoke(Unknown Source)
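
The failure mode above is easy to reproduce in isolation (a hypothetical minimal sketch, not vdsm code): when a name inside an `except` tuple fails to resolve, Python raises AttributeError at exception-handling time, masking the original error.

```python
class FakeEtree(object):
    """Stands in for Python 2.6's ElementTree module, which lacks ParseError."""
    pass


etree = FakeEtree()


def volume_info(xml_text):
    try:
        raise ValueError("malformed XML: %s" % xml_text)
    except (etree.ParseError, ValueError):
        # Never reached: evaluating the except tuple itself raises
        # AttributeError, hiding the original ValueError.
        return None


try:
    volume_info("garbage")
except AttributeError as err:
    print(err)  # the real ValueError is masked by the AttributeError
```

This is why the traceback reports `AttributeError` rather than the parse failure that actually occurred.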

Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-04 Thread Dan Kenigsberg
On Mon, Mar 04, 2013 at 04:38:50PM -0800, Rob Zwissler wrote:
 Running CentOS 6.3 with the following VDSM packages from dre's repo:
 
 vdsm-xmlrpc-4.10.3-0.30.19.el6.noarch
 vdsm-gluster-4.10.3-0.30.19.el6.noarch
 vdsm-python-4.10.3-0.30.19.el6.x86_64
 vdsm-4.10.3-0.30.19.el6.x86_64
 vdsm-cli-4.10.3-0.30.19.el6.noarch
 
 And the following gluster packages from the gluster repo:
 
 glusterfs-3.3.1-1.el6.x86_64
 glusterfs-fuse-3.3.1-1.el6.x86_64
 glusterfs-vim-3.2.7-1.el6.x86_64
 glusterfs-server-3.3.1-1.el6.x86_64
 
 I get the following errors in vdsm.log:
 
 Thread-1483::DEBUG::2013-03-04
 16:35:27,427::BindingXMLRPC::913::vds::(wrapper) client
 [10.33.9.73]::call volumesList with () {}
 MainProcess|Thread-1483::DEBUG::2013-03-04
 16:35:27,429::misc::84::Storage.Misc.excCmd::(lambda)
 '/usr/sbin/gluster --mode=script volume info --xml' (cwd None)
 MainProcess|Thread-1483::DEBUG::2013-03-04
 16:35:27,480::misc::84::Storage.Misc.excCmd::(lambda) SUCCESS: err
 = ''; rc = 0
 MainProcess|Thread-1483::ERROR::2013-03-04
 16:35:27,480::supervdsmServer::80::SuperVdsm.ServerCallback::(wrapper)
 Error in wrapper
 Traceback (most recent call last):
   File "/usr/share/vdsm/supervdsmServer.py", line 78, in wrapper
 return func(*args, **kwargs)
   File "/usr/share/vdsm/supervdsmServer.py", line 352, in wrapper
 return func(*args, **kwargs)
   File "/usr/share/vdsm/gluster/cli.py", line 45, in wrapper
 return func(*args, **kwargs)
   File "/usr/share/vdsm/gluster/cli.py", line 430, in volumeInfo
 except (etree.ParseError, AttributeError, ValueError):
 AttributeError: 'module' object has no attribute 'ParseError'
 Thread-1483::ERROR::2013-03-04
 16:35:27,481::BindingXMLRPC::932::vds::(wrapper) unexpected error
 Traceback (most recent call last):
   File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
 res = f(*args, **kwargs)
   File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper
 rv = func(*args, **kwargs)
   File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList
 return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)}
   File "/usr/share/vdsm/supervdsm.py", line 81, in __call__
 return callMethod()
   File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda>
 **kwargs)
   File "<string>", line 2, in glusterVolumeInfo
   File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740,
 in _callmethod
 raise convert_to_error(kind, result)
 AttributeError: 'module' object has no attribute 'ParseError'
 

Rob,

It seems that a bug in vdsm code is hiding the real issue.
Could you do a

sed -i 's/ParseError/ElementTree.ParseError/' /usr/share/vdsm/gluster/cli.py

restart vdsmd, and retry?

Bala, would you send a patch fixing the ParseError issue (and adding a
unit test that would have caught it on time)?


Regards,
Dan.