Looks like this might be related to the following bug.
https://bugzilla.redhat.com/show_bug.cgi?id=832693
Thanks
Robert
On 06/16/2012 02:08 PM, Robert Middleswarth wrote:
I am seeing the same thing. I also notice that glusterfs seems to die
every time I try. I am wondering if this could be a glusterfs / f17 issue.
Thanks
Robert
On 06/16/2012 06:27 AM, зоррыч wrote:
Hi.
When I try to create a gluster volume, I get an error (see gluster.png).
vdsm.log:
Thread-219::DEBUG::2012-06-16
06:00:00,758::BindingXMLRPC::872::vds::(wrapper) client
[10.1.20.2]::call volumeCreate with ('sd', ['10.1.20.7:/mht'], 0, 0,
['TCP']) {} flowID [4d45329e]
MainProcess|Thread-219::DEBUG::2012-06-16
06:00:00,765::__init__::1164::Storage.Misc.excCmd::(_log)
'/usr/sbin/gluster --mode=script volume create sd transport TCP
10.1.20.7:/mht' (cwd None)
MainProcess|Thread-219::DEBUG::2012-06-16
06:00:00,880::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS:
<err> = ''; <rc> = 0
MainProcess|Thread-219::DEBUG::2012-06-16
06:00:00,881::__init__::1164::Storage.Misc.excCmd::(_log)
'/usr/sbin/gluster --mode=script volume info' (cwd None)
MainProcess|Thread-219::DEBUG::2012-06-16
06:00:00,959::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS:
<err> = ''; <rc> = 0
Thread-219::ERROR::2012-06-16
06:00:00,960::BindingXMLRPC::891::vds::(wrapper) unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/BindingXMLRPC.py", line 877, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper
rv = func(*args, **kwargs)
File "/usr/share/vdsm/gluster/api.py", line 89, in volumeCreate
return {'uuid': volumeList[volumeName]['uuid']}
KeyError: 'uuid'
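The traceback suggests that the parsed `gluster volume info` output for the new volume has no 'uuid' entry, so the lookup in volumeCreate blows up. A minimal sketch of the failure mode and a more defensive lookup — the `volumeList` shape and the error message are my assumptions based on the traceback, not vdsm's actual code:

```python
# Hypothetical reproduction of the failing access in volumeCreate.
# volumeList maps volume names to fields parsed from `gluster volume info`;
# if the CLI output prints no "Volume ID" line, 'uuid' is never populated.
volumeList = {'sd': {'type': 'Distribute', 'status': 'Created'}}

def volume_uuid(volumeList, volumeName):
    info = volumeList.get(volumeName, {})
    if 'uuid' not in info:
        # Return a clear error instead of raising an unhandled KeyError
        return {'error': "volume '%s' has no uuid in parsed info" % volumeName}
    return {'uuid': info['uuid']}
```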
Thread-220::DEBUG::2012-06-16
06:00:01,225::task::588::TaskManager.Task::(_updateState)
Task=`6272f938-be5c-457e-8982-ec7fe6514ec8`::moving from state init
-> state preparing
Thread-220::INFO::2012-06-16
06:00:01,225::logUtils::37::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-220::INFO::2012-06-16
06:00:01,226::logUtils::39::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {}
Thread-220::DEBUG::2012-06-16
06:00:01,226::task::1172::TaskManager.Task::(prepare)
Task=`6272f938-be5c-457e-8982-ec7fe6514ec8`::finished: {}
Thread-220::DEBUG::2012-06-16
06:00:01,226::task::588::TaskManager.Task::(_updateState)
Task=`6272f938-be5c-457e-8982-ec7fe6514ec8`::moving from state
preparing -> state finished
Thread-220::DEBUG::2012-06-16
06:00:01,226::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-220::DEBUG::2012-06-16
06:00:01,226::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll
requests {}
Thread-220::DEBUG::2012-06-16
06:00:01,226::task::978::TaskManager.Task::(_decref)
Task=`6272f938-be5c-457e-8982-ec7fe6514ec8`::ref 0 aborting False
^C
Command:
[root@noc-3-synt ~]# /usr/sbin/gluster --mode=script volume info
Volume Name: sd
Type: Distribute
Status: Created
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.1.20.7:/mht
[root@noc-3-synt ~]#
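For what it's worth, naively parsing the `volume info` output above (a rough sketch, not vdsm's actual parser) shows there is no UUID field in it at all, which would explain the KeyError on 'uuid':

```python
# The exact CLI output pasted above, minus the shell prompt lines.
output = """\
Volume Name: sd
Type: Distribute
Status: Created
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.1.20.7:/mht
"""

# Split "Key: Value" lines into a dict; "Bricks:" has no value and is skipped.
info = {}
for line in output.splitlines():
    if ': ' in line:
        key, _, value = line.partition(': ')
        info[key.strip()] = value.strip()

# No "Volume ID" line was printed, so no uuid can be extracted.
print('Volume ID' in info)  # → False
```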
Is this a bug? Or am I creating the volume incorrectly?
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users