Re: [ovirt-users] ovirt-3.6 and glusterfs doc

2016-01-29 Thread Bill James
[root@ovirt3 test vdsm]# gluster volume info --remote-host=ovirt3.test.j2noc.com

No volumes present

I thought ovirt was supposed to create the volume. Am I supposed to create
it first?


Thank you for the doc, I'll read through that...




On 1/29/16 1:26 PM, Nir Soffer wrote:

On Fri, Jan 29, 2016 at 11:08 PM, Bill James  wrote:

jsonrpc.Executor/6::DEBUG::2016-01-29 12:58:16,854::task::595::Storage.TaskManager.Task::(_updateState) Task=`e6f93ddd-23c3-4b5b-879a-3c31d4b8d773`::moving from state init -> state preparing
jsonrpc.Executor/6::INFO::2016-01-29 12:58:16,854::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID=u'----', conList=[{u'id': u'----', u'connection': u'ovirt3.test.j2noc.com:/gluster-store/vol1', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '', u'port': u''}], options=None)
jsonrpc.Executor/6::ERROR::2016-01-29 12:58:16,950::hsm::2465::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2462, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 220, in connect
    self.validate()
  File "/usr/share/vdsm/storage/storageServer.py", line 335, in validate
    replicaCount = self.volinfo['replicaCount']
  File "/usr/share/vdsm/storage/storageServer.py", line 331, in volinfo
    self._volinfo = self._get_gluster_volinfo()
  File "/usr/share/vdsm/storage/storageServer.py", line 358, in _get_gluster_volinfo
    return volinfo[self._volname]
KeyError: u'gluster-store/vol1'

This means that the gluster server at ovirt3.test.j2noc.com does not
have a volume named "gluster-store/vol1".

I guess that /gluster-store is the mountpoint and the volume name is "vol1".

On the engine side, you probably need to set the storage domain path to

 ovirt3.test.j2noc.com:/vol1

Can you share with us the output of:

 gluster volume info --remote-host=ovirt3.test.j2noc.com

This should return something like:

 {VOLUMENAME: {'brickCount': BRICKCOUNT,
               'bricks': [BRICK1, BRICK2, ...],
               'options': {OPTION: VALUE, ...},
               'transportType': [TCP, RDMA, ...],
               'uuid': UUID,
               'volumeName': NAME,
               'volumeStatus': STATUS,
               'volumeType': TYPE}, ...}

The fact that this code fails with a KeyError is a bug. We should have
returned a clear error in this case instead of failing with a KeyError, which
shows a generic and unhelpful error message on the engine side.
Please file a bug about it.

I found the document about configuring gluster for ovirt:
http://www.gluster.org/community/documentation/index.php/Virt-store-usecase

Nir


jsonrpc.Executor/6::DEBUG::2016-01-29 12:58:16,950::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs: {}
jsonrpc.Executor/6::INFO::2016-01-29 12:58:16,951::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 100, 'id': u'----'}]}
jsonrpc.Executor/6::DEBUG::2016-01-29 12:58:16,951::task::1191::Storage.TaskManager.Task::(prepare) Task=`e6f93ddd-23c3-4b5b-879a-3c31d4b8d773`::finished: {'statuslist': [{'status': 100, 'id': u'----'}]}
jsonrpc.Executor/6::DEBUG::2016-01-29 12:58:16,951::task::595::Storage.TaskManager.Task::(_updateState) Task=`e6f93ddd-23c3-4b5b-879a-3c31d4b8d773`::moving from state preparing -> state finished




On 1/29/16 1:05 PM, Nir Soffer wrote:

On Fri, Jan 29, 2016 at 9:31 PM, Bill James  wrote:

I'm trying to set up an oVirt 3.6.2 cluster on CentOS 7.2 and am having
problems finding a doc that explains how to set up gluster storage for it.

There was good gluster documentation for setting up volumes for ovirt,
but I cannot find it now.

Sahina, can you point us to this document?


If I try to create a Storage domain with storage type GlusterFS it comes
back with "General Exception".
I'm not using Hosted Engine; the engine is on a separate host by itself.
I have 3 nodes, all running CentOS 7.2.

vdsm.log:

jsonrpc.Executor/0::DEBUG::2016-01-29 11:06:35,023::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.connectStorageServer' in bridge with {u'connectionParams': [{u'id': u'----', u'connection': u'ovirt3.test.j2noc.com:/gluster-store', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '', u'port': u''}], u'storagepoolID': u'----', u'domainType': 7}
jsonrpc.Executor/0::DEBUG::2016-01-29 11:06:35,024::task::595::Storage.TaskManager.Task::(_updateState) Task=`e49fc739-faaf-40f1-b512-868b936fbcc1`::moving from state init ->

Re: [ovirt-users] ovirt-3.6 and glusterfs doc

2016-01-29 Thread Nir Soffer
On Fri, Jan 29, 2016 at 11:29 PM, Bill James  wrote:
> [root@ovirt3 test vdsm]# gluster volume info
> --remote-host=ovirt3.test.j2noc.com
> No volumes present
>
> I thought ovirt was supposed to create the volume. Am I supposed to create it
> first?

Yes, ovirt consumes gluster volumes that were created on the gluster server side;
you create the volume first, and ovirt then creates the vm disk images on it.
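
For example, on one of the gluster nodes you could create and start a
replica 3 volume before adding the storage domain. This is only a rough
sketch - the host names ovirt1/ovirt2 and the brick directories are
placeholders for whatever you actually use:

    gluster volume create vol1 replica 3 \
        ovirt1.test.j2noc.com:/gluster-store/brick1 \
        ovirt2.test.j2noc.com:/gluster-store/brick1 \
        ovirt3.test.j2noc.com:/gluster-store/brick1
    gluster volume start vol1
    gluster volume info vol1

Once "gluster volume info" lists the volume, you can point the storage
domain at it from the engine.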

Nir

>
> Thank you for the doc, I'll read through that...
>
>
>
>
>
> On 1/29/16 1:26 PM, Nir Soffer wrote:
>>
>> On Fri, Jan 29, 2016 at 11:08 PM, Bill James  wrote:
>>>
>>> jsonrpc.Executor/6::DEBUG::2016-01-29
>>> 12:58:16,854::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`e6f93ddd-23c3-4b5b-879a-3c31d4b8d773`::movin
>>> g from state init -> state preparing
>>> jsonrpc.Executor/6::INFO::2016-01-29
>>> 12:58:16,854::logUtils::48::dispatcher::(wrapper) Run and protect:
>>> connectStorageServer(domType=7, spUUID=u'-
>>> ---', conList=[{u'id':
>>> u'----', u'connection':
>>> u'ovirt3.test.j2noc.com:/gluster-store/vol1', u'iqn
>>> ': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs',
>>> u'password':
>>> '', u'port': u''}], options=None)
>>> jsonrpc.Executor/6::ERROR::2016-01-29
>>> 12:58:16,950::hsm::2465::Storage.HSM::(connectStorageServer) Could not
>>> connect to storageServer
>>> Traceback (most recent call last):
>>>File "/usr/share/vdsm/storage/hsm.py", line 2462, in
>>> connectStorageServer
>>>  conObj.connect()
>>>File "/usr/share/vdsm/storage/storageServer.py", line 220, in connect
>>>  self.validate()
>>>File "/usr/share/vdsm/storage/storageServer.py", line 335, in validate
>>>  replicaCount = self.volinfo['replicaCount']
>>>File "/usr/share/vdsm/storage/storageServer.py", line 331, in volinfo
>>>  self._volinfo = self._get_gluster_volinfo()
>>>File "/usr/share/vdsm/storage/storageServer.py", line 358, in
>>> _get_gluster_volinfo
>>>  return volinfo[self._volname]
>>> KeyError: u'gluster-store/vol1'
>>
>> This means that the gluster server at ovirt3.test.j2noc.com does not
>> have a volume named "gluster-store/vol1"
>>
>> I guess that /gluster-store is the mountpoint and the volume name is
>> "vol1".
>>
>> On the engine side, you probably need to set the storage domain path to
>>
>>  ovirt3.test.j2noc.com:/vol1
>>
>> Can you share with us the output of:
>>
>>  gluster volume info --remote-host=ovirt3.test.j2noc.com
>>
>> This should return something like:
>>
>>  {VOLUMENAME: {'brickCount': BRICKCOUNT,
>>'bricks': [BRICK1, BRICK2, ...],
>>'options': {OPTION: VALUE, ...},
>>'transportType': [TCP,RDMA, ...],
>>'uuid': UUID,
>>'volumeName': NAME,
>>'volumeStatus': STATUS,
>>'volumeType': TYPE}, ...}
>>
>> The fact that this code fails with a KeyError is a bug. We should have
>> returned a clear error in this case instead of failing with KeyError,
>> which
>> show a generic and unhelpful error message in the engine side.
>> Please file a bug about it.
>>
>> I found the document about configuring gluster for ovirt:
>>
>> http://www.gluster.org/community/documentation/index.php/Virt-store-usecase
>>
>> Nir
>>
>>> jsonrpc.Executor/6::DEBUG::2016-01-29
>>> 12:58:16,950::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs: {}
>>> jsonrpc.Executor/6::INFO::2016-01-29
>>> 12:58:16,951::logUtils::51::dispatcher::(wrapper) Run and protect:
>>> connectStorageServer, Return response: {'statuslis
>>> t': [{'status': 100, 'id': u'----'}]}
>>> jsonrpc.Executor/6::DEBUG::2016-01-29
>>> 12:58:16,951::task::1191::Storage.TaskManager.Task::(prepare)
>>> Task=`e6f93ddd-23c3-4b5b-879a-3c31d4b8d773`::finished:
>>>   {'statuslist': [{'status': 100, 'id':
>>> u'----'}]}
>>> jsonrpc.Executor/6::DEBUG::2016-01-29
>>> 12:58:16,951::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`e6f93ddd-23c3-4b5b-879a-3c31d4b8d773`::movin
>>> g from state preparing -> state finished
>>>
>>>
>>>
>>>
>>> On 1/29/16 1:05 PM, Nir Soffer wrote:

 On Fri, Jan 29, 2016 at 9:31 PM, Bill James  wrote:
>
> I'm trying to setup a ovirt3.6.2 cluster on centos7.2 and am having
> problems
> finding a doc that explains how to setup gluster storage for it.

 There was good gluster documentation for setting up volumes for ovirt,
 but I cannot find it now.

 Sahina, can you point us to this document?

> If I try to create a Storage domain with storage type GlusterFS it
> comes
> back with "General Exception".
> I'm not using Hosted-engine, engine is on a separate host by its self.
> I have 3 nodes, all running centos7.2.
>
> vdsm.log:
>
> 

[ovirt-users] ovirt-3.6 and glusterfs doc

2016-01-29 Thread Bill James
I'm trying to set up an oVirt 3.6.2 cluster on CentOS 7.2 and am having
problems finding a doc that explains how to set up gluster storage for it.


If I try to create a Storage domain with storage type GlusterFS it comes
back with "General Exception".

I'm not using Hosted Engine; the engine is on a separate host by itself.
I have 3 nodes, all running CentOS 7.2.

vdsm.log:

jsonrpc.Executor/0::DEBUG::2016-01-29 11:06:35,023::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.connectStorageServer' in bridge with {u'connectionParams': [{u'id': u'----', u'connection': u'ovirt3.test.j2noc.com:/gluster-store', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '', u'port': u''}], u'storagepoolID': u'----', u'domainType': 7}
jsonrpc.Executor/0::DEBUG::2016-01-29 11:06:35,024::task::595::Storage.TaskManager.Task::(_updateState) Task=`e49fc739-faaf-40f1-b512-868b936fbcc1`::moving from state init -> state preparing
jsonrpc.Executor/0::INFO::2016-01-29 11:06:35,024::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID=u'----', conList=[{u'id': u'----', u'connection': u'ovirt3.test.j2noc.com:/gluster-store', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '', u'port': u''}], options=None)
jsonrpc.Executor/0::ERROR::2016-01-29 11:06:35,120::hsm::2465::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2462, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 220, in connect
    self.validate()





Re: [ovirt-users] ovirt-3.6 and glusterfs doc

2016-01-29 Thread Nir Soffer
On Fri, Jan 29, 2016 at 9:31 PM, Bill James  wrote:
> I'm trying to setup a ovirt3.6.2 cluster on centos7.2 and am having problems
> finding a doc that explains how to setup gluster storage for it.

There was good gluster documentation for setting up volumes for ovirt,
but I cannot find it now.

Sahina, can you point us to this document?

>
> If I try to create a Storage domain with storage type GlusterFS it comes
> back with "General Exception".
> I'm not using Hosted-engine, engine is on a separate host by its self.
> I have 3 nodes, all running centos7.2.
>
> vdsm.log:
>
> jsonrpc.Executor/0::DEBUG::2016-01-29
> 11:06:35,023::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling
> 'StoragePool.connectStorageServer' in br
> idge with {u'connectionParams': [{u'id':
> u'----', u'connection':
> u'ovirt3.test.j2noc.com:/gluster-store', u'iqn': u'', u'u
> ser': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password':
> '', u'port': u''}], u'storagepoolID':
> u'----', u
> 'domainType': 7}
> jsonrpc.Executor/0::DEBUG::2016-01-29
> 11:06:35,024::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`e49fc739-faaf-40f1-b512-868b936fbcc1`::movin
> g from state init -> state preparing
> jsonrpc.Executor/0::INFO::2016-01-29
> 11:06:35,024::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=7, spUUID=u'-
> ---', conList=[{u'id':
> u'----', u'connection':
> u'ovirt3.test.j2noc.com:/gluster-store', u'iqn': u'
> ', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password':
> '', u'port': u''}], options=None)
> jsonrpc.Executor/0::ERROR::2016-01-29
> 11:06:35,120::hsm::2465::Storage.HSM::(connectStorageServer) Could not
> connect to storageServer
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/hsm.py", line 2462, in connectStorageServer
> conObj.connect()
>   File "/usr/share/vdsm/storage/storageServer.py", line 220, in connect
> self.validate()

The next lines in the log should explain the error - can you send them?

Nir


Re: [ovirt-users] ovirt-3.6 and glusterfs doc

2016-01-29 Thread Bill James
jsonrpc.Executor/6::DEBUG::2016-01-29 12:58:16,854::task::595::Storage.TaskManager.Task::(_updateState) Task=`e6f93ddd-23c3-4b5b-879a-3c31d4b8d773`::moving from state init -> state preparing
jsonrpc.Executor/6::INFO::2016-01-29 12:58:16,854::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID=u'----', conList=[{u'id': u'----', u'connection': u'ovirt3.test.j2noc.com:/gluster-store/vol1', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '', u'port': u''}], options=None)
jsonrpc.Executor/6::ERROR::2016-01-29 12:58:16,950::hsm::2465::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2462, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 220, in connect
    self.validate()
  File "/usr/share/vdsm/storage/storageServer.py", line 335, in validate
    replicaCount = self.volinfo['replicaCount']
  File "/usr/share/vdsm/storage/storageServer.py", line 331, in volinfo
    self._volinfo = self._get_gluster_volinfo()
  File "/usr/share/vdsm/storage/storageServer.py", line 358, in _get_gluster_volinfo
    return volinfo[self._volname]
KeyError: u'gluster-store/vol1'
jsonrpc.Executor/6::DEBUG::2016-01-29 12:58:16,950::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs: {}
jsonrpc.Executor/6::INFO::2016-01-29 12:58:16,951::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 100, 'id': u'----'}]}
jsonrpc.Executor/6::DEBUG::2016-01-29 12:58:16,951::task::1191::Storage.TaskManager.Task::(prepare) Task=`e6f93ddd-23c3-4b5b-879a-3c31d4b8d773`::finished: {'statuslist': [{'status': 100, 'id': u'----'}]}
jsonrpc.Executor/6::DEBUG::2016-01-29 12:58:16,951::task::595::Storage.TaskManager.Task::(_updateState) Task=`e6f93ddd-23c3-4b5b-879a-3c31d4b8d773`::moving from state preparing -> state finished



On 1/29/16 1:05 PM, Nir Soffer wrote:

On Fri, Jan 29, 2016 at 9:31 PM, Bill James  wrote:

I'm trying to set up an oVirt 3.6.2 cluster on CentOS 7.2 and am having
problems finding a doc that explains how to set up gluster storage for it.

There was good gluster documentation for setting up volumes for ovirt,
but I cannot find it now.

Sahina, can you point us to this document?


If I try to create a Storage domain with storage type GlusterFS it comes
back with "General Exception".
I'm not using Hosted Engine; the engine is on a separate host by itself.
I have 3 nodes, all running CentOS 7.2.

vdsm.log:

jsonrpc.Executor/0::DEBUG::2016-01-29 11:06:35,023::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.connectStorageServer' in bridge with {u'connectionParams': [{u'id': u'----', u'connection': u'ovirt3.test.j2noc.com:/gluster-store', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '', u'port': u''}], u'storagepoolID': u'----', u'domainType': 7}
jsonrpc.Executor/0::DEBUG::2016-01-29 11:06:35,024::task::595::Storage.TaskManager.Task::(_updateState) Task=`e49fc739-faaf-40f1-b512-868b936fbcc1`::moving from state init -> state preparing
jsonrpc.Executor/0::INFO::2016-01-29 11:06:35,024::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID=u'----', conList=[{u'id': u'----', u'connection': u'ovirt3.test.j2noc.com:/gluster-store', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '', u'port': u''}], options=None)
jsonrpc.Executor/0::ERROR::2016-01-29 11:06:35,120::hsm::2465::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2462, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 220, in connect
    self.validate()

The next lines in the log should explain the error - can you send them?

Nir




Re: [ovirt-users] ovirt-3.6 and glusterfs doc

2016-01-29 Thread Nir Soffer
On Fri, Jan 29, 2016 at 11:08 PM, Bill James  wrote:
> jsonrpc.Executor/6::DEBUG::2016-01-29
> 12:58:16,854::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`e6f93ddd-23c3-4b5b-879a-3c31d4b8d773`::movin
> g from state init -> state preparing
> jsonrpc.Executor/6::INFO::2016-01-29
> 12:58:16,854::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=7, spUUID=u'-
> ---', conList=[{u'id':
> u'----', u'connection':
> u'ovirt3.test.j2noc.com:/gluster-store/vol1', u'iqn
> ': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password':
> '', u'port': u''}], options=None)
> jsonrpc.Executor/6::ERROR::2016-01-29
> 12:58:16,950::hsm::2465::Storage.HSM::(connectStorageServer) Could not
> connect to storageServer
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/hsm.py", line 2462, in connectStorageServer
> conObj.connect()
>   File "/usr/share/vdsm/storage/storageServer.py", line 220, in connect
> self.validate()
>   File "/usr/share/vdsm/storage/storageServer.py", line 335, in validate
> replicaCount = self.volinfo['replicaCount']
>   File "/usr/share/vdsm/storage/storageServer.py", line 331, in volinfo
> self._volinfo = self._get_gluster_volinfo()
>   File "/usr/share/vdsm/storage/storageServer.py", line 358, in
> _get_gluster_volinfo
> return volinfo[self._volname]
> KeyError: u'gluster-store/vol1'

This means that the gluster server at ovirt3.test.j2noc.com does not
have a volume named "gluster-store/vol1".

I guess that /gluster-store is the mountpoint and the volume name is "vol1".

On the engine side, you probably need to set the storage domain path to

ovirt3.test.j2noc.com:/vol1
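
A quick way to sanity-check that path format (assuming the volume really is
named "vol1"; the mount point /mnt/vol1-test here is just an example) is to
mount it by hand with the gluster fuse client from one of the hosts:

    mkdir -p /mnt/vol1-test
    mount -t glusterfs ovirt3.test.j2noc.com:/vol1 /mnt/vol1-test
    umount /mnt/vol1-test

If the manual mount fails, the engine will not be able to use that path either.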

Can you share with us the output of:

gluster volume info --remote-host=ovirt3.test.j2noc.com

This should return something like:

{VOLUMENAME: {'brickCount': BRICKCOUNT,
              'bricks': [BRICK1, BRICK2, ...],
              'options': {OPTION: VALUE, ...},
              'transportType': [TCP, RDMA, ...],
              'uuid': UUID,
              'volumeName': NAME,
              'volumeStatus': STATUS,
              'volumeType': TYPE}, ...}

The fact that this code fails with a KeyError is a bug. We should have
returned a clear error in this case instead of failing with a KeyError, which
shows a generic and unhelpful error message on the engine side.
Please file a bug about it.

I found the document about configuring gluster for ovirt:
http://www.gluster.org/community/documentation/index.php/Virt-store-usecase
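
Roughly, the volume tuning that doc recommends for a virt store boils down to
something like the following (a sketch only - the option names and the vdsm/kvm
uid/gid of 36 should be checked against your gluster and ovirt versions):

    gluster volume set vol1 group virt
    gluster volume set vol1 storage.owner-uid 36
    gluster volume set vol1 storage.owner-gid 36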

Nir

> jsonrpc.Executor/6::DEBUG::2016-01-29
> 12:58:16,950::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs: {}
> jsonrpc.Executor/6::INFO::2016-01-29
> 12:58:16,951::logUtils::51::dispatcher::(wrapper) Run and protect:
> connectStorageServer, Return response: {'statuslis
> t': [{'status': 100, 'id': u'----'}]}
> jsonrpc.Executor/6::DEBUG::2016-01-29
> 12:58:16,951::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`e6f93ddd-23c3-4b5b-879a-3c31d4b8d773`::finished:
>  {'statuslist': [{'status': 100, 'id':
> u'----'}]}
> jsonrpc.Executor/6::DEBUG::2016-01-29
> 12:58:16,951::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`e6f93ddd-23c3-4b5b-879a-3c31d4b8d773`::movin
> g from state preparing -> state finished
>
>
>
>
> On 1/29/16 1:05 PM, Nir Soffer wrote:
>>
>> On Fri, Jan 29, 2016 at 9:31 PM, Bill James  wrote:
>>>
>>> I'm trying to setup a ovirt3.6.2 cluster on centos7.2 and am having
>>> problems
>>> finding a doc that explains how to setup gluster storage for it.
>>
>> There was good gluster documentation for setting up volumes for ovirt,
>> but I cannot find it now.
>>
>> Sahina, can you point us to this document?
>>
>>> If I try to create a Storage domain with storage type GlusterFS it comes
>>> back with "General Exception".
>>> I'm not using Hosted-engine, engine is on a separate host by its self.
>>> I have 3 nodes, all running centos7.2.
>>>
>>> vdsm.log:
>>>
>>> jsonrpc.Executor/0::DEBUG::2016-01-29
>>> 11:06:35,023::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
>>> Calling
>>> 'StoragePool.connectStorageServer' in br
>>> idge with {u'connectionParams': [{u'id':
>>> u'----', u'connection':
>>> u'ovirt3.test.j2noc.com:/gluster-store', u'iqn': u'', u'u
>>> ser': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password':
>>> '', u'port': u''}], u'storagepoolID':
>>> u'----', u
>>> 'domainType': 7}
>>> jsonrpc.Executor/0::DEBUG::2016-01-29
>>> 11:06:35,024::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`e49fc739-faaf-40f1-b512-868b936fbcc1`::movin
>>> g from state init -> state preparing
>>> jsonrpc.Executor/0::INFO::2016-01-29
>>>