Re: [Users] Problem with creating a glusterfs volume

2012-09-21 Thread Dominic Kaiser
Hey All,

So I finally found the problem: cheap NICs.  After installing Intel NICs, I
had no problems creating gluster volumes, including distributed-replicated
ones.  Broadcom and Realtek, yuk!  So now I am trying to mount the gluster
volume as an NFS mount and am having a problem.  It is timing out as if it
were blocked by a firewall.

I am trying to:  mount -t nfs gfs1.bostonvineyard.org:/export
/home/administrator/test

Here is gfs1 tail vdsm.log

[root@gfs1 vdsm]# tail vdsm.log
Thread-88731::DEBUG::2012-09-21
10:35:56,566::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-88731::DEBUG::2012-09-21
10:35:56,567::task::978::TaskManager.Task::(_decref)
Task=`01b69eed-de59-4e87-8b28-5268b5dcbb50`::ref 0 aborting False
Thread-88737::DEBUG::2012-09-21
10:36:06,890::task::588::TaskManager.Task::(_updateState)
Task=`f70222ad-f8b4-4733-9526-eff1d214ebd8`::moving from state init ->
state preparing
Thread-88737::INFO::2012-09-21
10:36:06,891::logUtils::37::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-88737::INFO::2012-09-21
10:36:06,891::logUtils::39::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {}
Thread-88737::DEBUG::2012-09-21
10:36:06,891::task::1172::TaskManager.Task::(prepare)
Task=`f70222ad-f8b4-4733-9526-eff1d214ebd8`::finished: {}
Thread-88737::DEBUG::2012-09-21
10:36:06,892::task::588::TaskManager.Task::(_updateState)
Task=`f70222ad-f8b4-4733-9526-eff1d214ebd8`::moving from state preparing ->
state finished
Thread-88737::DEBUG::2012-09-21
10:36:06,892::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-88737::DEBUG::2012-09-21
10:36:06,892::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-88737::DEBUG::2012-09-21
10:36:06,893::task::978::TaskManager.Task::(_decref)
Task=`f70222ad-f8b4-4733-9526-eff1d214ebd8`::ref 0 aborting False

Do you know why I cannot connect via NFS?  I am using an older kernel (not
3.5), and iptables is off.
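A quick sanity check, as a sketch: assuming gluster 3.3's built-in NFS
server and a standard rpcbind setup (and that a volume named export actually
exists; the name is taken from the mount command above), these commands show
whether the server side is answering at all:

  # Is the volume's built-in NFS server process up?
  gluster volume status export
  # Which RPC services (portmapper, mountd, nfs) are registered?
  rpcinfo -p gfs1.bostonvineyard.org
  # What does the server export?
  showmount -e gfs1.bostonvineyard.org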

Dominic


On Mon, Sep 10, 2012 at 12:20 PM, Haim Ateya hat...@redhat.com wrote:

 On 09/10/2012 06:27 PM, Dominic Kaiser wrote:

 Here are the message and the logs again, zipped this time; the first
 delivery failed:

 OK, here are the logs: 4 node logs and 1 engine log.  I tried making the
 /data folder owned by root, then by 36:36; neither worked.  The volume is
 named data to match the folders on the nodes.

 Let me know what you think,

 Dominic


 This is the actual failure (taken from the gfs2 vdsm.log):

 Thread-332442::DEBUG::2012-09-10
 10:28:05,788::BindingXMLRPC::859::vds::(wrapper)
 client [10.3.0.241]::call volumeCreate with ('data', ['10.4.0.97:/data',
 '10.4.0.98:/data', '10.4.0.99:/data', '10.4.0.100:/data'],
  2, 0, ['TCP']) {} flowID [406f2c8e]
 MainProcess|Thread-332442::DEBUG::2012-09-10
 10:28:05,792::__init__::1249::Storage.Misc.excCmd::(_log)
 '/usr/sbin/gluster --mode=script volume create data replica 2 transport TCP
 10.4.0.97:/data 10.4.0.98:/data 10.4.0.99:/data 10.4.0.100:/data' (cwd None)
 MainProcess|Thread-332442::DEBUG::2012-09-10
 10:28:05,900::__init__::1249::Storage.Misc.excCmd::(_log) FAILED: err
 = 'Host 10.4.0.99 not a friend\n'; rc = 255
 MainProcess|Thread-332442::ERROR::2012-09-10
 10:28:05,900::supervdsmServer::76::SuperVdsm.ServerCallback::(wrapper)
 Error in wrapper
 Traceback (most recent call last):
   File "/usr/share/vdsm/supervdsmServer.py", line 74, in wrapper
     return func(*args, **kwargs)
   File "/usr/share/vdsm/supervdsmServer.py", line 286, in wrapper
     return func(*args, **kwargs)
   File "/usr/share/vdsm/gluster/cli.py", line 46, in wrapper
     return func(*args, **kwargs)
   File "/usr/share/vdsm/gluster/cli.py", line 176, in volumeCreate
     raise ge.GlusterVolumeCreateFailedException(rc, out, err)
 GlusterVolumeCreateFailedException: Volume create failed
 error: Host 10.4.0.99 not a friend
 return code: 255
 Thread-332442::ERROR::2012-09-10
 10:28:05,901::BindingXMLRPC::877::vds::(wrapper)
 unexpected error
 Traceback (most recent call last):
   File "/usr/share/vdsm/BindingXMLRPC.py", line 864, in wrapper
     res = f(*args, **kwargs)
   File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper
     rv = func(*args, **kwargs)
   File "/usr/share/vdsm/gluster/api.py", line 87, in volumeCreate
     transportList)
   File "/usr/share/vdsm/supervdsm.py", line 67, in __call__
     return callMethod()
   File "/usr/share/vdsm/supervdsm.py", line 65, in <lambda>
     **kwargs)
   File "<string>", line 2, in glusterVolumeCreate
   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 759, in
 _callmethod
     kind, result = conn.recv()
 TypeError: ('__init__() takes exactly 4 arguments (1 given)', <class
 'gluster.exception.GlusterVolumeCreateFailedException'>, ())

 Can you please run 'gluster peer status' on all your nodes?  Also, it
 appears that '10.4.0.99' is problematic; can you try creating the volume
 without it?
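 For reference, a minimal sketch of those checks; the detach/probe step
 assumes 10.4.0.99 holds no bricks yet:

   # On every node: each peer should show State: Peer in Cluster (Connected)
   gluster peer status
   # If 10.4.0.99 is missing or rejected, re-probe it from a healthy node
   gluster peer detach 10.4.0.99
   gluster peer probe 10.4.0.99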



 On Mon, Sep 10, 2012 

Re: [Users] Problem with creating a glusterfs volume

2012-09-21 Thread Dominic Kaiser
Here is the engine.log info:

[root@ovirt ovirt-engine]# tail engine.log
2012-09-21 11:10:00,007 INFO
 [org.ovirt.engine.core.bll.AutoRecoveryManager]
(QuartzScheduler_Worker-49) Autorecovering 0 hosts
2012-09-21 11:10:00,007 INFO
 [org.ovirt.engine.core.bll.AutoRecoveryManager]
(QuartzScheduler_Worker-49) Checking autorecoverable hosts done
2012-09-21 11:10:00,008 INFO
 [org.ovirt.engine.core.bll.AutoRecoveryManager]
(QuartzScheduler_Worker-49) Checking autorecoverable storage domains
2012-09-21 11:10:00,009 INFO
 [org.ovirt.engine.core.bll.AutoRecoveryManager]
(QuartzScheduler_Worker-49) Autorecovering 0 storage domains
2012-09-21 11:10:00,010 INFO
 [org.ovirt.engine.core.bll.AutoRecoveryManager]
(QuartzScheduler_Worker-49) Checking autorecoverable storage domains done
2012-09-21 11:10:22,710 ERROR
[org.ovirt.engine.core.engineencryptutils.EncryptionUtils]
(QuartzScheduler_Worker-84) Failed to decryptData must not be longer than
256 bytes
2012-09-21 11:10:22,726 ERROR
[org.ovirt.engine.core.engineencryptutils.EncryptionUtils]
(QuartzScheduler_Worker-12) Failed to decryptData must start with zero
2012-09-21 11:10:54,519 INFO
 [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(ajp--0.0.0.0-8009-11) [3769be9c] Running command:
RemoveStorageServerConnectionCommand internal: false. Entities affected :
 ID: aaa00000-0000-0000-0000-123456789aaa Type: System
2012-09-21 11:10:54,537 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(ajp--0.0.0.0-8009-11) [3769be9c] START,
DisconnectStorageServerVDSCommand(vdsId =
3822e6c0-0295-11e2-86e6-d74ad5358c03, storagePoolId =
00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList =
[{ id: null, connection: gfs1.bostonvineyard.org:/data };]), log id:
16dd4a1b
2012-09-21 11:10:56,417 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(ajp--0.0.0.0-8009-11) [3769be9c] FINISH,
DisconnectStorageServerVDSCommand, return:
{00000000-0000-0000-0000-000000000000=477}, log id: 16dd4a1b

Thanks,

dk

On Fri, Sep 21, 2012 at 11:09 AM, Dominic Kaiser domi...@bostonvineyard.org
 wrote:

 I can mount to another computer with this command:

 mount -o mountproto=tcp,vers=3 -t nfs gfs1.bostonvineyard.org:/data
 /home/administrator/test

 So volumes work, but I get a 500 error timeout when trying to add it as a
 storage domain in ovirt.  Weird?

 dk

 

Re: [Users] Problem with creating a glusterfs volume

2012-09-21 Thread Dominic Kaiser
I noticed something: if I am trying to mount the gluster share from another
computer and do not include mountproto=tcp, it times out; vers=3 or 4 does
not matter.  Could this be why I cannot add it from the engine GUI?
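For comparison, a manual mount that forces TCP for both the NFS and MOUNT
protocols would look like this sketch (the target path is just the test
mount point used earlier):

  # vers=3 picks NFSv3; proto/mountproto keep both RPC programs off UDP
  mount -t nfs -o vers=3,proto=tcp,mountproto=tcp \
      gfs1.bostonvineyard.org:/data /home/administrator/test

Without mountproto=tcp, the initial MOUNT request defaults to UDP, which the
gluster NFS server (TCP-only, as noted later in this thread) never answers;
that would explain the timeout.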

dk


Re: [Users] Problem with creating a glusterfs volume

2012-09-21 Thread Dominic Kaiser
Any ideas?  Pretty please.

dk


Re: [Users] Problem with creating a glusterfs volume

2012-09-21 Thread Jason Brooks

On 09/21/2012 08:09 AM, Dominic Kaiser wrote:

I can mount to another computer with this command:

mount -o mountproto=tcp,vers=3 -t nfs gfs1.bostonvineyard.org:/data
/home/administrator/test


I notice that in your previous message, citing the mount that didn't 
work, you were mounting :/export, and above you're mounting :/data. Can 
you also mount the export volume from another computer?
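One quick way to see what is actually there to mount, as a sketch (assuming
the gluster CLI on one of the servers):

  # Lists every volume with its name, status, and bricks
  gluster volume info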






Re: [Users] Problem with creating a glusterfs volume

2012-09-21 Thread Dominic Kaiser
Yes, I can mount both to another computer, just not to ovirt.  I noticed on
the other computer, which is Ubuntu 12.04, that if you leave mountproto=tcp
out of the command, it does not mount.  Does the engine default to TCP?

Dk

Re: [Users] Problem with creating a glusterfs volume

2012-09-21 Thread Jason Brooks

I believe that the gluster NFS server only supports TCP.  On my setup,
I've edited /etc/nfsmount.conf with Defaultvers=3, Nfsvers=3, and
Defaultproto=tcp.
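For reference, a sketch of how that stanza can look in /etc/nfsmount.conf
(section name per nfsmount.conf(5)):

  [ NFSMount_Global_Options ]
  # Default to NFSv3 and force TCP as the transport
  Defaultvers=3
  Nfsvers=3
  Defaultproto=tcp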




Re: [Users] Problem with creating a glusterfs volume

2012-09-06 Thread Maxim Burgerhout
I just ran into this as well, and it seems that you have to either reformat
previously used gluster bricks or manually tweak some extended attributes.

Maybe this helps you in setting up your gluster volume, Dominic?

More info here: https://bugzilla.redhat.com/show_bug.cgi?id=812214
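For anyone hitting the same thing, the usual cleanup for a previously used
brick directory looks like the sketch below (run on each node as root; it
assumes the brick is /data and contains nothing you need, per the bug
report above):

  # Remove the markers the old volume left on the brick root
  setfattr -x trusted.glusterfs.volume-id /data
  setfattr -x trusted.gfid /data
  # Remove gluster's internal bookkeeping tree
  rm -rf /data/.glusterfs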


Maxim Burgerhout
ma...@wzzrd.com

EB11 5E56 E648 9D99 E8EF 05FB C513 6FD4 1302 B48A






Re: [Users] Problem with creating a glusterfs volume

2012-09-05 Thread Dominic Kaiser
Did you confirm the peers were working after adding them in, since adding a
host updates the firewall?

Yes, here it is.

[root@gfs1 ~]# gluster
gluster peer status
Number of Peers: 3

Hostname: gfs2
Uuid: dce3ed1d-d38b-4eaa-81e5-12577a2f1a39
State: Peer in Cluster (Connected)

Hostname: gfs3
Uuid: 807b83f9-bff2-4cf1-a002-f1f1c5f694ad
State: Peer in Cluster (Connected)

Hostname: gfs4
Uuid: 7699d992-ee42-45c2-afca-d51dcdd9fdf6
State: Peer in Cluster (Connected)


Want to confirm you are using gluster 3.3, not 3.2.

Yes, I am using 3.3.

Installed Packages
glusterfs.x86_64                  3.3.0-5.fc17   @fedora-glusterfs
glusterfs-fuse.x86_64             3.3.0-5.fc17   @fedora-glusterfs
glusterfs-geo-replication.x86_64  3.3.0-5.fc17   @fedora-glusterfs
glusterfs-rdma.x86_64             3.3.0-5.fc17   @fedora-glusterfs
glusterfs-server.x86_64           3.3.0-5.fc17

I reviewed your steps.  The only thing I can think of is this: let us say
you create a /data directory.  When I am adding bricks, should I add, for
instance, 10.4.0.97 /data 10.4.0.98 /data 10.4.0.99 /data 10.4.0.100 /data?
Those are the directories I have created on each server and set ownership
on (36:36 -R /data).
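For reference, bricks are normally given in host:/path form, so a create
for that layout would look like this sketch (it mirrors the command vdsm
itself issued in the failure log earlier in the thread):

  # 4 bricks with replica 2 => a 2x2 distributed-replicated volume
  gluster volume create maingfs1 replica 2 transport tcp \
      10.4.0.97:/data 10.4.0.98:/data 10.4.0.99:/data 10.4.0.100:/data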




On Wed, Sep 5, 2012 at 2:25 PM, Robert Middleswarth rob...@middleswarth.net
 wrote:

  On 09/05/2012 02:20 PM, Dominic Kaiser wrote:

 So I have a problem creating glusterfs volumes.  Here is the install:


    1. Ovirt 3.1
    2. 4 Nodes are Fedora 17 with kernel 3.3.4-5.fc17.x86_64
    3. 4 nodes peer joined and running
    4. 4 nodes added as hosts to ovirt

  Did you confirm the peers were working after adding them in, since adding
 a host updates the firewall?


    5. created a directory on each node at the path /data
    6. chown 36:36 -R /data on all nodes
    7. went back to ovirt and created a distributed/replicated volume and
    added the 4 nodes with a brick path of /data

   Want to confirm you are using gluster 3.3, not 3.2.

  I received this error:

  Creation of Gluster Volume maingfs1 failed.

   I went and looked at the vdsm logs on the nodes and the ovirt server,
  which did not say much.  Where else should I look?  Also, this error is
  vague; what does it mean?

  You can review the steps I did to get a working gluster setup at
 http://www.middleswarth.net/content/installing-ovirt-31-and-glusterfs-using-either-nfs-or-posix-native-file-system


  --
 Dominic Kaiser
 Greater Boston Vineyard
 Director of Operations

 cell: 617-230-1412
 fax: 617-252-0238
 email: domi...@bostonvineyard.org






 --
 Thanks
 Robert Middleswarth
 @rmiddle (twitter/Freenode IRC)
 @RobertM (OFTC IRC)






-- 
Dominic Kaiser
Greater Boston Vineyard
Director of Operations

cell: 617-230-1412
fax: 617-252-0238
email: domi...@bostonvineyard.org


Re: [Users] Problem with creating a glusterfs volume

2012-09-05 Thread Jason Brooks


- Original Message -
 From: Dominic Kaiser domi...@bostonvineyard.org
 To: users@ovirt.org
 Sent: Wednesday, September 5, 2012 11:20:31 AM
 Subject: [Users] Problem with creating a glusterfs volume
 

Is this an NFS domain?

Is your volume named data -- the path should match the gluster volume name, 
not the path on the filesystem (unless they're the same). If your volume is 
maingfs1, then the NFS path should be hostname:/maingfs1.

Also, I've found that I've had to chown 36.36 -R /data after creating the 
volumes, because the subdirs that get created when you create a volume start 
out as root-owned by default.
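Putting those together, as a sketch (the hostname and mount point are
illustrative):

  # Mount by gluster volume name, not by brick path...
  mount -t nfs -o vers=3,mountproto=tcp hostname:/maingfs1 /mnt/maingfs1
  # ...then hand the tree to vdsm:kvm (uid 36, gid 36)
  chown -R 36:36 /mnt/maingfs1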

Jason




 


Re: [Users] Problem with creating a glusterfs volume

2012-09-05 Thread Shireesh Anjal

Hi Dominic,

Looking at the engine log immediately after trying to create the volume 
should tell us on which node the gluster volume creation was attempted. 
Then looking at the vdsm log on that node should help us identify the
exact reason for the failure.


In case this doesn't help, can you please reproduce the issue again and
send back all 5 log files? (engine.log from the engine server and vdsm.log
from the 4 nodes)


Regards,
Shireesh

On Wednesday 05 September 2012 11:50 PM, Dominic Kaiser wrote:

So I have a problem creating glusterfs volumes.  Here is the install:

 1. Ovirt 3.1
 2. 4 Nodes are Fedora 17 with kernel 3.3.4-5.fc17.x86_64
 3. 4 nodes peer joined and running
 4. 4 nodes added as hosts to ovirt
 5. created a directory on each node at the path /data
 6. chown 36:36 -R /data on all nodes
 7. went back to ovirt and created a distributed/replicated volume and
added the 4 nodes with a brick path of /data

I received this error:

Creation of Gluster Volume maingfs1 failed.

I went and looked at the vdsm logs on the nodes and the ovirt server,
which did not say much.  Where else should I look?  Also, this error is
vague; what does it mean?



--
Dominic Kaiser
Greater Boston Vineyard
Director of Operations

cell: 617-230-1412
fax: 617-252-0238
email: domi...@bostonvineyard.org



