Re: [Users] cannot add gluster domain

2013-01-23 Thread T-Sinjon
Thanks for your kind reply, but I think the problem may not be glusterfs; I 
tested with a plain NFS storage and the error is still the same.

There's a similar case in 
http://comments.gmane.org/gmane.comp.emulators.ovirt.user/3216 , 
which says:
> It seems it's related http://gerrit.ovirt.org/#/c/4720/.
> The unnecessary nfs related rpm/commands including rpc-bind were removed
> from ovirt-node. But the service nfs-lock requires rpc-bind, so it
> can't start.  I guess so.  Michael,  is it?   Thanks!

It seems the problem has existed since ovirt-node-iso-2.5.0-2.0.fc17 and is 
related to this merge.
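For anyone hitting the same thing, here is a sketch of the manual workaround 
mount (paths taken from earlier in this thread; nolock keeps NFS locks local on 
the client, so rpc.statd is not needed):

```shell
# Workaround sketch (assumes the paths from this thread): appending nolock
# keeps NFS locks local, so mount.nfs no longer requires rpc.statd.
OPTS="soft,nosharecache,timeo=600,retrans=6,nolock"
MOUNT_CMD="/usr/bin/mount -t nfs -o ${OPTS} my-gluster-ip:/gvol02/GlusterDomain /rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain"
# Printed rather than executed here; running it needs root and a reachable export.
echo "${MOUNT_CMD}"
```

Note that vdsm builds the mount command itself, so this only helps for manual 
testing, and losing remote locking is a real trade-off for a shared storage 
domain.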

On 24 Jan, 2013, at 2:50 AM, Vijay Bellur  wrote:

> On 01/22/2013 03:28 PM, T-Sinjon wrote:
>> Hi, everyone:
>>  Recently I did a fresh install of oVirt 3.1 from 
>> http://resources.ovirt.org/releases/stable/rpm/Fedora/17/noarch/,
>>  and the node uses 
>> http://resources.ovirt.org/releases/stable/tools/ovirt-node-iso-2.5.5-0.1.fc17.iso
>>  
>>  When I add a gluster domain via NFS, a mount error occurs.
>>  I tried to mount manually on the node, but it fails without the -o 
>> nolock option:
>>  # /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6 
>> my-gluster-ip:/gvol02/GlusterDomain 
>> /rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain
>>  mount.nfs: rpc.statd is not running but is required for remote locking. 
>> mount.nfs: Either use '-o nolock' to keep locks local, or start statd. 
>> mount.nfs: an incorrect mount option was specified
>>  
> 
> 
> Can you please confirm the glusterfs version that is available in 
> ovirt-node-iso?
> 
> Thanks,
> Vijay
> 
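To check that on the node, something like this sketch should do (assuming rpm 
and the gluster CLI are present in the ovirt-node image):

```shell
# Sketch: commands to confirm the glusterfs version shipped in ovirt-node-iso.
# Printed rather than executed, since they only make sense on the node itself.
for cmd in "rpm -q glusterfs glusterfs-fuse" "gluster --version"; do
    echo "${cmd}"
done
```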

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] cannot add gluster domain

2013-01-23 Thread Vijay Bellur

On 01/22/2013 03:28 PM, T-Sinjon wrote:

Hi, everyone:
Recently I did a fresh install of oVirt 3.1 from 
http://resources.ovirt.org/releases/stable/rpm/Fedora/17/noarch/,
and the node uses 
http://resources.ovirt.org/releases/stable/tools/ovirt-node-iso-2.5.5-0.1.fc17.iso

When I add a gluster domain via NFS, a mount error occurs.
I tried to mount manually on the node, but it fails without the -o 
nolock option:
# /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6 
my-gluster-ip:/gvol02/GlusterDomain 
/rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain
mount.nfs: rpc.statd is not running but is required for remote locking. 
mount.nfs: Either use '-o nolock' to keep locks local, or start statd. 
mount.nfs: an incorrect mount option was specified




Can you please confirm the glusterfs version that is available in 
ovirt-node-iso?


Thanks,
Vijay



Re: [Users] cannot add gluster domain

2013-01-23 Thread Alex Leonhardt
Hi all,

I'm not too familiar with Fedora and its services; can anyone help him?

Alex
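For what it's worth, a sketch of how rpc.statd is normally brought up on a 
stock Fedora 17 host (hypothetical here; as noted in the thread, ovirt-node 
2.5.5 lacks rpcbind, so this exact sequence fails there):

```shell
# Sketch for plain Fedora 17: rpc.statd is run by nfs-lock.service, which
# depends on rpcbind.service, so both must be startable.
# Printed rather than executed, since starting units needs root.
for unit in rpcbind.service nfs-lock.service; do
    echo "systemctl start ${unit}"
done
```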
 On Jan 23, 2013 5:02 AM, "T-Sinjon"  wrote:

> I have forced v3 in my /etc/nfsmount and there's no firewall between the NFS
> server and the host.
>
> The only problem is that no rpc.statd is running. Could you tell me how I can
> start it, since there's no rpcbind installed on ovirt-node 2.5.5-0.1?
>
> [root@ovirtnode1 ~]# systemctl status nfs-lock.service
> nfs-lock.service - NFS file locking service.
>   Loaded: loaded (/usr/lib/systemd/system/nfs-lock.service; enabled)
>   Active: failed (Result: exit-code) since Thu, 17 Jan 2013 09:41:45
> +; 5 days ago
>   CGroup: name=systemd:/system/nfs-lock.service
>
> Jan 17 09:41:45 localhost.localdomain rpc.statd[1385]: Version 1.2.6
> starting
> Jan 17 09:41:45 localhost.localdomain rpc.statd[1385]: Initializing NSM
> state
> [root@ovirtnode1 ~]# systemctl start nfs-lock.service
> Failed to issue method call: Unit rpcbind.service failed to load: No such
> file or directory. See system logs and 'systemctl status rpcbind.service'
> for details.
>
> On 22 Jan, 2013, at 6:14 PM, Alex Leonhardt  wrote:
>
> Hi, this seems to be the error you're getting:
>
> MountError: (32, ";mount.nfs: rpc.statd is not running but is required for
> remote locking.\nmount.nfs: Either use '-o nolock' to keep locks local, or
> start statd.\nmount.nfs: an incorrect mount option was specified\n")
>
> Are you running NFSv3 on that host? If yes, have you forced v3? Is
> rpc.statd running? Is the NFS server firewalling off the rpc.* ports?
>
> alex
>
>
> On 22 January 2013 09:58, T-Sinjon  wrote:
>
>> Hi, everyone:
>> Recently I did a fresh install of oVirt 3.1 from
>> http://resources.ovirt.org/releases/stable/rpm/Fedora/17/noarch/,
>> and the node uses
>> http://resources.ovirt.org/releases/stable/tools/ovirt-node-iso-2.5.5-0.1.fc17.iso
>>
>> When I add a gluster domain via NFS, a mount error occurs.
>> I tried to mount manually on the node, but it fails without the
>> -o nolock option:
>> # /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6
>> my-gluster-ip:/gvol02/GlusterDomain
>> /rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain
>> mount.nfs: rpc.statd is not running but is required for remote
>> locking. mount.nfs: Either use '-o nolock' to keep locks local, or start
>> statd. mount.nfs: an incorrect mount option was specified
>>
>> Below are the vdsm.log from the node and the engine.log; any help is
>> appreciated:
>>
>> vdsm.log
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,261::BindingXMLRPC::156::vds::(wrapper) [my-engine-ip]
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,261::task::588::TaskManager.Task::(_updateState)
>> Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::moving from state init ->
>> state preparing
>> Thread-12717::INFO::2013-01-22
>> 09:19:02,262::logUtils::37::dispatcher::(wrapper) Run and protect:
>> validateStorageServerConnection(domType=1,
>> spUUID='----', conList=[{'connection':
>> 'my-gluster-ip:/gvol02/GlusterDomain', 'iqn': '', 'portal': '', 'user': '',
>> 'password': '**', 'id': '----', 'port':
>> ''}], options=None)
>> Thread-12717::INFO::2013-01-22
>> 09:19:02,262::logUtils::39::dispatcher::(wrapper) Run and protect:
>> validateStorageServerConnection, Return response: {'statuslist':
>> [{'status': 0, 'id': '----'}]}
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,262::task::1172::TaskManager.Task::(prepare)
>> Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::finished: {'statuslist':
>> [{'status': 0, 'id': '----'}]}
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,262::task::588::TaskManager.Task::(_updateState)
>> Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::moving from state preparing ->
>> state finished
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,262::resourceManager::809::ResourceManager.Owner::(releaseAll)
>> Owner.releaseAll requests {} resources {}
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,262::resourceManager::844::ResourceManager.Owner::(cancelAll)
>> Owner.cancelAll requests {}
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,263::task::978::TaskManager.Task::(_decref)
>> Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::ref 0 aborting False
>> Thread-12718::DEBUG::2013-01-22
>> 09:19:02,307::BindingXMLRPC::156::vds::(wrapper) [my-engine-ip]
>> Thread-12718::DEBUG::2013-01-22
>> 09:19:02,307::task::588::TaskManager.Task::(_updateState)
>> Task=`c07a075a-a910-4bc3-9a33-b957d05ea270`::moving from state init ->
>> state preparing
>> Thread-12718::INFO::2013-01-22
>> 09:19:02,307::logUtils::37::dispatcher::(wrapper) Run and protect:
>> connectStorageServer(domType=1,
>> spUUID='----', conList=[{'connection':
>> 'my-gluster-ip:/gvol02/GlusterDomain', 'iqn': '', 'portal': '', 'user': '',
>> 'password': '**', 'id': '6463ca53-6c57-45f6-bb5c-455

[Users] cannot add gluster domain

2013-01-22 Thread T-Sinjon
Hi, everyone:
Recently I did a fresh install of oVirt 3.1 from 
http://resources.ovirt.org/releases/stable/rpm/Fedora/17/noarch/,
and the node uses 
http://resources.ovirt.org/releases/stable/tools/ovirt-node-iso-2.5.5-0.1.fc17.iso

When I add a gluster domain via NFS, a mount error occurs.
I tried to mount manually on the node, but it fails without the -o 
nolock option:
# /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6 
my-gluster-ip:/gvol02/GlusterDomain 
/rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain
mount.nfs: rpc.statd is not running but is required for remote locking. 
mount.nfs: Either use '-o nolock' to keep locks local, or start statd. 
mount.nfs: an incorrect mount option was specified

Below are the vdsm.log from the node and the engine.log; any help is 
appreciated:

vdsm.log
Thread-12717::DEBUG::2013-01-22 
09:19:02,261::BindingXMLRPC::156::vds::(wrapper) [my-engine-ip]
Thread-12717::DEBUG::2013-01-22 
09:19:02,261::task::588::TaskManager.Task::(_updateState) 
Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::moving from state init -> state 
preparing
Thread-12717::INFO::2013-01-22 
09:19:02,262::logUtils::37::dispatcher::(wrapper) Run and protect: 
validateStorageServerConnection(domType=1, 
spUUID='----', conList=[{'connection': 
'my-gluster-ip:/gvol02/GlusterDomain', 'iqn': '', 'portal': '', 'user': '', 
'password': '**', 'id': '----', 'port': 
''}], options=None)
Thread-12717::INFO::2013-01-22 
09:19:02,262::logUtils::39::dispatcher::(wrapper) Run and protect: 
validateStorageServerConnection, Return response: {'statuslist': [{'status': 0, 
'id': '----'}]}
Thread-12717::DEBUG::2013-01-22 
09:19:02,262::task::1172::TaskManager.Task::(prepare) 
Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::finished: {'statuslist': 
[{'status': 0, 'id': '----'}]}
Thread-12717::DEBUG::2013-01-22 
09:19:02,262::task::588::TaskManager.Task::(_updateState) 
Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::moving from state preparing -> 
state finished
Thread-12717::DEBUG::2013-01-22 
09:19:02,262::resourceManager::809::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-12717::DEBUG::2013-01-22 
09:19:02,262::resourceManager::844::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-12717::DEBUG::2013-01-22 
09:19:02,263::task::978::TaskManager.Task::(_decref) 
Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::ref 0 aborting False
Thread-12718::DEBUG::2013-01-22 
09:19:02,307::BindingXMLRPC::156::vds::(wrapper) [my-engine-ip]
Thread-12718::DEBUG::2013-01-22 
09:19:02,307::task::588::TaskManager.Task::(_updateState) 
Task=`c07a075a-a910-4bc3-9a33-b957d05ea270`::moving from state init -> state 
preparing
Thread-12718::INFO::2013-01-22 
09:19:02,307::logUtils::37::dispatcher::(wrapper) Run and protect: 
connectStorageServer(domType=1, spUUID='----', 
conList=[{'connection': 'my-gluster-ip:/gvol02/GlusterDomain', 'iqn': '', 
'portal': '', 'user': '', 'password': '**', 'id': 
'6463ca53-6c57-45f6-bb5c-45505891cae9', 'port': ''}], options=None)
Thread-12718::DEBUG::2013-01-22 
09:19:02,467::__init__::1249::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n 
/usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6 
my-gluster-ip:/gvol02/GlusterDomain 
/rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain' (cwd None)
Thread-12718::ERROR::2013-01-22 
09:19:02,486::hsm::1932::Storage.HSM::(connectStorageServer) Could not connect 
to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 1929, in connectStorageServer
  File "/usr/share/vdsm/storage/storageServer.py", line 256, in connect
  File "/usr/share/vdsm/storage/storageServer.py", line 179, in connect
  File "/usr/share/vdsm/storage/mount.py", line 190, in mount
  File "/usr/share/vdsm/storage/mount.py", line 206, in _runcmd
MountError: (32, ";mount.nfs: rpc.statd is not running but is required for 
remote locking.\nmount.nfs: Either use '-o nolock' to keep locks local, or 
start statd.\nmount.nfs: an incorrect mount option was specified\n")

engine.log:
2013-01-22 17:19:20,073 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand]
 (ajp--0.0.0.0-8009-7) [25932203] START, 
ValidateStorageServerConnectionVDSCommand(vdsId = 
626e37f4-5ee3-11e2-96fa-0030487c133e, storagePoolId = 
----, storageType = NFS, connectionList = [{ 
id: null, connection: my-gluster-ip:/gvol02/GlusterDomain };]), log id: 303f4753
2013-01-22 17:19:20,095 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand]
 (ajp--0.0.0.0-8009-7) [25932203] FINISH, 
ValidateStorageServerConnectionVDSCommand, return: 
{----=0}, log id: 303f4