We are getting repeated "Stale file handle" errors in nfs.log on our Gluster 3.4.0
cluster:

[root@splunksearch2:2726 ~]# gluster volume info

Volume Name: splunkshare
Type: Replicate
Volume ID: 919a6dab-4458-4e67-8d5f-68bd337ba886
Status: Started
Number of Bricks: 1 x 7 = 7
Transport-type: tcp
Bricks:
Brick1: splunksearch1:/mnt/bricks/splunksearch1
Brick2: splunksearch2:/mnt/bricks/splunksearch2
Brick3: splunk1:/mnt/bricks/splunk1
Brick4: splunk2:/mnt/bricks/splunk2
Brick5: splunk3:/mnt/bricks/splunk3
Brick6: splunk4:/mnt/bricks/splunk4
Brick7: splunk5:/mnt/bricks/splunk5


[2013-09-16 16:23:42.202532] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-splunkshare-client-5: remote operation failed: Stale file handle. Path: /var/run/splunk/dispatch/scheduler__dingram__search__RMD591a438de29ff1350_at_1379347620_7482/info.csv (347553b8-10e2-4225-8f59-85b08fc6293c)
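
A quick sketch of how one could tally which client subvolume (and therefore which brick) the warnings point at, assuming the NFS server log is at the default /var/log/glusterfs/nfs.log:

    # count "Stale file handle" warnings per client subvolume in the Gluster NFS log
    grep 'Stale file handle' /var/log/glusterfs/nfs.log \
      | grep -o 'splunkshare-client-[0-9]*' \
      | sort | uniq -c | sort -rn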

Of course, this is on the local NFS mount of the Gluster share. We have tried all
manner of NFS mount options; a couple of examples from /etc/fstab:

localhost:/splunkshare /mnt/splunkshare nfs noauto,soft,mountproto=tcp,_netdev,noatime,actimeo=60,wsize=1000000,rsize=1000000,retrans=3 0 0

localhost:/splunkshare /mnt/splunkshare nfs actimeo=2,vers=3,noauto,soft,mountproto=tcp,_netdev,noatime,async,retrans=3 0 0
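
For completeness, a third variant of the same entry that disables attribute caching entirely via the standard NFS 'noac' option. This is only a sketch of the idea; whether it is advisable for a Gluster NFS mount is part of the question:

    localhost:/splunkshare /mnt/splunkshare nfs noac,vers=3,noauto,soft,mountproto=tcp,_netdev,noatime,retrans=3 0 0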

Are there recommended NFS mount options, or should we be looking elsewhere for
the issue?



--
__________________________________

Chris Noffsinger
Systems Engineer
Desk: 919-659-2453
Cell: 919-525-4232
www.sciquest.com
