Re: [Gluster-devel] [Gluster-users] v3.6.2

2015-01-27 Thread David F. Robinson
After shutting down all NFS and gluster processes, there were still stale 
NFS services (mountd and nfs) registered with rpcbind, even though nothing 
was listening on their ports.


[root@gfs01bkp ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    3   tcp  38465  mountd
    100005    1   tcp  38466  mountd
    100003    3   tcp   2049  nfs
    100024    1   udp  34738  status
    100024    1   tcp  37269  status
[root@gfs01bkp ~]# netstat -anp | grep 2049
[root@gfs01bkp ~]# netstat -anp | grep 38465
[root@gfs01bkp ~]# netstat -anp | grep 38466

I removed the stale registrations using 'rpcinfo -d':

[root@gfs01bkp ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  34738  status
    100024    1   tcp  37269  status

Then I restarted glusterd and did a 'mount -a'.  It worked perfectly, 
and the errors that had been showing up in the logs every 3 seconds stopped.


Thanks for your help.  Greatly appreciated.

David




-- Original Message --
From: Xavier Hernandez xhernan...@datalab.es
To: David F. Robinson david.robin...@corvidtec.com; Kaushal M 
kshlms...@gmail.com
Cc: Gluster Users gluster-us...@gluster.org; Gluster Devel 
gluster-devel@gluster.org

Sent: 1/27/2015 10:02:31 AM
Subject: Re: [Gluster-devel] [Gluster-users] v3.6.2


Hi,

I had a similar problem once. It happened after doing some unrelated 
tests with NFS. I thought it was a problem I generated doing weird 
things, so I didn't investigate the cause further.


To see if this is the same case, try this:

* Unmount all NFS mounts and stop all gluster volumes
* Check that there are no gluster processes running (ps ax | grep 
gluster), especially any glusterfs; glusterd itself is OK
* Check that there are no NFS processes running (ps ax | grep nfs)
* Check with 'rpcinfo -p' that there's no nfs service registered

The output should be similar to this:

   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  33482  status
    100024    1   tcp  37034  status

If there are more services registered, you can directly delete them or 
check if they correspond to an active process. For example, if the 
output is this:


   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100021    3   udp  39618  nlockmgr
    100021    3   tcp  41067  nlockmgr
    100024    1   udp  33482  status
    100024    1   tcp  37034  status

You can run 'netstat -anp | grep 39618' to see whether a process is 
actually listening on the nlockmgr port, and repeat this for port 41067. 
If a process is listening, you should stop it. If no process is listening 
on that port, you should remove the stale registration with a command 
like this:


rpcinfo -d 100021 3

You must execute this command for every stale registration of any service 
other than portmapper and status. Once done, you should get the output 
shown above.
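As a sketch of this cleanup step (the awk filter and the captured sample below are illustrative, not from the thread), the stale entries can be turned into the corresponding 'rpcinfo -d' commands:

```shell
#!/bin/sh
# Parse a captured `rpcinfo -p` sample and print an `rpcinfo -d` command
# for every registration that is not portmapper or status. On a live
# system you would pipe `rpcinfo -p` into the awk filter instead.
sample='   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100021    3   udp  39618  nlockmgr
    100021    3   tcp  41067  nlockmgr
    100024    1   udp  33482  status
    100024    1   tcp  37034  status'

# NR > 1 skips the header line; $1 is the program number, $2 the version.
echo "$sample" | awk 'NR > 1 && $5 != "portmapper" && $5 != "status" {
    print "rpcinfo -d " $1 " " $2
}' | sort -u
```

Review the printed commands (and confirm with netstat that nothing is listening) before actually running them.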


After that, you can try to start your volume and see if everything is 
registered (rpcinfo -p) and if gluster has started the nfs server 
(gluster volume status).


If everything is ok, you should be able to mount the volume using NFS.

Xavi

On 01/27/2015 03:18 PM, David F. Robinson wrote:

Turning off nfslock did not help. Also, I am still getting these messages
every 3 seconds:

[2015-01-27 14:16:12.921880] W [socket.c:611:__socket_rwv] 0-management: readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid argument)
[2015-01-27 14:16:15.922431] W [socket.c:611:__socket_rwv] 0-management: readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid argument)
[2015-01-27 14:16:18.923080] W [socket.c:611:__socket_rwv] 0-management: readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid argument)
[2015-01-27 14:16:21.923748] W [socket.c:611:__socket_rwv] 0-management: readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid argument)
[2015-01-27 14:16:24.924472] W [socket.c:611:__socket_rwv] 0-management: readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid argument)
[2015-01-27 14:16:27.925192] W [socket.c:611:__socket_rwv] 0-management: readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid argument)
[2015-01-27 14:16:30.925895] W [socket.c:611:__socket_rwv] 0-management: readv on /var/run

Re: [Gluster-devel] [Gluster-users] v3.6.2

2015-01-27 Thread David F. Robinson
I rebooted the machine to see if the problem would return, and it does.  
Same issue after a reboot.

Any suggestions?

One other thing I tested was to comment out the NFS mounts in 
/etc/fstab:
# gfsib01bkp.corvidtec.com:/homegfs_bkp /backup_nfs/homegfs nfs 
vers=3,intr,bg,rsize=32768,wsize=32768 0 0
After the machine comes back up, I remove the comment and do a 'mount 
-a'.  The mount works fine.


It looks like it is a timing issue during startup.  Is it trying to do 
the NFS mount while glusterd is still starting up?


David



Re: [Gluster-devel] [Gluster-users] v3.6.2

2015-01-27 Thread David F. Robinson

In my /etc/fstab, I have the following:

  gfsib01bkp.corvidtec.com:/homegfs_bkp   /backup/homegfs      glusterfs  transport=tcp,_netdev 0 0
  gfsib01bkp.corvidtec.com:/Software_bkp  /backup/Software     glusterfs  transport=tcp,_netdev 0 0
  gfsib01bkp.corvidtec.com:/Source_bkp    /backup/Source       glusterfs  transport=tcp,_netdev 0 0

  #... Setup NFS mounts as well
  gfsib01bkp.corvidtec.com:/homegfs_bkp   /backup_nfs/homegfs  nfs  vers=3,intr,bg,rsize=32768,wsize=32768 0 0



It looks like it is trying to do the NFS mount before gluster has 
finished coming up, and that this is hanging the NFS ports.  I have 
_netdev on the glusterfs mount points to make sure the network (including 
InfiniBand) has come up prior to starting gluster.  Shouldn't the gluster 
init scripts check that gluster has started prior to attempting the NFS 
mount?  It doesn't look like this is working properly.


David



Re: [Gluster-devel] [Gluster-users] v3.6.2

2015-01-27 Thread David F. Robinson
Not elegant, but here is my short-term fix to prevent the issue after a 
reboot:


Added 'noauto' to the NFS mounts in /etc/fstab and put the mount commands in /etc/rc.local:

/etc/fstab:
#... Note: use 'noauto' for the NFS mounts and do the mount in /etc/rc.local to ensure that
#... gluster has been started before attempting to mount using NFS. Otherwise, it hangs the
#... NFS ports during startup.
gfsib01bkp.corvidtec.com:/homegfs_bkp /backup_nfs/homegfs nfs vers=3,intr,bg,rsize=32768,wsize=32768,noauto 0 0
gfsib01a.corvidtec.com:/homegfs /homegfs_nfs nfs vers=3,intr,bg,rsize=32768,wsize=32768,noauto 0 0



/etc/rc.local:
/etc/init.d/glusterd restart
(sleep 20; mount -a; mount /backup_nfs/homegfs)
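The fixed 'sleep 20' works but is timing-dependent. A sketch of a poll-based variant (the wait_until helper is illustrative, not part of gluster; the rc.local usage in the comment assumes the hostnames and mount points from the fstab above):

```shell
#!/bin/sh
# wait_until CMD TIMEOUT: retry CMD once per second until it succeeds
# or TIMEOUT seconds have elapsed; returns CMD's final status.
wait_until() {
    cmd=$1; timeout=$2
    while [ "$timeout" -gt 0 ]; do
        if eval "$cmd"; then return 0; fi
        sleep 1
        timeout=$((timeout - 1))
    done
    return 1
}

# Hypothetical rc.local usage: wait for gluster's NFS service to register
# with rpcbind before mounting, instead of sleeping a fixed 20 seconds:
#   /etc/init.d/glusterd restart
#   wait_until "rpcinfo -p | grep -q ' nfs\$'" 60 && mount /backup_nfs/homegfs

# Self-contained demonstration with a command that succeeds immediately:
wait_until "true" 5 && echo "ready"
```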




Re: [Gluster-devel] [Gluster-users] v3.6.2

2015-01-26 Thread David F. Robinson

Tried that... Still having errors starting gluster NFS...

From the /var/log/glusterfs/nfs.log file:


[2015-01-26 19:51:25.996481] I [MSGID: 100030] [glusterfsd.c:2018:main] 
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.6.2 
(args: /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p 
/var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S 
/var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket)
[2015-01-26 19:51:26.005501] I 
[rpcsvc.c:2142:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: 
Configured rpc.outstanding-rpc-limit with value 16
[2015-01-26 19:51:26.054144] E [nlm4.c:2481:nlm4svc_init] 0-nfs-NLM: 
unable to start /sbin/rpc.statd
[2015-01-26 19:51:26.054183] E [nfs.c:1342:init] 0-nfs: Failed to 
initialize protocols
[2015-01-26 19:51:26.054191] E [xlator.c:425:xlator_init] 0-nfs-server: 
Initialization of volume 'nfs-server' failed, review your volfile again
[2015-01-26 19:51:26.054198] E [graph.c:322:glusterfs_graph_init] 
0-nfs-server: initializing translator failed
[2015-01-26 19:51:26.054205] E [graph.c:525:glusterfs_graph_activate] 
0-graph: init failed
[2015-01-26 19:51:26.05] W [glusterfsd.c:1194:cleanup_and_exit] (-- 
0-: received signum (0), shutting down




-- Original Message --
From: Anatoly Pugachev mator...@gmail.com
To: David F. Robinson david.robin...@corvidtec.com
Cc: gluster-us...@gluster.org gluster-us...@gluster.org; Gluster 
Devel gluster-devel@gluster.org

Sent: 1/26/2015 2:48:08 PM
Subject: Re: [Gluster-users] v3.6.2


David,

Can you stop glusterfs on the affected machine and remove the 
gluster-related socket files from /var/run? Then start the glusterfs 
service again and try once more?
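The cleanup step can be illustrated in a scratch directory (the socket file name is taken from the nfs.log above; on a real system the directory would be /var/run and glusterd/glusterfsd would be stopped first):

```shell
#!/bin/sh
# Simulate removing stale gluster unix-domain socket files. The *.socket
# glob matches names like 1f0cee5a2d074e39b32ee5a81c70e68c.socket; other
# files in the directory are left alone.
dir=$(mktemp -d)
touch "$dir/1f0cee5a2d074e39b32ee5a81c70e68c.socket" "$dir/nfs.pid"
rm -f "$dir"/*.socket
ls "$dir"           # only nfs.pid remains
rm -rf "$dir"
```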


On Mon, Jan 26, 2015 at 5:57 PM, David F. Robinson 
david.robin...@corvidtec.com wrote:

Tried shutting down glusterd and glusterfsd and restarting.

[2015-01-26 14:52:53.548330] I 
[rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting 
frame-timeout to 600
[2015-01-26 14:52:53.549763] I 
[rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting 
frame-timeout to 600
[2015-01-26 14:52:53.551245] I 
[rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting 
frame-timeout to 600
[2015-01-26 14:52:53.552819] I 
[rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting 
frame-timeout to 600
[2015-01-26 14:52:53.554289] I 
[rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting 
frame-timeout to 600
[2015-01-26 14:52:53.555769] I 
[rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting 
frame-timeout to 600
[2015-01-26 14:52:53.564429] I 
[rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting 
frame-timeout to 600
[2015-01-26 14:52:53.565578] W [socket.c:611:__socket_rwv] 
0-management: readv on 
/var/run/0cdef7faa934cfe52676689ff8c0110f.socket failed (Invalid 
argument)
[2015-01-26 14:52:53.566488] I [MSGID: 106005] 
[glusterd-handler.c:4142:__glusterd_brick_rpc_notify] 0-management: 
Brick gfsib01bkp.corvidtec.com:/data/brick01bkp/Software_bkp has 
disconnected from glusterd.
[2015-01-26 14:52:53.567453] W [socket.c:611:__socket_rwv] 
0-management: readv on 
/var/run/09e734d5e8d52bb796896c7a33d0a3ff.socket failed (Invalid 
argument)
[2015-01-26 14:52:53.568248] I [MSGID: 106005] 
[glusterd-handler.c:4142:__glusterd_brick_rpc_notify] 0-management: 
Brick gfsib01bkp.corvidtec.com:/data/brick02bkp/Software_bkp has 
disconnected from glusterd.
[2015-01-26 14:52:53.569009] W [socket.c:611:__socket_rwv] 
0-management: readv on 
/var/run/3f6844c74682f39fa7457082119628c5.socket failed (Invalid 
argument)
[2015-01-26 14:52:53.569851] I [MSGID: 106005] 
[glusterd-handler.c:4142:__glusterd_brick_rpc_notify] 0-management: 
Brick gfsib01bkp.corvidtec.com:/data/brick01bkp/Source_bkp has 
disconnected from glusterd.
[2015-01-26 14:52:53.570818] W [socket.c:611:__socket_rwv] 
0-management: readv on 
/var/run/34d5cc70aba63082bbb467ab450bd08b.socket failed (Invalid 
argument)
[2015-01-26 14:52:53.571777] I [MSGID: 106005] 
[glusterd-handler.c:4142:__glusterd_brick_rpc_notify] 0-management: 
Brick gfsib01bkp.corvidtec.com:/data/brick02bkp/Source_bkp has 
disconnected from glusterd.
[2015-01-26 14:52:53.572681] W [socket.c:611:__socket_rwv] 
0-management: readv on 
/var/run/0cd747876dca36cb21ecc7a36f7f897c.socket failed (Invalid 
argument)
[2015-01-26 14:52:53.573533] I [MSGID: 106005] 
[glusterd-handler.c:4142:__glusterd_brick_rpc_notify] 0-management: 
Brick gfsib01bkp.corvidtec.com:/data/brick01bkp/homegfs_bkp has 
disconnected from glusterd.
[2015-01-26 14:52:53.574433] W [socket.c:611:__socket_rwv] 
0-management: readv on 
/var/run/88744e1365b414d41e720e480700716a.socket failed (Invalid 
argument)
[2015-01-26 14:52:53.575399] I [MSGID: 106005] 
[glusterd-handler.c:4142:__glusterd_brick_rpc_notify] 0-management: 
Brick gfsib01bkp.corvidtec.com:/data/brick02bkp/homegfs_bkp has 
disconnected from glusterd.
[2015-01-26 14:52:53.575434] W [socket.c:611:__socket_rwv] 
0-management: readv on 

Re: [Gluster-devel] [Gluster-users] v3.6.2

2015-01-26 Thread Joe Julian
Is rpcbind running? 




-- Original Message --
From: David F. Robinson david.robin...@corvidtec.com
To: gluster-us...@gluster.org gluster-us...@gluster.org; Gluster 
Devel gluster-devel@gluster.org
Sent: 1/26/2015 9:50:09 AM
Subject: v3.6.2

I have a server with v3.6.2 from which I cannot mount using NFS.  The 
FUSE mount works; however, I cannot get the NFS mount to work. From 
/var/log/messages:

Jan 26 09:27:28 gfs01bkp mount[2810]: mount to NFS server 
'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
Jan 26 09:27:53 gfs01bkp mount[4456]: mount to NFS server 
'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
Jan 26 09:29:28 

Re: [Gluster-devel] [Gluster-users] v3.6.2

2015-01-26 Thread David F. Robinson

[root@gfs01bkp bricks]# ps -ef | grep rpcbind
rpc   2306 1  0 11:32 ?00:00:00 rpcbind
root  5265  4638  0 11:55 pts/000:00:00 grep rpcbind

-- Original Message --
From: Joe Julian j...@julianfamily.org
To: David F. Robinson david.robin...@corvidtec.com; 
gluster-us...@gluster.org gluster-us...@gluster.org; Gluster Devel 
gluster-devel@gluster.org

Sent: 1/26/2015 11:55:09 AM
Subject: Re: [Gluster-users] v3.6.2


Is rpcbind running?

/data/brick02bkp/Software_bkp on port 49157
[2015-01-26 14:52:53.585719] I 
[glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: adding brick 
/data/brick01bkp/Software_bkp on port 49154
[2015-01-26 14:52:53.586281] I 
[glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: adding brick 
/data/brick02bkp/homegfs_bkp on port 49155




-- Original Message --
From: David F. Robinson david.robin...@corvidtec.com
To: gluster-us...@gluster.org gluster-us...@gluster.org; Gluster 
Devel gluster-devel@gluster.org


Re: [Gluster-devel] [Gluster-users] v3.6.2

2015-01-26 Thread Kaushal M
Your nfs.log file has the following lines,
```
[2015-01-26 20:06:58.298078] E
[rpcsvc.c:1303:rpcsvc_program_register_portmap] 0-rpc-service: Could not
register with portmap 100021 4 38468
[2015-01-26 20:06:58.298108] E [nfs.c:331:nfs_init_versions] 0-nfs: Program
 NLM4 registration failed
```

The Gluster NFS server has its own NLM (nlockmgr) implementation. You said
that you have the nfslock service on. Can you turn that off and try again?
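
Once nfslock is off, one quick way to confirm that no conflicting registrations remain is to filter `rpcinfo -p` output for the program names the Gluster NFS server needs to own. A minimal sketch; the `check_conflicts` helper name is made up here, but the program names (nfs, mountd, nlockmgr) are the standard ones:

```shell
# check_conflicts reads `rpcinfo -p` style output on stdin and prints the
# program names that would clash with the Gluster NFS server.
check_conflicts() {
    awk '$NF ~ /^(nfs|mountd|nlockmgr)$/ {print $NF}' | sort -u
}

# Example against a captured rpcinfo table; on a clean host,
# `rpcinfo -p | check_conflicts` prints nothing.
printf '100003 3 tcp 2049 nfs\n100024 1 udp 34738 status\n' | check_conflicts
# prints "nfs" for this sample input
```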

~kaushal

On Tue, Jan 27, 2015 at 11:21 AM, David F. Robinson 
david.robin...@corvidtec.com wrote:

  On a different system where gluster-NFS (not kernel-nfs) is working
 properly shows the following:

 [root@gfs01a glusterfs]# rpcinfo -p
    program vers proto   port  service
    100000    4   tcp     111  portmapper
    100000    3   tcp     111  portmapper
    100000    2   tcp     111  portmapper
    100000    4   udp     111  portmapper
    100000    3   udp     111  portmapper
    100000    2   udp     111  portmapper
    100005    3   tcp   38465  mountd
    100005    1   tcp   38466  mountd
    100003    3   tcp    2049  nfs
    100024    1   udp   42413  status
    100024    1   tcp   35424  status
    100021    4   tcp   38468  nlockmgr
    100021    1   udp     801  nlockmgr
    100227    3   tcp    2049  nfs_acl
    100021    1   tcp     804  nlockmgr

 [root@gfs01a glusterfs]# /etc/init.d/nfs status
 rpc.svcgssd is stopped
 rpc.mountd is stopped
 nfsd is stopped
 rpc.rquotad is stopped



 -- Original Message --
 From: Joe Julian j...@julianfamily.org
 To: Kaushal M kshlms...@gmail.com; David F. Robinson 
 david.robin...@corvidtec.com
 Cc: Gluster Users gluster-us...@gluster.org; Gluster Devel 
 gluster-devel@gluster.org
 Sent: 1/27/2015 12:48:49 AM
 Subject: Re: [Gluster-devel] [Gluster-users] v3.6.2


  If that was true, wouldn't it not say 'Connection refused', because the
  kernel nfs is listening?

 On January 26, 2015 9:43:34 PM PST, Kaushal M kshlms...@gmail.com
 wrote:

  Seems like you have the kernel NFS server running. The `rpcinfo -p`
  output you provided shows that there are other mountd, nfs and nlockmgr
  services running on your system. The Gluster NFS server requires that the
  kernel nfs services be disabled and not running.
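
On a RHEL/CentOS 6 style init system, disabling the kernel NFS services might look like the sketch below (assumed commands; adjust for your distribution). `rpcinfo -d` takes the program and version numbers shown by `rpcinfo -p`, which is how the stale registrations in this thread were eventually cleared:

```shell
# Stop the kernel NFS services and keep them from starting at boot
# (RHEL/CentOS 6 style commands; assumed, adjust for your init system).
service nfs stop
service nfslock stop
chkconfig nfs off
chkconfig nfslock off

# If stale portmapper registrations survive the shutdown, deregister them
# by program/version number, e.g. nfs (100003 v3) and mountd (100005 v1/v3):
rpcinfo -d 100003 3
rpcinfo -d 100005 3
rpcinfo -d 100005 1
```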

 ~kaushal

 On Tue, Jan 27, 2015 at 10:56 AM, David F. Robinson 
 david.robin...@corvidtec.com wrote:

 [root@gfs01bkp ~]# gluster volume status homegfs_bkp
 Status of volume: homegfs_bkp
 Gluster process                                              Port    Online  Pid
 ------------------------------------------------------------------------------
 Brick gfsib01bkp.corvidtec.com:/data/brick01bkp/homegfs_bkp  49152   Y       4087
 Brick gfsib01bkp.corvidtec.com:/data/brick02bkp/homegfs_bkp  49155   Y       4092
 NFS Server on localhost                                      N/A     N       N/A

 Task Status of Volume homegfs_bkp
 ------------------------------------------------------------------------------
 Task                 : Rebalance
 ID                   : 6d4c6c4e-16da-48c9-9019-dccb7d2cfd66
 Status               : completed





 -- Original Message --
 From: Atin Mukherjee amukh...@redhat.com
 To: Pranith Kumar Karampuri pkara...@redhat.com; Justin Clift 
 jus...@gluster.org; David F. Robinson david.robin...@corvidtec.com
 Cc: Gluster Users gluster-us...@gluster.org; Gluster Devel 
 gluster-devel@gluster.org
 Sent: 1/26/2015 11:51:13 PM
 Subject: Re: [Gluster-devel] v3.6.2



 On 01/27/2015 07:33 AM, Pranith Kumar Karampuri wrote:


  On 01/26/2015 09:41 PM, Justin Clift wrote:

  On 26 Jan 2015, at 14:50, David F. Robinson
  david.robin...@corvidtec.com wrote:

  I have a server with v3.6.2 from which I cannot mount using NFS. The
  FUSE mount works, however, I cannot get the NFS mount to work. From
  /var/log/messages:
Jan 26 09:27:28 gfs01bkp mount[2810]: mount to NFS server
  'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
  Jan 26 09:27:53 gfs01bkp mount[4456]: mount to NFS server
  'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
  Jan 26 09:29:28 gfs01bkp mount[2810]: mount to NFS server
  'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
  Jan 26 09:29:53 gfs01bkp mount[4456]: mount to NFS server
  'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
  Jan 26 09:31:28 gfs01bkp mount[2810]: mount to NFS server
  'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
  Jan 26 09:31:53 gfs01bkp mount[4456]: mount to NFS server
  'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
  Jan 26 09:33:28 gfs01bkp mount[2810]: mount to NFS server
  'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
  Jan 26 09:33:53 gfs01bkp mount[4456]: mount to NFS server
  'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
  Jan 26 09:35:28 gfs01bkp mount[2810]: mount to NFS server
  'gfsib01bkp.corvidtec.com' failed: Connection refused

Re: [Gluster-devel] [Gluster-users] v3.6.2

2015-01-26 Thread Pranith Kumar Karampuri


On 01/27/2015 07:33 AM, Pranith Kumar Karampuri wrote:


On 01/26/2015 09:41 PM, Justin Clift wrote:
On 26 Jan 2015, at 14:50, David F. Robinson 
david.robin...@corvidtec.com wrote:
I have a server with v3.6.2 from which I cannot mount using NFS.  
The FUSE mount works, however, I cannot get the NFS mount to work. 
From /var/log/messages:
  Jan 26 09:27:28 gfs01bkp mount[2810]: mount to NFS server 
'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
Jan 26 09:27:53 gfs01bkp mount[4456]: mount to NFS server 
'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
Jan 26 09:29:28 gfs01bkp mount[2810]: mount to NFS server 
'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
Jan 26 09:29:53 gfs01bkp mount[4456]: mount to NFS server 
'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
Jan 26 09:31:28 gfs01bkp mount[2810]: mount to NFS server 
'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
Jan 26 09:31:53 gfs01bkp mount[4456]: mount to NFS server 
'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
Jan 26 09:33:28 gfs01bkp mount[2810]: mount to NFS server 
'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
Jan 26 09:33:53 gfs01bkp mount[4456]: mount to NFS server 
'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
Jan 26 09:35:28 gfs01bkp mount[2810]: mount to NFS server 
'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
Jan 26 09:35:53 gfs01bkp mount[4456]: mount to NFS server 
'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
I also am continually getting the following errors in 
/var/log/glusterfs:

  [root@gfs01bkp glusterfs]# tail -f etc-glusterfs-glusterd.vol.log
[2015-01-26 14:41:51.260827] W [socket.c:611:__socket_rwv] 
0-management: readv on 
/var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid 
argument)
[2015-01-26 14:41:54.261240] W [socket.c:611:__socket_rwv] 
0-management: readv on 
/var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid 
argument)
[2015-01-26 14:41:57.261642] W [socket.c:611:__socket_rwv] 
0-management: readv on 
/var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid 
argument)
[2015-01-26 14:42:00.262073] W [socket.c:611:__socket_rwv] 
0-management: readv on 
/var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid 
argument)
[2015-01-26 14:42:03.262504] W [socket.c:611:__socket_rwv] 
0-management: readv on 
/var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid 
argument)
[2015-01-26 14:42:06.262935] W [socket.c:611:__socket_rwv] 
0-management: readv on 
/var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid 
argument)
[2015-01-26 14:42:09.263334] W [socket.c:611:__socket_rwv] 
0-management: readv on 
/var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid 
argument)
[2015-01-26 14:42:12.263761] W [socket.c:611:__socket_rwv] 
0-management: readv on 
/var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid 
argument)
[2015-01-26 14:42:15.264177] W [socket.c:611:__socket_rwv] 
0-management: readv on 
/var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid 
argument)
[2015-01-26 14:42:18.264623] W [socket.c:611:__socket_rwv] 
0-management: readv on 
/var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid 
argument)
[2015-01-26 14:42:21.265053] W [socket.c:611:__socket_rwv] 
0-management: readv on 
/var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid 
argument)
[2015-01-26 14:42:24.265504] W [socket.c:611:__socket_rwv] 
0-management: readv on 
/var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid 
argument)
I believe this error message comes when the socket file is not 
present. I see the following commit, which changed the location of the 
sockets. Maybe Atin knows more about this: +Atin.
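
A readv failure like the ones above repeats every few seconds while glusterd keeps polling a unix socket whose backing file is gone. A minimal sketch of checking for that condition (the `socket_present` helper is a made-up name; the path is copied from the log above):

```shell
# socket_present succeeds only when the path exists and is a
# unix-domain socket.
socket_present() {
    [ -S "$1" ]
}

if socket_present /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket; then
    echo "socket file exists"
else
    echo "socket file missing; glusterd will keep logging readv errors"
fi
```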
Ignore this mail above. I see that the commit is only present on Master: 
http://review.gluster.org/9423


Pranith


Pranith

^C
  Also, when I try to NFS mount my gluster volume, I am getting
Any chance there's a network or host-based firewall stopping some of 
the ports?


+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-users mailing list
gluster-us...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] v3.6.2

2015-01-26 Thread Kaushal M
I think you get a connection refused if there are no servers running as
well.

On Tue, Jan 27, 2015 at 11:18 AM, Joe Julian j...@julianfamily.org wrote:

 If that was true, wouldn't it not say 'Connection refused', because the
 kernel nfs is listening?

 On January 26, 2015 9:43:34 PM PST, Kaushal M kshlms...@gmail.com wrote:

  Seems like you have the kernel NFS server running. The `rpcinfo -p`
  output you provided shows that there are other mountd, nfs and nlockmgr
  services running on your system. The Gluster NFS server requires that the
  kernel nfs services be disabled and not running.

 ~kaushal

 On Tue, Jan 27, 2015 at 10:56 AM, David F. Robinson 
 david.robin...@corvidtec.com wrote:

 [root@gfs01bkp ~]# gluster volume status homegfs_bkp
 Status of volume: homegfs_bkp
 Gluster process                                              Port    Online  Pid
 ------------------------------------------------------------------------------
 Brick gfsib01bkp.corvidtec.com:/data/brick01bkp/homegfs_bkp  49152   Y       4087
 Brick gfsib01bkp.corvidtec.com:/data/brick02bkp/homegfs_bkp  49155   Y       4092
 NFS Server on localhost                                      N/A     N       N/A

 Task Status of Volume homegfs_bkp
 ------------------------------------------------------------------------------
 Task                 : Rebalance
 ID                   : 6d4c6c4e-16da-48c9-9019-dccb7d2cfd66
 Status               : completed





 -- Original Message --
 From: Atin Mukherjee amukh...@redhat.com
 To: Pranith Kumar Karampuri pkara...@redhat.com; Justin Clift 
 jus...@gluster.org; David F. Robinson david.robin...@corvidtec.com
 Cc: Gluster Users gluster-us...@gluster.org; Gluster Devel 
 gluster-devel@gluster.org
 Sent: 1/26/2015 11:51:13 PM
 Subject: Re: [Gluster-devel] v3.6.2



 On 01/27/2015 07:33 AM, Pranith Kumar Karampuri wrote:


  On 01/26/2015 09:41 PM, Justin Clift wrote:

  On 26 Jan 2015, at 14:50, David F. Robinson
  david.robin...@corvidtec.com wrote:

  I have a server with v3.6.2 from which I cannot mount using NFS. The
  FUSE mount works, however, I cannot get the NFS mount to work. From
  /var/log/messages:
Jan 26 09:27:28 gfs01bkp mount[2810]: mount to NFS server
  'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
  Jan 26 09:27:53 gfs01bkp mount[4456]: mount to NFS server
  'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
  Jan 26 09:29:28 gfs01bkp mount[2810]: mount to NFS server
  'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
  Jan 26 09:29:53 gfs01bkp mount[4456]: mount to NFS server
  'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
  Jan 26 09:31:28 gfs01bkp mount[2810]: mount to NFS server
  'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
  Jan 26 09:31:53 gfs01bkp mount[4456]: mount to NFS server
  'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
  Jan 26 09:33:28 gfs01bkp mount[2810]: mount to NFS server
  'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
  Jan 26 09:33:53 gfs01bkp mount[4456]: mount to NFS server
  'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
  Jan 26 09:35:28 gfs01bkp mount[2810]: mount to NFS server
  'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
  Jan 26 09:35:53 gfs01bkp mount[4456]: mount to NFS server
  'gfsib01bkp.corvidtec.com' failed: Connection refused, retrying
  I also am continually getting the following errors in
  /var/log/glusterfs:
[root@gfs01bkp glusterfs]# tail -f etc-glusterfs-glusterd.vol.log
  [2015-01-26 14:41:51.260827] W [socket.c:611:__socket_rwv]
  0-management: readv on
  /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid
  argument)
  [2015-01-26 14:41:54.261240] W [socket.c:611:__socket_rwv]
  0-management: readv on
  /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid
  argument)
  [2015-01-26 14:41:57.261642] W [socket.c:611:__socket_rwv]
  0-management: readv on
  /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid
  argument)
  [2015-01-26 14:42:00.262073] W [socket.c:611:__socket_rwv]
  0-management: readv on
  /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid
  argument)
  [2015-01-26 14:42:03.262504] W [socket.c:611:__socket_rwv]
  0-management: readv on
  /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid
  argument)
  [2015-01-26 14:42:06.262935] W [socket.c:611:__socket_rwv]
  0-management: readv on
  /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid
  argument)
  [2015-01-26 14:42:09.263334] W [socket.c:611:__socket_rwv]
  0-management: readv on
  /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid
  argument)
  [2015-01-26 14:42:12.263761] W [socket.c:611:__socket_rwv]
  0-management: readv on
  /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed (Invalid
  argument)
  [2015-01-26 14:42:15.264177] W