I would be concerned about the connections in a SYN_SENT state. It would be
helpful if this was done with the -n flag so no DNS resolution is done and we
could see the real IPs.
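
For example, something like this (both flags are standard lsof options; -n skips
DNS lookups and -P keeps port numbers numeric):

lsof -nP | grep 24007
lsof -nP -i TCP:24007    # or have lsof itself filter to TCP sockets on that port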

On 11/21/12 2:49 PM, Steve Postma wrote:
Eco,
they all appear to be using 24007 and 24009; none of them are running on 24010
or 24011.
Steve

[root@nas-0-0 ~]# lsof | grep 24010
[root@nas-0-0 ~]# lsof | grep 24011
[root@nas-0-0 ~]# lsof | grep 24009
glusterfs 3536      root   18u     IPv4             143541      0t0        TCP 
10.1.1.10:1022->gluster-data:24009 (ESTABLISHED)
[root@nas-0-0 ~]# lsof | grep 24007
glusterd  3515      root    6u     IPv4             143469      0t0        TCP 
nas-0-0:24007->nas-0-0:1022 (ESTABLISHED)
glusterd  3515      root    8u     IPv4              77801      0t0        TCP 
*:24007 (LISTEN)
glusterd  3515      root   12u     IPv4             143805      0t0        TCP 
10.1.1.10:1020->gluster-data:24007 (ESTABLISHED)
glusterfs 3536      root    7u     IPv4             143468      0t0        TCP 
nas-0-0:1022->nas-0-0:24007 (ESTABLISHED)
glusterfs 3536      root   16u     IPv4             399743      0t0        TCP 
10.1.1.10:1023->gluster-0-0:24007 (SYN_SENT)
glusterfs 3536      root   17u     IPv4             399745      0t0        TCP 
10.1.1.10:1021->gluster-0-1:24007 (SYN_SENT)



[root@nas-0-1 ~]# lsof | grep 24007
glusterd  3447      root    6u     IPv4              77189      0t0        TCP 
nas-0-1:24007->nas-0-1:1021 (ESTABLISHED)
glusterd  3447      root    8u     IPv4              11540      0t0        TCP 
*:24007 (LISTEN)
glusterd  3447      root   10u     IPv4             317363      0t0        TCP 
10.1.1.11:1022->gluster-0-0:24007 (SYN_SENT)
glusterd  3447      root   12u     IPv4              77499      0t0        TCP 
10.1.1.11:1023->gluster-data:24007 (ESTABLISHED)
glusterfs 3468      root    7u     IPv4              77188      0t0        TCP 
nas-0-1:1021->nas-0-1:24007 (ESTABLISHED)
glusterfs 3468      root   17u     IPv4             317361      0t0        TCP 
10.1.1.11:1019->gluster-0-1:24007 (SYN_SENT)
[root@nas-0-1 ~]# lsof | grep 24009
glusterfs 3468      root   18u     IPv4              77259      0t0        TCP 
10.1.1.11:1021->gluster-data:24009 (ESTABLISHED)
[root@nas-0-1 ~]# lsof | grep 24010
[root@nas-0-1 ~]# lsof | grep 24011

[root@mseas-data ~]# lsof | grep 24007
glusterfs  4301      root   16u     IPv4             586766                  TCP 
10.1.1.2:1021->gluster-0-0:24007 (SYN_SENT)
glusterfs  4301      root   17u     IPv4             586768                  TCP 
10.1.1.2:1020->gluster-0-1:24007 (SYN_SENT)
glusterfs 17526      root    8u     IPv4             205563                  TCP 
mseas-data.mit.edu:1015->mseas-data.mit.edu:24007 (ESTABLISHED)
[root@mseas-data ~]# lsof | grep 24009
glusterfs  4008      root   10u     IPv4              77692                  
TCP *:24009 (LISTEN)
glusterfs  4008      root   13u     IPv4             148473                  TCP 
gluster-data:24009->gluster-data:1018 (ESTABLISHED)
glusterfs  4008      root   14u     IPv4              82251                  TCP 
gluster-data:24009->10.1.1.10:1022 (ESTABLISHED)
glusterfs  4008      root   15u     IPv4              82440                  TCP 
gluster-data:24009->10.1.1.11:1021 (ESTABLISHED)
glusterfs  4008      root   16u     IPv4             205600                  TCP 
gluster-data:24009->gluster-data:1023 (ESTABLISHED)
glusterfs  4008      root   17u     IPv4             218671                  TCP 
10.1.1.2:24009->10.1.1.1:1018 (ESTABLISHED)
glusterfs  4301      root   18u     IPv4             148472                  TCP 
gluster-data:1018->gluster-data:24009 (ESTABLISHED)
glusterfs 17526      root   12u     IPv4             205599                  TCP 
gluster-data:1023->gluster-data:24009 (ESTABLISHED)
[root@mseas-data ~]# lsof | grep 24010
[root@mseas-data ~]# lsof | grep 24011
[root@mseas-data ~]#



________________________________
From: [email protected] [[email protected]] on 
behalf of Steve Postma [[email protected]]
Sent: Wednesday, November 21, 2012 10:19 AM
To: Eco Willson; [email protected]
Subject: Re: [Gluster-users] FW: cant mount gluster volume

You're right, Eco.
I am only able to telnet on port 24007; ports 24009, 24010 and 24011 are all
connection refused. Iptables is not running on any of the machines.


mseas-data: 24007 and 24009 open; 24010 and 24011 closed
nas-0-0: 24007 open; 24009, 24010 and 24011 closed
nas-0-1: 24007 open; 24009, 24010 and 24011 closed



Steve
________________________________
From: [email protected] [[email protected]] on behalf of Eco Willson [[email protected]]
Sent: Tuesday, November 20, 2012 6:32 PM
To: [email protected]
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,
On 11/20/2012 02:43 PM, Steve Postma wrote:
Hi Eco,
I believe you are asking that I run

find /mount/glusterfs >/dev/null

only? That should take care of the issue?
Meaning, run a recursive find against the client mount point
(/mount/glusterfs is used as an example in the docs). This should solve
the specific issue of the files not being visible.
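
In your case, assuming the client mount is the /gdata mount shown on mseas-data
in your df output, that would be:

find /gdata >/dev/null
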
However, the issue of the disk space discrepancy is different. From the
df output, the only filesystem with 18GB is / on the mseas-data node; I
assume this is where you are mounting from?
If so, then the issue goes back to one of connectivity: the gluster
bricks most likely are still not being connected to, which may actually
be the root cause of both problems.

Can you confirm that iptables is off on all hosts (and on any client
you would connect from)? I had seen your previous tests with telnet;
was this done from and to all hosts, and from the client machine?
Make sure that at a minimum you can hit 24007, 24009, 24010 and 24011.
This will test the management port and the expected initial port for
each of the bricks in the volume.
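
One quick way to check them all, assuming the nc on these hosts supports -z
(otherwise telnet <host> <port> one port at a time does the same job), run from
each server and from the client:

for h in gluster-data gluster-0-0 gluster-0-1; do
  for p in 24007 24009 24010 24011; do
    nc -z -w 3 $h $p && echo "$h:$p open" || echo "$h:$p closed"
  done
done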


Thanks,

Eco

Thanks for your time,
Steve

________________________________
From: [email protected] [[email protected]] on behalf of Eco Willson [[email protected]]
Sent: Tuesday, November 20, 2012 5:39 PM
To: [email protected]
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,

On 11/20/2012 01:32 PM, Steve Postma wrote:

[root@mseas-data gdata]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 18G 6.6G 9.7G 41% /
/dev/sda6 77G 49G 25G 67% /scratch
/dev/sda3 18G 3.8G 13G 24% /var
/dev/sda2 18G 173M 16G 2% /tmp
tmpfs 3.9G 0 3.9G 0% /dev/shm
/dev/mapper/the_raid-lv_home
3.0T 2.2T 628G 79% /home
glusterfs#mseas-data:/gdata
15T 14T 606G 96% /gdata


[root@nas-0-0 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 137G 33G 97G 26% /
/dev/sda1 190M 24M 157M 14% /boot
tmpfs 2.0G 0 2.0G 0% /dev/shm
/dev/sdb1 21T 19T 1.5T 93% /mseas-data-0-0

[root@nas-0-1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 137G 34G 97G 26% /
/dev/sda1 190M 24M 157M 14% /boot
tmpfs 2.0G 0 2.0G 0% /dev/shm
/dev/sdb1 21T 19T 1.3T 94% /mseas-data-0-1


Thanks for confirming.

cat of /etc/glusterfs/glusterd.vol from backup

[root@mseas-data glusterd]# cat /root/mseas_backup/etc/glusterfs/glusterd.vol
volume management
type mgmt/glusterd
option working-directory /etc/glusterd
option transport-type socket,rdma
option transport.socket.keepalive-time 10
option transport.socket.keepalive-interval 2
end-volume


The vol file for 2.x would be in /etc/glusterfs/<volume name>.vol I believe. It 
should contain an entry similar to this output for each of the servers toward the top 
of the file.

The article you referenced is looking for the words "glusterfs-volgen" in a vol
file. I have used locate and grep, but can find no such entry in any .vol files.


This would not appear if the glusterfs-volgen command wasn't used during 
creation. The main consideration is to ensure that you have the command in step 
5:

find /mount/glusterfs >/dev/null

- Eco

Thanks




________________________________
From: [email protected] [[email protected]] on behalf of Eco Willson [[email protected]]
Sent: Tuesday, November 20, 2012 4:03 PM
To: [email protected]
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,



On 11/20/2012 12:03 PM, Steve Postma wrote:


The do show expected size. I have a backup of /etc/glusterd and /etc/glusterfs 
from before upgrade.


Can we see the vol file from the 2.x install and the output of df -h for
each of the bricks?


It's interesting that "gluster volume info" shows the correct path for each
machine.

These are the correct mountpoints on each machine, and from each machine I can 
see the files and structure.


If the volume was created in a different order than before, then it is
expected that you would only be able to see the files from the backend
directories and not from the client mount.
If this is the case, recreating the volume in the correct order should
show the files from the mount.
If the volume was recreated properly, make sure you have followed the
upgrade steps to go from versions prior to 3.1:
http://www.gluster.org/community/documentation/index.php/Gluster_3.0_to_3.2_Upgrade_Guide

This would explain why the files can't be viewed from the client, but
the size discrepancy isn't expected if we see the expected output from
df for the bricks.
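
Purely as a syntax sketch of what recreating would look like (brick paths taken
from the volume info quoted below, and only after stopping and deleting the
existing volume definition; the right brick order is whatever the original 2.x
vol file had, so don't run this as-is):

gluster volume create gdata transport tcp gluster-0-0:/mseas-data-0-0 \
    gluster-0-1:/mseas-data-0-1 gluster-data:/data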




[root@mseas-data data]# gluster volume info

Volume Name: gdata
Type: Distribute
Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data



________________________________
From: [email protected] [[email protected]] on behalf of Eco Willson [[email protected]]
Sent: Tuesday, November 20, 2012 3:02 PM
To: [email protected]
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,





On 11/20/2012 11:09 AM, Steve Postma wrote:


Hi Eco, thanks for your help.

If I run on brick 1:
mount -t glusterfs gluster-data:/gdata /gdata

it mounts but appears as an 18 GB partition with nothing in it


To confirm, are the export directories mounted properly on all three
servers?
Does df -h show the expected directories on each server, and do they
show the expected size?
Does gluster volume info show the same output on all three servers?


I can mount it from the client, but again, there is nothing in it.



Before the upgrade this was a 50 TB gluster volume. Was that volume information
lost with the upgrade?


Do you have the old vol files from before the upgrade? It would be good
to see them to make sure the volume got recreated properly.


The file structure appears intact on each brick.


As long as the file structure is intact, you will be able to recreate
the volume, although it may require a potentially painful rsync in the
worst case.

- Eco





Steve


________________________________
From: [email protected] [[email protected]] on behalf of Eco Willson [[email protected]]
Sent: Tuesday, November 20, 2012 1:29 PM
To: [email protected]
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,

The volume is a pure distribute:



Type: Distribute


In order to have files replicate, you need:
1) a number of bricks that is a multiple of the replica count, e.g.,
for your three-node configuration you would need two bricks per node to
set up replica 2. You could set up replica 3, but you will take a
performance hit in doing so.
2) a replica count specified during volume creation, e.g.
`gluster volume create <vol name> replica 2 server1:/export server2:/export`
(see the sketch below)
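
For instance, a sketch of replica 2 across your three servers with two
hypothetical bricks per node (consecutive bricks form a replica pair, so the
ordering below keeps each pair on two different servers):

gluster volume create <vol name> replica 2 transport tcp \
    gluster-0-0:/export/brick1 gluster-0-1:/export/brick1 \
    gluster-0-1:/export/brick2 gluster-data:/export/brick1 \
    gluster-data:/export/brick2 gluster-0-0:/export/brick2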

From the volume info you provided, the export directories are different
for all three nodes:

Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data


Which node are you trying to mount to /data? If it is not the
gluster-data node, then it will fail if there is not a /data directory.
In this case, it is a good thing, since mounting to /data on gluster-0-0
or gluster-0-1 would not accomplish what you need.
To clarify, there is a distinction to be made between the export volume
mount and the gluster mount point. In this case, you are mounting the
brick.
In order to see all the files, you would need to mount the volume with
the native client, or NFS.
For the native client:
mount -t glusterfs gluster-data:/gdata /mnt/<gluster mount dir>
For NFS:
mount -t nfs -o vers=3 gluster-data:/gdata /mnt/<gluster mount dir>
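
Once mounted, df -h against that mount point is a quick sanity check; for a
distribute volume it should report roughly the aggregate size of all three
bricks rather than the 18GB of the root filesystem. For example, assuming
/mnt/gdata as the mount directory:

mount -t glusterfs gluster-data:/gdata /mnt/gdata
df -h /mnt/gdata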


Thanks,

Eco
On 11/20/2012 09:42 AM, Steve Postma wrote:


I have a 3 node gluster cluster that had 3.1.4 uninstalled and 3.3.1 installed.

I had some mounting issues yesterday, from a rocks 6.2 install to the cluster. 
I was able to overcome those issues and mount the export on my node. Thanks to 
all for your help.

However, I can only view the portion of files that is directly stored on the
one brick in the cluster. The other bricks do not seem to be replicating, though
gluster reports the volume as up.

[root@mseas-data ~]# gluster volume info
Volume Name: gdata
Type: Distribute
Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data



The brick we are attaching to has this in the fstab file.
/dev/mapper/the_raid-lv_data /data xfs quota,noauto 1 0


but "mount -a" does not appear to do anything.
I have to run "mount -t xfs /dev/mapper/the_raid-lv_data /data"
manually to mount it.



Any help with troubleshooting why we are only seeing data from 1 brick of 3 
would be appreciated,
Thanks,
Steve Postma







________________________________
From: Steve Postma
Sent: Monday, November 19, 2012 3:29 PM
To: [email protected]
Subject: cant mount gluster volume

I am still unable to mount a new 3.3.1 glusterfs install. I have tried from one 
of the actual machines in the cluster to itself, as well as from various other 
clients. They all seem to be failing in the same part of the process.

_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users