Eco, after stopping Gluster and restarting, I get the same results as before:
telnet can connect to 24007 but to none of the other ports. I noticed one
machine has a process running that the other two do not. PID 22603 refers to
"--volfile-id gdata.gluster-data.data" and is only running on that one
machine. Is this correct?




[root@mseas-data ~]# ps -ef | grep gluster
root     22582     1  0 15:00 ?        00:00:00 /usr/sbin/glusterd -p 
/var/run/glusterd.pid
root     22603     1  0 15:00 ?        00:00:00 /usr/sbin/glusterfsd -s 
localhost --volfile-id gdata.gluster-data.data -p 
/var/lib/glusterd/vols/gdata/run/gluster-data-data.pid -S 
/tmp/e3eac7ce95e786a3d909b8fc65ed2059.socket --brick-name /data -l 
/var/log/glusterfs/bricks/data.log --xlator-option 
*-posix.glusterd-uuid=22f1102a-08e6-482d-ad23-d8e063cf32ed --brick-port 24009 
--xlator-option gdata-server.listen-port=24009
root     22609     1  0 15:00 ?        00:00:00 /usr/sbin/glusterfs -s 
localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l 
/var/log/glusterfs/nfs.log -S /tmp/d5c892de43c28a1ee7481b780245b789.socket
root     22690 22511  0 15:01 pts/0    00:00:00 grep gluster



[root@nas-0-0 ~]# ps -ef | grep gluster
root      7943     1  3 14:43 ?        00:00:00 /usr/sbin/glusterd -p 
/var/run/glusterd.pid
root      7965     1  0 14:43 ?        00:00:00 /usr/sbin/glusterfs -s 
localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l 
/var/log/glusterfs/nfs.log -S /tmp/8f87e178e9707e4694ee7a2543c66db9.socket
root      7976  7898  0 14:43 pts/1    00:00:00 grep gluster
[root@nas-0-0 ~]#
[root@nas-0-1 ~]# ps -ef | grep gluster
root      7567     1  4 14:47 ?        00:00:00 /usr/sbin/glusterd -p 
/var/run/glusterd.pid
root      7589     1  0 14:47 ?        00:00:00 /usr/sbin/glusterfs -s 
localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l 
/var/log/glusterfs/nfs.log -S /tmp/6054da6605d9f9d1c1e99252f1d235a6.socket
root      7600  7521  0 14:47 pts/2    00:00:00 grep gluster
________________________________
From: [email protected] [[email protected]] on 
behalf of Eco Willson [[email protected]]
Sent: Wednesday, November 21, 2012 2:52 PM
To: [email protected]
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,

The simplest way to troubleshoot (assuming that the nodes are not in
production) would be:

1) Unmount from the clients
2) Stop gluster
3) `killall gluster{,d,fs,fsd}`
4) Start gluster again

Afterwards, try to telnet to the ports again; at that point they should
accept connections.
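
A condensed sketch of the sequence above, run on each server (the service
name glusterd and the /gdata client mount point are assumptions based on
this thread; adjust as needed):

umount /gdata                                   # on each client first
service glusterd stop
killall gluster glusterd glusterfs glusterfsd   # the expansion of gluster{,d,fs,fsd}
service glusterd start
telnet mseas-data 24007                         # then repeat for 24009, 24010, 24011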

Thanks,

Eco



On 11/21/2012 07:19 AM, Steve Postma wrote:
> You're right, Eco.
> I am only able to telnet on port 24007; ports 24009, 24010 and 24011 are all
> connection refused. Iptables is not running on any of the machines.
>
>
> mseas-data: 24007, 24009 open; 24010 and 24011 closed
> nas-0-0: 24007 open; 24009, 24010 and 24011 closed
> nas-0-1: 24007 open; 24009, 24010 and 24011 closed
>
>
>
> Steve
> ________________________________
> From: [email protected] [[email protected]] on behalf of Eco Willson [[email protected]]
> Sent: Tuesday, November 20, 2012 6:32 PM
> To: [email protected]
> Subject: Re: [Gluster-users] FW: cant mount gluster volume
>
> Steve,
> On 11/20/2012 02:43 PM, Steve Postma wrote:
>> Hi Eco,
>> I believe you are asking that I run
>>
>> find /mount/glusterfs >/dev/null
>>
>> only? That should take care of the issue?
> Meaning, run a recursive find against the client mount point
> (/mount/glusterfs is used as an example in the docs). This should solve
> the specific issue of the files not being visible.
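> For example, assuming the volume is mounted at /gdata on the client, as in
> the earlier mount command:
>
> find /gdata >/dev/null
>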
> However, the issue of the disk space discrepancy is different. From the
> df output, the only filesystem with 18GB is / on the mseas-data node; I
> assume this is where you are mounting from?
> If so, then the issue goes back to one of connectivity: the gluster
> bricks most likely are still not being connected to, which may actually
> be the root cause of both problems.
>
> Can you confirm that iptables is off on all hosts (and on any client
> you would connect from)? I had seen your previous tests with telnet;
> was this done from and to all hosts, including from the client machine?
> Make sure that at a minimum you can hit 24007, 24009, 24010 and 24011.
> This will test the management port and the expected initial port for
> each of the bricks in the volume.
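>
> A quick sketch for checking all four ports in one pass (bash only; the
> hostnames are taken from the prompts in this thread, adjust as needed):
>
> for host in mseas-data nas-0-0 nas-0-1; do
>   for port in 24007 24009 24010 24011; do
>     if (exec 3<>/dev/tcp/$host/$port) 2>/dev/null; then
>       echo "$host:$port open"
>     else
>       echo "$host:$port closed"
>     fi
>   done
> done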
>
>
> Thanks,
>
> Eco
>
>> Thanks for your time,
>> Steve
>>
>> ________________________________
>> From: [email protected] [[email protected]] on behalf of Eco Willson [[email protected]]
>> Sent: Tuesday, November 20, 2012 5:39 PM
>> To: [email protected]
>> Subject: Re: [Gluster-users] FW: cant mount gluster volume
>>
>> Steve,
>>
>> On 11/20/2012 01:32 PM, Steve Postma wrote:
>>
>> [root@mseas-data gdata]# df -h
>> Filesystem Size Used Avail Use% Mounted on
>> /dev/sda1 18G 6.6G 9.7G 41% /
>> /dev/sda6 77G 49G 25G 67% /scratch
>> /dev/sda3 18G 3.8G 13G 24% /var
>> /dev/sda2 18G 173M 16G 2% /tmp
>> tmpfs 3.9G 0 3.9G 0% /dev/shm
>> /dev/mapper/the_raid-lv_home
>> 3.0T 2.2T 628G 79% /home
>> glusterfs#mseas-data:/gdata
>> 15T 14T 606G 96% /gdata
>>
>>
>> [root@nas-0-0 ~]# df -h
>> Filesystem Size Used Avail Use% Mounted on
>> /dev/sda3 137G 33G 97G 26% /
>> /dev/sda1 190M 24M 157M 14% /boot
>> tmpfs 2.0G 0 2.0G 0% /dev/shm
>> /dev/sdb1 21T 19T 1.5T 93% /mseas-data-0-0
>>
>> [root@nas-0-1 ~]# df -h
>> Filesystem Size Used Avail Use% Mounted on
>> /dev/sda3 137G 34G 97G 26% /
>> /dev/sda1 190M 24M 157M 14% /boot
>> tmpfs 2.0G 0 2.0G 0% /dev/shm
>> /dev/sdb1 21T 19T 1.3T 94% /mseas-data-0-1
>>
>>
>> Thanks for confirming.
>>
>> cat of /etc/glusterfs/glusterd.vol from backup
>>
>> [root@mseas-data glusterd]# cat /root/mseas_backup/etc/glusterfs/glusterd.vol
>> volume management
>> type mgmt/glusterd
>> option working-directory /etc/glusterd
>> option transport-type socket,rdma
>> option transport.socket.keepalive-time 10
>> option transport.socket.keepalive-interval 2
>> end-volume
>>
>>
>> The vol file for 2.x would be in /etc/glusterfs/<volume name>.vol, I believe.
>> Toward the top of the file it should contain an entry for each of the
>> servers.
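>>
>> Roughly, a per-server client entry in a 2.x-era vol file looks something
>> like this (the volume and brick names here are illustrative only):
>>
>> volume gluster-0-0-client
>>   type protocol/client
>>   option transport-type tcp
>>   option remote-host gluster-0-0
>>   option remote-subvolume brick
>> end-volume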
>>
>> The article you referenced is looking for the words "glusterfs-volgen" in a
>> vol file. I have used locate and grep, but can find no such entry in any
>> .vol files.
>>
>>
>> This would not appear if the glusterfs-volgen command wasn't used during
>> creation. The main consideration is to ensure that you run the command in
>> step 5:
>>
>> find /mount/glusterfs >/dev/null
>>
>> - Eco
>>
>> Thanks
>>
>>
>>
>>
>> ________________________________
>> From: [email protected] [[email protected]] on behalf of Eco Willson [[email protected]]
>> Sent: Tuesday, November 20, 2012 4:03 PM
>> To: [email protected]
>> Subject: Re: [Gluster-users] FW: cant mount gluster volume
>>
>> Steve,
>>
>>
>>
>> On 11/20/2012 12:03 PM, Steve Postma wrote:
>>
>>
>> They do show the expected size. I have a backup of /etc/glusterd and
>> /etc/glusterfs from before the upgrade.
>>
>>
>> Can we see the vol file from the 2.x install and the output of df -h for
>> each of the bricks?
>>
>>
>> It's interesting that "gluster volume info" shows the correct path for each
>> machine.
>>
>> These are the correct mountpoints on each machine, and from each machine I 
>> can see the files and structure.
>>
>>
>> If the volume was created in a different order than before, then it is
>> expected you would be able to see the files only from the backend
>> directories and not from the client mount.
>> If this is the case, recreating the volume in the correct order should
>> show the files from the mount.
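>>
>> As a minimal sketch (assuming the brick order shown by gluster volume
>> info later in this thread matches the original 2.x layout, and that the
>> existing volume definition has been removed first):
>>
>> gluster volume create gdata gluster-0-0:/mseas-data-0-0 \
>>     gluster-0-1:/mseas-data-0-1 gluster-data:/data
>>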
>> If the volume was recreated properly, make sure you have followed the
>> upgrade steps to go from versions prior to 3.1:
>> http://www.gluster.org/community/documentation/index.php/Gluster_3.0_to_3.2_Upgrade_Guide
>>
>> This would explain why the files can't be viewed from the client, but
>> the size discrepancy isn't expected if we see the expected output from
>> df for the bricks.
>>
>>
>>
>>
>> [root@mseas-data data]# gluster volume info
>>
>> Volume Name: gdata
>> Type: Distribute
>> Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
>> Status: Started
>> Number of Bricks: 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster-0-0:/mseas-data-0-0
>> Brick2: gluster-0-1:/mseas-data-0-1
>> Brick3: gluster-data:/data
>>
>>
>>
>> ________________________________
>> From: [email protected] [[email protected]] on behalf of Eco Willson [[email protected]]
>> Sent: Tuesday, November 20, 2012 3:02 PM
>> To: [email protected]
>> Subject: Re: [Gluster-users] FW: cant mount gluster volume
>>
>> Steve,
>>
>>
>>
>>
>>
>>
>> On 11/20/2012 11:09 AM, Steve Postma wrote:
>>
>>
>> Hi Eco, thanks for your help.
>>
>> If I run on brick 1:
>> mount -t glusterfs gluster-data:/gdata /gdata
>>
>> it mounts, but appears as an 18 GB partition with nothing in it.
>>
>>
>> To confirm, are the export directories mounted properly on all three
>> servers?
>> Does df -h show the expected directories on each server, and do they
>> show the expected size?
>> Does gluster volume info show the same output on all three servers?
>>
>>
>> I can mount it from the client, but again, there is nothing in it.
>>
>>
>>
>> Before the upgrade this was a 50 TB gluster volume. Was that volume
>> information lost with the upgrade?
>>
>>
>> Do you have the old vol files from before the upgrade? It would be good
>> to see them to make sure the volume got recreated properly.
>>
>>
>> The file structure appears intact on each brick.
>>
>>
>> As long as the file structure is intact, you will be able to recreate
>> the volume, although it may require a potentially painful rsync in the
>> worst case.
>>
>> - Eco
>>
>>
>>
>>
>>
>> Steve
>>
>>
>> ________________________________
>> From: [email protected] [[email protected]] on behalf of Eco Willson [[email protected]]
>> Sent: Tuesday, November 20, 2012 1:29 PM
>> To: [email protected]
>> Subject: Re: [Gluster-users] FW: cant mount gluster volume
>>
>> Steve,
>>
>> The volume is a pure distribute:
>>
>>
>>
>> Type: Distribute
>>
>>
>> In order to have files replicate, you need:
>> 1) a number of bricks that is a multiple of the replica count.
>> For your three-node configuration, you would need two bricks per
>> node to set up replica 2. You could set up replica 3, but you will
>> take a performance hit in doing so.
>> 2) a replica count given during volume creation, e.g.
>> `gluster volume create <vol name> replica 2 server1:/export server2:/export`
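>>
>> A sketch of what that could look like for your three nodes with two
>> bricks each (the /export/brickN paths are illustrative only; each
>> replica pair is spread across two different nodes):
>>
>> gluster volume create <vol name> replica 2 \
>>     gluster-0-0:/export/brick1 gluster-0-1:/export/brick1 \
>>     gluster-0-1:/export/brick2 gluster-data:/export/brick2 \
>>     gluster-data:/export/brick3 gluster-0-0:/export/brick3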
>>
>> From the volume info you provided, the export directories are different
>> for all three nodes:
>>
>> Brick1: gluster-0-0:/mseas-data-0-0
>> Brick2: gluster-0-1:/mseas-data-0-1
>> Brick3: gluster-data:/data
>>
>>
>> Which node are you trying to mount to /data? If it is not the
>> gluster-data node, then it will fail if there is not a /data directory.
>> In this case, it is a good thing, since mounting to /data on gluster-0-0
>> or gluster-0-1 would not accomplish what you need.
>> To clarify, there is a distinction to be made between the export volume
>> mount and the gluster mount point. In this case, you are mounting the
>> brick.
>> In order to see all the files, you would need to mount the volume with
>> the native client, or NFS.
>> For the native client:
>> mount -t glusterfs gluster-data:/gdata /mnt/<gluster mount dir>
>> For NFS:
>> mount -t nfs -o vers=3 gluster-data:/gdata /mnt/<gluster mount dir>
>>
>>
>> Thanks,
>>
>> Eco
>> On 11/20/2012 09:42 AM, Steve Postma wrote:
>>
>>
>> I have a 3-node gluster cluster that had 3.1.4 uninstalled and 3.3.1
>> installed.
>>
>> I had some mounting issues yesterday, from a Rocks 6.2 install to the
>> cluster. I was able to overcome those issues and mount the export on my
>> node. Thanks to all for your help.
>>
>> However, I can only view the portion of the files that is stored directly on
>> one brick in the cluster. The other bricks do not seem to be replicating,
>> though gluster reports the volume as up.
>>
>> [root@mseas-data ~]# gluster volume info
>> Volume Name: gdata
>> Type: Distribute
>> Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
>> Status: Started
>> Number of Bricks: 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster-0-0:/mseas-data-0-0
>> Brick2: gluster-0-1:/mseas-data-0-1
>> Brick3: gluster-data:/data
>>
>>
>>
>> The brick we are attaching to has this in the fstab file:
>> /dev/mapper/the_raid-lv_data /data xfs quota,noauto 1 0
>>
>> but "mount -a" does not appear to do anything.
>> I have to run "mount -t xfs /dev/mapper/the_raid-lv_data /data"
>> manually to mount it.
>>
>>
>>
>> Any help with troubleshooting why we are only seeing data from 1 brick of 3 
>> would be appreciated,
>> Thanks,
>> Steve Postma
>>
>>
>>
>>
>>
>>
>>
>> ________________________________
>> From: Steve Postma
>> Sent: Monday, November 19, 2012 3:29 PM
>> To: [email protected]
>> Subject: cant mount gluster volume
>>
>> I am still unable to mount a new 3.3.1 glusterfs install. I have tried from 
>> one of the actual machines in the cluster to itself, as well as from various 
>> other clients. They all seem to be failing in the same part of the process.
>>

_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users
