Re: [Gluster-users] Gluster processes remaining after stopping glusterd

2017-10-18 Thread ismael mondiu
Thank you, Atin.

That's exactly what I wanted.




From: Atin Mukherjee
Sent: Wednesday, October 18, 2017 14:17
To: ismael mondiu
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster processes remaining after stopping glusterd



On Tue, Oct 17, 2017 at 3:28 PM, ismael mondiu 
> wrote:

Hi,

I noticed that when I stop my gluster server via the systemctl stop glusterd 
command, one glusterfs process is still up.

Which is the correct way to stop all gluster processes in my host?

Stopping the glusterd service doesn't bring down any service other than the 
glusterd process itself. We do have a script, stop-all-gluster-processes.sh 
(available under /usr/share/glusterfs/scripts/), which brings down all 
gluster-related services.
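
A minimal sketch of using it (the exact script path can vary by distribution
and package version, so adjust as needed):

# bring down bricks, the self-heal daemon and any other gluster daemons
sh /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
# confirm nothing gluster-related is left running
ps -ef | grep -i glu | grep -v grep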



That's what we see after running the command:

***

[root@xx ~]# ps -ef |  grep -i glu
root  1825 1  0 Oct05 ?00:05:07 /usr/sbin/glusterfsd -s 
dvihcasc0s --volfile-id advdemo.dvihcasc0s.opt-glusterfs-advdemo -p 
/var/lib/glusterd/vols/advdemo/run/dvihcasc0s-opt-glusterfs-advdemo.pid -S 
/var/run/gluster/b7cbd8cac308062ef1ad823a3abf54f5.socket --brick-name 
/opt/glusterfs/advdemo -l /var/log/glusterfs/bricks/opt-glusterfs-advdemo.log 
--xlator-option *-posix.glusterd-uuid=30865b77-4da5-4f40-9945-0bd2cf55ac2a 
--brick-port 49152 --xlator-option advdemo-server.listen-port=49152
root  2058 1  0 Oct05 ?00:00:28 /usr/sbin/glusterfs -s 
localhost --volfile-id gluster/glustershd -p 
/var/lib/glusterd/glustershd/run/glustershd.pid -l 
/var/log/glusterfs/glustershd.log -S 
/var/run/gluster/fe4d4b13937be47e8fef6fd69be60899.socket --xlator-option 
*replicate*.node-uuid=30865b77-4da5-4f40-9945-0bd2cf55ac2a
root 40044 39906  0 11:55 pts/000:00:00 grep --color=auto -i glu

*


Thanks

Ismael


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster + synology

2017-10-18 Thread Vijay Bellur
On Wed, Oct 18, 2017 at 12:45 PM, Alex Chekholko 
wrote:

> In theory, you can run GlusterFS on a Synology box, as it is "just a Linux
> box".  In practice, you might be the first person to ever try it.
>


Not sure if this would be the first attempt. Synology seems to bundle
glusterfs in some form [1].

Regards,
Vijay

[1]
https://sourceforge.net/projects/dsgpl/files/Packages/DSM%205.2%20Package%20Release/Glusterfs-Mgmt-x64.tgz

On Tue, Oct 17, 2017 at 8:45 PM, Ben Mabey 
> wrote:
>
>> Hi,
>> Does anyone have any experience using Synology NAS servers as bricks
>> in a gluster setup?  The ops team at my work prefers Synology since that is
>> what they are already using, and because of some of the nice out-of-the-box
>> admin features. From what I can tell Synology runs a custom Linux flavor, so
>> it should be possible to compile gluster on it. Any first-hand experience
>> with it?
>>
>> Thanks,
>> Ben
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster command not respond

2017-10-18 Thread Alex K
I would first check that you have the same version on both servers.
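
A rough way to compare, run on each node (the package query depends on your
distribution, so adjust as needed):

glusterd --version | head -1
gluster --version | head -1
rpm -qa | grep -i glusterfs     # or: dpkg -l | grep -i glusterfs on Debian/Ubuntu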

On Oct 18, 2017 6:46 PM, "Ngo Leung"  wrote:

> Dear Sir / Madam
>
>
>
> I have been using GlusterFS on two nodes in distributed mode, but recently I
> cannot use any of the gluster commands on one of the nodes: no gluster command
> responds (e.g. volume info or pool list), and I get the error below in cli.log.
> Can anyone help me? Many thanks.
>
>
>
> Here is the log error:
>
> [cli.c:759:main] 0-cli: Started running gluster with version 3.10.5
>
> [2017-10-16 04:46:17.464292] E [mem-pool.c:577:mem_get0]
> (-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(data_from_dynstr+0x15)
> [0x7f1858b7e1f5] 
> -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(get_new_data+0x1c)
> [0x7f1858b7c9ac] -->/usr/lib/x86_64-so.0(mem_get0+0x88) [0x7f1858baa928]
> ) 0-mem-pool: invalid argument [Invalid argument]
>
>
>
> Thanks
>
> Ngo Leung
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] warning spam in the logs after tiering experiment

2017-10-18 Thread Dmitri Chebotarov
Hi Alastair

I apologize that my email is off topic, but I also have strange issues on a
volume, and it happens to be a volume that had a hot tier attached to it in the past.

Can you do me a favor and test something on the volume where you used to
have the hot tier?

I'm working with RH on an issue where the glusterfs FUSE client crashes while
doing 'grep' on files.

Here are the steps:

On the volume where you had hot-tier enabled:

# generate dataset for testing:

mkdir uuid-set; cd uuid-set
for i in $(seq 40); do uuidgen | awk {'print "mkdir "$1"; echo test >> "$1"/"$1".meta"'}; done | sh

# run grep

grep test */*.meta

# Running 'grep' may crash the glusterfs client. If it doesn't crash, do the following:

rsync -varP uuid-set/ uuid-set2
grep test uuid-set2/*/*.meta

Thank you.

On Wed, Oct 18, 2017 at 1:27 PM, Alastair Neil 
wrote:

> forgot to mention Gluster version 3.10.6
>
> On 18 October 2017 at 13:26, Alastair Neil  wrote:
>
>> A short while ago I experimented with tiering on one of my volumes.  I
>> decided it was not working out, so I removed the tier.  I now have spam in
>> the glusterd.log every 7 seconds:
>>
>> [2017-10-18 17:17:29.578327] W [socket.c:3207:socket_connect] 0-tierd:
>> Ignore failed connection attempt on 
>> /var/run/gluster/2e3df1c501d0a19e5076304179d1e43e.socket,
>> (No such file or directory)
>> [2017-10-18 17:17:36.579276] W [socket.c:3207:socket_connect] 0-tierd:
>> Ignore failed connection attempt on 
>> /var/run/gluster/2e3df1c501d0a19e5076304179d1e43e.socket,
>> (No such file or directory)
>> [2017-10-18 17:17:43.580238] W [socket.c:3207:socket_connect] 0-tierd:
>> Ignore failed connection attempt on 
>> /var/run/gluster/2e3df1c501d0a19e5076304179d1e43e.socket,
>> (No such file or directory)
>> [2017-10-18 17:17:50.581185] W [socket.c:3207:socket_connect] 0-tierd:
>> Ignore failed connection attempt on 
>> /var/run/gluster/2e3df1c501d0a19e5076304179d1e43e.socket,
>> (No such file or directory)
>> [2017-10-18 17:17:57.582136] W [socket.c:3207:socket_connect] 0-tierd:
>> Ignore failed connection attempt on 
>> /var/run/gluster/2e3df1c501d0a19e5076304179d1e43e.socket,
>> (No such file or directory)
>> [2017-10-18 17:18:04.583148] W [socket.c:3207:socket_connect] 0-tierd:
>> Ignore failed connection attempt on 
>> /var/run/gluster/2e3df1c501d0a19e5076304179d1e43e.socket,
>> (No such file or directory)
>>
>>
>> gluster volume status is showing the tier daemon status on all the nodes
>> as 'N', but lists the PIDs of nonexistent processes.
>>
>> Just wondering if I messed up removing the tier?
>>
>> -Alastair
>>
>>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] warning spam in the logs after tiering experiment

2017-10-18 Thread Alastair Neil
forgot to mention Gluster version 3.10.6

On 18 October 2017 at 13:26, Alastair Neil  wrote:

> A short while ago I experimented with tiering on one of my volumes.  I
> decided it was not working out, so I removed the tier.  I now have spam in
> the glusterd.log every 7 seconds:
>
> [2017-10-18 17:17:29.578327] W [socket.c:3207:socket_connect] 0-tierd:
> Ignore failed connection attempt on /var/run/gluster/
> 2e3df1c501d0a19e5076304179d1e43e.socket, (No such file or directory)
> [2017-10-18 17:17:36.579276] W [socket.c:3207:socket_connect] 0-tierd:
> Ignore failed connection attempt on /var/run/gluster/
> 2e3df1c501d0a19e5076304179d1e43e.socket, (No such file or directory)
> [2017-10-18 17:17:43.580238] W [socket.c:3207:socket_connect] 0-tierd:
> Ignore failed connection attempt on /var/run/gluster/
> 2e3df1c501d0a19e5076304179d1e43e.socket, (No such file or directory)
> [2017-10-18 17:17:50.581185] W [socket.c:3207:socket_connect] 0-tierd:
> Ignore failed connection attempt on /var/run/gluster/
> 2e3df1c501d0a19e5076304179d1e43e.socket, (No such file or directory)
> [2017-10-18 17:17:57.582136] W [socket.c:3207:socket_connect] 0-tierd:
> Ignore failed connection attempt on /var/run/gluster/
> 2e3df1c501d0a19e5076304179d1e43e.socket, (No such file or directory)
> [2017-10-18 17:18:04.583148] W [socket.c:3207:socket_connect] 0-tierd:
> Ignore failed connection attempt on /var/run/gluster/
> 2e3df1c501d0a19e5076304179d1e43e.socket, (No such file or directory)
>
>
> gluster volume status is showing the tier daemon status on all the nodes
> as 'N', but lists the PIDs of nonexistent processes.
>
> Just wondering if I messed up removing the tier?
>
> -Alastair
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] warning spam in the logs after tiering experiment

2017-10-18 Thread Alastair Neil
A short while ago I experimented with tiering on one of my volumes.  I
decided it was not working out, so I removed the tier.  I now have spam in
the glusterd.log every 7 seconds:

[2017-10-18 17:17:29.578327] W [socket.c:3207:socket_connect] 0-tierd:
Ignore failed connection attempt on
/var/run/gluster/2e3df1c501d0a19e5076304179d1e43e.socket, (No such file or
directory)
[2017-10-18 17:17:36.579276] W [socket.c:3207:socket_connect] 0-tierd:
Ignore failed connection attempt on
/var/run/gluster/2e3df1c501d0a19e5076304179d1e43e.socket, (No such file or
directory)
[2017-10-18 17:17:43.580238] W [socket.c:3207:socket_connect] 0-tierd:
Ignore failed connection attempt on
/var/run/gluster/2e3df1c501d0a19e5076304179d1e43e.socket, (No such file or
directory)
[2017-10-18 17:17:50.581185] W [socket.c:3207:socket_connect] 0-tierd:
Ignore failed connection attempt on
/var/run/gluster/2e3df1c501d0a19e5076304179d1e43e.socket, (No such file or
directory)
[2017-10-18 17:17:57.582136] W [socket.c:3207:socket_connect] 0-tierd:
Ignore failed connection attempt on
/var/run/gluster/2e3df1c501d0a19e5076304179d1e43e.socket, (No such file or
directory)
[2017-10-18 17:18:04.583148] W [socket.c:3207:socket_connect] 0-tierd:
Ignore failed connection attempt on
/var/run/gluster/2e3df1c501d0a19e5076304179d1e43e.socket, (No such file or
directory)


gluster volume status is showing the tier daemon status on all the nodes as
'N', but lists the PIDs of nonexistent processes.

Just wondering if I messed up removing the tier?
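
One rough way to confirm the leftover tierd references are stale (the socket
path is taken from the log above; substitute your volume name and the PID
reported by volume status):

ls -l /var/run/gluster/2e3df1c501d0a19e5076304179d1e43e.socket   # likely missing
gluster volume status <volname> | grep -i tier                   # note the PID column
ps -p <pid-from-status>                                          # no output => stale PID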

-Alastair
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster + synology

2017-10-18 Thread Alex Chekholko
In theory, you can run GlusterFS on a Synology box, as it is "just a Linux
box".  In practice, you might be the first person to ever try it.

On Tue, Oct 17, 2017 at 8:45 PM, Ben Mabey 
wrote:

> Hi,
> Does anyone have any experience using Synology NAS servers as bricks in
> a gluster setup?  The ops team at my work prefers Synology since that is
> what they are already using, and because of some of the nice out-of-the-box
> admin features. From what I can tell Synology runs a custom Linux flavor, so
> it should be possible to compile gluster on it. Any first-hand experience
> with it?
>
> Thanks,
> Ben
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Gluster command not respond

2017-10-18 Thread Ngo Leung
Dear Sir / Madam



I have been using GlusterFS on two nodes in distributed mode, but recently I
cannot use any of the gluster commands on one of the nodes: no gluster command
responds (e.g. volume info or pool list), and I get the error below in cli.log.
Can anyone help me? Many thanks.



Here is the log error:

[cli.c:759:main] 0-cli: Started running gluster with version 3.10.5

[2017-10-16 04:46:17.464292] E [mem-pool.c:577:mem_get0]
(-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(data_from_dynstr+0x15)
[0x7f1858b7e1f5]
-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(get_new_data+0x1c)
[0x7f1858b7c9ac] -->/usr/lib/x86_64-so.0(mem_get0+0x88) [0x7f1858baa928] )
0-mem-pool: invalid argument [Invalid argument]



Thanks

Ngo Leung
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] ObjectStore on top of gluster volume

2017-10-18 Thread David Spisla
Hello Gluster Community,

Does anybody know how much performance (read/write bandwidth) I will lose if I 
use an object store on top of a gluster volume?

Any experiences?

Regards
David Spisla
Software Developer
david.spi...@iternity.com
www.iTernity.com
Tel:   +49 761-590 34 841
iTernity GmbH
Heinrich-von-Stephan-Str. 21
79100 Freiburg - Germany
---
You can reach our technical support at +49 761-387 36 66
---
Managing Director: Ralf Steinemann
Registered with the district court of Freiburg: HRB No. 701332
VAT ID: de-24266431

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gfid entries in volume heal info that do not heal

2017-10-18 Thread Matt Waymack
It looks like these entries don't have a corresponding file path; they exist 
only in .glusterfs and appear to be orphaned:

[root@tpc-cent-glus2-081017 ~]# find /exp/b4/gv0 -samefile 
/exp/b4/gv0/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3
/exp/b4/gv0/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3

[root@tpc-cent-glus2-081017 ~]# find /exp/b4/gv0 -samefile 
/exp/b4/gv0/.glusterfs/6f/0a/6f0a0549-8669-46de-8823-d6677fdca8e3
/exp/b4/gv0/.glusterfs/6f/0a/6f0a0549-8669-46de-8823-d6677fdca8e3

[root@tpc-cent-glus1-081017 ~]# find /exp/b1/gv0 -samefile 
/exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
/exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2

[root@tpc-cent-glus1-081017 ~]# find /exp/b4/gv0 -samefile 
/exp/b4/gv0/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3
/exp/b4/gv0/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3

Occasionally I would get these gfid entries in the heal info output, but I would 
just run tree against the volume.  After that the files would trigger 
self-heal, and the gfid entries would be updated with their volume paths.  That 
does not seem to be the case here, so I feel that all of these entries are 
orphaned.
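
As an additional sanity check, assuming these are regular files (where the
.glusterfs entry is normally just a hard link to the data file on the brick),
a link count of 1 on the gfid path also points to an orphan:

stat -c '%h %F %n' /exp/b4/gv0/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3
# "1 regular file ..." would mean no other path on this brick links to that gfid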

From: Karthik Subrahmanya [mailto:ksubr...@redhat.com] 
Sent: Wednesday, October 18, 2017 4:34 AM
To: Matt Waymack 
Cc: gluster-users 
Subject: Re: [Gluster-users] gfid entries in volume heal info that do not heal

Hey Matt,
From the xattr output, it looks like the files are not present on the arbiter 
brick and need healing, but the parent does not have the pending markers set 
for those entries.
The workaround for this is to do a lookup, from the mount, on the file that needs 
healing; this will create the entry on the arbiter brick. Then run the volume heal 
to do the healing.
Follow these steps to resolve the issue (first try this on one file and check 
whether it gets healed. If it gets healed, then do this for all the remaining 
files):
1. Get the file path for the gfids you got from heal info output.
    find  -samefile //
2. Do ls/stat on the file from mount.
3. Run volume heal.
4. Check the heal info output to see whether the file got healed.
If one file gets healed, then do steps 1 & 2 for the rest of the files and do 
steps 3 & 4 once at the end.
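
A rough shell sketch of steps 1-4, using one of the gfids from this thread (the
mount point is only a placeholder for wherever the volume is FUSE-mounted):

BRICK=/exp/b1/gv0
GFID=108694db-c039-4b7c-bd3d-ad6a15d811a2
MNT=/mnt/gv0                                  # placeholder FUSE mount of gv0
# 1. resolve the gfid to a real path on the brick
find "$BRICK" -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
# 2. look up the same path (relative to the brick root) from the mount
stat "$MNT/<path-printed-above>"
# 3. trigger the heal and 4. re-check
gluster volume heal gv0
gluster volume heal gv0 info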
Let me know if that resolves the issue.

Thanks & Regards,
Karthik

On Tue, Oct 17, 2017 at 8:04 PM, Matt Waymack  wrote:
Attached is the heal log for the volume as well as the shd log.

>> Run these commands on all the bricks of the replica pair to get the attrs 
>> set on the backend.

[root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . 
/exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
getfattr: Removing leading '/' from absolute path names
# file: exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.afr.dirty=0x
trusted.afr.gv0-client-2=0x0001
trusted.gfid=0x108694dbc0394b7cbd3dad6a15d811a2
trusted.gfid2path.9a2f5ada22eb9c45=0x38633262623330322d323466332d346463622d393630322d3839356136396461363131662f435f564f4c2d623030312d693637342d63642d63772e6d6435

[root@tpc-cent-glus2-081017 ~]# getfattr -d -e hex -m . 
/exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
getfattr: Removing leading '/' from absolute path names
# file: exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.afr.dirty=0x
trusted.afr.gv0-client-2=0x0001
trusted.gfid=0x108694dbc0394b7cbd3dad6a15d811a2
trusted.gfid2path.9a2f5ada22eb9c45=0x38633262623330322d323466332d346463622d393630322d3839356136396461363131662f435f564f4c2d623030312d693637342d63642d63772e6d6435

[root@tpc-arbiter1-100617 ~]# getfattr -d -e hex -m . 
/exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
getfattr: /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2: No 
such file or directory


[root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . 
/exp/b4/gv0/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3
getfattr: Removing leading '/' from absolute path names
# file: exp/b4/gv0/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.afr.dirty=0x
trusted.afr.gv0-client-11=0x0001
trusted.gfid=0xe0c56bf78bfe46cabde1e46b92d33df3
trusted.gfid2path.be3ba24c3ef95ff2=0x63323366353834652d353566652d343033382d393131622d3866373063656334616136662f435f564f4c2d623030332d69313331342d63642d636d2d63722e6d6435

[root@tpc-cent-glus2-081017 ~]# getfattr -d -e hex -m . 
/exp/b4/gv0/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3
getfattr: Removing leading '/' from 

[Gluster-users] Mounting of Gluster volumes in Kubernetes

2017-10-18 Thread Travis Truman
Hi all,

Wondered if there are others in the community using GlusterFS on Google
Compute Engine and Kubernetes via Google Container Engine together.

We're running glusterfs 3.7.6 on Ubuntu Xenial across 3 GCE nodes. We have
a single replicated volume of ~800GB that our pods running in Kubernetes
are mounting.

We've observed a pattern of soft lockups on our Kubernetes nodes that mount
our Gluster volume. These nodes seem to be those that have the highest rate
of reads/writes to the Gluster volume.

An example looks like:

[495498.074071] Kernel panic - not syncing: softlockup: hung tasks
[495498.080108] CPU: 0 PID: 10166 Comm: nginx Tainted: G L
4.4.64+ #1
[495498.087524] Hardware name: Google Google Compute Engine/Google Compute
Engine, BIOS Google 01/01/2011
[495498.096947]   8803ffc03e20 a1317394
a1713537
[495498.105113]  8803ffc03eb0 8803ffc03ea0 a1139bbc
0008
[495498.113187]  8803ffc03eb0 8803ffc03e48 009c

[495498.121488] Call Trace:
[495498.124131][] dump_stack+0x63/0x8f
[495498.130207]  [] panic+0xc6/0x1ec
[495498.135208]  [] watchdog_timer_fn+0x1e7/0x1f0
[495498.141327]  [] ? watchdog+0xa0/0xa0
[495498.146668]  [] __hrtimer_run_queues+0xff/0x260
[495498.152959]  [] hrtimer_interrupt+0xac/0x1b0
[495498.158993]  [] smp_apic_timer_interrupt+0x68/0xa0
[495498.167232]  [] apic_timer_interrupt+0x82/0x90
[495498.173432][] ?
prepare_to_wait_exclusive+0x80/0x80
[495498.182557]  [] ? 0xc02e331f
[495498.187893]  [] ? prepare_to_wait_event+0xf0/0xf0
[495498.194357]  [] 0xc02e3679
[495498.199519]  [] fuse_simple_request+0x11a/0x1e0 [fuse]
[495498.206415]  [] fuse_dev_cleanup+0xa81/0x1ef0 [fuse]
[495498.213151]  [] lookup_fast+0x249/0x330
[495498.218748]  [] walk_component+0x3d/0x500

While the particular issue seems more related to the FUSE client talking to
Gluster, we're wondering if others have seen this type of behavior, whether
there are particular troubleshooting/tuning steps we might be advised to take
on the Gluster side of the problem, and if the community has any general tips
around using Gluster and Kubernetes together.

Thanks in advance,
Travis Truman
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster processes remaining after stopping glusterd

2017-10-18 Thread Atin Mukherjee
On Tue, Oct 17, 2017 at 3:28 PM, ismael mondiu  wrote:

> Hi,
>
> I noticed that when I stop my gluster server via the systemctl stop glusterd
> command, one glusterfs process is still up.
>
> Which is the correct way to stop all gluster processes in my host?
>

Stopping the glusterd service doesn't bring down any service other than the
glusterd process itself. We do have a script, stop-all-gluster-processes.sh
(available under /usr/share/glusterfs/scripts/), which brings down all
gluster-related services.

>
>
> That's what we see after running the command:
>
> 
> ***
>
> [root@xx ~]# ps -ef |  grep -i glu
> root  1825 1  0 Oct05 ?00:05:07 /usr/sbin/glusterfsd -s
> dvihcasc0s --volfile-id advdemo.dvihcasc0s.opt-glusterfs-advdemo -p
> /var/lib/glusterd/vols/advdemo/run/dvihcasc0s-opt-glusterfs-advdemo.pid
> -S /var/run/gluster/b7cbd8cac308062ef1ad823a3abf54f5.socket --brick-name
> /opt/glusterfs/advdemo -l /var/log/glusterfs/bricks/opt-glusterfs-advdemo.log
> --xlator-option *-posix.glusterd-uuid=30865b77-4da5-4f40-9945-0bd2cf55ac2a
> --brick-port 49152 --xlator-option advdemo-server.listen-port=49152
> root  2058 1  0 Oct05 ?00:00:28 /usr/sbin/glusterfs -s
> localhost --volfile-id gluster/glustershd -p 
> /var/lib/glusterd/glustershd/run/glustershd.pid
> -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/
> fe4d4b13937be47e8fef6fd69be60899.socket --xlator-option
> *replicate*.node-uuid=30865b77-4da5-4f40-9945-0bd2cf55ac2a
> root 40044 39906  0 11:55 pts/000:00:00 grep --color=auto -i glu
>
> 
> *
>
>
> Thanks
>
> Ismael
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gfid entries in volume heal info that do not heal

2017-10-18 Thread Karthik Subrahmanya
Hey Matt,

From the xattr output, it looks like the files are not present on the
arbiter brick and need healing, but the parent does not have the
pending markers set for those entries.
The workaround for this is to do a lookup, from the mount, on the file that
needs healing; this will create the entry on the arbiter brick. Then run
the volume heal to do the healing.
Follow these steps to resolve the issue (first try this on one file and
check whether it gets healed. If it gets healed, then do this for all the
remaining files):
1. Get the file path for the gfids you got from heal info output.
find  -samefile //
2. Do ls/stat on the file from mount.
3. Run volume heal.
4. Check the heal info output to see whether the file got healed.

If one file gets healed, then do steps 1 & 2 for the rest of the files and
do steps 3 & 4 once at the end.
Let me know if that resolves the issue.

Thanks & Regards,
Karthik

On Tue, Oct 17, 2017 at 8:04 PM, Matt Waymack  wrote:

> Attached is the heal log for the volume as well as the shd log.
>
> >> Run these commands on all the bricks of the replica pair to get the
> attrs set on the backend.
>
> [root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m .
> /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
> getfattr: Removing leading '/' from absolute path names
> # file: exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
> security.selinux=0x73797374656d5f753a6f626a6563
> 745f723a756e6c6162656c65645f743a733000
> trusted.afr.dirty=0x
> trusted.afr.gv0-client-2=0x0001
> trusted.gfid=0x108694dbc0394b7cbd3dad6a15d811a2
> trusted.gfid2path.9a2f5ada22eb9c45=0x38633262623330322d323466332d
> 346463622d393630322d3839356136396461363131662f435f564f4c2d62
> 3030312d693637342d63642d63772e6d6435
>
> [root@tpc-cent-glus2-081017 ~]# getfattr -d -e hex -m .
> /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
> getfattr: Removing leading '/' from absolute path names
> # file: exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
> security.selinux=0x73797374656d5f753a6f626a6563
> 745f723a756e6c6162656c65645f743a733000
> trusted.afr.dirty=0x
> trusted.afr.gv0-client-2=0x0001
> trusted.gfid=0x108694dbc0394b7cbd3dad6a15d811a2
> trusted.gfid2path.9a2f5ada22eb9c45=0x38633262623330322d323466332d
> 346463622d393630322d3839356136396461363131662f435f564f4c2d62
> 3030312d693637342d63642d63772e6d6435
>
> [root@tpc-arbiter1-100617 ~]# getfattr -d -e hex -m .
> /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
> getfattr: /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2:
> No such file or directory
>
>
> [root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m .
> /exp/b4/gv0/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3
> getfattr: Removing leading '/' from absolute path names
> # file: exp/b4/gv0/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3
> security.selinux=0x73797374656d5f753a6f626a6563
> 745f723a756e6c6162656c65645f743a733000
> trusted.afr.dirty=0x
> trusted.afr.gv0-client-11=0x0001
> trusted.gfid=0xe0c56bf78bfe46cabde1e46b92d33df3
> trusted.gfid2path.be3ba24c3ef95ff2=0x63323366353834652d353566652d
> 343033382d393131622d3866373063656334616136662f435f564f4c2d62
> 3030332d69313331342d63642d636d2d63722e6d6435
>
> [root@tpc-cent-glus2-081017 ~]# getfattr -d -e hex -m .
> /exp/b4/gv0/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3
> getfattr: Removing leading '/' from absolute path names
> # file: exp/b4/gv0/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3
> security.selinux=0x73797374656d5f753a6f626a6563
> 745f723a756e6c6162656c65645f743a733000
> trusted.afr.dirty=0x
> trusted.afr.gv0-client-11=0x0001
> trusted.gfid=0xe0c56bf78bfe46cabde1e46b92d33df3
> trusted.gfid2path.be3ba24c3ef95ff2=0x63323366353834652d353566652d
> 343033382d393131622d3866373063656334616136662f435f564f4c2d62
> 3030332d69313331342d63642d636d2d63722e6d6435
>
> [root@tpc-arbiter1-100617 ~]# getfattr -d -e hex -m .
> /exp/b4/gv0/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3
> getfattr: /exp/b4/gv0/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3:
> No such file or directory
>
> >> And the output of "gluster volume heal  info split-brain"
>
> [root@tpc-cent-glus1-081017 ~]# gluster volume heal gv0 info split-brain
> Brick tpc-cent-glus1-081017:/exp/b1/gv0
> Status: Connected
> Number of entries in split-brain: 0
>
> Brick tpc-cent-glus2-081017:/exp/b1/gv0
> Status: Connected
> Number of entries in split-brain: 0
>
> Brick tpc-arbiter1-100617:/exp/b1/gv0
> Status: Connected
> Number of entries in split-brain: 0
>
> Brick tpc-cent-glus1-081017:/exp/b2/gv0
> Status: Connected
> Number of entries in split-brain: 0
>
> Brick tpc-cent-glus2-081017:/exp/b2/gv0
> Status: Connected
> Number of entries in