[Gluster-users] Bitrot strange behavior

2018-04-16 Thread Cedric Lemarchand
Hello, I am playing around with the bitrot feature and have some questions: 1. when a file is created, the "trusted.bit-rot.signature" attribute seems to be created only approximately 120 seconds after the file's creation (the cluster is idle and there is only one file living on it). Why? Is there a way
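The delay described above can be observed with an xattr dump on the brick backend. This is a sketch: the brick path is an assumption, and the ~120 second gap matches the signer's default expiry window after the last file descriptor on the file is closed.

```shell
# Run this on the brick's backend path, not on the FUSE mount
getfattr -d -m . -e hex /data/brick1/gv0/testfile

# trusted.bit-rot.version appears right away; trusted.bit-rot.signature is
# added by the bitd signer only after the file has been idle for ~120 seconds
```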

[Gluster-users] lstat & readlink calls during glusterfsd process startup

2018-04-16 Thread Serkan Çoban
Hi all, I am on gluster 3.10.5 with one EC volume 16+4. One of the machines went down the previous night and I just fixed it and powered it on. When the glusterfsd processes started they consumed all CPU on the server. strace shows every process walks the bricks directory and makes lstat & readlink calls.

[Gluster-users] Fwd: lstat & readlink calls during glusterfsd process startup

2018-04-16 Thread Serkan Çoban
This is an example from one of the glusterfsd processes, strace -f -c -p pid_of_glusterfsd:
%time seconds usecs/call calls errors syscall
68 36.2 2131 17002 4758 futex
137 5783 1206 epoll_wait
115.4

[Gluster-users] Gluster FUSE mount sometimes reports that files do not exist until ls is performed on parent directory

2018-04-16 Thread Niels Hendriks
Hi, We have a 3-node gluster setup where gluster is both the server and the client. Every few days we have some $random file or directory that does not exist according to the FUSE mountpoint. When we try to access the file (stat, cat, etc...) the filesystem reports that the file/directory does

Re: [Gluster-users] Gluster FUSE mount sometimes reports that files do not exist until ls is performed on parent directory

2018-04-16 Thread Raghavendra Gowdappa
On Mon, Apr 16, 2018 at 1:54 PM, Niels Hendriks wrote: > Hi, > > We have a 3-node gluster setup where gluster is both the server and the > client. > Every few days we have some $random file or directory that does not exist > according to the FUSE mountpoint. When we try to

Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-16 Thread Nithya Balachandran
Hi Artem, Was the volume size correct before the bricks were expanded? This sounds like [1] but that should have been fixed in 4.0.0. Can you let us know the values of shared-brick-count in the files in /var/lib/glusterd/vols/dev_apkmirror_data/ ? [1]

Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-16 Thread Artem Russakovskii
pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol 3:option shared-brick-count 3 dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol 3:option shared-brick-count 3
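Without ack installed, the same check can be done with plain grep. This is a sketch using the volume name from this thread; shared-brick-count should be 1 when each brick sits on its own filesystem.

```shell
grep -n 'shared-brick-count' /var/lib/glusterd/vols/dev_apkmirror_data/*.vol

# A value of 3 here makes glusterd divide each brick's reported size by 3,
# which is why the volume shows only a fraction of the real capacity
```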

Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-16 Thread Artem Russakovskii
To clarify, I was on 3.13.2 previously, recently updated to 4.0.1, and the bug seems to persist in 4.0.1. Sincerely, Artem -- Founder, Android Police, APK Mirror, Illogical Robot LLC beerpla.net | +ArtemRussakovskii

Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-16 Thread Nithya Balachandran
Ok, it looks like the same problem. @Amar, this fix is supposed to be in 4.0.1. Is it possible to regenerate the volfiles to fix this? Regards, Nithya On 17 April 2018 at 09:57, Artem Russakovskii wrote: > pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack

Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-16 Thread Nithya Balachandran
That might be the reason. Perhaps the volfiles were not regenerated after upgrading to the version with the fix. There is a workaround detailed in [2] for the time being (you will need to copy the shell script into the correct directory for your Gluster release). [2]

Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-16 Thread Amar Tumballi
On Tue, Apr 17, 2018 at 9:59 AM, Nithya Balachandran wrote: > Ok, it looks like the same problem. > > > @Amar, this fix is supposed to be in 4.0.1. Is it possible to regenerate > the volfiles to fix this? > Yes, regenerating volfiles should fix it. Should we try a volume

Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-16 Thread Artem Russakovskii
I just remembered that I didn't run https://docs.gluster.org/en/v3/Upgrade-Guide/op_version/ for this test volume/box like I did for the main production gluster, and one of these ops, either the heal or the op-version bump, resolved the issue. I'm now seeing:
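The op-version procedure from that guide boils down to a few commands. This is a sketch: the target number 40100 corresponds to 4.0.1 and is an assumption here, so confirm it against cluster.max-op-version before setting it.

```shell
# Current operating version of the cluster
gluster volume get all cluster.op-version

# Highest op-version the installed binaries support
gluster volume get all cluster.max-op-version

# After all nodes are upgraded, raise the op-version
# (40100 = 4.0.1; verify against max-op-version first)
gluster volume set all cluster.op-version 40100
```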

Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-16 Thread Nithya Balachandran
On 17 April 2018 at 10:03, Artem Russakovskii wrote: > I just remembered that I didn't run https://docs.gluster.org/ > en/v3/Upgrade-Guide/op_version/ for this test volume/box like I did for > the main production gluster, and one of these ops - either heal or the >

Re: [Gluster-users] Getting glusterfs to expand volume size to brick size

2018-04-16 Thread Artem Russakovskii
Hi Nithya, I'm on Gluster 4.0.1. I don't think the bricks were smaller before; if they were, they were maybe 20GB (Linode's minimum), then I extended them to 25GB, resized with resize2fs as instructed, and have rebooted many times since. Yet gluster refuses to see the full disk size.

Re: [Gluster-users] Gluster FUSE mount sometimes reports that files do not exist until ls is performed on parent directory

2018-04-16 Thread Nithya Balachandran
On 16 April 2018 at 14:07, Raghavendra Gowdappa wrote: > > > On Mon, Apr 16, 2018 at 1:54 PM, Niels Hendriks wrote: > >> Hi, >> >> We have a 3-node gluster setup where gluster is both the server and the >> client. >> Every few days we have some $random

Re: [Gluster-users] Gluster FUSE mount sometimes reports that files do not exist until ls is performed on parent directory

2018-04-16 Thread Nithya Balachandran
Hi Niels, As this is a pure replicate volume, lookup-optimize is not going to be much use so you can turn it off if you wish. Do you see any error messages in the FUSE mount logs when this happens? If it happens again, a tcpdump of the fuse mount would help. Regards, Nithya On 16 April 2018
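The two suggestions above can be sketched as follows. The volume name is a placeholder, and the capture ports are assumptions: 24007 is the glusterd management port, and bricks usually listen on 49152 and up.

```shell
# Turn off lookup-optimize on a pure replicate volume (placeholder name)
gluster volume set <volname> cluster.lookup-optimize off

# Capture client <-> brick traffic while reproducing the missing-file lookup;
# full packets (-s 0) so the gluster RPC payloads survive in the pcap
tcpdump -i any -s 0 -w /tmp/fuse-enoent.pcap \
    'tcp port 24007 or tcp portrange 49152-49664'
```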