Re: [Gluster-users] 3.8.3 Bitrot signature process

2016-09-21 Thread Amudhan P
Hi Kotresh, 2280 is a brick process; I have not tried with a dist-rep volume. I have not seen any fd in the bitd process on any of the nodes, and the bitd process CPU usage is always 0%, occasionally rising to 0.3%. Thanks, Amudhan On Thursday, September 22, 2016, Kotresh Hiremath Ravishankar <
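A hedged way to keep an eye on just the signer's CPU on one node, assuming pgrep can match the bitd daemon's command line:
# top -p "$(pgrep -d, -f bitd)"
pgrep -d, joins multiple matching PIDs with commas, which is the list format top -p expects.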

Re: [Gluster-users] 3.8.3 Bitrot signature process

2016-09-21 Thread Kotresh Hiremath Ravishankar
Hi Amudhan, No, the bitrot signer is a separate process by itself and is not part of the brick process. I believe the process 2280 is a brick process? Did you check with a dist-rep volume? Is the same behavior observed there as well? We need to figure out why the brick process is holding that fd for
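A hedged way to tell the two processes apart, assuming bitrot is enabled so its daemons appear in the status output (the volume name is a placeholder):
# gluster volume status myvol
# ps aux | grep bitd
The first should list the Bitrot and Scrubber daemons with their PIDs alongside the brick processes; the second matches the signer's command line directly.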

Re: [Gluster-users] gluster 3.7 healing errors (no data available, buf->ia_gfid is null)

2016-09-21 Thread Ravishankar N
On 09/21/2016 10:54 PM, Pasi Kärkkäinen wrote:
Let's see.
# getfattr -m . -d -e hex /bricks/vol1/brick1/foo
getfattr: Removing leading '/' from absolute path names
# file: bricks/vol1/brick1/foo
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000
So
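Note that only the SELinux label comes back, with no trusted.gfid on the brick entry, which is consistent with the buf->ia_gfid is null errors in the subject. A hedged way to query that xattr directly on the same file:
# getfattr -n trusted.gfid -e hex /bricks/vol1/brick1/foo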

Re: [Gluster-users] incorrect usage value on a directory

2016-09-21 Thread Sergei Gerasenko
Great! Thank you, Manikandan. > On Sep 15, 2016, at 6:23 AM, Raghavendra Gowdappa wrote: > > Hi Sergei, > > You can set the marker "dirty" xattr using the key trusted.glusterfs.quota.dirty. You > have two choices: > > 1. Setting through a gluster mount. This will set the key on
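A minimal sketch of choice 1, assuming a FUSE mount at /mnt/vol and assuming the marker's usual one-byte flag encoding for the value; verify both against your version before running it:
# setfattr -n trusted.glusterfs.quota.dirty -v 0x3100 /mnt/vol/dir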

[Gluster-users] write performance with NIC bonding

2016-09-21 Thread James Ching
Hi, I'm using gluster 3.7.5 and I'm trying to get NIC bonding working properly with the gluster protocol. I've bonded the NICs using round robin because I also bond them at the switch level with link aggregation. I've used this type of bonding without a problem with my other applications
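One general caveat, hedged as standard Linux bonding behavior rather than anything gluster-specific: round robin (balance-rr) on the host combined with LACP link aggregation on the switch is a mismatched pairing, since LACP expects 802.3ad mode on both ends, and 802.3ad hashes per flow, so a single brick connection tops out at one NIC's bandwidth. A sketch of the matching host-side setup, with placeholder interface names:
# modprobe bonding mode=802.3ad miimon=100
# ip link set eth0 down && ip link set eth0 master bond0
# ip link set eth1 down && ip link set eth1 master bond0
# ip link set bond0 up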

Re: [Gluster-users] EC clarification

2016-09-21 Thread Jeff Darcy
> 2016-09-21 20:56 GMT+02:00 Serkan Çoban : > > Then you can use 8+3 with 11 servers. > > Stripe size won't be good: 512*(8-3) = 2560 and not 2048 (or multiple) It's not really 512*(8+3) though. Even though there are 11 fragments, they only contain 8 fragments' worth of
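Spelled out, hedged as the usual disperse layout where only the data fragments define the stripe: for 8+3 the stripe size is 512 bytes x 8 data fragments = 4096 bytes, a clean multiple of 2048, not 512*(8-3) = 2560.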

Re: [Gluster-users] EC clarification

2016-09-21 Thread Gandalf Corvotempesta
2016-09-21 20:56 GMT+02:00 Serkan Çoban : > Then you can use 8+3 with 11 servers. Stripe size won't be good: 512*(8-3) = 2560 and not 2048 (or a multiple)

Re: [Gluster-users] EC clarification

2016-09-21 Thread Serkan Çoban
Then you can use 8+3 with 11 servers. On Wed, Sep 21, 2016 at 9:17 PM, Gandalf Corvotempesta wrote: > 2016-09-21 16:13 GMT+02:00 Serkan Çoban : >> 8+2 is recommended for 10 servers. With n+k servers it will be good to >> choose n as a

Re: [Gluster-users] EC clarification

2016-09-21 Thread Gandalf Corvotempesta
2016-09-21 16:13 GMT+02:00 Serkan Çoban : > 8+2 is recommended for 10 servers. With n+k servers it will be good to > choose n as a power of 2 (4, 8, 16, etc.) > You need to add 10 bricks if you want to extend the volume. 8+2 means at most 2 failed bricks, right? That's too

Re: [Gluster-users] gluster 3.7 healing errors (no data available, buf->ia_gfid is null)

2016-09-21 Thread Pasi Kärkkäinen
Hi, On Wed, Sep 21, 2016 at 10:12:44PM +0530, Ravishankar N wrote: > On 09/21/2016 06:45 PM, Pasi Kärkkäinen wrote: > >Hello, > > > >I have a pretty basic two-node gluster 3.7 setup, with a volume > >replicated/mirrored to both servers. > > > >One of the servers was down for hardware

Re: [Gluster-users] gluster 3.7 healing errors (no data available, buf->ia_gfid is null)

2016-09-21 Thread Ravishankar N
On 09/21/2016 06:45 PM, Pasi Kärkkäinen wrote: Hello, I have a pretty basic two-node gluster 3.7 setup, with a volume replicated/mirrored to both servers. One of the servers was down for hardware maintenance, and later when it got back up, the healing process started, re-syncing files. In

Re: [Gluster-users] EC clarification

2016-09-21 Thread Serkan Çoban
8+2 is recommended for 10 servers. With n+k servers it is good to choose n as a power of 2 (4, 8, 16, etc.). You need to add 10 bricks if you want to extend the volume. On Wed, Sep 21, 2016 at 4:12 PM, Gandalf Corvotempesta wrote: > 2016-09-21 14:42 GMT+02:00
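A hedged sketch of that layout at creation time, with placeholder hostnames and brick paths:
# gluster volume create myvol disperse 10 redundancy 2 \
    server{1..10}:/bricks/b1/myvol
disperse 10 redundancy 2 gives the 8+2 split discussed here, and extending later means another ten bricks in a single add-brick call.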

[Gluster-users] gluster 3.7 healing errors (no data available, buf->ia_gfid is null)

2016-09-21 Thread Pasi Kärkkäinen
Hello, I have a pretty basic two-node gluster 3.7 setup, with a volume replicated/mirrored to both servers. One of the servers was down for hardware maintenance, and later when it got back up, the healing process started, re-syncing files. In the beginning there were some 200 files that needed to
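A hedged way to list the entries still pending heal while the re-sync runs, assuming the volume is named vol1:
# gluster volume heal vol1 info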

[Gluster-users] Weekly Community Meeting - 21-Sep-2016

2016-09-21 Thread Kaushal M
This week's meeting started slow but snowballed into quite an active meeting. Thank you to all who attended! The logs for the meeting are available at the links below, and the minutes have been pasted at the end. - Minutes:

Re: [Gluster-users] EC clarification

2016-09-21 Thread Gandalf Corvotempesta
2016-09-21 14:42 GMT+02:00 Xavier Hernandez : > You *must* ensure that *all* bricks forming a single disperse set are each placed > on a different server. There are no 4 special fragments. All fragments have > the same importance. The way to do that is ordering them when the
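A hedged illustration of that ordering for two 4+2 disperse sets across six servers, with placeholder names: list one brick per server before reusing any server, so the consecutive grouping puts every set on six different machines:
# gluster volume create myvol disperse 6 redundancy 2 \
    srv{1..6}:/bricks/b1/myvol srv{1..6}:/bricks/b2/myvol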

Re: [Gluster-users] 3.8.3 Bitrot signature process

2016-09-21 Thread Kotresh Hiremath Ravishankar
Hi Amudhan, If you see the ls output, some process has an fd opened in the backend. That is the reason bitrot is not considering it for signing. Could you please observe whether the signing happens 120 secs after the closure of "/media/disk2/brick2/.glusterfs/6e/7c/6e7c49e6-094e-4435-85bf-f21f99fd8764".
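A hedged way to watch for that, reusing the data path quoted later in this thread: re-run getfattr once the fd has been closed for over 120 seconds and check whether trusted.bit-rot.signature has appeared alongside trusted.bit-rot.version:
# getfattr -m . -d -e hex /media/disk2/brick2/data/G/test58-bs10M-c100.nul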

Re: [Gluster-users] EC clarification

2016-09-21 Thread Xavier Hernandez
On 21/09/16 14:36, Gandalf Corvotempesta wrote: On Sep 1, 2016 10:18 AM, "Xavier Hernandez" wrote: If you put more than one fragment into the same server, you will lose all the fragments if the server goes down. If there are more than

Re: [Gluster-users] EC clarification

2016-09-21 Thread Gandalf Corvotempesta
On Sep 1, 2016 10:18 AM, "Xavier Hernandez" wrote: > If you put more than one fragment into the same server, you will lose all the fragments if the server goes down. If there are more than 4 fragments on that server, the file will be unrecoverable until the server is

Re: [Gluster-users] Upgrading Gluster Client without upgrading server

2016-09-21 Thread Atin Mukherjee
You should upgrade your servers first, followed by your clients. On Wed, Sep 21, 2016 at 2:44 PM, mabi wrote: > Hi, > > I have a GlusterFS server version 3.7.12 and mount my volumes on my > clients using FUSE native GlusterFS. Now I was wondering if it is safe to > upgrade
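A hedged pre-flight check on each node before mixing versions, assuming glusterd's state lives in the usual directory:
# grep operating-version /var/lib/glusterd/glusterd.info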

Re: [Gluster-users] 3.8.3 Bitrot signature process

2016-09-21 Thread Amudhan P
Hi Kotresh, Writing a new file:
getfattr -m . -e hex -d /media/disk2/brick2/data/G/test58-bs10M-c100.nul
getfattr: Removing leading '/' from absolute path names
# file: media/disk2/brick2/data/G/test58-bs10M-c100.nul
trusted.bit-rot.version=0x020057da8b23000b120e

[Gluster-users] Upgrading Gluster Client without upgrading server

2016-09-21 Thread mabi
Hi, I have a GlusterFS server version 3.7.12 and mount my volumes on my clients using FUSE native GlusterFS. Now I was wondering if it is safe to upgrade the GlusterFS client on my clients to 3.7.15 without upgrading my server to 3.7.15? Regards,

Re: [Gluster-users] 3.8.3 Bitrot signature process

2016-09-21 Thread Amudhan P
Hi Kotresh, I have used the below command to check for any open fd on the file: "ls -l /proc/*/fd | grep filename". As soon as the write completes there are no open fds. If there is any alternate option, please let me know and I will try that as well. Also, below is the scrub status on my test setup. number of
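Two hedged alternatives to globbing /proc by hand, assuming the tools are installed on the brick node:
# lsof /media/disk2/brick2/data/G/test58-bs10M-c100.nul
# fuser -v /media/disk2/brick2/data/G/test58-bs10M-c100.nul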

Re: [Gluster-users] 3.8.3 Bitrot signature process

2016-09-21 Thread Kotresh Hiremath Ravishankar
Hi Amudhan, I don't think it's a limitation with reading data from the brick. To limit CPU usage, throttling is done using a token bucket algorithm. The log message shown is related to that. But even then, I think it should not take 12 minutes for checksum calculation unless there is an fd
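For readers new to the term, a deliberately crude bash illustration of a token bucket; this is a hypothetical sketch, not gluster's implementation, and checksum_block is a made-up stand-in:
  rate=10 burst=20 tokens=$burst nblocks=100
  checksum_block() { :; }            # hypothetical stand-in for the real work
  for block in $(seq 1 $nblocks); do
    while (( tokens <= 0 )); do      # bucket empty: wait for the next refill
      sleep 1
      (( tokens = tokens + rate > burst ? burst : tokens + rate ))
    done
    (( tokens-- ))                   # spend one token per block checksummed
    checksum_block "$block"
  done
Each unit of work costs a token, and work stalls whenever the bucket is empty, which caps CPU without stopping progress.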