Hi Kotresh,
2280 is a brick process; I have not tried with a dist-rep volume.
I have not seen any fd in the bitd process on any of the nodes, and the
bitd process usage is always 0% CPU, only occasionally going to 0.3% CPU.
Thanks,
Amudhan
On Thursday, September 22, 2016, Kotresh Hiremath Ravishankar wrote:
Hi Amudhan,
No, bitrot signer is a different process by itself and is not part of brick
process.
I believe the process 2280 is a brick process? Did you check with a dist-rep
volume? Is the same behavior observed there as well? We need to figure out
why the brick process is holding that fd for
On 09/21/2016 10:54 PM, Pasi Kärkkäinen wrote:
Let's see.
# getfattr -m . -d -e hex /bricks/vol1/brick1/foo
getfattr: Removing leading '/' from absolute path names
# file: bricks/vol1/brick1/foo
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000
So
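For the heal state itself, a quick check (assuming the volume is named vol1,
as the brick path suggests) is:

# list entries still pending heal, and any files in split-brain
gluster volume heal vol1 info
gluster volume heal vol1 info split-brain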
Great! Thank you, Manikandan.
> On Sep 15, 2016, at 6:23 AM, Raghavendra Gowdappa wrote:
>
> Hi Sergei,
>
> You can set the marker "dirty" xattr using the key
> trusted.glusterfs.quota.dirty. You have two choices:
>
> 1. Setting it through a gluster mount. This will set the key on
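For illustration, setting it via setfattr on a FUSE mount could look like the
sketch below; the mount path /mnt/glustervol/dir and the hex value are
assumptions, so verify the value encoding your version expects:

# mark the directory's quota accounting as dirty ('1' == 0x31)
setfattr -n trusted.glusterfs.quota.dirty -v 0x31 /mnt/glustervol/dir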
Hi,
I'm using gluster 3.7.5 and I'm trying to get port bonding working
properly with the gluster protocol. I've bonded the NICs using round
robin because I also bond them at the switch level with link aggregation.
I've used this type of bonding without a problem with my other
applications
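For reference, a minimal round-robin bond with iproute2 looks roughly like
this sketch (bond0, eth0 and eth1 are placeholders for the real interface
names):

# create the bond in balance-rr (round robin) mode
ip link add bond0 type bond mode balance-rr
# slaves must be down before they can be enslaved
ip link set eth0 down; ip link set eth0 master bond0
ip link set eth1 down; ip link set eth1 master bond0
ip link set bond0 up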
> 2016-09-21 20:56 GMT+02:00 Serkan Çoban :
> > Then you can use 8+3 with 11 servers.
>
> Stripe size won't be good: 512*(8-3) = 2560 and not 2048 (or multiple)
It's not really 512*(8+3) though. Even though there are 11 fragments,
they only contain 8 fragments' worth of data.
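To spell out the arithmetic: each fragment carries 512 bytes of a stripe, and
only the 8 data fragments count, so the stripe is 512 * 8 = 4096 bytes, which
is a multiple of 2048; the 3 redundancy fragments are overhead on top and do
not change the alignment.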
2016-09-21 20:56 GMT+02:00 Serkan Çoban :
> Then you can use 8+3 with 11 servers.
Stripe size won't be good: 512*(8-3) = 2560 and not 2048 (or multiple)
Then you can use 8+3 with 11 servers.
On Wed, Sep 21, 2016 at 9:17 PM, Gandalf Corvotempesta wrote:
> 2016-09-21 16:13 GMT+02:00 Serkan Çoban :
>> 8+2 is recommended for 10 servers. For n+k, it is good to
>> choose n to be a
2016-09-21 16:13 GMT+02:00 Serkan Çoban :
> 8+2 is recommended for 10 servers. For n+k, it is good to
> choose n to be a power of 2 (4, 8, 16, etc.)
> You need to add 10 bricks if you want to extend the volume.
8+2 means at most 2 failed bricks are tolerated, right?
That's too
Hi,
On Wed, Sep 21, 2016 at 10:12:44PM +0530, Ravishankar N wrote:
> On 09/21/2016 06:45 PM, Pasi Kärkkäinen wrote:
> >Hello,
> >
> >I have a pretty basic two-node gluster 3.7 setup, with a volume
> >replicated/mirrored to both servers.
> >
> >One of the servers was down for hardware
On 09/21/2016 06:45 PM, Pasi Kärkkäinen wrote:
Hello,
I have a pretty basic two-node gluster 3.7 setup, with a volume
replicated/mirrored to both servers.
One of the servers was down for hardware maintenance, and later when it got
back up, the healing process started, re-syncing files.
In
8+2 is recommended for 10 servers. For n+k, it is good to
choose n to be a power of 2 (4, 8, 16, etc.)
You need to add 10 bricks if you want to extend the volume.
On Wed, Sep 21, 2016 at 4:12 PM, Gandalf Corvotempesta wrote:
> 2016-09-21 14:42 GMT+02:00
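For illustration, an 8+2 layout across 10 servers could be created as below;
the volume name and brick paths are placeholders:

# disperse 10 redundancy 2 => 8 data + 2 redundancy fragments per stripe
gluster volume create testvol disperse 10 redundancy 2 \
    server{1..10}:/bricks/brick1/testvol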
Hello,
I have a pretty basic two-node gluster 3.7 setup, with a volume
replicated/mirrored to both servers.
One of the servers was down for hardware maintenance, and later when it got
back up, the healing process started, re-syncing files.
In the beginning there were some 200 files that needed to
This week's meeting started slow but snowballed into quite an active one.
Thank you to all who attended the meeting!
The meeting logs for the meeting are available at the links below, and
the minutes have been pasted at the end.
- Minutes:
2016-09-21 14:42 GMT+02:00 Xavier Hernandez :
> You *must* ensure that *all* bricks forming a single disperse set are placed
> on different servers. There are no 4 special fragments; all fragments have
> the same importance. The way to do that is ordering them when the
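A sketch of such ordering (hostnames and paths invented for illustration):
listing one brick per server, in server order, keeps every fragment of each
10-brick disperse set on a different machine:

# first 10 bricks form set 1, next 10 form set 2; each set spans all servers
gluster volume create bigvol disperse 10 redundancy 2 \
    server{1..10}:/bricks/b1 server{1..10}:/bricks/b2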
Hi Amudhan,
If you see the ls output, some process has an fd opened on the backend.
That is the reason bitrot is not considering it for signing.
Could you please observe whether the signing happens 120 secs after
"/media/disk2/brick2/.glusterfs/6e/7c/6e7c49e6-094e-4435-85bf-f21f99fd8764"
is closed.
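One way to watch for that (a suggestion, not something from the thread) is to
re-run getfattr on the backend path until the trusted.bit-rot.signature xattr
shows up:

getfattr -m trusted.bit-rot -d -e hex \
    /media/disk2/brick2/.glusterfs/6e/7c/6e7c49e6-094e-4435-85bf-f21f99fd8764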
On 21/09/16 14:36, Gandalf Corvotempesta wrote:
On Sep 1, 2016 10:18 AM, "Xavier Hernandez" wrote:
If you put more than one fragment into the same server, you will lose
all the fragments if the server goes down. If there are more than
On Sep 1, 2016 10:18 AM, "Xavier Hernandez" wrote:
> If you put more than one fragment into the same server, you will lose all
the fragments if the server goes down. If there are more than 4 fragments
on that server, the file will be unrecoverable until the server is
You should first upgrade your servers, followed by the clients.
On Wed, Sep 21, 2016 at 2:44 PM, mabi wrote:
> Hi,
>
> I have a GlusterFS server version 3.7.12 and mount my volumes on my
> clients using FUSE native GlusterFS. Now I was wondering if it is safe to
> upgrade
Hi Kotresh,
Writing new file.
getfattr -m. -e hex -d /media/disk2/brick2/data/G/test58-bs10M-c100.nul
getfattr: Removing leading '/' from absolute path names
# file: media/disk2/brick2/data/G/test58-bs10M-c100.nul
trusted.bit-rot.version=0x020057da8b23000b120e
Hi,
I have a GlusterFS server version 3.7.12 and mount my volumes on my clients
using FUSE native GlusterFS. Now I was wondering if it is safe to upgrade the
GlusterFS client on my clients to 3.7.15 without upgrading my server to 3.7.15?
Regards,
Hi Kotresh,
I have used the below command to verify any open fd for the file:
"ls -l /proc/*/fd | grep filename"
As soon as the write completes there are no open fds; if there is any
alternate option, please let me know and I will also try that.
Also, below is my scrub status in my test setup. Number of
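One such alternate option, assuming lsof is installed on the storage nodes,
would be querying the brick file directly instead of scanning /proc:

# lists any process holding the file open (may need root)
lsof /media/disk2/brick2/data/G/test58-bs10M-c100.nul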
Hi Amudhan,
I don't think it's a limitation with reading data from the brick.
To limit CPU usage, throttling is done using a token bucket
algorithm. The log message shown is related to it. But even then
I think it should not take 12 minutes for check-sum calculation unless
there is an fd
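If the throttle itself is suspected, the scrubber's pace can be tuned through
the bitrot CLI (VOLNAME is a placeholder; accepted values are lazy, normal
and aggressive):

gluster volume bitrot <VOLNAME> scrub-throttle aggressive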