On 09-Feb-2018 7:07 PM, "Seva Gluschenko" wrote:
Hi Karthik,
Thank you very much, you made me much more relaxed. Below is getfattr output
for a file from all the bricks:
root@gv2 ~ # getfattr -d -e hex -m . /data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
getfattr: Removing leading '/' from absolute path names
# file:
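[Editor's note: for readers following along, the trusted.afr.* changelog values that getfattr prints in hex encode three big-endian 32-bit counters (pending data, metadata, and entry operations). The sketch below decodes a made-up example value, not one taken from this thread:]

```shell
#!/usr/bin/env bash
# Sketch: decoding a trusted.afr.* changelog value as printed by
# "getfattr -e hex". The 12-byte value holds three big-endian 32-bit
# counters: pending data, metadata, and entry operations.
# The value below is a hypothetical example, not from this thread.
val=0x000000020000000000000000
hex=${val#0x}              # strip the 0x prefix
data=$((0x${hex:0:8}))     # bytes 0-3: pending data operations
meta=$((0x${hex:8:8}))     # bytes 4-7: pending metadata operations
entry=$((0x${hex:16:8}))   # bytes 8-11: pending entry operations
echo "data=$data metadata=$meta entry=$entry"
```

A non-zero counter in a brick's xattr for another brick means that brick still has changes pending heal toward the other one.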
On Fri, Feb 9, 2018 at 3:23 PM, Seva Gluschenko wrote:
Hi Karthik,
Thank you for your reply. The heal is still ongoing, as the
/var/log/glusterfs/glustershd.log keeps growing, and there are a lot of pending
entries in the heal info.
The gluster version is 3.10.9 and 3.10.10 (a version update is in progress). It
doesn't have heal info summary [yet?],
On Fri, Feb 9, 2018 at 11:46 AM, Karthik Subrahmanya wrote:
Hey,
Did the heal complete, and do you still have some entries pending heal?
If yes, can you provide the following information to debug the issue:
1. Which version of gluster you are running
2. gluster volume heal <volname> info summary or gluster volume heal
<volname> info
3. getfattr -d -e hex -m . <file-path-on-brick> output of the file from
all the bricks
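[Editor's note: the checklist above can be collected with a short script. The volume name "myvol" and the brick path are taken from elsewhere in this thread; substitute your own. Note that the "heal info summary" subcommand only exists in newer gluster releases, hence the fallback to plain "heal info":]

```shell
#!/usr/bin/env bash
# Sketch of gathering the three debug items. VOL and FILE_ON_BRICK are
# examples from this thread; adjust them for your setup.
VOL=myvol
FILE_ON_BRICK=/data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack

if command -v gluster >/dev/null 2>&1; then
    gluster --version | head -1                 # 1. gluster version
    gluster volume heal "$VOL" info summary 2>/dev/null \
        || gluster volume heal "$VOL" info      # 2. heal status (summary if supported)
fi
if command -v getfattr >/dev/null 2>&1; then
    # 3. xattrs of the affected file; run this on every brick host
    getfattr -d -e hex -m . "$FILE_ON_BRICK" 2>/dev/null
fi
status=collected
echo "$status"
```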
Hi folks,
I'm having trouble moving an arbiter brick to another server because of I/O
load issues. My setup is as follows:
# gluster volume info
Volume Name: myvol
Type: Distributed-Replicate
Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 +