Hi João,
Yes, it'll take some time given the file system size, as it has to change the
xattrs at each level and then crawl upwards.
The stat is done by the script itself, so the crawl is initiated.
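For reference, once the crawl has updated a directory you can inspect the quota
xattrs directly on a brick; the brick and directory paths below are only examples:

  # Dump the quota-related xattrs of a directory on one brick (example path).
  getfattr -d -m . -e hex /tank/volume2/brick/projectB | grep quota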
Regards,
Srijan Sivakumar
Hi Srijan & Strahil,
I ran the quota_fsck script mentioned in Hari's blog post on all bricks and
it detected a lot of size mismatches.
The script was executed as:
- python quota_fsck.py --sub-dir projectB --fix-issues /mnt/tank /tank/volume2/brick (on all nodes and bricks)
Here is a
Greetings, I am trying to monitor the start/stop of a self-heal in the
cluster without needing to poll the CLI. Is there a passive way to monitor
whether the cluster is in a state of self-heal? It looked like checking the
xattrop directory for a file count worked in some cases, but it was not
accurate.
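One passive option, sketched below under the assumption that the heal index lives
under <brick>/.glusterfs/indices/xattrop (the brick path is an example), is to count
the entries in that directory on each brick, excluding the base 'xattrop-...' file:

  # Example brick path; each remaining entry is a gfid pending heal on this brick.
  ls /data/brick1/.glusterfs/indices/xattrop | grep -v '^xattrop-' | wc -l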
Hi João,
The quota accounting error is what we're looking at here. I think you've
already gone through the blog post by Hari and are using the script to fix
the accounting issue.
That should help you resolve it.
Let me know if you face any issues while using it.
Regards,
Srijan
Hey Matthew,
Can you check the memory leak with valgrind?
It will be something like:
Find the geo-rep process via ps and note all the parameters it was started with.
Next, stop geo-rep.
Then start it with valgrind:
valgrind --log-file="filename" --tool=memcheck --leak-check=full
It might
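Roughly, the steps above would look like this; the volume names, the secondary host
and the gsyncd path are placeholders, so substitute the actual command line noted
from ps:

  # 1. Note the full command line of the running geo-rep worker (gsyncd).
  ps aux | grep gsyncd
  # 2. Stop the geo-replication session.
  gluster volume geo-replication <primary-vol> <secondary-host>::<secondary-vol> stop
  # 3. Start the worker again under valgrind, re-using the parameters from step 1.
  valgrind --log-file=/tmp/georep-valgrind.log --tool=memcheck --leak-check=full \
      python /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py <parameters from step 1>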
Usually sharding is used for that purpose. Each shard is of a fixed size.
For details:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/configuring_red_hat_virtualization_with_red_hat_gluster_storage/chap-hosting_virtual_machine_images_on_red_hat_storage_volumes
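For example, sharding is enabled per volume; the volume name below is a placeholder
and the block size is just an illustration (it only applies to files created after it
is set):

  gluster volume set <volname> features.shard on
  gluster volume set <volname> features.shard-block-size 64MB

Keep in mind that sharding is intended for large-file (VM image) workloads and should
not be turned off again once data has been written with it enabled.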
Hi João,
most probably a quota disable/enable should help.
Have you checked all bricks on the ZFS?
Your example is for projectA vs ProjectB.
What about the 'ProjectB' directories on all bricks of the volume?
If enable/disable doesn't help, I have an idea, but I have never tested it, so I
can't guarantee it will work.
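For reference, a disable/enable cycle would look roughly like the sketch below; the
volume name, path and limit are placeholders, and since disabling quota drops the
configured limits, list and note them first:

  # Record the current limits before disabling (placeholder volume name).
  gluster volume quota <volname> list
  # Disable and re-enable quota, which should trigger a fresh accounting crawl.
  gluster volume quota <volname> disable
  gluster volume quota <volname> enable
  # Re-apply the limit that was configured before (example path and size).
  gluster volume quota <volname> limit-usage /projectB 10TB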