At least you can bump the cluster op-version (in case you don't plan to add
older clients) via:
gluster volume set all cluster.op-version 50400
If it happens again, try to remount the client in order to verify that it is
not a memory leak.
Best Regards,
Strahil Nikolov
On Wednesday, 30
Hm...
Can you check the cluster op version via:
gluster volume get all cluster.op-version
And the max version:
gluster volume get all cluster.max-op-version
If you restart the client (umount and then mount), do you have the same memory
usage?
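The remount check could look like this (the mount point, server, and volume names below are hypothetical; adjust them to your setup):

```shell
# Record the FUSE client's resident memory (KiB) before remounting:
ps -o rss= -C glusterfs

# Remount the volume (hypothetical mount point and volume name):
umount /mnt/glustervol
mount -t glusterfs server1:/glustervol /mnt/glustervol

# After the workload has run for a while, sample again and compare.
# Memory that plateaus suggests caching; memory that grows without
# bound suggests a leak:
ps -o rss= -C glusterfs
```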
In your case the client is 5.10, so you can try
Sadly I can't help much here.
Is this a Hyperconverged setup (host is also a client) ?
Best Regards,
Strahil Nikolov
On Tuesday, September 29, 2020 at 18:29:20 GMT+3, Shreyansh Shah
wrote:
Hi All,
Can anyone help me out with this?
On Tue, Sep 22, 2020 at 2:59 PM Shreyansh Shah
cluster.op-version is 5, and cluster.max-op-version is 50400
Our cluster server is 5.10 and client too is running at 5.10.
Unfortunately the instance is not running anymore so we cannot remount and
check.
On Wed, Sep 30, 2020 at 8:59 PM Strahil Nikolov
wrote:
Hi Strahil,
Thanks for taking the time to help me.
This is not a hyperconverged setup. We have 7 nodes with 2 bricks on each
node, i.e. a 14-brick distributed setup in total.
The host on which I saw the increased RAM is a client with glusterfs client
version 5.10.
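To tell whether the client's memory is actually growing over time (a possible leak) or just plateauing (cache filling up), the glusterfs FUSE process's RSS can be sampled periodically from /proc. A minimal Linux-only sketch; the function name is illustrative, and the current process is used here only as a stand-in for the real glusterfs client PID:

```python
import os
import re

def rss_kib(pid: int) -> int:
    """Return the resident set size (VmRSS) of a process in KiB,
    read from /proc/<pid>/status (Linux only)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                # Line looks like: "VmRSS:    123456 kB"
                return int(re.search(r"\d+", line).group())
    raise RuntimeError(f"no VmRSS line for pid {pid}")

# Stand-in: sample our own process; in practice, pass the PID of the
# glusterfs client process (e.g. from `pgrep -f glusterfs`).
print(rss_kib(os.getpid()))
```

Logging this value every few minutes against the glusterfs client PID would show whether usage stabilizes or climbs steadily.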
On Wed, Sep 30, 2020 at 8:42 PM Strahil
Hi All,
Can anyone help me out with this?
On Tue, Sep 22, 2020 at 2:59 PM Shreyansh Shah <
shreyansh.s...@alpha-grep.com> wrote:
Hi,
We are using distributed gluster version 5.10 (7 nodes with 2 bricks per
node, i.e. 14 bricks total).
We have set the performance.cache-size parameter as 8GB on the server. We
assumed that this config parameter indicates the amount of RAM that will be
used on the client machine (i.e. up to 8 GB of
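For reference, setting and verifying this tunable looks roughly like the following (the volume name is hypothetical). Note that, to my understanding, performance.cache-size bounds the io-cache translator's cache, not the client's total memory footprint; other client-side components (e.g. metadata caching, inode tables) consume memory on top of it:

```shell
# Set the io-cache size on a hypothetical volume "myvol":
gluster volume set myvol performance.cache-size 8GB

# Confirm the value the volume now reports:
gluster volume get myvol performance.cache-size
```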