What is the OS and its version?
I have seen similar behaviour (with a different workload) on RHEL 7.6 and below.
Have you checked what processes are in 'R' or 'D' state on st2a?
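For example, a quick way to list such processes (plain procps usage, nothing Gluster-specific assumed):

```shell
# List PID, state and command for processes in running (R) or
# uninterruptible-sleep (D) state. A persistent pile-up of D-state
# glusterfsd/glusterfs processes usually means the node is blocked
# on disk or network I/O.
ps -eo pid,stat,comm | awk '$2 ~ /^[RD]/'
```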
Best Regards,
Strahil Nikolov
On 23 June 2020 at 19:31:12 GMT+03:00, Pavel Znamensky
wrote:
Hi all,
There's something strange with one of our clusters on glusterfs version
6.8: it's quite slow and one node is overloaded.
This is a distributed cluster of four servers with the same
specs/OS/versions:
Volume Name: st2
Type: Distributed-Replicate
Volume ID:
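To narrow down which bricks are hot on the overloaded node, Gluster's built-in profiling can help. A sketch using the volume name from above (run on any server in the cluster; profiling adds some overhead, so turn it off afterwards):

```shell
# Enable per-brick latency and file-operation statistics for the volume
gluster volume profile st2 start

# ...let the workload run for a while, then dump the cumulative stats;
# compare per-brick latencies to spot the slow/overloaded node
gluster volume profile st2 info

# Disable profiling again once done
gluster volume profile st2 stop
```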
# Gluster Community Meeting - 23rd June, 2020
### Previous Meeting minutes:
- http://github.com/gluster/community
### Date/Time: Check the [community calendar](
https://calendar.google.com/calendar/b/1?cid=dmViajVibDBrbnNiOWQwY205ZWg5cGJsaTRAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ
)
### Bridge
*
Dear Shwetha,
Yes, I deleted the previous session, including the [reset-sync-time] option.
Actually, the geo-replication is in hybrid crawl, and I executed the command
we discussed yesterday. # setfattr
So far, the files are still present on the slave side.
You mentioned that
Hello Gionatan,
Using a Gluster brick in a RAID configuration might be safer and require
less work from Gluster admins, but it is a waste of disk space.
Gluster bricks are replicated (assuming you are creating a
distributed-replicated volume), so when a brick goes down, it should be
easy to recover it.
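For what it's worth, after replacing a failed JBOD brick, recovery is driven by the self-heal commands; a sketch, with <volname> as a placeholder for the actual volume name:

```shell
# Show entries still pending heal, listed per brick
gluster volume heal <volname> info

# Trigger a full heal (scans all files, not just the change logs);
# useful when the replacement brick started out empty
gluster volume heal <volname> full
```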
On Wed, 17 Jun 2020 00:06:33 +0300
Mahdi Adnan wrote:
> [gluster going down]
I have been following this project for quite some years now, probably longer
than most of the people on the list nowadays. The project started with the
brilliant idea of building a fs on top of classical fs's distributed over