We are also using 10TB disks; heal takes 7-8 days.
You can play with the "cluster.shd-max-threads" setting. The default is 1,
I think; I am using it with 4.
Below you can find more info:
https://access.redhat.com/solutions/882233
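For reference, the option can be checked and raised per volume from the gluster CLI. A minimal sketch; the volume name "myvol" is a placeholder:

```shell
# Check the current value (defaults to 1)
gluster volume get myvol cluster.shd-max-threads

# Let the self-heal daemon work on 4 entries in parallel
gluster volume set myvol cluster.shd-max-threads 4
```

Raising this trades client-visible I/O load for faster heals, so it is worth watching brick utilization after changing it.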
On Thu, Jan 10, 2019 at 9:53 AM Hu Bert wrote:
>
> Hi Mike,
>
> > We have similar setup, and I do not test restoring...
> > How many volumes do you have - one volume on one (*3) disk 10 TB in size
> > - then 4 volumes?
> Testing could be quite easy: reset-brick start, then delete
> partition/fs/etc., reset-brick commit force - and then watch.
> We only
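The test procedure described above could be sketched as follows; volume name, hostname, and brick path are placeholders, and the middle step stands in for whatever disk replacement you want to simulate:

```shell
# Take the brick offline so the underlying disk can be wiped/replaced
gluster volume reset-brick myvol server1:/bricks/brick1 start

# ...delete and recreate the partition/filesystem on the disk here...

# Bring the (now empty) brick back; self-heal then rebuilds its contents
gluster volume reset-brick myvol server1:/bricks/brick1 \
        server1:/bricks/brick1 commit force

# Watch the heal progress
gluster volume heal myvol info summary
```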
From: gluster-users-boun...@gluster.org On Behalf Of Matt Waymack
Sent: Monday, January 7, 2019 1:19 PM
To: Raghavendra Gowdappa
Cc: gluster-users@gluster.org

Has anyone any other ideas where to look? This is only affecting FUSE
clients; SMB clients are unaffected by this problem.
Thanks!
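Since only FUSE clients are affected, one first step would be remounting a single affected client with verbose logging and reproducing the failure. A sketch, with server, volume, mount point, and log path as placeholders:

```shell
# Remount one affected FUSE client with debug-level client logging
umount /mnt/gluster
mount -t glusterfs \
      -o log-level=DEBUG,log-file=/var/log/gluster-client.log \
      server1:/myvol /mnt/gluster

# Reproduce the failure, then inspect the client log
grep -iE 'error|denied' /var/log/gluster-client.log | tail
```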
09.01.2019 17:38, Hu Bert writes:
Hi @all,
> we have 3 servers, 4 disks (10TB) each, in a replicate 3 setup. We're
> having some problems after a disk failed; the restore via reset-brick
> takes way too long (way over a month)
terrible.
We have similar setup, and I do not test restoring...
How many volumes do you have - one volume on one (*3) disk 10 TB in size
- then 4 volumes?
I am seeing a broken file that exists on 2 out of 3 nodes. The application
trying to use the file throws a file-permissions error, and ls, rm, mv and
touch all throw "Input/output error":
$ ls -la
ls: cannot access .download_suspensions.memo: Input/output error
drwxrwxr-x. 2 ossadmin ossadmin 4096 Jan
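A file present on only 2 of 3 replicas that returns EIO on every access is the classic split-brain symptom, so checking for that would be a reasonable first step. A sketch assuming a split-brain is confirmed; the volume name and the directory part of the path are placeholders:

```shell
# List files GlusterFS considers split-brain on this volume
gluster volume heal myvol info split-brain

# Resolve one file by keeping the replica with the newest mtime
# (the path is relative to the volume root)
gluster volume heal myvol split-brain latest-mtime \
        /<dir>/.download_suspensions.memo
```

Other resolution policies (bigger-file, source-brick) exist if latest-mtime is not the right criterion for this file.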
Hi @all,
we have 3 servers, 4 disks (10TB) each, in a replicate 3 setup. We're
having some problems after a disk failed; the restore via reset-brick
takes way too long (way over a month), disk utilization is at 100%, it
doesn't get any faster, some params have already been tweaked. Only
about
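Some rough arithmetic shows why a 10 TB brick heal stretches into days or weeks: the heal time is dominated by the effective per-brick heal throughput, which on seek-bound workloads with many small files can fall to a few MB/s. The throughput figures below are illustrative assumptions, not measured values:

```python
# Back-of-envelope heal-time estimate for a 10 TB brick, assuming the
# self-heal daemon sustains a given effective throughput end to end.

def heal_days(brick_bytes: float, throughput_mb_s: float) -> float:
    """Days needed to re-replicate brick_bytes at throughput_mb_s MB/s."""
    seconds = brick_bytes / (throughput_mb_s * 1e6)
    return seconds / 86400

TEN_TB = 10e12  # 10 TB brick, as in this thread

# ~15 MB/s effective heal rate -> roughly the 7-8 days reported above
print(round(heal_days(TEN_TB, 15), 1))  # -> 7.7

# ~3 MB/s (seek-bound, many small files) -> over a month
print(round(heal_days(TEN_TB, 3), 1))   # -> 38.6
```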
Can I please get some help in understanding the issue mentioned?
From: Anand Malagi
Sent: Monday, December 31, 2018 1:39 PM
To: 'Anand Malagi' ; gluster-users@gluster.org
Subject: RE: replace-brick operation issue...
Can someone please help here?
From: