I don't think the issue is on the Gluster side; it seems the issue is on the
kernel side (a possible deadlock in fuse_reverse_inval_entry):
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=bda9a71980e083699a0360963c0135657b73f47a
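If it helps to confirm you are hitting that deadlock, a rough check (generic Linux commands, assuming root access and that the hung-task detector is enabled; <pid> is the blocked process):
    dmesg | grep -i "blocked for more than"   # hung-task warnings name the stuck task
    cat /proc/<pid>/stack                     # kernel stack; look for fuse_reverse_inval_entry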
On Tue, May 2, 2023 at 5:48 PM Hu Bert wrote:
>
Yes, you can directly update the volfile and restart glusterd.
Please file a GitHub issue if you face any further issues.
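For example (a rough sketch, assuming a systemd-based system and the default volfile location; <volname> is a placeholder):
    vi /var/lib/glusterd/vols/<volname>/<volname>.tcp-fuse.vol   # edit the generated volfile
    systemctl restart glusterd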
On Tue, Apr 19, 2022 at 5:32 PM wrote:
I don't think it is similar to the one Xavi fixed in
https://review.gluster.org/#/c/glusterfs/+/24099/.
Is it possible to share the output of "thread apply all bt full" after
attaching the core with gdb?
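For example (a rough sketch; adjust the binary and core paths, and make sure the matching debuginfo packages are installed):
    gdb /usr/sbin/glusterfsd /path/to/core
    (gdb) thread apply all bt full
or non-interactively:
    gdb -batch -ex "thread apply all bt full" /usr/sbin/glusterfsd /path/to/core > bt-full.txt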
Regards,
Mohit Agrawal
On Sat, Feb 15, 2020 at 7:25 AM Amar Tumballi wrote:
>
ent at the time of
mounting the volume?
Please use the default event-threads if you are not getting any performance
improvement, and share the output of the command
"top -bH -d 10" captured at the time of writing IO.
Thanks,
Mohit Agrawal
On Mon, Jan 6, 2020 at 4:01 AM Michael Richardson <
he...@miker
the AES128 cipher list, I hope you
will get a sufficient performance improvement.
Please share the performance results after configuring the AES128 cipher,
if it is possible for you.
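For example (a rough sketch, assuming TLS is already enabled on the volume; the exact OpenSSL cipher string may need tuning for your environment):
    gluster volume set <volname> ssl.cipher-list 'AES128'
    # clients/bricks may need to reconnect to pick up the new cipher list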
Thanks,
Mohit Agrawal
a-gfs-bricks-brick1-ovirt-engine.pid"
repeated 2 times between [2019-04-01 10:23:21.748492] and [2019-04-01
10:23:21.752432]
I will backport the same.
Thanks,
Mohit Agrawal
On Wed, Apr 3, 2019 at 3:58 PM Olaf Buitelaar
wrote:
> Dear Mohit,
>
> Sorry i thought Krutika was referring to t
a dump of ovirt-kube logs only
Kindly share the brick logs specific to the bricks "ovirt-core" and "ovirt-engine",
and share the glusterd logs as well.
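Assuming the default log locations (the brick log file name follows the brick path on that node), the files to attach would be along the lines of:
    /var/log/glusterfs/glusterd.log
    /var/log/glusterfs/bricks/<brick-path-with-dashes>.log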
Regards
Mohit Agrawal
On Tue, Apr 2, 2019 at 9:18 PM Olaf Buitelaar
wrote:
> Dear Krutika,
>
> 1.
> I've changed the volume settings,
dd57cf5b6c1-brick.pid
2) Start the volume with the force option.
I hope the volume will start.
Let us know if you are not able to start the volume; in that case, kindly
share the glusterd logs as well as the brick logs for the same brick.
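For example (<volname> is a placeholder):
    gluster volume start <volname> force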
Thanks,
Mohit Agrawal
On Thu, Jan 24, 2019 at 5:1
To verify whether the task is successfully attached, you can check the
tasks file under glusterd.service/tasks in the same hierarchy; all tasks
will be moved from the glusterd tasks file to the mycgroup tasks file.
9) Check the top output; the CPU usage for selfheald will be lower. Let me know
if you face any problem running the
client only.
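In case it helps, a rough sketch of the verification and of step 9 (assuming cgroup v1 with the cpu controller mounted at /sys/fs/cgroup/cpu and a group named mycgroup; paths vary by distribution):
    cat /sys/fs/cgroup/cpu/mycgroup/tasks   # the moved thread IDs should now show up here
    top -bH -d 10 -n 3                      # per-thread view; the selfheal threads should use less CPU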
Thanks
Mohit Agrawal
On Fri, Mar 31, 2017 at 2:27 PM, Mohit Agrawal wrote:
> Hi,
>
> As per the attached glusterdump/stackdump it seems it is a known issue
> (https://bugzilla.redhat.com/show_bug.cgi?id=1372211) and the issue is already
> fixed by the patch (https://review.
TE also cannot end if fd1 is not closed; fd2 is stuck in the close
syscall.
As per the statedump, it also shows that the flush op fd is not the same as the write op fd.
Kindly upgrade the package to 3.10.1 and share the result.
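For example (a rough sketch, assuming a yum-based distribution with a repository that already provides the 3.10.1 packages; package names differ per distribution):
    yum update glusterfs-server glusterfs-fuse
    glusterfs --version   # confirm 3.10.1 after the update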
Thanks
Mohit Agrawal
On Fri, Mar 31, 2017 at 12:29 PM, Amar Tumballi wrote:
Hi Kevin,
As of now auth.allow/auth.reject accepts only IP addresses, not hostnames,
but it will accept hostnames after one patch is merged.
I have uploaded a patch (http://review.gluster.org/15086) upstream but got
some reviewer comments; I will correct it soon.
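For example (a rough sketch of the current behaviour; <volname> and the addresses are placeholders):
    gluster volume set <volname> auth.allow 192.168.10.*          # IP addresses/wildcards work today
    gluster volume set <volname> auth.allow client1.example.com   # hostnames will only work once the patch above is merged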
Thanks & Regards
Mohit Agrawal
On Fri, Se
Hi Tommy,
Can you please share the steps to reproduce the issue?
Regards
Mohit Agrawal
On Wed, Jul 20, 2016 at 8:20 PM, Atin Mukherjee wrote:
>
>
> On Wednesday 20 July 2016, tommy.yard...@baesystems.com <
> tommy.yard...@baesystems.com> wrote:
>
>> Ah