Hi,
Every resource (threads, memory pools) is associated with a glusterfs_ctx, so as the number of ctxs in a process grows, resource utilization grows with it (most of it being unused). This is mostly an issue for libgfapi applications: USS, NFS Ganesha, Samba, vdsm, qemu.
It is normal in any of
gluster volume heal volname info doesn't seem to fail because the process crashes in afr_notify when it is invoked by glfs_fini. As a result, proper error codes are not being propagated.
Pranith recently sent a patch, http://review.gluster.org/#/c/11001/,
to not invoke glfs_fini in non-debug
On 06/05/2015 09:39 AM, Atin Mukherjee wrote:
I still see a lot of patches in release-3.7 failing regression, be it
Linux or NetBSD. Does this mean all the spurious-failure fixes we did in
mainline are yet to be in 3.7? If so, what are we waiting for?
Most of the fixes are in release-3.7.
Potentially relevant to a GlusterD rewrite, since we've
mentioned Go as a possibility a few times:
https://vagabond.github.io/rants/2015/06/05/a-year-with-go/
https://news.ycombinator.com/item?id=9668302
+ Justin
--
GlusterFS - http://www.gluster.org
An open source, distributed file system
- Original Message -
This seems to happen because of a race between STACK_RESET and the stack
statedump. I'm still thinking about how to fix it without taking locks
around the writes to the file.
Why should we still keep the stack being reset as part of
Interesting Justin, thanks for sharing.
Regards,
Atin
Sent from Samsung Galaxy S4
On 6 Jun 2015 06:16, Justin Clift jus...@gluster.org wrote:
Potentially relevant to a GlusterD rewrite, since we've
mentioned Go as a possibility a few times: