I have rebased [1] and triggered the brick-mux regression job, as we fixed one
genuine snapshot test failure in brick mux through
https://review.gluster.org/#/c/glusterfs/+/21314/, which got merged today.
On Thu, Oct 4, 2018 at 10:39 AM Poornima Gurusiddaiah wrote:
Hi,
For each brick, we create at least 20 threads, so in a brick-mux use
case, where we load multiple bricks into the same process, there will be
hundreds of threads, resulting in performance issues and increased memory usage.
IO-threads: make it global to the process, and refcount the resource;
patch [1],
On Wed, Oct 3, 2018 at 3:26 PM Deepshikha Khandelwal wrote:
02.10.2018 12:59, Amar Tumballi writes:
Recently, in one situation, we found that locks were not freed
up because a TCP timeout was never hit.
Can you try the option below and let us know?
`gluster volume set $volname tcp-user-timeout 42`
(ref: https://review.gluster.org/21170/ )
It doesn't work for some reason:
gluster volume set pool tcp-user-timeout 42
volume set: failed: option : tcp-user-timeout does not exist
Did you mean tcp-user-timeout?
The glusterfs version is 4.1.5.
03.10.2018 10:10, Amar Tumballi writes:
Sorry! I should have been more specific. I overlooked the option:
---
[root@localhost ~]# gluster volume set demo1 tcp-user-timeout 42
volume set: failed: option : tcp-user-timeout does not exist
Did you mean tcp-user-timeout?
[root@localhost ~]# gluster
On Fri, Sep 28, 2018 at 4:01 PM Shyam Ranganathan wrote:
> We tested with ASAN and without the fix at [1], and it consistently
> crashes at the mdcache xlator when brick mux is enabled.
> On 09/28/2018 03:50 PM, FNU Raghavendra Manjunath wrote:
> >
> > I was looking into the issue and this is
Hello folks,
I meant to send this out on Monday, but it's been a busy few days.
* The infra pieces of distributed regression are now complete. A big shout
out to Deepshikha for driving this and Ramky for his help in getting this to
completion.
* The GD2 containers and CSI container builds work now.
Hello folks,
Distributed-regression job[1] is now a part of Gluster's
nightly-master build pipeline. The following are the issues we have
resolved since we started working on this:
1) Collecting gluster logs from servers.
2) Tests that failed due to infra-related issues have been fixed.
3) Time taken