Re: [Gluster-devel] Regression failing on tests/bugs/core/bug-1432542-mpx-restart-crash.t

2018-06-29 Thread Atin Mukherjee
On Fri, 29 Jun 2018 at 21:35, Poornima Gurusiddaiah 
wrote:

>
>
> On Fri, Jun 29, 2018, 5:54 PM Mohit Agrawal  wrote:
>
>> Hi Poornima,
>>
>> It seems the test case (tests/bugs/core/bug-1432542-mpx-restart-crash.t)
>> is crashing because the rpc-cleanup code in the quota xlator is not
>> handled correctly in the brick-multiplexing scenario. I had not seen this
>> issue for a long time after making some changes in the quota xlator code,
>> but it now seems to be consistently reproducible on your patch.
>>
>>For this build, to test your code you can mark it as a bad test and
>> re-run the regression. I will look into how we can fix it.
>>
> I have done that already, and all the regression tests pass if I mark this
> one bad. So this is the only test the regression is blocked on. Is it
> possible to fix it in the near future, or can the test be marked bad until
> then?
>

We shouldn’t be marking a test bad just to get a patch in. That’s totally
unacceptable. We should try to fix this test first. Btw, I see the
regression passes for most patches, so what is it about Poornima’s patch
that makes it fail on this test? Is it just intermittent?


> Thanks,
> Poornima
>
>>
>> Regards
>> Mohit Agrawal
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>

-- 
- Atin (atinm)

Re: [Gluster-devel] Regression failing on tests/bugs/core/bug-1432542-mpx-restart-crash.t

2018-06-29 Thread Poornima Gurusiddaiah
On Fri, Jun 29, 2018, 5:54 PM Mohit Agrawal  wrote:

> Hi Poornima,
>
> It seems the test case (tests/bugs/core/bug-1432542-mpx-restart-crash.t)
> is crashing because the rpc-cleanup code in the quota xlator is not
> handled correctly in the brick-multiplexing scenario. I had not seen this
> issue for a long time after making some changes in the quota xlator code,
> but it now seems to be consistently reproducible on your patch.
>
>For this build, to test your code you can mark it as a bad test and
> re-run the regression. I will look into how we can fix it.
>
I have done that already, and all the regression tests pass if I mark this
one bad. So this is the only test the regression is blocked on. Is it
possible to fix it in the near future, or can the test be marked bad until
then?

Thanks,
Poornima

>
> Regards
> Mohit Agrawal

[Gluster-devel] Regression failing on tests/bugs/core/bug-1432542-mpx-restart-crash.t

2018-06-29 Thread Mohit Agrawal
Hi Poornima,

It seems the test case (tests/bugs/core/bug-1432542-mpx-restart-crash.t) is
crashing because the rpc-cleanup code in the quota xlator is not handled
correctly in the brick-multiplexing scenario. I had not seen this issue for
a long time after making some changes in the quota xlator code, but it now
seems to be consistently reproducible on your patch.

   For this build, to test your code you can mark it as a bad test and
re-run the regression. I will look into how we can fix it.

Regards
Mohit Agrawal

[Gluster-devel] Regression failing on tests/bugs/core/bug-1432542-mpx-restart-crash.t

2018-06-29 Thread Poornima Gurusiddaiah
Hi,

The regression is failing consistently for patch [1], but never on a local
setup. It has also failed for many other patches [2], and it sometimes
generates a core. It is crashing because a lookup is issued after fini()
has been called on the xlators, so the crash seems unrelated to the patch
it is failing on. Has anyone else seen a similar crash?

#0  0x7f218b1447dc in quota_lookup (frame=0x7f21682480f8,
this=0x7f21487ebd10,
loc=0x7f208b5428d0, xattr_req=0x0) at quota.c:1663
#1  0x7f218af2022b in io_stats_lookup (frame=0x7f2168247fc8,
this=0x7f21487ed580, loc=0x7f208b5428d0, xdata=0x0) at io-stats.c:2784
#2  0x7f219eebf87a in default_rmdir (frame=0x7f2168247fc8,
this=0x7f21487ef210,
loc=0x7f208b5428d0, flags=0, xdata=0x7f21487ee5c0) at defaults.c:2728
#3  0x7f219ee3bd4e in syncop_lookup (subvol=0x7f21487ef210,
loc=0x7f208b5428d0,
iatt=0x7f208b542830, parent=0x0, xdata_in=0x0, xdata_out=0x0) at
syncop.c:1260
#4  0x7f218aa9c562 in server_first_lookup (this=0x7f218c032b40,
client=0x7f216822ce00, reply=0x7f2168207fa8) at server-handshake.c:382
#5  0x7f218aa9e0cd in server_setvolume (req=0x7f21684087c8)
at server-handshake.c:886

(gdb) p this->private
$1 = (void *) 0x0
(gdb) p this->name
$2 = 0x7f21487e9ec0 "patchy-vol02-quota"
(gdb) p *this
$3 = {name = 0x7f21487e9ec0 "patchy-vol02-quota",
  type = 0x7f21487eba90 "features/quota", instance_name = 0x0,
...
  pass_through_fops = 0x7f219f0ee7c0 <_default_fops>, cleanup_starting = 1,
  call_cleanup = 1}
(gdb) p this->local_pool
$4 = (struct mem_pool *) 0x0


[1] https://review.gluster.org/#/c/20362/
[2]
https://fstat.gluster.org/failure/209?state=2&start_date=2018-05-25&end_date=2018-06-29&branch=all


Regards,
Poornima