[Gluster-Maintainers] Maintainer Meeting notes and AIs - 5th Apr

2017-04-10 Thread Vijay Bellur
Hey All,

We had a good discussion in last week's maintainers meeting. Meeting notes
can be found at [1] and the recording is available at [2].

We have the following AIs from last meeting:

a> All maintainers to publish backlog in github issues [3] by next meeting
(4/19).

b> Publish 4.0 proposal by 4/19. Amar - can you please circulate the
document that you have been curating when it is ready?

Please respond here if you need any assistance with the AIs. Let us do a
round table and review the respective github issue backlogs in next week's
meeting.

Thanks,
Vijay

[1] https://hackmd.io/MYTgzADARgplCGBaA7DArMxAWAZvATIiFhJgCYarBQCMCYIQA===?both

[2] https://bluejeans.com/s/Nx4MS

[3] https://github.com/gluster/glusterfs/issues
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Jenkins build is back to normal : regression-test-burn-in #2947

2017-04-10 Thread jenkins
See 




[Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #2946

2017-04-10 Thread jenkins
See 


Changes:

[Raghavendra G] cli/auth : auth.allow and auth.reject does not accept FQDN/host name

[Aravinda VK] geo-rep: Fix EBUSY traceback

--
[...truncated 11942 lines...]
ok 2, LINENUM:17
ok 3, LINENUM:21
ok 4, LINENUM:22
ok 5, LINENUM:32
ok 6, LINENUM:33
ok 7, LINENUM:35
ok 8, LINENUM:37
ok 9, LINENUM:38
ok 10, LINENUM:40
ok 11, LINENUM:42
not ok 12 , LINENUM:47
FAILED COMMAND: gluster --mode=script --wignore volume stop patchy
not ok 13 , LINENUM:48
FAILED COMMAND: gluster --mode=script --wignore volume start patchy
not ok 14 Got "" instead of "49152", LINENUM:50
FAILED COMMAND: 49152 get_nth_brick_port_for_volume patchy 1
not ok 15 Got "" instead of "49153", LINENUM:51
FAILED COMMAND: 49153 get_nth_brick_port_for_volume patchy 2
not ok 16 , LINENUM:53
FAILED COMMAND: gluster --mode=script --wignore volume stop patchy
ok 17, LINENUM:55
not ok 18 , LINENUM:57
FAILED COMMAND: gluster --mode=script --wignore volume start patchy
not ok 19 Got "" instead of "49152", LINENUM:59
FAILED COMMAND: 49152 get_nth_brick_port_for_volume patchy 1
volume set: success
Failed 7/19 subtests 

Test Summary Report
---
./tests/bugs/core/bug-1421590-brick-mux-reuse-ports.t (Wstat: 0 Tests: 19 Failed: 7)
  Failed tests:  12-16, 18-19
Files=1, Tests=19, 251 wallclock secs ( 0.02 usr  0.00 sys + 13.70 cusr  3.49 csys = 17.21 CPU)
Result: FAIL
End of test ./tests/bugs/core/bug-1421590-brick-mux-reuse-ports.t



Run complete

Number of tests found: 199
Number of tests selected for run based on pattern: 199
Number of tests skipped as they were marked bad:   7
Number of tests skipped because of known_issues:   4
Number of tests that were run: 188

1 test(s) failed 
./tests/bugs/core/bug-1421590-brick-mux-reuse-ports.t

0 test(s) generated core 


Tests ordered by time taken, slowest to fastest: 

./tests/basic/ec/ec-12-4.t  -  342 second
./tests/bugs/core/bug-1421590-brick-mux-reuse-ports.t  -  251 second
./tests/basic/ec/ec-7-3.t  -  202 second
./tests/basic/ec/ec-6-2.t  -  177 second
./tests/basic/ec/self-heal.t  -  165 second
./tests/basic/ec/ec-5-2.t  -  151 second
./tests/basic/afr/entry-self-heal.t  -  151 second
./tests/basic/ec/ec-5-1.t  -  150 second
./tests/basic/afr/split-brain-favorite-child-policy.t  -  147 second
./tests/basic/afr/self-heal.t  -  138 second
./tests/basic/tier/tier.t  -  124 second
./tests/basic/ec/ec-4-1.t  -  121 second
./tests/basic/tier/legacy-many.t  -  117 second
./tests/basic/ec/ec-optimistic-changelog.t  -  116 second
./tests/basic/afr/self-heald.t  -  105 second
./tests/basic/ec/ec-3-1.t  -  94 second
./tests/bugs/core/bug-1110917.t  -  85 second
./tests/basic/tier/new-tier-cmds.t  -  84 second
./tests/bugs/cli/bug-1320388.t  -  83 second
./tests/basic/volume-snapshot-clone.t  -  83 second
./tests/basic/afr/split-brain-heal-info.t  -  82 second
./tests/basic/ec/heal-info.t  -  80 second
./tests/basic/ec/ec-new-entry.t  -  77 second
./tests/basic/ec/ec-notify.t  -  76 second
./tests/basic/afr/metadata-self-heal.t  -  72 second
./tests/basic/afr/split-brain-healing.t  -  71 second
./tests/basic/ec/ec-background-heals.t  -  70 second
./tests/basic/afr/granular-esh/cli.t  -  70 second
./tests/basic/quota.t  -  68 second
./tests/basic/afr/sparse-file-self-heal.t  -  67 second
./tests/bugs/bug-1368312.t  -  64 second
./tests/basic/tier/frequency-counters.t  -  52 second
./tests/basic/volume-snapshot.t  -  51 second
./tests/basic/afr/quorum.t  -  51 second
./tests/bugs/cli/bug-770655.t  -  50 second
./tests/basic/uss.t  -  50 second
./tests/basic/tier/fops-during-migration-pause.t  -  47 second
./tests/basic/mount-nfs-auth.t  -  46 second
./tests/basic/ec/ec.t  -  46 second
./tests/basic/afr/inodelk.t  -  45 second
./tests/basic/ec/ec-readdir.t  -  39 second
./tests/basic/afr/arbiter.t  -  39 second
./tests/basic/mpx-compat.t  -  38 second
./tests/basic/tier/locked_file_migration.t  -  37 second
./tests/basic/ec/ec-cpu-extensions.t  -  37 second
./tests/basic/tier/tier-heald.t  -  36 second
./tests/bitrot/bug-1294786.t  -  34 second
./tests/bitrot/br-state-check.t  -  34 second
./tests/basic/volume-snapshot-xml.t  -  34 second
./tests/basic/tier/unlink-during-migration.t  -  33 second
./tests/basic/mgmt_v3-locks.t  -  33 second
./tests/basic/afr/granular-esh/conservative-merge.t  -  32 second
./tests/basic/afr/gfid-self-heal.t  -  31 second
./tests/basic/geo-replication/marker-xattrs.t  -  28 second
./tests/basic/quota-ancestry-building.t  -  27 second
./tests/bugs/cli/bug-1353156-get-state-cli-validations.t  -  26 second
./tests/basic/afr/split-brain-resolution.t  

Re: [Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #2932

2017-04-10 Thread Atin Mukherjee
bug-1421590-brick-mux-reuse-ports.t seems to be a bad test to me, and here
is my reasoning:

This test checks whether brick ports are reused after a volume restart.
When a volume is restarted, there is no guarantee that glusterd can assign
the old port to one of the volume's brick processes, because the kernel may
take some extra time to release the port within that window. From
https://build.gluster.org/job/regression-test-burn-in/2932/console we can
clearly see that after the volume restart glusterd allocated ports 49153 &
49155 to brick1 & brick2 respectively, but the test expected them to match
49155 & 49156, the ports that had been allocated before the restart.

@Jeff - Is there any specific reason we want to keep this test running?


On Sat, Apr 8, 2017 at 8:12 AM, Atin Mukherjee  wrote:

>
> On Sat, 8 Apr 2017 at 08:06,  wrote:
>
>> See > isplay/redirect>
>>
>> --
>> [...truncated 12020 lines...]
>> ok 5, LINENUM:32
>> ok 6, LINENUM:33
>> ok 7, LINENUM:35
>> not ok 8 , LINENUM:37
>> FAILED COMMAND: gluster --mode=script --wignore volume stop patchy
>> ok 9, LINENUM:38
>> not ok 10 , LINENUM:40
>> FAILED COMMAND: gluster --mode=script --wignore volume start patchy
>> not ok 11 Got "" instead of "49152", LINENUM:42
>> FAILED COMMAND: 49152 get_nth_brick_port_for_volume patchy 1
>> not ok 12 , LINENUM:47
>> FAILED COMMAND: gluster --mode=script --wignore volume stop patchy
>> not ok 13 , LINENUM:48
>> FAILED COMMAND: gluster --mode=script --wignore volume start patchy
>> not ok 14 Got "" instead of "49152", LINENUM:50
>> FAILED COMMAND: 49152 get_nth_brick_port_for_volume patchy 1
>> not ok 15 Got "" instead of "get_nth_brick_port_for_volume", LINENUM:51
>> FAILED COMMAND: get_nth_brick_port_for_volume patchy 2
>> not ok 16 , LINENUM:53
>> FAILED COMMAND: gluster --mode=script --wignore volume stop patchy
>> ok 17, LINENUM:55
>> not ok 18 , LINENUM:57
>> FAILED COMMAND: gluster --mode=script --wignore volume start patchy
>> not ok 19 Got "" instead of "49152", LINENUM:59
>> FAILED COMMAND: 49152 get_nth_brick_port_for_volume patchy 1
>> volume set: success
>> Failed 10/19 subtests
>>
>> Test Summary Report
>> ---
>> ./tests/bugs/core/bug-1421590-brick-mux-reuse-ports.t (Wstat: 0 Tests: 19 Failed: 10)
>>   Failed tests:  8, 10-16, 18-19
>> Files=1, Tests=19, 249 wallclock secs ( 0.03 usr  0.01 sys + 13.40 cusr 3.36 csys = 16.80 CPU)
>> Result: FAIL
>> End of test ./tests/bugs/core/bug-1421590-brick-mux-reuse-ports.t
>
>
> Something is wrong with this test; I have seen it failing in many
> regression-test burn-in runs. I'll take a look at it.
>
>
>> 
>> 
>>
>>
>> Run complete
>> 
>> 
>> Number of tests found: 199
>> Number of tests selected for run based on pattern: 199
>> Number of tests skipped as they were marked bad:   7
>> Number of tests skipped because of known_issues:   4
>> Number of tests that were run: 188
>>
>> 1 test(s) failed
>> ./tests/bugs/core/bug-1421590-brick-mux-reuse-ports.t
>>
>> 0 test(s) generated core
>>
>>
>> Tests ordered by time taken, slowest to fastest:
>> 
>> 
>> ./tests/basic/ec/ec-12-4.t  -  336 second
>> ./tests/basic/ec/ec-7-3.t  -  199 second
>> ./tests/basic/ec/ec-6-2.t  -  178 second
>> ./tests/basic/ec/self-heal.t  -  158 second
>> ./tests/basic/afr/split-brain-favorite-child-policy.t  -  151 second
>> ./tests/basic/ec/ec-5-2.t  -  150 second
>> ./tests/basic/ec/ec-5-1.t  -  150 second
>> ./tests/basic/afr/entry-self-heal.t  -  150 second
>> ./tests/basic/afr/self-heal.t  -  137 second
>> ./tests/bugs/core/bug-1421590-brick-mux-reuse-ports.t  -  131 second
>> ./tests/basic/tier/legacy-many.t  -  127 second
>> ./tests/basic/tier/tier.t  -  126 second
>> ./tests/basic/ec/ec-4-1.t  -  123 second
>> ./tests/basic/ec/ec-optimistic-changelog.t  -  111 second
>> ./tests/basic/afr/self-heald.t  -  109 second
>> ./tests/basic/ec/ec-3-1.t  -  95 second
>> ./tests/basic/volume-snapshot-clone.t  -  88 second
>> ./tests/bugs/core/bug-1110917.t  -  85 second
>> ./tests/basic/tier/new-tier-cmds.t  -  83 second
>> ./tests/basic/afr/split-brain-heal-info.t  -  82 second
>> ./tests/bugs/cli/bug-1320388.t  -  81 second
>> ./tests/basic/ec/heal-info.t  -  80 second
>> ./tests/basic/ec/ec-new-entry.t  -  79 second
>> ./tests/basic/ec/ec-background-heals.t  -  77 second
>> ./tests/basic/afr/split-brain-healing.t  -  76 second
>> ./tests/basic/afr/metadata-self-heal.t  -  74 second
>> ./tests/basic/ec/ec-notify.t  -  73 second
>> ./tests/basic/afr/sparse-file-self-heal.t  -  73