[Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #3805

2018-01-12 Thread jenkins
See 


Changes:

[Raghavendra G] cluster/dht: Update options for gd2

[Pranith Kumar K] tests: check volume status for shd being up

[Amar Tumballi] rpc: use export map to minimize exported symbols in libgf{rpc,xdr}.so

--
[...truncated 1.12 MB...]
./tests/bugs/fuse/bug-985074.t  -  10 second
./tests/bugs/cli/bug-1030580.t  -  10 second
./tests/basic/stats-dump.t  -  10 second
./tests/basic/quota-nfs.t  -  10 second
./tests/basic/quota_aux_mount.t  -  10 second
./tests/basic/md-cache/bug-1317785.t  -  10 second
./tests/basic/glusterd/arbiter-volume-probe.t  -  10 second
./tests/basic/gfapi/gfapi-dup.t  -  10 second
./tests/basic/gfapi/bug-1241104.t  -  10 second
./tests/basic/gfapi/anonymous_fd.t  -  10 second
./tests/performance/open-behind.t  -  9 second
./tests/features/ssl-authz.t  -  9 second
./tests/bugs/upcall/bug-1458127.t  -  9 second
./tests/bugs/upcall/bug-1227204.t  -  9 second
./tests/bugs/transport/bug-873367.t  -  9 second
./tests/bugs/replicate/bug-1325792.t  -  9 second
./tests/bugs/glusterd/bug-1121584-brick-existing-validation-for-remove-brick-status-stop.t  -  9 second
./tests/bugs/glusterd/bug-1104642.t  -  9 second
./tests/bugs/distribute/bug-1086228.t  -  9 second
./tests/bugs/changelog/bug-1208470.t  -  9 second
./tests/basic/volume-status.t  -  9 second
./tests/basic/ios-dump.t  -  9 second
./tests/basic/gfapi/upcall-cache-invalidate.t  -  9 second
./tests/basic/gfapi/libgfapi-fini-hang.t  -  9 second
./tests/basic/gfapi/glfs_xreaddirplus_r.t  -  9 second
./tests/basic/gfapi/glfd-lkowner.t  -  9 second
./tests/basic/fop-sampling.t  -  9 second
./tests/basic/ec/ec-read-policy.t  -  9 second
./tests/basic/ec/ec-anonymous-fd.t  -  9 second
./tests/gfid2path/get-gfid-to-path.t  -  8 second
./tests/features/readdir-ahead.t  -  8 second
./tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t  -  8 second
./tests/bugs/snapshot/bug-1260848.t  -  8 second
./tests/bugs/shard/bug-1260637.t  -  8 second
./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t  -  8 second
./tests/bugs/replicate/bug-1365455.t  -  8 second
./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t  -  8 second
./tests/bugs/quota/bug-1243798.t  -  8 second
./tests/bugs/posix/bug-1360679.t  -  8 second
./tests/bugs/posix/bug-1175711.t  -  8 second
./tests/bugs/posix/bug-1122028.t  -  8 second
./tests/bugs/io-cache/bug-858242.t  -  8 second
./tests/bugs/glusterd/bug-949930.t  -  8 second
./tests/bugs/glusterd/bug-888752.t  -  8 second
./tests/bugs/glusterd/bug-1499509-disconnect-in-brick-mux.t  -  8 second
./tests/bugs/glusterd/bug-1223213-peerid-fix.t  -  8 second
./tests/bugs/glusterd/bug-1094119-remove-replace-brick-support-from-glusterd.t  -  8 second
./tests/bugs/glusterd/bug-1046308.t  -  8 second
./tests/bugs/ec/bug-1179050.t  -  8 second
./tests/bugs/cli/bug-1087487.t  -  8 second
./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  8 second
./tests/bitrot/br-stub.t  -  8 second
./tests/basic/tier/ctr-rename-overwrite.t  -  8 second
./tests/basic/afr/arbiter-remove-brick.t  -  8 second
./tests/gfid2path/block-mount-access.t  -  7 second
./tests/bugs/replicate/bug-966018.t  -  7 second
./tests/bugs/replicate/bug-767585-gfid.t  -  7 second
./tests/bugs/replicate/bug-1250170-fsync.t  -  7 second
./tests/bugs/quota/bug-1104692.t  -  7 second
./tests/bugs/md-cache/bug-1211863.t  -  7 second
./tests/bugs/glusterfs/bug-848251.t  -  7 second
./tests/bugs/glusterd/bug-948729/bug-948729.t  -  7 second
./tests/bugs/glusterd/bug-948729/bug-948729-force.t  -  7 second
./tests/bugs/glusterd/bug-1482906-peer-file-blank-line.t  -  7 second
./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t  -  7 second
./tests/bugs/glusterd/bug-1179175-uss-option-validation.t  -  7 second
./tests/bugs/ec/bug-1227869.t  -  7 second
./tests/bugs/distribute/bug-1088231.t  -  7 second
./tests/bugs/core/bug-986429.t  -  7 second
./tests/bugs/core/bug-908146.t  -  7 second
./tests/bugs/bug-1371806_2.t  -  7 second
./tests/bugs/bug-1258069.t  -  7 second
./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t  -  7 second
./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t  -  7 second
./tests/basic/afr/gfid-heal.t  -  7 second
./tests/gfid2path/gfid2path_nfs.t  -  6 second
./tests/features/delay-gen.t  -  6 second
./tests/bugs/upcall/bug-upcall-stat.t  -  6 second
./tests/bugs/upcall/bug-1369430.t  -  6 second
./tests/bugs/snapshot/bug-1064768.t  -  6 second
./tests/bugs/shard/bug-1258334.t  -  6 second
./tests/bugs/shard/bug-1256580.t  -  6 second
./tests/bugs/rpc/bug-954057.t  -  6 second
./tests/bugs/replicate/bug-1101647.t  -  6 second
./tests/bugs/quota/bug-1287996.t  -  6 second
./tests/bugs/nfs/bug-915280.t  -  6 second
./tests/bugs/nfs/bug-877885.t  -  6 second
./tests/bugs/nfs/bug-847622.t  -  6 second

[Gluster-Maintainers] Build failed in Jenkins: experimental-periodic #199

2018-01-12 Thread jenkins
See 

--
[...truncated 1.58 MB...]
+ rm -f /build/install/cores/gdbout.txt
+ gdb -c /build/install/cores/glusterepoll1-30676.core -q -ex 'set pagination off' -ex 'info sharedlibrary' -ex q
+ set +x
+ rm -f /build/install/cores/gdbout.txt
+ for corefile in '$CORELIST'
+ getliblistfromcore /build/install/cores/glusterepoll1-31326.core
+ rm -f /build/install/cores/gdbout.txt
+ gdb -c /build/install/cores/glusterepoll1-31326.core -q -ex 'set pagination off' -ex 'info sharedlibrary' -ex q
+ set +x
+ rm -f /build/install/cores/gdbout.txt
+ for corefile in '$CORELIST'
+ getliblistfromcore /build/install/cores/glusterepoll1-31351.core
+ rm -f /build/install/cores/gdbout.txt
+ gdb -c /build/install/cores/glusterepoll1-31351.core -q -ex 'set pagination off' -ex 'info sharedlibrary' -ex q
+ set +x
+ rm -f /build/install/cores/gdbout.txt
+ for corefile in '$CORELIST'
+ getliblistfromcore /build/install/cores/glusterepoll1-6529.core
+ rm -f /build/install/cores/gdbout.txt
+ gdb -c /build/install/cores/glusterepoll1-6529.core -q -ex 'set pagination off' -ex 'info sharedlibrary' -ex q
+ set +x
+ rm -f /build/install/cores/gdbout.txt
+ for corefile in '$CORELIST'
+ getliblistfromcore /build/install/cores/glusterepoll1-7466.core
+ rm -f /build/install/cores/gdbout.txt
+ gdb -c /build/install/cores/glusterepoll1-7466.core -q -ex 'set pagination off' -ex 'info sharedlibrary' -ex q
+ set +x
+ rm -f /build/install/cores/gdbout.txt
+ for corefile in '$CORELIST'
+ getliblistfromcore /build/install/cores/glusterepoll1-7490.core
+ rm -f /build/install/cores/gdbout.txt
+ gdb -c /build/install/cores/glusterepoll1-7490.core -q -ex 'set pagination off' -ex 'info sharedlibrary' -ex q
+ set +x
+ rm -f /build/install/cores/gdbout.txt
+ for corefile in '$CORELIST'
+ getliblistfromcore /build/install/cores/glusterepoll1-8317.core
+ rm -f /build/install/cores/gdbout.txt
+ gdb -c /build/install/cores/glusterepoll1-8317.core -q -ex 'set pagination off' -ex 'info sharedlibrary' -ex q
+ set +x
+ rm -f /build/install/cores/gdbout.txt
+ for corefile in '$CORELIST'
+ getliblistfromcore /build/install/cores/glusterepoll1-9114.core
+ rm -f /build/install/cores/gdbout.txt
+ gdb -c /build/install/cores/glusterepoll1-9114.core -q -ex 'set pagination off' -ex 'info sharedlibrary' -ex q
+ set +x
+ rm -f /build/install/cores/gdbout.txt
+ for corefile in '$CORELIST'
+ getliblistfromcore /build/install/cores/glusterepoll1-9139.core
+ rm -f /build/install/cores/gdbout.txt
+ gdb -c /build/install/cores/glusterepoll1-9139.core -q -ex 'set pagination off' -ex 'info sharedlibrary' -ex q
+ set +x
+ rm -f /build/install/cores/gdbout.txt
+ for corefile in '$CORELIST'
+ getliblistfromcore /build/install/cores/glusterepoll1-9213.core
+ rm -f /build/install/cores/gdbout.txt
+ gdb -c /build/install/cores/glusterepoll1-9213.core -q -ex 'set pagination off' -ex 'info sharedlibrary' -ex q
+ set +x
+ rm -f /build/install/cores/gdbout.txt
+ for corefile in '$CORELIST'
+ getliblistfromcore /build/install/cores/glusterepoll1-9237.core
+ rm -f /build/install/cores/gdbout.txt
+ gdb -c /build/install/cores/glusterepoll1-9237.core -q -ex 'set pagination off' -ex 'info sharedlibrary' -ex q
+ set +x
+ rm -f /build/install/cores/gdbout.txt
+ for corefile in '$CORELIST'
+ getliblistfromcore /build/install/cores/glusterepoll1-9354.core
+ rm -f /build/install/cores/gdbout.txt
+ gdb -c /build/install/cores/glusterepoll1-9354.core -q -ex 'set pagination off' -ex 'info sharedlibrary' -ex q
+ set +x
+ rm -f /build/install/cores/gdbout.txt
+ for corefile in '$CORELIST'
+ getliblistfromcore /build/install/cores/glustersproc0-14613.core
+ rm -f /build/install/cores/gdbout.txt
+ gdb -c /build/install/cores/glustersproc0-14613.core -q -ex 'set pagination off' -ex 'info sharedlibrary' -ex q
+ set +x
+ rm -f /build/install/cores/gdbout.txt
+ for corefile in '$CORELIST'
+ getliblistfromcore /build/install/cores/glustersproc0-17965.core
+ rm -f /build/install/cores/gdbout.txt
+ gdb -c /build/install/cores/glustersproc0-17965.core -q -ex 'set pagination off' -ex 'info sharedlibrary' -ex q
+ set +x
+ rm -f /build/install/cores/gdbout.txt
+ for corefile in '$CORELIST'
+ getliblistfromcore /build/install/cores/glustersproc0-17999.core
+ rm -f /build/install/cores/gdbout.txt
+ gdb -c /build/install/cores/glustersproc0-17999.core -q -ex 'set pagination off' -ex 'info sharedlibrary' -ex q
+ set +x
+ rm -f /build/install/cores/gdbout.txt
+ for corefile in '$CORELIST'
+ getliblistfromcore /build/install/cores/glustersproc0-19042.core
+ rm -f /build/install/cores/gdbout.txt
+ gdb -c /build/install/cores/glustersproc0-19042.core -q -ex 'set pagination off' -ex 'info sharedlibrary' -ex q
+ set +x
+ rm -f /build/install/cores/gdbout.txt
+ for corefile in '$CORELIST'
+ getliblistfromcore /build/install/cores/glustersproc0-20074.core
+ rm -f /build/install/cores/gdbout.txt
+ gdb -c 
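For readers of the truncated log above: the repeated trace is a loop over $CORELIST that asks gdb, in batch mode, which shared libraries each core file had mapped. A plausible shape of the getliblistfromcore helper, reconstructed from the trace for illustration only (the actual CI script may differ, e.g. in how gdbout.txt is consumed between calls):

```shell
# Directory and filenames taken from the trace above.
CORES_DIR=/build/install/cores

# Batch-mode gdb: open the core, disable pagination so output is not
# interrupted, dump the shared-library list, then quit.
getliblistfromcore() {
    rm -f "$CORES_DIR/gdbout.txt"
    gdb -c "$1" -q \
        -ex 'set pagination off' \
        -ex 'info sharedlibrary' \
        -ex q > "$CORES_DIR/gdbout.txt"
}
```

The trace shows gdbout.txt being removed both before and after each call, so its contents are presumably read in between, in a part of the script the truncation hides.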

Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-3.12.5 released

2018-01-12 Thread Kaleb S. KEITHLEY
On 01/12/2018 12:47 PM, Kaleb S. KEITHLEY wrote:
> On 01/12/2018 09:13 AM, jenk...@build.gluster.org wrote:
>> SRC: 
>> https://build.gluster.org/job/release-new/37/artifact/glusterfs-3.12.5.tar.gz
>> HASH: 
>> https://build.gluster.org/job/release-new/37/artifact/glusterfs-3.12.5.sha512sum
>>
>> This release is made off jenkins-release-37
> 
> Packages for:
> 
> * Fedora 27 are in the Fedora Updates or Updates-Testing repo. Use `dnf`
> to install. Fedora 26 is on download.gluster.org at [1]. (Fedora 28
> packages will come later pending rpcgen resolution.)
> 
> * Debian amd64 Jessie/8, Stretch/9, and Buster/10(Sid) are on
> download.gluster.org at [1] (arm64 are coming in a day or two.)
> 
> * Ubuntu Xenial/16.04, Zesty/17.04, Artful/17.10, and Bionic/18.04 are
> on Launchpad at [2]
> 
> * SuSE SLES12SP3, Leap42.3, and Tumbleweed are on OpenSuSE Build Service
> at [3].
> 
> The .../LTM-3.12 -> .../3.12/3.12.5 and .../3.12/LATEST ->
> .../3.12/3.12.5 symlinks have been updated.
> 

[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs

-- 

Kaleb
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-3.12.5 released

2018-01-12 Thread Kaleb S. KEITHLEY
On 01/12/2018 09:13 AM, jenk...@build.gluster.org wrote:
> SRC: 
> https://build.gluster.org/job/release-new/37/artifact/glusterfs-3.12.5.tar.gz
> HASH: 
> https://build.gluster.org/job/release-new/37/artifact/glusterfs-3.12.5.sha512sum
> 
> This release is made off jenkins-release-37

Packages for:

* Fedora 27 are in the Fedora Updates or Updates-Testing repo. Use `dnf`
to install. Fedora 26 is on download.gluster.org at [1]. (Fedora 28
packages will come later pending rpcgen resolution.)

* Debian amd64 Jessie/8, Stretch/9, and Buster/10(Sid) are on
download.gluster.org at [1] (arm64 are coming in a day or two.)

* Ubuntu Xenial/16.04, Zesty/17.04, Artful/17.10, and Bionic/18.04 are
on Launchpad at [2]

* SuSE SLES12SP3, Leap42.3, and Tumbleweed are on OpenSuSE Build Service
at [3].

The .../LTM-3.12 -> .../3.12/3.12.5 and .../3.12/LATEST ->
.../3.12/3.12.5 symlinks have been updated.

-- 

Kaleb


[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #519

2018-01-12 Thread jenkins
See 


Changes:

[Raghavendra G] cluster/dht: Update options for gd2

[Pranith Kumar K] tests: check volume status for shd being up

[Amar Tumballi] rpc: use export map to minimize exported symbols in libgf{rpc,xdr}.so

--
[...truncated 276.06 KB...]
./tests/basic/afr/granular-esh/granular-indices-but-non-granular-heal.t .. 
1..29
ok 1, LINENUM:11
ok 2, LINENUM:12
ok 3, LINENUM:14
ok 4, LINENUM:15
ok 5, LINENUM:16
ok 6, LINENUM:17
ok 7, LINENUM:18
ok 8, LINENUM:19
ok 9, LINENUM:20
ok 10, LINENUM:22
ok 11, LINENUM:25
ok 12, LINENUM:34
ok 13, LINENUM:39
ok 14, LINENUM:39
ok 15, LINENUM:43
ok 16, LINENUM:46
ok 17, LINENUM:47
ok 18, LINENUM:48
ok 19, LINENUM:51
ok 20, LINENUM:52
ok 21, LINENUM:53
ok 22, LINENUM:54
ok 23, LINENUM:60
ok 24, LINENUM:63
ok 25, LINENUM:68
ok 26, LINENUM:68
ok 27, LINENUM:72
ok 28, LINENUM:73
ok 29, LINENUM:74
ok
All tests successful.
Files=1, Tests=29, 24 wallclock secs ( 0.05 usr -0.01 sys +  1.94 cusr  3.17 csys =  5.15 CPU)
Result: PASS
End of test ./tests/basic/afr/granular-esh/granular-indices-but-non-granular-heal.t




[14:22:54] Running tests in file ./tests/basic/afr/granular-esh/replace-brick.t
./tests/basic/afr/granular-esh/replace-brick.t .. 
1..34
ok 1, LINENUM:7
ok 2, LINENUM:8
ok 3, LINENUM:9
ok 4, LINENUM:10
ok 5, LINENUM:11
ok 6, LINENUM:12
ok 7, LINENUM:13
ok 8, LINENUM:14
ok 9, LINENUM:15
ok 10, LINENUM:17
ok 11, LINENUM:26
ok 12, LINENUM:29
ok 13, LINENUM:32
ok 14, LINENUM:35
ok 15, LINENUM:38
ok 16, LINENUM:41
ok 17, LINENUM:43
ok 18, LINENUM:44
ok 19, LINENUM:46
ok 20, LINENUM:47
ok 21, LINENUM:48
ok 22, LINENUM:49
ok 23, LINENUM:50
ok 24, LINENUM:53
ok 25, LINENUM:56
ok 26, LINENUM:59
ok 27, LINENUM:60
ok 28, LINENUM:63
ok 29, LINENUM:65
ok 30, LINENUM:68
ok 31, LINENUM:69
ok 32, LINENUM:71
ok 33, LINENUM:72
ok 34, LINENUM:73
ok
All tests successful.
Files=1, Tests=34, 25 wallclock secs ( 0.04 usr  0.00 sys +  2.11 cusr  3.27 csys =  5.42 CPU)
Result: PASS
End of test ./tests/basic/afr/granular-esh/replace-brick.t




[14:23:19] Running tests in file ./tests/basic/afr/heal-info.t
./tests/basic/afr/heal-info.t .. 
1..9
ok 1, LINENUM:21
ok 2, LINENUM:22
ok 3, LINENUM:23
ok 4, LINENUM:24
ok 5, LINENUM:25
ok 6, LINENUM:26
ok 7, LINENUM:27
ok 8, LINENUM:33
ok 9, LINENUM:34
ok
All tests successful.
Files=1, Tests=9, 31 wallclock secs ( 0.05 usr  0.00 sys +  2.91 cusr  4.76 csys =  7.72 CPU)
Result: PASS
End of test ./tests/basic/afr/heal-info.t




[14:23:50] Running tests in file ./tests/basic/afr/heal-quota.t
touch: /mnt/glusterfs/0/b: Socket is not connected
dd: block size `1M': illegal number
cat: /proc/15082/cmdline: No such file or directory
Usage: gf_attach uds_path volfile_path (to attach)
   gf_attach -d uds_path brick_path (to detach)
dd: block size `1M': illegal number
./tests/basic/afr/heal-quota.t .. 
1..19
ok 1, LINENUM:10
ok 2, LINENUM:11
ok 3, LINENUM:12
ok 4, LINENUM:13
ok 5, LINENUM:14
ok 6, LINENUM:16
ok 7, LINENUM:17
ok 8, LINENUM:18
ok 9, LINENUM:19
ok 10, LINENUM:20
not ok 11 , LINENUM:22
FAILED COMMAND: touch /mnt/glusterfs/0/a /mnt/glusterfs/0/b
ok 12, LINENUM:24
ok 13, LINENUM:26
ok 14, LINENUM:27
ok 15, LINENUM:28
ok 16, LINENUM:29
ok 17, LINENUM:30
ok 18, LINENUM:32
ok 19, LINENUM:33
Failed 1/19 subtests 

Test Summary Report
---
./tests/basic/afr/heal-quota.t (Wstat: 0 Tests: 19 Failed: 1)
  Failed test:  11
Files=1, Tests=19, 21 wallclock secs ( 0.05 usr  0.01 sys +  1.66 cusr  3.06 csys =  4.78 CPU)
Result: FAIL
./tests/basic/afr/heal-quota.t: bad status 1

   *
   *   REGRESSION FAILED   *
   * Retrying failed tests in case *
   * we got some spurious failures *
   *

touch: /mnt/glusterfs/0/b: Socket is not connected
dd: block size `1M': illegal number
cat: /proc/27386/cmdline: No such file or directory
Usage: gf_attach uds_path volfile_path (to attach)
   gf_attach -d uds_path brick_path (to detach)
dd: block size `1M': illegal number
./tests/basic/afr/heal-quota.t .. 
1..19
ok 1, LINENUM:10
ok 2, LINENUM:11
ok 3, LINENUM:12
ok 4, LINENUM:13
ok 5, LINENUM:14
ok 6, LINENUM:16
ok 7, LINENUM:17
ok 8, LINENUM:18
ok 9, LINENUM:19
ok 10, LINENUM:20
not ok 11 , LINENUM:22
FAILED COMMAND: touch /mnt/glusterfs/0/a /mnt/glusterfs/0/b
ok 12, LINENUM:24
ok 13, LINENUM:26
ok 14, LINENUM:27
ok 15, LINENUM:28
ok 16, LINENUM:29
ok 17, 

[Gluster-Maintainers] glusterfs-3.12.5 released

2018-01-12 Thread jenkins
SRC: 
https://build.gluster.org/job/release-new/37/artifact/glusterfs-3.12.5.tar.gz
HASH: 
https://build.gluster.org/job/release-new/37/artifact/glusterfs-3.12.5.sha512sum
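For anyone consuming the artifacts: the HASH file can be checked with `sha512sum -c` (GNU coreutils). A sketch on a stand-in file; the real check would download the tarball from the SRC URL and the .sha512sum from the HASH URL above, then run the same command:

```shell
cd "$(mktemp -d)"
# Stand-in for the downloaded tarball (the real one comes from the SRC URL).
printf 'demo payload\n' > glusterfs-3.12.5.tar.gz
# The published .sha512sum file is produced the same way on the release box.
sha512sum glusterfs-3.12.5.tar.gz > glusterfs-3.12.5.sha512sum
# Verify the file against the checksum list.
sha512sum -c glusterfs-3.12.5.sha512sum
# → glusterfs-3.12.5.tar.gz: OK
```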

This release is made off jenkins-release-37


Re: [Gluster-Maintainers] Mailinglist admins wanted

2018-01-12 Thread Atin Mukherjee
I can take that up if that helps..

On Fri, 12 Jan 2018 at 16:36, Niels de Vos  wrote:

> Hi,
>
> It seems that during the last weeks neither Vijay, Jeff (with an incorrect
> email address), nor I had time to review/approve/reject emails sent to
> this list. Adding one or two additional moderators would be good, who's
> volunteering for that?
>
> Thanks,
> Niels
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
>
-- 
- Atin (atinm)


Re: [Gluster-Maintainers] Release 4.0: Making it happen! (Protocol changes and wireshark)

2018-01-12 Thread Niels de Vos
On Wed, Jan 10, 2018 at 03:36:55PM -0500, Shyam Ranganathan wrote:
> Hi,
> 
> As we are introducing a new protocol version, the existing gluster
> wireshark plugin [1] needs to be updated.
> 
> Further this needs to get released to wireshark users in some fashion,
> which looks like a need to follow wireshark roadmap [2] (not sure if
> this can be part of a maintenance release, which would possibly be based
> on the quantum of changes etc.).
> 
> This need not happen with 4.0 branching, but at least has to be
> completed before 4.0 release.
> 
> @niels: once the protocol changes are complete, would you be able to
> finish this in the next six-odd weeks, by the release (end of Feb)? Or,
> if we need volunteers, please give a shout out here.

Adding the new bits to the Wireshark dissector is pretty straight
forward. Once the protocol changes have been done, it would be good to
have a few .pcap files captured that can be used for developing and
testing the changes. This can even be done in steps, as soon as one
chunk of the protocol is finalized, a patch to upstream Wireshark can be
sent already. We can improve it incrementally that way, also making it
easier for multiple contributors to work on it.

I can probably do some of the initial work, but would like assistance
from others with testing and possibly improving certain parts. If
someone can provide tcpdumps with updated protocol changes, that would
be most welcome! Capture the dumps like this:

  # tcpdump -i any -w /var/tmp/gluster-40-${proto_change}.pcap -s 0 tcp and not port 22
  ... exercise the protocol bit that changed, include connection setup
  ... press CTRL+C once done
  # gzip /var/tmp/gluster-40-${proto_change}.pcap
  ... Wireshark can read .pcap.gz without manual decompressing
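For whoever picks up the dissector work: a quick way to confirm a contributed capture really is a classic pcap file (the a1b2c3d4 magic, stored little-endian as bytes d4 c3 b2 a1) rather than pcapng (which starts 0a 0d 0d 0a) is to inspect the first bytes. A sketch on a stand-in file with illustrative header bytes:

```shell
cd "$(mktemp -d)"
# Stand-in capture: classic-pcap magic (little-endian) plus version 2.4.
printf '\xd4\xc3\xb2\xa1\x02\x00\x04\x00' > demo.pcap
# Dump the first four bytes; d4 c3 b2 a1 identifies classic pcap.
od -An -tx1 -N4 demo.pcap
```

A real capture from the tcpdump command above would be checked the same way before attaching it to the issue.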

Attach the .pcap.gz to the GitHub issue for the protocol change and
email gluster-devel@ once it is available so that a developer can start
working on the Wireshark change.

Thanks,
Niels


> 
> Shyam
> 
> [1] Gluster wireshark plugin:
> https://code.wireshark.org/review/gitweb?p=wireshark.git;a=tree;f=epan/dissectors;h=8c8303285a204bdff3b8b80e2811dcd9b7ab6fe0;hb=HEAD
> 
> [2] Wireshark roadmap: https://wiki.wireshark.org/Development/Roadmap
> 


Re: [Gluster-Maintainers] [gluster-packaging] Release 4.0: Making it happen! (GlusterD2)

2018-01-12 Thread Niels de Vos
On Thu, Jan 11, 2018 at 12:14:20PM -0500, Kaleb S. KEITHLEY wrote:
> On 01/11/2018 11:34 AM, Shyam Ranganathan wrote:
> >>>
> >>> One thing not covered above is what happens when GD2 fixes a high priority
> >>> bug between releases of glusterfs.
> >>>
> >>> Once option is we wait until the next release of glusterfs to include the
> >>> update to GD2.
> >>>
> >>> Or we can respin (rerelease) the glusterfs packages with the updated GD2.
> >>> I.e. glusterfs-4.0.0-1 (containing GD2-1.0.0) -> glusterfs-4.0.0-2
> >>> (containing GD2-1.0.1).
> >>>
> >>> Or we can decide not to make a hard rule and do whatever makes the most
>>> sense at the time. If the fix is urgent, we respin. If the fix is not urgent
> >>> it waits for the next Gluster release. (From my perspective though I'd
> >>> rather not do respins, I've already got plenty of work doing the regular
> >>> releases.)
> > 
> > I would think we follow what we need to do for the gluster package (and
> > its sub-packages) as it stands now. If there is an important enough fix
> > (critical/security etc.) that requires a one-off build (ie. not a
> > maintenance release or a regular release) we respin the whole thing
> > (which is more work).
> > 
> > I think if it is a GD2 specific fix then just re-spinning that
> > sub-package makes more sense and possibly less work.
> 
> RPM (and Debian) packaging is an all or nothing proposition. There is no
> respinning just the -glusterd2 sub-package.
> 
> > I am going to leave the decision of re-spinning the whole thing or just
> > the GD2 package to the packaging folks, but state that re-spin rules do
> > not change, IOW, if something is critical enough we re-spin as we do today.
> 
> I think my real question was what should happen when GD2 discovers/fixes
> a severe bug between the regular release dates.
> 
> If we take the decision that it needs to be released immediately (with
> packages built), do we:
> 
>   a) make a whole new glusterfs Release with just the GD2 fix. I.e.
> glusterfs-4.0.4-1.rpm  ->  glusterfs-4.0.5-1.rpm. IOW we bump the _V_ in
> the NVR? (This implies tagging the glusterfs source with the new tag at
> the same location as the previous tag.)
> 
> or
> 
>   b) "respin" the existing glusterfs release, also with just the GD2
> fix. I.e. glusterfs-4.0.4-1.rpm  ->  glusterfs-4.0.4-2.rpm. IOW we bump
> the _R_ in the NVR?
> 
> 
> Obviously (or is it?) if we find serious bugs in core gluster and GD2
> that we want to release we can update both and that would be a new
> Version (_V_ in the NVR).

I think it will be really painful to maintain a .spec that has the
current (already very complex) glusterfs bits, and the new GD2
components. Packaging Golang is quite different from anything written in
C, and will make a mixed-language .spec very ugly. (Remember the
difficulties with the gluster-swift/ufo bundling?)

If GD2 evolves at a different rate than glusterfs, it seems better to
package it separately. This will make it possible to update it more
often if needed. Maintaining the packages will also be simpler. Because
GD2 is supposed to be less intimate with the glusterfs internals, there
may come a point where the GD2 version does not need to match the
glusterfs version anymore.

Keeping the same major versions would be good, and that makes it easy to
set the correct Requires: in the .spec files.
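Either way, package managers must see the new build as an upgrade: option (a) bumps the Version (4.0.4-1 -> 4.0.5-1), option (b) bumps the Release (4.0.4-1 -> 4.0.4-2). A quick illustration of the resulting ordering; `sort -V` (GNU version sort) approximates RPM's ordering for these simple numeric NVRs, while the real comparison is rpmvercmp, which also handles letters and tildes:

```shell
# Feed the three candidate NVRs in scrambled order; version sort
# orders them oldest to newest.
printf '%s\n' glusterfs-4.0.4-2 glusterfs-4.0.5-1 glusterfs-4.0.4-1 | sort -V
# → glusterfs-4.0.4-1
# → glusterfs-4.0.4-2
# → glusterfs-4.0.5-1
```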

Packaging GD2 by itself for Fedora should not be a problem. There are
several package maintainers in the Gluster Community and all can
propose, review and approve new packages. If two packages is the right
technical approach, we should work to make that happen.

Niels


[Gluster-Maintainers] Mailinglist admins wanted

2018-01-12 Thread Niels de Vos
Hi,

It seems that during the last weeks neither Vijay, Jeff (with an incorrect
email address), nor I had time to review/approve/reject emails sent to
this list. Adding one or two additional moderators would be good, who's
volunteering for that?

Thanks,
Niels


[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #509

2018-01-12 Thread jenkins
See 

--
[...truncated 252.08 KB...]
ok 3, LINENUM:67
ok 4, LINENUM:68
ok 5, LINENUM:69
ok 6, LINENUM:70
ok 7, LINENUM:71
ok 8, LINENUM:72
ok 9, LINENUM:73
ok 10, LINENUM:74
ok 11, LINENUM:76
ok 12, LINENUM:77
ok 13, LINENUM:78
ok 14, LINENUM:79
ok 15, LINENUM:80
ok 16, LINENUM:82
ok 17, LINENUM:84
ok 18, LINENUM:85
ok 19, LINENUM:87
ok 20, LINENUM:88
ok 21, LINENUM:90
ok 22, LINENUM:91
ok 23, LINENUM:92
ok 24, LINENUM:94
ok 25, LINENUM:95
ok 26, LINENUM:96
ok 27, LINENUM:98
ok 28, LINENUM:99
ok 29, LINENUM:100
ok 30, LINENUM:101
ok 31, LINENUM:103
ok 32, LINENUM:104
ok 33, LINENUM:105
ok 34, LINENUM:106
ok 35, LINENUM:108
ok 36, LINENUM:109
ok 37, LINENUM:110
ok 38, LINENUM:111
ok 39, LINENUM:112
ok 40, LINENUM:113
ok 41, LINENUM:115
ok 42, LINENUM:116
ok 43, LINENUM:117
ok 44, LINENUM:118
ok 45, LINENUM:119
ok 46, LINENUM:121
ok 47, LINENUM:122
ok 48, LINENUM:123
ok 49, LINENUM:124
ok 50, LINENUM:125
ok 51, LINENUM:127
ok 52, LINENUM:128
ok 53, LINENUM:129
ok 54, LINENUM:130
ok 55, LINENUM:131
ok 56, LINENUM:132
ok 57, LINENUM:134
ok 58, LINENUM:135
ok 59, LINENUM:136
ok 60, LINENUM:138
ok 61, LINENUM:139
ok 62, LINENUM:140
ok 63, LINENUM:141
ok 64, LINENUM:142
ok 65, LINENUM:143
ok 66, LINENUM:145
ok 67, LINENUM:146
ok 68, LINENUM:147
ok 69, LINENUM:148
ok 70, LINENUM:149
ok 71, LINENUM:150
ok 72, LINENUM:152
ok 73, LINENUM:153
ok 74, LINENUM:154
ok 75, LINENUM:155
ok 76, LINENUM:160
ok 77, LINENUM:161

ok 78, LINENUM:167
ok 79, LINENUM:168
ok 80, LINENUM:169
ok 81, LINENUM:170
ok 82, LINENUM:171
ok 83, LINENUM:173
ok 84, LINENUM:174
ok 85, LINENUM:176
ok 86, LINENUM:177
ok 87, LINENUM:179
ok 88, LINENUM:180
ok 89, LINENUM:182
ok 90, LINENUM:183
ok 91, LINENUM:185
ok 92, LINENUM:186
ok 93, LINENUM:188
ok 94, LINENUM:189
ok 95, LINENUM:190
ok 96, LINENUM:192
ok 97, LINENUM:193
ok 98, LINENUM:195
ok 99, LINENUM:196
ok 100, LINENUM:198
ok 101, LINENUM:199
ok 102, LINENUM:201
ok 103, LINENUM:202
ok 104, LINENUM:204
ok 105, LINENUM:205
ok 106, LINENUM:207
ok 107, LINENUM:208
ok
All tests successful.
Files=1, Tests=107, 19 wallclock secs ( 0.05 usr  0.02 sys +  3.82 cusr  5.93 csys =  9.82 CPU)
Result: PASS
End of test ./tests/basic/afr/data-self-heal.t




[14:02:24] Running tests in file ./tests/basic/afr/durability-off.t
dd: /mnt/glusterfs/0/a.txt: Socket is not connected
./tests/basic/afr/durability-off.t .. 
1..29
ok 1, LINENUM:10
ok 2, LINENUM:11
ok 3, LINENUM:12
ok 4, LINENUM:13
ok 5, LINENUM:14
ok 6, LINENUM:15
ok 7, LINENUM:16
ok 8, LINENUM:17
ok 9, LINENUM:18
ok 10, LINENUM:20
ok 11, LINENUM:21
ok 12, LINENUM:22
ok 13, LINENUM:23
ok 14, LINENUM:24
ok 15, LINENUM:25
ok 16, LINENUM:26
ok 17, LINENUM:27
ok 18, LINENUM:30
ok 19, LINENUM:31
ok 20, LINENUM:32
not ok 21 , LINENUM:33
FAILED COMMAND: dd of=/mnt/glusterfs/0/a.txt if=/dev/zero bs=1024k count=1
ok 22, LINENUM:35
ok 23, LINENUM:36
ok 24, LINENUM:37
ok 25, LINENUM:38
ok 26, LINENUM:39
ok 27, LINENUM:40
ok 28, LINENUM:41
not ok 29 Got "0" instead of "^2$", LINENUM:42
FAILED COMMAND: ^2$ echo 0
Failed 2/29 subtests 

Test Summary Report
---
./tests/basic/afr/durability-off.t (Wstat: 0 Tests: 29 Failed: 2)
  Failed tests:  21, 29
Files=1, Tests=29, 35 wallclock secs ( 0.03 usr  0.01 sys +  2.11 cusr  3.06 csys =  5.21 CPU)
Result: FAIL
./tests/basic/afr/durability-off.t: bad status 1

   *
   *   REGRESSION FAILED   *
   * Retrying failed tests in case *
   * we got some spurious failures *
   *

dd: /mnt/glusterfs/0/a.txt: Socket is not connected
./tests/basic/afr/durability-off.t .. 
1..29
ok 1, LINENUM:10
ok 2, LINENUM:11
ok 3, LINENUM:12
ok 4, LINENUM:13
ok 5, LINENUM:14
ok 6, LINENUM:15
ok 7, LINENUM:16
ok 8, LINENUM:17
ok 9, LINENUM:18
ok 10, LINENUM:20
ok 11, LINENUM:21
ok 12, LINENUM:22
ok 13, LINENUM:23
ok 14, LINENUM:24
ok 15, LINENUM:25
ok 16, LINENUM:26
ok 17, LINENUM:27
ok 18, LINENUM:30
ok 19, LINENUM:31
ok 20, LINENUM:32
not ok 21 , LINENUM:33
FAILED COMMAND: dd of=/mnt/glusterfs/0/a.txt if=/dev/zero bs=1024k count=1
ok 22, LINENUM:35
ok 23, LINENUM:36
ok 24, LINENUM:37
ok 25, LINENUM:38
ok 26, LINENUM:39
ok 27, LINENUM:40
ok 28, LINENUM:41
not ok 29 Got "0" instead of "^2$", LINENUM:42
FAILED COMMAND: ^2$ echo 0
Failed 2/29 subtests 

Test Summary Report
---
./tests/basic/afr/durability-off.t (Wstat: 0 Tests: 29 Failed: 2)
  Failed tests:  21, 29
Files=1, Tests=29, 31 wallclock secs ( 0.04 usr  0.02 sys +  2.15 cusr  2.79 csys =  5.00 CPU)
Result: FAIL
End of test ./tests/basic/afr/durability-off.t



Run complete

[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #593

2018-01-12 Thread jenkins
See 


Changes:

[Raghavendra G] Revert "rpc: merge ssl infra with epoll infra"

[atin] dict: fix VALIDATE_DATA_AND_LOG call

--
[...truncated 1.13 MB...]
./tests/bugs/glusterd/bug-949930.t  -  8 second
./tests/bugs/glusterd/bug-1293414-import-brickinfo-uuid.t  -  8 second
./tests/bugs/glusterd/bug-1104642.t  -  8 second
./tests/bugs/glusterd/bug-1046308.t  -  8 second
./tests/bugs/gfapi/bug-1447266/1460514.t  -  8 second
./tests/bugs/distribute/bug-1122443.t  -  8 second
./tests/bugs/cli/bug-1087487.t  -  8 second
./tests/bugs/cli/bug-1022905.t  -  8 second
./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  8 second
./tests/bugs/bitrot/1209752-volume-status-should-show-bitrot-scrub-info.t  -  8 second
./tests/basic/stats-dump.t  -  8 second
./tests/basic/quota_aux_mount.t  -  8 second
./tests/basic/mgmt_v3-locks.t  -  8 second
./tests/basic/inode-quota-enforcing.t  -  8 second
./tests/basic/fop-sampling.t  -  8 second
./tests/basic/ec/ec-anonymous-fd.t  -  8 second
./tests/gfid2path/block-mount-access.t  -  7 second
./tests/features/readdir-ahead.t  -  7 second
./tests/bugs/upcall/bug-1458127.t  -  7 second
./tests/bugs/upcall/bug-1227204.t  -  7 second
./tests/bugs/transport/bug-873367.t  -  7 second
./tests/bugs/replicate/bug-1101647.t  -  7 second
./tests/bugs/quota/bug-1243798.t  -  7 second
./tests/bugs/posix/bug-1122028.t  -  7 second
./tests/bugs/md-cache/bug-1211863.t  -  7 second
./tests/bugs/glusterd/bug-859927.t  -  7 second
./tests/bugs/glusterd/bug-1499509-disconnect-in-brick-mux.t  -  7 second
./tests/bugs/glusterd/bug-1420637-volume-sync-fix.t  -  7 second
./tests/bugs/glusterd/bug-1323287-real_path-handshake-test.t  -  7 second
./tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t  -  7 second
./tests/bugs/ec/bug-1179050.t  -  7 second
./tests/bugs/changelog/bug-1208470.t  -  7 second
./tests/bugs/bug-1258069.t  -  7 second
./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t  -  7 second
./tests/bitrot/br-stub.t  -  7 second
./tests/basic/volume-status.t  -  7 second
./tests/basic/tier/ctr-rename-overwrite.t  -  7 second
./tests/basic/quota-nfs.t  -  7 second
./tests/basic/md-cache/bug-1317785.t  -  7 second
./tests/basic/glusterd/arbiter-volume-probe.t  -  7 second
./tests/basic/gfapi/upcall-cache-invalidate.t  -  7 second
./tests/basic/gfapi/libgfapi-fini-hang.t  -  7 second
./tests/basic/gfapi/glfs_xreaddirplus_r.t  -  7 second
./tests/basic/gfapi/glfd-lkowner.t  -  7 second
./tests/basic/gfapi/gfapi-dup.t  -  7 second
./tests/basic/gfapi/bug-1241104.t  -  7 second
./tests/basic/gfapi/anonymous_fd.t  -  7 second
./tests/basic/ec/ec-read-policy.t  -  7 second
./tests/basic/afr/gfid-heal.t  -  7 second
./tests/gfid2path/get-gfid-to-path.t  -  6 second
./tests/bugs/upcall/bug-1369430.t  -  6 second
./tests/bugs/snapshot/bug-1064768.t  -  6 second
./tests/bugs/shard/bug-1260637.t  -  6 second
./tests/bugs/shard/bug-1258334.t  -  6 second
./tests/bugs/replicate/bug-966018.t  -  6 second
./tests/bugs/replicate/bug-767585-gfid.t  -  6 second
./tests/bugs/replicate/bug-1365455.t  -  6 second
./tests/bugs/posix/bug-1175711.t  -  6 second
./tests/bugs/md-cache/afr-stale-read.t  -  6 second
./tests/bugs/io-cache/bug-read-hang.t  -  6 second
./tests/bugs/glusterfs/bug-893378.t  -  6 second
./tests/bugs/glusterfs/bug-856455.t  -  6 second
./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t  -  6 second
./tests/bugs/glusterd/bug-1179175-uss-option-validation.t  -  6 second
./tests/bugs/glusterd/bug-1109741-auth-mgmt-handshake.t  -  6 second
./tests/bugs/glusterd/bug-1102656.t  -  6 second
./tests/bugs/glusterd/bug-1094119-remove-replace-brick-support-from-glusterd.t  -  6 second
./tests/bugs/ec/bug-1227869.t  -  6 second
./tests/bugs/distribute/bug-1368012.t  -  6 second
./tests/bugs/distribute/bug-1088231.t  -  6 second
./tests/bugs/core/bug-986429.t  -  6 second
./tests/bugs/core/bug-908146.t  -  6 second
./tests/bugs/core/bug-834465.t  -  6 second
./tests/bugs/bug-1371806_2.t  -  6 second
./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t  -  6 second
./tests/basic/ec/nfs.t  -  6 second
./tests/basic/ec/ec-internal-xattrs.t  -  6 second
./tests/basic/ec/ec-fallocate.t  -  6 second
./tests/basic/afr/arbiter-remove-brick.t  -  6 second
./tests/features/ssl-authz.t  -  5 second
./tests/bugs/upcall/bug-upcall-stat.t  -  5 second
./tests/bugs/snapshot/bug-1178079.t  -  5 second
./tests/bugs/shard/bug-1342298.t  -  5 second
./tests/bugs/shard/bug-1272986.t  -  5 second
./tests/bugs/shard/bug-1259651.t  -  5 second
./tests/bugs/shard/bug-1256580.t  -  5 second
./tests/bugs/replicate/bug-886998.t  -  5 second
./tests/bugs/replicate/bug-1480525.t  -  5 second
./tests/bugs/quota/bug-1287996.t  -  5 second
./tests/bugs/quota/bug-1104692.t  -  5 second
./tests/bugs/nfs/bug-877885.t  -  5 second

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #503

2018-01-12 Thread jenkins
See 


Changes:

[Amar Tumballi] dict: support better on-wire transfer

[Amar Tumballi] Set log path correctly when clients use UDS

[Amar Tumballi] tests/vagrant: add a --help option to the script

--
[...truncated 248.59 KB...]
ok 3, LINENUM:67
ok 4, LINENUM:68
ok 5, LINENUM:69
ok 6, LINENUM:70
ok 7, LINENUM:71
ok 8, LINENUM:72
ok 9, LINENUM:73
ok 10, LINENUM:74
ok 11, LINENUM:76
ok 12, LINENUM:77
ok 13, LINENUM:78
ok 14, LINENUM:79
ok 15, LINENUM:80
ok 16, LINENUM:82
ok 17, LINENUM:84
ok 18, LINENUM:85
ok 19, LINENUM:87
ok 20, LINENUM:88
ok 21, LINENUM:90
ok 22, LINENUM:91
ok 23, LINENUM:92
ok 24, LINENUM:94
ok 25, LINENUM:95
ok 26, LINENUM:96
ok 27, LINENUM:98
ok 28, LINENUM:99
ok 29, LINENUM:100
ok 30, LINENUM:101
ok 31, LINENUM:103
ok 32, LINENUM:104
ok 33, LINENUM:105
ok 34, LINENUM:106
ok 35, LINENUM:108
ok 36, LINENUM:109
ok 37, LINENUM:110
ok 38, LINENUM:111
ok 39, LINENUM:112
ok 40, LINENUM:113
ok 41, LINENUM:115
ok 42, LINENUM:116
ok 43, LINENUM:117
ok 44, LINENUM:118
ok 45, LINENUM:119
ok 46, LINENUM:121
ok 47, LINENUM:122
ok 48, LINENUM:123
ok 49, LINENUM:124
ok 50, LINENUM:125
ok 51, LINENUM:127
ok 52, LINENUM:128
ok 53, LINENUM:129
ok 54, LINENUM:130
ok 55, LINENUM:131
ok 56, LINENUM:132
ok 57, LINENUM:134
ok 58, LINENUM:135
ok 59, LINENUM:136
ok 60, LINENUM:138
ok 61, LINENUM:139
ok 62, LINENUM:140
ok 63, LINENUM:141
ok 64, LINENUM:142
ok 65, LINENUM:143
ok 66, LINENUM:145
ok 67, LINENUM:146
ok 68, LINENUM:147
ok 69, LINENUM:148
ok 70, LINENUM:149
ok 71, LINENUM:150
ok 72, LINENUM:152
ok 73, LINENUM:153
ok 74, LINENUM:154
ok 75, LINENUM:155
ok 76, LINENUM:160
ok 77, LINENUM:161

ok 78, LINENUM:167
ok 79, LINENUM:168
ok 80, LINENUM:169
ok 81, LINENUM:170
ok 82, LINENUM:171
ok 83, LINENUM:173
ok 84, LINENUM:174
ok 85, LINENUM:176
ok 86, LINENUM:177
ok 87, LINENUM:179
ok 88, LINENUM:180
ok 89, LINENUM:182
ok 90, LINENUM:183
ok 91, LINENUM:185
ok 92, LINENUM:186
ok 93, LINENUM:188
ok 94, LINENUM:189
ok 95, LINENUM:190
ok 96, LINENUM:192
ok 97, LINENUM:193
ok 98, LINENUM:195
ok 99, LINENUM:196
ok 100, LINENUM:198
ok 101, LINENUM:199
ok 102, LINENUM:201
ok 103, LINENUM:202
ok 104, LINENUM:204
ok 105, LINENUM:205
ok 106, LINENUM:207
ok 107, LINENUM:208
ok
All tests successful.
Files=1, Tests=107, 19 wallclock secs ( 0.06 usr  0.02 sys +  4.13 cusr  6.09 csys = 10.30 CPU)
Result: PASS
End of test ./tests/basic/afr/data-self-heal.t




[14:07:59] Running tests in file ./tests/basic/afr/durability-off.t
dd: /mnt/glusterfs/0/a.txt: Socket is not connected
./tests/basic/afr/durability-off.t .. 
1..29
ok 1, LINENUM:10
ok 2, LINENUM:11
ok 3, LINENUM:12
ok 4, LINENUM:13
ok 5, LINENUM:14
ok 6, LINENUM:15
ok 7, LINENUM:16
ok 8, LINENUM:17
ok 9, LINENUM:18
ok 10, LINENUM:20
ok 11, LINENUM:21
ok 12, LINENUM:22
ok 13, LINENUM:23
ok 14, LINENUM:24
ok 15, LINENUM:25
ok 16, LINENUM:26
ok 17, LINENUM:27
ok 18, LINENUM:30
ok 19, LINENUM:31
ok 20, LINENUM:32
not ok 21 , LINENUM:33
FAILED COMMAND: dd of=/mnt/glusterfs/0/a.txt if=/dev/zero bs=1024k count=1
ok 22, LINENUM:35
ok 23, LINENUM:36
ok 24, LINENUM:37
ok 25, LINENUM:38
ok 26, LINENUM:39
ok 27, LINENUM:40
ok 28, LINENUM:41
not ok 29 Got "0" instead of "^2$", LINENUM:42
FAILED COMMAND: ^2$ echo 0
Failed 2/29 subtests 

Test Summary Report
---
./tests/basic/afr/durability-off.t (Wstat: 0 Tests: 29 Failed: 2)
  Failed tests:  21, 29
Files=1, Tests=29, 36 wallclock secs ( 0.04 usr  0.00 sys +  2.36 cusr  2.77 csys =  5.17 CPU)
Result: FAIL
./tests/basic/afr/durability-off.t: bad status 1
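
For context on the "not ok" lines above: in Gluster's .t framework, a failure like "FAILED COMMAND: ^2$ echo 0" means an EXPECT-style check ran a command, compared its output against the regex ^2$, and got "0" instead. A minimal sketch of such a helper (a hypothetical simplification; the real include.rc helper also tracks TAP numbering and LINENUM):

```shell
# Hypothetical sketch of an EXPECT-style check: run a command, match its
# output against a regex, and print TAP-like diagnostics on mismatch.
expect () {
    local pattern="$1"; shift
    local actual
    actual="$("$@")"
    if [[ "$actual" =~ $pattern ]]; then
        echo "ok"
    else
        echo "not ok Got \"$actual\" instead of \"$pattern\""
        echo "FAILED COMMAND: $pattern $*"
    fi
}

expect '^2$' echo 0   # reproduces the failure shape seen in the log above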

   *
   *   REGRESSION FAILED   *
   * Retrying failed tests in case *
   * we got some spurious failures *
   *

dd: /mnt/glusterfs/0/a.txt: Socket is not connected
./tests/basic/afr/durability-off.t .. 
1..29
ok 1, LINENUM:10
ok 2, LINENUM:11
ok 3, LINENUM:12
ok 4, LINENUM:13
ok 5, LINENUM:14
ok 6, LINENUM:15
ok 7, LINENUM:16
ok 8, LINENUM:17
ok 9, LINENUM:18
ok 10, LINENUM:20
ok 11, LINENUM:21
ok 12, LINENUM:22
ok 13, LINENUM:23
ok 14, LINENUM:24
ok 15, LINENUM:25
ok 16, LINENUM:26
ok 17, LINENUM:27
ok 18, LINENUM:30
ok 19, LINENUM:31
ok 20, LINENUM:32
not ok 21 , LINENUM:33
FAILED COMMAND: dd of=/mnt/glusterfs/0/a.txt if=/dev/zero bs=1024k count=1
ok 22, LINENUM:35
ok 23, LINENUM:36
ok 24, LINENUM:37
ok 25, LINENUM:38
ok 26, LINENUM:39
ok 27, LINENUM:40
ok 28, LINENUM:41
not ok 29 Got "0" instead of "^2$", LINENUM:42
FAILED COMMAND: ^2$ echo 0
Failed 2/29 subtests 

Test Summary Report
---
./tests/basic/afr/durability-off.t (Wstat: 0 Tests: 29 Failed: 2)
  Failed tests:  21, 29
Files=1, Tests=29, 30 wallclock secs ( 0.03 usr  

[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #587

2018-01-12 Thread jenkins
See 


--
[...truncated 777.69 KB...]
./tests/bugs/glusterd/bug-1121584-brick-existing-validation-for-remove-brick-status-stop.t  -  8 second
./tests/bugs/glusterd/bug-1046308.t  -  8 second
./tests/bugs/distribute/bug-1122443.t  -  8 second
./tests/bugs/distribute/bug-1086228.t  -  8 second
./tests/bugs/cli/bug-1087487.t  -  8 second
./tests/bugs/cli/bug-1022905.t  -  8 second
./tests/bugs/changelog/bug-1208470.t  -  8 second
./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  8 second
./tests/bugs/bitrot/1209752-volume-status-should-show-bitrot-scrub-info.t  -  8 second
./tests/basic/volume-status.t  -  8 second
./tests/basic/md-cache/bug-1317785.t  -  8 second
./tests/basic/inode-quota-enforcing.t  -  8 second
./tests/basic/gfapi/upcall-cache-invalidate.t  -  8 second
./tests/basic/gfapi/glfs_xreaddirplus_r.t  -  8 second
./tests/basic/gfapi/glfd-lkowner.t  -  8 second
./tests/gfid2path/get-gfid-to-path.t  -  7 second
./tests/gfid2path/block-mount-access.t  -  7 second
./tests/features/ssl-authz.t  -  7 second
./tests/features/readdir-ahead.t  -  7 second
./tests/bugs/upcall/bug-1458127.t  -  7 second
./tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t  -  7 second
./tests/bugs/snapshot/bug-1260848.t  -  7 second
./tests/bugs/shard/bug-1260637.t  -  7 second
./tests/bugs/replicate/bug-1250170-fsync.t  -  7 second
./tests/bugs/replicate/bug-1101647.t  -  7 second
./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t  -  7 second
./tests/bugs/posix/bug-1175711.t  -  7 second
./tests/bugs/glusterd/bug-889630.t  -  7 second
./tests/bugs/glusterd/bug-1420637-volume-sync-fix.t  -  7 second
./tests/bugs/glusterd/bug-1323287-real_path-handshake-test.t  -  7 second
./tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t  -  7 second
./tests/bugs/glusterd/bug-1109741-auth-mgmt-handshake.t  -  7 second
./tests/bugs/glusterd/bug-1104642.t  -  7 second
./tests/bugs/ec/bug-1179050.t  -  7 second
./tests/bugs/distribute/bug-1088231.t  -  7 second
./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t  -  7 second
./tests/bitrot/br-stub.t  -  7 second
./tests/basic/tier/ctr-rename-overwrite.t  -  7 second
./tests/basic/quota-nfs.t  -  7 second
./tests/basic/glusterd/arbiter-volume-probe.t  -  7 second
./tests/basic/gfapi/libgfapi-fini-hang.t  -  7 second
./tests/basic/ec/ec-read-policy.t  -  7 second
./tests/basic/ec/ec-anonymous-fd.t  -  7 second
./tests/basic/afr/gfid-heal.t  -  7 second
./tests/bugs/upcall/bug-1369430.t  -  6 second
./tests/bugs/snapshot/bug-1178079.t  -  6 second
./tests/bugs/replicate/bug-966018.t  -  6 second
./tests/bugs/replicate/bug-767585-gfid.t  -  6 second
./tests/bugs/replicate/bug-1365455.t  -  6 second
./tests/bugs/quota/bug-1243798.t  -  6 second
./tests/bugs/quota/bug-1104692.t  -  6 second
./tests/bugs/nfs/bug-915280.t  -  6 second
./tests/bugs/nfs/bug-877885.t  -  6 second
./tests/bugs/nfs/bug-1143880-fix-gNFSd-auth-crash.t  -  6 second
./tests/bugs/io-cache/bug-read-hang.t  -  6 second
./tests/bugs/io-cache/bug-858242.t  -  6 second
./tests/bugs/glusterfs/bug-893378.t  -  6 second
./tests/bugs/glusterd/bug-859927.t  -  6 second
./tests/bugs/glusterd/bug-1499509-disconnect-in-brick-mux.t  -  6 second
./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t  -  6 second
./tests/bugs/glusterd/bug-1179175-uss-option-validation.t  -  6 second
./tests/bugs/glusterd/bug-1094119-remove-replace-brick-support-from-glusterd.t  -  6 second
./tests/bugs/ec/bug-1227869.t  -  6 second
./tests/bugs/distribute/bug-1368012.t  -  6 second
./tests/bugs/core/bug-986429.t  -  6 second
./tests/bugs/core/bug-908146.t  -  6 second
./tests/bugs/bug-1371806_2.t  -  6 second
./tests/bugs/bug-1258069.t  -  6 second
./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t  -  6 second
./tests/bugs/bitrot/bug-1210684-scrub-pause-resume-error-handling.t  -  6 second
./tests/bitrot/bug-1221914.t  -  6 second
./tests/basic/ec/nfs.t  -  6 second
./tests/basic/ec/ec-fallocate.t  -  6 second
./tests/basic/ec/dht-rename.t  -  6 second
./tests/basic/afr/heal-info.t  -  6 second
./tests/basic/afr/arbiter-remove-brick.t  -  6 second
./tests/bugs/upcall/bug-upcall-stat.t  -  5 second
./tests/bugs/unclassified/bug-1034085.t  -  5 second
./tests/bugs/snapshot/bug-1064768.t  -  5 second
./tests/bugs/shard/bug-1342298.t  -  5 second
./tests/bugs/shard/bug-1272986.t  -  5 second
./tests/bugs/shard/bug-1259651.t  -  5 second
./tests/bugs/shard/bug-1258334.t  -  5 second
./tests/bugs/shard/bug-1256580.t  -  5 second
./tests/bugs/replicate/bug-976800.t  -  5 second
./tests/bugs/replicate/bug-886998.t  -  5 second
./tests/bugs/readdir-ahead/bug-1439640.t  -  5 second
./tests/bugs/quota/bug-1287996.t  -  5 second
./tests/bugs/nfs/subdir-trailing-slash.t  -  5 second
./tests/bugs/md-cache/bug-1211863_unlink.t  -  5 second

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #512

2018-01-12 Thread jenkins
See 


Changes:

[Xavier Hernandez] cluster/ec: OpenFD heal implementation for EC

[Amar Tumballi] tests: Enable geo-rep test cases

[atin] glusterd: connect to an existing brick process when quorum status is

[Pranith Kumar K] dict: add more types for values

--
[...truncated 244.04 KB...]
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
rm: /build/install/var/run/gluster: is a directory

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #518

2018-01-12 Thread jenkins
See 


Changes:

[Varsha Rao] cluster/dht: Change datatype of search_unhashed variable

[Kaleb S. KEITHLEY] build: Link libgfrpc within rpc-transport shared libraries

--
[...truncated 264.60 KB...]
ok 55, LINENUM:110
ok 56, LINENUM:111
ok 57, LINENUM:112
ok 58, LINENUM:113
ok 59, LINENUM:115
ok 60, LINENUM:116
ok 61, LINENUM:118
ok 62, LINENUM:124
ok 63, LINENUM:125
ok 64, LINENUM:126
ok 65, LINENUM:129
ok 66, LINENUM:130
ok 67, LINENUM:131
ok 68, LINENUM:134
ok 69, LINENUM:135
ok 70, LINENUM:136
ok 71, LINENUM:137
ok 72, LINENUM:138
ok 73, LINENUM:139
ok 74, LINENUM:140
ok 75, LINENUM:143
ok 76, LINENUM:144
ok 77, LINENUM:145
ok 78, LINENUM:146
ok 79, LINENUM:147
ok 80, LINENUM:148
ok 81, LINENUM:149
ok 82, LINENUM:153
ok 83, LINENUM:154
ok 84, LINENUM:155
ok 85, LINENUM:156
ok 86, LINENUM:157
ok 87, LINENUM:163
ok 88, LINENUM:164
ok 89, LINENUM:165
ok 90, LINENUM:170
ok 91, LINENUM:171
not ok 92 Got "4" instead of "^0$", LINENUM:172
FAILED COMMAND: ^0$ get_pending_heal_count patchy
ok 93, LINENUM:177
ok 94, LINENUM:178
ok 95, LINENUM:182
ok 96, LINENUM:183
ok 97, LINENUM:189
ok 98, LINENUM:190
ok 99, LINENUM:193
ok 100, LINENUM:194
ok 101, LINENUM:195
ok 102, LINENUM:196
ok 103, LINENUM:197
ok 104, LINENUM:198
ok 105, LINENUM:205
ok 106, LINENUM:206
ok 107, LINENUM:213
ok 108, LINENUM:214
ok 109, LINENUM:215
ok 110, LINENUM:217
ok 111, LINENUM:218
not ok 112 Got "4" instead of "^0$", LINENUM:219
FAILED COMMAND: ^0$ get_pending_heal_count patchy
ok 113, LINENUM:223
ok 114, LINENUM:226
Failed 2/114 subtests 

Test Summary Report
---
./tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t (Wstat: 0 Tests: 114 Failed: 2)
  Failed tests:  92, 112
Files=1, Tests=114, 317 wallclock secs ( 0.08 usr  0.01 sys + 8977825.78 cusr 14246276.18 csys = 23224102.05 CPU)
Result: FAIL
./tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t: bad status 1

   *
   *   REGRESSION FAILED   *
   * Retrying failed tests in case *
   * we got some spurious failures *
   *
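
The banner above reflects the harness policy of rerunning a failed .t file once before declaring a real regression, to filter out spurious failures. A rough sketch of that logic (hypothetical names; the project's run-tests.sh implements the real version):

```shell
# Hypothetical sketch of retry-on-failure: a test that fails is run one
# more time, and only the retry's exit status counts as the final result.
run_with_retry () {
    local t="$1"
    if ! "$t"; then
        echo "Retrying $t in case we got a spurious failure"
        "$t"    # exit status of the retry becomes the final result
    fi
}
```

In the log above, durability-off.t and gfid-mismatch-resolution-with-fav-child-policy.t failed the same subtests on the retry as well, so the failures were not spurious.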

./tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t .. 
1..114
ok 1, LINENUM:8
ok 2, LINENUM:9
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:15
ok 9, LINENUM:16
ok 10, LINENUM:21
ok 11, LINENUM:22
ok 12, LINENUM:24
ok 13, LINENUM:25
ok 14, LINENUM:26
ok 15, LINENUM:27
ok 16, LINENUM:33
ok 17, LINENUM:35
ok 18, LINENUM:36
ok 19, LINENUM:37
ok 20, LINENUM:38
ok 21, LINENUM:39
ok 22, LINENUM:44
ok 23, LINENUM:45
ok 24, LINENUM:46
ok 25, LINENUM:47
ok 26, LINENUM:51
ok 27, LINENUM:54
ok 28, LINENUM:56
ok 29, LINENUM:64
ok 30, LINENUM:66
ok 31, LINENUM:67
ok 32, LINENUM:69
ok 33, LINENUM:70
ok 34, LINENUM:71
ok 35, LINENUM:72
ok 36, LINENUM:78
ok 37, LINENUM:80
ok 38, LINENUM:81
ok 39, LINENUM:82
ok 40, LINENUM:83
ok 41, LINENUM:84
ok 42, LINENUM:89
ok 43, LINENUM:90
ok 44, LINENUM:91
ok 45, LINENUM:92
ok 46, LINENUM:96
ok 47, LINENUM:99
ok 48, LINENUM:103
ok 49, LINENUM:104
ok 50, LINENUM:105
ok 51, LINENUM:106
ok 52, LINENUM:107
ok 53, LINENUM:108
ok 54, LINENUM:109
ok 55, LINENUM:110
ok 56, LINENUM:111
ok 57, LINENUM:112
ok 58, LINENUM:113
ok 59, LINENUM:115
ok 60, LINENUM:116
ok 61, LINENUM:118
ok 62, LINENUM:124
ok 63, LINENUM:125
ok 64, LINENUM:126
ok 65, LINENUM:129
ok 66, LINENUM:130
ok 67, LINENUM:131
ok 68, LINENUM:134
ok 69, LINENUM:135
ok 70, LINENUM:136
ok 71, LINENUM:137
ok 72, LINENUM:138
ok 73, LINENUM:139
ok 74, LINENUM:140
ok 75, LINENUM:143
ok 76, LINENUM:144
ok 77, LINENUM:145
ok 78, LINENUM:146
ok 79, LINENUM:147
ok 80, LINENUM:148
ok 81, LINENUM:149
ok 82, LINENUM:153
ok 83, LINENUM:154
ok 84, LINENUM:155
ok 85, LINENUM:156
ok 86, LINENUM:157
ok 87, LINENUM:163
ok 88, LINENUM:164
ok 89, LINENUM:165
ok 90, LINENUM:170
ok 91, LINENUM:171
not ok 92 Got "4" instead of "^0$", LINENUM:172
FAILED COMMAND: ^0$ get_pending_heal_count patchy
ok 93, LINENUM:177
ok 94, LINENUM:178
ok 95, LINENUM:182
ok 96, LINENUM:183
ok 97, LINENUM:189
ok 98, LINENUM:190
ok 99, LINENUM:193
ok 100, LINENUM:194
ok 101, LINENUM:195
ok 102, LINENUM:196
ok 103, LINENUM:197
ok 104, LINENUM:198
ok 105, LINENUM:205
ok 106, LINENUM:206
ok 107, LINENUM:213
ok 108, LINENUM:214
ok 109, LINENUM:215
ok 110, LINENUM:217
ok 111, LINENUM:218
not ok 112 Got "4" instead of "^0$", LINENUM:219
FAILED COMMAND: ^0$ get_pending_heal_count patchy
ok 113, LINENUM:223
ok 114, LINENUM:226
Failed 2/114 subtests 

Test Summary Report
---
./tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t (Wstat: 0 Tests: 114 Failed: 2)
  Failed tests:  92, 112
Files=1, Tests=114, 317 wallclock secs ( 0.06 usr  0.03 sys + 14049931.09 cusr 20358452.40 csys = 34408383.58 CPU)
Result: FAIL
End of test ./tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t

[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #577

2018-01-12 Thread jenkins
See 


Changes:

[Amar Tumballi] tests/vagrant: configure with --enable-gnfs so tests can use NFS

--
[...truncated 781.26 KB...]
./tests/bugs/glusterd/bug-1121584-brick-existing-validation-for-remove-brick-status-stop.t  -  8 second
./tests/bugs/distribute/bug-1122443.t  -  8 second
./tests/bugs/cli/bug-1087487.t  -  8 second
./tests/bugs/cli/bug-1022905.t  -  8 second
./tests/bugs/changelog/bug-1208470.t  -  8 second
./tests/basic/tier/ctr-rename-overwrite.t  -  8 second
./tests/basic/md-cache/bug-1317785.t  -  8 second
./tests/basic/inode-quota-enforcing.t  -  8 second
./tests/basic/glusterd/arbiter-volume-probe.t  -  8 second
./tests/basic/gfapi/upcall-cache-invalidate.t  -  8 second
./tests/basic/gfapi/glfd-lkowner.t  -  8 second
./tests/basic/gfapi/gfapi-dup.t  -  8 second
./tests/basic/gfapi/anonymous_fd.t  -  8 second
./tests/basic/fop-sampling.t  -  8 second
./tests/basic/ec/ec-anonymous-fd.t  -  8 second
./tests/gfid2path/get-gfid-to-path.t  -  7 second
./tests/features/ssl-authz.t  -  7 second
./tests/bugs/upcall/bug-1458127.t  -  7 second
./tests/bugs/upcall/bug-1227204.t  -  7 second
./tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t  -  7 second
./tests/bugs/shard/bug-1260637.t  -  7 second
./tests/bugs/replicate/bug-767585-gfid.t  -  7 second
./tests/bugs/replicate/bug-1250170-fsync.t  -  7 second
./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t  -  7 second
./tests/bugs/posix/bug-1175711.t  -  7 second
./tests/bugs/nfs/bug-1157223-symlink-mounting.t  -  7 second
./tests/bugs/nfs/bug-1143880-fix-gNFSd-auth-crash.t  -  7 second
./tests/bugs/md-cache/bug-1211863.t  -  7 second
./tests/bugs/glusterd/bug-889630.t  -  7 second
./tests/bugs/glusterd/bug-1420637-volume-sync-fix.t  -  7 second
./tests/bugs/glusterd/bug-1323287-real_path-handshake-test.t  -  7 second
./tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t  -  7 second
./tests/bugs/glusterd/bug-1109741-auth-mgmt-handshake.t  -  7 second
./tests/bugs/glusterd/bug-1104642.t  -  7 second
./tests/bugs/ec/bug-1227869.t  -  7 second
./tests/bugs/ec/bug-1179050.t  -  7 second
./tests/bugs/bug-1258069.t  -  7 second
./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  7 second
./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t  -  7 second
./tests/bitrot/br-stub.t  -  7 second
./tests/basic/volume-status.t  -  7 second
./tests/basic/quota-nfs.t  -  7 second
./tests/basic/gfapi/libgfapi-fini-hang.t  -  7 second
./tests/basic/ec/ec-read-policy.t  -  7 second
./tests/basic/afr/gfid-heal.t  -  7 second
./tests/gfid2path/block-mount-access.t  -  6 second
./tests/bugs/upcall/bug-1369430.t  -  6 second
./tests/bugs/shard/bug-1258334.t  -  6 second
./tests/bugs/replicate/bug-966018.t  -  6 second
./tests/bugs/replicate/bug-1101647.t  -  6 second
./tests/bugs/quota/bug-1243798.t  -  6 second
./tests/bugs/quota/bug-1104692.t  -  6 second
./tests/bugs/nfs/bug-915280.t  -  6 second
./tests/bugs/glusterfs-server/bug-873549.t  -  6 second
./tests/bugs/glusterd/bug-859927.t  -  6 second
./tests/bugs/glusterd/bug-1499509-disconnect-in-brick-mux.t  -  6 second
./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t  -  6 second
./tests/bugs/glusterd/bug-1223213-peerid-fix.t  -  6 second
./tests/bugs/glusterd/bug-1179175-uss-option-validation.t  -  6 second
./tests/bugs/glusterd/bug-1094119-remove-replace-brick-support-from-glusterd.t  -  6 second
./tests/bugs/distribute/bug-1368012.t  -  6 second
./tests/bugs/distribute/bug-1088231.t  -  6 second
./tests/bugs/core/bug-986429.t  -  6 second
./tests/bugs/core/bug-908146.t  -  6 second
./tests/bugs/core/bug-834465.t  -  6 second
./tests/bugs/bug-1371806_2.t  -  6 second
./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t  -  6 second
./tests/bugs/bitrot/bug-1210684-scrub-pause-resume-error-handling.t  -  6 second
./tests/basic/hardlink-limit.t  -  6 second
./tests/basic/ec/nfs.t  -  6 second
./tests/basic/ec/ec-fallocate.t  -  6 second
./tests/basic/afr/heal-info.t  -  6 second
./tests/basic/afr/arbiter-remove-brick.t  -  6 second
./tests/performance/quick-read.t  -  5 second
./tests/bugs/upcall/bug-upcall-stat.t  -  5 second
./tests/bugs/snapshot/bug-1178079.t  -  5 second
./tests/bugs/snapshot/bug-041.t  -  5 second
./tests/bugs/snapshot/bug-1064768.t  -  5 second
./tests/bugs/shard/bug-1342298.t  -  5 second
./tests/bugs/shard/bug-1272986.t  -  5 second
./tests/bugs/shard/bug-1259651.t  -  5 second
./tests/bugs/shard/bug-1256580.t  -  5 second
./tests/bugs/replicate/bug-886998.t  -  5 second
./tests/bugs/replicate/bug-1365455.t  -  5 second
./tests/bugs/readdir-ahead/bug-1439640.t  -  5 second
./tests/bugs/quota/bug-1287996.t  -  5 second
./tests/bugs/nfs/bug-877885.t  -  5 second
./tests/bugs/nfs/bug-1116503.t  -  5 second
./tests/bugs/md-cache/bug-1211863_unlink.t  

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #498

2018-01-12 Thread jenkins
See 


Changes:

[Amar Tumballi] tests/vagrant: configure with --enable-gnfs so tests can use NFS

--
[...truncated 236.81 KB...]
  -c -d 
'/build/install/etc/glusterfs'
 /usr/bin/install -c -m 644 
 
'/build/install/etc/glusterfs'
  -c -d 
'/build/install/libexec/glusterfs'
 /usr/bin/install -c 
 
'/build/install/libexec/glusterfs'
Making install in tools
  -c -d 
'/build/install/share/glusterfs/scripts'
 /usr/bin/install -c 
 
'/build/install/share/glusterfs/scripts'
make  install-data-hook
/usr/bin/install -c -d -m 755 /build/install/var/db/glusterd/events
  -c -d 
'/build/install/lib/pkgconfig'
 /usr/bin/install -c -m 644 glusterfs-api.pc libgfchangelog.pc libgfdb.pc 
'/build/install/lib/pkgconfig'

Start time Fri Dec 22 13:56:27 UTC 2017
Run the regression test
***

tset: standard error: Inappropriate ioctl for device
chflags: /netbsd: No such file or directory
umount: /mnt/nfs/0: Invalid argument
umount: /mnt/nfs/1: Invalid argument
umount: /mnt/glusterfs/0: Invalid argument
umount: /mnt/glusterfs/1: Invalid argument
umount: /mnt/glusterfs/2: Invalid argument
umount: /build/install/var/run/gluster/patchy: No such file or directory
/dev/rxbd0e: 4096.0MB (8388608 sectors) block size 16384, fragment size 2048
using 23 cylinder groups of 178.09MB, 11398 blks, 22528 inodes.
super-block backups (for fsck_ffs -b #) at:
32, 364768, 729504, 1094240, 1458976, 1823712, 2188448, 2553184, 2917920,
...

... GlusterFS Test Framework ...


The following required tools are missing:

  * dbench

 




[13:56:28] Running tests in file ./tests/basic/0symbol-check.t
Skip Linux specific test
./tests/basic/0symbol-check.t .. 
1..2
ok 1, LINENUM:
ok 2, LINENUM:
ok
All tests successful.
Files=1, Tests=2,  0 wallclock secs ( 0.03 usr  0.00 sys +  0.05 cusr  0.08 csys =  0.16 CPU)
Result: PASS
End of test ./tests/basic/0symbol-check.t




[13:56:28] Running tests in file ./tests/basic/afr/add-brick-self-heal.t
./tests/basic/afr/add-brick-self-heal.t .. 
1..34
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:8
ok 4, LINENUM:9
ok 5, LINENUM:10
ok 6, LINENUM:11
ok 7, LINENUM:12
ok 8, LINENUM:14
ok 9, LINENUM:15
ok 10, LINENUM:24
ok 11, LINENUM:27
ok 12, LINENUM:30
ok 13, LINENUM:31
ok 14, LINENUM:34
ok 15, LINENUM:35
ok 16, LINENUM:36
ok 17, LINENUM:38
ok 18, LINENUM:39
ok 19, LINENUM:40
ok 20, LINENUM:42
ok 21, LINENUM:43
ok 22, LINENUM:44
ok 23, LINENUM:45
ok 24, LINENUM:46
ok 25, LINENUM:47
ok 26, LINENUM:50
ok 27, LINENUM:53
ok 28, LINENUM:54
ok 29, LINENUM:57
ok 30, LINENUM:60
ok 31, LINENUM:61
ok 32, LINENUM:63
ok 33, LINENUM:64
ok 34, LINENUM:65
ok
All tests successful.
Files=1, Tests=34, 20 wallclock secs ( 0.04 usr  0.01 sys +  1.67 cusr  2.50 csys =  4.22 CPU)
Result: PASS
End of test ./tests/basic/afr/add-brick-self-heal.t




[13:56:48] Running tests in file ./tests/basic/afr/arbiter-add-brick.t
perfused: perfuse_node_inactive: perfuse_node_fsync failed error = 57: Resource temporarily unavailable
./tests/basic/afr/arbiter-add-brick.t .. 
1..40
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:15
ok 9, LINENUM:16
ok 10, LINENUM:19
ok 11, LINENUM:20
ok 12, LINENUM:21
ok 13, LINENUM:25
ok 14, LINENUM:26
ok 15, LINENUM:29
ok 16, LINENUM:30
ok 17, LINENUM:32
ok 18, LINENUM:33
ok 19, LINENUM:36
ok 20, LINENUM:37
ok 21, LINENUM:38
ok 22, LINENUM:39
ok 23, LINENUM:40
ok 24, LINENUM:41
not ok 25 Got "5" instead of "0", LINENUM:42
FAILED COMMAND: 0 get_pending_heal_count patchy
ok 26, LINENUM:45
ok 27, LINENUM:46
ok 28, LINENUM:47
ok 29, LINENUM:48
ok 30, LINENUM:49
ok 31, LINENUM:52
ok 32, LINENUM:53
not ok 33 Got "1016832" instead of "1048576", LINENUM:56
FAILED COMMAND: 1048576 stat -c %s /mnt/glusterfs/0/file1

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #505

2018-01-12 Thread jenkins
See 


Changes:

[Aravinda VK] geo-rep: Log message improvements

[Poornima] quiesce: add fallocate and seek fops

[atin] snapshot : after brick reset/replace snapshot creation fails

[Pranith Kumar K] mgmt/glusterd: Adding validation for setting quorum-count

--
[...truncated 241.47 KB...]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #508

2018-01-12 Thread jenkins
See 

--
[...truncated 240.76 KB...]

... GlusterFS Test Framework ...


The following required tools are missing:

  * dbench

 
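The dbench warning above comes from the framework's pre-flight check for required tools. A hedged sketch of such a check follows; the real tool list and wording live in the framework's run-tests.sh, and the function name here is illustrative:

```shell
# Illustrative pre-flight tool check (not the framework's actual code):
# report any named tool that is not found on PATH.
report_missing() {
    missing=""
    for tool in "$@"; do
        command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
    done
    if [ -n "$missing" ]; then
        echo "The following required tools are missing:"
        for t in $missing; do
            echo "  * $t"
        done
    fi
    return 0
}

# Example: 'sh' is present, the placeholder tool is not.
report_missing sh no-such-tool-for-illustration
```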




[13:56:05] Running tests in file ./tests/basic/0symbol-check.t
Skip Linux specific test
./tests/basic/0symbol-check.t .. 
1..2
ok 1, LINENUM:
ok 2, LINENUM:
ok
All tests successful.
Files=1, Tests=2,  0 wallclock secs ( 0.04 usr  0.01 sys +  0.06 cusr  0.07 csys =  0.18 CPU)
Result: PASS
End of test ./tests/basic/0symbol-check.t




[13:56:05] Running tests in file ./tests/basic/afr/add-brick-self-heal.t
./tests/basic/afr/add-brick-self-heal.t .. 
1..34
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:8
ok 4, LINENUM:9
ok 5, LINENUM:10
ok 6, LINENUM:11
ok 7, LINENUM:12
ok 8, LINENUM:14
ok 9, LINENUM:15
ok 10, LINENUM:24
ok 11, LINENUM:27
ok 12, LINENUM:30
ok 13, LINENUM:31
ok 14, LINENUM:34
ok 15, LINENUM:35
ok 16, LINENUM:36
ok 17, LINENUM:38
ok 18, LINENUM:39
ok 19, LINENUM:40
ok 20, LINENUM:42
ok 21, LINENUM:43
ok 22, LINENUM:44
ok 23, LINENUM:45
ok 24, LINENUM:46
ok 25, LINENUM:47
ok 26, LINENUM:50
ok 27, LINENUM:53
ok 28, LINENUM:54
ok 29, LINENUM:57
ok 30, LINENUM:60
ok 31, LINENUM:61
ok 32, LINENUM:63
ok 33, LINENUM:64
ok 34, LINENUM:65
ok
All tests successful.
Files=1, Tests=34, 21 wallclock secs ( 0.03 usr  0.02 sys +  1.68 cusr  2.41 csys =  4.14 CPU)
Result: PASS
End of test ./tests/basic/afr/add-brick-self-heal.t




[13:56:26] Running tests in file ./tests/basic/afr/arbiter-add-brick.t
./tests/basic/afr/arbiter-add-brick.t .. 
1..40
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:15
ok 9, LINENUM:16
ok 10, LINENUM:19
ok 11, LINENUM:20
ok 12, LINENUM:21
ok 13, LINENUM:25
ok 14, LINENUM:26
ok 15, LINENUM:29
ok 16, LINENUM:30
ok 17, LINENUM:32
ok 18, LINENUM:33
ok 19, LINENUM:36
ok 20, LINENUM:37
ok 21, LINENUM:38
ok 22, LINENUM:39
ok 23, LINENUM:40
ok 24, LINENUM:41
not ok 25 Got "7" instead of "0", LINENUM:42
FAILED COMMAND: 0 get_pending_heal_count patchy
ok 26, LINENUM:45
ok 27, LINENUM:46
ok 28, LINENUM:47
ok 29, LINENUM:48
ok 30, LINENUM:49
ok 31, LINENUM:52
ok 32, LINENUM:53
ok 33, LINENUM:56
ok 34, LINENUM:57
ok 35, LINENUM:60
ok 36, LINENUM:61
ok 37, LINENUM:64
ok 38, LINENUM:65
ok 39, LINENUM:68
ok 40, LINENUM:69
Failed 1/40 subtests 

Test Summary Report
---
./tests/basic/afr/arbiter-add-brick.t (Wstat: 0 Tests: 40 Failed: 1)
  Failed test:  25
Files=1, Tests=40, 149 wallclock secs ( 0.04 usr  0.01 sys + 5559951.75 cusr 14826524.47 csys = 20386476.27 CPU)
Result: FAIL
./tests/basic/afr/arbiter-add-brick.t: bad status 1
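The failed subtest 25 means the pending-heal count stayed nonzero past the framework's timeout. A minimal sketch of an EXPECT_WITHIN-style retry loop around "0 get_pending_heal_count patchy" follows; the function names and the stubbed heal-count helper are illustrative, not the framework's actual include.rc implementation:

```shell
# Illustrative EXPECT_WITHIN-style polling: retry a check once per
# second until it returns the expected value or the timeout expires.
expect_within() {
    timeout=$1; expected=$2; shift 2
    while [ "$timeout" -gt 0 ]; do
        got=$("$@")
        [ "$got" = "$expected" ] && return 0
        sleep 1
        timeout=$((timeout - 1))
    done
    echo "not ok: Got \"$got\" instead of \"$expected\"" >&2
    return 1
}

# Stub for illustration; the real helper counts entries reported by
# 'gluster volume heal <vol> info'.
get_pending_heal_count() { echo 0; }

expect_within 5 0 get_pending_heal_count patchy && echo "heal count drained"
```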

   *
   *   REGRESSION FAILED   *
   * Retrying failed tests in case *
   * we got some spurious failures *
   *

stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory

[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #592

2018-01-12 Thread jenkins
See 


Changes:

[Jeff Darcy] libglusterfs: Include key name in data type validation

--
Started by timer
Building remotely on slave26.cloud.gluster.org (rackspace_regression_2gb) in workspace 
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://review.gluster.org/glusterfs.git # timeout=10
Fetching upstream changes from git://review.gluster.org/glusterfs.git
 > git --version # timeout=10
 > git fetch --tags --progress git://review.gluster.org/glusterfs.git +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 515a832de0e761639b1d076a59bf918070ec3130 
(refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 515a832de0e761639b1d076a59bf918070ec3130
Commit message: "libglusterfs: Include key name in data type validation"
 > git rev-list --no-walk 92430596d697381d5f49ff69eb24d9ff3e291da8 # timeout=10
[regression-test-with-multiplex] $ /bin/bash /tmp/jenkins1476898631335510492.sh
Start time Sat Jan  6 14:00:11 UTC 2018

Display all environment variables
*

_=/bin/env
BUILD_DISPLAY_NAME=#592
BUILD_ID=592
BUILD_NUMBER=592
BUILD_TAG=jenkins-regression-test-with-multiplex-592
BUILD_TIMESTAMP=2018-01-06 14:00:00 UTC
BUILD_URL=https://build.gluster.org/job/regression-test-with-multiplex/592/
EXECUTOR_NUMBER=0
G_BROKEN_FILENAMES=1
GIT_AUTHOR_EMAIL=jenk...@build.gluster.org
GIT_AUTHOR_NAME=Gluster Build System
GIT_BRANCH=origin/master
GIT_COMMIT=515a832de0e761639b1d076a59bf918070ec3130
GIT_COMMITTER_EMAIL=jenk...@build.gluster.org
GIT_COMMITTER_NAME=Gluster Build System
GIT_PREVIOUS_COMMIT=92430596d697381d5f49ff69eb24d9ff3e291da8
GIT_PREVIOUS_SUCCESSFUL_COMMIT=0bc22bef7f3c24663aadfb3548b348aa121e3047
GIT_URL=git://review.gluster.org/glusterfs.git
HOME=/home/jenkins
HUDSON_COOKIE=43c8ce92-f997-46db-acda-dac205ec3dfa
HUDSON_HOME=/var/lib/jenkins
HUDSON_SERVER_COOKIE=89d0164d165e278a
HUDSON_URL=https://build.gluster.org/
JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64
JENKINS_HOME=/var/lib/jenkins
JENKINS_SERVER_COOKIE=89d0164d165e278a
JENKINS_URL=https://build.gluster.org/
JOB_BASE_NAME=regression-test-with-multiplex
JOB_DISPLAY_URL=https://build.gluster.org/job/regression-test-with-multiplex/display/redirect
JOB_NAME=regression-test-with-multiplex
JOB_URL=https://build.gluster.org/job/regression-test-with-multiplex/
LANG=en_US.UTF-8
LESSOPEN=||/usr/bin/lesspipe.sh %s
LOGNAME=jenkins
MAIL=/var/mail/jenkins
NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
NODE_LABELS=rackspace_regression_2gb slave26.cloud.gluster.org
NODE_NAME=slave26.cloud.gluster.org
NSS_DISABLE_HW_AES=1
PATH=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/bin:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/bin:/usr/local/bin:/bin:/usr/bin:/build/install/sbin:/build/install/bin
PWD=
RUN_CHANGES_DISPLAY_URL=https://build.gluster.org/job/regression-test-with-multiplex/592/display/redirect?page=changes
RUN_DISPLAY_URL=https://build.gluster.org/job/regression-test-with-multiplex/592/display/redirect
SHELL=/bin/bash
SHLVL=2
SSH_CLIENT=8.43.85.184 57172 22
SSH_CONNECTION=8.43.85.184 57172 23.253.211.94 22
USER=jenkins
WORKSPACE=
XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt

glusterfs: no process killed
glusterfsd: no process killed
glusterd: no process killed
glusterd: no process killed
HEAD is now at 515a832 libglusterfs: Include key name in data type validation
From https://review.gluster.org/glusterfs
 * branch            refs/changes/45/17145/5 -> FETCH_HEAD
Auto-merging xlators/mgmt/glusterd/src/glusterd-utils.c
Auto-merging run-tests.sh
Merge made by the 'recursive' strategy.
 run-tests.sh   |  2 +-
 xlators/mgmt/glusterd/src/glusterd-utils.c | 17 +
 2 files changed, 2 insertions(+), 17 deletions(-)
Start time Sat Jan  6 14:00:13 UTC 2018

Build GlusterFS
***


... GlusterFS autogen ...

Running aclocal...
Running autoheader...
Running libtoolize...
Running autoconf...
Running automake...
configure.ac:291: installing `./config.guess'
configure.ac:291: installing `./config.sub'
configure.ac:16: installing `./install-sh'
configure.ac:16: installing `./missing'
api/examples/Makefile.am: installing `./depcomp'
events/Makefile.am:3: installing `./py-compile'
Running autogen.sh in argp-standalone ...
configure.ac:10: installing `./install-sh'
configure.ac:10: 

[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #575

2018-01-12 Thread jenkins
See 


Changes:

[Jeff Darcy] posix: fix use after freed by calling STACK_UNWIND_STRICT after error

[Jeff Darcy] fips: Replace md5sum usage to enable fips support

[Amar Tumballi] leases: Fix coverity issues

--
[...truncated 787.62 KB...]
./tests/basic/ios-dump.t  -  11 second
./tests/basic/gfapi/glfd-lkowner.t  -  11 second
./tests/basic/gfapi/bug-1241104.t  -  11 second
./tests/basic/gfapi/anonymous_fd.t  -  11 second
./tests/basic/ec/ec-root-heal.t  -  11 second
./tests/features/ssl-authz.t  -  10 second
./tests/bugs/upcall/bug-1227204.t  -  10 second
./tests/bugs/snapshot/bug-1260848.t  -  10 second
./tests/bugs/replicate/bug-1325792.t  -  10 second
./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t  -  10 second
./tests/bugs/glusterd/bug-949930.t  -  10 second
./tests/bugs/glusterd/bug-889630.t  -  10 second
./tests/bugs/glusterd/bug-1323287-real_path-handshake-test.t  -  10 second
./tests/bugs/glusterd/bug-1121584-brick-existing-validation-for-remove-brick-status-stop.t  -  10 second
./tests/bugs/glusterd/bug-1046308.t  -  10 second
./tests/bugs/distribute/bug-1247563.t  -  10 second
./tests/bugs/distribute/bug-1088231.t  -  10 second
./tests/bugs/distribute/bug-1086228.t  -  10 second
./tests/bugs/cli/bug-1022905.t  -  10 second
./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  10 second
./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t  -  10 second
./tests/basic/volume-status.t  -  10 second
./tests/basic/tier/ctr-rename-overwrite.t  -  10 second
./tests/basic/glusterd/arbiter-volume-probe.t  -  10 second
./tests/basic/gfapi/libgfapi-fini-hang.t  -  10 second
./tests/basic/gfapi/gfapi-dup.t  -  10 second
./tests/basic/fop-sampling.t  -  10 second
./tests/basic/ec/ec-anonymous-fd.t  -  10 second
./tests/gfid2path/get-gfid-to-path.t  -  9 second
./tests/bugs/upcall/bug-1458127.t  -  9 second
./tests/bugs/posix/bug-1175711.t  -  9 second
./tests/bugs/md-cache/bug-1211863.t  -  9 second
./tests/bugs/glusterd/bug-1420637-volume-sync-fix.t  -  9 second
./tests/bugs/glusterd/bug-1223213-peerid-fix.t  -  9 second
./tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t  -  9 second
./tests/bugs/glusterd/bug-1104642.t  -  9 second
./tests/bugs/ec/bug-1227869.t  -  9 second
./tests/bugs/ec/bug-1179050.t  -  9 second
./tests/bugs/distribute/bug-1122443.t  -  9 second
./tests/bugs/cli/bug-1087487.t  -  9 second
./tests/bugs/changelog/bug-1208470.t  -  9 second
./tests/bugs/bug-1371806_2.t  -  9 second
./tests/bugs/bug-1258069.t  -  9 second
./tests/bitrot/br-stub.t  -  9 second
./tests/basic/ec/ec-read-policy.t  -  9 second
./tests/basic/afr/heal-info.t  -  9 second
./tests/basic/afr/gfid-heal.t  -  9 second
./tests/basic/afr/arbiter-remove-brick.t  -  9 second
./tests/gfid2path/block-mount-access.t  -  8 second
./tests/bugs/upcall/bug-1369430.t  -  8 second
./tests/bugs/snapshot/bug-1064768.t  -  8 second
./tests/bugs/shard/bug-1258334.t  -  8 second
./tests/bugs/replicate/bug-966018.t  -  8 second
./tests/bugs/replicate/bug-767585-gfid.t  -  8 second
./tests/bugs/replicate/bug-1365455.t  -  8 second
./tests/bugs/replicate/bug-1101647.t  -  8 second
./tests/bugs/quota/bug-1243798.t  -  8 second
./tests/bugs/quota/bug-1104692.t  -  8 second
./tests/bugs/nfs/bug-915280.t  -  8 second
./tests/bugs/nfs/bug-847622.t  -  8 second
./tests/bugs/nfs/bug-1143880-fix-gNFSd-auth-crash.t  -  8 second
./tests/bugs/md-cache/afr-stale-read.t  -  8 second
./tests/bugs/glusterfs/bug-893378.t  -  8 second
./tests/bugs/glusterd/bug-948729/bug-948729-force.t  -  8 second
./tests/bugs/glusterd/bug-1499509-disconnect-in-brick-mux.t  -  8 second
./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t  -  8 second
./tests/bugs/glusterd/bug-1179175-uss-option-validation.t  -  8 second
./tests/bugs/glusterd/bug-1102656.t  -  8 second
./tests/bugs/glusterd/bug-1094119-remove-replace-brick-support-from-glusterd.t  -  8 second
./tests/bugs/core/bug-986429.t  -  8 second
./tests/bugs/core/bug-834465.t  -  8 second
./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t  -  8 second
./tests/bitrot/bug-1221914.t  -  8 second
./tests/basic/ec/ec-fallocate.t  -  8 second
./tests/bugs/upcall/bug-upcall-stat.t  -  7 second
./tests/bugs/unclassified/bug-1034085.t  -  7 second
./tests/bugs/snapshot/bug-1178079.t  -  7 second
./tests/bugs/shard/bug-1342298.t  -  7 second
./tests/bugs/shard/bug-1272986.t  -  7 second
./tests/bugs/shard/bug-1259651.t  -  7 second
./tests/bugs/shard/bug-1256580.t  -  7 second
./tests/bugs/readdir-ahead/bug-1439640.t  -  7 second
./tests/bugs/quota/bug-1287996.t  -  7 second
./tests/bugs/nfs/bug-877885.t  -  7 second
./tests/bugs/md-cache/bug-1211863_unlink.t  -  7 second
./tests/bugs/io-cache/bug-read-hang.t  -  7 second
./tests/bugs/io-cache/bug-858242.t  -  7 second

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #515

2018-01-12 Thread jenkins
See 


Changes:

[atin] glusterd: fix up volume option flags

[Pranith Kumar K] tests: Use /dev/urandom instead of /dev/random for dd

[Xavier Hernandez] glusterd: get-state memory leak fix

--
[...truncated 238.86 KB...]
 /usr/bin/install -c 
 
'/build/install/libexec/glusterfs/events'
  -c -d 
'/build/install/etc/glusterfs'
 /usr/bin/install -c -m 644 
 
'/build/install/etc/glusterfs'
  -c -d 
'/build/install/libexec/glusterfs'
 /usr/bin/install -c 
 
'/build/install/libexec/glusterfs'
Making install in tools
  -c -d 
'/build/install/share/glusterfs/scripts'
 /usr/bin/install -c 
 
'/build/install/share/glusterfs/scripts'
make  install-data-hook
/usr/bin/install -c -d -m 755 /build/install/var/db/glusterd/events
  -c -d 
'/build/install/lib/pkgconfig'
 /usr/bin/install -c -m 644 glusterfs-api.pc libgfchangelog.pc libgfdb.pc 
'/build/install/lib/pkgconfig'

Start time Mon Jan  8 13:55:46 UTC 2018
Run the regression test
***

tset: standard error: Inappropriate ioctl for device
chflags: /netbsd: No such file or directory
umount: /mnt/nfs/0: Invalid argument
umount: /mnt/nfs/1: Invalid argument
umount: /mnt/glusterfs/0: Invalid argument
umount: /mnt/glusterfs/1: Invalid argument
umount: /mnt/glusterfs/2: Invalid argument
umount: /build/install/var/run/gluster/patchy: No such file or directory
/dev/rxbd0e: 4096.0MB (8388608 sectors) block size 16384, fragment size 2048
using 23 cylinder groups of 178.09MB, 11398 blks, 22528 inodes.
super-block backups (for fsck_ffs -b #) at:
32, 364768, 729504, 1094240, 1458976, 1823712, 2188448, 2553184, 2917920,
...

... GlusterFS Test Framework ...


The following required tools are missing:

  * dbench

 




[13:55:46] Running tests in file ./tests/basic/0symbol-check.t
Skip Linux specific test
./tests/basic/0symbol-check.t .. 
1..2
ok 1, LINENUM:
ok 2, LINENUM:
ok
All tests successful.
Files=1, Tests=2,  0 wallclock secs ( 0.03 usr  0.00 sys +  0.05 cusr  0.09 csys =  0.17 CPU)
Result: PASS
End of test ./tests/basic/0symbol-check.t




[13:55:47] Running tests in file ./tests/basic/afr/add-brick-self-heal.t
./tests/basic/afr/add-brick-self-heal.t .. 
1..34
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:8
ok 4, LINENUM:9
ok 5, LINENUM:10
ok 6, LINENUM:11
ok 7, LINENUM:12
ok 8, LINENUM:14
ok 9, LINENUM:15
ok 10, LINENUM:24
ok 11, LINENUM:27
ok 12, LINENUM:30
ok 13, LINENUM:31
ok 14, LINENUM:34
ok 15, LINENUM:35
ok 16, LINENUM:36
ok 17, LINENUM:38
ok 18, LINENUM:39
ok 19, LINENUM:40
ok 20, LINENUM:42
ok 21, LINENUM:43
ok 22, LINENUM:44
ok 23, LINENUM:45
ok 24, LINENUM:46
ok 25, LINENUM:47
ok 26, LINENUM:50
ok 27, LINENUM:53
ok 28, LINENUM:54
ok 29, LINENUM:57
ok 30, LINENUM:60
ok 31, LINENUM:61
ok 32, LINENUM:63
ok 33, LINENUM:64
ok 34, LINENUM:65
ok
All tests successful.
Files=1, Tests=34, 20 wallclock secs ( 0.03 usr  0.01 sys +  1.72 cusr  2.63 csys =  4.39 CPU)
Result: PASS
End of test ./tests/basic/afr/add-brick-self-heal.t




[13:56:07] Running tests in file ./tests/basic/afr/arbiter-add-brick.t
perfused: perfuse_node_inactive: perfuse_node_fsync failed error = 57: Resource temporarily unavailable
./tests/basic/afr/arbiter-add-brick.t .. 
1..40
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:15
ok 9, LINENUM:16
ok 10, LINENUM:19
ok 11, LINENUM:20
ok 12, LINENUM:21
ok 13, LINENUM:25
ok 14, LINENUM:26
ok 15, LINENUM:29
ok 16, LINENUM:30
ok 17, LINENUM:32
ok 18, LINENUM:33
ok 19, LINENUM:36
ok 20, LINENUM:37
ok 21, LINENUM:38
ok 22, LINENUM:39
ok 23, LINENUM:40
ok 24, LINENUM:41
ok 25, LINENUM:42
ok 26, LINENUM:45
ok 27, LINENUM:46
ok 28, LINENUM:47
ok 29, LINENUM:48
ok 30, 

[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #586

2018-01-12 Thread jenkins
See 


Changes:

[Niklas Hambüchen] glusterfind: Speed up gfid lookup 100x by using an SQL index

--
[...truncated 792.08 KB...]
./tests/bugs/glusterd/bug-1046308.t  -  8 second
./tests/bugs/distribute/bug-1122443.t  -  8 second
./tests/bugs/changelog/bug-1208470.t  -  8 second
./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  8 second
./tests/bugs/bitrot/1209752-volume-status-should-show-bitrot-scrub-info.t  -  8 second
./tests/basic/inode-quota-enforcing.t  -  8 second
./tests/basic/glusterd/arbiter-volume-probe.t  -  8 second
./tests/basic/gfapi/upcall-cache-invalidate.t  -  8 second
./tests/basic/gfapi/glfs_xreaddirplus_r.t  -  8 second
./tests/basic/gfapi/glfd-lkowner.t  -  8 second
./tests/basic/gfapi/gfapi-dup.t  -  8 second
./tests/basic/gfapi/bug-1241104.t  -  8 second
./tests/basic/fop-sampling.t  -  8 second
./tests/basic/ec/ec-anonymous-fd.t  -  8 second
./tests/gfid2path/get-gfid-to-path.t  -  7 second
./tests/bugs/upcall/bug-1458127.t  -  7 second
./tests/bugs/upcall/bug-1227204.t  -  7 second
./tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t  -  7 second
./tests/bugs/snapshot/bug-1260848.t  -  7 second
./tests/bugs/shard/bug-1260637.t  -  7 second
./tests/bugs/replicate/bug-1250170-fsync.t  -  7 second
./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t  -  7 second
./tests/bugs/quota/bug-1243798.t  -  7 second
./tests/bugs/posix/bug-1122028.t  -  7 second
./tests/bugs/glusterfs/bug-902610.t  -  7 second
./tests/bugs/glusterd/bug-1420637-volume-sync-fix.t  -  7 second
./tests/bugs/glusterd/bug-1323287-real_path-handshake-test.t  -  7 second
./tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t  -  7 second
./tests/bugs/glusterd/bug-1109741-auth-mgmt-handshake.t  -  7 second
./tests/bugs/glusterd/bug-1104642.t  -  7 second
./tests/bugs/ec/bug-1179050.t  -  7 second
./tests/bugs/cli/bug-1022905.t  -  7 second
./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t  -  7 second
./tests/bitrot/br-stub.t  -  7 second
./tests/basic/volume-status.t  -  7 second
./tests/basic/tier/ctr-rename-overwrite.t  -  7 second
./tests/basic/quota-nfs.t  -  7 second
./tests/basic/gfapi/libgfapi-fini-hang.t  -  7 second
./tests/basic/ec/ec-read-policy.t  -  7 second
./tests/basic/afr/gfid-heal.t  -  7 second
./tests/gfid2path/block-mount-access.t  -  6 second
./tests/features/ssl-authz.t  -  6 second
./tests/bugs/upcall/bug-1369430.t  -  6 second
./tests/bugs/snapshot/bug-1178079.t  -  6 second
./tests/bugs/snapshot/bug-1064768.t  -  6 second
./tests/bugs/shard/bug-1259651.t  -  6 second
./tests/bugs/shard/bug-1258334.t  -  6 second
./tests/bugs/replicate/bug-966018.t  -  6 second
./tests/bugs/replicate/bug-767585-gfid.t  -  6 second
./tests/bugs/replicate/bug-1365455.t  -  6 second
./tests/bugs/quota/bug-1104692.t  -  6 second
./tests/bugs/posix/bug-1175711.t  -  6 second
./tests/bugs/nfs/bug-915280.t  -  6 second
./tests/bugs/nfs/bug-877885.t  -  6 second
./tests/bugs/nfs/bug-1143880-fix-gNFSd-auth-crash.t  -  6 second
./tests/bugs/io-cache/bug-read-hang.t  -  6 second
./tests/bugs/io-cache/bug-858242.t  -  6 second
./tests/bugs/glusterfs/bug-856455.t  -  6 second
./tests/bugs/glusterd/bug-889630.t  -  6 second
./tests/bugs/glusterd/bug-859927.t  -  6 second
./tests/bugs/glusterd/bug-1499509-disconnect-in-brick-mux.t  -  6 second
./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t  -  6 second
./tests/bugs/glusterd/bug-1179175-uss-option-validation.t  -  6 second
./tests/bugs/glusterd/bug-1102656.t  -  6 second
./tests/bugs/glusterd/bug-1094119-remove-replace-brick-support-from-glusterd.t  -  6 second
./tests/bugs/ec/bug-1227869.t  -  6 second
./tests/bugs/distribute/bug-1368012.t  -  6 second
./tests/bugs/distribute/bug-1088231.t  -  6 second
./tests/bugs/core/bug-986429.t  -  6 second
./tests/bugs/core/bug-908146.t  -  6 second
./tests/bugs/core/bug-834465.t  -  6 second
./tests/bugs/bitrot/bug-1210684-scrub-pause-resume-error-handling.t  -  6 second
./tests/bitrot/bug-1221914.t  -  6 second
./tests/basic/ec/nfs.t  -  6 second
./tests/basic/ec/ec-internal-xattrs.t  -  6 second
./tests/basic/ec/ec-fallocate.t  -  6 second
./tests/basic/distribute/bug-1265677-use-readdirp.t  -  6 second
./tests/basic/afr/arbiter-remove-brick.t  -  6 second
./tests/features/delay-gen.t  -  5 second
./tests/bugs/shard/bug-1342298.t  -  5 second
./tests/bugs/shard/bug-1272986.t  -  5 second
./tests/bugs/shard/bug-1256580.t  -  5 second
./tests/bugs/quota/bug-1287996.t  -  5 second
./tests/bugs/nfs/zero-atime.t  -  5 second
./tests/bugs/nfs/bug-1116503.t  -  5 second
./tests/bugs/md-cache/bug-1211863_unlink.t  -  5 second
./tests/bugs/md-cache/afr-stale-read.t  -  5 second
./tests/bugs/glusterfs-server/bug-873549.t  -  5 second
./tests/bugs/glusterfs/bug-895235.t  -  5 second

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #507

2018-01-12 Thread jenkins
See 


Changes:

[Niklas Hambüchen] glusterfind: Speed up gfid lookup 100x by using an SQL index

--
[...truncated 241.91 KB...]
ok 19, LINENUM:40
ok 20, LINENUM:42
ok 21, LINENUM:43
ok 22, LINENUM:44
ok 23, LINENUM:45
ok 24, LINENUM:46
ok 25, LINENUM:47
ok 26, LINENUM:50
ok 27, LINENUM:53
ok 28, LINENUM:54
ok 29, LINENUM:57
ok 30, LINENUM:60
ok 31, LINENUM:61
ok 32, LINENUM:63
ok 33, LINENUM:64
ok 34, LINENUM:65
ok
All tests successful.
Files=1, Tests=34, 19 wallclock secs ( 0.04 usr  0.02 sys +  1.81 cusr  2.28 csys =  4.15 CPU)
Result: PASS
End of test ./tests/basic/afr/add-brick-self-heal.t




[13:56:31] Running tests in file ./tests/basic/afr/arbiter-add-brick.t
./tests/basic/afr/arbiter-add-brick.t .. 
1..40
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:15
ok 9, LINENUM:16
ok 10, LINENUM:19
ok 11, LINENUM:20
ok 12, LINENUM:21
ok 13, LINENUM:25
ok 14, LINENUM:26
ok 15, LINENUM:29
ok 16, LINENUM:30
ok 17, LINENUM:32
ok 18, LINENUM:33
ok 19, LINENUM:36
ok 20, LINENUM:37
ok 21, LINENUM:38
ok 22, LINENUM:39
ok 23, LINENUM:40
ok 24, LINENUM:41
ok 25, LINENUM:42
ok 26, LINENUM:45
ok 27, LINENUM:46
ok 28, LINENUM:47
ok 29, LINENUM:48
ok 30, LINENUM:49
ok 31, LINENUM:52
ok 32, LINENUM:53
ok 33, LINENUM:56
ok 34, LINENUM:57
ok 35, LINENUM:60
ok 36, LINENUM:61
ok 37, LINENUM:64
ok 38, LINENUM:65
ok 39, LINENUM:68
ok 40, LINENUM:69
ok
All tests successful.
Files=1, Tests=40, 73 wallclock secs ( 0.05 usr  0.00 sys +  2.26 cusr  3.07 csys =  5.38 CPU)
Result: PASS
End of test ./tests/basic/afr/arbiter-add-brick.t




[13:57:45] Running tests in file ./tests/basic/afr/arbiter-cli.t
./tests/basic/afr/arbiter-cli.t .. 
1..7
ok 1, LINENUM:10
ok 2, LINENUM:11
ok 3, LINENUM:14
ok 4, LINENUM:17
ok 5, LINENUM:20
ok 6, LINENUM:21
ok 7, LINENUM:24
ok
All tests successful.
Files=1, Tests=7,  4 wallclock secs ( 0.01 usr  0.03 sys +  0.78 cusr  1.19 csys =  2.01 CPU)
Result: PASS
End of test ./tests/basic/afr/arbiter-cli.t




[13:57:49] Running tests in file ./tests/basic/afr/arbiter-mount.t
rm: /mnt/glusterfs/0/xy_zzy: Socket is not connected
mount_nfs: can't access /patchy: Unknown error: 10006
./tests/basic/afr/arbiter-mount.t .. 
1..22
ok 1, LINENUM:11
ok 2, LINENUM:12
ok 3, LINENUM:14
ok 4, LINENUM:15
ok 5, LINENUM:16
ok 6, LINENUM:17
ok 7, LINENUM:18
ok 8, LINENUM:20
ok 9, LINENUM:21
ok 10, LINENUM:25
ok 11, LINENUM:26
ok 12, LINENUM:27
ok 13, LINENUM:30
ok 14, LINENUM:32
ok 15, LINENUM:33
ok 16, LINENUM:34
ok 17, LINENUM:35
ok 18, LINENUM:37
ok 19, LINENUM:38
ok 20, LINENUM:39
ok 21, LINENUM:42
ok 22, LINENUM:43
ok
All tests successful.
Files=1, Tests=22, 31 wallclock secs ( 0.03 usr  0.00 sys +  1.17 cusr  1.81 csys =  3.01 CPU)
Result: PASS
End of test ./tests/basic/afr/arbiter-mount.t




[13:58:20] Running tests in file ./tests/basic/afr/arbiter-remove-brick.t
stat: /mnt/glusterfs/0/file: lstat: No such file or directory
umount: /mnt/glusterfs/0: Invalid argument
./tests/basic/afr/arbiter-remove-brick.t .. 
1..18
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:17
ok 9, LINENUM:18
ok 10, LINENUM:21
ok 11, LINENUM:22
ok 12, LINENUM:24
ok 13, LINENUM:25
ok 14, LINENUM:26
not ok 15 Got "" instead of "1048576", LINENUM:29
FAILED COMMAND: 1048576 stat -c %s /mnt/glusterfs/0/file
ok 16, LINENUM:32
ok 17, LINENUM:33
ok 18, LINENUM:35
Failed 1/18 subtests 

Test Summary Report
---
./tests/basic/afr/arbiter-remove-brick.t 

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #516

2018-01-12 Thread jenkins
See 

--
[...truncated 355.94 KB...]
ok 64, LINENUM:151
ok 65, LINENUM:152
ok 66, LINENUM:154
ok 67, LINENUM:155
ok 68, LINENUM:156
ok 69, LINENUM:157
ok 70, LINENUM:158
ok 71, LINENUM:159
ok 72, LINENUM:160
volume stop: patchy: success
volume start: patchy: success
ok 73, LINENUM:174
ok 74, LINENUM:175
ok 75, LINENUM:176
ok 76, LINENUM:178
volume stop: patchy: success
ok 77, LINENUM:188
volume start: patchy: success
ok 78, LINENUM:191
ok 79, LINENUM:192
ok 80, LINENUM:195
ok 81, LINENUM:196
ok 82, LINENUM:197
ok 83, LINENUM:199
ok 84, LINENUM:200
ok 85, LINENUM:201
ok 86, LINENUM:203
ok 87, LINENUM:204
ok 88, LINENUM:205
ok 89, LINENUM:207
ok 90, LINENUM:208
ok 91, LINENUM:209
ok 92, LINENUM:211
ok 93, LINENUM:212
ok 94, LINENUM:213
ok 95, LINENUM:215
ok 96, LINENUM:216
ok 97, LINENUM:217
ok 98, LINENUM:219
ok 99, LINENUM:220
ok 100, LINENUM:221
ok 101, LINENUM:224
ok 102, LINENUM:227
ok 103, LINENUM:228
ok 104, LINENUM:229
ok 105, LINENUM:230
ok 106, LINENUM:231
ok 107, LINENUM:232
ok 108, LINENUM:233
ok 109, LINENUM:234
ok 110, LINENUM:235
ok 111, LINENUM:236
ok 112, LINENUM:237
ok 113, LINENUM:238
ok 114, LINENUM:239
ok 115, LINENUM:240
ok 116, LINENUM:241
ok 117, LINENUM:242
ok 118, LINENUM:243
ok 119, LINENUM:244
ok 120, LINENUM:245
ok 121, LINENUM:246
ok 122, LINENUM:247
ok 123, LINENUM:251
ok 124, LINENUM:252
ok 125, LINENUM:253
ok 126, LINENUM:256
ok 127, LINENUM:257
ok 128, LINENUM:258
ok 129, LINENUM:259
ok 130, LINENUM:260
ok 131, LINENUM:263
ok 132, LINENUM:264
ok 133, LINENUM:265
ok 134, LINENUM:266
ok 135, LINENUM:267
ok 136, LINENUM:270
ok 137, LINENUM:271
ok 138, LINENUM:272
ok 139, LINENUM:275
ok 140, LINENUM:276
ok 141, LINENUM:277
ok 142, LINENUM:280
ok 143, LINENUM:281
ok 144, LINENUM:282
ok 145, LINENUM:285
ok 146, LINENUM:286
ok 147, LINENUM:287
ok 148, LINENUM:291
volume start: patchy: success
ok 149, LINENUM:293
ok 150, LINENUM:294
ok 151, LINENUM:296
ok 152, LINENUM:297
ok 153, LINENUM:298
ok 154, LINENUM:299
ok 155, LINENUM:300
ok 156, LINENUM:301
ok 157, LINENUM:302
ok 158, LINENUM:303
ok 159, LINENUM:304
ok 160, LINENUM:305
ok 161, LINENUM:306
ok 162, LINENUM:307
ok 163, LINENUM:308
ok 164, LINENUM:309
ok 165, LINENUM:310
volume start: patchy: success
ok 166, LINENUM:313
ok 167, LINENUM:314
ok 168, LINENUM:319
ok 169, LINENUM:320
ok 170, LINENUM:322
fool_heal fool_me source_creations_heal/dir1
volume start: patchy: success
ok 171, LINENUM:328
ok 172, LINENUM:329
volume set: success
ok 173, LINENUM:332
ok 174, LINENUM:333
ok 175, LINENUM:334
ok 176, LINENUM:336
ok 177, LINENUM:337
ok 178, LINENUM:339
ok 179, LINENUM:340
ok 180, LINENUM:341
ok 181, LINENUM:342
ok 182, LINENUM:343
ok 183, LINENUM:344
ok 184, LINENUM:345
ok 185, LINENUM:346
ok 186, LINENUM:347
ok 187, LINENUM:348
ok 188, LINENUM:349
ok 189, LINENUM:350
ok 190, LINENUM:351
ok 191, LINENUM:355
ok 192, LINENUM:356
ok 193, LINENUM:358
ok 194, LINENUM:359
ok 195, LINENUM:361
ok 196, LINENUM:362
ok 197, LINENUM:364
ok 198, LINENUM:365
ok 199, LINENUM:367
ok 200, LINENUM:368
ok 201, LINENUM:370
ok 202, LINENUM:371
ok 203, LINENUM:373
ok 204, LINENUM:374
ok 205, LINENUM:377
ok 206, LINENUM:380
ok 207, LINENUM:381
ok 208, LINENUM:382
ok 209, LINENUM:383
ok 210, LINENUM:384
ok 211, LINENUM:385
ok 212, LINENUM:386
ok 213, LINENUM:387
ok 214, LINENUM:388
ok 215, LINENUM:389
ok 216, LINENUM:390
ok 217, LINENUM:391
ok 218, LINENUM:392
ok 219, LINENUM:393
ok 220, LINENUM:396
ok 221, LINENUM:397
ok 222, LINENUM:398
ok 223, LINENUM:399
ok 224, LINENUM:400
ok 225, LINENUM:401
ok 226, LINENUM:402
ok 227, LINENUM:403
ok 228, LINENUM:404
ok 229, LINENUM:405
ok 230, LINENUM:406
ok 231, LINENUM:407
ok 232, LINENUM:408
ok 233, LINENUM:409
ok 234, LINENUM:413
ok 235, LINENUM:414
ok 236, LINENUM:415
ok 237, LINENUM:416
ok 238, LINENUM:419
ok 239, LINENUM:422
ok 240, LINENUM:423
ok 241, LINENUM:424
ok 242, LINENUM:427
ok 243, LINENUM:428
ok 244, LINENUM:431
ok 245, LINENUM:432
ok 246, LINENUM:435
ok 247, LINENUM:436
ok 248, LINENUM:439
ok 249, LINENUM:440

ok
All tests successful.
Files=1, Tests=249, 111 wallclock secs ( 0.07 usr  0.02 sys +  8.70 cusr 13.12 csys = 21.91 CPU)
Result: PASS
./tests/basic/afr/entry-self-heal.t: 1 new core files
End of test ./tests/basic/afr/entry-self-heal.t



Run complete

Number of tests found: 13
Number of tests selected for run based on pattern: 13
Number of tests skipped as they were marked bad:   0
Number of tests skipped because of known_issues:   0
Number of tests that were run: 13

Tests ordered by time taken, slowest to fastest: 


[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #590

2018-01-12 Thread jenkins
See 


Changes:

[Pranith Kumar K] debug/delay-gen: volume option fixes for GD2

[Nithya Balachandran] cli: Fixed a use_after_free

--
[...truncated 785.79 KB...]
./tests/basic/gfapi/anonymous_fd.t  -  9 second
./tests/basic/fop-sampling.t  -  9 second
./tests/performance/open-behind.t  -  8 second
./tests/features/readdir-ahead.t  -  8 second
./tests/bugs/upcall/bug-1227204.t  -  8 second
./tests/bugs/transport/bug-873367.t  -  8 second
./tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t  -  8 second
./tests/bugs/snapshot/bug-1260848.t  -  8 second
./tests/bugs/shard/bug-1260637.t  -  8 second
./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t  -  8 second
./tests/bugs/nfs/bug-1157223-symlink-mounting.t  -  8 second
./tests/bugs/md-cache/bug-1211863.t  -  8 second
./tests/bugs/glusterd/bug-949930.t  -  8 second
./tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t  -  8 second
./tests/bugs/glusterd/bug-1104642.t  -  8 second
./tests/bugs/ec/bug-1179050.t  -  8 second
./tests/bugs/distribute/bug-1122443.t  -  8 second
./tests/bugs/cli/bug-1087487.t  -  8 second
./tests/bugs/cli/bug-1022905.t  -  8 second
./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  8 second
./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t  -  8 second
./tests/basic/quota_aux_mount.t  -  8 second
./tests/basic/gfapi/libgfapi-fini-hang.t  -  8 second
./tests/basic/ec/ec-anonymous-fd.t  -  8 second
./tests/gfid2path/get-gfid-to-path.t  -  7 second
./tests/gfid2path/block-mount-access.t  -  7 second
./tests/features/ssl-authz.t  -  7 second
./tests/bugs/upcall/bug-1458127.t  -  7 second
./tests/bugs/replicate/bug-1250170-fsync.t  -  7 second
./tests/bugs/replicate/bug-1101647.t  -  7 second
./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t  -  7 second
./tests/bugs/posix/bug-1175711.t  -  7 second
./tests/bugs/nfs/bug-1143880-fix-gNFSd-auth-crash.t  -  7 second
./tests/bugs/io-cache/bug-read-hang.t  -  7 second
./tests/bugs/glusterd/bug-1293414-import-brickinfo-uuid.t  -  7 second
./tests/bugs/glusterd/bug-1109741-auth-mgmt-handshake.t  -  7 second
./tests/bugs/glusterd/bug-1094119-remove-replace-brick-support-from-glusterd.t  -  7 second
./tests/bugs/ec/bug-1227869.t  -  7 second
./tests/bugs/distribute/bug-1088231.t  -  7 second
./tests/bugs/changelog/bug-1208470.t  -  7 second
./tests/bugs/bug-1258069.t  -  7 second
./tests/bitrot/br-stub.t  -  7 second
./tests/basic/volume-status.t  -  7 second
./tests/basic/quota-nfs.t  -  7 second
./tests/basic/glusterd/arbiter-volume-probe.t  -  7 second
./tests/basic/ec/nfs.t  -  7 second
./tests/basic/ec/ec-read-policy.t  -  7 second
./tests/basic/afr/gfid-heal.t  -  7 second
./tests/basic/afr/arbiter-remove-brick.t  -  7 second
./tests/bugs/upcall/bug-1369430.t  -  6 second
./tests/bugs/replicate/bug-966018.t  -  6 second
./tests/bugs/replicate/bug-767585-gfid.t  -  6 second
./tests/bugs/replicate/bug-1365455.t  -  6 second
./tests/bugs/quota/bug-1243798.t  -  6 second
./tests/bugs/quota/bug-1104692.t  -  6 second
./tests/bugs/nfs/bug-915280.t  -  6 second
./tests/bugs/nfs/bug-877885.t  -  6 second
./tests/bugs/glusterfs/bug-893378.t  -  6 second
./tests/bugs/glusterd/bug-889630.t  -  6 second
./tests/bugs/glusterd/bug-859927.t  -  6 second
./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t  -  6 second
./tests/bugs/glusterd/bug-1223213-peerid-fix.t  -  6 second
./tests/bugs/glusterd/bug-1179175-uss-option-validation.t  -  6 second
./tests/bugs/glusterd/bug-1102656.t  -  6 second
./tests/bugs/distribute/bug-1368012.t  -  6 second
./tests/bugs/core/bug-986429.t  -  6 second
./tests/bugs/core/bug-834465.t  -  6 second
./tests/bugs/bug-1371806_2.t  -  6 second
./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t  -  6 second
./tests/bitrot/bug-1221914.t  -  6 second
./tests/basic/ec/ec-internal-xattrs.t  -  6 second
./tests/basic/ec/ec-fallocate.t  -  6 second
./tests/basic/ec/dht-rename.t  -  6 second
./tests/basic/distribute/bug-1265677-use-readdirp.t  -  6 second
./tests/performance/quick-read.t  -  5 second
./tests/bugs/upcall/bug-upcall-stat.t  -  5 second
./tests/bugs/snapshot/bug-1178079.t  -  5 second
./tests/bugs/snapshot/bug-1064768.t  -  5 second
./tests/bugs/shard/bug-1342298.t  -  5 second
./tests/bugs/shard/bug-1272986.t  -  5 second
./tests/bugs/shard/bug-1259651.t  -  5 second
./tests/bugs/shard/bug-1258334.t  -  5 second
./tests/bugs/shard/bug-1256580.t  -  5 second
./tests/bugs/replicate/bug-886998.t  -  5 second
./tests/bugs/replicate/bug-1480525.t  -  5 second
./tests/bugs/quota/bug-1287996.t  -  5 second
./tests/bugs/posix/bug-765380.t  -  5 second
./tests/bugs/nfs/zero-atime.t  -  5 second
./tests/bugs/nfs/socket-as-fifo.t  -  5 second
./tests/bugs/nfs/bug-1116503.t  -  5 second
./tests/bugs/md-cache/bug-1211863_unlink.t 

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #511

2018-01-12 Thread jenkins
See 


Changes:

[Pranith Kumar K] debug/delay-gen: volume option fixes for GD2

[Nithya Balachandran] cli: Fixed a use_after_free

--
[...truncated 250.48 KB...]
ok 24, LINENUM:41
not ok 25 Got "7" instead of "0", LINENUM:42
FAILED COMMAND: 0 get_pending_heal_count patchy
not ok 26 Got "" instead of "1", LINENUM:45
FAILED COMMAND: 1 afr_child_up_status patchy 0
not ok 27 Got "" instead of "1", LINENUM:46
FAILED COMMAND: 1 afr_child_up_status patchy 1
not ok 28 Got "" instead of "1", LINENUM:47
FAILED COMMAND: 1 afr_child_up_status patchy 2
ok 29, LINENUM:48
ok 30, LINENUM:49
ok 31, LINENUM:52
ok 32, LINENUM:53
not ok 33 Got "" instead of "1048576", LINENUM:56
FAILED COMMAND: 1048576 stat -c %s /mnt/glusterfs/0/file1
ok 34, LINENUM:57
ok 35, LINENUM:60
not ok 36 Got "" instead of "0", LINENUM:61
FAILED COMMAND: 0 stat -c %s /d/backends/patchy2/file2
ok 37, LINENUM:64
ok 38, LINENUM:65
ok 39, LINENUM:68
ok 40, LINENUM:69
Failed 6/40 subtests 

Test Summary Report
---
./tests/basic/afr/arbiter-add-brick.t (Wstat: 0 Tests: 40 Failed: 6)
  Failed tests:  25-28, 33, 36
Files=1, Tests=40, 215 wallclock secs ( 0.04 usr  0.01 sys + 13862104.86 cusr 5125291.88 csys = 18987396.79 CPU)
Result: FAIL
./tests/basic/afr/arbiter-add-brick.t: bad status 1

   *********************************
   *       REGRESSION FAILED       *
   * Retrying failed tests in case *
   * we got some spurious failures *
   *********************************

kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
rm: /build/install/var/run/gluster: is a directory

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #499

2018-01-12 Thread jenkins
See 


Changes:

[Pranith Kumar K] cluster/ec: Change [f]getxattr to parallel-dispatch-one

[Kotresh H R] fips/geo-rep: Replace MD5 with SHA256

[Pranith Kumar K] cluster/ec: Fix possible shift overflow

[Jeff Darcy] xlator.h: move options and other variables to the top of structure

[Jeff Darcy] performance/write-behind: fix bug while handling short writes

--
[...truncated 239.85 KB...]
ok 19, LINENUM:40
ok 20, LINENUM:42
ok 21, LINENUM:43
ok 22, LINENUM:44
ok 23, LINENUM:45
ok 24, LINENUM:46
ok 25, LINENUM:47
ok 26, LINENUM:50
ok 27, LINENUM:53
ok 28, LINENUM:54
ok 29, LINENUM:57
ok 30, LINENUM:60
ok 31, LINENUM:61
ok 32, LINENUM:63
ok 33, LINENUM:64
ok 34, LINENUM:65
ok
All tests successful.
Files=1, Tests=34, 20 wallclock secs ( 0.04 usr  0.02 sys +  1.62 cusr  2.46 csys =  4.14 CPU)
Result: PASS
End of test ./tests/basic/afr/add-brick-self-heal.t




[13:57:22] Running tests in file ./tests/basic/afr/arbiter-add-brick.t
./tests/basic/afr/arbiter-add-brick.t .. 
1..40
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:15
ok 9, LINENUM:16
ok 10, LINENUM:19
ok 11, LINENUM:20
ok 12, LINENUM:21
ok 13, LINENUM:25
ok 14, LINENUM:26
ok 15, LINENUM:29
ok 16, LINENUM:30
ok 17, LINENUM:32
ok 18, LINENUM:33
ok 19, LINENUM:36
ok 20, LINENUM:37
ok 21, LINENUM:38
ok 22, LINENUM:39
ok 23, LINENUM:40
ok 24, LINENUM:41
ok 25, LINENUM:42
ok 26, LINENUM:45
ok 27, LINENUM:46
ok 28, LINENUM:47
ok 29, LINENUM:48
ok 30, LINENUM:49
ok 31, LINENUM:52
ok 32, LINENUM:53
ok 33, LINENUM:56
ok 34, LINENUM:57
ok 35, LINENUM:60
ok 36, LINENUM:61
ok 37, LINENUM:64
ok 38, LINENUM:65
ok 39, LINENUM:68
ok 40, LINENUM:69
ok
All tests successful.
Files=1, Tests=40, 48 wallclock secs ( 0.05 usr  0.01 sys +  2.18 cusr  3.02 csys =  5.26 CPU)
Result: PASS
End of test ./tests/basic/afr/arbiter-add-brick.t




[13:58:10] Running tests in file ./tests/basic/afr/arbiter-cli.t
./tests/basic/afr/arbiter-cli.t .. 
1..7
ok 1, LINENUM:10
ok 2, LINENUM:11
ok 3, LINENUM:14
ok 4, LINENUM:17
ok 5, LINENUM:20
ok 6, LINENUM:21
ok 7, LINENUM:24
ok
All tests successful.
Files=1, Tests=7,  4 wallclock secs ( 0.02 usr  0.02 sys +  0.77 cusr  1.24 csys =  2.05 CPU)
Result: PASS
End of test ./tests/basic/afr/arbiter-cli.t




[13:58:14] Running tests in file ./tests/basic/afr/arbiter-mount.t
rm: /mnt/glusterfs/0/xy_zzy: Socket is not connected
mount_nfs: can't access /patchy: Unknown error: 10006
./tests/basic/afr/arbiter-mount.t .. 
1..22
ok 1, LINENUM:11
ok 2, LINENUM:12
ok 3, LINENUM:14
ok 4, LINENUM:15
ok 5, LINENUM:16
ok 6, LINENUM:17
ok 7, LINENUM:18
ok 8, LINENUM:20
ok 9, LINENUM:21
ok 10, LINENUM:25
ok 11, LINENUM:26
ok 12, LINENUM:27
ok 13, LINENUM:30
ok 14, LINENUM:32
ok 15, LINENUM:33
ok 16, LINENUM:34
ok 17, LINENUM:35
ok 18, LINENUM:37
ok 19, LINENUM:38
ok 20, LINENUM:39
ok 21, LINENUM:42
ok 22, LINENUM:43
ok
All tests successful.
Files=1, Tests=22, 37 wallclock secs ( 0.02 usr  0.02 sys +  1.18 cusr  1.74 csys =  2.96 CPU)
Result: PASS
End of test ./tests/basic/afr/arbiter-mount.t




[13:58:51] Running tests in file ./tests/basic/afr/arbiter-remove-brick.t
stat: /mnt/glusterfs/0/file: lstat: No such file or directory
umount: /mnt/glusterfs/0: Invalid argument
./tests/basic/afr/arbiter-remove-brick.t .. 
1..18
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:17
ok 9, LINENUM:18
ok 10, LINENUM:21
ok 11, LINENUM:22
ok 12, LINENUM:24
ok 13, LINENUM:25
ok 14, LINENUM:26
not 

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #510

2018-01-12 Thread jenkins
See 


Changes:

[Jeff Darcy] libglusterfs: export minimum necessary symbols

[R.Shyamsundar] cluster/dht: Use percentages for space check

[atin] glusterd: Nullify pmap entry for bricks belonging to same port

[Pranith Kumar K] performance/io-threads: volume option fixes for GD2

--
[...truncated 238.62 KB...]
Byte-compiling python modules (optimized versions) ...
__init__.py gf_event.py eventsapiconf.py eventtypes.py utils.py handlers.py
  -c -d 
'/build/install/libexec/glusterfs/events'
 /usr/bin/install -c 
 
'/build/install/libexec/glusterfs/events'
  -c -d 
'/build/install/etc/glusterfs'
 /usr/bin/install -c -m 644 
 
'/build/install/etc/glusterfs'
  -c -d 
'/build/install/libexec/glusterfs'
 /usr/bin/install -c 
 
'/build/install/libexec/glusterfs'
Making install in tools
  -c -d 
'/build/install/share/glusterfs/scripts'
 /usr/bin/install -c 
 
'/build/install/share/glusterfs/scripts'
make  install-data-hook
/usr/bin/install -c -d -m 755 /build/install/var/db/glusterd/events
  -c -d 
'/build/install/lib/pkgconfig'
 /usr/bin/install -c -m 644 glusterfs-api.pc libgfchangelog.pc libgfdb.pc 
'/build/install/lib/pkgconfig'

Start time Wed Jan  3 13:56:06 UTC 2018
Run the regression test
***

tset: standard error: Inappropriate ioctl for device
chflags: /netbsd: No such file or directory
umount: /mnt/nfs/0: Invalid argument
umount: /mnt/nfs/1: Invalid argument
umount: /mnt/glusterfs/0: Invalid argument
umount: /mnt/glusterfs/1: Invalid argument
umount: /mnt/glusterfs/2: Invalid argument
umount: /build/install/var/run/gluster/patchy: No such file or directory
/dev/rxbd0e: 4096.0MB (8388608 sectors) block size 16384, fragment size 2048
using 23 cylinder groups of 178.09MB, 11398 blks, 22528 inodes.
super-block backups (for fsck_ffs -b #) at:
32, 364768, 729504, 1094240, 1458976, 1823712, 2188448, 2553184, 2917920,
...

... GlusterFS Test Framework ...


The following required tools are missing:

  * dbench

 




[13:56:07] Running tests in file ./tests/basic/0symbol-check.t
Skip Linux specific test
./tests/basic/0symbol-check.t .. 
1..2
ok 1, LINENUM:
ok 2, LINENUM:
ok
All tests successful.
Files=1, Tests=2,  0 wallclock secs ( 0.03 usr  0.00 sys +  0.06 cusr  0.07 csys =  0.16 CPU)
Result: PASS
End of test ./tests/basic/0symbol-check.t




[13:56:07] Running tests in file ./tests/basic/afr/add-brick-self-heal.t
./tests/basic/afr/add-brick-self-heal.t .. 
1..34
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:8
ok 4, LINENUM:9
ok 5, LINENUM:10
ok 6, LINENUM:11
ok 7, LINENUM:12
ok 8, LINENUM:14
ok 9, LINENUM:15
ok 10, LINENUM:24
ok 11, LINENUM:27
ok 12, LINENUM:30
ok 13, LINENUM:31
ok 14, LINENUM:34
ok 15, LINENUM:35
ok 16, LINENUM:36
ok 17, LINENUM:38
ok 18, LINENUM:39
ok 19, LINENUM:40
ok 20, LINENUM:42
ok 21, LINENUM:43
ok 22, LINENUM:44
ok 23, LINENUM:45
ok 24, LINENUM:46
ok 25, LINENUM:47
ok 26, LINENUM:50
ok 27, LINENUM:53
ok 28, LINENUM:54
ok 29, LINENUM:57
ok 30, LINENUM:60
ok 31, LINENUM:61
ok 32, LINENUM:63
ok 33, LINENUM:64
ok 34, LINENUM:65
ok
All tests successful.
Files=1, Tests=34, 20 wallclock secs ( 0.05 usr  0.01 sys +  1.81 cusr  2.46 csys =  4.33 CPU)
Result: PASS
End of test ./tests/basic/afr/add-brick-self-heal.t




[13:56:27] Running tests in file ./tests/basic/afr/arbiter-add-brick.t
./tests/basic/afr/arbiter-add-brick.t .. 
1..40
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:15
ok 9, LINENUM:16
ok 10, LINENUM:19
ok 11, LINENUM:20
ok 12, LINENUM:21
ok 13, LINENUM:25
ok 14, LINENUM:26
ok 15, LINENUM:29
ok 16, LINENUM:30
ok 17, 

[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #576

2018-01-12 Thread jenkins
See 


Changes:

[Jeff Darcy] stripe, quiesce: volume option fixes

[Jeff Darcy] cli: Fixed coverity issue in cli-cmd-system.c

[Amar Tumballi] snapshot: fix several coverity issues in glusterd-snapshot.c

[Amar Tumballi] rchecksum/fips: Replace MD5 usage to enable fips support

[Pranith Kumar K] cluster/ec: Add default value for the redundancy option

--
[...truncated 783.46 KB...]
./tests/bugs/shard/bug-1260637.t  -  8 second
./tests/bugs/replicate/bug-1498570-client-iot-graph-check.t  -  8 second
./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t  -  8 second
./tests/bugs/posix/bug-1175711.t  -  8 second
./tests/bugs/posix/bug-1122028.t  -  8 second
./tests/bugs/nfs/bug-1157223-symlink-mounting.t  -  8 second
./tests/bugs/md-cache/bug-1211863.t  -  8 second
./tests/bugs/glusterfs/bug-872923.t  -  8 second
./tests/bugs/glusterd/bug-949930.t  -  8 second
./tests/bugs/glusterd/bug-1420637-volume-sync-fix.t  -  8 second
./tests/bugs/glusterd/bug-1293414-import-brickinfo-uuid.t  -  8 second
./tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t  -  8 second
./tests/bugs/glusterd/bug-1104642.t  -  8 second
./tests/bugs/glusterd/bug-1046308.t  -  8 second
./tests/bugs/ec/bug-1179050.t  -  8 second
./tests/bugs/distribute/bug-1122443.t  -  8 second
./tests/bugs/cli/bug-1022905.t  -  8 second
./tests/bugs/changelog/bug-1208470.t  -  8 second
./tests/bugs/bitrot/1209752-volume-status-should-show-bitrot-scrub-info.t  -  8 second
./tests/bitrot/br-stub.t  -  8 second
./tests/basic/tier/ctr-rename-overwrite.t  -  8 second
./tests/basic/quota-nfs.t  -  8 second
./tests/basic/quota_aux_mount.t  -  8 second
./tests/basic/md-cache/bug-1317785.t  -  8 second
./tests/basic/glusterd/arbiter-volume-probe.t  -  8 second
./tests/basic/gfapi/libgfapi-fini-hang.t  -  8 second
./tests/basic/ec/ec-anonymous-fd.t  -  8 second
./tests/gfid2path/get-gfid-to-path.t  -  7 second
./tests/gfid2path/block-mount-access.t  -  7 second
./tests/features/ssl-authz.t  -  7 second
./tests/bugs/upcall/bug-1458127.t  -  7 second
./tests/bugs/snapshot/bug-1260848.t  -  7 second
./tests/bugs/replicate/bug-767585-gfid.t  -  7 second
./tests/bugs/replicate/bug-1250170-fsync.t  -  7 second
./tests/bugs/replicate/bug-1101647.t  -  7 second
./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t  -  7 second
./tests/bugs/glusterfs/bug-902610.t  -  7 second
./tests/bugs/glusterd/bug-889630.t  -  7 second
./tests/bugs/glusterd/bug-859927.t  -  7 second
./tests/bugs/glusterd/bug-1499509-disconnect-in-brick-mux.t  -  7 second
./tests/bugs/glusterd/bug-1109741-auth-mgmt-handshake.t  -  7 second
./tests/bugs/ec/bug-1227869.t  -  7 second
./tests/bugs/distribute/bug-1088231.t  -  7 second
./tests/bugs/bug-1258069.t  -  7 second
./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  7 second
./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t  -  7 second
./tests/basic/volume-status.t  -  7 second
./tests/basic/ec/ec-read-policy.t  -  7 second
./tests/basic/afr/gfid-heal.t  -  7 second
./tests/bugs/upcall/bug-1369430.t  -  6 second
./tests/bugs/snapshot/bug-1064768.t  -  6 second
./tests/bugs/shard/bug-1258334.t  -  6 second
./tests/bugs/replicate/bug-966018.t  -  6 second
./tests/bugs/replicate/bug-1365455.t  -  6 second
./tests/bugs/quota/bug-1287996.t  -  6 second
./tests/bugs/quota/bug-1243798.t  -  6 second
./tests/bugs/quota/bug-1104692.t  -  6 second
./tests/bugs/nfs/bug-915280.t  -  6 second
./tests/bugs/nfs/bug-1116503.t  -  6 second
./tests/bugs/io-cache/bug-read-hang.t  -  6 second
./tests/bugs/io-cache/bug-858242.t  -  6 second
./tests/bugs/glusterfs-server/bug-873549.t  -  6 second
./tests/bugs/glusterfs/bug-856455.t  -  6 second
./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t  -  6 second
./tests/bugs/glusterd/bug-1223213-peerid-fix.t  -  6 second
./tests/bugs/glusterd/bug-1179175-uss-option-validation.t  -  6 second
./tests/bugs/glusterd/bug-1102656.t  -  6 second
./tests/bugs/glusterd/bug-1094119-remove-replace-brick-support-from-glusterd.t  -  6 second
./tests/bugs/distribute/bug-1368012.t  -  6 second
./tests/bugs/core/bug-986429.t  -  6 second
./tests/bugs/core/bug-913544.t  -  6 second
./tests/bugs/core/bug-908146.t  -  6 second
./tests/bugs/core/bug-834465.t  -  6 second
./tests/bugs/bug-1371806_2.t  -  6 second
./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t  -  6 second
./tests/bugs/bitrot/bug-1210684-scrub-pause-resume-error-handling.t  -  6 second
./tests/bitrot/bug-1221914.t  -  6 second
./tests/basic/ec/nfs.t  -  6 second
./tests/basic/ec/ec-internal-xattrs.t  -  6 second
./tests/basic/ec/ec-fallocate.t  -  6 second
./tests/basic/ec/dht-rename.t  -  6 second
./tests/basic/afr/heal-info.t  -  6 second
./tests/basic/afr/arbiter-remove-brick.t  -  6 second
./tests/features/delay-gen.t  -  5 

[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #595

2018-01-12 Thread jenkins
See 


--
[...truncated 1.30 MB...]
Thread 31 (Thread 11416):
#0  0x7f4cfb87a623 in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 30 (Thread 11200):
#0  0x7f4cfb87a623 in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 29 (Thread 11415):
#0  0x7f4cfb87a623 in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 28 (Thread 11414):
#0  0x7f4cfb87a623 in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 27 (Thread 11199):
#0  0x7f4cfb87a623 in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 26 (Thread 11198):
#0  0x7f4cfbf1d68c in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 25 (Thread 11412):
#0  0x7f4cfbf1d68c in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 24 (Thread 11197):
#0  0x7f4cfbf1d68c in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 23 (Thread 11411):
#0  0x7f4cfbf1d68c in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 22 (Thread 11196):
#0  0x7f4cfbf1d68c in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 21 (Thread 11409):
#0  0x7f4cfbf1d68c in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 20 (Thread 11194):
#0  0x7f4cfbf1d68c in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 19 (Thread 11286):
#0  0x7f4cfbf1da5e in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 18 (Thread 11190):
#0  0x7f4cfbf1da5e in ?? ()
No symbol table info available.
#1  0x0004 in ?? ()
No symbol table info available.
#2  0x015d27e8 in ?? ()
No symbol table info available.
#3  0x015d27c0 in ?? ()
No symbol table info available.
#4  0x0007 in ?? ()
No symbol table info available.
#5  0x5a54e3d1 in ?? ()
No symbol table info available.
#6  0x0002a04a in ?? ()
No symbol table info available.
#7  0x7f4cf2dfc9c0 in ?? ()
No symbol table info available.
#8  0x0003 in ?? ()
No symbol table info available.
#9  0x in ?? ()
No symbol table info available.

Thread 17 (Thread 11231):
#0  0x7f4cfbf1d68c in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 16 (Thread 11186):
#0  0x7f4cfbf1a2fd in ?? ()
No symbol table info available.
#1  0x7fffd8948d30 in ?? ()
No symbol table info available.
#2  0x7f4cfcf5ac55 in ?? ()
No symbol table info available.
#3  0x7f4cfbf1a1d0 in ?? ()
No symbol table info available.
#4  0x7f4ceffa5d28 in ?? ()
No symbol table info available.
#5  0x015c8be8 in ?? ()
No symbol table info available.
#6  0x in ?? ()
No symbol table info available.

Thread 15 (Thread 11227):
#0  0x7f4cfb87a623 in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 14 (Thread 11418):
#0  0x7f4cfb845c4d in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 13 (Thread 11413):
#0  0x7f4cfbf1d68c in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 12 (Thread 11324):
#0  0x7f4cfbf1da5e in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 11 (Thread 11232):
#0  0x7f4cfb8821c3 in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 10 (Thread 11230):
#0  0x7f4cfbf1da5e in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 9 (Thread 11229):
#0  0x7f4cfb845c4d in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 8 (Thread 11228):
#0  0x7f4cfb845c4d in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 7 (Thread 11226):
#0  0x7f4cfb87a623 in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol table info available.

Thread 6 (Thread 11224):
#0  0x7f4cfbf1d68c in ?? ()
No symbol table info available.
#1  0x in ?? ()
No symbol 

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #513

2018-01-12 Thread jenkins
See 


Changes:

[Jeff Darcy] libglusterfs: Include key name in data type validation

--
[...truncated 12.90 KB...]
  CC   argp-help.o
--- argp-parse.o ---
  CC   argp-parse.o
--- argp-fmtstream.o ---
:
 In function '_argp_fmtstream_update':
--- argp-help.o ---
:
 In function 'fill_in_uparams':
:192:2:
 warning: array subscript has type 'char' [-Wchar-subscripts]
  SKIPWS (var);
  ^
:194:2:
 warning: array subscript has type 'char' [-Wchar-subscripts]
  if (isalpha (*var))
  ^
:201:6:
 warning: array subscript has type 'char' [-Wchar-subscripts]
  while (isalnum (*arg) || *arg == '-' || *arg == '_')
  ^
:205:6:
 warning: array subscript has type 'char' [-Wchar-subscripts]
  SKIPWS (arg);
  ^
:212:3:
 warning: array subscript has type 'char' [-Wchar-subscripts]
   SKIPWS (arg);
   ^
:226:6:
 warning: array subscript has type 'char' [-Wchar-subscripts]
  else if (isdigit (*arg))
  ^
:229:3:
 warning: array subscript has type 'char' [-Wchar-subscripts]
   while (isdigit (*arg))
   ^
:231:3:
 warning: array subscript has type 'char' [-Wchar-subscripts]
   SKIPWS (arg);
   ^
--- argp-pv.o ---
  CC   argp-pv.o
--- argp-fmtstream.o ---
:209:4:
 warning: array subscript has type 'char' [-Wchar-subscripts]
while (p >= buf && !isblank (*p))
^
:219:3:
 warning: array subscript has type 'char' [-Wchar-subscripts]
   while (p >= buf && isblank (*p));
   ^
:230:8:
 warning: array subscript has type 'char' [-Wchar-subscripts]
while (p < nl && !isblank (*p));
^
:243:8:
 warning: array subscript has type 'char' [-Wchar-subscripts]
while (isblank (*p));
^
--- argp-pvh.o ---
  CC   argp-pvh.o
--- mempcpy.o ---
  CC   mempcpy.o
--- strchrnul.o ---
  CC   strchrnul.o
--- libargp.a ---
  AR   libargp.a
Making all in rpc/xdr/gen
--- glusterfs3-xdr.x ---
--- glusterfs4-xdr.x ---
--- cli1-xdr.x ---
--- nlm4-xdr.x ---
--- nsm-xdr.x ---
--- rpc-common-xdr.x ---
--- glusterd1-xdr.x ---
--- acl3-xdr.x ---
--- portmap-xdr.x ---
--- mount3udp.x ---
--- changelog-xdr.x ---
--- glusterfs-fops.x ---
--- glusterfs3-xdr.c ---
--- glusterfs4-xdr.c ---
--- cli1-xdr.c ---
--- nlm4-xdr.c ---
--- glusterfs4-xdr.c ---
if [ ! -e ../../../rpc/xdr/src/glusterfs4-xdr.c -o glusterfs4-xdr.x -nt 
../../../rpc/xdr/src/glusterfs4-xdr.c ]; then  rpcgen -c -o 
../../../rpc/xdr/src/glusterfs4-xdr.c glusterfs4-xdr.x ; fi
--- glusterfs3-xdr.c ---
if [ ! -e ../../../rpc/xdr/src/glusterfs3-xdr.c -o glusterfs3-xdr.x -nt 
../../../rpc/xdr/src/glusterfs3-xdr.c ]; then  rpcgen -c -o 
../../../rpc/xdr/src/glusterfs3-xdr.c glusterfs3-xdr.x ; fi
--- cli1-xdr.c ---
if [ ! -e ../../../rpc/xdr/src/cli1-xdr.c -o cli1-xdr.x -nt 
../../../rpc/xdr/src/cli1-xdr.c ]; then  rpcgen -c -o 
../../../rpc/xdr/src/cli1-xdr.c cli1-xdr.x ; fi
--- nlm4-xdr.c ---
if [ ! -e ../../../rpc/xdr/src/nlm4-xdr.c -o nlm4-xdr.x -nt 
../../../rpc/xdr/src/nlm4-xdr.c ]; then  rpcgen -c -o 
../../../rpc/xdr/src/nlm4-xdr.c nlm4-xdr.x ; fi
--- nsm-xdr.c ---
if [ ! -e ../../../rpc/xdr/src/nsm-xdr.c -o nsm-xdr.x -nt 
../../../rpc/xdr/src/nsm-xdr.c ]; then  rpcgen -c -o 
../../../rpc/xdr/src/nsm-xdr.c nsm-xdr.x ; fi
--- rpc-common-xdr.c ---
--- glusterd1-xdr.c ---
if [ ! -e ../../../rpc/xdr/src/glusterd1-xdr.c -o glusterd1-xdr.x -nt 
../../../rpc/xdr/src/glusterd1-xdr.c ]; then  rpcgen -c -o 
../../../rpc/xdr/src/glusterd1-xdr.c glusterd1-xdr.x ; fi
--- rpc-common-xdr.c ---
if [ ! -e ../../../rpc/xdr/src/rpc-common-xdr.c -o rpc-common-xdr.x -nt 
../../../rpc/xdr/src/rpc-common-xdr.c ]; then  rpcgen -c -o 
../../../rpc/xdr/src/rpc-common-xdr.c rpc-common-xdr.x ; fi
--- acl3-xdr.c ---
if [ ! -e 
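The rpcgen rules above all follow one regenerate-if-stale pattern: run rpcgen only when the generated .c file is missing or its .x source is newer (the `-nt` test). A standalone sketch of that check, with hypothetical file names and `echo` standing in for the real `rpcgen -c -o` invocation:

```shell
#!/bin/sh
# Regenerate $out from $src only if $out is missing or stale,
# mirroring the '! -e ... -o ... -nt ...' test in the rules above.
regen() {
    src=$1 out=$2
    if [ ! -e "$out" -o "$src" -nt "$out" ]; then
        echo "regen $out"      # real rules run: rpcgen -c -o "$out" "$src"
    else
        echo "fresh $out"
    fi
}

cd "$(mktemp -d)"
touch demo.x
regen demo.x demo.c      # demo.c missing -> "regen demo.c"
touch demo.c
regen demo.x demo.c      # demo.c not older -> "fresh demo.c"
```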

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #496

2018-01-12 Thread jenkins
See 


Changes:

[Jeff Darcy] posix: fix use-after-free by calling STACK_UNWIND_STRICT after 
error

[Jeff Darcy] fips: Replace md5sum usage to enable fips support

[Amar Tumballi] leases: Fix coverity issues

--
[...truncated 238.25 KB...]
... GlusterFS Test Framework ...


The following required tools are missing:

  * dbench

 




[13:56:56] Running tests in file ./tests/basic/0symbol-check.t
Skip Linux specific test
./tests/basic/0symbol-check.t .. 
1..2
ok 1, LINENUM:
ok 2, LINENUM:
ok
All tests successful.
Files=1, Tests=2,  0 wallclock secs ( 0.04 usr  0.00 sys +  0.04 cusr  0.08 
csys =  0.16 CPU)
Result: PASS
End of test ./tests/basic/0symbol-check.t




[13:56:57] Running tests in file ./tests/basic/afr/add-brick-self-heal.t
./tests/basic/afr/add-brick-self-heal.t .. 
1..34
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:8
ok 4, LINENUM:9
ok 5, LINENUM:10
ok 6, LINENUM:11
ok 7, LINENUM:12
ok 8, LINENUM:14
ok 9, LINENUM:15
ok 10, LINENUM:24
ok 11, LINENUM:27
ok 12, LINENUM:30
ok 13, LINENUM:31
ok 14, LINENUM:34
ok 15, LINENUM:35
ok 16, LINENUM:36
ok 17, LINENUM:38
ok 18, LINENUM:39
ok 19, LINENUM:40
ok 20, LINENUM:42
ok 21, LINENUM:43
ok 22, LINENUM:44
ok 23, LINENUM:45
ok 24, LINENUM:46
ok 25, LINENUM:47
ok 26, LINENUM:50
ok 27, LINENUM:53
ok 28, LINENUM:54
ok 29, LINENUM:57
ok 30, LINENUM:60
ok 31, LINENUM:61
ok 32, LINENUM:63
ok 33, LINENUM:64
ok 34, LINENUM:65
ok
All tests successful.
Files=1, Tests=34, 19 wallclock secs ( 0.04 usr  0.02 sys +  1.67 cusr  2.29 
csys =  4.02 CPU)
Result: PASS
End of test ./tests/basic/afr/add-brick-self-heal.t




[13:57:17] Running tests in file ./tests/basic/afr/arbiter-add-brick.t
./tests/basic/afr/arbiter-add-brick.t .. 
1..40
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:15
ok 9, LINENUM:16
ok 10, LINENUM:19
ok 11, LINENUM:20
ok 12, LINENUM:21
ok 13, LINENUM:25
ok 14, LINENUM:26
ok 15, LINENUM:29
ok 16, LINENUM:30
ok 17, LINENUM:32
ok 18, LINENUM:33
ok 19, LINENUM:36
ok 20, LINENUM:37
ok 21, LINENUM:38
ok 22, LINENUM:39
ok 23, LINENUM:40
ok 24, LINENUM:41
not ok 25 Got "1" instead of "0", LINENUM:42
FAILED COMMAND: 0 get_pending_heal_count patchy
ok 26, LINENUM:45
ok 27, LINENUM:46
ok 28, LINENUM:47
ok 29, LINENUM:48
ok 30, LINENUM:49
ok 31, LINENUM:52
ok 32, LINENUM:53
ok 33, LINENUM:56
ok 34, LINENUM:57
ok 35, LINENUM:60
ok 36, LINENUM:61
ok 37, LINENUM:64
ok 38, LINENUM:65
ok 39, LINENUM:68
ok 40, LINENUM:69
Failed 1/40 subtests 

Test Summary Report
---
./tests/basic/afr/arbiter-add-brick.t (Wstat: 0 Tests: 40 Failed: 1)
  Failed test:  25
Files=1, Tests=40, 126 wallclock secs ( 0.04 usr  0.02 sys + 16769751.84 cusr 
17679473.11 csys = 34449225.01 CPU)
Result: FAIL
./tests/basic/afr/arbiter-add-brick.t: bad status 1
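The "ok N, LINENUM:M" / "not ok N Got ... LINENUM:M" lines in these runs are TAP output from the test framework's EXPECT-style helpers, which compare an expected value against a command's stdout. A minimal sketch of how such a line could be produced (hypothetical; the real harness lives in the tests' include files):

```shell
#!/bin/sh
# TAP-style EXPECT: compare expected value with the command's output
# and emit "ok N" or "not ok N Got ... instead of ..." accordingly.
t=0
EXPECT() {
    expected=$1; shift
    got=$("$@")
    t=$((t + 1))
    if [ "$got" = "$expected" ]; then
        echo "ok $t"
    else
        echo "not ok $t Got \"$got\" instead of \"$expected\""
        echo "FAILED COMMAND: $expected $*"
    fi
}

EXPECT "hello" echo hello        # -> ok 1
EXPECT "0" echo 1                # -> not ok 2 Got "1" instead of "0"
```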

   *********************************
   *       REGRESSION FAILED       *
   * Retrying failed tests in case *
   * we got some spurious failures *
   *********************************

stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: 

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #517

2018-01-12 Thread jenkins
See 

--
[...truncated 272.83 KB...]
./tests/basic/afr/granular-esh/granular-indices-but-non-granular-heal.t .. 
1..29
ok 1, LINENUM:11
ok 2, LINENUM:12
ok 3, LINENUM:14
ok 4, LINENUM:15
ok 5, LINENUM:16
ok 6, LINENUM:17
ok 7, LINENUM:18
ok 8, LINENUM:19
ok 9, LINENUM:20
ok 10, LINENUM:22
ok 11, LINENUM:25
ok 12, LINENUM:34
ok 13, LINENUM:39
ok 14, LINENUM:39
ok 15, LINENUM:43
ok 16, LINENUM:46
ok 17, LINENUM:47
ok 18, LINENUM:48
ok 19, LINENUM:51
ok 20, LINENUM:52
ok 21, LINENUM:53
ok 22, LINENUM:54
ok 23, LINENUM:60
ok 24, LINENUM:63
ok 25, LINENUM:68
ok 26, LINENUM:68
ok 27, LINENUM:72
ok 28, LINENUM:73
ok 29, LINENUM:74
ok
All tests successful.
Files=1, Tests=29, 29 wallclock secs ( 0.03 usr  0.01 sys +  1.65 cusr  2.85 
csys =  4.54 CPU)
Result: PASS
End of test 
./tests/basic/afr/granular-esh/granular-indices-but-non-granular-heal.t




[14:09:51] Running tests in file ./tests/basic/afr/granular-esh/replace-brick.t
./tests/basic/afr/granular-esh/replace-brick.t .. 
1..34
ok 1, LINENUM:7
ok 2, LINENUM:8
ok 3, LINENUM:9
ok 4, LINENUM:10
ok 5, LINENUM:11
ok 6, LINENUM:12
ok 7, LINENUM:13
ok 8, LINENUM:14
ok 9, LINENUM:15
ok 10, LINENUM:17
ok 11, LINENUM:26
ok 12, LINENUM:29
ok 13, LINENUM:32
ok 14, LINENUM:35
ok 15, LINENUM:38
ok 16, LINENUM:41
ok 17, LINENUM:43
ok 18, LINENUM:44
ok 19, LINENUM:46
ok 20, LINENUM:47
ok 21, LINENUM:48
ok 22, LINENUM:49
ok 23, LINENUM:50
ok 24, LINENUM:53
ok 25, LINENUM:56
ok 26, LINENUM:59
ok 27, LINENUM:60
ok 28, LINENUM:63
ok 29, LINENUM:65
ok 30, LINENUM:68
ok 31, LINENUM:69
ok 32, LINENUM:71
ok 33, LINENUM:72
ok 34, LINENUM:73
ok
All tests successful.
Files=1, Tests=34, 23 wallclock secs ( 0.04 usr  0.00 sys + 8465936.41 cusr 
12698904.81 csys = 21164841.26 CPU)
Result: PASS
End of test ./tests/basic/afr/granular-esh/replace-brick.t




[14:10:14] Running tests in file ./tests/basic/afr/heal-info.t
./tests/basic/afr/heal-info.t .. 
1..9
ok 1, LINENUM:21
ok 2, LINENUM:22
ok 3, LINENUM:23
ok 4, LINENUM:24
ok 5, LINENUM:25
ok 6, LINENUM:26
ok 7, LINENUM:27
ok 8, LINENUM:33
ok 9, LINENUM:34
ok
All tests successful.
Files=1, Tests=9, 21 wallclock secs ( 0.04 usr  0.00 sys +  2.58 cusr  3.68 
csys =  6.30 CPU)
Result: PASS
End of test ./tests/basic/afr/heal-info.t




[14:10:35] Running tests in file ./tests/basic/afr/heal-quota.t
touch: /mnt/glusterfs/0/b: Socket is not connected
dd: block size `1M': illegal number
cat: /proc/19225/cmdline: No such file or directory
Usage: gf_attach uds_path volfile_path (to attach)
   gf_attach -d uds_path brick_path (to detach)
dd: block size `1M': illegal number
./tests/basic/afr/heal-quota.t .. 
1..19
ok 1, LINENUM:10
ok 2, LINENUM:11
ok 3, LINENUM:12
ok 4, LINENUM:13
ok 5, LINENUM:14
ok 6, LINENUM:16
ok 7, LINENUM:17
ok 8, LINENUM:18
ok 9, LINENUM:19
ok 10, LINENUM:20
not ok 11 , LINENUM:22
FAILED COMMAND: touch /mnt/glusterfs/0/a /mnt/glusterfs/0/b
ok 12, LINENUM:24
ok 13, LINENUM:26
ok 14, LINENUM:27
ok 15, LINENUM:28
ok 16, LINENUM:29
ok 17, LINENUM:30
ok 18, LINENUM:32
ok 19, LINENUM:33
Failed 1/19 subtests 

Test Summary Report
---
./tests/basic/afr/heal-quota.t (Wstat: 0 Tests: 19 Failed: 1)
  Failed test:  11
Files=1, Tests=19, 24 wallclock secs ( 0.01 usr  0.03 sys +  1.46 cusr 
21164883.38 csys = 21164884.88 CPU)
Result: FAIL
./tests/basic/afr/heal-quota.t: bad status 1

   *********************************
   *       REGRESSION FAILED       *
   * Retrying failed tests in case *
   * we got some spurious failures *
   *********************************

touch: /mnt/glusterfs/0/b: Socket is not connected
dd: block size `1M': illegal number
cat: /proc/2836/cmdline: No such file or directory
Usage: gf_attach uds_path volfile_path (to attach)
   gf_attach -d uds_path brick_path (to detach)
dd: block size `1M': illegal number
./tests/basic/afr/heal-quota.t .. 
1..19
ok 1, LINENUM:10
ok 2, LINENUM:11
ok 3, LINENUM:12
ok 4, LINENUM:13
ok 5, LINENUM:14
ok 6, LINENUM:16
ok 7, LINENUM:17
ok 8, LINENUM:18
ok 9, LINENUM:19
ok 10, LINENUM:20
not ok 11 , LINENUM:22
FAILED COMMAND: touch /mnt/glusterfs/0/a /mnt/glusterfs/0/b
ok 12, LINENUM:24
ok 13, LINENUM:26
ok 14, LINENUM:27
ok 15, LINENUM:28
ok 16, LINENUM:29
ok 17, LINENUM:30
ok 18, LINENUM:32
ok 19, LINENUM:33
Failed 1/19 subtests 

Test Summary Report
---
./tests/basic/afr/heal-quota.t (Wstat: 0 Tests: 19 Failed: 1)
  Failed test:  11
Files=1, 

[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #589

2018-01-12 Thread jenkins
See 


Changes:

[Jeff Darcy] libglusterfs: export minimum necessary symbols

[R.Shyamsundar] cluster/dht: Use percentages for space check

[atin] glusterd: Nullify pmap entry for bricks belonging to same port

[Pranith Kumar K] performance/io-threads: volume option fixes for GD2

--
[...truncated 787.64 KB...]
./tests/basic/quota_aux_mount.t  -  9 second
./tests/basic/md-cache/bug-1317785.t  -  9 second
./tests/basic/inode-quota-enforcing.t  -  9 second
./tests/basic/gfapi/upcall-cache-invalidate.t  -  9 second
./tests/basic/gfapi/glfs_xreaddirplus_r.t  -  9 second
./tests/basic/gfapi/anonymous_fd.t  -  9 second
./tests/basic/fop-sampling.t  -  9 second
./tests/gfid2path/get-gfid-to-path.t  -  8 second
./tests/bugs/upcall/bug-1227204.t  -  8 second
./tests/bugs/snapshot/bug-1260848.t  -  8 second
./tests/bugs/shard/bug-1260637.t  -  8 second
./tests/bugs/replicate/bug-1101647.t  -  8 second
./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t
  -  8 second
./tests/bugs/posix/bug-1175711.t  -  8 second
./tests/bugs/nfs/bug-1116503.t  -  8 second
./tests/bugs/md-cache/bug-1211863.t  -  8 second
./tests/bugs/glusterfs/bug-902610.t  -  8 second
./tests/bugs/glusterd/bug-949930.t  -  8 second
./tests/bugs/glusterd/bug-1420637-volume-sync-fix.t  -  8 second
./tests/bugs/glusterd/bug-1323287-real_path-handshake-test.t  -  8 second
./tests/bugs/glusterd/bug-1293414-import-brickinfo-uuid.t  -  8 second
./tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t  -  8 second
./tests/bugs/glusterd/bug-1104642.t  -  8 second
./tests/bugs/ec/bug-1227869.t  -  8 second
./tests/bugs/distribute/bug-1122443.t  -  8 second
./tests/bugs/distribute/bug-1086228.t  -  8 second
./tests/bugs/changelog/bug-1208470.t  -  8 second
./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  8 second
./tests/bugs/bitrot/1209752-volume-status-should-show-bitrot-scrub-info.t  -  8 
second
./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t  -  8 
second
./tests/basic/volume-status.t  -  8 second
./tests/basic/tier/ctr-rename-overwrite.t  -  8 second
./tests/basic/quota-nfs.t  -  8 second
./tests/basic/glusterd/arbiter-volume-probe.t  -  8 second
./tests/basic/gfapi/libgfapi-fini-hang.t  -  8 second
./tests/basic/ec/ec-read-policy.t  -  8 second
./tests/basic/ec/ec-anonymous-fd.t  -  8 second
./tests/gfid2path/block-mount-access.t  -  7 second
./tests/features/ssl-authz.t  -  7 second
./tests/bugs/upcall/bug-1458127.t  -  7 second
./tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t  -  7 second
./tests/bugs/replicate/bug-767585-gfid.t  -  7 second
./tests/bugs/replicate/bug-1250170-fsync.t  -  7 second
./tests/bugs/quota/bug-1243798.t  -  7 second
./tests/bugs/nfs/bug-1157223-symlink-mounting.t  -  7 second
./tests/bugs/nfs/bug-1143880-fix-gNFSd-auth-crash.t  -  7 second
./tests/bugs/glusterfs/bug-893378.t  -  7 second
./tests/bugs/glusterd/bug-889630.t  -  7 second
./tests/bugs/glusterd/bug-859927.t  -  7 second
./tests/bugs/glusterd/bug-1223213-peerid-fix.t  -  7 second
./tests/bugs/glusterd/bug-1109741-auth-mgmt-handshake.t  -  7 second
./tests/bugs/distribute/bug-1368012.t  -  7 second
./tests/bugs/distribute/bug-1088231.t  -  7 second
./tests/bugs/core/bug-834465.t  -  7 second
./tests/bugs/bug-1258069.t  -  7 second
./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t  -  7 second
./tests/bitrot/br-stub.t  -  7 second
./tests/basic/afr/heal-info.t  -  7 second
./tests/basic/afr/gfid-heal.t  -  7 second
./tests/bugs/upcall/bug-1369430.t  -  6 second
./tests/bugs/snapshot/bug-1178079.t  -  6 second
./tests/bugs/snapshot/bug-1064768.t  -  6 second
./tests/bugs/shard/bug-1258334.t  -  6 second
./tests/bugs/shard/bug-1256580.t  -  6 second
./tests/bugs/replicate/bug-966018.t  -  6 second
./tests/bugs/replicate/bug-1365455.t  -  6 second
./tests/bugs/quota/bug-1287996.t  -  6 second
./tests/bugs/quota/bug-1104692.t  -  6 second
./tests/bugs/nfs/bug-915280.t  -  6 second
./tests/bugs/nfs/bug-877885.t  -  6 second
./tests/bugs/io-cache/bug-read-hang.t  -  6 second
./tests/bugs/io-cache/bug-858242.t  -  6 second
./tests/bugs/glusterfs-server/bug-873549.t  -  6 second
./tests/bugs/glusterfs/bug-856455.t  -  6 second
./tests/bugs/glusterd/bug-948729/bug-948729-force.t  -  6 second
./tests/bugs/glusterd/bug-1499509-disconnect-in-brick-mux.t  -  6 second
./tests/bugs/glusterd/bug-1482906-peer-file-blank-line.t  -  6 second
./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t  -  6 second
./tests/bugs/glusterd/bug-1179175-uss-option-validation.t  -  6 second
./tests/bugs/glusterd/bug-1102656.t  -  6 second
./tests/bugs/glusterd/bug-1094119-remove-replace-brick-support-from-glusterd.t  
-  6 second
./tests/bugs/distribute/bug-884597.t  -  6 second
./tests/bugs/core/bug-986429.t  -  6 second
./tests/bugs/core/bug-908146.t  -  6 

[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #594

2018-01-12 Thread jenkins
See 


Changes:

[atin] glusterd: fix up volume option flags

[Pranith Kumar K] tests: Use /dev/urandom instead of /dev/random for dd

[Xavier Hernandez] glusterd: get-state memory leak fix

--
[...truncated 1.13 MB...]
./tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t  -  9 second
./tests/bugs/snapshot/bug-1260848.t  -  9 second
./tests/bugs/nfs/bug-877885.t  -  9 second
./tests/bugs/nfs/bug-847622.t  -  9 second
./tests/bugs/nfs/bug-1157223-symlink-mounting.t  -  9 second
./tests/bugs/md-cache/bug-1211863.t  -  9 second
./tests/bugs/glusterfs/bug-902610.t  -  9 second
./tests/bugs/glusterd/bug-949930.t  -  9 second
./tests/bugs/glusterd/bug-1420637-volume-sync-fix.t  -  9 second
./tests/bugs/glusterd/bug-1121584-brick-existing-validation-for-remove-brick-status-stop.t
  -  9 second
./tests/bugs/glusterd/bug-1109741-auth-mgmt-handshake.t  -  9 second
./tests/bugs/glusterd/bug-1046308.t  -  9 second
./tests/bugs/gfapi/bug-1447266/1460514.t  -  9 second
./tests/bugs/ec/bug-1179050.t  -  9 second
./tests/bugs/distribute/bug-1122443.t  -  9 second
./tests/bugs/cli/bug-1087487.t  -  9 second
./tests/bugs/changelog/bug-1208470.t  -  9 second
./tests/bugs/bug-1258069.t  -  9 second
./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  9 second
./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t  -  9 
second
./tests/bitrot/br-stub.t  -  9 second
./tests/basic/volume-status.t  -  9 second
./tests/basic/tier/ctr-rename-overwrite.t  -  9 second
./tests/basic/quota-nfs.t  -  9 second
./tests/basic/md-cache/bug-1317785.t  -  9 second
./tests/basic/gfapi/upcall-cache-invalidate.t  -  9 second
./tests/basic/gfapi/anonymous_fd.t  -  9 second
./tests/basic/fop-sampling.t  -  9 second
./tests/basic/ec/ec-read-policy.t  -  9 second
./tests/basic/ec/ec-anonymous-fd.t  -  9 second
./tests/gfid2path/get-gfid-to-path.t  -  8 second
./tests/gfid2path/block-mount-access.t  -  8 second
./tests/bugs/transport/bug-873367.t  -  8 second
./tests/bugs/shard/bug-1260637.t  -  8 second
./tests/bugs/replicate/bug-1365455.t  -  8 second
./tests/bugs/replicate/bug-1250170-fsync.t  -  8 second
./tests/bugs/replicate/bug-1101647.t  -  8 second
./tests/bugs/posix/bug-1175711.t  -  8 second
./tests/bugs/posix/bug-1122028.t  -  8 second
./tests/bugs/nfs/showmount-many-clients.t  -  8 second
./tests/bugs/nfs/bug-915280.t  -  8 second
./tests/bugs/io-cache/bug-read-hang.t  -  8 second
./tests/bugs/glusterd/bug-1499509-disconnect-in-brick-mux.t  -  8 second
./tests/bugs/glusterd/bug-1323287-real_path-handshake-test.t  -  8 second
./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t  -  8 second
./tests/bugs/glusterd/bug-1223213-peerid-fix.t  -  8 second
./tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t  -  8 second
./tests/bugs/glusterd/bug-1104642.t  -  8 second
./tests/bugs/ec/bug-1227869.t  -  8 second
./tests/bugs/distribute/bug-1368012.t  -  8 second
./tests/bugs/distribute/bug-1088231.t  -  8 second
./tests/bugs/core/bug-986429.t  -  8 second
./tests/bugs/bug-1371806_2.t  -  8 second
./tests/basic/gfapi/glfs_xreaddirplus_r.t  -  8 second
./tests/basic/gfapi/glfd-lkowner.t  -  8 second
./tests/basic/gfapi/gfapi-dup.t  -  8 second
./tests/basic/gfapi/bug-1241104.t  -  8 second
./tests/basic/afr/gfid-heal.t  -  8 second
./tests/features/ssl-authz.t  -  7 second
./tests/bugs/upcall/bug-1369430.t  -  7 second
./tests/bugs/snapshot/bug-1064768.t  -  7 second
./tests/bugs/shard/bug-1258334.t  -  7 second
./tests/bugs/shard/bug-1256580.t  -  7 second
./tests/bugs/replicate/bug-966018.t  -  7 second
./tests/bugs/replicate/bug-767585-gfid.t  -  7 second
./tests/bugs/quota/bug-1243798.t  -  7 second
./tests/bugs/quota/bug-1104692.t  -  7 second
./tests/bugs/md-cache/afr-stale-read.t  -  7 second
./tests/bugs/io-cache/bug-858242.t  -  7 second
./tests/bugs/glusterfs/bug-856455.t  -  7 second
./tests/bugs/glusterd/bug-948729/bug-948729.t  -  7 second
./tests/bugs/glusterd/bug-948729/bug-948729-force.t  -  7 second
./tests/bugs/glusterd/bug-889630.t  -  7 second
./tests/bugs/glusterd/bug-1482906-peer-file-blank-line.t  -  7 second
./tests/bugs/glusterd/bug-1179175-uss-option-validation.t  -  7 second
./tests/bugs/glusterd/bug-1102656.t  -  7 second
./tests/bugs/glusterd/bug-1094119-remove-replace-brick-support-from-glusterd.t  
-  7 second
./tests/bugs/distribute/bug-912564.t  -  7 second
./tests/bugs/distribute/bug-884597.t  -  7 second
./tests/bugs/core/bug-913544.t  -  7 second
./tests/bugs/core/bug-834465.t  -  7 second
./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t  -  7 second
./tests/bitrot/bug-1221914.t  -  7 second
./tests/basic/nl-cache.t  -  7 second
./tests/basic/hardlink-limit.t  -  7 second
./tests/basic/gfapi/libgfapi-fini-hang.t  -  7 second
./tests/basic/ec/ec-fallocate.t  -  7 second
./tests/basic/ec/dht-rename.t  -  7 second

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #504

2018-01-12 Thread jenkins
See 


Changes:

[Jeff Darcy] Use RTLD_LOCAL for symbol resolution

[Raghavendra G] rpc: fix use-after-free of clnt after rpc transport cleanup

[Amar Tumballi] rpc-transport/rdma: Fix coverity issues in rdma transport

[Aravinda VK] eventsapi: JWT signing without external dependency

--
[...truncated 238.70 KB...]
... GlusterFS Test Framework ...


The following required tools are missing:

  * dbench

 




[13:56:16] Running tests in file ./tests/basic/0symbol-check.t
Skip Linux specific test
./tests/basic/0symbol-check.t .. 
1..2
ok 1, LINENUM:
ok 2, LINENUM:
ok
All tests successful.
Files=1, Tests=2,  0 wallclock secs ( 0.03 usr  0.01 sys +  0.04 cusr  0.10 
csys =  0.18 CPU)
Result: PASS
End of test ./tests/basic/0symbol-check.t




[13:56:16] Running tests in file ./tests/basic/afr/add-brick-self-heal.t
./tests/basic/afr/add-brick-self-heal.t .. 
1..34
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:8
ok 4, LINENUM:9
ok 5, LINENUM:10
ok 6, LINENUM:11
ok 7, LINENUM:12
ok 8, LINENUM:14
ok 9, LINENUM:15
ok 10, LINENUM:24
ok 11, LINENUM:27
ok 12, LINENUM:30
ok 13, LINENUM:31
ok 14, LINENUM:34
ok 15, LINENUM:35
ok 16, LINENUM:36
ok 17, LINENUM:38
ok 18, LINENUM:39
ok 19, LINENUM:40
ok 20, LINENUM:42
ok 21, LINENUM:43
ok 22, LINENUM:44
ok 23, LINENUM:45
ok 24, LINENUM:46
ok 25, LINENUM:47
ok 26, LINENUM:50
ok 27, LINENUM:53
ok 28, LINENUM:54
ok 29, LINENUM:57
ok 30, LINENUM:60
ok 31, LINENUM:61
ok 32, LINENUM:63
ok 33, LINENUM:64
ok 34, LINENUM:65
ok
All tests successful.
Files=1, Tests=34, 20 wallclock secs ( 0.02 usr  0.01 sys +  1.70 cusr  2.63 
csys =  4.36 CPU)
Result: PASS
End of test ./tests/basic/afr/add-brick-self-heal.t




[13:56:37] Running tests in file ./tests/basic/afr/arbiter-add-brick.t
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
umount: /mnt/glusterfs/0: Invalid argument
./tests/basic/afr/arbiter-add-brick.t .. 
1..40
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:15
ok 9, LINENUM:16
ok 10, LINENUM:19
ok 11, LINENUM:20
ok 12, LINENUM:21
ok 13, LINENUM:25
ok 14, LINENUM:26
ok 15, LINENUM:29
ok 16, LINENUM:30
ok 17, LINENUM:32
ok 18, LINENUM:33
ok 19, LINENUM:36
ok 20, LINENUM:37
ok 21, LINENUM:38
ok 22, LINENUM:39
ok 23, LINENUM:40
ok 24, LINENUM:41
not ok 25 Got "1" instead of "0", LINENUM:42
FAILED COMMAND: 0 get_pending_heal_count patchy
ok 26, LINENUM:45
ok 27, LINENUM:46
ok 28, LINENUM:47
ok 29, LINENUM:48
ok 30, LINENUM:49
ok 31, LINENUM:52
ok 32, LINENUM:53
not ok 33 Got "" instead of "1048576", LINENUM:56
FAILED COMMAND: 1048576 stat -c %s /mnt/glusterfs/0/file1
not ok 34 Got "" instead of "1048576", LINENUM:57
FAILED COMMAND: 1048576 stat -c %s /mnt/glusterfs/0/file2
ok 35, LINENUM:60
ok 36, LINENUM:61
ok 37, LINENUM:64
ok 38, LINENUM:65
ok 39, LINENUM:68
ok 40, LINENUM:69
Failed 3/40 subtests 


[Gluster-Maintainers] Jenkins build is back to normal : regression-test-with-multiplex #579

2018-01-12 Thread jenkins
See 


___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #578

2018-01-12 Thread jenkins
See 


Changes:

[Pranith Kumar K] cluster/ec: Change [f]getxattr to parallel-dispatch-one

[Kotresh H R] fips/geo-rep: Replace MD5 with SHA256

[Pranith Kumar K] cluster/ec: Fix possible shift overflow

[Jeff Darcy] xlator.h: move options and other variables to the top of structure

[Jeff Darcy] performance/write-behind: fix bug while handling short writes

--
[...truncated 783.66 KB...]
./tests/bugs/glusterd/bug-1121584-brick-existing-validation-for-remove-brick-status-stop.t
  -  8 second
./tests/bugs/glusterd/bug-1104642.t  -  8 second
./tests/bugs/ec/bug-1179050.t  -  8 second
./tests/bugs/distribute/bug-1247563.t  -  8 second
./tests/bugs/distribute/bug-1086228.t  -  8 second
./tests/bugs/cli/bug-1087487.t  -  8 second
./tests/bugs/cli/bug-1022905.t  -  8 second
./tests/basic/tier/ctr-rename-overwrite.t  -  8 second
./tests/basic/quota_aux_mount.t  -  8 second
./tests/basic/md-cache/bug-1317785.t  -  8 second
./tests/basic/inode-quota-enforcing.t  -  8 second
./tests/basic/gfapi/libgfapi-fini-hang.t  -  8 second
./tests/basic/gfapi/glfs_xreaddirplus_r.t  -  8 second
./tests/basic/gfapi/gfapi-dup.t  -  8 second
./tests/basic/gfapi/bug-1241104.t  -  8 second
./tests/basic/gfapi/anonymous_fd.t  -  8 second
./tests/basic/fop-sampling.t  -  8 second
./tests/features/ssl-authz.t  -  7 second
./tests/features/readdir-ahead.t  -  7 second
./tests/bugs/upcall/bug-1458127.t  -  7 second
./tests/bugs/upcall/bug-1227204.t  -  7 second
./tests/bugs/snapshot/bug-1260848.t  -  7 second
./tests/bugs/replicate/bug-1250170-fsync.t  -  7 second
./tests/bugs/posix/bug-1175711.t  -  7 second
./tests/bugs/nfs/bug-1157223-symlink-mounting.t  -  7 second
./tests/bugs/nfs/bug-1143880-fix-gNFSd-auth-crash.t  -  7 second
./tests/bugs/md-cache/bug-1211863.t  -  7 second
./tests/bugs/glusterd/bug-1499509-disconnect-in-brick-mux.t  -  7 second
./tests/bugs/glusterd/bug-1323287-real_path-handshake-test.t  -  7 second
./tests/bugs/glusterd/bug-1293414-import-brickinfo-uuid.t  -  7 second
./tests/bugs/glusterd/bug-1109741-auth-mgmt-handshake.t  -  7 second
./tests/bugs/distribute/bug-1088231.t  -  7 second
./tests/bugs/core/bug-986429.t  -  7 second
./tests/bugs/changelog/bug-1208470.t  -  7 second
./tests/bugs/bug-1258069.t  -  7 second
./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  7 second
./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t  -  7 
second
./tests/bitrot/br-stub.t  -  7 second
./tests/basic/volume-status.t  -  7 second
./tests/basic/quota-nfs.t  -  7 second
./tests/basic/glusterd/arbiter-volume-probe.t  -  7 second
./tests/basic/ec/ec-read-policy.t  -  7 second
./tests/basic/ec/ec-anonymous-fd.t  -  7 second
./tests/basic/afr/gfid-heal.t  -  7 second
./tests/gfid2path/get-gfid-to-path.t  -  6 second
./tests/gfid2path/block-mount-access.t  -  6 second
./tests/bugs/upcall/bug-1369430.t  -  6 second
./tests/bugs/snapshot/bug-1178079.t  -  6 second
./tests/bugs/snapshot/bug-1064768.t  -  6 second
./tests/bugs/replicate/bug-966018.t  -  6 second
./tests/bugs/replicate/bug-767585-gfid.t  -  6 second
./tests/bugs/replicate/bug-1365455.t  -  6 second
./tests/bugs/replicate/bug-1101647.t  -  6 second
./tests/bugs/quota/bug-1243798.t  -  6 second
./tests/bugs/quota/bug-1104692.t  -  6 second
./tests/bugs/nfs/bug-915280.t  -  6 second
./tests/bugs/io-cache/bug-read-hang.t  -  6 second
./tests/bugs/glusterd/bug-889630.t  -  6 second
./tests/bugs/glusterd/bug-859927.t  -  6 second
./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t  -  6 second
./tests/bugs/glusterd/bug-1223213-peerid-fix.t  -  6 second
./tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t  -  6 second
./tests/bugs/glusterd/bug-1179175-uss-option-validation.t  -  6 second
./tests/bugs/glusterd/bug-1094119-remove-replace-brick-support-from-glusterd.t  
-  6 second
./tests/bugs/ec/bug-1227869.t  -  6 second
./tests/bugs/distribute/bug-1368012.t  -  6 second
./tests/bugs/core/bug-908146.t  -  6 second
./tests/bugs/core/bug-834465.t  -  6 second
./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t  -  6 second
./tests/bugs/bitrot/bug-1210684-scrub-pause-resume-error-handling.t  -  6 second
./tests/bitrot/bug-1221914.t  -  6 second
./tests/basic/ec/nfs.t  -  6 second
./tests/basic/ec/ec-internal-xattrs.t  -  6 second
./tests/basic/ec/ec-fallocate.t  -  6 second
./tests/basic/ec/dht-rename.t  -  6 second
./tests/basic/afr/arbiter-remove-brick.t  -  6 second
./tests/performance/quick-read.t  -  5 second
./tests/bugs/upcall/bug-upcall-stat.t  -  5 second
./tests/bugs/unclassified/bug-1034085.t  -  5 second
./tests/bugs/snapshot/bug-041.t  -  5 second
./tests/bugs/shard/bug-1342298.t  -  5 second
./tests/bugs/shard/bug-1259651.t  -  5 second
./tests/bugs/shard/bug-1258334.t  -  5 second
./tests/bugs/shard/bug-1256580.t  -  5 second
./tests/bugs/replicate/bug-976800.t  

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #502

2018-01-12 Thread jenkins
See 


Changes:

[Raghavendra G] protocol/client: reduce lock contention

[Raghavendra G] cluster/dht: Add migration checks to dht_(f)xattrop

[Kaleb S. KEITHLEY] rpm: Fedora 28 has renamed pyxattr

--
[...truncated 240.04 KB...]
1..34
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:8
ok 4, LINENUM:9
ok 5, LINENUM:10
ok 6, LINENUM:11
ok 7, LINENUM:12
ok 8, LINENUM:14
ok 9, LINENUM:15
ok 10, LINENUM:24
ok 11, LINENUM:27
ok 12, LINENUM:30
ok 13, LINENUM:31
ok 14, LINENUM:34
ok 15, LINENUM:35
ok 16, LINENUM:36
ok 17, LINENUM:38
ok 18, LINENUM:39
ok 19, LINENUM:40
ok 20, LINENUM:42
ok 21, LINENUM:43
ok 22, LINENUM:44
ok 23, LINENUM:45
ok 24, LINENUM:46
ok 25, LINENUM:47
ok 26, LINENUM:50
ok 27, LINENUM:53
ok 28, LINENUM:54
ok 29, LINENUM:57
ok 30, LINENUM:60
ok 31, LINENUM:61
ok 32, LINENUM:63
ok 33, LINENUM:64
ok 34, LINENUM:65
ok
All tests successful.
Files=1, Tests=34, 20 wallclock secs ( 0.04 usr  0.00 sys +  1.70 cusr  2.30 
csys =  4.04 CPU)
Result: PASS
End of test ./tests/basic/afr/add-brick-self-heal.t




[13:56:42] Running tests in file ./tests/basic/afr/arbiter-add-brick.t
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
umount: /mnt/glusterfs/0: Invalid argument
./tests/basic/afr/arbiter-add-brick.t .. 
1..40
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:15
ok 9, LINENUM:16
ok 10, LINENUM:19
ok 11, LINENUM:20
ok 12, LINENUM:21
ok 13, LINENUM:25
ok 14, LINENUM:26
ok 15, LINENUM:29
ok 16, LINENUM:30
ok 17, LINENUM:32
ok 18, LINENUM:33
ok 19, LINENUM:36
ok 20, LINENUM:37
ok 21, LINENUM:38
ok 22, LINENUM:39
ok 23, LINENUM:40
ok 24, LINENUM:41
ok 25, LINENUM:42
ok 26, LINENUM:45
ok 27, LINENUM:46
ok 28, LINENUM:47
ok 29, LINENUM:48
ok 30, LINENUM:49
ok 31, LINENUM:52
ok 32, LINENUM:53
not ok 33 Got "" instead of "1048576", LINENUM:56
FAILED COMMAND: 1048576 stat -c %s /mnt/glusterfs/0/file1
not ok 34 Got "" instead of "1048576", LINENUM:57
FAILED COMMAND: 1048576 stat -c %s /mnt/glusterfs/0/file2
ok 35, LINENUM:60
ok 36, LINENUM:61
ok 37, LINENUM:64
ok 38, LINENUM:65
ok 39, LINENUM:68
ok 40, LINENUM:69
Failed 2/40 subtests 

Test Summary Report
---
./tests/basic/afr/arbiter-add-brick.t (Wstat: 0 Tests: 40 Failed: 2)
  Failed tests:  33-34
Files=1, Tests=40, 47 wallclock secs ( 0.04 usr  0.00 sys + 6622692.56 cusr 
13245383.73 csys = 19868076.33 CPU)
Result: FAIL
./tests/basic/afr/arbiter-add-brick.t: bad status 1

   *
   *   REGRESSION FAILED   *
   * Retrying failed tests in case *
   * we got some spurious failures *
   *

dd: /mnt/glusterfs/0/file2: Device not configured
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: 

[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #596

2018-01-12 Thread jenkins
See 


--
[...truncated 1.13 MB...]
./tests/bugs/cli/bug-1022905.t  -  8 second
./tests/bugs/changelog/bug-1208470.t  -  8 second
./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  8 second
./tests/bugs/bitrot/1209752-volume-status-should-show-bitrot-scrub-info.t  -  8 
second
./tests/basic/tier/ctr-rename-overwrite.t  -  8 second
./tests/basic/stats-dump.t  -  8 second
./tests/basic/quota-nfs.t  -  8 second
./tests/basic/inode-quota-enforcing.t  -  8 second
./tests/basic/gfapi/upcall-cache-invalidate.t  -  8 second
./tests/basic/gfapi/glfs_xreaddirplus_r.t  -  8 second
./tests/basic/fop-sampling.t  -  8 second
./tests/basic/ec/ec-anonymous-fd.t  -  8 second
./tests/gfid2path/get-gfid-to-path.t  -  7 second
./tests/gfid2path/block-mount-access.t  -  7 second
./tests/features/readdir-ahead.t  -  7 second
./tests/bugs/upcall/bug-1458127.t  -  7 second
./tests/bugs/upcall/bug-1227204.t  -  7 second
./tests/bugs/transport/bug-873367.t  -  7 second
./tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t  -  7 second
./tests/bugs/snapshot/bug-1260848.t  -  7 second
./tests/bugs/replicate/bug-1250170-fsync.t  -  7 second
./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t
  -  7 second
./tests/bugs/quota/bug-1243798.t  -  7 second
./tests/bugs/glusterfs/bug-902610.t  -  7 second
./tests/bugs/glusterd/bug-1499509-disconnect-in-brick-mux.t  -  7 second
./tests/bugs/glusterd/bug-1420637-volume-sync-fix.t  -  7 second
./tests/bugs/glusterd/bug-1213295-snapd-svc-uninitialized.t  -  7 second
./tests/bugs/glusterd/bug-1109741-auth-mgmt-handshake.t  -  7 second
./tests/bugs/glusterd/bug-1094119-remove-replace-brick-support-from-glusterd.t  
-  7 second
./tests/bugs/bug-1258069.t  -  7 second
./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t  -  7 
second
./tests/bitrot/br-stub.t  -  7 second
./tests/basic/volume-status.t  -  7 second
./tests/basic/md-cache/bug-1317785.t  -  7 second
./tests/basic/glusterd/arbiter-volume-probe.t  -  7 second
./tests/basic/gfapi/glfd-lkowner.t  -  7 second
./tests/basic/gfapi/gfapi-dup.t  -  7 second
./tests/basic/gfapi/bug-1241104.t  -  7 second
./tests/basic/gfapi/anonymous_fd.t  -  7 second
./tests/basic/ec/ec-read-policy.t  -  7 second
./tests/basic/afr/gfid-heal.t  -  7 second
./tests/features/ssl-authz.t  -  6 second
./tests/bugs/upcall/bug-1369430.t  -  6 second
./tests/bugs/snapshot/bug-1064768.t  -  6 second
./tests/bugs/shard/bug-1260637.t  -  6 second
./tests/bugs/shard/bug-1259651.t  -  6 second
./tests/bugs/replicate/bug-966018.t  -  6 second
./tests/bugs/replicate/bug-767585-gfid.t  -  6 second
./tests/bugs/replicate/bug-1101647.t  -  6 second
./tests/bugs/quota/bug-1104692.t  -  6 second
./tests/bugs/posix/bug-1175711.t  -  6 second
./tests/bugs/posix/bug-1122028.t  -  6 second
./tests/bugs/nfs/bug-877885.t  -  6 second
./tests/bugs/nfs/bug-1143880-fix-gNFSd-auth-crash.t  -  6 second
./tests/bugs/io-cache/bug-read-hang.t  -  6 second
./tests/bugs/io-cache/bug-858242.t  -  6 second
./tests/bugs/glusterfs/bug-893378.t  -  6 second
./tests/bugs/glusterd/bug-859927.t  -  6 second
./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t  -  6 second
./tests/bugs/glusterd/bug-1223213-peerid-fix.t  -  6 second
./tests/bugs/glusterd/bug-1179175-uss-option-validation.t  -  6 second
./tests/bugs/glusterd/bug-1102656.t  -  6 second
./tests/bugs/ec/bug-1227869.t  -  6 second
./tests/bugs/distribute/bug-1088231.t  -  6 second
./tests/bugs/core/bug-986429.t  -  6 second
./tests/bugs/core/bug-908146.t  -  6 second
./tests/bugs/core/bug-834465.t  -  6 second
./tests/bugs/bug-1371806_2.t  -  6 second
./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t  -  6 second
./tests/bugs/bitrot/bug-1210684-scrub-pause-resume-error-handling.t  -  6 second
./tests/basic/gfapi/libgfapi-fini-hang.t  -  6 second
./tests/basic/ec/nfs.t  -  6 second
./tests/basic/ec/ec-fallocate.t  -  6 second
./tests/basic/ec/dht-rename.t  -  6 second
./tests/basic/afr/heal-info.t  -  6 second
./tests/basic/afr/arbiter-remove-brick.t  -  6 second
./tests/bugs/upcall/bug-upcall-stat.t  -  5 second
./tests/bugs/snapshot/bug-1178079.t  -  5 second
./tests/bugs/snapshot/bug-041.t  -  5 second
./tests/bugs/shard/bug-1342298.t  -  5 second
./tests/bugs/shard/bug-1258334.t  -  5 second
./tests/bugs/shard/bug-1256580.t  -  5 second
./tests/bugs/replicate/bug-1365455.t  -  5 second
./tests/bugs/quota/bug-1287996.t  -  5 second
./tests/bugs/posix/bug-765380.t  -  5 second
./tests/bugs/nfs/subdir-trailing-slash.t  -  5 second
./tests/bugs/nfs/bug-915280.t  -  5 second
./tests/bugs/nfs/bug-1116503.t  -  5 second
./tests/bugs/md-cache/bug-1211863_unlink.t  -  5 second
./tests/bugs/md-cache/afr-stale-read.t  -  5 second
./tests/bugs/glusterfs-server/bug-873549.t  -  5 second
./tests/bugs/glusterfs-server/bug-864222.t  -  5 

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #514

2018-01-12 Thread jenkins
See 


Changes:

[Raghavendra G] Revert "rpc: merge ssl infra with epoll infra"

[atin] dict: fix VALIDATE_DATA_AND_LOG call

--
[...truncated 272.55 KB...]
./tests/basic/afr/granular-esh/granular-indices-but-non-granular-heal.t .. 
1..29
ok 1, LINENUM:11
ok 2, LINENUM:12
ok 3, LINENUM:14
ok 4, LINENUM:15
ok 5, LINENUM:16
ok 6, LINENUM:17
ok 7, LINENUM:18
ok 8, LINENUM:19
ok 9, LINENUM:20
ok 10, LINENUM:22
ok 11, LINENUM:25
ok 12, LINENUM:34
ok 13, LINENUM:39
ok 14, LINENUM:39
ok 15, LINENUM:43
ok 16, LINENUM:46
ok 17, LINENUM:47
ok 18, LINENUM:48
ok 19, LINENUM:51
ok 20, LINENUM:52
ok 21, LINENUM:53
ok 22, LINENUM:54
ok 23, LINENUM:60
ok 24, LINENUM:63
ok 25, LINENUM:68
ok 26, LINENUM:68
ok 27, LINENUM:72
ok 28, LINENUM:73
ok 29, LINENUM:74
ok
All tests successful.
Files=1, Tests=29, 22 wallclock secs ( 0.03 usr  0.01 sys +  1.81 cusr  2.66 
csys =  4.51 CPU)
Result: PASS
End of test 
./tests/basic/afr/granular-esh/granular-indices-but-non-granular-heal.t




[14:09:59] Running tests in file ./tests/basic/afr/granular-esh/replace-brick.t
./tests/basic/afr/granular-esh/replace-brick.t .. 
1..34
ok 1, LINENUM:7
ok 2, LINENUM:8
ok 3, LINENUM:9
ok 4, LINENUM:10
ok 5, LINENUM:11
ok 6, LINENUM:12
ok 7, LINENUM:13
ok 8, LINENUM:14
ok 9, LINENUM:15
ok 10, LINENUM:17
ok 11, LINENUM:26
ok 12, LINENUM:29
ok 13, LINENUM:32
ok 14, LINENUM:35
ok 15, LINENUM:38
ok 16, LINENUM:41
ok 17, LINENUM:43
ok 18, LINENUM:44
ok 19, LINENUM:46
ok 20, LINENUM:47
ok 21, LINENUM:48
ok 22, LINENUM:49
ok 23, LINENUM:50
ok 24, LINENUM:53
ok 25, LINENUM:56
ok 26, LINENUM:59
ok 27, LINENUM:60
ok 28, LINENUM:63
ok 29, LINENUM:65
ok 30, LINENUM:68
ok 31, LINENUM:69
ok 32, LINENUM:71
ok 33, LINENUM:72
ok 34, LINENUM:73
ok
All tests successful.
Files=1, Tests=34, 23 wallclock secs ( 0.01 usr  0.03 sys +  1.81 cusr  2.78 
csys =  4.63 CPU)
Result: PASS
End of test ./tests/basic/afr/granular-esh/replace-brick.t




[14:10:22] Running tests in file ./tests/basic/afr/heal-info.t
./tests/basic/afr/heal-info.t .. 
1..9
ok 1, LINENUM:21
ok 2, LINENUM:22
ok 3, LINENUM:23
ok 4, LINENUM:24
ok 5, LINENUM:25
ok 6, LINENUM:26
ok 7, LINENUM:27
ok 8, LINENUM:33
ok 9, LINENUM:34
ok
All tests successful.
Files=1, Tests=9, 33 wallclock secs ( 0.03 usr  0.00 sys + 7839631.86 cusr 
13066052.06 csys = 20905683.95 CPU)
Result: PASS
End of test ./tests/basic/afr/heal-info.t




[14:10:55] Running tests in file ./tests/basic/afr/heal-quota.t
touch: /mnt/glusterfs/0/b: Socket is not connected
dd: block size `1M': illegal number
cat: /proc/22924/cmdline: No such file or directory
Usage: gf_attach uds_path volfile_path (to attach)
   gf_attach -d uds_path brick_path (to detach)
dd: block size `1M': illegal number
./tests/basic/afr/heal-quota.t .. 
1..19
ok 1, LINENUM:10
ok 2, LINENUM:11
ok 3, LINENUM:12
ok 4, LINENUM:13
ok 5, LINENUM:14
ok 6, LINENUM:16
ok 7, LINENUM:17
ok 8, LINENUM:18
ok 9, LINENUM:19
ok 10, LINENUM:20
not ok 11 , LINENUM:22
FAILED COMMAND: touch /mnt/glusterfs/0/a /mnt/glusterfs/0/b
ok 12, LINENUM:24
ok 13, LINENUM:26
ok 14, LINENUM:27
ok 15, LINENUM:28
ok 16, LINENUM:29
ok 17, LINENUM:30
ok 18, LINENUM:32
ok 19, LINENUM:33
Failed 1/19 subtests 

Test Summary Report
---
./tests/basic/afr/heal-quota.t (Wstat: 0 Tests: 19 Failed: 1)
  Failed test:  11
Files=1, Tests=19, 18 wallclock secs ( 0.05 usr  0.00 sys +  1.43 cusr  2.11 
csys =  3.59 CPU)
Result: FAIL
./tests/basic/afr/heal-quota.t: bad status 1

   *
   *   REGRESSION FAILED   *
   * Retrying failed tests in case *
   * we got some spurious failures *
   *

touch: /mnt/glusterfs/0/b: Socket is not connected
dd: block size `1M': illegal number
cat: /proc/26818/cmdline: No such file or directory
Usage: gf_attach uds_path volfile_path (to attach)
   gf_attach -d uds_path brick_path (to detach)
dd: block size `1M': illegal number
./tests/basic/afr/heal-quota.t .. 
1..19
ok 1, LINENUM:10
ok 2, LINENUM:11
ok 3, LINENUM:12
ok 4, LINENUM:13
ok 5, LINENUM:14
ok 6, LINENUM:16
ok 7, LINENUM:17
ok 8, LINENUM:18
ok 9, LINENUM:19
ok 10, LINENUM:20
not ok 11 , LINENUM:22
FAILED COMMAND: touch /mnt/glusterfs/0/a /mnt/glusterfs/0/b
ok 12, LINENUM:24
ok 13, LINENUM:26
ok 14, LINENUM:27
ok 15, LINENUM:28
ok 16, LINENUM:29
ok 17, LINENUM:30
ok 18, LINENUM:32
ok 19, LINENUM:33
Failed 1/19 subtests 

Test 

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #500

2018-01-12 Thread jenkins
See 

--
[...truncated 239.37 KB...]
./tests/basic/afr/add-brick-self-heal.t .. 
1..34
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:8
ok 4, LINENUM:9
ok 5, LINENUM:10
ok 6, LINENUM:11
ok 7, LINENUM:12
ok 8, LINENUM:14
ok 9, LINENUM:15
ok 10, LINENUM:24
ok 11, LINENUM:27
ok 12, LINENUM:30
ok 13, LINENUM:31
ok 14, LINENUM:34
ok 15, LINENUM:35
ok 16, LINENUM:36
ok 17, LINENUM:38
ok 18, LINENUM:39
ok 19, LINENUM:40
ok 20, LINENUM:42
ok 21, LINENUM:43
ok 22, LINENUM:44
ok 23, LINENUM:45
ok 24, LINENUM:46
ok 25, LINENUM:47
ok 26, LINENUM:50
ok 27, LINENUM:53
ok 28, LINENUM:54
ok 29, LINENUM:57
ok 30, LINENUM:60
ok 31, LINENUM:61
ok 32, LINENUM:63
ok 33, LINENUM:64
ok 34, LINENUM:65
ok
All tests successful.
Files=1, Tests=34, 19 wallclock secs ( 0.04 usr  0.01 sys +  1.66 cusr  2.44 
csys =  4.15 CPU)
Result: PASS
End of test ./tests/basic/afr/add-brick-self-heal.t




[13:56:39] Running tests in file ./tests/basic/afr/arbiter-add-brick.t
dd: /mnt/glusterfs/0/file2: Input/output error
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
stat: /mnt/glusterfs/0/file2: lstat: No such file or directory
umount: /mnt/glusterfs/0: Invalid argument
./tests/basic/afr/arbiter-add-brick.t .. 
1..40
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:15
ok 9, LINENUM:16
ok 10, LINENUM:19
ok 11, LINENUM:20
ok 12, LINENUM:21
ok 13, LINENUM:25
ok 14, LINENUM:26
ok 15, LINENUM:29
ok 16, LINENUM:30
ok 17, LINENUM:32
ok 18, LINENUM:33
ok 19, LINENUM:36
ok 20, LINENUM:37
ok 21, LINENUM:38
ok 22, LINENUM:39
ok 23, LINENUM:40
ok 24, LINENUM:41
not ok 25 Got "7" instead of "0", LINENUM:42
FAILED COMMAND: 0 get_pending_heal_count patchy
ok 26, LINENUM:45
ok 27, LINENUM:46
ok 28, LINENUM:47
ok 29, LINENUM:48
not ok 30 , LINENUM:49
FAILED COMMAND: dd if=/dev/urandom of=/mnt/glusterfs/0/file2 bs=1024 count=1024
ok 31, LINENUM:52
ok 32, LINENUM:53
not ok 33 Got "" instead of "1048576", LINENUM:56
FAILED COMMAND: 1048576 stat -c %s /mnt/glusterfs/0/file1
not ok 34 Got "" instead of "1048576", LINENUM:57
FAILED COMMAND: 1048576 stat -c %s /mnt/glusterfs/0/file2
ok 35, LINENUM:60
ok 36, LINENUM:61
ok 37, LINENUM:64
ok 38, LINENUM:65
ok 39, LINENUM:68
ok 40, LINENUM:69
Failed 4/40 subtests 

Test Summary Report
---
./tests/basic/afr/arbiter-add-brick.t (Wstat: 0 Tests: 40 Failed: 4)
  Failed tests:  25, 30, 33-34
Files=1, Tests=40, 133 wallclock secs ( 0.05 usr  0.01 sys + 31512469.15 cusr 
4319040.70 csys = 35831509.91 CPU)
Result: FAIL
./tests/basic/afr/arbiter-add-brick.t: bad status 1

   *
   *   REGRESSION FAILED   *
   * Retrying failed tests in case *
   * we got some spurious failures *
   *

stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: lstat: No such file or directory
stat: /mnt/glusterfs/0/file1: 

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #497

2018-01-12 Thread jenkins
See 


Changes:

[Jeff Darcy] stripe, quiesce: volume option fixes

[Jeff Darcy] cli: Fixed coverity issue in cli-cmd-system.c

[Amar Tumballi] snapshot: fix several coverity issues in glusterd-snapshot.c

[Amar Tumballi] rchecksum/fips: Replace MD5 usage to enable fips support

[Pranith Kumar K] cluster/ec: Add default value for the redundancy option

--
[...truncated 242.94 KB...]
ok 22, LINENUM:39
ok 23, LINENUM:40
ok 24, LINENUM:41
not ok 25 Got "7" instead of "0", LINENUM:42
FAILED COMMAND: 0 get_pending_heal_count patchy
ok 26, LINENUM:45
ok 27, LINENUM:46
ok 28, LINENUM:47
ok 29, LINENUM:48
not ok 30 , LINENUM:49
FAILED COMMAND: dd if=/dev/urandom of=/mnt/glusterfs/0/file2 bs=1024 count=1024
ok 31, LINENUM:52
ok 32, LINENUM:53
not ok 33 Got "" instead of "1048576", LINENUM:56
FAILED COMMAND: 1048576 stat -c %s /mnt/glusterfs/0/file1
not ok 34 Got "" instead of "1048576", LINENUM:57
FAILED COMMAND: 1048576 stat -c %s /mnt/glusterfs/0/file2
ok 35, LINENUM:60
ok 36, LINENUM:61
ok 37, LINENUM:64
ok 38, LINENUM:65
ok 39, LINENUM:68
ok 40, LINENUM:69
Failed 4/40 subtests 

Test Summary Report
---
./tests/basic/afr/arbiter-add-brick.t (Wstat: 0 Tests: 40 Failed: 4)
  Failed tests:  25, 30, 33-34
Files=1, Tests=40, 117 wallclock secs ( 0.03 usr  0.02 sys +  7.61 cusr  9.72 
csys = 17.38 CPU)
Result: FAIL
./tests/basic/afr/arbiter-add-brick.t: bad status 1

   *
   *   REGRESSION FAILED   *
   * Retrying failed tests in case *
   * we got some spurious failures *
   *

kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l [sigspec]
rm: /build/install/var/run/gluster: is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill 
-l 

[Gluster-Maintainers] Build failed in Jenkins: netbsd-periodic #506

2018-01-12 Thread jenkins
See 


Changes:

[Amar Tumballi] posix: Introduce flags for validity of iatt members

--
[...truncated 238.64 KB...]
Byte-compiling python modules (optimized versions) ...
__init__.py gf_event.py eventsapiconf.py eventtypes.py utils.py handlers.py
  -c -d 
'/build/install/libexec/glusterfs/events'
 /usr/bin/install -c 
 
'/build/install/libexec/glusterfs/events'
  -c -d 
'/build/install/etc/glusterfs'
 /usr/bin/install -c -m 644 
 
'/build/install/etc/glusterfs'
  -c -d 
'/build/install/libexec/glusterfs'
 /usr/bin/install -c 
 
'/build/install/libexec/glusterfs'
Making install in tools
  -c -d 
'/build/install/share/glusterfs/scripts'
 /usr/bin/install -c 
 
'/build/install/share/glusterfs/scripts'
make  install-data-hook
/usr/bin/install -c -d -m 755 /build/install/var/db/glusterd/events
  -c -d 
'/build/install/lib/pkgconfig'
 /usr/bin/install -c -m 644 glusterfs-api.pc libgfchangelog.pc libgfdb.pc 
'/build/install/lib/pkgconfig'

Start time Sat Dec 30 13:56:12 UTC 2017
Run the regression test
***

tset: standard error: Inappropriate ioctl for device
chflags: /netbsd: No such file or directory
umount: /mnt/nfs/0: Invalid argument
umount: /mnt/nfs/1: Invalid argument
umount: /mnt/glusterfs/0: Invalid argument
umount: /mnt/glusterfs/1: Invalid argument
umount: /mnt/glusterfs/2: Invalid argument
umount: /build/install/var/run/gluster/patchy: No such file or directory
/dev/rxbd0e: 4096.0MB (8388608 sectors) block size 16384, fragment size 2048
using 23 cylinder groups of 178.09MB, 11398 blks, 22528 inodes.
super-block backups (for fsck_ffs -b #) at:
32, 364768, 729504, 1094240, 1458976, 1823712, 2188448, 2553184, 2917920,
...

... GlusterFS Test Framework ...


The following required tools are missing:

  * dbench

 




[13:56:13] Running tests in file ./tests/basic/0symbol-check.t
Skip Linux specific test
./tests/basic/0symbol-check.t .. 
1..2
ok 1, LINENUM:
ok 2, LINENUM:
ok
All tests successful.
Files=1, Tests=2,  1 wallclock secs ( 0.02 usr  0.01 sys +  0.06 cusr  0.07 
csys =  0.16 CPU)
Result: PASS
End of test ./tests/basic/0symbol-check.t




[13:56:14] Running tests in file ./tests/basic/afr/add-brick-self-heal.t
./tests/basic/afr/add-brick-self-heal.t .. 
1..34
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:8
ok 4, LINENUM:9
ok 5, LINENUM:10
ok 6, LINENUM:11
ok 7, LINENUM:12
ok 8, LINENUM:14
ok 9, LINENUM:15
ok 10, LINENUM:24
ok 11, LINENUM:27
ok 12, LINENUM:30
ok 13, LINENUM:31
ok 14, LINENUM:34
ok 15, LINENUM:35
ok 16, LINENUM:36
ok 17, LINENUM:38
ok 18, LINENUM:39
ok 19, LINENUM:40
ok 20, LINENUM:42
ok 21, LINENUM:43
ok 22, LINENUM:44
ok 23, LINENUM:45
ok 24, LINENUM:46
ok 25, LINENUM:47
ok 26, LINENUM:50
ok 27, LINENUM:53
ok 28, LINENUM:54
ok 29, LINENUM:57
ok 30, LINENUM:60
ok 31, LINENUM:61
ok 32, LINENUM:63
ok 33, LINENUM:64
ok 34, LINENUM:65
ok
All tests successful.
Files=1, Tests=34, 20 wallclock secs ( 0.03 usr  0.01 sys +  1.67 cusr  2.39 
csys =  4.10 CPU)
Result: PASS
End of test ./tests/basic/afr/add-brick-self-heal.t




[13:56:34] Running tests in file ./tests/basic/afr/arbiter-add-brick.t
./tests/basic/afr/arbiter-add-brick.t .. 
1..40
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:15
ok 9, LINENUM:16
ok 10, LINENUM:19
ok 11, LINENUM:20
ok 12, LINENUM:21
ok 13, LINENUM:25
ok 14, LINENUM:26
ok 15, LINENUM:29
ok 16, LINENUM:30
ok 17, LINENUM:32
ok 18, LINENUM:33
ok 19, LINENUM:36
ok 20, LINENUM:37
ok 21, LINENUM:38
ok 22, LINENUM:39
ok 23, LINENUM:40
ok 24, LINENUM:41
not ok 25 Got "7" instead of "0", LINENUM:42
FAILED COMMAND: 0 

Re: [Gluster-Maintainers] [gluster-packaging] Release 4.0: Landing CentOS SIG packages for 4.0 on time!

2018-01-12 Thread Niels de Vos
On Thu, Jan 11, 2018 at 02:53:53PM -0500, Kaleb S. KEITHLEY wrote:
> On 01/11/2018 02:19 PM, Kaleb S. KEITHLEY wrote:
> > On 01/11/2018 01:50 PM, Shyam Ranganathan wrote:
> >> Hi Packaging team,
> >>
> >> CentOS SIG Gluster packages for new releases seem to land *late* on the
> >> SIG always.
> >>
> >> What do we (as in Gluster community) need to do to make it happen on
> >> time for 4.0?
> >>
> >> As this time around, and possibly in the future, we want to avoid this lag!
> > 
> > I've opened a BZ[1] to create the necessary tags. Once those are created
> > I'll follow up (with another BZ) to create the staging (is that the
> > right term?) directory/directories in the Centos Storage SIG.
> > 
> > Then, depending on how quickly those happen I'll either build a dummy
> > glusterfs-4.0 package, and/or an actual RC/Alpha/Beta package.
> > 
> > This should prime the pump, so to speak, for the actual release.
> 
> And looking through the old BZs I see that Niels has had to open BZs on
> several occasions to poke the CentOS ops to push packages to buildlog
> and to the mirrors. (There doesn't seem to be a lot of automation in place.)
> 
> We'll need to pay close attention and should probably open BZs —
> aggressively — if necessary.

I've tagged the dependencies and additional packages already in the
newly created storage7-gluster-40-testing tag. Because the tag is no
longer empty, the CentOS team can set up the syncing of the packages to
the test repository. This has been requested in
https://bugs.centos.org/view.php?id=14363 .

In addition to that, a centos-release-gluster40 package is now prepared
as well:
  - https://github.com/CentOS-Storage-SIG/centos-release-gluster40
  - http://cbs.centos.org/koji/taskinfo?taskID=291866

Once the repository is available at
https://buildlogs.centos.org/centos/7/storage/x86_64/ , the RPM from the
CBS can be used for testing. Note that there is no glusterfs-4.0 package
yet; we'll build that once the first Alpha release is tagged.

Niels
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers