See <https://build.gluster.org/job/netbsd-periodic/498/display/redirect?page=changes>
Changes: [Amar Tumballi] tests/vagrant: configure with --enable-gnfs so tests can use NFS

------------------------------------------
[...truncated 236.81 KB...]
<https://build.gluster.org/job/netbsd-periodic/ws/install-sh> -c -d '/build/install/etc/glusterfs'
/usr/bin/install -c -m 644 <https://build.gluster.org/job/netbsd-periodic/ws/events/src/eventsconfig.json> '/build/install/etc/glusterfs'
<https://build.gluster.org/job/netbsd-periodic/ws/install-sh> -c -d '/build/install/libexec/glusterfs'
/usr/bin/install -c <https://build.gluster.org/job/netbsd-periodic/ws/events/src/peer_eventsapi.py> '/build/install/libexec/glusterfs'
Making install in tools
<https://build.gluster.org/job/netbsd-periodic/ws/install-sh> -c -d '/build/install/share/glusterfs/scripts'
/usr/bin/install -c <https://build.gluster.org/job/netbsd-periodic/ws/events/tools/eventsdash.py> '/build/install/share/glusterfs/scripts'
make install-data-hook
/usr/bin/install -c -d -m 755 /build/install/var/db/glusterd/events
<https://build.gluster.org/job/netbsd-periodic/ws/install-sh> -c -d '/build/install/lib/pkgconfig'
/usr/bin/install -c -m 644 glusterfs-api.pc libgfchangelog.pc libgfdb.pc '/build/install/lib/pkgconfig'

Start time Fri Dec 22 13:56:27 UTC 2017

Run the regression test
***********************
tset: standard error: Inappropriate ioctl for device
chflags: /netbsd: No such file or directory
umount: /mnt/nfs/0: Invalid argument
umount: /mnt/nfs/1: Invalid argument
umount: /mnt/glusterfs/0: Invalid argument
umount: /mnt/glusterfs/1: Invalid argument
umount: /mnt/glusterfs/2: Invalid argument
umount: /build/install/var/run/gluster/patchy: No such file or directory
/dev/rxbd0e: 4096.0MB (8388608 sectors) block size 16384, fragment size 2048
        using 23 cylinder groups of 178.09MB, 11398 blks, 22528 inodes.
super-block backups (for fsck_ffs -b #) at:
32, 364768, 729504, 1094240, 1458976, 1823712, 2188448, 2553184, 2917920,
...............................................................................

... GlusterFS Test Framework ...

The following required tools are missing:

  * dbench

<https://build.gluster.org/job/netbsd-periodic/ws/>
<https://build.gluster.org/job/netbsd-periodic/ws/>
<https://build.gluster.org/job/netbsd-periodic/ws/>

================================================================================
[13:56:28] Running tests in file ./tests/basic/0symbol-check.t
Skip Linux specific test
./tests/basic/0symbol-check.t ..
1..2
ok 1, LINENUM:
ok 2, LINENUM:
ok
All tests successful.
Files=1, Tests=2, 0 wallclock secs ( 0.03 usr 0.00 sys + 0.05 cusr 0.08 csys = 0.16 CPU)
Result: PASS
End of test ./tests/basic/0symbol-check.t
================================================================================
================================================================================
[13:56:28] Running tests in file ./tests/basic/afr/add-brick-self-heal.t
./tests/basic/afr/add-brick-self-heal.t ..
1..34
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:8
ok 4, LINENUM:9
ok 5, LINENUM:10
ok 6, LINENUM:11
ok 7, LINENUM:12
ok 8, LINENUM:14
ok 9, LINENUM:15
ok 10, LINENUM:24
ok 11, LINENUM:27
ok 12, LINENUM:30
ok 13, LINENUM:31
ok 14, LINENUM:34
ok 15, LINENUM:35
ok 16, LINENUM:36
ok 17, LINENUM:38
ok 18, LINENUM:39
ok 19, LINENUM:40
ok 20, LINENUM:42
ok 21, LINENUM:43
ok 22, LINENUM:44
ok 23, LINENUM:45
ok 24, LINENUM:46
ok 25, LINENUM:47
ok 26, LINENUM:50
ok 27, LINENUM:53
ok 28, LINENUM:54
ok 29, LINENUM:57
ok 30, LINENUM:60
ok 31, LINENUM:61
ok 32, LINENUM:63
ok 33, LINENUM:64
ok 34, LINENUM:65
ok
All tests successful.
Files=1, Tests=34, 20 wallclock secs ( 0.04 usr 0.01 sys + 1.67 cusr 2.50 csys = 4.22 CPU)
Result: PASS
End of test ./tests/basic/afr/add-brick-self-heal.t
================================================================================
================================================================================
[13:56:48] Running tests in file ./tests/basic/afr/arbiter-add-brick.t
perfused: perfuse_node_inactive: perfuse_node_fsync failed error = 57: Resource temporarily unavailable
./tests/basic/afr/arbiter-add-brick.t ..
1..40
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:15
ok 9, LINENUM:16
ok 10, LINENUM:19
ok 11, LINENUM:20
ok 12, LINENUM:21
ok 13, LINENUM:25
ok 14, LINENUM:26
ok 15, LINENUM:29
ok 16, LINENUM:30
ok 17, LINENUM:32
ok 18, LINENUM:33
ok 19, LINENUM:36
ok 20, LINENUM:37
ok 21, LINENUM:38
ok 22, LINENUM:39
ok 23, LINENUM:40
ok 24, LINENUM:41
not ok 25 Got "5" instead of "0", LINENUM:42
FAILED COMMAND: 0 get_pending_heal_count patchy
ok 26, LINENUM:45
ok 27, LINENUM:46
ok 28, LINENUM:47
ok 29, LINENUM:48
ok 30, LINENUM:49
ok 31, LINENUM:52
ok 32, LINENUM:53
not ok 33 Got "1016832" instead of "1048576", LINENUM:56
FAILED COMMAND: 1048576 stat -c %s /mnt/glusterfs/0/file1
ok 34, LINENUM:57
ok 35, LINENUM:60
ok 36, LINENUM:61
ok 37, LINENUM:64
ok 38, LINENUM:65
ok 39, LINENUM:68
ok 40, LINENUM:69
Failed 2/40 subtests

Test Summary Report
-------------------
./tests/basic/afr/arbiter-add-brick.t (Wstat: 0 Tests: 40 Failed: 2)
  Failed tests: 25, 33
Files=1, Tests=40, 134 wallclock secs ( 0.03 usr 0.03 sys + 16175788.67 cusr 22869215.95 csys = 39045004.68 CPU)
Result: FAIL
./tests/basic/afr/arbiter-add-brick.t: bad status 1

*********************************
*       REGRESSION FAILED       *
* Retrying failed tests in case *
* we got some spurious failures *
*********************************

./tests/basic/afr/arbiter-add-brick.t ..
1..40
ok 1, LINENUM:6
ok 2, LINENUM:7
ok 3, LINENUM:10
ok 4, LINENUM:11
ok 5, LINENUM:12
ok 6, LINENUM:13
ok 7, LINENUM:14
ok 8, LINENUM:15
ok 9, LINENUM:16
ok 10, LINENUM:19
ok 11, LINENUM:20
ok 12, LINENUM:21
ok 13, LINENUM:25
ok 14, LINENUM:26
ok 15, LINENUM:29
ok 16, LINENUM:30
ok 17, LINENUM:32
ok 18, LINENUM:33
ok 19, LINENUM:36
ok 20, LINENUM:37
ok 21, LINENUM:38
ok 22, LINENUM:39
ok 23, LINENUM:40
ok 24, LINENUM:41
ok 25, LINENUM:42
ok 26, LINENUM:45
ok 27, LINENUM:46
ok 28, LINENUM:47
ok 29, LINENUM:48
ok 30, LINENUM:49
ok 31, LINENUM:52
ok 32, LINENUM:53
ok 33, LINENUM:56
ok 34, LINENUM:57
ok 35, LINENUM:60
ok 36, LINENUM:61
ok 37, LINENUM:64
ok 38, LINENUM:65
ok 39, LINENUM:68
ok 40, LINENUM:69
ok
All tests successful.
Files=1, Tests=40, 46 wallclock secs ( 0.04 usr 0.02 sys + 2.19 cusr 2.95 csys = 5.20 CPU)
Result: PASS
./tests/basic/afr/arbiter-add-brick.t: 1 new core files
End of test ./tests/basic/afr/arbiter-add-brick.t
================================================================================

Run complete
================================================================================
Number of tests found:                             3
Number of tests selected for run based on pattern: 3
Number of tests skipped as they were marked bad:   0
Number of tests skipped because of known_issues:   0
Number of tests that were run:                     3

Tests ordered by time taken, slowest to fastest:
================================================================================
./tests/basic/afr/arbiter-add-brick.t  -  134 second
./tests/basic/afr/add-brick-self-heal.t  -  20 second
./tests/basic/0symbol-check.t  -  0 second

0 test(s) failed
1 test(s) generated core
./tests/basic/afr/arbiter-add-brick.t

Result is 1

tar: Removing leading / from absolute path names in the archive
Cores and build archived in http://nbslave72.cloud.gluster.org/archives/archived_builds/build-install-20171222135627.tgz
Open core using the following command to get a proper stack...
Example: From root of extracted tarball
  gdb -ex 'set sysroot ./' -ex 'core-file ./build/install/cores/xxx.core' <target, say ./build/install/sbin/glusterd>
NB: this requires a gdb built with 'NetBSD ELF' osabi support, which is available natively on a NetBSD-7.0/i386 system
tar: Removing leading / from absolute path names in the archive
Logs archived in http://nbslave72.cloud.gluster.org/archives/logs/glusterfs-logs-20171222135627.tgz
error: fatal: change is closed
fatal: one or more reviews failed; review output above
Build step 'Execute shell' marked build as failure
_______________________________________________
maintainers mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/maintainers
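For anyone debugging the core reported above, the archive-and-gdb steps from the log can be sketched end to end as below. This is a sketch, not a verified recipe: the archive URL is the one printed in this log, `xxx.core` is the placeholder from the original message (the real name must be read from `build/install/cores/` after extraction), and `glusterd` is only the example target the log itself suggests.

```sh
# Fetch the archived build+cores tarball named in the log above (URL from this log).
curl -O http://nbslave72.cloud.gluster.org/archives/archived_builds/build-install-20171222135627.tgz

# Extract it into its own directory; the tarball unpacks a build/install tree.
mkdir extracted && tar -xzf build-install-20171222135627.tgz -C extracted
cd extracted

# List the cores to find the actual core filename (xxx.core below is a placeholder).
ls build/install/cores/

# From the root of the extracted tarball: set the sysroot so gdb resolves the
# archived shared libraries instead of the local system's, then load the core
# against the binary that produced it (glusterd is the log's example target).
# Per the log's NB, this needs a gdb with 'NetBSD ELF' osabi support.
gdb -ex 'set sysroot ./' \
    -ex 'core-file ./build/install/cores/xxx.core' \
    ./build/install/sbin/glusterd
```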
