See <https://build.gluster.org/job/centos8-s390-regression/127/display/redirect>
Changes:
------------------------------------------
[...truncated 3.94 MB...]
205: subvolumes patchy
206: end-volume
207:
+------------------------------------------------------------------------------+
glfs_get_volumeid: returned 16
lstat(/filename2): (-1) No such file or directory
Entries:
.: 64
..: 76
filename2: 3076
setxattr(/filename2): 0 (Success)
setxattr(/filename2): 0 (Success)
getxattr(/filename2): 8 (Success)
listxattr(/filename2): 44 (Success)
symlink(/filename2 /linkfile): Success
readlink(/filename2) : 8 (Success)
lsetxattr(/linkfile) : 0 (Success)
llistxattr(/filename2): 17 (Success)
lgetxattr(/linkfile): 8 (Success)
removexattr(/filename2): 0 (Success)
open(/filename2): (0x3a71ba48) Success
fsetxattr(/filename2): 0 (Success)
fgetxattr(/filename2): 8 (Success)
flistxattr(/filename2): 117 (Success)
fremovexattr(/filename2): 0 (Success)
mkdir(/topdir): Success
mkdir(/dir): Success
[2023-05-20 21:01:52.440848 +0000] W [MSGID: 108001] [afr-common.c:6410:afr_notify] 0-patchy-replicate-0: Client-quorum is not met
[2023-05-20 21:01:52.440977 +0000] W [MSGID: 108001] [afr-common.c:6410:afr_notify] 0-patchy-replicate-1: Client-quorum is not met
[2023-05-20 21:01:52.441005 +0000] E [MSGID: 108006] [afr-common.c:6105:__afr_handle_child_down_event] 0-patchy-replicate-1: All subvolumes are down. Going offline until at least one of them comes back up.
[2023-05-20 21:01:52.441173 +0000] E [MSGID: 108006] [afr-common.c:6105:__afr_handle_child_down_event] 0-patchy-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up.
[2023-05-20 21:01:52.441342 +0000] W [inode.c:1882:inode_table_destroy] (-->/build/install/lib/libgfapi.so.0(glfs_fini+0x54c) [0x3ff8e68e284] -->/build/install/lib/libglusterfs.so.0(inode_table_destroy_all+0xae) [0x3ff8e0d2736] -->/build/install/lib/libglusterfs.so.0(inode_table_destroy+0x23c) [0x3ff8e0d29bc] ) 0-gfapi: Active inode(0x3a71f3c8) with refcount(1) found during cleanup
[2023-05-20 21:01:58.364034 +0000] I [io-stats.c:4200:fini] 0-patchy: io-stats translator unloaded
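(For context: the glfsxmp output above is the coverage harness walking libgfapi's public entry points against the patchy volume. A minimal sketch of that call sequence, assuming a volume named patchy served from localhost:24007 and a placeholder xattr key -- this is not the actual glfsxmp.c -- looks like:

    /* Minimal libgfapi sketch of the call sequence exercised above.
     * Assumptions: volume "patchy" on localhost:24007; the xattr
     * key/value are placeholders. Link with -lgfapi. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        glfs_t *fs = glfs_new("patchy");
        if (!fs)
            return 1;
        glfs_set_volfile_server(fs, "tcp", "localhost", 24007);
        if (glfs_init(fs) != 0)
            return 1;

        struct stat st;
        /* Fails with ENOENT until the file exists, as in the log. */
        int ret = glfs_lstat(fs, "/filename2", &st);
        printf("lstat(/filename2): (%d) %s\n", ret, strerror(errno));

        glfs_fd_t *fd = glfs_creat(fs, "/filename2", O_RDWR, 0644);
        if (fd) {
            glfs_setxattr(fs, "/filename2", "user.demo", "value", 5, 0);
            char buf[64];
            glfs_getxattr(fs, "/filename2", "user.demo", buf, sizeof(buf));
            glfs_removexattr(fs, "/filename2", "user.demo");
            glfs_close(fd);
        }

        /* glfs_fini() tears down the inode table; a leaked inode ref at
         * this point is what produces the "Active inode ... found
         * during cleanup" warning seen above. */
        return glfs_fini(fs);
    }

The inode_table_destroy warning above is therefore a cleanup-time diagnostic from glfs_fini, not a test failure.)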
ok 7 [ 18565/ 2] < 24> 'cleanup_tester ./glfsxmp'
ok 8 [ 12/ 2] < 25> 'rm ./glfsxmp.c'
ok 9 [ 12/ 6089] < 28> 'gluster --mode=script --wignore volume stop
patchy'
ok 10 [ 13/ 585] < 30> 'gluster --mode=script --wignore volume delete
patchy'
ok
All tests successful.
Files=1, Tests=10, 30 wallclock secs ( 0.02 usr 0.00 sys + 0.60 cusr 0.58
csys = 1.20 CPU)
Result: PASS
Logs preserved in tarball arbiter-coverage-iteration-1.tar.gz
End of test ./tests/line-coverage/arbiter-coverage.t
================================================================================
======================================== (825 / 838) ========================================
[21:02:08] Running tests in file
./tests/line-coverage/cli-negative-case-and-function-coverage.t
./tests/line-coverage/cli-negative-case-and-function-coverage.t ..
1..75
ok 1 [ 145/ 1142] < 9> 'glusterd'
ok 2 [ 12/ 7] < 10> 'pidof glusterd'
ok 3 [ 12/ 53] < 14> '! gluster --mode=script --wignore volume
create patchy_1 148.100.84.19-/d/backends/v1 148.100.84.19-/d/backends/v2'
ok 4 [ 11/ 52] < 17> '! gluster --mode=script --wignore volume
create patchy_1 :/d/backends/v1 :/d/backends/v2'
ok 5 [ 11/ 52] < 20> '! gluster --mode=script --wignore volume
create patchy_1 localhost:/d/backends/v1 localhost:/d/backends/v2'
ok 6 [ 12/ 52] < 24> '! gluster --mode=script --wignore volume
inode-quota disable'
ok 7 [ 12/ 52] < 27> '! gluster --mode=script --wignore volume
inode-quota patchy_1 disable'
ok 8 [ 11/ 53] < 31> '! gluster --mode=script --wignore volume
patchy_1 start'
ok 9 [ 12/ 52] < 32> '! gluster --mode=script --wignore volume
patchy_1 limit-usage /random-path 0'
ok 10 [ 11/ 53] < 33> '! gluster --mode=script --wignore volume
patchy_1 limit-objects /random-path 0'
ok 11 [ 11/ 52] < 34> '! gluster --mode=script --wignore volume
patchy_1 alert-time some-time'
ok 12 [ 11/ 52] < 35> '! gluster --mode=script --wignore volume
patchy_1 soft-timeout some-time'
ok 13 [ 11/ 53] < 36> '! gluster --mode=script --wignore volume
patchy_1 hard-timeout some-time'
ok 14 [ 11/ 53] < 39> '! gluster --mode=script --wignore volume
patchy_1 limit-usage random-path'
ok 15 [ 11/ 53] < 40> '! gluster --mode=script --wignore volume
patchy_1 remove random-path'
ok 16 [ 12/ 52] < 41> '! gluster --mode=script --wignore volume
patchy_1 remove-objects random-path'
ok 17 [ 12/ 52] < 42> '! gluster --mode=script --wignore volume
patchy_1 list random-path'
ok 18 [ 11/ 53] < 45> '! gluster --mode=script --wignore volume
patchy_1 remove /random-path'
ok 19 [ 11/ 51] < 46> '! gluster --mode=script --wignore volume
patchy_1 remove-objects /random-path'
ok 20 [ 11/ 52] < 47> '! gluster --mode=script --wignore volume
patchy_1 alert-time'
ok 21 [ 11/ 52] < 48> '! gluster --mode=script --wignore volume
patchy_1 soft-timeout'
ok 22 [ 11/ 53] < 49> '! gluster --mode=script --wignore volume
patchy_1 hard-timeout'
ok 23 [ 12/ 53] < 50> '! gluster --mode=script --wignore volume
patchy_1 default-soft-limit'
ok 24 [ 12/ 53] < 53> '! gluster --mode=script --wignore nfs-ganesha'
ok 25 [ 11/ 52] < 54> '! gluster --mode=script --wignore nfs-gansha
disable'
ok 26 [ 11/ 52] < 55> '! gluster --mode=script --wignore nfs-ganesha
stop'
ok 27 [ 11/ 56] < 56> '! gluster --mode=script --wignore nfs-ganesha
disable'
ok 28 [ 11/ 53] < 57> '! gluster --mode=script --wignore nfs-ganesha
enable'
ok 29 [ 11/ 52] < 60> '! gluster --mode=script --wignore peer probe'
ok 30 [ 12/ 53] < 61> '! gluster --mode=script --wignore peer probe
host_name'
ok 31 [ 11/ 52] < 62> '! gluster --mode=script --wignore peer detach'
ok 32 [ 11/ 53] < 63> '! gluster --mode=script --wignore peer detach
host-name random-option'
ok 33 [ 11/ 53] < 64> '! gluster --mode=script --wignore peer status
host'
ok 34 [ 11/ 52] < 65> '! gluster --mode=script --wignore pool list
host'
ok 35 [ 12/ 53] < 68> '! gluster --mode=script --wignore vol sync'
ok 36 [ 12/ 50122] < 69> '! gluster --mode=script --wignore vol sync
host-name'
ok 37 [ 14/ 56] < 70> '! gluster --mode=script --wignore vol sync
localhost'
ok 38 [ 12/ 53] < 73> '! gluster --mode=script --wignore system::
getspec'
ok 39 [ 12/ 54] < 74> '! gluster --mode=script --wignore system::
portmap brick2port'
ok 40 [ 12/ 53] < 75> '! gluster --mode=script --wignore system:: fsm
log random-peer random-value'
ok 41 [ 12/ 54] < 76> '! gluster --mode=script --wignore system::
getwd random-value'
ok 42 [ 12/ 53] < 77> '! gluster --mode=script --wignore system::
mount'
ok 43 [ 12/ 53] < 78> '! gluster --mode=script --wignore system::
umount'
ok 44 [ 12/ 53] < 79> '! gluster --mode=script --wignore system::
uuid get random-value'
ok 45 [ 12/ 54] < 80> '! gluster --mode=script --wignore system::
uuid reset random-value'
ok 46 [ 12/ 56] < 81> '! gluster --mode=script --wignore system::
execute'
ok 47 [ 13/ 53] < 82> '! gluster --mode=script --wignore system::
copy file'
ok 48 [ 12/ 74] < 85> 'gluster --mode=script --wignore volume create
patchy_1 replica 3 148.100.84.19:/d/backends/v1 148.100.84.19:/d/backends/v2
148.100.84.19:/d/backends/v3'
ok 49 [ 12/ 1679] < 86> 'gluster --mode=script --wignore volume start
patchy_1'
ok 50 [ 13/ 59] < 87> 'Y glustershd_up_status'
ok 51 [ 12/ 53] < 88> 'gluster --mode=script --wignore volume heal
patchy_1 statistics'
ok 52 [ 11/ 2399] < 91> 'gluster --mode=script --wignore volume
replace-brick patchy_1 148.100.84.19:/d/backends/v1
148.100.84.19:/d/backends/v4 commit force --xml'
ok 53 [ 14/ 74] < 92> 'gluster --mode=script --wignore volume create
patchy_2 148.100.84.19:/d/backends/v5 148.100.84.19:/d/backends/v6 --xml'
ok 54 [ 12/ 518] < 93> 'gluster --mode=script --wignore volume delete
patchy_2 --xml'
ok 55 [ 14/ 54] < 96> '! gluster --mode=script --wignore volume start'
ok 56 [ 12/ 56] < 97> '! gluster --mode=script --wignore volume start
patchy_1 frc'
ok 57 [ 12/ 55] < 98> '! gluster --mode=script --wignore volume info
patchy_1 info'
ok 58 [ 12/ 54] < 99> '! gluster --mode=script --wignore volume info
patchy_2'
ok 59 [ 12/ 54] < 100> '! gluster --mode=script --wignore volume
delete'
ok 60 [ 12/ 53] < 101> '! gluster --mode=script --wignore volume stop'
ok 61 [ 12/ 55] < 102> '! gluster --mode=script --wignore volume stop
patchy_1 frc'
ok 62 [ 12/ 53] < 103> '! gluster --mode=script --wignore volume
rebalance patchy_1'
ok 63 [ 12/ 53] < 104> '! gluster --mode=script --wignore volume reset'
ok 64 [ 12/ 53] < 105> '! gluster --mode=script --wignore volume
profile patchy_1'
ok 65 [ 12/ 53] < 106> '! gluster --mode=script --wignore volume quota
all'
ok 66 [ 12/ 53] < 107> '! gluster --mode=script --wignore volume
reset-brick patchy_1'
ok 67 [ 12/ 54] < 108> '! gluster --mode=script --wignore volume top
patchy_1'
ok 68 [ 12/ 53] < 109> '! gluster --mode=script --wignore volume log
rotate'
ok 69 [ 11/ 54] < 110> '! gluster --mode=script --wignore volume
status all all'
ok 70 [ 12/ 54] < 111> '! gluster --mode=script --wignore volume heal'
ok 71 [ 12/ 53] < 112> '! gluster --mode=script --wignore volume
statedump'
ok 72 [ 12/ 54] < 113> '! gluster --mode=script --wignore volume
clear-locks patchy_1 / kid granted entry dir1'
ok 73 [ 12/ 53] < 114> '! gluster --mode=script --wignore volume
clear-locks patchy_1 / kind grant entry dir1'
ok 74 [ 12/ 54] < 115> '! gluster --mode=script --wignore volume
clear-locks patchy_1 / kind granted ent dir1'
ok 75 [ 12/ 53] < 116> '! gluster --mode=script --wignore volume
barrier patchy_1'
ok
All tests successful.
Files=1, Tests=75, 61 wallclock secs ( 0.02 usr 0.00 sys + 3.59 cusr 1.50
csys = 5.11 CPU)
Result: PASS
Logs preserved in tarball
cli-negative-case-and-function-coverage-iteration-1.tar.gz
End of test ./tests/line-coverage/cli-negative-case-and-function-coverage.t
================================================================================
======================================== (826 / 838) ========================================
[21:03:09] Running tests in file
./tests/line-coverage/cli-peer-and-volume-operations.t
./tests/line-coverage/cli-peer-and-volume-operations.t ..
1..58
ok 1 [ 152/ 3541] < 13> 'launch_cluster 3'
ok 2 [ 13/ 57] < 15> 'gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log system
uuid reset'
ok 3 [ 12/ 76] < 18> 'gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log peer
probe 127.1.1.2'
ok 4 [ 12/ 60] < 19> '1 peer_count 1'
ok 5 [ 12/ 58] < 20> '1 peer_count 2'
ok 6 [ 12/ 3] < 23> 'kill_glusterd 3'
ok 7 [ 12/ 58] < 24> '! gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log peer
probe 127.1.1.3'
ok 8 [ 12/ 54] < 27> '! gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log peer
detach 127.1.1.3'
ok 9 [ 12/ 53] < 28> '! gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log peer
detach 127.1.1.3 force'
ok 10 [ 12/ 1151] < 30> 'start_glusterd 3'
ok 11 [ 12/ 237] < 31> 'gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log peer
probe 127.1.1.3'
ok 12 [ 13/ 59] < 32> '2 peer_count 1'
ok 13 [ 12/ 57] < 33> '2 peer_count 2'
ok 14 [ 11/ 57] < 34> '2 peer_count 3'
ok 15 [ 12/ 54] < 37> 'gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log peer
probe 127.1.1.3'
ok 16 [ 12/ 67] < 40> '! gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log peer
probe 1024.1024.1024.1024'
ok 17 [ 12/ 54] < 42> 'gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log pool
list'
ok 18 [ 12/ 6] < 44> 'gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log --help'
ok 19 [ 12/ 6] < 45> 'gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log
--version'
ok 20 [ 11/ 6] < 46> 'gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log
--print-logdir'
ok 21 [ 12/ 6] < 47> 'gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log
--print-statedumpdir'
ok 22 [ 11/ 54] < 50> '! gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log volume'
ok 23 [ 12/ 7] < 51> 'pidof glusterd'
ok 24 [ 12/ 53] < 54> 'gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log global
help'
ok 25 [ 12/ 53] < 55> 'gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log help'
ok 26 [ 12/ 53] < 57> 'gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log peer
help'
ok 27 [ 12/ 54] < 58> 'gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log volume
help'
ok 28 [ 12/ 53] < 59> 'gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log volume
bitrot help'
ok 29 [ 12/ 53] < 60> 'gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log volume
quota help'
ok 30 [ 12/ 53] < 61> 'gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log
snapshot help'
ok 31 [ 12/ 107] < 64> 'gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log volume
create patchy 127.1.1.1:/d/backends/1/patchy 127.1.1.2:/d/backends/2/patchy
127.1.1.3:/d/backends/3/patchy'
ok 32 [ 12/ 54] < 66> '! gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log volume
create patchy 127.1.1.1:/d/backends/1/patchy1 127.1.1.2:/d/backends/2/patchy1'
ok 33 [ 12/ 412] < 67> 'gluster --mode=script --wignore
--glusterd-sock=/d/backends/1/glusterd/gd.sock
--log-file=/var/log/glusterfs/cli-peer-and-volume-operations.t_cli1.log volume
start patchy'
ok 34 [ 12/ 56] < 68> 'Started cluster_volinfo_field 1 patchy Status'
ok 35 [ 11/ 19] < 71> 'glusterfs -s 127.1.1.1 --volfile-id patchy
/mnt/glusterfs/1'
ok 36 [ 13/ 127] < 72> 'touch /mnt/glusterfs/1/file1
/mnt/glusterfs/1/file2 /mnt/glusterfs/1/file3 /mnt/glusterfs/1/file4
/mnt/glusterfs/1/file5 /mnt/glusterfs/1/file6 /mnt/glusterfs/1/file7
/mnt/glusterfs/1/file8 /mnt/glusterfs/1/file9 /mnt/glusterfs/1/file10
/mnt/glusterfs/1/file11 /mnt/glusterfs/1/file12 /mnt/glusterfs/1/file13
/mnt/glusterfs/1/file14 /mnt/glusterfs/1/file15 /mnt/glusterfs/1/file16
/mnt/glusterfs/1/file17 /mnt/glusterfs/1/file18 /mnt/glusterfs/1/file19
/mnt/glusterfs/1/file20 /mnt/glusterfs/1/file21 /mnt/glusterfs/1/file22
/mnt/glusterfs/1/file23 /mnt/glusterfs/1/file24 /mnt/glusterfs/1/file25
/mnt/glusterfs/1/file26 /mnt/glusterfs/1/file27 /mnt/glusterfs/1/file28
/mnt/glusterfs/1/file29 /mnt/glusterfs/1/file30 /mnt/glusterfs/1/file31
/mnt/glusterfs/1/file32 /mnt/glusterfs/1/file33 /mnt/glusterfs/1/file34
/mnt/glusterfs/1/file35 /mnt/glusterfs/1/file36 /mnt/glusterfs/1/file37
/mnt/glusterfs/1/file38 /mnt/glusterfs/1/file39 /mnt/glusterfs/1/file40 /mnt/glu
sterfs/1/file41 /mnt/glusterfs/1/file42 /mnt/glusterfs/1/file43
/mnt/glusterfs/1/file44 /mnt/glusterfs/1/file45 /mnt/glusterfs/1/file46
/mnt/glusterfs/1/file47 /mnt/glusterfs/1/file48 /mnt/glusterfs/1/file49
/mnt/glusterfs/1/file50 /mnt/glusterfs/1/file51 /mnt/glusterfs/1/file52
/mnt/glusterfs/1/file53 /mnt/glusterfs/1/file54 /mnt/glusterfs/1/file55
/mnt/glusterfs/1/file56 /mnt/glusterfs/1/file57 /mnt/glusterfs/1/file58
/mnt/glusterfs/1/file59 /mnt/glusterfs/1/file60 /mnt/glusterfs/1/file61
/mnt/glusterfs/1/file62 /mnt/glusterfs/1/file63 /mnt/glusterfs/1/file64
/mnt/glusterfs/1/file65 /mnt/glusterfs/1/file66 /mnt/glusterfs/1/file67
/mnt/glusterfs/1/file68 /mnt/glusterfs/1/file69 /mnt/glusterfs/1/file70
/mnt/glusterfs/1/file71 /mnt/glusterfs/1/file72 /mnt/glusterfs/1/file73
/mnt/glusterfs/1/file74 /mnt/glusterfs/1/file75 /mnt/glusterfs/1/file76
/mnt/glusterfs/1/file77 /mnt/glusterfs/1/file78 /mnt/glusterfs/1/file79
/mnt/glusterfs/1/file80 /mnt/glusterfs/1/file81 /mnt/glusterfs/1/file82
/mnt/glusterfs/1/file83 /mnt/glusterfs/1/file84 /mnt/glusterfs/1/file85
/mnt/glusterfs/1/file86 /mnt/glusterfs/1/file87 /mnt/glusterfs/1/file88
/mnt/glusterfs/1/file89 /mnt/glusterfs/1/file90 /mnt/glusterfs/1/file91
/mnt/glusterfs/1/file92 /mnt/glusterfs/1/file93 /mnt/glusterfs/1/file94
/mnt/glusterfs/1/file95 /mnt/glusterfs/1/file96 /mnt/glusterfs/1/file97
/mnt/glusterfs/1/file98 /mnt/glusterfs/1/file99 /mnt/glusterfs/1/file100'
ok 37 [ 12/ 55] < 75> '! gluster --mode=script --wi
FATAL: command execution failed
java.io.EOFException
	at java.base/java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2911)
	at java.base/java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3406)
	at java.base/java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:932)
	at java.base/java.io.ObjectInputStream.<init>(ObjectInputStream.java:375)
	at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
	at hudson.remoting.Command.readFrom(Command.java:142)
	at hudson.remoting.Command.readFrom(Command.java:128)
	at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:61)
Caused: java.io.IOException: Unexpected termination of the channel
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:75)
Caused: java.io.IOException: Backing channel 'builder-el8-s390x-2.ibm-l1.gluster.org' is disconnected.
	at hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:215)
	at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:285)
	at com.sun.proxy.$Proxy150.isAlive(Unknown Source)
	at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1215)
	at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1207)
	at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:195)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:145)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:92)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:818)
	at hudson.model.Build$BuildExecution.build(Build.java:199)
	at hudson.model.Build$BuildExecution.doRun(Build.java:164)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:526)
	at hudson.model.Run.execute(Run.java:1900)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
	at hudson.model.ResourceController.execute(ResourceController.java:101)
	at hudson.model.Executor.run(Executor.java:442)
FATAL: Unable to delete script file /tmp/jenkins11785905521952862862.sh
java.io.EOFException
	at java.base/java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2911)
	at java.base/java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3406)
	at java.base/java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:932)
	at java.base/java.io.ObjectInputStream.<init>(ObjectInputStream.java:375)
	at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:49)
	at hudson.remoting.Command.readFrom(Command.java:142)
	at hudson.remoting.Command.readFrom(Command.java:128)
	at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:61)
Caused: java.io.IOException: Unexpected termination of the channel
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:75)
Caused: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel@10f3b8e8:builder-el8-s390x-2.ibm-l1.gluster.org": Remote call on builder-el8-s390x-2.ibm-l1.gluster.org failed. The channel is closing down or has closed down
	at hudson.remoting.Channel.call(Channel.java:993)
	at hudson.FilePath.act(FilePath.java:1192)
	at hudson.FilePath.act(FilePath.java:1181)
	at hudson.FilePath.delete(FilePath.java:1728)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:163)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:92)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:818)
	at hudson.model.Build$BuildExecution.build(Build.java:199)
	at hudson.model.Build$BuildExecution.doRun(Build.java:164)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:526)
	at hudson.model.Run.execute(Run.java:1900)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
	at hudson.model.ResourceController.execute(ResourceController.java:101)
	at hudson.model.Executor.run(Executor.java:442)
Build step 'Execute shell' marked build as failure
ERROR: Unable to tear down: null
java.lang.NullPointerException
	at hudson.slaves.WorkspaceList.tempDir(WorkspaceList.java:313)
	at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir.secretsDir(UnbindableDir.java:61)
	at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir.access$000(UnbindableDir.java:22)
	at org.jenkinsci.plugins.credentialsbinding.impl.UnbindableDir$UnbinderImpl.unbind(UnbindableDir.java:83)
	at org.jenkinsci.plugins.credentialsbinding.impl.SecretBuildWrapper$1.tearDown(SecretBuildWrapper.java:116)
	at hudson.model.AbstractBuild$AbstractBuildExecution.tearDownBuildEnvironments(AbstractBuild.java:566)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:530)
	at hudson.model.Run.execute(Run.java:1900)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
	at hudson.model.ResourceController.execute(ResourceController.java:101)
	at hudson.model.Executor.run(Executor.java:442)
ERROR: builder-el8-s390x-2.ibm-l1.gluster.org is offline; cannot locate java-1.6.0-openjdk-1.6.0.0.x86_64