A very weird bug was uncovered when running fstests under GitHub Actions. In fstests we do something like this:
  od /dev/urandom | dd of=somefile bs=1M count=10

The above works fine, except under GitHub Actions, which runs the script
remotely and captures its output in a pipe so it can be printed on the
console. In that scenario od ends up writing into the stdout pipe forever
without exiting, which it normally does not do.

strace'ing the process shows it receiving SIGPIPE and simply ignoring the
error. This happens because, when no limit is provided, we never check
whether an error occurred on the stdout fd.

Fix this by adjusting the loops to check for an error on every iteration
and break out if one has occurred. With this patch applied, the
configuration above no longer hangs.
---
 src/od.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/od.c b/src/od.c
index 6b66ceb4f..a07b4de03 100644
--- a/src/od.c
+++ b/src/od.c
@@ -1380,7 +1380,7 @@ dump (void)
 
   if (limit_bytes_to_format)
     {
-      while (true)
+      while (ok)
         {
           size_t n_needed;
           if (current_offset >= end_offset)
@@ -1396,13 +1396,14 @@ dump (void)
           affirm (n_bytes_read == bytes_per_block);
           write_block (current_offset, n_bytes_read, block[!idx],
                        block[idx]);
+          ok &= check_and_close(0);
           current_offset += n_bytes_read;
           idx = !idx;
         }
     }
   else
     {
-      while (true)
+      while (ok)
        {
           ok &= read_block (bytes_per_block, block[idx], &n_bytes_read);
           if (n_bytes_read < bytes_per_block)
@@ -1410,6 +1411,7 @@ dump (void)
           affirm (n_bytes_read == bytes_per_block);
           write_block (current_offset, n_bytes_read, block[!idx],
                        block[idx]);
+          ok &= check_and_close(0);
           current_offset += n_bytes_read;
           idx = !idx;
         }
--
2.41.0
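
For anyone who wants to see the failure mode in isolation: with SIGPIPE ignored
(as it is in the Actions runner), a writer that never inspects the error state
of stdout spins forever once its reader exits. The sketch below is not
coreutils code; stdout_ok() is a hypothetical stand-in for what
check_and_close(0) does in od, and the loop mirrors the while (ok) pattern the
patch introduces.

#include <errno.h>
#include <signal.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Illustrative stand-in for od's error check: report whether stdout is
   still healthy after the last write.  */
static bool
stdout_ok (void)
{
  if (ferror (stdout) || fflush (stdout) != 0)
    {
      fprintf (stderr, "write error: %s\n", strerror (errno));
      return false;
    }
  return true;
}

int
main (void)
{
  /* Actions-style environment: SIGPIPE is ignored, so a broken pipe
     surfaces as a write error instead of killing the process.  */
  signal (SIGPIPE, SIG_IGN);

  char buf[4096];
  memset (buf, 'x', sizeof buf);

  bool ok = true;
  while (ok)                     /* analogous to "while (ok)" in dump ()   */
    {
      fwrite (buf, 1, sizeof buf, stdout);
      ok &= stdout_ok ();        /* check every iteration, as the patch does */
    }
  return ok ? 0 : 1;
}

Piped into something that exits early, e.g. "./writer | head -c 1M >/dev/null",
the loop terminates on the first failed flush instead of hanging; drop the
"ok &= stdout_ok ();" line and it spins forever, exactly as od did.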