On Thu, Aug 6, 2020 at 2:14 PM Jason A. Donenfeld <ja...@zx2c4.com> wrote:
>
> On Thu, Aug 6, 2020 at 1:15 PM Oğuz <oguzismailuy...@gmail.com> wrote:
> >
> > On Thursday, August 6, 2020, Jason A. Donenfeld <ja...@zx2c4.com> wrote:
> >>
> >> Hi,
> >>
> >> It may be a surprise to some that this code here winds up printing
> >> "done", always:
> >>
> >> $ cat a.bash
> >> set -e -o pipefail
> >> while read -r line; do
> >>     echo "$line"
> >> done < <(echo 1; sleep 1; echo 2; sleep 1; false; exit 1)
> >> sleep 1
> >> echo done
> >>
> >> $ bash a.bash
> >> 1
> >> 2
> >> done
> >>
> >> The reason for this is that process substitution right now does not
> >> propagate errors. It's sort of possible to almost make this better
> >> with `|| kill $$` or some variant, and trap handlers, but that's very
> >> clunky and fraught with its own problems.
> >>
> >> Therefore, I propose a `set -o substfail` option for the upcoming bash
> >> 5.1, which would cause process substitution to propagate its errors
> >> upwards, even if done asynchronously.
> >
> > set -e -o substfail
> > : <(sleep 10; exit 1)
> > foo
> >
> > Say that `foo' is a command that takes longer than ten seconds to
> > complete, how would you expect the shell to behave here? Should it
> > interrupt `foo', or wait for its termination and exit then? Or do
> > something else?
>
> It's likely simpler to check after foo, since bash can just ask "are
> any of the process substitution processes that I was wait(2)ing on in
> an exited state with a non-zero return?", which just involves checking
> whether a little list titled exited_with_error_process_subst is
> non-null.
>
> A more sophisticated implementation could do that asynchronously with
> signals and SIGCHLD. In that model, if bash gets SIGCHLD from a
> process that exits with failure, it then exits inside the signal
> handler. This actually wouldn't be too hard to do either.
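For reference, the clunky `|| kill $$` + trap workaround mentioned in the quoted message can be sketched roughly as below. This is only an illustration of the idea, not anything bash provides: the choice of SIGUSR1, the trap message, and the wrapping in a command substitution (so the demo can report its own exit status; a top-level script would use `$$` instead of `$BASHPID`) are all assumptions.

```shell
#!/bin/bash
# Rough sketch of the "|| kill $$" + trap workaround: the substituted
# process signals the parent shell on failure, and a trap in the parent
# turns that signal into a nonzero exit.
out=$(
    trap 'echo "process substitution failed"; exit 1' USR1
    parent=$BASHPID   # a top-level script would use $$ here

    while read -r line; do
        echo "$line"
    done < <(echo 1; sleep 1; false || { kill -USR1 "$parent"; exit 1; })

    echo done         # not reached: the pending USR1 trap fires first
)
status=$?
printf '%s\n' "$out"
echo "exit status: $status"
```

The signal is delivered before the process substitution closes its end of the pipe, so by the time the read loop sees EOF the trap is already pending and runs before `echo done`. That said, this illustrates exactly why the approach is fraught: it needs a spare signal, a trap, and careful ordering, for something the shell could do itself.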
Actually, it looks like all the infrastructure for this latter approach is already there.
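At the script level, the "check after foo" behavior can already be approximated by hand in newer bash, where `$!` is set to the PID of the last process substitution and `wait` can retrieve its status (this is the bash 5.1 behavior; on older versions the `wait` below fails with an error instead, which is also nonzero):

```shell
#!/bin/bash
# Manual "check afterwards" sketch: reap the last process substitution
# and inspect its exit status, assuming a bash where $! is set by
# process substitution and `wait` can wait for it (bash 5.1+).
while read -r line; do
    echo "$line"
done < <(echo 1; echo 2; exit 3)

procsub_pid=$!        # PID of the process substitution above
wait "$procsub_pid"   # nonzero here because the substitution failed
substatus=$?
echo "process substitution exited with status $substatus"
```

A `set -o substfail` option would in effect perform this check automatically at the appropriate points, instead of requiring the script author to remember it after every substitution.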