Christoph Anton Mitterer wrote, on 12 Mar 2025:
>
> > > b) Does it even solve the original problem, or could e.g. such a
> > > n<&n
> > > respectively m>&m fail itself (not e.g. because a file doesn't
> > > exist, but because of something like resource exhaustion, etc.)
> >
> > It could potentially solve the original problem if we can get
> > consensus to add something suitable to the standard.
>
> It would probably not yet solve (b), or would it?
>
> I mean not if it were merely defined as you propose:
>
> > So I would support updating the standard to require that n<&n and
> > n>&n are always a no-op if fd n is open, except that if the shell
> > normally closes fds > 2, that were opened with exec, when it executes
> > a non-built-in utility, then applying n<&n or n>&n to such commands
> > causes fd n to remain open.
>
> That would in principle still allow for such redirection to fail (e.g.
> resource exhaustion), with no obvious way of detecting/handling such
> cases.
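[For context, a minimal sketch of the idiom under discussion; the file name and fd number are arbitrary. Opening fd 3 with exec and then applying 3<&3 to an external utility is the construct whose semantics (no-op, or "keep fd 3 open across exec") are being debated:]

```shell
#!/bin/sh
# Illustrative sketch of the n<&n idiom under discussion.
printf 'hello\n' > data.txt    # arbitrary input file for the example
exec 3< data.txt               # open fd 3 in the shell itself

# In most shells 3<&3 is a no-op when fd 3 is open; the proposal would
# additionally require it to keep fd 3 open across exec in shells that
# otherwise close fds > 2 (opened with exec) for external utilities.
cat <&3 3<&3                   # prints "hello": stdin is a dup of fd 3

exec 3<&-                      # close fd 3 when done
```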
If it is a no-op then it can't fail. So the only possible failure case
would be in the "remain open" requirement. In practice this will
involve calling fcntl() to clear the FD_CLOEXEC flag, which could
indeed fail because of something like resource exhaustion, but I don't
see that it significantly increases the likelihood of internal shell
failure. Any command execution can fail within the shell because of
resource exhaustion (e.g. fork() failure) before it gets as far as
doing the exec.

--
Geoff Clare <g.cl...@opengroup.org>
The Open Group, Apex Plaza, Forbury Road, Reading, RG1 1AX, England
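[The resource-exhaustion failure mode discussed above can be provoked deliberately by shrinking the per-process fd limit. This is an editorial sketch, not part of the original exchange; the limit value and fd numbers are arbitrary, and the exact error status varies by shell (it is only guaranteed to be nonzero):]

```shell
#!/bin/sh
# Force a fd-duplication failure by lowering the fd limit in a subshell.
(
  ulimit -n 4        # only fds 0-3 may exist in this subshell
  exec 3< /dev/null  # occupies the last permitted slot
  cat <&3 4<&3       # duplicating onto fd 4 exceeds the limit and fails,
                     # so cat is never executed
  echo "dup status: $?"
) 2>/dev/null        # discard the shell's redirection error message
```

Per POSIX, a redirection error on a non-special utility does not abort a non-interactive shell, so the `echo` still runs and reports the nonzero status.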