On Sat, Dec 13, 2014 at 9:39 AM, Ole Tange <o...@tange.dk> wrote:
> On Fri, Dec 12, 2014 at 3:44 PM, xmoon 2000 <xmoon2...@googlemail.com> wrote:
>> What is the best method for me to capture all errors from my scripts
>> to a single file?
>
> 3 options:
>
> Stop using --eta.
> Use --joblog (if all you need is the exit value)
> Use --results (if you really need the whole stderr)

Inspired by this problem I wondered how you could work around it.

http://unix.stackexchange.com/questions/174055/controlling-the-terminal-while-in-the-middle-of-a-pipe/174058#174058
pointed to the idea of finding the parent pid and using that process's
stderr or stdout. If running in a terminal, this will typically do the
right thing.

I now have a prototype that works on all platforms that support `lsof`.

So my question to you, dear users, is whether this should be the
default: should GNU Parallel's status/error messages be sent to the
parent pid's stderr instead of the current stderr?

Here are my considerations:

For most situations it will not matter at all: stderr is the terminal
both for job output and for status messages:

  parallel my_prg ::: 1 2 3 >my_prg.out

For xmoon's situation it does make a difference. He will be able to do:

  parallel my_prg ::: 1 2 3 >my_prg.out 2>my_prg.err

and be sure that there are no status messages from GNU Parallel mixed
into my_prg.err. The status messages will be sent to the parent pid's
stderr, which in this case will be the terminal.

A problem I see is that you cannot redirect GNU Parallel's status
messages: they will go to the terminal unless you do something like:

  bash -c 'parallel my_prg ::: 1 2 3 >my_prg.out 2>my_prg.err' 2>parallel.msg

This may lead to confusion, because you get different behaviour when
running interactively in a terminal than when running via a cron
script. So this may conflict with the Principle of Least Astonishment.

What do you think?

/Ole
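For reference, the three options above could be exercised like this (a
hedged sketch: the inline `echo` command and the file names `my.log`,
`my.out`, `my.err`, and `res` are stand-ins, not anything GNU Parallel
mandates):

```shell
# Requires GNU Parallel; exit quietly if it is not installed.
command -v parallel >/dev/null 2>&1 || exit 0

# A stand-in job that writes to both stdout and stderr:
job='echo out {}; echo err {} >&2'

# --joblog: one log line per job, including its exit value
parallel --joblog my.log "$job" ::: 1 2 3 > my.out 2> my.err

# --results: each job's stdout/stderr saved as files under res/,
# untouched by GNU Parallel's own status messages
parallel --results res "$job" ::: 1 2 3 > /dev/null 2>&1
```

With --results, each job's stderr ends up in its own file, so nothing
from GNU Parallel itself can get mixed in.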
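The parent-pid idea mentioned above can be sketched roughly like this
(this is not the actual prototype code; it assumes a Linux-style /proc,
with the `lsof`-based variant shown only as a comment):

```shell
# Rough sketch: find the parent process's stderr and write a status
# message there instead of to our own stderr.
ppid=$(ps -o ppid= -p $$ | tr -d '[:space:]')

# On Linux, /proc answers directly.  The prototype uses lsof instead,
# so it also works on platforms without /proc; roughly:
#   lsof -a -p "$ppid" -d 2 -Fn | sed -n 's/^n//p'
parent_stderr=$(readlink "/proc/$ppid/fd/2" 2>/dev/null)

# Fall back to our own stderr if the lookup failed (no /proc, or the
# parent's fd 2 is a pipe rather than a file/terminal).
[ -n "$parent_stderr" ] && [ -w "$parent_stderr" ] || parent_stderr=/dev/stderr
printf 'parallel: status message\n' >> "$parent_stderr"
```

If the parent is the interactive shell, its stderr is normally the
terminal, which is why this "typically does the right thing" in a
terminal but behaves differently under cron.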