Hi Pádraig,

I was able to find a work-around, which is to make tee's stdout writes
fail immediately by redirecting stdout to /dev/full:

src/tee --output-error=warn </dev/zero >(head -c100M | wc -c ) >(head -c1 |
wc -c ) >/dev/full | src/numfmt --to=iec
src/tee: standard output: No space left on device
1
src/tee: /dev/fd/62: Broken pipe
100M
src/tee: /dev/fd/63: Broken pipe
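For anyone unfamiliar with the trick: /dev/full can be opened like
/dev/null, but every write to it fails at once with ENOSPC. A minimal
demonstration (Linux-specific):

```shell
# /dev/full accepts the open, but the write fails immediately with
# ENOSPC ("No space left on device"), so the writer gives up right away.
echo test > /dev/full 2>/dev/null
echo "write exit status: $?"   # non-zero, because the write failed
```

That immediate failure is what makes tee (with --output-error=warn)
stop writing to stdout while it keeps feeding the other outputs.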

I still think that having a --no-stdout option is a better way to
achieve this. Firstly, I'm not sure whether /dev/full is available on all
supported platforms, and it's not as well known as /dev/null. Furthermore,
I don't like the fact that a "src/tee: standard output: No space left on
device" warning is emitted; it can be misleading for someone reading the
log file who is unaware of this work-around. IMHO, it's a workaround to
close the stdout stream which, in this particular case, we don't want tee
to write to at all. And lastly, it could have some negative side effect
(though I could not think of any right now).
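If portability is the concern, a script could fall back to /dev/null
where /dev/full is absent; a rough sketch, assuming a POSIX shell:

```shell
# Pick /dev/full when available (writes fail instantly, so tee stops
# writing to stdout right away); otherwise fall back to /dev/null
# (tee keeps writing, but the data is discarded).
if [ -c /dev/full ]; then
    sink=/dev/full
else
    sink=/dev/null
fi
echo "redirecting tee's stdout to: $sink"
```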

Regarding the use case: each randomness test returns one p-value, and a
very low value indicates that the RNG has failed the test. You might want
all p-values in one file, or you might be interested in the smallest
values only and post-process the output with  sort -n | head .
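For instance, that post-processing step might look like this (the file
name and p-values are made up, just to illustrate):

```shell
# Collect the five smallest p-values from a file containing one
# p-value per line (sample data for illustration only).
printf '%s\n' 0.43 0.001 0.92 0.0007 0.31 0.65 > pvalues.txt
LC_ALL=C sort -n pvalues.txt | head -n 5
# Prints:
# 0.0007
# 0.001
# 0.31
# 0.43
# 0.65
```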

I would still vote to implement the --no-stdout option even knowing
about the >/dev/full workaround. In case I'm outvoted on this one, please
at least add a note to tee's man page saying that users who do not want
tee to write to stdout can shut it down immediately with the >/dev/full
trick.

Thanks!
Jirka





On Fri, Nov 20, 2015 at 12:36 AM, Pádraig Brady <[email protected]> wrote:

> On 19/11/15 23:09, Jirka Hladky wrote:
> >     If you ignore SIGPIPE in tee in the above then what will terminate
> >     the tee process?  Since the input is not ever terminated.
> >
> >
> > That's why I would like to have the option to suppress writing to
> > STDOUT. By default, tee will finish as soon as all files are closed. So
> > without the need for a >/dev/null redirection, it will run as long as at
> > least one pipe is open.
> >
> >   while (n_outputs)
> >     {
> >       read data;
> >
> >       /* Write to all NFILES + 1 descriptors.
> >          Standard output is the first one.  */
> >       for (i = 0; i < nfiles; i++)
> >         if (descriptors[i]
> >             && fwrite (buffer, bytes_read, 1, descriptors[i]) != 1)
> >           {
> >             /* exit on EPIPE error */
> >             descriptors[i] = NULL;
> >             n_outputs--;
> >           }
> >     }
> >
> >     Also, a Useless-Use-Of-Cat in the above too.
> >
> > Yes, it is. But anyway, it's not a real-world example. My real problem
> > is to test an RNG with multiple tests. I need to test a huge amount of
> > data (hundreds of GB), so storing the data on disk is not feasible. Each
> > test will consume a different amount of data - some tests will stop after
> > an RNG failure has been detected or some threshold for the maximum amount
> > of processed data is reached, others will dynamically change the amount
> > of tested data based on test results. The command I need to run is
> >
> > rng_generator | tee >(test1) >(test2) >(test3)
> >
> >
> >> Already done in the previous v8.24 release:
> > I have tried it but I'm not able to get the desired behavior. See these
> > examples:
> >
> > A)
> > tee --output-error=warn </dev/zero >(head -c100M | wc -c ) >(head -c1 |
> > wc -c ) >/dev/null
> > 1
> > src/tee: /dev/fd/62: Broken pipe
> > 104857600
> > src/tee: /dev/fd/63: Broken pipe
> >
> > => it's almost there except that it runs forever because of >/dev/null
> >
> > B)
> > src/tee --output-error=warn </dev/zero >(head -c100M | wc -c ) | (head
> > -c1 | wc -c )
> > 1
> > src/tee: standard output: Broken pipe
> > src/tee: /dev/fd/63: Broken pipe
> >
> > As you can see, the output from (head -c100M | wc -c) is missing
> >
> > Conclusion:
> > Case A) above is close to what I want to achieve, but there is a problem
> > with writing to stdout. --output-error=warn is part of the functionality
> > I was looking for. However, to make it usable for the scenario described
> > here, we need to add an option not to write to stdout. What do you think?
>
> Right, the particular issue here is that the >(process substitutions)
> are writing to stdout, and this is intermingled through the pipe
> to what tee is writing to stdout.
>
> Generally the process substitutions write somewhere else.
> In my example I used stderr (>&2), or you could write to file,
> or to /dev/tty for example.  Is there any particular reason
> the output from your process substitutions need to go to stdout?
>
> The general question is, would it be useful to further
> process the intermingled output from process substitutions.
> Maybe if it was tagged, but there still is the issue
> of atomic writes through pipes, so it would be of limited application.
>
> So in summary, maybe there is the need for --no-stdout,
> though I don't see it yet myself TBH.
>
> cheers,
> Pádraig
>
>
