Pádraig Brady wrote:
> unarchive 13530
> stop
Thanks.
...
>> @@ -358,6 +356,14 @@ elide_tail_bytes_pipe (const char *filename, int fd, uintmax_t n_elide_0)
>>
>> if (buffered_enough)
>> {
>> + if (n_elide_0 != n_elide)
>> + {
>> + error (0, 0, _("memory exhausted while reading %s"),
>> + quote (filename));
>> + ok = false;
>> + goto free_mem;
>> + }
>> +
...
> Oh right, it's coming back to me a bit now.
> So removing these upfront checks only makes sense if the program
> could, given a different amount of free memory available, fulfil
> the operation up to the specified limit.
> In this case, though, the program could never fulfil the request,
> so it's better to fail early, as the current code does?
Well, it *can* fulfill the request whenever the request is degenerate,
i.e., when the size of the input is smaller than N and also small enough
to be read into memory.
Technically, we could handle this case the same way we handle it
in tac.c: read the data from the nonseekable FD and write it to a temporary file.
I'm not sure it's worth the effort here, though.
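For reference, the tac.c trick amounts to something like the sketch
below (untested and purely illustrative, not the actual tac.c code:
the helper name is made up, and the hard-coded /tmp ignores TMPDIR and
the diagnostics the real code would need).  The idea is to spill the
nonseekable input into an unlinked temporary file, after which the
ordinary seekable-input path applies:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Copy everything from the nonseekable IN_FD into an unlinked temporary
   file and return a seekable FD positioned at offset 0, or -1 on error.
   (Sketch only: fixed /tmp template, no TMPDIR, minimal diagnostics.)  */
int
copy_to_seekable_temp (int in_fd)
{
  char tmpl[] = "/tmp/headXXXXXX";
  int tmp_fd = mkstemp (tmpl);
  if (tmp_fd < 0)
    return -1;
  unlink (tmpl);                /* the file disappears once TMP_FD is closed */

  char buf[BUFSIZ];
  ssize_t n;
  while ((n = read (in_fd, buf, sizeof buf)) > 0)
    {
      char *p = buf;
      while (n > 0)             /* handle short writes */
        {
          ssize_t w = write (tmp_fd, p, n);
          if (w < 0)
            {
              close (tmp_fd);
              return -1;
            }
          p += w;
          n -= w;
        }
    }
  if (n < 0 || lseek (tmp_fd, 0, SEEK_SET) < 0)
    {
      close (tmp_fd);
      return -1;
    }
  return tmp_fd;
}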
...
>> -(ulimit -v 20000; head --bytes=-E < /dev/null) || fail=1
>> +(ulimit -v 20000; head --bytes=-$OFF_T_MAX < /dev/null) || fail=1
I'm inclined to make the above (nonseekable input) cases succeed,
for consistency with the seekable-input case, like this:
: > empty
head --bytes=-E empty
I confess that I did not like the way my manual test ended up
using so much memory... but it couldn't know if it was going
to be able to succeed without actually reading/allocating all
of that space.
If we give up immediately, we fail unnecessarily in cases like
the above where the input is smaller than N.
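To make that concrete, the nonseekable path has to look roughly like
this deliberately simplified sketch (hypothetical, not the real
elide_tail_bytes_pipe, and the plain doubling realloc is cruder than
what head.c actually does):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Read all of FD into memory.  If EOF arrives while the total is still
   <= N_ELIDE, the whole input is elided: print nothing and succeed.
   Otherwise print everything but the last N_ELIDE bytes.  The request
   fails only if an allocation (or a read/write) actually fails.  */
bool
elide_tail_bytes_pipe_sketch (int fd, uintmax_t n_elide)
{
  size_t cap = BUFSIZ, total = 0;
  char *buf = malloc (cap);
  if (!buf)
    return false;

  ssize_t n;
  while ((n = read (fd, buf + total, cap - total)) > 0)
    {
      total += n;
      if (total == cap)
        {
          char *bigger = realloc (buf, 2 * cap);
          if (!bigger)          /* only now do we know we cannot succeed */
            {
              free (buf);
              return false;
            }
          buf = bigger;
          cap *= 2;
        }
    }

  bool ok = (n == 0);
  if (ok && total > n_elide)    /* input larger than N: emit its head */
    ok = fwrite (buf, 1, total - n_elide, stdout) == total - n_elide;
  free (buf);
  return ok;
}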
What do you think?