On Fri, Oct 18, 2019 at 11:09 PM Michael Conrad <[email protected]> wrote:
> On 10/18/2019 10:54 AM, Denys Vlasenko wrote:
>> On Thu, Oct 10, 2019 at 4:35 PM Martin Lewis <[email protected]> wrote:
> Could you please elaborate on what was lacking in the INT_MAX part? As seen 
> in write's man page:
>
> On Linux, write() (and similar system calls) will transfer at most
> 0x7ffff000 (2,147,479,552) bytes, returning the number of bytes
> actually transferred. (This is true on both 32-bit and 64-bit
> systems.)
>
> Wouldn't it create an issue on 64-bit systems when trying to
> read/write large files?
>
> This seemed like a change not prompted by an actually observed buggy behavior.
>
> In fact, to my eyes
>
> +	if (count > SYS_BUFSIZE_MAX)
> +		count = SYS_BUFSIZE_MAX;
>
> looks like it introduces a bug. read() is always allowed to return a short 
> count however the kernel pleases,
> and safe_read is supposed to call it in a loop until the desired size is 
> reached or until EOF or a fatal error.

No, it's full_read(). safe_read() only loops on EINTR.
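
To make the distinction concrete, here is a sketch of the two contracts
being discussed, modeled loosely on busybox's libbb (the names
my_safe_read/my_full_read are illustrative, not the actual source):
safe_read() retries only when the syscall was interrupted, while
full_read() loops until it has the requested byte count, EOF, or a real
error.

```c
#include <errno.h>
#include <unistd.h>

/* Retry read() only on EINTR; a short read is passed through
 * unchanged, so callers must be prepared for one. */
static ssize_t my_safe_read(int fd, void *buf, size_t count)
{
	ssize_t n;
	do {
		n = read(fd, buf, count);
	} while (n < 0 && errno == EINTR);
	return n;
}

/* Loop until 'len' bytes were read, EOF, or a fatal error.
 * Returns the total number of bytes read. */
static ssize_t my_full_read(int fd, void *buf, size_t len)
{
	ssize_t total = 0;
	while (len) {
		ssize_t cc = my_safe_read(fd, buf, len);
		if (cc < 0)
			return total ? total : cc; /* error (maybe after partial data) */
		if (cc == 0)
			break; /* EOF */
		buf = (char *)buf + cc;
		total += cc;
		len -= cc;
	}
	return total;
}
```

So truncating the count inside full_read()'s loop body is harmless (the
loop issues another read), whereas truncating it in safe_read() is
visible to callers - which is exactly why safe_read() callers must
tolerate short reads anyway.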

> If you truncate the count before beginning the loop, then the caller ends up 
> with a short read even though there might be more data.

Yes. safe_read() users either must code for this, or they use it in a
context where short reads do not make sense - e.g. when you read a UDP
packet, short reads never happen: you always receive the full packet in
one read (if your buffer is large enough), it will never be spuriously
split by the kernel.

There is in fact some obscure standards language which also guarantees
that reading from an ordinary file should not return spurious partial
reads. IIRC Linus decided that Linux will not honour this for very
large I/O blocks.

> (not that anyone would likely have a reason to require more than 2GB in a 
> single buffer anyway…)

dd is a legitimate case, I guess. Testing dd (coreutils) 8.30, 64-bit:

dd bs=4G count=1 <FILE >FILE2

does perform a 4G-sized read syscall, gets 0x7ffff000 bytes, writes
them, and exits.
_______________________________________________
busybox mailing list
[email protected]
http://lists.busybox.net/mailman/listinfo/busybox
