Date:        Mon, 14 Jul 2025 10:52:50 -0400
    From:        Chet Ramey <chet.ra...@case.edu>
    Message-ID:  <2702d742-d1d6-4929-9024-028a2d5cf...@case.edu>

  | One of the assumptions bash makes is that when the kernel tells it the size
  | of a regular file using stat(2), it's telling the truth,

That's entirely reasonable.

  | and that reading fewer bytes than that when asking for the entire
  | file indicates some kind of problem.

But that's less reasonable: there's a gap between the stat() and the
read(), and in that interval the size of the file can change.   If it
gets smaller, that's harmless, just use the size actually read.   If it
gets larger, it might be necessary to allocate a bigger buffer and read
again (if there's really a need to read the whole file at once).   And
how does this handle sourcing a script that's hundreds of GB (TB?) in
size?   Much of that might be comments; the file being that big doesn't
mean the actual parsable code inside it is.

  | > This happens because /sys/block/*/uevent show up as regular files, and
  | > reports a file size of 4096 in stat:
  | Which isn't the true file size.

And I'd be reporting that as a kernel bug.    But linux seems to be
filled with crap like that.
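The mismatch is easy to see on a Linux box; this is just an illustrative
probe (the glob and the tools assume a typical Linux userland, and the
machine running it may have no block devices at all):

```shell
# Compare the size stat(2) reports for a sysfs "regular" file with the
# number of bytes a read actually yields.  Any /sys/block/*/uevent will do.
f=$(ls /sys/block/*/uevent 2>/dev/null | head -n 1)
if [ -n "$f" ]; then
    stat -c '%s' "$f"   # typically prints 4096 (one page), not the content size
    wc -c < "$f"        # the bytes a read actually returns, usually far fewer
else
    echo "no sysfs block devices here; nothing to demonstrate"
fi
```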

kre


