Hi.  I'm sorry that something I had a hand in is causing you an
inconvenience.

I'm afraid it's not clear to me which behaviour from your example
program counts as "working" and which as "non-working".  I don't feel
I can reply comprehensively, so I will comment on some of the details
in your message.

Eric Wong writes ("Re: Bug#1040947: perl: readline fails to detect updated file"):
> Both can be data loss bugs, but checking a log file which is
> being written to by another process is a much more common
> occurence than FS failures[1] (or attempting readline on a
> directory (as given in the #1016369 example))

AFAICT you are saying that the fix to #1016369 broke a program which
was tailing a logfile.  I agree that one should be able to tail a
logfile in perl.  I don't think I have a complete opinion about
precisely what set of calls ought to be used to do that, but I would
expect them to mirror the calls needed in C with stdio.
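To illustrate, here is a minimal sketch of such a tail in Perl, under my own assumptions (the file name and line contents are placeholders, and a real tail loop would sleep and repeat rather than run once).  The seek to the current position plays the role of C stdio's clearerr(3)/fseek(3): it resets the handle's EOF state so that readline can see data appended by another writer.

```perl
use strict;
use warnings;
use IO::Handle;
use Fcntl qw(SEEK_CUR);
use File::Temp qw(tempfile);

# Stand-in for a log file being written by another process.
my ($wfh, $path) = tempfile(UNLINK => 1);
print {$wfh} "first line\n";
$wfh->flush;

open my $rfh, '<', $path or die "open $path: $!";
my @lines;
while (defined(my $line = readline $rfh)) { push @lines, $line }
# readline returned undef: distinguish a real IO error from EOF.
die "read $path: $!" if $rfh->error;

# The "other process" appends more data after we hit EOF.
print {$wfh} "second line\n";
$wfh->flush;

# Clear the EOF state (analogous to clearerr/fseek in C stdio)
# so the next readline can see the newly appended line.
seek $rfh, 0, SEEK_CUR or die "seek $path: $!";
while (defined(my $line = readline $rfh)) { push @lines, $line }
die "read $path: $!" if $rfh->error;

print scalar(@lines), " lines read\n";
```

A real tail would wrap the read-check-seek-sleep cycle in a loop; the single iteration here just shows the calls involved.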

> Since this is Perl and TIMTOWTDI, I've never used IO::Handle->error;
> instead I always check defined-ness on each critical return value and
> also enable Perl warnings to catch undefined return values.
> I've never used `eof' checks, either; checking `chomp' result
> can ensure proper termination of lines to detect truncated reads.

AFAICT you are saying that you have always treated an undef value
from line-reading operations as EOF, and never checked for error.
I think that is erroneous.

That IO errors are rare doesn't mean they oughtn't to be checked for.
Reliable software must check for IO errors and not assume that undef
means EOF.
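For concreteness, a sketch of what checking might look like; the helper name is mine, not anything from perl's API.  The point is that after readline returns undef, the handle's error flag (via IO::Handle's error method) must be consulted before concluding that the file genuinely ended.

```perl
use strict;
use warnings;
use IO::Handle;
use File::Temp qw(tempfile);

# Hypothetical helper: read a file line by line, refusing to treat
# readline's undef as EOF until the handle has been checked for a
# pending IO error.
sub read_all_lines {
    my ($path) = @_;
    open my $fh, '<', $path or die "open $path: $!";
    my @all;
    while (defined(my $line = readline $fh)) {
        push @all, $line;
    }
    # readline returned undef: only trust that as EOF if no error
    # is flagged on the handle.
    die "read error on $path: $!" if $fh->error;
    close $fh or die "close $path: $!";
    return @all;
}

# Quick demonstration on a temporary file.
my ($tmp, $path) = tempfile(UNLINK => 1);
print {$tmp} "alpha\nbeta\n";
$tmp->flush;
my @lines = read_all_lines($path);
print scalar(@lines), "\n";
```

Checking the return value of close matters too, since buffered reads can surface an error only at that point.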

I believe perl's autodie gets this wrong, which is very unfortunate.

> [1] yes, my early (by my standards) upgrade to bookworm was triggered
>     by an SSD failure, but SSD failures aren't a common occurence
>     compared to tailing a log file.

I don't think this is the right tradeoff calculus.

*With* the fix to #1016369 it is *possible* to write a reliable
program, but some buggy programs lose data more often.

*Without* the fix to #1016369 it is completely impossible to write a
reliable program.

Having said all that, I don't see why the *eof* indicator ought to
have to persist.  It is only the *errors* that mustn't get lost.  So I
think it might be possible for perl to have behaviour that would
make it possible to write reliable programs, while still helping buggy
programs fail less often.

But, even if that's possible, I'm not sure that it's a good idea.
Buggy programs that lose data only in exceptional error conditions are
a menace.  Much better to make such buggy programs malfunction all the
time - then they will be found and fixed.

Thanks for your attention.

Ian.

-- 
Ian Jackson <ijack...@chiark.greenend.org.uk>   These opinions are my own.  

Pronouns: they/he.  If I emailed you from @fyvzl.net or @evade.org.uk,
that is a private address which bypasses my fierce spamfilter.
