> [meyering - Wed Dec 01 11:14:10 2004]:
>
> This is a bug report for perl from [EMAIL PROTECTED],
> generated with the help of perlbug 1.35 running under perl v5.9.1.
>
> -----------------------------------------------------------------
> Sometimes, closing a Perl file handle succeeds even though there have
> been write errors via that handle.  This bug appears to affect every
> version of perl from 5.005_04 to 5.9.1.  I didn't build/test versions
> earlier than 5.005_04, but the doio.c:do_close function in 5.003 looks
> like it has the same problem.
>
> Here's a quick demo that depends on two things:
>   - you have a /dev/full device (Linux, HPUX, MacOS-X/Darwin)
>   - 131072 is a multiple of your system's I/O buffer size
>
>   perl -e 'print "x" x 131072; close STDOUT or die "$!\n"' \
>     > /dev/full 2> /dev/null && echo fail || echo ok
>
> It prints `fail' because at least one write system call failed and
> Perl's `close' mistakenly succeeded.  It would print `ok' for any size
> output other than a multiple of the output buffer size.  On the Linux
> systems I've checked, that size is 4096.  On a ppc/Darwin (7.5.0)
> system I tested, the buffer size was 131072.
>
> This means that if your Perl script happens to write precisely the
> wrong number of bytes to a full or corrupted disk or to a closed file
> descriptor, even if you dutifully check for success when closing the
> file handle, the error will go undetected.
>
> Here's another demo.  Notice that it fails (as it should) for sizes
> 131071 and 131073, but not for output of size 131072.
>
>   $ perl -e 'print "x" x 131071; close STDOUT or die "$!\n"' > /dev/full
>   No space left on device
>   [Exit 28]
>   $ perl -e 'print "x" x 131072; close STDOUT or die "$!\n"' > /dev/full
>   $ perl -e 'print "x" x 131073; close STDOUT or die "$!\n"' > /dev/full
>   No space left on device
>   [Exit 28]
>
> After applying the patch below, the problematic case (with size being
> a multiple of 4096 in my case) now fails, as it should:
>
>   $ ./perl -e 'print "x" x 131072; close STDOUT or die "$!\n"' > /dev/full
>   No space left on device
>   [Exit 28]
>
> On a Solaris 5.9 system, which lacks /dev/full, I demonstrated
> the failure with the following small script that invokes perl with
> stdout closed.  All invocations of perl should evoke the
> `Bad file number' error.  But note that on some systems the
> closed-stdout test succeeds (perl detects the error) even though
> the write-to-/dev/full test fails.
>
>   #!/bin/sh
>   n=1024
>   for i in 1 2 3 4 5 6; do
>     echo $n
>     perl -e 'print "x" x '$n'; close STDOUT or die "$!\n"' >&-
>     n=`expr 2 \* $n`
>   done
>
> Here's the output:
>
>   1024
>   Bad file number
>   2048
>   Bad file number
>   4096
>   Bad file number
>   8192
>   Bad file number
>   16384
>   32768
>
> This patch fixes the bug and induces no new failures in any of
> these versions:
>
>   5.005_04
>   5.6.2
>   5.8.6
>   5.9.1
>
> 2004-11-30  Jim Meyering  <[EMAIL PROTECTED]>
>
>     * doio.c (Perl_io_close): Make the return value depend not only on
>     the success of the close itself, but also on whether the output
>     stream had a previous error.
>
> --- doio.c.orig	2004-11-29 19:41:05.747199832 +0100
> +++ doio.c	2004-12-01 08:39:53.106764936 +0100
> @@ -1014,11 +1014,14 @@ Perl_io_close(pTHX_ IO *io, bool not_imp
>  	    retval = TRUE;
>  	else {
>  	    if (IoOFP(io) && IoOFP(io) != IoIFP(io)) {	/* a socket */
> -		retval = (PerlIO_close(IoOFP(io)) != EOF);
> +		bool prev_err = PerlIO_error(IoOFP(io));
> +		retval = (PerlIO_close(IoOFP(io)) != EOF && !prev_err);
>  		PerlIO_close(IoIFP(io));	/* clear stdio, fd already closed */
>  	    }
> -	    else
> -		retval = (PerlIO_close(IoIFP(io)) != EOF);
> +	    else {
> +		bool prev_err = PerlIO_error(IoIFP(io));
> +		retval = (PerlIO_close(IoIFP(io)) != EOF && !prev_err);
> +	    }
>  	}
>  	IoOFP(io) = IoIFP(io) = Nullfp;
>      }
Does anyone have any objections to this patch?