> On 14-Sep-1999, Simon Peyton-Jones <[EMAIL PROTECTED]> wrote:
> > the semantics of hGetFileContents was just as if
> > the entire contents of the file was read instantaneously
Tue, 14 Sep 1999 20:41:40 +1000, Fergus Henderson <[EMAIL PROTECTED]> writes:
> Well, consider the case where the file being read is also being
> concurrently modified by another process.
Maybe the requirement could be restricted to our own process only. The
program already maintains the set of open files, so instead of throwing
an exception it could read the rest of the contents and close the old
file - I think that would suffice for many cases and fulfill the
requirement.
BTW, it uses C arrays without bounds checking, which may overflow: the
readLock and writeLock arrays in ghc-4.04/ghc/lib/std/cbits/getLock.c.
I reported this some time ago.
If hClose caused the rest of the contents to be read immediately, then
openFile + hGetContents + hClose would suffice to be predictable for
other processes. This would be equivalent to a non-lazy read.
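A user-level sketch of such a non-lazy read (the name readFileStrict is
mine, not a library function): force the whole lazily read string before
closing, so the handle is released by the time the function returns.

```haskell
import System.IO

-- A sketch of a strict readFile: read lazily as usual, but force the
-- entire contents (via length) before hClose, so no lazy tail keeps
-- the file open and other processes see it released immediately.
readFileStrict :: FilePath -> IO String
readFileStrict path = do
    h <- openFile path ReadMode
    s <- hGetContents h
    length s `seq` hClose h   -- forcing length reads the rest now
    return s
```

With hClose defined to slurp the rest, the explicit forcing would no
longer be needed.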
It would also be nice to try reading and closing some of the lazily
read files when opening another file fails for lack of handles. Opening
a lot of files is another case where the current readFile semantics bite.
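To illustrate the many-files case (readAllStrictly is a hypothetical
helper of mine): mapping lazy readFile over many paths can keep one
handle open per file until its contents happen to be demanded; forcing
each string as it is read keeps at most one handle open at a time.

```haskell
-- Reading many files without accumulating open handles: force each
-- file's contents before moving on, so the lazy read reaches EOF and
-- its handle is closed before the next file is opened.
readAllStrictly :: [FilePath] -> IO [String]
readAllStrictly = mapM readOne
  where
    readOne p = do
        s <- readFile p            -- lazy read
        length s `seq` return s    -- force it; the handle closes now
```

An unforced `mapM readFile paths` can run out of file descriptors on a
long enough list.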
But when we want to close a file without reading anything more, it
would be risky to depend on the garbage collector recognizing the
unread contents as garbage early enough. On the other hand, it may be
hard for the programmer to know whether it is safe to ask for the
unread rest to be discarded - hClose of a lazily read stream is
dangerous now. Ignore the problem, so that in rare cases the file would
be read unnecessarily? If one wants to *ensure* that the file is read
only up to a point, one can use hGetChar / hGetLine etc.
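For example, to read just the first line with explicit, non-lazy I/O
(firstLine is an illustrative name, not a library function):

```haskell
import System.IO

-- Reading only as much as is needed, explicitly: take the first line
-- and close the handle at once, so no lazy tail keeps the file open
-- and nothing past that line is ever read.
firstLine :: FilePath -> IO String
firstLine path = do
    h <- openFile path ReadMode
    l <- hGetLine h
    hClose h
    return l
```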
What if we are reading /dev/zero? Ignore the problem, as for
`length [0..]'?
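A lazily read infinite stream like /dev/zero stays harmless as long as
only a finite prefix is demanded, just as `take 5 [0..]` is fine while
`length [0..]` diverges. A sketch (firstBytes is my own name):

```haskell
import System.IO

-- Take a finite prefix of a possibly infinite stream, force it, and
-- close the handle; the unread (possibly endless) rest is never
-- touched. E.g. firstBytes 4 "/dev/zero" yields four NUL characters.
firstBytes :: Int -> FilePath -> IO String
firstBytes n path = do
    h <- openFile path ReadMode
    s <- hGetContents h
    let prefix = take n s
    length prefix `seq` hClose h   -- force only the prefix, then close
    return prefix
```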
--
__("< Marcin Kowalczyk * [EMAIL PROTECTED] http://kki.net.pl/qrczak/
\__/ GCS/M d- s+:-- a22 C+++>+++$ UL++>++++$ P+++ L++>++++$ E-
^^ W++ N+++ o? K? w(---) O? M- V? PS-- PE++ Y? PGP->+ t
QRCZAK 5? X- R tv-- b+>++ DI D- G+ e>++++ h! r--%>++ y-