> On Mon, 2002-03-18 at 13:51, Simon Marlow wrote:
> > Ok, here's my explanation. [...]
>
> I see...
>
> Yet, it doesn't explain why the problem disappears if you close stdin
> before calling executeFile, like this:
>
> callIO :: (ProcessStatus -> String)
>        -> IO ()
>        -> IO ()
> callIO fm io = do
>   maybepid <- forkProcess
>   case maybepid of
>     Nothing ->
>       hClose stdin >>         -- ** here **
>       io >>                   -- executeFile
>       exitWith ExitSuccess
>     Just pid -> do
>       (Just ps) <- getProcessStatus True True pid
>       if ps == Exited ExitSuccess
>         then return ()
>         else failIO (fm ps)
>
> Since evaluation of the arguments of executeFile causes stdin to be
> read lazily, shouldn't closing stdin cause them to be cut short? It
> looks like closing stdin caused it to be read completely first.
> Something like this isn't mentioned in the Library Report.
The report says:
Once a semi-closed handle becomes closed, the contents of the
associated stream becomes fixed, and is the list of those items
which were successfully read from that handle.
so once you close stdin, the characters in the buffer are retained but
any further reading is prevented. In your example, you should be able
to observe slightly different output for the argument which straddles
the buffer boundary: the argument will be truncated (if you use
/bin/echo as the process being spawned you'll see this - I've tried it
and it does indeed behave like this).
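The fixing behaviour can be seen without spawning a process at all. Here is a small standalone experiment (my sketch, not code from this thread; it assumes GHC reads character-at-a-time under NoBuffering, so the stream gets cut off near the point we had demanded):

```haskell
import System.IO
import Control.Exception (evaluate)

main :: IO ()
main = do
  writeFile "lazy-demo.txt" (replicate 1000 'x')
  h <- openFile "lazy-demo.txt" ReadMode
  hSetBuffering h NoBuffering
  s <- hGetContents h                 -- h is now semi-closed
  _ <- evaluate (length (take 10 s))  -- demand only the first 10 chars
  hClose h                            -- the contents of s are now fixed
  putStrLn ("length seen: " ++ show (length s))
  putStrLn ("truncated: " ++ show (length s < 1000))
```

The exact truncation point depends on the buffering mode, which is exactly the "observable evaluation order" problem discussed below.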
> But rejecting lazy IO would throw away a lot of elegance. I wonder if
> it would be possible to allow it in many cases, while guaranteeing
> referential transparency.
Perhaps. My major gripe with lazy I/O is that, as currently specified,
it allows the programmer to observe the evaluation order used by the
compiler: eg. since an hClose fixes the stream, you can observe whether
the compiler's strictness analyser figured out that you were going to
evaluate more of the stream later and caused it to be evaluated early.
The String returned by hGetContents is simply not a pure value.
The spec could perhaps *require* that it was a pure value, so that the
file contents are snapshotted at the time of the hGetContents and you
always get the same result regardless of subsequent or concurrent I/O
operations. This could perhaps be implemented with copy-on-write if the
OS supports it, but I don't know whether any OSs actually do.
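Short of copy-on-write, the snapshot semantics can be had by brute force: read the whole file strictly before returning, so that later writes can't show through. A sketch (the name readFileSnapshot is mine):

```haskell
import System.IO

-- Force the entire stream before closing, so the String is a genuinely
-- pure value: subsequent writes to the file cannot affect it.
readFileSnapshot :: FilePath -> IO String
readFileSnapshot path = do
  h <- openFile path ReadMode
  s <- hGetContents h
  length s `seq` hClose h   -- evaluate everything, then close
  return s

main :: IO ()
main = do
  writeFile "snap.txt" "before"
  s <- readFileSnapshot "snap.txt"
  writeFile "snap.txt" "changed"  -- overwrite after the snapshot
  putStrLn s                      -- still the old contents
```

Of course this gives up the laziness entirely, which is the whole point of the exercise; copy-on-write would keep both.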
> For files, this could be accomplished by locking them, making use of
> OS features (such as lockf(2)). You could define an IO action like
> "mapIn :: FilePath -> IO String", which would bring a file to the
> realm of the process, making it inaccessible to the outside world
> (other processes).
yes, that's another possibility. But you have to be careful that the
current process also can't interfere with the lazy I/O.
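A sketch of such a mapIn (the name comes from the quoted message; I'm assuming GHC's GHC.IO.Handle.Lock module, which uses flock-style OS locks, rather than lockf itself):

```haskell
import GHC.IO.Handle.Lock (hLock, LockMode (ExclusiveLock))
import System.IO

-- Hypothetical mapIn: take an exclusive advisory lock for the lifetime
-- of the handle, then return the lazy contents.  Note the caveat above:
-- an advisory lock keeps out other cooperating processes, but nothing
-- here stops this process from reopening the same file itself.
mapIn :: FilePath -> IO String
mapIn path = do
  h <- openFile path ReadMode
  hLock h ExclusiveLock   -- held until the handle is closed
  hGetContents h

main :: IO ()
main = do
  writeFile "mapped.txt" "hello from mapIn"
  s <- mapIn "mapped.txt"
  putStrLn s
```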
> Likewise, it might be possible to define a set of process operations
> which avoid mistakes like mine.
Indeed, there ought at least to be a cross-platform way to spawn a
process. The spawn operation needs to be "atomic" with respect to
existing lazy I/O streams and concurrent threads, and the spawned
process should continue concurrently with the current process (unlike
System.system), but there should be a way to wait for it to complete in
a non-blocking way in a multithreaded program.
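For comparison, here is roughly what such an operation might look like using the later System.Process interface (an assumption on my part; it is not part of the libraries being discussed here). With the threaded RTS it matches the wish list: the child runs concurrently, and waitForProcess blocks only the calling Haskell thread:

```haskell
import System.Process (createProcess, proc, waitForProcess)

main :: IO ()
main = do
  -- spawn /bin/echo concurrently with this process
  (_, _, _, ph) <- createProcess (proc "echo" ["hello from the child"])
  -- ... the parent is free to do other work here ...
  status <- waitForProcess ph   -- with -threaded, only this thread blocks
  print status
```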
Cheers,
Simon
_______________________________________________
Glasgow-haskell-bugs mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs