I've been following this discussion with some interest.  It seems to
me an important underlying problem is being hinted at, but has never
been made explicit:

* When we fork(), we lose sharing.  *Any* lazy computation that is
  shared by both sides of the fork is going to penalize you, sometimes
  in very surprising ways: each process ends up forcing its own copy
  of every unevaluated thunk.
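To make that concrete, here's a minimal sketch of the lost-sharing
problem (POSIX-only, and it assumes the unix package's
System.Posix.Process; the thunk name `expensive` is just for
illustration):

```haskell
-- Before the fork, `expensive` is a single unevaluated thunk.  After
-- forkProcess, parent and child each have their own copy of the heap,
-- so forcing it in one process does nothing for the other: the sum is
-- computed twice, once per process.
import Control.Monad (void)
import System.Posix.Process (forkProcess, getProcessStatus)

expensive :: Integer
expensive = sum [1 .. 10 ^ 7]   -- still a thunk at fork time

main :: IO ()
main = do
  pid <- forkProcess (print expensive)   -- child forces its own copy
  print expensive                        -- parent forces it all over again
  void (getProcessStatus True False pid) -- reap the child
```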

Thus, raw access to fork is guaranteed to be the wrong thing for
nearly everybody all the time.  It's probably worth noting this
prominently next to any and all documentation for fork, and next to
its code.  Why?  Because use of fork is part of the commonly accepted
idiom for running one program from within another.  It's likely
programmers who've done this in other languages will go looking for
"fork" rather than some nicer, higher-level functionality (POpen?)
that has seqs in all the right places and actually does what they
want.
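For comparison, here's roughly what that nicer, higher-level thing
looks like today (a sketch assuming the process library's
System.Process, which does the fork/exec dance internally and hands
back the child's output as an ordinary string):

```haskell
-- readProcess runs a program, feeds it the given stdin, and waits for
-- it to finish -- no raw fork, no thunks straddling the process
-- boundary for the caller to worry about.
import System.Process (readProcess)

main :: IO ()
main = do
  out <- readProcess "echo" ["hello"] ""  -- program, args, stdin
  putStr out                              -- prints "hello"
```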

That said, I'd love to have lazy I/O that actually works right, if
only because it actually *does* do the right thing for the 95% of
programs out there which *aren't* doing fancy I/O.  I say this having
written programs which use lazy I/O to process files much larger than
the total virtual memory on my machine (so mmap-ing regular files to
snapshot their contents isn't going to be good enough for me, even if
it works for smaller files).
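Here's the kind of thing I mean, boiled down (file names are just for
the demo; the point is the single streaming pass plus the one `seq`
that has to sit in exactly the right place):

```haskell
import System.IO (IOMode (ReadMode), hGetContents, withFile)

-- Count lines in constant space: hGetContents reads the file lazily,
-- and the single `length . lines` pass never retains the front of the
-- string, so a file larger than RAM is fine.
countLines :: FilePath -> IO Int
countLines path =
  withFile path ReadMode $ \h -> do
    s <- hGetContents h
    let n = length (lines s)
    n `seq` return n   -- force the count before withFile closes the handle

main :: IO ()
main = do
  writeFile "demo.txt" "one\ntwo\nthree\n"  -- tiny stand-in file
  countLines "demo.txt" >>= print           -- prints 3
```

Drop that `seq` and the handle closes before the lazy string is ever
read -- which is precisely the "seqs in all the right places" problem.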

It seems to me that part of the problem is that lazy I/O results in
concurrency, and concurrency is hard.  That's all the more true
because the lazy I/O routines don't say "WARNING!  CONCURRENCY" all
over the place.  Does this mean we should make semi-closed handles untouchable?
Should there be rules that turn "lines . getContents" into something
vaguely sensible and non-lazy?  Should we say something sensible about
the behavior of anything concurrent-ish across fork?  Simon, what
would it take to make you stop worrying and love lazy I/O? :-)
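The semi-closed-handle trap in miniature, for anyone who hasn't
stepped on it yet (a sketch with GHC's System.IO; exactly how much of
the string survives the early hClose is implementation-dependent):

```haskell
import System.IO

main :: IO ()
main = do
  writeFile "in.txt" "alpha\nbeta\n"

  -- The trap: once hGetContents owns the handle it is "semi-closed",
  -- and hClose silently truncates the not-yet-read tail of the string.
  h <- openFile "in.txt" ReadMode
  s <- hGetContents h
  hClose h
  print (length s)        -- quite possibly 0, not 11

  -- One "seq in the right place": force the string before closing.
  h2 <- openFile "in.txt" ReadMode
  s2 <- hGetContents h2
  length s2 `seq` hClose h2
  print (length s2)       -- 11, reliably
```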

-Jan-Willem Maessen
_______________________________________________
Glasgow-haskell-bugs mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs