Re: Yet another weakly defined bug report

2003-02-17 Thread Simon Marlow
> File reading is not a pure operation: running out
> of file descriptors is a good counter-example.
>
> How does this differ from running out of memory whilst trying
> to evaluate something?

You're absolutely right, of course.

I think the point is that by using implicit memory allocation we've
accepted that running out of memory is going to be a failure condition
that we cannot reasonably avoid or prepare for.  This is a bad thing -
but we deem the tradeoff to be worth it, because on the other hand we
don't suffer from memory allocation bugs of the same kind that you get
in C programs.

I/O is similar in some ways: lazy I/O might lead to file descriptor
leaks, which are similar to memory leaks but aren't something that
Haskell programmers expect to have to worry about.  To fix these
problems you have to think carefully about strictness and demand in your
program.  For memory we have heap profilers to help out, but we don't
have I/O descriptor profilers for lazy I/O!
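To make the leak concrete, here is a minimal sketch (the file name is made up): `hGetContents` returns immediately, and the descriptor only gets closed once the whole stream has been demanded, so a program that holds many unforced results holds that many descriptors open at once. A strict alternative forces the contents inside `withFile`, so the handle is closed deterministically.

```haskell
import System.IO

main :: IO ()
main = do
  -- set up a small file to read (name is hypothetical)
  writeFile "demo.txt" "hello lazy IO\n"

  -- lazy: hGetContents returns at once; the descriptor stays open
  -- until the string is fully demanded
  h <- openFile "demo.txt" ReadMode
  s <- hGetContents h
  print (length s)          -- forcing the length drains the stream

  -- strict alternative: withFile closes the handle deterministically,
  -- provided the contents are forced before the action returns
  n <- withFile "demo.txt" ReadMode $ \h' -> do
         s' <- hGetContents h'
         length s' `seq` return (length s')
  print n
```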

Also, by using lazy I/O one elects to ignore I/O errors - which I think
is not something we should be encouraging.
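For instance, the Haskell Report specifies that once a handle is semi-closed, an error (or an early close) simply truncates the lazy stream at whatever has been read so far, with no exception at the point of use. A small sketch of that silent truncation (file name is made up):

```haskell
import System.IO

main :: IO ()
main = do
  writeFile "demo2.txt" "one\ntwo\nthree\n"
  h <- openFile "demo2.txt" ReadMode
  s <- hGetContents h   -- handle becomes semi-closed; nothing read yet
  hClose h              -- closing early quietly truncates the stream
  print (length s)      -- no exception is raised: the data is simply gone
```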

Cheers,
Simon
___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe

Re: Yet another weakly defined bug report

2003-02-17 Thread Malcolm Wallace
Simon Marlow [EMAIL PROTECTED] writes:
>    To fix these
> problems you have to think carefully about strictness and demand in your
> program.  For memory we have heap profilers to help out, but we don't
> have I/O descriptor profilers for lazy I/O!

Surely I/O descriptors are just a type of heap-value, and as such
can be profiled in the normal way (producer, retainer, etc), just
like any other heap construction?

Regards,
Malcolm