dm-list-haskell-c...@scs.stanford.edu writes:

> leaking file descriptors

...until they are garbage collected.  I tend to consider the OS fd
limit a design error - I see no reason for an arbitrary cap on open
files as long as there is plenty of memory around to store them.  But,
yes, it is a real concern.
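
It is easy to demonstrate, too.  Here is a minimal sketch
("input.txt" is just a placeholder path) that, under a typical
default limit of about 1024 descriptors, dies with "too many open
files" long before the GC finalizes any handles:

    -- Each readFile opens a descriptor right away, but the handle is
    -- only closed once the string is fully consumed or the GC
    -- finalizes it.  Since we hold on to every lazy string, the
    -- handles stay open and the loop exhausts the descriptor table.
    main :: IO ()
    main = do
      contents <- mapM (\_ -> readFile "input.txt") [1 .. 2000 :: Int]
      print (map (take 1) contents)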

> parsers that parse every possible input and never fail.  

I guess I need to look into how iteratees handle parse failure.
Generally, for me a parse failure means program failure - either the
data is corrupt, or the program is incorrect.
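
If it helps, here is roughly the shape I would expect failure
handling to take with chunk-at-a-time parsing.  The sketch uses
attoparsec's incremental interface as one concrete example of the
style (parseChunks is my own made-up helper, not a library function):

    import qualified Data.Attoparsec.ByteString.Char8 as A
    import qualified Data.ByteString.Char8 as B

    -- Feed the input chunk by chunk, and inspect the result for
    -- failure explicitly instead of throwing an exception.
    parseChunks :: A.Parser a -> [B.ByteString] -> Either String a
    parseChunks _ []     = Left "no input"
    parseChunks p (c:cs) = go (A.parse p c) cs
      where
        go (A.Fail _ _ err) _      = Left err           -- corrupt data
        go (A.Done _ r)     _      = Right r
        go (A.Partial k)    (x:xs) = go (k x) xs        -- next chunk
        go (A.Partial k)    []     = go (k B.empty) []  -- empty = EOF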

> Thus, for anything other than a toy program, your code actually has to
> be: 

>       readFoo path = bracket (openFile path ReadMode) hClose $
>               hGetContents >=> (\s -> return $! decodeFoo s)

No, I can't do that in general, because I want to process a Foo (which
typically is or contains a list of records) incrementally.  I can't
assume the file or its data are small enough to fit in memory.  It is
important that readFoo returns a structure that can be consumed lazily
- or perhaps it should be iteratees all the way up.
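
To make that concrete, here is a minimal hand-rolled sketch of the
idea: read bounded chunks inside withFile, carve out the complete
lines, and fold over them strictly, so neither the handle nor the
whole file outlives the call.  foldRecords is my own invention, and
the step function stands in for decoding a single record:

    {-# LANGUAGE BangPatterns #-}
    import qualified Data.ByteString.Char8 as B
    import Data.List (foldl')
    import System.IO

    foldRecords :: (acc -> B.ByteString -> acc) -> acc -> FilePath -> IO acc
    foldRecords step acc0 path = withFile path ReadMode (go acc0 B.empty)
      where
        go !acc leftover h = do
          chunk <- B.hGetSome h 32768
          if B.null chunk
            -- EOF: a trailing unterminated line is still a record.
            then return $ if B.null leftover then acc else step acc leftover
            else do
              let combined = leftover `B.append` chunk
                  ls       = B.lines combined
                  done     = B.last combined == '\n'
                  complete = if done then ls else init ls
                  rest     = if done then B.empty else last ls
              go (foldl' step acc complete) rest h

Counting records would then be foldRecords (\n _ -> n + 1) (0 :: Int)
somePath, running in constant space.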

> Which is still not guaranteed to work if Foo contains thunks, so then
> you end up having to write:

>       readFoo path = bracket (openFile path ReadMode) hClose $ \h -> do
>         s <- hGetContents h
>         let foo = decodeFoo s
>         deepseq foo $ return foo

I think this - or rather, having Foo's records be strict - is a good
idea anyway.  The previous discussion about frequency counts seems to
indicate that this applies equally well to iteratees.
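
For concreteness, a sketch of what a strict record might look like
(the type and field names are invented):

    {-# LANGUAGE DeriveGeneric, DeriveAnyClass #-}
    import Control.DeepSeq (NFData)
    import GHC.Generics (Generic)

    -- The bangs force both fields at construction time, so with flat
    -- field types like Int and Double, forcing a Record to WHNF
    -- evaluates it completely.  The NFData instance is derived
    -- generically for the deepseq variant.
    data Record = Record
      { recKey   :: !Int
      , recCount :: !Double
      } deriving (Show, Generic, NFData)

The list spine still needs forcing, of course, which is what the
deepseq - or the iteratee-style fold - buys you.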

Thanks for the elaborate answer.

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants
