pete-expires-20070513:
> When using readFile to process a large number of files, I am exceeding
> the resource limits for the maximum number of open file descriptors on
> my system.  How can I enhance my program to deal with this situation
> without making significant changes?

Read in data strictly, and there are two obvious ways to do that:

    -- Via strings:

    readFileStrict f = do
        s <- readFile f
        -- forcing the length reads the file to the end,
        -- so its handle can be closed promptly
        length s `seq` return s

    -- Via ByteStrings (needs Control.Monad (liftM) and
    -- the Data.ByteString module from the bytestring package):
    readFileStrictBS     = Data.ByteString.readFile
    readFileStrictString = liftM Data.ByteString.unpack Data.ByteString.readFile
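Either variant lets you map over a large list of files without running out of descriptors, because each file is consumed in full (and its handle closed) before the next one is opened. A minimal self-contained sketch of that pattern, using the String-based reader above (the file names here are made up for the example):

    import Control.Monad (forM, forM_)

    readFileStrict :: FilePath -> IO String
    readFileStrict f = do
        s <- readFile f
        length s `seq` return s

    main :: IO ()
    main = do
        -- create a few sample files so the example runs standalone
        let names = [ "sample" ++ show i ++ ".txt" | i <- [1 .. 5 :: Int] ]
        forM_ names $ \n -> writeFile n (n ++ " contents\n")
        -- read them all strictly; no handle outlives its iteration
        sizes <- forM names $ \n -> do
            s <- readFileStrict n
            return (length s)
        print (sum sizes)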

If you're reading more than, say, 100k of data, I'd use strict
ByteStrings without hesitation. More than 10M, and I'd use lazy
ByteStrings.
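The difference in a small sketch (the file name is hypothetical, and the
1000-byte file is just a stand-in for real data): a strict ByteString
readFile slurps the whole file and closes the handle immediately, while
the lazy variant reads in chunks and keeps the handle open until the
contents are fully consumed.

    import qualified Data.ByteString as B        -- strict: one contiguous chunk
    import qualified Data.ByteString.Lazy as L   -- lazy: chunks read on demand

    main :: IO ()
    main = do
        B.writeFile "demo.bin" (B.replicate 1000 0)
        s <- B.readFile "demo.bin"   -- handle closed as soon as this returns
        l <- L.readFile "demo.bin"   -- handle open until l is fully forced
        print (B.length s, L.length l)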

-- Don
_______________________________________________
Haskell-Cafe mailing list
[email protected]
http://www.haskell.org/mailman/listinfo/haskell-cafe