There is a simple problem on Project Euler 
- http://projecteuler.net/problem=42 - which boils down to:

import Data.Char
import Data.Word
import Control.Applicative
import qualified Data.ByteString as B
import Data.Attoparsec.ByteString

main = do
    result <- solution <$> parseFile <$> B.readFile "euler42.txt"
    print result
    
solution :: [B.ByteString] -> Int
solution =
    length . filter isTriangle . fmap wordValue
    where 
        -- accumulate in Int: summing Word8s would silently wrap past 255
        wordValue = B.foldl' (\acc w -> acc + fromIntegral w - 64) (0 :: Int)
        isTriangle = const True {- some secret function -}
    
parseFile :: B.ByteString -> [B.ByteString]
parseFile = either (const []) id .
    parseOnly (wordParser `sepBy1` word8 (char ','))
  where
    wordParser = word8 (char '"') *> takeWhile1 (/= char '"') <* word8 (char '"')
    char = fromIntegral . ord

Now the problem: given a large input file (150 MB), this solution runs out 
of memory.

It seems like pipes is designed to solve exactly this kind of problem in 
the same compositional style, so the question is: what is the correct way 
to "pipify" this small program?

As I understand it, there must be:
1) a Producer of ByteStrings, based on pipes-parse and pipes-attoparsec
2) some analogues of "map", "filter" and "length" for pipe streams
3) a way to extract the result ("unpipify")
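
For 1), my guess is something along these lines, using parsed from 
pipes-attoparsec and fromHandle from pipes-bytestring (the names and 
signatures are assumed from skimming the docs, so please correct me). 
Since the parser would be re-run against the stream rather than wrapped 
in sepBy1, it has to consume the trailing comma itself:

import Pipes
import qualified Pipes.ByteString as PB
import qualified Pipes.Attoparsec as PA
import qualified Data.ByteString as B
import Data.Attoparsec.ByteString (Parser, word8, takeWhile1, option)
import Control.Applicative
import Control.Monad (void)
import System.IO (Handle)

-- Yield one parsed word at a time instead of accumulating the whole
-- list in memory. `void` discards parsed's leftover-input/parse-error
-- result, mirroring `either (const []) id` above.
wordProducer :: Handle -> Producer B.ByteString IO ()
wordProducer h = void (PA.parsed streamedWord (PB.fromHandle h))
  where
    streamedWord :: Parser B.ByteString
    streamedWord = quotedWord <* option 0 (word8 comma)
    quotedWord   = word8 quote *> takeWhile1 (/= quote) <* word8 quote
    quote = 34  -- '"'
    comma = 44  -- ','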

Part 2) seems easy. We should get something like

solution :: Producer B.ByteString IO () -> IO Int
solution p =
    P.length (p >-> P.map wordValue >-> P.filter isTriangle)

(if I read Pipes.Prelude correctly, P.length is a fold, so it already 
collapses the stream down to IO Int)

but how to stream the output of the attoparsec parser into that pipeline 
is not so obvious. Could someone help me?
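
To make the question concrete, here is how I imagine everything snapping 
together end to end, assuming wordParser, wordValue and isTriangle are 
lifted out to the top level (the PA.parsed and PB.fromHandle calls are my 
guesses at the pipes-attoparsec and pipes-bytestring APIs):

import Pipes
import qualified Pipes.Prelude as P
import qualified Pipes.ByteString as PB
import qualified Pipes.Attoparsec as PA
import Control.Monad (void)
import System.IO (withFile, IOMode (ReadMode))

main :: IO ()
main = withFile "euler42.txt" ReadMode $ \h -> do
    -- PA.parsed re-runs wordParser over the byte stream; void throws
    -- away the leftover-input/parse-error result, like
    -- `either (const []) id` in the list-based version.
    n <- P.length (void (PA.parsed wordParser (PB.fromHandle h))
                       >-> P.map wordValue
                       >-> P.filter isTriangle)
    print n

Is something like this the intended usage, or is pipes-parse supposed to 
sit in between?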

-- 
You received this message because you are subscribed to the Google Groups "Haskell Pipes" group.