Hi,

I have a huge file (~40M rows) in a custom format where each line 
represents a single record, so I want to process it line by line.

This code runs very fast (~20 seconds):

    import Pipes
    import qualified Pipes.ByteString as BS

    main :: IO ()
    main = runEffect $ BS.stdin >-> BS.stdout


while this one runs much slower (over 2 minutes):

    import Pipes
    import Pipes.Group (folds)
    import qualified Pipes.ByteString as BS
    import qualified Control.Foldl as Fold
    import Lens.Family (view)
    import Data.ByteString (ByteString)

    bslines :: MonadIO m => Producer ByteString m ()
    bslines = Fold.purely folds Fold.mconcat . view BS.lines $ BS.stdin

    main :: IO ()
    main = runEffect $ bslines >-> BS.stdout

Why does this happen? And what would be the fastest way to consume a file 
line by line?
For comparison, consuming the same file line by line in Node.js takes ~40 
seconds; how can similar results be achieved in Haskell?
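For reference, a plain lazy-ByteString baseline without pipes might look like 
the sketch below (untested on my file, and I have not benchmarked it):

    import qualified Data.ByteString.Lazy.Char8 as BLC

    -- Read all of stdin lazily, split it into lines, and echo each
    -- line back; no per-line Producer machinery is involved.
    main :: IO ()
    main = BLC.getContents >>= mapM_ BLC.putStrLn . BLC.lines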

Regards,
Alexey.

-- 
You received this message because you are subscribed to the Google Groups 
"Haskell Pipes" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to haskell-pipes+unsubscr...@googlegroups.com.
To post to this group, send email to haskell-pipes@googlegroups.com.
