Are you thinking of regular pure parallelism, as with `parallel` or 
`monad-par`, or of something fancier like the work-stealing example in the 
pipes concurrency tutorial (which isn't itself appropriate here, I think, 
since the order of events is important)?


If you are thinking of pure parallelism, here is a flat-footed approach.  In 
choosing a batch size you are already surveying the whole producer, so you 
can't make that decision inside the pipeline. You can first freeze each 
batch to a list or something, say

   

     batched :: Monad m => Int -> Producer a m x -> Producer [a] m x
     batched n p = L.purely folds L.list (view (chunksOf n) p)
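
Spelled out as a compilable module, with the imports that definition needs 
(assuming the pipes, pipes-group, foldl, and lens-family-core packages; the 
toy `main` is just mine, for illustration):

```haskell
import Pipes
import Pipes.Group (folds, chunksOf)
import qualified Pipes.Prelude as P
import qualified Control.Foldl as L
import Lens.Family (view)

-- Freeze each run of n elements into a list before it leaves the pipeline.
batched :: Monad m => Int -> Producer a m x -> Producer [a] m x
batched n p = L.purely folds L.list (view (chunksOf n) p)

main :: IO ()
main = runEffect $ batched 3 (each [1 .. 10 :: Int]) >-> P.print
```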


then resume piping with something like 


    >>> :t \n f p -> batched n p >-> P.mapM (runParIO . parMap f) >-> P.concat      -- or P.map (runPar . parMap f)
    \n f p -> batched n p >-> P.mapM (runParIO . parMap f) >-> P.concat
      :: NFData c =>
         Int -> (a -> c) -> Producer a IO r -> Producer c IO r
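
Put together as a runnable sketch, using the pure `runPar . parMap f` 
variant from the comment (package names monad-par and deepseq assumed; the 
batch size, `(* 2)`, and `main` are just placeholders for real work):

```haskell
import Pipes
import Pipes.Group (folds, chunksOf)
import qualified Pipes.Prelude as P
import qualified Control.Foldl as L
import Lens.Family (view)
import Control.Monad.Par (runPar, parMap)
import Control.DeepSeq (NFData)

batched :: Monad m => Int -> Producer a m x -> Producer [a] m x
batched n p = L.purely folds L.list (view (chunksOf n) p)

-- Map f over each batch in parallel, then flatten the batches back out,
-- preserving the original order of events.
parMapPipe :: NFData b => Int -> (a -> b) -> Producer a IO r -> Producer b IO r
parMapPipe n f p = batched n p >-> P.map (runPar . parMap f) >-> P.concat

main :: IO ()
main = runEffect $ parMapPipe 4 (* 2) (each [1 .. 10 :: Int]) >-> P.print
```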


The equivalent could be done with `async`.  You'd have to think through 
whether waiting to accumulate a batch, processing it simultaneously, and 
then continuing would be an improvement on processing items as they come.
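
For comparison, a sketch of the `async` route (packages async and deepseq 
assumed): `evaluate . force` makes sure each result is reduced to normal 
form on its worker thread rather than lazily downstream.

```haskell
import Pipes
import Pipes.Group (folds, chunksOf)
import qualified Pipes.Prelude as P
import qualified Control.Foldl as L
import Lens.Family (view)
import Control.Concurrent.Async (mapConcurrently)
import Control.DeepSeq (NFData, force)
import Control.Exception (evaluate)

batched :: Monad m => Int -> Producer a m x -> Producer [a] m x
batched n p = L.purely folds L.list (view (chunksOf n) p)

-- One thread per element of the batch; mapConcurrently waits for them all,
-- so the order of results still matches the order of inputs.
asyncMapPipe :: NFData b => Int -> (a -> b) -> Producer a IO r -> Producer b IO r
asyncMapPipe n f p =
  batched n p >-> P.mapM (mapConcurrently (evaluate . force . f)) >-> P.concat
```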

-- 
You received this message because you are subscribed to the Google Groups 
"Haskell Pipes" group.