While working with a binary file format, I started out with this naive code:

import qualified Pipes.Parse as P
import qualified Pipes.Binary as P
import qualified Pipes.ByteString as PB
import qualified Data.Text as T
import Data.Text.Encoding (decodeLatin1)
import qualified Data.ByteString as BS
import Data.Binary.Get (getWord8, getWord32le)
import Data.Bits (rotateR)

entryParser tableStart = P.decodeGet $ (,,,)
    <$> decodeFilename
    <*> fmap (tableStart +) getWord32le
    <*> getWord32le
    <*> getWord32le

decodeFilename = T.unpack . decodeLatin1 . BS.pack <$> go where
    go = do
        c <- (`rotateR` 3) <$> getWord8
        if c /= 0 then (c :) <$> go else pure [] -- terminate on (and consume the) 0
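For comparison, the explicit recursion can also be pushed into binary itself: Get ships getLazyByteStringNul, which reads up to and consumes a terminating 0. Since rotateR maps 0 to 0 and nothing else to 0, splitting on the raw byte is equivalent to splitting on the rotated one, so decodeFilename reduces to a sketch like this (just an illustration, not tested against the real file format):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import qualified Data.ByteString.Lazy as BL
import qualified Data.Text as T
import Data.Binary.Get (Get, getLazyByteStringNul)
import Data.Bits (rotateR)
import Data.Text.Encoding (decodeLatin1)

-- Read all bytes up to (and consuming) the NUL terminator, then
-- un-rotate each byte and decode as Latin-1.
decodeFilename :: Get String
decodeFilename =
    T.unpack . decodeLatin1 . BL.toStrict . BL.map (`rotateR` 3)
        <$> getLazyByteStringNul
```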

While it does work, I'm unhappy with decodeFilename as it basically 
implements a combination of map and span/fold with explicit recursion. But 
the underlying ByteString isn't available inside the Get monad without 
consuming it, so using e.g. BS.span seems out of the question. Let's see if 
lenses can come to the rescue:

entryParser tableStart = do
    -- zoom comes from Lens.Family2.State.Strict (lens-family-core)
    nameChunks <- zoom (PB.span (/= 0)) P.drawAll
    PB.drawByte -- draw the terminating 0
    let fileName = T.unpack . decodeLatin1
                 . BS.map (`rotateR` 3) . BS.concat $ nameChunks
    P.decodeGet $ (,,,) fileName
        <$> fmap (tableStart +) getWord32le
        <*> getWord32le
        <*> getWord32le

I like this better - map and span aren't implemented manually anymore - but 
at the same time I was hoping for more. It doesn't seem right to work 
directly on ByteStrings (i.e. BS.map instead of PB.map, and text instead of 
pipes-text), and the combination of drawAll and concat is a bit awkward, 
especially since drawAll is only for testing (even though all the tutorials 
use it :) ). The latter point might be addressed by giving pipes-bytestring 
a folding function similar to P.foldAll, but even so I wonder if there's a 
more idiomatic way to do this?
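For what it's worth, Pipes.Parse already exports a fold in that spirit: foldAll :: Monad m => (x -> a -> x) -> x -> (x -> b) -> Parser a m b. Combined with a ByteString Builder it can replace the drawAll/concat pair without materialising the list of chunks. A sketch of that idea (the Builder accumulation and the drawName name are mine, not from the thread):

```haskell
{-# LANGUAGE RankNTypes #-}
import qualified Pipes.Parse as P
import qualified Pipes.ByteString as PB
import qualified Data.ByteString.Builder as BB
import qualified Data.ByteString.Lazy as BL
import Data.ByteString (ByteString)
import Lens.Family2.State.Strict (zoom)

-- Fold the chunks before the 0 into a Builder as they are drawn,
-- producing one lazy ByteString at the end; the terminating 0 is
-- left in the stream for the caller to draw.
drawName :: Monad m => P.Parser ByteString m BL.ByteString
drawName =
    zoom (PB.span (/= 0)) $
        P.foldAll (\acc chunk -> acc <> BB.byteString chunk)
                  mempty
                  BB.toLazyByteString
```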

-- 
You received this message because you are subscribed to the Google Groups 
"Haskell Pipes" group.