I'm not sure why Ben's post isn't visible in this group yet, though it was 
sent to the mailing list. Here is what he wrote:

-----------
Have a look at my recently-uploaded pipes-s3 package [1].

Cheers,

- Ben

[1] https://hackage.haskell.org/package/pipes-s3
-----------

This looks very useful. One question, though: shouldn't the HTTP manager 
be created only once, instead of being recreated for every request in the 
`fromS3'` request wrapper? Here is my code involving AWS S3 with conduit, 
which creates the manager once up front - should we take a similar 
approach with the pipes-s3 APIs? (A rough pipes sketch of the same 
pattern, and one of the per-chunk signing format Sal mentioned, follow 
after the code.)

{-# LANGUAGE OverloadedStrings #-}

import qualified Aws
import qualified Aws.Core as Aws
import qualified Aws.S3 as S3
import           Control.Concurrent.Async (async, waitCatch)
import           Control.Exception (displayException)
import           Control.Monad.Trans.Resource (runResourceT)
import           Data.Conduit.Binary (sourceFile)
import qualified Data.Text as T (pack)
import           Network.HTTP.Conduit (newManager, requestBodySource,
                                       tlsManagerSettings)
import           System.IO (IOMode (ReadMode), hFileSize, withFile)

main :: IO ()
main = do
  {- Set up AWS credentials and S3 configuration using the IA endpoint. -}
  Just creds <- Aws.loadCredentialsFromEnv
  let cfg   = Aws.Configuration Aws.Timestamp creds (Aws.defaultLog Aws.Error)
      s3cfg = S3.s3 Aws.HTTP S3.s3EndpointUsClassic False

  {- Create the HTTP manager once; it is reused for every request below. -}
  httpmgr <- newManager tlsManagerSettings

  -- A 100MB test file can be created on Linux with:
  --   dd if=/dev/urandom of=out bs=100M count=1 iflag=fullblock
  let file    = "out"
      inbytes = sourceFile file
  lenb <- withFile file ReadMode hFileSize
  req <- async $ runResourceT $
    Aws.pureAws cfg s3cfg httpmgr $
      (S3.putObject "put-your-test-bucket-here" "testbucket/test"
        (requestBodySource (fromIntegral lenb) inbytes))
        { S3.poMetadata =
            [ ("content-type", "text;charset=UTF-8")
            , ("content-length", T.pack $ show lenb)
            ]
          -- Automatically creates the bucket on IA if it does not exist,
          -- and uses the above metadata as the bucket's metadata.
        , S3.poAutoMakeBucket = True
        }
  reqRes <- waitCatch req
  case reqRes of
    Left e  -> print (displayException e)
    Right r -> print (S3.porVersionId r)
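
For what it's worth, here is the shape I had in mind with pipes. This is 
only a rough sketch using pipes-http directly, not pipes-s3's actual API 
(I haven't checked whether `fromS3'` can accept an externally-created 
manager), and the URLs are placeholders standing in for pre-signed S3 
GETs, since plain pipes-http does no AWS request signing. The point is 
just that the manager is created once and shared across requests:

import           Network.HTTP.Client (newManager, parseRequest, responseBody)
import           Network.HTTP.Client.TLS (tlsManagerSettings)
import           Pipes ((>->), runEffect)
import qualified Pipes.ByteString as PB
import           Pipes.HTTP (withHTTP)
import           System.IO (IOMode (WriteMode), withFile)

main :: IO ()
main = do
  -- Created once, outside the per-request code path.
  mgr <- newManager tlsManagerSettings
  let fetch url out = do
        req <- parseRequest url
        -- Every request reuses the same manager (and its connection pool).
        withHTTP req mgr $ \resp ->
          withFile out WriteMode $ \h ->
            runEffect $ responseBody resp >-> PB.toHandle h
  fetch "https://example-bucket.s3.amazonaws.com/one" "one.out"
  fetch "https://example-bucket.s3.amazonaws.com/two" "two.out"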
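
On the custom chunking from Sal's quoted message below: the AWS page he 
links specifies the aws-chunked wire format - each chunk is the payload 
size in hex, ";chunk-signature=<sig>", CRLF, the payload, CRLF, 
terminated by a zero-length chunk, with every signature chained off the 
previous one. Below is a rough pipes sketch of just that framing; 
`signChunk` is a placeholder, since the real SigV4 chunk signature also 
involves the derived signing key, credential scope, timestamp, and the 
chunk's SHA-256 hash:

{-# LANGUAGE OverloadedStrings #-}

import           Control.Monad.Trans.Class (lift)
import qualified Data.ByteString as B
import qualified Data.ByteString.Char8 as BC
import           Numeric (showHex)
import           Pipes (Producer, next, yield)

-- Placeholder: a real implementation would compute the SigV4
-- HMAC-SHA256 over the previous signature and the chunk's hash.
signChunk :: B.ByteString -> B.ByteString -> B.ByteString
signChunk _prevSig _chunk = BC.replicate 64 '0'

-- "<size-in-hex>;chunk-signature=<sig>\r\n<data>\r\n"
frameChunk :: B.ByteString -> B.ByteString -> B.ByteString
frameChunk sig dat = mconcat
  [ BC.pack (showHex (B.length dat) "")
  , ";chunk-signature=", sig, "\r\n"
  , dat, "\r\n"
  ]

-- Re-frame a byte stream as aws-chunked, seeding the signature chain
-- with the request's own signature and ending with the empty chunk.
awsChunked :: Monad m
           => B.ByteString
           -> Producer B.ByteString m ()
           -> Producer B.ByteString m ()
awsChunked = go
  where
    go prev p = do
      step <- lift (next p)
      case step of
        Left () ->
          yield (frameChunk (signChunk prev B.empty) B.empty)
        Right (dat, p') -> do
          let sig = signChunk prev dat
          yield (frameChunk sig dat)
          go sig p'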

On Monday, May 30, 2016 at 10:49:21 AM UTC-4, Sal wrote:
>
> Hello,
>
> I am planning to use pipes-http for AWS S3 put/get operations (involving 
> big binary objects). I noticed that the pipes-http `stream` api mentions 
> that the server must support chunked encoding. So, I looked up AWS 
> documentation 
> <http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html> 
> which mentions that they have a different way of doing chunking 
> (basically, adding a signature to every chunk). 
>
>  I also checked the `aws` and `amazonka-s3` packages - it seems to me 
> that they are not compatible with pipes-http because they use conduit. 
> Please correct me if I got this wrong. So it seems I must write my own 
> HTTP request/response handling using `pipes` for AWS S3 operations, 
> including custom chunking.
>
> If anyone has already done this before and could share tips, that would 
> be very helpful.
>
> Thanks.
>
