As part of my experiment with running an S3-based blobserver in AWS Lambda, 
I started by adding an implementation of the S3 "storage" that uses the AWS 
SDK.

The code is 
here: 
https://github.com/raff/camlistore/commit/731379ea1bc7c0498c0bdc13fba6eaef5bca494a

It currently lives alongside the standard "s3" storage (use the 
configuration key "s3aws" instead of "s3"); the standard storage is 
still in place.

It uses the standard AWS configuration mechanism, so you don't have to store 
the access key and secret in the server config file; they can be picked up 
from environment variables or the AWS config files.
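For example, the SDK's default credential chain reads the standard AWS 
environment variables, so something like this works without touching the 
server config (the key, secret, and region below are placeholders, not 
real values):

```shell
# Standard AWS environment variables, picked up automatically by the
# SDK's default credential chain (placeholder values only).
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_REGION=us-east-1
echo "$AWS_REGION"
```

The shared credentials file (~/.aws/credentials) works the same way.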

You still need to configure the bucket in the server config file (and, if 
you want, you can also store the access key and secret there). The 
configuration format is:

{
  "s3aws": "accesskey:secret:bucket/prefix:region"
}

So at a minimum you need:

{
  "s3aws": "::bucket"
}
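For completeness, a fuller (hypothetical) example that leaves the 
credentials to the AWS mechanism but sets a key prefix and a region, 
following the accesskey:secret:bucket/prefix:region format above (the 
bucket name and prefix are made up):

```
{
  "s3aws": "::mybucket/blobs:us-west-2"
}
```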

It seems to work fine (it passes all the tests, and I can start the server 
and upload/view files). I think there may be some network settings to tweak 
(I did see it "stall" for a while); I need to check what the default number 
of retries and the timeouts are for the AWS S3 client.

This, of course, requires the AWS Go SDK to be vendored (which I have done 
in a separate check-in in my fork, so that this commit shows only the 
changes specific to camlistore).

Next, I may try to build just the blob server and run it as a Lambda 
function, or I may see if I can use DynamoDB as a KV store for the indices.

Enjoy!

-- Raffaele

-- 
You received this message because you are subscribed to the Google Groups 
"Camlistore" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.