#999: Amazon S3 backend
--------------------------+-------------------------------------------------
Reporter: zooko | Owner:
Type: enhancement | Status: new
Priority: major | Milestone: undecided
Component: code-storage | Version: 1.6.0
Keywords: gsoc | Launchpad_bug:
--------------------------+-------------------------------------------------
(originally I incorrectly posted this to #917)
The way to do it is to make a variant of
[source:src/allmydata/storage/server.py] which doesn't read from local
disk in its [source:src/allmydata/storage/server.py@4164#L359
_iter_share_files()] but instead reads the share files from its S3
bucket (it acts as an S3 client as well as a Tahoe-LAFS storage
server). Likewise, make variants of
[source:src/allmydata/storage/shares.py@3762 storage/shares.py],
[source:src/allmydata/storage/immutable.py@3871#L39 storage/immutable.py],
and [source:src/allmydata/storage/mutable.py@3815#L34 storage/mutable.py]
which write their data out to S3 instead of to the local filesystem.
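To make the reading side concrete, here is a minimal sketch of an
S3-backed share iterator using the boto3 library; the bucket name, key
layout, and function name are all assumptions for illustration, not
Tahoe-LAFS's actual API:

{{{
#!python
# Minimal sketch (assumed names): list and read share files under an S3
# prefix instead of a local storage directory. The key layout
# "shares/<storage_index>/<shnum>" is invented for illustration.
import boto3

def iter_share_files_s3(storage_index, bucket="tahoe-shares"):
    s3 = boto3.client("s3")
    prefix = "shares/%s/" % storage_index
    # list_objects_v2 returns at most 1000 keys per call; a real
    # implementation would paginate.
    listing = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    for obj in listing.get("Contents", []):
        shnum = int(obj["Key"].rsplit("/", 1)[1])
        body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"]
        yield shnum, body.read()
}}}
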
Probably one should start by abstracting out just the "does this go to
local disk, S3, Rackspace Cloud Files, etc." part from all the other
functionality in those four files... :-)
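As a strawman for that abstraction (all names here are invented, not an
existing Tahoe-LAFS interface), the idea is a small backend interface
that server.py talks to instead of touching the filesystem directly,
with one implementation per storage medium:

{{{
#!python
# Strawman backend seam (invented names). An S3Backend would implement
# the same two methods using calls like the sketch above; likewise a
# Rackspace Cloud Files backend, etc.
import os

class ShareBackend:
    def iter_shares(self, storage_index):
        """Yield (shnum, data) for each share held for storage_index."""
        raise NotImplementedError

    def put_share(self, storage_index, shnum, data):
        raise NotImplementedError

class LocalDiskBackend(ShareBackend):
    def __init__(self, storedir):
        self.storedir = storedir

    def _sharedir(self, storage_index):
        return os.path.join(self.storedir, "shares", storage_index)

    def iter_shares(self, storage_index):
        sharedir = self._sharedir(storage_index)
        for name in os.listdir(sharedir):
            with open(os.path.join(sharedir, name), "rb") as f:
                yield int(name), f.read()

    def put_share(self, storage_index, shnum, data):
        sharedir = self._sharedir(storage_index)
        os.makedirs(sharedir, exist_ok=True)
        with open(os.path.join(sharedir, str(shnum)), "wb") as f:
            f.write(data)
}}}

Once that seam exists, the S3, Cloud Files, and local-disk cases become
drop-in alternatives behind the same interface.
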
--
Ticket URL: <http://allmydata.org/trac/tahoe/ticket/999>
tahoe-lafs <http://allmydata.org>
secure decentralized file storage grid