Hi all,
using S3QL with some very big files, I end up with a lot of fileblock
objects on the S3 bucket, which are downloaded one by one when I try to
read the file from the mount point.
Everything works fine, but I'd like to get better download performance in
this scenario (S3backer is about 5-10 times faster than S3QL here).
The simplest idea I have seen is to add to S3QL a read-ahead mechanism
like the one S3backer has
("When a configurable number of blocks are read in order,
block cache worker threads are awoken to begin reading subsequent blocks
into the block cache").
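To make the idea concrete, here is a very rough sketch of the trigger side
I have in mind (everything below is hypothetical; the class and constant
names are mine, not S3QL's): count how many blocks of an inode have been
read in sequence and signal once a threshold is crossed.

    READAHEAD_TRIGGER = 4   # sequential block reads before prefetch kicks in

    class SequentialReadDetector:
        def __init__(self):
            # inode -> (last block number read, length of the sequential run)
            self._state = {}

        def note_read(self, inode, blockno):
            """Return True when read-ahead should start for this inode."""
            last, run = self._state.get(inode, (None, 0))
            run = run + 1 if last is not None and blockno == last + 1 else 1
            self._state[inode] = (blockno, run)
            return run >= READAHEAD_TRIGGER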
Looking at the code I can't see an easy way to add this feature:
in fs.py, the function _readwrite would have to start a thread which
downloads the next fileblocks, but then the downloaded fileblocks would
have to be added to the DB and to the cache, so that in the backend code
the read function can check whether a fileblock has already been
downloaded and use it instead of doing an S3 GET. A sketch of what I mean
by the worker side follows.
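Building on the detector sketch above, the worker could look roughly like
this. I am assuming a cache object whose get(inode, blockno) fetches the
block from S3 if it is not already cached and returns a file handle; that
is not necessarily the real S3QL interface, it is only to show the idea.
If the prefetch goes through the same cache path as normal reads, the read
code would not need to change at all: by the time the application asks
for the block it is already local.

    import threading

    def _prefetch(cache, inode, first_blockno, depth=8):
        # Fetch the next 'depth' blocks into the local cache.
        for blockno in range(first_blockno, first_blockno + depth):
            try:
                with cache.get(inode, blockno):
                    pass  # opening the cache entry forces the download
            except Exception:
                break  # e.g. past the end of the file; stop quietly

    def maybe_start_readahead(detector, cache, inode, blockno):
        # Called from the read path after every block read.
        if detector.note_read(inode, blockno):
            t = threading.Thread(target=_prefetch,
                                 args=(cache, inode, blockno + 1),
                                 daemon=True)
            t.start()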
Does anybody have a better idea on how to implement this, or a similar
requirement and an interest in joining forces on the development?
Thanks
Tommaso Massimi
Cynny Space