On Sep 04 2018, [email protected] wrote:
> The simplest idea I have seen is to add to S3QL a read-ahead
> mechanism like s3backer has ("When a configurable number of blocks are
> read in order, block cache worker threads are awoken to begin reading
> subsequent blocks into the block cache")
>
> Looking at the code I can't see an easy way to add this feature (in
> file fs.py, the function _readwrite would have to start a thread that
> downloads the next file blocks, but then the downloaded blocks would
> need to be added to the DB and cache; and in the backend code the
> read function would have to check whether a block has already been
> downloaded and use it instead of doing an S3 GET)

I think the best way to do this is to add the functionality to
block_cache.py, e.g. with a new, asynchronous download_block()
function. The get() function can then check if a download is already in
progress and wait for it to complete instead of starting a new download.
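To make the idea concrete, here is a minimal sketch of what such a cache
layer could look like. This is illustrative only: the names (BlockCache,
download_block, get, the fetch callable) are assumptions loosely modeled
on block_cache.py, not the actual s3ql API.

```python
# Hypothetical sketch: asynchronous read-ahead for a block cache.
# download_block() starts a background fetch; get() reuses an in-flight
# download instead of issuing a second one. Not real s3ql code.
import threading
from concurrent.futures import ThreadPoolExecutor


class BlockCache:
    def __init__(self, fetch, max_workers=4):
        self._fetch = fetch       # callable: block_no -> bytes (e.g. an S3 GET)
        self._cache = {}          # block_no -> bytes, completed downloads
        self._in_progress = {}    # block_no -> Future, downloads under way
        self._lock = threading.Lock()
        self._pool = ThreadPoolExecutor(max_workers=max_workers)

    def download_block(self, block_no):
        """Start an asynchronous download unless one is cached or running."""
        with self._lock:
            if block_no in self._cache or block_no in self._in_progress:
                return
            fut = self._pool.submit(self._fetch, block_no)
            self._in_progress[block_no] = fut
            fut.add_done_callback(lambda f, n=block_no: self._finish(n, f))

    def _finish(self, block_no, fut):
        # Move the finished download from in-progress into the cache.
        with self._lock:
            self._cache[block_no] = fut.result()
            del self._in_progress[block_no]

    def get(self, block_no, readahead=2):
        """Return block data, waiting on an in-flight download if one exists."""
        # Trigger read-ahead of the following blocks in the background.
        for n in range(block_no + 1, block_no + 1 + readahead):
            self.download_block(n)
        self.download_block(block_no)
        with self._lock:
            if block_no in self._cache:
                return self._cache[block_no]
            fut = self._in_progress.get(block_no)
        if fut is not None:
            return fut.result()   # block until the running download completes
        with self._lock:          # callback already moved it into the cache
            return self._cache[block_no]
```

Sequential reads through get() then warm the cache ahead of the reader,
while concurrent callers for the same block share one download instead
of each hitting the backend.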


Best,
-Nikolaus

-- 
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             »Time flies like an arrow, fruit flies like a Banana.«

