On 08/07/2020 12:02, Ivan Shapovalov wrote:

> Is it correct that s3ql will always download the whole block into the
> local cache upon a read that intersects with this block?

A block is either a whole file or, if the file exceeds the maximum block size, part of a file. A block never contains more than one file, and blocks are variable in size, up to that maximum.
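To make that layout concrete, here is a minimal Python sketch of the mapping; it is only a simplified model for illustration (the function name and the 10 MiB figure are mine, not S3QL's internal API or its default):

    def blocks_for_file(file_size, max_block_size):
        # Sizes of the blocks a single file would occupy: as many
        # full-size blocks as fit, plus one smaller tail block.
        full, rest = divmod(file_size, max_block_size)
        return [max_block_size] * full + ([rest] if rest else [])

    # A 25 MiB file with a 10 MiB maximum block size gives three blocks:
    # two full 10 MiB blocks and one 5 MiB tail block.
    print(blocks_for_file(25 * 2**20, 10 * 2**20))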

> If true, then how scalable is s3ql with respect to number of blocks in
> the filesystem? That is, how far can I realistically reduce the block
> size if my dataset is, say, 10-20 TB?
>
> Basically, I'm trying to optimize for random reads.

I don't think that reducing the maximum block size will improve things for you.
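For a sense of scale, here is a back-of-the-envelope block count for a dataset of that size, ignoring deduplication and assuming most files are at least as large as the block size (the numbers are purely illustrative):

    # Rough upper bound on block count: dataset size / maximum block size.
    dataset = 20 * 2**40                      # 20 TiB
    for size_mib in (1, 4, 10):
        blocks = dataset // (size_mib * 2**20)
        print(f"{size_mib:>2} MiB blocks -> ~{blocks:,} blocks")

With 1 MiB blocks that is on the order of twenty million blocks, and each block corresponds to a separate storage object plus metadata, so object count and metadata grow quickly as the block size shrinks.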

Regards
Cliff.


--
Cliff Stanford
London:    +44 20 0222 1666               Swansea: +44 1792 469666
Spain:     +34  603 777 666               Estonia: +372  5308 9666
UK Mobile: +44 7973 616 666
