On Wednesday, April 14, 2021 at 5:36:30 PM UTC+10 [email protected] wrote:

>
> Yes, all of these would be possible and would probably be faster. I think
> option (2) would be the best one.
>
> Pull requests are welcome :-). 
>
>
I had a funny feeling that might be the answer... and in terms of utility
and design, ISTM that "add a special s3ql command to do a 'tree copy' --
it would know exactly which blocks it needed and download them en masse
while restoring files (and would need a lot of cache, possibly even a
temporary cache drive)" is a good plan.

I am not at all sure I am up for the (probable) deep dive required, but if
I were to look at this, could you give some suggested starting points? My
very naive approach (not knowing the internals at all) would be to build a
list of all required blocks, do some kind of topological sort, then start
multiple download threads. As each block is downloaded, determine whether a
new file can be copied yet, and if so, copy it, then release any blocks
that are no longer needed.

...like I said, naive, and highly dependent on internals... and maybe it
should use some kind of private mount to avoid horror.
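
To make that a bit more concrete, here is a very rough Python sketch of the
loop I am imagining. The download_block() and assemble_file() helpers are
made-up placeholders (I don't know the real s3ql internals), and I have
skipped the topo-sort part, since individual blocks presumably don't depend
on each other and the thread pool can pull them in any order:

#!/usr/bin/env python3
# Very rough sketch: download blocks in parallel, copy each file as
# soon as all of its blocks are in the cache, and drop cached blocks
# once no remaining file needs them.  download_block() and
# assemble_file() are made-up placeholders, NOT real s3ql APIs.

from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor, as_completed
import threading

def download_block(block_id):
    """Placeholder: fetch one block from the backend, return its data."""
    raise NotImplementedError

def assemble_file(path, block_ids, cache):
    """Placeholder: write the file at `path` from its cached blocks."""
    raise NotImplementedError

def tree_copy(file_blocks, n_threads=8):
    # file_blocks: {dest_path: [block_id, ...]}; blocks may be shared
    # between files (dedup), so keep a reference count per block.
    refcount = defaultdict(int)
    block_to_files = defaultdict(list)
    for path, blocks in file_blocks.items():
        for b in blocks:
            refcount[b] += 1
            block_to_files[b].append(path)

    missing = {path: set(blocks) for path, blocks in file_blocks.items()}
    cache = {}              # block_id -> data (or a temp cache file)
    lock = threading.Lock()

    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        futures = {pool.submit(download_block, b): b for b in refcount}
        for fut in as_completed(futures):
            b = futures[fut]
            with lock:
                cache[b] = fut.result()
                # Did this block complete any files?
                for path in block_to_files[b]:
                    needed = missing.get(path)
                    if needed is None:
                        continue        # file already written
                    needed.discard(b)
                    if needed:
                        continue
                    assemble_file(path, file_blocks[path], cache)
                    del missing[path]
                    # Release blocks no other pending file still needs.
                    for done in file_blocks[path]:
                        refcount[done] -= 1
                        if refcount[done] == 0:
                            cache.pop(done, None)

The refcounting is what bounds the cache: a shared (deduplicated) block
stays cached until the last file that references it has been written out.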
