On Monday, April 6, 2020 at 9:26:39 AM UTC-4, markus.peroebner wrote:

> I guess the rolling hash produces small chunks by intention. The perkeep 
> source code mentions 16MB as maximum chunk size in some places. 
>
> Splitting the file is the intended behavior. It has some advantages 
> compared to just hashing a complete file. 
>

Yes, this goal is clear from the perkeep docs: blobs are split up into 
chunks.

If we can tune when files get split (to optimize for different backend 
performance, for example), and if that tuning is backend-specific, then: 
could the fsbacked backend optimize for, say, 64TB split sizes, with 
everything smaller stored as-is?

--Joe 

To view this discussion on the web visit 
https://groups.google.com/d/msgid/perkeep/34c503cd-fb4c-4a7f-83ac-502580768dee%40googlegroups.com.
