You can nuke the cache directory. It's just a cache.
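For reference, a minimal sketch of clearing it from the shell. The path here is an assumption (on Linux the cache typically lives under $XDG_CACHE_HOME/perkeep or ~/.cache/perkeep); check your server config for the actual location, and stop perkeepd first if you want to be extra careful:

```shell
# Assumed default cache location on Linux -- adjust to your setup.
CACHE_DIR="${XDG_CACHE_HOME:-$HOME/.cache}/perkeep"

# See how big it has grown (skip if it doesn't exist).
if [ -d "$CACHE_DIR" ]; then du -sh "$CACHE_DIR"; fi

# Delete it; it's just a cache, so perkeepd will repopulate it as needed.
rm -rf "$CACHE_DIR"
```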

We should probably have some automatic size management for it. File a
bug? https://github.com/perkeep/perkeep/issues

(Btw, we use the perkeep@ mailing list now, not camlistore@)

On Mon, Aug 20, 2018 at 1:40 PM Viktor Ogeman <[email protected]> wrote:
>
> Hi again,
>
> I have a perkeepd server running, storing about 200GB of data using 
> blobpacked. The data is mostly JPEG images. The main blob storage is on a 
> spinning disk ("packed"), but I keep all the indexes, loose blobs, and cache on a 
> smaller SSD.
>
> I notice, however, that the cache directory becomes prohibitively large (64GB 
> of cached blobs for 200GB of "real data"). Is this expected? If so, why? Are 
> very high-quality thumbnails being stored for all the images, or is there some 
> other data being cached as well?
>
> Finally (and most importantly): is it safe for me to nuke the cache directory 
> (even while the server is running)? Or is there some other recommended way to 
> reduce the cache pressure?
>
> Regards
>
> --
> You received this message because you are subscribed to the Google Groups 
> "Camlistore" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to [email protected].
> For more options, visit https://groups.google.com/d/optout.
