Ok, I found this:
http://wiki.apache.org/jackrabbit/DataStore#Data_Store_Garbage_Collection
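For the archives, here's a minimal sketch of what that page describes, assuming a Jackrabbit 2.x repository and that you can obtain an administrative JCR session (the cast to the Jackrabbit-internal `SessionImpl` is what exposes the GC factory method; names here follow the wiki, not anything Sling-specific):

```java
import javax.jcr.Session;
import org.apache.jackrabbit.api.management.DataStoreGarbageCollector;
import org.apache.jackrabbit.core.SessionImpl;

public class DataStoreCleanup {
    // "session" must be an administrative session on the Jackrabbit
    // repository backing Sling; it is not created here.
    public static void runGc(Session session) throws Exception {
        DataStoreGarbageCollector gc =
                ((SessionImpl) session).createDataStoreGarbageCollector();
        try {
            gc.mark();   // mark phase: record all binaries still referenced
            gc.sweep();  // sweep phase: delete unreferenced datastore files
        } finally {
            gc.close();
        }
    }
}
```

Note this is a sketch rather than something you can run standalone - it needs a live repository, and the datastore only shrinks for binaries that are no longer referenced anywhere in the workspace.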

Rob

On Dec 4, 2012, at 11:34 AM, Robert A. Decker wrote:

> Hello,
> 
> We were recently transferring a lot of data via WebDAV in Sling. We were 
> processing a file on a Windows machine, shared from Sling via WebDAV, and 
> writing the results back into the WebDAV directory as they were processed. 
> The filesystem on the Sling server quickly hit 100% full - something like 3GB 
> on disk for only about 60MB of data being processed. 
> 
> For example, under /tmp there are GBs of temp cache files, and our Sling 
> folder quickly jumped up to 9GB, hitting 100%.
> 
> We've since changed the way we process the data so that we don't exchange 
> nearly as much via webdav.
> 
> Is it possible to clean up the Jackrabbit folder somehow? It looks like the 
> majority of the data is in the datastore folder, but I think it must just be 
> some sort of WebDAV caching.
> 
> 
> Rob
