Hi Julien,

Do these files look like bin1965159231182123515.tmp? If so, they hold the contents of binary properties cached by Jackrabbit, and I know of no way to avoid them entirely. These files should be deleted automatically when the associated properties are garbage collected, but if you have many large binary properties, the contents on disk can indeed grow very fast in the meantime.

I know of two workarounds: (i) point java.io.tmpdir to a filesystem with plenty of space, and (ii) configure smaller cache sizes in org.apache.jackrabbit.core.state.CacheManager (available through an org.apache.jackrabbit.core.RepositoryImpl instance). A sketch of the second workaround follows.
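For (i), starting the JVM with something like -Djava.io.tmpdir=/path/with/lots/of/space is usually enough (the path is just an example). For (ii), here is a minimal, untested sketch; it assumes your bootstrap code can get hold of the RepositoryImpl instance and that your Jackrabbit version exposes getCacheManager() on it. The sizes are made-up examples, not recommendations:

    import org.apache.jackrabbit.core.RepositoryImpl;
    import org.apache.jackrabbit.core.state.CacheManager;

    public class CacheTuning {

        // Shrinks Jackrabbit's item-state cache budget. All sizes below are
        // illustrative examples only; tune them for your own workload.
        public static void shrinkCaches(RepositoryImpl repository) {
            CacheManager manager = repository.getCacheManager();
            manager.setMaxMemory(16 * 1024 * 1024);        // 16 MB total across all caches
            manager.setMaxMemoryPerCache(4 * 1024 * 1024); // 4 MB upper bound per cache
            manager.setMinMemoryPerCache(128 * 1024);      // 128 KB lower bound per cache
        }
    }

Smaller caches mean Jackrabbit evicts item state sooner, trading memory (and temp-file spill) for more reads from the persistence layer.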
Btw, have you tried using the import/export API for migrating your content? (A rough sketch follows below the quoted mail.)

Best regards,
Martijn

On Mon, Nov 23, 2009 at 4:17 PM, Julien Poffet <[email protected]> wrote:
> Here is my situation,
>
> I was using Jackrabbit with a non-datastore config, so all of the
> Jackrabbit content was stored in my database. Now I have just migrated to
> a cluster/datastore config with a brand new database prefix.
>
> At this point I'm trying to import the content of the old repository into
> the new one. I have set up the SimpleWebDavServlet to expose the content
> of the old repository through WebDAV. This lets me walk the WebDAV tree
> and fetch the files so I can import them into the new repository. So far
> it's a little slow, but it works fine. My problem is that while the source
> WebDAV is being walked, a lot of binary files (which I assume are a kind
> of BLOB cache) are created in my Tomcat temp dir. These temporary files
> are never deleted, and my server runs out of space very quickly.
>
> Is there a way to avoid these temporary files?
>
> Cheers,
> Julien
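To illustrate the import/export route: a rough, untested sketch using the standard JCR API. The class, method, path, and file names are placeholders, and both sessions are assumed to be already logged in to their respective repositories:

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import javax.jcr.ImportUUIDBehavior;
    import javax.jcr.Session;

    public class Migration {

        // Serializes the subtree at 'path' in the old repository to a
        // system-view XML file, including binary values.
        public static void exportContent(Session oldSession, String path, String file)
                throws Exception {
            OutputStream out = new FileOutputStream(file);
            try {
                // skipBinary=false: keep binary property values in the export
                // noRecurse=false:  export the entire subtree, not just the node
                oldSession.exportSystemView(path, out, false, false);
            } finally {
                out.close();
            }
        }

        // Reads the XML back in under 'parentPath' in the new repository.
        public static void importContent(Session newSession, String parentPath, String file)
                throws Exception {
            InputStream in = new FileInputStream(file);
            try {
                // COLLISION_THROW keeps the original UUIDs and fails fast if a
                // node with the same UUID already exists in the new repository.
                newSession.importXML(parentPath, in,
                        ImportUUIDBehavior.IMPORT_UUID_COLLISION_THROW);
                newSession.save();
            } finally {
                in.close();
            }
        }
    }

This keeps everything in-process on the JCR level, so you skip the WebDAV round trip entirely; the trade-off is that the exported XML file needs disk space of its own while the migration runs.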
