Hi,
Thanks for your response.
I ran some tests with my 180,000 nodes: when I save per block of
100 nodes, the load takes 8 hours and the store grows very large (1.3 GB).
When I then run a simple query, CPU and RAM usage are maxed out and
execution is very, very slow.
Could you help me, please?
My repository.xml is below:
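My loading loop looks roughly like this (a minimal sketch: jcrSave() and the pending list stand in for Session.save() and the unsaved nodes, since the real JCR calls need a live repository):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSaver {
    static int saveCalls = 0;

    // Stand-in for session.save(); in real code this flushes the pending nodes.
    static void jcrSave(List<Integer> pending) {
        saveCalls++;
        pending.clear();
    }

    public static void main(String[] args) {
        final int totalNodes = 180_000;
        final int batchSize = 1_000; // larger batches = fewer saves, but more memory
        List<Integer> pending = new ArrayList<>();
        for (int i = 0; i < totalNodes; i++) {
            pending.add(i); // stands in for node.addNode(...)
            if (pending.size() == batchSize) {
                jcrSave(pending); // flush one block of changes
            }
        }
        if (!pending.isEmpty()) {
            jcrSave(pending); // flush the final partial block
        }
        System.out.println(saveCalls);
    }
}
```

With a block size of 1000 this makes 180 save() calls instead of 180,000.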
<?xml version="1.0" encoding="ISO-8859-1"?>
<Repository>
  <FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
    <param name="path" value="${rep.home}/repository"/>
  </FileSystem>
  <Security appName="Jackrabbit">
    <AccessManager class="org.apache.jackrabbit.core.security.SimpleAccessManager"/>
    <LoginModule class="org.apache.jackrabbit.core.security.SimpleLoginModule">
      <param name="anonymousId" value="anonymous"/>
    </LoginModule>
  </Security>
  <Workspaces rootPath="${rep.home}/workspaces" defaultWorkspace="default"/>
  <Workspace name="${wsp.name}">
    <FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
      <param name="path" value="${wsp.home}"/>
    </FileSystem>
    <PersistenceManager class="org.apache.jackrabbit.core.state.obj.ObjectPersistenceManager"/>
    <SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
      <param name="path" value="${wsp.home}/index"/>
      <param name="useCompoundFile" value="true"/>
      <param name="minMergeDocs" value="100"/>
      <param name="volatileIdleTime" value="3"/>
      <param name="maxMergeDocs" value="100000"/>
      <param name="mergeFactor" value="10"/>
      <param name="bufferSize" value="10"/>
      <param name="cacheSize" value="1000"/>
      <param name="forceConsistencyCheck" value="false"/>
      <param name="autoRepair" value="true"/>
    </SearchIndex>
  </Workspace>
  <Versioning rootPath="${rep.home}/version">
    <FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
      <param name="path" value="${rep.home}/version"/>
    </FileSystem>
    <PersistenceManager class="org.apache.jackrabbit.core.state.obj.ObjectPersistenceManager"/>
  </Versioning>
</Repository>
Best Regards,
Jérôme.
On Friday, 6 January 2006 at 19:17 +0200, Jukka Zitting wrote:
> Hi,
>
> On 1/6/06, Jérôme BENOIS <[EMAIL PROTECTED]> wrote:
> > I want to use Jackrabbit to create 180,000 content nodes, but how do
> > I use the session correctly? Should I call session.save() for each
> > node, or call session.save() once per block of 1000 nodes?
>
> It depends on your performance and memory requirements. Each save()
> costs some time, but the more changes you queue up before calling
> save(), the more memory your process will use to hold the pending
> changes. Calling Session.save() once per block of changes is probably
> better for such bulk loads.
>
> You may also want to take a look at Workspace.importXML() and
> Workspace.getImportContentHandler() as an efficient alternative for
> bulk loading large amounts of data.
>
> BR,
>
> Jukka Zitting
>
> --
> Yukatan - http://yukatan.fi/ - [EMAIL PROTECTED]
> Software craftsmanship, JCR consulting, and Java development
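
For anyone following the thread: Jukka's Workspace.importXML() suggestion amounts to streaming an XML document view into the workspace. Below is a minimal sketch of building such a stream with the JDK's StAX writer; the importXML() call itself is left in a comment because it needs a live Repository, and the paths and property names are illustrative only:

```java
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;
import java.io.StringWriter;

public class DocViewBuilder {
    public static void main(String[] args) throws Exception {
        StringWriter out = new StringWriter();
        XMLStreamWriter w = XMLOutputFactory.newInstance().createXMLStreamWriter(out);
        w.writeStartDocument("UTF-8", "1.0");
        // Document view: each element becomes a node, attributes become properties.
        w.writeStartElement("root");
        for (int i = 0; i < 3; i++) {          // 180,000 in the real bulk load
            w.writeStartElement("node" + i);
            w.writeAttribute("title", "value" + i);
            w.writeEndElement();
        }
        w.writeEndElement();
        w.writeEndDocument();
        w.close();
        System.out.println(out);
        // Against a live repository, the stream would then be imported with
        // something like:
        //   workspace.importXML("/imports",
        //       new ByteArrayInputStream(out.toString().getBytes("UTF-8")),
        //       ImportUUIDBehavior.IMPORT_UUID_CREATE_NEW);
        // which avoids building 180,000 transient node states in the session.
    }
}
```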