Hello again,
just to avoid any uncertainty: Jackrabbit runs stably.
In the meantime I have tested my use case, creating 1,000,000 nodes with the
attached configuration and with no dependency on my application code.
The result was that Jackrabbit did not get an OutOfMemoryError.
Bye,
Sandro
Sandro Boehme wrote:
Hello,
for a proof of concept I am trying to create 1,000,000 nodes, with no more
than 300 child nodes under any one node. Session.save() is called after
every 100 nodes. But after almost 500,000 nodes I get an
"OutOfMemoryError: Java heap space" within a Lucene method (at
org.apache.lucene.index.SegmentReader.createFakeNorms(SegmentReader.java:426)).
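The batching scheme described above (at most 300 children per node, a Session.save() every 100 nodes) can be sketched as follows. The JCR Session and Node types are stubbed with simple counters here so the sketch is self-contained and runnable without a repository; the loop shape is what the corresponding javax.jcr code would follow, and all names are illustrative.

```java
public class BulkCreateSketch {
    // Minimal stand-ins for javax.jcr.Node and javax.jcr.Session so the
    // sketch runs without a repository; real code would use the JCR types.
    public static class Node {
        public int children = 0;
        public Node addNode(String name) { children++; return new Node(); }
    }
    public static class Session {
        public final Node root = new Node();
        public int saves = 0;
        public Node getRootNode() { return root; }
        public void save() { saves++; }
    }

    public static final int TOTAL = 1_000_000;  // nodes to create
    public static final int MAX_CHILDREN = 300; // cap on children per node
    public static final int BATCH = 100;        // nodes per Session.save()

    public static Session run() {
        Session session = new Session();
        Node root = session.getRootNode();
        Node parent = null;
        for (int i = 0; i < TOTAL; i++) {
            // start a fresh parent whenever the previous one is full
            if (i % MAX_CHILDREN == 0) {
                parent = root.addNode("parent_" + (i / MAX_CHILDREN));
            }
            parent.addNode("node_" + i);
            // flush the transient changes in batches of 100
            if ((i + 1) % BATCH == 0) {
                session.save();
            }
        }
        if (TOTAL % BATCH != 0) {
            session.save(); // persist a trailing partial batch
        }
        return session;
    }

    public static void main(String[] args) {
        Session s = run();
        System.out.println(s.root.children); // parent nodes created: 3334
        System.out.println(s.saves);         // save() calls issued: 10000
    }
}
```

With these numbers, 1,000,000 nodes spread over parents of 300 children each yields 3,334 parent nodes and 10,000 save() calls.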
I use RMI to connect to a standard Jackrabbit 1.4 Tomcat installation. The
server is configured to start with export
JAVA_OPTS="-XX:MaxPermSize=256m -Xmx1024m", and I use the attached
configuration files.
I guess there is something wrong with my <SearchIndex/> configuration in
the workspace.xml file. But I could not find out what the problem is.
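For reference, this is the rough shape of a <SearchIndex/> element in workspace.xml. The class name is Jackrabbit's standard Lucene query handler; respectDocumentOrder=false is a commonly suggested setting for reducing memory use with large result sets. Parameter names and availability should be verified against the Jackrabbit 1.4 documentation, so treat this as a sketch rather than a known-good configuration:

```xml
<SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
  <!-- index location inside the workspace home directory -->
  <param name="path" value="${wsp.home}/index"/>
  <!-- skip document-order sorting, which can be costly on large result sets -->
  <param name="respectDocumentOrder" value="false"/>
</SearchIndex>
```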
If somebody has a configuration that can handle the creation of
1,000,000 nodes, I would be glad if they could send it as a
reply. Thanks in advance,
Sandro