That's likely the maps in the Lucene reader code.
- Mark
On Nov 16, 2008, at 10:04 PM, "Bill Au (JIRA)" <[EMAIL PROTECTED]> wrote:
[ https://issues.apache.org/jira/browse/SOLR-857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12648070#action_12648070 ]
Bill Au commented on SOLR-857:
------------------------------
HashMap$Entry is at the top of your list there. I would look for a
large HashMap in the heap dump.
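(As a side note, not from the original thread: on a Linux or Solaris
JDK 5, where the jmap tool ships with the JDK, a class histogram like
the one referred to above can be produced against the running server
with

    jmap -histo <solr-pid>

where <solr-pid> is the process id of the Solr JVM. The output lists
instance counts and total bytes per class, so a single oversized
HashMap stands out quickly.)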
Memory Leak during the indexing of large xml files
--------------------------------------------------
Key: SOLR-857
URL: https://issues.apache.org/jira/browse/SOLR-857
Project: Solr
Issue Type: Bug
Affects Versions: 1.3
Environment: Verified on Ubuntu 8.04 (1.7GB RAM, 2.4GHz
dual core) and Windows XP (2GB RAM, 2GHz Pentium), both with a Java 5
SDK
Reporter: Ruben Jimenez
Attachments: OQ_SOLR_00001.xml.zip, schema.xml,
solr256MBHeap.jpg
While indexing a set of Solr XML files, each containing 5000 document
adds and about 30MB in size, Solr 1.3 continually uses more and more
memory until the heap is exhausted, while the same files are indexed
without issue by Solr 1.2.
Steps to reproduce:
1 - Download Solr 1.3
2 - Modify the example schema.xml to match the required fields
3 - Start the example server with the following command: java -Xms512m -Xmx1024m -XX:MaxPermSize=128m -jar start.jar
4 - Index the files as follows: java -Xmx128m -jar .../examples/exampledocs/post.jar *.xml
The directory contains about 100 XML files of about 30MB each. Solr
1.3 runs out of memory after roughly the 25th file, while Solr 1.2 is
able to index the entire set of files without any problems.
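(As a side note, not from the original report: since the flag has been
available since Java 5.0 update 7, the example server in step 3 could
be started with -XX:+HeapDumpOnOutOfMemoryError so the heap is written
out at the moment of exhaustion:

    java -Xms512m -Xmx1024m -XX:MaxPermSize=128m -XX:+HeapDumpOnOutOfMemoryError -jar start.jar

The resulting java_pid<pid>.hprof file in the working directory can
then be checked for the large HashMap suggested in the comment above.)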