Hi,

I am using the latest trunk code, but I am still hitting
java.lang.OutOfMemoryError.

It may be due to a problem in my code, so I have created and attached a
sample script that reproduces the problem.

In my script I am just adding a simple document repeatedly, each time from
a fresh thread. Without threading it works, and it also works if the
document's field is UN_TOKENIZED, but with TOKENIZED it fails...

Thanks a lot!
Anurag


----- Original Message ----
From: Andi Vajda <[EMAIL PROTECTED]>
To: [email protected]
Sent: Friday, 11 January, 2008 4:26:53 AM
Subject: Re: [pylucene-dev] memory leak status


On Thu, 10 Jan 2008, Andi Vajda wrote:

>      I think I'm going to be adding support for the manual way via
>      finalize() shortly.

This just got checked in to rev 377.

The test/test_PythonDirectory.py tests can now be run in an endless loop
without leakage. See the test's sources for an example of finalize() use.

    > python test/test_PythonDirectory.py -loop
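
For reference, the pattern is roughly the following. This is a minimal
sketch, not the test's actual code: MyDirectory and its body are
illustrative, assuming a JCC extension class along the lines of
PythonDirectory.

    class MyDirectory(lucene.PythonDirectory):
        # ... implement the Directory methods Lucene needs ...
        pass

    store = MyDirectory()
    try:
        pass  # hand store to an IndexWriter/IndexSearcher and do some work
    finally:
        store.finalize()  # explicitly release the Java-side reference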

I'm still hoping to find a reliable way to automate this....

To rebuild PyLucene with this change, you also need to rebuild jcc.
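
(Something like the sequence below, assuming the usual layout of a jcc
setup.py next to the PyLucene Makefile; adjust to your checkout.)

    > cd jcc
    > python setup.py install
    > cd ..
    > make
    > make install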

Andi..


import threading

import lucene
# Cap the JVM heap at 5 MB so any leak shows up quickly.
lucene.initVM(lucene.CLASSPATH, maxheap='5m')


class MyDocument(lucene.Document):
    indexType = lucene.Field.Index.UN_TOKENIZED  # TOKENIZED fails

    def __init__(self):
        lucene.Document.__init__(self)
        self.add(lucene.Field("body", "what a body",
                              lucene.Field.Store.YES,
                              MyDocument.indexType))

class DocThread(threading.Thread):
    def __init__(self, writer):
        threading.Thread.__init__(self)
        self.writer = writer
        self.error = None
        
    def run(self):
        try:
            # Every thread that calls into the JVM must attach itself first.
            lucene.getVMEnv().attachCurrentThread()
            self.writer.addDocument(MyDocument())
        except Exception, e:
            self.error = e

def main():
    _store = lucene.FSDirectory.getDirectory("/tmp/index/", True)
    _writer = lucene.IndexWriter(_store, lucene.StandardAnalyzer(), True)

    # Add 500 documents, each from its own short-lived thread.
    for i in xrange(500):
        if i % 100 == 0:
            print i

        t = DocThread(_writer)
        t.start()
        t.join()

        if t.error:
            print t.error
            break

    # Close the writer so each run starts clean.
    _writer.close()


main()
print "lucene.Field.Index.UN_TOKENIZED works but TOKENIZED fails..."
MyDocument.indexType = lucene.Field.Index.TOKENIZED
main()
_______________________________________________
pylucene-dev mailing list
[email protected]
http://lists.osafoundation.org/mailman/listinfo/pylucene-dev
