Hey there,
I never actually run out of memory, but I think the app is always running right at the limit... The problem seems to be here (searching by term):
try {
    // a brand new searcher is opened for every single lookup
    indexSearcher = new IndexSearcher(path_index);

    QueryParser queryParser = new QueryParser("id_field", getAnalyzer(stopWordsFile));
    Query query = queryParser.parse(query_string);

    Hits hits = indexSearcher.search(query);

    if (hits.length() > 0) {
        doc = hits.doc(0);
    }
} catch (Exception ex) {
    ex.printStackTrace(); // was an empty catch; at least log the failure
} finally {
    if (indexSearcher != null) {
        try {
            indexSearcher.close();
        } catch (Exception e) {
            // ignore failures while closing
        }
        indexSearcher = null;
    }
}

As Hits is deprecated I also tried TermDocs and TopDocs, but the memory problem never disappeared.
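Roughly what my TopDocs attempt looks like (a sketch from memory, assuming Lucene 2.4-era APIs; path_index, query_string, getAnalyzer and stopWordsFile are the same names as in the snippet above):

IndexSearcher indexSearcher = null;
try {
    indexSearcher = new IndexSearcher(path_index);
    QueryParser queryParser = new QueryParser("id_field", getAnalyzer(stopWordsFile));
    Query query = queryParser.parse(query_string);

    // ask for just the single best match instead of iterating Hits
    TopDocs topDocs = indexSearcher.search(query, null, 1); // null = no filter
    if (topDocs.totalHits > 0) {
        doc = indexSearcher.doc(topDocs.scoreDocs[0].doc);
    }
} catch (Exception ex) {
    ex.printStackTrace();
} finally {
    if (indexSearcher != null) {
        try { indexSearcher.close(); } catch (Exception e) { /* ignore */ }
        indexSearcher = null;
    }
}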
If I call the garbage collector every time I run this code, the memory doesn't increase indefinitely, but then the app runs really slowly.
Any suggestion?
Thanks for replying!


Yonik Seeley wrote:
> 
> On Sun, Nov 2, 2008 at 8:09 PM, Marc Sturlese <[EMAIL PROTECTED]>
> wrote:
>> I am doing the same and I am experiencing some trouble. I get the
>> document data searching by term. The problem is that when I do it
>> several times (inside a huge for loop) the app's memory use keeps
>> increasing until almost all of the memory is used...
> 
> That just sounds like the way Java's garbage collection tends to
> work... do you ever run out of memory (and get an exception)?
> 
> -Yonik
> 
> 
