I have seen a lot of postings about this topic. Any final thoughts?
We did a simple stress test, and Lucene would produce this error between 30 and 80
concurrent searches. The index directory has 24 files (15 fields), and
ulimit -n reports 32768, so there should be more than enough FDs. Note, we did not do
Thanks for your quick response, I still want to know why we ran out of
file descriptors.
--Yup. Cache and reuse your Searcher as much as possible.
--Scott
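A minimal sketch of the cache-and-reuse advice above. "Object" stands in for Lucene's IndexSearcher so the sketch is self-contained; the point is that the expensive open (which holds file descriptors) happens once, not per request:

```java
// Sketch of caching a single shared searcher instead of opening one
// per search request. The Object field is a stand-in for Lucene's
// IndexSearcher; each real open would hold file descriptors, which
// is what exhausts ulimit -n under concurrent load.
class SearcherCache {
    private static Object searcher;   // the shared instance
    private static int opens = 0;     // how many times we actually "opened"

    static synchronized Object get() {
        if (searcher == null) {       // open lazily, exactly once
            searcher = new Object();
            opens++;
        }
        return searcher;
    }

    static int openCount() { return opens; }
}
```

Every caller gets the same instance, so the descriptor count stays flat no matter how many threads search.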
-Original Message-
From: Hang Li [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 23, 2002 9:59 AM
To: Lucene Users List
Are you closing the searcher when done with each search?
No: Waiting for the garbage collector is not a good idea.
Yes: It could be a timeout on the OS holding the files handles.
Either way, the only real option is to avoid thrashing the searchers...
Scott
-Original Message-
From: Hang
I did close the searcher after each search. Maybe I should try the
CachedSearcher someone posted before ... Is there a final version?
Scott Ganyo wrote:
Are you closing the searcher when done with each search?
No: Waiting for the garbage collector is not a good idea.
Yes: It could be a timeout on the
Another idea to address this (quite common) problem:
Does anyone know if there are any Java file implementations that support a
forked file or a file with multiple streams? Or, if not, do you know of
any design patterns or documents explaining the theory and design of this
kind of thing? It
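One way to get "multiple streams in one file" in plain Java is to pack named sub-streams into a single byte container with a small directory, much like the compound (.cfs) file format Lucene later added. A hedged in-memory sketch of the idea (the stream names are made up; a real version would sit on a RandomAccessFile):

```java
import java.io.*;
import java.util.*;

// Sketch of the "many logical streams, one physical file" idea:
// each named stream is written with a length prefix, so one OS file
// descriptor can serve many sub-files. This is an illustration, not
// a real forked-file or Lucene compound-file implementation.
class MultiStreamFile {
    static byte[] pack(Map<String, byte[]> streams) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeInt(streams.size());            // number of sub-streams
        for (Map.Entry<String, byte[]> e : streams.entrySet()) {
            out.writeUTF(e.getKey());            // stream name
            out.writeInt(e.getValue().length);   // stream length
            out.write(e.getValue());             // stream bytes
        }
        out.flush();
        return bytes.toByteArray();
    }

    static Map<String, byte[]> unpack(byte[] packed) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(packed));
        Map<String, byte[]> streams = new LinkedHashMap<String, byte[]>();
        int count = in.readInt();
        for (int i = 0; i < count; i++) {
            String name = in.readUTF();
            byte[] data = new byte[in.readInt()];
            in.readFully(data);
            streams.put(name, data);
        }
        return streams;
    }
}
```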
just close your Searcher after finishing work
- Original Message -
From: Hang Li [EMAIL PROTECTED]
To: Lucene Users List [EMAIL PROTECTED]
Sent: Tuesday, July 23, 2002 5:59 PM
Subject: Too many open files?
I have seen a lot of postings about this topic. Any final thoughts?
We did a
Hi all,
I'm (now) using a BooleanQuery to construct my query, and to verify
that my construction is correct, I use the toString() method to
retrieve the 'human readable' query.
Everything seems to be okay, but when I set a Boost Factor for a
sub-query (or the main query itself), this
Hello,
From: Olivier Amira [mailto:[EMAIL PROTECTED]]
I'm (now) using a BooleanQuery to construct my query, and to verify
that my construction is correct, I use the toString() method to
retrieve the 'human readable' query.
Everything seems to be okay, but when I set a Boost Factor for
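For reference, Lucene's Query.toString() renders a boost as a caret suffix on the clause, and omits it when the boost is the default 1.0. A tiny stand-alone sketch of that rendering (this mimics the output format, it is not the real Lucene code):

```java
// Sketch of how a boosted clause appears in Lucene's query syntax:
// "field:term^boost", with the caret suffix dropped when the boost
// is the default 1.0. This imitates Query.toString() output; it is
// not the actual Lucene implementation.
class BoostFormat {
    static String render(String field, String term, float boost) {
        String s = field + ":" + term;
        if (boost != 1.0f) {
            s += "^" + boost;   // e.g. title:foo^2.0
        }
        return s;
    }
}
```

So a sub-query boosted by 2.0 should show up in the toString() output with a trailing ^2.0.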
I cache the searcher now. As a result, multiple threads use the same
searcher. It seems to be much SLOWER than when each thread has its own
searcher. Are there any synchronized methods/blocks in Lucene causing this
performance problem?
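A possible middle ground between one shared searcher (lock contention) and one searcher per thread (file-descriptor exhaustion) is a small fixed pool. A hedged sketch, again with Object standing in for IndexSearcher:

```java
import java.util.*;

// Sketch of a small fixed pool of searchers: threads contend on the
// pool only briefly to check one out, then search unsynchronized,
// while the number of open "searchers" (and thus file descriptors)
// stays bounded. Object is a stand-in for Lucene's IndexSearcher.
class SearcherPool {
    private final List<Object> idle = new LinkedList<Object>();

    SearcherPool(int size) {
        for (int i = 0; i < size; i++) {
            idle.add(new Object());   // pretend to open a searcher
        }
    }

    synchronized Object acquire() throws InterruptedException {
        while (idle.isEmpty()) {
            wait();                   // block until one is returned
        }
        return idle.remove(0);
    }

    synchronized void release(Object searcher) {
        idle.add(searcher);
        notify();                     // wake one waiting thread
    }
}
```

The pool size caps the descriptor usage; tuning it trades contention against open files.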
Scott Ganyo wrote:
Yup. Cache and reuse your Searcher
I'm trying to implement a HITS/PageRank-type algorithm and need to modify
the document scores after a search is performed. The final score will be a
combination of the Lucene score and PageRank. Is there currently a way to
modify the scores on the fly via HitCollector, so that calling the
Mike Tinnes wrote:
I'm trying to implement a HITS/PageRank-type algorithm and need to modify
the document scores after a search is performed. The final score will be a
combination of the Lucene score and PageRank. Is there currently a way to
modify the scores on the fly via HitCollector, so
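The blending itself is easy to do in a collector callback. A sketch whose collect(int doc, float score) mirrors the shape of Lucene's HitCollector (the 0.7/0.3 weights and the rank map are made-up illustration values, not anything from Lucene):

```java
import java.util.*;

// Sketch of combining the Lucene score with a precomputed PageRank
// inside a per-hit callback shaped like HitCollector.collect(int doc,
// float score). The weights and the rank lookup are illustrative
// assumptions, not part of Lucene.
class RankingCollector {
    private final Map<Integer, Float> pageRank;   // docId -> precomputed rank
    final Map<Integer, Float> finalScores = new HashMap<Integer, Float>();

    RankingCollector(Map<Integer, Float> pageRank) {
        this.pageRank = pageRank;
    }

    // Called once per hit, like HitCollector.collect(doc, score).
    void collect(int doc, float score) {
        Float rank = pageRank.get(Integer.valueOf(doc));
        float r = (rank == null) ? 0f : rank.floatValue();
        // Weighted blend; 0.7/0.3 is an arbitrary choice for the sketch.
        finalScores.put(Integer.valueOf(doc), Float.valueOf(0.7f * score + 0.3f * r));
    }
}
```

The collector accumulates the blended scores, and the caller can sort them afterward instead of relying on Lucene's own ordering.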
Can someone please explain to me:
1) Why is org.apache.lucene.analysis.standard.ParseException.java included
in the source files? This file is generated by JavaCC!
2) Why is this file deleted after it is generated by JavaCC in build.xml?
3) When I compile the JavaCC generated files, I get