On Sep 29, 2004, at 3:11 PM, Bryan Dotzour wrote:
3. Certainly some of you on this list are using Lucene in a web-app environment. Can anyone list some best practices on managing reading/writing/searching a Lucene index in that context?
Beyond the advice already given on this thread, since you
-Original Message-
From: Otis Gospodnetic [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, September 29, 2004 6:28 PM
To: Lucene Users List
Subject: RE: Memory usage: IndexSearcher Sort
2. How does this approach work with multiple, simultaneous users?
IndexSearcher is thread-safe.
Correct. I think there is a FAQ entry at jguru.com that answers this.
Otis
--- Cocula Remi [EMAIL PROTECTED] wrote:
2. How does this approach work with multiple, simultaneous users?
IndexSearcher is thread-safe.
You mean one can invoke at the same time the search method of a unique
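Since a single IndexSearcher instance is safe to search from many threads at once, the usual webapp pattern is to open one searcher and share it across all request threads rather than opening one per request. A minimal sketch of that sharing pattern; the Searcher class below is a hypothetical stand-in for Lucene's IndexSearcher (only the sharing is modeled, not any real index access):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedSearcherDemo {

    // Hypothetical stand-in for Lucene's IndexSearcher; only the sharing
    // pattern is modeled, not any real index access.
    static class Searcher {
        private final AtomicInteger searches = new AtomicInteger();
        int search(String query) {
            // Real code would run the Lucene query here.
            return searches.incrementAndGet();
        }
        int searchCount() { return searches.get(); }
    }

    static int runDemo(int threads, int searchesPerThread) throws InterruptedException {
        // In a webapp this would be one application-scoped instance,
        // opened at startup and reused for every request.
        final Searcher shared = new Searcher();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < searchesPerThread; j++) {
                    shared.search("title:lucene"); // same instance in every thread
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        return shared.searchCount();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo(8, 10)); // 80 searches through one shared searcher
    }
}
```

The point is only that no per-request construction is needed; all threads call search() on the same long-lived object.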
Most helpful in this search was the following thread from Bugzilla:
http://issues.apache.org/bugzilla/show_bug.cgi?id=30628
We had a similar problem in our webapp.
Please look at the bug
My solution is:
I have bound one RemoteSearchable object for each index in an RMI registry.
Thus I do not have to create any IndexSearcher and I can execute queries from any application.
This has been implemented in the Lucene Server that I have just begun to create.
Hello,
--- Bryan Dotzour [EMAIL PROTECTED] wrote:
I have been investigating a serious memory problem in our web app (using Tapestry, Hibernate, Lucene) and have reduced it to being the way in which we are using Lucene to search on things. Being a webapp, we have focused on doing our
2. How does this approach work with multiple, simultaneous users?
3. When does the reader need to get closed?
Thanks again.
Bryan
-Original Message-
From: Otis Gospodnetic [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, September 29, 2004 8:47 AM
To: Lucene Users List
Subject: Re: Memory usage: IndexSearcher Sort
Close it when you are sure you no longer need it, if you can determine that in your application.
Otis
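One way to act on that advice, and on the question of when the reader needs to get closed, is reference counting: close the searcher only when the last in-flight search has finished and the application has retired it (for example, after swapping in a searcher over a fresh index). A sketch of that rule; the class below is hypothetical and simulates the close() call with a flag rather than using the Lucene API:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RefCountedSearcher {
    // The count starts at 1 for the application's own reference.
    private final AtomicInteger refs = new AtomicInteger(1);
    private volatile boolean closed = false;

    // A request thread calls acquire() before searching...
    void acquire() { refs.incrementAndGet(); }

    // ...and release() when its response is written. The application also
    // calls release() once, when it retires this searcher for a newer index.
    void release() {
        if (refs.decrementAndGet() == 0) {
            closed = true; // real code would call the searcher's close() here
        }
    }

    boolean isClosed() { return closed; }

    public static void main(String[] args) {
        RefCountedSearcher searcher = new RefCountedSearcher();
        searcher.acquire();                      // a request starts a search
        searcher.release();                      // the app retires this searcher
        System.out.println(searcher.isClosed()); // false: the request still holds it
        searcher.release();                      // the request finishes
        System.out.println(searcher.isClosed()); // true: last reference gone
    }
}
```

This guarantees no thread ever searches a closed index, while still freeing the memory as soon as nobody needs the old searcher.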
Sorry if I'm stating the obvious. Is this happening in some stand-alone unit tests, or are you running things from some application and in some environment, like Tomcat, Jetty or in some non-web app? Your queries are pretty big (although I recall some people using even bigger ones... but it all
Otis,
My app does run within Tomcat. But when I started getting these OutOfMemoryErrors I wrote a little unit test to watch the memory usage without Tomcat in the middle, and I still see the memory usage.
Thanks,
Jim
--- Otis Gospodnetic [EMAIL PROTECTED] wrote:
Sorry if I'm stating the
This sounds like a memory leakage situation. If you are using Tomcat I would suggest you make sure you are on a recent version, as it is known to have some memory leaks in version 4. It doesn't make sense that repeated queries would use more memory than the most demanding query unless objects
Will,
Thanks for your response. It may be an object leak.
I will look into that.
I just ran some more tests, and this time I created a 20GB index by repeatedly merging my large index into itself.
When I ran my test query against that index I got an
OutOfMemoryError on the very first query. I
How big are your actual Documents? Are you caching Hits? It stores,
internally, up to 200 documents.
Erik
On May 26, 2004, at 4:08 PM, James Dunn wrote:
Will,
Thanks for your response. It may be an object leak.
I will look into that.
I just ran some more tests and this time I created a
James Dunn wrote:
Also I search across about 50 fields but I don't use
wildcard or range queries.
Lucene uses one byte of RAM per document per searched field, to hold the
normalization values. So if you search a 10M document collection with
50 fields, then you'll end up using 500MB of RAM.
If
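Doug's rule of thumb above is easy to turn into a quick estimator. A small sketch, assuming only the one-byte-per-document-per-searched-field norm figure he quotes (stored fields and other index structures add more on top):

```java
public class NormsMemory {
    // One byte of RAM per document per searched field, per Doug's figure
    // for the norm values cached by the IndexReader.
    static long normBytes(long documents, long searchedFields) {
        return documents * searchedFields;
    }

    public static void main(String[] args) {
        long bytes = normBytes(10_000_000L, 50); // Doug's example numbers
        System.out.println(bytes);               // 500000000 bytes, i.e. ~500 MB
    }
}
```

Since this memory lives as long as the IndexReader does, the estimate describes steady-state heap use, not a per-query spike.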
Erik,
Thanks for the response.
My actual documents are fairly small. Most docs only have about 10 fields. Some of those fields are stored, however, like the OBJECT_ID, NAME and DESC fields. The stored fields are pretty small as well. None should be more than 4KB and very few will approach
Doug,
Thanks!
I just asked a question regarding how to calculate the memory requirements for a search. Does this memory only get used during the search operation itself, or is it referenced by the Hits object or anything else after the actual search completes?
Thanks again,
Jim
---
It is cached by the IndexReader and lives until the index reader is
garbage collected. 50-70 searchable fields is a *lot*. How many are
analyzed text, and how many are simply keywords?
Doug
James Dunn wrote:
Doug,
Thanks!
I just asked a question regarding how to calculate the
memory
Doug,
We only search on analyzed text fields. There are a couple of additional fields in the index, like OBJECT_ID, that are keywords, but we don't search against those; we only use them once we get a result back, to find the thing that document represents.
Thanks,
Jim
--- Doug Cutting [EMAIL PROTECTED] wrote:
[mailto:[EMAIL PROTECTED]]
Sent: Sunday, November 11, 2001 6:59 AM
To: Lucene Users List
Subject: RE: Memory Usage?
I am not very familiar with the output of -Xrunhprof, but I've attached the output of a run of a search through an index of 50,000 documents. It gave me out-of-memory errors
hmm, I seem to be getting a different number of hits when I use the files
you sent out.
-Original Message-
From: Doug Cutting [mailto:[EMAIL PROTECTED]]
Sent: 12. november 2001 20:47
To: 'Lucene Users List'
Subject: RE: Memory Usage?
From: Anders Nielsen [mailto:[EMAIL PROTECTED]]
hmm, I seem to be getting a different number of hits when I
use the files
you sent out.
Please provide more information! Is it larger or smaller than before? By
how much? What differences show up in the hits? That's a terrible bug