Renaud,
You'll have to "update" the whole Document in order to change the value of one
of its fields; Lucene has no in-place field update. The only way around that is
to store all of your fields, which lets you pull the Document from the index,
change one of its fields, and add it back to the index.
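A minimal sketch of that delete-and-re-add pattern, assuming a Lucene 2.x API, that every field was stored, and a unique "id" field (the docId, field names, and analyzer here are hypothetical):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

// Pull the stored document out of the index.
IndexReader reader = IndexReader.open("/path/to/index");
Document doc = reader.document(docId);   // only stored fields survive this round-trip
reader.close();

// Replace one field's value.
doc.removeField("title");
doc.add(new Field("title", "new value", Field.Store.YES, Field.Index.TOKENIZED));

// updateDocument() is an atomic delete-by-term followed by an add.
IndexWriter writer = new IndexWriter("/path/to/index", new StandardAnalyzer(), false);
writer.updateDocument(new Term("id", "42"), doc);
writer.close();
```

Any field that was indexed but not stored is lost in the round-trip, which is why the advice above is to store all fields if you need this.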
Otis
--
Sematext -- http://s
Angel,
Have you stepped through this with a debugger? That might reveal something.
Have you tried doing kill -QUIT while waiting for those slow calls you
mention to return? Perhaps this will show that the slow calls spend their time
somewhere the faster calls never go.
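For reference, a sketch of that technique on a Unix box (the pid 12345 below is hypothetical; jps ships with the JDK):

```shell
# Find the JVM's pid.
jps -l

# SIGQUIT does not kill a HotSpot JVM: it prints a full thread dump
# (every thread's stack) to the JVM's stdout, then the process keeps running.
kill -QUIT 12345
```

Catching a few dumps while a slow call is in flight shows which stack frames the threads are parked in.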
Otis
--
Sematext
Wojtek, yes, that's how you can loop through all docs in the index.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: Wojtek H <[EMAIL PROTECTED]>
To: java-user@lucene.apache.org
Sent: Sunday, April 13, 2008 1:38:35 PM
Subject: Re: Document ids in
Timo,
That is true. The only thing I can recommend at the moment is to make sure you
specify the correct data type. If your sort field is a numeric field, make
that explicit.
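A sketch of making the sort type explicit, assuming Lucene 2.x (the "price" field, query, and searcher are hypothetical). Declaring SortField.INT keeps FieldCache from guessing the type and parsing every term as a String:

```java
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;

// Explicit numeric sort: the field's values are cached as ints, not Strings.
Sort sort = new Sort(new SortField("price", SortField.INT));
Hits hits = searcher.search(query, sort);
```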
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: Timo Nentwig <[EMA
Thanks Karl. I think your solution would be useful in
case we would like to partition the index into two
indexes and use ParallelReader to query both indexes
simultaneously.
If this solution does not get included in
future Lucene releases, what other options do we have to
update just one of t
Hi Mathieu,
I can definitely store the foreign key inside the
dynamic index. However, if I understand correctly, for
ParallelReader to work properly the doc ids for all
documents in both the primary and secondary (dynamic)
index should be in the same order.
How can we achieve that if there are frequent changes
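For context, a sketch of the ParallelReader setup being discussed, assuming Lucene 2.x (both index paths are hypothetical):

```java
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.ParallelReader;

ParallelReader reader = new ParallelReader();
reader.add(IndexReader.open("/path/to/primary"));   // large, rarely changing fields
reader.add(IndexReader.open("/path/to/dynamic"));   // small, frequently rebuilt fields

// ParallelReader stitches the two by doc id: doc N in one index is assumed
// to be the same logical record as doc N in the other, which is exactly why
// both indexes must hold the same documents in the same order.
```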
Hi!
I found that when sorting the search result - depending on the amount of data
in the field to sort by - this can easily lead FieldCacheImpl to allocate
hundreds of megabytes of RAM.
How does this work internally? It seems as if all data for this field found in
the entire index is read into memo
Thank you for the answer. So it means that I can, without any problems,
iterate over the index documents using this algorithm (I don't want to use
MatchAllDocsQuery):
- check maxDoc()
- iterate from 0 to maxDoc() and process each doc if it is not deleted
Am I right?
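The two steps above can be sketched like this, assuming Lucene 2.x (the index path and the processing step are hypothetical):

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;

IndexReader reader = IndexReader.open("/path/to/index");
try {
    // maxDoc() is one past the largest doc id ever used, so deleted
    // slots still fall inside the range and must be skipped explicitly.
    for (int i = 0; i < reader.maxDoc(); i++) {
        if (reader.isDeleted(i)) {
            continue;
        }
        Document doc = reader.document(i);
        // ... process doc ...
    }
} finally {
    reader.close();
}
```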
Best,
wojtek
2008/4/12, Chris Hostetter <[EMA
Given the way people misspell these days, I think you could treat
"correct" terms as being incorrect and use your spellchecker to give
you alternates based on your index. You might also look into
FuzzyQuery.
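A sketch of the FuzzyQuery option, assuming Lucene 2.x (the "contents" field, the query term, and the searcher are hypothetical):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.FuzzyQuery;
import org.apache.lucene.search.Hits;

// 0.7f is the minimum similarity: indexed terms whose edit distance to
// "recieve" stays within that ratio of the term length will match.
FuzzyQuery query = new FuzzyQuery(new Term("contents", "recieve"), 0.7f);
Hits hits = searcher.search(query);
```

Note that fuzzy matching enumerates candidate terms at search time, so it is noticeably slower than an exact TermQuery on large indexes.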
On Apr 3, 2008, at 9:11 AM, Marjan Celikik wrote:
Hi everyone,
I know
For starters, you might have a look at Jackrabbit (Content Repo. built
on Lucene) as I know it powers several CMS systems.
More below.
On Apr 3, 2008, at 8:24 AM, Илья Казначеев wrote:
Hello.
We're designing a CMS in Java, and I'm trying to implement a site
search function using Lucene.