I sent my implementation to the list a while ago
(http://www.mail-archive.com/lucene-dev@jakarta.apache.org/msg00973.html)
I hope it helps.
peter
-----Original Message-----
From: Brian Goetz [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 21, 2002 11:00 PM
To: Lucene Users List
Subject:
Well it's me again :D
I have a funny feeling that this might not be recommended in Lucene.
Basically what I'm doing is searching the index, and for each document found I
need to update a field — which means deleting the document and re-adding it.
Is this OK to do?
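For what it's worth, here is a minimal sketch of that delete-and-re-add cycle against the Lucene API of this era. The "id" Keyword field, the directory path, and the UpdateSketch class name are hypothetical; the key points are that the searcher and reader are closed before the writer is opened, and that only one writer touches the index at a time.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

// Sketch: "update" documents matching a query by deleting and re-adding them.
// Assumes each document has a unique, untokenized "id" Keyword field.
public class UpdateSketch {
    static void update(String dir, Query query) throws Exception {
        // 1. Search and remember which documents need to change.
        IndexSearcher searcher = new IndexSearcher(dir);
        Hits hits = searcher.search(query);
        Document[] changed = new Document[hits.length()];
        for (int i = 0; i < hits.length(); i++) {
            changed[i] = hits.doc(i);  // copy of the *stored* fields only
        }
        searcher.close();

        // 2. Delete the old copies (deletes go through IndexReader in this API).
        IndexReader reader = IndexReader.open(dir);
        for (int i = 0; i < changed.length; i++) {
            reader.delete(new Term("id", changed[i].get("id")));
        }
        reader.close();

        // 3. Re-add the modified documents with a single IndexWriter.
        IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), false);
        for (int i = 0; i < changed.length; i++) {
            // ... modify the fields of changed[i] here ...
            writer.addDocument(changed[i]);
        }
        writer.close();
    }
}
```

One caveat: Hits.doc(i) returns only stored fields, so any unstored field has to be rebuilt from the original source before re-adding.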
A bit of code might help
Is it possible to configure your app server to have just one message-driven
bean instance in the pool? Obviously this is not a general solution to
concurrent access to Lucene, but it would remove the need for multiple
IndexWriters in your particular case and give you the same overall throughput.
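A pool of one can usually be declared in the vendor deployment descriptor. The fragment below assumes WebLogic's weblogic-ejb-jar.xml; the element names will differ on other app servers, and the bean name is made up:

```xml
<weblogic-enterprise-bean>
  <ejb-name>LuceneIndexerMDB</ejb-name>  <!-- hypothetical bean name -->
  <message-driven-descriptor>
    <pool>
      <max-beans-in-free-pool>1</max-beans-in-free-pool>
    </pool>
  </message-driven-descriptor>
</weblogic-enterprise-bean>
```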
I am a little unclear,
When you index do you store
[EMAIL PROTECTED]
[EMAIL PROTECTED]
[EMAIL PROTECTED]
Or
[EMAIL PROTECTED]
The reason why I ask is that Keyword does not tokenize (that is, whatever
you put into that field is seen as a single term).
So if you want to find it you have
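To make the distinction concrete, here is a sketch of the two ways of adding a field in this API; the field names and the message id are invented:

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

public class FieldSketch {
    public static Document make() {
        Document doc = new Document();
        // Keyword: not tokenized -- the whole value is indexed as ONE term.
        doc.add(Field.Keyword("id", "msg-1234@example.com"));
        // Text: run through the Analyzer -- indexed as SEVERAL terms.
        doc.add(Field.Text("subject", "Few questions regarding the Filter class"));
        return doc;
    }
}
```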
From: Victor Hadianto
The problem is that I couldn't search Lucene using this field. If I store
this field as Text I can search and find the document, but I couldn't delete
it using the following:
indexReader.delete(new Term("id", "[EMAIL PROTECTED]"));
This will return 0.
If I
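The return value of 0 is consistent with the tokenization point above: delete(Term) matches indexed terms exactly, and a Text field has been tokenized, so no single indexed term equals the full message id. If the field is indexed with Field.Keyword instead, a sketch like this should remove the document (the path and id are invented):

```java
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;

public class DeleteSketch {
    public static void main(String[] args) throws Exception {
        IndexReader reader = IndexReader.open("/path/to/index");
        // Matches only an exact, untokenized term -- i.e. a Keyword field.
        int deleted = reader.delete(new Term("id", "msg-1234@example.com"));
        System.out.println(deleted + " document(s) deleted");
        reader.close();
    }
}
```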
From: Victor Hadianto
A bit of code might help illustrate my situation:
IndexSearcher searcher = new IndexSearcher(dir);
Hits hits = searcher.search(query);
for (int i = 0; i < hits.length(); i++) {
    // get the document and do the modification here
    IndexReader reader = new
Anyone,
I am trying to evaluate Lucene for use in our company. I tried the simple
test below. Before it finished creating the index, I got the exception
below. Examining the directory where the index was created, there
were more than 10,000 files created. The error I got was, not
This seems a little strange.
I index over 100K documents on a windows machine and I don't get this
problem.
One way to solve it, though, is to optimize periodically. Just call
iw.optimize(); every 5 records or so. This will consolidate all the
individual files into a single file (or set of
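The advice above might look like this in code. The analyzer choice and the every-5 interval are just illustrative; for large batches a much larger interval (or tuning mergeFactor) is cheaper, since optimize rewrites the whole index:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;

public class PeriodicOptimize {
    public static void index(String dir, Document[] docs) throws Exception {
        IndexWriter iw = new IndexWriter(dir, new StandardAnalyzer(), true);
        for (int i = 0; i < docs.length; i++) {
            iw.addDocument(docs[i]);
            if ((i + 1) % 5 == 0) {
                iw.optimize();  // merge segment files to keep the file count low
            }
        }
        iw.optimize();  // final consolidation
        iw.close();
    }
}
```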
Peter,
I found the problem. It had nothing to do with optimize. The problem is in the
line:
Field f = new Field("FIELD_" + (i + 1), Integer.toString(i), true,
true, true);
It should be "FIELD_" + (j + 1); the i caused me to have a separate file for
each field. I guess it makes a separate file
From: Christian Meunier
Hi, I have a few questions regarding the Filter class.
Why is this not an interface?
No good reason. Since interfaces have some performance penalties with most
JVMs, when I first wrote Lucene I only used interfaces where multiple
inheritance was required. In
Are there any known problems with indexes over very small numbers of documents? I have a
program which works fine when it is indexing plenty of documents, but when it only
indexes 10 or so, all that gets created is an 8-byte segments file. I build the index
in RAM, and then merge it to disc, and I
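In case it is relevant, here is a sketch of the RAM-then-merge pattern in this API (paths invented). One thing worth checking when the on-disk index comes out empty is whether the RAM-side writer was closed before the merge, since unflushed documents never reach disk:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.RAMDirectory;

public class RamMergeSketch {
    public static void main(String[] args) throws Exception {
        RAMDirectory ram = new RAMDirectory();
        IndexWriter ramWriter = new IndexWriter(ram, new StandardAnalyzer(), true);
        // ... ramWriter.addDocument(...) calls here ...
        ramWriter.close();  // flush the in-memory segments before merging

        Directory disk = FSDirectory.getDirectory("/path/to/index", true);
        IndexWriter diskWriter = new IndexWriter(disk, new StandardAnalyzer(), true);
        diskWriter.addIndexes(new Directory[] { ram });  // one-pass merge to disk
        diskWriter.close();
    }
}
```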
- Original Message -
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, May 23, 2002 10:04 PM
Subject: RE: Few questions regarding the design of the Filter class
When you index do you store
[EMAIL PROTECTED]
[EMAIL PROTECTED]
[EMAIL PROTECTED]
Or
[EMAIL PROTECTED]
This is what I store, just a single message id.
The reason why I ask is that Keyword does not tokenize (that is, whatever
you put into that field is seen as a single term).