Hello,
I came across the following problem: "No tvx file".
How can I manage to fix it?
I would like to have transactional processing in Lucene.
After reading the dev-lucene and user-lucene lists and analysing what people
suggested, I came up with my own approach.
The problem in my case is that I had to make
I'm not sure about the tvx error, but I think I recall somebody
changing some code around it a month or two ago. I also believe
System.out.println is on the TODO list for elimination.
Otis
--- commandor [EMAIL PROTECTED] wrote:
Hello,
I came across the following problem with No tvx file.
About 3 months ago I developed an external storage engine which ties into
lucene.
I'd like to discuss making a contribution so that this is integrated
into a future version of Lucene.
I'm going to paste my original PROPOSAL in this email.
There wasn't a ton of feedback the first time around, but
I got the C# port of Lucene, thank god, at http://sourceforge.net/projects/nlucene
What about the new version that includes the compression facility?
You didn't reply to my question: does it compress the original text files and
their indexes like the great MG?
Thanks a lot
Part of my indexing process is determining if an older instance of the
current document being indexed exists, and if it does, deleting it.
This required me to have an IndexWriter open, and then an
IndexReader... when I call the delete of a document, I of course get
an IO exception caused by the
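In Lucene 1.x only one of IndexWriter/IndexReader may hold the index write lock at a time, which is the usual cause of an exception in the situation described above. A minimal sketch of the common workaround, closing the writer before deleting through a reader (the "id" field name and the helper method are assumptions for illustration, not the poster's actual code):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

public class DeleteThenAdd {
    // Hypothetical helper: removes any older copy of a document by its
    // unique "id" term, then adds the new version. The IndexWriter must
    // not be open while the IndexReader performs the delete, because
    // both need the write lock.
    public static void replaceDocument(String indexDir, String id, Document doc)
            throws Exception {
        // Delete the old version through an IndexReader.
        IndexReader reader = IndexReader.open(indexDir);
        reader.delete(new Term("id", id)); // delete by unique ID term
        reader.close();                    // releases the write lock

        // Now it is safe to open an IndexWriter and add the new version.
        IndexWriter writer =
            new IndexWriter(indexDir, new StandardAnalyzer(), false);
        writer.addDocument(doc);
        writer.close();
    }
}
```

Batching many deletes through one reader, then many adds through one writer, avoids paying the open/close cost per document.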
Hi Guys
Apologies.
a)
1) SEARCH FOR A SUBINDEX IN AN OPTIMISED MERGER INDEX
2) DELETE THE FOUND SUBINDEX FROM THE OPTIMISED MERGER INDEX
3) OPTIMISE THE MERGER INDEX
4) ADD A NEW VERSION OF THE SUBINDEX TO THE MERGER INDEX
5) OPTIMISE THE MERGER INDEX
b)
1) SEARCH FOR SUBINDEX IN
Well, if you do all the steps in one run, I guess optimizing once at the
end would be faster overall, but all you have to do is test it out and
time it. Performance-wise, I don't think that step 3 (OPTIMISE) in
scenario (a) will really improve the performance of the new index merge.
my 2 cents
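For what it's worth, the "optimise once at the end" suggestion can be sketched with IndexWriter.addIndexes from the Lucene 1.x API, which merges the rebuilt subindexes into the merger index with a single optimize() call at the end (the directory paths here are placeholders, and this is a sketch under those assumptions, not anyone's actual code):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class MergeOnce {
    public static void main(String[] args) throws Exception {
        // Open the existing merger index (false = do not re-create it).
        Directory merged = FSDirectory.getDirectory("merger-index", false);
        IndexWriter writer =
            new IndexWriter(merged, new StandardAnalyzer(), false);

        // Pull in each rebuilt subindex; the caller never calls
        // optimize() between steps, matching scenario (b).
        Directory[] subs = {
            FSDirectory.getDirectory("subindex-a", false),
            FSDirectory.getDirectory("subindex-b", false),
        };
        writer.addIndexes(subs);

        writer.optimize(); // single optimise at the very end
        writer.close();
    }
}
```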
Hi Chris,
we had the same problem and we fixed it by changing our indexing procedure.
The basic idea is to have a stored/not tokenized/indexed unique
primary key for each document (let's call it simply ID, even if it is not
the docID managed by Lucene itself).
With this ID, you can insert all you
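A sketch of what such an ID field might look like with the Lucene 1.x Field API; Field.Keyword creates exactly a stored, indexed, untokenized field. The field names and values here are made up for illustration, not taken from the poster's code:

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;

public class IdField {
    public static Document makeDoc(String id, String body) {
        Document doc = new Document();
        // Field.Keyword: stored, indexed, NOT tokenized -- the
        // stored/not-tokenized/indexed primary key described above.
        doc.add(Field.Keyword("id", id));
        // Field.Text: stored, indexed, and tokenized body text.
        doc.add(Field.Text("contents", body));
        return doc;
    }

    // Later, the old version can be removed by its ID term before the
    // new version is added, since the ID is a single untokenized term.
    public static void deleteById(String indexDir, String id)
            throws Exception {
        IndexReader reader = IndexReader.open(indexDir);
        reader.delete(new Term("id", id));
        reader.close();
    }
}
```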
Hi Kevin
On Sun, 07 Nov 2004 13:47:10 -0800, Kevin A. Burton
[EMAIL PROTECTED] wrote:
About 3 months ago I developed an external storage engine which ties into
lucene.
I'd like to discuss making a contribution so that this is integrated
into a future version of Lucene.
I'm going to paste