Hi Jokin,

actually I found some information about it. As far as I can tell, compression can be applied to document fields before they are added to the index, even though Lucene.Net doesn't supply it out of the box. But the issue I reported doesn't seem related to that, because Luke's size reduction is applied at a higher level, I mean, to an index that already contains documents with uncompressed fields.

In fact, when I reopen the index with Lucene.Net after it has been opened - and, as you saw, optimized - by Luke, I can still read it, even though I never configured any compression support. So Luke didn't compress the contents of the documents in the index (which would be weird behavior anyway); instead it did something like optimizing the format of the index files themselves.

Another detail: when I write my index with Lucene.Net I end up with at least 3 files, whereas after opening it with Luke I always get only 2 files. And yes, I am calling IndexWriter.Optimize() when I finish indexing. Am I missing something?
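For completeness, this is roughly how I write and optimize the index - a minimal sketch against the Lucene.Net 2.0-era API; the index path, field name, and sample text are just placeholders:

```csharp
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;

// Open (or create) the index; the third argument asks for a fresh index.
IndexWriter writer = new IndexWriter("C:\\my-index", new StandardAnalyzer(), true);

// Pack per-segment files into a single .cfs compound file.
writer.SetUseCompoundFile(true);

Document doc = new Document();
// Plain stored + tokenized field -- no compression involved.
doc.Add(new Field("content", "some text", Field.Store.YES, Field.Index.TOKENIZED));
writer.AddDocument(doc);

// Merge everything down to a single segment before closing.
writer.Optimize();
writer.Close();
```

If I understand the compound file format correctly, an optimized index written this way should boil down to little more than a segments file plus one .cfs file, so maybe the 2-vs-3 file difference comes from how the compound format is applied - but that's just a guess on my part.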

Simone

Jokin Cuadrado wrote:
Maybe the Java index is using compression, while if you need
compression in Lucene.Net you must use an external library (SharpZipLib)
and tell Lucene.Net to use it. There must be a how-to on using
compression in Lucene.Net somewhere on the web.

Jokin

On 7/17/07, Simone Busoli <[EMAIL PROTECTED]> wrote:

 Hello,

 I discovered that an index optimization done by Lucene.Net with
IndexWriter.Optimize() is less "optimizing" than the same operation done on
the same index with Java Lucene. I found this out because I am using Luke to
browse my index, and merely opening the index with Luke automatically reduces
its size by 50%, even though it had just been optimized by my application
running Lucene.Net.

 Did anyone else notice this?
