Hi all,

 

I am a Lucene.Net user. Since I need fast indexing in my current project, I am
trying to use Lucene 2.3.2, which I convert to .NET with IKVM (Lucene.Net is
currently at v2.1), and I reuse the same Document and Field instances to gain
some speed.

 

I use TokenStreams to set the values of the fields.
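
(In case the wording is unclear: by "the same instances" I mean a pattern like
the minimal sketch below, shown here with plain string values via
Field.setValue(String) instead of a TokenStream. The class name "ReuseSketch"
and the index directory name are just illustrative; the full TokenStream
reproduction is in the sample code further down.)

    import org.apache.lucene.analysis.WhitespaceAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;

    public class ReuseSketch
    {
        public static void main(String[] args) throws Exception
        {
            // Create the Document and Field once, up front.
            Document doc = new Document();
            Field field = new Field("Field1", "", Field.Store.YES, Field.Index.TOKENIZED);
            doc.add(field);

            IndexWriter wr = new IndexWriter("reuse-test", new WhitespaceAnalyzer(), true);
            for (int i = 0; i < 100; i++)
            {
                // Reuse the same Document/Field instances; only the value changes per document.
                field.setValue("test" + i);
                wr.addDocument(doc);
            }
            wr.close();
        }
    }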

 

My problem is that I get a NullPointerException in "addDocument":

 

Exception in thread "main" java.lang.NullPointerException
        at org.apache.lucene.store.IndexOutput.writeString(IndexOutput.java:99)
        at org.apache.lucene.index.FieldsWriter.writeField(FieldsWriter.java:127)
        at org.apache.lucene.index.DocumentsWriter$ThreadState$FieldData.processField(DocumentsWriter.java:1418)
        at org.apache.lucene.index.DocumentsWriter$ThreadState.processDocument(DocumentsWriter.java:1121)
        at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:2442)
        at org.apache.lucene.index.DocumentsWriter.addDocument(DocumentsWriter.java:2424)
        at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1464)
        at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1442)
        at MainClass.Test(MainClass.java:39)
        at MainClass.main(MainClass.java:10)

 

To show the same bug in Java I prepared a sample application (that was hard,
since this is only my second Java application; the first one was a "Hello
World").

 

Is something wrong with my application, or is it a bug in Lucene?

 

Thanks,

DIGY

 

 

 

Sample code:

    public class MainClass
    {
        DummyTokenStream DummyTokenStream1 = new DummyTokenStream();
        DummyTokenStream DummyTokenStream2 = new DummyTokenStream();

        // use the same Document & Field instances for indexing
        org.apache.lucene.document.Document Doc =
                new org.apache.lucene.document.Document();

        org.apache.lucene.document.Field Field1 =
                new org.apache.lucene.document.Field("Field1", "",
                        org.apache.lucene.document.Field.Store.YES,
                        org.apache.lucene.document.Field.Index.TOKENIZED);

        org.apache.lucene.document.Field Field2 =
                new org.apache.lucene.document.Field("Field2", "",
                        org.apache.lucene.document.Field.Store.YES,
                        org.apache.lucene.document.Field.Index.TOKENIZED);

        public MainClass()
        {
            Doc.add(Field1);
            Doc.add(Field2);
        }

        public void Index() throws org.apache.lucene.index.CorruptIndexException,
                                   org.apache.lucene.store.LockObtainFailedException,
                                   java.io.IOException
        {
            System.out.println("Index Started");
            org.apache.lucene.index.IndexWriter wr =
                    new org.apache.lucene.index.IndexWriter("testindex",
                            new org.apache.lucene.analysis.WhitespaceAnalyzer(), true);

            for (int i = 0; i < 100; i++)
            {
                PrepDoc();
                wr.addDocument(Doc);
            }
            wr.close();
            System.out.println("Index Completed");
        }

        void PrepDoc()
        {
            DummyTokenStream1.SetText("test1"); // set a new text on the TokenStream
            Field1.setValue(DummyTokenStream1); // set the TokenStream as the field value

            DummyTokenStream2.SetText("test2"); // set a new text on the TokenStream
            Field2.setValue(DummyTokenStream2); // set the TokenStream as the field value
        }

        public static void main(String[] args) throws
                org.apache.lucene.index.CorruptIndexException,
                org.apache.lucene.store.LockObtainFailedException,
                java.io.IOException
        {
            MainClass m = new MainClass();
            m.Index();
        }

        public class DummyTokenStream extends org.apache.lucene.analysis.TokenStream
        {
            String Text = "";
            boolean EndOfStream = false;
            org.apache.lucene.analysis.Token Token = new org.apache.lucene.analysis.Token();

            // return "Text" as the first token and null as the second
            public org.apache.lucene.analysis.Token next()
            {
                if (EndOfStream == false)
                {
                    EndOfStream = true;

                    Token.setTermText(Text);
                    Token.setStartOffset(0);
                    Token.setEndOffset(Text.length() - 1);
                    Token.setTermLength(Text.length());
                    return Token;
                }
                return null;
            }

            public void SetText(String Text)
            {
                EndOfStream = false;
                this.Text = Text;
            }
        }
    }

      

 
