I have a web page that is used to extract information from the index. All
seemed to work fine, but when a search engine such as Google started to crawl
the site, more files were created in the index directory and the index now
appears to be corrupt. Here is the code that I use.


ArrayList documents = new ArrayList();

IndexReader reader = IndexReader.Open(indexPath);
Searcher searcher = new IndexSearcher(reader);
Analyzer analyzer = new StandardAnalyzer();
Sort sort = new Lucene.Net.Search.Sort("rating", false);
QueryParser parser = new QueryParser("content", analyzer);

Query query = parser.Parse(texte);
Hits hits = searcher.Search(query, sort);
for (int i = 0; i < hits.Length(); i++)
{
    Document doc = hits.Doc(i);
    documents.Add(doc);
}

searcher.Close();
reader.Close();
return documents;


Any ideas?

 

Regards,

 

Yves
