My bad :( That's what I get when I start responding to things before drinking coffee in the morning.
I'll open another issue on the setter and post a patch for it so any overflow is handled gracefully.

Michael

-----Original Message-----
From: Digy (JIRA) [mailto:j...@apache.org]
Sent: Wednesday, November 18, 2009 8:57 AM
To: lucene-net-dev@incubator.apache.org
Subject: [jira] Commented: (LUCENENET-257) TestCheckIndex.TestDeletedDocs

    [ https://issues.apache.org/jira/browse/LUCENENET-257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12779523#action_12779523 ]

Digy commented on LUCENENET-257:
--------------------------------

Some may want to set a 1.5 MB merge size :-)
Anyway, I applied the patch as it is (leaving SetMaxMergeMB accepting a double).

DIGY

> TestCheckIndex.TestDeletedDocs
> ------------------------------
>
>            Key: LUCENENET-257
>            URL: https://issues.apache.org/jira/browse/LUCENENET-257
>        Project: Lucene.Net
>     Issue Type: Bug
>       Reporter: Andrei Iliev
>       Assignee: Digy
>    Attachments: LUCENENET-257.patch
>
>
> Setting writer.SetMaxBufferedDocs(2) causes a flush after every 2 added docs. That results in a total of 10 segments in the index and a failing TestDeletedDocs (the test case assumes there is only 1 segment file). So the question arises:
> 1) Does performing a flush have to start a new segment file?
> 2) If so, in order to run TestDeletedDocs smoothly, either change writer.SetMaxBufferedDocs(2) to, say, writer.SetMaxBufferedDocs(20), or call writer.optimize(1) before closing the writer.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
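
To make the overflow concern concrete, here is a hypothetical sketch (not the actual follow-up patch) of how an MB-based setter could clamp its conversion to bytes so a very large double does not wrap around when cast to long. The class and field names below are invented purely for illustration:

    // Hypothetical sketch only, not the real Lucene.Net setter.
    public class MergeSizeSetterSketch
    {
        private long maxMergeSizeBytes;   // assumed backing field, in bytes

        public void SetMaxMergeMB(double mb)
        {
            double bytes = mb * 1024 * 1024;
            // A double above long.MaxValue would produce a nonsense value
            // when cast, so saturate at long.MaxValue instead.
            maxMergeSizeBytes = bytes >= (double)long.MaxValue
                ? long.MaxValue
                : (long)bytes;
        }
    }

And a minimal sketch of the workaround described in the issue, assuming a Lucene.Net 2.9-style API; the directory, analyzer, and field setup here are illustrative, not taken from the actual test. Either raise the buffered-doc limit so a new segment is not flushed every 2 docs, or merge down to a single segment before closing the writer:

    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.Documents;
    using Lucene.Net.Index;
    using Lucene.Net.Store;

    public class SingleSegmentExample
    {
        public static void Run()
        {
            Directory dir = new RAMDirectory();
            IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), true,
                                                 IndexWriter.MaxFieldLength.UNLIMITED);

            // Option 1: don't flush (and create a new segment) every 2 docs.
            writer.SetMaxBufferedDocs(20);

            for (int i = 0; i < 19; i++)
            {
                Document doc = new Document();
                doc.Add(new Field("id", i.ToString(), Field.Store.YES, Field.Index.NOT_ANALYZED));
                writer.AddDocument(doc);
            }

            // Option 2: merge everything down to a single segment before closing.
            writer.Optimize(1);
            writer.Close();
        }
    }

Either option leaves the index with a single segment file, which is what TestDeletedDocs assumes.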