[ https://issues.apache.org/jira/browse/LUCENE-1476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12664573#action_12664573 ]

Jason Rutherglen commented on LUCENE-1476:
------------------------------------------

{quote}
If we moved to using only iterator API for accessing deleted docs within Lucene 
then we could explore fixes for the copy-on-write cost w/o changing on-disk 
representation of deletes. IE tombstones are perhaps overkill for Lucene, since 
we're not using the filesystem as the intermediary for communicating deletes to 
a reopened reader. We only need an in-RAM incremental solution.
{quote}

+1 Agreed.  Good point about not needing to change the on-disk representation; 
changing it would make the implementation a bit more complicated.  Sounds like 
we also need a tombstones patch that plays well with IndexReader.clone.
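
For the DocIdSet side of this, something along these lines is roughly what I'm 
picturing (untested sketch against the 2.4-style DocIdSet/DocIdSetIterator API; 
the wrapper class name is only illustrative, and the attached patch may instead 
do this directly on BitVector):

{code:java}
import java.io.IOException;

import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.BitVector;

/**
 * Illustrative wrapper only: exposes the set bits of a BitVector
 * (e.g. SegmentReader.deletedDocs) through the DocIdSet API.
 */
public class BitVectorDocIdSet extends DocIdSet {

  private final BitVector bits;

  public BitVectorDocIdSet(BitVector bits) {
    this.bits = bits;
  }

  public DocIdSetIterator iterator() {
    return new DocIdSetIterator() {
      private int doc = -1;

      public int doc() {
        return doc;
      }

      public boolean next() throws IOException {
        // advance to the next set bit, i.e. the next deleted doc
        for (int d = doc + 1; d < bits.size(); d++) {
          if (bits.get(d)) {
            doc = d;
            return true;
          }
        }
        return false;
      }

      public boolean skipTo(int target) throws IOException {
        // move to the first set bit at or after target, past the current doc
        doc = Math.max(doc, target - 1);
        return next();
      }
    };
  }
}
{code}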

Exposing deleted docs as a DocIdSet also leaves room for future implementations 
where TermDocs DOES return deleted docs, as discussed (via a flag to IndexReader).  
The deleted-docs DocIdSet can then be applied at a higher level as a filter/query.  
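
To make that higher-level filtering concrete, a consumer could drive the two 
iterators in lockstep instead of calling BitVector.get(doc) per hit.  Untested 
sketch; countLiveDocs and the surrounding names are only illustrative, and it 
assumes a TermDocs that (hypothetically, behind the flag discussed above) also 
returns deleted docs:

{code:java}
import java.io.IOException;

import org.apache.lucene.index.TermDocs;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;

public class DeletedDocsFilterSketch {

  /**
   * Counts live docs for a term, skipping deletions via a
   * DocIdSetIterator rather than random-access BitVector.get(doc).
   */
  static int countLiveDocs(TermDocs termDocs, DocIdSet deletedDocs)
      throws IOException {
    DocIdSetIterator del = deletedDocs.iterator();
    boolean hasDel = del.next();        // position on the first deleted doc
    int count = 0;
    while (termDocs.next()) {
      int doc = termDocs.doc();
      // catch the deletions iterator up to the current doc
      while (hasDel && del.doc() < doc) {
        hasDel = del.next();
      }
      if (hasDel && del.doc() == doc) {
        continue;                       // deleted, filter it out
      }
      count++;
    }
    return count;
  }
}
{code}

The same deleted-docs set could presumably also be handed out through a Filter's 
getDocIdSet, but that's the gist.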

> BitVector implement DocIdSet
> ----------------------------
>
>                 Key: LUCENE-1476
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1476
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Index
>    Affects Versions: 2.4
>            Reporter: Jason Rutherglen
>            Priority: Trivial
>         Attachments: LUCENE-1476.patch, quasi_iterator_deletions.diff
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> BitVector can implement DocIdSet.  This is for making 
> SegmentReader.deletedDocs pluggable.
