[ https://issues.apache.org/jira/browse/LUCENE-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16709931#comment-16709931 ]

Adrien Grand commented on LUCENE-8585:
--------------------------------------

bq. It seems that the meta data for the DocValue entries are loaded one-off and 
that any subsequently needed data is fetched from the data-slices.

This is correct.

bq. In line with TermsDictEntry, the jump-tables for IndexedDISI-blocks and 
vBPV can be stored at the end of their respective data-slices, with only the 
offsets to the jump-tables being stored in meta.

I don't think they are comparable: the terms dictionary is kept on disk while 
jump tables would be fully loaded in memory. Like you said, meta is about 
information that is loaded one-off, so this is where we should put jump tables.

bq. The downside to this solution is that the full jump-tables need to be kept 
in memory until the data has been written, for a worst-case temporary overhead 
of 2MB and 8MB when flushing 2 billion documents with values.

Right. I'm a bit less worried about the write-time memory usage than I am about 
search-time: at write time we only write one field at a time, whereas at search 
time we would have jump tables loaded in memory for every field at the same 
time. Alternatively, maybe we could explore inlining skip data between DISI 
blocks, similarly to postings; that would still require memory at index time 
because of buffering, but almost nothing at search time.
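For a rough sense of scale (a back-of-envelope sketch only, using the 
per-structure sizes quoted in the issue description below; the method name and 
shape are illustrative, not existing Lucene API):

{code:java}
// Rough per-field, per-segment estimate of fully loaded jump-table memory,
// based on the sizes given in the issue description: one long per 65536 docs
// for IndexedDISI block jumps, 256 bytes of rank data per DENSE block, and
// one long per 16384 values for the vBPV offsets. Illustrative only.
static long estimateJumpTableBytes(int maxDoc, int denseBlocks, long numericValues) {
  long disiBlockJumps = ((maxDoc + 65535L) / 65536) * 8;
  long denseRanks = denseBlocks * 256L;
  long vbpvOffsets = ((numericValues + 16383) / 16384) * 8;
  // At search time this amount would be held per DocValues field simultaneously.
  return disiBlockJumps + denseRanks + vbpvOffsets;
}
{code}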

bq. DENSE ranks belong naturally in their blocks in data

+1

> Create jump-tables for DocValues at index-time
> ----------------------------------------------
>
>                 Key: LUCENE-8585
>                 URL: https://issues.apache.org/jira/browse/LUCENE-8585
>             Project: Lucene - Core
>          Issue Type: Improvement
>          Components: core/codecs
>    Affects Versions: master (8.0)
>            Reporter: Toke Eskildsen
>            Priority: Minor
>
> As noted in LUCENE-7589, lookup of DocValues should use jump-tables to avoid 
> long iterative walks. This is implemented in LUCENE-8374 at search-time 
> (first request for DocValues from a field in a segment), with the benefit of 
> working without changes to existing Lucene 7 indexes and the downside of 
> introducing a startup time penalty and a memory overhead.
> As discussed in LUCENE-8374, the codec should be updated to create these 
> jump-tables at index time. This eliminates the segment-open time & memory 
> penalties, with the potential downside of increasing index-time for DocValues.
> The three elements of LUCENE-8374 should be transferable to index-time 
> without much alteration of the core structures:
>  * {{IndexedDISI}} block offset and index skips: A {{long}} (64 bits) for 
> every 65536 documents, containing the offset of the block in 33 bits and the 
> index (number of set bits) up to the block in 31 bits.
>  It can be built sequentially and should be stored as a simple sequence of 
> consecutive longs for caching of lookups.
>  As it is fairly small, relative to document count, it might be better to 
> simply memory cache it?
>  * {{IndexedDISI}} DENSE (> 4095, < 65536 set bits) blocks: A {{short}} (16 
> bits) for every 8 {{longs}} (512 bits) for a total of 256 bytes/DENSE_block. 
> Each {{short}} represents the number of set bits up to right before the 
> corresponding sub-block of 512 docIDs.
>  The {{shorts}} can be computed sequentially or when the DENSE block is 
> flushed (probably the easiest). They should be stored as a simple sequence of 
> consecutive shorts for caching of lookups, one logically independent sequence 
> for each DENSE block. The logical position would be one sequence at the start 
> of every DENSE block.
>  Whether it is best to read all of a block's {{shorts}} up front when a DENSE 
> block is accessed or whether it is best to only read any individual {{short}} 
> when needed is not clear at this point.
>  * Variable Bits Per Value: A {{long}} (64 bits) for every 16384 numeric 
> values. Each {{long}} holds the offset to the corresponding block of values.
>  The offsets can be computed sequentially and should be stored as a simple 
> sequence of consecutive {{longs}} for caching of lookups.
>  The vBPV-offsets have the largest space overhead of the 3 jump-tables, and a 
> lot of the 64 bits in each long are not used for most indexes. They could be 
> represented as a simple {{PackedInts}} sequence or {{MonotonicLongValues}}, 
> with the downsides of a potential lookup-time overhead and the need for doing 
> the compression after all offsets have been determined.
> I have no experience with the codec-parts responsible for creating 
> index-structures. I'm quite willing to take a stab at this, although I 
> probably won't do much about it before January 2019. Should anyone else wish 
> to adopt this JIRA-issue or co-work on it, I'll be happy to share.
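To make the first bullet of the description concrete, here is a minimal sketch 
of packing a block's file offset (33 bits) and the accumulated index (31 bits) 
into a single {{long}}. The exact layout used by the codec may differ, so treat 
this as an assumption about the format rather than the actual encoding:

{code:java}
// Sketch of the 33/31-bit split described above: offset in the upper 33 bits,
// number of set bits before the block in the lower 31 bits.
static long packBlockJump(long blockOffset, int indexBeforeBlock) {
  assert blockOffset < (1L << 33) && indexBeforeBlock >= 0;
  return (blockOffset << 31) | indexBeforeBlock;
}

static long unpackOffset(long jump) {
  return jump >>> 31;                // upper 33 bits
}

static int unpackIndex(long jump) {
  return (int) (jump & 0x7FFFFFFFL); // lower 31 bits
}
{code}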
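For the second bullet, a sketch of how the per-DENSE-block rank {{shorts}} 
could be computed when the block is flushed, assuming the block's bits are 
buffered in 1024 {{longs}} (65536 bits); the method name is illustrative:

{code:java}
// One short per 8 longs (512 bits): the number of set bits before each
// sub-block. Counts up to 65535 are stored as unsigned shorts, so readers
// must interpret them with (rank & 0xFFFF).
static short[] buildDenseRanks(long[] blockBits) { // blockBits.length == 1024
  short[] ranks = new short[blockBits.length / 8]; // 128 shorts = 256 bytes
  int setBits = 0;
  for (int i = 0; i < blockBits.length; i++) {
    if ((i & 7) == 0) {
      ranks[i >> 3] = (short) setBits;             // rank before this sub-block
    }
    setBits += Long.bitCount(blockBits[i]);
  }
  return ranks;
}
{code}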
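And for the third bullet, a sketch of collecting one offset per 16384 numeric 
values while the value blocks are written; {{IndexOutput#getFilePointer}} is 
real Lucene API, but {{writeValueBlock}} is a hypothetical stand-in for 
whatever actually writes a block of values:

{code:java}
// Record the data-file offset of every block of 16384 numeric values.
// The resulting offsets increase monotonically, making them candidates for
// PackedInts/MonotonicLongValues compression once all blocks are written.
static long[] writeValuesWithJumpTable(IndexOutput data, long[] values) throws IOException {
  int numBlocks = (values.length + 16383) / 16384;
  long[] offsets = new long[numBlocks];
  for (int block = 0; block < numBlocks; block++) {
    offsets[block] = data.getFilePointer();       // offset of this value block
    writeValueBlock(data, values, block * 16384); // hypothetical block writer
  }
  return offsets;
}
{code}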


