[ https://issues.apache.org/jira/browse/ARROW-3772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887577#comment-16887577 ]

Wes McKinney commented on ARROW-3772:
-------------------------------------

I'm looking at this. This is not a small project -- the assumption that values 
are fully materialized is pretty deeply baked into the library. We also have to 
deal with the "fallback" case where a column chunk starts out dictionary 
encoded and switches mid-stream because the dictionary got too big. What to do 
in that case is ambiguous:

* One option is to dictionary-encode the additional (plain-encoded) pages as we 
read them, so we could end up with one big dictionary
* Another option is to optimistically leave things dictionary-encoded, and if 
we hit the fallback case then we fully materialize. We can always do a cast on 
the Arrow side after the fact in this case (see the sketch after this list)
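
A rough sketch of what that "cast after the fact" could look like on the Arrow 
side, assuming we already hold the fallback chunk as a fully materialized 
array (illustrative only -- the compute API here follows current Arrow C++ and 
the exact signatures vary across versions):

{code:cpp}
#include <arrow/api.h>
#include <arrow/compute/api.h>

// Re-encode a plain (fully materialized) array into dictionary form after the
// whole column chunk has been read. This is the "cast on the Arrow side"
// mentioned above, not code that exists in the library today.
arrow::Result<std::shared_ptr<arrow::Array>> EncodeAfterFallback(
    const std::shared_ptr<arrow::Array>& materialized) {
  ARROW_ASSIGN_OR_RAISE(arrow::Datum encoded,
                        arrow::compute::DictionaryEncode(materialized));
  return encoded.make_array();
}
{code}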

FWIW, the fallback scenario is not at all esoteric because the default 
dictionary pagesize limit in the C++ library is 1MB. I think Java uses the 
same default:

https://github.com/apache/parquet-mr/blob/master/parquet-column/src/main/java/org/apache/parquet/column/ParquetProperties.java#L44

I think adding an option to raise the limit to 2GB or so when writing an Arrow 
DictionaryArray would help.
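
For example, a sketch of what that could look like on the write path using the 
existing WriterProperties builder knob (the 2GB figure and the helper name are 
illustrative, and the IO/Result signatures differ a bit across Arrow versions):

{code:cpp}
#include <arrow/api.h>
#include <arrow/io/file.h>
#include <parquet/arrow/writer.h>
#include <parquet/properties.h>

// Write a table with the dictionary page size limit raised far above the 1MB
// default, so the writer is much less likely to fall back to plain encoding
// mid column chunk.
arrow::Status WriteWithLargeDictLimit(const std::shared_ptr<arrow::Table>& table,
                                      const std::string& path) {
  std::shared_ptr<parquet::WriterProperties> props =
      parquet::WriterProperties::Builder()
          .dictionary_pagesize_limit(1LL << 31)  // ~2GB instead of 1MB
          ->build();

  ARROW_ASSIGN_OR_RAISE(auto sink, arrow::io::FileOutputStream::Open(path));
  return parquet::arrow::WriteTable(*table, arrow::default_memory_pool(), sink,
                                    /*chunk_size=*/64 * 1024, props);
}
{code}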

Things are made a bit more complex by the code duplication between 
parquet/column_reader.cc and parquet/arrow/record_reader.cc. I'll see if 
there's something I can do to fix that while I'm working on this.

> [C++] Read Parquet dictionary encoded ColumnChunks directly into an Arrow 
> DictionaryArray
> -----------------------------------------------------------------------------------------
>
>                 Key: ARROW-3772
>                 URL: https://issues.apache.org/jira/browse/ARROW-3772
>             Project: Apache Arrow
>          Issue Type: Improvement
>          Components: C++
>            Reporter: Stav Nir
>            Assignee: Wes McKinney
>            Priority: Major
>              Labels: parquet
>             Fix For: 1.0.0
>
>
> Dictionary-encoded data is very common in Parquet. In the current 
> implementation, parquet-cpp always decodes dictionary-encoded data before 
> creating a plain Arrow array. This process is wasteful, since we could use 
> Arrow's DictionaryArray directly and achieve several benefits:
>  # Smaller memory footprint - both during decoding and in the resulting 
> Arrow table - especially when the dictionary values are large.
>  # Better decoding performance - mostly as a result of the first bullet - 
> fewer memory fetches and fewer allocations.
> I think these benefits could yield significant runtime improvements.
> My direction for the implementation is to read the indices (through the 
> DictionaryDecoder, after RLE decoding) and the values separately into two 
> arrays and create a DictionaryArray from them.
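> A minimal sketch of that direction, assuming the indices and dictionary 
> values have already been decoded into two separate Arrow arrays (the exact 
> FromArrays signature varies across Arrow versions):
> {code:cpp}
> #include <arrow/api.h>
>
> // indices: the RLE-decoded dictionary indices for the column chunk
> // dict_values: the decoded dictionary values for the column chunk
> arrow::Result<std::shared_ptr<arrow::Array>> BuildDictionaryArray(
>     const std::shared_ptr<arrow::Array>& indices,
>     const std::shared_ptr<arrow::Array>& dict_values) {
>   auto type = arrow::dictionary(indices->type(), dict_values->type());
>   return arrow::DictionaryArray::FromArrays(type, indices, dict_values);
> }
> {code}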
> There are some questions to discuss:
>  # Should this be the default behavior for dictionary-encoded data?
>  # Should it be controlled with a parameter in the API?
>  # What should the policy be in case some of the chunks are dictionary 
> encoded and some are not?
> I started implementing this but would like to hear your opinions.


