[ https://issues.apache.org/jira/browse/SPARK-22736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-22736:
---------------------------------
    Labels: bulk-closed  (was: )

> Consider caching decoded dictionaries in VectorizedColumnReader
> ---------------------------------------------------------------
>
>                 Key: SPARK-22736
>                 URL: https://issues.apache.org/jira/browse/SPARK-22736
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.2.1
>            Reporter: Henry Robinson
>            Priority: Major
>              Labels: bulk-closed
>
> {{VectorizedColumnReader.decodeDictionaryIds()}} calls 
> {{dictionary.decodeToX}} for every dictionary ID encountered in a 
> dict-encoded Parquet page.
> The whole point of dictionary encoding is that a) values are repeated within a 
> page and b) the dictionary contains only the values that appear in that page. So 
> we should be able to save some decoding cost by decoding the entire dictionary 
> page once and using the decoded values to populate rows, at the cost of some 
> extra memory (though we could theoretically discard the encoded dictionary 
> afterwards).
> This is particularly true for TIMESTAMP data, which, after SPARK-12297, may 
> require a timezone conversion as part of its decoding step.
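> A minimal sketch of the idea, assuming a LONG-typed column for concreteness; the 
> {{long[]}} "column" stands in for Spark's writable column vector, and the class 
> and method names here are made up for illustration rather than taken from the 
> actual {{VectorizedColumnReader}} API:
> {code:java}
> import org.apache.parquet.column.Dictionary;
> 
> // Sketch: decode every dictionary entry once, then populate rows by array lookup.
> final class CachedLongDictionary {
>   private final long[] decoded;
> 
>   // Any per-value work (e.g. the timezone conversion SPARK-12297 added for
>   // TIMESTAMP) now runs once per distinct value instead of once per row.
>   CachedLongDictionary(Dictionary dictionary) {
>     decoded = new long[dictionary.getMaxId() + 1];
>     for (int id = 0; id < decoded.length; id++) {
>       decoded[id] = dictionary.decodeToLong(id);
>     }
>   }
> 
>   // Populating a batch of rows becomes a plain lookup on the cached values.
>   void populate(long[] column, int[] dictionaryIds, int rowId, int num) {
>     for (int i = rowId; i < rowId + num; i++) {
>       column[i] = decoded[dictionaryIds[i]];
>     }
>   }
> }
> {code}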



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
