[ https://issues.apache.org/jira/browse/DRILL-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15005013#comment-15005013 ]

ASF GitHub Bot commented on DRILL-4053:
---------------------------------------

Github user parthchandra commented on the pull request:

    https://github.com/apache/drill/pull/254#issuecomment-156603019
  
    Your question suggests that Jackson can take care of deserializing into the 
correct Java objects based on version; I may just not have spent enough time 
figuring it out. I'll take a look, but if you have pointers I'll gladly accept 
them. In that case I can go back to the old file name and maintain multiple 
versions.
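
    For reference, a minimal sketch of how Jackson's built-in polymorphic 
deserialization could handle this. The class names (ParquetTableMetadataBase, 
ParquetTableMetadataV1/V2) and the "metadata_version" property are assumptions 
for illustration, not the actual Drill classes:

    // Sketch: Jackson picks the Java subtype from a version property in the
    // JSON, so a single cache file name can carry multiple format versions.
    import com.fasterxml.jackson.annotation.JsonSubTypes;
    import com.fasterxml.jackson.annotation.JsonTypeInfo;
    import com.fasterxml.jackson.databind.ObjectMapper;

    @JsonTypeInfo(use = JsonTypeInfo.Id.NAME,
                  include = JsonTypeInfo.As.PROPERTY,
                  property = "metadata_version")
    @JsonSubTypes({
        @JsonSubTypes.Type(value = ParquetTableMetadataV1.class, name = "v1"),
        @JsonSubTypes.Type(value = ParquetTableMetadataV2.class, name = "v2")
    })
    abstract class ParquetTableMetadataBase { }

    // v1: schema repeated per row group, min and max values always written.
    class ParquetTableMetadataV1 extends ParquetTableMetadataBase { }

    // v2: merged schema, maxValue written only when it equals minValue.
    class ParquetTableMetadataV2 extends ParquetTableMetadataBase { }

    class MetadataCacheReader {
        static ParquetTableMetadataBase read(java.io.File cacheFile)
                throws java.io.IOException {
            // Jackson reads "metadata_version" and instantiates the matching
            // subtype; callers handle both versions through the base type.
            return new ObjectMapper().readValue(cacheFile,
                ParquetTableMetadataBase.class);
        }
    }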


> Reduce metadata cache file size
> -------------------------------
>
>                 Key: DRILL-4053
>                 URL: https://issues.apache.org/jira/browse/DRILL-4053
>             Project: Apache Drill
>          Issue Type: Improvement
>          Components: Metadata
>    Affects Versions: 1.3.0
>            Reporter: Parth Chandra
>            Assignee: Parth Chandra
>             Fix For: 1.4.0
>
>
> The parquet metadata cache file has a fair amount of redundant metadata that 
> causes the size of the cache file to bloat. Two things that we can reduce 
> are:
> 1) The schema is repeated for every row group. We can keep a single merged 
> schema instead (similar to what was discussed for the insert-into 
> functionality).
> 2) The max and min values in the stats are usable for partition pruning only 
> when the two values are the same. We can store only the maxValue, and only 
> when it is equal to the minValue.
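
A minimal sketch of reduction 2, assuming a hypothetical ColumnStats class 
(not the actual Drill cache classes): the maxValue is kept only when it 
equals the minValue, and Jackson's NON_NULL inclusion omits the field 
otherwise, which shrinks the serialized cache file:

    // Sketch: store maxValue only for single-valued columns, since that is
    // the only case partition pruning can use. Names are hypothetical.
    import com.fasterxml.jackson.annotation.JsonInclude;

    @JsonInclude(JsonInclude.Include.NON_NULL)
    class ColumnStats {
        public Object mxValue;  // written only when min == max
        public long nulls;

        static ColumnStats of(Object min, Object max, long nullCount) {
            ColumnStats s = new ColumnStats();
            // Keep the max only if it equals the min; otherwise leave it
            // null so Jackson skips the field entirely on serialization.
            s.mxValue = (min != null && min.equals(max)) ? max : null;
            s.nulls = nullCount;
            return s;
        }
    }

    // Usage: new ObjectMapper().writeValueAsString(ColumnStats.of("a","a",0))
    // yields {"mxValue":"a","nulls":0}, while of("a","b",0) yields {"nulls":0}.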



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
