GitHub user liancheng commented on the pull request:

    https://github.com/apache/spark/pull/6796#issuecomment-114810864
  
    @rtreffer I'm working on improving the compatibility and interoperability of
Spark SQL's Parquet support. The first part is #6617, where I refactored the
schema conversion code so that we can now stick to the most recent Parquet
format spec (`ParquetTypes.scala` is replaced with
`CatalystSchemaConverter.scala`). The schema conversion part of the decimal
precision problem is also handled there ([1] [1], [2] [2]). Would you mind if I
merge that one first and you then rebase this PR on top of it? I think that
would be much easier to work with. Basically, you only need to:
    
    1. Remove the `precision <= ...` part in [this line] [3], and
    2. Always use `FIXED_LEN_BYTE_ARRAY` to store decimals (a sketch follows the
link references below).
    
    [1]: https://github.com/apache/spark/pull/6617/files#diff-a4c01298c63223d113645a31c01141baR370
    [2]: https://github.com/apache/spark/pull/6617/files#diff-a4c01298c63223d113645a31c01141baR118
    [3]: https://github.com/apache/spark/pull/6617/files#diff-a4c01298c63223d113645a31c01141baR377
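
    For concreteness, here's a rough sketch of what step 2 could look like on the
write path. This is not the code from #6617; `minBytesForPrecision` and
`decimalField` are illustrative names, and it assumes parquet-mr's
`org.apache.parquet.schema.Types` builder API:

    ```scala
    import org.apache.parquet.schema.{OriginalType, Type, Types}
    import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName.FIXED_LEN_BYTE_ARRAY

    // Smallest number of bytes whose signed two's-complement range covers every
    // unscaled value representable with the given decimal precision.
    // (Illustrative helper, not necessarily the name used in #6617.)
    def minBytesForPrecision(precision: Int): Int = {
      var numBytes = 1
      while (math.pow(2.0, 8 * numBytes - 1) < math.pow(10.0, precision)) {
        numBytes += 1
      }
      numBytes
    }

    // Build the Parquet field for a decimal column, always backed by
    // FIXED_LEN_BYTE_ARRAY, with no `precision <= ...` special cases.
    def decimalField(name: String, precision: Int, scale: Int): Type =
      Types
        .optional(FIXED_LEN_BYTE_ARRAY)
        .length(minBytesForPrecision(precision))
        .as(OriginalType.DECIMAL)
        .precision(precision)
        .scale(scale)
        .named(name)
    ```

    The point is that the byte array length is derived from the precision alone,
so the same code path handles every decimal instead of branching on precision.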

