[ https://issues.apache.org/jira/browse/IMPALA-7087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16688669#comment-16688669 ]

Tim Armstrong commented on IMPALA-7087:
---------------------------------------

I don't have an objection. Usually I think we should err on the side of safety 
and not implicitly convert things, but I can see valid workflows where you'd 
want to reduce precision.

What does Impala do for decimal columns in text tables with extra precision?

> Impala is unable to read Parquet decimal columns with lower precision/scale 
> than table metadata
> -----------------------------------------------------------------------------------------------
>
>                 Key: IMPALA-7087
>                 URL: https://issues.apache.org/jira/browse/IMPALA-7087
>             Project: IMPALA
>          Issue Type: Sub-task
>          Components: Backend
>            Reporter: Tim Armstrong
>            Assignee: Sahil Takiar
>            Priority: Major
>              Labels: decimal, parquet
>
> This is similar to IMPALA-2515, except that it relates to a different 
> precision/scale in the file metadata rather than just a mismatch in the bytes 
> used to store the data. In many cases we should be able to convert the 
> decimal type on the fly to the higher-precision type.
> {noformat}
> ERROR: File '/hdfs/path/000000_0_x_2' column 'alterd_decimal' has an invalid type length. Expecting: 11 len in file: 8
> {noformat}
> It would be convenient to allow reading Parquet files where the 
> precision/scale in the file can be converted to the precision/scale in the 
> table metadata without loss of precision.


