[ https://issues.apache.org/jira/browse/PHOENIX-3504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15685539#comment-15685539 ]

Josh Mahonin commented on PHOENIX-3504:
---------------------------------------

Hi [~sergey.soldatov]

The behaviour has been in place since PHOENIX-2288 was resolved with this commit:
https://github.com/apache/phoenix/commit/9343743157f1cc03b1b8b815289b6127a30d740f

As I recall, those specific values of (38, 18) for precision and scale were 
chosen since that's what Spark was using as a default.

The patch looks good, although it might be worth retaining the previous bounds 
check as well, e.g.:
{noformat}
if (columnInfo.getPrecision == null || columnInfo.getPrecision < 0)
{noformat}
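
For reference, here's a minimal sketch of how the decimal branch could combine both checks, assuming the precision and scale come through as boxed java.lang.Integer values (the object and helper names below are illustrative, not the actual PhoenixRDD code):
{noformat}
import org.apache.spark.sql.types.{DataType, DecimalType}

object DecimalMappingSketch {
  // Illustrative stand-in for the decimal branch of phoenixTypeToCatalystType;
  // the boxed Integer parameters mimic the precision/scale read from ColumnInfo.
  def decimalCatalystType(precision: java.lang.Integer,
                          scale: java.lang.Integer): DataType =
    if (precision == null || precision < 0) {
      // Default-precision DECIMAL columns report null here, so fall back to
      // the (38, 18) Spark default mentioned above.
      DecimalType(38, 18)
    } else {
      // Defensively treat a null scale as 0 (an assumption for this sketch).
      DecimalType(precision.intValue, if (scale == null) 0 else scale.intValue)
    }
}
{noformat}
With that, a column declared as plain DECIMAL (null precision) would map to DecimalType(38, 18), while DECIMAL(10, 2) would map to DecimalType(10, 2).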



> Spark integration doesn't work with decimal columns that are using default 
> precision
> ------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-3504
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3504
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.8.0
>            Reporter: Sergey Soldatov
>            Assignee: Sergey Soldatov
>         Attachments: PHOENIX-3504-1.patch
>
>
> Not sure when this issue was introduced and whether this code was working 
> correctly before, but in PhoenixRDD.phoenixTypeToCatalystType, the decimal 
> precision check 
> (columnInfo.getPrecision < 0)
> fails for decimal columns that were created with the default precision 
> and scale, because precision is null in that case.
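
For illustration, here's a minimal sketch of the failure mode described above, assuming getPrecision returns a boxed java.lang.Integer that is null for default-precision columns:
{noformat}
object NullPrecisionRepro extends App {
  // Stand-in for columnInfo.getPrecision on a column declared as plain
  // DECIMAL (no explicit precision/scale): the boxed value is null.
  val precision: java.lang.Integer = null

  // Scala has to unbox the Integer to compare it with 0, so this line
  // throws a NullPointerException instead of evaluating to true or false.
  println(precision < 0)
}
{noformat}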



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
