[ https://issues.apache.org/jira/browse/PHOENIX-2288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987440#comment-14987440 ]

Josh Mahonin commented on PHOENIX-2288:
---------------------------------------

Any issues with this PR, [~jamestaylor] [[email protected]]?

I've updated the ignored unit test and verified that the precision and scale
are carried forward from schema creation to Spark.
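
For reference, a check along these lines (a minimal sketch, not necessarily the
exact code in the PR; it assumes a test table TEST_DECIMAL with a column COL1
declared as DECIMAL(9, 3), plus a SQLContext and ZooKeeper quorum from the test
harness) would be:

    import org.apache.spark.sql.types.DecimalType

    // Load the Phoenix table as a DataFrame and confirm that the Catalyst
    // schema keeps the precision/scale declared in the DDL.
    val df = sqlContext.read
      .format("org.apache.phoenix.spark")
      .options(Map("table" -> "TEST_DECIMAL", "zkUrl" -> quorumAddress))
      .load()

    val decimalType = df.schema("COL1").dataType.asInstanceOf[DecimalType]
    assert(decimalType.precision == 9)
    assert(decimalType.scale == 3)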

> Phoenix-Spark: PDecimal precision and scale aren't carried through to Spark 
> DataFrame
> -------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-2288
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2288
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.5.2
>            Reporter: Josh Mahonin
>
> When loading a Spark dataframe from a Phoenix table with a 'DECIMAL' type, 
> the underlying precision and scale aren't carried forward to Spark.
> The Spark catalyst schema converter should load these from the underlying
> column. They appear to be exposed in the ResultSetMetaData, but if there were
> a way to expose them through ColumnInfo, it would be cleaner.
> I'm not sure if Pig has the same issues or not, but I suspect it may.
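
(For illustration only: the mapping the description asks for could look roughly
like the sketch below, using plain JDBC metadata rather than Phoenix's
ColumnInfo; the names here are illustrative, not the actual converter code.)

    import java.sql.{ResultSetMetaData, Types}
    import org.apache.spark.sql.types.{DataType, DecimalType, StringType}

    // Map a DECIMAL/NUMERIC column to Catalyst's DecimalType, carrying the
    // precision and scale reported by the JDBC driver instead of a default.
    def catalystType(md: ResultSetMetaData, col: Int): DataType =
      md.getColumnType(col) match {
        case Types.DECIMAL | Types.NUMERIC =>
          DecimalType(md.getPrecision(col), md.getScale(col))
        case _ =>
          StringType // fallback; the real converter handles every Phoenix type
      }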



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
