Josh Mahonin created PHOENIX-2288:
-------------------------------------

             Summary: Phoenix-Spark: PDecimal precision and scale aren't 
carried through to Spark DataFrame
                 Key: PHOENIX-2288
                 URL: https://issues.apache.org/jira/browse/PHOENIX-2288
             Project: Phoenix
          Issue Type: Bug
    Affects Versions: 4.5.2
            Reporter: Josh Mahonin


When loading a Spark dataframe from a Phoenix table with a 'DECIMAL' type, the 
underlying precision and scale aren't carried forward to Spark.

The Spark Catalyst schema converter should load these from the underlying 
column. They appear to be exposed in ResultSetMetaData, but it would be 
cleaner if there were a way to expose them through ColumnInfo as well.
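As a rough illustration of the fix being suggested, the sketch below shows how 
precision and scale could flow from the column metadata into a Spark-style 
DecimalType rather than an unparameterized default. The DecimalType and 
ColumnInfo classes here are simplified stand-ins for the Spark SQL and Phoenix 
types, not the actual implementations, and the (38, 18) fallback is only an 
assumed maximum-precision default:

```python
# Hypothetical sketch: carrying DECIMAL precision/scale through the
# schema conversion. DecimalType and ColumnInfo below are stand-ins,
# not the real Spark/Phoenix classes.
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class DecimalType:
    # Stand-in for Spark SQL's DecimalType(precision, scale)
    precision: int
    scale: int


@dataclass(frozen=True)
class ColumnInfo:
    # Stand-in for Phoenix's ColumnInfo, extended with the precision
    # and scale that ResultSetMetaData already appears to expose
    name: str
    sql_type: str
    precision: Optional[int] = None
    scale: Optional[int] = None


def to_catalyst_type(col: ColumnInfo) -> DecimalType:
    """Map a DECIMAL column to a parameterized DecimalType."""
    if col.sql_type != "DECIMAL":
        raise NotImplementedError(col.sql_type)
    if col.precision is not None and col.scale is not None:
        # Carry the declared precision/scale through to Spark
        return DecimalType(col.precision, col.scale)
    # Fall back to an assumed maximum-precision default when the
    # metadata doesn't carry precision/scale
    return DecimalType(38, 18)
```

With this shape, a `DECIMAL(10, 2)` column would round-trip as 
`DecimalType(10, 2)` instead of losing its parameters.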

I'm not sure whether the Pig integration has the same issue, but I suspect it 
may.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
