[ https://issues.apache.org/jira/browse/PHOENIX-2288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14968402#comment-14968402 ]
ASF GitHub Bot commented on PHOENIX-2288:
-----------------------------------------
GitHub user navis opened a pull request:
https://github.com/apache/phoenix/pull/124
PHOENIX-2288 Phoenix-Spark: PDecimal precision and scale aren't carried
through to Spark DataFrame
From the JIRA description:
>When loading a Spark dataframe from a Phoenix table with a 'DECIMAL' type,
the underlying precision and scale aren't carried forward to Spark.
>The Spark Catalyst schema converter should load these from the underlying
column. These appear to be exposed in the ResultSetMetaData, but if there were a
way to expose these somehow through ColumnInfo, it would be cleaner.
>I'm not sure if Pig has the same issues or not, but I suspect it may.
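For illustration, here is a minimal sketch (in Scala, the language of the
phoenix-spark module) of what carrying precision/scale into the Catalyst
conversion could look like. The ColumnMeta holder and catalystType function
are hypothetical, not the actual Phoenix API:

    import org.apache.spark.sql.types._

    // Hypothetical holder for one column's JDBC metadata.
    case class ColumnMeta(sqlType: Int, precision: Int, scale: Int)

    // Map a column to a Catalyst type, carrying the declared DECIMAL
    // precision/scale forward instead of Spark's default DecimalType.
    def catalystType(col: ColumnMeta): DataType = col.sqlType match {
      case java.sql.Types.DECIMAL => DecimalType(col.precision, col.scale)
      case java.sql.Types.INTEGER => IntegerType
      case java.sql.Types.VARCHAR => StringType
      case _                      => StringType // catch-all for this sketch
    }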
It seemed enough just for the current usage in the Spark integration. But in
the long term, PDataType should carry meta information such as maxLength,
precision, etc.
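As a sketch of where that metadata is already available today, standard JDBC
ResultSetMetaData exposes the declared precision and scale, which a
ColumnInfo-style holder could capture when the schema is read (the connection
URL and table name below are placeholders):

    import java.sql.DriverManager

    // Placeholder connection URL and table name.
    val conn = DriverManager.getConnection("jdbc:phoenix:localhost")
    val rs = conn.createStatement().executeQuery("SELECT * FROM MY_TABLE LIMIT 0")
    val md = rs.getMetaData
    for (i <- 1 to md.getColumnCount) {
      // getPrecision/getScale are standard JDBC metadata calls.
      println(s"${md.getColumnName(i)}: sqlType=${md.getColumnType(i)}, " +
        s"precision=${md.getPrecision(i)}, scale=${md.getScale(i)}")
    }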
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/navis/phoenix PHOENIX-2288
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/phoenix/pull/124.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #124
----
commit 25fa16bcef4e4fcbc9fcf07d935839ed563a9b52
Author: navis.ryu <[email protected]>
Date: 2015-10-22T02:32:30Z
PHOENIX-2288 Phoenix-Spark: PDecimal precision and scale aren't carried
through to Spark DataFrame
----
> Phoenix-Spark: PDecimal precision and scale aren't carried through to Spark
> DataFrame
> -------------------------------------------------------------------------------------
>
> Key: PHOENIX-2288
> URL: https://issues.apache.org/jira/browse/PHOENIX-2288
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 4.5.2
> Reporter: Josh Mahonin
>
> When loading a Spark dataframe from a Phoenix table with a 'DECIMAL' type,
> the underlying precision and scale aren't carried forward to Spark.
> The Spark Catalyst schema converter should load these from the underlying
> column. These appear to be exposed in the ResultSetMetaData, but if there were
> a way to expose these somehow through ColumnInfo, it would be cleaner.
> I'm not sure if Pig has the same issues or not, but I suspect it may.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)