GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/5833
[SPARK-7299][SQL] Set up Decimal's precision and scale according to table
schema instead of returned BigDecimal
JIRA: https://issues.apache.org/jira/browse/SPARK-7299
When connecting to an Oracle DB through JDBC, the precision and scale of the
`BigDecimal` object returned by `ResultSet.getBigDecimal` do not necessarily
match the table schema reported by `ResultSetMetaData.getPrecision` and
`ResultSetMetaData.getScale`.
So if you insert a value like `19999` into a column of type `NUMBER(12, 2)`,
you get back a `BigDecimal` object with scale 0, while the DataFrame schema has
the correct type `DecimalType(12, 2)`. Thus, after you save the DataFrame to a
Parquet file and read it back, you get the wrong result `199.99` instead of
`19999`.
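For illustration, here is a minimal, self-contained sketch of how the unscaled value gets re-interpreted (plain `java.math.BigDecimal`, no Spark involved; the driver behaviour shown is the one reported above):

```scala
import java.math.BigDecimal

object DecimalScaleMismatch {
  def main(args: Array[String]): Unit = {
    // What the Oracle driver reportedly returns for a NUMBER(12, 2) column
    // holding 19999: a BigDecimal with scale 0.
    val fromDriver = new BigDecimal("19999")
    println(s"driver value: $fromDriver, scale = ${fromDriver.scale}")  // scale = 0

    // If the unscaled value is later interpreted under the schema's scale of 2,
    // 19999 * 10^-2 = 199.99 comes back instead of 19999.00.
    val misread = BigDecimal.valueOf(fromDriver.unscaledValue.longValue, 2)
    println(s"misread value: $misread")                                 // 199.99

    // Setting the scale from the table schema up front keeps the value intact.
    println(s"with schema scale: ${fromDriver.setScale(2)}")            // 19999.00
  }
}
```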
Because this is reported to be problematic only on JDBC connections to an
Oracle DB, it might be difficult to add a test case for it.
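A rough sketch of the idea behind the change (the helper `readDecimal` and its parameters are hypothetical and only meant to show the intent, not the actual patch in `JDBCRDD`): build the Spark `Decimal` from the precision and scale reported by the metadata rather than from whatever scale the returned `BigDecimal` carries.

```scala
import java.sql.ResultSet
import org.apache.spark.sql.types.Decimal

object SchemaAwareDecimal {
  // Hypothetical helper (not the actual patch): convert a column value using the
  // precision and scale from the table schema instead of the driver-provided scale.
  def readDecimal(rs: ResultSet, pos: Int, precision: Int, scale: Int): Decimal = {
    val raw = rs.getBigDecimal(pos + 1)          // may come back with scale 0
    Decimal(BigDecimal(raw), precision, scale)   // force the schema's precision/scale
  }
}
```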
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/viirya/spark-1 jdbc_decimal_precision
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/5833.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #5833
----
commit 5f9da94fc89aa268781674909d5c3d049b6859af
Author: Liang-Chi Hsieh <[email protected]>
Date: 2015-05-01T08:51:40Z
Set up Decimal's precision and scale according to table schema instead of
returned BigDecimal.
----