GitHub user poolis opened a pull request:
https://github.com/apache/spark/pull/10899
[SPARK-12928][SQL] Oracle FLOAT datatype is not properly handled when
reading via JDBC
The contribution is my original work and I license the work to the
project under the project's open source license.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/poolis/spark spark-12928
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/10899.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #10899
----
commit 1b508ae25989bcab9b4d4630c6d368dfd3c5ad19
Author: Greg Michalopoulos <[email protected]>
Date: 2016-01-20T15:53:01Z
Handle FLOAT datatype in the OracleDialect
commit 03cd7edb0504341f7fbfa71efa462b1f56ae37a6
Author: Greg Michalopoulos <[email protected]>
Date: 2016-01-20T18:22:32Z
Added tests for both FLOAT and NUMERIC datatypes that are handled by the
dialect.
commit 1bfb6b31db4528536a02d454b1748213dd092808
Author: poolis <[email protected]>
Date: 2016-01-20T18:55:32Z
Correct comment
commit b3e12cd87399bc30c5b08d5ada52e2997291f4b0
Author: poolis <[email protected]>
Date: 2016-01-25T14:47:26Z
Modified the approach by adding the scale to the field's MetadataBuilder and
using it to determine whether a scale was specified at column creation. This
should catch FLOAT or other datatypes whose scale is undefined, which is what
Spark appears to have trouble with.
----
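As a rough sketch of the idea in the first and last commits above (not the patch itself), a dialect built on Spark's public JdbcDialect extension point could read the scale recorded in the field's MetadataBuilder and map Oracle NUMERIC columns with no usable precision/scale to a concrete Catalyst type. The object name OracleFloatDialect, the "scale" metadata key, and the chosen target decimal type are assumptions for illustration:

    import java.sql.Types

    import org.apache.spark.sql.jdbc.JdbcDialect
    import org.apache.spark.sql.types._

    // Hypothetical dialect sketch (not the PR's code): give Oracle NUMERIC
    // columns without a usable precision/scale an explicit Catalyst type
    // instead of letting the generic JDBC mapping fail.
    object OracleFloatDialect extends JdbcDialect {

      override def canHandle(url: String): Boolean = url.startsWith("jdbc:oracle")

      override def getCatalystType(
          sqlType: Int,
          typeName: String,
          size: Int,
          md: MetadataBuilder): Option[DataType] = {
        if (sqlType == Types.NUMERIC) {
          // Assumes the read path stored the driver-reported scale under a
          // "scale" key in the field metadata (see the sketch further below).
          val metadata = if (md != null) md.build() else Metadata.empty
          val scale = if (metadata.contains("scale")) metadata.getLong("scale") else 0L
          if (size == 0 || scale == -127L) {
            // 0 precision means an unconstrained NUMBER; Oracle's driver reports
            // FLOAT as NUMERIC with a scale of -127. Fall back to a wide decimal.
            Some(DecimalType(DecimalType.MAX_PRECISION, 10))
          } else {
            None // defer to the default JDBC-to-Catalyst mapping
          }
        } else {
          None
        }
      }
    }

Registering such a dialect with JdbcDialects.registerDialect(OracleFloatDialect) before calling read.jdbc(...) would make Spark consult it for Oracle connection URLs.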
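The last commit's change relies on the JDBC read path recording each column's reported scale before the dialect is consulted, so that a scale of 0 can be told apart from "no scale specified". A self-contained sketch of that bookkeeping (the enclosing object, the helper name, and the "scale" key are assumptions that mirror the dialect sketch above):

    import java.sql.ResultSetMetaData

    import org.apache.spark.sql.types.MetadataBuilder

    object JdbcSchemaSketch {
      // Hypothetical helper: while walking ResultSetMetaData for a query's
      // schema, stash the driver-reported scale in the per-field metadata so a
      // dialect can later see values like -127 ("scale never specified").
      def fieldMetadata(rsmd: ResultSetMetaData, column: Int): MetadataBuilder = {
        new MetadataBuilder()
          .putString("name", rsmd.getColumnLabel(column))
          .putLong("scale", rsmd.getScale(column))
      }
    }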
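Along the lines of the second commit, a test can drive the dialect mapping directly instead of going through a live database. The cases below exercise the hypothetical OracleFloatDialect sketched above (they are not the PR's actual test suite) using ScalaTest's AnyFunSuite:

    import java.sql.Types

    import org.apache.spark.sql.types.{DecimalType, MetadataBuilder}
    import org.scalatest.funsuite.AnyFunSuite

    class OracleFloatDialectSuite extends AnyFunSuite {

      test("FLOAT and unconstrained NUMBER map to a wide decimal") {
        // Oracle's driver surfaces FLOAT as NUMERIC with a scale of -127.
        val floatMd = new MetadataBuilder().putLong("scale", -127L)
        assert(OracleFloatDialect.getCatalystType(Types.NUMERIC, "FLOAT", 126, floatMd)
          .contains(DecimalType(38, 10)))

        // An unconstrained NUMBER comes back with 0 precision and no usable scale.
        val numberMd = new MetadataBuilder().putLong("scale", 0L)
        assert(OracleFloatDialect.getCatalystType(Types.NUMERIC, "NUMBER", 0, numberMd)
          .contains(DecimalType(38, 10)))
      }

      test("NUMBER with explicit precision and scale defers to the default mapping") {
        val decimalMd = new MetadataBuilder().putLong("scale", 2L)
        assert(OracleFloatDialect.getCatalystType(Types.NUMERIC, "NUMBER", 10, decimalMd).isEmpty)
      }
    }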