[
https://issues.apache.org/jira/browse/SPARK-20427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163288#comment-17163288
]
Abhijit Das commented on SPARK-20427:
-------------------------------------
[~Kyrdan]
Can you explain your implementation?
Have you used a custom JdbcDialect or the built-in OracleDialect?
> Issue with Spark interpreting Oracle datatype NUMBER
> ----------------------------------------------------
>
> Key: SPARK-20427
> URL: https://issues.apache.org/jira/browse/SPARK-20427
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.1.0
> Reporter: Alexander Andrushenko
> Assignee: Yuming Wang
> Priority: Major
> Fix For: 2.3.0
>
>
> In Oracle there exists the data type NUMBER. When defining a field of type
> NUMBER in a table, the field has two components, precision and scale.
> For example, NUMBER(p,s) has precision p and scale s.
> Precision can range from 1 to 38.
> Scale can range from -84 to 127.
> When reading such a field, Spark can create decimals with precision exceeding
> 38. In our case it created fields with precision 44,
> calculated as the sum of the precision (in our case 34 digits) and the scale (10):
> "...java.lang.IllegalArgumentException: requirement failed: Decimal precision
> 44 exceeds max precision 38...".
> As a result, a data frame read from a table in one schema could not be
> inserted into the identical table in another schema.
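The arithmetic behind the quoted error can be illustrated with a minimal sketch. The helper below is hypothetical, not Spark's actual code; it only mirrors the clamping idea (Spark SQL caps decimals at precision 38, and the fix shipped in 2.3.0 bounds the JDBC-derived precision and scale rather than rejecting them):

```python
MAX_PRECISION = 38  # Spark SQL's DecimalType maximum

def bounded_decimal(precision: int, scale: int) -> tuple:
    """Clamp a (precision, scale) pair to Spark's supported range.

    Hypothetical illustration of the bounding idea; Spark's real logic
    lives in its JDBC read path on the JVM side.
    """
    return (min(precision, MAX_PRECISION), min(scale, MAX_PRECISION))

# Oracle NUMBER with precision 34 and scale 10: naively summing the two
# gives 44, which triggers "Decimal precision 44 exceeds max precision 38".
naive_precision = 34 + 10                    # 44 -> rejected before the fix
print(bounded_decimal(naive_precision, 10))  # (38, 10) -> accepted
```

With the bounded mapping, the same table can be read and written back without the IllegalArgumentException, at the cost of possibly truncating digits that exceed the 38-digit cap.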
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]