[ https://issues.apache.org/jira/browse/SPARK-7196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14522875#comment-14522875 ]

Ken Geis commented on SPARK-7196:
---------------------------------

My table has

TOTAL_DIRECT_COST_TOTAL NUMBER(12, 2)

I don't understand your question about defaultDataSourceName; I'm not familiar 
with it. I've pasted almost the entire script, except for setting the Oracle 
JDBC URL and putting the Oracle JDBC driver on the SPARK_CLASSPATH.

> decimal precision lost when loading DataFrame from JDBC
> -------------------------------------------------------
>
>                 Key: SPARK-7196
>                 URL: https://issues.apache.org/jira/browse/SPARK-7196
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.3.1
>            Reporter: Ken Geis
>            Assignee: Liang-Chi Hsieh
>             Fix For: 1.3.2, 1.4.0
>
>
> I have a decimal database field that is defined as 10.2 (i.e. ##########.##). 
> When I load it into Spark via sqlContext.jdbc(..), the type of the 
> corresponding field in the DataFrame is DecimalType, with precisionInfo None. 
> Because of that loss of precision information, SPARK-4176 is triggered when I 
> try to .saveAsTable(..).
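
For reference, a minimal sketch of the reproduction described in the quoted issue (Spark 1.3.x Scala API; the JDBC URL, table names, and column are illustrative placeholders, not taken from the actual script):

```scala
// Sketch only: assumes a SQLContext `sqlContext` and an Oracle JDBC driver
// on the classpath. Names below are hypothetical.
val df = sqlContext.jdbc("jdbc:oracle:thin:@//dbhost:1521/svc", "SOME_TABLE")

// A NUMBER(12, 2) column comes back as DecimalType with no precision/scale
// (precisionInfo = None) in the DataFrame schema:
df.printSchema()

// Because the precision information was lost, writing the DataFrame back
// out trips the unlimited-precision-decimal problem tracked as SPARK-4176:
df.saveAsTable("some_table_copy")
```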



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]