a-shkarupin commented on a change in pull request #23456: [SPARK-26538][SQL] Set default precision and scale for elements of postgres numeric array
URL: https://github.com/apache/spark/pull/23456#discussion_r246414756
 
 

 ##########
 File path: 
sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala
 ##########
 @@ -60,7 +60,12 @@ private object PostgresDialect extends JdbcDialect {
     case "bytea" => Some(BinaryType)
     case "timestamp" | "timestamptz" | "time" | "timetz" => Some(TimestampType)
     case "date" => Some(DateType)
-    case "numeric" | "decimal" => Some(DecimalType.bounded(precision, scale))
+    case "numeric" | "decimal" => if (precision > 0) {
 
 Review comment:
   Postgres [doc](https://www.postgresql.org/docs/9.6/datatype-numeric.html) 
says that 
   
   > The precision must be positive, the scale zero or positive. 
   
   For a `numeric` column declared without an explicit precision, the Postgres JDBC driver returns 0 for both precision and scale.
   The condition proposed in the linked ticket and currently used [here](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala#L230) was roughly `precision > 0 || scale > 0`, but I cannot come up with a valid case that has precision <= 0 while scale > 0.
   Is there another case where we would have a decimal with precision 0?
   Could someone explain?
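   For illustration, the fallback under discussion can be sketched standalone. This is a hypothetical helper, not the actual `PostgresDialect` code; `DefaultPrecision`/`DefaultScale` stand in for Spark's `DecimalType.SYSTEM_DEFAULT` bounds (38, 18), and the clamping mirrors what `DecimalType.bounded` does:

   ```scala
   // Sketch only: when the Postgres JDBC driver reports 0 for both precision
   // and scale (an unconstrained "numeric"), fall back to default bounds
   // instead of producing an invalid DecimalType(0, 0).
   object NumericMapping {
     // Assumed to match Spark's DecimalType.SYSTEM_DEFAULT (38, 18).
     val DefaultPrecision = 38
     val DefaultScale = 18

     /** Returns the (precision, scale) to use for a reported numeric column. */
     def boundedDecimal(precision: Int, scale: Int): (Int, Int) =
       if (precision > 0) (precision.min(DefaultPrecision), scale)
       else (DefaultPrecision, DefaultScale) // driver reported 0/0

     def main(args: Array[String]): Unit = {
       println(boundedDecimal(10, 2)) // constrained numeric(10,2) keeps its bounds
       println(boundedDecimal(0, 0))  // unconstrained numeric falls back to defaults
     }
   }
   ```

   Note the `precision > 0` guard alone suffices here because, per the Postgres docs quoted above, scale cannot be positive while precision is zero or negative.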

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
