saikocat commented on a change in pull request #31252:
URL: https://github.com/apache/spark/pull/31252#discussion_r560697129



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
##########
@@ -306,13 +306,14 @@ object JdbcUtils extends Logging {
       }
       val metadata = new MetadataBuilder()
       // SPARK-33888
-      // - include scale in metadata for only DECIMAL & NUMERIC
+      // - include scale in metadata for only DECIMAL & NUMERIC as well as ARRAY (for Postgres)

Review comment:
       We can do that for simplification, but then we need to fix the tests for the rest of the dialects, because it adds `{"scale": 0}` to every column's metadata and the existing tests fail (previously the metadata wasn't built in `getSchema()`, so the JSON metadata wasn't generated).
   
   That's why I chose to set it only for DECIMAL and NUMERIC. The problem also eluded me because the test for arrays in the PostgreSQL dialect calls `toCatalystType` directly instead of going through the code path that uses the metadata. Sorry, I'm on my phone, so it's hard for me to link the exact line.
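   
   For illustration, here is a minimal sketch of the conditional metadata build described above, in the spirit of the diff. The helper name `buildMetadata` and its parameters are made up for this sketch; the real logic lives inline in `getSchema()`:
   
   ```scala
   import java.sql.Types
   import org.apache.spark.sql.types.MetadataBuilder
   
   // Sketch only: attach a "scale" entry just for the types that need it.
   // Doing it unconditionally would add {"scale": 0} to every column's
   // metadata JSON and break the existing dialect tests.
   def buildMetadata(dataType: Int, fieldScale: Int): MetadataBuilder = {
     val metadata = new MetadataBuilder()
     dataType match {
       case Types.DECIMAL | Types.NUMERIC | Types.ARRAY =>
         metadata.putLong("scale", fieldScale)
       case _ => // leave metadata untouched for other JDBC types
     }
     metadata
   }
   ```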






