skestle commented on a change in pull request #31252:
URL: https://github.com/apache/spark/pull/31252#discussion_r560703040



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
##########
@@ -306,13 +306,14 @@ object JdbcUtils extends Logging {
       }
       val metadata = new MetadataBuilder()
       // SPARK-33888
-      // - include scale in metadata for only DECIMAL & NUMERIC
+      // - include scale in metadata for only DECIMAL & NUMERIC as well as ARRAY (for Postgres)
       // - include TIME type metadata
       // - always build the metadata
       dataType match {
         // scalastyle:off
         case java.sql.Types.NUMERIC => metadata.putLong("scale", fieldScale)
         case java.sql.Types.DECIMAL => metadata.putLong("scale", fieldScale)
+        case java.sql.Types.ARRAY   => metadata.putLong("scale", fieldScale) // PostgresDialect.scala wants this information

Review comment:
       The only way `fieldScale` can make it into the dialect is via the field metadata.
   It was always added prior to the previous commit (which I agree with on a fundamental level).
   
https://github.com/skestle/spark/commit/0b647fe69cf201b4dcbc0f4dfc0eb504a523571d#diff-c3859e97335ead4b131263565c987d877bea0af3adbd6c5bf2d3716768d2e083
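
   Not part of the change itself, just to illustrate the data flow I mean: a minimal sketch of how a dialect could read the scale back out of the metadata that `JdbcUtils` populates above. The helper name `arrayElementType` and its `precision` parameter are made up for the example; the actual PostgresDialect code may differ.

   ```scala
   import org.apache.spark.sql.types._

   // Hypothetical helper: recover the "scale" value that JdbcUtils stored in the
   // field metadata and use it to build the element type of a Postgres array column.
   def arrayElementType(typeName: String, precision: Int, md: MetadataBuilder): Option[DataType] = {
     val meta = md.build()
     // Only present if JdbcUtils put it there (the line added in this diff for ARRAY)
     val scale = if (meta.contains("scale")) meta.getLong("scale").toInt else 0
     typeName match {
       case "numeric" | "decimal" => Some(DecimalType.bounded(precision, scale))
       case _                     => None
     }
   }

   // e.g. for a Postgres numeric[] column:
   // arrayElementType("numeric", 38, md).map(ArrayType(_))
   ```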



