saikocat commented on a change in pull request #31252:
URL: https://github.com/apache/spark/pull/31252#discussion_r560796346



##########
File path: 
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
##########
@@ -306,13 +306,14 @@ object JdbcUtils extends Logging {
       }
       val metadata = new MetadataBuilder()
       // SPARK-33888
-      // - include scale in metadata for only DECIMAL & NUMERIC
+      // - include scale in metadata for only DECIMAL & NUMERIC as well as ARRAY (for Postgres)

Review comment:
       But the only way `fieldScale` can make it into the dialect is via the field metadata, so it is a chicken-and-egg problem.
   
   EDIT: PostgreSQL uses the MetadataBuilder to get the scale for, e.g., an array[][] of type numeric: the dataType is `ARRAY` but the typeName is `_numeric` (note the leading underscore, which is PostgreSQL-specific). The MySQL dialect, by contrast, puts other kinds of info into the metadata (like `put("binarylong")`). The use cases differ.
   
   We might have to change the interface somehow to let the ResultSetMetaData be passed, or initialized, into the dialect.
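To illustrate the point above, here is a hedged sketch (not Spark's actual code; `Metadata`/`MetadataBuilder` below are minimal stand-ins for Spark's classes, and the `"scale"` key and helper names are illustrative): the scale is recorded into field metadata while the schema is read, and a dialect can only recover it from that metadata later.

```scala
import scala.collection.mutable

// Minimal stand-ins for Spark's Metadata / MetadataBuilder, for illustration only.
final case class Metadata(values: Map[String, Any]) {
  def contains(key: String): Boolean = values.contains(key)
  def getLong(key: String): Long = values(key).asInstanceOf[Long]
}

final class MetadataBuilder {
  private val values = mutable.Map.empty[String, Any]
  def putLong(key: String, value: Long): this.type = { values(key) = value; this }
  def build(): Metadata = Metadata(values.toMap)
}

object ScaleViaMetadata {
  // "JdbcUtils" side: while walking ResultSetMetaData, record the scale into
  // the field metadata (hard-coded to 2 here purely for illustration).
  def readSchemaField(): Metadata =
    new MetadataBuilder().putLong("scale", 2L).build()

  // "Dialect" side: e.g. when mapping a Postgres `_numeric` array column,
  // the dialect's only channel to the scale is the field metadata.
  def scaleFromMetadata(md: Metadata): Int =
    if (md.contains("scale")) md.getLong("scale").toInt else 0

  def main(args: Array[String]): Unit = {
    val md = readSchemaField()
    println(scaleFromMetadata(md)) // prints 2
  }
}
```

Passing the ResultSetMetaData itself into the dialect, as suggested above, would remove this indirection, at the cost of widening the dialect interface.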




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
