saikocat commented on a change in pull request #31252:
URL: https://github.com/apache/spark/pull/31252#discussion_r560730953
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
##########
@@ -306,13 +306,14 @@ object JdbcUtils extends Logging {
}
val metadata = new MetadataBuilder()
// SPARK-33888
- // - include scale in metadata for only DECIMAL & NUMERIC
+ // - include scale in metadata for only DECIMAL & NUMERIC as well as ARRAY (for Postgres)
Review comment:
Alright, let me elaborate a bit more so you two (cc: @skestle) can decide which approach to go for. I'm leaning towards the current approach of matching on the data type before adding the scale metadata, because fixing the failing tests would be more difficult and it would make the JDBCSuite test `"jdbc data source shouldn't have unnecessary metadata in its schema"` lose some of its meaning. A rough sketch of what I mean is below.
So, in order to push "logical_time_type" into the metadata, I had to force the metadata to be built along with the field type, as done here:
https://github.com/skestle/spark/commit/0b647fe69cf201b4dcbc0f4dfc0eb504a523571d#diff-c3859e97335ead4b131263565c987d877bea0af3adbd6c5bf2d3716768d2e083R323
whereas previously the metadata could be built by the dialect or ignored entirely (the default). See the dialect sketch below.
This will cause 3 tests in `JDBCSuite` to fail because of a schema mismatch (an extra `{"scale": 0}` entry is always present in the metadata, illustrated in the snippet below):
1. `"jdbc API support custom schema"`
2. `"jdbc API custom schema DDL-like strings."`
3. `"jdbc data source shouldn't have unnecessary metadata in its schema"`
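Roughly what the mismatch looks like (the column name and type here are made up, not the actual JDBCSuite fixtures):

```scala
// Once metadata is always built, even a plain INTEGER column ends up carrying
// {"scale":0}, and StructField equality includes metadata, so schema
// assertions against a metadata-free expected schema fail.
import org.apache.spark.sql.types._

val expected = StructField("ID", IntegerType, nullable = true) // empty metadata
val actual = StructField("ID", IntegerType, nullable = true,
  new MetadataBuilder().putLong("scale", 0).build())

assert(expected != actual) // this inequality is what trips the three tests
```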