Eugene-Mark commented on PR #36499: URL: https://github.com/apache/spark/pull/36499#issuecomment-1129099918
@srowen I'm not a Teradata expert either; I just invoke Teradata's API from Spark and ran into this issue, and I couldn't find any documentation explaining it on the Teradata side. I tried printing the metadata in [JdbcUtils.scala -> getSchema](https://github.com/apache/spark/blob/191e535b975e5813719d3143797c9fcf86321368/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala#L277), which shows that `fieldScale` is already `0` before it is passed to the downstream invoker. The metadata comes from the `ResultSet` produced right after Spark executes `statement.executeQuery` with the query `s"SELECT * FROM $table WHERE 1=0"`. Until we find a better explanation of what happens on the Teradata side, it may be good enough to give the user a decimal with a default scale instead of a rounded integer.
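To make the proposal concrete, here is a minimal sketch (not Spark's actual code; `DecimalFallback`, `DefaultPrecision`, and `DefaultScale` are hypothetical names, and the default dimensions chosen here are assumptions for illustration) of the idea: when the JDBC driver reports a `DECIMAL`/`NUMERIC` column with scale `0`, substitute a default scale so the fractional part is preserved instead of rounded away.

```scala
import java.sql.Types

// Hypothetical sketch of the suggested fallback; Spark's real mapping
// lives in JdbcUtils.getSchema and the JDBC dialects.
object DecimalFallback {
  val DefaultPrecision = 38 // assumed default, for illustration only
  val DefaultScale = 18     // assumed default, for illustration only

  // Returns the (precision, scale) to use for a column, given what the
  // driver reported in the ResultSet metadata.
  def decimalDims(sqlType: Int, precision: Int, scale: Int): (Int, Int) =
    sqlType match {
      case Types.DECIMAL | Types.NUMERIC if scale == 0 =>
        // The driver reported scale 0 (as observed in getSchema's
        // metadata for Teradata); keep a fractional part rather than
        // effectively rounding values to whole numbers.
        (DefaultPrecision, DefaultScale)
      case _ =>
        // Trust the driver-reported dimensions otherwise.
        (precision, scale)
    }
}
```

For example, a Teradata `NUMBER` column that comes back as `NUMERIC` with precision 10 and scale 0 would map to `(38, 18)` under this sketch, while a column reporting a non-zero scale is left untouched.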
