Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/20023
Following ANSI SQL compliance sounds good to me. However, many details are
vendor-specific. That means query results can still vary across systems even
if we are 100% ANSI SQL compliant.
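To make that concrete, here is a small sketch (mine, not from the PR) of one
such case; the exact behavior of other engines should of course be verified
rather than taken from this comment:

```scala
// Minimal sketch of a coercion case whose outcome is engine-specific:
// Spark SQL inserts an implicit cast so the comparison succeeds, while a
// stricter engine (e.g. PostgreSQL comparing a text column to an integer)
// rejects it unless an explicit cast is written.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("coercion-demo").getOrCreate()
import spark.implicits._

Seq("2", "10").toDF("s").createOrReplaceTempView("t")

// Spark resolves this through implicit type coercion; other systems may error out.
spark.sql("SELECT s, s > 1 FROM t").show()
```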
To avoid frequently introducing behavior-breaking changes, we can also
introduce a new mode `strict` for `spark.sql.typeCoercion.mode`. (Hive is also
not 100% ANSI SQL compliant.) Instead of inventing a completely new rule set,
we can try to follow one of the mainstream open-source databases, for example
Postgres.
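A rough sketch of how that switch might look from the user's side, assuming
the key `spark.sql.typeCoercion.mode` and the `strict` value proposed above
were adopted (neither exists in Spark today, so the names are illustrative
only):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()

// Hypothetical config from this discussion, not an existing Spark setting:
// the default would keep today's coercion rules, while "strict" would follow
// a Postgres-like rule set, so existing workloads are not silently broken.
spark.conf.set("spark.sql.typeCoercion.mode", "strict")

// Under the strict mode, a query relying on a loose implicit cast such as
// SELECT '2' > 1 would fail analysis instead of returning a coerced result.
```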
Before introducing the new mode, we first need to understand the differences
between Spark SQL and the other systems. That is why we need to write the
test cases first; then we can run them against different systems. This PR
clearly shows that the current test cases do not cover scenarios 2 and 3.
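For illustration, a minimal sketch of that cross-check, assuming a reachable
PostgreSQL instance and its JDBC driver on the classpath (URL and credentials
below are placeholders): run the same statement through Spark and through the
other system, then compare the answers.

```scala
import java.sql.DriverManager
import org.apache.spark.sql.SparkSession

// A coercion-sensitive statement that is syntactically valid in both systems.
val query = "SELECT '2' > 1"

// Spark's answer.
val spark = SparkSession.builder().master("local[*]").getOrCreate()
val sparkAnswer = spark.sql(query).first().getBoolean(0)

// PostgreSQL's answer for the same statement (connection details are placeholders).
val conn = DriverManager.getConnection("jdbc:postgresql://localhost/testdb", "user", "pass")
val rs = conn.createStatement().executeQuery(query)
rs.next()
val pgAnswer = rs.getBoolean(1)
conn.close()

println(s"Spark: $sparkAnswer, Postgres: $pgAnswer")
```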