HyukjinKwon commented on a change in pull request #33638:
URL: https://github.com/apache/spark/pull/33638#discussion_r683062290
##########
File path: docs/sql-ref-ansi-compliance.md
##########
@@ -257,6 +257,15 @@ The behavior of some SQL operators can be different under ANSI mode (`spark.sql.
- `map_col[key]`: This operator throws `NoSuchElementException` if the key does not exist in the map.
- `GROUP BY`: aliases in a select list cannot be used in GROUP BY clauses. Each column referenced in a GROUP BY clause shall unambiguously reference a column of the table resulting from the FROM clause.
+### Special functions for using the ANSI dialect
+
+After turning on ANSI mode, if you still want some of your SQL operations to return `NULL` on errors instead of throwing exceptions, as in Spark's default behavior, you can use the following functions (see the example below).
+ - `try_cast`: identical to `CAST`, except that it returns a `NULL` result instead of throwing an exception on a runtime error.
+ - `try_add`: identical to the add operator `+`, except that it returns a `NULL` result instead of throwing an exception on integral value overflow.
+ - `try_divide`: identical to the division operator `/`, except that it returns a `NULL` result instead of throwing an exception on division by 0.
+
+Note that the behavior of these expressions does not depend on the configuration `spark.sql.ansi.enabled`.
Review comment:
I think we can maybe remove this ...
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]