HyukjinKwon commented on a change in pull request #33638:
URL: https://github.com/apache/spark/pull/33638#discussion_r683061928
##########
File path: docs/sql-ref-ansi-compliance.md
##########
@@ -257,6 +257,15 @@ The behavior of some SQL operators can be different under ANSI mode (`spark.sql.
- `map_col[key]`: This operator throws `NoSuchElementException` if key does not exist in map.
- `GROUP BY`: aliases in a select list can not be used in GROUP BY clauses. Each column referenced in a GROUP BY clause shall unambiguously reference a column of the table resulting from the FROM clause.
+### Special functions for using the ANSI dialect
+
+After turning ANSI mode on, if you expect some of your SQL operations to not throw exceptions on errors, as Spark's default behavior, you can use the following functions.
Review comment:
How about this?
```suggestion
When ANSI mode is on, Spark throws exceptions for invalid operations. You can use the following functions to suppress such exceptions, as if ANSI mode were disabled.
```
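To make the suggested wording concrete, here is a hedged sketch of how such error-suppressing functions behave under ANSI mode. This is illustrative only, not part of the PR; the exact function set (e.g. `try_divide`, `try_cast`) depends on the Spark version in use.

```sql
-- Illustrative sketch (assumption: `try_divide` and `try_cast` are available
-- in this Spark version). With ANSI mode on, the plain operations throw,
-- while the try_* variants return NULL instead.
SET spark.sql.ansi.enabled = true;

SELECT 1 / 0;              -- ANSI mode: throws an arithmetic exception
SELECT try_divide(1, 0);   -- returns NULL instead of throwing

SELECT CAST('abc' AS INT);     -- ANSI mode: throws on the invalid cast
SELECT try_cast('abc' AS INT); -- returns NULL instead of throwing
```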
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]