cloud-fan commented on a change in pull request #30593:
URL: https://github.com/apache/spark/pull/30593#discussion_r535410969



##########
File path: docs/sql-ref-ansi-compliance.md
##########
@@ -21,7 +21,8 @@ license: |
 
 Since Spark 3.0, Spark SQL introduces two experimental options to comply with the SQL standard: `spark.sql.ansi.enabled` and `spark.sql.storeAssignmentPolicy` (See a table below for details).
 
-When `spark.sql.ansi.enabled` is set to `true`, Spark SQL follows the standard in basic behaviours (e.g., arithmetic operations, type conversion, SQL functions and SQL parsing).
+When `spark.sql.ansi.enabled` is set to `true`, Spark SQL uses an ANSI compliant dialect instead of being Hive compliant. For example, Spark will throw an exception at runtime instead of returning null results if the inputs to a SQL operator/function are invalid. Some ANSI dialect features may not be from the ANSI SQL standard directly, but their behaviors align with ANSI SQL's style.
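
As a quick illustration of the behavior change this paragraph describes (a sketch for the `spark-sql` shell; the exact exception class raised varies across Spark versions):

```sql
-- With ANSI mode on, an invalid string-to-int cast fails at runtime.
SET spark.sql.ansi.enabled = true;
SELECT CAST('not_a_number' AS INT);  -- throws a runtime exception

-- With ANSI mode off (the default in Spark 3.x), the same cast returns NULL.
SET spark.sql.ansi.enabled = false;
SELECT CAST('not_a_number' AS INT);  -- returns NULL
```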

Review comment:
       `may not be` -> `may not`



