cloud-fan commented on code in PR #37631:
URL: https://github.com/apache/spark/pull/37631#discussion_r954000072
##########
docs/sql-migration-guide.md:
##########
@@ -28,6 +28,7 @@ license: |
- Since Spark 3.4, v1 database, table, permanent view and function
identifiers will include 'spark_catalog' as the catalog name if the database is
defined, e.g. a table identifier will be: `spark_catalog.default.t`. To restore
the legacy behavior, set `spark.sql.legacy.v1IdentifierNoCatalog` to `true`.
- Since Spark 3.4, when ANSI SQL mode (configuration
`spark.sql.ansi.enabled`) is on, Spark SQL always returns a NULL result when
getting a map value with a non-existing key. In Spark 3.3 or earlier, an
error is thrown.
- Since Spark 3.4, the SQL CLI `spark-sql` does not print the prefix `Error
in query:` before the error message of `AnalysisException`.
+ - Since Spark 3.4, `split` function ignores trailing empty strings when
`regex` parameter is empty and `limit` parameter is not specified or set to 0
or -1.
Review Comment:
```suggestion
- Since Spark 3.4, `split` function ignores trailing empty strings when
`regex` parameter is empty and `limit` parameter is not specified or set to a
non-positive number.
```
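For context, here is a minimal Python sketch of the "ignore trailing empty strings" semantics the migration note describes. This is illustrative only, not Spark's implementation; the pre-3.4 result `["a", "b", "c", ""]` shown below is an assumption based on Java's `Pattern.split` keeping trailing empty strings when the limit is negative, not something stated in this comment.

```python
def drop_trailing_empty(parts):
    # Drop trailing empty strings from a split result, mirroring the
    # Spark 3.4 behavior described in the migration note above
    # (illustrative helper only; not Spark's actual code).
    while parts and parts[-1] == "":
        parts.pop()
    return parts

# Hypothetical pre-3.4 result of split('abc', '') with a non-positive limit:
old_result = ["a", "b", "c", ""]
print(drop_trailing_empty(old_result))  # trailing "" removed: ['a', 'b', 'c']
```

Under this sketch, only trailing empty strings are dropped; empty strings in the middle of the result are kept.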
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]