AngersZhuuuu commented on a change in pull request #30202:
URL: https://github.com/apache/spark/pull/30202#discussion_r515933291
##########
File path: docs/sql-migration-guide.md
##########
@@ -52,7 +52,7 @@ license: |
- In Spark 3.1, the `schema_of_json` and `schema_of_csv` functions return
the schema in the SQL format in which field names are quoted. In Spark 3.0, the
function returns a catalog string without field quoting and in lower case.
- - In Spark 3.1, when `spark.sql.legacy.transformationPadNullWhenValueLessThenSchema` is true, Spark
will pad NULL values when the script transformation's output value size is less than the
schema size in default-serde mode (script transformation with row format of `ROW
FORMAT DELIMITED`). If false, Spark will keep the original behavior and throw
`ArrayIndexOutOfBoundsException`.
+ - In Spark 3.1, when the script transformation output's value size is less than the
schema size in default-serde mode (script transformation with row format of `ROW
FORMAT DELIMITED`), Spark will pad NULL values to supplement the data. In Spark 3.0
or earlier, Spark does not pad and throws `ArrayIndexOutOfBoundsException`.
To restore the behavior before Spark 3.1, you can set
`spark.sql.legacy.transformationNotPadNullToSupplementData.enabled` to `true`.
Review comment:
So, what should I do next? Personally, I think we need to let users know about
this change.
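For reference, here is a minimal sketch of the behavior under discussion. It assumes a running `SparkSession` named `spark` and a hypothetical two-column table `src`; the `cut -f1` script and the table name are illustrative, not part of this PR.

```scala
// `cut -f1` emits only the first tab-delimited field per row, so the script
// output has fewer values than the declared two-column output schema.
spark.sql("""
  SELECT TRANSFORM(key, value)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    USING 'cut -f1'
    AS (key STRING, value STRING)
  FROM src
""").show()
// Spark 3.0 and earlier: throws ArrayIndexOutOfBoundsException.
// With this change in Spark 3.1: the missing `value` column is padded with NULL,
// unless the legacy flag named in the diff above is set to `true`.
```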
##########
File path: docs/sql-migration-guide.md
##########
@@ -52,7 +52,7 @@ license: |
- In Spark 3.1, the `schema_of_json` and `schema_of_csv` functions return
the schema in the SQL format in which field names are quoted. In Spark 3.0, the
function returns a catalog string without field quoting and in lower case.
- - In Spark 3.1, when `spark.sql.legacy.transformationPadNullWhenValueLessThenSchema` is true, Spark
will pad NULL values when the script transformation's output value size is less than the
schema size in default-serde mode (script transformation with row format of `ROW
FORMAT DELIMITED`). If false, Spark will keep the original behavior and throw
`ArrayIndexOutOfBoundsException`.
+ - In Spark 3.1, when the script transformation output's value size is less than the
schema size in default-serde mode (script transformation with row format of `ROW
FORMAT DELIMITED`), Spark will pad NULL values to supplement the data. In Spark 3.0
or earlier, Spark does not pad and throws `ArrayIndexOutOfBoundsException`.
To restore the behavior before Spark 3.1, you can set
`spark.sql.legacy.transformationNotPadNullToSupplementData.enabled` to `true`.
Review comment:
All right, we can just revert the last PR.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]