asfgit closed pull request #23465: [MINOR][DOC] Fix typos in the SQL migration
guide
URL: https://github.com/apache/spark/pull/23465
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:
diff --git a/docs/sql-migration-guide-upgrade.md
b/docs/sql-migration-guide-upgrade.md
index 7e6a0c097d242..0fcdd420bcfe3 100644
--- a/docs/sql-migration-guide-upgrade.md
+++ b/docs/sql-migration-guide-upgrade.md
@@ -17,7 +17,7 @@ displayTitle: Spark SQL Upgrading Guide
- Since Spark 3.0, the `from_json` function supports two modes -
`PERMISSIVE` and `FAILFAST`. The modes can be set via the `mode` option. The
default mode became `PERMISSIVE`. In previous versions, the behavior of `from_json`
did not conform to either `PERMISSIVE` or `FAILFAST`, especially in processing
of malformed JSON records. For example, the JSON string `{"a" 1}` with the
schema `a INT` is converted to `null` by previous versions but Spark 3.0
converts it to `Row(null)`.
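The two modes can be illustrated with a plain-Python sketch (this is an analogy, not Spark's implementation; the function name and shape are hypothetical, assuming a schema given as a list of field names):

```python
import json

def from_json_sketch(s, fields, mode="PERMISSIVE"):
    """Illustrative sketch of PERMISSIVE vs FAILFAST handling of a JSON record.

    fields: ordered field names standing in for the schema.
    Returns a tuple standing in for a Row.
    """
    try:
        obj = json.loads(s)
    except json.JSONDecodeError:
        if mode == "FAILFAST":
            # FAILFAST: a malformed record aborts parsing.
            raise
        # PERMISSIVE: a malformed record becomes a row of nulls, mirroring
        # Spark 3.0 returning Row(null) for `{"a" 1}` with schema `a INT`.
        return tuple(None for _ in fields)
    return tuple(obj.get(f) for f in fields)
```

For example, `from_json_sketch('{"a" 1}', ["a"])` yields `(None,)` in the default `PERMISSIVE` mode, while `mode="FAILFAST"` raises on the same input.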
- - In Spark version 2.4 and earlier, the `from_json` function produces
`null`s for JSON strings and JSON datasource skips the same independetly of its
mode if there is no valid root JSON token in its input (` ` for example). Since
Spark 3.0, such input is treated as a bad record and handled according to
specified mode. For example, in the `PERMISSIVE` mode the ` ` input is
converted to `Row(null, null)` if specified schema is `key STRING, value INT`.
+ - In Spark version 2.4 and earlier, the `from_json` function produces
`null`s for JSON strings and JSON datasource skips the same independently of
its mode if there is no valid root JSON token in its input (` ` for example).
Since Spark 3.0, such input is treated as a bad record and handled according to
specified mode. For example, in the `PERMISSIVE` mode the ` ` input is
converted to `Row(null, null)` if specified schema is `key STRING, value INT`.
- The `ADD JAR` command previously returned a result set with the single
value 0. It now returns an empty result set.
@@ -27,21 +27,21 @@ displayTitle: Spark SQL Upgrading Guide
- In Spark version 2.4 and earlier, float/double -0.0 is semantically equal
to 0.0, but users can still distinguish them via `Dataset.show`,
`Dataset.collect` etc. Since Spark 3.0, float/double -0.0 is replaced by 0.0
internally, and users can't distinguish them any more.
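The signed-zero distinction being removed here is plain IEEE-754 behavior, observable in any language; a short Python illustration (not Spark code):

```python
import math
import struct

# 0.0 and -0.0 compare equal, yet remain distinguishable - the Spark 2.4
# situation the guide describes. Spark 3.0 normalizes -0.0 to 0.0 internally.
assert -0.0 == 0.0
assert math.copysign(1.0, -0.0) == -1.0            # the sign bit is still visible
assert struct.pack(">d", -0.0) != struct.pack(">d", 0.0)  # distinct bit patterns
```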
- - In Spark version 2.4 and earlier, users can create a map with duplicated
keys via built-in functions like `CreateMap`, `StringToMap`, etc. The behavior
of map with duplicated keys is undefined, e.g. map look up respects the
duplicated key appears first, `Dataset.collect` only keeps the duplicated key
appears last, `MapKeys` returns duplicated keys, etc. Since Spark 3.0, these
built-in functions will remove duplicated map keys with last wins policy. Users
may still read map values with duplicated keys from data sources which do not
enforce it (e.g. Parquet), the behavior will be udefined.
+ - In Spark version 2.4 and earlier, users can create a map with duplicated
keys via built-in functions like `CreateMap`, `StringToMap`, etc. The behavior
of a map with duplicated keys is undefined, e.g. map lookup respects the
duplicated key that appears first, `Dataset.collect` only keeps the duplicated
key that appears last, `MapKeys` returns duplicated keys, etc. Since Spark 3.0,
these built-in functions will remove duplicated map keys with a last-wins
policy. Users may still read map values with duplicated keys from data sources
which do not enforce it (e.g. Parquet); the behavior will be undefined.
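The last-wins policy is the same rule Python's `dict` constructor applies when duplicate keys arrive; a plain-Python analogy (not Spark code):

```python
# Later duplicates overwrite earlier ones when building the map,
# matching the "last wins" policy Spark 3.0 adopts for CreateMap etc.
pairs = [("k", 1), ("k", 2), ("j", 3)]
m = dict(pairs)
assert m == {"k": 2, "j": 3}   # the first ("k", 1) entry is discarded
```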
- In Spark version 2.4 and earlier, a partition column value is converted to
null if it can't be cast to the corresponding user-provided schema. Since 3.0,
the partition column value is validated against the user-provided schema. An
exception is thrown if the validation fails. You can disable such validation by
setting `spark.sql.sources.validatePartitionColumns` to `false`.
- In Spark version 2.4 and earlier, the `SET` command works without any
warning even if the specified key is for a `SparkConf` entry, and it has no
effect because the command does not update `SparkConf`; the behavior might
confuse users. Since 3.0, the command fails if a `SparkConf` key is used. You
can disable such a check by setting
`spark.sql.legacy.setCommandRejectsSparkCoreConfs` to `false`.
- - Since Spark 3.0, CSV/JSON datasources use java.time API for parsing and
generating CSV/JSON content. In Spark version 2.4 and earlier,
java.text.SimpleDateFormat is used for the same purpuse with fallbacks to the
parsing mechanisms of Spark 2.0 and 1.x. For example, `2018-12-08 10:39:21.123`
with the pattern `yyyy-MM-dd'T'HH:mm:ss.SSS` cannot be parsed since Spark 3.0
because the timestamp does not match to the pattern but it can be parsed by
earlier Spark versions due to a fallback to `Timestamp.valueOf`. To parse the
same timestamp since Spark 3.0, the pattern should be `yyyy-MM-dd
HH:mm:ss.SSS`. To switch back to the implementation used in Spark 2.4 and
earlier, set `spark.sql.legacy.timeParser.enabled` to `true`.
+ - Since Spark 3.0, CSV/JSON datasources use java.time API for parsing and
generating CSV/JSON content. In Spark version 2.4 and earlier,
java.text.SimpleDateFormat is used for the same purpose with fallbacks to the
parsing mechanisms of Spark 2.0 and 1.x. For example, `2018-12-08 10:39:21.123`
with the pattern `yyyy-MM-dd'T'HH:mm:ss.SSS` cannot be parsed since Spark 3.0
because the timestamp does not match the pattern but it can be parsed by
earlier Spark versions due to a fallback to `Timestamp.valueOf`. To parse the
same timestamp since Spark 3.0, the pattern should be `yyyy-MM-dd
HH:mm:ss.SSS`. To switch back to the implementation used in Spark 2.4 and
earlier, set `spark.sql.legacy.timeParser.enabled` to `true`.
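Python's `datetime.strptime` is similarly strict about pattern literals, so the `'T'`-vs-space mismatch described above can be reproduced outside Spark (an analogy, not Spark code):

```python
from datetime import datetime

ts = "2018-12-08 10:39:21.123"

# The literal 'T' in the pattern does not match the space in the input,
# so a strict parser rejects it - analogous to java.time in Spark 3.0.
try:
    datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f")
    strict_ok = True
except ValueError:
    strict_ok = False
assert not strict_ok

# With the separator corrected, the same timestamp parses fine.
parsed = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f")
assert parsed.year == 2018
```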
- In Spark version 2.4 and earlier, CSV datasource converts a malformed CSV
string to a row with all `null`s in the PERMISSIVE mode. Since Spark 3.0, the
returned row can contain non-`null` fields if some of the CSV column values
were parsed and converted to the desired types successfully.
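The per-column behavior can be sketched in plain Python (an illustration of the idea, not Spark's parser; the function name is hypothetical):

```python
def parse_csv_row_permissive(line, types):
    """Convert each CSV field independently: a field that fails conversion
    becomes None instead of nulling the whole row, mirroring the Spark 3.0
    PERMISSIVE behavior described above."""
    out = []
    for raw, typ in zip(line.split(","), types):
        try:
            out.append(typ(raw))
        except ValueError:
            out.append(None)
    return tuple(out)
```

For example, `parse_csv_row_permissive("abc,42", [int, int])` keeps the second field as `42` while only the unconvertible first field becomes `None`.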
- In Spark version 2.4 and earlier, JSON datasource and JSON functions like
`from_json` convert a bad JSON record to a row with all `null`s in the
PERMISSIVE mode when specified schema is `StructType`. Since Spark 3.0, the
returned row can contain non-`null` fields if some of JSON column values were
parsed and converted to desired types successfully.
- - Since Spark 3.0, the `unix_timestamp`, `date_format`, `to_unix_timestamp`,
`from_unixtime`, `to_date`, `to_timestamp` functions use java.time API for
parsing and formatting dates/timestamps from/to strings by using ISO chronology
(https://docs.oracle.com/javase/8/docs/api/java/time/chrono/IsoChronology.html)
based on Proleptic Gregorian calendar. In Spark version 2.4 and earlier,
java.text.SimpleDateFormat and java.util.GregorianCalendar (hybrid calendar
that supports both the Julian and Gregorian calendar systems, see
https://docs.oracle.com/javase/7/docs/api/java/util/GregorianCalendar.html) is
used for the same purpuse. New implementation supports pattern formats as
described here
https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html
and performs strict checking of its input. For example, the `2015-07-22
10:00:00` timestamp cannot be parse if pattern is `yyyy-MM-dd` because the
parser does not consume whole input. Another example is the `31/01/2015 00:00`
input cannot be parsed by the `dd/MM/yyyy hh:mm` pattern because `hh` supposes
hours in the range `1-12`. To switch back to the implementation used in Spark
2.4 and earlier, set `spark.sql.legacy.timeParser.enabled` to `true`.
+ - Since Spark 3.0, the `unix_timestamp`, `date_format`, `to_unix_timestamp`,
`from_unixtime`, `to_date`, `to_timestamp` functions use java.time API for
parsing and formatting dates/timestamps from/to strings by using ISO chronology
(https://docs.oracle.com/javase/8/docs/api/java/time/chrono/IsoChronology.html)
based on Proleptic Gregorian calendar. In Spark version 2.4 and earlier,
java.text.SimpleDateFormat and java.util.GregorianCalendar (hybrid calendar
that supports both the Julian and Gregorian calendar systems, see
https://docs.oracle.com/javase/7/docs/api/java/util/GregorianCalendar.html) are
used for the same purpose. The new implementation supports pattern formats as
described here
https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html
and performs strict checking of its input. For example, the `2015-07-22
10:00:00` timestamp cannot be parsed if the pattern is `yyyy-MM-dd` because the
parser does not consume the whole input. Another example: the `31/01/2015 00:00`
input cannot be parsed by the `dd/MM/yyyy hh:mm` pattern because `hh` expects
hours in the range `1-12`. To switch back to the implementation used in Spark
2.4 and earlier, set `spark.sql.legacy.timeParser.enabled` to `true`.
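Both failure modes above - trailing unconsumed input, and an hour outside the 12-hour range - have direct counterparts in Python's strict `strptime` (an analogy, not Spark code):

```python
from datetime import datetime

# Pattern shorter than the input: a strict parser refuses to leave trailing
# text unconsumed (Python raises "unconverted data remains").
try:
    datetime.strptime("2015-07-22 10:00:00", "%Y-%m-%d")
    consumed = True
except ValueError:
    consumed = False
assert not consumed

# %I is the 12-hour field (like `hh`): hour 00 is outside its 1-12 range.
try:
    datetime.strptime("31/01/2015 00:00", "%d/%m/%Y %I:%M")
    hour_ok = True
except ValueError:
    hour_ok = False
assert not hour_ok
```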
- - Since Spark 3.0, JSON datasource and JSON function `schema_of_json` infer
TimestampType from string values if they matches to the pattern defined by the
JSON option `timestampFormat`. Set JSON option `inferTimestamp` to `false` to
disable such type inferring.
+ - Since Spark 3.0, JSON datasource and JSON function `schema_of_json` infer
TimestampType from string values if they match the pattern defined by the
JSON option `timestampFormat`. Set the JSON option `inferTimestamp` to `false`
to disable such type inference.
## Upgrading From Spark SQL 2.3 to 2.4
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]