Github user kiszk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22746#discussion_r226247607
  
    --- Diff: docs/sql-migration-guide-upgrade.md ---
    @@ -0,0 +1,520 @@
    +---
    +layout: global
    +title: Spark SQL Upgrading Guide
    +displayTitle: Spark SQL Upgrading Guide
    +---
    +
    +* Table of contents
    +{:toc}
    +
    +## Upgrading From Spark SQL 2.4 to 3.0
    +
    +  - In PySpark, when creating a `SparkSession` with `SparkSession.builder.getOrCreate()`, if there is an existing `SparkContext`, the builder tried to update the `SparkConf` of the existing `SparkContext` with the configurations specified in the builder; but the `SparkContext` is shared by all `SparkSession`s, so it should not be updated. Since 3.0, the builder no longer updates the configurations. This is the same behavior as the Java/Scala API in 2.3 and above. If you want to update them, you need to do so prior to creating a `SparkSession`.
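    +
    +    A minimal PySpark sketch of the new workflow (the configuration value below is illustrative): set configurations on the `SparkConf`/`SparkContext` before the session is created, because the builder no longer pushes them onto an existing context.
    +
    +    ```python
    +    from pyspark import SparkConf, SparkContext
    +    from pyspark.sql import SparkSession
    +
    +    # Set everything on the SparkConf up front; getOrCreate() reuses the existing
    +    # SparkContext and, since 3.0, does not update its configurations anymore.
    +    conf = SparkConf().setAppName("example").set("spark.sql.shuffle.partitions", "10")
    +    sc = SparkContext(conf=conf)
    +    spark = SparkSession.builder.getOrCreate()
    +    ```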
    +
    +## Upgrading From Spark SQL 2.3 to 2.4
    +
    +  - In Spark version 2.3 and earlier, the second parameter of the `array_contains` function is implicitly promoted to the element type of the first, array-type parameter. This type promotion can be lossy and may cause the `array_contains` function to return the wrong result. This problem has been addressed in 2.4 by employing a safer type promotion mechanism. This can cause some changes in behavior, which are illustrated in the table below.
    +  <table class="table">
    +        <tr>
    +          <th>
    +            <b>Query</b>
    +          </th>
    +          <th>
    +            <b>Result Spark 2.3 or Prior</b>
    +          </th>
    +          <th>
    +            <b>Result Spark 2.4</b>
    +          </th>
    +          <th>
    +            <b>Remarks</b>
    +          </th>
    +        </tr>
    +        <tr>
    +          <td>
    +            <code>SELECT <br> array_contains(array(1), 1.34D);</code>
    +          </td>
    +          <td>
    +            <code>true</code>
    +          </td>
    +          <td>
    +            <code>false</code>
    +          </td>
    +          <td>
    +            In Spark 2.4, the left and right parameters are promoted to array(double) and double type respectively.
    +          </td>
    +        </tr>
    +        <tr>
    +          <td>
    +            <code>SELECT <br> array_contains(array(1), '1');</code>
    +          </td>
    +          <td>
    +            <code>true</code>
    +          </td>
    +          <td>
    +            <code>AnalysisException</code> is thrown since integer type cannot be promoted to string type in a lossless manner.
    +          </td>
    +          <td>
    +            Users can use an explicit cast.
    +          </td>
    +        </tr>
    +        <tr>
    +          <td>
    +            <code>SELECT <br> array_contains(array(1), 'anystring');</code>
    +          </td>
    +          <td>
    +            <code>null</code>
    +          </td>
    +          <td>
    +            <code>AnalysisException</code> is thrown since integer type cannot be promoted to string type in a lossless manner.
    +          </td>
    +          <td>
    +            Users can use an explicit cast.
    +          </td>
    +        </tr>
    +  </table>
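    +
    +    For the `AnalysisException` cases above, a hedged sketch of the explicit cast the remarks suggest (run through `spark.sql`, assuming an active `SparkSession` named `spark`; the literal values are illustrative):
    +
    +    ```python
    +    # Cast explicitly so both sides share a common element type (Spark 2.4+).
    +    spark.sql("SELECT array_contains(array(1), CAST('1' AS INT))").show()
    +    # Or promote the array side instead of the search value:
    +    spark.sql("SELECT array_contains(array(CAST(1 AS STRING)), '1')").show()
    +    ```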
    +
    +  - Since Spark 2.4, when a struct field appears in front of the IN operator followed by a subquery, the inner query must return a struct field as well. In previous versions, instead, the fields of the struct were compared to the output of the inner query. For example, if `a` is a `struct(a string, b int)`, in Spark 2.4 `a in (select (1 as a, 'a' as b) from range(1))` is a valid query, while `a in (select 1, 'a' from range(1))` is not. In previous versions it was the opposite.
    +  - In versions 2.2.1+ and 2.3, if `spark.sql.caseSensitive` is set to 
true, then the `CURRENT_DATE` and `CURRENT_TIMESTAMP` functions incorrectly 
became case-sensitive and would resolve to columns (unless typed in lower 
case). In Spark 2.4 this has been fixed and the functions are no longer 
case-sensitive.
    +  - Since Spark 2.4, Spark evaluates the set operations referenced in a query following the precedence rules of the SQL standard. If the order is not specified by parentheses, set operations are performed from left to right, except that all INTERSECT operations are performed before any UNION, EXCEPT or MINUS operations. The old behaviour of giving equal precedence to all the set operations is preserved under a newly added configuration `spark.sql.legacy.setopsPrecedence.enabled` with a default value of `false`. When this property is set to `true`, Spark evaluates the set operators from left to right as they appear in the query, given that no explicit ordering is enforced by the use of parentheses.
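    +
    +    A brief illustration of the precedence change (assuming an active `SparkSession` named `spark`; the constants are illustrative):
    +
    +    ```python
    +    # Since 2.4 INTERSECT binds tighter, so this is evaluated as
    +    # SELECT 1 UNION (SELECT 2 INTERSECT SELECT 2).
    +    spark.sql("SELECT 1 UNION SELECT 2 INTERSECT SELECT 2").show()
    +
    +    # Restore the old left-to-right evaluation,
    +    # i.e. (SELECT 1 UNION SELECT 2) INTERSECT SELECT 2.
    +    spark.sql("SET spark.sql.legacy.setopsPrecedence.enabled=true")
    +    spark.sql("SELECT 1 UNION SELECT 2 INTERSECT SELECT 2").show()
    +    ```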
    +  - Since Spark 2.4, Spark displays the value of the table description column Last Access as UNKNOWN when the value was Jan 01 1970.
    +  - Since Spark 2.4, Spark maximizes the usage of a vectorized ORC reader 
for ORC files by default. To do that, `spark.sql.orc.impl` and 
`spark.sql.orc.filterPushdown` change their default values to `native` and 
`true` respectively.
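    +
    +    If the previous defaults are needed, a minimal sketch of reverting them (assuming an active `SparkSession` named `spark`):
    +
    +    ```python
    +    # Revert to the pre-2.4 ORC defaults: Hive-based reader, no filter pushdown.
    +    spark.conf.set("spark.sql.orc.impl", "hive")
    +    spark.conf.set("spark.sql.orc.filterPushdown", "false")
    +    ```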
    +  - In PySpark, when Arrow optimization is enabled, `toPandas` previously just failed when the Arrow optimization could not be used, whereas `createDataFrame` from a Pandas DataFrame allowed falling back to a non-optimized path. Now, both `toPandas` and `createDataFrame` from a Pandas DataFrame allow the fallback by default, which can be switched off by `spark.sql.execution.arrow.fallback.enabled`.
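    +
    +    A short PySpark sketch of the new switch (assuming an active `SparkSession` named `spark`; the DataFrame is illustrative):
    +
    +    ```python
    +    spark.conf.set("spark.sql.execution.arrow.enabled", "true")
    +    # Fallback is on by default in 2.4; disable it to fail fast instead of
    +    # silently falling back when Arrow optimization cannot be used.
    +    spark.conf.set("spark.sql.execution.arrow.fallback.enabled", "false")
    +
    +    pdf = spark.range(10).toPandas()
    +    ```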
    +  - Since Spark 2.4, writing an empty dataframe to a directory launches at least one write task, even if physically the dataframe has no partitions. This introduces a small behavior change: for self-describing file formats like Parquet and ORC, Spark creates a metadata-only file in the target directory when writing a 0-partition dataframe, so that schema inference can still work if users read that directory later. The new behavior is more reasonable and more consistent regarding writing empty dataframes.
    +  - Since Spark 2.4, expression IDs in UDF arguments do not appear in column names. For example, a column name in Spark 2.4 is not `UDF:f(col0 AS colA#28)` but ``UDF:f(col0 AS `colA`)``.
    +  - Since Spark 2.4, writing a dataframe with an empty or nested empty schema using any file format (parquet, orc, json, text, csv, etc.) is not allowed. An exception is thrown when attempting to write dataframes with an empty schema.
    +  - Since Spark 2.4, Spark compares a DATE type with a TIMESTAMP type after promoting both sides to TIMESTAMP. Setting `spark.sql.legacy.compareDateTimestampInTimestamp` to `false` restores the previous behavior. This option will be removed in Spark 3.0.
    +  - Since Spark 2.4, creating a managed table with a nonempty location is not allowed. An exception is thrown when attempting to create a managed table with a nonempty location. Setting `spark.sql.legacy.allowCreatingManagedTableUsingNonemptyLocation` to `true` restores the previous behavior. This option will be removed in Spark 3.0.
    +  - Since Spark 2.4, renaming a managed table to an existing location is not allowed. An exception is thrown when attempting to rename a managed table to an existing location.
    +  - Since Spark 2.4, the type coercion rules can automatically promote the argument types of the variadic SQL functions (e.g., IN/COALESCE) to the widest common type, regardless of the order of the input arguments. In prior Spark versions, the promotion could fail for some specific orders (e.g., TimestampType, IntegerType and StringType) and throw an exception.
    +  - Since Spark 2.4, Spark has enabled non-cascading SQL cache 
invalidation in addition to the traditional cache invalidation mechanism. The 
non-cascading cache invalidation mechanism allows users to remove a cache 
without impacting its dependent caches. This new cache invalidation mechanism 
is used in scenarios where the data of the cache to be removed is still valid, 
e.g., calling unpersist() on a Dataset, or dropping a temporary view. This 
allows users to free up memory and keep the desired caches valid at the same 
time.
    +  - In version 2.3 and earlier, Spark converts Parquet Hive tables by 
default but ignores table properties like `TBLPROPERTIES (parquet.compression 
'NONE')`. This happens for ORC Hive table properties like `TBLPROPERTIES 
(orc.compress 'NONE')` in case of `spark.sql.hive.convertMetastoreOrc=true`, 
too. Since Spark 2.4, Spark respects Parquet/ORC specific table properties 
while converting Parquet/ORC Hive tables. As an example, `CREATE TABLE t(id 
int) STORED AS PARQUET TBLPROPERTIES (parquet.compression 'NONE')` would 
generate Snappy parquet files during insertion in Spark 2.3, and in Spark 2.4, 
the result would be uncompressed parquet files.
    +  - Since Spark 2.0, Spark converts Parquet Hive tables by default for better performance. Since Spark 2.4, Spark converts ORC Hive tables by default, too. It means Spark uses its own ORC support by default instead of Hive SerDe. As an example, `CREATE TABLE t(id int) STORED AS ORC` would be handled with Hive SerDe in Spark 2.3, and in Spark 2.4, it would be converted into Spark's ORC data source table and ORC vectorization would be applied. Setting `spark.sql.hive.convertMetastoreOrc` to `false` restores the previous behavior.
    +  - In version 2.3 and earlier, CSV rows are considered malformed if at least one column value in the row is malformed. The CSV parser drops such rows in the DROPMALFORMED mode or outputs an error in the FAILFAST mode. Since Spark 2.4, a CSV row is considered malformed only when it contains malformed values in the columns requested from the CSV datasource; other values can be ignored. As an example, a CSV file contains the "id,name" header and one row "1234". In Spark 2.4, selecting the id column yields a row with the single column value 1234, but in Spark 2.3 and earlier it is empty in the DROPMALFORMED mode. To restore the previous behavior, set `spark.sql.csv.parser.columnPruning.enabled` to `false`.
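    +
    +    A hedged sketch of the difference (the file path and contents are illustrative; assumes an active `SparkSession` named `spark`):
    +
    +    ```python
    +    # /tmp/people.csv (illustrative contents):
    +    #   id,name
    +    #   1234
    +    df = spark.read.schema("id INT, name STRING") \
    +        .option("header", "true").option("mode", "DROPMALFORMED") \
    +        .csv("/tmp/people.csv")
    +
    +    # Spark 2.4: one row with id=1234, since only the requested column is validated.
    +    # Spark 2.3 and earlier: empty result, since the whole row is treated as malformed.
    +    df.select("id").show()
    +
    +    # Restore the previous behavior:
    +    spark.conf.set("spark.sql.csv.parser.columnPruning.enabled", "false")
    +    ```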
    +  - Since Spark 2.4, file listing for computing statistics is done in parallel by default. This can be disabled by setting `spark.sql.statistics.parallelFileListingInStatsComputation.enabled` to `false`.
    +  - Since Spark 2.4, metadata files (e.g. Parquet summary files) and temporary files are not counted as data files when calculating table size during statistics computation.
    +  - Since Spark 2.4, empty strings are saved as quoted empty strings `""`. In version 2.3 and earlier, empty strings are equal to `null` values and are not written out as any characters in saved CSV files. For example, the row of `"a", null, "", 1` was written as `a,,,1`. Since Spark 2.4, the same row is saved as `a,,"",1`. To restore the previous behavior, set the CSV option `emptyValue` to an empty (not quoted) string.
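    +
    +    A minimal sketch of restoring the old CSV output (the output paths are illustrative; assumes an active `SparkSession` named `spark`):
    +
    +    ```python
    +    df = spark.createDataFrame([("a", None, "", 1)],
    +                               "c1 STRING, c2 STRING, c3 STRING, c4 INT")
    +
    +    # Spark 2.4 default quotes the empty string: a,,"",1
    +    df.write.csv("/tmp/out_default")
    +
    +    # Setting emptyValue to an empty string restores the 2.3 output: a,,,1
    +    df.write.option("emptyValue", "").csv("/tmp/out_legacy")
    +    ```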
    +  - Since Spark 2.4, the LOAD DATA command supports the wildcards `?` and `*`, which match any single character and zero or more characters, respectively. Example: `LOAD DATA INPATH '/tmp/folder*/'` or `LOAD DATA INPATH '/tmp/part-?'`. Special characters like spaces also now work in paths. Example: `LOAD DATA INPATH '/tmp/folder name/'`.
    +  - In Spark version 2.3 and earlier, HAVING without GROUP BY is treated as WHERE. This means `SELECT 1 FROM range(10) HAVING true` is executed as `SELECT 1 FROM range(10) WHERE true` and returns 10 rows. This violates the SQL standard, and has been fixed in Spark 2.4. Since Spark 2.4, HAVING without GROUP BY is treated as a global aggregate, which means `SELECT 1 FROM range(10) HAVING true` will return only one row. To restore the previous behavior, set `spark.sql.legacy.parser.havingWithoutGroupByAsWhere` to `true`.
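    +
    +    A brief illustration via `spark.sql` (assuming an active `SparkSession` named `spark`):
    +
    +    ```python
    +    # Spark 2.4: HAVING without GROUP BY is a global aggregate -> 1 row.
    +    # Spark 2.3 and earlier: treated as WHERE -> 10 rows.
    +    spark.sql("SELECT 1 FROM range(10) HAVING true").show()
    +
    +    # Restore the previous WHERE-like behavior:
    +    spark.sql("SET spark.sql.legacy.parser.havingWithoutGroupByAsWhere=true")
    +    ```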
    +
    +## Upgrading From Spark SQL 2.3.0 to 2.3.1 and above
    +
    +  - As of version 2.3.1, Arrow functionality, including `pandas_udf` and `toPandas()`/`createDataFrame()` with `spark.sql.execution.arrow.enabled` set to `True`, has been marked as experimental. These are still evolving and not currently recommended for use in production.
    +
    +## Upgrading From Spark SQL 2.2 to 2.3
    +
    +  - Since Spark 2.3, the queries from raw JSON/CSV files are disallowed 
when the referenced columns only include the internal corrupt record column 
(named `_corrupt_record` by default). For example, 
`spark.read.schema(schema).json(file).filter($"_corrupt_record".isNotNull).count()`
 and `spark.read.schema(schema).json(file).select("_corrupt_record").show()`. 
Instead, you can cache or save the parsed results and then send the same query. 
For example, `val df = spark.read.schema(schema).json(file).cache()` and then 
`df.filter($"_corrupt_record".isNotNull).count()`.
    +  - The `percentile_approx` function previously accepted numeric type input and produced double type results. Now it supports date type, timestamp type and numeric types as input types. The result type is also changed to be the same as the input type, which is more reasonable for percentiles.
    +  - Since Spark 2.3, the deterministic predicates of a Join/Filter that come after the first non-deterministic predicate are also pushed down or through the child operators, if possible. In prior Spark versions, these filters were not eligible for predicate pushdown.
    +  - Partition column inference previously found an incorrect common type for different inferred types; for example, it previously ended up with double type as the common type for double type and date type. Now it finds the correct common type for such conflicts. The conflict resolution follows the table below:
    +    <table class="table">
    +      <tr>
    +        <th>
    +          <b>InputA \ InputB</b>
    +        </th>
    +        <th>
    +          <b>NullType</b>
    +        </th>
    +        <th>
    +          <b>IntegerType</b>
    +        </th>
    +        <th>
    +          <b>LongType</b>
    +        </th>
    +        <th>
    +          <b>DecimalType(38,0)*</b>
    +        </th>
    +        <th>
    +          <b>DoubleType</b>
    +        </th>
    +        <th>
    +          <b>DateType</b>
    +        </th>
    +        <th>
    +          <b>TimestampType</b>
    +        </th>
    +        <th>
    +          <b>StringType</b>
    +        </th>
    +      </tr>
    +      <tr>
    +        <td>
    +          <b>NullType</b>
    +        </td>
    +        <td>NullType</td>
    +        <td>IntegerType</td>
    +        <td>LongType</td>
    +        <td>DecimalType(38,0)</td>
    +        <td>DoubleType</td>
    +        <td>DateType</td>
    +        <td>TimestampType</td>
    +        <td>StringType</td>
    +      </tr>
    +      <tr>
    +        <td>
    +          <b>IntegerType</b>
    +        </td>
    +        <td>IntegerType</td>
    +        <td>IntegerType</td>
    +        <td>LongType</td>
    +        <td>DecimalType(38,0)</td>
    +        <td>DoubleType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +      </tr>
    +      <tr>
    +        <td>
    +          <b>LongType</b>
    +        </td>
    +        <td>LongType</td>
    +        <td>LongType</td>
    +        <td>LongType</td>
    +        <td>DecimalType(38,0)</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +      </tr>
    +      <tr>
    +        <td>
    +          <b>DecimalType(38,0)*</b>
    +        </td>
    +        <td>DecimalType(38,0)</td>
    +        <td>DecimalType(38,0)</td>
    +        <td>DecimalType(38,0)</td>
    +        <td>DecimalType(38,0)</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +      </tr>
    +      <tr>
    +        <td>
    +          <b>DoubleType</b>
    +        </td>
    +        <td>DoubleType</td>
    +        <td>DoubleType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +        <td>DoubleType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +      </tr>
    +      <tr>
    +        <td>
    +          <b>DateType</b>
    +        </td>
    +        <td>DateType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +        <td>DateType</td>
    +        <td>TimestampType</td>
    +        <td>StringType</td>
    +      </tr>
    +      <tr>
    +        <td>
    +          <b>TimestampType</b>
    +        </td>
    +        <td>TimestampType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +        <td>TimestampType</td>
    +        <td>TimestampType</td>
    +        <td>StringType</td>
    +      </tr>
    +      <tr>
    +        <td>
    +          <b>StringType</b>
    +        </td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +        <td>StringType</td>
    +      </tr>
    +    </table>
    +
    +    Note that, for <b>DecimalType(38,0)*</b>, the table above intentionally does not cover all other combinations of scales and precisions, because currently decimal type is only inferred for integer-like values (`BigInteger`/`BigInt`). For example, 1.1 is inferred as double type.
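    +
    +    A small sketch of the conflict resolution (the base directory and partition values are illustrative; assumes an active `SparkSession` named `spark`):
    +
    +    ```python
    +    # Two partition directories whose inferred value types conflict
    +    # (IntegerType vs. DateType):
    +    #   /tmp/t/part=1/...
    +    #   /tmp/t/part=2018-01-01/...
    +    df = spark.read.parquet("/tmp/t")
    +    # Per the table above, IntegerType \ DateType resolves to StringType,
    +    # so `part` is inferred as a string column.
    +    df.printSchema()
    +    ```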
    +  - In PySpark, Pandas 0.19.2 or higher is now required if you want to use Pandas related functionality, such as `toPandas`, `createDataFrame` from a Pandas DataFrame, etc.
    +  - In PySpark, the behavior of timestamp values for Pandas related functionality was changed to respect the session time zone. If you want to use the old behavior, you need to set the configuration `spark.sql.execution.pandas.respectSessionTimeZone` to `False`. See [SPARK-22395](https://issues.apache.org/jira/browse/SPARK-22395) for details.
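    +
    +    A minimal sketch of opting back into the old behavior (assumes an active `SparkSession` named `spark`; the time zone is illustrative):
    +
    +    ```python
    +    # The session time zone that Pandas-related conversions now respect.
    +    spark.conf.set("spark.sql.session.timeZone", "America/Los_Angeles")
    +
    +    # Revert Pandas-related timestamp handling to the pre-2.3 behavior.
    +    spark.conf.set("spark.sql.execution.pandas.respectSessionTimeZone", "false")
    +    ```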
    +  - In PySpark, `na.fill()` or `fillna` also accepts a boolean and replaces nulls with booleans. In prior Spark versions, PySpark just ignored it and returned the original Dataset/DataFrame.
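    +
    +    A short illustration (the column name is illustrative; assumes an active `SparkSession` named `spark`):
    +
    +    ```python
    +    df = spark.createDataFrame([(True,), (None,)], "flag BOOLEAN")
    +
    +    # Spark 2.3+: nulls in the boolean column are replaced with False.
    +    # Spark 2.2 and earlier: the boolean argument was ignored and df returned unchanged.
    +    df.na.fill(False).show()
    +    ```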
    +  - Since Spark 2.3, when either broadcast hash join or broadcast nested loop join is applicable, we prefer to broadcast the table that is explicitly specified in a broadcast hint. For details, see the section [Broadcast Hint](sql-performance-tuning.html#broadcast-hint-for-sql-queries) and [SPARK-22489](https://issues.apache.org/jira/browse/SPARK-22489).
    +  - Since Spark 2.3, when all inputs are binary, `functions.concat()` returns its output as binary. Otherwise, it returns it as a string. Before Spark 2.3, it always returned a string regardless of the input types. To keep the old behavior, set `spark.sql.function.concatBinaryAsString` to `true`.
    +  - Since Spark 2.3, when all inputs are binary, SQL `elt()` returns its output as binary. Otherwise, it returns it as a string. Before Spark 2.3, it always returned a string regardless of the input types. To keep the old behavior, set `spark.sql.function.eltOutputAsString` to `true`.
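    +
    +    A hedged sketch covering both switches (run through `spark.sql`, assuming an active `SparkSession` named `spark`):
    +
    +    ```python
    +    # Since 2.3 these return BinaryType because every input is binary.
    +    spark.sql("SELECT concat(CAST('a' AS BINARY), CAST('b' AS BINARY))").printSchema()
    +    spark.sql("SELECT elt(1, CAST('a' AS BINARY), CAST('b' AS BINARY))").printSchema()
    +
    +    # Keep the pre-2.3 string results if needed:
    +    spark.conf.set("spark.sql.function.concatBinaryAsString", "true")
    +    spark.conf.set("spark.sql.function.eltOutputAsString", "true")
    +    ```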
    +
    +  - Since Spark 2.3, by default arithmetic operations between decimals return a rounded value if an exact representation is not possible (instead of returning NULL). This is compliant with the SQL ANSI 2011 specification and Hive's new behavior introduced in Hive 2.2 (HIVE-15331). This involves the following changes (a brief sketch follows the list):
    +    - The rules to determine the result type of an arithmetic operation have been updated. In particular, if the precision / scale needed are out of the range of available values, the scale is reduced up to 6, in order to prevent the truncation of the integer part of the decimals. All the arithmetic operations are affected by the change, i.e. addition (`+`), subtraction (`-`), multiplication (`*`), division (`/`), remainder (`%`) and positive modulus (`pmod`).
    +    - Literal values used in SQL operations are converted to DECIMAL with 
the exact precision and scale needed by them.
    +    - The configuration `spark.sql.decimalOperations.allowPrecisionLoss` has been introduced. It defaults to `true`, which means the new behavior described here; if set to `false`, Spark uses the previous rules, i.e. it doesn't adjust the needed scale to represent the values and it returns NULL if an exact representation of the value is not possible.
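    +
    +    A hedged sketch of the two modes mentioned above (assumes an active `SparkSession` named `spark`; the literal values are illustrative):
    +
    +    ```python
    +    # Default (allowPrecisionLoss=true): the required scale is reduced (down to 6)
    +    # and the value is rounded, instead of returning NULL when it does not fit exactly.
    +    spark.sql("SELECT CAST(123456.789 AS DECIMAL(38,18)) * CAST(2 AS DECIMAL(38,18)) AS product").show(truncate=False)
    +
    +    # Pre-2.3 rules: no scale adjustment; NULL when the exact value cannot be represented.
    +    spark.sql("SET spark.sql.decimalOperations.allowPrecisionLoss=false")
    +    ```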
    +  - In PySpark, `df.replace` does not allow omitting `value` when `to_replace` is not a dictionary. Previously, `value` could be omitted in the other cases and had `None` by default, which is counterintuitive and error-prone.
    +  - The semantics of un-aliased subqueries have not been well defined and showed confusing behaviors. Since Spark 2.3, we invalidate such confusing cases, for example `SELECT v.i from (SELECT i FROM v)`: Spark will throw an analysis exception in this case because users should not be able to use the qualifier inside a subquery. See [SPARK-20690](https://issues.apache.org/jira/browse/SPARK-20690) and [SPARK-21335](https://issues.apache.org/jira/browse/SPARK-21335) for more details.
    +
    +  - When creating a `SparkSession` with `SparkSession.builder.getOrCreate()`, if there is an existing `SparkContext`, the builder tried to update the `SparkConf` of the existing `SparkContext` with the configurations specified in the builder; but the `SparkContext` is shared by all `SparkSession`s, so it should not be updated. Since 2.3, the builder no longer updates the configurations. If you want to update them, you need to do so prior to creating a `SparkSession`.
    --- End diff --
    
    `the build come` -> `the builder comes`?


---
