GitHub user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/8441#discussion_r37953102
--- Diff: docs/sql-programming-guide.md ---
@@ -1988,6 +2005,27 @@ options.
# Migration Guide
+## Upgrading From Spark SQL 1.4 to 1.5
+
+ - Optimized execution using manually managed memory (Tungsten) is now enabled by default, along with code generation for expression evaluation. These features can both be disabled by setting `spark.sql.tungsten.enabled` to `false`.
+ - Parquet schema merging is no longer enabled by default. It can be re-enabled by setting `spark.sql.parquet.mergeSchema` to `true`.
+ - Resolution of strings to columns in Python now supports using dots (`.`) to qualify the column or access nested values, for example `df['table.column.nestedField']`. However, this means that if your column name contains any dots you must now escape them using backticks.
--- End diff ---
should give an example of using backticks here
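
Not from the PR itself, but a rough sketch of what such a backtick example could look like in PySpark (the DataFrame, the column names, and the pre-existing `sqlContext` are all assumed here for illustration):

```python
from pyspark.sql import Row

# Hypothetical data: a struct column `info` plus a top-level column whose
# name literally contains a dot.
df = sqlContext.createDataFrame(
    [Row(info=Row(age=30), **{"user.name": "alice"})])

# Dots now qualify columns and reach into nested values:
df.select("info.age").show()

# A literal dot in a column name must be escaped with backticks:
df.select("`user.name`").show()
df["`user.name`"]
```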
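
The first two notes in the diff describe configuration flags; a minimal sketch of restoring the pre-1.5 behaviour at runtime (assuming an existing `sqlContext`; the keys are the ones quoted in the diff):

```python
# Disable Tungsten / expression code generation and re-enable Parquet schema
# merging, i.e. restore the Spark SQL 1.4 defaults described above.
sqlContext.setConf("spark.sql.tungsten.enabled", "false")
sqlContext.setConf("spark.sql.parquet.mergeSchema", "true")
```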