Github user jmchung commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18865#discussion_r137928316
  
    --- Diff: docs/sql-programming-guide.md ---
    @@ -1542,6 +1542,10 @@ options.
     
     # Migration Guide
     
    +## Upgrading From Spark SQL 2.2 to 2.3
    +
    +  - Queries that select only the column named by 
`spark.sql.columnNameOfCorruptRecord` are now disallowed. Note that queries which 
reference only that column after column pruning (e.g. filtering on the column 
followed by a counting operation) are also disallowed. If you want to select only 
the corrupt records, you should cache or save the underlying Dataset or DataFrame 
before running such queries.
    --- End diff --
    
    Maybe a typo: `_corrupt_column` should be `_corrupt_record`.
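
    For context, a minimal sketch of the workaround described in the migration 
note above (assumes an existing `SparkSession` named `spark`, a JSON source with 
malformed rows, and the default corrupt record column name `_corrupt_record`; the 
file path is illustrative):

    ```scala
    import org.apache.spark.sql.functions.col

    // Parse a JSON file, collecting malformed rows into the corrupt record column.
    val df = spark.read
      .option("columnNameOfCorruptRecord", "_corrupt_record")
      .json("examples/src/main/resources/people.json")

    // A query touching only _corrupt_record (even after column pruning) is
    // disallowed in 2.3. Cache or save the parsed result first, then query it.
    df.cache()
    df.filter(col("_corrupt_record").isNotNull).count()
    df.select("_corrupt_record").show()
    ```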


---
