wayneguow commented on code in PR #47506:
URL: https://github.com/apache/spark/pull/47506#discussion_r1711177701
##########
docs/sql-migration-guide.md:
##########
@@ -627,7 +627,7 @@ license: |
## Upgrading from Spark SQL 2.2 to 2.3
-  - Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when the referenced columns only include the internal corrupt record column (named `_corrupt_record` by default). For example, `spark.read.schema(schema).json(file).filter($"_corrupt_record".isNotNull).count()` and `spark.read.schema(schema).json(file).select("_corrupt_record").show()`. Instead, you can cache or save the parsed results and then send the same query. For example, `val df = spark.read.schema(schema).json(file).cache()` and then `df.filter($"_corrupt_record".isNotNull).count()`.
+  - Since Spark 2.3, the queries from raw JSON files are disallowed when the referenced columns only include the internal corrupt record column (named `_corrupt_record` by default). For example, `spark.read.schema(schema).json(file).filter($"_corrupt_record".isNotNull).count()` and `spark.read.schema(schema).json(file).select("_corrupt_record").show()`. Instead, you can cache or save the parsed results and then send the same query. For example, `val df = spark.read.schema(schema).json(file).cache()` and then `df.filter($"_corrupt_record".isNotNull).count()`.
Review Comment:
Gentle ping @MaxGekk
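
   For context, here is a minimal runnable sketch of the workaround the guide text describes: querying only `_corrupt_record` on a raw JSON read fails since Spark 2.3, while caching the parsed result first makes the same query work. The file path `data.json` and the two-column schema are illustrative assumptions, not part of the PR:

   ```scala
   import org.apache.spark.sql.SparkSession
   import org.apache.spark.sql.types._

   object CorruptRecordExample {
     def main(args: Array[String]): Unit = {
       val spark = SparkSession.builder()
         .appName("corrupt-record-demo")
         .master("local[*]")
         .getOrCreate()
       import spark.implicits._

       // Illustrative schema: one data column plus the internal corrupt record column.
       val schema = new StructType()
         .add("id", LongType)
         .add("_corrupt_record", StringType)

       // Since Spark 2.3 this raises an AnalysisException, because the query
       // references only the internal corrupt record column of a raw JSON file:
       // spark.read.schema(schema).json("data.json")
       //   .filter($"_corrupt_record".isNotNull).count()

       // Workaround from the migration guide: cache the parsed results first,
       // then the same query succeeds.
       val df = spark.read.schema(schema).json("data.json").cache()
       val corrupt = df.filter($"_corrupt_record".isNotNull).count()
       println(s"corrupt rows: $corrupt")

       spark.stop()
     }
   }
   ```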
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]