Github user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18865#discussion_r137930149
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JsonFileFormat.scala ---
    @@ -113,6 +113,21 @@ class JsonFileFormat extends TextBasedFileFormat with DataSourceRegister {
           }
         }
     
    +    if (requiredSchema.length == 1 &&
    +      requiredSchema.head.name == parsedOptions.columnNameOfCorruptRecord) {
    +      throw new AnalysisException(
    +        s"'${parsedOptions.columnNameOfCorruptRecord}' cannot be selected alone without other\n" +
    +        "data columns, because its content is derived entirely from parsing the data columns.\n" +
    +        "Even if your query appears to select other columns, it is disallowed when, after\n" +
    +        "column pruning, it does not involve parsing any data fields, e.g., filtering on this\n" +
    +        "column followed by a count, because that can produce incorrect results.\n" +
    +        "If you want to select corrupt records only, cache or save the Dataset\n" +
    +        "before executing queries, as this parses all fields under the hood. For example:\n" +
    +        "df.cache()\n" +
    +        s"""df.select("${parsedOptions.columnNameOfCorruptRecord}")"""
    --- End diff ---
    
    How about also improving this message based on the wording we changed in `sql-programming-guide.md`? Thanks!
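
    For reference, a minimal sketch of the scenario the message describes (the `spark` session, the `schema`, the input path, and the default `_corrupt_record` column name are assumptions for illustration, not code from this PR):

        // Illustrative sketch only: assumes a SparkSession `spark`, a user-supplied
        // `schema` that includes the corrupt record column, and a JSON input path.
        import org.apache.spark.sql.functions.col

        val df = spark.read.schema(schema).json("/path/to/input.json")

        // Disallowed by this check: only the corrupt record column is referenced, so
        // after column pruning no data fields are parsed and the count may be wrong.
        // df.filter(col("_corrupt_record").isNotNull).count()

        // Workaround described in the message: cache (or save) the parsed Dataset
        // first, which parses all fields, then query the corrupt record column.
        df.cache()
        df.select("_corrupt_record").show()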

