Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20953#discussion_r181010211
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileScanRDD.scala ---
    @@ -179,7 +182,23 @@ class FileScanRDD(
                 currentIterator = readCurrentFile()
               }
     
    -          hasNext
    +          try {
    +            hasNext
    +          } catch {
    +            case e: SchemaColumnConvertNotSupportedException =>
    +              val message = "Parquet column cannot be converted in " +
    +                s"file ${currentFile.filePath}. Column: ${e.getColumn}, " +
    +                s"Expected: ${e.getLogicalType}, Found: ${e.getPhysicalType}"
    +              throw new QueryExecutionException(message, e)
    --- End diff ---
    
    The other changes LGTM too, but one question: why is it `QueryExecutionException`? I thought it'd be `SparkException`.
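    
    To illustrate, here is a rough sketch of the alternative I had in mind (not code from this PR; the `wrapConversionError` helper is hypothetical, and the import path for the exception is assumed from the class added in this PR):
    
        import org.apache.spark.SparkException
        // Assumed package for the exception class introduced by this PR.
        import org.apache.spark.sql.execution.datasources.SchemaColumnConvertNotSupportedException
    
        // Hypothetical helper: wrap the low-level conversion error in
        // SparkException instead of QueryExecutionException, keeping the
        // file/column context in the message.
        def wrapConversionError[T](filePath: String)(body: => T): T = {
          try {
            body
          } catch {
            case e: SchemaColumnConvertNotSupportedException =>
              val message = s"Parquet column cannot be converted in file $filePath. " +
                s"Column: ${e.getColumn}, Expected: ${e.getLogicalType}, " +
                s"Found: ${e.getPhysicalType}"
              throw new SparkException(message, e)
          }
        }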


---
