Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/21320#discussion_r199356283
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetReadSupport.scala ---
@@ -47,16 +47,25 @@ import org.apache.spark.sql.types._
*
* Due to this reason, we no longer rely on [[ReadContext]] to pass requested schema from [[init()]]
* to [[prepareForRead()]], but use a private `var` for simplicity.
+ *
+ * @param parquetMrCompatibility support reading with parquet-mr or Spark's built-in Parquet reader
*/
-private[parquet] class ParquetReadSupport(val convertTz: Option[TimeZone])
+private[parquet] class ParquetReadSupport(val convertTz: Option[TimeZone],
+ parquetMrCompatibility: Boolean)
--- End diff ---
```Scala
private[parquet] class ParquetReadSupport(
    val convertTz: Option[TimeZone],
    parquetMrCompatibility: Boolean)
  extends ReadSupport[UnsafeRow] with Logging {
```
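For context, the class comment in the diff above describes stashing the requested schema in a private `var` so it can be handed from `init()` to `prepareForRead()`. Below is a minimal, self-contained sketch of that handoff, also using the multi-line constructor style suggested here; `SketchReadSupport`, `requestedColumns`, and the `Seq[String]` schema stand-in are illustrative names, not the actual Spark/parquet-mr API:

```Scala
// Hypothetical sketch only; not Spark's ParquetReadSupport.
class SketchReadSupport(
    val convertTz: Option[java.util.TimeZone],
    parquetMrCompatibility: Boolean) {

  // Private var carrying the requested schema from init() to prepareForRead().
  private var requestedColumns: Seq[String] = Seq.empty

  // Called once before reading; remembers which columns were requested.
  def init(requested: Seq[String]): Unit = {
    requestedColumns = requested
  }

  // Called later in the read path; consumes the state stashed by init().
  def prepareForRead(): Seq[String] = requestedColumns
}

// Usage example:
//   val support = new SketchReadSupport(convertTz = None, parquetMrCompatibility = true)
//   support.init(Seq("id", "name"))
//   support.prepareForRead()  // Seq("id", "name")
```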
---