Github user SongYadong commented on a diff in the pull request:
https://github.com/apache/spark/pull/21836#discussion_r204290930
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
@@ -111,7 +111,8 @@ class HadoopTableReader(
      filterOpt: Option[PathFilter]): RDD[InternalRow] = {
    assert(!hiveTable.isPartitioned, """makeRDDForTable() cannot be called on a partitioned table,
-      since input formats may differ across partitions. Use makeRDDForTablePartitions() instead.""")
+      since input formats may differ across partitions. Use makeRDDForPartitionedTable()
--- End diff ---
That's good. I will modify it, thanks.
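
For readers following the thread, here is a minimal, self-contained Scala sketch (not the actual Spark implementation) of the contract that the assertion in HadoopTableReader.makeRDDForTable enforces: a partitioned table must be read through makeRDDForPartitionedTable(), because each partition may use a different input format. The stub type HiveTableStub, the Seq[String] return values, and the dispatching makeRDD helper are simplified placeholders, not Spark APIs.

object TableReaderSketch {

  // Stand-in for the Hive table metadata; only the flag the assert checks.
  case class HiveTableStub(name: String, isPartitioned: Boolean)

  def makeRDDForTable(table: HiveTableStub): Seq[String] = {
    // Mirrors the assert in HadoopTableReader, with the corrected method name.
    assert(!table.isPartitioned,
      "makeRDDForTable() cannot be called on a partitioned table, " +
        "since input formats may differ across partitions. Use makeRDDForPartitionedTable() instead.")
    Seq(s"rows of ${table.name} read with a single input format")
  }

  def makeRDDForPartitionedTable(table: HiveTableStub): Seq[String] = {
    assert(table.isPartitioned, "expected a partitioned table")
    Seq(s"rows of ${table.name} gathered per partition, each with its own input format")
  }

  // Callers dispatch on the partitioning flag rather than trip the assert.
  def makeRDD(table: HiveTableStub): Seq[String] =
    if (table.isPartitioned) makeRDDForPartitionedTable(table) else makeRDDForTable(table)

  def main(args: Array[String]): Unit = {
    println(makeRDD(HiveTableStub("plain_table", isPartitioned = false)))
    println(makeRDD(HiveTableStub("partitioned_table", isPartitioned = true)))
  }
}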