zjuwangg commented on a change in pull request #9310: [FLINK-13190]add test to
verify partition pruning for HiveTableSource
URL: https://github.com/apache/flink/pull/9310#discussion_r309571719
##########
File path:
flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/batch/connectors/hive/HiveTableSourceTest.java
##########
@@ -133,4 +133,48 @@ public void testReadPartitionTable() throws Exception {
assertArrayEquals(new String[]{"2014,3,0", "2014,4,0",
"2015,2,1", "2015,5,1"}, rowStrings);
}
+ @Test
+ public void testPartitionPruning() throws Exception {
+ final String dbName = "source_db";
+ final String tblName = "test_table_pt";
+ hiveShell.execute("CREATE TABLE source_db.test_table_pt " +
+ "(year STRING, value INT) partitioned by (pt int);");
+ hiveShell.insertInto("source_db", "test_table_pt")
+ .withColumns("year", "value", "pt")
+ .addRow("2014", 3, 0)
+ .addRow("2014", 4, 0)
+ .addRow("2015", 2, 1)
+ .addRow("2015", 5, 1)
+ .commit();
+ TableEnvironment tEnv = HiveTestUtils.createTableEnv();
+ ObjectPath tablePath = new ObjectPath(dbName, tblName);
+ CatalogTable catalogTable = (CatalogTable) hiveCatalog.getTable(tablePath);
+ tEnv.registerTableSource("src", new HiveTableSource(new JobConf(hiveConf), tablePath, catalogTable));
Review comment:
Maybe we can't. During optimization, a new TableSource is returned
instead of the original one.
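
The point above can be illustrated with a small sketch. This is NOT Flink's
actual API; `FakePartitionedSource` and its `applyPartitionPruning` method
are hypothetical stand-ins that mimic the relevant behavior: the optimizer
asks the source for a pruned *copy* rather than mutating the registered
instance, so holding a reference to the original source tells you nothing
about the pruning result.

```java
import java.util.Arrays;
import java.util.List;

public class PruningCopySketch {

    // Hypothetical stand-in for a partitioned table source.
    static class FakePartitionedSource {
        final List<String> partitions;

        FakePartitionedSource(List<String> partitions) {
            this.partitions = partitions;
        }

        // Mimics the copy-on-prune pattern: returns a NEW instance
        // restricted to the remaining partitions, leaving `this` untouched.
        FakePartitionedSource applyPartitionPruning(List<String> remaining) {
            return new FakePartitionedSource(remaining);
        }
    }

    public static void main(String[] args) {
        FakePartitionedSource original =
                new FakePartitionedSource(Arrays.asList("pt=0", "pt=1"));

        // The "optimizer" prunes down to pt=1 and receives a copy.
        FakePartitionedSource pruned =
                original.applyPartitionPruning(Arrays.asList("pt=1"));

        // Distinct objects: the registered source still sees all partitions.
        System.out.println(original == pruned);         // false
        System.out.println(original.partitions.size()); // 2
        System.out.println(pruned.partitions.size());   // 1
    }
}
```

This is why a test can only verify pruning through the query result (or the
planner's output), not by inspecting the source object it registered.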
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services