lirui-apache commented on a change in pull request #9721: [FLINK-14129][hive]
HiveTableSource should implement ProjectableTable…
URL: https://github.com/apache/flink/pull/9721#discussion_r326479552
##########
File path:
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableInputFormat.java
##########
@@ -88,17 +86,23 @@
private transient InputFormat mapredInputFormat;
private transient HiveTablePartition hiveTablePartition;
+ // indices of fields to be returned, with projection applied (if any)
+ // TODO: push projection into underlying input format that supports it
Review comment:
> can RecordReader support project push down?
I guess it depends on the implementation. For example, Hive has its own [ORC
input
format](https://github.com/apache/hive/blob/rel/release-2.3.4/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java),
and it seems Hive pushes down the projection via
[configurations](https://github.com/apache/hive/blob/rel/release-2.3.4/serde/src/java/org/apache/hadoop/hive/serde2/ColumnProjectionUtils.java#L52).
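For illustration only, here is a minimal sketch of what pushing the projection into the `JobConf` could look like on our side, assuming the `ColumnProjectionUtils` API from the linked Hive release; the class and method names below are hypothetical and not part of this PR:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

import org.apache.hadoop.hive.serde2.ColumnProjectionUtils;
import org.apache.hadoop.mapred.JobConf;

/** Hypothetical helper, not part of this PR. */
public class ProjectionPushDownSketch {

	public static void applyProjection(JobConf jobConf, int[] projectedFields) {
		if (projectedFields == null) {
			// no projection requested -> let the input format read every column
			ColumnProjectionUtils.setReadAllColumns(jobConf);
			return;
		}
		// advertise the projected column ids to column-aware input formats
		// (e.g. Hive's OrcInputFormat); based on the linked class this should
		// end up in the hive.io.file.readcolumn.ids configuration
		List<Integer> ids = IntStream.of(projectedFields).boxed().collect(Collectors.toList());
		ColumnProjectionUtils.appendReadColumns(jobConf, ids);
	}
}
```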
> Can we support it now?
I'd rather leave it to a separate task because it needs further
investigation. Even though this PR doesn't reduce the amount of data we read,
it avoids the cost of inspecting unused columns, which helps queries that
select only a few columns from tables with lots of columns. And for input
formats that don't support projection push down (e.g. text), this is probably
the best we can do.
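To make that second point concrete, here is a rough sketch (hypothetical names, and the Hive-object-to-Flink-type conversion is omitted for brevity) of how the kept field indices let us inspect only the projected columns when turning a fetched record into a `Row`:

```java
import org.apache.flink.types.Row;

import org.apache.hadoop.hive.serde2.objectinspector.StructField;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;

import java.util.List;

/** Hypothetical converter, not part of this PR. */
public class ProjectedRowConverterSketch {

	/**
	 * Converts a deserialized Hive struct into a Flink Row, touching only the
	 * columns listed in {@code fields} (the projection indices kept by the input format).
	 */
	public static Row toRow(Object hiveStruct, StructObjectInspector inspector, int[] fields) {
		List<? extends StructField> structFields = inspector.getAllStructFieldRefs();
		Row row = new Row(fields.length);
		for (int i = 0; i < fields.length; i++) {
			// only the projected columns are inspected; the rest are never touched
			StructField structField = structFields.get(fields[i]);
			row.setField(i, inspector.getStructFieldData(hiveStruct, structField));
		}
		return row;
	}
}
```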
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services