JingsongLi commented on code in PR #280:
URL: https://github.com/apache/flink-table-store/pull/280#discussion_r958015488
##########
flink-table-store-core/src/main/java/org/apache/flink/table/store/file/operation/KeyValueFileStoreRead.java:
##########
@@ -106,15 +126,24 @@ public RecordReader<KeyValue> createReader(Split split) throws IOException {
         } else {
             // in this case merge tree should merge records with same key
             // Do not project key in MergeTreeReader.
-            DataFileReader dataFileReader =
-                    dataFileReaderFactory.create(split.partition(), split.bucket(), false, filters);
-            MergeTreeReader reader =
-                    new MergeTreeReader(
-                            new IntervalPartition(split.files(), keyComparator).partition(),
-                            true,
-                            dataFileReader,
-                            keyComparator,
-                            mergeFunction.copy());
+            DataFileReader dataFileReaderWithAllFilters =
+                    createDataFileReader(split, false, true);
+            DataFileReader dataFileReaderWithKeyFilters =
+                    createDataFileReader(split, false, false);
+            MergeFunction mergeFunc = mergeFunction.copy();
+            List<ConcatRecordReader.ReaderSupplier<KeyValue>> readers = new ArrayList<>();
+            for (List<SortedRun> section : sections) {
+                DataFileReader dataFileReader;
+                if (section.size() == 1) {
+                    // key ranges do not overlap, and value filters can be pushed down
+                    dataFileReader = dataFileReaderWithAllFilters;
+                } else {
+                    dataFileReader = dataFileReaderWithKeyFilters;
+                }
+                readers.add(
Review Comment:
Maybe we can do more: we could also filter the files themselves with `Predicate.test`?
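
For illustration, here is a minimal, self-contained sketch of that idea: before creating per-file readers, drop every data file whose min/max key statistics show the predicate cannot match any of its rows. All class and method names below (`FileMeta`, `StatsPredicate`, `filterFiles`) are hypothetical stand-ins, not the table store API, which exposes file statistics and predicates under its own types.

```java
import java.util.List;
import java.util.stream.Collectors;

public class FileLevelFilterSketch {

    /** Hypothetical per-file metadata carrying min/max statistics for one key field. */
    record FileMeta(String name, long minKey, long maxKey) {}

    /** Hypothetical stats-based predicate: can any row in [minKey, maxKey] match? */
    interface StatsPredicate {
        boolean mightMatch(long minKey, long maxKey);
    }

    /** Keep only the files whose statistics might satisfy the predicate. */
    static List<FileMeta> filterFiles(List<FileMeta> files, StatsPredicate keyFilter) {
        return files.stream()
                .filter(f -> keyFilter.mightMatch(f.minKey(), f.maxKey()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<FileMeta> files =
                List.of(
                        new FileMeta("data-0", 0, 99),
                        new FileMeta("data-1", 100, 199),
                        new FileMeta("data-2", 200, 299));

        // Predicate "key >= 150": files whose maxKey < 150 can be skipped entirely.
        StatsPredicate keyGe150 = (min, max) -> max >= 150;

        // Only data-1 and data-2 survive; no reader needs to be created for data-0.
        System.out.println(filterFiles(files, keyGe150));
    }
}
```

Applied to the snippet above, a test like this could run against each file's statistics before the readers are registered, so files that cannot match the filters are never opened at all.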