luoyuxia commented on code in PR #2383:
URL: https://github.com/apache/fluss/pull/2383#discussion_r2694314282
##########
fluss-flink/fluss-flink-common/src/main/java/org/apache/fluss/flink/source/FlinkTableSource.java:
##########
@@ -385,12 +384,11 @@ public boolean isBounded() {
}
}
- private boolean pushTimeStampFilterToLakeSource(
- LakeSource<?> lakeSource, RowType flussRowType) {
+ private boolean pushTimeStampFilterToLakeSource(LakeSource<?> lakeSource) {
// will push timestamp to lake
// we will have three additional system columns, __bucket, __offset, __timestamp
// in lake, get the __timestamp index in lake table
- final int timestampFieldIndex = flussRowType.getFieldCount() + 2;
+ final int timestampFieldIndex = tableOutputType.getFieldCount() + 2;
Review Comment:
`flussRowType` is the projected row type; we should use the original `tableOutputType` here.
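To make the projection issue concrete, here is a small self-contained sketch (the column counts are invented; plain ints stand in for `tableOutputType` / `flussRowType`):
```java
// Hypothetical numbers: the lake table appends __bucket, __offset, __timestamp
// after all Fluss columns, so the index must come from the full schema.
public class TimestampIndexExample {
    public static void main(String[] args) {
        int fullFieldCount = 5;      // unprojected table: c0..c4
        int projectedFieldCount = 2; // the scan only projects two columns

        // lake layout: c0..c4, __bucket(5), __offset(6), __timestamp(7)
        int fromFullSchema = fullFieldCount + 2;      // 7 -> __timestamp
        int fromProjection = projectedFieldCount + 2; // 4 -> points at a data column

        System.out.println("index from full schema: " + fromFullSchema);
        System.out.println("index from projection:  " + fromProjection);
    }
}
```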
##########
fluss-flink/fluss-flink-common/src/main/java/org/apache/fluss/flink/lake/LakeSplitGenerator.java:
##########
@@ -119,7 +119,7 @@ public List<SourceSplitBase> generateHybridLakeFlussSplits() throws Exception {
lakeSplits, isLogTable, tableBucketsOffset,
partitionNameById);
} else {
Map<Integer, List<LakeSplit>> nonPartitionLakeSplits =
- lakeSplits.values().iterator().next();
+ lakeSplits.isEmpty() ? null : lakeSplits.values().iterator().next();
Review Comment:
When no splits are generated, `lakeSplits` may be empty. Let's use the safe approach.
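For reference, a minimal sketch of the guarded lookup (using `String` as a stand-in for `LakeSplit`; only the empty-check pattern matters here):
```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class EmptySplitGuardExample {
    public static void main(String[] args) {
        // No splits generated: values().iterator().next() would throw
        // NoSuchElementException on this empty map.
        Map<Long, Map<Integer, List<String>>> lakeSplits = Collections.emptyMap();

        Map<Integer, List<String>> nonPartitionLakeSplits =
                lakeSplits.isEmpty() ? null : lakeSplits.values().iterator().next();

        System.out.println(nonPartitionLakeSplits); // prints "null" instead of throwing
    }
}
```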
##########
fluss-flink/fluss-flink-common/src/main/java/org/apache/fluss/flink/source/enumerator/FlinkSourceEnumerator.java:
##########
@@ -792,6 +782,17 @@ private void assignPendingSplits(Set<Integer> pendingReaders) {
assignedPartitions.put(partitionId, partitionName);
}
});
+
+ if (pendingHybridLakeFlussSplits != null) {
Review Comment:
Move this out of the loop to speed it up.
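Roughly what I have in mind, as a simplified stand-alone sketch (everything except `pendingHybridLakeFlussSplits` and `assignedPartitions` is an illustrative stand-in for the enumerator state):
```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class HoistHybridSplitCheckExample {
    public static void main(String[] args) {
        Map<Long, String> pendingPartitions = Map.of(1L, "p1", 2L, "p2");
        Map<Long, String> assignedPartitions = new HashMap<>();
        List<String> pendingHybridLakeFlussSplits = List.of("lake-split-0");

        // The loop keeps only the per-partition bookkeeping.
        pendingPartitions.forEach(assignedPartitions::put);

        // Hoisted out of the loop: the check does not depend on the loop
        // variable, so it only needs to run once per assignment round.
        if (pendingHybridLakeFlussSplits != null) {
            System.out.println("assign hybrid splits: " + pendingHybridLakeFlussSplits);
        }
    }
}
```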
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]