stevenzwu commented on a change in pull request #3817:
URL: https://github.com/apache/iceberg/pull/3817#discussion_r779657429
##########
File path:
flink/v1.14/flink/src/main/java/org/apache/iceberg/flink/source/FlinkSplitGenerator.java
##########
@@ -37,9 +40,20 @@ private FlinkSplitGenerator() {
static FlinkInputSplit[] createInputSplits(Table table, ScanContext context) {
List<CombinedScanTask> tasks = tasks(table, context);
FlinkInputSplit[] splits = new FlinkInputSplit[tasks.size()];
- for (int i = 0; i < tasks.size(); i++) {
- splits[i] = new FlinkInputSplit(i, tasks.get(i));
- }
+ boolean localityPreferred = context.locality();
+
+ Tasks.range(tasks.size())
+ .stopOnFailure()
+ .executeWith(localityPreferred ? ThreadPools.getWorkerPool() : null)
Review comment:
@hililiwei Thanks for the Spark code reference. As I said earlier, using a
thread pool makes sense to me. I just want to double-check the behavior in the
iceberg-mr module, hence I'd like to confirm with folks who have more context.
The second part of my comment has not been addressed yet; please take a look:
```
public static String[] blockLocations(FileIO io, CombinedScanTask task)
```
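For reference, the quoted hunk routes split creation through Iceberg's `Tasks` utility, running on a worker pool only when locality is preferred and inline otherwise (a null executor). Below is a minimal, self-contained sketch of that conditional-executor pattern using plain `java.util.concurrent` instead of Iceberg's `Tasks`; the names (`createSplits`, `"split-" + i`) are hypothetical placeholders, not the actual Flink source code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ConditionalExecutorSketch {

  // Sketch of the pattern in the hunk above: one task per index, run on a
  // pool only when localityPreferred is true, otherwise on the caller thread.
  static String[] createSplits(int count, boolean localityPreferred) throws Exception {
    String[] splits = new String[count];
    ExecutorService pool = localityPreferred ? Executors.newFixedThreadPool(4) : null;
    try {
      if (pool == null) {
        // Serial path: equivalent to executeWith(null).
        for (int i = 0; i < count; i++) {
          splits[i] = "split-" + i;
        }
      } else {
        // Parallel path: submit one task per index and fail fast on the
        // first exception, analogous to Tasks.range(...).stopOnFailure().
        List<Future<?>> futures = new ArrayList<>();
        for (int i = 0; i < count; i++) {
          final int idx = i;
          futures.add(pool.submit(() -> splits[idx] = "split-" + idx));
        }
        for (Future<?> f : futures) {
          f.get(); // propagates the first failure
        }
      }
    } finally {
      if (pool != null) {
        pool.shutdown();
      }
    }
    return splits;
  }

  public static void main(String[] args) throws Exception {
    String[] serial = createSplits(3, false);
    String[] parallel = createSplits(3, true);
    System.out.println(serial[2] + " " + parallel[0]);
  }
}
```

Either path fills the same pre-sized array by index, so the result order is deterministic regardless of which thread runs each task.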
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]