jshmchenxi commented on a change in pull request #2577:
URL: https://github.com/apache/iceberg/pull/2577#discussion_r636791805
##########
File path: spark2/src/main/java/org/apache/iceberg/spark/source/Reader.java
##########
@@ -227,15 +233,58 @@ public StructType readSchema() {
Broadcast<Table> tableBroadcast =
sparkContext.broadcast(SerializableTable.copyOf(table));
List<InputPartition<InternalRow>> readTasks = Lists.newArrayList();
- for (CombinedScanTask task : tasks()) {
- readTasks.add(new ReadTask<>(
- task, tableBroadcast, expectedSchemaString, caseSensitive,
- localityPreferred, InternalRowReaderFactory.INSTANCE));
- }
+
+    initializeReadTasks(readTasks, tableBroadcast, expectedSchemaString,
+        () -> InternalRowReaderFactory.INSTANCE);
return readTasks;
}
+  /**
+   * Initializes read tasks using multiple threads, since fetching block locations can be slow.
+   *
+   * @param readTasks result list to populate
+   */
+  private <T> void initializeReadTasks(List<InputPartition<T>> readTasks,
+      Broadcast<Table> tableBroadcast, String expectedSchemaString,
+      Supplier<ReaderFactory<T>> supplier) {
+    int taskInitThreads = Math.max(1,
+        PropertyUtil.propertyAsInt(table.properties(), LOCALITY_TASK_INITIALIZE_THREADS,
+            LOCALITY_TASK_INITIALIZE_THREADS_DEFAULT));
+
+ if (!localityPreferred || taskInitThreads == 1) {
+ for (CombinedScanTask task : tasks()) {
+ readTasks.add(new ReadTask<>(
+ task, tableBroadcast, expectedSchemaString, caseSensitive,
+ localityPreferred, supplier.get()));
+ }
+ return;
+ }
+
+    List<Future<ReadTask<T>>> futures = Lists.newArrayList();
+
+ final ExecutorService pool = Executors.newFixedThreadPool(
+ taskInitThreads,
+ new ThreadFactoryBuilder()
+ .setDaemon(true)
+ .setNameFormat("Init-ReadTask-%d")
+ .build());
Review comment:
> Seems to me like how parallel and whether or not you want to wait for locality would be specific to the cluster that the job is running on.

I agree with that. And this property is only useful with Spark. Maybe it's better to define it as a user-defined Spark configuration that can be set per session?
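The pattern in the diff above (a bounded pool of daemon threads submitting per-task initialization, then collecting the futures in submission order) can be sketched in isolation. This is a hedged, self-contained sketch, not Iceberg code: `initTask`, `ParallelInit`, and the thread counts are stand-ins for illustration, with `initTask` playing the role of the slow block-location lookup, and the thread name format mirroring the one in the diff.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelInit {

    // Stand-in for the slow per-task step (e.g. fetching block locations).
    static String initTask(int id) {
        return "task-" + id;
    }

    static List<String> initAll(int numTasks, int threads) throws Exception {
        AtomicInteger counter = new AtomicInteger();
        ThreadFactory factory = runnable -> {
            Thread t = new Thread(runnable, "Init-ReadTask-" + counter.getAndIncrement());
            t.setDaemon(true); // daemon threads, as in the diff, so they never block JVM exit
            return t;
        };
        ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, threads), factory);
        try {
            // Submit all initializations, then collect results in submission order,
            // so the parallel path returns the same task order as the serial path.
            List<Future<String>> futures = new ArrayList<>();
            for (int i = 0; i < numTasks; i++) {
                final int id = i;
                futures.add(pool.submit(() -> initTask(id)));
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get());
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(initAll(4, 2)); // prints [task-0, task-1, task-2, task-3]
    }
}
```

Collecting `Future.get()` results in submission order is what keeps the output deterministic even though the initialization itself runs concurrently; if the reviewer's suggestion is adopted, only the source of the `threads` value would change (session configuration instead of table properties).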
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]