kbendick commented on a change in pull request #3120:
URL: https://github.com/apache/iceberg/pull/3120#discussion_r711324646



##########
File path: data/src/main/java/org/apache/iceberg/data/DeleteFilter.java
##########
@@ -90,6 +95,9 @@ protected DeleteFilter(FileScanTask task, Schema tableSchema, Schema requestedSc
     this.eqDeletes = eqDeleteBuilder.build();
     this.requiredSchema = fileProjection(tableSchema, requestedSchema, posDeletes, eqDeletes);
     this.posAccessor = requiredSchema.accessorForField(MetadataColumns.ROW_POSITION.fieldId());
+
+    this.readService = ThreadPools.getWorkerPool();
+    this.readParallelism = ThreadPools.WORKER_THREAD_POOL_PARALLELISM;

Review comment:
       Should the default implementation use the worker thread pool? If we make this configurable, it seems we would want to check the configuration first, since a parallelism of 1 might not require the worker pool at all (like the current behavior); roughly as in the sketch below.
   
   Though I may have missed something when looking through it that makes the readService always required, even when the user-provided delete file read parallelism is 1.
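   
   For illustration only, roughly the check I have in mind, as a sketch; `deleteReadParallelism` is a hypothetical configuration value, not something this PR defines:
   
   ```java
   import java.util.concurrent.ExecutorService;
   
   import org.apache.iceberg.util.ThreadPools;
   
   class DeleteFilterPoolSketch {
     private final ExecutorService readService;
     private final int readParallelism;
   
     DeleteFilterPoolSketch(int deleteReadParallelism) {
       if (deleteReadParallelism > 1) {
         // Only grab the shared worker pool when we will actually fan out.
         this.readService = ThreadPools.getWorkerPool();
         this.readParallelism = deleteReadParallelism;
       } else {
         // Parallelism of 1: keep the current single-threaded path and skip the pool entirely.
         this.readService = null;
         this.readParallelism = 1;
       }
     }
   }
   ```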
   
   And unless there's a reason not to, I'm still in favor of instantiating a named thread pool, as Jack mentioned, like here: https://github.com/apache/iceberg/blob/master/spark/src/main/java/org/apache/iceberg/spark/actions/BaseRewriteDataFilesSparkAction.java#L187-L194.
   
   Using a named thread pool makes debugging much easier. 
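   
   Something along these lines, assuming the delete read parallelism ends up configurable (the pool name and the size here are placeholders, not values from this PR); `ThreadPools.newWorkerPool` already wraps the named-thread-factory pattern used in the linked code:
   
   ```java
   import java.util.concurrent.ExecutorService;
   
   import org.apache.iceberg.util.ThreadPools;
   
   class NamedDeleteReadPoolSketch {
     // Placeholder value; in practice this would come from the delete read parallelism setting.
     private static final int DELETE_READ_PARALLELISM = 4;
   
     // ThreadPools.newWorkerPool builds a fixed-size daemon pool whose threads carry the given
     // name prefix, so delete reads are easy to spot in thread dumps and logs.
     private static final ExecutorService DELETE_READ_POOL =
         ThreadPools.newWorkerPool("iceberg-delete-read", DELETE_READ_PARALLELISM);
   }
   ```
   
   That keeps delete reads off the shared worker pool and gives the threads a recognizable name while debugging.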




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


