yittg commented on a change in pull request #4177:
URL: https://github.com/apache/iceberg/pull/4177#discussion_r824320365



##########
File path: core/src/main/java/org/apache/iceberg/util/ThreadPools.java
##########
@@ -61,6 +62,26 @@ public static ExecutorService getWorkerPool() {
     return WORKER_POOL;
   }
 
+  public static ExecutorService newWorkerPool(String namePrefix, Integer parallelism) {
+    return MoreExecutors.getExitingExecutorService(
+        (ThreadPoolExecutor) Executors.newFixedThreadPool(
+            Optional.ofNullable(parallelism).orElse(WORKER_THREAD_POOL_SIZE),
+            new ThreadFactoryBuilder()
+                .setDaemon(true)
+                .setNameFormat(namePrefix + "-%d")
+                .build()));
+  }
+
+  public static ExecutorService newKeyedWorkerPool(String key, String namePrefix, Integer parallelism) {

Review comment:
       @rdblue, sorry, I don't quite follow your point. Let me guess: do you mean sharing pools across all sources or across all sinks, rather than one pool for all sources and sinks together?
   To be concrete, suppose a job consists of:
   Source: Iceberg A (parallelism: 3), Source: Iceberg B, Sink: Iceberg C, Sink: Iceberg D.
   Which of these do you prefer? (A sketch of how a keyed registry could support each option follows the list.)
   
   1. Share among all parallel instances of one operator, e.g. the 3 subtasks of Iceberg A (which may run in different slots of one TaskManager, or in different TaskManagers);
   2. Share among all sources or among all sinks, e.g. one pool for A and B, and another for C and D;
   3. Share among all operators, e.g. one pool for A, B, C, and D; all subtasks in the same TaskManager can share it.
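   
   To make the options concrete, here is a minimal, hypothetical sketch of the keyed-registry idea behind `newKeyedWorkerPool`. The class name `KeyedPools`, the method `getOrCreate`, and the example key values are my illustration, not the code in this PR; the key the caller passes is what decides the sharing scope:
   
   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;
   import java.util.concurrent.atomic.AtomicInteger;
   
   public class KeyedPools {
   
     // One shared pool per key. The key decides the sharing scope:
     //   option 1: key = operator name (e.g. "iceberg-source-A") -> one pool per operator;
     //   option 2: key = "sources" or "sinks"                    -> A/B share one, C/D share another;
     //   option 3: key = a single constant (e.g. "worker")       -> all operators in the JVM
     //             (i.e. in the same TaskManager) share one pool.
     private static final Map<String, ExecutorService> POOLS = new ConcurrentHashMap<>();
   
     public static ExecutorService getOrCreate(String key, String namePrefix, int parallelism) {
       return POOLS.computeIfAbsent(key, k -> {
         AtomicInteger threadCount = new AtomicInteger(0);
         return Executors.newFixedThreadPool(parallelism, runnable -> {
           Thread thread = new Thread(runnable, namePrefix + "-" + threadCount.getAndIncrement());
           thread.setDaemon(true); // daemon threads, matching newWorkerPool in the diff above
           return thread;
         });
       });
     }
   }
   ```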
   
   
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
