ic4y commented on issue #2279:
URL: 
https://github.com/apache/incubator-seatunnel/issues/2279#issuecomment-1197739548

   @mosence Thanks for your comment.
   
   1. Is there a timeout design for the exclusive thread of the timed-out task1?
          Exclusive threads have no timeout design; the call method may run for any length of time, because a Task may be a real-time (streaming) task or an offline (batch) task.
          `If not: how to design overall execution timeout failure?` I don't understand what this means; please explain in detail.
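   As a minimal sketch of the point above (not SeaTunnel's actual implementation; the class and task names here are hypothetical): once a Task is handed to an exclusive thread, its call is awaited without any timeout, so a long-running streaming job can run indefinitely.

   ```java
   import java.util.concurrent.*;

   // Hypothetical sketch: a Task whose call() outlives its slice in the shared
   // pool is handed off to an exclusive thread, and no timeout is applied to it,
   // since the Task may be a real-time (streaming) job with no natural end time.
   public class ExclusiveThreadSketch {
       public static void main(String[] args) throws Exception {
           Callable<String> longRunningTask = () -> {
               Thread.sleep(50); // stands in for a call() that exceeded its slice
               return "done";
           };

           // Exclusive thread dedicated to this one Task.
           ExecutorService exclusive = Executors.newSingleThreadExecutor();
           Future<String> result = exclusive.submit(longRunningTask);

           // get() without a timeout: the call may run for any length of time.
           System.out.println(result.get());
           exclusive.shutdown(); // thread released once the Task completes
       }
   }
   ```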
   
   2. Is the exclusive thread separated from the original thread pool?
          This has not been designed in detail yet. Whether or not the exclusive thread is separated from the original thread pool, I don't think it can cause a task to time out, because the thread pool used here does not limit the number of threads (similar to `cachedThreadPool`), since during execution the call methods of multiple Tasks may exceed their time limit. As for how to limit the number of threads, I think the thread count should be limited indirectly by limiting the number of running Tasks.
          The exclusive threads can be managed separately; a thread is released automatically once the Task it is responsible for has finished.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
