Hi Wangsan,
We struggled with this for many days as well. In the end we found a work-around. Well, "work-around" — this surely qualifies as one of the ugliest hacks I have ever contemplated. Our work-around for Flink immediately interrupting the close is to continue closing on another thread
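The "continue closing on another thread" idea can be sketched as below. This is only an illustration of the mechanism, not the poster's actual code; all class and method names here are hypothetical. The key Java behavior it relies on: a pending interrupt on the task thread makes blocking calls like `join()` throw, but it does not affect a freshly spawned thread, which can finish the clean-up undisturbed.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class DetachedCloseDemo {
    // Returns true when the detached closer finished even though the
    // calling thread had a pending interrupt (simulating Flink's cancel).
    public static boolean closeDespiteInterrupt() throws InterruptedException {
        AtomicBoolean closed = new AtomicBoolean(false);
        // Hand the clean-up work to a fresh thread that nobody interrupts.
        Thread closer = new Thread(() -> closed.set(true), "detached-closer");
        Thread.currentThread().interrupt(); // simulate the cancel interrupt
        closer.start();
        try {
            closer.join(5_000); // throws at once: our interrupt flag is set
        } catch (InterruptedException e) {
            // the task thread was interrupted, but the detached thread runs on
        }
        closer.join(5_000); // the catch cleared our flag, so this really waits
        return closed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("closed = " + closeDespiteInterrupt()); // closed = true
    }
}
```

Note the ugliness the poster alludes to: the detached thread may outlive the task, so any bounded wait on it is a guess, and resources it touches must stay valid after the task thread has moved on.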
Hi,
The 'join' method can be called with a timeout (as is done in TaskCanceler), so it won't block forever if the respective thread is deadlocked. Maybe calling 'interrupt()' after 'join(timeout)' is more reasonable, although it still cannot ensure that operations inside the 'close()' method
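The suggested ordering — a bounded `join(timeout)` first, `interrupt()` only as an escalation — might look like the sketch below. This is a hypothetical helper, not Flink's actual TaskCanceler code, and as noted above even this cannot guarantee that a deadlocked `close()` ever finishes, which is why the post-interrupt wait is bounded too.

```java
public class GracefulCanceler {
    // Wait up to graceMillis for the worker to finish on its own; only then
    // fall back to interrupt(). Returns true if no interrupt was needed.
    public static boolean cancelGracefully(Thread worker, long graceMillis)
            throws InterruptedException {
        worker.join(graceMillis);      // bounded wait: never blocks forever
        if (!worker.isAlive()) {
            return true;               // worker finished within the grace period
        }
        worker.interrupt();            // escalate only after the grace period
        worker.join(graceMillis);      // still bounded: the interrupt may be ignored
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread fast = new Thread(() -> { });
        fast.start();
        System.out.println("fast: " + cancelGracefully(fast, 1_000));   // fast: true

        Thread stuck = new Thread(() -> {
            try { Thread.sleep(60_000); } catch (InterruptedException ignored) { }
        });
        stuck.start();
        System.out.println("stuck: " + cancelGracefully(stuck, 100));   // stuck: false
    }
}
```

The trade-off is exactly the one debated in this thread: the grace period delays recovery by up to `graceMillis` on every cancellation, in exchange for giving a well-behaved `close()` a chance to flush and release resources cleanly.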
Hi,
I would speculate that the reason for this order is that we want to shut down the tasks quickly by interrupting blocking calls in the event of failure, so that recovery can begin as fast as possible. I am looping in Stephan, who might
give more details about this code.
Best,
Stefan
> On the 27th.
Hi,
We are currently using BucketingSink to save data into HDFS in Parquet format. But when the Flink job is cancelled, we always get an exception in BucketingSink's close method. The detailed exception info is as below:
[ERROR] [2017-09-26 20:51:58,893]
[org.apache.flink.streaming.runtime.task