zentol commented on pull request #19047:
URL: https://github.com/apache/flink/pull/19047#issuecomment-1065045198


   > The rationale behind this change is to make the user aware of the fact that 
there's still a JobResultEntry lying around. Flink itself is behaving as 
expected. Alternatively, we could add a warning. I was just afraid that the 
user might not notice it and, therefore, went for the exception approach 
instead.
   
   In practice, if you use Kubernetes for example, won't this result in the job 
being re-submitted again and again, because the cluster keeps failing until some 
failure-rate policy is triggered? I'm not sure this is the better alternative.
   In particular, this can happen without the user doing anything 
wrong; say the JM crashes after having cleaned up the job result. In that 
situation we very much want Flink to just shut down.

