[ https://issues.apache.org/jira/browse/FLINK-9413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16542744#comment-16542744 ]

ASF GitHub Bot commented on FLINK-9413:
---------------------------------------

Github user tillrohrmann commented on the issue:

    https://github.com/apache/flink/pull/6103
  
    With [FLINK-5129](https://issues.apache.org/jira/browse/FLINK-5129), which
    is part of Flink 1.3.0, the `BlobCache` can retrieve blobs directly from
    the HA storage location, which in your case appears to be HDFS. The idea
    is that this alleviates I/O pressure on the `JobManager`, because not all
    blobs are served by a single node. Thus, it should actually improve the
    situation described in this issue.
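    For context, a minimal sketch of the `flink-conf.yaml` settings under
    which the `BlobCache` falls back to the HA store (the key names are
    Flink's, but the hosts and paths below are placeholders):

    ```yaml
    # Enable ZooKeeper-based HA; blobs and other recovery data are then
    # persisted to the HA storage directory (here: HDFS, as in this report).
    high-availability: zookeeper
    high-availability.zookeeper.quorum: zk-host:2181        # placeholder host
    high-availability.storageDir: hdfs:///flink/ha/         # placeholder path
    ```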


> Tasks can fail with PartitionNotFoundException if consumer deployment takes 
> too long
> ------------------------------------------------------------------------------------
>
>                 Key: FLINK-9413
>                 URL: https://issues.apache.org/jira/browse/FLINK-9413
>             Project: Flink
>          Issue Type: Bug
>          Components: Distributed Coordination
>    Affects Versions: 1.4.0, 1.5.0, 1.6.0
>            Reporter: Till Rohrmann
>            Assignee: zhangminglei
>            Priority: Critical
>              Labels: pull-request-available
>
> {{Tasks}} can fail with a {{PartitionNotFoundException}} if the deployment of 
> the producer takes too long. More specifically, if it takes longer than the 
> {{taskmanager.network.request-backoff.max}}, then the {{Task}} will give up 
> and fail.
> The problem is that we calculate the {{InputGateDeploymentDescriptor}} for a 
> consuming task once the producer has been assigned a slot but we do not wait 
> until it is actually running. The problem should be fixed if we wait until 
> the task is in state {{RUNNING}} before assigning the result partition to the 
> consumer.
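To make the failure mode above concrete, here is a small self-contained Java sketch (class and method names are illustrative, not Flink's actual internals) of the consumer-side retry loop: the backoff doubles on each failed partition request and the task gives up once `taskmanager.network.request-backoff.max` is reached, which is when the `PartitionNotFoundException` surfaces.

```java
// Hypothetical sketch of the exponential partition-request backoff.
// Names are illustrative; only the config keys mentioned in comments
// correspond to real Flink settings.
public class PartitionRequestBackoff {

    private final int initialBackoffMs;
    private final int maxBackoffMs;
    private int currentBackoffMs;

    public PartitionRequestBackoff(int initialBackoffMs, int maxBackoffMs) {
        this.initialBackoffMs = initialBackoffMs;
        this.maxBackoffMs = maxBackoffMs;
        this.currentBackoffMs = 0;
    }

    /**
     * Returns true if another retry is allowed, doubling the backoff
     * (capped at the maximum). Returns false once the backoff is
     * exhausted, i.e. the point at which the task would fail with
     * PartitionNotFoundException.
     */
    public boolean increaseBackoff() {
        if (currentBackoffMs == 0) {
            currentBackoffMs = initialBackoffMs;
            return true;
        }
        if (currentBackoffMs < maxBackoffMs) {
            currentBackoffMs = Math.min(currentBackoffMs * 2, maxBackoffMs);
            return true;
        }
        return false; // backoff exhausted: give up on the partition
    }

    public int getCurrentBackoffMs() {
        return currentBackoffMs;
    }

    public static void main(String[] args) {
        // Values chosen to resemble the defaults of
        // taskmanager.network.request-backoff.initial (100 ms) and
        // taskmanager.network.request-backoff.max (10000 ms).
        PartitionRequestBackoff backoff = new PartitionRequestBackoff(100, 10000);
        int attempts = 0;
        while (backoff.increaseBackoff()) {
            attempts++;
        }
        // Backoff sequence: 100, 200, 400, ..., 6400, 10000, then give up.
        System.out.println(attempts + " retries before giving up, last backoff "
                + backoff.getCurrentBackoffMs() + "ms");
    }
}
```

If the producer is not yet in state RUNNING when this loop runs out of retries, the consumer fails, which is exactly why the proposed fix delays handing the result partition to the consumer until the producer is RUNNING.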



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
