[ https://issues.apache.org/jira/browse/FLINK-1350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14360460#comment-14360460 ]

ASF GitHub Bot commented on FLINK-1350:
---------------------------------------

Github user uce commented on the pull request:

    https://github.com/apache/flink/pull/471#issuecomment-79017193
  
    I've rebased this on the latest master and set the default I/O mode to 
synchronous, i.e. we currently use the simpler synchronous spilled subpartition 
view when consuming intermediate results.
    
    As Stephan pointed out, it makes sense to have the simple version in place 
as long as it is not clear what the benefits of the trickier asynchronous 
version are.
    
    The memory configuration has not been changed in this PR, because I don't 
think it makes much sense to give more memory to the network stack to (maybe) 
keep blocking results in memory as long as we don't have any functionality in 
place to leverage these cached partitions.


> Add blocking intermediate result partitions
> -------------------------------------------
>
>                 Key: FLINK-1350
>                 URL: https://issues.apache.org/jira/browse/FLINK-1350
>             Project: Flink
>          Issue Type: Improvement
>          Components: Distributed Runtime
>            Reporter: Ufuk Celebi
>            Assignee: Ufuk Celebi
>
> The current runtime support for intermediate results (see 
> https://github.com/apache/incubator-flink/pull/254 and FLINK-986) covers only 
> pipelined intermediate results (with back pressure), which are consumed as 
> they are being produced.
> The next variant we need to support is blocking intermediate results (without 
> back pressure), which are fully produced before being consumed. This is 
> desirable, for example, in situations where pipelined execution currently may 
> run into deadlocks.
> I will start working on this on top of my pending pull request.
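The semantics described in the issue can be sketched as follows (names are hypothetical, not Flink's actual classes): a blocking result partition buffers everything the producer emits and only becomes consumable once production has finished, so the consumer never exerts back pressure on the producer.

```python
# Hypothetical sketch of a blocking intermediate result partition: records
# are fully buffered during production, and a reader can only be created
# after the producer has finished -- so consumption never back-pressures
# production, which is what avoids pipelined deadlocks.
class BlockingResultPartition:
    def __init__(self):
        self._records = []
        self._finished = False

    def add(self, record):
        if self._finished:
            raise RuntimeError("partition already finished")
        self._records.append(record)

    def finish(self):
        self._finished = True

    def create_reader(self):
        if not self._finished:
            raise RuntimeError("blocking result is not fully produced yet")
        return iter(self._records)

# Produce the result fully, then consume it.
partition = BlockingResultPartition()
for r in range(3):
    partition.add(r)
partition.finish()
result = list(partition.create_reader())
```

A pipelined partition would instead hand records to the consumer through a bounded buffer while production is still running, which is where back pressure (and, in some plans, deadlock) comes from.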



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
