[ https://issues.apache.org/jira/browse/FLINK-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890272#comment-16890272 ]

Andrey Zagrebin commented on FLINK-13245:
-----------------------------------------

[~zjwang] [~NicoK]
I agree that we should address the semantics of the partition lifecycle 
separately. But fine-grained recovery is already implemented and planned for 
the release. I just wanted to make sure that the network stack and this fix 
are in sync with the lifecycle semantics and the fine-grained recovery effort.

The `release on consumption` notion was an optimisation to save RPC release 
calls from the JM and fall back to the previous behaviour, but as [~zjwang] 
pointed out, this internal release is unreliable at the moment. If activated, 
it can now always be performed, also in the case of consumer failure (both 
notify/release), but only on a best-effort basis. This means we have to adjust 
the JM and send an RPC release at least in the case of consumer failure or a 
producer restart (FLINK-13371). Once fine-grained recovery is stable, we might 
not need this option at all and could simplify the network stack later if 
needed.
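
To make the intended split concrete, here is a minimal sketch (hypothetical 
names, not the actual Flink classes) of a partition that releases itself on a 
best-effort basis once every consumer has finished, while still accepting an 
explicit, reliable release triggered by a JM RPC:

{code:java}
// Minimal sketch, not the actual Flink code; all names are hypothetical.
// Best-effort self-release on consumption, plus a reliable JM-driven release
// for consumer failure or producer restart (FLINK-13371).
import java.util.concurrent.atomic.AtomicInteger;

public class ConsumptionTrackedPartition {

    private final AtomicInteger pendingConsumers;
    private volatile boolean released;

    public ConsumptionTrackedPartition(int numConsumers) {
        this.pendingConsumers = new AtomicInteger(numConsumers);
    }

    /** Best-effort path: invoked locally when one consumer finishes reading. */
    public void onConsumedSubpartition() {
        if (pendingConsumers.decrementAndGet() == 0) {
            release();
        }
    }

    /** Reliable fallback path: invoked via an RPC release call from the JM. */
    public synchronized void release() {
        if (!released) {
            released = true;
            // free buffers, delete spilled files, ...
        }
    }
}
{code}

The point of the sketch is that the consumption-tracking path can stay purely 
an optimisation: even if its notifications are lost, the JM RPC path still 
guarantees the partition is eventually released exactly once.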

> Network stack is leaking files
> ------------------------------
>
>                 Key: FLINK-13245
>                 URL: https://issues.apache.org/jira/browse/FLINK-13245
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Network
>    Affects Versions: 1.9.0
>            Reporter: Chesnay Schepler
>            Assignee: zhijiang
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 1.9.0
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> There's a file leak in the network stack / shuffle service.
> When running the {{SlotCountExceedingParallelismTest}} on Windows a large 
> number of {{.channel}} files continue to reside in a 
> {{flink-netty-shuffle-XXX}} directory.
> From what I've gathered so far, these files are still being used by a 
> {{BoundedBlockingSubpartition}}. The cleanup logic in this class uses 
> ref-counting to ensure we don't release data while a reader is still present. 
> However, at the end of the job this count has not reached 0, and thus nothing 
> is being released.
> The same issue is also present on the {{ResultPartition}} level; the 
> {{ReleaseOnConsumptionResultPartition}} is likewise not released while its 
> ref-count is still greater than 0.
> Overall, it appears that there is some issue with the notifications for 
> partitions being consumed.
> It is plausible that this issue has recently caused failures on Travis, 
> where builds were failing due to a lack of disk space.
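
To make the ref-counting leak described in the quoted report concrete, here is 
a minimal sketch (hypothetical names, not the actual 
{{BoundedBlockingSubpartition}} code) of the pattern: the backing {{.channel}} 
file is deleted only once the partition is released and every reader has been 
released, so any failure path that skips the reader release leaves the count 
above 0 and the file leaks:

{code:java}
// Minimal sketch of the ref-counting cleanup pattern; hypothetical names,
// not the actual BoundedBlockingSubpartition code.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SpilledSubpartitionSketch {

    private final Path channelFile;
    private int refCount;
    private boolean releaseRequested;

    public SpilledSubpartitionSketch(Path channelFile) {
        this.channelFile = channelFile;
    }

    /** Each reader takes a reference on the spilled file. */
    public synchronized void createReader() {
        refCount++;
    }

    /** Each reader must give its reference back, even on failure. */
    public synchronized void releaseReader() throws IOException {
        refCount--;
        maybeDeleteFile();
    }

    /** Called when the partition itself is released at the end of the job. */
    public synchronized void release() throws IOException {
        releaseRequested = true;
        maybeDeleteFile();
    }

    private void maybeDeleteFile() throws IOException {
        // The leak: if a failed reader never reaches releaseReader(),
        // refCount never drops to 0 and the .channel file survives the job.
        if (releaseRequested && refCount == 0) {
            Files.deleteIfExists(channelFile);
        }
    }
}
{code}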


