[
https://issues.apache.org/jira/browse/FLINK-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347155#comment-16347155
]
zhu.qing commented on FLINK-8534:
---------------------------------
The attached code (T2AdjSetBfs.java) is a BFS over a graph implemented with the native Flink API.
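The attachment itself is not reproduced here; the following is only a minimal sketch of the kind of program described, i.e. a BFS over a SNAP edge list using Flink DataSet delta iterations with a join inside the iteration. The source vertex id, the input path, the class name BfsSketch, and the choice of a delta iteration are assumptions for illustration, not taken from T2AdjSetBfs.java.

import org.apache.flink.api.common.functions.FlatJoinFunction;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.operators.DeltaIteration;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.util.Collector;

public class BfsSketch {
    public static void main(String[] args) throws Exception {
        final long sourceVertexId = 0L;   // assumed source vertex, not from the attachment
        final int maxIterations = 100;

        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(2);            // the parallelism reported to trigger the failure

        // SNAP edge list: "<srcId>\t<dstId>" per line, comment lines start with '#'.
        DataSet<Tuple2<Long, Long>> edges = env
                .readCsvFile("web-Google.txt")
                .fieldDelimiter("\t")
                .ignoreComments("#")
                .types(Long.class, Long.class);

        // Initial state: the source is at level 0, every other vertex is "unvisited" (Long.MAX_VALUE).
        DataSet<Tuple2<Long, Long>> initial = edges
                .flatMap(new FlatMapFunction<Tuple2<Long, Long>, Tuple2<Long, Long>>() {
                    @Override
                    public void flatMap(Tuple2<Long, Long> edge, Collector<Tuple2<Long, Long>> out) {
                        out.collect(new Tuple2<>(edge.f0, edge.f0 == sourceVertexId ? 0L : Long.MAX_VALUE));
                        out.collect(new Tuple2<>(edge.f1, edge.f1 == sourceVertexId ? 0L : Long.MAX_VALUE));
                    }
                })
                .groupBy(0).min(1);

        DeltaIteration<Tuple2<Long, Long>, Tuple2<Long, Long>> iteration =
                initial.iterateDelta(initial, maxIterations, 0);

        // Propagate level + 1 along edges: this is the join inside the iteration
        // that exercises the hash table code path described in the issue.
        DataSet<Tuple2<Long, Long>> candidates = iteration.getWorkset()
                .join(edges).where(0).equalTo(0)
                .with(new FlatJoinFunction<Tuple2<Long, Long>, Tuple2<Long, Long>, Tuple2<Long, Long>>() {
                    @Override
                    public void join(Tuple2<Long, Long> vertex, Tuple2<Long, Long> edge,
                                     Collector<Tuple2<Long, Long>> out) {
                        if (vertex.f1 != Long.MAX_VALUE) {
                            out.collect(new Tuple2<>(edge.f1, vertex.f1 + 1L));
                        }
                    }
                })
                .groupBy(0).min(1);

        // Keep only vertices whose level actually improved in this superstep.
        DataSet<Tuple2<Long, Long>> delta = candidates
                .join(iteration.getSolutionSet()).where(0).equalTo(0)
                .with(new FlatJoinFunction<Tuple2<Long, Long>, Tuple2<Long, Long>, Tuple2<Long, Long>>() {
                    @Override
                    public void join(Tuple2<Long, Long> candidate, Tuple2<Long, Long> current,
                                     Collector<Tuple2<Long, Long>> out) {
                        if (candidate.f1 < current.f1) {
                            out.collect(candidate);
                        }
                    }
                });

        iteration.closeWith(delta, delta)
                .writeAsCsv("bfs-levels", FileSystem.WriteMode.OVERWRITE);

        env.execute("BFS sketch for FLINK-8534");
    }
}

Whether a program of this shape actually spills depends on the memory configuration; the attached T2AdjSetBfs.java remains the authoritative reproducer.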
> Inserting too many bucket entries into one bucket in a join inside an iteration
> causes an error (Caused by: java.io.FileNotFoundException, release file error)
> --------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: FLINK-8534
> URL: https://issues.apache.org/jira/browse/FLINK-8534
> Project: Flink
> Issue Type: Bug
> Components: Local Runtime
> Environment: Windows, IntelliJ IDEA, 8 GB RAM, 4-core i5 CPU, Flink 1.4.0;
> parallelism = 2 triggers the problem, other values do not.
> Reporter: zhu.qing
> Priority: Major
> Attachments: T2AdjSetBfs.java
>
>
> When too many entries are inserted into one bucket (more than 255, see MutableHashTable.insertBucketEntry(), line 1054), spillPartition() is called (HashPartition, line 317), which creates a spill writer for the build side:
> this.buildSideChannel = ioAccess.createBlockChannelWriter(targetChannel, bufferReturnQueue);
> Then, in prepareNextPartition() of ReOpenableMutableHashTable (line 156),
> furtherPartitioning = true;
> so in finalizeProbePhase() of HashPartition (line 367) the channels are closed and their backing files deleted:
> this.probeSideChannel.close();
> // the files are deleted here
> this.buildSideChannel.deleteChannel();
> this.probeSideChannel.deleteChannel();
> After deleteChannel(), the next iteration fails because it still needs the deleted spill files.
>
> The dataset used is web-Google (from SNAP).
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)