[ https://issues.apache.org/jira/browse/FLINK-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347086#comment-16347086 ]

zhu.qing commented on FLINK-8534:
---------------------------------

I could not reproduce the bug on a 16 GB laptop. The key to the bug is inserting enough 
entries in insertBucketEntry() (more than 255 in one bucket) so that spillPartition() 
is triggered. On an 8 GB desktop it always reproduces.
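
A minimal sketch, assuming the Flink 1.4 DataSet API and the web-Google edge list from 
SNAP, of the kind of iterative join that exercises this spill path. The actual reproducer 
is the attached T2AdjSetBfs.java; the class name, file paths, and iteration logic below 
are illustrative only.

    import org.apache.flink.api.common.functions.JoinFunction;
    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.operators.IterativeDataSet;
    import org.apache.flink.api.java.tuple.Tuple2;

    public class SpillReproSketch {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            // web-Google (SNAP): one "src<TAB>dst" edge per line, '#' lines are comments
            DataSet<Tuple2<Long, Long>> edges = env
                .readTextFile("web-Google.txt")
                .filter(line -> !line.startsWith("#"))
                .map(new MapFunction<String, Tuple2<Long, Long>>() {
                    @Override
                    public Tuple2<Long, Long> map(String line) {
                        String[] f = line.split("\\s+");
                        return new Tuple2<>(Long.parseLong(f[0]), Long.parseLong(f[1]));
                    }
                });

            // start the iteration from the out-edges of vertex 0
            IterativeDataSet<Tuple2<Long, Long>> frontier =
                edges.filter(e -> e.f0 == 0L).iterate(10);

            // the join inside the iteration builds a hash table over the edge set;
            // with enough entries per bucket the partition is spilled to disk
            DataSet<Tuple2<Long, Long>> next = frontier
                .join(edges)
                .where(1)
                .equalTo(0)
                .with(new JoinFunction<Tuple2<Long, Long>, Tuple2<Long, Long>,
                                       Tuple2<Long, Long>>() {
                    @Override
                    public Tuple2<Long, Long> join(Tuple2<Long, Long> path,
                                                   Tuple2<Long, Long> edge) {
                        return new Tuple2<>(path.f0, edge.f1);
                    }
                });

            frontier.closeWith(next).writeAsCsv("output");
            env.execute("FLINK-8534 repro sketch");
        }
    }

On a machine with little managed memory (the 8 GB desktop above), the build side of that 
join spills, and the second iteration is where the deleted-channel problem shows up.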

> Inserting too many bucket entries into one bucket in a join inside an iteration 
> causes an error (Caused by: java.io.FileNotFoundException, release file error)
> --------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-8534
>                 URL: https://issues.apache.org/jira/browse/FLINK-8534
>             Project: Flink
>          Issue Type: Bug
>          Components: Local Runtime
>         Environment: Windows, IntelliJ IDEA, 8 GB RAM, 4-core i5 CPU. Flink 1.4.0
>            Reporter: zhu.qing
>            Priority: Major
>         Attachments: T2AdjSetBfs.java
>
>
> When too many entries (more than 255) are inserted into one bucket, spillPartition() 
> is called, which executes 
> this.buildSideChannel = ioAccess.createBlockChannelWriter(targetChannel, 
> bufferReturnQueue); 
> Then, in 
> prepareNextPartition() of ReOpenableMutableHashTable, 
> furtherPartitioning = true; 
> so in 
> finalizeProbePhase() in HashPartition
>  this.probeSideChannel.close();
> // the backing files are deleted 
>  this.buildSideChannel.deleteChannel();
>  this.probeSideChannel.deleteChannel();
> After deleteChannel() the next iteration fails.
>  
> I used the web-Google dataset from SNAP. 
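
Paraphrasing the sequence above as one annotated snippet (the identifiers are taken from 
the description; this is not the exact Flink source):

    // spillPartition(): a bucket overflowed (more than 255 entries), so the
    // partition gets a writer backed by a file channel
    this.buildSideChannel = ioAccess.createBlockChannelWriter(targetChannel, bufferReturnQueue);

    // ReOpenableMutableHashTable.prepareNextPartition(): a partition was spilled,
    // so the probe side has to be partitioned further as well
    furtherPartitioning = true;

    // HashPartition.finalizeProbePhase(), on the furtherPartitioning path:
    this.probeSideChannel.close();
    this.buildSideChannel.deleteChannel();   // backing file is deleted
    this.probeSideChannel.deleteChannel();   // backing file is deleted

    // The next iteration re-opens the hash table and tries to read the spilled
    // partition again, which fails with java.io.FileNotFoundException.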



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
