[ https://issues.apache.org/jira/browse/FLINK-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhu.qing updated FLINK-8534:
----------------------------
    Description: 
When more than 255 entries are inserted into one bucket, spillPartition() is called, which creates a block channel writer for the spilled partition:

this.buildSideChannel = ioAccess.createBlockChannelWriter(targetChannel, bufferReturnQueue);

Then, in prepareNextPartition() of ReOpenableMutableHashTable, further partitioning is switched on:

furtherPartitioning = true;
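
For context, the flag is set when partitions are still pending from the previous pass; an approximate reconstruction of the Flink 1.4 method (from memory, not a verbatim quote):

@Override
protected boolean prepareNextPartition() throws IOException {
    // if partitions are still pending, make the base class treat them
    // like partitions it spilled itself on this pass
    if (!this.partitionsPending.isEmpty()) {
        this.furtherPartitioning = true;
    }
    return super.prepareNextPartition();
}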

As a result, finalizeProbePhase() in HashPartition (line 367) closes the probe-side channel and deletes the spill files:

this.probeSideChannel.close();

// the files are deleted here
this.buildSideChannel.deleteChannel();
this.probeSideChannel.deleteChannel();
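
The deletion sits in the branch that handles spilled partitions which received no probe-side records; approximately (reconstructed from the 1.4 sources, not a verbatim quote):

else if (this.probeSideRecordCounter == 0 && !keepUnprobedSpilledPartitions) {
    // spilled partition without probe-side records:
    // give the write-behind buffer back and drop the spill files
    freeMemory.add(this.probeSideBuffer.getCurrentSegment());

    this.probeSideChannel.close();
    this.buildSideChannel.deleteChannel();
    this.probeSideChannel.deleteChannel();
    return 0;
}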

After deleteChannel(), the next iteration fails, because the re-opened hash table still refers to the deleted spill files (java.io.FileNotFoundException when releasing the file).

I used the web-Google dataset from SNAP.
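
For illustration, a minimal sketch of the failing pattern: a join against a loop-invariant, skewed build side inside a bulk iteration. This is hypothetical and simplified (it is not the attached T2AdjSetBfs.java; the file name web-Google.txt, the join hint, and the BFS-style join function are assumptions):

import org.apache.flink.api.common.functions.JoinFunction;
import org.apache.flink.api.common.operators.base.JoinOperatorBase.JoinHint;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.DiscardingOutputFormat;
import org.apache.flink.api.java.operators.IterativeDataSet;
import org.apache.flink.api.java.tuple.Tuple2;

public class JoinInIterationRepro {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // web-Google edge list from SNAP: tab separated, '#' comment lines
        DataSet<Tuple2<Long, Long>> edges = env
                .readCsvFile("web-Google.txt")
                .ignoreComments("#")
                .fieldDelimiter("\t")
                .types(Long.class, Long.class);

        // (vertexId, distance) frontier, starting at vertex 0
        IterativeDataSet<Tuple2<Long, Long>> loop =
                env.fromElements(Tuple2.of(0L, 0L)).iterate(10);

        // Hash join that builds on the loop-invariant edge set. The build
        // side is cached and re-opened in every superstep; buckets with
        // more than 255 entries spill, and once finalizeProbePhase()
        // deletes the spill files, the second superstep cannot find them.
        DataSet<Tuple2<Long, Long>> next = loop
                .join(edges, JoinHint.REPARTITION_HASH_SECOND)
                .where(0).equalTo(0)
                .with(new JoinFunction<Tuple2<Long, Long>, Tuple2<Long, Long>, Tuple2<Long, Long>>() {
                    @Override
                    public Tuple2<Long, Long> join(Tuple2<Long, Long> frontier, Tuple2<Long, Long> edge) {
                        // step from the frontier vertex to its neighbor
                        return Tuple2.of(edge.f1, frontier.f1 + 1L);
                    }
                });

        loop.closeWith(next).output(new DiscardingOutputFormat<Tuple2<Long, Long>>());
        env.execute("join-in-iteration repro");
    }
}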



> Inserting too many bucket entries into one bucket in a join inside an iteration 
> causes an error (Caused by: java.io.FileNotFoundException when releasing the file)
> --------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-8534
>                 URL: https://issues.apache.org/jira/browse/FLINK-8534
>             Project: Flink
>          Issue Type: Bug
>          Components: Local Runtime
>         Environment: Windows, IntelliJ IDEA, 8 GB RAM, 4-core i5 CPU, Flink 1.4.0
>            Reporter: zhu.qing
>            Priority: Major
>         Attachments: T2AdjSetBfs.java
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
