zhu.qing created FLINK-8534:
-------------------------------

             Summary: Inserting too many BucketEntry objects into one bucket 
during a join inside an iteration leads to a java.io.FileNotFoundException 
after the spill file is released
                 Key: FLINK-8534
                 URL: https://issues.apache.org/jira/browse/FLINK-8534
             Project: Flink
          Issue Type: Bug
         Environment: Windows, IntelliJ IDEA, 8 GB RAM, 4-core i5 CPU, Flink 1.4.0
            Reporter: zhu.qing


When too many BucketEntry objects are inserted into one bucket, the hash table 
calls spillPartition(), which creates the writer for the build-side spill file:

this.buildSideChannel = ioAccess.createBlockChannelWriter(targetChannel, 
bufferReturnQueue); 

Then, in prepareNextPartition() of ReOpenableMutableHashTable, 
furtherPartitioning is set to true, so finalizeProbePhase() reaches the branch 
that deletes the spill files:

freeMemory.add(this.probeSideBuffer.getCurrentSegment());

// delete the spill files
this.probeSideChannel.close();
System.out.println("HashPartition probeSideRecordCounter Delete");
this.buildSideChannel.deleteChannel();
this.probeSideChannel.deleteChannel();

After deleteChannel() has been called, the next iteration fails with the 
java.io.FileNotFoundException from the summary, because the reopened hash table 
still refers to the deleted spill files.
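
For reference, a minimal sketch of the kind of job that can hit this code path: 
a join inside a bulk iteration whose build side has heavily skewed keys, so one 
bucket receives a very large number of entries and the partition is spilled. 
The class names (SkewedJoinInIteration, KeyByModulo), record counts, key 
cardinality, iteration count, and the REPARTITION_HASH_SECOND hint below are 
illustrative assumptions, not taken from the original job.

import org.apache.flink.api.common.functions.JoinFunction;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.operators.base.JoinOperatorBase.JoinHint;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.DiscardingOutputFormat;
import org.apache.flink.api.java.operators.IterativeDataSet;
import org.apache.flink.api.java.tuple.Tuple2;

public class SkewedJoinInIteration {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Build side: many records but only a few distinct keys, so a single
        // hash bucket collects a very large number of entries and the
        // partition has to be spilled to disk.
        DataSet<Tuple2<Long, Long>> buildSide = env.generateSequence(0, 5_000_000)
                .map(new KeyByModulo(100));

        // Probe side feeds the iteration.
        DataSet<Tuple2<Long, Long>> probeSide = env.generateSequence(0, 1_000_000)
                .map(new KeyByModulo(100));

        // The join runs inside a bulk iteration, so the hash table is reused
        // (reopened) across supersteps. If the spill files are deleted at the
        // end of the first superstep, the next superstep cannot find them.
        IterativeDataSet<Tuple2<Long, Long>> loop = probeSide.iterate(10);

        DataSet<Tuple2<Long, Long>> step = loop
                .join(buildSide, JoinHint.REPARTITION_HASH_SECOND)
                .where(0)
                .equalTo(0)
                .with(new JoinFunction<Tuple2<Long, Long>, Tuple2<Long, Long>, Tuple2<Long, Long>>() {
                    @Override
                    public Tuple2<Long, Long> join(Tuple2<Long, Long> probe, Tuple2<Long, Long> build) {
                        return probe;
                    }
                });

        loop.closeWith(step).output(new DiscardingOutputFormat<Tuple2<Long, Long>>());
        env.execute("skewed join inside iteration");
    }

    // Maps a sequence number to (key, value) with a small key space,
    // producing the skewed key distribution described above.
    public static class KeyByModulo implements MapFunction<Long, Tuple2<Long, Long>> {
        private final long keyCount;

        public KeyByModulo(long keyCount) {
            this.keyCount = keyCount;
        }

        @Override
        public Tuple2<Long, Long> map(Long value) {
            return new Tuple2<>(value % keyCount, value);
        }
    }
}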



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
